Towards Real-Time and Efficient Compression of Human Time-Varying-Meshes
In this work, a novel skeleton-based approach to Human Time-Varying Mesh (TVM) compression is presented. TVM compression is a new topic with many challenges, the most important being the lack of an obvious vertex correspondence across frames, the variable connectivity across frames, and the need to remain efficient while handling both. Very few works exist in the literature, and not all of these challenges have been addressed yet; developing an efficient, real-time solution that handles them is clearly a difficult task. We address the Human TVM compression problem with an approach inspired by video coding, using different frame types and efficiently removing inter-frame geometric redundancy by exploiting recent advances in human skeleton tracking. The overall approach focuses on compression efficiency, low distortion, and low computation time, enabling real-time transmission of Human TVMs. It efficiently compresses the geometry and vertex attributes of TVMs. Moreover, this work is the first to provide an efficient method for connectivity coding of TVMs, by introducing a modification to the state-of-the-art MPEG-4 TFAN algorithm. Experiments are conducted on the MPEG-3DGC TVM database. The method outperforms the state-of-the-art standardized static mesh coder MPEG-4 TFAN at low bit-rates, while remaining competitive at high bit-rates. It provides a practical proof of concept that, in the combined problem of geometry, connectivity, and vertex-attribute coding of TVMs, efficient inter-frame redundancy removal is possible, establishing ground for further improvements. Finally, this work proposes a method for motion-based coding of Human TVMs that can further enhance the overall experience when Human TVM compression is used in a Tele-Immersion (TI) scenario.