An integrated platform for live 3D human reconstruction and motion capturing
The latest developments in 3D capturing, processing and rendering provide the means to unlock novel 3D application pathways. This paper describes the main elements of an integrated platform targeting Tele-Immersion (TI) and future 3D applications, addressing the tasks of real-time capturing, robust 3D human shape/appearance reconstruction and skeleton-based motion tracking. More specifically, the details of a multiple-RGB-D capturing system are first given, along with a novel sensor calibration method. A robust, fast reconstruction method from multiple RGB-D streams is then proposed, based on an enhanced variant of the volumetric Fourier Transform-based method, parallelized on the GPU and accompanied by an appropriate texture mapping algorithm. On top of that, given the lack of relevant objective evaluation methods, a novel framework is proposed for the quantitative evaluation of real-time 3D reconstruction systems. Finally, a generic method for accurate real-time human skeleton tracking from multiple depth streams is proposed. Detailed experimental results on multi-Kinect2 datasets verify the validity of our arguments and the effectiveness of the proposed system and methodologies.