3D Human Mesh Estimation from Virtual Markers
CVPR 2023

1School of Computer Science, Center on Frontiers of Computing Studies, Peking University
2Institute for Artificial Intelligence, Peking University    3Microsoft Research Asia    4National Engineering Research Center of Visual Technology
*Corresponding author

TL;DR: We introduce virtual markers, a novel intermediate representation for 3D human mesh estimation.

Abstract

Inspired by the success of volumetric 3D pose estimation, some recent human mesh estimators propose to estimate 3D skeletons as intermediate representations, from which dense 3D meshes are regressed by exploiting the mesh topology. However, body shape information is lost when extracting skeletons, leading to mediocre performance. Advanced motion capture systems address this problem by placing dense physical markers on the body surface, which allows realistic meshes to be extracted from their non-rigid motions. However, they cannot be applied to in-the-wild images, where no markers are available. In this work, we present an intermediate representation, named virtual markers, which learns 64 landmark keypoints on the body surface from large-scale mocap data in a generative style, mimicking the effects of physical markers. The virtual markers can be accurately detected from wild images and can reconstruct intact meshes with realistic shapes by simple interpolation. Our approach outperforms the state-of-the-art methods on three datasets. In particular, it surpasses the existing methods by a notable margin on the SURREAL dataset, which features diverse body shapes.
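Concretely, the interpolation step can be viewed as a linear mapping from the detected virtual markers to the mesh vertices. The sketch below only illustrates that idea and is not the authors' released code; the 6890-vertex SMPL-style mesh, the function name, and the randomly generated coefficient matrix are assumptions made for the example.

import numpy as np

# Illustrative sketch (not the authors' implementation): recover mesh
# vertices as a linear combination of the 64 detected virtual markers.
NUM_MARKERS = 64
NUM_VERTICES = 6890  # assumed SMPL-style mesh topology

def reconstruct_mesh(markers_3d, coeff):
    # markers_3d: (64, 3) estimated 3D virtual marker positions
    # coeff:      (6890, 64) learned interpolation weights
    # returns:    (6890, 3) reconstructed mesh vertices
    assert markers_3d.shape == (NUM_MARKERS, 3)
    assert coeff.shape == (NUM_VERTICES, NUM_MARKERS)
    return coeff @ markers_3d

# Toy usage with random placeholders for the markers and the weights.
markers = np.random.randn(NUM_MARKERS, 3)
A = np.random.rand(NUM_VERTICES, NUM_MARKERS)
A /= A.sum(axis=1, keepdims=True)  # rows normalized as interpolation weights (an assumption)
vertices = reconstruct_mesh(markers, A)
print(vertices.shape)  # (6890, 3)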

Video



Results on natural videos



Comparison on the SURREAL dataset




Comparison on the 3DPW dataset




Citation

Template courtesy of Jon Barron.