This is an overview page with metadata for this research paper. The full article is available from the publisher.
Reconstructing Hands in 3D with Transformers
Citations: 2
Authors: 6
Year: 2023
Abstract
We present an approach that can reconstruct hands in 3D from monocular input. Our approach for Hand Mesh Recovery, HaMeR, follows a fully transformer-based architecture and can analyze hands with significantly increased accuracy and robustness compared to previous work. The key to HaMeR's success lies in scaling up both the data used for training and the capacity of the deep network for hand reconstruction. For training data, we combine multiple datasets that contain 2D or 3D hand annotations. For the deep model, we use a large scale Vision Transformer architecture. Our final model consistently outperforms the previous baselines on popular 3D hand pose benchmarks. To further evaluate the effect of our design in non-controlled settings, we annotate existing in-the-wild datasets with 2D hand keypoint annotations. On this newly collected dataset of annotations, HInt, we demonstrate significant improvements over existing baselines. We make our code, data and models available on the project website: https://geopavlakos.github.io/hamer/.
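The abstract describes a pipeline in which a large Vision Transformer backbone extracts image features from a hand crop and a transformer head regresses hand-model parameters. The following is a minimal, hypothetical sketch of that idea, not HaMeR's actual code: the patch tokens stand in for ViT backbone features, a single cross-attention readout replaces the full transformer head, and a MANO-style parameter split (48 pose, 10 shape, 3 camera) is assumed.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention (single head, no learned projections).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
num_patches, dim = 192, 64  # hypothetical patch grid and feature size
# Stand-in for patch tokens produced by a ViT backbone on a hand crop.
tokens = rng.standard_normal((num_patches, dim))

# One learned query token attends over all patch tokens; a linear layer
# then maps the readout to MANO-style parameters. A real head would stack
# several transformer layers and use learned weights.
query = rng.standard_normal((1, dim))
readout = attention(query, tokens, tokens)      # (1, dim)

W = rng.standard_normal((dim, 61))              # 48 pose + 10 shape + 3 camera
params = readout @ W
pose, shape, cam = params[0, :48], params[0, 48:58], params[0, 58:]
print(pose.shape, shape.shape, cam.shape)
```

In the real system the regressed pose and shape parameters would be fed to a differentiable hand model to produce the 3D mesh; this sketch only illustrates the token-to-parameter regression step.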
Related Works
Stacked Hourglass Networks for Human Pose Estimation
2016 · 5,146 citations
Pfinder: real-time tracking of the human body
1997 · 4,163 citations
Impedance Control: An Approach to Manipulation: Part I—Theory
1985 · 3,583 citations
DeepPose: Human Pose Estimation via Deep Neural Networks
2014 · 3,215 citations
Online and off-line handwriting recognition: a comprehensive survey
2000 · 2,475 citations