EgoPCA: A New Framework for Egocentric Hand-Object Interaction Understanding

MVIG-RHOS, SJTU

With the surge in attention to Egocentric Hand-Object Interaction (Ego-HOI), large-scale datasets such as Ego4D and EPIC-KITCHENS have been proposed. However, most current research is built on resources derived from third-person video action recognition. This inherent domain gap between first- and third-person action videos, which has not been adequately addressed before, makes current Ego-HOI research suboptimal. This paper rethinks the problem and proposes a new framework as an infrastructure to advance Ego-HOI recognition, contributing a new baseline, comprehensive pretrain sets, and balanced test sets, together with a training-finetuning strategy. With our new framework, we not only achieve state-of-the-art performance on Ego-HOI benchmarks but also build several new and effective mechanisms and settings to advance further research. We believe our data and findings will pave the way for Ego-HOI understanding.

News and Olds

[2023.09] Our paper is available on arXiv.
[2023.07] EgoPCA will appear at ICCV 2023.

Download

Our data subsets and code will be released soon!

Publications

If you find our paper, data, or code useful, please cite:
@article{egopca,
  title={EgoPCA: A New Framework for Egocentric Hand-Object Interaction Understanding},
  author={Xu, Yue and Li, Yong-Lu and Huang, Zhemin and Liu, Michael Xu
          and Lu, Cewu and Tai, Yu-Wing and Tang, Chi-Keung},
  journal={arXiv preprint arXiv:2309.02423},
  year={2023}
}