CRNet: Context feature and refined network for multi-person pose estimation
Abstract: Multi-person pose estimation is a challenging problem. Bottom-up methods have been studied intensively because the inference speed of top-down methods grows with the number of people in the input image, which makes top-down methods difficult to deploy in real-time settings. Overcoming the scale sensitivity and quantization error of bottom-up methods requires a model that can predict keypoints at multiple scales and refine the quantized coordinates. To this end, we propose a context feature and refined network for multi-person pose estimation (CRNet), which effectively addresses both problems. We use a multi-scale feature pyramid together with context features to make the network scale invariant: global and local features are extracted and then fused by attentional feature fusion (AFF) to obtain context features that adapt to keypoints of different scales. In addition, we propose an efficient refined network to correct the quantization error and use multi-resolution supervised learning to further improve the prediction accuracy of CRNet. Comprehensive experiments on two benchmarks, the COCO and MPII datasets, show that CRNet reaches an average precision of 72.1% and 80.2%, respectively, surpassing most state-of-the-art methods.
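The abstract gives no implementation details; as a rough illustration of the attentional feature fusion (AFF) step it mentions, the sketch below fuses a local and a global feature map with a multi-scale channel-attention weight, following the published AFF design (Dai et al., WACV 2021). The channel count, reduction ratio, and module layout here are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class AFF(nn.Module):
    """Attentional feature fusion of a local and a global feature map.

    A minimal sketch after Dai et al.'s AFF: a channel-attention weight w
    blends the two inputs as w * x + (1 - w) * y. The channel count and
    reduction ratio r are illustrative assumptions.
    """
    def __init__(self, channels=64, r=4):
        super().__init__()
        mid = channels // r
        # Local attention branch: point-wise convs preserve spatial detail.
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        # Global attention branch: pooled context through the same bottleneck.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, y):
        # x: local feature map, y: global/context feature map (same shape).
        s = x + y
        w = self.sigmoid(self.local_att(s) + self.global_att(s))
        return w * x + (1.0 - w) * y

# Usage: fuse two 64-channel feature maps of the same resolution.
if __name__ == "__main__":
    fuse = AFF(channels=64)
    fuse.eval()  # eval mode so BatchNorm works with a single sample
    local_feat = torch.randn(1, 64, 32, 32)
    global_feat = torch.randn(1, 64, 32, 32)
    with torch.no_grad():
        fused = fuse(local_feat, global_feat)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

The learned weight w gates, per channel and position, how much of the local versus the global branch survives in the fused output, which is what lets a context feature of this kind adapt to keypoints of different scales.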
- Location: Deutsche Nationalbibliothek Frankfurt am Main
- Extent: Online resource
- Language: English
- Bibliographic citation: CRNet: Context feature and refined network for multi-person pose estimation. Journal of Intelligent Systems, vol. 31, no. 1 (2022), pp. 780-794 (15 pages)
- Creator: Zhao, Lanfei; Chen, Zhihua
- DOI: 10.1515/jisys-2022-0060
- URN: urn:nbn:de:101:1-2022071514301983739820
- Rights: Open Access; access to the object is unrestricted.
- Last update: 15.08.2025, 7:32 AM CEST
- Data provider: Deutsche Nationalbibliothek