COST-EFFECTIVE CAMERA LOCALIZATION AIDED BY PRIOR POINT CLOUDS MAPS FOR LEVEL 3 AUTONOMOUS DRIVING VEHICLES

Abstract. Precise and robust localization is critical for many navigation tasks, especially for autonomous driving systems. The most popular localization approach is based on global navigation satellite systems (GNSS). However, GNSS suffers from several shortcomings, such as multipath and non-line-of-sight reception. Vision-based localization is one approach that does not rely on GNSS. This paper performs visual localization with a prior 3D LiDAR map. In contrast to common visual localization methods that rely on camera-acquired maps, the presented method tracks image features and the pose of a monocular camera and matches them against the prior 3D LiDAR map. Image features are reconstructed into sets of 3D points by a local bundle adjustment-based visual odometry system. These 3D points are then matched with the prior 3D point cloud map to track the global pose of the user. The visual localization approach has several advantages: (1) since it relies only on matching geometry, it is robust to changes in ambient illumination and appearance; (2) the prior 3D map provides viewpoint invariance. Moreover, the proposed method requires only a low-cost, lightweight camera sensor.
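The core step the abstract describes — aligning 3D points reconstructed by visual odometry with a prior LiDAR point cloud to recover a global pose — is a rigid registration problem. The paper's own implementation is not given here; the sketch below shows only one illustrative building block, the closed-form Kabsch/SVD alignment, under the simplifying assumption that point correspondences are already known (in practice an ICP-style matcher would iterate this step with nearest-neighbour association). All names and the example data are hypothetical.

```python
import numpy as np

def align_rigid(src: np.ndarray, dst: np.ndarray):
    """Estimate rotation R (3x3) and translation t (3,) minimizing
    sum ||R @ src_i + t - dst_i||^2 over corresponding point pairs
    (Kabsch algorithm via SVD)."""
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

# Hypothetical data: 3D points from visual odometry and the same points
# expressed in the prior-map frame under a known ground-truth pose.
rng = np.random.default_rng(0)
local_pts = rng.standard_normal((50, 3))
theta = np.deg2rad(30.0)  # ground-truth yaw
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
map_pts = local_pts @ R_true.T + t_true

R_est, t_est = align_rigid(local_pts, map_pts)
```

With noise-free correspondences the estimate recovers the ground-truth pose exactly; a real system would embed this inside an iterative matcher and fuse the result with the odometry.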

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online resource
Language
English

Published in
COST-EFFECTIVE CAMERA LOCALIZATION AIDED BY PRIOR POINT CLOUDS MAPS FOR LEVEL 3 AUTONOMOUS DRIVING VEHICLES ; volume:XLVIII-1/W1-2023 ; year:2023 ; pages:227-234 ; extent:8
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences ; XLVIII-1/W1-2023 (2023), 227-234 (8 pages in total)

Authors
Leung, Y.-T.
Zheng, X.
Ho, H.-Y.
Wen, W.
Hsu, L.-T.

DOI
10.5194/isprs-archives-XLVIII-1-W1-2023-227-2023
URN
urn:nbn:de:101:1-2023060104315484140667
Rights information
Open Access; access to the object is unrestricted.
Last updated
14.08.2025, 10:56 CEST

Data partner

This object is provided by:
Deutsche Nationalbibliothek. For questions about the object, please contact the data partner.

