CONTRASTIVE SELF-SUPERVISED DATA FUSION FOR SATELLITE IMAGERY

Abstract. Self-supervised learning has great potential for the remote sensing domain, where unlabelled observations are abundant, but labels are hard to obtain. This work leverages unlabelled multi-modal remote sensing data for augmentation-free contrastive self-supervised learning. Deep neural network models are trained to maximize the similarity of latent representations obtained with different sensing techniques from the same location, while distinguishing them from other locations. We showcase this idea with two self-supervised data fusion methods and compare against standard supervised and self-supervised learning approaches on a land-cover classification task. Our results show that contrastive data fusion is a powerful self-supervised technique to train image encoders that are capable of producing meaningful representations: Simple linear probing performs on par with fully supervised approaches and fine-tuning with as little as 10% of the labelled data results in higher accuracy than supervised training on the entire dataset.
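The contrastive objective described in the abstract (pull together embeddings of the same location from different sensors, push apart those of other locations) can be sketched as an InfoNCE-style loss. This is a minimal illustration, not the authors' implementation; the function name, batch construction, and temperature value are assumptions for the example.

```python
import numpy as np

def cross_modal_infonce(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss between two modality embeddings.

    z_a, z_b: (N, D) arrays of L2-normalised embeddings. Row i of each
    array comes from the same location (positive pair); all other rows
    in the batch act as negatives.
    """
    # Cosine-similarity matrix, scaled by the temperature.
    logits = z_a @ z_b.T / temperature            # shape (N, N)
    # Row-wise log-softmax; the diagonal holds the positive pairs.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Loss is the mean negative log-likelihood of the matching pairs.
    return -np.mean(np.diag(log_prob))
```

With well-aligned embeddings (each row most similar to its counterpart) the loss approaches zero, while mismatched pairings yield a larger value, which is what drives the encoders to produce sensor-invariant representations of a location.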

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online resource
Language
English

Published in
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-3-2022 (2022), pp. 705-711 (7 pages)

Authors
Scheibenreif, L.
Mommert, Michael
Borth, D.

DOI
10.5194/isprs-annals-V-3-2022-705-2022
URN
urn:nbn:de:101:1-2022051905170557714117
Rights information
Open Access; access to the object is unrestricted.
Last updated
15.08.2025, 07:21 CEST

Data partner
Deutsche Nationalbibliothek

