Practical Techniques for Vision-Language Segmentation Model in Remote Sensing

Abstract. Traditional semantic segmentation models often generalize poorly in zero-shot scenarios, such as recognizing attributes unseen among the training labels. Vision-language models (VLMs), on the other hand, have shown promise on zero-shot tasks by leveraging semantic information from textual inputs and fusing it with visual features. However, existing VLM-based methods perform less effectively on remote sensing data because such data is scarce in their training datasets. In this paper, we introduce a two-stage fine-tuning approach for a VLM-based segmentation model using a large remote sensing image-caption dataset, which we created with an existing image-captioning model. Additionally, we propose a modified decoder and a visual prompt technique based on a saliency map to enhance segmentation results. With these methods, we achieve superior segmentation performance on remote sensing data, demonstrating the effectiveness of our approach.
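The abstract mentions a visual prompt technique based on a saliency map. The paper's exact mechanism is not described here; the following is a minimal, hypothetical sketch of one common variant, blending a normalized saliency map into the input image as a colored highlight before it is passed to the segmentation model (function name, blend scheme, and the `alpha` parameter are illustrative assumptions, not the authors' method):

```python
import numpy as np

def apply_saliency_prompt(image: np.ndarray, saliency: np.ndarray,
                          alpha: float = 0.4) -> np.ndarray:
    """Blend a saliency map into an RGB image as a visual prompt.

    image:    (H, W, 3) float array in [0, 1]
    saliency: (H, W) float array (any range; normalized below)
    alpha:    maximum blend weight of the saliency highlight

    Hypothetical sketch -- not the method from the paper.
    """
    s = saliency.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # normalize to [0, 1]

    # Highlight salient regions in red, a common visual-prompt choice.
    highlight = np.zeros_like(image)
    highlight[..., 0] = s

    # Per-pixel blend: strongly salient pixels receive more highlight.
    w = alpha * s[..., None]
    prompted = (1.0 - w) * image + w * highlight
    return np.clip(prompted, 0.0, 1.0)
```

The prompted image can then be fed to the VLM-based segmentation model in place of the raw input, steering attention toward salient regions without changing model weights.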

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online resource
Language
English

Bibliographic citation
Practical Techniques for Vision-Language Segmentation Model in Remote Sensing ; volume:XLVIII-2-2024 ; year:2024 ; pages:203-210 ; extent:8
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences ; XLVIII-2-2024 (2024), 203-210 (8 pages total)

Creator
Lin, Yuting
Suzuki, Kumiko
Sogo, Shinichiro

DOI
10.5194/isprs-archives-XLVIII-2-2024-203-2024
URN
urn:nbn:de:101:1-2408051410527.834882234899
Rights
Open Access; access to the object is unrestricted.
Last update
14.08.2025, 10:55 AM CEST

Data provider
Deutsche Nationalbibliothek
