OmniSat: Self-Supervised Modality Fusion for Earth Observation
Guillaume Astruc*, Nicolas Gonthier, Clement Mallet, Loic Landrieu
Abstract
The diversity and complementarity of sensors available for Earth Observation (EO) call for developing bespoke self-supervised multimodal learning approaches. However, current multimodal EO datasets and models typically focus on a single data type, either mono-date images or time series, which limits their impact. To address this issue, we introduce OmniSat, a novel architecture able to merge diverse EO modalities into expressive features without labels by exploiting their alignment. To demonstrate the advantages of our approach, we create two new multimodal datasets by augmenting existing ones with new modalities. As demonstrated for three downstream tasks—forestry, land cover classification, and crop mapping—OmniSat can learn rich representations without supervision, leading to state-of-the-art performance in semi- and fully supervised settings. Furthermore, our multimodal pretraining scheme improves performance even when only one modality is available for inference. The code and dataset are available at .