BERT BI-MODAL SELF-SUPERVISED LEARNING FOR CROP CLASSIFICATION USING SENTINEL-2 AND PLANETSCOPE


Blog Article

Crop identification and monitoring of crop dynamics are essential for agricultural planning, environmental monitoring, and ensuring food security. Recent advancements in remote sensing technology and state-of-the-art machine learning have enabled large-scale automated crop classification. However, these methods rely on labeled training data, which requires skilled human annotators or extensive field campaigns, making the process expensive and time-consuming.

Self-supervised learning techniques have demonstrated promising results in leveraging large unlabeled datasets across domains. Yet, self-supervised representation learning for crop classification from remote sensing time series remains under-explored due to challenges in curating suitable pretext tasks. While bi-modal self-supervised approaches combining data from Sentinel-2 and PlanetScope sensors have facilitated pre-training, existing methods primarily exploit the distinct spectral properties of these complementary data sources.

In this work, we propose novel self-supervised pre-training strategies inspired by BERT that leverage both the spectral and temporal resolution of Sentinel-2 and PlanetScope imagery. We carry out extensive experiments comparing our approach to existing baseline setups across nine test cases, and our method outperforms the baselines in eight of them. This pre-training thus yields an effective representation of crops for downstream tasks such as crop classification.
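To make the BERT analogy concrete, a minimal sketch of the masking step in such a pretext task is shown below. This is an illustrative example only, not the authors' actual implementation: the function name `mask_timesteps`, the mask ratio, and the sentinel value are all assumptions. The idea is the same as BERT's masked-token prediction, but the "tokens" are timesteps of a multispectral satellite time series that the model must reconstruct.

```python
import numpy as np

def mask_timesteps(series, mask_ratio=0.15, mask_value=0.0, rng=None):
    """BERT-style masking for a satellite image time series.

    series: array of shape (T, C) -- T timesteps, C spectral bands.
    Returns the masked series and a boolean mask marking which
    timesteps the encoder must reconstruct (the pretext objective).
    """
    rng = rng or np.random.default_rng(0)
    T = series.shape[0]
    n_mask = max(1, int(round(mask_ratio * T)))
    # Pick timesteps to hide, without replacement
    idx = rng.choice(T, size=n_mask, replace=False)
    mask = np.zeros(T, dtype=bool)
    mask[idx] = True
    masked = series.copy()
    masked[mask] = mask_value  # replace hidden observations with a sentinel
    return masked, mask

# Example: a Sentinel-2-like series with 24 timesteps and 10 bands
s2 = np.random.default_rng(1).normal(size=(24, 10))
masked, mask = mask_timesteps(s2, mask_ratio=0.15)
```

During pre-training, a reconstruction loss (e.g. mean squared error) would then be computed only on the masked positions, so the encoder learns to infer missing observations from the surrounding temporal and spectral context.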
