GEOINFORMATION TECHNOLOGY NEURAL NETWORK SEGMENTATION FOR LAND COVER MAPPING

Authors

DOI:

https://doi.org/10.32782/IT/2024-3-6

Keywords:

network segmentation, deep learning, confusion matrix, error matrix, optical satellite images, ResNet model

Abstract

The relevance of developing modern technologies for land cover segmentation is growing due to increasing requirements for accurate monitoring and management of land resources, including for agricultural purposes. Traditional segmentation methods often lack accuracy when classifying complex classes such as crops, trees, buildings, and roads. The work aims to develop a geoinformation technology for extracting multiple features from Sentinel-2 satellite images and using them to segment the land cover with the ResNet neural network. Methodology. This study uses Sentinel-2 images for land cover analysis. First, the images undergo preprocessing, which includes atmospheric correction and geometric and radiometric calibration. The data are then normalized to improve the stability of neural network training. At the next stage, the images are processed to extract spectral, morphological, and textural features, which serve as input to the ResNet model. The model uses convolutional layers with the ReLU activation function to extract features automatically. A fully connected layer with Softmax and Cross-Entropy functions is used for classification. After training, the model classifies each pixel, producing a segmented image that shows different classes of land cover, including farmland, buildings, trees, and roads. The scientific novelty of the research lies in a new methodology for processing Sentinel-2 satellite images that integrates comprehensive preprocessing, data normalization, multimodal feature extraction, and deep neural networks for automatic feature extraction and classification. The new approaches to atmospheric, geometric, and radiometric correction, together with the use of ResNet with ReLU activation and fully connected layers with Softmax and Cross-Entropy functions, improve classification accuracy and the detail of land cover segmentation. Conclusions. The study showed that the proposed technology provides a significant improvement in classification accuracy and quality compared to traditional methods such as IsoData, K-means, SVM, Minimum Distance, Maximum Likelihood, and Parallelepiped. The results show that the ResNet-based technology achieves high precision in segmenting the main land cover classes (crops, trees, buildings, and roads), which is crucial for effective land monitoring and management.
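The classification stage described in the abstract (per-band normalization, then per-pixel Softmax with a Cross-Entropy loss) can be sketched in Python. This is a minimal NumPy illustration, not the authors' implementation; the array shapes, the four-class setup, and the random logits are assumptions for demonstration:

```python
import numpy as np

def normalize_bands(image: np.ndarray) -> np.ndarray:
    """Min-max normalize each spectral band to [0, 1] to stabilize training.

    image: (H, W, B) array of Sentinel-2 band values (shape is an assumption).
    """
    mins = image.min(axis=(0, 1), keepdims=True)
    maxs = image.max(axis=(0, 1), keepdims=True)
    return (image - mins) / np.maximum(maxs - mins, 1e-8)

def softmax(logits: np.ndarray) -> np.ndarray:
    """Softmax over the last axis (class scores per pixel)."""
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-entropy loss for integer per-pixel reference labels."""
    h, w, _ = probs.shape
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(np.maximum(p_true, 1e-12)).mean())

# Per-pixel class assignment: argmax over the softmax output.
# Illustrative 4-class setup: crops, trees, buildings, roads.
logits = np.random.default_rng(0).normal(size=(4, 4, 4))
segmented = softmax(logits).argmax(axis=-1)  # (H, W) class map
```

In the full pipeline described above, the logits would come from the ResNet's fully connected layer rather than a random generator; the normalization, Softmax, and Cross-Entropy steps themselves follow the standard definitions.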
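The confusion-matrix (error-matrix) evaluation named in the keywords, which compares a segmented map against reference labels class by class, can be sketched as follows. This is a NumPy-only illustration; the class list, synthetic labels, and choice of metrics are assumptions, not the paper's reported experiment:

```python
import numpy as np

def confusion_matrix(true: np.ndarray, pred: np.ndarray, n_classes: int) -> np.ndarray:
    """Error matrix: rows = reference classes, columns = predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (true.ravel(), pred.ravel()), 1)  # unbuffered accumulation per pixel
    return cm

def overall_accuracy(cm: np.ndarray) -> float:
    """Fraction of pixels on the diagonal (correctly classified)."""
    return float(np.trace(cm) / cm.sum())

def per_class_precision(cm: np.ndarray) -> np.ndarray:
    """Precision (user's accuracy) per class: diagonal / column sums."""
    col = cm.sum(axis=0)
    return np.divide(np.diag(cm), col, out=np.zeros(len(cm)), where=col > 0)

# Synthetic example with the four land-cover classes named in the abstract.
classes = ["crops", "trees", "buildings", "roads"]
rng = np.random.default_rng(0)
true = rng.integers(0, 4, size=(64, 64))
# A "prediction" that agrees with the reference ~90% of the time.
pred = np.where(rng.random((64, 64)) < 0.9, true, rng.integers(0, 4, size=(64, 64)))
cm = confusion_matrix(true, pred, len(classes))
```

The same matrix also yields recall (producer's accuracy, diagonal over row sums) and the kappa coefficient, which are standard companions in land cover accuracy assessment.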

References

Solórzano J. V., Mas J. F., Gao Y., Gallardo-Cruz J. A. Land Use Land Cover Classification with U-Net: Advantages of Combining Sentinel-1 and Sentinel-2 Imagery. Remote Sens. 2021, 13, 3600.

Zhang H., Wang L., Tian T., Yin J. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221.

Peng X., Han W., Ao J., Wang Y. Assimilation of LAI Derived from UAV Multispectral Data into the SAFY Model to Estimate Maize Yield. Remote Sens. 2021, 13, 1094.

Lianze T., Yong L., Hongji Z., Sijia L. Summary of UAV Remote Sensing Application Research in Agricultural Monitoring. Sci. Technol. Inf. 2018, 16, 122–124.

Vincent G., Antin C., Laurans M., Heurtebize J., Durrieu S., Lavalley C., Dauzat J. Mapping plant area index of tropical evergreen forest by airborne laser scanning. A cross-validation study using LAI2200 optical sensor. Remote Sens. Environ. 2017, 198, 254–266.

Rakhlin A., Davydow A., Nikolenko S. Land cover classification from satellite imagery with U-Net and Lovász-Softmax loss. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018, 262–266.

Bossard M., Feranec J., Otahel J. CORINE Land Cover Technical Guide: Addendum; European Environment Agency: Copenhagen, Denmark, 2000; Volume 40.

Zanaga D., Van De Kerchove R., De Keersmaecker W., Souverijns N., Brockmann C., Quast R., Wevers J., Grosu A., Paccini A., Vergnaud S., et al. ESA WorldCover 10 m 2020 v100; OpenAIRE: Los Angeles, CA, USA, 2021.

Makantasis K., Karantzalos K., Doulamis A., Doulamis N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 2015, 4959–4962.

Xie C., Zhu H., Fei Y. Deep coordinate attention network for single image super-resolution. IET Image Process. 2022, 16, 273–284.

Kamilaris A., Prenafeta-Boldú F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.

Yang Y., Newsam S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2010, 270–279.

Zhao L., Tang P., Huo L. Feature significance-based multibag-of-visual-words model for remote sensing image scene classification. Journal of Applied Remote Sensing, 10(3):035004, 2016.

Zhou W., Newsam S., Li C., Shao Z. PatternNet: a benchmark dataset for performance evaluation of remote sensing image retrieval. ISPRS Journal of Photogrammetry and Remote Sensing, 2018.

Cheng G., Han J., Lu X. Remote sensing image scene classification: benchmark and state of the art. Proceedings of the IEEE, 105(10):1865–1883, 2017.

Basu S., Ganguly S., Mukhopadhyay S., DiBiano R., Karki M., Nemani R. DeepSat: a learning framework for satellite imagery. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2015, 37.

Kashtan V. Yu., Shevtsova O. S. Information technology for satellite image preprocessing using a convolutional neural network. System Technologies. Regional interuniversity collection of scientific papers. Issue 1 (150). Dnipro, 2024. pp. 36–50. DOI: 10.34185/1562-9945-1-150-2024-04.

Selmi L. Land Use and Land Cover Classification using a ResNet Deep Learning Architecture, 2022.

Published

2024-12-06