Background

The goal of the challenge is to locate ancient Maya structures (aguadas, platforms and buildings) by performing integrated image segmentation of different types of satellite imagery and airborne laser scanning (ALS, lidar) data.

Remote sensing has greatly accelerated traditional archaeological landscape surveys in the forested regions once inhabited by the ancient Maya. Typical exploration and discovery attempts focus on individual buildings and structures, as well as on whole ancient cities. In terms of machine learning approaches, there have been a few very recent successful attempts at identifying Maya settlements (Somrak et al. 2020; Bundzel et al. 2020), focusing on narrow areas and relying on high-quality ALS data. However, ALS data typically covers only a fraction of the region where the ancient Maya once settled, is not easily accessible, and may suffer from poor resolution and quality.

On the other hand, the satellite image data produced by the European Space Agency's (ESA) Sentinel missions is abundant and, more importantly, publicly available. The Sentinel-1 A and B satellites are equipped with Synthetic Aperture Radar (SAR) instruments operating globally with frequent revisit periods of approximately 5-6 days. The Sentinel-2 A and B satellites, on the other hand, carry one of the most sophisticated optical sensors, the MultiSpectral Instrument (MSI), capturing imagery from the visible to the shortwave-infrared (SWIR) spectrum at spatial resolutions of 10-60 m, with frequent revisit periods of approximately 5-10 days. While the latter has been shown to yield accurate performance on a variety of remote sensing tasks, optical data is heavily affected by cloud cover, which for this particular challenge (set in tropical regions) means that only a handful of high-quality, cloud-free images are available per year. In such scenarios, employing radar data from the Sentinel-1 satellites can additionally help. Combining Sentinel-1 and Sentinel-2 data has been shown to improve performance on various land-use and land-cover classification tasks.
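To illustrate what "combining Sentinel data" can mean in practice, below is a minimal sketch of sensor fusion by channel stacking: SAR backscatter bands (VV, VH) and optical bands are stacked into one multi-channel array and normalized per channel, yielding an input a segmentation model could consume. The band choices, array shapes, and the assumption that all inputs are already co-registered to a common 10 m grid are illustrative, not part of the challenge specification.

```python
import numpy as np

def fuse_sentinel(s1_vv, s1_vh, s2_bands):
    """Stack Sentinel-1 SAR backscatter (VV, VH) with Sentinel-2
    optical bands into one multi-channel array.

    Assumes all inputs are already co-registered and resampled to
    the same grid, one (H, W) array per band.
    """
    channels = [s1_vv, s1_vh] + list(s2_bands)
    stacked = np.stack(channels, axis=0).astype(np.float32)
    # Z-score each channel independently so SAR backscatter (in dB)
    # and optical reflectance end up on a comparable scale.
    mean = stacked.mean(axis=(1, 2), keepdims=True)
    std = stacked.std(axis=(1, 2), keepdims=True) + 1e-8
    return (stacked - mean) / std

# Synthetic 64x64 tiles: 2 SAR bands plus 4 optical bands
rng = np.random.default_rng(0)
vv = rng.normal(-10.0, 3.0, (64, 64))   # typical VV backscatter range (dB)
vh = rng.normal(-17.0, 3.0, (64, 64))   # typical VH backscatter range (dB)
optical = [rng.uniform(0.0, 1.0, (64, 64)) for _ in range(4)]
x = fuse_sentinel(vv, vh, optical)
print(x.shape)  # (6, 64, 64)
```

Early fusion by channel stacking is only one option; multi-branch architectures that process each sensor separately before merging features (as in Ienco et al. 2019) are a common alternative.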

Recently, there have been several attempts at Maya settlement identification using remote sensing; in terms of state-of-the-art computer vision approaches, they typically employ ALS data. Bundzel et al. (2020) used ALS data from the Pacunam Lidar Initiative (PLI) covering the Maya Biosphere Reserve in Guatemala, together with manually labeled Maya structures. They applied two architectures, U-Net and Mask R-CNN, to semantic segmentation. The segmentation models took as input the digital elevation model (DEM) derived from the ALS data and addressed two tasks: identifying areas of ancient construction activity, and identifying the remnants of ancient Maya buildings. They report that the U-Net-based model performs better on both tasks and correctly identifies 60-66% of all objects, and 74-81% of medium-sized objects. In our prior work (Somrak et al. 2020), we used manually labeled ALS data from the Chactún archaeological site to train a classification model that distinguishes three types of man-made structures (buildings, platforms and aguadas) from surrounding natural formations (terrain) in ALS visualizations. The trained VGG-19 convolutional neural network classified images into the four classes with an overall accuracy of up to 95%.
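The ALS visualizations mentioned above are typically relief renderings computed from the DEM. As a concrete example, the sketch below computes an analytical hillshade, one standard DEM visualization, from surface gradients; the illumination parameters and the synthetic mound are illustrative assumptions, not the exact pipeline used in the cited studies.

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Analytical hillshade of a DEM, values in [0, 1].

    azimuth_deg/altitude_deg give the illumination direction;
    315/45 is a common default for relief visualization.
    """
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass -> math convention
    alt = np.radians(altitude_deg)
    # Surface gradients via central differences (a simplification of
    # Horn's method used by most GIS tools).
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Synthetic DEM: a 5 m high Gaussian mound on a flat 100x100 grid,
# a crude stand-in for the kind of structure such surveys look for.
y, x = np.mgrid[0:100, 0:100]
dem = 5.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 200.0)
img = hillshade(dem, cellsize=1.0)
print(img.shape)
```

In practice, several complementary visualizations (hillshade, slope, sky-view factor, openness) are often combined into multi-channel inputs for classification or segmentation models.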

However, both of these studies rely on ALS data, which covers only a fraction of the region where the ancient Maya once settled. Recent work by Ienco et al. (2019) argues that combining satellite data sources can further improve the predictive capability of such models and help them generalize to larger study areas that are typically not covered by ALS data or are hidden under the forest canopy.

More importantly, we expect this challenge to bring together experts, researchers and enthusiasts not only from the machine learning and remote sensing communities but also from archaeology. On the one hand, the challenge poses interesting problems for designing and developing novel integrative machine learning approaches to image segmentation. On the other hand, its results will help uncover centuries-old mysteries hidden under the tropical forest canopy.

References

  • Marek Bundzel, Miroslav Jaščur, Milan Kovač, Tibor Lieskovský, Peter Sinčák, and Tomáš Tkáčik. Semantic segmentation of airborne lidar data in Maya archaeology. Remote Sensing, 12(22), 2020.
  • Dino Ienco, Roberto Interdonato, Raffaele Gaetano, and Dinh Ho Tong Minh. Combining Sentinel-1 and Sentinel-2 satellite image time series for land cover mapping via a multi-source deep learning architecture. ISPRS Journal of Photogrammetry and Remote Sensing, 158:11–22, 2019.
  • Maja Somrak, Sašo Džeroski, and Žiga Kokalj. Learning to classify structures in ALS-derived visualizations of ancient Maya settlements with CNN. Remote Sensing, 12(14), 2020.