
Industrial mapping through fusion of Sentinel-2 and LIDAR data

Impact
Technological developments in aerial/space technology, computer vision, and image processing provide new tools and automated solutions for 2D/3D mapping and reconstruction. However, accurate and reliable 3D reconstruction, as well as the full exploitation of remote sensing data for feature extraction and object detection in complex scenes such as industrial ones, remains challenging. Research activities that combine satellite remote sensing data and Aerial Laser Scanning (ALS) point clouds (widely known as LIDAR) have increased in recent years to exploit both the rich spectral information of satellite imagery and the good geometric quality of LIDAR point clouds. Sentinel-2 data from ESA’s Copernicus program can play a key role in this methodology (https://sentinel.esa.int/web/sentinel/home).

Concept
This study explores the complementary use of Sentinel-2 data and LIDAR point clouds over industrial scenes for: 1) 2D/3D mapping, 2) feature extraction, and 3) object detection. A complex industrial area of 4.3 km² near the city of Patras, Greece, was used as the case study. Several 2D/3D feature/spatial data products were extracted using the Erdas Imagine software (Raster and Point Cloud tools, Terrain Analysis tool, VirtualGIS tool, Machine Learning tool, and Spatial Modeler).

Technical Details

Data description
As a first step, LIDAR point clouds and Sentinel-2 images were collected over the area of interest for the same time period. The Sentinel-2 images (product type MSIL2A) were obtained from the Copernicus Open Access Hub (https://scihub.copernicus.eu/). For the LIDAR point clouds, neighboring LIDAR strips were first adjusted through strip alignment, and a georeferencing process using ground control points (GCPs) and check points (CPs) was carried out afterwards. LIDAR point density varied considerably over the whole block depending on the strip overlap: about 5 points/m² for regions covered by a single strip and 30 points/m² for regions covered by more than one strip. Multiple echoes and intensities were recorded. The GCPs and CPs: 1) consisted of characteristic points (e.g., building corners), 2) were measured with the ComNav T300 receiver in both GPS/RTK and GPS/relative static modes, and 3) had their coordinates computed using the GNSS network of reference stations of SmartNet Europe/MetricaNET.
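The georeferencing accuracy at the check points can be quantified with per-axis and 3D root-mean-square errors. The sketch below is a minimal illustration with hypothetical coordinates; the actual adjustment was performed in dedicated software, and the coordinate values here are invented for the example.

```python
import numpy as np

def checkpoint_rmse(measured, reference):
    """RMSE per axis and combined 3D RMSE between the adjusted LIDAR
    coordinates of check points and their GNSS-surveyed references."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    rmse_xyz = np.sqrt(np.mean(d ** 2, axis=0))          # per-axis RMSE (X, Y, Z)
    rmse_3d = np.sqrt(np.mean(np.sum(d ** 2, axis=1)))   # combined 3D RMSE
    return rmse_xyz, rmse_3d

# Hypothetical check-point coordinates in metres (projected CRS)
measured = [[100.02, 200.01, 50.05], [150.03, 250.00, 55.02]]
reference = [[100.00, 200.00, 50.00], [150.00, 250.00, 55.00]]
rmse_xyz, rmse_3d = checkpoint_rmse(measured, reference)
```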

2D/3D mapping
The LIDAR point clouds were classified and refined to extract the bare-earth points. The bare-earth points were used to generate the Digital Terrain Model (DTM) (Figure 1), while the full LIDAR point clouds were used to generate the Digital Surface Model (DSM). The orthoimage of the area of interest was generated using the LIDAR DTM (Figure 2). The corresponding coloured LIDAR point clouds were also extracted (Figure 3).
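The DTM/DSM step can be sketched as gridding the point cloud: taking the highest return per cell gives a crude DSM, while gridding only the bare-earth points on the same grid gives a DTM, and their difference is the normalized DSM (nDSM) used later as a feature. This is a minimal toy sketch, not the production workflow; the point coordinates and the height-threshold stand-in for ground classification are invented for the example.

```python
import numpy as np

def rasterize_max(points, cell, shape):
    """Grid a point cloud (x, y, z) by keeping the highest z per cell."""
    grid = np.full(shape, np.nan)
    cols = (points[:, 0] // cell).astype(int)
    rows = (points[:, 1] // cell).astype(int)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(grid[r, c]) or z > grid[r, c]:
            grid[r, c] = z
    return grid

# Hypothetical points: x, y, z in metres
pts = np.array([[0.5, 0.5, 10.0], [0.6, 0.4, 14.0], [1.5, 0.5, 10.2]])
ground = pts[pts[:, 2] < 11]              # stand-in for classified bare-earth points
dsm = rasterize_max(pts, cell=1.0, shape=(1, 2))
dtm = rasterize_max(ground, cell=1.0, shape=(1, 2))
ndsm = dsm - dtm                          # normalized DSM: height above ground
```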

Figure 1: Extraction of bare-earth points and generation of the orthoimage.


Figure 2: Superimposition of the DSM to the orthoimage.

Figure 3: LIDAR point clouds coloured (RGB) from Sentinel-2 data.
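Colouring the point cloud from Sentinel-2 amounts to sampling, for each LIDAR point, the raster pixel it falls in (nearest-pixel sampling). The sketch below assumes a north-up raster whose origin is the top-left corner; the raster values and point coordinates are hypothetical.

```python
import numpy as np

def colour_points(points, rgb_raster, origin, pixel_size):
    """Assign each LIDAR point the RGB of the raster pixel it falls in.
    origin = (x, y) of the raster's top-left corner; row 0 is the northern edge."""
    cols = ((points[:, 0] - origin[0]) // pixel_size).astype(int)
    rows = ((origin[1] - points[:, 1]) // pixel_size).astype(int)
    return rgb_raster[rows, cols]

# Hypothetical 2x2-pixel RGB raster at 10 m resolution
raster = np.array([[[255, 0, 0], [0, 255, 0]],
                   [[0, 0, 255], [255, 255, 0]]], dtype=np.uint8)
pts = np.array([[5.0, 15.0, 42.0],    # falls in the top-left pixel
                [15.0, 5.0, 40.0]])   # falls in the bottom-right pixel
colours = colour_points(pts, raster, origin=(0.0, 20.0), pixel_size=10.0)
```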

Feature extraction
Suitable features were computed through Spatial Models (Figure 4) to distinguish built-up areas from bare soil, vegetation, roads, etc., such as:
- Intensity (from LIDAR data)
- Normalized DSM (nDSM) (from LIDAR data)
- Combination 4/3/2 (from Sentinel-2, pixel size of 10 m)
- Combination 8/4/3 (from Sentinel-2, pixel size of 10 m)
- Band Ratio for Built-up Area (BRBA) (from Sentinel-2, pixel size of 10 m)
- Combination 12/11/4 (from Sentinel-2, pixel size of 20 m)
- Combination 11/8A/2 (from Sentinel-2, pixel size of 20 m)
- Bare Soil Index (BSI) (from Sentinel-2, pixel size of 20 m)
- Normalized Difference Water Index (NDWI) (from Sentinel-2, pixel size of 20 m)
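Such spectral indices are simple band-arithmetic expressions. The sketch below shows two of them under common formulations that are an assumption here: the McFeeters NDWI, (Green − NIR)/(Green + NIR), and a widely used BSI, ((SWIR1 + Red) − (NIR + Blue))/((SWIR1 + Red) + (NIR + Blue)); the study's Spatial Models may use different variants, and the reflectance values are invented.

```python
def ndwi(green, nir):
    """McFeeters NDWI; positive values generally indicate water."""
    return (green - nir) / (green + nir)

def bsi(blue, red, nir, swir1):
    """Bare Soil Index (one common formulation); higher values
    generally indicate exposed soil."""
    return ((swir1 + red) - (nir + blue)) / ((swir1 + red) + (nir + blue))

# Hypothetical surface-reflectance values for single pixels
water_ndwi = ndwi(green=0.30, nir=0.10)             # high green, low NIR
soil_bsi = bsi(blue=0.10, red=0.30, nir=0.20, swir1=0.40)
```

The same functions work unchanged on whole numpy band arrays, which is how they would be applied to a Sentinel-2 scene.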

Figure 4: Extracted features through Spatial Models.

Object detection
Industrial areas include buildings, premises, and structures with complex shapes (e.g., tanks). Typical LIDAR point cloud classification techniques cannot distinguish buildings from such structures. To this end, several machine learning techniques (SVM, CART, KNN, Naïve Bayes, and Random Forest), fed with several combinations of features (Multi-Dimensional Feature Vectors, MDFVs), were applied and evaluated for a sub-region (Figure 5). Training samples were collected for three classes: 1) buildings, 2) ground, and 3) structures. The best results were achieved with the combination CART+MDFV3 (Figure 6). To optimize the results, post-processing was performed using a majority voting filter and regularization methods. Figure 6 also shows the 3D models of the buildings and structures of the area of interest at Level of Detail 1 (LoD 1).
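The majority voting filter used in post-processing can be sketched as follows: each pixel of the classified raster takes the most frequent class label within its neighbourhood, which removes isolated misclassified pixels (salt-and-pepper noise). This is a minimal pure-numpy sketch with a hypothetical 3x3 class map; the actual post-processing ran inside the Erdas Imagine workflow.

```python
import numpy as np

def majority_filter(labels, size=3):
    """Replace each pixel with the most frequent class label in its
    size x size neighbourhood (edges handled by replication)."""
    pad = size // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    for r in range(labels.shape[0]):
        for c in range(labels.shape[1]):
            window = padded[r:r + size, c:c + size].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[r, c] = vals[np.argmax(counts)]   # majority vote
    return out

# 0 = ground, 1 = building, 2 = structure; one isolated "structure" pixel
cls = np.array([[1, 1, 1],
                [1, 2, 1],
                [1, 1, 1]])
smoothed = majority_filter(cls)   # the noisy centre pixel becomes a building
```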

Figure 5: Considered MDFVs: Each MDFV includes several features (left); Example of a Spatial Model for the training and classification process of the machine learning method “Random Forest” (right).

Figure 6: The best results were achieved via the combination of CART+MDFV3. The 3D models of the buildings and structures of the area of interest in LoD 1 were automatically extracted.

Conclusions
This study explores the complementary use of Sentinel-2 data and LIDAR point clouds (augmented with additional information such as intensity) for 2D/3D mapping of complex scenes such as industrial ones. The results illustrate the functionality and utility of such a multi-modal approach. Suitable features from both datasets can help distinguish built-up areas from bare soil, vegetation, roads, etc., especially when used to feed machine learning techniques with high generalization capability.

Contact Info
- Contact persons: Maltezos Evangelos, Vasiliki (Betty) Charalampopoulou
- E-mail: mail@geosystems-hellas.gr