
Aerial Mapping March 2010

Processing the increasing volumes of imaging data resulting from today’s aerial mapping technology can prove daunting, but object-based image analysis software can make it manageable.

Since the introduction of photogrammetric methods for mapping terrain and coastlines in the 20th century, imaging data acquisition for surveying has evolved into ever more complex and precise techniques. As sensors have gained accuracy and resolution, aerial imaging has become applicable to a wider range of surveying and land-use analyses. And the development of stereophotogrammetry, orthophotography, and lidar has made it possible to obtain precise estimates of an object's three-dimensional coordinates at varying scales.

However, as the output data from aerial and satellite remote sensing technologies becomes increasingly precise and detailed, it also becomes more voluminous, and the challenge of managing and interpreting it grows. High volumes of imaging data have exposed the limitations of traditional manual sorting and analysis methods and have encouraged the wider adoption of software tools that automate image data analysis.

Rapidly improving optical sensor technologies and lidar systems have made it possible to deliver high-quality maps and models of much larger areas of land. But though these technologies are readily available, the volumes of data associated with high-resolution imaging of large areas limit their applicability. Even with the establishment of base stations and the introduction of shapefile formats that allow continuous field operations, completing the ground component of surveys and compiling and modelling the post-flight data typically take several weeks.

Until now, the processing of optical and lidar data has not been fully automated, so cost-intensive manual data manipulation is still required. The subsequent analysis of compiled images is a daunting proposition: it requires personnel to apply subjective criteria to individual images, which can create inconsistencies when locating and classifying distinct features such as roads, buildings, bodies of water, or forests in a given area. In this respect the overall process is comparable to traditional ground-based mapping methods, which in the past took months to years.

The willingness of regional and federal governments to share digital aerial information has facilitated the development of large-scale mapping projects internationally and has made vast amounts of information available to regional entities at a reduced cost per square kilometer. However, the cost and time associated with assembling and processing detailed image data manually are significant, and the feasibility of completing image analysis for projects covering large areas of land depends on implementing more streamlined processes and using automation tools. Automated image analysis software allows large volumes of imaging data to be processed objectively and rapidly, enabling geo-information professionals to contemplate larger-scale projects and tackle challenges that were previously unattainable with manual image analysis.

A Novel Approach

Recent developments in image analysis software alleviate much of the labor and cost associated with processing lidar and optical data. Using object-based image analysis (OBIA) software to accomplish tasks on a massive scale is a relatively novel approach in the surveying community, but one that offers great promise. The software automates manually intensive processes, making it feasible for organizations to produce detailed analyses of larger areas of land in a cost-effective and timely manner. Definiens introduced OBIA to the remote sensing community in the early 2000s, and the latest iteration of its software, Definiens eCognition 8, enables the simultaneous, automated classification and segmentation of lidar, GIS, and optical sensor datasets.

OBIA works either by using sample objects to guide the program as it identifies similar objects or by establishing rule sets under which the software extracts objects described by specific characteristics and features. The latter method is usually applied to more demanding image analysis tasks. A rule-based approach applies logical processes similar to those humans use to understand images: areas within the image are recognized based on their shape, texture, and local context, which standardizes the OBIA process.
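As a loose illustration only (not Definiens' rule-set language, which is configured within eCognition rather than written as code), a rule set can be pictured as an ordered list of named conditions over per-object features such as shape, size, and spectral values; the feature names and thresholds below are purely hypothetical.

```python
# Toy stand-in for a rule set: each rule pairs a class name with a condition
# over per-object features (all names and thresholds are hypothetical).
RULE_SET = [
    ("road", lambda obj: obj["elongation"] > 5.0 and obj["mean_ndvi"] < 0.2),
    ("forest", lambda obj: obj["mean_ndvi"] > 0.5 and obj["area_m2"] > 1000),
    ("building", lambda obj: obj["mean_elevation"] > 2.0 and obj["mean_ndvi"] < 0.3),
]


def apply_rules(obj: dict) -> str:
    """Assign the first class whose condition the object satisfies."""
    for class_name, condition in RULE_SET:
        if condition(obj):
            return class_name
    return "unclassified"


# A tall, compact, non-vegetated object falls through to the building rule.
print(apply_rules({"elongation": 1.2, "mean_ndvi": 0.1,
                   "area_m2": 250, "mean_elevation": 6.0}))  # -> building
```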

Definiens' OBIA technology is built on context-based principles: through segmentation and classification it represents knowledge about a scene as a semantic network. The software identifies objects rather than examining individual pixels, then draws inferences about those objects by looking at them in context, iteratively building up an interpretation of the image. The ensuing analysis can be divided into segmentation and classification steps.

During the segmentation phase, a tiling and stitching technique is applied. Each scene is broken into pixel tiles, which are processed in parallel. Results are then stitched together and border effects removed, ensuring that images of all possible sizes can be processed. Small objects of interest are merged based on spectral information and elevation data to provide the most accurate approximation of targeted objects. This process accounts for the “fuzziness” associated with the parameters of many real-life objects in high-resolution images, employing “building generalization,” for example, to classify urban objects intuitively.
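For readers who want a concrete picture of the tiling-and-stitching step, the short Python sketch below cuts a single-band scene into fixed-size tiles, processes them in parallel, and reassembles the results. It is not eCognition code: classify_tile is a hypothetical stand-in for the real per-tile analysis, and the sketch omits the overlap handling used in practice to remove border effects.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def classify_tile(tile: np.ndarray) -> np.ndarray:
    """Hypothetical per-tile analysis standing in for the real OBIA rule set."""
    # Placeholder rule: label pixels brighter than the tile mean as "object" (1).
    return (tile > tile.mean()).astype(np.uint8)


def process_scene(scene: np.ndarray, tile_size: int = 2000) -> np.ndarray:
    """Cut a scene into tiles, classify them in parallel, and stitch the results."""
    rows, cols = scene.shape
    origins, tiles = [], []
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            origins.append((r, c))
            tiles.append(scene[r:r + tile_size, c:c + tile_size])

    # Tiles are independent, so they can be processed in parallel,
    # much as eCognition Server distributes them on a larger scale.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(classify_tile, tiles))

    # Stitch the per-tile results back into a full-scene mosaic.
    stitched = np.zeros(scene.shape, dtype=np.uint8)
    for (r, c), result in zip(origins, results):
        stitched[r:r + result.shape[0], c:c + result.shape[1]] = result
    return stitched
```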

The image objects generated in the segmentation phase provide the foundation for the subsequent classification, which uses prescribed conditions such as average elevation, shadow index, and normalized difference vegetation index (NDVI).
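As a rough illustration of the kind of per-object features involved, the sketch below computes NDVI from the red and near-infrared bands and averages it, together with elevation above ground from a normalized surface model, over the pixels of one object. The array names are assumptions made for the example, not part of the eCognition workflow.

```python
import numpy as np


def object_features(red: np.ndarray, nir: np.ndarray,
                    ndsm: np.ndarray, object_mask: np.ndarray) -> dict:
    """Average NDVI and elevation above ground over one image object's pixels."""
    # NDVI = (NIR - red) / (NIR + red); the small epsilon avoids division by zero.
    ndvi = (nir.astype(float) - red) / (nir.astype(float) + red + 1e-9)
    return {
        "mean_ndvi": float(ndvi[object_mask].mean()),
        "mean_elevation": float(ndsm[object_mask].mean()),  # normalized surface model
    }
```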

Put to Use in Austria

The Department of Surveying and Geo-Information of the State Government of Lower Austria is currently using Definiens eCognition to develop a land-use and land-cover model of more than 20,000 square km of territory, encompassing bodies of water, forests, and urban and rural areas. Using a beta version of the newly released eCognition 8 software, the State Government developed an application to detect and quantify changes in forests, buildings, and field and water areas from airborne lidar data and orthophotos. The lidar data have a 1 m grid size; the orthophotos comprise red, green, blue, and near-infrared (NIR) channels with a ground sampling distance of 12.5 cm. Each individual scene covers 1.25 km × 1 km.

The land area is segmented into 2000 × 2000 pixel tiles, which are rapidly processed on eCognition Server. Objects on the tile borders are identified to guide the subsequent stitching of the tiled segments. The rules for segmentation and classification were developed on a representative set of scenes. Within each tile segment, the software automatically classifies elevated objects and distinguishes buildings, trees, and scrub among them.

The objects are classified based on the following logic (a minimal code sketch of this logic appears after the list):
- Vegetation/forest: average elevation above ground and normalized difference vegetation index (NDVI)
- Building: average elevation above ground and NDVI
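To make that logic concrete, here is a minimal, hypothetical decision function built on the per-object features sketched earlier. The thresholds (2 m above ground, an NDVI of 0.3) are illustrative assumptions, not values taken from the Lower Austria rule set.

```python
def classify_elevated_object(mean_elevation: float, mean_ndvi: float,
                             elevation_threshold: float = 2.0,
                             ndvi_threshold: float = 0.3) -> str:
    """Toy version of the elevation/NDVI rule: both classes are elevated
    objects, and NDVI separates vegetation from built structures."""
    if mean_elevation <= elevation_threshold:
        return "unclassified"  # ground-level objects are handled by other rules
    return "vegetation/forest" if mean_ndvi > ndvi_threshold else "building"
```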

The initial classification provides a basis with relatively few misclassifications, predominantly in shadowed transition areas between forests and built-up areas. These misclassified areas are corrected using rules that draw on the object area, the standard deviation of the normalized surface model, and the local context. Other misclassifications in shadow areas next to buildings are corrected using a shadow index computed from the red, green, and NIR channels, combined with the saturation of those channels. Building generalization tools then improve the classification of the buildings before the individual tiles are merged and the data are exported to create a holistic land-use model.
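The article does not spell out the shadow-index formula, so the sketch below is only a plausible stand-in for how such a correction might begin: it flags pixels that are dark yet still relatively saturated in the red, green, and NIR channels, a common heuristic for shadow, using hypothetical thresholds on reflectance values scaled to the 0-1 range.

```python
import numpy as np


def shadow_mask(red: np.ndarray, green: np.ndarray, nir: np.ndarray,
                brightness_threshold: float = 0.15,
                saturation_threshold: float = 0.5) -> np.ndarray:
    """Hypothetical shadow detector: shadow pixels tend to be dark but
    relatively saturated; the real eCognition index may differ."""
    bands = np.stack([red, green, nir]).astype(float)
    brightness = bands.mean(axis=0)
    # HSV-style saturation computed over the three available channels.
    saturation = (bands.max(axis=0) - bands.min(axis=0)) / (bands.max(axis=0) + 1e-9)
    return (brightness < brightness_threshold) & (saturation > saturation_threshold)
```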

The application was tested for transferability on an area of 200 square km. An accuracy assessment of the resulting shapefiles showed that the built-up area was correctly classified for 94.3 percent of the area, while forested areas were classified correctly for 96.1 percent. The Department of Geo-Information will use the software application for urban planning initiatives and as part of a European Union project on higher-traffic networks to develop sound propagation models of traffic noise. The initiative is the first in a planned series of land-use analyses to be repeated every five years. Producing a land-use model of such a large area has been made feasible by minimizing manual image data processing.
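An area-based accuracy figure of this kind can be reproduced by comparing the classified raster with reference data. The sketch below shows one straightforward way to compute it per class; the classified and reference arrays, and the exact definition of accuracy used, are assumptions for the example rather than the department's actual assessment procedure.

```python
import numpy as np


def per_class_area_accuracy(classified: np.ndarray, reference: np.ndarray,
                            class_value: int) -> float:
    """Fraction of a class's reference area that the classification labels correctly."""
    reference_area = reference == class_value
    if not reference_area.any():
        return float("nan")
    correct = (classified == class_value) & reference_area
    return float(correct.sum()) / float(reference_area.sum())
```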

As advances in optical sensors and aerial lidar produce ever-increasing volumes of high-quality data describing vast areas of land, software developers must create tools that realize the potential of this data in remote sensing and surveying applications. With image analysis software, companies like Definiens enable organizations to process imaging data from all remote sensing acquisition modalities automatically, rapidly, and objectively, alleviating the labor and cost barriers associated with large-scale land-use modelling. These developments in automated image analysis now allow organizations to take on novel and ambitious projects: land-use models for entire regions and countries, models that help cellular network providers place their reception towers optimally, and maps identifying the most viable rooftop orientations for solar panel installation in towns, cities, and regions.

Gregor Willhauck is product marketing manager in the Earth Science business unit of Definiens.
Christian Weise is a senior consultant at Definiens and has a graduate diploma in geography from Friedrich-Schiller-University of Jena.
Michael Pregesbauer is the deputy head of the Geo-Information Department of the State Government of Lower Austria, overseeing photogrammetry, remote sensing data acquisition and processing, and image interpretation.

By Michael Pregesbauer, Christian Weise, and Gregor Willhauck
