Such questions are addressed on a daily basis by geospatial intelligence professionals in their work of mapping buildings and infrastructure, identifying vehicles, people or objects, and searching for needles in a global haystack.
Extracting this geospatial intelligence at a planetary scale is an enormous challenge. For example, the Arab Spring of 2010 inspired protests in Syria in early 2011, which spilled over into revolution by 2012 and have since escalated into full-blown civil war. Tracking the evolution of these events, mapping them across an entire nation and understanding the real-time consequences are daunting tasks for any one analyst.
However, with global challenges come global solutions. The international community, connected online, now rallies around challenges like reporting damage, counting vehicles, tracking events and mapping war zones. Using new forms of data, including satellite images, photo-sharing and social media, the “crowd” has become an indispensable producer of geospatial intelligence. But, as with any new form of data, gathering, understanding and assessing the reliability of crowdsourced information is a new frontier for GEOINT.
Satellites and eyeballs
Satellites collect millions of square kilometers of earth imagery every day, gathering amazing data about our planet. This relentless influx of pixels contains valuable information about important locations, objects, and events across the globe. Potentially, every home and office, every car and plane, every flood and fire may be captured and recorded in satellite imagery, waiting to be extracted as intelligence.
Crowdsourcing satellite imagery after the devastating tornado in Moore, OK pinpointed every destroyed home (orange) and damaged roof (blue)
While algorithms for analyzing imagery grow increasingly sophisticated, nothing to date matches the perception and intuition of the human brain. Humans excel at identifying locations that look “interesting”, objects that are “new”, or events that seem “important”. These complex cognitive tasks, while simple for us, are very difficult to automate with machine algorithms. But applying human analysis at the volume and velocity of a satellite constellation is a huge task: much of the valuable insight locked inside these pixels is never realized, simply because of the overwhelming challenge of looking at them all.
So how can we extract rapid, reliable, human insight from trillions of pixels? By scaling the data analysis challenge across a massive human network, all working in synchrony, we can expand our understanding of what imagery tells us about the world. The idea that “many hands make light work” is the essence of crowdsourcing. Large, interconnected groups of humans, working in a coordinated effort on a shared goal can uncover insights and accomplish feats that would be impossible for a single individual.
The “ideal” is achieved by combining the efficiencies of technology with the intelligence of human analysis. One company, DigitalGlobe, achieves this ideal with Tomnod, an online crowdsourcing network of thousands of volunteers who all contribute to the analysis of satellite imagery.
Anyone who has ever used their smartphone to map their commute or looked up their house on Google Earth is familiar with basic GEOINT and satellite imagery interpretation. An intuitive web interface builds on this widespread familiarity and empowers almost anyone to contribute to imagery analysis campaigns. Tomnod divides massive image datasets into many small “tiles” and sends each tile to multiple individual users. Each member of the crowd is asked to identify relevant features in the tile: maybe locating homes damaged by a tornado, pinpointing cars in a parking lot or mapping religious sites in a city.
It’s impossible to guarantee that every individual in the crowd has the experience, expertise and energy to identify all the complex or subtle features in a satellite image. But crowdsourcing works by identifying consensus among multiple, independent people, each working on the same image. As users examine each pixel from each image tile and provide their interpretation of the imagery, the “wisdom of the crowd” begins to emerge. Each member of the crowd works in isolation but when multiple individuals agree about a particular location or feature, we can have confidence that something relevant has been detected.
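The tile-and-consensus workflow described above can be sketched in a few lines of code. This is only an illustration of the general technique, not Tomnod's actual implementation: the tag format, the matching radius and the agreement threshold are all assumptions chosen for the example.

```python
from collections import defaultdict

# Illustrative sketch of multi-user consensus tagging (not Tomnod's code).
# Each tag is a tuple (user_id, tile_id, x, y); tags in the same tile that
# fall within MATCH_RADIUS pixels of each other are treated as the same
# feature, and a feature counts as detected once enough users agree.
MATCH_RADIUS = 10   # pixels; illustrative value
MIN_AGREEMENT = 3   # require tags from at least 3 independent users

def consensus_detections(tags):
    """Cluster nearby tags per tile and keep clusters with enough users."""
    by_tile = defaultdict(list)
    for user, tile, x, y in tags:
        by_tile[tile].append((user, x, y))

    detections = []
    for tile, points in by_tile.items():
        clusters = []  # each cluster: [centroid_x, centroid_y, set_of_users]
        for user, x, y in points:
            for c in clusters:
                if abs(c[0] - x) <= MATCH_RADIUS and abs(c[1] - y) <= MATCH_RADIUS:
                    n = len(c[2])
                    c[0] = (c[0] * n + x) / (n + 1)  # update running centroid
                    c[1] = (c[1] * n + y) / (n + 1)
                    c[2].add(user)
                    break
            else:
                clusters.append([x, y, {user}])
        for cx, cy, users in clusters:
            if len(users) >= MIN_AGREEMENT:  # consensus reached
                detections.append((tile, round(cx), round(cy), len(users)))
    return detections
```

Because each cluster keeps a set of users rather than a count of clicks, a lone volunteer tagging the same spot repeatedly cannot manufacture consensus on their own.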
To date, the Tomnod crowd has been deployed on hundreds of satellite imagery exploitation campaigns, including:
- Situational awareness for Humanitarian Assistance & Disaster Response (Typhoon Haiyan case below)
- Search and rescue support for missing people, planes and ships (Malaysia Airlines Flight 370 case below)
- Building damage detection for post-disaster Insurance
- Vehicle and human activity detection for Defense & Homeland Security
- Wide-area surveying for Oil & Gas Exploration
- Mapping and monitoring of crucial Supply Chain Infrastructure
CrowdRank™ is a geospatial consensus algorithm that quantifies the degree of confidence in crowdsourced information. CrowdRank works by calculating the agreement between all the individual contributors in the crowd. Every click from every user on the Tomnod website is analyzed by CrowdRank to compute two scores:

1. The confidence in each location, based on consensus among the crowd, and
2. The reliability of each individual, based on their agreement with the rest of the crowd.

By assessing the confidence and reliability of the crowd's contributions, CrowdRank takes hundreds of thousands of unverified inputs and transforms them into qualified consensus detections. The result is a ranked list of important locations that decision makers, field responders or expert analysts can exploit to understand the information contained in the pixels. CrowdRank insight is delivered or integrated into existing GEOINT workflows via shapefile, KML, GeoJSON API, web feature service (WFS), spreadsheets or custom analytics reports.
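The interplay between the two scores can be illustrated with a small iterative sketch: location confidence is a reliability-weighted average of votes, and user reliability is that user's average agreement with the consensus, refined over a few rounds. This is a toy model in the spirit of the description above; the real CrowdRank algorithm is proprietary, and the update rules below are assumptions chosen for illustration.

```python
# Toy reliability-weighted consensus in the spirit of CrowdRank (the real
# algorithm is unpublished; these update rules are illustrative assumptions).
def crowd_scores(votes, n_rounds=10):
    """votes: {location: {user: 1 if tagged, 0 if viewed but not tagged}}.
    Returns (confidence per location, reliability per user), both in [0, 1]."""
    users = {u for v in votes.values() for u in v}
    reliability = {u: 1.0 for u in users}  # start by trusting everyone equally

    for _ in range(n_rounds):
        # 1. Confidence of each location: reliability-weighted mean vote.
        confidence = {}
        for loc, v in votes.items():
            total = sum(reliability[u] for u in v)
            confidence[loc] = sum(reliability[u] * x for u, x in v.items()) / total
        # 2. Reliability of each user: mean agreement with the consensus.
        for u in users:
            pairs = [(confidence[loc], v[u]) for loc, v in votes.items() if u in v]
            reliability[u] = sum(1 - abs(c - x) for c, x in pairs) / len(pairs)
    return confidence, reliability
```

Each round, users who disagree with the emerging consensus lose weight, which in turn sharpens the confidence scores of the locations they voted on.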
Big Data problems are often characterized by four challenges: Volume, Velocity, Variety and Veracity. Here's how crowdsourcing meets each of these challenges:

The Four V's of Big Data

| Challenge | How crowdsourcing meets it |
| --- | --- |
| Volume | Crowdsourcing mobilizes a team of hundreds, thousands, or even tens of thousands of volunteers who can cover vast areas of imagery many times over. Using crowdsourcing as a first pass over the imagery provides expert analysts and response teams with the clues they need to home in on the most important regions. |
| Veracity | Everyone makes mistakes. But when consensus emerges among tens or hundreds of individuals, all pinpointing the same feature in imagery, crowdsourcing yields true insight. Crowdsourcing gathers input from a crowd of independent humans and identifies the locations of maximum agreement. An algorithm such as CrowdRank™ computes the reliability of each person in the crowd and statistically determines the most relevant locations. |
| Variety | A machine algorithm can learn to recognize cars, but it will fail to detect planes, ships, or any of the infinite variety of other interesting objects on the earth. Crowdsourcing flexibly matches the needs of the analysis by tasking the crowd to identify a variety of features such as buildings, infrastructure, objects, and natural or man-made events. |
| Velocity | Exploiting satellite imagery with human analysts is an expert process that takes time. By applying hundreds or thousands of people to the problem, crowdsourcing increases the scale and speed of analysis immensely, while still retaining the accuracy of human insight. Analyzing 250,000 km2 of imagery, which might take a single analyst weeks, can be completed in a day using crowdsourcing. |
Case Study: Haiyan Typhoon
In November 2013, devastation hit the Philippines when Super Typhoon Haiyan made landfall, becoming the strongest landfalling typhoon ever recorded in terms of wind speed.
DigitalGlobe satellites immediately began to document the devastation, capturing over 100,000 km2 of imagery in the week following the storm. This imagery was loaded onto the Tomnod platform as it came in, and the call went out to the Tomnod crowd, asking for their help to map thousands of affected locations and rapidly assess the damage.
Within minutes of gaining access to the imagery, the crowd identified thousands of damaged buildings, destroyed homes and blocked roads. These crowdsourced results provided situational awareness to aid first responders and humanitarian groups, delivered rapid damage assessments to assist reconstruction efforts, and informed future disaster mitigation planning.
Before and after: one of the most damaged areas identified by crowdsourcing, near Tacloban city
Case Study: Malaysia Airlines Flight 370
Flight 370 left Kuala Lumpur at 12:41 a.m. on Saturday, March 8, 2014, with 239 passengers and crew en route to Beijing. An hour later, the transponder stopped working and the plane's location became a mystery that captured the world's imagination. Was it hijacked? Did it crash? Was there a malfunction, or was foul play involved? Most importantly: where was the plane or its wreckage? By Sunday, March 9, satellite images were captured over the Gulf of Thailand, close to the last known position of the plane. The call went out to the Tomnod crowd and, within minutes, thousands of people were identifying evidence of oil slicks on the water or possible signs of wreckage. As search boats and planes were mobilized, new information revealed that the plane had likely been airborne for many more hours after the transponder stopped. The search zone widened to include the Strait of Malacca to the west, the South China Sea to the east and the Indian Ocean to the south.
As more and more imagery poured in, more and more volunteers joined the Tomnod site to contribute their insight about any possible clues. At the time of writing, almost four million volunteers have viewed over 120,000 km2 of high-resolution satellite imagery. Every pixel has been viewed by at least 10 volunteers, and millions of possible clues have been tagged. CrowdRank collects these inputs and produces a daily ranking of the most likely search spots, which are then vetted by expert analysts and search teams.
Search area for Malaysia Airlines Flight 370. Green rectangles represent areas of satellite imagery collection while yellow circles indicate the top locations of thousands of crowdsourced detections. View this map at tomnod.com
Join the Crowd!
The phenomenal response to Tomnod crowdsourcing campaigns has engaged a new kind of analysis, where millions of volunteers use high-resolution imagery to search vast areas with incredible precision. Crowdsourcing illustrates a new direction for GEOINT, where individuals are both producers and consumers of data, experts and novices work side by side, and human insight is augmented by machine algorithms.
Crowdsource satellite images yourself by visiting Tomnod.com. You can view results from previous crowdsourcing campaigns and, with more pixels pouring in all the time, contribute to understanding new images of our ever-changing planet.
This article was originally published by Earth Imaging Journal, Jan/Feb 2014. Find out more at digitalglobe.com.