
The Pentagon Is Asking for Satellite Imagery Algorithms

ON A TRIP to Silicon Valley last year, Defense Secretary James Mattis openly envied tech companies’ superior use of artificial intelligence technology. To help close the gap, one Pentagon unit is now offering $100,000 in prizes to develop algorithms that can interpret high-resolution satellite images.

Oroville Dam in California in September 2016. Photo: Planet
The contest, called the xView Detection Challenge, starts next month. Entrants will use a trove of hand-annotated satellite images released by the Pentagon to train algorithms to identify details relevant to disaster relief or humanitarian missions. Objects of interest include damaged buildings, utility trucks, and fishing boats.

The project is being run by DIUx, an organization started by former Defense Secretary Ashton Carter to make it easier for his department to work with technology companies, particularly startups. The need to close the Pentagon’s AI gap with industry was a major motivation for the creation of DIUx, says Brendan McCord, head of machine learning at the organization.

DIUx’s challenge is a partnership with the National Geospatial-Intelligence Agency, which serves the US military and intelligence apparatus. The competition is modeled on the NGA’s work after events such as Hurricane Irma, which swept a trail of destruction and flooding from the Bahamas to Florida last year. Each day, a team of 10 analysts scrutinized hundreds of high-resolution satellite images of the disaster zone, grading damaged or destroyed buildings, and annotating details like impassable roads or bridges. The data was passed on to other agencies helping with the cleanup, including FEMA.

One goal of the challenge is to automate such work. McCord says algorithms developed for the xView challenge could help NGA after future disasters. If software could make a first pass at annotating new images for damaged buildings and the like, for example, analysts could be more productive.

Algorithms good at tagging items of humanitarian interest might also be re-trained to aid other work, such as NGA’s core mission of supporting US warfighters and intelligence analysts. The contest rules grant NGA license to both use and build on winning software. DIUx says winners may be offered the chance to do follow-on work on other defense missions. It is also offering a special prize of $5,000 for the best open source entry, to encourage sharing of ideas created for the contest. The satellite images for the contest are released under a public, noncommercial license for anyone to use.

Anyone hoping to win money in the challenge should start by checking their nationality. Contest rules disqualify entrants from several countries, including Cuba and Iran. For those whose papers are in order, the next step is to download a cache of satellite images covering 1,400 square kilometers from locations around the world at a resolution of 30 centimeters (1 foot). The images cover both visible and infrared light, and have been hand-annotated with a million examples of objects across 60 different categories. Entrants will use the labeled images to train their algorithms; their software will be tested against a collection of images not made public. The contest will be judged on accuracy, but DIUx also wants the software to be practical, says McCord.
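The train-then-test protocol described above is standard in machine-learning contests: models learn from the public labeled data and are scored only on examples they never saw. The toy sketch below illustrates that idea with a hypothetical nearest-neighbor classifier over made-up feature vectors; it is not the contest's actual pipeline, and every data point and label here is an invented stand-in.

```python
# Toy illustration of the contest's evaluation protocol: fit on public
# labeled examples, score on a held-out set the entrant never sees.
# All features, labels, and the classifier itself are hypothetical.

def nearest_label(features, train_set):
    """Classify a feature vector by the label of its nearest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train_set, key=lambda ex: sq_dist(ex[0], features))[1]

# Hypothetical (feature_vector, label) pairs standing in for annotated image chips.
train = [((0.9, 0.1), "fishing boat"), ((0.1, 0.9), "utility truck"),
         ((0.8, 0.2), "fishing boat"), ((0.2, 0.8), "utility truck")]

# Held-out examples, kept out of training, used only for scoring.
held_out = [((0.85, 0.15), "fishing boat"), ((0.15, 0.85), "utility truck")]

correct = sum(nearest_label(f, train) == lbl for f, lbl in held_out)
accuracy = correct / len(held_out)
print(f"held-out accuracy: {accuracy:.2f}")
```

The key design point, mirrored in the contest, is that the scoring set stays private: a model that merely memorizes the public annotations gains nothing on the hidden images.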

Software competing in the challenge must identify and distinguish objects such as trucks with tanker trailers and cement mixers. The objects were chosen to be relevant to humanitarian projects, and push the limits of existing image-processing algorithms.

Stefano Ermon, a professor at Stanford, says that the challenge and dataset could become an important contribution to both machine-learning research and humanitarian projects worldwide. His research group has developed machine-learning software that maps areas of poverty in African countries using clues such as roads and waterways.

The most mature image-recognition technology is focused on online consumer and product photos, thanks to the piles of readily available data, and strong commercial interest from internet companies such as Google. Much less work has been done on interpreting satellite imagery, and the data needed to do so is scant, says Ermon. “We don’t have a lot of labeled data, which is crucial,” he says.