
The “Copernicus and Cultural Heritage workshop” will take place in Brussels on 24 April in the Breydel Auditorium, DG GROW from 09:00 to 17:30.

The workshop intends to assess the potential of Copernicus in support of Cultural Heritage preservation and management, and to provide inputs for further research and/or operational implementation.

The objectives of the workshop are:

  • To identify intermediate and end-users’ needs in the Cultural Heritage domain, and assess and characterise space-based applications in support of Cultural Heritage at EU and global level
  • To assess capabilities and outline requirements for Copernicus-based products/services in support of Cultural Heritage
  • To propose and assess implementation scenarios for a structured Copernicus-based approach for Cultural Heritage support

The workshop will aim at identifying the main user requirements for space-based applications associated with the preservation and management of cultural heritage assets in Europe and worldwide. Opportunities for standardisation shall be analysed taking into account what is already done in some European countries, with risk assessments associated with each cultural asset subject to environmental risks. The main focus, however, will be on the characterisation and mapping of Copernicus capabilities and existing solutions against the identified user needs, and the identification of potential evolutions to effectively support cultural heritage needs.

Registration and agenda can be found at the event website.


On 20 February 2017, ESA announced that it has adopted an Open Access policy for content such as still images, videos and selected sets of data.

For more than two decades, ESA has been sharing vast amounts of information, imagery and data with scientists, industry, media and the public at large via digital platforms such as the web and social media. ESA’s evolving information management policy increases these opportunities.

In particular, a new Open Access policy for ESA’s information and data will now facilitate broadest use and reuse of the material for the general public, media, the educational sector, partners and anybody else seeking to utilise and build upon it.

“This evolution in opening access to ESA’s images, information and knowledge is an important element of our goal to inform, innovate, interact and inspire in the Space 4.0 landscape,” said Jan Woerner, ESA Director General. “It logically follows the free and open data policies we have already established and accounts for the increasing interest of the general public, giving more insight to the taxpayers in the member states who fund the Agency.”

ESA, international organisations and Creative Commons

In conjunction with many other intergovernmental organisations (IGOs) that have recently adopted similar Open Access policies, such as the UN Educational, Scientific and Cultural Organisation, the World Intellectual Property Organisation and the World Health Organisation, ESA has decided to release more content under the Creative Commons IGO licensing scheme, with the Open Access-compliant Creative Commons Attribution-ShareAlike 3.0 IGO licence (in short, CC BY-SA 3.0 IGO) as the standard.

CC IGO licences were designed for use by intergovernmental organisations and allow, in the case of CC BY-SA IGO, for example, images to be widely used on Wikipedia and its media repository Wikimedia Commons.

Over the past two years, ESA has trialled use of the CC BY-SA IGO licences, releasing images from the popular Rosetta comet-chasing mission, sets of Mars images and other imagery under that licence.

Creative Commons is a global non-profit organisation that enables sharing and reuse of creativity and knowledge through the provision of free legal tools. It continues to be a major partner and facilitator with ESA and the other international organisations in using and further developing the licences.

Marco Trovatello, who follows the project for ESA, believes that “Free and open access to ESA’s knowledge, information and data are a cornerstone regarding our link with the larger public and user communities and will thus contribute to societal benefit.”

The ESA digital agenda

“The recognition of the value of information ESA holds on behalf of its member states and the appropriate management are key instruments of ESA’s Space 4.0 approach to reinforcing collaboration with industry, science and member states,” notes Gunther Kohlhammer, who as Chief Digital Officer oversees the ESA Digital Agenda for Space and ESA’s information management policy, the large projects that make ESA fit for a fully digital future.

Why subsets of content?

Many of ESA’s images, videos and other content are produced with partners, for example, in science and industry. In this first phase of Open Access at ESA, priority is given to material that is either fully owned by ESA or for which third-party rights have already been cleared.

What is Open Access?

Generally speaking, Open Access stands for free and unrestricted online access to research results and findings. Usage rights are often granted via Creative Commons licences. There is no single definition of Open Access; several statements and definitions exist, such as the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, the Budapest Open Access Initiative and the Bethesda Statement on Open Access Publishing.

Further information

A website pointing to sets of content already available under Open Access, a set of Frequently Asked Questions and further background information can be found at http://open.esa.int
More information on the ESA Digital Agenda for Space is available at http://www.esa.int/digital

A new processing tool has been developed to bundle information contained in large amounts of satellite data, paving the way for the wealth of Copernicus Sentinel satellite data to be more easily incorporated into online environment-monitoring services.

ESA’s online Urban Thematic Exploitation Platform (U-TEP) makes information from satellite data available for the non-expert user for urban environment monitoring. To do this, it processes hundreds of terabytes of data gathered by Earth-observing satellites, and translates them into easy-to-use products for scientists, urban planners and decision-makers.

U-TEP reached a milestone recently with the integration of some 450,000 scenes from the US Landsat-8 mission acquired between 2013 and 2015. The 500 TB was reduced to about 25 TB thanks to the TimeScan processor developed by the DLR German Aerospace Center. The resulting TimeScan Landsat 2015 product is already available on the U-TEP geobrowser. This novel tool that distils a single information product from a multitude of satellite scenes is a step towards more efficient access, processing and analysis of the massive amount of high-resolution image data provided by the latest satellites.

Cloud computing

The Copernicus Sentinel satellites, for example, are supplying an unprecedented wealth of measurements. By the end of 2017, the operational Sentinel-1, -2 and -3 satellites alone will continuously collect a daily volume of about 20 TB of open and free satellite imagery. In the past, users had to individually download and process data on their own computers. Now, mass data can be directly archived and processed at the point of reception for maximum speed and efficiency.

Within U-TEP, user algorithms are brought to the data where they run in cloud computing environments. This avoids the transfer of large amounts of input data and makes it unnecessary for the individual user to set up in-house computing services.

In the near future, the TimeScan approach will be used by the U-TEP team to process both Landsat optical imagery and Sentinel-1 radar data to automatically map human settlements with unprecedented precision: 10 m resolution. This will help scientists, urban planners, environmental agencies and development banks to better understand urbanisation, as well as respond to the challenges posed by growing cities, population increase, climate change and loss of biodiversity.

Data applications

The data processed by TimeScan will benefit not only urban monitoring, but also land use/land cover mapping, agriculture, forestry, the monitoring of polar and coastal regions, risk management and disaster prevention, and natural resource management.

The TimeScan processor is being used at the DLR, IT4Innovation and Brockmann Consult processing centres to create products based on Sentinel-1, Sentinel-2 and Landsat data.

U-TEP is one of six Thematic Exploitation Platforms developed by ESA to serve data user communities. These cloud-based platforms provide an online environment to access information, processing tools and computing resources for collaboration. TEPs allow knowledge to be extracted from large environmental datasets produced through Europe’s Copernicus programme and other Earth observation satellites.

The animation shows the TimeScan Landsat data derived for the Pearl River delta in China for 2002–03 and 2014–15. The illustrated TimeScan RGB images are composed of the temporal maximum of the built-up index in red, the temporal maximum of the vegetation index in green and the temporal mean of the water index in blue. A specific image analysis algorithm developed by DLR in ESA’s SAR4Urban project uses the TimeScan data to map the extent of the built-up area (highlighted in black in the animation) and to pinpoint the urban growth that took place in the region over the last 10 years. (Copyright: DLR)
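
The per-band compositing described in the caption can be sketched in a few lines of array code. This is an illustrative reconstruction, not DLR’s actual TimeScan processor, and the assumption that index values are normalised to [-1, 1] is ours:

```python
import numpy as np

def timescan_rgb(builtup, vegetation, water):
    """Compose a TimeScan-style RGB image from per-scene index stacks.

    Each input is a (time, height, width) array of index values in [-1, 1].
    R = temporal maximum of the built-up index,
    G = temporal maximum of the vegetation index,
    B = temporal mean of the water index.
    """
    r = builtup.max(axis=0)
    g = vegetation.max(axis=0)
    b = water.mean(axis=0)
    rgb = np.stack([r, g, b], axis=-1)                  # (height, width, 3)
    return ((rgb + 1.0) / 2.0 * 255).astype(np.uint8)   # [-1, 1] -> [0, 255]
```

Reducing each index over the time axis first is what collapses hundreds of scenes into a single product, which is the essence of the 500 TB to 25 TB compression described above.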

Source: ESA.

One of our planet’s few exposed lava lakes is changing, and artificial intelligence is helping NASA understand how.

On 21 January, a fissure opened at the top of Ethiopia’s Erta Ale volcano – one of the few in the world with an active lava lake in its caldera. Volcanologists sent out requests for NASA’s Earth Observing 1 (EO-1) spacecraft to image the eruption, which was large enough to begin reshaping the volcano’s summit.

As it turned out, that spacecraft was already busy collecting data on the lava lake. Alerted by a detection from another satellite, an artificial intelligence (A.I.) system had ordered it to look at the volcano. By the time scientists needed these images, they had already been processed and were on the ground.

Source: NASA

From 6 March 2017, SRAL Level-1A products will be available at Short Time Critical (STC) and Non-Time Critical (NTC) timeliness.

The SRAL Level-1A products, ‘SRAL Level 1A Unpacked L0 Complex echos’ in STC and ‘SRAL Level 1A Unpacked L0 Complex echos’ in NTC, consist of unpacked Level-0 complex echoes.

The Sentinel-3A Product Notice, Sentinel-3A Product Notice – STM Level-1 (PDF, 822 KB), describes the Level-1 current status, processing baseline, product quality and limitations, and product availability status.

The SRAL Level-1A products can be accessed via the following means:

  • The new pilot Copernicus Online Data Access (CODA) service, an online rolling archive with HTTP access. CODA can be accessed at https://coda.eumetsat.int. If you are a first-time user, click on ‘Sign Up’. You will then be taken to our Earth Observation Portal, where you will be able to request access to CODA. If you don’t have an Earth Observation Portal (EOP) account, you will need to create one first. A CODA account will be set up for you and you will receive an email with your CODA account credentials within the next working day.
  • The EUMETSAT Data Centre, accessible via our Earth Observation Portal (EOP).
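
Programmatic access to a rolling archive like CODA typically goes through an authenticated search URL. The sketch below builds a plausible catalogue query; the `producttype` and `timeliness` query keywords and the `SR_1_SRA_A_` product type string are assumptions borrowed from similar Copernicus data hubs, so check the CODA documentation for the exact syntax the service supports:

```python
from urllib.parse import urlencode

CODA_BASE = "https://coda.eumetsat.int"  # pilot CODA service

def coda_search_url(product_type="SR_1_SRA_A_", timeliness="STC", rows=10):
    """Build a full-text search URL against the CODA catalogue.

    The query field names follow the search syntax of comparable
    Copernicus hubs and may differ on CODA itself.
    """
    query = f"producttype:{product_type} AND timeliness:{timeliness}"
    return f"{CODA_BASE}/search?" + urlencode({"q": query, "rows": rows})
```

The returned URL would then be fetched with HTTP basic authentication using the CODA account credentials issued at sign-up.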

More information on SRAL products and accessing the data can be found on the Sentinel-3 Services Altimetry page.

For any questions, contact our EUMETSAT Service Helpdesk.

Source

On 23 February 2017, a new version of CODA was released by EUMETSAT.

The pilot CODA service is an online rolling archive with HTTP access and 14 days of data online. It provides access to Sentinel-3 Level 1 and Level 2 global data in near real-time (NRT), short time critical (STC) and non-time critical (NTC) latency modes.

Main enhancements

The new version of CODA will include the following main enhancements.

Quicklook

It will be possible to display a thumbnail image (Quicklook) showing a preview of the products. Quicklooks will be displayed in the list of search results in the main Map Tool (in order to have a quick overview of the search results) and also for every single product.

To access a single product’s Quicklook, click on ‘View Products Details’: a window will open displaying information about the selected file e.g. Footprint (Figure 1, orange arrow), Attributes (green arrow), Inspector (red arrow), and also a bigger version of the Quicklook (blue arrow).

Single file download

In the old version it was possible to download the complete product only as a compressed zip file; the new version also supports downloading the manifest and the single files contained in the product.

Click on ‘View Products Details’ and a window with information about the file will open. In the ‘Inspector’ section you will find the list of the single files contained in the zip and a button to download each of them.

Sort/Order results

With this new version of CODA it will be possible to sort the results of a search using two parameters: Ingestion Date and Sensing Date. It will also be possible to display the sorted list in descending or ascending order.

Read the CODA User Guide

Access CODA

CODA can be accessed at https://coda.eumetsat.int.

If you are a first time user, click on ‘Sign Up’. You will then be taken to EUMETSAT’s Earth Observation Portal where you will be able to request access to CODA.

If you don’t have an Earth Observation Portal (EOP) account, you will need to create one first. A CODA account will be set up and you will receive an email with your CODA account credentials within the next working day.

For more information, contact EUMETSAT User Service Helpdesk.

Source

NASA has officially launched a new resource to help the public search and download out-of-this-world images, videos and audio files by keyword and metadata searches from NASA.gov. The NASA Image and Video Library website consolidates imagery spread across more than 60 collections into one searchable location.

The NASA Image and Video Library allows users to search, discover and download a treasure trove of more than 140,000 NASA images, videos and audio files from across the agency’s many missions in aeronautics, astrophysics, Earth science, human spaceflight, and more. Users can now embed content in their own sites and choose from multiple resolutions to download. The website also displays the metadata associated with images.

Users can browse the agency’s most recently uploaded files, as well as discover historic and the most popularly searched images, audio files and videos. Other features include:

  • Automatically scales the interface for mobile phones and tablets
  • Displays the EXIF/camera data, including exposure, lens used and other information, when available from the original image
  • Allows easy public access to high-resolution files
  • Provides a downloadable caption file for every video
  • Offers an Application Programming Interface (API) that allows automation of imagery uploads for NASA and gives members of the public the ability to embed content in their own sites and applications. The public site runs on NASA’s cloud-native “infrastructure-as-code” technology, enabling on-demand use in the cloud.
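
The public API behind the site is served from images-api.nasa.gov. A minimal sketch of building a `/search` request and reading titles out of the response follows; the sample JSON is a trimmed illustration of the documented response shape, not a real reply:

```python
import json
from urllib.parse import urlencode

API = "https://images-api.nasa.gov"  # NASA Image and Video Library API

def search_url(query, media_type="image"):
    """Build a keyword search request for the library's /search endpoint."""
    return f"{API}/search?" + urlencode({"q": query, "media_type": media_type})

def titles(response_text):
    """Extract the asset titles from a /search JSON response."""
    doc = json.loads(response_text)
    return [item["data"][0]["title"] for item in doc["collection"]["items"]]

# Trimmed illustration of the response shape:
sample = '{"collection": {"items": [{"data": [{"nasa_id": "x", "title": "Apollo 11"}]}]}}'
```

Fetching the built URL with any HTTP client returns a paged JSON collection; each item also carries an `href` pointing at the downloadable asset manifest.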

The library is not comprehensive, but rather provides the best of what NASA makes publicly available from a single point of presence on the web. Additionally, this is a living website, where new and archival images, video and audio files will continually be added.

images.nasa.gov

Source

Remote GeoSystems, Inc. is pleased to announce the release and availability of the all new LineVision™ Google Earth Extension – commercial software for UAV, airborne & terrestrial mobile inspection and survey projects requiring georeferenced video playback, analysis, collaboration and reporting using Google Earth & other GIS applications.

Unlike its stand-alone predecessor, the new LineVision Google Earth Extension is a true application extension that gives users the full functionality of native Google Earth, including the Pro edition. Now anyone with a GPS-enabled video camera, drone or geospatial DVR that can geotag video in the proper format can immediately load their videos and photos into Google Earth along with compatible KML and other traditional geospatial data.

As the video plays, a position marker moves along an aerial or terrestrial GPS track positioned three-dimensionally in Google Earth, continuously indicating where the current frames were recorded. Users may also geospatially “navigate” a video recording by simply clicking a single point along an aerial or terrestrial GPS track. The video then automatically advances to that point in the recording so that users can visually interpret what was recorded at that specific place and time. If something of interest is detected in the video, users may also “snap” a still image from the video, which is geotagged and saved for future analysis.
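
The “click-on-map” navigation described above amounts to a nearest-point lookup on the geotagged track. A minimal sketch, assuming the track is available as (offset-seconds, latitude, longitude) tuples; LineVision’s internal format is not published:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def seek_offset(track, click_lat, click_lon):
    """Return the video offset (seconds) of the track point nearest the click.

    `track` is a list of (offset_seconds, lat, lon) tuples, one per
    geotagged GPS fix along the recording.
    """
    return min(track, key=lambda p: haversine_m(p[1], p[2], click_lat, click_lon))[0]
```

Seeking the player to the returned offset reproduces the behaviour described: the video advances to whatever was recorded at the clicked location.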

This easy-to-use software is one of the most “open” and versatile geotagged video analysis tools available. The LineVision Google Earth Extension is compatible with properly formatted georeferenced video files from a variety of consumer hand-held and action video cameras, drones and specialized mobile geospatial DVRs, including our own geoDVR™ geospatial FMV recorder.

The LineVision Google Earth Extension is much more than a multi-channel geospatial video tool. In addition to video, users can import oblique photos and KML data from survey and inspection projects. All these imported data types can be saved in a Remote GeoSystems geoProject file for data portability, reporting and future analysis in other versions of LineVision desktop, cloud and server applications.

Key Features Include:

  • Play videos from single and multi-camera data collection platforms
  • “Click-on-Map” video navigation
  • Set a custom geo-fence around the moving position marker
  • Load any Google Earth-compatible KML or shapefiles
  • Save video and photo work as geoProjects for simple project reporting, archive and search

Interested parties can learn more at: https://www.remotegeo.com/lv-google-earth or request a free 7-day trial by completing the online form at: https://www.remotegeo.com/contact-us/request-trial/

About Remote GeoSystems, Inc.

Remote GeoSystems is a geospatial software and hardware company offering turnkey solutions to easily record, map, report, archive & search “moving-track” geotagged videos, photos and other location-based project files. Our products include the industry-first, patent-pending geoDVR (geospatial digital video recorder) and LineVision suite of software.

Unlike traditional video recording systems, our rugged geoDVRs log GPS and permanently embed the video with this important location and time data. Post-mission, our LineVision video mapping and photo inspection reporting software provides users with simple but powerful tools for geographic video analysis, editing and project packaging while leveraging existing Geographic Information Systems (GIS).

These capabilities allow for more efficient and accurate data collection and the creation of reusable aerial and ground-based survey and inspection work-products across a broad range of industries including: Unmanned Vehicles, Aerospace & Aviation, Electric Utilities, Oil & Gas, Rail Transportation, Defense & Security, Engineering & Survey and Natural Resources Management.

Source

The new Radiant platform will support open formats and data standards, as well as common core and extensible metadata. All data, applications and services will be available under open licenses and freely accessible via shared infrastructure.

“We intend to change the paradigm and make earth imagery, data, and the software tools to analyze the imagery more easily accessible for the global development community. The international and global development community is increasingly pressured to find impactful solutions dealing with traditional societal issues and fast moving calamities,” says Radiant CEO Anne Hale Miglarese. “We want to provide the best open data that the geospatial sector has to offer whether for food security, property rights, global health or other societal issues.”

Radiant’s Chief Technology Officer Dan Lopez further adds, “These awards are critical for Radiant to deliver a shared infrastructure allowing us to improve collaboration and discovery of imagery data worldwide. A major goal is to reduce costs and empower decision-making . . . the opportunity for innovation is also enormous. Developers will be able to access the data with open APIs, which will foster open source collaboration and grow the development ecosystem working from the platform.”

Craig Mills, CEO of Vizzuality emphasizes the need for simplicity in the design and architecture of the Radiant platform. “We’re honored to contribute to the Radiant vision. For Radiant to be successful we are going to need the help of the community. As we design, build and share early versions we will learn how to make an application that saves people hours, days and weeks of time. For the people finding, analyzing and using earth observation imagery through Radiant, we hope to make the experience as beautiful as the imagery of earth itself,” says Mills.

Azavea CEO Robert Cheetham believes that Radiant’s platform will fill a vital gap in the imagery technology landscape. “Azavea is excited to be contributing its Raster Foundry technology as a foundation for the new Radiant platform. This will create a central location to share and analyze openly-licensed imagery data. The growth in the volume of imagery data has outstripped the ability for even experienced analysts to extract value from it. Azavea is pleased to work with Radiant to create software that will make it simpler and easier to access, use, and analyze imagery for everyone,” says Cheetham.

The minimum viable product (MVP) is expected to be ready in mid-July.

About Radiant

A registered non-profit, Radiant responds to continuous calls by the global development community for better access to open imagery, analytical tools and capacity building, all of which fuel greater analysis and insight into the various challenges facing societies across the globe. Radiant is funded by the Bill & Melinda Gates Foundation and the Omidyar Network.

To learn more about Radiant, visit www.radiant.earth