
Analytics Bridging Data Divide

Until recently, the industrial world was split between data ‘have-nots’ who had to rely on lagged and patchy government data, and a small elite of data ‘haves’ made up of large industrial players and trading giants who sat on a great wealth of proprietary data, explains Antoine Halff, Co-founder and Chief Analyst at Kayrros.

By Avneep Dhingra

Associate Editor | Policy & Public Affairs

The Global Methane Assessment report released by the United Nations Environment Programme (UNEP) suggests that human-caused methane emissions can be reduced by up to 45% this decade. How can the integration of satellite imagery with other data help achieve this target?

Leveraging satellite imagery is crucial to achieving the UNEP methane abatement targets. Satellites give us the fullest picture of methane emissions available to date. The largest methane releases, the ones with the worst impact on the climate, do not occur continuously; they are inherently intermittent and can occur at any time, from virtually anywhere on the planet. The only way to track them is through monitoring satellites. It so happens that these monstrous methane dumps can also be easily avoided, so the potential payoff from this detection capacity is phenomenal.
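To illustrate the kind of detection this enables, here is a minimal sketch of flagging intermittent methane enhancements in a gridded satellite retrieval. It is illustrative only, not Kayrros's actual pipeline; the function name, thresholds, and data layout are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_plumes(xch4, sigma_threshold=3.0, min_pixels=4):
    """Flag anomalous methane enhancements in a 2-D column retrieval.

    xch4 : 2-D array of column methane (e.g. ppb), NaN where no retrieval.
    Returns a list of (pixel_count, peak_enhancement) per candidate plume.
    """
    background = np.nanmedian(xch4)            # regional background level
    noise = np.nanstd(xch4)                    # retrieval noise estimate
    enhancement = xch4 - background
    mask = np.nan_to_num(enhancement) > sigma_threshold * noise

    labels, n = ndimage.label(mask)            # cluster connected pixels
    plumes = []
    for i in range(1, n + 1):
        pixels = enhancement[labels == i]
        if pixels.size >= min_pixels:          # weed out single-pixel noise
            plumes.append((pixels.size, float(pixels.max())))
    return plumes
```

Because large releases are intermittent, a detector like this only pays off when it runs on every overpass, which is exactly what continuous satellite monitoring provides.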

How have Artificial Intelligence (AI) and Machine Learning (ML) transformed the data analytics market globally?

The market transparency made possible by AI and ML would have been unthinkable a few years ago. The amount of data that can be processed has increased prodigiously. At Kayrros, we monitor more than 200,000 assets and process 5 billion position data points per day. This could not be achieved without more than 100 Remote Sensing and data fusion algorithms powered by AI and ML. Until recently, even sophisticated market participants and industrial actors had to rely on reported data that were incredibly time-consuming and cumbersome to collect and disseminate, let alone process. These data were often of questionable accuracy. Almost by design, they tended to miss the most dynamic geographies and fastest-moving sectors. Today, our capacity to monitor and digitally replicate the physical world is virtually limitless. We can make visible what previously escaped human detection.

There is also an element of increased responsiveness. Change detection algorithms applied to satellite images, together with natural language processing, are used to detect and anticipate market disruptions with higher precision than ever before. Real-time monitoring often blurs the line between observation and short-term forecasting. Some physical developments have predictable impacts on industrial operations or commodity pricing. For example, real-time crude oil inventories are a good leading indicator of oil price changes, while monitoring maintenance operations at refineries gives reliable estimates of the restart dates of idled units.
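As a toy example of the image-differencing idea behind change detection (production systems use learned models, co-registration, and radiometric correction, all omitted here), consider this sketch:

```python
import numpy as np

def change_map(before, after, k=2.5):
    """Toy pixel-differencing change detector.

    `before` and `after` are two co-registered, radiometrically
    normalized image bands (2-D float arrays). Returns a boolean
    mask of pixels whose change exceeds k standard deviations.
    """
    diff = after.astype(float) - before.astype(float)
    return np.abs(diff - diff.mean()) > k * diff.std()

# Synthetic demo: a bright patch appears between two acquisitions.
rng = np.random.default_rng(1)
img1 = rng.normal(size=(64, 64))
img2 = img1.copy()
img2[20:30, 20:30] += 3.0          # simulate new construction
print(change_map(img1, img2).sum(), "changed pixels")
```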

It’s clear that the future of geospatial data lies in linkages such as the ones that we have established with government and multilateral agencies like ESA

How do insight algorithms fuse data from multiple Remote Sensing sources, and how has that impacted areas like energy, environment, and natural resource management?

Every single Remote Sensing hardware technology has pluses and minuses. There is always a trade-off between spatial resolution, temporal resolution, sensitivity, coverage, cost, etc. Insight algorithms overcome these constraints by seamlessly blending inputs from various sources, taking the best from each of them for optimal results. This is like oil refiners blending different grades of crude oil, each with its own chemical and physical properties, to optimize their yields and achieve the perfect specifications for their refined products. Every data source brings something to the table, and in many cases combining different sources brings more accuracy and context to the data. For example, when monitoring a construction site, high resolution images are necessary to identify each building. But in certain conditions, such as a cloudy day, the satellite's view may be obstructed, and that's where radar imagery, which sees through cloud cover, fills the gap. On top of that, input from geolocation data helps identify the various working crews and their purpose, and natural language processing of social media posts can validate the detection of various events, weed out false positives, or otherwise add context to satellite detections.
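To make the blending idea concrete, here is a minimal sketch of source-weighted fusion under the cloudy-day scenario above. The weights, source names, and `Observation`/`fuse` helpers are hypothetical illustrations, not Kayrros's actual algorithms:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # "optical", "sar", "geolocation", "social"
    confidence: float  # detector score in [0, 1]
    usable: bool       # e.g. optical is unusable under heavy cloud

# Illustrative per-source reliability weights (assumed values).
WEIGHTS = {"optical": 0.5, "sar": 0.3, "geolocation": 0.15, "social": 0.05}

def fuse(observations):
    """Weighted average over whichever sources produced a usable reading."""
    usable = [o for o in observations if o.usable]
    if not usable:
        return None
    total = sum(WEIGHTS[o.source] for o in usable)
    return sum(WEIGHTS[o.source] * o.confidence for o in usable) / total

# A cloudy day: optical drops out, radar and ancillary data carry the call.
print(fuse([Observation("optical", 0.2, usable=False),
            Observation("sar", 0.9, usable=True),
            Observation("social", 0.8, usable=True)]))
```

The point of the sketch is the graceful degradation: when one sensor fails, the remaining sources are renormalized rather than the detection being dropped.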

Kayrros has a history of working with government agencies, including ESA, on multiple projects. How is the public-private-partnership environment in Europe, and are there policies in place to encourage such engagements?

It’s clear that the future of geospatial data lies in linkages such as the ones that we have established with government and multilateral agencies like ESA. Startups like Kayrros bring to the world of data nimbleness, creativity, a wealth of academic and industry expertise and experience, and a responsiveness to private sector needs, qualities that are virtually impossible to replicate in the public sector. But our work benefits tremendously from public sector support and open-access raw data from public satellites. Indeed, our use of publicly available data from ESA, NASA and others is part of our value proposition and is the cornerstone of the verifiability and trustworthiness of our measurements.

Until recently the industrial world was essentially split between data ‘have-nots’ who had to rely on lagged and patchy government data, and a small elite of data ‘haves’ made up of large industrial players and trading giants who sat on a great wealth of proprietary data. Advanced data analytics are bridging this divide by extending the reach of government and public data. Not only are we making available to smaller players the kind of transparency that was long restricted to dominant actors, we are actually pushing the boundaries of transparency far beyond what any one actor could achieve on their own, while exposing the performance of individual actors to a level of public scrutiny and accountability that never existed before. The potential implications of this revolution in data transparency are not lost on anyone, least of all public institutions like ESA.

The term ‘satellite data’ is often associated with low-resolution optical imagery, which has limitations. Can you tell us how you use a mix of technologies to address this concern?

Not all satellites are low resolution; some provide images at resolutions as fine as 30 cm, which we regularly use. Very high resolution images come with a big price tag, which makes them prohibitively expensive to acquire for every location on a daily basis. On the other hand, most open-source data consist of lower resolution images. So there always tends to be a trade-off between spatial resolution and revisit time, or temporal resolution. Machine Learning helps address this through image processing innovation, combining the respective advantages of all types of hardware. By training our models, we can detect changes with high reliability on low resolution images, at a cost that is extremely competitive with any other technology.
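A rough sketch of how such training can work, assuming (hypothetically) that occasional very high resolution acquisitions supply change labels while frequent low resolution imagery supplies the features; the synthetic data and simple features below stand in for a real labeled archive:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def patch_features(before, after):
    """Cheap features from a pair of low-res patches: stats of the diff."""
    d = np.abs(after - before)
    return [d.mean(), d.max(), d.std()]

# Synthetic stand-in for an archive of low-res patch pairs, where the
# "changed / unchanged" labels would come from occasional 30 cm imagery.
X, y = [], []
for _ in range(500):
    before = rng.normal(size=(8, 8))
    changed = rng.random() < 0.5
    after = before + (rng.normal(0.8, 0.2, (8, 8)) if changed else
                      rng.normal(0.0, 0.1, (8, 8)))
    X.append(patch_features(before, after))
    y.append(int(changed))

model = LogisticRegression().fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```

Once trained this way, the model runs on every low-cost revisit, which is what makes the approach competitive on price despite the coarser pixels.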
