BRGM is placing increasing emphasis on artificial intelligence and digital twin approaches, which are transforming geoscientific practices. © BRGM
Digital technology for geoscience
Outstanding result / The rise of AI in geoscience
“We’ve been using artificial intelligence for a long time, for example in signal processing to assess seismic risks,” explains Nicolas Gilardi. “What has changed is that we now have access to new, exceptionally efficient algorithms!” This opens up many new possibilities. BRGM, for example, is drawing on AI advances in image processing for the CERES regional project, which aims to map and characterise exposed features in the Centre-Val de Loire region from satellite images. Artificial intelligence also facilitates the processing of natural language, as illustrated by the ANR-funded RéSoCIO project, which seeks to automate the use of textual data from the X social network (formerly Twitter) during sudden natural disasters, such as flash floods and earthquakes. The aim is to provide relevant, consolidated information as quickly as possible to the departments responsible for managing the crisis.
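To give a flavour of the kind of text triage such a system performs, here is a minimal sketch using a TF-IDF baseline in scikit-learn. The posts, labels and model choice are invented for the example and do not reflect RéSoCIO’s actual pipeline.

```python
# Illustrative sketch only: flagging disaster-relevant social media
# posts with a simple text classifier. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Water is rising fast in the town centre, roads are cut off",
    "Felt a strong tremor a minute ago, objects fell off shelves",
    "Lovely sunny afternoon at the beach today",
    "Power is out across the neighbourhood after the flood",
    "Great match last night, what a final!",
    "Bridge closed, the river has burst its banks",
]
labels = [1, 1, 0, 1, 0, 1]  # 1 = disaster-relevant, 0 = not relevant

# TF-IDF features + logistic regression: a fast, common baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = ["Huge cracks in the walls after the earthquake"]
print(model.predict(new_post), model.predict_proba(new_post))
```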
This example highlights two immediate benefits of artificial intelligence: time savings and the ability to process large volumes of data. “This tool can help researchers keep abreast of new knowledge being produced and complement the databases they use for their work,” says Cécile Gracianne. AI thus helps to enhance knowledge. By automating operations previously carried out manually, such as the pre-processing of geophysical data collected by airborne or ground surveys, it also frees experts to concentrate on high added-value tasks. Artificial intelligence even makes certain operations feasible at scale, such as compiling data on boreholes in the subsurface database (BSS): “AI could extract the most useful information from the hundreds of thousands of records in the BSS, processing perhaps as much as 80% of these documents, and all the faster because we could launch several queries simultaneously,” hopes Nicolas Gilardi.
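As an illustration of the underlying idea, the hypothetical sketch below extracts structured fields from a free-text borehole record with simple patterns. The record text and field names are invented; real BSS records are richer (and in French), and an AI system would handle far messier input.

```python
# Hypothetical sketch: pulling structured fields out of a free-text
# borehole record. The record and field names are invented examples.
import re

record = (
    "Borehole BSS000ABCD - drilled 1987. Total depth: 45.5 m. "
    "Water level at 12.3 m. Lithology: sand and gravel over limestone."
)

patterns = {
    "total_depth_m": r"Total depth:\s*([\d.]+)\s*m",
    "water_level_m": r"Water level at\s*([\d.]+)\s*m",
    "lithology": r"Lithology:\s*([^.]+)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, record)
    if match:
        extracted[field] = match.group(1).strip()

print(extracted)
# {'total_depth_m': '45.5', 'water_level_m': '12.3',
#  'lithology': 'sand and gravel over limestone'}
```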
Exceeding limits
Researchers are therefore relying more and more on AI to overcome the limits they face: processing time, of course, but also the complexity of the phenomena involved. This is the case, for example, in the Junon regional project, which involves creating a digital twin of the Beauce plain in order to predict the future state of the aquifer. Here, artificial intelligence is used to predict time series, i.e. to estimate how variables change over time, in this case piezometric levels in the aquifer over a given period. These levels, however, are governed by complex processes that are difficult to model with conventional methods: rain, sunshine, geology, plants, etc. AI can automatically and rapidly establish links between the variations observed in these different data.
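A minimal sketch of this kind of time-series learning follows, assuming the task is to predict a piezometric level from its own recent history and rainfall. The data and toy dynamics are entirely synthetic; the Junon models are of course far more elaborate.

```python
# Sketch: forecast a piezometric level from lagged levels and rainfall.
# The "dynamics" below are a synthetic stand-in for a real aquifer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
rain = rng.gamma(2.0, 2.0, size=n)   # daily rainfall (mm), synthetic
level = np.zeros(n)                  # piezometric level (m)
for t in range(1, n):
    # toy dynamics: slow recession plus recharge from rainfall
    level[t] = 0.95 * level[t - 1] + 0.05 * rain[t] + rng.normal(0, 0.02)

# Build lagged features: levels and rainfall over the previous 7 days
lags = 7
X = np.array(
    [np.concatenate([level[t - lags:t], rain[t - lags:t]]) for t in range(lags, n)]
)
y = level[lags:]

split = int(0.8 * len(X))            # train on the past, test on the future
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
print("R^2 on held-out period:", model.score(X[split:], y[split:]))
```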
How? “The machine learning model uses statistics to establish correlations, while the numerical model solves the physical equations describing the phenomenon: these are two different approaches, which can complement each other,” explains Jérémy Rohmer. He cites the SIRENES project in Hauts-de-France and the ORACLES project in Nouvelle-Aquitaine, both of which focus on anticipating and managing the risk of coastal flooding. In the latter, AI uses data from numerical simulations to produce flood maps for the Arcachon basin much faster than a numerical model could.
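The sketch below illustrates this surrogate-modelling principle under invented assumptions: a stand-in “simulator” generates training pairs, and a statistical model learns to reproduce its output in a fraction of the time. Neither the inputs nor the grid correspond to the actual ORACLES set-up.

```python
# Sketch: learn a fast statistical emulator of an expensive numerical
# flood model. The "simulator" and its inputs are invented stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def expensive_simulation(offshore):
    # stand-in for a hydrodynamic run: offshore conditions -> water
    # levels on a coarse 10x10 grid (flattened to 100 values)
    surge, tide, waves = offshore
    base = 0.6 * surge + 0.3 * tide + 0.1 * waves
    grid = base * np.exp(-np.linspace(0, 2, 100))  # decays inland
    return grid + rng.normal(0, 0.01, size=100)

# Run the "simulator" a modest number of times to build training data
inputs = rng.uniform([0, 0, 0], [2, 3, 4], size=(300, 3))  # surge, tide, waves
outputs = np.array([expensive_simulation(x) for x in inputs])

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(inputs, outputs)  # multi-output: one whole map per input

# Predicting a new storm scenario now takes milliseconds, not hours
flood_map = surrogate.predict([[1.5, 2.0, 3.0]]).reshape(10, 10)
```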
Comparison of numerically simulated flood prediction maps (top) for three events with results from machine learning models (bottom), at the Gâvres site in Brittany (ANR RISCOPE project). © BRGM
Understanding AI
Finally, AI facilitates the spatial interpolation of data, providing information over an entire surface. Once again, very quickly – but not without uncertainty. This question is addressed by the HOUSES project (ANR), based on a number of case studies: mapping hydrocarbon concentrations in the soil of the city of Toulouse, groundwater contamination by trace elements in the Paris Basin, dune erosion in coastal areas and the monitoring of geophysical data. “We carry out a critical analysis of the data processing chain, including the interpolation done by the AI, in order to assess the uncertainty of the forecasts and assign them a confidence rating,” says Jérémy Rohmer.
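One standard way to interpolate spatially while quantifying uncertainty is Gaussian-process regression, closely related to the geostatistical technique of kriging. The sketch below uses synthetic point measurements and is not drawn from the HOUSES case studies; the per-pixel standard deviation it returns is the kind of quantity a confidence rating could build on.

```python
# Sketch: spatial interpolation with an uncertainty estimate, via
# Gaussian-process regression. Point measurements are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Sparse point measurements, e.g. concentrations at sampling locations
coords = rng.uniform(0, 10, size=(40, 2))                  # x, y in km
values = np.sin(coords[:, 0]) + 0.5 * np.cos(coords[:, 1])
values += rng.normal(0, 0.05, size=40)                     # measurement noise

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(coords, values)

# Interpolate onto a regular grid; return_std gives a per-pixel
# uncertainty alongside the interpolated mean
xx, yy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([xx.ravel(), yy.ravel()])
mean, std = gp.predict(grid, return_std=True)
```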
Like AIDA (ANR), which aims to develop the concept of ‘actionable explainable’ learning models, more and more projects are seeking to understand how artificial intelligence establishes its correlations. The subject is all the more sensitive because BRGM’s role in supporting public policy involves making recommendations that may have an impact on society. “It is essential to know how the AI arrived at its predictions in order to strengthen the validity of our results and the acceptability of the proposed solutions,” emphasises Michaël Chelle. “The nature of geoscientific data – heterogeneous, derived from different models, often limited in space and/or time, and uncertain because it concerns the subsurface, which is by nature inaccessible – hinders the performance of AI,” acknowledges Cécile Gracianne. However, “this complementary technology is already changing our approach and our scientific practices,” observes Michaël Chelle.
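One simple, widely used explainability diagnostic is permutation importance: shuffle one input at a time and measure how much the model’s predictions degrade. The sketch below uses synthetic data and illustrates only this general idea, not AIDA’s ‘actionable explainable’ models.

```python
# Sketch of permutation importance: how much does shuffling each input
# hurt the model? Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 3))          # e.g. rainfall, temperature, pumping
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, size=n)

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["rainfall", "temperature", "pumping"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")  # rainfall should dominate by construction
```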
The complementary contribution of data from social networks and measuring instruments made it possible to produce a more precise map of the earthquake’s effects: outlined in black, the zone where it was felt, based on the accounts gathered on Twitter; in red, orange, green and blue, the zones of seismic intensity established by a physical propagation model. © BRGM