ITS4.4/AS4.1

EDI
Machine learning for Earth system modelling

There are many ways in which machine learning promises to provide insight into the Earth system, and this area of research is developing at a breathtaking pace.
Unsupervised, supervised, and reinforcement learning are increasingly used to address Earth-system-related challenges.
Machine learning could help extract information from the wealth of Earth system data, such as satellite observations, as well as improve model fidelity through novel parameterisations or speed-ups. This session invites submissions spanning modelling and observational approaches, towards providing an overview of the state of the art in the application of these novel methods.

Co-organized by CL5.2/ESSI1/NP4
Convener: Julien Brajard | Co-conveners: Peter Düben, Redouane Lguensat, Francine Schevenhoven (ECS), Maike Sonnewald (ECS)

Fri, 30 Apr, 11:00–12:30

Chairpersons: Julien Brajard, Francine Schevenhoven

11:00–11:05
5-minute convener introduction

11:05–11:15 | EGU21-16087 | ECS | solicited | Highlight
Christopher Kadow et al.

Historical temperature measurements are the basis of global climate datasets like HadCRUT4. This dataset contains many missing values, particularly for periods before the mid-twentieth century, although recent years are also incomplete. Here we demonstrate that artificial intelligence can skilfully fill these observational gaps when combined with numerical climate model data. We show that recently developed image inpainting techniques perform accurate monthly reconstructions via transfer learning using either 20CR (Twentieth-Century Reanalysis) or the CMIP5 (Coupled Model Intercomparison Project Phase 5) experiments. The resulting global annual mean temperature time series exhibit high Pearson correlation coefficients (≥0.9941) and low root mean squared errors (≤0.0547 °C) as compared with the original data. These techniques also provide advantages relative to state-of-the-art kriging interpolation and principal component analysis-based infilling. When applied to HadCRUT4, our method restores a missing spatial pattern of the documented El Niño from July 1877. With respect to the global mean temperature time series, a HadCRUT4 reconstruction by our method points to a cooler nineteenth century, a less apparent hiatus in the twenty-first century, an even warmer 2016 being the warmest year on record and a stronger global trend between 1850 and 2018 relative to previous estimates. We propose image inpainting as an approach to reconstruct missing climate information and thereby reduce uncertainties and biases in climate records.
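As a side note on the reported skill scores, the Pearson correlation coefficient and root mean squared error used to compare reconstructions against the original data can be computed as below. This is a minimal sketch with invented toy numbers, not the HadCRUT4 data:

```python
import numpy as np

def reconstruction_skill(truth, recon):
    """Pearson correlation and RMSE between an original and a
    reconstructed global-mean temperature series."""
    truth = np.asarray(truth, dtype=float)
    recon = np.asarray(recon, dtype=float)
    r = np.corrcoef(truth, recon)[0, 1]
    rmse = np.sqrt(np.mean((truth - recon) ** 2))
    return r, rmse

# Toy series standing in for annual global means (hypothetical values).
truth = np.array([0.10, 0.20, 0.15, 0.30, 0.25])
recon = np.array([0.11, 0.19, 0.16, 0.29, 0.26])
r, rmse = reconstruction_skill(truth, recon)
```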

From:

Kadow, C., Hall, D.M. & Ulbrich, U. Artificial intelligence reconstructs missing climate information. Nature Geoscience 13, 408–413 (2020). https://doi.org/10.1038/s41561-020-0582-5

The presentation will recount the journey of turning an image-inpainting AI into a climate research application.

How to cite: Kadow, C., Hall, D., and Ulbrich, U.: Artificial intelligence reconstructs missing climate information, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16087, https://doi.org/10.5194/egusphere-egu21-16087, 2021.

11:15–11:17 | EGU21-12253 | ECS
Carlos Alberto Gómez-Gonzalez et al.

Seasonal climate predictions can forecast the climate variability up to several months ahead and support a wide range of societal activities. The coarse spatial resolution of seasonal forecasts needs to be refined to the regional/local scale for specific applications. Statistical downscaling aims at learning empirical links between the large-scale and local-scale climate, i.e., a mapping from a low-resolution gridded variable to a higher-resolution grid.

Statistical downscaling of gridded climate variables is a task closely related to that of super-resolution in computer vision, and unsurprisingly, several deep learning-based approaches have been explored by the climate community in recent years. In this study, we downscale the SEAS5 ECMWF seasonal forecast of temperature over the Iberian Peninsula using deep convolutional networks in supervised and generative adversarial training frameworks. Additionally, we apply the traditional analog method for statistical downscaling, which assumes that similar atmospheric configurations (e.g., the predictors) lead to similar meteorological outcomes in a K-Nearest Neighbors fashion. 

The deep learning-based algorithms are trained on the UERRA gridded temperature dataset and several ERA5 reanalysis predictor variables. Finally, we evaluate the accuracy of our deep learning-based downscaling of SEAS5 temperature and compare it to the analog downscaling and to bicubic interpolation as the simplest baseline method.
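The analog method described above admits a compact sketch: find the k training days whose coarse-scale fields best match the query and average their fine-scale counterparts. The arrays and k value below are illustrative stand-ins, not the SEAS5/UERRA data:

```python
import numpy as np

def analog_downscale(coarse_train, fine_train, coarse_query, k=3):
    """K-nearest-neighbours analog downscaling: average the fine-scale
    fields of the k training samples closest in coarse-scale space."""
    d = np.linalg.norm(coarse_train - coarse_query, axis=1)
    idx = np.argsort(d)[:k]
    return fine_train[idx].mean(axis=0)

rng = np.random.default_rng(0)
coarse_train = rng.normal(size=(100, 4))     # 100 days, 4 coarse grid cells
fine_train = np.repeat(coarse_train, 4, 1)   # toy fine grid tied to the coarse one
query = coarse_train[0]
est = analog_downscale(coarse_train, fine_train, query, k=1)
```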

How to cite: Gómez-Gonzalez, C. A., Palma Garcia, L., Lledó, L., Marcos, R., Gonzalez-Reviriego, N., Carella, G., and Soret Miravet, A.: Deep learning-based downscaling of seasonal forecasts over the Iberian Peninsula, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12253, https://doi.org/10.5194/egusphere-egu21-12253, 2021.

11:17–11:19 | EGU21-2046
Hrvoje Kalinić et al.

In certain measurement endeavours the spatial resolution of the data is restricted, while in others the data have poor temporal resolution. A typical example of the former comes from geoscience, where measurement stations are fixed and scattered sparsely in space, which results in poor spatial resolution of the acquired data. Thus, we ask whether it is possible to use a portion of the data as a proxy to estimate the rest, using different machine learning techniques. In this study, four supervised machine learning methods are trained on wind data from the Adriatic Sea and used to reconstruct the missing data. The vector wind components at 10 m height are taken from the ERA5 reanalysis model over the period 1981 to 2017, sampled every 6 hours. Data taken from the northern part of the Adriatic Sea were used to estimate the wind in the southern part of the Adriatic. The machine learning models utilized for this task were linear regression, K-nearest neighbours, decision trees and a neural network. As a measure of reconstruction quality, the difference between the true and estimated values of the wind data in the southern Adriatic was used. The results show that all four models reconstruct the data a few hundred kilometres away with an average amplitude error below 1 m/s. Linear regression, K-nearest neighbours, decision trees and the neural network show average amplitude reconstruction errors of 0.52, 0.91, 0.76 and 0.73 m/s, and standard deviations of 1.00, 1.42, 1.23 and 1.17 m/s, respectively. This work has been supported by the Croatian Science Foundation under project UIP-2019-04-1737.
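For illustration, the best-performing model (linear regression from northern to southern stations) can be sketched as follows. The synthetic fields and coefficients are hypothetical stand-ins for the ERA5 wind components:

```python
import numpy as np

# Sketch: estimate a southern-Adriatic wind component from northern cells
# by ordinary least squares, the best-performing model in the abstract.
rng = np.random.default_rng(1)
north = rng.normal(size=(500, 5))                # 500 times, 5 northern cells
true_w = np.array([0.5, -0.2, 0.1, 0.3, -0.4])   # hypothetical linear link
south = north @ true_w + 0.1 * rng.normal(size=500)

X = np.column_stack([north, np.ones(500)])       # add an intercept column
coef, *_ = np.linalg.lstsq(X, south, rcond=None)
pred = X @ coef
amp_err = np.mean(np.abs(pred - south))          # average amplitude error
```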

How to cite: Kalinić, H., Bilokapić, Z., and Matić, F.: Oceanographic data reconstruction using machine learning techniques, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2046, https://doi.org/10.5194/egusphere-egu21-2046, 2021.

11:19–11:21 | EGU21-402 | ECS
Yifei Guan et al.

In large eddy simulations (LES), the subgrid-scale effects are modeled by physics-based or data-driven methods. This work develops a convolutional neural network (CNN) to model the subgrid-scale effects of a two-dimensional turbulent flow. The model is able to capture both the inter-scale forward energy transfer and backscatter in both a priori and a posteriori analyses. The LES-CNN model outperforms the physics-based eddy-viscosity models and the previously proposed local artificial neural network (ANN) models in both short-term prediction and long-term statistics. Transfer learning is implemented to generalize the method for turbulence modeling at higher Reynolds numbers. An encoder-decoder network architecture is proposed to generalize the model to a higher computational grid resolution.

How to cite: Guan, Y., Chattopadhyay, A., Subel, A., and Hassanzadeh, P.: Stable and accurate a posteriori LES of 2D turbulence with convolutional neural networks: Backscatter analysis and generalization via transfer learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-402, https://doi.org/10.5194/egusphere-egu21-402, 2021.

11:21–11:23 | EGU21-5507 | ECS
Jan Ackmann et al.

Semi-implicit grid-point models for the atmosphere and the ocean require linear solvers that are working efficiently on modern supercomputers. The huge advantage of the semi-implicit time-stepping approach is that it enables large model time-steps. This however comes at the cost of having to solve a computationally demanding linear problem each model time-step to obtain an update to the model’s pressure/fluid-thickness field. In this study, we investigate whether machine learning approaches can be used to increase the efficiency of the linear solver.

Our machine learning approach aims at replacing a key component of the linear solver—the preconditioner. In the preconditioner an approximate matrix inversion is performed whose quality largely defines the linear solver’s performance. Embedding the machine-learning method within the framework of a linear solver circumvents potential robustness issues that machine learning approaches are often criticized for, as the linear solver ensures that a sufficient, pre-set level of accuracy is reached. The approach does not require prior availability of a conventional preconditioner and is highly flexible regarding complexity and machine learning design choices.

Several machine learning methods of different complexity from simple linear regression to deep feed-forward neural networks are used to learn the optimal preconditioner for a shallow-water model with semi-implicit time-stepping. The shallow-water model is specifically designed to be conceptually similar to more complex atmosphere models. The machine-learning preconditioner is competitive with a conventional preconditioner and provides good results even if it is used outside of the dynamical range of the training dataset.
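To illustrate why embedding a learned preconditioner inside a linear solver is robust, here is a minimal preconditioned conjugate-gradient sketch in which the preconditioner only affects convergence speed, never the accuracy of the final solution. The inverse diagonal used below is a stand-in for the learned map; in the abstract this map is learned by methods ranging from linear regression to deep networks:

```python
import numpy as np

def pcg(A, b, Minv, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradients. Minv only changes how fast the
    iteration converges; the residual check still enforces the requested
    accuracy, which is why wrapping an ML preconditioner here is safe."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

n = 50
A = np.diag(np.linspace(1.0, 100.0, n))  # toy SPD stand-in for the pressure operator
b = np.ones(n)
# Stand-in for the learned preconditioner: here simply the inverse diagonal.
Minv = lambda r: r / np.diag(A)
x = pcg(A, b, Minv)
```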

How to cite: Ackmann, J., Düben, P., Palmer, T., and Smolarkiewicz, P.: Machine-Learned Preconditioners for Linear Solvers in Geophysical Fluid Flows, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-5507, https://doi.org/10.5194/egusphere-egu21-5507, 2021.

11:23–11:25 | EGU21-1754
Takahito Mitsui and Niklas Boers

The prediction of the onset date of the Indian Summer Monsoon (ISM) is crucial for effective agricultural planning and water resource management on the Indian subcontinent, with more than one billion inhabitants. Existing approaches focus on extended-range to subseasonal forecasts, i.e., provide skillful predictions of the ISM onset date at horizons of 10 to 60 days. Here we propose a method for ISM onset prediction and show that it has high forecast skill at longer, seasonal time scales. The method is based on recurrent neural networks and allows for ensemble forecasts to quantify uncertainties. Our approach outperforms state-of-the-art numerical weather prediction models at comparable or longer lead times. To our knowledge, there is no statistical forecasting approach at comparable, seasonal time scales. Our results suggest that predictability of the ISM onset emerges earlier than previously assumed.

How to cite: Mitsui, T. and Boers, N.: Seasonal prediction of Indian Summer Monsoon onset with machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1754, https://doi.org/10.5194/egusphere-egu21-1754, 2021.

11:25–11:27 | EGU21-2175 | ECS
Michael Steininger et al.

Climate models are an important tool for the assessment of prospective climate change effects but they suffer from systematic and representation errors, especially for precipitation. Model output statistics (MOS) reduce these errors by fitting the model output to observational data with machine learning. In this work, we explore the feasibility and potential of deep learning with convolutional neural networks (CNNs) for MOS. We propose the CNN architecture ConvMOS specifically designed for reducing errors in climate model outputs and apply it to the climate model REMO. Our results show a considerable reduction of errors and mostly improved performance compared to three commonly used MOS approaches.
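A minimal MOS sketch fits a simple linear correction from simulated to observed values; ConvMOS replaces this per-gridpoint regression with a CNN that exploits spatial context. The bias coefficients below are invented for illustration:

```python
import numpy as np

# Model output statistics (MOS) in its simplest form: learn a linear map
# from simulated to observed precipitation, then apply it as a correction.
rng = np.random.default_rng(2)
sim = rng.gamma(2.0, 1.0, size=1000)                 # simulated precipitation
obs = 0.7 * sim + 0.5 + 0.1 * rng.normal(size=1000)  # hypothetical biased truth
a, b = np.polyfit(sim, obs, 1)                       # fitted correction
corrected = a * sim + b
```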

How to cite: Steininger, M., Abel, D., Ziegler, K., Krause, A., Paeth, H., and Hotho, A.: Deep Learning for Climate Model Output Statistics, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2175, https://doi.org/10.5194/egusphere-egu21-2175, 2021.

11:27–11:29 | EGU21-4457 | ECS
Marlene Klockmann and Eduardo Zorita

We present a flexible non-linear framework of Gaussian Process Regression (GPR) for the reconstruction of past climate indexes such as the Atlantic Multidecadal Variability (AMV). These reconstructions are needed because the historical observation period is too short to provide a long-term perspective on climate variability. Climate indexes can be reconstructed from proxy data (e.g. tree rings) with the help of statistical models. Previous reconstructions of climate indexes mostly used some form of linear regression methods, which are known to underestimate the true amplitude of variability and perform poorly if noisy input data is used.

We implement the machine-learning method GPR for climate index reconstruction with the goal of preserving the amplitude of past climate variability. To test the framework in a controlled environment, we create pseudo-proxies from a coupled climate model simulation of the past 2000 years. In our test environment, the GPR strongly improves the reconstruction of the AMV with respect to a multi-linear Principal Component Regression. The amplitude of reconstructed variability is very close to the true variability even if non-climatic noise is added to the pseudo-proxies. In addition, the framework can directly take into account known proxy uncertainties and fit data-sets with a variable number of records in time. Thus, the GPR framework seems to be a highly suitable tool for robust and improved climate index reconstructions.
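The core GPR computation described above, posterior mean and variance under an RBF kernel, can be sketched as follows. The one-dimensional toy data stand in for proxy records, and the hyperparameters are illustrative:

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, noise=0.1):
    """Gaussian Process Regression with an RBF kernel: posterior mean and
    variance at test points Xs given noisy training pairs (X, y)."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise**2 * np.eye(len(X))   # noise term absorbs proxy error
    Ks = k(Xs, X)
    Kss = k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

X = np.linspace(0, 10, 50)
y = np.sin(X)                       # stand-in for a climate index
Xs = np.array([2.5, 5.0, 7.5])
mean, var = gpr_predict(X, y, Xs)
```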

How to cite: Klockmann, M. and Zorita, E.: Gaussian Process Regression – A tool for improved climate index reconstructions, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4457, https://doi.org/10.5194/egusphere-egu21-4457, 2021.

11:29–11:31 | EGU21-8262 | ECS
Maximilian Gelbrecht et al.

When predicting complex systems such as parts of the Earth system, one typically relies on differential equations which can often be incomplete, missing unknown influences or higher order effects. By augmenting the equations with artificial neural networks, we can compensate for these deficiencies. The resulting hybrid models are also known as universal differential equations. We show that this can be used to predict the dynamics of high-dimensional chaotic partial differential equations, such as the ones describing atmospheric dynamics, even when only short and incomplete training data are available. In a first step towards a hybrid atmospheric model, simplified, conceptual atmospheric models are used in synthetic examples where parts of the governing equations are replaced with artificial neural networks. The forecast horizon for these high-dimensional systems is typically much larger than the training dataset, showcasing the large potential of the approach.
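The structure of a universal differential equation, known physics plus a neural-network correction for the missing terms, can be sketched as below. The damping term and the untrained MLP weights are purely illustrative; the actual training of the network parameters against data is not shown:

```python
import numpy as np

def hybrid_rhs(x, nn_params):
    """Right-hand side of a universal differential equation: known physics
    plus a small neural-network term standing in for the missing dynamics."""
    known = -0.5 * x                      # known part: toy linear damping
    W1, b1, W2, b2 = nn_params
    h = np.tanh(W1 @ x + b1)              # tiny MLP as the learned correction
    correction = W2 @ h + b2
    return known + correction

rng = np.random.default_rng(5)
nn_params = (0.1 * rng.normal(size=(8, 2)), np.zeros(8),
             0.1 * rng.normal(size=(2, 8)), np.zeros(2))
# Forward-Euler integration of the hybrid model.
x = np.array([1.0, 0.0])
for _ in range(100):
    x = x + 0.01 * hybrid_rhs(x, nn_params)
```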

How to cite: Gelbrecht, M., Boers, N., and Kurths, J.: Neural Partial Differential Equations for Simple Climate Models , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8262, https://doi.org/10.5194/egusphere-egu21-8262, 2021.

11:31–11:33 | EGU21-8357
Bastien François et al.

Climate model outputs are commonly corrected using statistical univariate bias correction methods. Most of the time, those 1d-corrections do not modify the ranks of the time series to be corrected. This implies that biases in the spatial or inter-variable dependences of the simulated variables are not adjusted. Hence, over the last few years, some multivariate bias correction (MBC) methods have been developed to account for inter-variable structures, inter-site ones, or both. As proof-of-concept, we propose to adapt a computer vision technique used for Image-to-Image translation tasks (CycleGAN) for the adjustment of spatial dependence structures of climate model projections. The proposed algorithm, named MBC-CycleGAN, aims to transfer simulated maps (seen as images) with inappropriate spatial dependence structure from climate model outputs to more realistic images with spatial properties similar to the observed ones. For evaluation purposes, the method is applied to adjust maps of temperature and precipitation from climate simulations through two cross-validation approaches. The first one is designed to assess two different post-processing schemes (Perfect Prognosis and Model Output Statistics). The second one assesses the influence of non-stationary properties of climate simulations on the performance of MBC-CycleGAN to adjust spatial dependences. Results are compared against a popular univariate bias correction method, a "quantile-mapping" method, which ignores inter-site dependencies in the correction procedure, and two state-of-the-art multivariate bias correction algorithms aiming to adjust spatial correlation structure. In comparison with these alternatives, the MBC-CycleGAN algorithm reasonably corrects spatial correlations of climate simulations for both temperature and precipitation, encouraging further research on the improvement of this approach for multivariate bias correction of climate model projections.
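The univariate quantile-mapping baseline mentioned above is easy to sketch: each new model value is mapped to the observed value at the same quantile of the historical distributions. The Gaussian samples stand in for model and observed climatologies:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: look up each new model value's quantile
    in the model climatology, then return the observed value there."""
    q = np.searchsorted(np.sort(model_hist), model_new) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(3)
model_hist = rng.normal(1.0, 2.0, 5000)   # biased, over-dispersed model
obs_hist = rng.normal(0.0, 1.0, 5000)     # observed climatology
corrected = quantile_map(model_hist, obs_hist, model_hist)
```

Note that this correction is applied pointwise and rank-preserving, which is exactly why it leaves spatial dependence structures untouched, the limitation MBC-CycleGAN targets.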

How to cite: François, B., Thao, S., and Vrac, M.: Adjusting spatial dependence of climate model outputs with Cycle-Consistent Adversarial Networks , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8357, https://doi.org/10.5194/egusphere-egu21-8357, 2021.

11:33–11:35 | EGU21-9141 | ECS
Salva Rühling Cachay et al.

Deep learning-based models have been recently shown to be competitive with, or even outperform, state-of-the-art long range forecasting models, such as for projecting the El Niño-Southern Oscillation (ENSO). However, current deep learning models are based on convolutional neural networks which are difficult to interpret and can fail to model large-scale dependencies, such as teleconnections, that are particularly important for long range projections. Hence, we propose to explicitly model large-scale dependencies with Graph Neural Networks (GNN) to enhance explainability and improve the predictive skill of long lead time forecasts.

In preliminary experiments focusing on ENSO, our GNN model outperforms previous state-of-the-art machine learning based systems for forecasts up to 6 months ahead. The explicit modeling of information flow via edges makes our model more explainable, and it is indeed shown to learn a sensible graph structure from scratch that correlates with the ENSO anomaly pattern for a given number of lead months.

How to cite: Rühling Cachay, S., Erickson, E., Fender C. Bucker, A., Pokropek, E., Potosnak, W., Osei, S., and Lütjens, B.: Graph Deep Learning for Long Range Forecasting, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9141, https://doi.org/10.5194/egusphere-egu21-9141, 2021.

11:35–11:37 | EGU21-11905 | ECS
Matt Amos et al.

To fuse together output from ensembles of climate models with observations, we have developed a custom Bayesian neural network that produces more accurate and uncertainty aware projections.

Ensembles of physical models are typically used to increase the accuracy of projections and to quantify projection uncertainties. However, few methods for combining ensemble output consider differing model performance or similarity between models. Current weighting strategies that do typically assume model weights are invariant in time and space, though this is rarely the case in models.

Our Bayesian neural network infers spatiotemporally varying model weights, bias and uncertainty to capture that some regions or seasons are better simulated in certain models. The Bayesian neural network learns how to optimally combine multiple models in order to replicate observations and can also be used to infill gaps in historic observations. In regions of sparse observations, it infers from both the surrounding data and similar physical conditions. Although we are using a typically black box technique, the attribution of model weights and bias maintains interpretability.
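The key idea of spatially varying model weights can be sketched without the Bayesian machinery: combine ensemble members with per-gridpoint softmax weights. The logits below are hand-set stand-ins for what the network would infer from data:

```python
import numpy as np

def combine(models, logits):
    """Blend ensemble members with per-gridpoint softmax weights,
    a stand-in for the weights the Bayesian neural network infers."""
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * models).sum(axis=0), w

rng = np.random.default_rng(4)
truth = rng.normal(size=(10, 20))                       # lat x lon field
# Two members: one noisy (sigma=1.0), one accurate (sigma=0.1).
models = truth + rng.normal(0.0, [[[1.0]], [[0.1]]], size=(2, 10, 20))
# Hypothetical learned logits favouring the more accurate member everywhere:
logits = np.array([[[0.0]], [[3.0]]]) * np.ones((2, 10, 20))
blend, w = combine(models, logits)
```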

We demonstrate the utility of the Bayesian neural network by using it to combine multiple chemistry climate models to produce continuous historic predictions of the total ozone column (1980-2010) and projections of total ozone column for the 21st century, both with principled uncertainty estimates. Rigorous validation shows that our Bayesian neural network predictions outperform standard methods of assimilating models.

How to cite: Amos, M., Sengupta, U., Hosking, S., and Young, P.: Fusing model ensembles and observations together with Bayesian neural networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11905, https://doi.org/10.5194/egusphere-egu21-11905, 2021.

11:37–11:39 | EGU21-12541
David Hall

This talk gives an overview of cutting-edge artificial intelligence applications and techniques for the earth-system sciences. We survey the most important recent contributions in areas including extreme weather, physics emulation, nowcasting, medium-range forecasting, uncertainty quantification, bias-correction, generative adversarial networks, data in-painting, network-HPC coupling, physics-informed neural nets, and geoengineering, amongst others. Then, we describe recent AI breakthroughs that have the potential to be of greatest benefit to the geosciences. We also discuss major open challenges in AI for science and their potential solutions. This talk is a living document, in that it is updated frequently, in order to accurately reflect this rapidly changing field.

How to cite: Hall, D.: The Frontiers of Deep Learning for Earth System Modelling , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12541, https://doi.org/10.5194/egusphere-egu21-12541, 2021.

11:39–11:41 | EGU21-1158
Stephen Haddad et al.

Historical ocean temperature measurements are important in studying climate change due to the high proportion of heat absorbed by the ocean. These measurements come from a variety of sources; Expendable Bathythermographs (XBTs) are a particularly important one. Their measurements need bias corrections that depend on the type of XBT used, but poor metadata collection practices mean the type is often missing, increasing the measurement uncertainty and thus the uncertainty of the downstream dataset.

This talk will describe efforts to fill in missing instrument type metadata using machine learning techniques so better bias corrections can be applied and the uncertainty in ocean temperature datasets reduced. I will describe the challenge arising from the nature of the dataset in applying standard ML techniques to the problem. I will also describe how we have used this project to explore the benefits of different platforms for ML and what open reproducible science looks like for Machine Learning projects.

How to cite: Haddad, S., Killick, R., Palmer, M., and Webb, M.: Using Machine Learning to reduce uncertainty in historical ocean temperature measurements, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1158, https://doi.org/10.5194/egusphere-egu21-1158, 2021.

11:41–11:43 | EGU21-10045 | ECS
Giulia Carella et al.

Although the air-sea gas transfer velocity k is usually parameterized with wind speed, the so-called small-eddy model suggests a relationship between k and the ocean surface turbulence in the form of the dissipation rate of turbulent kinetic energy ε. However, available observations of ε from oceanographic cruises are spatially and temporally sparse. In this study, we use a Gaussian Process (GP) model to investigate the relationship between the observed profiles of ε and co-located atmospheric and oceanic fields from the ERA5 reanalysis. The model is then used to construct monthly maps of ε and to estimate the climatological air-sea gas transfer velocity from existing parametrizations. As an independent validation, the same model is also trained on EC-Earth3 outputs with the objective of reproducing the temporal and spatial patterns of turbulence kinetic energy as simulated by EC-Earth3. The ability to predict ε is instrumental in achieving better estimates of air-sea gas exchange that take into account multiple sources of upper ocean turbulence beyond wind stress.

How to cite: Carella, G., Esters, L., Galí Tàpias, M., Gomez Gonzalez, C., and Bernardello, R.: Estimating the air-sea gas transfer velocity from a statistical reconstruction of ocean turbulence observations, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10045, https://doi.org/10.5194/egusphere-egu21-10045, 2021.

11:43–11:45 | EGU21-11356 | ECS
Anna Denvil-Sommer et al.

Much effort has been put into the representation of surface ecosystem processes in global carbon cycle models, in particular through the grouping of organisms into Plankton Functional Types (PFTs), which have specific influences on the carbon cycle. In contrast, the transfer of ecosystem dynamics into carbon export to the deep ocean has received much less attention, so that changes in the representation of the PFTs do not necessarily translate into changes in sinking of particulate matter. Models constrain the air-sea CO2 flux by drawing down carbon into the ocean interior. This export flux is five times as large as the CO2 emitted to the atmosphere by human activities. When carbon is transported from the surface to the intermediate and deep ocean, more CO2 can be absorbed at the surface. Therefore, even small variability in sinking organic carbon fluxes can have a large impact on air-sea CO2 fluxes, and on the amount of CO2 emissions that remain in the atmosphere.

In this work we focus on the representation of organic matter sinking in global biogeochemical models, using the PlankTOM model in its latest version representing 12 PFTs. We develop and test a methodology that will enable the systematic use of new observations to constrain sinking processes in the model. The approach is based on a Neural Network (NN) and is applied to the PlankTOM model output to test its ability to reconstruct small and large particulate organic carbon with a limited number of observations. We test the information content of geographical variables (location, depth, time of year), physical conditions (temperature, mixing depth, nutrients), and ecosystem information (Chl-a, PFTs). These predictors are used in the NN to test their influence on the model-generation of organic particles and the robustness of the results. We show preliminary results using the NN approach with real plankton and particle size distribution observations from the Underwater Vision Profiler (UVP) and plankton diversity data from Tara Oceans expeditions, and discuss limitations.

How to cite: Denvil-Sommer, A., Le Quéré, C., Buitenhuis, E., Guidi, L., and Irisson, J.-O.: Using new observations and Machine Learning to improve organic sinking processes in the PlankTOM global ocean biogeochemical model , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11356, https://doi.org/10.5194/egusphere-egu21-11356, 2021.

11:45–11:47 | EGU21-11974 | ECS
Joana Roussillon et al.

Phytoplankton plays a key role in the carbon cycle and constitutes the basis of the marine food web. Its seasonal and interannual cycles are relatively well-known on a global scale thanks to continuous ocean color satellite observations acquired since 1997. The satellite-derived chlorophyll-a concentrations (Chl-a, a proxy of phytoplankton biomass) time series are still too short to investigate phytoplankton biomass low-frequency variability. However, it is a vital prerequisite before being able to confidently detect anthropogenic signals, as natural decadal variability can accentuate, weaken or even mask out any anthropogenic trends. Machine learning appears as a promising tool to reconstruct Chl-a past signals (including periods before satellite Chl-a era), and deep learning models seem particularly relevant to explore the spatial and/or temporal structure of the data.

Here, different neural network architectures have been tested on an 18-year satellite and reanalysis dataset to infer Chl-a from physical predictors. Their ability to reconstruct spatial and temporal (seasonal and interannual) variations on a global scale will be presented. Convolutional neural networks (CNN) better capture Chl-a spatial fields than models that do not account for the structure of the data, such as multi-layer perceptrons (MLPs). We also assess how the selection of the training period may affect the reconstruction performance. This is a necessary step before being able to reconstruct any past Chl-a multi-decadal time series with confidence, which is the ultimate goal of this work.

Our study also addresses the carbon footprint associated with the use of GPU resources when training the CNN. GPUs are energy intensive, and their use in geosciences is expected to grow fast. Systematically reporting the computational energy costs in the geoscience community studies would provide an overview of models energy-efficiency on different kinds of datasets and may encourage actions to reduce consumption when possible.

How to cite: Roussillon, J., Fablet, R., Drumetz, L., Gorgues, T., and Martinez, E.: Deep learning approach to reconstruct satellite ocean color time series in the global ocean, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11974, https://doi.org/10.5194/egusphere-egu21-11974, 2021.

11:47–11:49 | EGU21-15128 | ECS
Said Ouala et al.

Spatio-temporal interpolation applications are important in the context of ocean surface modeling. Current state-of-the-art techniques typically rely either on optimal interpolation or on model-based approaches which explicitly exploit a dynamical model. While the optimal interpolation suffers from smoothing issues making it unreliable in retrieving fine-scale variability, the selection and parametrization of a dynamical model, when considering model-based data assimilation strategies, remains a complex issue since several trade-offs between the model's complexity and its applicability in sea surface data assimilation need to be carefully addressed. For these reasons, deriving new data assimilation architectures that can perfectly exploit the observations and the current advances in signal processing, modeling and artificial intelligence is crucial.

In this work, we explore new advances in data-driven data assimilation to exploit the classical Kalman filter in the interpolation of spatio-temporal fields. The proposed algorithm is written in an end-to-end differentiable setting in order to allow for the learning of the linear dynamical model from a data assimilation cost. Furthermore, the linear model is formulated on a space of observables, rather than the space of observations, which allows for perfect replication of non-linear dynamics when considering periodic and quasi-periodic limit sets and providing a decent (short-term) forecast of chaotic ones. One of the main advantages of the proposed architecture is its simplicity since it utilises a linear representation coupled with a Kalman filter. Interestingly, our experiments show that exploiting such a linear representation leads to better data assimilation when compared to non-linear filtering techniques, on numerous applications, including the sea level anomaly reconstruction from satellite remote sensing observations.
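The assimilation backbone described above, a linear Kalman filter applied in a space where the dynamics are (approximately) linear, can be sketched as follows. The rotation matrix is a toy stand-in for the learned linear (Koopman) operator, and the observation setup is invented for illustration:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of the linear Kalman filter: propagate the
    state estimate with the linear model A, then assimilate observation z
    through the observation operator H."""
    x = A @ x                      # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)        # update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy oscillator standing in for the learned linear dynamics.
th = 0.1
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
H = np.array([[1.0, 0.0]])                 # observe the first component only
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
truth = np.array([1.0, 0.0])
for _ in range(100):
    truth = A @ truth
    z = H @ truth
    x, P = kalman_step(x, P, z, A, H, Q, R)
```

Even though only one component is observed, the filter recovers the full state through the dynamics, which is the appeal of assimilating in a well-chosen linear representation.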

How to cite: Ouala, S., Fablet, R., Pascual, A. P., Chapron, B., Collard, F., and Gaultier, L.: Reconstructing Sea Surface Dynamics Using a Linear Koopman Kalman Filter, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15128, https://doi.org/10.5194/egusphere-egu21-15128, 2021.

11:49–11:51
|
EGU21-16181
|
ECS
|
James Harding

Earth Observation (EO) satellites are drawing considerable attention in water resource management, given their potential to provide unprecedented information on the condition of aquatic ecosystems. Despite ocean colour's long history, water quality parameter retrieval from shallow and inland waters remains a complex undertaking. Consistent, cross-mission retrievals of the primary optical parameters using state-of-the-art algorithms are limited by the added optical complexity of these waters. Less work has acknowledged their non-optical or weakly optical counterparts. These can be more informative than the optically active parameters, though their covariance is regionally specific. Here, we introduce a multi-input, multi-output Mixture Density Network (MDN) that largely outperforms existing algorithms when applied across different bio-optical regimes in shallow and inland water bodies. The model is trained and validated using a sizeable historical database in excess of 1,000,000 samples across 38 optical and non-optical parameters, spanning 20 years and 500 surface waters in Scotland. The single network learns to concurrently predict Chlorophyll-a, Colour, Turbidity, pH, Calcium, Total Phosphorus, Total Organic Carbon, Temperature, Dissolved Oxygen and Suspended Solids from real Landsat 7, Landsat 8, and Sentinel 2 spectra. The MDN is found to fully preserve the covariances of the optical and non-optical parameters, while known one-to-many mappings within the non-optical parameters are retained. Initial performance evaluations suggest significant improvements in Chl-a retrievals over existing state-of-the-art algorithms. MDNs characteristically provide a means of quantifying the noise variance around a prediction for a given input, here pertaining to real data under a wide range of atmospheric conditions. We find this to be informative, for example, in detecting outlier pixels such as clouds, and it may similarly be used to guide or inform future work in academic or industrial contexts.
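The core of an MDN is its mixture negative log-likelihood loss, which is also what yields the per-prediction variance estimate. A minimal numpy sketch of that loss (illustrative only; the study's network, parameters and data are not reproduced here):

```python
import numpy as np

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of targets y under a Gaussian mixture.

    pi, mu, sigma: (n_samples, n_components) mixture parameters, as an
    MDN output head would emit them; y: (n_samples,) targets.
    """
    y = y[:, None]
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log(np.sum(pi * comp, axis=1) + 1e-12).mean()

# Toy check: a mixture centred on the targets scores a lower loss
# than one centred far away.
rng = np.random.default_rng(1)
y = rng.normal(5.0, 0.5, size=100)
pi = np.full((100, 2), 0.5)
good = mdn_nll(pi, np.full((100, 2), 5.0), np.full((100, 2), 0.5), y)
bad = mdn_nll(pi, np.full((100, 2), 0.0), np.full((100, 2), 0.5), y)
```

Minimising this loss lets a single network express multi-modal, heteroscedastic predictive distributions, which is how one-to-many mappings between parameters can be retained.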

How to cite: Harding, J.: Unified, high resolution water quality retrievals from Earth Observation satellites, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16181, https://doi.org/10.5194/egusphere-egu21-16181, 2021.

11:51–12:30
Meet the authors in their breakout text chats

Fri, 30 Apr, 13:30–15:00

Chairpersons: Peter Düben, Julien Brajard

13:30–13:40
|
EGU21-7596
|
ECS
|
solicited
|
Highlight
|
Clara Betancourt et al.

Through the availability of multi-year ground-based ozone observations on a global scale, substantial geospatial metadata, and high-performance computing capacities, it is now possible to use machine learning for a global data-driven ozone assessment. In this presentation, we will show a novel, completely data-driven approach to map tropospheric ozone globally.

Our goal is to interpolate ozone metrics and aggregated statistics from the database of the Tropospheric Ozone Assessment Report (TOAR) onto a global 0.1° x 0.1° resolution grid. It is challenging to interpolate ozone, a toxic greenhouse gas, because its formation depends on many interconnected environmental factors on small scales. We conduct the interpolation with various machine learning methods trained on aggregated hourly ozone data from five years at more than 5500 locations worldwide. We use several geospatial datasets as training inputs to provide proxies for the environmental factors controlling ozone formation, such as precursor emissions and climate. The resulting maps contain different ozone metrics, i.e. statistical aggregations which are widely used to assess air pollution impacts on health, vegetation, and climate.

The key aspects of this contribution are twofold: First, we apply explainable machine learning methods to the data-driven ozone assessment. Second, we discuss dominant uncertainties relevant to the ozone mapping and quantify their impact whenever possible. Our methods include a thorough a priori uncertainty estimation of the various data and methods, assessment of scientific consistency, finding critical model parameters, using ensemble methods, and performing error modeling.

Our work aims to increase the reliability and integrity of the derived ozone maps through the provision of scientific robustness to a data-centric machine learning task. This study hence represents a blueprint for how to formulate an environmental machine learning task scientifically, gather the necessary data, and develop a data-driven workflow that focuses on optimizing transparency and applicability of its product to maximize its scientific knowledge return.

How to cite: Betancourt, C., Stadtler, S., Stomberg, T., Edrich, A.-K., Patnala, A., Roscher, R., Kowalski, J., and Schultz, M. G.: Global fine resolution mapping of ozone metrics through explainable machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7596, https://doi.org/10.5194/egusphere-egu21-7596, 2021.

13:40–13:42
|
EGU21-678
|
ECS
|
Philipp Hess and Niklas Boers

The accurate prediction of precipitation, in particular of extremes, remains a challenge for numerical weather prediction (NWP) models. A large source of error is the subgrid-scale parameterization of processes that play a crucial role in the complex, multi-scale dynamics of precipitation but are not explicitly resolved in the model formulation. Recent progress in purely data-driven deep learning for regional precipitation nowcasting [1] and global medium-range forecasting [2] has shown results competitive with traditional NWP models.
Here we follow a hybrid approach, in which explicitly resolved atmospheric variables are forecast in time by a general circulation model (GCM) ensemble and then mapped to precipitation using a deep convolutional autoencoder. A frequency-based weighting of the loss function is introduced to improve the learning with regard to extreme values.
Our method is validated against a state-of-the-art GCM ensemble using three-hourly, high-resolution data. The results show an improved representation of extreme precipitation frequencies, as well as comparable error and correlation statistics.
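The frequency-based weighting of the loss can be sketched as follows (a numpy illustration under assumed bin thresholds, not the authors' exact scheme): each sample is weighted by the inverse frequency of the precipitation bin its target falls into, so rare heavy events contribute more to the loss.

```python
import numpy as np

def weighted_mse(pred, target, bin_edges, bin_weights):
    """MSE in which each sample is weighted according to the precipitation
    bin its target falls into (weights normalised to mean 1)."""
    w = bin_weights[np.digitize(target, bin_edges)]
    w = w / w.mean()
    return np.mean(w * (pred - target) ** 2)

# Skewed, rain-like targets: most samples near zero, few heavy events.
rng = np.random.default_rng(0)
target = rng.gamma(shape=0.5, scale=2.0, size=10000)
bin_edges = np.array([1.0, 5.0, 20.0])               # illustrative thresholds
counts = np.bincount(np.digitize(target, bin_edges), minlength=4)
bin_weights = counts.sum() / np.maximum(counts, 1)   # inverse bin frequency

pred = np.zeros_like(target)                         # a trivial "always dry" forecast
plain = np.mean((pred - target) ** 2)
weighted = weighted_mse(pred, target, bin_edges, bin_weights)
# The weighted loss penalises the missed extremes much more than plain MSE.
```

During training, such a weighting steers the network away from the climatologically safe "predict little rain" solution that plain MSE tends to reward.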

[1] C.K. Sønderby et al. "MetNet: A Neural Weather Model for Precipitation Forecasting." arXiv preprint arXiv:2003.12140 (2020). 
[2] S. Rasp and N. Thuerey "Purely data-driven medium-range weather forecasting achieves comparable skill to physical models at similar resolution." arXiv preprint arXiv:2008.08626 (2020).

How to cite: Hess, P. and Boers, N.: Inferring precipitation from atmospheric general circulation model variables, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-678, https://doi.org/10.5194/egusphere-egu21-678, 2021.

13:42–13:44
|
EGU21-1487
|
ECS
Sayahnya Roy

Wind energy is widely used in renewable energy systems, but the randomness and intermittence of the wind make its accurate prediction difficult. This study develops an advanced and reliable model for multi-step wind variability prediction using a long short-term memory (LSTM) network based on a deep learning neural network (DLNN). A 20 Hz ultrasonic anemometer was positioned in northern France (LOG site) to measure wind variability over thirty-four days. Real-time turbulence kinetic energy is computed from the measured wind velocity components, and multi-resolution features of wind velocity and turbulence kinetic energy are used as input for the prediction model. These multi-resolution features are extracted using a one-dimensional discrete wavelet transform. The proposed DLNN is framed to implement multi-step prediction ranging from 10 min to 48 h. For velocity prediction, the root mean square error, mean absolute error and mean absolute percentage error are 0.047 m/s, 0.19 m/s, and 11.3%, respectively. These error values indicate good reliability of the proposed DLNN for predicting wind variability. We find that the present model performs well for mid- to long-term (6-24 h) wind velocity prediction, and is also good for long-term (24-48 h) turbulence kinetic energy prediction.
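A one-level Haar transform illustrates how such multi-resolution features are obtained (a pure-numpy sketch with an illustrative signal; the study's wavelet choice and data are not reproduced here):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: coarse trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: fine fluctuations
    return a, d

def multiresolution_features(x, levels=3):
    """Detail coefficients at several scales plus the final coarse
    approximation: the kind of multi-resolution feature set fed to a
    prediction model."""
    feats, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(d)
    feats.append(a)
    return feats

# 20 Hz wind-speed-like signal: slow trend plus turbulent fluctuations.
rng = np.random.default_rng(0)
t = np.arange(1024) / 20.0
u = 8.0 + 2.0 * np.sin(2 * np.pi * t / 30.0) + rng.normal(0, 0.5, t.size)
feats = multiresolution_features(u, levels=3)
sizes = [f.size for f in feats]            # [512, 256, 128, 128]
```

Because the Haar transform is orthonormal, the decomposition preserves the signal's energy while separating scales, which is what makes the coefficients useful as features.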

How to cite: Roy, S.: Multi-step wind variability prediction based on deep learning neural network, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1487, https://doi.org/10.5194/egusphere-egu21-1487, 2021.

13:44–13:46
|
EGU21-2577
|
ECS
Elżbieta Lasota

Precise and reliable information on tropospheric temperature and water vapour profiles plays a key role in weather and climate studies. Among the sensors supporting observation of the atmosphere is the Global Navigation Satellite System Radio Occultation (RO) technique, which provides accurate and high-quality meteorological profiles of temperature, pressure and water vapour. However, external knowledge about temperature is essential to estimate other physical atmospheric parameters. Hence, to overcome the need for an a priori temperature profile for each RO event, I trained and evaluated four different machine learning models, comprising Artificial Neural Network (ANN) and Random Forest regression algorithms, in which no auxiliary meteorological data are needed. To develop the models, I employed almost 7000 RO profiles between October 2019 and June 2020 over part of the western North Pacific in Taiwan's vicinity (110-130° E; 10-30° N). Input vectors consisted of bending angle or refractivity profiles from the Formosa Satellite-7/Constellation Observing System for Meteorology, Ionosphere, and Climate-2 mission, together with the month, hour, and latitude of the RO event. Temperature, pressure and water vapour profiles derived from the ERA5 reanalysis and interpolated to the RO location served as the models' targets. Evaluation on the testing data set revealed good agreement between all model outputs and the ERA5 targets. Slightly better statistics were noted for the ANN and refractivity inputs; however, these differences can be considered negligible. The root mean square error (RMSE) did not exceed 2 K for temperature and 1.5 hPa for pressure, and reached slightly more than 2.5 hPa for water vapour below 2 km altitude.
Additional validation with 56 colocated radiosonde observations and an operational one-dimensional variational product confirms these findings, with vertically averaged RMSEs of around 1.3 K, 1.0 hPa and 0.5 hPa for temperature, pressure and water vapour, respectively.

How to cite: Lasota, E.: New machine learning approaches for tropospheric profiling based on COSMIC-2 data over Taiwan, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2577, https://doi.org/10.5194/egusphere-egu21-2577, 2021.

13:46–13:48
|
EGU21-3342
David Meyer et al.

The treatment of cloud structure in radiation schemes used in operational numerical weather prediction and climate models is often greatly simplified to make them computationally affordable. Here, we propose to correct the current operational scheme ecRad – as used for operational predictions at the European Centre for Medium-Range Weather Forecasts – for 3D cloud radiative effects using computationally cheap neural networks. The 3D cloud radiative effects are learned as the difference between ecRad’s fast Tripleclouds solver that neglects 3D cloud radiative effects, and its SPeedy Algorithm for Radiative TrAnsfer through CloUd Sides (SPARTACUS) solver that includes them but increases the cost of the entire radiation scheme. We find that the emulator increases the overall accuracy for both longwave and shortwave with a negligible impact on the model’s runtime performance.
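The correction idea - learn the difference between the cheap and the expensive solver, then add it back at runtime - can be sketched with stand-in toy "solvers" and a linear least-squares fit in place of the neural network (all functions and features below are hypothetical illustrations, not ecRad code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))  # toy inputs: cloud fraction, water path, overlap

def cheap_solver(X):       # Tripleclouds-like: neglects 3D effects (toy)
    return 100.0 * X[:, 0] * X[:, 1]

def expensive_solver(X):   # SPARTACUS-like: includes 3D effects (toy)
    return cheap_solver(X) + 10.0 * X[:, 0] * X[:, 2] ** 2

# The emulation target is the 3D cloud radiative effect: the solver difference.
target = expensive_solver(X) - cheap_solver(X)

# Fit the correction on simple polynomial features of the inputs.
feats = np.column_stack([np.ones(len(X)), X, X**2, X[:, 0] * X[:, 2] ** 2])
coef, *_ = np.linalg.lstsq(feats, target, rcond=None)

# At runtime: cheap solver plus learned correction approximates the expensive one.
corrected = cheap_solver(X) + feats @ coef
err_before = np.sqrt(np.mean((cheap_solver(X) - expensive_solver(X)) ** 2))
err_after = np.sqrt(np.mean((corrected - expensive_solver(X)) ** 2))
```

Learning the difference rather than the full output keeps the emulation task small and leaves the fast solver's behaviour as the baseline, which is why the runtime impact can stay negligible.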

How to cite: Meyer, D., Hogan, R. J., Dueben, P. D., and Mason, S. L.: Machine Learning Emulation of 3D Cloud Radiative Effects, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3342, https://doi.org/10.5194/egusphere-egu21-3342, 2021.

13:48–13:50
|
EGU21-4570
|
ECS
Frauke Albrecht et al.

In coupled global circulation models, chemical interactions between atmospheric trace gases are modelled through dedicated atmospheric chemistry submodels. As these components tend to be computationally expensive, one is often faced with the choice of either running the models with chemistry at relatively coarse resolution, or ignoring atmospheric chemistry altogether. Here an alternative approach is presented to overcome the high computational cost while attaining comparable quality of results. A fully connected neural network is used to predict chemical tendencies. The inputs to the neural network are chemical mixing ratios, temperature, pressure, the ozone column and the solar zenith angle, all taken from the global numerical atmosphere-chemistry model EMAC. The time period considered is 3 months, divided into consecutive 11-hour time steps. In total, 181 time steps are analysed, of which the first 128 are used as training data, the following 26 as validation data, and the last 27 are kept for final testing. The EMAC model produces results for 110 chemical species on a 160x320 horizontal grid with 90 vertical levels. In our preliminary approach, only 6 of these species - those describing the Chapman mechanism and the nitrogen oxides - are predicted, and the analysis is restricted to the stratosphere. Further, species that are zero at 95% or more of the data points have been removed from the input data. The neural network reproduces the spatial patterns of the climate model data very well and is of the same order of magnitude. Spatial correlations depend on the species and the vertical level, but are in general >0.95 at levels where the considered variable is present. However, errors increase during the validation period, probably due to trends in the analysed data.
This work presents a proof of concept that neural networks are able to predict atmospheric chemistry tendencies. Left for future work are detailed hyperparameter tuning to optimize the model, and extension to longer time periods to overcome modelling problems due to seasonal trends in the data.
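The described filtering and chronological split can be sketched as follows (illustrative array sizes; the species layout is a stand-in, only the 181-step split and the 95%-zero rule come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_species = 181, 10
data = rng.random((n_steps, n_species))      # stand-in for EMAC mixing ratios
data[:, 3] = 0.0                             # an always-zero species
data[rng.random(n_steps) < 0.99, 7] = 0.0    # a nearly-always-zero species

# Drop species that are zero at 95% or more of the data points.
keep = (data == 0).mean(axis=0) < 0.95
data = data[:, keep]

# Chronological split: 128 training / 26 validation / 27 test time steps.
train, val, test = data[:128], data[128:154], data[154:]
```

Splitting chronologically rather than randomly is what exposes the reported error growth over the validation period, since seasonal trends push the later data away from the training distribution.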

How to cite: Albrecht, F., Stiehler, F., Sinnhuber, B.-M., Versick, S., and Weigel, T.: AI for Fast Atmospheric Chemistry, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4570, https://doi.org/10.5194/egusphere-egu21-4570, 2021.

13:50–13:52
|
EGU21-5826
|
ECS
Baki Harish et al.

The Indian subcontinent is prone to tropical cyclones that originate in the Indian Ocean and cause widespread destruction to life and property. Accurate prediction of cyclone track, landfall, wind, and precipitation is critical to minimizing damage. The Weather Research and Forecasting (WRF) model is widely used to predict tropical cyclones. The accuracy of the model prediction depends on initial conditions, physics schemes, and model parameters. The parameter values are selected empirically by scheme developers using trial and error, implying that they are sensitive to climatological conditions and regions. The WRF model has several hundred tunable parameters, and calibrating all of them is practically impossible since it would require thousands of simulations. Therefore, sensitivity analysis is critical to screen out the parameters that significantly impact the meteorological variables. The Sobol' sensitivity analysis method is used to identify the sensitive WRF model parameters. As this method requires a considerable number of samples to evaluate the sensitivity adequately, machine learning algorithms are used to construct surrogate models trained on a limited number of samples; these can then generate the vast number of required pseudo-samples. Five machine learning algorithms, namely Gaussian Process Regression (GPR), Support Vector Machine, Regression Tree, Random Forest, and K-Nearest Neighbor, are considered in this study. Ten-fold cross-validation is used to evaluate the surrogate models constructed with the five algorithms and to identify the most robust among them. The samples generated from this surrogate model are then used by the Sobol' method to evaluate the WRF model parameter sensitivity.
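The workflow - a few expensive model runs train a surrogate, and many cheap surrogate evaluations feed the Sobol' estimator - can be sketched on a toy model whose first-order indices are known analytically (linear least squares stands in for the surrogate, and the standard pick-freeze estimator is used; nothing here is WRF-specific):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(X):           # stand-in for a WRF simulation
    return 3.0 * X[:, 0] + 1.0 * X[:, 1]   # known S1 = 0.9, S2 = 0.1

# 1) A small number of expensive runs trains the surrogate.
X_train = rng.uniform(-1, 1, size=(30, 2))
y_train = expensive_model(X_train)
design = np.column_stack([np.ones(30), X_train])
coef, *_ = np.linalg.lstsq(design, y_train, rcond=None)
surrogate = lambda X: coef[0] + X @ coef[1:]

# 2) Many cheap pseudo-samples feed the pick-freeze Sobol' estimator:
#    S_i = mean(f(B) * (f(A_B^i) - f(A))) / Var(f(A)).
N = 100000
A = rng.uniform(-1, 1, size=(N, 2))
B = rng.uniform(-1, 1, size=(N, 2))
fA, fB = surrogate(A), surrogate(B)
var = fA.var()
S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # "freeze" all inputs except the i-th
    S.append(float(np.mean(fB * (surrogate(ABi) - fA)) / var))
```

For this toy model the estimates converge to the analytic indices 0.9 and 0.1; the surrogate makes the hundreds of thousands of evaluations affordable.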

How to cite: Harish, B., Chinta, S., Balaji, C., and Srinivasan, B.: Use of Machine Learning algorithms in evaluating the WRF model parameter sensitivity for the simulation of tropical cyclones, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-5826, https://doi.org/10.5194/egusphere-egu21-5826, 2021.

13:52–13:54
|
EGU21-6338
|
ECS
|
Weixin Jin and Yong Luo

Summer precipitation in China exhibits considerable spatio-temporal variation with direct social and economic impact, yet its seasonal prediction remains a long-standing challenge. Dynamical models, even with a 1-month lead, still show limited forecast skill over China in summer. The present study applies deep learning to summer precipitation prediction in China. We train a convolutional neural network (CNN) on seasonal retrospective forecasts from forecast centres in several European countries, and subsequently use transfer learning on reanalysis and observational data from 160 stations over China. The Pearson correlation coefficient (PCC) and the root mean square error (RMSE) are used to evaluate the precipitation forecasts. The results demonstrate that the deep learning approach produces skillful forecasts better than those of current state-of-the-art dynamical forecast systems and traditional statistical downscaling methods, with PCC increasing by 0.1-0.3 at 1-3 month leads. Moreover, experiments show that the data-driven model is capable of learning the complex relationship between input atmospheric state variables from reanalysis data and precipitation from station observations, with a PCC of about 0.69. An image-occlusion technique is also applied to determine the variables and spatial features of the general circulation in the Northern Hemisphere that contribute most to the spatial distribution of summer precipitation in China through automatic feature representation learning, and to help evaluate the weaknesses of dynamical models, in order to better understand the factors that limit seasonal prediction capability. This suggests that deep learning is a powerful tool suitable both for seasonal prediction and for dynamical model assessment.

How to cite: Jin, W. and Luo, Y.: Improving Summer Precipitation Prediction in China Using Deep Learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6338, https://doi.org/10.5194/egusphere-egu21-6338, 2021.

13:54–13:56
|
EGU21-7678
Matthew Chantry et al.

We assess the value of machine learning as an accelerator for a kernel of an operational weather forecasting system, specifically the parameterisation of non-orographic gravity wave drag. Emulators of this scheme can be trained to produce stable and accurate results up to seasonal forecasting timescales. By training on an increased-complexity version of the parameterisation scheme, we build emulators that produce more accurate forecasts than the existing parameterisation scheme. Leveraging the differentiability of neural networks, we generate tangent-linear and adjoint versions of our parameterisation, key components in 4D-Var data assimilation. We test our tangent-linear and adjoint codes within an operational-like 4D-Var setup and find no degradation in skill versus hand-written tangent-linear and adjoint codes.
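The consistency of tangent-linear and adjoint codes is commonly verified with the dot-product (adjoint) test: for any perturbations dx and dy, the identity <J dx, dy> = <dx, J^T dy> must hold. A numpy sketch for a toy two-layer network (sizes and weights are illustrative, not the operational scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))

def forward(x):
    """Toy emulator: f(x) = W2 tanh(W1 x)."""
    return W2 @ np.tanh(W1 @ x)

def tangent_linear(x, dx):
    """J(x) dx, with J = W2 diag(1 - tanh^2(W1 x)) W1."""
    s = W1 @ x
    return W2 @ ((1 - np.tanh(s) ** 2) * (W1 @ dx))

def adjoint(x, dy):
    """J(x)^T dy: the same chain applied in reverse."""
    s = W1 @ x
    return W1.T @ ((1 - np.tanh(s) ** 2) * (W2.T @ dy))

x, dx, dy = rng.normal(size=8), rng.normal(size=8), rng.normal(size=4)
lhs = float(tangent_linear(x, dx) @ dy)
rhs = float(dx @ adjoint(x, dy))
```

For a neural network these derivative codes fall out of the architecture automatically, which is exactly the advantage exploited here over hand-written tangent-linear and adjoint code.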

How to cite: Chantry, M., Hatfield, S., Duben, P., Polichtchouk, I., and Palmer, T.: Machine learning emulation of gravity wave drag in numerical weather forecasting, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7678, https://doi.org/10.5194/egusphere-egu21-7678, 2021.

13:56–13:58
|
EGU21-9333
|
ECS
Le-Yi Wang and Zhe-Min Tan

The tropical cyclone (TC) is among the most destructive weather phenomena on Earth, and its structure and intensity are strongly modulated by the TC boundary layer. Mesoscale models used for TC research and prediction must rely on boundary layer parameterization owing to their low spatial resolution. These boundary layer schemes were mostly developed from field experiments under moderate wind speeds; they often underestimate the influence of shear-driven rolls and turbulence, so significant bias is unavoidable when they are applied under extreme conditions such as the TC boundary layer. In this study, a novel machine learning model - a one-dimensional convolutional neural network (1D-CNN) - is proposed to tackle the TC boundary layer parameterization dilemma. The 1D-CNN saves about half of the learnable parameters and achieves a steady improvement compared to a fully-connected neural network. TC large eddy simulation outputs, whose calculated turbulent fluxes show strong skewness, are used as training data for the 1D-CNN; the data skewness problem is alleviated in order to reduce the model bias. An offline TC boundary layer test shows that the proposed 1D-CNN performs significantly better than popular schemes currently used in TC simulations. Model performance across different scales is essential to the final application. We find that high-resolution data contain the information of low-resolution data, but not vice versa, and that performance on the extreme data is key to performance on the whole dataset. Training the model on the highest-resolution non-extreme data plus extreme data at different resolutions secures robust performance across scales.
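The parameter saving of a 1D convolution over a fully connected layer comes from weight sharing along the vertical column, which can be sketched in numpy (all sizes are illustrative, not the study's architecture):

```python
import numpy as np

n_levels, k = 64, 5

def conv1d(x, kernel, bias=0.0):
    """'Same'-padded 1D convolution over a vertical profile x:
    the same small kernel slides over every model level."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + len(kernel)] @ kernel
                     for i in range(len(x))]) + bias

rng = np.random.default_rng(0)
profile = np.sin(np.linspace(0, np.pi, n_levels))  # e.g. a toy wind profile
kernel = rng.normal(size=k)
out = conv1d(profile, kernel)

params_conv = k + 1                      # one shared kernel + bias
params_dense = n_levels * n_levels + n_levels  # dense level-to-level mapping
```

Because boundary layer turbulence is governed by local vertical gradients, a shared kernel is a natural inductive bias, and the layer needs orders of magnitude fewer parameters than a dense mapping between levels.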

How to cite: Wang, L.-Y. and Tan, Z.-M.: Machine Learning Parameterization of Mature Tropical Cyclone Boundary Layer, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9333, https://doi.org/10.5194/egusphere-egu21-9333, 2021.

13:58–14:00
|
EGU21-9910
|
ECS
Petrina Papazek and Irene Schicker

In this study, we address point forecasting using a deep learning LSTM approach for renewable energy systems, with a focus on the short to medium range. Hourly resolution (medium range) and 10-minute resolution (nowcasting) are the anticipated forecasting frequencies. The forecasting approach is applied to: (i) wind speed at 10 meters height (observation sites), (ii) wind speed at hub height of wind turbines, and (iii) solar power forecasts for selected solar power plants.

As input to the proposed method, numerical weather prediction (NWP) data, gridded observations (analysis and/or reanalysis), and point data are used. The data for the studied test cases are extracted from the Austrian TAWES system (Teilautomatische Wetterstationen, meteorological observations in 10-minute intervals), SCADA data of wind farms, the solar power output of a solar power plant, INCA (Integrated Nowcasting through Comprehensive Analysis) gridded observation fields, reanalysis fields from MERRA-2 and ERA5-Land, as well as NWP data from the ECMWF IFS (the European Centre for Medium-Range Weather Forecasts' Integrated Forecasting System). These data sources have very different temporal and spatial semantics; thus, careful pre-processing was carried out. Four daily runs over the course of one year are conducted for 12 synoptic sites, 38 wind turbines, and 1 solar power plant test location.

The advantage of an LSTM architecture is that it includes recurrent steps in the ANN and is thus especially useful for time series such as meteorological observations or NWP forecasts. So far, comparatively few attempts have been made to integrate time series with different semantics from a sensor network and physical models in one LSTM. We tackle this issue by conserving the time steps of the delayed NWP along with their difference to recently observed time series and, additionally, separating them into forecasting intervals (e.g., 3 to 12 subsequent forecast hours, shortest in nowcasting). This enables us to employ a sequence-to-sequence LSTM-based artificial neural network (ANN). The benefit of a sequence-to-sequence setup is that it matches an input and an output time series in each sample, thereby learning complex temporal relationships. To fully use the advantage of the diverse data, tailored pre- and post-processing of these heterogeneous data sources is needed in renewable energy applications.

The ANN's results yield, in general, high forecast skill, indicating successful learning from the training data. Different combinations of inputs and processing steps were investigated. It is shown that combining various data sources and implementing adequate pre- and post-processing yields the most promising results in the case studies (e.g., a heuristic to estimate produced power based on the meteorological parameters, and prediction of the offset to NWPs tailored to the studied location). Results are compared to traditional forecast methods and statistical methods such as random forests and multiple linear regression.

How to cite: Papazek, P. and Schicker, I.: A deep learning LSTM forecasting approach for renewable energy systems, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9910, https://doi.org/10.5194/egusphere-egu21-9910, 2021.

14:00–14:02
|
EGU21-10035
|
ECS
Kara D. Lamb and Pierre Gentine

Aerosols sourced from combustion, such as black carbon (BC), are important short-lived climate forcers whose direct radiative forcing and atmospheric lifetime depend on their morphology. These aerosols are typically fractal aggregates consisting of ~20-80 nm spheres. This complex morphology makes modeling their optical properties difficult, contributing to uncertainty in both their direct and indirect climate effects. Accurate and fast calculations of BC optical properties are needed for remote sensing inversions and for radiative forcing calculations in atmospheric models, but current methods to accurately calculate the optical properties of these aerosols, such as the multi-sphere T-matrix method or generalized multiple-particle Mie theory, are computationally expensive and must be compiled into extensive databases off-line and then used as look-up tables. Recent advances in machine learning have applied graph convolutional neural networks (GCNs) to various physical science applications, demonstrating skill in generalizing beyond the initial training data by exploiting and learning the internal properties and interactions inherent to the larger system. Here we demonstrate for the first time that a GCN trained to predict the optical properties of numerically generated BC fractal aggregates can accurately generalize to arbitrarily shaped aerosol particles, even for much larger aggregates than in the training dataset, providing a fast and accurate method to calculate aerosol optical properties in atmospheric models and for observational retrievals. This approach could be integrated into atmospheric models or remote sensing inversions to more realistically predict the physical properties of arbitrarily shaped aerosol and cloud particles.
In addition, GCNs can be used to gain physical intuition about the relationship between large-scale properties (here, the radiative properties of aerosols) and small-scale interactions (here, the spheres' positions and their interactions).
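A single graph-convolution layer of the Kipf and Welling form illustrates how per-node features are aggregated over the neighbour structure (a numpy sketch; the toy aggregate, feature choice and sizes are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = relu(D^{-1/2} (A + I) D^{-1/2} H W).

    Nodes here would be the spheres of a fractal aggregate; edges connect
    touching/nearby spheres, so each layer mixes local interactions."""
    A_hat = A + np.eye(len(A))                  # add self-connections
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy aggregate: 5 spheres connected in a chain.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))   # per-sphere input features, e.g. positions
W = rng.normal(size=(3, 8))   # learnable weights
out = gcn_layer(A, H, W)      # one embedding per sphere
```

Because the same weights act on every node regardless of graph size, stacked layers of this kind can, in principle, be applied to aggregates larger than any seen in training, which is the generalization property exploited in the abstract.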

How to cite: Lamb, K. D. and Gentine, P.: Predicting the Optical Properties of Arbitrarily Shaped Black Carbon Aerosols with Graph Neural Networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10035, https://doi.org/10.5194/egusphere-egu21-10035, 2021.

14:02–14:04
|
EGU21-11105
|
ECS
Andrew Barnes et al.

Traditional weather forecasting approaches utilize numerous numerical simulations and empirical models to produce a gridded estimate of rainfall, the cells of which often span multiple regions and struggle to capture extreme events. The approach presented here combines modern meteorological forecasts from the ECMWF C3S seasonal forecast service with convolutional neural networks (CNNs) to improve the forecasting of total monthly regional rainfall in the UK. The CNN is trained using mean sea-level pressure and 2 m air temperature forecasts from the ECMWF C3S service at three lead times: one month, three months and six months. The training is supervised using the equivalent true rainfall data provided by CEH-GEAR (Centre for Ecology and Hydrology, gridded estimates of areal rainfall). The resulting predictions are then compared with the total monthly regional rainfall values calculated from the precipitation forecasts provided by the ECMWF C3S service. This comparison shows the new CNN model outperforms the ECMWF model across all three lead times. Performance is measured by the root-mean-square error (RMSE) between the predicted rainfall values for each region and the true values calculated from the CEH-GEAR dataset. The largest gap is found at a one-month lead time, where the CNN model scores an RMSE 13% lower than the ECMWF model (RMSEs: 46.5 and 53.4, respectively); the smallest gap is found at a six-month lead time, where the CNN scores an RMSE only 2.2% lower (RMSEs: 48.5 and 49.6, respectively). These differences are amplified at the extremes, with the CNN producing errors 26% lower than the ECMWF model at a one-month lead time, 19% lower at a three-month lead time and 3% lower at a six-month lead time.
These results are then extended to show how the CNN made its predictions: by comparing the attribution patterns for North West and South East England, we show a reliance on both the mean sea-level pressure to the west of the UK and the 2 m air temperature to the south west of the UK and over the European continent.

How to cite: Barnes, A., McCullen, N., and Kjeldsen, T. R.: Improving Regional Rainfall Forecasts using Convolutional-Neural Networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11105, https://doi.org/10.5194/egusphere-egu21-11105, 2021.

14:04–14:06
|
EGU21-11448
|
ECS
|
Sagar Garg et al.

Because the atmosphere is inherently chaotic, probabilistic weather forecasts are crucial to provide reliable information. In this work, we present an extension to WeatherBench, a benchmark dataset for medium-range, data-driven weather prediction that was originally designed for deterministic forecasts. We add a set of commonly used probabilistic verification metrics: the spread-skill ratio, the continuous ranked probability score (CRPS) and rank histograms. Further, we compute baseline scores from the operational IFS ensemble forecast.

Then, we compare three different methods of creating probabilistic neural network forecasts: first, using Monte-Carlo dropout during inference with a range of dropout rates; second, parametric forecasts, which optimize for the CRPS; and third, categorical forecasts, in which the probability of occurrence for specific bins is predicted. We show that plain Monte-Carlo dropout does not provide enough spread. The parametric and categorical networks, on the other hand, provide reliable forecasts, with the categorical method being more versatile.
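Two of the added metrics have simple forms that can be sketched directly (numpy and stdlib only; the Gaussian CRPS uses the standard closed-form expression, and the synthetic "reliable" ensemble is illustrative):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) vs observation y."""
    z = (y - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return sigma * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / sqrt(pi))

def spread_skill(ens, obs):
    """Mean ensemble spread divided by the RMSE of the ensemble mean;
    close to 1 for a reliable, well-dispersed ensemble."""
    spread = float(np.mean(ens.std(axis=1, ddof=1)))
    rmse = float(np.sqrt(np.mean((ens.mean(axis=1) - obs) ** 2)))
    return spread / rmse

# Synthetic reliable ensemble: the observation is statistically
# indistinguishable from an ensemble member.
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 1.0, size=5000)
ens = truth[:, None] + rng.normal(0.0, 1.0, size=(5000, 20))
obs = truth + rng.normal(0.0, 1.0, size=5000)
ratio = spread_skill(ens, obs)   # close to 1 by construction
```

An under-dispersive forecast (such as plain Monte-Carlo dropout in the abstract) shows up as a spread-skill ratio well below 1, while the CRPS rewards forecasts that are both sharp and well-calibrated.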

How to cite: Garg, S., Rasp, S., and Thuerey, N.: WeatherBench Probability: Medium-range weather forecasts with probabilistic machine learning methods., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11448, https://doi.org/10.5194/egusphere-egu21-11448, 2021.

14:06–14:08
|
EGU21-12146
|
ECS
Felix Kleinert et al.

Machine learning techniques like deep learning have gained enormous momentum in recent years, driven mainly by success stories such as image and speech recognition, video prediction and autonomous driving, to name just a few.
Air pollutant forecasting is one example where Earth system scientists are starting to pick up deep learning models to enhance the forecast quality of time series. Almost all previous air pollution forecasts with machine learning rely solely on analysing temporal features in the observed time series of the target compound(s) and additional variables describing precursor concentrations and meteorological conditions. These studies therefore neglect the "chemical history" of air masses, i.e. the fact that air pollutant concentrations at a given observation site result from emission and sink processes, mixing, and chemical transformations along the transport pathways of the air.
This study develops a concept of how such factors can be represented in the recently published deep learning model IntelliO3. The concept is demonstrated with numerical model data from the WRF-Chem model because the gridded model data provides an internally consistent dataset with complete spatial coverage and no missing values.
Furthermore, using model data allows for attributing changes in forecasting performance to specific conceptual aspects. For example, we use the 8 wind sectors (N, NE, E, SE, etc.) and circles with predefined radii around our target locations to aggregate meteorological and chemical data from the intersections. Afterwards, we feed this aggregated data into a deep neural network, using the ozone concentrations at the central point's next timesteps as targets. By analysing the change in forecast quality when moving from 4-dimensional (x, y, z, t) to 3-dimensional (x, y, t or r, φ, t) sectors and thinning out the underlying model data, we can deliver first estimates of the expected performance gains or losses when applying our concept to station-based surface observations in future studies.
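The sector-and-radius aggregation can be sketched with a small helper that bins grid-point offsets (relative to the target station) into the 8 wind sectors within a given radius; the function names here are hypothetical and only illustrate the geometry, not the IntelliO3/WRF-Chem pipeline:

```python
import math

SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def wind_sector(dx, dy):
    """Map a grid-point offset (dx east, dy north) to one of the 8 wind sectors.
    Bearing is measured clockwise from north; each sector spans 45 degrees."""
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    return SECTORS[int(((bearing + 22.5) % 360.0) // 45.0)]

def aggregate_by_sector(points, radius):
    """Average a field over each wind sector within the given radius.
    `points` holds (dx, dy, value) tuples relative to the target location."""
    sums, counts = {}, {}
    for dx, dy, value in points:
        if dx * dx + dy * dy <= radius * radius and (dx, dy) != (0.0, 0.0):
            s = wind_sector(dx, dy)
            sums[s] = sums.get(s, 0.0) + value
            counts[s] = counts.get(s, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}
```

Moving from (x, y, z, t) to (r, φ, t) then amounts to collapsing each sector-radius intersection to a single aggregated value per timestep.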

How to cite: Kleinert, F., Leufen, L. H., Lupascu, A., Butler, T., and Schultz, M. G.: Representing chemical history for ozone time-series predictions - a method development study for deep learning models, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12146, https://doi.org/10.5194/egusphere-egu21-12146, 2021.

14:08–14:10
|
EGU21-12525
Fabian Romahn et al.

The increasing amount of Earth observation data provided by the Copernicus satellite sensors, the already operational Sentinel-5 Precursor (S5P) and the upcoming Sentinel-4 (S4), has to be processed within strict near-real-time (NRT) requirements, demanding new approaches to cope with this challenge.

In order to solve the inverse problems that arise in atmospheric remote sensing, complex radiative transfer models (RTMs) are usually used. These are very accurate, but also computationally very expensive and therefore often not feasible given the time requirements of operational products. With the recent significant breakthroughs in machine learning, and easier application through better software and more powerful hardware, methods from this field have become very attractive for improving classical remote sensing algorithms.

In this presentation we show a general approach for replacing the RTM of an inversion algorithm with an artificial neural network (ANN) of sufficient accuracy while increasing performance by several orders of magnitude. The individual steps, sampling and scaling of the training data, selection of the ANN architecture, and the training itself, are explained in detail. This is then demonstrated using the example of the ROCINN (Retrieval of cloud information using neural networks) algorithm for the operational cloud product of S5P. We then show how this approach can easily be applied to the upcoming S4 mission and how the current S5P algorithm can be improved by replacing or adding new physical models (e.g. for ice clouds) in the form of ANNs.

The procedure has been continuously developed and evaluated over time and the most important results, in terms of sampling, architecture selection, activation functions and training parameters, are presented.

Finally, the huge performance benefits of using an ANN instead of the original RTM also allow for improvements in the inversion algorithm. Several ideas regarding this, e.g. global optimization techniques, are also shown.

How to cite: Romahn, F., Molina Garcia, V., Argyrouli, A., Lutz, R., and Loyola, D.: Application of Machine Learning for the operational Cloud Product of the Copernicus Satellite Sensors Sentinel-4 (S4) and TROPOMI / Sentinel-5 Precursor (S5P), EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12525, https://doi.org/10.5194/egusphere-egu21-12525, 2021.

14:10–14:12
|
EGU21-211
Scarlet Stadtler et al.

Artificial intelligence (AI) methods are currently experiencing rapid development and are used more and more frequently in environmental and Earth system sciences. To date, however, this is often done in the context of isolated rather than systematic solutions. In particular, researchers often face a discrepancy between the requirements of a solid and technically sound environmental data analysis and the availability of modern AI methods such as deep learning. Their systematic use is not yet established in environmental and Earth system sciences.

The recently started KI:STE project bridges this gap with a dedicated strategy that combines the development of AI applications with a strong training and network concept, thereby covering different relevant aspects of environmental and Earth system research. It creates the technical prerequisites to make high-performance AI applications on environmental data portable for future users and to establish environmental AI as a key technology.

Specifically, within KI:STE an AI platform is envisioned which unifies machine learning (ML) workflows designed to study five core Earth system topics: cloud variability, hydrology, Earth surface processes, vegetation health and air quality. All of them are strongly coupled and will profit from ML, e.g. to extend locally available information into global maps, or to track the interplay of spatio-temporal variability on different scales along process cascades. Besides being connected across disciplines in the classical sense, KI:STE furthermore aims to bridge between these topics by jointly addressing cutting-edge ML research questions beyond pure algorithmic approaches. In particular, we will put emphasis on explainable AI, itself a highly relevant and still largely unexplored topic within the Earth system sciences, which has the potential to connect the interdisciplinary work on yet another level.

KI:STE will also launch an e-learning platform in order to support the usage of the AI-platform as well as to communicate the knowledge to adequately use ML techniques within the different Earth system science domains.

How to cite: Stadtler, S., Kowalski, J., Abel, M., Roscher, R., Crewell, S., Gräler, B., Kollet, S., and Schultz, M.: KI:STE Project − AI Strategy for Earth System Data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-211, https://doi.org/10.5194/egusphere-egu21-211, 2020.

14:12–14:14
|
EGU21-13307
|
ECS
Lucile Ricard et al.

Aerosol-cloud interactions remain the largest uncertainty in assessments of anthropogenic climate forcing, and the complexity of these interactions requires methods that enable abstractions and simplifications allowing their improved treatment in climate models. Marine boundary layer clouds are an important component of the climate system, as their large albedo and spatial coverage strongly affect the planetary radiative balance. High-resolution simulations of clouds provide an unprecedented understanding of the structure and behavior of these clouds in the marine atmosphere, but the amount of data is often too large and complex to be useful in climate simulations. Data reduction and inference methods provide a way to reduce the complexity and dimensionality of datasets generated from high-resolution Large Eddy Simulations.

In this study we use network analysis (the δ-Maps method) to study the complex interactions between liquid water, droplet number and vertical velocity in Large Eddy Simulations of marine boundary layer clouds. δ-Maps identifies domains that are spatially contiguous and possibly overlapping and characterizes their connections and temporal interactions. The objective is to better understand the microphysical properties of marine boundary layer clouds, and how they are impacted by variability in aerosols. Here we capture the dynamical structure of the cloud fields predicted by the MIMICA Large Eddy Simulation (LES) model. The networks inferred from the different simulation fields are compared with each other (intra-comparisons) under perturbations in initial conditions and aerosol, using a set of four metrics. The networks are then evaluated for their differences, quantifying how much variability is inherent in the LES simulations versus the robust changes induced by the aerosol fields.
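δ-Maps itself additionally enforces spatial contiguity of domains; as a much-simplified sketch of the underlying idea, a network can be inferred from field time series by thresholding pairwise Pearson correlations (the threshold value here is an assumption for illustration):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def correlation_network(series, threshold=0.9):
    """Edges between named time series whose |Pearson r| exceeds a threshold."""
    keys = sorted(series)
    return [
        (i, j)
        for k, i in enumerate(keys)
        for j in keys[k + 1:]
        if abs(pearson(series[i], series[j])) >= threshold
    ]
```

Comparing the edge sets of networks inferred from perturbed simulations then gives one of the simplest possible intra-comparison metrics.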

How to cite: Ricard, L., Nenes, A., Runge, J., and Georgakaki, P.: Exploring microphysical properties of marine boundary layer clouds through network analysis, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13307, https://doi.org/10.5194/egusphere-egu21-13307, 2021.

14:14–14:16
|
EGU21-13334
|
ECS
Yael Sde-Chen et al.

Clouds are a key factor in Earth's energy budget and thus significantly affect climate and weather predictions. These effects are dominated by shallow warm clouds (Sherwood et al., 2014; Zelinka et al., 2020), which tend to be small and heterogeneous. Therefore, remote sensing of clouds and three-dimensional (3D) volumetric reconstruction of their internal properties are of significant importance.

Recovery of the volumetric information of clouds relies on 3D radiative transfer, which models 3D multiple scattering. This model is complex and nonlinear, so inverting it poses a major challenge and typically requires simplification. A common relaxation assumes that clouds are horizontally uniform and infinitely broad, leading to one-dimensional modelling. However, this assumption is generally invalid, since clouds are naturally highly heterogeneous. A novel alternative is to perform cloud retrieval with tools of 3D scattering tomography, in which multiple satellite images of the clouds are acquired from different points of view. For example, simultaneous multi-view radiometric images of clouds are proposed by the CloudCT project, funded by the ERC. Unfortunately, 3D scattering tomography requires high computational resources, which in practice results in slow run times and prevents large-scale analysis. Moreover, existing scattering tomography is based on iterative optimization, which is sensitive to initialization.

In this work we introduce a deep neural network for 3D volumetric reconstruction of clouds. In recent years, supervised learning using deep neural networks has led to remarkable results in various fields ranging from computer vision to medical imaging. However, these deep learning techniques have not been extensively studied in the context of volumetric atmospheric science and specifically cloud research.

We present a convolutional neural network (CNN) whose architecture is inspired by the physical nature of clouds. Due to the lack of real-world datasets, we train the network in a supervised manner using a physics-based simulator that generates realistic volumetric cloud fields. In addition, we propose a hybrid approach, which combines the proposed neural network with an iterative physics-based optimization technique.

We demonstrate the recovery performance of our proposed method on cloud fields. At the single-cloud scale, our resulting quality is comparable to state-of-the-art methods, while the run time improves by orders of magnitude. In contrast to existing physics-based methods, our network offers scalability, which enables the reconstruction of wider cloud fields. Finally, we show that the hybrid approach leads to improved retrieval in a fast process.

How to cite: Sde-Chen, Y., Schechner, Y. Y., Holodovsky, V., and Eytan, E.: Deep Learning for Three-Dimensional Volumetric Recovery of Cloud Fields, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13334, https://doi.org/10.5194/egusphere-egu21-13334, 2021.

14:16–14:18
|
EGU21-14485
|
ECS
Julius Polz et al.

We can observe a global decrease of well-maintained weather stations run by meteorological services and governmental institutes. At the same time, environmental sensor data are increasing through the use of opportunistic and remote sensing approaches. Overall, the trend for environmental sensor networks is strongly towards automated routines, especially for quality control (QC), to provide usable data in near real time. In a common QC scenario, data are flagged manually using expert knowledge and visual inspection. To reduce this tedious process and to enable near-real-time data provision, machine learning (ML) algorithms exhibit high potential, as they can be designed to imitate the experts' actions.

Here we address three common challenges when applying ML for QC: 1) robustness to missing values in the input data; 2) availability of training data, i.e. manual quality flags that mark erroneous data points; and 3) generalization of the model regarding non-stationary behavior of one experimental system or changes in the experimental setup when applied to a different study area. We approach the QC problem and the related issues both as a supervised and as an unsupervised learning problem, using deep neural networks on the one hand and dimensionality reduction combined with clustering algorithms on the other.
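The unsupervised branch can be illustrated with a minimal sketch: project the sensor records onto their leading principal components and flag records whose reconstruction error is anomalously large. The five-times-median threshold is an assumed heuristic for illustration, not the authors' method:

```python
import numpy as np

def qc_flags(X, n_components=2, factor=5.0):
    """Unsupervised QC sketch: PCA via SVD of the centred data matrix,
    then flag rows poorly explained by the leading components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # leading principal directions
    recon = Xc @ V @ V.T                    # low-rank reconstruction
    err = np.linalg.norm(Xc - recon, axis=1)
    return err > factor * np.median(err)    # heuristic outlier threshold
```

Unlike the supervised variant, nothing here needs manual flags, which is why such a scheme is easier to retrain after a change in the experimental setup.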

We compare the different ML algorithms on two time-series datasets to test their applicability across scales and domains. One dataset consists of signal levels of 4000 commercial microwave links distributed all over Germany that can be used to monitor precipitation. The second dataset contains time-series of soil moisture and temperature from 120 sensors deployed at a small-scale measurement plot at the TERENO site “Hohes Holz”.

First results show that supervised ML provides optimized QC performance for an experimental system not subject to change, at the cost of a laborious preparation of the training data. The unsupervised approach is also able to separate valid from erroneous data at reasonable accuracy. However, it provides the additional benefit that it does not require manual flags and can thus be retrained more easily in case the system is subject to significant changes.

In this presentation, we discuss the performance, advantages and drawbacks of the proposed ML routines to tackle the aforementioned challenges. Thus, we aim to provide a starting point for researchers in the promising field of ML application for automated QC of environmental sensor data.

How to cite: Polz, J., Schmidt, L., Glawion, L., Graf, M., Werner, C., Chwala, C., Mollenhauer, H., Rebmann, C., Kunstmann, H., and Bumberger, J.: Supervised and unsupervised machine-learning for automated quality control of environmental sensor data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14485, https://doi.org/10.5194/egusphere-egu21-14485, 2021.

14:18–14:20
|
EGU21-14819
|
ECS
Kevin Debeire et al.

14:20–15:00
Meet the authors in their breakout text chats

Fri, 30 Apr, 15:30–17:00

Chairpersons: Maike Sonnewald, Redouane Lguensat

15:30–15:40
|
EGU21-10507
|
ECS
|
solicited
|
Highlight
Maha Mdini et al.

In the era of modern science, scientists have developed numerical models to predict and understand weather and ocean phenomena based on fluid dynamics. While these models show high accuracy at kilometre scales, they require massive computing resources because of their computational complexity. In recent years, new approaches to solving these models based on machine learning have been put forward. The results suggest that it is possible to reduce the computational complexity by using Neural Networks (NNs) instead of classical numerical simulations. In this project, we aim to shed light on different ways of accelerating physical models using NNs. We test two approaches, a Data-Driven Statistical Model (DDSM) and a Hybrid Physical-Statistical Model (HPSM), and compare their performance to the classical Process-Driven Physical Model (PDPM). The DDSM emulates the physical model by an NN. The HPSM, also known as super-resolution, uses a low-resolution version of the physical model and maps its outputs to the original high-resolution domain via an NN. To evaluate the two methods, we measured their accuracy and computation time. Our results from idealized experiments with a quasi-geostrophic model [SO3] show that the HPSM reduces the computation time by a factor of 3 and is capable of predicting the output of the physical model at high accuracy up to 9.25 days ahead. The DDSM reduces the computation time by a factor of 4 but can predict the physical model output with acceptable accuracy only within 2 days. These first results are promising and point to the possibility of bringing complex physical models into real-time systems with lower-cost computing resources in the future.

How to cite: Mdini, M., Miyoshi, T., and Otsuka, S.: Accelerating Climate Model Computation by Neural Networks: A Comparative Study, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10507, https://doi.org/10.5194/egusphere-egu21-10507, 2021.

15:40–15:42
|
EGU21-590
|
ECS
Yueling Ma et al.

Near real-time groundwater table depth measurements are scarce over Europe, leading to challenges in monitoring groundwater resources at the continental scale. In this study, we leveraged knowledge learned from simulation results by Long Short-Term Memory (LSTM) networks to estimate monthly groundwater table depth anomaly (wtda) data over Europe. The LSTM networks were trained, validated, and tested at individual pixels on anomaly data derived from daily integrated hydrologic simulation results over Europe from 1996 to 2016, with a spatial resolution of 0.11° (Furusho-Percot et al., 2019), to predict monthly wtda based on monthly precipitation anomalies (pra) and soil moisture anomalies (θa). Without additional training, we directly fed the networks with averaged monthly pra and θa data from 1996 to 2016 obtained from commonly available observational datasets and reanalysis products, and compared the network outputs with available borehole in situ measured wtda. The LSTM network estimates show good agreement with the in situ observations, resulting in Pearson correlation coefficients of regional averaged wtda data in seven PRUDENCE regions ranging from 42% to 76%, which are ~ 10% higher than the original simulation results except for the Iberian Peninsula. Our study demonstrates the potential of LSTM networks to transfer knowledge from simulation to reality for the estimation of wtda over Europe. The proposed method can be used to provide spatiotemporally continuous information at large spatial scales in case of sparse ground-based observations, which is common for groundwater table depth measurements. Moreover, the results highlight the advantage of combining physically-based models with machine learning techniques in data processing.
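The anomaly quantities used as network inputs and targets (pra, θa, wtda) are deviations from a monthly climatology. A minimal sketch of such an anomaly computation (not the authors' exact preprocessing) is:

```python
def monthly_anomalies(values, months):
    """Anomaly = value minus the climatological mean of its calendar month.

    `values` is a flat time series; `months` gives the calendar month (1-12)
    of each entry, so the climatology is built from all entries of that month.
    """
    clim = {}
    for v, m in zip(values, months):
        clim.setdefault(m, []).append(v)
    clim = {m: sum(vs) / len(vs) for m, vs in clim.items()}
    return [v - clim[m] for v, m in zip(values, months)]
```

Working in anomaly space removes the mean seasonal cycle, so the LSTM only has to learn the residual relationship between forcing and groundwater response.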

 

Reference:

Furusho-Percot, C., Goergen, K., Hartick, C., Kulkarni, K., Keune, J. and Kollet, S. (2019). Pan-European groundwater to atmosphere terrestrial systems climatology from a physically consistent simulation. Scientific Data, 6(1).

How to cite: Ma, Y., Montzka, C., Bayat, B., and Kollet, S.: Knowledge transfer from simulation to reality via Long Short-Term Memory networks:  Estimating groundwater table depth anomalies over Europe, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-590, https://doi.org/10.5194/egusphere-egu21-590, 2021.

15:42–15:44
|
EGU21-647
|
ECS
Blanka Balogh et al.

The development of atmospheric parameterizations based on neural networks is often hampered by numerical instability issues. Previous attempts to replicate these issues in a toy model have proven ineffective. We introduce a new toy model for atmospheric dynamics, which extends the Lorenz'63 model to higher dimensions. While neural networks trained on a single orbit can easily reproduce the dynamics of the Lorenz'63 model, they fail to reproduce the dynamics of the new toy model, leading to unstable trajectories. Instabilities become more frequent as the dimension of the new model increases, but occur even in very low dimensions. Training the neural network on a different learning sample, based on Latin hypercube sampling, solves the instability issue. Our results suggest that the design of the learning sample can significantly influence the stability of dynamical systems driven by neural networks.
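For reference, the classical Lorenz'63 system that the toy model extends can be integrated in a few lines; the parameter values below are the standard chaotic setting, and the higher-dimensional extension of the abstract is not reproduced here:

```python
def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz'63 system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6.0 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

def trajectory(state, dt=0.01, steps=1000):
    """Integrate the system forward, returning the full orbit."""
    out = [state]
    for _ in range(steps):
        state = rk4_step(lorenz63, state, dt)
        out.append(state)
    return out
```

Orbits generated this way stay on the bounded attractor; a neural emulator trained on a single such orbit can leave that region, which is exactly the instability the abstract investigates.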

How to cite: Balogh, B., Saint-Martin, D., and Ribes, A.: A toy model to investigate stability of AI-based dynamical systems, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-647, https://doi.org/10.5194/egusphere-egu21-647, 2021.

15:44–15:46
|
EGU21-783
|
ECS
Christopher Irrgang et al.

Space-borne observations of terrestrial water storage (TWS) are an essential ingredient for understanding the Earth's global water cycle, its susceptibility to climate change, and for risk assessments of ecosystems, agriculture, and water management. However, the complex distribution of water masses in rivers, lakes, or groundwater basins remains elusive in coarse-resolution gravimetry observations. We combine machine learning, numerical modeling, and satellite altimetry to build and train a downscaling neural network that recovers simulated TWS from synthetic space-borne gravity observations. The neural network is designed to adapt and validate its training progress by considering independent satellite altimetry records. We show that the neural network can accurately derive TWS anomalies in 2019 after being trained over the years 2003 to 2018. Specifically for validated regions in the Amazonas, we highlight that the neural network can outperform the numerical hydrology model used in the network training.

https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL089258

How to cite: Irrgang, C., Saynisch-Wagner, J., Dill, R., Boergens, E., and Thomas, M.: Self-validating deep learning of continental hydrology through satellite gravimetry and altimetry, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-783, https://doi.org/10.5194/egusphere-egu21-783, 2021.

15:46–15:48
|
EGU21-2401
|
ECS
Yann Haddad et al.

Ensemble predictions are essential to characterize the forecast uncertainty and the likelihood of an event to occur. Stochasticity in predictions comes from data and model uncertainty. In deep learning (DL), data uncertainty can be approached by training an ensemble of DL models on data subsets or by performing data augmentations (e.g., random or singular value decomposition (SVD) perturbations). Model uncertainty is typically addressed by training a DL model multiple times from different weight initializations (DeepEnsemble) or by training sub-networks by dropping weights (Dropout). Dropout is cheap but less effective, while DeepEnsemble is computationally expensive.

We propose instead to tackle model uncertainty with SWAG (Maddox et al., 2019), a method to learn stochastic weights, the sampling of which allows drawing hundreds of forecast realizations at a fraction of the cost required by DeepEnsemble. In the context of data-driven weather forecasting, we demonstrate that the SWAG ensemble has i) better deterministic skills than a single DL model trained in the usual way, and ii) approaches the deterministic and probabilistic skills of DeepEnsemble at a fraction of the cost. Finally, multiSWAG (SWAG applied on top of DeepEnsemble models) provides a trade-off between computational cost, model diversity, and performance.
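The core of diagonal SWAG is cheap: maintain running first and second moments of the weight vector along the training trajectory, then sample weights from a diagonal Gaussian. A simplified sketch (omitting SWAG's low-rank covariance term, and with invented class names):

```python
import random

class SwagDiagonal:
    """Minimal diagonal-SWAG sketch: accumulate moments of the weight vector
    at training snapshots, then sample ensemble members from N(mean, diag var)."""

    def __init__(self, n_weights):
        self.n = 0
        self.mean = [0.0] * n_weights
        self.sq_mean = [0.0] * n_weights

    def collect(self, weights):
        """Update running first and second moments with one weight snapshot."""
        self.n += 1
        for i, w in enumerate(weights):
            self.mean[i] += (w - self.mean[i]) / self.n
            self.sq_mean[i] += (w * w - self.sq_mean[i]) / self.n

    def sample(self, rng):
        """Draw one weight realization from the diagonal Gaussian posterior."""
        return [
            m + rng.gauss(0.0, 1.0) * max(sq - m * m, 0.0) ** 0.5
            for m, sq in zip(self.mean, self.sq_mean)
        ]
```

Each call to `sample` yields one ensemble member, so hundreds of forecast realizations cost only hundreds of forward passes rather than hundreds of trainings.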

We believe that the method we present will become a common tool to generate large ensembles at a fraction of the current cost. Additionally, the possibility of sampling DL models allows the design of data-driven/emulated stochastic model components and sub-grid parameterizations.

Reference

Maddox W.J, Garipov T., Izmailov P., Vetrov D., Wilson A. G., 2019: A Simple Baseline for Bayesian Uncertainty in Deep Learning. arXiv:1902.02476

How to cite: Haddad, Y., Defferrard, M., and Ghiggi, G.: The SWAG solution for probabilistic predictions with a single neural network, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2401, https://doi.org/10.5194/egusphere-egu21-2401, 2021.

15:48–15:50
|
EGU21-2681
|
ECS
Michaël Defferrard et al.

Deep Learning (DL) has the potential to revolutionize numerical weather predictions (NWP) and climate simulations by improving model components and reducing computing time, which could then be used to increase the resolution or the number of simulations. Unfortunately, major progress has been hindered by difficulties in interfacing DL with conventional models because of i) programming language barriers, ii) difficulties in reaching stable online coupling with models, and iii) the inability to exploit the horizontal spatial information, as classical convolutional neural networks cannot be used on spherical unstructured grids.

We present a solution to perform spatial convolutions directly on the unstructured grids of NWP models. Our convolution and pooling operations work on any pixelization of the sphere (e.g., Gauss-Legendre, icosahedral, cubed-sphere) provided a mesh or the pixel’s locations. Moreover, our solution allows mixing data from different grids and scales linearly with the number of pixels, allowing it to ingest millions of inputs from 3D spherical fields.
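The key primitive is a convolution that only needs each pixel's neighbour list, not a regular grid. A scalar, single-filter sketch of such a mesh convolution (hypothetical, far simpler than the actual implementation) is:

```python
def graph_conv(values, neighbors, w_self, w_neigh):
    """One graph-convolution step on an arbitrary mesh: each pixel combines
    its own value with the mean of its neighbours' values.

    `neighbors[i]` lists the indices adjacent to pixel i, so the same code
    works on any pixelization of the sphere given its mesh connectivity.
    """
    out = []
    for i, v in enumerate(values):
        nb = neighbors[i]
        nb_mean = sum(values[j] for j in nb) / len(nb) if nb else 0.0
        out.append(w_self * v + w_neigh * nb_mean)
    return out
```

Because the operation touches each pixel and its local neighbourhood once, the cost scales linearly with the number of pixels, which is what makes ingesting millions of inputs from 3D spherical fields feasible.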

We show that a proper treatment of the spherical topology and geometry of the Earth (as opposed to a projection to the plane, cylinder, or cube) i) yields geometric constraints that provide generalization guarantees (i.e., the learned function does not depend on its localization on the Earth), and ii) induces prior biases that facilitate learning. We demonstrate that doing so improves prediction performance at no computational overhead for data-driven weather forecasting. We trained autoregressive ResUNets on five spherical samplings, covering those adopted by the major meteorological centers.

We believe that the proposed solution can find immediate use for post-processing (e.g., bias correction and downscaling), model error corrections, linear solvers pre-conditioning, model components emulation, sub-grid parameterizations, and many more applications. To that end, we provide open-source and easy-to-use code accompanied by tutorials.

How to cite: Defferrard, M., Feng, W., Bolón Brun, N., Lloréns Jover, I., and Ghiggi, G.: Deep Learning on the sphere for weather/climate applications, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2681, https://doi.org/10.5194/egusphere-egu21-2681, 2021.

15:50–15:52
|
EGU21-4007
|
ECS
Alban Farchi et al.

Recent developments in machine learning (ML) have demonstrated impressive skills in reproducing complex spatiotemporal processes. However, contrary to data assimilation (DA), the underlying assumption behind ML methods is that the system is fully observed and without noise, which is rarely the case in numerical weather prediction. In order to circumvent this issue, it is possible to embed the ML problem into a DA formalism characterised by a cost function similar to that of the weak-constraint 4D-Var (Bocquet et al., 2019; Bocquet et al., 2020). In practice ML and DA are combined to solve the problem: DA is used to estimate the state of the system while ML is used to estimate the full model. 

In realistic systems, the model dynamics can be very complex, and it may not be possible to reconstruct them from scratch. An alternative could be to learn the model error of an already existing model using the same approach combining DA and ML. In this presentation, we test the feasibility of this method using a quasi-geostrophic (QG) model. After a brief description of the QG model, we introduce a realistic model error to be learnt. We then assess the potential of ML methods to reconstruct this model error, first with perfect (full and noiseless) observations and then with sparse and noisy observations. We show in either case to what extent the trained ML models correct the mid-term forecasts. Finally, we show how the trained ML models can be used in a DA system and to what extent they correct the analysis.
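The structure of the approach, learning a correction from forecast-analysis pairs produced by DA, can be illustrated with a linear stand-in for the ML model (the presented work uses neural networks; names here are illustrative):

```python
import numpy as np

def fit_model_error(forecasts, analyses):
    """Fit a linear model-error correction dx = A x + b by least squares,
    using DA analyses as the training target for the forecast error."""
    X = np.hstack([forecasts, np.ones((len(forecasts), 1))])  # add bias column
    target = analyses - forecasts                             # model error
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef  # stacked [A^T; b]

def apply_correction(forecast, coef):
    """Correct one forecast state with the learned error model."""
    x = np.append(forecast, 1.0)
    return forecast + x @ coef
```

The trained correction can then be applied after each model step inside the DA cycle, mimicking how the learned network corrects forecasts and analyses.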

Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models, Nonlin. Processes Geophys., 26, 143–162, 2019

Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization, Foundations of Data Science, 2 (1), 55-80, 2020

Farchi, A., Laloyaux, P., Bonavita, M., and Bocquet, M.: Using machine learning to correct model error in data assimilation and forecast applications, arxiv:2010.12605, submitted. 

How to cite: Farchi, A., Laloyaux, P., Bonavita, M., and Bocquet, M.: Using machine learning to correct model error in data assimilation and forecast applications, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4007, https://doi.org/10.5194/egusphere-egu21-4007, 2021.

15:52–15:54
|
EGU21-7484
Lan Hoang

The water cycle connects many essential parts of the environment and is a key process supporting life on Earth. Amid climate change impacts and competing water consumption from a growing population, there is a need for better management of this scarce resource. Yet water management is complex. As a resource, water exists in various forms, from water droplets in the atmosphere to embodied water in consumer products. Its flows and existence transcend national and geographic borders; its management, however, is limited by boundaries. To date, machine learning has shown potential in applications across domains, from skill in game play to improving the efficiency and operation of real-life processes. The system-of-systems perspective has emerged in many fields as an attempt to capture the complexity arising from individual components. Within a system, the interactions and interdependencies across components can produce unintended consequences, and their effects are not explainable just from studying a component on its own. The concept intertwines with Complexity Science and points to Wicked Problems, solutions to which are difficult to find and achieve. Climate change itself has been recognised as a ‘Super Wicked’ problem, for which deadlines are approaching but no clear solutions exist. Yet there is often a lack of understanding of the interactions and dependencies, even from a physical modelling perspective. A comprehensive approach to capturing these interactions is through physical modelling of water processes, such as hydraulic and hydrological modelling. The structure and data pipelines of such an approach, nevertheless, are static and do not evolve unless reconfigured by model experts.

We propose that a form of machine learning, Deep Reinforcement Learning, can be used to better capture the complex whole-system interactions of components in the water cycle and assist in their management. This approach capitalises on the rapid advances of machine learning in environmental applications and differs from traditional optimisation techniques in that it provides distributed learning and consistent models for components that can evolve to connect and continuously adapt to the operating environment. This is key to capturing the changes brought about by climate change and the subsequent environmental and human change in response.

1. Reinforcement Learning for improving process modelling, producing a spectrum from fully physical models through hybrid physical-neural network models to full Deep Learning models that can mimic the natural processes of interest, such as streamflow or rainfall-runoff. An example case study could be a hydrological model of a river catchment and its upstream-downstream dam operation. The components in this case can be individual reservoir models, neural-network-based emulators, or differential equation models.

2. Reinforcement Learning for holistic modelling of physical processes in water management to capture the whole system. Since each component is modelled as a full or hybrid physical-neural network model, the components can be integrated to provide a whole-system approach. Within this, Reinforcement Learning can act as the constructor, or go beyond this to provide solutions for targeted problems.
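As a toy illustration of the reservoir example in point 1, a tabular Q-learning agent can learn a release policy for a discretised reservoir. The states, actions, rewards and parameter values below are invented for illustration only, not taken from the abstract:

```python
import random

def train_reservoir_policy(episodes=2000, levels=5, seed=0):
    """Tabular Q-learning sketch for a toy reservoir: the state is the
    discretised storage level, actions are hold (0) or release (1), and the
    reward favours useful releases while penalising a full (flood-risk) state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(levels)]  # q[state][action]
    alpha, gamma, eps = 0.2, 0.9, 0.2
    for _ in range(episodes):
        s = rng.randrange(levels)
        for _ in range(20):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            inflow = rng.choice((0, 1))                  # stochastic inflow
            s2 = min(levels - 1, max(0, s + inflow - a))
            reward = 1.0 if a == 1 and s > 0 else 0.0    # useful release
            if s2 == levels - 1:
                reward -= 2.0                            # flood-risk penalty
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

In a system-of-systems setting, each reservoir (or emulator) would expose such a state-action interface, and the learning agent coordinates them against a shared objective.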
How to cite: Hoang, L.: Reinforcement Learning for a system-of-systems approach in water management, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7484, https://doi.org/10.5194/egusphere-egu21-7484, 2021.

15:54–15:56
|
EGU21-8279
|
ECS
Christos Pylianidis et al.

In this work, we compare the performance of machine learning metamodels of different scales for the prediction of pasture grass nitrogen response rate, using a case study across different locations in New Zealand. We first used a range of soil, plant, and management parameters known to affect grass growth and/or nitrogen response. These parameters generated a complete factorial that enabled us to run virtual nitrogen response rate experiments, using the APSIM simulation model, in eight locations around the country. We included 40 years of weather data to capture the effect of weather variability on response rate. This created a large database with which to train machine learning models. We created local, regional, and nation-wide models using Random Forest and tested them on known and unknown locations. To evaluate the models, we first calculated the RMSE, MAE, and R2, and then determined whether the distributions of the predictions were statistically different using the Mann-Whitney U test. Finally, we explored the generalizability of the models using the error metrics and the results of the statistical test.
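The distribution-comparison step can be sketched directly: the Mann-Whitney U statistic with its large-sample normal approximation (no tie correction; the function name is ours, not from the study's code).

```python
import numpy as np
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Returns the U statistic for x and an approximate two-sided p-value.
    Assumes no ties (no tie correction is applied)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = combined.argsort().argsort() + 1.0   # rank 1 = smallest value
    r1 = ranks[:n1].sum()                        # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0                           # mean of U under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return u1, p
```

A small p-value indicates the two sets of predictions are unlikely to come from the same distribution, which is how the metamodels of different scales would be compared.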

How to cite: Pylianidis, C., Snow, V., Overweg, H., and Athanasiadis, I. N.: Comparing machine learning metamodels of different scale for pasture nitrogen response rate prediction, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8279, https://doi.org/10.5194/egusphere-egu21-8279, 2021.

15:56–15:58
|
EGU21-8746
|
ECS
Isaac Newton Buo et al.

The frequency of heatwave events has increased in recent decades because of global warming. Satellite-observed Land Surface Temperature (LST) is a widely used parameter for assessing heatwaves. It provides wide spatial coverage compared to surface air temperature measured at weather stations. However, LST quality is limited by cloud contamination. Because heatwaves have a limited temporal frame, a full, cloud-free complement of LST for that period is necessary. We explore gap filling of LST using other spatial features, such as land cover, elevation, and vegetation indices, in a machine learning approach. We use a seamless, open, and free daily vegetation index product, which is paramount to the success of our study. We create a Random Forest model that provides a ranking of features relevant for predicting LST. Our model is used to fill gaps in Moderate Resolution Imaging Spectroradiometer (MODIS) LST over three heatwave periods in different summers in Estonia. We compare the output of our model to an established spatiotemporal gap-filling algorithm and with in-situ measured temperature to validate the predictive capability of our model. Our findings validate machine learning as a suitable tool for filling gaps in satellite LST, one that is especially useful when short time frames are of interest. In addition, we acknowledge that while time is an important factor in predicting LST, additional information on vegetation can improve a model's predictions.
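A minimal sketch of feature-based gap filling follows, using a k-nearest-neighbour regressor for brevity rather than the Random Forest used in the study; the features and function name are illustrative.

```python
import numpy as np

def fill_gaps_knn(features, lst, k=5):
    """Fill NaN entries of `lst` with the mean LST of the k nearest
    clear-sky pixels in (standardised) feature space.
    `features` is (n_pixels, n_features), e.g. elevation and NDVI."""
    feats = np.asarray(features, float)
    lst = np.asarray(lst, float).copy()
    clear = ~np.isnan(lst)                       # cloud-free pixels
    # standardise features so no single one dominates the distance
    mu, sd = feats[clear].mean(0), feats[clear].std(0) + 1e-12
    z = (feats - mu) / sd
    for i in np.flatnonzero(~clear):             # each cloudy pixel
        d = np.linalg.norm(z[clear] - z[i], axis=1)
        nearest = np.argsort(d)[:k]
        lst[i] = lst[clear][nearest].mean()
    return lst
```

The same train-on-clear, predict-on-cloudy pattern carries over directly when the neighbour average is replaced by a Random Forest regressor.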

How to cite: Buo, I. N., Sagris, V., and Jaagus, J.: Application of machine learning as a gap-filling tool for satellite land surface temperature, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8746, https://doi.org/10.5194/egusphere-egu21-8746, 2021.

15:58–16:00
|
EGU21-12814
|
ECS
Eva van der Kooij et al.

Accurate short-term forecasts, also known as nowcasts, of heavy precipitation are desirable for creating early warning systems for extreme weather and its consequences, e.g. urban flooding. In this research, we explore the use of machine learning for short-term prediction of heavy rainfall showers in the Netherlands.

We assess the performance of a recurrent, convolutional neural network (TrajGRU) with lead times of 0 to 2 hours. The network is trained on a 13-year archive of radar images with 5-min temporal and 1-km spatial resolution from the precipitation radars of the Royal Netherlands Meteorological Institute (KNMI). We aim to train the model to predict the formation and dissipation of dynamic, heavy, localized rain events, a task for which traditional Lagrangian nowcasting methods still come up short.

We report on different ways to optimize predictive performance for heavy rainfall intensities through several experiments. The large dataset available provides many possible configurations for training. To focus on heavy rainfall intensities, we use different subsets of this dataset by varying the conditions for event selection and the ratio of light to heavy precipitation events in the training data set, and by changing the loss function used to train the model.

To assess the performance of the model, we compare our method to a current state-of-the-art Lagrangian nowcasting system from the pySTEPS library, S-PROG, a deterministic approximation of an ensemble mean forecast. The results of the experiments are used to discuss the pros and cons of machine-learning-based methods for precipitation nowcasting and possible ways to further increase performance.
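One way a loss function can be made to emphasise heavy rainfall, in the spirit of the experiments described above, is to weight each pixel by its observed intensity class. The thresholds and weights below are illustrative assumptions, not the configuration used in this work.

```python
import numpy as np

# Assumed rain-rate bins (mm/h) and per-bin weights: heavier observed
# rain contributes more to the loss, so training is not dominated by
# the vastly more frequent no-rain and light-rain pixels.
THRESHOLDS = np.array([0.5, 2.0, 5.0, 10.0, 30.0])
WEIGHTS = np.array([1.0, 2.0, 5.0, 10.0, 30.0, 50.0])

def weighted_mse(pred, target):
    """MSE where each pixel is weighted by the observed intensity class."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    w = WEIGHTS[np.searchsorted(THRESHOLDS, target, side='right')]
    return float(np.mean(w * (pred - target) ** 2))
```

With this weighting, a 1 mm/h error on a 30 mm/h pixel costs fifty times as much as the same error on a dry pixel, pushing the network to resolve heavy, localized events.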

How to cite: van der Kooij, E., Schleiss, M., Taormina, R., Fioranelli, F., Lugt, D., van Hoek, M., Leijnse, H., and Overeem, A.: Nowcasting heavy precipitation over the Netherlands using a 13-year radar archive: a machine learning approach, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12814, https://doi.org/10.5194/egusphere-egu21-12814, 2021.

16:00–16:02
|
EGU21-13409
|
ECS
|
Brian Groenke et al.

Permafrost thaw is considered one of the major climate feedback processes and is currently a significant source of uncertainty in predicting future climate states. Coverage of in-situ meteorological and land-surface observations is sparse throughout the Arctic, making it difficult to track the large-scale evolution of the Arctic surface and subsurface energy balance. Furthermore, permafrost thaw is a highly non-linear process with its own feedback mechanisms such as thermokarst and thermo-erosion. Land surface models, therefore, play an important role in our ability to understand how permafrost responds to the changing climate. There is also a need to quantify freeze-thaw cycling and the incomplete freezing of soil at depth (talik formation). One of the key difficulties in modeling the Arctic subsurface is the complexity of the thermal regime during phase change under freezing or thawing conditions. Modeling heat conduction with phase change accurately requires estimation of the soil freeze characteristic curve (SFCC) which governs the change in soil liquid water content with respect to temperature and depends on the soil physical characteristics (texture). In this work, we propose a method for replacing existing brute-force approximations of the SFCC in the CryoGrid 3 permafrost model with universal differential equations, i.e. differential equations that include one or more terms represented by a universal approximator (e.g. a neural network). The approximator is thus tasked with inferring a suitable SFCC from available soil temperature, moisture, and texture data. We also explore how remote sensing data might be used with universal approximators to extrapolate soil freezing characteristics where in-situ observations are not available.
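To illustrate the role of the SFCC, the sketch below uses a parametric sigmoid curve and shows how its derivative enters the apparent heat capacity of freezing soil; in the universal differential equation setting this parametric function would be replaced by a small neural network fitted to data. The constants and curve shape are illustrative, not calibrated values.

```python
import numpy as np

# Illustrative constants (not calibrated):
L_V = 3.34e8     # volumetric latent heat of fusion, J/m^3
C_VOL = 2.0e6    # volumetric heat capacity of the soil matrix, J/(m^3 K)

def sfcc(T, theta_sat=0.4, k=2.0):
    """Liquid water content as a smooth sigmoid function of temperature
    (degC): near theta_sat when warm, near zero when deeply frozen."""
    return theta_sat / (1.0 + np.exp(-k * T))

def apparent_heat_capacity(T, dT=1e-4):
    """C_app = C + L * d(theta)/dT, with the derivative taken by a
    central finite difference. The latent-heat term makes phase change
    implicit in the conduction equation."""
    dtheta_dT = (sfcc(T + dT) - sfcc(T - dT)) / (2.0 * dT)
    return C_VOL + L_V * dtheta_dT
```

The apparent heat capacity peaks sharply around 0 °C, which is exactly the stiff behaviour that makes the choice of SFCC so consequential for the subsurface thermal regime.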

How to cite: Groenke, B., Langer, M., Gallego, G., and Boike, J.: Learning Soil Freeze Characteristic Curves with Universal Differential Equations, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13409, https://doi.org/10.5194/egusphere-egu21-13409, 2021.

16:02–16:04
|
EGU21-13729
|
ECS
Jonas Pilot et al.

Estimating the probability of a wildfire occurring at a specific location on a given day comes with the challenge that it depends to a high degree not only on weather conditions and soil moisture, but also on the presence of an ignition source [1]. A commonly used index to assess wildfire risk is the Canadian Fire Weather Index [2], which, however, does not model the presence of an ignition source.

We develop a machine learning model which discriminates between (1) the probability of a wildfire occurring given an ignition source, and (2) the probability of an ignition source being present, and infers both. We first demonstrate the performance of our approach by estimating these probabilities on simulated data. With these simulations, we also assess the robustness of our model to machine learning-related challenges that arise with wildfire data, such as extreme class imbalance and label uncertainty. We then show the performance of our model trained on satellite-derived global wildfire occurrences between 2001 and 2017. The dataset FireTracks, which includes a comprehensive record of wildfire occurrences [3], is used as ground truth. Input features include weather data (ERA5 [4]) and population densities (GPW4 [5]). Finally, we compare wildfire risk ratings computed with the Canadian Fire Weather Index to the probabilities estimated by our model.
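The probability decomposition can be sketched with two logistic components whose product gives the overall fire probability; the weights and features here are placeholders, not the trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_fire(weather_x, ignition_x, w_fire, w_ign):
    """P(fire) = P(fire | ignition present) * P(ignition present).
    The first factor is driven by weather/fuel features, the second by
    ignition proxies such as population density."""
    p_given_ign = sigmoid(weather_x @ w_fire)   # fire danger given a spark
    p_ign = sigmoid(ignition_x @ w_ign)         # chance a spark occurs at all
    return p_given_ign * p_ign
```

Because the product is bounded by each factor, extreme fire weather alone never yields a high fire probability without a plausible ignition source, which is the behaviour the decomposition is designed to capture.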

References
[1] K. Rao et al., SAR-enhanced mapping of live fuel moisture content, Remote Sensing of Environment, 2020. 
[2] R. D. Field et al., Development of a Global Fire Weather Database. Natural Hazards and Earth System Sciences, 2015. 
[3] D. Traxl, FireTracks Scientific Dataset, 2021. (https://github.com/dominiktraxl/firetracks) 
[4] H. Hersbach et al., ERA5 hourly data on single levels from 1979 to present, Copernicus Climate Change Service (C3S) Climate Data Store (CDS), 2018. 
[5] Center for International Earth Science Information Network - CIESIN - Columbia University, Gridded Population of the World, Version 4 (GPWv4): Population Density, NASA Socioeconomic Data and Applications Center (SEDAC), 2016.

How to cite: Pilot, J., Bui, T. B., and Boers, N.: Towards machine learning for the estimation of wildfire risk from weather and sociological data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13729, https://doi.org/10.5194/egusphere-egu21-13729, 2021.

16:04–16:06
|
EGU21-14183
|
ECS
Abhilash Singh and Kumar Gaurav

Soil surface attributes (mainly surface roughness and soil moisture) play a critical role in land-atmosphere interaction and have several applications in agriculture, hydrology, meteorology, and climate change studies. This study explores the potential of different machine learning algorithms (Support Vector Regression (SVR), Gaussian Process Regression (GPR), Generalised Regression Neural Network (GRNN), Binary Decision Tree (BDT), Bagging Ensemble Learning, and Boosting Ensemble Learning) to estimate surface soil roughness from Synthetic Aperture Radar (SAR) and optical satellite images in an alluvial megafan of the Kosi River in northern India. In a field campaign during 10-21 December 2019, we measured the surface soil roughness at 78 different locations using a mechanical pin-meter. The average value of the in-situ surface roughness is 1.8 cm. Further, at these locations, we extracted multiple features (backscattering coefficients, incidence angle, Normalised Difference Vegetation Index, and surface elevation) from Sentinel-1 A/B, LANDSAT-8, and SRTM data. We then trained and evaluated (with a 60:40 train-test split) the performance of all the regression-based machine learning techniques.

We found that the SVR method performs exceptionally well compared with the other methods (R = 0.74, RMSE = 0.16 cm, and MSE = 0.025 cm2). To ensure a fair selection among machine learning techniques, we calculated some additional criteria: Akaike's Information Criterion (AIC), corrected AIC (AICc), and the Bayesian Information Criterion (BIC). On comparison, we observed that SVR exhibits the lowest AIC, corrected AIC, and BIC values among all methods, indicating the best goodness of fit.
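For reference, these selection criteria can be computed from model residuals under a Gaussian likelihood (a generic sketch, not the study's code).

```python
import numpy as np

def information_criteria(residuals, n_params):
    """Gaussian-likelihood AIC, corrected AIC (AICc), and BIC from
    model residuals and the number of fitted parameters."""
    r = np.asarray(residuals, float)
    n = len(r)
    rss = np.sum(r ** 2)                       # residual sum of squares
    k = n_params
    aic = n * np.log(rss / n) + 2 * k          # penalises parameter count
    aicc = aic + 2 * k * (k + 1) / (n - k - 1) # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)  # stronger penalty for large n
    return aic, aicc, bic
```

Lower values indicate a better fit-complexity trade-off, which is how the criteria break ties between models with similar RMSE.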

How to cite: Singh, A. and Gaurav, K.: Machine learning to estimate surface roughness from satellite images, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14183, https://doi.org/10.5194/egusphere-egu21-14183, 2021.

16:06–16:08
|
EGU21-14386
|
ECS
Ahmet Batuhan Polat et al.

Obtaining high accuracy in land cover classification is a non-trivial problem in geosciences for monitoring urban and rural areas. In this study, different classification algorithms were tested with different types of data, and the effects of seasonal changes on these algorithms, as well as the suitability of the data used, were investigated. In addition, the effect of increasing the number of training samples on classification accuracy is demonstrated. Sentinel-1 Synthetic Aperture Radar (SAR) images and Sentinel-2 multispectral optical images were used as datasets. An object-based approach was used for the classification of various fused image combinations. The classification algorithms Support Vector Machines (SVM), Random Forest (RF), and K-Nearest Neighbour (kNN) were used for this process. In addition, the Normalized Difference Vegetation Index (NDVI) was examined separately to determine its exact contribution to classification accuracy. The overall accuracies were then compared by classifying the fused data generated by combining optical and SAR images. Increasing the number of training samples was found to improve classification accuracy. Moreover, object-based classification of single SAR imagery produced the lowest classification accuracy among the dataset combinations used in this study. Finally, NDVI data were shown not to increase classification accuracy in the winter season, as the trees shed their leaves due to climate conditions.
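The NDVI feature and the SAR-optical fusion step can be sketched as per-pixel feature stacking; the band and function names are illustrative.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance; high for healthy vegetation, near zero for bare soil."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def stack_features(sar_vv, sar_vh, nir, red):
    """Per-pixel feature vectors fusing SAR backscatter (VV, VH) with
    optical bands and the derived NDVI, ready for a classifier."""
    return np.stack([sar_vv, sar_vh, nir, red, ndvi(nir, red)], axis=-1)
```

Feeding these stacked vectors to SVM, RF, or kNN is the usual pattern for fused-image classification; dropping the NDVI channel is how its seasonal contribution would be isolated.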

How to cite: Polat, A. B., Akcay, O., and Balik Sanli, F.: Analysing Temporal Effects on Classification of SAR and Optical Images, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14386, https://doi.org/10.5194/egusphere-egu21-14386, 2021.

16:08–16:10
|
EGU21-14548
|
ECS
Mirela Beloiu et al.

Recent advances in deep learning techniques for object detection and the availability of high-resolution images facilitate the analysis of both temporal and spatial vegetation patterns in remote areas. High-resolution satellite imagery has been used successfully to detect trees in small areas with homogeneous rather than heterogeneous forests, in which single tree species have a strong contrast compared to their neighbors and the landscape. However, no research to date has detected trees at the treeline in the remote and complex heterogeneous landscape of Greece using deep learning methods. We integrated high-resolution aerial images, climate data, and topographical characteristics to study the treeline dynamics over 70 years in the Samaria National Park on the Mediterranean island of Crete, Greece. We combined mapping techniques with deep learning approaches to detect and analyze spatio-temporal dynamics in treeline position and tree density. We used visual image interpretation to detect single trees on high-resolution aerial imagery from 1945, 2008, and 2015. Using the RGB aerial images from 2008 and 2015, we tested a Convolutional Neural Network (CNN) object detection approach (SSD) and a CNN-based segmentation technique (U-Net). Based on the mapping and deep learning approaches, we did not detect a shift in treeline elevation over the last 70 years, despite warming, although tree density has increased. However, we show that the CNN approach accurately detects and maps tree position and density at the treeline. We also reveal that treeline elevation on Crete varies with topography, decreasing from the southern to the northern study sites. We explain these differences between study sites by the long-term interaction between topographical characteristics and meteorological factors. The study highlights the feasibility of using deep learning and high-resolution imagery as a promising technique for monitoring forests in remote areas.

How to cite: Beloiu, M., Poursanidis, D., Hoffmann, S., Chrysoulakis, N., and Beierkuhnlein, C.: Using high‐resolution aerial imagery and deep learning to detect tree spatio-temporal dynamics at the treeline, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14548, https://doi.org/10.5194/egusphere-egu21-14548, 2021.

16:10–16:12
|
EGU21-15564
|
ECS
|
Sergey Kharchenko

We attempted the automatic creation of a geomorphological map of the Kola Peninsula [after Grave M.K. et al., 1971], following the morphogenetic principle of the legend, using the random forest classification technique. Only geomorphometric variables were used as input data: the basic variables (elevation, slope angle, curvatures, etc.) and relatively rare variables, including spectral terrain variables obtained by decomposing the digital elevation model into 2D Fourier series. With training data covering only 1.3% of the study area, labelled with one of thirteen probable landform types, we reconstructed the geomorphological boundaries and created the geomorphological map automatically. The accuracy of the resulting map was 81% (the share of the area where the classification result matched the expertly defined landform type). This is more than a tenfold improvement over the "zero accuracy" of random guessing. In general, there is also a large visual similarity between the expertly created geomorphological map and the one created automatically from the known typological affiliation of landforms in a small part of the territory. The misrecognition of the genetic landform type in 19% of cases is not so much a problem as a good opportunity to improve the predictive power of the model through a targeted search for representative morphometric variables. We emphasize that this accuracy is achieved using only variables extracted entirely from the DEM and calculated fully automatically. The use of data from tectonic and surficial geology maps, maps of Quaternary deposits, and other sources could significantly improve the classification accuracy and bring it to the level where the model can be used confidently in practical work.
As a by-product of landform classification by the random forest method, the characteristics most representative for predicting the genetic landform types of the Kola Peninsula have been identified. Almost all of them turned out to be relatively rarely used focal geomorphometric variables. The more standard and familiar parameters, such as slope, aspect, and curvatures, were not notably representative. The predictive power of the model was considerably increased by using spectral characteristics of the relief (parameters of the periodicity of the elevation field, calculated by a sliding-window two-dimensional discrete Fourier transform). We believe these results show that the possibilities of morphometric indicators alone in general geomorphological mapping are underestimated.
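The spectral terrain variables described above can be sketched as sliding-window 2D Fourier features of a DEM: per window, the dominant terrain wavelength and its spectral power. The window size and the two outputs are illustrative choices, not the study's exact configuration.

```python
import numpy as np

def spectral_terrain_features(dem, win=16):
    """For each non-overlapping win x win window of a DEM, return the
    dominant terrain wavelength (in pixels) and its spectral power,
    estimated from the 2D FFT power spectrum."""
    rows = []
    for i in range(0, dem.shape[0] - win + 1, win):
        for j in range(0, dem.shape[1] - win + 1, win):
            patch = dem[i:i + win, j:j + win]
            spec = np.abs(np.fft.fft2(patch - patch.mean())) ** 2
            spec[0, 0] = 0.0                       # drop the DC component
            fy, fx = np.unravel_index(spec.argmax(), spec.shape)
            # map FFT indices to signed frequencies in cycles per pixel
            fy = fy if fy <= win // 2 else fy - win
            fx = fx if fx <= win // 2 else fx - win
            freq = np.hypot(fy, fx) / win
            rows.append((1.0 / freq if freq > 0 else np.inf, spec.max()))
    return np.array(rows)
```

Maps of such per-window wavelengths and powers are the kind of focal spectral variable that proved most representative in the classification.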

The study was supported by the Russian Science Foundation (project No. 19-77-10036).

How to cite: Kharchenko, S.: Automated recognition of the landforms origin for the Kola Peninsula based on morphometric variables, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15564, https://doi.org/10.5194/egusphere-egu21-15564, 2021.

16:12–16:14
|
EGU21-15964
|
ECS
Sonal Bindal

In recent years, predictive modelling techniques have been widely used for modelling groundwater arsenic contamination. Determining the accuracy, performance, and suitability of different algorithms, such as univariate regression (UR), fuzzy models, adaptive fuzzy regression (AFR), logistic regression (LR), adaptive neuro-fuzzy inference systems (ANFIS), and hybrid random forest (HRF) models, still remains a challenging task, since the spatial data are available at different scales with different cell sizes. In the current study, we have tried to optimize the spatial resolution for the best model performance by testing the various predictive algorithms at several resolutions. Model performance was evaluated based on the coefficient of determination (R2), mean absolute percentage error (MAPE), and root mean square error (RMSE). The outcomes of the study indicate that a 100 m × 100 m spatial resolution gives the best performance in most of the models. The results also show that the HRF model performs better than the commonly used ANFIS and LR models.
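The three evaluation metrics can be computed directly from predictions (a generic sketch, not the study's code).

```python
import numpy as np

def regression_scores(y_true, y_pred):
    """Coefficient of determination (R^2), mean absolute percentage
    error (in %), and root mean square error. Assumes y_true has no zeros
    (MAPE is undefined there)."""
    y, p = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y - p) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mape = 100.0 * np.mean(np.abs((y - p) / y))
    rmse = np.sqrt(np.mean((y - p) ** 2))
    return r2, mape, rmse
```

Running the candidate models at each resampled resolution and comparing these scores is the selection procedure the abstract describes.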

How to cite: Bindal, S.: Mapping arsenic vulnerability at different spatial scales using statistical and machine learning models , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15964, https://doi.org/10.5194/egusphere-egu21-15964, 2021.

16:14–16:16
|
EGU21-16355
|
ECS
Ruslan Chernyshev et al.

This work is devoted to the development of neural networks for the identification of partial differential equations (PDEs) solved in the land surface scheme of the INM RAS Earth System Model (ESM). Atmospheric and climate models are among the research applications most demanding of supercomputing resources. The spatial resolution and the multitude of physical parameterizations used in ESMs continuously increase. Most parameters are still poorly constrained, and many of them cannot be measured directly. To reduce model calibration time, using neural networks looks like a promising approach. Neural networks are already in wide use for satellite imagery (Lee et al., 2016; Krinitskiy et al., 2018) and for calibrating the parameters of land surface models (Sawada, 2019). Neural networks have demonstrated high efficiency in solving conventional problems of mathematical physics (Aarts and van der Veer, 2001; Raissi et al., 2017).

We develop neural networks for optimizing the parameters of a nonlinear soil heat and moisture transport equation set. For development we used Python 3 based programming tools implemented on GPUs and on the Ascend platform provided by Huawei. Because we use a hybrid approach combining a neural network with classical thermodynamic equations, the major challenge was finding a way to correctly calculate the backpropagation gradient of the error function: the model is trained and validated on the same temperature data, while the model output is a heat equation parameter, which is typically not known. The neural network model was trained at runtime against reference thermodynamic model calculations with prescribed parameters; each subsequent thermodynamic model step was used for fitting the neural network until it reached the loss function tolerance.
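The parameter-identification setup can be illustrated without a neural network: run a reference heat-conduction model with a prescribed diffusivity, then recover that diffusivity by matching candidate model runs against the reference temperatures. In the hybrid scheme described above, a trained network replaces this brute-force scan. All numbers here are illustrative.

```python
import numpy as np

def solve_heat(alpha, T0, n_steps=200, dx=0.05, dt=0.01):
    """Explicit finite-difference solver for dT/dt = alpha * d2T/dx2
    with fixed (Dirichlet) boundary temperatures."""
    T = np.asarray(T0, float).copy()
    r = alpha * dt / dx ** 2          # stability requires r <= 0.5
    for _ in range(n_steps):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

# Reference run with the "true" (normally unknown) diffusivity.
T0 = np.zeros(21)
T0[0], T0[-1] = 10.0, 10.0            # warm boundaries, cold interior
alpha_true = 0.05
reference = solve_heat(alpha_true, T0)

# Identify the parameter by minimising misfit to the reference output.
candidates = np.linspace(0.01, 0.1, 19)
errors = [np.mean((solve_heat(a, T0) - reference) ** 2) for a in candidates]
alpha_hat = candidates[int(np.argmin(errors))]
```

Only the temperature field is observed, yet the misfit minimum pins down the equation parameter, which is the inverse problem the neural network is trained to solve efficiently.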

Literature:

1.     Aarts, L.P., van der Veer, P. “Neural Network Method for Solving Partial Differential Equations”. Neural Processing Letters 14, 261–271 (2001). https://doi.org/10.1023/A:1012784129883

2.     Raissi, M., P. Perdikaris and G. Karniadakis. “Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations.” ArXiv abs/1711.10561 (2017): n. pag.

3.     Lee, S.J., Ahn, MH. & Lee, Y. Application of an artificial neural network for a direct estimation of atmospheric instability from a next-generation imager. Adv. Atmos. Sci. 33, 221–232 (2016). https://doi.org/10.1007/s00376-015-5084-9

4.     Krinitskiy M, Verezemskaya P, Grashchenkov K, Tilinina N, Gulev S, Lazzara M. Deep Convolutional Neural Networks Capabilities for Binary Classification of Polar Mesocyclones in Satellite Mosaics. Atmosphere. 2018; 9(11):426.

5.     Sawada, Y.. “Machine learning accelerates parameter optimization and uncertainty assessment of a land surface model.” ArXiv abs/1909.04196 (2019): n. pag.

6.     Shufen Pan et al. Evaluation of global terrestrial evapotranspiration using state-of-the-art approaches in remote sensing, machine learning and land surface modeling. Hydrol. Earth Syst. Sci., 24, 1485–1509 (2020)

7.     Chaney, Nathaniel & Herman, Jonathan & Ek, M. & Wood, Eric. (2016). Deriving Global Parameter Estimates for the Noah Land Surface Model using FLUXNET and Machine Learning: Improving Noah LSM Parameters. Journal of Geophysical Research: Atmospheres. 121. 10.1002/2016JD024821.

How to cite: Chernyshev, R., Krinitskiy, M., and Stepanenko, V.: Applying neural network for identification of land surface model parameters, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16355, https://doi.org/10.5194/egusphere-egu21-16355, 2021.

16:16–17:00
Meet the authors in their breakout text chats
