
Mathematical methods for the analysis of potential field data and geodetic time series

The analysis of the Earth's gravity and magnetic fields is becoming increasingly important in the geosciences. Modern satellite missions continue to provide data with ever-improving accuracy and nearly global, time-dependent coverage. The gravitational field plays an important role in climate research, as a record of and reference for the observation of mass transport. The study of the Earth's magnetic field and its temporal variations is yielding new insights into the behavior of its internal and external sources. Both gravity and magnetic data also constitute primary sources of information for the global characterization of other planets. Hence, there continues to be a need to develop new methods of analysis, at the global and local scales, and especially at their interface. For over two decades, methods that combine global with local sensitivity, often in a multiresolution setting, have been developed: these include wavelets, radial basis functions, Slepian functions, splines, and spherical cap harmonics. One purpose of this session is to provide a forum for the exchange of research projects, whether related to forward or inverse modeling, and to theoretical, computational, or observational studies.
Besides monitoring the variations of the gravity and magnetic fields, space geodetic techniques deliver time series describing changes of the surface geometry, sea-level variations, or fluctuations in the Earth's orientation. However, geodetic observation systems usually measure an integral effect, so analysis methods have to be applied to the geodetic time series for a better understanding of the relations between and within the components of the Earth system. The combination of data from various space geodetic and remote sensing techniques may allow the integral measurements to be separated into the individual contributions of the Earth system components. Presentations on time-frequency analysis, on the detection of features of the temporal or spatial variability of signals in geodetic data and in geophysical models, and on signal separation techniques, e.g. EOF, are highly appreciated. We further solicit papers on prediction techniques, e.g. least-squares, neural networks, Kalman filtering, or uni- and multivariate autoregressive methods, for forecasting Earth Orientation Parameters, which are needed for the real-time transformation between celestial and terrestrial reference frames.
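As a minimal illustration of the last point, the sketch below fits a univariate autoregressive model to a time series by least squares and iterates it forward to produce a forecast. The function names and the toy setup are our own illustration, not any of the cited techniques' reference implementations:

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares fit of AR(p) coefficients a_k in
    x[t] ~ a_0 x[t-1] + ... + a_{p-1} x[t-p]."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def ar_forecast(x, a, steps):
    """Iterate the fitted AR model forward `steps` epochs."""
    hist = list(x[-len(a):][::-1])  # most recent value first
    out = []
    for _ in range(steps):
        nxt = float(np.dot(a, hist))
        out.append(nxt)
        hist = [nxt] + hist[:-1]
    return np.array(out)
```

The same pattern extends to the multivariate case by stacking several series (e.g. all Earth Orientation Parameters) into the regression.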

Co-organized by EMRP2
Convener: Volker Michel | Co-conveners: Katrin Bentel, Christian Gerhards, Wieslaw Kosek, Michael Schmidt



Wed, 28 Apr, 13:30–14:15

Chairpersons: Michael Schmidt, Christian Gerhards, Wieslaw Kosek

Naomi Schneider and Volker Michel

In light of significant challenges such as climate change, the visualization of the gravitational potential remains a priority in geodesy. A decade ago, the Geomathematics Group Siegen proposed alternative representations for such problems.

The respective methods are based on matching pursuits: hence, they build a representation in a so-called best basis. However, they include additional aspects which occur, for instance, when the downward continuation of the gravitational potential is approximated.

In this talk, we summarize the different development stages from 2011 up to now, which started with a basic implementation and then included aspects of orthogonality, weakness, and dictionary learning, respectively. Further, we give an outlook on our next steps with these methods. For the current status quo, we show numerical results for the downward continuation of the gravitational potential.
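For readers unfamiliar with the technique, a matching pursuit greedily selects, at each iteration, the dictionary element ("atom") most correlated with the current data residual and subtracts its contribution. A minimal NumPy sketch of the basic idea follows; this is our own illustration, not the Geomathematics Group's implementation, which additionally handles ill-posed operators and regularization:

```python
import numpy as np

def matching_pursuit(y, D, n_iter):
    """Greedy matching pursuit: approximate y by scaled columns of D.

    D: (n, m) dictionary whose columns are unit-norm atoms."""
    r = y.astype(float).copy()          # data residual
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                  # correlation of atoms with residual
        j = np.argmax(np.abs(corr))     # best-matching atom
        coef[j] += corr[j]              # update its coefficient
        r -= corr[j] * D[:, j]          # subtract its contribution
    return coef, r
```

The selected atoms form the "best basis" in which the representation is built; orthogonal variants re-project the residual at each step.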

How to cite: Schneider, N. and Michel, V.: 10 years of matching pursuits for inverse problems: what’s been done and what’s to come, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-267, https://doi.org/10.5194/egusphere-egu21-267, 2021.

Frederik J. Simons and Georg S. Reuber

Conventionally, exploration in geology involves distinct research groups, each looking at a different observable and performing separate inversions for subsurface structure. In this work we discuss the advantages and performance of a combined inversion coupling the gravity anomaly, the acoustic wavefield, and surface velocities as observables in one single framework. The gravity potential, which varies across the Earth, is sensitive to density anomalies at depth and can be obtained by solving a Poisson-type equation. Its inversion is ill-posed, since its solutions are non-unique in the depth and density of the inverted anomaly. We also consider the surface displacement caused by a compressional wave resulting from an earthquake at depth. This inversion yields a wavespeed reconstruction but lacks interpretability, i.e. whether the anomaly is thermal or chemical in origin. The surface velocity, caused by the motion of highly viscous rocks in the subsurface, is the third observable. It can be modelled by the (nonlinear) Stokes equations, which account for the density and viscosity of a subsurface anomaly.

All three equations and their adjoints are implemented in one single Python framework using the finite element library FEniCS. To investigate the shape of the cost function, a grid search in the parameter space is presented for three geological settings. Additionally, the performance of gradient-based inversions for each observable, separately and in combination, is presented. We further investigate the performance of a shape-optimizing inversion method, assuming the material parameters are known while the shape is unknown.
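The grid-search step can be sketched schematically: evaluate the misfit on a grid of two hypothetical material parameters (say, density and viscosity of an anomaly) and locate the minimum. This toy version, with names of our own choosing, omits the PDE solves that dominate the actual cost:

```python
import numpy as np

def grid_search(misfit, grid_a, grid_b):
    """Evaluate misfit(a, b) over a 2-D parameter grid and
    return the grid point with the smallest value."""
    J = np.array([[misfit(a, b) for b in grid_b] for a in grid_a])
    i, j = np.unravel_index(np.argmin(J), J.shape)
    return grid_a[i], grid_b[j], J
```

Plotting the returned misfit surface J reveals valleys and trade-offs between parameters, which is exactly what such a search is used for here.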

How to cite: Simons, F. J. and Reuber, G. S.: Investigation of a Coupled Deterministic Inversion for the Interior of the Earth by using Gravity-Anomaly, Acoustic-Wavefield and Geodetic Velocity measurements, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6648, https://doi.org/10.5194/egusphere-egu21-6648, 2021.

Leonardo Uieda et al.

Gravimetry is a routine part of the geophysicist's toolset, historically used in geophysics following the geodetic definitions of gravity anomalies and their related “reductions”. Several authors have shown that the geodetic concept of a gravity anomaly does not align with the goals of gravimetry in geophysics (the investigation of anomalous density distributions). Much of this confusion likely stems from the lack of widely available tools for performing the corrections needed to arrive at a geophysically meaningful gravity disturbance. For example, free-air corrections are completely unnecessary, since analytical expressions for theoretical gravity at any point have existed for over a decade. Since these computations are not easily done in a spreadsheet or a short script, modern tools for processing and modelling gravity data for geophysics are needed. These tools must be trustworthy (i.e., extensively tested) and designed with software development and geophysical best practices in mind.

We present the Python libraries Harmonica and Boule, which are part of the Fatiando a Terra project (https://www.fatiando.org). Both tools are open-source under the permissive BSD license and are developed in the open by a community of geoscientists and programmers.

Harmonica provides tools for processing, forward modelling, and inversion of gravity and magnetic data. The first release of Harmonica was focused on implementing methods for processing and interpolation with the equivalent source technique, as well as forward modelling with right-rectangular prisms, point sources, and tesseroids. Current work is directed towards implementing a processing pipeline for gravity data, including topographic corrections in Cartesian and spherical coordinates, atmospheric corrections, and more. The software is still in early stages of development and design and would benefit greatly from community involvement and feedback.

Boule implements reference ellipsoids (including oblate ellipsoids, spheres, and soon triaxial ellipsoids), conversions between ellipsoidal and geocentric spherical coordinates, and normal gravity calculations using analytical solutions for gravity fields at any point outside of the ellipsoid. It includes ellipsoids for the Earth as well as other planetary bodies in the solar system, like Mars, the Moon, Venus, and Mercury. This enables the calculation of gravity disturbances for Earth and planetary data without the need for free-air corrections. Boule was created out of the shared needs of Harmonica, SHTools (https://github.com/SHTOOLS), and pygeoid (https://github.com/ioshchepkov/pygeoid) and is developed with input from developers of these projects.
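The normal gravity computations that make free-air corrections unnecessary reduce, on the surface of the ellipsoid, to the classical Somigliana formula. A plain-NumPy sketch with commonly tabulated WGS84 constants follows; Boule itself evaluates closed-form expressions at arbitrary heights above the ellipsoid, so this illustration covers only the surface case:

```python
import numpy as np

# WGS84 constants (values as commonly tabulated)
GAMMA_E = 9.7803253359     # normal gravity at the equator (m/s^2)
GAMMA_P = 9.8321849378     # normal gravity at the poles (m/s^2)
E2 = 0.00669437999014      # first eccentricity squared
B_OVER_A = 0.996647189335  # semiminor/semimajor axis ratio

def somigliana(latitude_deg):
    """Normal gravity on the WGS84 ellipsoid surface, in m/s^2."""
    s2 = np.sin(np.radians(latitude_deg)) ** 2
    k = (B_OVER_A * GAMMA_P - GAMMA_E) / GAMMA_E  # Somigliana constant
    return GAMMA_E * (1.0 + k * s2) / np.sqrt(1.0 - E2 * s2)
```

A gravity disturbance is then simply the observed gravity minus the normal gravity evaluated at the same point, with no free-air step in between.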

We welcome participation from the wider geophysical community, irrespective of programming skill level and experience, and are actively searching for interested developers and users to get involved in shaping the future of these projects.

How to cite: Uieda, L., Soler, S. R., Pesce, A., Perozzi, L., and Wieczorek, M. A.: Harmonica and Boule: Modern Python tools for geophysical gravimetry, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8291, https://doi.org/10.5194/egusphere-egu21-8291, 2021.

Lucia Seoane et al.

The AGOSTA project, initially proposed by our team and recently funded by CNES TOSCA, consists of developing efficient approaches to restore the seafloor shape (or bathymetry), as well as lithospheric parameters such as the crustal and elastic thicknesses, by combining different types of observations, including gravity gradient data. Because it is based on the second derivatives of the potential with respect to the space coordinates, gravity gradiometry provides more information on the Earth system at short wavelengths. The GOCE mission has measured the gravity gradient components of the static field globally, giving the possibility to detect more details of the structure of the lithosphere at spatial resolutions below 200 km. We propose to analyze these satellite-measured gravity tensor components to map the undersea relief more precisely than with the geoid or vertical gravity previously considered for this purpose. Inversion of vertical gravity gradient data derived from radar altimetry also offers the possibility to reach greater resolutions (at least 50 km) than that of the GOCE mission. The seafloor topography estimates are tested in areas well covered by independent data for validation, such as around the Great Meteor guyot [29°57′10.6″N, 28°35′31.3″W] and the New England seamount chain [37°24′N, 60°00′W] in the Atlantic Ocean, as well as the Acapulco seamount [13°36′15.4″N, 120°10′30.4″W] in the Central Pacific.

How to cite: Seoane, L., Ramillien, G., Darrozes, J., Frappart, F., Rouxel, D., Schmitt, T., and Salaun, C.: Seafloor topography variations mapped by regional gravity tensor analysis , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12570, https://doi.org/10.5194/egusphere-egu21-12570, 2021.

Youchao Xie et al.

We propose an alternative method to determine the height of Mount Everest (HME) based on the shallow layer method (SLM), which was put forward by Shen (2006). We use precise external global Earth gravity field models (the EGM2008 and EIGEN-6C4 models) as input information, and the digital topographic model DTM2006.0 and crust models (the CRUST2.0 and CRUST1.0 models) to construct the shallow layer model. There are four combined strategies: (1) EGM2008 and CRUST1.0, (2) EGM2008 and CRUST2.0, (3) EIGEN-6C4 and CRUST1.0, and (4) EIGEN-6C4 and CRUST2.0. We calculate the HME by two approaches: in the first approach, the HME is directly calculated by combining the geoid undulation (N) and the geodetic height (h); in the second approach, we calculate the HME by the segment summation approach (SSA) using the gravity field inside the shallow layer determined by the SLM. Numerical results show that, for the four combined strategies, the differences between our results and the value of 8848.86 m officially released by the Chinese and Nepalese governments on 8 December 2020 are 0.448 m, -0.009 m, -0.295 m, and -0.741 m for the first approach and 0.539 m, 0.083 m, -0.214 m, and -0.647 m for the second approach. The combined calculation of the HME from the terrain model and the gravity field model is more accurate than that from the gravity field model alone. This study is supported by the National Natural Science Foundation of China (NSFC) under Grants 42030105, 41721003, 41804012, 41631072, 41874023, and Space Station Project (2020)228.
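The first approach rests on the basic relation between the geodetic (ellipsoidal) height h, the geoid undulation N, and the orthometric height H = h - N. A trivial sketch with made-up numbers (not the paper's values):

```python
def orthometric_height(h, N):
    """Orthometric height H from ellipsoidal height h and
    geoid undulation N, all in the same length unit."""
    return h - N

# Purely illustrative values, not Mount Everest's:
H = orthometric_height(100.0, 2.5)  # -> 97.5
```

The second (SSA) approach replaces this single subtraction with an integration of the gravity field through the shallow layer, which is where the terrain and crust models enter.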

How to cite: Xie, Y., Shen, W., Han, J., and Deng, X.: Determining the height of Mount Everest using the shallow layer method, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1886, https://doi.org/10.5194/egusphere-egu21-1886, 2021.

Jörg Ebbing et al.

The thickness of the magnetized layer in the crust (or lithosphere) holds valuable information about the thermal state and composition of the lithosphere. Commonly, maps of magnetic thickness are estimated by spectral methods that are applied to individual data windows of the measured magnetic field strength. In each window, the measured power spectrum is fit by a theoretical function which depends on the average magnetic thickness in the window and a ‘fractal’ parameter describing the spatial roughness of the magnetic sources. The limitations of the spectral approach have long been recognized and magnetic thickness inversions are routinely calibrated using heat flow measurements, based on the assumption that magnetic thickness corresponds to Curie depth. However, magnetic spectral thickness determinations remain highly uncertain, underestimate uncertainties, do not properly integrate heat flow measurements into the inversion and fail to address the inherent trade-off between lateral thickness and susceptibility variations.

We present a linearized Bayesian inversion that works in the space domain and addresses many issues of previous depth determination approaches. The ‘fractal’ description used in the spectral approaches translates into a Matérn covariance function in the space domain. We use a Matérn covariance function to describe the spatial behaviour of both susceptibility and magnetic thickness. In a first step, the parameters governing the spatial behaviour are estimated from magnetic data and heat flow data using a Bayesian formulation and the Markov chain Monte Carlo (MCMC) technique. The second step uses the ensemble of parameter solutions from the MCMC to generate an ensemble of susceptibility and thickness distributions, which are the main output of our approach.
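For reference, the Matérn covariance has simple closed forms at the half-integer smoothness values most often used in practice. A NumPy sketch with parameter names of our own choosing (variance sigma2, range rho, smoothness nu):

```python
import numpy as np

def matern(d, sigma2=1.0, rho=1.0, nu=1.5):
    """Matern covariance at distances d, using the closed forms
    available for half-integer smoothness nu."""
    d = np.asarray(d, dtype=float)
    if nu == 0.5:                      # exponential covariance
        return sigma2 * np.exp(-d / rho)
    if nu == 1.5:
        s = np.sqrt(3.0) * d / rho
        return sigma2 * (1.0 + s) * np.exp(-s)
    if nu == 2.5:
        s = np.sqrt(5.0) * d / rho
        return sigma2 * (1.0 + s + s**2 / 3.0) * np.exp(-s)
    raise ValueError("only nu in {0.5, 1.5, 2.5} implemented here")
```

The smoothness nu plays the role of the spectral ‘fractal’ parameter: larger nu means smoother susceptibility or thickness fields.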

The newly developed framework is applied to synthetic data at satellite height (300 km) covering an area of 6000 x 6000 km. These tests provide insight into the sensitivity of satellite magnetic data to susceptibility and thickness. Furthermore, they highlight that magnetic inversion benefits greatly from a tight integration of heat flow measurements into the inversion process.

How to cite: Ebbing, J., Szwillus, W., and Dilixiati, Y.: A Bayesian framework for simultaneous determination of susceptibility and magnetic thickness from magnetic data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10940, https://doi.org/10.5194/egusphere-egu21-10940, 2021.

William Brown et al.

Geomagnetic Virtual Observatories (GVOs) are a method for processing magnetic satellite data in order to simulate the observed behaviour of the geomagnetic field at a fixed location. As low-Earth orbit satellites move at around 8 km/s and have an infrequent re-visit time to the same location, a trade-off must be made between spatial and temporal coverage, typically averaging over half the local time orbit precession period, within a radius of influence of 700 km. The annual differences (secular variation, SV) of residuals between GVO time series data and an internal field model at a single GVO location will be strongly correlated with its neighbours due to the influence of large-scale external field sources and the effect of local time precession of the satellite orbit. Using Principal Component Analysis we identify and remove signals related to these noise sources to better resolve internal field variations on sub-annual timescales.

We apply our methodology to global grids of monthly GVOs for the Ørsted, CHAMP, CryoSat-2 and Swarm missions, covering the past two decades. We identify common principal components representing orbit-precession-rate-dependent local time biases, and major external field sources, for all satellites. We find that the analysis is enhanced by focussing on regions of geomagnetic latitude where different external field sources dominate, identifying distinct influences in polar, auroral and low-to-mid latitude regions. Annual differences are traditionally used to calculate SV so as to remove annual and semi-annual external field signals, but these signals can be re-introduced if our corrected SV is re-integrated. We find that by representing secular variation with monthly first differences, rather than annual differences, we can identify and remove annual and semi-annual external field variations from the SV, which then improves the use of re-integrated main field GVO time series. By better accounting for contaminating signals from correlated external fields and aliasing, we are able to produce a global grid of GVO time series which better represents internal secular variation at monthly time resolution.
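The removal of signals common to many locations can be sketched with a standard SVD-based principal component analysis: stack the residual series as columns, subtract the leading components, and keep the remainder. This is a generic illustration, not the authors' processing chain:

```python
import numpy as np

def remove_leading_pcs(X, n_pc):
    """Remove the n_pc leading principal components from a matrix of
    series, X: (n_times, n_series), e.g. SV residuals at GVO sites."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    # Reconstruct only the leading components and subtract them
    common = (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc]
    return X - mean - common
```

Signals that are coherent across all sites (large-scale external fields, local time biases) load onto the leading components and are removed, while localized internal variations remain.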

How to cite: Brown, W., Beggan, C., Hammer, M., Finlay, C., and Cox, G.: Isolating internal secular variation in Geomagnetic Virtual Observatory time series using Principal Component Analysis, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1082, https://doi.org/10.5194/egusphere-egu21-1082, 2021.

Christian Gerhards et al.

Nonuniqueness is a well-known issue with inverse problems involving geophysical potential fields (typically gravitational or magnetic fields). If no additional assumptions are made on the underlying source, only certain harmonic contributions can be reconstructed uniquely from knowledge of the potential. Such harmonic contributions have no intuitive geophysical interpretation. However, in various applications some specific properties are of particular interest: e.g., the direction of the magnetization in paleomagnetic studies or the lithospheric susceptibility in geomagnetism. In this presentation, we give a brief overview on the characterization of nonuniqueness and on a priori assumptions on the underlying magnetization that might lead to uniqueness or at least partial uniqueness.

How to cite: Gerhards, C., Kegeles, A., and Menzel, P.: Nonuniqueness and Uniqueness for Inverse Magnetization Problems, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3359, https://doi.org/10.5194/egusphere-egu21-3359, 2021.

Florian Faucher et al.

Distributed acoustic sensing (DAS) is attracting growing interest in seismic exploration by offering dense and low-cost coverage of the investigated area. However, in contrast to the usual geophones, which measure displacement, DAS provides information on the strain. In this work, we perform quantitative imaging of elastic media by designing a new misfit functional adapted to these data sets. The misfit criterion is based on the reciprocity gap, hence defining the full reciprocity-gap waveform inversion. The main feature of our misfit is that it does not require knowledge of the exciting source positions, and it allows us to combine data from active and passive sources (of unknown location). In particular, the data from passive sources contain the low-frequency information needed to build initial models, while the exploration data contain the higher frequencies. We consequently follow a multi-resolution framework that we illustrate with two-dimensional elastic experiments.

How to cite: Faucher, F., Scherzer, O., and de Hoop, M. V.: Seismic imaging combining active and passive sources using distributed acoustic sensing, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15142, https://doi.org/10.5194/egusphere-egu21-15142, 2021.

Bramha Dutt Vishwakarma et al.

Geophysical inversions are usually solved with the help of a priori constraints and several assumptions that simplify the physics of the problem. This is true for all the inversion approaches that estimate the GIA signal from contemporary datasets such as GNSS vertical land motion (VLM) time series and GRACE geopotential time series. One of the assumptions in these GIA inversions is that the change in VLM due to GIA can be written in terms of surface mass change and average mantle density. Furthermore, the surface density change is obtained from GRACE data using the relations derived in Wahr et al. (1998), which are applicable only to surface processes (such as hydrology) and not to sub-surface processes such as GIA. This leaves us with a tricky signal-separation problem. Although many studies try to overcome this with the help of constraints from a priori GIA models, the output is then not free from the influence of GIA models, which are known to have large uncertainties. In this presentation, we discuss this problem in detail, then provide a novel mathematical framework that solves for GIA without any a priori GIA model. We validate our method in a synthetic environment first and then estimate a completely data-driven GIA field from contemporary Earth-observation data.

How to cite: Vishwakarma, B. D., Ziegler, Y., Royston, S., and Bamber, J. L.: A novel data-driven method to estimate GIA signal from Earth observation data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8876, https://doi.org/10.5194/egusphere-egu21-8876, 2021.

Roland Hohensinn et al.

High-precision GNSS (Global Navigation Satellite System) positioning can reach an accuracy at the millimeter level, and by collecting these data over years, very small ground movements can be resolved. Such data can be used to study geodynamics, for example continental drift, tidal and non-tidal loading effects, or earthquakes. Here we focus on the sensitivity of GNSS for resolving these different types of movements. We derive minimum detectable displacements for linear drift rates, annual and semiannual periodic motions, and offsets, i.e. the main parameters of the so-called standard linear trajectory model for GNSS station motions. Our data come from several hundred permanent GNSS stations across Europe; the stations' coordinates are obtained at a daily sampling rate, with almost 25 years of data available for some stations. Based on cleaned residual GNSS time series, we calibrate a “power-law plus white noise” stochastic model for each station. Together with the functional trajectory model, we compute the formal errors of the movement parameters from a least-squares adjustment. From these formal errors, we then derive test statistics that yield the minimum detectable displacements for the movement parameters.

Our analysis shows that the minimum detectable trends can be as low as a few tenths of a millimeter per year. The minimum detectable amplitudes at the annual and semiannual periods are at the millimeter level or lower, and the minimum detectable offset is a few millimeters on average. As expected, the minimum detectable displacements depend strongly on the length of the datasets and on the noise characteristics. Another important parameter is the number of discontinuities and offsets at each station, which mainly impacts the minimum detectable trend. We conclude that such an analysis can be very useful for sensitivity studies in climate change monitoring. Furthermore, the methodology can be applied not only to GNSS time series but also to any other time series data in the geosciences.
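The step from a trajectory model to formal errors can be sketched for the simplified white-noise case; the study itself uses a power-law-plus-white-noise model, which inflates these errors. Function and parameter names are ours:

```python
import numpy as np

def trajectory_formal_errors(t, sigma):
    """Formal errors of the standard linear trajectory model
    (offset, trend, annual and semiannual terms) under white noise
    of standard deviation sigma; t is in years."""
    w = 2.0 * np.pi
    A = np.column_stack([
        np.ones_like(t), t,                    # offset and trend
        np.sin(w * t), np.cos(w * t),          # annual terms
        np.sin(2 * w * t), np.cos(2 * w * t),  # semiannual terms
    ])
    cov = sigma**2 * np.linalg.inv(A.T @ A)    # parameter covariance
    return np.sqrt(np.diag(cov))
```

A minimum detectable displacement at roughly 95% confidence is then on the order of twice the corresponding formal error; lengthening the series shrinks the trend error fastest, as the abstract notes.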

How to cite: Hohensinn, R., Ruttner, P., Soja, B., and Rothacher, M.: How small can ground movements be to be detectable with GNSS?, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1999, https://doi.org/10.5194/egusphere-egu21-1999, 2021.

Yener Turen et al.

In this study, we investigate the effect of gaps in data on the accuracy of deformation rates produced from GNSS campaign measurements. Our motivation is that campaign GNSS time series might not be collected regularly due to various real-life constraints. We used the baseline components produced from continuous GPS time series of JPL/NASA from a global IGS network to generate data gaps. The solutions of the continuous IGS GNSS time series were decimated to campaign-style solutions sampled at one measurement per month or three measurements per year. Furthermore, the effect of antenna set-up errors in campaign measurements, which follow a Gaussian distribution, was taken into account following suggestions from the literature. The number of gaps in the campaign GNSS time series was increased by one for each trial until only one month was left within the specific year. Eventually, we tested whether the velocities obtained from GNSS campaign series containing data gaps differ significantly from the velocities derived from continuous data, which are taken to be the “truth”. Initial results using samples from a restricted amount of data reveal that the deformation rate produced from the east component is more sensitive to gaps in data than those of the north and vertical components.
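The decimation experiment can be mimicked on synthetic data: generate a daily series with a known velocity, subsample it to campaign-style epochs, and compare the fitted slopes. All numbers below are made up for illustration and are not the study's data:

```python
import numpy as np

def fit_velocity(t, y):
    """Least-squares velocity (slope) of a position series y(t)."""
    A = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1]

rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 1 / 365.25)          # 5 years of daily epochs
y = 2.0 * t + rng.normal(0.0, 3.0, t.size)   # 2 units/yr trend + noise
v_cont = fit_velocity(t, y)                  # "truth" from continuous data
v_camp = fit_velocity(t[::120], y[::120])    # ~3 campaign epochs per year
```

Introducing gaps then simply means dropping further campaign epochs before refitting, and the divergence of v_camp from v_cont quantifies the accuracy loss.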

Keywords: GPS time series; GPS campaigns; Velocity estimation; Gaps in data; Deformation.

How to cite: Turen, Y., Sanli, D. U., and Erol, T.: The quality of velocities from GNSS campaign measurements when gaps in data exist, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7421, https://doi.org/10.5194/egusphere-egu21-7421, 2021.

Wieslaw Kosek

The frequency-dependent autocovariance (FDA) function is defined in this paper as the autocovariance function of a wideband oscillation filtered by the Fourier transform band-pass filter (FTBPF). It is shown that FDA estimation is a useful algorithm for detecting the mean amplitudes of oscillations in a very noisy time series. The least-squares polynomial harmonic model is used to remove the trend, low-frequency components, and the annual and semi-annual oscillations from the IERS eopc04R_IAU2000_daily length of day (LOD) time series to compute their residuals. Next, the mean amplitudes of the signal as a function of frequency are determined from the difference between the FDA of the LOD residuals and the FDA of a power-law noise model similar to the noise present in the LOD residuals. Several realizations of power-law noise with a spectral index and variance similar to the noise in the LOD data were generated to estimate the mean amplitude spectrum in the seasonal and shorter-period frequency band. It is shown that the mean amplitudes of the oscillations in the LOD residuals are very small compared to the noise standard deviation and do not depend on the filter bandwidth of the FTBPF. These small amplitudes explain why LOD prediction errors increase rapidly with the prediction length.
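The FTBPF-plus-autocovariance idea can be sketched in NumPy: band-pass a series by zeroing Fourier coefficients outside the pass band, then estimate the autocovariance; for a narrowband signal the mean amplitude follows from the lag-zero autocovariance as A ≈ sqrt(2 c(0)). This is our simplified illustration with a hard frequency cut, not the paper's exact FTBPF transfer function:

```python
import numpy as np

def fourier_bandpass(x, dt, f_lo, f_hi):
    """Zero-phase band-pass filtering via the FFT (hard pass band)."""
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(len(x), d=dt)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def autocovariance(x, max_lag):
    """Biased autocovariance estimate at lags 0..max_lag."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n
                     for k in range(max_lag + 1)])
```

Sliding the pass band across frequencies and converting c(0) to an amplitude yields the mean amplitude spectrum discussed above.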

How to cite: Kosek, W.: Mean amplitudes of the signal in the seasonal and shorter period length of day time series computed by the frequency-dependent autocovariance, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9080, https://doi.org/10.5194/egusphere-egu21-9080, 2021.
