
Num. Meth. Prog., 2024 Volume 25, Issue 2, Pages 155–174 (Mi vmp1115)

Methods and algorithms of computational mathematics and their applications

Optimization of the training dataset for NDM-net (Numerical Dispersion Mitigation neural network)

E. A. Gondyul, V. V. Lisitsa, K. G. Gadylshin, D. M. Vishnevskii

Trofimuk Institute of Petroleum Geology and Geophysics SB RAS, Novosibirsk, Russia

Abstract: In this paper we present a new approach to building a training dataset for the NDM-net (Numerical Dispersion Mitigation network), a neural network that suppresses numerical dispersion in simulated seismic wavefields. The NDM-net is trained to map the solution of the equations of dynamic elasticity computed on a coarse grid to the solution computed on a fine grid. However, training the NDM-net requires precomputing seismograms on a fine grid, which is a time-consuming procedure. To reduce the computational cost of the algorithm, we propose an approach that reduces the training time without loss of accuracy. As an effective metric for generating the training dataset, we propose a linear combination of three parameters: the distance between sources, the similarity of seismograms, and the similarity of velocity models. The weights of the linear combination are determined using a global sensitivity analysis.
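The following is a minimal illustrative sketch (not the authors' implementation) of how such a combined metric between two candidate source positions might be evaluated; the field names, the normalization, and the placeholder weights w_dist, w_seis, w_model are assumptions, with the actual weights in the paper obtained from a global sensitivity analysis.

```python
import numpy as np

def combined_metric(src_a, src_b, w_dist=1.0, w_seis=1.0, w_model=1.0):
    """Weighted sum of three normalized dissimilarities between two sources.

    src_a, src_b: dicts with hypothetical fields
      'position'        - source coordinate along the acquisition line (float),
      'seismogram'      - coarse-grid seismogram, array of shape (receivers, time),
      'velocity_column' - 1D slice of the velocity model below the source.
    Normalization constants are placeholders; in practice they would be
    derived from the whole survey so that each term lies on a comparable scale.
    """
    d_pos = abs(src_a["position"] - src_b["position"])
    d_seis = np.linalg.norm(src_a["seismogram"] - src_b["seismogram"])
    d_model = np.linalg.norm(src_a["velocity_column"] - src_b["velocity_column"])
    return w_dist * d_pos + w_seis * d_seis + w_model * d_model
```

A dataset-selection loop could then greedily add a source to the training set whenever its combined metric to all already-selected sources exceeds a threshold, so that only those sources require fine-grid simulation.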

Keywords: numerical dispersion, seismic modelling, deep learning.

UDC: 550.34.01

Received: 24.01.2024
Accepted: 16.03.2024

DOI: 10.26089/NumMet.v25r213


