Deep learning has become a powerful and efficient technique in many fields; in particular, the recurrent neural network (RNN) exhibits temporal dynamic behavior well suited to time-dependent tasks by building a directed graph over a sequence. In this paper, using a self-designed RNN framework, the forward modeling of wave propagation is cast as the forward propagation of an RNN, which allows the inversion problem to be treated as the training process of the RNN. Using this specific network, we numerically analyze the influence of the learning rate (i.e., step size) on each gradient-based optimization algorithm. Comparisons between gradient-based and non-linear algorithms are also discussed and analyzed. To examine our analysis, the Marmousi model is employed to perform the inversion on the proposed RNN using both gradient-based and non-linear algorithms.
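The correspondence described above — finite-difference time stepping of the wave equation as the forward pass of an RNN, and velocity estimation as training — can be sketched minimally. The following is an illustrative toy, not the authors' framework: it steps a 1-D acoustic wave equation (each time step playing the role of one recurrent cell), records a trace at a single hypothetical receiver, and uses automatic differentiation to update the velocity model by plain gradient descent. All names, grid sizes, and the source wavelet are assumptions for illustration.

```python
import jax
import jax.numpy as jnp

def forward(c, source, nt, dt, dx):
    """Forward modeling as an RNN: state (u_prev, u_curr) -> next step."""
    n = c.shape[0]
    u_prev = jnp.zeros(n)
    u_curr = jnp.zeros(n)
    trace = []
    for t in range(nt):
        # second-order finite-difference Laplacian (periodic ends, toy choice)
        lap = (jnp.roll(u_curr, -1) - 2.0 * u_curr + jnp.roll(u_curr, 1)) / dx**2
        u_next = 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap
        u_next = u_next.at[n // 2].add(source[t])  # inject source mid-model
        u_prev, u_curr = u_curr, u_next
        trace.append(u_curr[5])                    # record at a "receiver"
    return jnp.stack(trace)

def loss(c, observed, source, nt, dt, dx):
    # misfit between modeled and observed traces = training loss of the RNN
    return jnp.mean((forward(c, source, nt, dt, dx) - observed) ** 2)

# hypothetical toy setup (CFL number c*dt/dx = 0.2, stable)
nt, dt, dx, n = 100, 1e-3, 10.0, 50
t_axis = jnp.arange(nt)
source = jnp.sin(2 * jnp.pi * 25.0 * dt * t_axis) * jnp.exp(-((t_axis - 20) ** 2) / 50.0)
c_true = jnp.full(n, 2000.0)
observed = forward(c_true, source, nt, dt, dx)  # synthetic "data"

# inversion = training: gradient descent on the velocity model
c = jnp.full(n, 1800.0)      # initial guess
grad_fn = jax.grad(loss)     # gradient w.r.t. the first argument, c
for it in range(5):
    g = grad_fn(c, observed, source, nt, dt, dx)
    c = c - 1e7 * g          # the learning rate here is the step size studied above
```

Swapping the update line for any other first-order optimizer (momentum, Adam, etc.) changes only the last two lines, which is what makes this formulation convenient for comparing step-size behavior across algorithms.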