All seismic waves experience some degree of anelastic attenuation as they propagate because the earth is not a perfect, homogeneous, elastic solid. This causes the seismic wavelet to evolve as it propagates, in a way characterized by progressive diminishment of high frequencies and progressive phase rotations. As a result, there is no single “wavelet” embedded in a seismic record; instead, there is a changing, evolving wavelet whose bandwidth progressively decreases as traveltime increases. Mathematically, data with this property are said to be nonstationary. In contrast, the major wavelet-shaping step in seismic data processing remains stationary spiking deconvolution, an algorithm that has changed very little since its introduction some 70 years ago. This algorithm explicitly assumes that seismic data are stationary or, equivalently, that the wavelet does not evolve. Data processors often cope with this conflict between physics and algorithmic assumptions by designing the deconvolution operator over a limited time window containing the exploration target. While this can optimize the image at the target, the essential nonstationarity of the data is not addressed, and the result is data with characteristic wavelet distortions at times outside the design window. I present a systematic study of this issue using a sophisticated synthetic dataset created with finite-difference modelling over a 2D earth whose stratigraphy comes from well logs and whose attenuation is prescribed by constant-Q theory. The study reveals the characteristic wavelet distortions that are present in real seismic data when the processing includes only stationary methods.
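The progressive loss of high frequencies described above can be sketched numerically. In the constant-Q model, each frequency component f is attenuated in amplitude by exp(-πft/Q) after traveltime t (the associated phase dispersion is omitted here for brevity). The snippet below, a minimal illustration and not the article's modelling code, applies this factor to a Ricker wavelet; the choices Q = 50 and a 30 Hz dominant frequency are assumptions for the example only.

```python
import numpy as np

def ricker(f0, dt, n):
    """Ricker wavelet with dominant frequency f0 (Hz), sample rate dt (s)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def attenuate(w, dt, t_travel, Q):
    """Apply constant-Q amplitude attenuation exp(-pi*f*t/Q) for traveltime
    t_travel. Dispersion (phase rotation) is deliberately omitted."""
    W = np.fft.rfft(w)
    f = np.fft.rfftfreq(len(w), dt)
    return np.fft.irfft(W * np.exp(-np.pi * f * t_travel / Q), n=len(w))

def dominant_frequency(w, dt):
    """Frequency of the peak of the amplitude spectrum."""
    f = np.fft.rfftfreq(len(w), dt)
    return f[np.argmax(np.abs(np.fft.rfft(w)))]

# Illustrative parameters (not from the article): Q = 50, 30 Hz Ricker.
dt, n, Q = 0.002, 512, 50
w = ricker(30.0, dt, n)
for t in (0.5, 1.0, 2.0):
    wt = attenuate(w, dt, t, Q)
    # The dominant frequency drops as traveltime grows: the embedded
    # wavelet is different at every time, i.e. the data are nonstationary.
    print(f"t = {t:.1f} s: dominant frequency ~ {dominant_frequency(wt, dt):.1f} Hz")
```

Running this shows the dominant frequency sliding steadily downward with traveltime, which is exactly why a single deconvolution operator designed in one window cannot be correct at other times.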