Depth resolution in potential field inversion: theory and applications / Florio, Giovanni. - (2013).

Depth resolution in potential field inversion: theory and applications

FLORIO, GIOVANNI
2013

Abstract

One of the principal difficulties with the inversion of potential field data is its inherent non-uniqueness. By Gauss' theorem we know that there are infinitely many equivalent source distributions that can produce a measured field (Blakely, 1996). In Section 2.3 we will see that when the number of model parameters M is greater than the number of observations N, a unique solution of the inverse problem does not exist (underdetermined problem). This algebraic ambiguity is the most common situation in geophysical inversion. We therefore have two causes of ambiguity, both implying that infinitely many models fit the data to the same degree. To solve an underdetermined problem and obtain a unique solution we need to add a priori information. Prior information takes numerous forms (geological, geophysical or mathematical), and a good inversion algorithm is able to incorporate it into the inversion. One of the most important and common forms of prior information is a reference model. In reconnaissance surveys, where little is known about the geology and the structures at depth, the reference model might be a uniform half-space or, for some problems, simply the zero model. In other surveys, the knowledge of the physical property distribution, built up through previous analyses or direct measurements, might be quite detailed (Oldenburg and Li, 2005) and can be incorporated in the reference model.

In recent years, with the development of ever more efficient computers, many potential field inversion algorithms have been developed. The approach dates back to Bott (1967), who used it to interpret marine magnetic anomalies. Since then many different algorithms have been proposed, each characterized by a different type of a priori information and therefore providing different solutions. Green (1975) searched for a density model that minimizes its weighted norm to some reference model. Safon et al. (1977) used linear programming to compute moments of the density distribution. Fisher and Howard (1980) solved a linear least-squares problem constrained by upper and lower density bounds. Last and Kubik (1983) introduced a 'compact' inversion minimizing the body volume. Guillen and Menichetti (1984) constrained the solution to have minimum moment of inertia. Barbosa and Silva (1994) suggested allowing compactness along given directions using a priori information. Li and Oldenburg (1996, 1998) introduced model weighting as a function of depth, using a subspace algorithm. Pilkington (1997, 2002) used a preconditioned Conjugate Gradient (CG) method to solve the system of linear equations. Portniaguine and Zhdanov (1999, 2002) introduced a regularized CG method and focusing, using a reweighted least-squares algorithm with different focusing functionals. Li and Oldenburg (2003) used wavelet compression of the kernel with a logarithmic barrier method and CG iteration. Pilkington (2009) used data-space inversion in the Fourier domain. Other relevant ways of introducing a priori information involve "soft constraints", such as positivity of density and magnetization, or "hard constraints", such as empirical laws, upper and lower density bounds, a density monotonically increasing with depth (Fisher and Howard, 1980), and external information from well logs, geological studies and other geophysical investigations.
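As a compact illustration of how a reference model enters such inversions (a sketch in standard notation, not an equation reproduced from the thesis), many of the algorithms cited above, including Li and Oldenburg's, minimize a Tikhonov-style objective of the form

\phi(\mathbf{m}) = \left\| \mathbf{W}_d \left( \mathbf{G}\mathbf{m} - \mathbf{d}^{\mathrm{obs}} \right) \right\|^2 + \beta \left\| \mathbf{W}_m \left( \mathbf{m} - \mathbf{m}_{\mathrm{ref}} \right) \right\|^2

where G is the forward-modelling (sensitivity) matrix, W_d weights the data by their uncertainties, m_ref is the reference model and β is the regularization parameter trading off data misfit against model norm. In the Li and Oldenburg scheme, W_m also includes a depth weighting of the form w(z) = (z + z_0)^{-\alpha/2} (α ≈ 2 for gravity, α ≈ 3 for magnetic data) to counteract the natural decay of the kernels with depth; the symbols α and z_0 here are our notation for this sketch.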
An adaptive learning procedure for incorporating prior knowledge was presented by Silva and Barbosa (2006). Such procedures reduce the overall ambiguity, but they rest on relatively strong assumptions about the source characteristics, which may often be too subjective. The solution is thus highly dependent on the prior information and, for this reason, as noted above, a single algorithm suitable for every geological context cannot exist. It is therefore very important to choose the algorithm according to the geological context of the studied area and to the available a priori information. For these reasons, in this thesis we have studied and implemented three different algorithms for potential field inversion, namely those proposed by Li and Oldenburg (2003), Portniaguine and Zhdanov (2002) and Pilkington (2009). Each of them allows different prior information to be incorporated and thus provides different solutions of the inverse problem. For example, if we study an area for oil exploration and want to recover the morphology of the basement, we need an algorithm that produces smooth solutions and allows a reference model to be introduced; the Li and Oldenburg (1996, 1998, 2003) algorithm works very well on such problems. If instead we work on environmental problems, for example an area characterized by sinkholes whose depth and shape we want to determine, we need an algorithm producing compact solutions, and we can use the focusing inversion algorithm of Portniaguine and Zhdanov (2002) or that of Pilkington (2009) with sparseness constraints; a schematic example of the focusing idea is sketched below.
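To make the smooth/compact distinction concrete, the following minimal Python sketch shows the iteratively reweighted least-squares core of a minimum-support focusing inversion in the spirit of Portniaguine and Zhdanov (1999, 2002). It is illustrative only: the function name, parameter values and random test kernel are our own assumptions, not code or data from the thesis, and the parameters would need tuning for real problems.

import numpy as np

def focusing_inversion(G, d, beta=1e-2, e=1e-2, n_iter=10):
    """Sketch of minimum-support focusing via reweighted least squares.

    G : (N, M) sensitivity matrix, d : (N,) observed data,
    beta : regularization parameter, e : focusing parameter,
    n_iter : number of reweighting iterations (all illustrative values).
    """
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        # Minimum-support weights: w_i^2 ~ 1/(m_i^2 + e^2), so that
        # ||W m||^2 approximates the minimum-support stabilizer and
        # cells with small values are driven towards zero, concentrating
        # the source in few cells (a compact solution).
        w = 1.0 / np.sqrt(m**2 + e**2)
        w /= w.max()  # normalize so the damping scale stays comparable
        # Solve the damped normal equations for the current weights.
        A = G.T @ G + beta * np.diag(w**2)
        m = np.linalg.solve(A, G.T @ d)
    return m

# Hypothetical usage on a random underdetermined kernel (M > N):
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 200))
m_true = np.zeros(200)
m_true[90:95] = 1.0          # a compact "cavity-like" source
d = G @ m_true
m_est = focusing_inversion(G, d)

A smooth (Li and Oldenburg style) inversion would instead keep fixed smoothing weights and a reference model in the regularization term, spreading the recovered property over many cells rather than concentrating it.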

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/597222