Inverse Problems = Quest for Information


Journal of Geophysics, 1982, 50, p. 159–170

Albert Tarantola and Bernard Valette
Institut de Physique du Globe de Paris, Université Pierre et Marie Curie, 4 place Jussieu, Paris, France

Note: The 1982 paper has been rewritten to produce a searchable PDF file. This text is essentially identical to the original.

Abstract

We examine the general non-linear inverse problem with a finite number of parameters. In order to permit the incorporation of any a priori information about parameters and any distribution of data (not only of Gaussian type) we propose to formulate the problem not using single quantities (such as bounds, means, etc.) but using probability density functions for data and parameters. We also want our formulation to allow for the incorporation of theoretical errors, i.e. non-exact theoretical relationships between data and parameters (due to discretization, or incomplete theoretical knowledge); to do that in a natural way we propose to define general theoretical relationships also as probability density functions. We show then that the inverse problem may be formulated as a problem of combination of information: the experimental information about data, the a priori information about parameters, and the theoretical information. With this approach, the general solution of the non-linear inverse problem is unique and consistent (solving the same problem, with the same data, but with a different system of parameters does not change the solution).

Key words: Information – Inverse problems – Pattern recognition – Probability

1 Introduction

Inverse problem theory was essentially developed in geophysics, to deal with largely underdetermined problems.
The most important approaches to the solution of this kind of problem are well known to today's geophysicists (Backus and Gilbert 1967, 1968, 1970; Keilis-Borok and Yanovskaya 1967; Franklin 1970; Backus 1971; Jackson 1972; Wiggins 1972; Parker 1975; Rietsch 1977; Sabatier 1977). The minimal constraints necessary for the formulation of an inverse problem are:

1. The formulation must be valid for linear as well as for strongly non-linear problems.

2. The formulation must be valid for overdetermined as well as for underdetermined problems.

3. The formulation of the problem must be consistent with respect to a change of variables. This is not the case with ordinary approaches: solving an inverse problem with a given parameter, e.g. a velocity v, leads to a solution v0; solving the same problem with the same data but with another parameter, e.g. the slowness n = 1/v, leads to a solution n0. There is no natural relation between v0 and n0 in ordinary approaches.

4. The formulation must be general enough to allow for general error distributions in the data (which may be non-Gaussian, asymmetric, multimodal, etc.).

5. The formulation must be general enough to allow for the formal incorporation of any a priori assumption (positivity constraints, smoothness, etc.).

6. The formulation must be general enough to incorporate theoretical errors in a natural way. As an example, in seismology, the theoretical error made by solving the forward travel-time problem is often one order of magnitude larger than the experimental error of reading the arrival time on a seismogram. A coherent hypocenter computation must take into account experimental as well as theoretical errors. Theoretical errors may be due, for example, to the existence of some random parameters in the theory, or to theoretical simplifications, or to a wrong parameterization of the problem.

None of the approaches by previous authors satisfies this set of constraints.
The main task of this paper is to demonstrate that all these constraints may be fulfilled when formulating the inverse problem using a simple extension of probability theory and information theory. To do this we will limit ourselves to the study of systems which can be described with a finite set of parameters. This limitation is twofold: first, we will only be able to handle quantitative characteristics of systems. All qualitative aspects are beyond the scope of this paper. The second limitation is that to describe some of the characteristics of the systems we should employ functions rather than discrete parameters, as for example for the output of a continuously recording seismograph, or the velocity of seismic waves as a function of depth. In such cases we decide to sample the corresponding function. The problem of adequate sampling is not a trivial one. For example, if the sampling interval of a seismogram is greater than the correlation length of the noise (seismic noise, finite pass-band filter, etc.), errors in data may be assumed to be independent; this will not be true when densifying the sampling. We explicitly assume in this paper that the discretization has been made carefully enough, so that densifying the sampling will only negligibly alter the results.

In the next section we will define precisely concepts such as parameter, probability density function, information, and combination of information; in Section 3 we discuss the concept of null information; in Section 4 we define the a priori information on a system; in Section 5 we define general theoretical relationships between data and parameters; finally in Section 6 we give the solution of the inverse problem. In Sections 7–10 we discuss this solution, we examine particular cases, and give a seismological illustration with actual data.

2 Parameters and Information

Let S be a physical system in a wide sense. By wide sense we mean that S consists of a physical system stricto sensu, plus a family of measuring instruments and their outputs. We assume that S is a discrete system or that it has been discretized (for convenience of description or because the mathematical model which describes the physical system is discrete). In that case S can be described using a finite (perhaps large) set of parameters X = {X1, ..., Xm}; any set of specific values of this set of parameters will be denoted x = {x1, ..., xm}. Each point x may be named a model of S. The m-dimensional space E_m where the parameters X take their values may be named the model space, or the parameter space. When a physical system S can be described by a set X of parameters, we say that S is parametrizable. It should be noted that the parameterization of a system is not unique. We say that two parameterizations are equivalent if they are related by a bijection. Let

X′ = X′(X),  X = X(X′),  (2.1)

be two equivalent parameterizations of S. We emphasize that equations (2.1) represent a transformation between mathematically equivalent parameters and that they do not represent any relationship between physically correlated parameters. An example of equivalent parameters is a velocity v and the corresponding slowness defined by n = 1/v.
Let us remark that two equivalent parameterizations of S can also be seen as two different choices of system of coordinates in E_m. The degree of knowledge that we have about the values of the parameters of our system may range from total knowledge to total ignorance. A first postulate of this paper is that any state of knowledge on the values of X can be described using a measure density function f(x), i.e. a real, positive, locally Lebesgue integrable function such that the positive measure defined by

m(A) = ∫_A f(x) dx,  A ⊂ E_m,  (2.2)

is absolutely continuous with respect to the Lebesgue measure defined over E_m. The quantity m(A) is named the measure of A. If m(E_m) is finite then f(x) can be normalized in such a way that m(E_m) = 1; in that case f(x) is named a probability density function, m(A) is then noted P(A) and is named the probability of A. All through this paper a measure density function, non-normalized or non-normalizable, will simply be named a density function. Of course, the form of f(x) depends on the chosen parameterization. Let X and X′ be two equivalent parameterizations. As we want the measure m(A) to be invariant, it is easy to see that there exists a density function f′(x′), which is related to f(x) by the usual formula:

f′(x′) = f(x) |∂x/∂x′|,  (2.3)

where the symbol |∂x/∂x′| stands for the Jacobian of the transformation. It never vanishes for equivalent parameterizations. Let us define a particular density function µ(x) representing the state of total ignorance (Jaynes 1968; Rietsch 1977). Often the state of total ignorance will correspond to a uniform function µ(x) = const.; sometimes it will not, as discussed in Section 3. We will assume

µ(x) ≠ 0  (2.4)

everywhere in E_m. In fact this means that we restrict the space of parameters to the region not excluded by the state of total ignorance. We should need a density function, rather than a probability density function, when we are not able to define the absolute probability of a subset A, but we can define the relative probabilities of two subsets A and B. The most trivial example is when f(x) = const.
and the space is not bounded. Two density functions which differ only by a multiplicative constant will give the same relative probabilities, and all through this paper they will be considered identical:

f(x) ≡ const. · f(x).  (2.5)

If the state of total ignorance corresponds to a probability density µ(x), then the content of information of any probability density f(x) is defined by (Shannon 1948):

I(f; µ) = ∫ f(x) Log [f(x)/µ(x)] dx.  (2.6)

This definition has the following properties, which are easily verified:

(a) I is invariant with respect to a change of variables:

I(f; µ) = I(f′; µ′).  (2.7)

In Shannon's original definition of information for continuous variables the term µ(x) in equation (2.6) is missing, so that Shannon's definition is not invariant.
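The invariance property (2.7) is easy to verify numerically. The sketch below uses hypothetical densities (a Gaussian-shaped state of information and a uniform µ on a bounded interval, none of which come from the paper), computes I(f; µ) in two equivalent parameterizations related by y = x², and also shows that Shannon's original definition, which omits µ, is not invariant:

```python
import numpy as np

def info_content(f, mu, x):
    """I(f; mu) = integral of f * Log(f/mu) dx, equation (2.6), trapezoid rule."""
    ratio = np.divide(f, mu, out=np.ones_like(f), where=f > 0)
    return np.trapz(f * np.log(ratio), x)

# State of information: a narrow Gaussian shape on (1, 3); null information: uniform.
x = np.linspace(1.0, 3.0, 20001)
mu_x = np.full_like(x, 0.5)
f_x = np.exp(-0.5 * ((x - 2.0) / 0.2) ** 2)
f_x /= np.trapz(f_x, x)                      # normalize to a probability density

# Equivalent parameterization y = x**2 (a bijection on (1, 3)); densities
# transform with the Jacobian |dx/dy| = 1 / (2 sqrt(y)), as in equation (2.3).
y = x ** 2
jac = 1.0 / (2.0 * np.sqrt(y))
f_y, mu_y = f_x * jac, mu_x * jac

I_x = info_content(f_x, mu_x, x)
I_y = info_content(f_y, mu_y, y)             # identical: I is invariant

# Shannon's original -∫ f Log f dx is NOT invariant: here the two
# coordinate systems disagree by roughly Log 4.
S_x = -np.trapz(f_x * np.log(f_x), x)
S_y = -np.trapz(f_y * np.log(f_y), y)
print(I_x, I_y, S_x, S_y)
```

The ratio f/µ is invariant pointwise because the Jacobians cancel, which is exactly why (2.6) survives the change of variables while the µ-less definition does not.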

(b) Information cannot be negative:

I(f; µ) ≥ 0.  (2.8)

(c) The information of the state of total ignorance is null:

I(µ; µ) = 0,  (2.9)

the reciprocal being also true:

I(f; µ) = 0  ⟹  f = µ.  (2.10)

We will say that each probability density (or, by extension, each density function) f_i(x) represents a state of information, which will be noted s_i. Let us now set up a problem which appears very often under different aspects. Its general formulation may be: Let X be a set of parameters describing some system S. Let µ(x) be a density function representing the state of null information on the system. If we receive two pieces of information on our system, represented by the density functions f_i(x) and f_j(x), how do we combine f_i and f_j to obtain a density function f(x) representing the final state of information? We must first state which kind of combination we wish. To do this, let us first recall the way used in classical logic to define the combination of logical propositions. If p_i is a logical proposition, one defines its value of truth v(p_i) by taking the values 1 or 0 when p_i is respectively certain or impossible (true or false). Let p_i and p_j be two logical propositions. It is usual to combine them in order to obtain new propositions, as for example by defining the conjunction of two propositions, p_i ∧ p_j (p_i and p_j), or by defining the disjunction of two propositions, p_i ∨ p_j (p_i or p_j), and so on. The usual way for defining the result of these combinations is by establishing their values of truth. For example, the conjunction p_i ∧ p_j is defined by:

v(p_i ∧ p_j) = 0  ⟺  v(p_i) = 0 or v(p_j) = 0.  (2.11)

For our purposes, we need the definition of the conjunction of two states of information, s_i ∧ s_j. This definition must be the generalization, to the concept of states of information, of the properties of the conjunction of logical propositions.
We will see later that this definition will allow the solution of many seemingly different problems; in particular it contains the solution of the general inverse problem as it has been stated in the preceding section. Let us note

f = f_i ∧ f_j  (2.12)

the operation which combines f_i and f_j (representing two states of information s_i and s_j) to obtain f (representing the conjunction s = s_i ∧ s_j). The definition must satisfy the following conditions:

(a) f_i ∧ f_j must be a density function. In particular, the content of information of f_i ∧ f_j must be invariant with respect to a change of parameters, i.e. equation (2.3) must be verified.

(b) The operation must be commutative, i.e., for any f_i and f_j:

f_i ∧ f_j = f_j ∧ f_i.  (2.13)

(c) The operation must be associative, i.e. for any f_i, f_j and f_k:

(f_i ∧ f_j) ∧ f_k = f_i ∧ (f_j ∧ f_k).  (2.14)

(d) The conjunction of any state of information f_i with the null information µ must give f_i, i.e. must not result in any loss of information:

f_i ∧ µ = f_i.  (2.15)

This equation means that µ is the neutral element for the operation.

(e) The final condition corresponds to an extension of the defining property of the conjunction of logical propositions (equation 2.11). For any measurable A ⊂ E_m:

(f_i ∧ f_j)(x) = 0 ∀x ∈ A  ⟺  f_i(x) = 0 ∀x ∈ A  or  f_j(x) = 0 ∀x ∈ A,  (2.16)

which means that a necessary and sufficient condition for f_i ∧ f_j to give a null probability to a subset A is that either f_i or f_j gives a null probability to A. This last condition implies that the measure engendered by f_i ∧ f_j is absolutely continuous with respect to the measures engendered by f_i and f_j respectively. Using Nikodym's theorem it can be shown (Descombes 1972) that f_i ∧ f_j may then necessarily be written in the form

(f_i ∧ f_j)(x) = f_i(x) f_j(x) Φ(f_i, f_j),  (2.17)

where Φ(f_i, f_j) is a locally Lebesgue integrable, positive function. This last condition is strong, because it is valid everywhere in E_m, in particular where f_i or f_j are null (otherwise this equation would be trivial).
The simplest choice for Φ(f_i, f_j) in order to satisfy conditions (b) and (c) is to take it as independent of f_i and f_j. Condition (d) then imposes

Φ(f_i, f_j) = 1/µ.  (2.18)

Condition (a) is then automatically verified.

The above discussion suggests then the following definition: Let s_i and s_j be two states of information represented respectively by the density functions f_i and f_j, and let µ be a density function representing the state of null information. By definition, the conjunction of s_i and s_j, denoted s = s_i ∧ s_j, is a state of information represented by the density function f(x) given by:

f(x) = f_i(x) f_j(x) / µ(x).  (2.19)

The density function f(x) is not necessarily normalizable, but except in some ad hoc examples, in most actual problems when one or both of the density functions f_i(x) or f_j(x) is normalizable, the density function f(x) is also normalizable, i.e. it is, in fact, a probability density. In the following sections we will show that the definition (2.19) allows for a simple solution of general inverse problems. In the appendix we recall the definition of marginal probability densities, we show that the conditional p.d.f. can be defined as a particular case of conjunction of states of information, and demonstrate the Bayes theorem (which is not used in this work because it is too restrictive for our purposes). Let us emphasize that, given two states of information s_i and s_j on a system S, the resulting state of information does not necessarily correspond to the conjunction s_i ∧ s_j. The conjunction, as defined above, must be used to combine two states of information only if these states of information have been obtained independently, as for example for two independent physical measurements on a given set of parameters, or for combining experimental and theoretical information (see Section 6). Let us conclude this section with the remark that in our use of probability calculus we do not use concepts such as random variable, realization of a random variable, true value of a parameter, and so on. Our density functions are interpreted in terms of human knowledge, rather than in terms of statistical properties of the Earth.
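Definition (2.19) can be illustrated numerically. In the sketch below (all numbers hypothetical, µ uniform over a bounded grid), the conjunction of two Gaussian states of information on one parameter is again Gaussian, with the inverse-variance-weighted mean familiar from least squares, and µ acts as the neutral element required by condition (d):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 40001)
mu = np.ones_like(x)                      # uniform null information (unnormalized)

def gaussian(m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def conjunction(fi, fj):
    """Equation (2.19): f = fi * fj / mu, renormalized for display."""
    f = fi * fj / mu
    return f / np.trapz(f, x)

f1, f2 = gaussian(1.0, 2.0), gaussian(3.0, 1.0)
f = conjunction(f1, f2)

# For two Gaussian states with uniform mu, the conjunction is the
# inverse-variance-weighted Gaussian familiar from least squares.
w1, w2 = 1 / 2.0 ** 2, 1 / 1.0 ** 2
m_expected = (w1 * 1.0 + w2 * 3.0) / (w1 + w2)       # = 2.6
s_expected = np.sqrt(1 / (w1 + w2))

m_num = np.trapz(x * f, x)
s_num = np.sqrt(np.trapz((x - m_num) ** 2 * f, x))
print(m_num, s_num)

# mu is the neutral element: f1 ∧ mu leaves f1 unchanged (condition d).
assert np.allclose(conjunction(f1, mu), f1 / np.trapz(f1, x))
```

The operation treats the two inputs symmetrically, as conditions (b) and (c) demand, which is what distinguishes it from the asymmetric prior/likelihood roles of the Bayes theorem mentioned above.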
Of course, we accept statistics, and we use the language of statistics when statistical computations are possible, but this is generally not the case in geophysical experiments.

3 The Null Information on a System

As the concept of null information is not straightforward, let us discuss it in some detail and start with some examples. Assume that our problem consists in the location of an earthquake focus from some set of data. Assume also that we are using Cartesian coordinates X, Y, Z. The question is: which will be the density function µ(x, y, z) which is least informative on the location of the focus? The intuitive answer is that the least informative density function will be the one that assigns the same probability P to all regions of equal volume V:

P = const. · V.  (3.1)

Since in Cartesian coordinates V = ∫ dx dy dz, equation (2.2) gives the solution:

µ(x, y, z) = const.  (3.2)

If instead of Cartesian coordinates we use spherical coordinates R, Θ, Φ, the null information density function µ′(r, θ, φ) may be obtained from equation (3.1), writing the elementary volume in spherical coordinates, dV = r² sin θ dr dθ dφ, or from equation (3.2) by means of a change of variables. We arrive at

µ′(r, θ, φ) = const. · r² sin θ,  (3.3)

which is far from a constant function. We see in this example that the density function representing the null information need not be constant. We will now try to solve a less trivial question. Let V be some velocity. Could the null information density function be µ(v) = const.? To those who are tempted to answer yes, we ask another question. Let N be the corresponding slowness, N = 1/V. Could the null information density function be µ′(n) = const.? Obviously, if µ(v) is constant, µ′(n) cannot be, and vice-versa.
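This incompatibility is just the change-of-variables formula (2.3) at work. A short numerical check (hypothetical velocity interval, not from the paper) shows that a density constant in v is far from constant in n, while the form const./v discussed below keeps its form under the transformation:

```python
import numpy as np

v = np.linspace(1.0, 5.0, 200001)
n = 1.0 / v                      # slowness; the Jacobian is |dv/dn| = 1/n**2

# Case 1: mu(v) = const.  Transforming with (2.3) gives mu'(n) = const / n**2,
# far from constant: "no information on v" would be information on n.
mu_v = np.ones_like(v)
mu_n = mu_v * (1.0 / n ** 2)     # the same density expressed in slowness

# Case 2: mu(v) = const / v (the Jeffreys form).  Then
# mu'(n) = (1/v) * (1/n**2) = n / n**2 = 1/n: exactly the same functional form.
j_v = 1.0 / v
j_n = j_v * (1.0 / n ** 2)
assert np.allclose(j_n, 1.0 / n)
print("const./v keeps its form under v -> 1/v; a flat density does not.")
```

The symmetry exhibited in Case 2 is the numerical counterpart of the argument developed in the rest of this section.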
To properly define the null information density function µ, we will follow Jaynes (1968), who suggested that suitable density functions are obtained under the condition of invariance of form of the function µ under the transformation groups which leave invariant the equations of physics (see also Rietsch 1977). Clearly, the form of µ must be invariant under a change of space-time origin and under a change of space-time scale. To see its consequences, let O and Ô be two observers, and let (X, Y, Z, T) and (X̂, Ŷ, Ẑ, T̂) be their coordinate systems. The fact that observer Ô has chosen a different space-time origin and scale is easily written in Cartesian coordinates:

X̂ = X₀ + a X,  Ŷ = Y₀ + a Y,  Ẑ = Z₀ + a Z,  T̂ = T₀ + b T,  (3.4)

where a and b are constants. Thus, by the definition of velocity:

v̂ = dr̂/dt̂ = (a/b) dr/dt = c v,  (3.5)

where c = a/b is a new constant. Let µ(v) be the null information density function for O and µ̂(v̂) be the one of Ô. From equations (3.5) and (2.3) we must have:

µ(v) = µ̂(v̂) |dv̂/dv| = c µ̂(c v).  (3.6)

The invariance under transformations (3.4) will be realized if µ and µ̂ are the same function, that is:

µ(w) = µ̂(w)  (3.7)

for all w. From equations (3.6) and (3.7) it follows

µ(v) = c µ(c v),  (3.8)

i.e.

µ(v) = const./v.  (3.9)

This result may appear puzzling to some. Let us ask which is the form of the null information density function for the slowness N = 1/V. We readily find

µ′(n) = µ(v) |dv/dn| = const./n.  (3.10)

We see that the equivalent parameters V and N have null information density functions of exactly the same form. In fact, it was in order to warrant this type of symmetry between all the powers of a parameter that Jeffreys (1939, 1957) suggested assigning to all continuous parameters X known to be positive a density function, representing the null information, of the form const./x. Some formalisms of inverse problems attempt a definition of some probabilistic properties in parameter space (computation of standard deviations, etc.). We claim that these kinds of problems cannot be consistently posed without explicitly stating the null information density function µ. In most ordinary cases the choice

µ(x) = const.  (3.11)

will give reasonable results. Nevertheless we must emphasize that the solution of the same problem using a different set of parameters will be inconsistent with the choice of equation (3.11) for representing the state of null information in the new set of parameters, unless the change of parameters is linear.

4 Data and A Priori Information

Among the set of parameters X describing a system S, the parameters describing the outputs of the measuring instruments are named data and written D = (D1, ..., Dr). The rest of the parameters are then named parameters stricto sensu, or, briefly, parameters, and are written P = (P1, ..., Ps). If a partition of X into X = (D, P) is made, then any density function on X may be equivalently written:

f(x) = f(d, p).  (4.1)

Let us consider a particular geophysical measurement, for example the act of reading the arrival time of a particular phase on a seismogram.
In the simplest case the seismologist puts all the information he has obtained from his measurement in the form of a given value, say t, and an uncertainty, say σ_t. In more difficult cases, he may hesitate between two or more values. What he may do, more generally, is to define, for each time interval Δt on the seismogram, the probability ΔP which he assigns to the arrival time t to be in the interval Δt. Doing this, he is putting the information which he obtains from his measurement into the form of a probability density ρ(t) = ΔP/Δt. This probability density can be asymmetric, multimodal, etc. Extracting from this probability density a few estimators, such as mean or variance, would certainly lead to a loss of information; thus we have to take as an elementary datum the probability density ρ(t) itself. Let us now consider a non-directly-measurable parameter P_α. Some examples of a priori information are:

(a) We know only that P_α is bounded by two values: a ≤ p_α ≤ b. We will obviously represent this a priori information by a density function which is null outside the interval and which coincides with the null information density function inside the interval.

(b) Inequality constraint P_α ≤ P_β: we take a density function null for p_α > p_β and equal to the null information density function for p_α ≤ p_β.

(c) Some parameters P_{α+1}, P_{α+2}, ..., P_β are spatially or temporally distributed, and we know that their variation is smooth. Accordingly, we will represent this a priori information by using a joint density function ρ(p_{α+1}, p_{α+2}, ..., p_β) with the corresponding non-null assumed correlations (covariances).

(d) We have some diffuse a priori information about some parameters. In that case we will define an a priori density function with weak limits and large variances.

(e) We have no a priori information at all. This a priori information is then represented by the null information function µ.

We see then that we may assume the existence of a density function

ρ(x) = ρ(d, p),  (4.2)
named the a priori density function, representing both the results of measurements and all a priori information on parameters.
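Case (c) above can be sketched numerically. The covariance model, sizes, and correlation lengths below are hypothetical illustrations, not from the paper: drawing models from a Gaussian a priori density whose covariance has non-null off-diagonal terms with a long correlation length yields profiles with small variation between consecutive parameters, i.e. smoothness imposed a priori:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 50                                    # number of spatially distributed parameters

def smooth_prior_samples(sigma, corr_len, n_draws=500):
    """Gaussian a priori density with covariance C_ij = sigma^2 exp(-(i-j)^2 / (2 L^2)).
    The off-diagonal (covariance) terms are what encode the a priori smoothness."""
    i = np.arange(s)
    c0 = sigma ** 2 * np.exp(-0.5 * ((i[:, None] - i[None, :]) / corr_len) ** 2)
    chol = np.linalg.cholesky(c0 + 1e-8 * np.eye(s))   # jitter for numerical safety
    return chol @ rng.standard_normal((s, n_draws))

rough = smooth_prior_samples(1.0, 0.5)    # nearly uncorrelated parameters
smooth = smooth_prior_samples(1.0, 8.0)   # strongly correlated -> smooth profiles

# Average squared jump between neighbouring parameters: a long correlation
# length forces small variation between consecutive values.
jump = lambda m: np.mean(np.diff(m, axis=0) ** 2)
print(jump(rough), jump(smooth))
```

The same covariances reappear in Section 7, where they control the a posteriori spatial resolution.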

5 Theoretical Relationships

A theoretical relationship is usually viewed as a functional relation between the values of the parameters:

F(x) = F(d, p) = 0.  (5.1)

Often the form (5.1) of a functional relationship may be simplified and may be written (see Figure 1):

d = G(p).  (5.2)

Figure 1: An exact theory, viewed as a functional relationship.

This view is too restrictive. In most cases, even if the value p is given, we are not able to exactly compute the corresponding value of d, because our theory is incomplete, or because the theory contains some random parameters, or because we have roughly parametrized the system under study. In such cases, to be rigorous, we may exhibit not the value d = G(p) but the probability density for d, given p, i.e. the conditional probability density (see Figure 2):

θ(d | p).  (5.3)

Figure 2: Putting error-bars on the theoretical relation d = G(p).

We will see in Section 10 how to display such a conditional probability density for actual problems. In all generality, we will assume that any theoretical relationship may be represented by a joint density function

θ(x) = θ(d, p).  (5.4)

From the definition of conditional probability (see Appendix),

θ(d, p) = θ(d | p) θ_p(p),  (5.5)

where θ_p(p) is the marginal density function for P. In the class of problems where the simplification (5.2) is used, the theory does not impose any constraint on P but only on D. Equation (5.5) may then be rewritten

θ(d, p) = θ(d | p) µ_p(p),  (5.6)

where µ_p(p) is the null information density function. The particular case of an exact theory (equation 5.2) obviously corresponds to θ(d | p) = δ(d − G(p)), where δ is the Dirac distribution. So, for an exact theory:

θ(d, p) = δ(d − G(p)) µ_p(p).  (5.7)

In cases where a rigorous computation of θ(d | p) cannot be made, but where we have an idea of the theoretical error bar σ_T, choices of θ(d | p) of a form similar to

θ(d | p) = const. · exp{−(1/2) [d − G(p)]² / σ_T²}  (5.8)

may be good enough to take into account this theoretical error.
In any case, we assume that theoretical relationships are in general represented by the joint density function of equation (5.4), which will be named the theoretical density function.

6 Statement and Solution of Inverse Problems

Let S be a physical system, and let X be a parametrization of S. In Section 3 we have defined the density function µ(x) representing the state of null information on the system. In Section 4 we have defined the density function ρ(x) representing all a priori information on the system, in particular the results of the measurements and a priori constraints on parameters. In Section 5 we have defined the density function θ(x) representing the theoretical relationships between parameters. The conjunction of ρ(x) and θ(x) gives a new state of information, which will be named the a posteriori state of information. The corresponding density function will be denoted σ(x) and is given, using equation (2.19), by

σ(x) = ρ(x) θ(x) / µ(x).  (6.1)

To examine inverse problems we separate our set of parameters X into the subsets X = (D, P), representing data and parameters stricto sensu respectively. Equation (6.1) may then be rewritten

σ(d, p) = ρ(d, p) θ(d, p) / µ(d, p).  (6.2)

From this equation we may compute the a posteriori marginal density functions:

σ_d(d) = ∫ [ρ(d, p) θ(d, p) / µ(d, p)] dp,  (6.3)

σ_p(p) = ∫ [ρ(d, p) θ(d, p) / µ(d, p)] dd.  (6.4)

Equation (6.4) performs the task of transferring to parameters, via theoretical correlations, the information contained in the data set. This is, by definition, the solution to an inverse problem (see Figure 3).

Figure 3: (a) The theoretical model is often non-exact (simplified, rough parameterization, etc.). We can then introduce the theoretical relationship between parameters as a density function θ(d, p) (see Section 5). (b) The solution of the problem is then defined by σ(x) = ρ(x) θ(x) / µ(x). If the a priori density function contains small variances for data and great variances for parameters, the marginal density function σ_p(p) solves an inverse problem. On the contrary, if in ρ(x) the data have large variances and the parameters have small variances, σ_d(d) solves the forward problem.

Equation (6.3) solves what could be named a generalized forward problem. In most cases the a priori information on D is independent from the a priori information on P:

ρ(d, p) = ρ_d(d) ρ_p(p),  (6.5)

and the theoretical density function is obtained in the form of a conditional density function (equation 5.6):

θ(d, p) = θ(d | p) µ_p(p).  (6.6)

Equation (6.4) may then be simplified to

σ_p(p) = ρ_p(p) ∫ [ρ_d(d) θ(d | p) / µ_d(d)] dd.  (6.7)

If, furthermore, the theoretical relationship may be considered as exact, i.e. we can write d = G(p), then, using equation (5.7),

θ(d | p) = δ(d − G(p)).  (6.8)

Equation (6.7) may be easily integrated to:

σ_p(p) = ρ_p(p) ρ_d(G(p)) / µ_d(G(p)).  (6.9)

This last equation solves the inverse problem for an exact, non-linear theory with arbitrary a priori constraints on parameters ρ_p, and an arbitrary probabilistic distribution of data ρ_d. Returning to the general solution (6.4), let us answer the question of displaying the information contained in σ_p(p). If we are interested in a particular parameter, say P₁, all the information on P₁ is contained in the marginal density function (see Appendix):

σ₁(p₁) = ∫ σ_p(p) dp₂ dp₃ ... dp_s.  (6.10)
As far as we are interested in the parameter P₁, and not in the correlations between this and other parameters, σ₁(p₁) exhibits all the available information about P₁. For example, from σ₁(p₁) we can precisely answer questions such as the probability of P₁ lying between two given values. Alternatively, from σ₁(p₁) it is possible to extract the mean value, the median value, the maximum-likelihood value, the standard deviation, the mean deviation, or any estimator we need. Let us remark that it is possible to compute from the general solution σ_p(p) the a posteriori mathematical expectation:

E(P) = ∫ p σ_p(p) dp,  (6.11)

or the a posteriori covariance matrix:

C = E{[P − E(P)] [P − E(P)]ᵀ} = ∫ [p − E(P)] [p − E(P)]ᵀ σ_p(p) dp = ∫ p pᵀ σ_p(p) dp − E(P) E(P)ᵀ.  (6.12)

Estimators such as E(P) and C are similar to what is obtained in traditional approaches to inverse problems, but here they can be obtained without any linear approximation.
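For a one-parameter toy problem (the theory d = G(p) = p² and all numbers below are hypothetical, chosen only for illustration) with an exact non-linear theory, uniform µ, and Gaussian-shaped ρ_p and ρ_d, equation (6.9) and the estimators just listed can be evaluated directly on a grid:

```python
import numpy as np

# Hypothetical problem: exact non-linear theory d = G(p) = p**2, Gaussian
# a priori density on p, Gaussian experimental density on d (observed value 5),
# uniform null information.  Equation (6.9) then reads
#     sigma_p(p) ∝ rho_p(p) * rho_d(G(p)).
p = np.linspace(0.01, 5.0, 100001)

G = lambda p: p ** 2
rho_p = np.exp(-0.5 * ((p - 2.0) / 0.5) ** 2)            # a priori information
rho_d = lambda d: np.exp(-0.5 * ((d - 5.0) / 0.5) ** 2)  # experimental information

sigma_p = rho_p * rho_d(G(p))
sigma_p /= np.trapz(sigma_p, p)                          # a posteriori density

# Any estimator can now be extracted from sigma_p itself:
p_map = p[np.argmax(sigma_p)]                            # maximum-likelihood point
p_mean = np.trapz(p * sigma_p, p)                        # equation (6.11)
p_std = np.sqrt(np.trapz((p - p_mean) ** 2 * sigma_p, p))
mask = (p > 2.0) & (p < 2.5)
prob_interval = np.trapz(sigma_p[mask], p[mask])         # P(2.0 < p < 2.5)
print(p_map, p_mean, p_std, prob_interval)
```

No linearization of G is needed at any point: the non-linearity only enters through one forward evaluation per grid point, as discussed in Section 8.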

7 Existence, Uniqueness, Consistency, Robustness, Resolution

In inverse problem theory, it is not always possible to prove the existence or the uniqueness of the solution. With our approach, the existence of the solution is merely the existence of the a posteriori density function σ(x) of equation (6.1), and results trivially from the assumption of the existence of ρ(x) and θ(x). Of course we can obtain a solution σ(x) with some pathologies (non-normalizable, infinite variance, non-unique maximum-likelihood point, etc.), but the solution is the a posteriori density function, with all the pathologies it may present. Consistency is warranted because equation (6.1) is consistent, i.e. the function σ(x) is a density function. Let us suppose again that we use velocity as a parameter and obtain as solution the density function σ(v); then, using the same data in the same problem with the same discretization but using slowness as parameter, we obtain as a solution the density function σ′(n). Since equation (6.1) is consistent, σ′(n) will be related to σ(v) by the usual formula (2.3), σ′(n) = σ(v) |dv/dn|: σ′(n) and σ(v) represent exactly the same state of information. In those approaches where all the information contained in the data is condensed into the form of central estimators (mean, median, etc.), the notion of robustness must be carefully examined if we suspect that there may be blunders in the data set (Claerbout and Muir 1973). In our approach, the suspicion of the presence of blunders in a data set may be introduced using long-tailed density functions in ρ_d, decreasing much more slowly than Gaussian functions (as, for example, exponential functions). Our experience shows that, with such long-tailed functions, the solution σ_p(p) is rather insensitive to one blunder. The concept of resolution must be considered under two different aspects: to what extent has a given parameter been resolved by the data? and what is the spatial resolution attained with our data set?
For the first aspect, let us consider a parameter P_i whose value does not influence the values of the data. This means that the theoretical density function θ(d, p) does not depend on P_i. Even in this case we can obtain a certain amount of information on this parameter, if the other parameters are resolved by the data, and if the a priori density function ρ_p(p) introduces some correlation between parameters. Furthermore, let us assume that no correlation is introduced by ρ_p(p) between P_i and the other parameters. This is the worst case of non-resolution we can imagine for a parameter. Under these assumptions equation (6.7) can be written

σ_p(p) = ρ_i(p_i) ρ_q(q) ∫ [ρ_d(d) θ(d | q) / µ_d(d)] dd,  (7.1)

where q is the vector (p₁, ..., p_{i−1}, p_{i+1}, ..., p_s). After integration over the set of q, we find (dropping the multiplicative constant):

σ_i(p_i) = ρ_i(p_i),  (7.2)

which means that for a completely unresolved parameter, the a posteriori marginal density function equals the a priori one. The more σ_i(p_i) differs from ρ_i(p_i), the more the parameter P_i has been resolved by the data set. The concept of spatial resolution applies to a different problem. Assume that P_i, ..., P_j form a set of parameters spatially or temporally distributed, as for example when the parameters represent the seismic velocities of successive geologic layers, or values from the sampling of some continuous geophysical record. Assume that we are not interested in obtaining the a posteriori density function for each parameter, but only the a posteriori mean values, as given by equation (6.11). There are two reasons for E(P) to be a smooth vector, i.e. to have small variations between consecutive values E(P_i) and E(P_{i+1}). The first reason may be the type of data used; it is well known, for instance, that long-period surface-wave data only give a smoothed vision of the Earth. The second reason for obtaining a smoothed solution may simply be that we decided a priori to impose such smoothness, introducing non-null a priori covariances in ρ_p(p) (see Section 4).
This question of spatial resolution has been clearly pointed out, and extensively studied, by Backus and Gilbert. From our point of view, this problem must be solved by studying the a posteriori correlations between parameters. From equation (6.12):

C_ij = ∫∫ p_i p_j σ(p_i, p_j) dp_i dp_j − E(p_i) E(p_j).   (7.3)

If, for a given j, we plot the curve C_ij versus i, we simultaneously obtain information on the a posteriori variance of the parameter (C_ii) and on the spatial resolution (the length of correlation); see Figure 4.

Figure 4: Rows (or columns) of the a posteriori covariance matrix, showing both the length of spatial resolution and the a posteriori variance of the parameter.

In applications of Backus and Gilbert's point of view on inverse problems it is usual to study the trade-off between variance and resolution in order to choose the desired solution. In our approach, such a trade-off also exists: modifying the a priori variances or a priori correlations of parameters in ρ_p(p) results in a change of the a posteriori variances and resolutions. But, in our opinion, the a priori information on the values of parameters, as contained in ρ_p(p), must not be stated in order to obtain a pleasant solution, but in order to closely correspond to the actual a priori information.

8 Computational Aspects

For linear problems, all the integrations of Section 6 can often be performed analytically, and the most general solution can sometimes be reached easily (see for example the next section). Linear procedures may of course be used to obtain adequate approximations for the solution of weakly non-linear problems. For non-linear problems, the solution is less straightforward. Often the integrations in equation (6.4) or (6.7) can be performed analytically, no matter what the degree of non-linearity (see for example Section 10). The computation of the probability density σ_p(p) at a given point p then involves mainly the solution of a forward problem. If the number of parameters is small we can then explicitly compute the marginal probability density for each one of the parameters, using a grid in the parameter space and ordinary methods of numerical integration. If the number of parameters is great, Monte-Carlo methods of numerical integration should be used. The possibility of conveniently solving non-linear inverse problems will then depend on the possibility of solving the forward problem a large enough number of times. Let us remark that if we are not able to compute the marginal probability density for each one of the parameters of interest, we can limit ourselves to the computation of mean values and covariances (equations (6.11) and (6.12)).
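The Monte-Carlo route mentioned above can be sketched as follows. This is an assumed importance-sampling variant, not a procedure from the paper: samples are drawn from the a priori density, weighted by the data likelihood, and weighted moments estimate the a posteriori mean and covariance. The forward model `g`, the observed data and all covariances are invented toy values.

```python
import numpy as np

# Monte-Carlo sketch of Eqs. (6.11)-(6.12): draw samples from the a priori
# density, weight each by the data likelihood, and form weighted means and
# covariances. The non-linear forward model g and all numbers are toy
# assumptions, not from the paper.

rng = np.random.default_rng(0)

def g(p):
    """Toy non-linear forward problem: predicted data from parameters."""
    return np.array([p[0] ** 2 + p[1], np.sin(p[0]) + p[1]])

d_obs = np.array([1.5, 1.0])       # observed data (assumed)
Cd_inv = np.eye(2) / 0.1 ** 2      # inverse data covariance (assumed)

# Sample the a priori density (here a standard normal on 2 parameters)
samples = rng.normal(size=(20000, 2))

# Likelihood weight for each sample (each weight needs one forward solve)
res = np.array([g(p) for p in samples]) - d_obs
w = np.exp(-0.5 * np.einsum('ni,ij,nj->n', res, Cd_inv, res))
w /= w.sum()

# A posteriori mean and covariance estimates
mean = w @ samples
cov = (samples - mean).T @ ((samples - mean) * w[:, None])
```

As the text notes, the cost is dominated by the number of forward solves, here one per sample; the estimates converge as the sample size grows.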
In problems where the solution of the forward problem is so costly that neither the explicit computation of the probability density in the parameter space nor the computation of mean values and covariances can be performed in a reasonable computer time, we suggest restricting the problem to the search of the maximum likelihood point in the parameter space (the point at which the probability density is maximum). This computation is often very easy to perform, and classical methods can be used for particular assumptions about the form of the probability densities representing the experimental data and the a priori assumptions on parameters. For example, it is easy to see that with gaussian assumptions, the search of the maximum likelihood point simply becomes a classical least-squares problem (Tarantola and Valette, 1982). With exponential assumptions, the search of the maximum likelihood point becomes an L1-norm problem, which shows that the exponential assumption gives a result more robust than the gaussian one. With the use of step functions, the a posteriori probability density in the parameter space is constant inside a given bounded domain. The point of this domain that maximizes some function of the parameters can be reached using linear or non-linear programming techniques. We see thus that ordinary methods for solving parameterized inverse problems can be deduced as particular cases of our approach, and we want to emphasize that such methods should only be used when the explicit computation of the probability density in the parameter space, or the non-linear computation of mean values and covariances, would be too time consuming.

9 Gaussian Case

Since the gaussian linear problem is widely used, we will show how the usual formulae may be derived from our results. By a gaussian problem we mean that the a priori density function has a gaussian form for all parameters:

ρ(x) = const · exp{−(1/2) (x − x_0)^T C_0^{−1} (x − x_0)},   (9.1)

where x_0 is the a priori expected value and C_0 is the a priori covariance matrix.
By a linear problem we mean that, if the theory may be assumed to be exact, the theoretical relationship between parameters takes the general linear form:

F x = 0.   (9.2)

On the other hand, if theoretical errors may not be neglected, we assume that the theoretical density function also has a gaussian form:

θ(x) = const · exp{−(1/2) (F x)^T C_T^{−1} (F x)},   (9.3)

where the covariance matrix C_T describes the theoretical errors and tends to vanish for an exact theory. We finally assume that, for the parameters chosen, the null information is represented by a constant function:

μ(x) = const.   (9.4)

The a posteriori density function (equation (6.1)) is then given by:

σ(x) = ρ(x) θ(x) = const · exp{−(1/2) [(x − x_0)^T C_0^{−1} (x − x_0) + x^T F^T C_T^{−1} F x]},   (9.5)

and after some matrix manipulations (see Appendix) we obtain

σ(x) = const · exp{−(1/2) (x − x*)^T C*^{−1} (x − x*)},   (9.6)

where

x* = P x_0,   (9.7)
C* = P C_0,   (9.8)

with

P = I − Q,
Q = C_0 F^T (F C_0 F^T + C_T)^{−1} F.   (9.9)

Equation (9.6) shows that the a posteriori density function is gaussian, centered at x* and with covariance matrix C*. If theoretical errors may be neglected, i.e. if equation (9.2) holds, we just drop the term C_T in equation (9.9) to obtain the corresponding solution. To compare our results to those published in the literature, let us assume that the separation of X into the sets D and P is made:

x = (d, p),   x_0 = (d_0, p_0),
C_0 = ( C_dd  C_dp
        C_pd  C_pp ).   (9.10)

We also assume that equation (9.2) simplifies to:

F x = [I  −G] (d, p) = d − G p = 0.   (9.11)

Substituting equations (9.10), (9.11) in equations (9.7), (9.8), (9.9) we obtain the solution published by Franklin (1970) for the parametrized problem, which was obtained using the classical least-squares approach. Our equations (9.7), (9.8), (9.9) are more compact than those of Franklin because we use the parameter space E^m, and more general because we allow for theoretical errors (C_T). Let us emphasize that in traditional approaches x* is interpreted as the best estimator of the true solution and C* is interpreted as the covariance matrix of the estimator. Our approach demonstrates that the a posteriori density function is gaussian, and that x* and C* are, respectively, the center and the dispersion of that density function. The results shown here only apply to the linear least-squares problem. For the non-linear problem, the reader should refer to Tarantola and Valette (1982).

Table 1: Coordinates of stations (km), arrival times and errors (s); columns: station, x, y, z, t, σ_t. [Table entries not recoverable from the transcription.]

10 Example with Actual Data: Hypocenter Location

The data for a hypocenter location are the arrival times of phases at stations. The basic unknowns of the problem are the spatio-temporal coordinates of the focus.
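Equations (9.7)-(9.9) above translate directly into a few lines of linear algebra. The following sketch builds a toy linear problem d = G p, with F = [I, −G] so that F x = d − G p; the operator G, the a priori values and all covariances are invented numbers, and the theory is taken as nearly exact (very small C_T).

```python
import numpy as np

# Sketch of Eqs. (9.7)-(9.9) for a toy linear problem d = G p. The joint
# vector is x = (d, p) and F = [I, -G], so that F x = d - G p. The operator
# G, the a priori values and the covariances are all invented numbers.

rng = np.random.default_rng(1)
G = rng.normal(size=(3, 2))                  # linear forward operator (toy)
F = np.hstack([np.eye(3), -G])               # F x = d - G p

d0 = rng.normal(size=3)                      # a priori (observed) data values
p0 = np.zeros(2)                             # a priori parameter values
x0 = np.concatenate([d0, p0])

C0 = np.diag([0.1] * 3 + [10.0] * 2)         # a priori covariance (block diagonal)
CT = 1e-8 * np.eye(3)                        # nearly exact theory

Q = C0 @ F.T @ np.linalg.inv(F @ C0 @ F.T + CT) @ F   # Eq. (9.9)
P = np.eye(5) - Q
x_star = P @ x0                              # a posteriori center, Eq. (9.7)
C_star = P @ C0                              # a posteriori covariance, Eq. (9.8)
```

With C_T → 0 the a posteriori center satisfies the theory exactly (F x* ≈ 0), and C* is a symmetric matrix never larger than C_0, which is the "shrinkage" of uncertainty produced by the data.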
Some of the parameters which may be relevant to the problem are: the coordinates of the seismic stations, the parameters describing the velocity model, etc. We will assume that the coordinates of the stations are accurately enough known to treat them as constants and not as parameters. The parameters describing the velocity model would be taken into account if we were performing a simultaneous inversion for hypocenter location and velocity determination, but this is not the case in this simple illustration. In the example treated below we will then consider the parameters of the velocity model as constants, and we will assume that the only effect of our imprecise knowledge of the medium is not to allow an exact theoretical computation of the arrival times at the stations from a known source. Let

t = g(X, Y, Z, T)   (10.1)

be the theoretical (and not exact) relationship between arrival times and the spatio-temporal coordinates of the focus, derived from the wave propagation theory and the velocity model. Let C_T be a covariance matrix which is a reasonable estimation of the errors made when theoretically computing the arrival times at the stations from a source at (X, Y, Z). If we assume that the theoretical errors are gaussian, the theoretical relationship between data and parameters will be written

θ(t | X, Y, Z, T) = const · exp{−(1/2) [t − g(X, Y, Z, T)]^T C_T^{−1} [t − g(X, Y, Z, T)]},   (10.2)

which corresponds to equation (5.8). The next simplifying hypothesis is to assume that our data possess a gaussian structure. Let t_0 be their vector of mean values and C_t their covariance matrix:

ρ(t) = const · exp{−(1/2) (t − t_0)^T C_t^{−1} (t − t_0)}.   (10.3)

As all our data and parameters consist in Cartesian space-time coordinates, the null information function is constant and need not be considered (see Section 3). The a posteriori density function for parameters is directly given by equation (6.7), and after analytical integration we obtain (see Appendix):

σ(X, Y, Z, T) = ρ(X, Y, Z, T) exp{−(1/2) [t_0 − g(X, Y, Z, T)]^T (C_t + C_T)^{−1} [t_0 − g(X, Y, Z, T)]}.   (10.4)

The a posteriori density function (10.4) gives the general solution for the problem of spatio-temporal hypocenter location in the case of gaussian data. We emphasize that this solution does not contain any linear approximation. We are sometimes interested in the spatial location of the quake focus, and not in its temporal location. The density function for the spatial coordinates is obtained, of course, by the marginal density function

σ(X, Y, Z) = ∫_{−∞}^{+∞} σ(X, Y, Z, T) dT,   (10.5)

where we integrate over the range of the origin time T. Classical least-squares computations of hypocenters are based on the maximization of σ(X, Y, Z, T). It is clear that if we are only interested in the spatial location we must maximize σ(X, Y, Z), given by (10.5), instead of maximizing σ(X, Y, Z, T). Let us show how the integration in (10.5) can be performed analytically. We will assume that, while we may sometimes have a priori information about the spatial location of the focus (from tectonic arguments, or from past experience in the region, etc.), it is generally impossible to have a priori information, independent from the data, about the origin time T.
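The posterior of equation (10.4) can be evaluated directly, without any linearization, by scanning a grid of trial hypocenters. The following sketch does this for an invented two-dimensional constant-velocity medium with noise-free synthetic arrivals (all geometry, velocity and covariance values are assumptions), fixing the epicentral coordinate and origin time at their true values and scanning depth only.

```python
import numpy as np

# Sketch of Eq. (10.4) for a toy hypocenter problem: constant velocity,
# a 2-D (x, z) medium, flat prior, and a grid over trial depths only
# (epicentre and origin time held at their true values). All geometry,
# velocity and covariance values are invented for illustration.

v = 5.0                                         # km/s, assumed velocity
stations = np.array([[0.0, 0.0], [10.0, 0.0],   # (x, z) station coords, km
                     [0.0, 10.0], [7.0, 7.0]])
true_xz = np.array([4.0, 6.0])                  # true focus (x, z), km
T_true = 0.0                                    # origin time, s

def g(xz, T):
    """Predicted arrival times: straight-ray travel time + origin time."""
    return np.linalg.norm(stations - xz, axis=1) / v + T

t0 = g(true_xz, T_true)                         # noise-free synthetic data
C = np.eye(len(stations)) * (0.1 ** 2 + 0.2 ** 2)   # C_t + C_T (assumed)
C_inv = np.linalg.inv(C)

depths = np.linspace(0.0, 12.0, 121)            # trial depth grid, km
sigma = np.empty_like(depths)
for k, z in enumerate(depths):
    r = t0 - g(np.array([4.0, z]), T_true)
    sigma[k] = np.exp(-0.5 * r @ C_inv @ r)
```

With noise-free data the density peaks exactly at the true depth; with real data the whole curve `sigma`, not just its maximum, is the answer.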
We will thus assume an a priori density function uniform in T:

ρ(X, Y, Z, T) = ρ_T(T) ρ(X, Y, Z) = const · ρ(X, Y, Z).   (10.6)

The computed arrival time at a station i, g_i(X, Y, Z, T), can be written

g_i(X, Y, Z, T) = h_i(X, Y, Z) + T,   (10.7)

where h_i is the travel time between the point (X, Y, Z) and the station i. With (10.6) and (10.7), equation (10.5) can be integrated (see Appendix) and gives

σ(X, Y, Z) = K ρ(X, Y, Z) exp{−(1/2) [t̂_0 − ĥ(X, Y, Z)]^T P [t̂_0 − ĥ(X, Y, Z)]}.   (10.8)

Here P is a weight matrix, the p_i are weights, and K is a constant:

P = (C_t + C_T)^{−1},   (10.9)
p_i = Σ_j P_ij,   (10.10)
K = (Σ_i p_i)^{−1/2} = (Σ_ij P_ij)^{−1/2}.   (10.11)

Moreover, t̂_0^i is the observed arrival time minus the weighted mean of the observed arrival times,

t̂_0^i = t_0^i − (Σ_j p_j t_0^j) / (Σ_j p_j),   (10.12)

and ĥ_i(X, Y, Z) is the computed travel time minus the weighted mean of the computed travel times,

ĥ_i = h_i − (Σ_j p_j h_j) / (Σ_j p_j).   (10.13)

Note that C_T may depend on (X, Y, Z), and therefore P_ij, p_i, and K may also. Equation (10.8) gives the general solution for the spatial location of a quake focus in the gaussian case. Table 1 shows the observed arrival times and their standard deviations. We have assumed that the theoretical errors are of the form

[C_T(X, Y, Z)]_ij = σ_T² exp{−(1/2) (D_ij / Δ)²},

where D_ij is the distance between station i and station j, σ_T is some theoretical error, and Δ is the correlation length of the errors (the wavelength or the length of the lateral heterogeneities of the medium). By comparison of the layered model of velocities for the Western Pyrenees (Gagnepain et al., 1980) with data from refraction profiles (Gallart, 1980), we have chosen σ_T = 0.2 s and Δ = 0.1 km.
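Equations (10.9)-(10.13) above are easy to implement, and the algebraic identity behind the analytic elimination of the origin time can be verified numerically: the weighted-demeaned misfit t̂ᵀP t̂ equals rᵀP r − (Σ p_i r_i)² / Σ p_i. In the sketch below the station geometry, covariances and arrival times are all invented.

```python
import numpy as np

# Sketch of Eqs. (10.9)-(10.13): the weight matrix P, the weights p_i, and
# the weighted demeaning of observed and computed times. We also check the
# algebraic identity behind the analytic elimination of the origin time:
#   rhat^T P rhat = r^T P r - (sum_i p_i r_i)^2 / (sum_i p_i).
# Station geometry, covariances and times below are invented.

rng = np.random.default_rng(2)
n = 5
coords = rng.uniform(0.0, 30.0, (n, 2))                 # station positions, km
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

Ct = np.diag(rng.uniform(0.05, 0.2, n) ** 2)            # data covariance (toy)
CT = 0.2 ** 2 * np.exp(-0.5 * (D / 10.0) ** 2)          # correlated theory errors

P = np.linalg.inv(Ct + CT)          # Eq. (10.9)
p = P.sum(axis=1)                   # p_i = sum_j P_ij, Eq. (10.10)

t_obs = rng.uniform(0.0, 5.0, n)    # observed arrival times (toy)
h = rng.uniform(0.0, 5.0, n)        # computed travel times (toy)

t_hat = t_obs - (p @ t_obs) / p.sum()    # Eq. (10.12)
h_hat = h - (p @ h) / p.sum()            # Eq. (10.13)

r = t_obs - h
r_hat = t_hat - h_hat
lhs = r_hat @ P @ r_hat
rhs = r @ P @ r - (p @ r) ** 2 / p.sum()
```

The weighted demeaning makes the misfit invariant under a common shift of all times, which is exactly why the origin time drops out of equation (10.8).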

We also assume that no a priori information is known about the epicenter, but that we know that the depth of the hypocenter is greater than 0.5 km (the mean topography):

ρ(X, Y, Z) = ρ(Z) = { 1 if Z ≥ 0.5 km,
                      0 if Z < 0.5 km.   (10.14)

We have then computed numerically from (10.8) the a posteriori marginal density functions for the epicenter and for the depth:

σ(X, Y) = ∫_{0.5 km}^{+∞} σ(X, Y, Z) dZ,   (10.15)
σ(Z) = ∫∫ σ(X, Y, Z) dX dY.   (10.16)

The results are shown in Figures 5 and 6. We have also computed mean values and variances. The corresponding results are, in the local frame of Figure 5:

E(X) = 51.7 km,  σ_X = 1.5 km,
E(Y) = 7.8 km,   σ_Y = 0.9 km,
E(Z) = 5.6 km,   σ_Z = 2.1 km.

Figure 5: (a) and (b) Results of the inverse problem of hypocenter computation. (a) shows the position of the stations and the probability density obtained for the epicentral coordinates. (b) shows the computer output. Curves are visual interpolations.

Figure 6: The probability density for depth. The layered velocity model is also shown. Note the existence of a secondary maximum likelihood point.

We wish to make the following remarks about these results. First, they have been obtained exactly, without the use of linear approximations. We have used numerical integration instead of matrix algebra and the computation of partial derivatives. The results shown in Figures 5 and 6 represent the most general knowledge which can be obtained for the hypocenter coordinates from the arrival times, from the given velocity model, and from the given theoretical model of wave propagation. Since the velocity model is discontinuous in Z, the a posteriori density functions have discontinuities in slope, as is clearly seen in Figure 6 for σ(Z). To the extent that the discontinuities in the velocity model are artificial, the discontinuities of slope are of course also artificial. From Figure 6 it is easy to visualize some of the problems which may affect the maximum likelihood approach.
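The grid marginalizations of equations (10.15)-(10.16) can be sketched as follows. The "posterior" here is a stand-in gaussian blob centred roughly at the mean values reported above, not a real travel-time misfit; a real computation would evaluate equation (10.8) at each grid node instead.

```python
import numpy as np

# Numerical sketch of Eqs. (10.14)-(10.16): marginal densities for the
# epicentre and for the depth, with the a priori constraint rho(Z) = 0 for
# Z < 0.5 km. The "posterior" is a stand-in gaussian blob, not a real
# travel-time misfit; all grid limits are assumptions.

x = np.linspace(45.0, 60.0, 61)
y = np.linspace(0.0, 16.0, 61)
z = np.linspace(0.0, 12.0, 61)
dx, dy, dz = x[1] - x[0], y[1] - y[0], z[1] - z[0]
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')

# Stand-in for exp{-1/2 [t0-h]^T P [t0-h]} centred near the reported means
blob = np.exp(-0.5 * (((X - 51.7) / 1.5) ** 2
                      + ((Y - 7.8) / 0.9) ** 2
                      + ((Z - 5.6) / 2.1) ** 2))
prior = np.where(Z >= 0.5, 1.0, 0.0)        # Eq. (10.14)
sigma = blob * prior

sigma_xy = sigma.sum(axis=2) * dz           # Eq. (10.15), epicentre marginal
sigma_z = sigma.sum(axis=(0, 1)) * dx * dy  # Eq. (10.16), depth marginal
sigma_z /= sigma_z.sum() * dz               # normalize to unit area
```

The depth prior simply zeroes the marginal above 0.5 km, which is how the positivity constraint of equation (10.14) enters the numerical integration.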
If the discontinuity of slope is similar to the one at 5 km depth, we will have secondary maxima. We can also have a discontinuity of slope of the type drawn in Figure 7. In that case, algorithms searching for the maximum likelihood point will oscillate around the point of slope discontinuity, leading to the well-known artificial situation in which hypocenters accumulate at the interface between layers of constant velocity.

11 Conclusion

Our informational approach to probability calculus allows us to formulate inverse problems in such a way that all the necessary constraints (see Section 1) are satisfied. Essentially, we propose to work with the probability densities for parameters rather than with central estimators, as is usually done.

Figure 7: Example of a discontinuity of slope leading to oscillations in maximum likelihood algorithms. The effect will be an artificial accumulation at the interfaces between layers.

The general solution of inverse problems is expressed by the simple formula (6.1). We emphasize that inverse problems cannot be correctly stated until the three density functions ρ(x) (data and a priori information about parameters), θ(x) (theory and theoretical errors), and μ(x) (null information) have been precisely defined. We have demonstrated that the ideas developed in this paper give new insights into the oldest and best known inverse problem in geophysics: the hypocenter location. Of course our theory also applies to more difficult inverse problems. The only practical limitation comes from problems where the solution of the forward problem is very time-consuming and the number of parameters is high.

Acknowledgements. We would like to thank our colleagues of the I.P.G. for very helpful discussions, and Professor G. Jobert, Dr. A. Nercessian, T. Van der Pyl, Dr. J.C. Houard, Dr. Piednoir, and Professor C.J. Allegre for their suggestions. This work has been partially supported by the Recherche Cooperative sur Programme no. 64, Etude Interdisciplinaire des Problemes Inverses. Contribution I.P.G. no.

Appendix: Some Remarks on Probability Calculus

Let X be a parametrization of a physical system S and let X_I = {X_1, ..., X_r} and X_II = {X_{r+1}, ..., X_m} be a partition of X. For any probability density f(x) = f(x_I, x_II) we can define the marginal probability density

f_I(x_I) = ∫ f(x_I, x_II) dx_II.   (A.1)

The interpretation of this definition is as follows: if we admit that f(x_I, x_II) represents all the knowledge that we possess on the whole set of parameters, and if we disregard the parameters X_II, then all the information on X_I is contained in f_I(x_I).

The conditional probability density for X_I, given X_II = x_II^0, may be defined, in our approach, as the conjunction of a general state of information, represented by a probability density f_i(x) = f_i(x_I, x_II), with the information X_II = x_II^0. The information X_II = x_II^0 clearly corresponds to the probability density

f_j(x_I, x_II) = μ_I(x_I) δ(x_II − x_II^0),   (A.2)

because f_j does not contain information on X_I and gives null probability to all values of X_II different from x_II^0 (μ_I(x_I) represents the null information on X_I). Admitting that null informations are independent,

μ(x_I, x_II) = μ_I(x_I) μ_II(x_II),

and using equation (2.19) we obtain the combined probability:

f(x_I, x_II) = f_i(x_I, x_II) μ_I(x_I) δ(x_II − x_II^0) / [μ_I(x_I) μ_II(x_II)].   (A.3)

Using definition (A.1) we obtain

f_I(x_I) = f_i(x_I, x_II^0) / ∫ f_i(x_I, x_II^0) dx_I,   (A.4)

which corresponds to what is ordinarily named the conditional probability density for X_I given f_i(x_I, x_II) and X_II = x_II^0. To follow the usual notation we will write this solution f_i(x_I | x_II^0) rather than f_I(x_I):

f_i(x_I | x_II^0) = f_i(x_I, x_II^0) / ∫ f_i(x_I, x_II^0) dx_I.   (A.5)

The Bayes problem may be stated as follows: Let f(x_I, x_II) be the joint probability density representing all the available information on X_I and X_II. If we learn that X_II = x_II, we obtain g(x_I | x_II) using equation (A.4). On the contrary, if we learn that X_I = x_I, we obtain g(x_II | x_I). What is the relation between g(x_I | x_II) and g(x_II | x_I)? We have

g(x_I | x_II) = f(x_I, x_II) / ∫ f(x_I, x_II) dx_I = f(x_I, x_II) / f_II(x_II),   (A.6)
g(x_II | x_I) = f(x_I, x_II) / ∫ f(x_I, x_II) dx_II,   (A.7)

and hence

g(x_II | x_I) = g(x_I | x_II) f_II(x_II) / ∫ g(x_I | x_II) f_II(x_II) dx_II,

which corresponds to Bayes' theorem. We have thus shown that well-known theorems may be obtained using the concept of the conjunction of states of information. Many other problems may be solved using this concept. Consider for example n independent measurements of a given parameter X.

In the particular case where the null information density is uniform


More information

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine

More information

A simple tranformation of copulas

A simple tranformation of copulas A simple tranformation of copulas V. Durrleman, A. Nikeghbali & T. Roncalli Groupe e Recherche Opérationnelle Créit Lyonnais France July 31, 2000 Abstract We stuy how copulas properties are moifie after

More information

ELECTRON DIFFRACTION

ELECTRON DIFFRACTION ELECTRON DIFFRACTION Electrons : wave or quanta? Measurement of wavelength an momentum of electrons. Introuction Electrons isplay both wave an particle properties. What is the relationship between the

More information

Exam 2 Review Solutions

Exam 2 Review Solutions Exam Review Solutions 1. True or False, an explain: (a) There exists a function f with continuous secon partial erivatives such that f x (x, y) = x + y f y = x y False. If the function has continuous secon

More information

Center of Gravity and Center of Mass

Center of Gravity and Center of Mass Center of Gravity an Center of Mass 1 Introuction. Center of mass an center of gravity closely parallel each other: they both work the same way. Center of mass is the more important, but center of gravity

More information

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7.

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7. Lectures Nine an Ten The WKB Approximation The WKB metho is a powerful tool to obtain solutions for many physical problems It is generally applicable to problems of wave propagation in which the frequency

More information

Construction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems

Construction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems Construction of the Electronic Raial Wave Functions an Probability Distributions of Hyrogen-like Systems Thomas S. Kuntzleman, Department of Chemistry Spring Arbor University, Spring Arbor MI 498 tkuntzle@arbor.eu

More information

A Review of Multiple Try MCMC algorithms for Signal Processing

A Review of Multiple Try MCMC algorithms for Signal Processing A Review of Multiple Try MCMC algorithms for Signal Processing Luca Martino Image Processing Lab., Universitat e València (Spain) Universia Carlos III e Mari, Leganes (Spain) Abstract Many applications

More information

Chaos, Solitons and Fractals Nonlinear Science, and Nonequilibrium and Complex Phenomena

Chaos, Solitons and Fractals Nonlinear Science, and Nonequilibrium and Complex Phenomena Chaos, Solitons an Fractals (7 64 73 Contents lists available at ScienceDirect Chaos, Solitons an Fractals onlinear Science, an onequilibrium an Complex Phenomena journal homepage: www.elsevier.com/locate/chaos

More information

Monte Carlo Methods with Reduced Error

Monte Carlo Methods with Reduced Error Monte Carlo Methos with Reuce Error As has been shown, the probable error in Monte Carlo algorithms when no information about the smoothness of the function is use is Dξ r N = c N. It is important for

More information

Entanglement is not very useful for estimating multiple phases

Entanglement is not very useful for estimating multiple phases PHYSICAL REVIEW A 70, 032310 (2004) Entanglement is not very useful for estimating multiple phases Manuel A. Ballester* Department of Mathematics, University of Utrecht, Box 80010, 3508 TA Utrecht, The

More information

NOTES ON EULER-BOOLE SUMMATION (1) f (l 1) (n) f (l 1) (m) + ( 1)k 1 k! B k (y) f (k) (y) dy,

NOTES ON EULER-BOOLE SUMMATION (1) f (l 1) (n) f (l 1) (m) + ( 1)k 1 k! B k (y) f (k) (y) dy, NOTES ON EULER-BOOLE SUMMATION JONATHAN M BORWEIN, NEIL J CALKIN, AND DANTE MANNA Abstract We stuy a connection between Euler-MacLaurin Summation an Boole Summation suggeste in an AMM note from 196, which

More information

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations Lecture XII Abstract We introuce the Laplace equation in spherical coorinates an apply the metho of separation of variables to solve it. This will generate three linear orinary secon orer ifferential equations:

More information

ELEC3114 Control Systems 1

ELEC3114 Control Systems 1 ELEC34 Control Systems Linear Systems - Moelling - Some Issues Session 2, 2007 Introuction Linear systems may be represente in a number of ifferent ways. Figure shows the relationship between various representations.

More information

Calculus in the AP Physics C Course The Derivative

Calculus in the AP Physics C Course The Derivative Limits an Derivatives Calculus in the AP Physics C Course The Derivative In physics, the ieas of the rate change of a quantity (along with the slope of a tangent line) an the area uner a curve are essential.

More information

1 Heisenberg Representation

1 Heisenberg Representation 1 Heisenberg Representation What we have been ealing with so far is calle the Schröinger representation. In this representation, operators are constants an all the time epenence is carrie by the states.

More information

and from it produce the action integral whose variation we set to zero:

and from it produce the action integral whose variation we set to zero: Lagrange Multipliers Monay, 6 September 01 Sometimes it is convenient to use reunant coorinates, an to effect the variation of the action consistent with the constraints via the metho of Lagrange unetermine

More information

CHAPTER 1 : DIFFERENTIABLE MANIFOLDS. 1.1 The definition of a differentiable manifold

CHAPTER 1 : DIFFERENTIABLE MANIFOLDS. 1.1 The definition of a differentiable manifold CHAPTER 1 : DIFFERENTIABLE MANIFOLDS 1.1 The efinition of a ifferentiable manifol Let M be a topological space. This means that we have a family Ω of open sets efine on M. These satisfy (1), M Ω (2) the

More information

A Note on Exact Solutions to Linear Differential Equations by the Matrix Exponential

A Note on Exact Solutions to Linear Differential Equations by the Matrix Exponential Avances in Applie Mathematics an Mechanics Av. Appl. Math. Mech. Vol. 1 No. 4 pp. 573-580 DOI: 10.4208/aamm.09-m0946 August 2009 A Note on Exact Solutions to Linear Differential Equations by the Matrix

More information

ensembles When working with density operators, we can use this connection to define a generalized Bloch vector: v x Tr x, v y Tr y

ensembles When working with density operators, we can use this connection to define a generalized Bloch vector: v x Tr x, v y Tr y Ph195a lecture notes, 1/3/01 Density operators for spin- 1 ensembles So far in our iscussion of spin- 1 systems, we have restricte our attention to the case of pure states an Hamiltonian evolution. Toay

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

Hyperbolic Systems of Equations Posed on Erroneous Curved Domains

Hyperbolic Systems of Equations Posed on Erroneous Curved Domains Hyperbolic Systems of Equations Pose on Erroneous Curve Domains Jan Norström a, Samira Nikkar b a Department of Mathematics, Computational Mathematics, Linköping University, SE-58 83 Linköping, Sween (

More information

Math 1B, lecture 8: Integration by parts

Math 1B, lecture 8: Integration by parts Math B, lecture 8: Integration by parts Nathan Pflueger 23 September 2 Introuction Integration by parts, similarly to integration by substitution, reverses a well-known technique of ifferentiation an explores

More information

Generalization of the persistent random walk to dimensions greater than 1

Generalization of the persistent random walk to dimensions greater than 1 PHYSICAL REVIEW E VOLUME 58, NUMBER 6 DECEMBER 1998 Generalization of the persistent ranom walk to imensions greater than 1 Marián Boguñá, Josep M. Porrà, an Jaume Masoliver Departament e Física Fonamental,

More information

Chapter 2 Lagrangian Modeling

Chapter 2 Lagrangian Modeling Chapter 2 Lagrangian Moeling The basic laws of physics are use to moel every system whether it is electrical, mechanical, hyraulic, or any other energy omain. In mechanics, Newton s laws of motion provie

More information

The new concepts of measurement error s regularities and effect characteristics

The new concepts of measurement error s regularities and effect characteristics The new concepts of measurement error s regularities an effect characteristics Ye Xiaoming[1,] Liu Haibo [3,,] Ling Mo[3] Xiao Xuebin [5] [1] School of Geoesy an Geomatics, Wuhan University, Wuhan, Hubei,

More information

Switching Time Optimization in Discretized Hybrid Dynamical Systems

Switching Time Optimization in Discretized Hybrid Dynamical Systems Switching Time Optimization in Discretize Hybri Dynamical Systems Kathrin Flaßkamp, To Murphey, an Sina Ober-Blöbaum Abstract Switching time optimization (STO) arises in systems that have a finite set

More information

Polynomial Inclusion Functions

Polynomial Inclusion Functions Polynomial Inclusion Functions E. e Weert, E. van Kampen, Q. P. Chu, an J. A. Muler Delft University of Technology, Faculty of Aerospace Engineering, Control an Simulation Division E.eWeert@TUDelft.nl

More information

The total derivative. Chapter Lagrangian and Eulerian approaches

The total derivative. Chapter Lagrangian and Eulerian approaches Chapter 5 The total erivative 51 Lagrangian an Eulerian approaches The representation of a flui through scalar or vector fiels means that each physical quantity uner consieration is escribe as a function

More information

Conservation Laws. Chapter Conservation of Energy

Conservation Laws. Chapter Conservation of Energy 20 Chapter 3 Conservation Laws In orer to check the physical consistency of the above set of equations governing Maxwell-Lorentz electroynamics [(2.10) an (2.12) or (1.65) an (1.68)], we examine the action

More information

PHYS 414 Problem Set 2: Turtles all the way down

PHYS 414 Problem Set 2: Turtles all the way down PHYS 414 Problem Set 2: Turtles all the way own This problem set explores the common structure of ynamical theories in statistical physics as you pass from one length an time scale to another. Brownian

More information

Quantum mechanical approaches to the virial

Quantum mechanical approaches to the virial Quantum mechanical approaches to the virial S.LeBohec Department of Physics an Astronomy, University of Utah, Salt Lae City, UT 84112, USA Date: June 30 th 2015 In this note, we approach the virial from

More information

12.11 Laplace s Equation in Cylindrical and

12.11 Laplace s Equation in Cylindrical and SEC. 2. Laplace s Equation in Cylinrical an Spherical Coorinates. Potential 593 2. Laplace s Equation in Cylinrical an Spherical Coorinates. Potential One of the most important PDEs in physics an engineering

More information

Partial Differential Equations

Partial Differential Equations Chapter Partial Differential Equations. Introuction Have solve orinary ifferential equations, i.e. ones where there is one inepenent an one epenent variable. Only orinary ifferentiation is therefore involve.

More information

G4003 Advanced Mechanics 1. We already saw that if q is a cyclic variable, the associated conjugate momentum is conserved, L = const.

G4003 Advanced Mechanics 1. We already saw that if q is a cyclic variable, the associated conjugate momentum is conserved, L = const. G4003 Avance Mechanics 1 The Noether theorem We alreay saw that if q is a cyclic variable, the associate conjugate momentum is conserve, q = 0 p q = const. (1) This is the simplest incarnation of Noether

More information

Relative Entropy and Score Function: New Information Estimation Relationships through Arbitrary Additive Perturbation

Relative Entropy and Score Function: New Information Estimation Relationships through Arbitrary Additive Perturbation Relative Entropy an Score Function: New Information Estimation Relationships through Arbitrary Aitive Perturbation Dongning Guo Department of Electrical Engineering & Computer Science Northwestern University

More information

Qubit channels that achieve capacity with two states

Qubit channels that achieve capacity with two states Qubit channels that achieve capacity with two states Dominic W. Berry Department of Physics, The University of Queenslan, Brisbane, Queenslan 4072, Australia Receive 22 December 2004; publishe 22 March

More information

Permanent vs. Determinant

Permanent vs. Determinant Permanent vs. Determinant Frank Ban Introuction A major problem in theoretical computer science is the Permanent vs. Determinant problem. It asks: given an n by n matrix of ineterminates A = (a i,j ) an

More information

TEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS. Yannick DEVILLE

TEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS. Yannick DEVILLE TEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS Yannick DEVILLE Université Paul Sabatier Laboratoire Acoustique, Métrologie, Instrumentation Bât. 3RB2, 8 Route e Narbonne,

More information

Final Exam Study Guide and Practice Problems Solutions

Final Exam Study Guide and Practice Problems Solutions Final Exam Stuy Guie an Practice Problems Solutions Note: These problems are just some of the types of problems that might appear on the exam. However, to fully prepare for the exam, in aition to making

More information

u t v t v t c a u t b a v t u t v t b a

u t v t v t c a u t b a v t u t v t b a Nonlinear Dynamical Systems In orer to iscuss nonlinear ynamical systems, we must first consier linear ynamical systems. Linear ynamical systems are just systems of linear equations like we have been stuying

More information

Transmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency

Transmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency Transmission Line Matrix (TLM network analogues of reversible trapping processes Part B: scaling an consistency Donar e Cogan * ANC Eucation, 308-310.A. De Mel Mawatha, Colombo 3, Sri Lanka * onarecogan@gmail.com

More information

A Modification of the Jarque-Bera Test. for Normality

A Modification of the Jarque-Bera Test. for Normality Int. J. Contemp. Math. Sciences, Vol. 8, 01, no. 17, 84-85 HIKARI Lt, www.m-hikari.com http://x.oi.org/10.1988/ijcms.01.9106 A Moification of the Jarque-Bera Test for Normality Moawa El-Fallah Ab El-Salam

More information

Short Intro to Coordinate Transformation

Short Intro to Coordinate Transformation Short Intro to Coorinate Transformation 1 A Vector A vector can basically be seen as an arrow in space pointing in a specific irection with a specific length. The following problem arises: How o we represent

More information

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz A note on asymptotic formulae for one-imensional network flow problems Carlos F. Daganzo an Karen R. Smilowitz (to appear in Annals of Operations Research) Abstract This note evelops asymptotic formulae

More information

On conditional moments of high-dimensional random vectors given lower-dimensional projections

On conditional moments of high-dimensional random vectors given lower-dimensional projections Submitte to the Bernoulli arxiv:1405.2183v2 [math.st] 6 Sep 2016 On conitional moments of high-imensional ranom vectors given lower-imensional projections LUKAS STEINBERGER an HANNES LEEB Department of

More information

7.1 Support Vector Machine

7.1 Support Vector Machine 67577 Intro. to Machine Learning Fall semester, 006/7 Lecture 7: Support Vector Machines an Kernel Functions II Lecturer: Amnon Shashua Scribe: Amnon Shashua 7. Support Vector Machine We return now to

More information

Hybrid Fusion for Biometrics: Combining Score-level and Decision-level Fusion

Hybrid Fusion for Biometrics: Combining Score-level and Decision-level Fusion Hybri Fusion for Biometrics: Combining Score-level an Decision-level Fusion Qian Tao Raymon Velhuis Signals an Systems Group, University of Twente Postbus 217, 7500AE Enschee, the Netherlans {q.tao,r.n.j.velhuis}@ewi.utwente.nl

More information

Vectors in two dimensions

Vectors in two dimensions Vectors in two imensions Until now, we have been working in one imension only The main reason for this is to become familiar with the main physical ieas like Newton s secon law, without the aitional complication

More information

WUCHEN LI AND STANLEY OSHER

WUCHEN LI AND STANLEY OSHER CONSTRAINED DYNAMICAL OPTIMAL TRANSPORT AND ITS LAGRANGIAN FORMULATION WUCHEN LI AND STANLEY OSHER Abstract. We propose ynamical optimal transport (OT) problems constraine in a parameterize probability

More information