An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach. Thesis by Sharefa Mohammad Asiri


An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

Thesis by Sharefa Mohammad Asiri

In Partial Fulfillment of the Requirements for the Degree of Master of Science

King Abdullah University of Science and Technology, Thuwal, Kingdom of Saudi Arabia

May 2013

The thesis of Sharefa Mohammad Asiri is approved by the examination committee.

Committee Chairperson: Taous-Meriem Laleg-Kirati
Committee Member: Ying Wu
Committee Member: Christian Claudel

Copyright 2013 Sharefa Mohammad Asiri

All Rights Reserved

ABSTRACT

An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

Sharefa Mohammad Asiri

Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate some unknowns for systems governed by partial differential equations. Our aim is to design an observer to solve an inverse source problem for a one-dimensional wave equation. Firstly, the problem is discretized in both space and time, and then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We examine the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of the approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.

ACKNOWLEDGEMENTS

I would like to express my gratitude to all those who gave me the possibility to complete this thesis. I sincerely thank my supervisor, Prof. Taous-Meriem Laleg-Kirati, for her support, encouragement, and advice. I also take this opportunity to express a deep sense of gratitude to Dr. Chadia Zayane for her valuable information and guidance. Finally, an honorable mention goes to my husband, Ahmad Ali, for his understanding, support, and great patience.

TABLE OF CONTENTS

Examination Committee Approval
Copyright
Abstract
Acknowledgements
List of Abbreviations
List of Figures
List of Tables

1 Introduction

2 Introduction to Inverse Problems
   Inverse Problem
      Examples of Inverse Problems
   Well-posedness
      Functional Analysis Approach
      Stochastic Inversion Approach
      Regularization Approach
   Tikhonov regularization
      Tikhonov Approach
      Selecting the regularization parameter
   Chapter Summary

3 Observers Theory
   State-Space Representation
      Deriving the state-space representation
   Observability
   3.3 Observer
   Chapter Summary

4 A Tikhonov Regularization to Solve Inverse Source Problem for Wave Equation
   Problem Statement
   Inverse Problem's Operator and its Properties
      Construct the Operator by Solving the Direct Problem [1], [2]
      Operator's Properties
   Well-posedness of the Inverse Problem
   Numerical Simulations
   Chapter Summary

5 An Observer to Solve Inverse Source Problem for Wave Equation
   Problem Statement
   A State-Space Representation for the Wave Equation
   Discretization
   Observer Design
   Numerical Simulations
      Preliminary
      Noise-Free Case
      Noise-Corrupted Case
   Comparison Between Observer and Tikhonov
      Numerical Simulations
   Chapter Summary

6 Conclusion

References

Appendices

LIST OF ABBREVIATIONS

GCV    Generalized Cross Validation
NCP    Normalized Cumulative Periodogram
SISO   Single-Input, Single-Output
MIMO   Multiple-Input, Multiple-Output
ODEs   Ordinary Differential Equations
PDEs   Partial Differential Equations
BCs    Boundary Conditions
ICs    Initial Conditions
IBVPs  Initial Boundary Value Problems
SNR    Signal-to-Noise Ratio
FDM    Finite Difference Method
CFL    Courant-Friedrichs-Lewy condition
MSE    Mean Squared Error

LIST OF FIGURES

2.1 Direct problems and inverse problems
2.2 Behavior of the total error of the regularization approach corresponding to α
The stability region of continuous linear time-invariant systems (left) and of discrete linear time-invariant systems (right)
Observer principle
The exact source f with f̂ = K⁻¹u_T
Measurements with and without noise
The exact source f and the estimated source f̂ without Tikhonov regularization
The selected regularization parameter through L-curve, GCV, and NCP
The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using the Discrepancy Principle of Morozov; the error is on the right
The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using the L-curve; the error is on the right
The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using GCV; the error is on the right
The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using NCP; the error is on the right
The exact source f(x) = 3 sin(5x)
The state ξ for the one-dimensional wave equation where c² = 0.9, f(x) = 3 sin(5x), and zero boundary and initial conditions

5.3 (a): the exact source f (blue) and the estimated source f̂ (black) using full measurements. (b): the relative error of the source estimation in %
State error in the noise-free case with full measurements; (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase
The estimated source at different time steps, starting from the initial guess
(a): the exact source f (blue) and the estimated source f̂ (black) using partial measurements (50% of the state components taken from the middle). (b): the relative error of the source estimation in %
Zoom-in for the relative error in Figure 5.6.b
State error in the noise-free case with partial measurements (50% of the state components taken from the middle); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements (50% of the state components taken from the end). (b): the relative error of the source estimation in %
Zoom-in for the relative error in Figure 5.9.b
State error in the noise-free case with partial measurements (75% of the state components taken from the end); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase
(a): the state ξ after adding a white noise with a standard deviation σ_ξ = 0.78. (b): the output z after adding a white noise with a standard deviation σ_z
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with full measurements. (b): the relative error of the source estimation in %
Zoom-in for the relative error in Figure 5.13.b

State error in the noisy case with full measurements; (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase
(a): the exact source f (blue) and the estimated source f̂ (black) using partial measurements (50% of the state components taken from the middle). (b): the relative error of the source estimation in %
Zoom-in for the relative error in Figure 5.16.b
State error in the noisy case with partial measurements (50% of the state components taken from the middle); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase
(a): the exact source f (blue) and the estimated source f̂ (black) using partial measurements (50% of the state components taken from the end). (b): the relative error of the source estimation in %
State error in the noisy case with partial measurements (75% of the state components taken from the end); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase
Zoom-in for the relative error in Figure 5.19.b
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with full measurements. (b): the relative error of the source estimation in %
(a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with full measurements. (b): the corresponding relative error of the source estimation in %
Comparison between observer and Tikhonov in the noise-free case with full measurements
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements in the middle. (b): the relative error of the source estimation in %

(a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements in the middle. (b): the corresponding relative error of the source estimation in %
Comparison between observer and Tikhonov in the noise-free case with partial measurements taken from the middle
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements at the end. (b): the relative error of the source estimation in %
(a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements in the middle. (b): the corresponding relative error of the source estimation in %
Comparison between observer and Tikhonov in the noise-free case with partial measurements taken from the end
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with full measurements. (b): the relative error of the source estimation in %
(a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with full measurements in the noisy case, where α was selected manually. (b): the corresponding relative error of the source estimation in %
Comparison between observer and Tikhonov in the noise-corrupted case with full measurements
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements taken from the end. (b): the relative error of the source estimation in %
(a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements from the middle. (b): the corresponding relative error of the source estimation in %
Comparison between observer and Tikhonov in the noise-corrupted case with partial measurements taken from the middle
(a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements taken from the end. (b): the corresponding relative error of the source estimation in %
(a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements taken from the end. (b): the corresponding relative error of the source estimation in %

Comparison between observer and Tikhonov in the noise-corrupted case with partial measurements taken from the middle

LIST OF TABLES

4.1 Values of α using the four different approaches and the total error ||f − f̂||
Relative errors for the noise-free case (full measurements)
Relative errors for the noise-free case (partial measurements from the middle)
Relative errors for the noise-free case (partial measurements from the end)
Relative errors for the noisy case (full measurements)
Relative errors for the noisy case (partial measurements from the middle)
Relative errors for the noisy case (partial measurements from the end)
MSE in the noisy case (partial measurements)

Chapter 1

Introduction

The wave equation is a fundamental hyperbolic partial differential equation that arose early to describe the motion of vibrating strings and membranes. It is a basis for many areas, such as seismic imaging and the imaging of steeply dipping structures. In wave applications, if the aim is to find the propagation of the wave, exactly or approximately, the problem is called a direct problem. In most of these applications, however, the direct solution is not what is needed; instead, the wave speed, the initial state, or the source must be estimated. This kind of problem is called an inverse problem. Inverse problems form a research area that uses observed data (measurements) to obtain knowledge about physical systems. Solving inverse problems helps to determine the location of oil in oil exploration applications [3], to find the shape of a scattering object, for example in computed tomography [4], to detect tumors in medical imaging [5], to obtain an image of the subsurface in marine survey acquisition, and so on. In marine survey acquisition, for instance, air guns are generally used as a source to send sound waves into the water. The waves propagate in the water and so can be modeled mathematically using the wave equation. These waves reflect, and the reflected waves are received by hydrophones (sensors) located on streamers, where the measurements are obtained. These receivers measure the velocity of the waves and the time elapsed from the source to the hydrophones. Finally, these measurements are transformed into an image of the subsurface of the earth. The first and second steps

of this experiment actually constitute the direct problem, while the final step is the inverse problem. The same principle is used in sonography, seismology, and many other fields. However, inverse problems are usually ill-posed in the sense of Hadamard, who proposed that the solution of any well-posed problem should satisfy three properties: existence, uniqueness, and stability. If one of these properties is not satisfied, then the problem is ill-posed [6]. Inverse problems for wave equations have been studied for many decades; see [7, 8, 9, 10, 11, 12, 13, 14]. The classical way to solve these problems, or inverse problems in general, is to minimize a suitable cost function using optimization techniques. For instance, in [10] and [13], inverse problems for the wave equation were solved using the Tikhonov regularization method, which led to optimization problems. In [10], the optimization problem was solved using an iterative numerical algorithm called the Pulse-Spectrum Technique. Although this method performed excellently for a two-dimensional wave equation and robustly for a one-dimensional wave equation, it requires many computations. This computational cost was reduced in [11] by using the Galerkin method to solve the resulting integral equation. In [13], Tikhonov regularization was combined with the widely convergent homotopy method in order to obtain a good initial guess for the iterative method of the optimization problem. In [12], a new minimization algorithm was proposed to solve an inverse problem for the wave equation where the unknown is the wave speed function inside a bounded domain. All the previous work relies on optimization methods. These methods are, in general, computationally heavy, especially for high-order systems or when there are a large number of unknowns; therefore, they require extensive storage. Moreover, the convergence of optimization methods is affected by the initial guess and the stopping condition.
The objective of this thesis is to solve the inverse source problem for the wave

equation using an alternative method based on the concept of observers, which are well known in control theory. An observer is used to estimate the hidden states of a dynamical system using only the available input and output measurements [15]. The first observer was introduced by Luenberger in the 1960s; his observer is well known for state estimation in linear dynamical systems [45]. Since then, different types of observers have been proposed to deal with specific applications; for instance, robust observers for models corrupted by disturbances [49], adaptive observers for the joint estimation of states and parameters [46], and optimal observers [52]. One advantage of using an observer for solving inverse problems is that it only requires the solution of the direct problem, which is in general well-posed and well-studied. Moreover, observers operate recursively; thus, their implementation is straightforward with low computational cost, especially for high-order systems. Recently, researchers have proposed solving inverse problems for wave equations using observers [16, 17, 18, 19, 20]. In [16], states and parameters are estimated using an observer based on a space-discretized mechanical system. In [17], the initial state of a distributed parameter system was estimated using two observers, one running forward in time and the other backward. In [18], a similar forward-backward observer was adapted to solve an inverse source problem for the wave equation. An adaptive observer was applied in [19] for parameter estimation and stabilization of a one-dimensional wave equation where the boundary observation suffers from an unknown constant disturbance. Similar work was presented in [20]; however, there the unknown was the state, and the boundary observation suffers from an arbitrarily long time delay. One of the difficulties in solving inverse problems is the lack of available measurements.
Indeed, owing to physical or practical constraints, we usually do not have enough measurements to estimate all the unknowns, which makes the problem unobservable in the sense that we cannot estimate all the states or unknowns from

the available measurements. For this reason, the observers in the previous works were based on partial measurements. However, these measurements, in [16, 17, 19, 20], were taken from the time derivative of the solution of the wave equation. This kind of measurement yields a typical observability condition that has a positive effect on stabilization, but it is less readily available than field measurements. Hence, some authors sought to solve inverse problems for the wave equation using observers based on partial field measurements, i.e., measurements taken from the solution of the wave equation, as in [21], [22], and [23]. In addition, the observer in [22] was based on a system discretized in both space and time, which can be considered a different methodology compared with the previous works. This methodology improved the convergence properties of observers based on partial field measurements. In this thesis, we use an observer to solve an inverse problem for the one-dimensional wave equation where the source is unknown. The problem is first discretized in both space and time, and then an adaptive observer based on partial measurements of the field is applied to estimate both the states and the source of the discrete dynamical system. Moreover, we test the method in two cases: a noise-free case and a noise-corrupted case. This thesis is organized as follows: Chapter 2 provides the reader with an introduction to the field of inverse problems and to regularization methods. Chapter 3 covers observer theory. In Chapter 4, the inverse source problem for a one-dimensional wave equation is studied using Tikhonov regularization, and the same problem is studied using an observer in Chapter 5. In addition, a comparison between the observer and an original Tikhonov approach is also presented in Chapter 5. Finally, the conclusion is drawn in Chapter 6.

Chapter 2

Introduction to Inverse Problems

The field of inverse problems has been developing since the first half of the 20th century. It is a research area that uses observed data (measurements) to obtain information about a physical system; in other words, it is the determination of some unknowns from measurements and other known information. Any problem can be either a direct problem or an inverse problem. In a direct problem, we try to find the solution that describes a phenomenon, for example the propagation of heat or waves, where the model parameters, the initial state, and the boundary properties are known. However, model parameters such as speed, density, and conductivity are often unknown and need to be estimated; this is an inverse problem. Inverse problems arise in many fields: in geophysics, such as seismic imaging [24], in image processing, such as medical imaging [25], in the physical sciences, as in deconvolution problems for ground-based telescopes [26], and in many other areas. This reflects the importance of inverse problems. In this chapter, general concepts of inverse problems with some examples are introduced. Then, the definition of a well-posed versus an ill-posed problem is presented. In the third section, three classical approaches to overcome the ill-posedness of a problem are presented: functional analysis, stochastic inversion, and regularization. Finally, the last section focuses on Tikhonov regularization and how

to choose the regularization parameter.

2.1 Inverse Problem

Consider the following mathematical model:

K(x) = y,   (2.1)

where y denotes the data (measurements), x denotes the unknown, which can be some parameters or the input, and K is an operator which represents the relation between the output and the unknowns (see Figure 2.1). The problem is called a direct (forward) problem if x is known and y is to be determined; it can then be solved directly from (2.1). If the data y is measured and the unknown x is to be estimated, this is called an inverse problem. In this case, the problem consists in inverting the operator K, which is not easy in general.

Figure 2.1: Direct problems and inverse problems

Examples of Inverse Problems

Example 1. Linear equation: Consider the linear equation y(ϑ) = aϑ + b. First, the problem can be written in the form Kx = y, where K = (ϑ  1) and x = (a, b)^T. If the constants a

and b are known, then it is easy to solve the direct problem to obtain y for any ϑ. However, if y is given and the problem is to find the constants a and b that satisfy the linear equation, this is an inverse problem. Solving this inverse problem amounts to fitting a straight line to the data, while solving the direct problem amounts to evaluating a polynomial of first order; accordingly, it is clear through this simple example that solving an inverse problem is not as easy as solving its corresponding direct problem.

Example 2. Integral of the First Kind: Frequently, inverse problems can be written as an integral equation of the first kind, such as the inverse wave equation (see Chapter 5) and the inverse heat equation [27]. Such an equation describes a linear relation between the data and the unknown [28]. If the operator K is an integral operator of the first kind, then it can be written as:

K(x)(t) = ∫₀¹ k(t, s) x(s) ds = y(t),   0 ≤ t ≤ 1,   (2.2)

where k is a known kernel, x is the unknown, and y denotes the data (measurements).

Example 3. Wave Equation: Consider the following one-dimensional wave equation:

∂²u(x, t)/∂t² − c² ∂²u(x, t)/∂x² = Q(x, t),   0 ≤ x ≤ l, t ≥ 0;
u(0, t) = g₁(t), u(l, t) = g₂(t);
u(x, 0) = r₁(x), u_t(x, 0) = r₂(x);   (2.3)

where u(x, t) is the displacement, c is the wave speed, and Q(x, t) is the source function. g₁(t) and g₂(t) are the boundary conditions. r₁(x) and r₂(x) are the initial position and the initial velocity, respectively, and they represent the initial conditions.

The direct problem is to find the solution u(x, t) when the wave speed c, the source Q(x, t), the boundary conditions, and the initial conditions are known. If one of these quantities is unknown, such as c, g₁, g₂, r₁, r₂, or Q, and we would like to determine it using available measurements, then the problem is called an inverse problem. Based on the unknown, inverse problems can be classified as follows: if the problem requires estimating the wave speed, or generally any model parameter, then it is called a coefficient inverse problem or inverse media problem. If the source is the unknown, then the problem is an inverse source problem. The inverse problem is called retrospective if the initial conditions are unknown, and it is called a boundary problem if the boundary conditions are unknown. These are not all the classes; there are some mixed cases, e.g., where the unknowns are both the initial and the boundary conditions; for more details on the classification of inverse problems see [29]. As seen through these examples, questions arise on the existence of the inverse of K; this leads to the definition of well-posedness.

2.2 Well-posedness

The definition of a well-posed problem was given in 1902 by Hadamard. In the sense of Hadamard, a mathematical problem is well-posed if and only if the following three conditions are satisfied [6]:

1. Existence: the solution of the problem exists.
2. Uniqueness: the problem has at most one solution.
3. Stability: the solution depends continuously on the data; this is related to stability when dealing with numerical solutions.

The next definition gives a mathematical description of the three conditions.

Definition 1. Let X and Y be normed spaces and K : X → Y a (linear or non-linear) mapping. The equation Kx = y is called well-posed if the following holds:

1. Existence: For every y ∈ Y there is (at least) one x ∈ X such that Kx = y.
2. Uniqueness: For every y ∈ Y there is at most one x ∈ X such that Kx = y.
3. Stability: The solution x depends continuously on y; that is, for every sequence (x_n) ⊂ X with Kx_n → Kx (n → ∞), it follows that x_n → x (n → ∞).

A problem that violates one of these conditions is called an ill-posed problem. In fact, inverse problems are usually ill-posed. If the solution does not exist, this can be remedied by extending the solution space; and if it is not unique, then adding additional information or some constraints can resolve the uniqueness issue. However, stability is a crucial condition, and it is the one most often violated. The existence and uniqueness conditions can be explained simply through Example 2; in the case k(t, s) = 1, (2.2) becomes

∫₀¹ x(s) ds = y(t).   (2.4)

Calculating the left-hand side of (2.4) gives a constant, because it is independent of t. If y(t) is not constant, then (2.4) has no solution; thus, the existence condition is not satisfied. Now, even if we assume that a solution exists, the solution is not unique, because infinitely many functions x(s) can be found whose integral over [0, 1] gives the same constant and which thus satisfy (2.4) exactly. The stability issue becomes clearer if we choose x(s) = sin(ηs). Then, by taking the limit η → ∞ in (2.2) and using the Riemann-Lebesgue lemma [30] (see

Theorem 7 in Appendix A), one gets:

∫₀¹ k(t, s) sin(ηs) ds → 0 as η → ∞.   (2.5)

From (2.5), it is clear that a very small change in the data y can correspond to a huge change in the solution x; thus, the problem is not stable [31]. Generally, inverse problems are solved by minimizing the error between predicted data and observed data (measurements), i.e., we seek to minimize the following cost function:

J(x) = ||K(x) − y||²_p,   (2.6)

where p ≥ 1. To restore the numerical stability of an inverse problem, one can distinguish between three approaches: functional analysis, stochastic inversion, and regularization. A description of each approach is provided in the next sections.

Functional Analysis Approach

Here, the ill-posedness is resolved by changing the spaces of the variables and their topologies. This change is made under physical considerations [32].

Stochastic Inversion Approach

In this approach, all the variables are considered as random variables in order to take the uncertainties into account, and the solution is a probability distribution for the unknowns. The Bayesian approach is one of the stochastic inversion approaches: prior information on the solution is expressed as a prior distribution, which is then combined with the data to obtain the posterior distribution through Bayes' rule. Ultimately, the solution is the maximizer of this posterior distribution (the MAP estimate) [31], [33].

Regularization Approach

The idea of regularization methods is to define a regularized solution that depends on the data and takes into account available prior information about the exact solution. In order to obtain a better understanding of regularization approaches, which are part of the contributions of this thesis, some definitions and theorems on regularization are presented [34]. Definitions and theorems on operator properties such as linearity, boundedness, compactness, and self-adjointness can be found in Appendix A.

Definition 2. Let K : X → Y be a compact and one-to-one operator between two Hilbert spaces X and Y such that K(x) = y, x ∈ X and y ∈ Y. A regularization strategy can be defined as a family of operators R_α : Y → X that depend on a parameter α > 0 such that

lim_{α→0} R_α(K(x)) = x,   (2.7)

i.e., the operators R_α K converge pointwise to the identity; then R_α is a regularized operator for K(x) = y.

Theorem 1. Let R_α : Y → X be a regularization operator where dim(X) = ∞. Then there exists a sequence (α_i) with α_i → 0 such that ||R_{α_i}|| → ∞ as i → ∞.

The notion of a regularization strategy in Definition 2 is based on unperturbed data; that is, the regularizer R_α y converges to the exact solution x for y = Kx. However, if perturbed data y^δ is considered such that ||y − y^δ|| ≤ δ, then a regularized solution can be defined as x_α^δ = R_α y^δ. Thus, the error in the solution can be decomposed as

x_α^δ − x = R_α y^δ − x = (R_α y^δ − R_α y) + (R_α y − x) = R_α(y^δ − y) + (R_α Kx − x),

and hence

||x_α^δ − x|| ≤ δ ||R_α|| + ||R_α Kx − x||.   (2.8)

It appears from (2.8) that the total error between the exact solution x and the regularized solution x_α^δ comes from two sources. The first is the error due to uncertainty in the measurements, δ ||R_α||; this error goes to infinity as α goes to zero (by Theorem 1). The second is the regularization error, ||R_α Kx − x||, which goes to zero as α goes to zero (by Definition 2). Figure 2.2 illustrates the effect of α on the two types of errors.

Figure 2.2: Behavior of the total error of the regularization approach corresponding to α.

The choice of the regularization operator defines the regularization strategy. Different regularization techniques aim to construct this operator; for example, Tikhonov regularization, Landweber iteration, total variation, and so on. Tikhonov regularization (1977) is the most widely used technique for regularizing discrete ill-posed problems [31].
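The trade-off in (2.8) can be reproduced numerically. The sketch below uses truncated SVD as one concrete regularization strategy R_α (inverting K only on singular values above a threshold α); the Gaussian-kernel test problem and noise level are illustrative assumptions, not the thesis's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-posed problem: smooth Gaussian-kernel matrix.
n = 64
h = 1.0 / n
s = (np.arange(n) + 0.5) * h
K = h * np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.01)
x_true = np.sin(np.pi * s)
delta = 1e-4
y_delta = K @ x_true + delta * rng.standard_normal(n)

# One concrete regularization strategy R_alpha: truncated SVD, which
# inverts K only on the singular values larger than the threshold alpha.
U, sig, Vt = np.linalg.svd(K)

def R_alpha(y, alpha):
    keep = sig > alpha
    return Vt[keep].T @ ((U[:, keep].T @ y) / sig[keep])

# Total error ||R_alpha y^delta - x||: it blows up as alpha -> 0 (the
# data-error term delta*||R_alpha|| of (2.8)) and grows again for large
# alpha (the regularization error), as sketched in Figure 2.2.
alphas = np.logspace(-12, -1, 12)
errs = [np.linalg.norm(R_alpha(y_delta, a) - x_true) for a in alphas]
best = int(np.argmin(errs))
print(f"smallest total error at threshold alpha ~ {alphas[best]:.0e}")
```

The minimum of the error curve sits at an intermediate α, reproducing the U-shape of Figure 2.2.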

2.3 Tikhonov regularization

2.3.1 Tikhonov Approach

In a simple description, Tikhonov regularization is a least-squares problem with a penalization term that incorporates prior information, multiplied by a regularization parameter α > 0. Thus, the Tikhonov functional of the system Kx = y can be written as

J_α(x) = ½ ||Kx − y||² + ½ α ||x||².   (2.9)

One can minimize (2.9) as follows:

∇J_α(x) = 0  ⟹  K*(Kx − y) + αx = 0  ⟹  (K*K + αI)x = K*y.

Thus, the regularized solution can be written as

x_α = R_α y,   (2.10)

where R_α = (K*K + αI)⁻¹ K*, K* is the adjoint operator of K, and I is the identity operator. Consequently, the regularization parameter α has to be chosen, depending on δ, such that the right-hand side of (2.8) is as small as possible. Different methods exist to find this parameter, such as the Discrepancy Principle of Morozov, the L-curve, Generalized Cross Validation (GCV), and Normalized Cumulative Periodogram (NCP) analysis [35]. All these methods seek the best trade-off between the two errors. In the next section, a short description of these methods is given.
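In the finite-dimensional case, (2.10) can be applied directly, with K* the matrix transpose. A minimal sketch, assuming an illustrative Gaussian-kernel test problem and a hand-picked α (neither taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative discrete ill-posed problem (Gaussian smoothing kernel).
n = 80
h = 1.0 / n
s = (np.arange(n) + 0.5) * h
K = h * np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.01)
x_true = s * (1.0 - s)
y = K @ x_true + 1e-5 * rng.standard_normal(n)   # perturbed data y^delta

def tikhonov(K, y, alpha):
    # Regularized solution (2.10): x_alpha = (K^T K + alpha I)^{-1} K^T y.
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ y)

x_naive = np.linalg.lstsq(K, y, rcond=None)[0]   # unregularized inversion
x_reg = tikhonov(K, y, alpha=1e-6)               # alpha picked by hand

print(f"naive error:       {np.linalg.norm(x_naive - x_true):.2e}")
print(f"regularized error: {np.linalg.norm(x_reg - x_true):.2e}")
```

The unregularized solve fits the noise through the tiny singular values of K and is useless, while even a crudely chosen α recovers a good approximation of x; choosing α systematically is the subject of the next section.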

2.3.2 Selecting the regularization parameter

Discrepancy Principle of Morozov, [36] [37]

Definition 3. In Morozov's Discrepancy Principle, α = α(δ, y^δ) and x_α^δ are chosen such that, for 1 < µ₁ ≤ µ₂,

µ₁ δ ≤ ||K(x_α^δ) − y^δ|| ≤ µ₂ δ

holds.

In this principle, the measurement error δ is assumed to be known, which is often not the case. A small α can then be chosen to gain accuracy. This method is simple to apply and good for theoretical study, but it is risky when dealing with real data, because δ is generally unknown.

L-curve, [38] [39] [40]

In this method, the regularization parameter α is chosen such that the regularization and perturbation errors are balanced. There is no guarantee that good results will be obtained using this method, but in general it is a good heuristic approach.

Generalized Cross Validation, [41]

The Generalized Cross Validation (GCV) method is derived from a classical statistical technique called cross validation. In cross validation, we leave out the i-th element of the data, y_i, and then compute the regularized solution x_α^{(i)} = R_α y^{(i)}, where (i) indicates that y_i was left out. Then, y_i is estimated as K_i x_α^{(i)}. The aim is to select the α that minimizes the estimated errors for all i. Finally, after some technical steps, the following formula for generalized cross validation is obtained:

α_GCV = arg min_α (1/m) Σ_{i=1}^m ( (K_i x_α^δ − y_i) / (1 − trace(R_α)/m) )²,   (2.11)

where m refers to the number of measurements. GCV is considered a robust method for finding the regularization parameter.

Normalized Cumulative Periodogram (NCP) analysis, [41], [31]

The NCP method is based on the Fourier transform of the residual vector. Let r = y − K x_α^δ. Taking the discrete Fourier transform, one gets

ζ = F(r) = (ζ_1, ζ_2, …, ζ_{q+1})^T,   (2.12)

where q = n/2 and n refers to the dimension of x. We then define the periodogram vector P with coefficients

p_j = (|ζ_2|² + |ζ_3|² + … + |ζ_{j+1}|²) / (|ζ_2|² + |ζ_3|² + … + |ζ_{q+1}|²),   j = 1, …, q.   (2.13)

Finally, we search for the regularization parameter α such that the coefficients of P lie (approximately) on a straight line. One advantage of NCP is that it is not computationally expensive. It also works well when the noise is white.

As we have seen, inverse problems lead to optimization problems. Optimization techniques are computationally heavy, especially if the number of parameters is high. For large systems they also require extensive storage. Moreover, they need a good initial guess and a sensible stopping condition to obtain good results, which are generally not easy tasks. For more details on solving inverse problems see [34], [41], [42], and [32].

2.4 Chapter Summary

From the previous discussion, it appears that solving inverse problems is not easy, at least in comparison with the corresponding direct problems. Moreover, they are in general ill-posed. We highlighted three standard approaches to overcome

ill-posedness: functional analysis, stochastic inversion, and regularization. In the regularization approach, Tikhonov regularization for constructing the regularized operator was explained. Then different methods for choosing the regularization parameter were presented. These methods aim to restore stability while minimizing the error between the regularized and the exact solutions. There is an alternative approach, derived from control theory, for solving inverse problems [43]: the observer-based approach. Observers operate recursively, which makes their implementation computationally cheap. The concepts of observers are highlighted in the next chapter.
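To make the parameter-choice discussion of this chapter concrete, the GCV rule (2.11) can be evaluated on a grid of candidate α values. A minimal sketch, assuming NumPy; the test matrix, noise level, and grid are illustrative assumptions:

```python
import numpy as np

def gcv_score(K, y, alpha):
    """GCV objective of (2.11): mean squared residual of x_alpha divided by
    (1 - trace(K R_alpha)/m)^2, with R_alpha = (K^T K + alpha I)^{-1} K^T."""
    m, n = K.shape
    R = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T)  # R_alpha
    residual = K @ (R @ y) - y
    denom = (1.0 - np.trace(K @ R) / m) ** 2
    return np.sum(residual ** 2) / m / denom

# A smooth (hence ill-conditioned) Gaussian kernel and noisy data.
rng = np.random.default_rng(1)
m = n = 40
K = np.exp(-0.5 * (np.subtract.outer(np.arange(m), np.arange(n)) / 3.0) ** 2)
y = K @ np.ones(n) + 1e-2 * rng.standard_normal(m)

alphas = np.logspace(-8, 1, 30)
scores = [gcv_score(K, y, a) for a in alphas]
alpha_gcv = alphas[int(np.argmin(scores))]
```

In practice one would minimize the GCV score more carefully than with a fixed grid, but the grid version already shows the trade-off: the residual term shrinks as α decreases while the denominator penalizes under-regularization.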

Chapter 3

Observer Theory

We recall basic definitions from observer theory. In the first section, we introduce the state-space representation, how it is derived, and how stability can be studied through this representation. Then the observability property, which is an essential condition for applying an observer, is presented. Finally, the observer method is defined and presented.

3.1 State-Space Representation

The state-space form is a mathematical representation that can describe the dynamics of physical systems such as biological, mechanical, and economic systems. Linear continuous-time systems can be written in the following state-space representation [44]:

ξ̇(t) = A(t)ξ(t) + B(t)ν(t),
z(t) = C(t)ξ(t) + D(t)ν(t),   (3.1)

where ξ(t) ∈ R^n is the state vector, z(t) ∈ R^m is the output vector, ν(t) ∈ R^r is the input vector, A is the state matrix of dimension n × n, B is the input matrix of dimension n × r, C is the output matrix of dimension m × n, and D is the transmission (feedthrough) matrix from input to output, of dimension m × r. Moreover, in the

system (3.1), the first equation is called the state equation while the second equation is called the output equation. Also, (3.1) has the following solution [44]:

ξ(t) = Φ(t, t_0)ξ(t_0) + ∫_{t_0}^{t} Φ(t, τ)B(τ)ν(τ) dτ,   (3.2)

where Φ(t, τ) = U(t)U(τ)⁻¹ and U(t) is the solution of U̇(t) = A(t)U(t). If the system is time-invariant, then Φ(t, t_0) can be defined as

Φ(t, t_0) = e^{A(t − t_0)}.   (3.3)

Similar to system (3.1), a linear discrete-time system can be put in the state-space form

ξ(k + 1) = A(k)ξ(k) + B(k)ν(k),
z(k) = C(k)ξ(k) + D(k)ν(k),   (3.4)

where k refers to the time step.

3.1.1 Deriving the state-space representation

We define the state variables from the input-output differential or difference equation to obtain the state-space representation. The general idea is to move from an n-th order differential equation to n first-order differential equations. To illustrate the procedure, we consider the following linear ordinary differential equation, which represents a single-input, single-output (SISO) system:

d^n z(t)/dt^n + a_{n-1} d^{n-1} z(t)/dt^{n-1} + … + a_1 dz(t)/dt + a_0 z(t) = ν(t).   (3.5)

Now the state variables are defined as follows:

ξ_1 = z,
ξ_2 = dz/dt,
ξ_3 = d²z/dt²,
⋮
ξ_n = d^{n-1}z/dt^{n-1}.   (3.6)

It is worth mentioning that the method used in (3.6) to define the state variables is not unique. System (3.6) defines n state variables; by differentiating (3.6), one gets the following n first-order differential equations:

ξ̇_1 = ξ_2,
ξ̇_2 = ξ_3,
⋮
ξ̇_{n-1} = ξ_n,
ξ̇_n = −a_0 ξ_1 − a_1 ξ_2 − … − a_{n-1} ξ_n + ν(t).   (3.7)

Therefore, (3.7) can be written as

ξ̇ = Aξ + Bν(t),   (3.8)

with

A = [  0     1     0    ⋯    0
       0     0     1    ⋯    0
       ⋮                ⋱    ⋮
       0     0     0    ⋯    1
      −a_0  −a_1  −a_2  ⋯  −a_{n-1} ],    B = [0, 0, …, 0, 1]^T.

And the output can be written as

z(t) = ξ_1(t) = [1  0  …  0] ξ(t) = Cξ(t).   (3.9)

Similarly for discrete-time systems, where the difference equation can be written as

z(k + n) + a_{n-1} z(k + n − 1) + … + a_1 z(k + 1) + a_0 z(k) = ν(k).   (3.10)

Thus, the state variables can be written as

ξ_1(k) = z(k),
ξ_2(k) = z(k + 1),
ξ_3(k) = z(k + 2),
⋮
ξ_n(k) = z(k + n − 1),   (3.11)

and the state equations are

ξ_1(k + 1) = ξ_2(k),
ξ_2(k + 1) = ξ_3(k),
⋮
ξ_{n-1}(k + 1) = ξ_n(k),
ξ_n(k + 1) = −a_0 ξ_1(k) − a_1 ξ_2(k) − … − a_{n-1} ξ_n(k) + ν(k).   (3.12)

Thus, we get exactly the same representation form as in (3.8). It appears from the previous study that continuous and discrete systems have the same structure of the matrices A, B, and C, but the coefficients a_i are different. If the coefficients are functions of time, the system (continuous or discrete) is called a time-varying system; otherwise, it is called a time-invariant system.
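The construction in (3.5)–(3.12) can be sketched as a small helper that assembles the companion-form matrices A, B, C of (3.8)–(3.9) from the coefficients a_0, …, a_{n−1}. This is an illustrative sketch (not the thesis's own code); the example coefficients are chosen so the characteristic roots are known:

```python
import numpy as np

def companion_form(a):
    """Build A, B, C of (3.8)-(3.9) from the coefficients a_0, ..., a_{n-1}
    of z^(n) + a_{n-1} z^(n-1) + ... + a_0 z = nu (or its
    difference-equation analogue (3.10))."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # ones on the superdiagonal
    A[-1, :] = -np.asarray(a, dtype=float)  # last row: -a_0, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, 0] = 1.0
    return A, B, C

# Example: z''' + 6 z'' + 11 z' + 6 z = nu, whose characteristic roots are
# -1, -2, -3, so the companion matrix A must have those eigenvalues.
A, B, C = companion_form([6.0, 11.0, 6.0])
```

The eigenvalues of the companion matrix A coincide with the roots of the characteristic polynomial of (3.5), which is what makes the eigenvalue-based stability test of the next section equivalent to the classical root test.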

One of the most important properties of a system is stability. The stability of a system concerns the behavior of the state vector relative to an equilibrium state. There are many definitions of stability, such as uniform stability, asymptotic stability, exponential stability, and bounded-input bounded-output stability [44]. However, if a system is time-invariant and written in the state-space representation, then stability can be studied easily through the eigenvalues of the state matrix. A continuous time-invariant system is stable if the eigenvalues of A are located in the left half-plane, while a discrete time-invariant system is stable if the eigenvalues of A are located inside the unit circle (see Figure 3.1). If a system is not stable, then it should be stabilized; otherwise, human losses, financial losses, and other losses may occur. The ability to stabilize a system depends on some conditions; the most prominent of these are controllability and its dual notion, observability.

Figure 3.1: The stability region of continuous linear time-invariant systems is on the left, and the stability region of discrete linear time-invariant systems is on the right.

In general, only a few measurements (system outputs) are available, and we often need to know the states, for control purposes for example. So the hidden states have to be estimated, under some conditions, using for instance an observer. The next section presents what the observability of a system is.
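The two eigenvalue criteria of Figure 3.1 are easy to check numerically. A minimal sketch, assuming NumPy; the matrices and helper names are illustrative:

```python
import numpy as np

def is_stable_continuous(A):
    """Continuous-time LTI test: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def is_stable_discrete(A):
    """Discrete-time LTI test: all eigenvalues strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

A_ct = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1 and -2
A_dt = np.array([[0.5, 0.1], [0.0, 0.9]])    # eigenvalues 0.5 and 0.9
```

Note that the two tests are not interchangeable: A_ct above is stable as a continuous-time state matrix but not as a discrete-time one, since one of its eigenvalues has magnitude 1.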

3.2 Observability

Observability is a structural property of a system; it means the ability to reconstruct the state vector using the system outputs. In other words, it means the possibility of determining the behavior of the state using some measurements. The next definition defines observability for linear systems [44].

Definition 4. A linear system is observable at t_0 ∈ T if it is possible to determine ξ(t_0) from the output z over [t_0, t_1], where t_1 is a finite time in T. If this condition is satisfied for all t_0 and ξ(t_0), then the system is completely observable.

For linear time-invariant systems, we have the following theorem [44].

Theorem 2. A linear time-invariant state-space system is completely observable if and only if the observability matrix W has full rank, i.e. rank(W) = n, where

W = [ C
      CA
      CA²
      ⋮
      CA^{n-1} ],   (3.13)

and n is the dimension of the state matrix.

3.3 Observer

An observer is a dynamical system used to estimate the state, or part of the state, of an observable dynamical system using the available input and output measurements (see Figure 3.2). Its concept was defined by Luenberger many decades ago [45], [15]. The Luenberger observer is well known for state estimation in linear dynamical systems.
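The rank condition of Theorem 2, and the way an observable system's state can then be reconstructed by a Luenberger-type observer, can be illustrated together. A sketch assuming NumPy; the system matrices and the hand-picked gain are illustrative assumptions, and the observer recursion itself is detailed in the next section:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, CA^2, ..., CA^{n-1} as in equation (3.13)."""
    blocks, n = [C], A.shape[0]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_observable(A, C):
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]

# A discrete-time LTI system measured through its first state only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# Gain chosen by hand so that A - L C has eigenvalues {0.5, 0.6}:
# A - L C = [[1 - l1, 0.1], [-l2, 1]]; matching the trace and determinant
# of (z - 0.5)(z - 0.6) gives l1 = 0.9 and l2 = 2.0.
L = np.array([[0.9], [2.0]])

# The estimation error e(k) = xi_hat(k) - xi(k) obeys e(k+1) = (A - L C) e(k),
# so the copy-plus-correction recursion drives the error to zero.
xi = np.array([[1.0], [-1.0]])   # true state
xi_hat = np.zeros((2, 1))        # observer estimate (input nu = 0)
for _ in range(60):
    xi_hat = A @ xi_hat + L @ (C @ xi - C @ xi_hat)
    xi = A @ xi
final_error = np.linalg.norm(xi_hat - xi)
```

Even though the true state here drifts (A has a double eigenvalue at 1), the estimation error decays geometrically at the rate of the slowest placed eigenvalue, 0.6 per step.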

Many other kinds of observers have been proposed to deal with specific and more realistic situations. We can classify them into adaptive observers for the joint estimation of states and parameters [46] [47] [48], robust observers against perturbations such as sliding mode observers [49] [50] [51], and optimal observers such as the Kalman filter [52] [53].

Figure 3.2: Observer principle

This thesis focuses on discrete linear time-invariant (LTI) systems, which have the form

ξ(k + 1) = Aξ(k) + Bν(k),
z(k) = Cξ(k),   (3.14)

where the matrix D is set to zero, as is the case in many physical systems. To explain the basic idea behind the observer, we propose, as an example, the following observer for (3.14):

ξ̂(k + 1) = Aξ̂(k) + Bν(k) + L(z(k) − ẑ(k)),
ẑ(k) = Cξ̂(k),   (3.15)

where L is the observer gain matrix, which will be determined to ensure the convergence of the estimation error to zero. If the observer error is defined as e(k) = ξ̂(k) − ξ(k), then the dynamics of the error

of (3.15) can be written as

e(k + 1) = (A − LC)e(k).   (3.16)

To ensure the convergence of the error to zero, the matrix (A − LC) must be stable, which for this discrete-time system means that its eigenvalues must lie inside the unit circle. Therefore, the observer gain matrix L should be chosen appropriately to obtain a stable error system. In other words, L is chosen such that the dynamics of the observer is much faster than that of the system itself; in this case the error converges exponentially to zero. The observer gain matrix can be obtained by pole placement [44]. This method consists in choosing the matrix L such that the error system is stable, i.e., the eigenvalues of the matrix (A − LC) have magnitude strictly less than one for this discrete system. To find this L, we first fix the desired eigenvalues of (A − LC), say {λ_1, λ_2, …, λ_n}; then we solve the problem of determining the coefficients of the matrix L such that

det(λI − (A − LC)) = (λ − λ_1)(λ − λ_2) ⋯ (λ − λ_n).   (3.17)

3.4 Chapter Summary

This chapter has introduced the concept of an observer. We discussed how a differential equation can be written in the state-space representation, which is a standard formulation for dynamical systems. It was also clarified how the stability of a system can be studied through this representation. Some definitions and theorems on observability were presented. Finally, the idea of the observer was presented in the last section.

Chapter 4

A Tikhonov Regularization to Solve Inverse Source Problem for Wave Equation

This chapter presents an inverse source problem for the wave equation. We start by analyzing the solution of the direct problem, which allows us to define an operator relating the unknown source to the measurements, namely the position at some points. Then some properties of this operator are proved. Finally, Tikhonov regularization is applied to address the instability, with the regularization parameter chosen through different approaches: the Discrepancy Principle of Morozov, L-curve, Generalized Cross Validation (GCV), and Normalized Cumulative Periodogram (NCP).

4.1 Problem Statement

Consider the following one-dimensional wave equation with Dirichlet boundary conditions:

u_tt(x, t) − c² u_xx(x, t) = f(x),
u(0, t) = 0,
u(l, t) = 0,
u(x, 0) = r_1(x),
u_t(x, 0) = r_2(x),   (4.1)

where x is the space coordinate defined on [0, l], t is the time coordinate defined on [0, T], u_x denotes the derivative of u with respect to x and u_t the derivative with respect to t, r_1(x) and r_2(x) are the initial conditions in L²[0, l], and f(x) ∈ L²[0, l] is the source function, which is assumed, for simplicity, to be independent of time. The direct problem and examples of inverse problems for the wave equation were described in Chapter 2, Example 3. In this chapter we focus on the inverse source problem. First, we propose to determine the operator linking the unknowns to the measurements.

4.2 Inverse Problem's Operator and its Properties

In the next subsection we derive the operator of this inverse problem from the analytic solution of the direct problem of (4.1).

4.2.1 Construct the Operator by Solving the Direct Problem [1], [2]

(4.1) cannot be solved directly by the separation of variables method, because that method requires both the PDE and the boundary conditions (BCs) to be homogeneous. Some transformations can be applied first to initial boundary value problems (IBVPs), after which separation of variables can be used; however, in some cases such a transformation may not remove the inhomogeneity. For a problem such as (4.1), where the inhomogeneity is in the PDE and the BCs are zero, an eigenfunction expansion can be used. If the boundary conditions are not zero, we first need to zero them out using some transformation functions.

Proposition 1. Using the eigenfunction expansion method, the solution of the direct

problem of system (4.1) is

u(x, t) = ∫_0^l r_1(ξ) ∂G/∂t(x, t; ξ, 0) dξ + ∫_0^l r_2(ξ) G(x, t; ξ, 0) dξ + ∫_0^t ∫_0^l f(ξ) G(x, t; ξ, t̃) dξ dt̃,   (4.2)

where

G(x, t; ξ, t̃) = Σ_{k=1}^{∞} (2/(ckπ)) sin(kπx/l) sin(ckπ(t − t̃)/l) sin(kπξ/l)   (4.3)

is the Green's function. However, if the source f(x) needs to be estimated when the wave speed and the initial conditions are known, the initial conditions are zero for simplicity, and we have some measurements of u at time t = T, then this is an inverse source problem, and it has an operator K such that

(Kf)(x) = ∫_0^l H(x, ξ) f(ξ) dξ = g(x),   (4.4)

where K is an integral operator of the first kind with kernel H(x, ξ) given by

H(x, ξ) = Σ_{k=1}^{∞} (2l/(ckπ)²) (1 − cos(ckπT/l)) sin(kπx/l) sin(kπξ/l).   (4.5)

Proof. In the eigenfunction expansion method, we look for a solution of the form

u(x, t) = Σ_{k=1}^{∞} a_k(t) φ_k(x),   (4.6)

where the φ_k(x) are the eigenfunctions. They can be found by solving the homogeneous version of (4.1), which is

u_tt(x, t) − c² u_xx(x, t) = 0,
u(0, t) = 0,
u(l, t) = 0,
u(x, 0) = 0,
u_t(x, 0) = 0.   (4.7)

By applying the separation of variables method to the homogeneous problem (4.7), where the boundary conditions are Dirichlet boundary conditions, one gets

φ_k(x) = sin(kπx/l),  k = 1, 2, …   (4.8)

Thus, the solution of (4.1) will be of the form

u(x, t) = Σ_{k=1}^{∞} a_k(t) sin(kπx/l).   (4.9)

Differentiating (4.9) with respect to t and with respect to x, then substituting into (4.1), one gets

Σ_{k=1}^{∞} [ d²a_k(t)/dt² + c²(kπ/l)² a_k(t) ] sin(kπx/l) = f(x).   (4.10)

Let q_k(t) be the k-th Fourier coefficient of the decomposition of f, i.e.

d²a_k(t)/dt² + c²(kπ/l)² a_k(t) = q_k(t),   (4.11)

where q_k(t) can be expressed as

q_k(t) = (2/l) ∫_0^l f(ξ) sin(kπξ/l) dξ.   (4.12)

Moreover, (4.11) is simply an inhomogeneous second-order ODE, and its solution is of the form

a_k(t) = a_kh + a_kp,   (4.13)

where a_kh is the homogeneous solution and a_kp is the particular solution. The homogeneous solution: first, the characteristic equation of the homogeneous version of (4.11) is r² + c²(kπ/l)² = 0; thus, its solution is r = ±(ckπ/l) i, where


More information

Sensor Tasking and Control

Sensor Tasking and Control Sensor Tasking and Control Sensing Networking Leonidas Guibas Stanford University Computation CS428 Sensor systems are about sensing, after all... System State Continuous and Discrete Variables The quantities

More information

Solve Wave Equation from Scratch [2013 HSSP]

Solve Wave Equation from Scratch [2013 HSSP] 1 Solve Wave Equation from Scratch [2013 HSSP] Yuqi Zhu MIT Department of Physics, 77 Massachusetts Ave., Cambridge, MA 02139 (Dated: August 18, 2013) I. COURSE INFO Topics Date 07/07 Comple number, Cauchy-Riemann

More information

Unconstrained Multivariate Optimization

Unconstrained Multivariate Optimization Unconstrained Multivariate Optimization Multivariate optimization means optimization of a scalar function of a several variables: and has the general form: y = () min ( ) where () is a nonlinear scalar-valued

More information

REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE

REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE Int. J. Appl. Math. Comput. Sci., 007, Vol. 17, No., 157 164 DOI: 10.478/v10006-007-0014-3 REGULARIZATION PARAMETER SELECTION IN DISCRETE ILL POSED PROBLEMS THE USE OF THE U CURVE DOROTA KRAWCZYK-STAŃDO,

More information

A Parameter-Choice Method That Exploits Residual Information

A Parameter-Choice Method That Exploits Residual Information A Parameter-Choice Method That Exploits Residual Information Per Christian Hansen Section for Scientific Computing DTU Informatics Joint work with Misha E. Kilmer Tufts University Inverse Problems: Image

More information

Partial differential equation for temperature u(x, t) in a heat conducting insulated rod along the x-axis is given by the Heat equation:

Partial differential equation for temperature u(x, t) in a heat conducting insulated rod along the x-axis is given by the Heat equation: Chapter 7 Heat Equation Partial differential equation for temperature u(x, t) in a heat conducting insulated rod along the x-axis is given by the Heat equation: u t = ku x x, x, t > (7.1) Here k is a constant

More information

MATH 220 solution to homework 1

MATH 220 solution to homework 1 MATH solution to homework Problem. Define z(s = u( + s, y + s, then z (s = u u ( + s, y + s + y ( + s, y + s = e y, z( y = u( y, = f( y, u(, y = z( = z( y + y If we prescribe the data u(, = f(, then z

More information

Control Systems I. Lecture 2: Modeling. Suggested Readings: Åström & Murray Ch. 2-3, Guzzella Ch Emilio Frazzoli

Control Systems I. Lecture 2: Modeling. Suggested Readings: Åström & Murray Ch. 2-3, Guzzella Ch Emilio Frazzoli Control Systems I Lecture 2: Modeling Suggested Readings: Åström & Murray Ch. 2-3, Guzzella Ch. 2-3 Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich September 29, 2017 E. Frazzoli

More information

An eigenvalue method using multiple frequency data for inverse scattering problems

An eigenvalue method using multiple frequency data for inverse scattering problems An eigenvalue method using multiple frequency data for inverse scattering problems Jiguang Sun Abstract Dirichlet and transmission eigenvalues have important applications in qualitative methods in inverse

More information

The Kuhn-Tucker and Envelope Theorems

The Kuhn-Tucker and Envelope Theorems The Kuhn-Tucker and Envelope Theorems Peter Ireland EC720.01 - Math for Economists Boston College, Department of Economics Fall 2010 The Kuhn-Tucker and envelope theorems can be used to characterize the

More information

Lecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization

Lecture 9 Nonlinear Control Design. Course Outline. Exact linearization: example [one-link robot] Exact Feedback Linearization Lecture 9 Nonlinear Control Design Course Outline Eact-linearization Lyapunov-based design Lab Adaptive control Sliding modes control Literature: [Khalil, ch.s 13, 14.1,14.] and [Glad-Ljung,ch.17] Lecture

More information

Past Cone Dynamics and Backward Group Preserving Schemes for Backward Heat Conduction Problems

Past Cone Dynamics and Backward Group Preserving Schemes for Backward Heat Conduction Problems Copyright c 2006 Tech Science Press CMES, vol12, no1, pp67-81, 2006 Past Cone Dynamics and Backward Group Preserving Schemes for Backward Heat Conduction Problems C-S Liu 1,C-WChang 2, J-R Chang 2 Abstract:

More information

First Variation of a Functional

First Variation of a Functional First Variation of a Functional The derivative of a function being zero is a necessary condition for the etremum of that function in ordinary calculus. Let us now consider the equivalent of a derivative

More information

PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN

PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION A Thesis by MELTEM APAYDIN Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the

More information

Electromagnetic Modeling and Simulation

Electromagnetic Modeling and Simulation Electromagnetic Modeling and Simulation Erin Bela and Erik Hortsch Department of Mathematics Research Experiences for Undergraduates April 7, 2011 Bela and Hortsch (OSU) EM REU 2010 1 / 45 Maxwell s Equations

More information

Eigenvalues of Trusses and Beams Using the Accurate Element Method

Eigenvalues of Trusses and Beams Using the Accurate Element Method Eigenvalues of russes and Beams Using the Accurate Element Method Maty Blumenfeld Department of Strength of Materials Universitatea Politehnica Bucharest, Romania Paul Cizmas Department of Aerospace Engineering

More information

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form:

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form: 3.3 Gradient Vector and Jacobian Matri 3 3.3 Gradient Vector and Jacobian Matri Overview: Differentiable functions have a local linear approimation. Near a given point, local changes are determined by

More information

Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities

Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities Convergence rates for Morozov s Discrepancy Principle using Variational Inequalities Stephan W Anzengruber Ronny Ramlau Abstract We derive convergence rates for Tikhonov-type regularization with conve

More information

Inverse problems in statistics

Inverse problems in statistics Inverse problems in statistics Laurent Cavalier (Université Aix-Marseille 1, France) YES, Eurandom, 10 October 2011 p. 1/27 Table of contents YES, Eurandom, 10 October 2011 p. 2/27 Table of contents 1)

More information

Statistically-Based Regularization Parameter Estimation for Large Scale Problems

Statistically-Based Regularization Parameter Estimation for Large Scale Problems Statistically-Based Regularization Parameter Estimation for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova March 1, 2010 National Science Foundation: Division of Computational

More information

Linear State Feedback Controller Design

Linear State Feedback Controller Design Assignment For EE5101 - Linear Systems Sem I AY2010/2011 Linear State Feedback Controller Design Phang Swee King A0033585A Email: king@nus.edu.sg NGS/ECE Dept. Faculty of Engineering National University

More information

Zeros and zero dynamics

Zeros and zero dynamics CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)

More information

M445: Heat equation with sources

M445: Heat equation with sources M5: Heat equation with sources David Gurarie I. On Fourier and Newton s cooling laws The Newton s law claims the temperature rate to be proportional to the di erence: d dt T = (T T ) () The Fourier law

More information

Control Systems I. Lecture 2: Modeling and Linearization. Suggested Readings: Åström & Murray Ch Jacopo Tani

Control Systems I. Lecture 2: Modeling and Linearization. Suggested Readings: Åström & Murray Ch Jacopo Tani Control Systems I Lecture 2: Modeling and Linearization Suggested Readings: Åström & Murray Ch. 2-3 Jacopo Tani Institute for Dynamic Systems and Control D-MAVT ETH Zürich September 28, 2018 J. Tani, E.

More information

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary

More information

California Subject Examinations for Teachers

California Subject Examinations for Teachers CSET California Subject Eaminations for Teachers TEST GUIDE MATHEMATICS SUBTEST III Sample Questions and Responses and Scoring Information Copyright 005 by National Evaluation Systems, Inc. (NES ) California

More information

MA3232 Numerical Analysis Week 9. James Cooley (1926-)

MA3232 Numerical Analysis Week 9. James Cooley (1926-) MA umerical Analysis Week 9 James Cooley (96-) James Cooley is an American mathematician. His most significant contribution to the world of mathematics and digital signal processing is the Fast Fourier

More information

A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1

A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1 A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1 Jinglin Zhou Hong Wang, Donghua Zhou Department of Automation, Tsinghua University, Beijing 100084, P. R. China Control Systems Centre,

More information

Evolution equations with spectral methods: the case of the wave equation

Evolution equations with spectral methods: the case of the wave equation Evolution equations with spectral methods: the case of the wave equation Jerome.Novak@obspm.fr Laboratoire de l Univers et de ses Théories (LUTH) CNRS / Observatoire de Paris, France in collaboration with

More information

Ill-Posedness of Backward Heat Conduction Problem 1

Ill-Posedness of Backward Heat Conduction Problem 1 Ill-Posedness of Backward Heat Conduction Problem 1 M.THAMBAN NAIR Department of Mathematics, IIT Madras Chennai-600 036, INDIA, E-Mail mtnair@iitm.ac.in 1. Ill-Posedness of Inverse Problems Problems that

More information

Calculus of Variation An Introduction To Isoperimetric Problems

Calculus of Variation An Introduction To Isoperimetric Problems Calculus of Variation An Introduction To Isoperimetric Problems Kevin Wang The University of Sydney SSP Working Seminars, MATH2916 May 4, 2013 Contents I Lagrange Multipliers 2 1 Single Constraint Lagrange

More information

Generalized Local Regularization for Ill-Posed Problems

Generalized Local Regularization for Ill-Posed Problems Generalized Local Regularization for Ill-Posed Problems Patricia K. Lamm Department of Mathematics Michigan State University AIP29 July 22, 29 Based on joint work with Cara Brooks, Zhewei Dai, and Xiaoyue

More information

Ergodicity in data assimilation methods

Ergodicity in data assimilation methods Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation

More information

Regularization and Inverse Problems

Regularization and Inverse Problems Regularization and Inverse Problems Caroline Sieger Host Institution: Universität Bremen Home Institution: Clemson University August 5, 2009 Caroline Sieger (Bremen and Clemson) Regularization and Inverse

More information

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is FOURIER INVERSION 1. The Fourier Transform and the Inverse Fourier Transform Consider functions f, g : R n C, and consider the bilinear, symmetric function ψ : R n R n C, ψ(, ) = ep(2πi ), an additive

More information

Harmonic Analysis Homework 5

Harmonic Analysis Homework 5 Harmonic Analysis Homework 5 Bruno Poggi Department of Mathematics, University of Minnesota November 4, 6 Notation Throughout, B, r is the ball of radius r with center in the understood metric space usually

More information

14.30 Introduction to Statistical Methods in Economics Spring 2009

14.30 Introduction to Statistical Methods in Economics Spring 2009 MIT OpenCourseWare http://ocw.mit.edu 4.0 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

SYLLABUS FOR ENTRANCE EXAMINATION NANYANG TECHNOLOGICAL UNIVERSITY FOR INTERNATIONAL STUDENTS A-LEVEL MATHEMATICS

SYLLABUS FOR ENTRANCE EXAMINATION NANYANG TECHNOLOGICAL UNIVERSITY FOR INTERNATIONAL STUDENTS A-LEVEL MATHEMATICS SYLLABUS FOR ENTRANCE EXAMINATION NANYANG TECHNOLOGICAL UNIVERSITY FOR INTERNATIONAL STUDENTS A-LEVEL MATHEMATICS STRUCTURE OF EXAMINATION PAPER. There will be one -hour paper consisting of 4 questions..

More information

Inverse scattering problem from an impedance obstacle

Inverse scattering problem from an impedance obstacle Inverse Inverse scattering problem from an impedance obstacle Department of Mathematics, NCKU 5 th Workshop on Boundary Element Methods, Integral Equations and Related Topics in Taiwan NSYSU, October 4,

More information

9.8 APPLICATIONS OF TAYLOR SERIES EXPLORATORY EXERCISES. Using Taylor Polynomials to Approximate a Sine Value EXAMPLE 8.1

9.8 APPLICATIONS OF TAYLOR SERIES EXPLORATORY EXERCISES. Using Taylor Polynomials to Approximate a Sine Value EXAMPLE 8.1 9-75 SECTION 9.8.. Applications of Taylor Series 677 and f 0) miles/min 3. Predict the location of the plane at time t min. 5. Suppose that an astronaut is at 0, 0) and the moon is represented by a circle

More information

FLOW AROUND A SYMMETRIC OBSTACLE

FLOW AROUND A SYMMETRIC OBSTACLE FLOW AROUND A SYMMETRIC OBSTACLE JOEL A. TROPP Abstract. In this article, we apply Schauder s fied point theorem to demonstrate the eistence of a solution to a certain integral equation. This solution

More information

18.303: Introduction to Green s functions and operator inverses

18.303: Introduction to Green s functions and operator inverses 8.33: Introduction to Green s functions and operator inverses S. G. Johnson October 9, 2 Abstract In analogy with the inverse A of a matri A, we try to construct an analogous inverse  of differential

More information

Regularization in Banach Space

Regularization in Banach Space Regularization in Banach Space Barbara Kaltenbacher, Alpen-Adria-Universität Klagenfurt joint work with Uno Hämarik, University of Tartu Bernd Hofmann, Technical University of Chemnitz Urve Kangro, University

More information

The spacetime of special relativity

The spacetime of special relativity 1 The spacetime of special relativity We begin our discussion of the relativistic theory of gravity by reviewing some basic notions underlying the Newtonian and special-relativistic viewpoints of space

More information

Small BGK waves and nonlinear Landau damping (higher dimensions)

Small BGK waves and nonlinear Landau damping (higher dimensions) Small BGK waves and nonlinear Landau damping higher dimensions Zhiwu Lin and Chongchun Zeng School of Mathematics Georgia Institute of Technology Atlanta, GA, USA Abstract Consider Vlasov-Poisson system

More information

Likelihood Bounds for Constrained Estimation with Uncertainty

Likelihood Bounds for Constrained Estimation with Uncertainty Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 5 Seville, Spain, December -5, 5 WeC4. Likelihood Bounds for Constrained Estimation with Uncertainty

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information