Data-to-Born transform for inversion and imaging with waves
1 Data-to-Born transform for inversion and imaging with waves
Alexander V. Mamonov (1), Liliana Borcea (2), Vladimir Druskin (3), and Mikhail Zaslavsky (3)
(1) University of Houston, (2) University of Michigan Ann Arbor, (3) Schlumberger-Doll Research Center
Support: NSF DMS, ONR N
A.V. Mamonov, Data-to-Born transform, 1 / 26
2 Introduction
Inversion with waves: determine the acoustic properties of a medium in the bulk from the response measured at or near the surface. This is a highly nonlinear problem, due in part to multiple scattering. Given the full waveform response, can we deduce what response the same medium would have if the waves propagated under the single scattering approximation, i.e. in the Born regime? It turns out we can! A highly nonlinear transform takes full waveform data to single scattering data: the Data-to-Born (DtB) transform. It acts as a data preprocessing algorithm and can be integrated into existing workflows.
3 Forward model: 1D
Acoustic wave equation for the pressure, with operator and initial conditions
$(\partial_{tt} + A)\,p(t,x) = 0, \quad x \in (0,\ell), \; t > 0,$
$A = -\sigma(x)\,c(x)\,\partial_x \left[ \frac{c(x)}{\sigma(x)}\,\partial_x \right],$
$p(0,x) = \sqrt{\sigma(x)}\,b(x), \quad p_t(0,x) = 0.$
Wave speed: $c(x)$, impedance: $\sigma(x)$.
Solution: pressure wavefield $p(t,x) = \cos\left(t\sqrt{A}\right)\sqrt{\sigma(x)}\,b(x)$.
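As a toy illustration of the solution formula, a symmetric positive semidefinite discretization of $A$ can be built and $\cos(t\sqrt{A})\,b$ evaluated through an eigendecomposition. The sketch below uses a plain second-difference matrix with $c = \sigma \equiv 1$ and hypothetical grid parameters; it is not the discretization used in the talk.

```python
import numpy as np

def cos_sqrt_apply(A, b, t):
    """Evaluate cos(t*sqrt(A)) @ b for a symmetric positive semidefinite A
    via its eigendecomposition (fine for small demo problems)."""
    lam, Q = np.linalg.eigh(A)
    lam = np.clip(lam, 0.0, None)          # guard tiny negative round-off
    return Q @ (np.cos(t * np.sqrt(lam)) * (Q.T @ b))

# Toy 1D discretization: A = -d^2/dx^2 with Dirichlet ends (c = sigma = 1)
n, h = 200, 1.0 / 201
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Gaussian pulse as the source b(x), centered at x = 0.5
x = np.arange(1, n + 1) * h
b = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)

p0 = cos_sqrt_apply(A, b, 0.0)             # at t = 0 the wavefield equals b
```

Since $\cos(0\cdot\sqrt{A}) = I$, the initial condition is recovered exactly, which is a quick consistency check on the spectral evaluation.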
4 Liouville transform and travel time coordinates
Using the Liouville transform and travel time coordinates
$T(x) = \int_0^x \frac{ds}{c(s)},$
we can write everything in first order form
$\partial_t \begin{pmatrix} P \\ U \end{pmatrix} = \begin{pmatrix} 0 & L_q \\ -L_q^T & 0 \end{pmatrix} \begin{pmatrix} P \\ U \end{pmatrix},$
where
$L_q = \partial_T - \tfrac{1}{2}\,\partial_T q, \quad L_q^T = -\partial_T - \tfrac{1}{2}\,\partial_T q, \quad q(T) = \ln \sigma\big(x(T)\big).$
Why use the Liouville transform? $L_q$ is affine in $q$! We consider the Born approximation with respect to $q$ around some reference $q_0$.
5 Data sampling and propagator
Data model for collocated sources/receivers:
$D(t) = \left\langle b, \cos\left(t\sqrt{L_q L_q^T}\right) b \right\rangle.$
The data is sampled discretely at $t_k = k\tau$:
$D_k = D(t_k) = \left\langle b, \cos\left(k \arccos\left(\cos\left(\tau\sqrt{L_q L_q^T}\right)\right)\right) b \right\rangle = \left\langle b, T_k(P)\, b \right\rangle,$
where $T_k$ is the $k$-th Chebyshev polynomial of the first kind and the propagator (Green's function) is
$P = \cos\left(\tau\sqrt{L_q L_q^T}\right).$
Works best with $\tau$ approximately at Nyquist sampling for the source $b$.
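The identity $\cos(k\theta) = T_k(\cos\theta)$ behind this sampling can be checked numerically. The matrices below are random placeholders standing in for a discrete $L_q$, not the actual operators from the talk:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

rng = np.random.default_rng(0)
n, tau = 8, 0.1
L = np.tril(rng.standard_normal((n, n)))   # stand-in for a discrete L_q
lam, Q = np.linalg.eigh(L @ L.T)           # spectrum of L_q L_q^T >= 0
lam = np.clip(lam, 0.0, None)
b = rng.standard_normal(n)

def D(t):
    """D(t) = <b, cos(t sqrt(L L^T)) b>, evaluated spectrally."""
    return b @ (Q @ (np.cos(t * np.sqrt(lam)) * (Q.T @ b)))

# Eigenvalues of the propagator P = cos(tau sqrt(L L^T))
P_eigs = np.cos(tau * np.sqrt(lam))

# Samples D_k = D(k tau) versus <b, T_k(P) b> with T_k the Chebyshev polynomial
lhs = [D(k * tau) for k in range(6)]
rhs = [b @ (Q @ (Chebyshev.basis(k)(P_eigs) * (Q.T @ b))) for k in range(6)]
```

The two lists agree to round-off, since $\cos(k\tau\sqrt{\lambda}) = T_k\!\left(\cos(\tau\sqrt{\lambda})\right)$ eigenvalue by eigenvalue.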
6 Wavefield snapshots and time stepping
Data sampling induces wavefield snapshots
$P_k(T) = P(t_k, T) = T_k(P)\, b(T).$
The snapshots satisfy exactly a time-stepping scheme
$\frac{1}{\tau^2}\left[ P_{k+1}(T) - 2 P_k(T) + P_{k-1}(T) \right] = -\xi(P)\, P_k(T),$
with
$\xi(P) = \frac{2}{\tau^2}\left( I - P \right) \succeq 0.$
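The equivalence of the Chebyshev three-term recurrence and this exact time-stepping scheme can be verified directly; the symmetric matrix here is a random placeholder for the propagator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 6, 0.2
X = rng.standard_normal((n, n))
P = (X + X.T) / 10                     # symmetric stand-in for the propagator
b = rng.standard_normal(n)

# Snapshots P_k = T_k(P) b via the recurrence T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)
snaps = [b, P @ b]
for k in range(1, 10):
    snaps.append(2 * P @ snaps[k] - snaps[k - 1])

# The same snapshots satisfy the exact scheme with xi(P) = (2/tau^2)(I - P):
#   (P_{k+1} - 2 P_k + P_{k-1}) / tau^2 + xi(P) P_k = 0
xi = (2 / tau**2) * (np.eye(n) - P)
resid = [snaps[k + 1] - 2 * snaps[k] + snaps[k - 1] + tau**2 * (xi @ snaps[k])
         for k in range(1, 10)]
```

The residuals vanish identically because $P_{k+1} - 2P_k + P_{k-1} = 2(P - I)P_k = -\tau^2\,\xi(P)\,P_k$ is an algebraic identity, not an approximation.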
7 Reduced order model
Why work with $\xi(P)$? Factorize $\xi(P) = \hat{L}_q \hat{L}_q^T$; a simple Taylor expansion shows
$\hat{L}_q = L_q + O(\tau^2).$
We can estimate $\tilde{P}$ just from the sampled data $D_k$! Reduced order model (ROM): a matrix $\tilde{P} \in \mathbb{R}^{n \times n}$ and a vector $\tilde{b} \in \mathbb{R}^{n}$ satisfying the data interpolation conditions
$D_k = \langle b, T_k(P)\, b \rangle = \tilde{b}^T T_k(\tilde{P})\, \tilde{b}, \quad k = 0, 1, \ldots, 2n - 1.$
8 Projection ROM
A ROM interpolating the data is a projection
$\tilde{P}_{i,j} = \langle V_i, P V_j \rangle,$
where the $V_k$ form an orthonormal basis for the Krylov subspace
$K_n(P, b) = \mathrm{span}\{b, Pb, \ldots, P^{n-1} b\} = \mathrm{span}\{P_0, P_1, \ldots, P_{n-1}\}.$
Problem: the snapshots $P_k$ are unavailable, thus the orthogonalized snapshots $V_k$ are unknown.
Solution: the data gives us the inner products of the snapshots, i.e. the entries of the mass and stiffness matrices
$M_{i,j} = \langle P_i, P_j \rangle, \quad S_{i,j} = \langle P_i, P P_j \rangle.$
9 Mass and stiffness matrices from data
Snapshots and data in terms of Chebyshev polynomials:
$P_k = T_k(P)\, b, \quad D_k = \langle b, P_k \rangle = \langle b, T_k(P)\, b \rangle.$
Chebyshev polynomials obey a multiplication property
$T_i(P)\, T_j(P) = \frac{1}{2}\left[ T_{i+j}(P) + T_{|i-j|}(P) \right].$
We can express the mass and stiffness matrix entries in terms of the data:
$M_{i,j} = \langle P_i, P_j \rangle = \frac{1}{2}\left[ D_{i+j} + D_{|i-j|} \right],$
$S_{i,j} = \langle P_i, P P_j \rangle = \frac{1}{4}\left[ D_{i+j+1} + D_{|i+j-1|} + D_{|i-j+1|} + D_{|i-j-1|} \right].$
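These formulas can be sanity-checked by generating synthetic samples $D_k$ from a small stand-in propagator and comparing the data-assembled $M$ and $S$ with inner products computed directly from the snapshots (random placeholder matrices, not the talk's operators):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 12, 4                               # fine grid size and ROM size
X = rng.standard_normal((N, N))
P = (X + X.T) / 8                          # symmetric stand-in propagator
b = rng.standard_normal(N)

# Snapshots P_k = T_k(P) b via the three-term recurrence (need k up to 2n)
snaps = [b, P @ b]
for k in range(1, 2 * n):
    snaps.append(2 * P @ snaps[k] - snaps[k - 1])

D = [b @ s for s in snaps]                 # data samples D_k = <b, T_k(P) b>

# Mass and stiffness matrices assembled from the data alone
M = np.array([[0.5 * (D[i + j] + D[abs(i - j)]) for j in range(n)]
              for i in range(n)])
S = np.array([[0.25 * (D[i+j+1] + D[abs(i+j-1)] + D[abs(i-j+1)] + D[abs(i-j-1)])
               for j in range(n)] for i in range(n)])

# Direct inner products of the snapshots, which the data formulas must match
M_direct = np.array([[snaps[i] @ snaps[j] for j in range(n)] for i in range(n)])
S_direct = np.array([[snaps[i] @ (P @ snaps[j]) for j in range(n)] for i in range(n)])
```

The agreement is exact up to round-off, since both sides reduce to the same Chebyshev product identities.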
10 Implicit snapshot orthogonalization
Consider the snapshot matrices
$P = [P_0, P_1, \ldots, P_{n-1}], \quad V = [V_0, V_1, \ldots, V_{n-1}],$
so that, formally,
$M = P^T P, \quad V^T V = I. \quad (1)$
If the snapshots were known, we could use Gram-Schmidt orthogonalization (QR factorization)
$P = V R, \quad V = P R^{-1}, \quad (2)$
with upper triangular $R \in \mathbb{R}^{n \times n}$. Combine (1)-(2) to get
$M = R^T R,$
a Cholesky factorization of the mass matrix known from the data.
11 ROM from data
The ROM is a projection, formally
$\tilde{P} = V^T P V. \quad (3)$
We have the Cholesky factor $R$ of the mass matrix, $M = R^T R$. Substitute $V = P R^{-1}$ into (3):
$\tilde{P} = V^T P V = R^{-T} \left( P^T P P \right) R^{-1} = R^{-T} S R^{-1},$
where $S = P^T P P$ is known from the data. The ROM sensor vector is
$\tilde{b} = R e_1 = (D_0)^{1/2} e_1.$
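Putting slides 8-11 together gives a short data-driven ROM construction. The sketch below (random symmetric placeholder for $P$, hypothetical sizes) checks that the resulting $\tilde{P}$, $\tilde{b}$ reproduce the first $2n$ data samples:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 12, 4
X = rng.standard_normal((N, N))
P = (X + X.T) / 8                          # symmetric stand-in propagator
b = rng.standard_normal(N)

# Synthetic data D_0, ..., D_{2n} from the Chebyshev recurrence
snaps = [b, P @ b]
for k in range(1, 2 * n):
    snaps.append(2 * P @ snaps[k] - snaps[k - 1])
D = [b @ s for s in snaps]

# Mass and stiffness matrices from the data alone
M = np.array([[0.5 * (D[i + j] + D[abs(i - j)]) for j in range(n)] for i in range(n)])
S = np.array([[0.25 * (D[i+j+1] + D[abs(i+j-1)] + D[abs(i-j+1)] + D[abs(i-j-1)])
               for j in range(n)] for i in range(n)])

R = np.linalg.cholesky(M).T                # M = R^T R with R upper triangular
Rinv = np.linalg.inv(R)
P_rom = Rinv.T @ S @ Rinv                  # ROM propagator, n x n
b_rom = np.sqrt(D[0]) * np.eye(n)[:, 0]    # ROM source, b_rom = sqrt(D_0) e_1

# Check the 2n interpolation conditions D_k = b_rom^T T_k(P_rom) b_rom
rs = [b_rom, P_rom @ b_rom]
for k in range(1, 2 * n - 1):
    rs.append(2 * P_rom @ rs[k] - rs[k - 1])
D_rom = [b_rom @ s for s in rs]
```

Note `np.linalg.cholesky` returns a lower-triangular factor, hence the transpose to get the upper-triangular $R$ of the Gram-Schmidt convention; the interpolation holds exactly in exact arithmetic and to round-off here.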
12 Factorization of $\xi(\tilde{P})$
Once $\tilde{P}$ is computed, form
$\xi(\tilde{P}) = \frac{2}{\tau^2}\left( I - \tilde{P} \right) \succeq 0$
and take another Cholesky factorization
$\xi(\tilde{P}) = \tilde{L}_q \tilde{L}_q^T.$
Note: the propagator ROM $\tilde{P}$ is tridiagonal, thus its Cholesky factor $\tilde{L}_q \in \mathbb{R}^{n \times n}$ is lower bidiagonal. Since $\xi(P) = \hat{L}_q \hat{L}_q^T \approx L_q L_q^T$, we conclude that $\tilde{L}_q$ approximates
$L_q = \partial_T - \tfrac{1}{2}\,\partial_T q,$
which is affine in $q$!
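The tridiagonal/bidiagonal structure can be observed numerically. Here the stand-in propagator is synthesized as $\cos(\tau\sqrt{A})$ of a hypothetical SPD matrix so that the spectrum of $\tilde{P}$ stays strictly inside $(-1,1)$ and $\xi(\tilde{P})$ is positive definite:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, tau = 12, 4, 0.3
W = rng.standard_normal((N, N))
Aop = W @ W.T + np.eye(N)                  # SPD stand-in for L_q L_q^T
lam, Q = np.linalg.eigh(Aop)
P = Q @ (np.cos(tau * np.sqrt(lam))[:, None] * Q.T)  # spectrum inside (-1, 1)
b = rng.standard_normal(N)

# Data-driven ROM as before: data -> M, S -> Cholesky -> projected propagator
snaps = [b, P @ b]
for k in range(1, 2 * n):
    snaps.append(2 * P @ snaps[k] - snaps[k - 1])
D = [b @ s for s in snaps]
M = np.array([[0.5 * (D[i + j] + D[abs(i - j)]) for j in range(n)] for i in range(n)])
S = np.array([[0.25 * (D[i+j+1] + D[abs(i+j-1)] + D[abs(i-j+1)] + D[abs(i-j-1)])
               for j in range(n)] for i in range(n)])
R = np.linalg.cholesky(M).T
Rinv = np.linalg.inv(R)
P_rom = Rinv.T @ S @ Rinv                  # Lanczos-type projection: tridiagonal

# xi(P_rom) is SPD here, and its Cholesky factor is lower bidiagonal
xi = (2 / tau**2) * (np.eye(n) - P_rom)
L_rom = np.linalg.cholesky(xi)

off2 = np.max(np.abs(np.triu(P_rom, 2)))   # entries beyond the tridiagonal band
below2 = np.max(np.abs(np.tril(L_rom, -2)))  # entries below the bidiagonal band
```

Both `off2` and `below2` are at round-off level relative to the matrix norms, reflecting the Lanczos structure of the Gram-Schmidt-projected propagator.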
13 Data-to-Born transform
Assume $c$ is known (travel-time coordinates) and choose a reference impedance $q_0$. Consider the perturbation
$\tilde{L}_\varepsilon = \tilde{L}_{q_0} + \varepsilon \left( \tilde{L}_q - \tilde{L}_{q_0} \right),$
where $\tilde{L}_q$ is obtained by solving
$\xi(\tilde{P}) = \frac{2}{\tau^2}\left( I - \tilde{P} \right) = \tilde{L}_q \tilde{L}_q^T$
for the measured data, and $\tilde{L}_{q_0}$ likewise for the reference. Perturb the propagator correspondingly:
$\tilde{P}_\varepsilon = I - \frac{\tau^2}{2}\, \tilde{L}_\varepsilon \tilde{L}_\varepsilon^T.$
14 Data-to-Born transform
Recall the data interpolation for the unknown $q$:
$D_k = \tilde{b}^T T_k(\tilde{P})\, \tilde{b},$
and similarly $D_k^{q_0}$ for the reference $q_0$. The Data-to-Born (DtB) transform is given by
$B_k = D_k^{q_0} + \tilde{b}^T \left[ \frac{d}{d\varepsilon}\, T_k\left( \tilde{P}_\varepsilon \right) \Big|_{\varepsilon = 0} \right] \tilde{b}.$
The chain rule does not apply to matrix functions; use the Chebyshev polynomial three-term recurrence to differentiate.
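One way to carry out this differentiation is to propagate $T_k(\tilde{P}_\varepsilon)$ and its $\varepsilon$-derivative jointly through the three-term recurrence. The sketch below uses a hypothetical generic matrix and perturbation direction and checks the result against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A0 = rng.standard_normal((n, n)); P0 = (A0 + A0.T) / 6  # base point
A1 = rng.standard_normal((n, n)); dP = (A1 + A1.T) / 6  # direction d(P_eps)/d eps

def cheb_and_derivative(P, dP, kmax):
    """Propagate F_k = T_k(P) and dF_k = d/deps T_k(P + eps dP)|_{eps=0}
    through the recurrence F_{k+1} = 2 P F_k - F_{k-1}. The plain chain rule
    fails here because P and dP need not commute."""
    F = [np.eye(P.shape[0]), P]        # T_0, T_1
    dF = [np.zeros_like(P), dP]        # their eps-derivatives at eps = 0
    for k in range(1, kmax):
        F.append(2 * P @ F[k] - F[k - 1])
        dF.append(2 * dP @ F[k] + 2 * P @ dF[k] - dF[k - 1])
    return F, dF

F, dF = cheb_and_derivative(P0, dP, 8)

# Finite-difference check of the derivative of T_8
eps = 1e-6
Fp, _ = cheb_and_derivative(P0 + eps * dP, dP, 8)
Fm, _ = cheb_and_derivative(P0 - eps * dP, dP, 8)
fd = (Fp[8] - Fm[8]) / (2 * eps)
```

Differentiating the recurrence term by term gives $dF_{k+1} = 2\,dP\,F_k + 2\,P\,dF_k - dF_{k-1}$, which is exact; the finite-difference comparison only confirms the implementation.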
15 Numerical results: 1D Born approximation
Compare DtB to data generated using the true Born approximation in $q = \ln \sigma$, with known $c$ and a non-reflective reference $q_0$:
$\partial_t \begin{pmatrix} P^{\mathrm{Born}} \\ U^{\mathrm{Born}} \end{pmatrix} = \begin{pmatrix} 0 & L_{q_0} \\ -L_{q_0}^T & 0 \end{pmatrix} \begin{pmatrix} P^{\mathrm{Born}} \\ U^{\mathrm{Born}} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}\,\partial_T (q_0 - q)\, U^{q_0} \\ \tfrac{1}{2}\,\partial_T (q - q_0)\, P^{q_0} \end{pmatrix},$
where, as before,
$L_{q_0} = \partial_T - \tfrac{1}{2}\,\partial_T q_0, \quad L_{q_0}^T = -\partial_T - \tfrac{1}{2}\,\partial_T q_0,$
and the reference wavefield solves
$\partial_t \begin{pmatrix} P^{q_0} \\ U^{q_0} \end{pmatrix} = \begin{pmatrix} 0 & L_{q_0} \\ -L_{q_0}^T & 0 \end{pmatrix} \begin{pmatrix} P^{q_0} \\ U^{q_0} \end{pmatrix}.$
16 Numerical results: 1D snapshots
[Figure: impedance $\sigma$, snapshots $P_{20}$, $P_{80}$, and orthogonalized snapshots $V_{20}$, $V_{80}$.]
Reference: $c = \sigma \equiv 1$, so $q_0 \equiv 0$.
Source/receiver at $x = 0$ ($T = 0$).
Spatial axis in integer units of $\tau$: $T_k = k\tau$.
17 Numerical results: 1D true Born vs. DtB
[Figure: true Born data compared with DtB-transformed data.]
Full suppression of multiple reflections. Both arrival times and amplitudes are matched exactly.
18 Higher dimensions
The approach generalizes naturally to higher dimensions. Data is recorded at an array of $m$ sensors, and is an $m \times m$ matrix function of time:
$D_{i,j}(t) = \left\langle b_i, \cos\left(t\sqrt{L_q L_q^T}\right) b_j \right\rangle, \quad i, j = 1, \ldots, m,$
where $L_q$ and $L_q^T$ are first-order operators built from $c(x)$ and $\nabla q(x)$, again affine in $q$. There is no travel time coordinate transformation in higher dimensions.
19 Higher dimensions: ROM
If the sampled data is $D_k = D(k\tau) \in \mathbb{R}^{m \times m}$, the ROM satisfies the matrix interpolation conditions
$D_k = \tilde{b}^T T_k(\tilde{P})\, \tilde{b}, \quad k = 0, \ldots, 2n - 1,$
with block matrices $\tilde{P} \in \mathbb{R}^{mn \times mn}$, $\tilde{b} \in \mathbb{R}^{mn \times m}$. The ROM and DtB transform are computed with block versions of the same algorithms as in 1D: all linear algebraic procedures (Gram-Schmidt, Cholesky) are replaced with their block counterparts.
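A block version of the 1D sketch, with $m = 2$ hypothetical sensors and a random symmetric placeholder for the propagator, shows that the same Chebyshev formulas applied blockwise plus a block Cholesky yield a $\tilde{P} \in \mathbb{R}^{mn \times mn}$ matching the matrix data:

```python
import numpy as np

rng = np.random.default_rng(6)
N, m, n = 16, 2, 3
W = rng.standard_normal((N, N))
P = (W + W.T) / 10                          # symmetric stand-in propagator
B = rng.standard_normal((N, m))             # one source column per sensor

snaps = [B, P @ B]                          # block snapshots, N x m each
for k in range(1, 2 * n):
    snaps.append(2 * P @ snaps[k] - snaps[k - 1])
D = [B.T @ s for s in snaps]                # m x m data matrices D_k

# Block mass/stiffness matrices (mn x mn) assembled from the data matrices
M = np.block([[0.5 * (D[i + j] + D[abs(i - j)]) for j in range(n)]
              for i in range(n)])
S = np.block([[0.25 * (D[i+j+1] + D[abs(i+j-1)] + D[abs(i-j+1)] + D[abs(i-j-1)])
               for j in range(n)] for i in range(n)])

R = np.linalg.cholesky(M).T                 # M = R^T R (a valid block Cholesky)
Rinv = np.linalg.inv(R)
P_rom = Rinv.T @ S @ Rinv                   # mn x mn block ROM propagator
b_rom = R[:, :m]                            # block ROM source, first block column of R

# Check the matrix interpolation conditions D_k = b_rom^T T_k(P_rom) b_rom
rs = [b_rom, P_rom @ b_rom]
for k in range(1, 2 * n - 1):
    rs.append(2 * P_rom @ rs[k] - rs[k - 1])
D_rom = [b_rom.T @ s for s in rs]
```

The triangular Cholesky factor of the full $mn \times mn$ mass matrix is, partitioned, block triangular with triangular diagonal blocks, so the scalar code carries over essentially unchanged.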
20 Numerical results: 2D snapshots
[Figure: impedance $\sigma$, wave speed $c$, and snapshots $P^{q_0}$, $V^{q_0}$, $P^{q}$, $V^{q}$.]
Array with $m = 50$ sensors. Snapshots plotted for a single source.
21 Numerical results: 2D true Born vs. DtB
[Figure: measured data, Born data, and DtB output.]
A single row of the data matrix, corresponding to one source, is shown. Vertical axis: time (in units of $\tau$); horizontal axis: receiver index (out of $m = 50$).
22 Numerical results: 2D DtB + RTM
[Figure: RTM image from measured data vs. RTM image from DtB data.]
Reverse time migration (RTM) images computed from both the measured full waveform data and the DtB-transformed data.
23 Numerical results: 2D elasticity
[Figure: measured data vs. DtB output.]
24 Numerical results: 2D elasticity
[Figure: measured data vs. DtB output.]
25 Conclusions and future work
Data-to-Born: a transform taking acoustic full waveform data to single scattering (Born) data for the same medium. It is based on techniques of model order reduction: a data-driven approach relying on classical linear algebra algorithms (Cholesky, QR), with no computations in the continuum. It is easy to integrate into existing workflows as a preprocessing step, and it enables the use of linearized inversion algorithms.
Future work:
- Reverse direction, Born-to-Data: use a cheap single scattering solver to generate full waveform data
- Test the performance of linearized inversion algorithms (e.g. LS-RTM) on DtB data
- Extend to the frequency domain wave equation (Helmholtz)
26 References
Untangling the nonlinearity in inverse scattering with data-driven reduced order models, L. Borcea, V. Druskin, A.V. Mamonov, M. Zaslavsky, 2017, arXiv [math.NA].
Related work:
1. Direct, nonlinear inversion algorithm for hyperbolic problems via projection-based model reduction, V. Druskin, A.V. Mamonov, A.E. Thaler and M. Zaslavsky, SIAM Journal on Imaging Sciences 9(2).
2. Nonlinear seismic imaging via reduced order model backprojection, A.V. Mamonov, V. Druskin, M. Zaslavsky, SEG Technical Program Expanded Abstracts, 2015.
3. A nonlinear method for imaging with acoustic waves via reduced order model backprojection, V. Druskin, A.V. Mamonov, M. Zaslavsky, 2017, arXiv [math.NA].
More information1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal
. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P
More informationModel Reduction for Dynamical Systems
MAX PLANCK INSTITUTE Otto-von-Guericke Universitaet Magdeburg Faculty of Mathematics Summer term 2015 Model Reduction for Dynamical Systems Lecture 10 Peter Benner Lihong Feng Max Planck Institute for
More informationLinear Algebra (Review) Volker Tresp 2017
Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.
More informationNumerical Analysis Preliminary Exam 10.00am 1.00pm, January 19, 2018
Numerical Analysis Preliminary Exam 0.00am.00pm, January 9, 208 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More information(v, w) = arccos( < v, w >
MA322 F all203 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v,
More informationInterpolation. Chapter Interpolation. 7.2 Existence, Uniqueness and conditioning
76 Chapter 7 Interpolation 7.1 Interpolation Definition 7.1.1. Interpolation of a given function f defined on an interval [a,b] by a polynomial p: Given a set of specified points {(t i,y i } n with {t
More informationMATH 423/ Note that the algebraic operations on the right hand side are vector subtraction and scalar multiplication.
MATH 423/673 1 Curves Definition: The velocity vector of a curve α : I R 3 at time t is the tangent vector to R 3 at α(t), defined by α (t) T α(t) R 3 α α(t + h) α(t) (t) := lim h 0 h Note that the algebraic
More informationNumerical Methods in Geophysics. Introduction
: Why numerical methods? simple geometries analytical solutions complex geometries numerical solutions Applications in geophysics seismology geodynamics electromagnetism... in all domains History of computers
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More information2.29 Numerical Fluid Mechanics Spring 2015 Lecture 9
Spring 2015 Lecture 9 REVIEW Lecture 8: Direct Methods for solving (linear) algebraic equations Gauss Elimination LU decomposition/factorization Error Analysis for Linear Systems and Condition Numbers
More informationNumerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization
Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725 Consider Last time: proximal Newton method min x g(x) + h(x) where g, h convex, g twice differentiable, and h simple. Proximal
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More information