Seismic imaging and multiple removal via model order reduction


1 Seismic imaging and multiple removal via model order reduction

Alexander V. Mamonov (1), Liliana Borcea (2), Vladimir Druskin (3), and Mikhail Zaslavsky (3)
(1) University of Houston, (2) University of Michigan Ann Arbor, (3) Schlumberger-Doll Research Center

Support: NSF DMS, ONR N

2 Motivation: seismic oil and gas exploration

Problems addressed:
1. Imaging: qualitative estimation of reflectors on top of a known velocity model
2. Multiple removal: from the measured data, produce a new data set containing only primary reflection events

Common framework: data-driven Reduced Order Models (ROMs)

3 Forward model: acoustic wave equation

Acoustic wave equation in the time domain with initial conditions:
    u_tt = -A u   in Ω, t ∈ [0, T],
    u|_{t=0} = B,   u_t|_{t=0} = 0,
where the sources are the columns of B ∈ R^{N×m}.

The spatial operator A ∈ R^{N×N} is a (symmetrized, positive definite) fine-grid discretization of -c²Δ with appropriate boundary conditions.

The wavefields for all sources are the columns of
    u(t) = cos(t√A) B ∈ R^{N×m}.
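To make the notation concrete, here is a minimal 1D sketch of this forward model (an illustration added here, not an example from the talk): the grid, the two-layer velocity profile, the source positions and the time step are all assumptions, and the snapshots u(kτ) = cos(kτ√A)B are evaluated through the eigendecomposition of the symmetrized operator.

```python
# Hypothetical 1D toy: unit interval, Dirichlet BCs, two-layer velocity.
import numpy as np

N, m = 200, 3                          # fine-grid points, number of sources
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
c = np.where(x < 0.5, 1.0, 1.5)        # piecewise-constant velocity (assumed)

# Symmetrized, positive definite discretization of -c^2 * Laplacian
Lap = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
A = np.diag(c) @ Lap @ np.diag(c)      # similar to -c^2 * Lap, but symmetric

# Sources: one narrow Gaussian per column of B (positions are assumptions)
src = np.array([0.1, 0.2, 0.3])
B = np.exp(-0.5 * ((x[:, None] - src[None, :]) / (3 * h)) ** 2)

lam, Q = np.linalg.eigh(A)             # A = Q diag(lam) Q^T, lam > 0
def wavefield(t):                      # u(t) = cos(t sqrt(A)) B
    return Q @ (np.cos(t * np.sqrt(lam))[:, None] * (Q.T @ B))

tau = 0.02
snapshots = [wavefield(k * tau) for k in range(10)]   # u_0, ..., u_9
print(snapshots[0].shape)              # (N, m); note u(0) = B
```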

4 Data model and problem formulations

For simplicity, assume that the sources and receivers are collocated, so the receiver matrix is also B. The data model is
    D(t) = B^T u(t) = B^T cos(t√A) B,
an m×m matrix-valued function of time.

Problem formulations:
1. Imaging: given D(t), estimate the reflectors, i.e. the discontinuities of c
2. Multiple removal: given D(t), obtain a Born data set F(t) with the multiple reflection events removed

In both cases we are also provided with a kinematic model: a smooth, non-reflective velocity c_0.

5 Reduced order models

The data is always sampled discretely, say uniformly at t_k = kτ. The choice of τ is very important; optimally τ is around the Nyquist rate.

The discrete data samples are
    D_k = D(kτ) = B^T cos(kτ√A) B = B^T T_k(P) B,
where T_k is the k-th Chebyshev polynomial and the propagator (the Green's function over the small time τ) is
    P = cos(τ√A) ∈ R^{N×N}.

A reduced order model (ROM) P̃ ∈ R^{mn×mn}, B̃ ∈ R^{mn×m} should fit the data:
    D_k = B^T T_k(P) B = B̃^T T_k(P̃) B̃,   k = 0, 1, ..., 2n-1.
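The Chebyshev structure comes from the identity cos(kθ) = T_k(cos θ). The self-contained sketch below (small random SPD matrices, nothing from the talk's examples; all sizes and seeds are assumptions) generates the snapshots by the three-term recursion u_{k+1} = 2Pu_k - u_{k-1} and checks that the resulting D_k agree with the exact samples B^T cos(kτ√A)B.

```python
# Toy check that D_k = B^T T_k(P) B with P = cos(tau * sqrt(A)).
import numpy as np

rng = np.random.default_rng(0)
N, m, n, tau = 80, 2, 6, 0.1
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)                 # symmetric positive definite "wave operator"
B = rng.standard_normal((N, m))             # source/receiver matrix

lam, Q = np.linalg.eigh(A)
matfun = lambda f: Q @ np.diag(f(lam)) @ Q.T
P = matfun(lambda s: np.cos(tau * np.sqrt(s)))          # propagator over one time step

# Snapshots u_0, ..., u_{2n-1} via u_{k+1} = 2 P u_k - u_{k-1}, and the data D_k
snaps = [B, P @ B]
for _ in range(2, 2 * n):
    snaps.append(2 * P @ snaps[-1] - snaps[-2])
D = [B.T @ u for u in snaps]

# Compare with the exact samples B^T cos(k*tau*sqrt(A)) B
exact = [B.T @ matfun(lambda s, k=k: np.cos(k * tau * np.sqrt(s))) @ B for k in range(2 * n)]
print(max(np.linalg.norm(Dk - Ek) for Dk, Ek in zip(D, exact)))   # ~ machine precision
```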

6 Projection ROMs

Projection ROMs are of the form
    P̃ = V^T P V,   B̃ = V^T B,
where V is an orthonormal basis for some subspace.

What subspace should we project on to fit the data? Consider the matrix of wavefield snapshots
    U = [u_0, u_1, ..., u_{n-1}] ∈ R^{N×mn},   u_k = u(kτ) = T_k(P) B.

We must project on the (block) Krylov subspace
    K_n(P, B) = colspan[B, PB, ..., P^{n-1}B] = colspan U.

Reasoning: the data only knows about what P does to the wavefield snapshots u_k.
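Continuing the sketch above (it reuses P, B, n, snaps and D defined there), the lines below orthogonalize the snapshot matrix U, form the projected P̃ and B̃, and check that this projection ROM reproduces the first 2n data samples; this moment-matching property is the reason for projecting on the block Krylov subspace.

```python
# Continuation of the previous sketch: projection ROM and moment matching.
U = np.hstack(snaps[:n])                    # N x (m n), spans K_n(P, B)
V, _ = np.linalg.qr(U)                      # orthonormal basis of colspan U
P_proj, B_proj = V.T @ P @ V, V.T @ B       # projection ROM

rom_snaps = [B_proj, P_proj @ B_proj]       # Chebyshev recursion on the ROM
for _ in range(2, 2 * n):
    rom_snaps.append(2 * P_proj @ rom_snaps[-1] - rom_snaps[-2])
print(max(np.linalg.norm(B_proj.T @ rom_snaps[k] - D[k]) for k in range(2 * n)))  # small
```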

7 ROM from measured data

The wavefields U in the whole domain are unknown, and thus V is unknown. How do we obtain the ROM from just the data D_k?

The data does not give us U, but it gives us inner products! Use the multiplicative property of the Chebyshev polynomials,
    T_i(x) T_j(x) = (1/2) (T_{i+j}(x) + T_{i-j}(x)).

Since u_k = T_k(P) B and D_k = B^T T_k(P) B, we get (blockwise, for i, j = 0, ..., n-1)
    (U^T U)_{i,j} = u_i^T u_j = (1/2) (D_{i+j} + D_{i-j}),
    (U^T P U)_{i,j} = u_i^T P u_j = (1/4) (D_{j+i+1} + D_{j-i+1} + D_{j+i-1} + D_{j-i-1}).
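Still continuing the same toy sketch (it reuses D, n, and the matrices U and P from the blocks above), the block mass and stiffness matrices assembled purely from the data samples D_k coincide with U^T U and U^T P U; the helper Dk takes absolute values of the indices, since D_{-k} = D_k for the cosine data.

```python
# Continuation: the Gramian and "stiffness" matrix from data only.
Dk = lambda k: D[abs(k)]
mass  = np.block([[0.5  * (Dk(i + j) + Dk(i - j)) for j in range(n)] for i in range(n)])
stiff = np.block([[0.25 * (Dk(i + j + 1) + Dk(j - i + 1) + Dk(i + j - 1) + Dk(j - i - 1))
                   for j in range(n)] for i in range(n)])
print(np.linalg.norm(mass - U.T @ U), np.linalg.norm(stiff - U.T @ P @ U))  # both ~ 0
```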

8 ROM from measured data

Suppose U is orthogonalized by a block QR (Gram-Schmidt) procedure,
    U = V L^T,   equivalently   V = U L^{-T},
where L is a block Cholesky factor of the Gramian, which is known from the data:
    U^T U = L L^T.

The projection is then given by
    P̃ = V^T P V = L^{-1} (U^T P U) L^{-T},
where U^T P U is also known from the data.

The Cholesky factorization is essential: the (block) lower triangular structure is the linear algebraic equivalent of causality.
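Completing the toy pipeline (it reuses mass, stiff, the helper Dk, and m, n from the sketches above), the data-driven ROM is obtained from the block Cholesky factor of the Gramian; as a consistency check, running the Chebyshev recursion on P̃, B̃ reproduces all 2n data samples.

```python
# Continuation: data-driven ROM from the Cholesky factor of the Gramian.
import scipy.linalg as sla

L = np.linalg.cholesky(mass)                               # block lower triangular
P_rom = sla.solve_triangular(L, sla.solve_triangular(L, stiff, lower=True).T, lower=True).T
B_rom = sla.solve_triangular(L, mass[:, :m], lower=True)   # = V^T B, since u_0 = B

rom = [B_rom, P_rom @ B_rom]                               # Chebyshev recursion on the ROM
for _ in range(2, 2 * n):
    rom.append(2 * P_rom @ rom[-1] - rom[-2])
print(max(np.linalg.norm(B_rom.T @ rom[k] - Dk(k)) for k in range(2 * n)))  # small
```

Choosing Cholesky rather than, say, an eigenvalue decomposition matters: because L^{-T} is (block) upper triangular, the k-th orthogonalized snapshot is a combination of u_0, ..., u_k only, i.e. it uses data up to time kτ, which is the causality property emphasized above.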

9 Problem 1: Imaging

The ROM is a projection, so we can use backprojection. If span(U) is sufficiently rich, then the columns of VV^T should be good approximations of δ-functions, hence
    P ≈ VV^T P VV^T = V P̃ V^T.

As before, U and V are unknown. However, we have an approximate kinematic model, i.e. the travel times, which is equivalent to knowing a smooth velocity c_0. For the known c_0 we can compute everything, including U_0, V_0, P̃_0.

10 ROM backprojection

Take the backprojection P ≈ V P̃ V^T and make another approximation: replace the unknown V with V_0,
    P ≈ V_0 P̃ V_0^T.

For the kinematic model we know V_0 exactly:
    P_0 ≈ V_0 P̃_0 V_0^T.

The approximate perturbation of the propagator,
    P - P_0 ≈ V_0 (P̃ - P̃_0) V_0^T,
is essentially the perturbation of the Green's function
    δG(x, y) = G(x, y, τ) - G_0(x, y, τ).

But δG(x, y) depends on two variables x, y ∈ Ω; how do we get a single image?

11 Backprojection imaging functional

Take the imaging functional I to be
    I(x) ≈ δG(x, x) = G(x, x, τ) - G_0(x, x, τ),   x ∈ Ω.

In matrix form this means taking the diagonal,
    I = diag( V_0 (P̃ - P̃_0) V_0^T ) ≈ diag(P - P_0).

Note that
    I = diag( [V_0 V^T] P [V V_0^T] - [V_0 V_0^T] P_0 [V_0 V_0^T] ).

Thus the approximation quality depends only on how well the columns of V V_0^T and V_0 V_0^T approximate δ-functions.
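Below is a self-contained 1D toy sketch of the backprojection imaging functional (an illustration added here, not the talk's 2D examples; the geometry, the reflector at x = 0.5, the source positions and the tiny Gramian regularization are all assumptions). In practice P̃ would be assembled from the data alone, as in the earlier sketches; here P is used directly for brevity, which yields the same matrices up to roundoff.

```python
# Hypothetical 1D toy: constant kinematic model c0, one reflector in c.
import numpy as np
import scipy.linalg as sla

N, m, n, tau = 300, 2, 15, 0.05
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
c0 = np.ones(N)                                        # smooth kinematic model
c = np.where(x < 0.5, 1.0, 1.3)                        # true velocity, reflector at x = 0.5
Lap = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
B = np.exp(-0.5 * ((x[:, None] - np.array([0.08, 0.14])[None, :]) / 0.02) ** 2)

def propagator(vel):                                   # P = cos(tau sqrt(A))
    A = np.diag(vel) @ Lap @ np.diag(vel)              # symmetrized -vel^2 * Laplacian
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.cos(tau * np.sqrt(np.maximum(lam, 0.0)))) @ Q.T

def rom_and_basis(P):                                  # ROM in the Cholesky basis V = U L^{-T}
    S = [B, P @ B]
    for _ in range(2, n):
        S.append(2 * P @ S[-1] - S[-2])
    U = np.hstack(S)
    Gram = U.T @ U                                     # equals the Gramian assembled from the data
    Gram += 1e-12 * np.linalg.norm(Gram) * np.eye(m * n)   # tiny safeguard against roundoff
    L = np.linalg.cholesky(Gram)
    V = sla.solve_triangular(L, U.T, lower=True).T
    return V.T @ P @ V, V

Pt, _ = rom_and_basis(propagator(c))                   # P-tilde, computable from measured data
Pt0, V0 = rom_and_basis(propagator(c0))                # background quantities: c0 is known

I_img = np.diag(V0 @ (Pt - Pt0) @ V0.T)                # backprojection imaging functional
mask = x > 0.2                                         # exclude the immediate vicinity of the sources
peak = x[mask][np.argmax(np.abs(I_img[mask]))]
print("peak of |I| at x =", peak, "(reflector at x = 0.5)")
```

The printed peak location is only a quick sanity check; in the talk's examples the functional is plotted as an image over Ω.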

12 Simple example: layered model

[Figures: true velocity c, ROM backprojection image I, RTM image]

A simple layered model, p = 32 sources/receivers.
Constant-velocity kinematic model c_0 = 1500 m/s.
Multiple reflections come from waves bouncing between the layers and the reflective top surface.
Each multiple creates an RTM artifact below the actual layers.

13 Snapshot orthogonalization

[Figures: snapshots U vs. orthogonalized snapshots V at t = 10τ, 15τ, 20τ]

14 Snapshot orthogonalization

[Figures: snapshots U vs. orthogonalized snapshots V at t = 25τ, 30τ, 35τ]

15 Approximation of δ-functions

[Figures: columns of V_0 V_0^T vs. columns of V V_0^T at y = 345 m, 510 m, 675 m]

16 Approximation of δ-functions

[Figures: columns of V_0 V_0^T vs. columns of V V_0^T at y = 840 m, 1020 m, 1185 m]

17 High contrast example: hydraulic fractures

[Figures: true velocity c, RTM image]

Important application: hydraulic fracturing.
Three fractures, each 10 cm wide.
Very high contrast: c = 4500 m/s in the surrounding rock, c = 1500 m/s in the fluid inside the fractures.

18 High contrast example: hydraulic fractures

[Figures: true velocity c, ROM backprojection image I]

Important application: hydraulic fracturing.
Three fractures, each 10 cm wide.
Very high contrast: c = 4500 m/s in the surrounding rock, c = 1500 m/s in the fluid inside the fractures.

19 Large scale example: Marmousi model

20 Problem 2: multiple removal

Introduce the Data-to-Born (DtB) transform: compute the ROM from the original data, then generate a new data set with primary reflection events only.

Born with respect to what? Consider the wave equation in the form
    u_tt = σc ∇·( (c/σ) ∇u ),
where the acoustic impedance is σ = ρc.

Assume that c = c_0 is a known kinematic model and only the impedance σ changes. These assumptions are for the derivation only; the method works even if they are not satisfied.

21 Born approximation

One can show that
    P ≈ I - (τ²/2) L_q L_q^T,
where the first-order factors L_q and L_q^T, built from c and q, are affine in q = log σ.

Consider the Born approximation (linearization) with respect to q around the known c = c_0.

Perform a second Cholesky factorization on the ROM,
    (2/τ²)(Ĩ - P̃) = L̃_q L̃_q^T.

The Cholesky factors L̃_q, L̃_q^T are approximately affine in q, thus the perturbation L̃_q - L̃_0 is approximately linear in q.

22 Data-to-Born transform

1. Compute P̃ from the data D and P̃_0 from the background data D_0 corresponding to q ≡ 0 (σ ≡ 1).
2. Perform the second Cholesky factorization to find L̃_q and L̃_0.
3. Form the perturbation L̃_ε = L̃_0 + ε(L̃_q - L̃_0), which is affine in εq.
4. Propagate the perturbation:
       D_k^ε = B̃^T T_k( Ĩ - (τ²/2) L̃_ε L̃_ε^T ) B̃.
5. Differentiate to obtain the DtB transformed data:
       F_k = D_k^0 + dD_k^ε/dε |_{ε=0},   k = 0, 1, ..., 2n-1.
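The following self-contained 1D sketch walks through the five steps on a toy impedance model (an illustration added here under simplifying assumptions: fixed velocity c = 1, a single impedance layer, the same Gaussian sources used in the symmetrized coordinates, a tiny Gramian regularization, and a finite-difference derivative in ε). For reference it also computes a brute-force Born linearization of the fine-grid data, so the two printed distances can be compared.

```python
# Hypothetical 1D toy Data-to-Born transform.
import numpy as np
import scipy.linalg as sla

N, m, n, tau = 300, 2, 15, 0.05
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
c = np.ones(N)                                          # fixed kinematic model c = c0 = 1
sig = np.where((x > 0.45) & (x < 0.65), 1.4, 1.0)       # impedance layer: two reflectors
B = np.exp(-0.5 * ((x[:, None] - np.array([0.08, 0.14])[None, :]) / 0.02) ** 2)

def propagator(sigma):          # P = cos(tau sqrt(A)) for the symmetrized impedance-form operator
    w = c / sigma
    wmid = np.concatenate(([w[0]], 0.5 * (w[:-1] + w[1:]), [w[-1]]))
    Dif = (np.eye(N + 1, N) - np.eye(N + 1, N, k=-1)) / h
    A = np.diag(np.sqrt(sigma * c)) @ (Dif.T @ np.diag(wmid) @ Dif) @ np.diag(np.sqrt(sigma * c))
    lam, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.cos(tau * np.sqrt(np.maximum(lam, 0.0)))) @ Q.T

def samples(P, Bm):             # snapshots and data D_k = Bm^T T_k(P) Bm, k = 0, ..., 2n-1
    S = [Bm, P @ Bm]
    for _ in range(2, 2 * n):
        S.append(2 * P @ S[-1] - S[-2])
    return S, [Bm.T @ s for s in S]

def data_rom(P):                # data-driven ROM (see the sketches after slides 7 and 8)
    S, D = samples(P, B)
    U = np.hstack(S[:n])
    Gram = U.T @ U                                      # equals the Gramian assembled from D_k
    Gram += 1e-12 * np.linalg.norm(Gram) * np.eye(m * n)    # tiny safeguard against roundoff
    L = np.linalg.cholesky(Gram)
    Pt = sla.solve_triangular(L, sla.solve_triangular(L, U.T @ P @ U, lower=True).T, lower=True).T
    Bt = sla.solve_triangular(L, U.T @ B, lower=True)
    return Pt, Bt, D

Pt, Bt, D = data_rom(propagator(sig))                   # step 1: ROM from the "measured" data
Pt0, Bt0, D0 = data_rom(propagator(np.ones(N)))         # ... and from the background (sigma = 1)
# note: Bt is essentially Bt0, since the k = 0 sample D_0 = B^T B is medium-independent

# Step 2: second Cholesky factorization (2/tau^2)(I - P-tilde) = L-tilde L-tilde^T
Lq = np.linalg.cholesky(2.0 / tau**2 * (np.eye(m * n) - Pt))
L0 = np.linalg.cholesky(2.0 / tau**2 * (np.eye(m * n) - Pt0))

# Steps 3-4: propagate the perturbed factor L_eps = L0 + eps (Lq - L0)
def prop_data(eps):
    Le = L0 + eps * (Lq - L0)
    return samples(np.eye(m * n) - 0.5 * tau**2 * Le @ Le.T, Bt0)[1]

# Step 5: differentiate at eps = 0 (central difference) to obtain the DtB data F_k
e = 1e-6
F = [d0 + (dp - dm) / (2 * e)
     for d0, dp, dm in zip(prop_data(0.0), prop_data(e), prop_data(-e))]

# Reference: brute-force Born data, linearizing the fine-grid model in sigma around 1
d = 1e-6
born = [d0 + (dp - dm) / (2 * d)
        for d0, dp, dm in zip(D0, samples(propagator(1 + d * (sig - 1)), B)[1],
                              samples(propagator(1 - d * (sig - 1)), B)[1])]

dist = lambda X: np.sqrt(sum(np.linalg.norm(Xk - bk) ** 2 for Xk, bk in zip(X, born)))
print("distance of the original data D to the Born data:", dist(D))
print("distance of the DtB output   F to the Born data:", dist(F))
```

The central difference in ε is just a convenient stand-in for the analytic derivative of the affine family; since D_k^ε is polynomial in ε, the result is not sensitive to the step size.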

23 Example: DtB seismogram comparison

[Figures: impedance σ = ρc and velocity c; original data D_k - D_k^0; DtB transformed data F_k - D_k^0]

24 Example: DtB + RTM imaging

[Figures: impedance σ = ρc and velocity c; RTM image from the original data; RTM image from the DtB data]

25 Conclusions and future work

- ROMs for imaging and multiple removal (DtB)
- The time domain formulation is essential; Gram-Schmidt and Cholesky are the linear algebraic analogues of causality
- Implicit orthogonalization of the wavefield snapshots: removal of multiples in backprojection imaging and in the DtB transform
- Existing linearized imaging (RTM) and inversion (LS-RTM) methods can be applied to the DtB transformed data

Future work:
- Data completion for partial data (including monostatic, a.k.a. backscattering, measurements)
- Elasticity: promising preliminary results
- Stability and noise effects (SVD truncation of the Gramian, etc.)
- Frequency domain analogue (data-driven PML)

26 References

1. Nonlinear seismic imaging via reduced order model backprojection, A.V. Mamonov, V. Druskin, M. Zaslavsky, SEG Technical Program Expanded Abstracts, 2015.
2. Direct, nonlinear inversion algorithm for hyperbolic problems via projection-based model reduction, V. Druskin, A.V. Mamonov, A.E. Thaler, M. Zaslavsky, SIAM Journal on Imaging Sciences 9(2), 2016.
3. A nonlinear method for imaging with acoustic waves via reduced order model backprojection, V. Druskin, A.V. Mamonov, M. Zaslavsky, 2017, arXiv [math.NA].
4. Untangling the nonlinearity in inverse scattering with data-driven reduced order models, L. Borcea, V. Druskin, A.V. Mamonov, M. Zaslavsky, 2017, arXiv [math.NA].
