Inverse Methods for Atmospheric Sounding: Theory and Practice


Clive D. Rodgers

ERRATA

July 14, 2016

Page 25, line -3: Remove minus sign. [Thanks to Randy VanValkenburg]

Page 26, Figure 2.4: The figure labelling is inconsistent. Either the $K^T S_\epsilon^{-1} K$ should be $(K^T S_\epsilon^{-1} K)^{-1}$, or the other two $S$'s should be $S^{-1}$'s. [Thanks to Justus Notholt.]

Page 27, Equation 2.34:
$$\tilde{x} = S_a^{-1/2}(x - x_a) \quad\text{and}\quad \tilde{y} = S_\epsilon^{-1/2}(y - y_a), \tag{1}$$
where $y_a = K x_a$; and on page 28, Equation 2.38:
$$\tilde{x} = T_a^T(x - x_a), \quad \tilde{y} = T_\epsilon^T(y - y_a) \quad\text{and}\quad \tilde{K} = T_\epsilon^T K T_a^{-T}. \tag{2}$$
[The $y_a$ was omitted in both cases.]

Page 28, after Equation 2.40: Insert: "where $\tilde{y}$, $\tilde{x}$ and $\tilde{\epsilon}$ are now of dimension $p$, the rank of $K$." And in line -3, replace $I_m$ by $I_p$. [Thanks to Randy VanValkenburg]

Page 30, Equation 2.42: Replace both $x$'s by $y$'s. [Thanks to Randy VanValkenburg]
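As a quick illustration of the corrected Eq. (1), the following numpy sketch applies the pre-whitening transformation to an invented linear problem (the matrices $K$, $S_a$, $S_\epsilon$ and the prior $x_a$ are made up for the example, not taken from the book) and checks that the transformed forward model is $\tilde{y} = \tilde{K}\tilde{x}$ with $\tilde{K} = S_\epsilon^{-1/2} K S_a^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym_pow(S, p):
    """Matrix power S^p of a symmetric positive-definite matrix S."""
    w, L = np.linalg.eigh(S)
    return (L * w**p) @ L.T

# Toy problem -- K, S_a, S_e and x_a are invented for illustration only.
m, n = 5, 3
K = rng.standard_normal((m, n))
Sa = np.diag(rng.uniform(0.5, 2.0, n))   # a priori covariance
Se = np.diag(rng.uniform(0.1, 0.5, m))   # measurement-error covariance
xa = rng.standard_normal(n)
ya = K @ xa                              # y_a = K x_a, the term the erratum restores

x = xa + rng.standard_normal(n)
y = K @ x                                # noise-free, so the check below is exact

# Eq. (1): whitened state and measurement, and the transformed weighting function.
x_t = sym_pow(Sa, -0.5) @ (x - xa)
y_t = sym_pow(Se, -0.5) @ (y - ya)
K_t = sym_pow(Se, -0.5) @ K @ sym_pow(Sa, 0.5)

print(np.allclose(y_t, K_t @ x_t))       # True
```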

Page 31, Equation 2.57: Append $+\,m - n$:
$$d_n = \mathrm{tr}(S_\epsilon[K S_a K^T + S_\epsilon]^{-1}) = \mathrm{tr}([K^T S_\epsilon^{-1} K + S_a^{-1}]^{-1} S_a^{-1}) + m - n. \tag{3}$$

Page 31, Equation 2.58: The term $Gy$ should clearly be $G(y - Kx_a)$, from Eq. (2.44). However it seems neater to rewrite the sentence and equation as: "For the moment simply note, from Eq. (2.44), that it relates the expected state $\hat{x}$ to the true state $x$:
$$\hat{x} - x_a = A(x - x_a) + G\epsilon \tag{4}$$"
[Thanks to Jörn Ungermann]

Page 68, after Equation 4.16: If the measurement errors are independent we can expand this as
$$P(x|y_1, y_2, \ldots, y_l) = \frac{P(y_1|x)\,P(y_2|x)\cdots P(y_l|x)\,P(x)}{P(y_1, y_2, \ldots, y_l)} = \frac{P(x)}{P(y_1, y_2, \ldots, y_l)}\prod_i P(y_i|x). \tag{5, 6}$$
To find the maximum probability solution (i.e. MAP if there is a priori, or ML if there is not) we maximise with respect to $x$, so the term $P(y_1, y_2, \ldots, y_l)$ can be omitted as it is independent of $x$. [The point is that the $y_i$ are not independent. My thanks to Randy VanValkenburg for this one.]

Page 69, Equation 4.24: The derivative applies to both terms on the left hand side. Another pair of brackets is required. [Thanks to Randy VanValkenburg]

Page 74, Equation 4.52: This equation should read:
$$G = S_a K^T(K S_a K^T + \gamma S_\epsilon)^{-1} \tag{7}$$
[Thanks to Mick Christi]
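The two forms of $d_n$ in the corrected Eq. (3) are easy to check numerically; a minimal sketch with invented $K$, $S_a$ and $S_\epsilon$ (not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4                              # invented problem size, m > n
K = rng.standard_normal((m, n))
Sa = np.diag(rng.uniform(0.5, 2.0, n))   # a priori covariance
Se = np.diag(rng.uniform(0.1, 0.5, m))   # measurement-error covariance

# m-form: d_n = tr(S_e [K S_a K^T + S_e]^-1)
dn_m = np.trace(Se @ np.linalg.inv(K @ Sa @ K.T + Se))

# n-form with the corrected "+ m - n" term appended.
Shat_inv = K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa)
dn_n = np.trace(np.linalg.inv(Shat_inv) @ np.linalg.inv(Sa)) + m - n

print(np.isclose(dn_m, dn_n))            # True only with the + m - n term
```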

Page 77, Equation 4.65: Should be
$$r(z) = 12\int [z' - c(z)]^2 A^2(z, z')\,dz' \Big/ \left(\int A(z, z')\,dz'\right)^2, \tag{8}$$
[The $[z' - c(z)]^2$ was $[z - c(z)]^2$. Thanks to Yasjka Meijer.]

Page 92, Equation 5.34: [$K^T K$, not $K K^T$.]
$$x_{i+1} = x_i + (K^T K + \gamma_i I)^{-1} K^T[y - F(x_i)] \tag{9}$$

Page 97, section on Sequential updating: [This topic is poorly described, and Equation (5.47) has errors. I have rewritten it.]

"If the measurement error covariance is diagonal, as it often is, then there is a further economy available in the case of the m-form. Within each iteration cycle, the state estimate can be updated sequentially, one measurement at a time, thus replacing the matrix inverse in Eq. (5.44) by a scalar reciprocal. The process is as follows, where the column vector $k_j$ is the weighting function for the $j$-th channel, i.e. the $j$-th row of $K$, and superscript $i$ refers to the iteration cycle.
$$
\begin{aligned}
&x_0^{i+1} := x_a\\
&S_0 := S_a\\
&\text{for } j := 1 \text{ to } m \text{ do:}\\
&\quad x_j^{i+1} := x_{j-1}^{i+1} + S_{j-1}k_j\,[y_j - F_j(x_l) + k_j^T(x_l - x_{j-1}^{i+1})]\,/\,(k_j^T S_{j-1} k_j + \sigma_j^2)\\
&\quad S_j := S_{j-1} - S_{j-1}k_j k_j^T S_{j-1}\,/\,(k_j^T S_{j-1} k_j + \sigma_j^2)\\
&x^{i+1} := x_m^{i+1}
\end{aligned}
\tag{10}
$$
For each iteration cycle, the estimate starts with the a priori, and updates it channel by channel. The choice of linearisation point $x_l$ for each of these updates determines the convergence rate. The best linearisation point would of course be the final solution, but this is not known initially. For the first iteration cycle, the best we can do for channel $j$ is the current estimate $x_{j-1}^1$. For subsequent cycles, the end point of the previous cycle, $x^i$, is likely to be better. This has two potential advantages over updating with a vector of measurements. Some measurements are likely to be more linearly related to the state vector than others; if they are assimilated first in the initial iteration cycle, then the intermediate state will be closer to the final solution when the more non-linear measurements are used, so the linearisation will be more accurate, and fewer iterations may be needed..."

[N.B. This erratum has been corrected 23 Jan. An $x_a$ in the fourth line of the display equation has been replaced by $x_{j-1}^{i+1}$. My thanks to Jörn Ungermann. Also 13 Jul 2016, when $k_i^T$ was replaced by $k_j^T$. Thanks to Nathaniel Livesey.]
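For a strictly linear forward model, $F_j(x) = k_j^T x$, the linearisation point $x_l$ cancels out of the innovation and one cycle of the sequential update above should reproduce the usual vector (m-form) solution exactly. A minimal numpy sketch of that check, with all matrices invented:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
K = rng.standard_normal((m, n))
Sa = np.diag(rng.uniform(0.5, 2.0, n))        # a priori covariance
sigma2 = rng.uniform(0.1, 0.5, m)             # diagonal measurement-error variances
xa = rng.standard_normal(n)
y = K @ (xa + rng.standard_normal(n)) + np.sqrt(sigma2) * rng.standard_normal(m)

# Sequential update, one channel at a time (Eq. 10, linear case).
x, S = xa.copy(), Sa.copy()
for j in range(m):
    k = K[j]                                   # k_j: j-th row of K
    denom = k @ S @ k + sigma2[j]
    x = x + S @ k * (y[j] - k @ x) / denom     # scalar reciprocal, no matrix inverse
    S = S - np.outer(S @ k, k @ S) / denom

# Batch m-form solution for comparison.
G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + np.diag(sigma2))
x_batch = xa + G @ (y - K @ xa)

print(np.allclose(x, x_batch))                 # True
```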

Page 108, Section 6.3: This section is a mess. As well as about seven typos, it is not clearly written. Here is another attempt for the part of this section on page 108:

"However it is more convenient to use the alternative formulation in terms of singular vectors, mainly because singular vectors are orthogonal. Put
$$C^{-1} \simeq \tilde{U}_t \Lambda_t^{-2} \tilde{V}_t^T, \tag{11}$$
where $\tilde{U}_t$ and $\tilde{V}_t$ are matrices of left and right singular vectors of $C$ truncated to $t$ columns, and the truncated matrix of singular values has been written as $\Lambda_t^2$ for reasons which will be apparent later. This gives
$$\hat{x} = W\tilde{U}_t \Lambda_t^{-2}\tilde{V}_t^T y = W\tilde{U}_t \Lambda_t^{-2}\tilde{V}_t^T K x + W\tilde{U}_t \Lambda_t^{-2}\tilde{V}_t^T\epsilon. \tag{12, 13}$$
If $\epsilon$ has covariance $\sigma_\epsilon^2 I$ the error term will have covariance $\sigma_\epsilon^2 W\tilde{U}_t\Lambda_t^{-4}\tilde{U}_t^T W^T$, each left singular vector contributing an independent error equal to a column of $W\tilde{U}_t$ multiplied by a random normal variable with standard deviation $\sigma_\epsilon/\lambda_i^2$.

Rather than retaining a general representation function let us simplify matters by only considering $W = K^T$. In this case $C = KK^T$ is symmetric and its singular vectors and eigenvectors are identical, and equal to $U$, the left singular vectors of $K$, $K = U\Lambda V^T$. Thus $\tilde{U}_t = \tilde{V}_t = U_t$. The singular values (and eigenvalues) of $C$ are then the squares of the singular values of $K$ (hence the choice of $\Lambda_t^2$ above). In this case
$$\hat{x} = K^T U_t\Lambda_t^{-2}U_t^T y = V_t\Lambda_t^{-1}U_t^T y = \sum_{i=1}^{t}\lambda_i^{-1}v_i u_i^T y \tag{14-16}$$
because $K^T U_t = V_t\Lambda_t$. Note that Eq. (15) is equivalent to using a truncation of the pseudo inverse $K^\dagger = V\Lambda^{-1}U^T$ based on $K = U\Lambda V^T$. The relation of the retrieval to the state vector is therefore
$$\hat{x} = V_t\Lambda_t^{-1}U_t^T K x + V_t\Lambda_t^{-1}U_t^T\epsilon = V_t V_t^T x + V_t\Lambda_t^{-1}U_t^T\epsilon \tag{17, 18}$$
showing that the averaging kernel matrix is $A = V_t V_t^T$. If $\epsilon$ has covariance $\sigma_\epsilon^2 I$ the error term will have covariance $\sigma_\epsilon^2 V_t\Lambda_t^{-2}V_t^T$, the error patterns being given by $e_i = (\sigma_\epsilon/\lambda_i)v_i$."

[Thanks to Lars Hoffmann and Jörn Ungermann]
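A small numpy sketch of the $W = K^T$ case described above: it forms the truncated-SVD retrieval of Eq. (15) and verifies the averaging kernel $A = V_t V_t^T$ of Eq. (18). The matrix $K$, the truncation level and the noise are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, t = 10, 6, 3                             # keep t singular vectors (invented sizes)
K = rng.standard_normal((m, n))
U, lam, VT = np.linalg.svd(K, full_matrices=False)   # K = U Lambda V^T
Ut, lam_t, VTt = U[:, :t], lam[:t], VT[:t]           # truncation to t columns

x = rng.standard_normal(n)
eps = 0.01 * rng.standard_normal(m)            # noise with an invented sigma_eps
y = K @ x + eps

# Eq. (15): x_hat = V_t Lambda_t^-1 U_t^T y  (truncated pseudo inverse)
x_hat = VTt.T @ ((Ut.T @ y) / lam_t)

# Eq. (18): x_hat = A x + error, with averaging kernel A = V_t V_t^T
A = VTt.T @ VTt
err = VTt.T @ ((Ut.T @ eps) / lam_t)
print(np.allclose(x_hat, A @ x + err))         # True
```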

Page 124, Equation 7.16: Remove the hats from $x_t$ and $S_t$ in the last two equations. [Thanks to Bojan Bojkov for pointing this out.]

Page 139, Equation 8.23: [Sign error.] This equation should read:
$$y' = U^T(y + Kx_0) = \Lambda V^T x + \epsilon_r + \epsilon_s, \tag{19}$$

Page 144, Equation 9.12: This equation should read:
$$L_n = (L_0 - B_1)\tau_0 + B_n + \sum_{i=1}^{n-1}\tau_i(B_i - B_{i+1}), \tag{20}$$
i.e. change $B_{i-1}$ to $B_{i+1}$. [Thanks to Rachel Hodos]

Page 144, Equation 9.16: This equation should read:
$$= B_i + \frac{B_{i-1} - B_i}{\chi_i}\left(1 - e^{-\chi_i}\right) - B_{i-1}e^{-\chi_i}, \tag{21}$$
i.e. omit the $B_i$ in the last term. [Thanks to Scott Paine]

Page 147, Equation 9.24: As it stands, $p$ in this equation is the partial pressure of dry air, not the total pressure as implied. However the coefficient 5748 taken from Kaye and Laby appears to be about a factor of 1000 too large in comparison with the formula adopted by the International Association of Geodesy in 1999, based on Ciddor (1996). Replace Equation 9.24 by:
$$n = 1 + \left(N_g\,\frac{p}{p_0}\frac{T_0}{T} - \frac{11.27\,e}{T}\right)\times 10^{-6} \tag{22}$$
where $p_0 = \dots$ mb, $T_0 = \dots$ K, $e$ is the partial pressure of water vapour in mb, and $N_g$ is the refractivity of dry air at STP, given by
$$N_g = \dots + \dots/\lambda^2 + \dots/\lambda^4 \tag{23}$$
at wavelength $\lambda$ in micrometers. [Thanks to Susan Sund Kukawik for pointing this out]
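To make the index change in Eq. (20) concrete, here is a minimal sketch of the corrected sum as reconstructed above; the optical depths, Planck radiances and boundary radiance are invented numbers, used only to show the indexing:

```python
import numpy as np

# Invented layer quantities: tau[i] are transmittances, B[i] Planck radiances,
# L0 the boundary radiance.  Indexing follows Eq. (20) as reconstructed above.
n = 5
tau = np.linspace(0.2, 0.95, n + 1)            # tau_0 ... tau_n
B = np.linspace(110.0, 80.0, n + 1)            # B_1 ... B_n are used below
L0 = 100.0

# L_n = (L_0 - B_1) tau_0 + B_n + sum_{i=1}^{n-1} tau_i (B_i - B_{i+1})
L_n = (L0 - B[1]) * tau[0] + B[n]
for i in range(1, n):
    L_n += tau[i] * (B[i] - B[i + 1])          # note B_{i+1}, not B_{i-1}
print(L_n)
```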

Page 163, line 1: "subject to $z = Wx$ ....".

Page 179, Equation 11.2 and following text:
$$\sum_i \hat{S}_{z,ii} = \mathrm{trace}(S_a^{-1/2}\hat{S}S_a^{-1/2}) = \mathrm{trace}(\hat{S}S_a^{-1}). \tag{24}$$
From Eqs. (2.79) and (2.80), we can see that
$$\mathrm{trace}(\hat{S}S_a^{-1}) = \mathrm{trace}(I_n - A) = n - d_s, \tag{25}$$
so that minimising the trace is the same as maximising the degrees of freedom for signal. Alternatively, the information content of the retrieval, Eq. (2.73), could be optimised by maximising the determinant of $\hat{S}S_a^{-1}$. The same result should be obtained if the information content of the measurement alone is maximised, provided an information-preserving retrieval is used, e.g. an optimal estimator. [Thanks to Anu Dudhia for this one.]
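A quick numerical check of Eq. (25), using an optimal-estimation retrieval with invented $K$, $S_a$ and $S_\epsilon$ (a sketch, not from the book):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 7, 4
K = rng.standard_normal((m, n))
Sa = np.diag(rng.uniform(0.5, 2.0, n))   # a priori covariance (invented)
Se = np.diag(rng.uniform(0.1, 0.5, m))   # measurement-error covariance (invented)

# Retrieval covariance and averaging kernel of the optimal estimator.
Shat = np.linalg.inv(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa))
A = Shat @ K.T @ np.linalg.inv(Se) @ K
ds = np.trace(A)                         # degrees of freedom for signal

# Eq. (25): trace(Shat Sa^-1) = trace(I_n - A) = n - d_s
print(np.isclose(np.trace(Shat @ np.linalg.inv(Sa)), n - ds))   # True
```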

Page 199, Section A.2: This section contains a significant error. In the line after Eq. (A.7) I have assumed without justification that $L$ is normalised. Usually it is not. This could be corrected by replacing this line by "Thus $L$ is a matrix of eigenvectors of $A^T$...", when the rest of this section would be correct. However this is unsatisfying because it would be much more sensible to give the relationships for the case where $R$ and $L$ are both normalised. Therefore replace the part after Eq. (A.6) by the following:

"...then premultiplying and postmultiplying by $\bar{L} = (R^T)^{-1}$:
$$A^T\bar{L} = \bar{L}\Lambda \tag{A.7}$$
Thus $\bar{L}$ is a matrix whose columns are eigenvectors of $A^T$, with eigenvalues which are the same as those of the corresponding column of $R$. However these eigenvectors are not necessarily normalised. Let $\nu_i$ be the length of $\bar{l}_i$, the $i$-th column of $\bar{L}$, i.e. $\nu_i = (\bar{l}_i^T\bar{l}_i)^{1/2}$, and $N$ be a diagonal matrix containing the $\nu_i$ as its diagonal elements. Then $L$, the normalised matrix of eigenvectors of $A^T$, is given by $L = \bar{L}N^{-1}$ and satisfies:
$$A^T L = L\Lambda \tag{A.8}$$
These are called the left eigenvectors, because they operate on $A$ (rather than $A^T$) on the left: $L^T A = \Lambda L^T$, while $R$ are the right eigenvectors. By postmultiplying Eq. (A.5) by $R^{-1} = \bar{L}^T = NL^T$ we can express $A$ in terms of its eigenvectors as
$$A = R\Lambda N L^T = \sum_i \lambda_i\nu_i\, r_i l_i^T \tag{A.9}$$
which is described as a spectral decomposition of $A$.

In the case of a symmetric matrix $S$, where $S = S^T$, we must have $R = L$ by symmetry. By premultiplying (A.8) by $L^T$ and postmultiplying its transpose by $L$ we see that $L^T L\Lambda = \Lambda L^T L$, so $L^T L$ must be diagonal. As $L$ is normalised we must have $L^T L = LL^T = I$ or $L^T = L^{-1}$, i.e. the eigenvectors are orthonormal, and $N = I$. In this case the eigenvalues are all real. If the matrix is positive definite the eigenvalues are all greater than zero, and similarly for a negative definite matrix.

The following is a summary of useful relations involving eigenvectors:

Asymmetric matrices:
$$AR = R\Lambda \qquad L^T A = \Lambda L^T \qquad NL^T = R^{-1} \qquad NR^T = L^{-1} \qquad L^T R = R^T L = N^{-1}$$
$$A = R\Lambda N L^T = \sum_i \lambda_i\nu_i\, r_i l_i^T \qquad A^{-1} = R\Lambda^{-1}NL^T \qquad A^n = R\Lambda^n N L^T$$
$$L^T A R = N^{-1}\Lambda \qquad L^T A^n R = N^{-1}\Lambda^n \qquad L^T A^{-1} R = N^{-1}\Lambda^{-1} \qquad |A| = \prod_i \lambda_i$$

Symmetric matrices:
$$SL = L\Lambda \qquad L^T S = \Lambda L^T \qquad L^T = L^{-1} \qquad LL^T = L^T L = I$$
$$S = L\Lambda L^T = \sum_i \lambda_i\, l_i l_i^T \qquad S^{-1} = L\Lambda^{-1}L^T \qquad S^n = L\Lambda^n L^T$$
$$L^T S L = \Lambda \qquad L^T S^n L = \Lambda^n \qquad L^T S^{-1} L = \Lambda^{-1} \qquad |S| = \prod_i \lambda_i$$

Square roots of matrices: The relation $A^n = R\Lambda^n N L^T$ (where $N = I$ for a symmetric matrix) can be used for arbitrary powers of a matrix, in particular the square root such that $A = A^{1/2}A^{1/2}$. This square root of a matrix is not unique, because the diagonal elements of $\Lambda^{1/2}$ in $R\Lambda^{1/2}NL^T$ can have either sign, leading to $2^n$ possibilities. We only use square roots of covariance matrices in this book. In this case we can see that $S^{1/2} = L\Lambda^{1/2}L^T$ is symmetric. As well as these roots, symmetric matrices can also have non-symmetric roots satisfying $S = (S^{1/2})^T S^{1/2}$, of which the Cholesky decomposition, $S = T^T T$ where $T$ is upper triangular, is the most useful; see section ... and Exercise 5.3. There are an infinite number of non-symmetric square roots: if $S^{1/2}$ is a square root, then clearly so is $XS^{1/2}$ where $X$ is any orthonormal matrix. The inverse symmetric square root is $S^{-1/2} = L\Lambda^{-1/2}L^T$, and the inverse Cholesky decomposition is $S^{-1} = T^{-1}T^{-T}$. The inverse square root $T^{-1}$ is triangular, and its numerical effect is implemented efficiently by back substitution."

[My thanks to Randy VanValkenburg and Javier Martin-Torres for bringing this to my attention.]
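The corrected relations are easy to verify numerically. A minimal numpy sketch (the matrix is invented) of the normalised left eigenvectors, the spectral decomposition (A.9) and the symmetric square root:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))                # a generic (asymmetric) invented matrix

# Right eigenvectors R; for a general real matrix the eigenvalues may be
# complex, but the identities below still hold with a plain transpose.
lam, R = np.linalg.eig(A)

Lbar = np.linalg.inv(R.T)                      # unnormalised left eigenvectors
nu = np.linalg.norm(Lbar, axis=0)              # column lengths nu_i
L = Lbar / nu                                  # normalised left eigenvectors
N = np.diag(nu)

print(np.allclose(A.T @ L, L @ np.diag(lam)))        # A^T L = L Lambda     (A.8)
print(np.allclose(A, R @ np.diag(lam) @ N @ L.T))    # A = R Lambda N L^T   (A.9)

# Symmetric case: S^(1/2) = L Lambda^(1/2) L^T is a symmetric square root.
S = A @ A.T + n * np.eye(n)                    # an invented positive definite matrix
w, Ls = np.linalg.eigh(S)
S_half = (Ls * np.sqrt(w)) @ Ls.T
print(np.allclose(S_half @ S_half, S))         # True
```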

Page 206, Equation B.8 and line before: The other expression follows from using this with Eq. (2.45), giving
$$
\begin{aligned}
d_n &= \mathrm{tr}(I_m - KS_aK^T[KS_aK^T + S_\epsilon]^{-1}) = m - \mathrm{tr}(KG)\\
&= m - \mathrm{tr}(K[K^TS_\epsilon^{-1}K + S_a^{-1}]^{-1}K^TS_\epsilon^{-1})\\
&= m - n + \mathrm{tr}(I_n - [K^TS_\epsilon^{-1}K + S_a^{-1}]^{-1}K^TS_\epsilon^{-1}K)\\
&= \mathrm{tr}([K^TS_\epsilon^{-1}K + S_a^{-1}]^{-1}S_a^{-1}) + m - n
\end{aligned}
\tag{48}
$$

Minor clarifications and typographic errors

Page 1, line 15: insert "is" after "it".
Page 4, line 12: insert "of" after "kind".
Page 9, line 2: replace $b$ by $w$.
Page 10, line -9: replace $G_j(\zeta)$ by $G_i(\zeta)$.
Page 13, line -10: separate "vector" and "$x$".
Page 13, line -8: insert "than" after "rather".
Page 17, first line of second paragraph of 2.2.1: italicise "coordinate system". (RVV)
Page 17, line 13: replace $k_j$ by $k_i$ for consistency with page 16.
Page 17, middle of first paragraph of 2.2.1: "Even when m > n, ..., the rank cannot be greater than n, ...". (RVV)
Page 17, line 19: change to "called the row space of K or range of $K^T$".
Page 18, lines 13, 21 and 23: replace $k_j$ by $k_i$ for consistency with page 16.
Page 20, line after Eqn. 2.8: replace "and that" by "where".
Page 21, line 1: delete "is used".
Page 21, Eqn. 2.15: replace $S^{-1}$ by $S_y^{-1}$. (DW)
Page 22, line -16: insert "of" after "pdf".
Page 23, lines 5 and 7: replace $P(y,x)$ by $P(x,y)$.
Page 23, line -3: replace "This is what we want" by "This equation is what we need in order". (RVV)
Page 25, line 3: replace "constant" by "independent of x". (RVV)

Page 25, line after Eqn. 2.28: delete "is used". (DW)
Page 28, line 14: replace $T^T$ by $T_\epsilon^T$.
Page 28, line -2: replace "Element" by "Elements".
Page 29, line 13: replace $\ln p$ by $\ln(p/p_0)$. (RVV)
Page 29, line 19: replace ".." by ".".
Page 30, line -12: insert "be" after "therefore".
Page 33, section 2.5.2, line 2: replace "it" by "its". (DW)
Page 33, section 2.5.2, para. 2, line 1: delete "one about". (DW)
Page 33, section 2.5.2, para. 3, line 1: "can be defined". (DW)
Page 35, Eqn. 2.71, line 1: $e$ should be $\mathbf{e}$.
Page 35, line -16: replace the first "the" by "a".
Page 37, last para., line 2: delete "therefore". (DW)
Page 39, Figure 2.5 caption: "Solid: eigenvalues of a non-diagonal model covariance in decreasing order of size; dotted: the same for a diagonal covariance matrix with the same total variance."
Page 40, line 8: replace "on" by "one".
Page 44, line -2: replace "an" by "a". (DW)
Page 45, Equations (3.3), (3.5), and (3.7): replace $R(\ldots)$ by $R[\ldots]$.
Page 46, section 3.1.4, last line: "... total error in the measured signal relative ...". (DW)
Page 46, section 3.1.5, line 4: "... need to know ...". (DW)
Page 47, line 13: replace the first $a_i$ by $a_i^T$.
Page 48, line 4: insert "the" before "bias".
Page 48, lines 8 and 9: replace $b$ by $\hat{b}$.
Page 49, middle paragraph, line 4: delete "that". (DW)
Page 50, section 3.2.3, line 3: delete the second "not". (DW)
Page 50, line -5: insert "that" after "terms".
Page 51, section 3.2.6, line 4: "hyper-ellipsoid". (DW)
Page 54, line 6: replace "it" by "its".

Page 54, line -11: delete the second "should". (DW)
Page 55, section 3.3, line -2: delete "give".
Page 55, line -10: replace "matrices" by "matrix". (DW)
Page 56, line 7 of text: replace "that" by "they". (DW)
Page 60, Figure 3.8 caption: replace "eight" by "ten".
Page 61, line 1: delete the first "to". (DW)
Page 61, line 3: replace $S_\epsilon + K_b^T S_b K_b$ by $S_\epsilon + K_b S_b K_b^T$.
Page 68, line before Equation 4.19: replace "maximising" by "minimising". (RVV)
Page 62, Figure 3.11 caption: at end insert "Original sinusoid: dashed. Response: solid.".
Page 66, line 4: replace $x_0$ by $\mathbf{x}_0$.
Page 73, lines -14, -7: replace "measurement error" by "retrieval noise".
Page 72, lines -6, -5: replace "maximum likelihood" by "MAP".
Page 73, section 4.3, line -4: delete "the". (DW)
Page 76, line -3: replace (3.23) by (4.59).
Page 77, line 3: insert "the" before "standard".
Page 77, line 8: delete the first "of".
Page 78, line -10: replace "large" by "small"! (DW)
Page 82, line 1: insert "treatment of" after "exhaustive".
Page 83, line 12: "... example has ...".
Page 84, line after Eqn. 5.3: delete the second "is". (DW)
Page 84, Equation (5.4): delete the 2 on the LHS of Equation (5.4).
Page 85, line 3 before Eqn. 5.8: "literature". (DW)
Page 85, line 3 after Eqn. 5.7: replace "Jacobean" by "Jacobian". (RJW)
Page 86, lines 2, -5: replace "moderately linear" by "moderately non-linear".
Page 90, line 3: delete "so" after "this".
Page 90, line -11: replace "then" by "than".

Page 91, line 11: replace "11" by "12".
Page 91, line 14: replace "moderately linear" by "moderately non-linear".
Page 91, line -10: "may be needed ...". (DW)
Page 93, line -6: "evaluation of its Jacobian ...". (DW)
Page 94, line -18: delete "be" after "been".
Page 95, line -11: "by the pivot (B ...". (DW)
Page 95, line before Eqn. 5.40: replace "involved" by "involves".
Page 95, line -3: insert "it" after "if".
Page 99, 5/6 lines after Equation 5.49: replace "corresponding" by "correspond".
Page 103, Equation (6.8): insert minus on RHS of equation.
Page 103, Equation (6.9): insert minus on LHS of equation.
Page 105, Equation (6.12): change "i = ... m," to "i = 1, ... m,".
Page 107, line 12: replace "Not" by "Note".
Page 113, line -9: insert "a" after "Thus".
Page 116, Figure 6.3a: Abscissa should be labelled "eigenvector".
Page 119, line 8: replace "give" by "gives".
Page 120, line 11: replace "give" by "gives".
Page 124, Equation (7.16): Remove the hats from $x''_{t+1}$ and $S''_{t+1}$ in the first two equations. They are superfluous as the double prime already indicates a backwards estimate. Also, remove the commas in the subscripts on the left hand side of these equations.
Page 145, Equation (9.19): insert minus before $\kappa_k$.
Page 146, line 6: replace "is" by "a".
Page 146, line 7: delete the first "the".
Page 146, Equation (9.23): Encase the expression on the RHS after ... in square brackets.
Page ...: replace all occurrences of (a), (b) and (c) by (i), (ii) and (iii) respectively.
Page 159, line -4: replace "and" by "an". (DW)

Page 160, line 10: "... has been chosen for ...". (DW)
Page 160, line -5: "... is to be ...". (DW)
Page 161, line 8: "... or a coarse representation ...".
Page 162, section ..., line 3: replace "reduces" by "reduced".
Page 164, Subtitle ...: replace "a priori" by "a posteriori".
Page 167, line 9: replace "measure" by "measured".
Page 168, line 10: insert "this" after "Unfortunately".
Page 168, line -11: insert a comma after "strictly".
Page 171, Equation (10.38): replace $+\,\epsilon_r + \epsilon_s$ by $+\,G\epsilon_r + G\epsilon_s$.
Page 172, line -7: replace "moderately linear" by "moderately non-linear".
Page 173, Equation (10.50): insert $W^T$ between terms in brackets (i.e. change $\ldots W]^{-1}[\hat{S}^{-1}\ldots$ to $\ldots W]^{-1}W^T[\hat{S}^{-1}\ldots$).
Page 177, line 4: "... stage be regarded ...". (DW)
Page 186, line -2: delete the second "priori".
Page 187, line 13: delete "is being used". (DW)
Page 193, Equation (12.19): change $S_{\epsilon 2}^T$ to $S_{\epsilon 2}$.
Page 194, line 11: replace "our" by "or".
Page 194, line 11: replace "compared" by "compare".
Page 195, line 12: insert "in" after "Substituting".
Page 207, line 4: replace ".." by ".".
Page 229, Bretherton reference: J. Climate. (DW)

(DW) My thanks to Daniela Wurl for finding these.
My thanks to Mick Christi for his amazing diligence in finding all these!
(RVV) My thanks to Randy VanValkenburg for finding these.
(RJW) My thanks to Robert Watson for finding this.

The following are corrected in the published version, but may exist in some late pre-publication drafts:

Page 53, Figure caption 3.2: "The most significant ten error patterns of the a priori covariance matrix $S_{ij} = \sigma_a^2\exp(-|i-j|\,\delta z/h)$, for $\sigma_a = 3$ K and $h = 1$ ..."

Page 162, Equation 10.8:
$$W^* = (W^T S_e^{-1} W)^{-1} W^T S_e^{-1}. \tag{49}$$

P195, Eqn ...:
$$(K_1 S_c K_1^T + S_{\epsilon 1})^{-1/2}\, K_1 S_c K_2^T\, (K_2 S_c K_2^T + S_{\epsilon 2})^{-1/2} \tag{50}$$

P195, line 6 from bottom: "... more or less ...".
