
1 Numerical Methods
Elena Loli Piccolomini
Civil Engineering
elena.loli@unibo.it

2 Least Squares Data Fitting

Measurement errors are inevitable in observational and experimental sciences. Errors can be smoothed out by averaging over many cases, i.e., taking more measurements than are strictly necessary to determine parameters of system. Resulting system is overdetermined, so usually there is no exact solution.

For linear problems, we obtain overdetermined linear system Ax = b, with m × n matrix A, m > n. System is better written Ax ≅ b, since equality is usually not exactly satisfiable when m > n.

Least squares solution x minimizes squared Euclidean norm of residual vector r = b − Ax:
$$\min_x \|r\|_2^2 = \min_x \|b - Ax\|_2^2$$

3 Data Fitting

Given m data points (t_i, y_i), i = 1,...,m, find the n-vector x of parameters that best fits the model function f(t_i, x) in the least squares sense, i.e.
$$\min_x \sum_{i=1}^m \left( y_i - f(t_i, x) \right)^2$$
The problem is linear if the function f is linear in the components of x:
$$f(t, x) = x_1\varphi_1(t) + x_2\varphi_2(t) + \cdots + x_n\varphi_n(t)$$
where the functions φ_j are known and depend only on t. In this case the problem can be written in matrix form as Ax ≅ b, where A = [a_{i,j}], b = [b_i], with
$$a_{i,j} = \varphi_j(t_i), \quad b_i = y_i, \qquad i = 1,\dots,m, \; j = 1,\dots,n$$

4 Polynomial Fitting

Define the basis or shape functions φ_j as
$$\varphi_j(t) = t^{j-1}, \qquad j = 1,\dots,n$$
The model function f is
$$f(t, x) = x_1 + x_2 t + x_3 t^2 + \cdots + x_n t^{n-1}$$
Example: for n = 3, m = 5 the model function f is a quadratic polynomial,
$$f(t, x) = x_1 + x_2 t + x_3 t^2$$
and
$$A = \begin{bmatrix} 1 & t_1 & t_1^2 \\ 1 & t_2 & t_2^2 \\ 1 & t_3 & t_3^2 \\ 1 & t_4 & t_4^2 \\ 1 & t_5 & t_5^2 \end{bmatrix}, \quad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \quad b = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{bmatrix}$$
Matrix whose columns (or rows) are successive powers of independent variable is called Vandermonde matrix.
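
This construction translates directly into a few lines of Matlab. The following sketch (an editorial illustration, not from the slides; the data vectors are the ones used in the example on the next slide) builds the Vandermonde matrix column by column and solves the least squares problem with the backslash operator:

t = [-1; -0.5; 0; 0.5; 1];      % abscissas of the data points
y = [1; 0.5; 0; 0.5; 2];        % ordinates of the data points
n = 3;                          % quadratic fit: x(1) + x(2)*t + x(3)*t^2
m = numel(t);
A = zeros(m, n);
for j = 1:n
    A(:, j) = t.^(j-1);         % basis functions phi_j(t) = t^(j-1)
end
x = A \ y;                      % least squares solution of A*x ~ y

For a rectangular A, the backslash operator returns the least squares solution.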

5 Example

For data

t: −1.0  −0.5  0.0  0.5  1.0
y:  1.0   0.5  0.0  0.5  2.0

we obtain the overdetermined linear system Ax ≅ b where
$$A = \begin{bmatrix} 1 & -1 & 1 \\ 1 & -0.5 & 0.25 \\ 1 & 0 & 0 \\ 1 & 0.5 & 0.25 \\ 1 & 1 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 1 \\ 0.5 \\ 0 \\ 0.5 \\ 2 \end{bmatrix}$$
The computed solution is x = [0.086, 0.40, 1.40]^t, so approximating polynomial is p(t) = 0.086 + 0.40t + 1.40t².

6 

[Figure: data points y and computed least squares fit.]

The residual norm is given by ‖r‖₂ = ‖b − Ax‖₂ ≈ 0.34.

7 Existence and Uniqueness

Linear least squares problem Ax ≅ b always has solution. Solution is unique if, and only if, columns of A are linearly independent, i.e., rank(A) = n, where A is m × n and m ≥ n. If rank(A) < n, then A is rank-deficient, and solution of linear least squares problem is not unique. For now, we assume that A has full column rank n.

To minimize squared Euclidean norm of residual vector, write
$$\|r\|_2^2 = r^t r = (b - Ax)^t (b - Ax) = b^t b - 2x^t A^t b + x^t A^t A x \equiv \Phi(x)$$
Differentiating with respect to x and setting the gradient to zero,
$$\nabla_x \Phi(x) = 2A^t A x - 2A^t b = 0$$
we obtain the n × n linear system of normal equations
$$A^t A x = A^t b \qquad (1)$$

8 Orthogonality

Vectors v₁ and v₂ are orthogonal if their inner product is zero, i.e. v₁ᵗv₂ = 0.

Space spanned by columns of m × n matrix A, R(A) = {Ax : x ∈ ℝⁿ}, is of dimension at most n. If m > n, b generally does not lie in R(A), so there is no exact solution to Ax = b.

Vector y = Ax in R(A) closest to b in 2-norm occurs when residual r = b − Ax is orthogonal to R(A), i.e.
$$A^t r = A^t (b - Ax) = 0$$
giving again the normal equations
$$A^t A x = A^t b$$

9 

[Figure: geometric relationship among b, y = Ax, and residual r = b − Ax orthogonal to R(A).]

10 Orthogonal Projectors

Matrix P is orthogonal projector if it is idempotent (P² = P) and symmetric (Pᵗ = P). Orthogonal projector onto orthogonal complement R(P)⊥ is given by P⊥ = I − P. For any vector v:
$$v = (P + (I - P))\,v = Pv + P_\perp v$$
For least squares problem Ax ≅ b, if rank(A) = n,
$$\|Ax - b\|_2^2 = \|P(b - Ax)\|_2^2 + \|P_\perp(b - Ax)\|_2^2$$
Since PA = A and P⊥A = O,
$$\|Ax - b\|_2^2 = \|Pb - Ax\|_2^2 + \|P_\perp b\|_2^2$$
The second term does not depend on x, thus the residual norm is minimized if x makes the first term zero.

11 (continued)

The least squares solution x is given by the overdetermined but consistent linear system
$$Ax = Pb$$
If rank(A) = n, one way to obtain explicitly an orthogonal projector onto R(A) is
$$P = A (A^t A)^{-1} A^t$$
And b is the sum of two mutually orthogonal vectors y ∈ R(A) and r ∈ R(A)⊥:
$$b = Pb + P_\perp b = Ax + (b - Ax) = y + r$$
An alternative way to obtain an orthogonal projector onto R(A) is that of computing a matrix Q whose columns form an orthonormal basis of R(A):
$$P = QQ^t, \qquad Ax = QQ^t b \;\Rightarrow\; Q^t A x = Q^t b$$
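
These identities are easy to check numerically; a minimal Matlab sketch (an editorial addition, assuming a random A has full column rank, which holds with probability 1):

m = 5; n = 3;
A = rand(m, n);                 % assumed full column rank
b = rand(m, 1);
P = A / (A'*A) * A';            % orthogonal projector onto R(A)
x = A \ b;                      % least squares solution
norm(P*P - P)                   % ~ 0: P is idempotent
norm(P - P')                    % ~ 0: P is symmetric
norm(P*b - A*x)                 % ~ 0: consistent system A*x = P*b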

12 Pseudoinverse and Condition Number

Nonsquare m × n matrix A has no inverse in usual sense. If rank(A) = n, pseudoinverse is defined by
$$A^+ = (A^t A)^{-1} A^t$$
and the condition number as
$$K(A) = \|A\| \cdot \|A^+\|$$
Just as condition number of square matrix measures closeness to singularity, condition number of rectangular matrix measures closeness to rank deficiency. Least squares solution of Ax ≅ b is given by x = A⁺b.
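
As a sketch (again assuming full column rank), the pseudoinverse formula and Matlab's built-in SVD-based pinv give the same least squares solution:

A = rand(5, 3); b = rand(5, 1);   % placeholder overdetermined system
x1 = (A'*A) \ (A'*b);             % x = A+ * b with A+ = (A'A)^(-1) A'
x2 = pinv(A) * b;                 % built-in pseudoinverse
norm(x1 - x2)                     % ~ 0
cond(A)                           % condition number K(A) in the 2-norm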

13 Sensitivity and Conditioning

Sensitivity of least squares solution to Ax ≅ b depends on b as well as A:
large residual: more sensitive; small residual: less sensitive.

Define angle θ between b and y = Ax by
$$\cos(\theta) = \frac{\|y\|_2}{\|b\|_2} = \frac{\|Ax\|_2}{\|b\|_2}$$
which measures the closeness of b to R(A). We expect greater sensitivity when this ratio is small (θ ≈ π/2). Bound on perturbation Δx in solution x due to perturbation Δb in b is given by
$$\frac{\|\Delta x\|_2}{\|x\|_2} \le K(A)\,\frac{1}{\cos(\theta)}\,\frac{\|\Delta b\|_2}{\|b\|_2}$$

14 Proof

$$A^t A (x + \Delta x) = A^t (b + \Delta b) \;\Rightarrow\; A^t A\,\Delta x = A^t\,\Delta b \;\Rightarrow\; \Delta x = (A^t A)^{-1} A^t\,\Delta b = A^+ \Delta b$$
Hence
$$\|\Delta x\|_2 \le \|A^+\|_2 \|\Delta b\|_2 = \|A^+\|_2 \|A\|_2\, \frac{\|\Delta b\|_2}{\|A\|_2}$$
Dividing by ‖x‖₂ and using ‖A‖₂‖x‖₂ ≥ ‖Ax‖₂,
$$\frac{\|\Delta x\|_2}{\|x\|_2} \le \|A^+\|_2 \|A\|_2\, \frac{\|\Delta b\|_2}{\|A\|_2 \|x\|_2} \le K(A)\, \frac{\|\Delta b\|_2}{\|Ax\|_2}$$
Since ‖Ax‖₂ / ‖b‖₂ = cos(θ), then
$$\frac{\|\Delta x\|_2}{\|x\|_2} \le K(A)\,\frac{1}{\cos(\theta)}\,\frac{\|\Delta b\|_2}{\|b\|_2}$$

15 (continued)

Similarly, for perturbation E in matrix A,
$$\frac{\|\Delta x\|_2}{\|x\|_2} \lesssim \left( K(A)^2 \tan(\theta) + K(A) \right) \frac{\|E\|_2}{\|A\|_2}$$
Condition number of least squares solution is about K(A) if residual is small, but can be squared or arbitrarily worse for large residual.

16 Normal Equations Method

If m × n matrix A has rank n, then symmetric n × n matrix A^t A is positive definite, so its Cholesky factorization
$$A^t A = L L^t$$
can be used to obtain solution x to system of normal equations
$$A^t A x = A^t b$$
which has same solution as linear least squares problem Ax ≅ b.
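
A minimal Matlab sketch of this method (an illustration assuming full column rank, so that chol succeeds):

function x = lsq_normal(A, b)
% Solve min ||b - A*x||_2 via the normal equations and Cholesky.
C = A' * A;                 % symmetric positive definite if rank(A) = n
d = A' * b;
L = chol(C, 'lower');       % Cholesky factorization C = L*L'
z = L \ d;                  % forward substitution: L*z = A'*b
x = L' \ z;                 % back substitution:    L'*x = z
end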

17 Example

For polynomial data-fitting example given previously, normal equations method gives
$$A^t A = \begin{bmatrix} 5 & 0 & 2.5 \\ 0 & 2.5 & 0 \\ 2.5 & 0 & 2.125 \end{bmatrix}, \quad A^t b = \begin{bmatrix} 4 \\ 1 \\ 3.25 \end{bmatrix}$$
Cholesky factorization of symmetric positive definite matrix A^t A gives
$$A^t A = L L^t, \quad L = \begin{bmatrix} 2.236 & 0 & 0 \\ 0 & 1.581 & 0 \\ 1.118 & 0 & 0.935 \end{bmatrix}$$
Solving lower triangular system Lz = A^t b by forward substitution gives z = (1.7889, 0.6325, 1.3363)^t. Solving upper triangular system L^t x = z by back substitution gives x = (0.0857, 0.4, 1.4286)^t.

18 Shortcomings of Normal Equations

Information can be lost in forming A^t A and A^t b. For example, for
$$A = \begin{bmatrix} 1 & 1 \\ \epsilon & 0 \\ 0 & \epsilon \end{bmatrix}, \quad A^t A = \begin{bmatrix} 1 + \epsilon^2 & 1 \\ 1 & 1 + \epsilon^2 \end{bmatrix}$$
if 0 < ε < √ε_mach, then in floating-point arithmetic
$$A^t A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$
which is singular.

Sensitivity of solution is also worsened, since
$$K(A^t A) = K(A)^2$$
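
This loss of information is easy to reproduce in Matlab; in the sketch below, eps is machine epsilon and e is chosen just below its square root:

e = sqrt(eps)/2;            % so that 1 + e^2 rounds to 1 in double precision
A = [1 1; e 0; 0 e];
C = A' * A;                 % computed normal equations matrix
C - [1 1; 1 1]              % the zero matrix: the e^2 terms are lost
rank(C)                     % 1: the computed A'*A is exactly singular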

19 Augmented System Method

Definition of residual, r + Ax = b, together with orthogonality requirement A^t r = 0, gives (m+n) × (m+n) augmented system
$$\begin{bmatrix} I & A \\ A^t & O \end{bmatrix} \begin{bmatrix} r \\ x \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix}$$
Augmented system is not positive definite, is larger than original system, and requires storing two copies of A. Augmented system is sometimes useful, but is far from ideal in work and storage required.
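
For illustration, a dense Matlab sketch of the augmented system (placeholder A and b; in practice this formulation is of interest mainly for structured or sparse problems):

A = rand(5, 3); b = rand(5, 1);
[m, n] = size(A);
K = [eye(m), A; A', zeros(n)];    % (m+n) x (m+n) augmented matrix
s = K \ [b; zeros(n, 1)];         % solve for the stacked vector [r; x]
r = s(1:m);                       % residual r = b - A*x
x = s(m+1:end);                   % least squares solution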

20 Orthogonal Transformations

We seek alternative method that avoids numerical difficulties of normal equations. We need numerically robust transformation that produces easier problem without changing solution. What kind of transformation leaves least squares solution unchanged?

Square matrix Q is orthogonal if Q^t Q = I. Multiplication of vector by orthogonal matrix preserves Euclidean norm:
$$\|Qv\|_2^2 = (Qv)^t (Qv) = v^t Q^t Q v = v^t v = \|v\|_2^2$$
Thus, multiplying both sides of least squares problem by orthogonal matrix does not change its solution.

21 Triangular Least Squares Problems

As with square linear systems, suitable target in simplifying least squares problems is triangular form. Upper triangular overdetermined (m > n) least squares problem has form
$$\begin{bmatrix} R \\ O \end{bmatrix} x \cong \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$
where R is n × n upper triangular and b is partitioned similarly. Residual is
$$\|r\|_2^2 = \|b_1 - Rx\|_2^2 + \|b_2\|_2^2$$
We have no control over second term, ‖b₂‖₂², but first term becomes zero if x satisfies n × n triangular system
$$Rx = b_1$$
which can be solved by back substitution. Resulting x is least squares solution, and minimum sum of squares is ‖r‖₂² = ‖b₂‖₂².

So our strategy is to transform general least squares problem to triangular form using orthogonal transformation, so that least squares solution is preserved.

22 QR Factorization

Given m × n matrix A, with m > n, we seek m × m orthogonal matrix Q such that
$$A = Q \begin{bmatrix} R \\ O \end{bmatrix}$$
where R is n × n and upper triangular. Linear least squares problem Ax ≅ b is then transformed into triangular least squares problem
$$Q^t A x = \begin{bmatrix} R \\ O \end{bmatrix} x \cong \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = Q^t b$$
which has same solution, since
$$\|r\|_2^2 = \left\| b - Q \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2 = \left\| Q^t b - \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2$$

23 Orthogonal Bases

If we partition m × m orthogonal matrix Q = [Q₁ Q₂], where Q₁ is m × n, then
$$A = Q \begin{bmatrix} R \\ O \end{bmatrix} = [Q_1 \; Q_2] \begin{bmatrix} R \\ O \end{bmatrix} = Q_1 R$$
is called reduced QR factorization of A. Columns of Q₁ are orthonormal basis for R(A), and columns of Q₂ are orthonormal basis for R(A)⊥; Q₁Q₁ᵗ is orthogonal projector onto R(A). Solution to least squares problem Ax ≅ b is given by solution to square system
$$Q_1^t A x = R x = c_1 = Q_1^t b$$
To compute QR factorization of m × n matrix A, with m > n, we annihilate subdiagonal entries of successive columns of A, eventually reaching upper triangular form. Similar to LU factorization by Gaussian elimination, but use orthogonal transformations instead of elementary elimination matrices.
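
In Matlab the reduced factorization is available as the "economy-size" QR; a short sketch (placeholder data, full column rank assumed):

A = rand(5, 3); b = rand(5, 1);
[Q1, R] = qr(A, 0);               % reduced QR: Q1 is m x n, R is n x n
x = R \ (Q1' * b);                % back substitution on R*x = Q1'*b
norm(x - A\b)                     % ~ 0: agrees with backslash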

24 Householder Transformations

Householder transformation has form
$$H = I - 2\,\frac{v v^t}{v^t v}$$
for nonzero vector v. H is orthogonal and symmetric: H = H^t = H⁻¹.

Given vector a, we want to choose v so that
$$Ha = \begin{bmatrix} \alpha \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \alpha e_1$$
Substituting into formula for H, we can take
$$v = a - \alpha e_1$$
and α = ±‖a‖₂, with sign chosen to avoid cancellation.

25 (continued)

Indeed,
$$\alpha e_1 = Ha = \left( I - 2\,\frac{v v^t}{v^t v} \right) a = a - 2\,\frac{v^t a}{v^t v}\, v
\quad\Rightarrow\quad v = (a - \alpha e_1)\,\frac{v^t v}{2 v^t a}$$
The scalar factor is irrelevant in determining v, so we can take v = a − αe₁.

Example. If a = [2, 1, 2]^t, then we take
$$v = a - \alpha e_1 = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} - \alpha \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
where α = ±‖a‖₂ = ±3. Since a₁ is positive, we choose negative sign for α to avoid cancellation, so
$$v = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 3 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \\ 2 \end{bmatrix}$$

26 (continued)

To confirm that transformation works:
$$Ha = a - 2\,\frac{v^t a}{v^t v}\, v = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} - 2\,\frac{15}{30} \begin{bmatrix} 5 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -3 \\ 0 \\ 0 \end{bmatrix}$$
To compute QR factorization of a matrix A, use Householder transformations to annihilate subdiagonal entries of each successive column. Each Householder transformation is applied to entire matrix, but does not affect prior columns, so zeros are preserved.

In applying Householder transformation H to arbitrary vector u,
$$Hu = \left( I - 2\,\frac{v v^t}{v^t v} \right) u = u - \left( 2\,\frac{v^t u}{v^t v} \right) v$$
which is much cheaper than general matrix-vector multiplication and requires only vector v, not full matrix H.
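
The construction fits in a few lines of Matlab; a sketch (the name house mirrors the function cited later in these slides, but this version is only an illustration):

function [v, alpha] = house(a)
% Householder vector v such that H*a = alpha*e1, with H = I - 2*(v*v')/(v'*v).
s = sign(a(1));
if s == 0, s = 1; end       % handle a(1) = 0
alpha = -s * norm(a);       % sign chosen to avoid cancellation
v = a;
v(1) = v(1) - alpha;        % v = a - alpha*e1
end

For the example above, [v, alpha] = house([2; 1; 2]) returns v = [5; 1; 2] and alpha = -3, and a - 2*(v'*a)/(v'*v)*v reproduces [-3; 0; 0].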

27 Householder QR Factorization

Applying Householder transformations H_j to the columns a_j (j = 1,...,n) of matrix A produces factorization
$$H_n H_{n-1} \cdots H_1 A = \begin{bmatrix} R \\ O \end{bmatrix}$$
where R is n × n and upper triangular. Setting Q = H₁ ⋯ Hₙ we have
$$A = Q \begin{bmatrix} R \\ O \end{bmatrix}$$
To preserve solution of linear least squares problem, right-hand side b is transformed by same sequence of Householder transformations:
$$Ax - b = Q \left( \begin{bmatrix} R \\ O \end{bmatrix} x - Q^t b \right)$$

28 (continued)

Then solve triangular least squares problem
$$\begin{bmatrix} R \\ O \end{bmatrix} x \cong Q^t b$$
For solving linear least squares problem, product Q of Householder transformations need not be formed explicitly: R can be stored in upper triangle of array initially containing A, and Householder vectors v can be stored in (now zero) lower triangular portion of A (almost). Householder transformations are most easily applied in this form anyway.
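
Collecting the pieces, a compact (unoptimized) Matlab sketch of Householder QR that transforms A and b together, as described above:

function [R, c] = house_qr(A, b)
% Householder QR: reduces A to upper triangular form and applies the
% same transformations to b, so that c = Q'*b.
[m, n] = size(A);
for j = 1:n
    a = A(j:m, j);
    s = sign(a(1));
    if s == 0, s = 1; end
    v = a;
    v(1) = v(1) + s * norm(a);     % v = a - alpha*e1 with alpha = -s*||a||
    beta = v' * v;
    if beta > 0                    % skip if column is already zero
        A(j:m, j:n) = A(j:m, j:n) - (2/beta) * v * (v' * A(j:m, j:n));
        b(j:m)      = b(j:m)      - (2/beta) * v * (v' * b(j:m));
    end
end
R = triu(A(1:n, 1:n));
c = b;
end

The least squares solution is then x = R \ c(1:n), and the residual norm is norm(c(n+1:end)).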

29 Example

For polynomial data-fitting example given previously, with
$$A = \begin{bmatrix} 1 & -1 & 1 \\ 1 & -0.5 & 0.25 \\ 1 & 0 & 0 \\ 1 & 0.5 & 0.25 \\ 1 & 1 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 1 \\ 0.5 \\ 0 \\ 0.5 \\ 2 \end{bmatrix}$$
Householder vector v₁ for annihilating subdiagonal entries of first column of A is
$$v_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 2.236 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 3.236 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$$

30 (continued)

Applying resulting Householder transformation H₁ yields transformed matrix and right-hand side
$$H_1 A = \begin{bmatrix} -2.236 & 0 & -1.118 \\ 0 & -0.191 & -0.405 \\ 0 & 0.309 & -0.655 \\ 0 & 0.809 & -0.405 \\ 0 & 1.309 & 0.345 \end{bmatrix}, \quad H_1 b = \begin{bmatrix} -1.789 \\ -0.362 \\ -0.862 \\ -0.362 \\ 1.138 \end{bmatrix}$$
Householder vector v₂ for annihilating subdiagonal entries of second column of H₁A is
$$v_2 = \begin{bmatrix} 0 \\ -0.191 \\ 0.309 \\ 0.809 \\ 1.309 \end{bmatrix} - \begin{bmatrix} 0 \\ 1.581 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -1.772 \\ 0.309 \\ 0.809 \\ 1.309 \end{bmatrix}$$

31 (continued)

Applying resulting Householder transformation H₂ yields
$$H_2 H_1 A = \begin{bmatrix} -2.236 & 0 & -1.118 \\ 0 & 1.581 & 0 \\ 0 & 0 & -0.725 \\ 0 & 0 & -0.589 \\ 0 & 0 & 0.047 \end{bmatrix}, \quad H_2 H_1 b = \begin{bmatrix} -1.789 \\ 0.632 \\ -1.035 \\ -0.816 \\ 0.404 \end{bmatrix}$$
Householder vector v₃ for annihilating subdiagonal entries of third column of H₂H₁A is
$$v_3 = \begin{bmatrix} 0 \\ 0 \\ -0.725 \\ -0.589 \\ 0.047 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0.935 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -1.660 \\ -0.589 \\ 0.047 \end{bmatrix}$$

32 (continued)

Applying resulting Householder transformation H₃ yields
$$H_3 H_2 H_1 A = \begin{bmatrix} -2.236 & 0 & -1.118 \\ 0 & 1.581 & 0 \\ 0 & 0 & 0.935 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad H_3 H_2 H_1 b = \begin{bmatrix} -1.789 \\ 0.632 \\ 1.336 \\ 0.026 \\ 0.337 \end{bmatrix}$$
Now solve upper triangular system Rx = c₁ by back substitution to obtain
$$x = (0.0857, 0.4, 1.4286)^t$$

33 Rank Deficiency

If rank(A) < n, then QR factorization still exists, but yields singular upper triangular factor R, and multiple vectors x give minimum residual norm. Common practice selects minimum residual solution x having smallest norm. Can be computed by QR factorization with column pivoting or by Singular Value Decomposition (SVD). Rank of matrix is often not clear-cut in practice, so relative tolerance is used to determine rank.

34 Example: Near Rank Deficiency

Consider the 3 × 2 matrix
$$A = \begin{bmatrix} 0.641 & 0.242 \\ 0.321 & 0.121 \\ 0.962 & 0.363 \end{bmatrix}$$
Computing QR factorization,
$$R = \begin{bmatrix} 1.1997 & 0.4527 \\ 0 & 0.0002 \end{bmatrix}$$
R is extremely close to singular (exactly singular to 3-digit accuracy of problem statement). If R is used to solve linear least squares problem, result is highly sensitive to perturbations in right-hand side. For practical purposes, rank(A) = 1 rather than 2, because columns are nearly linearly dependent.

35 Singular Value Decomposition

Diagonal factorization of a rectangular matrix using orthogonal transformations. Singular value decomposition (SVD) of m × n matrix A has form
$$A = U \Sigma V^t$$
where U is m × m orthogonal matrix, V is n × n orthogonal matrix, and Σ is m × n diagonal matrix, with elements
$$s_{i,j} = \begin{cases} 0 & i \ne j \\ \sigma_i & i = j \end{cases}$$
Diagonal entries σᵢ, called singular values of A, are usually ordered so that σ₁ ≥ σ₂ ≥ ⋯ ≥ σₙ. Columns uᵢ of U and vᵢ of V are called left and right singular vectors. The algorithm for computing SVD is iterative and closely related to the computation of eigenvalues.

36 Applications of SVD

Minimum norm solution to Ax ≅ b is given by
$$x = \sum_{\sigma_i \ne 0} \frac{u_i^t b}{\sigma_i}\, v_i$$
Proof. Let us assume σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_k > 0 and σ_{k+1} = σ_{k+2} = ⋯ = σₙ = 0, i.e. rank(A) = k < n. Then
$$\|Ax - b\|_2^2 = \|U \Sigma V^t x - b\|_2^2 = \|\Sigma V^t x - U^t b\|_2^2 = \|\Sigma_1 y_1 - g_1\|_2^2 + \|g_2\|_2^2$$
where
$$\Sigma = \begin{bmatrix} \Sigma_1 & O \\ O & O \end{bmatrix}, \quad y = V^t x, \quad g = U^t b$$
and Σ₁ is k × k diagonal matrix with elements σ₁,...,σ_k; y₁ and g₁ are k-element vectors, g₂ has m − k elements.

37 (continued)

In order to minimize the residual norm ‖Ax − b‖₂², we find y₁ = (y₁, y₂, ..., y_k)^t such that Σ₁y₁ = g₁, i.e.
$$y_i = \frac{g_1(i)}{\sigma_i} = \frac{u_i^t b}{\sigma_i}, \qquad i = 1,\dots,k$$
leaving the elements y_{k+1}, y_{k+2}, ..., yₙ undetermined. Since ‖x‖₂ = ‖Vy‖₂ = ‖y‖₂, we can find the minimum norm solution by imposing yᵢ = 0, i = k+1,...,n, hence
$$x = Vy = y_1 v_1 + \cdots + y_k v_k = \sum_{i=1}^k y_i v_i = \sum_{i=1}^k \frac{u_i^t b}{\sigma_i}\, v_i$$
For ill-conditioned or rank-deficient problems, small singular values can be omitted from summation to stabilize solution.

38 Properties of SVD

Euclidean matrix norm:
$$\|A\|_2 = \|U \Sigma V^t\|_2 = \|\Sigma\|_2 = \sigma_1$$
Euclidean condition number of matrix:
$$K(A) = \|A\|_2 \|A^+\|_2 = \frac{\sigma_1}{\sigma_k}, \qquad k = \mathrm{rank}(A) \le n$$
Rank of matrix: number of nonzero singular values.

Pseudoinverse. The pseudoinverse of general real m × n matrix A is given by
$$A^+ = V \Sigma^+ U^t$$
where Σ⁺ is diagonal with elements
$$\sigma_i^+ = \begin{cases} 1/\sigma_i & \sigma_i \ne 0 \\ 0 & \sigma_i = 0 \end{cases}$$
If A is square and nonsingular, then A⁺ = A⁻¹. In all cases, minimum-norm solution to Ax ≅ b is given by x = A⁺b.
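
A hedged Matlab sketch of this formula with a rank tolerance (mimicking, not reproducing, the built-in pinv):

function Ap = svd_pinv(A, tol)
% Pseudoinverse via the SVD: A+ = V * Sigma+ * U', inverting only the
% singular values above the tolerance tol.
[U, S, V] = svd(A);
s = diag(S);                            % singular values as a vector
if nargin < 2
    tol = max(size(A)) * eps(max(s));   % assumed default, pinv-style
end
k = sum(s > tol);                       % numerical rank
Ap = V(:, 1:k) * diag(1 ./ s(1:k)) * U(:, 1:k)';
end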

39 Comparison of Methods

Forming normal equations matrix A^t A requires about n²m/2 multiplications, and solving resulting symmetric linear system requires about n³/6 multiplications. Solving least squares problem using Householder QR factorization requires about mn² − n³/3 multiplications. If m ≈ n, both methods require about same amount of work; if m ≫ n, Householder QR requires about twice as much work as normal equations. Cost of SVD is proportional to mn² + n³, with proportionality constant ranging from 4 to 10, depending on algorithm used.

40 Matlab: Linear Least Squares

Matlab book (Attaway): par. 2.6, chp. 8.
Matlab functions: house, qr, svd.
1. Computer Problem 3.5, pg. 153, Heath.

41 SVD

Singular value decomposition (SVD) of m × n matrix A has form
$$A = U \Sigma V^t$$
where U is m × m orthogonal matrix, V is n × n orthogonal matrix, and Σ is m × n diagonal matrix, with elements
$$s_{i,j} = \begin{cases} 0 & i \ne j \\ \sigma_i & i = j \end{cases}$$

42 Algorithm

Use Matlab function svd to compute SVD factorization:

[U,S,V] = svd(A);

Compute minimum norm solution to Ax ≅ b as
$$x = \sum_{\sigma_i \ne 0} \frac{u_i^t b}{\sigma_i}\, v_i$$

x = zeros(size(V,1), 1);
for i = 1:size(V,1)
    if COND_SIGMA
        f = (U(:,i)' * b) / S(i,i);    % f = u_i^t b / sigma_i
        x = x + f * V(:,i);
    end
end

where COND_SIGMA is σᵢ > 0. Write the Matlab function svd_solve_1 to compute the minimum norm solution to Ax ≅ b.
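
One possible implementation of the requested function (a sketch, not an official solution; the name svd_solve_1 comes from the exercise above, and a small tolerance is used in place of the exact test σᵢ > 0 to guard against roundoff in rank-deficient problems):

function x = svd_solve_1(A, b)
% Minimum norm least squares solution of A*x ~ b via the SVD.
[U, S, V] = svd(A);
s = diag(S);                          % singular values, length min(m,n)
tol = max(size(A)) * eps(max(s));     % assumed tolerance (pinv-style)
x = zeros(size(V, 1), 1);
for i = 1:numel(s)
    if s(i) > tol                     % COND_SIGMA
        f = (U(:, i)' * b) / s(i);    % u_i^t b / sigma_i
        x = x + f * V(:, i);          % accumulate along v_i
    end
end
end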
