12. Cholesky factorization


1  L. Vandenberghe, ECE133A (Winter 2018)

12. Cholesky factorization

- positive definite matrices
- examples
- Cholesky factorization
- complex positive definite matrices
- kernel methods

12-1

2  Definitions

a symmetric matrix A ∈ R^{n×n} is positive semidefinite if

    x^T A x ≥ 0  for all x

a symmetric matrix A ∈ R^{n×n} is positive definite if

    x^T A x > 0  for all x ≠ 0

(the positive definite matrices are a subset of the positive semidefinite matrices)

note: if A is symmetric and n×n, then

    x^T A x = \sum_{i=1}^n \sum_{j=1}^n A_{ij} x_i x_j = \sum_{i=1}^n A_{ii} x_i^2 + 2 \sum_{i>j} A_{ij} x_i x_j

this is called a quadratic form

Cholesky factorization 12-2

3  Example

    A = \begin{bmatrix} 9 & 6 \\ 6 & a \end{bmatrix},
    x^T A x = 9 x_1^2 + 12 x_1 x_2 + a x_2^2 = (3 x_1 + 2 x_2)^2 + (a - 4) x_2^2

- A is positive definite for a > 4: x^T A x > 0 for all nonzero x
- A is positive semidefinite but not positive definite for a = 4: x^T A x ≥ 0 for all x, and x^T A x = 0 for x = (2, -3)
- A is not positive semidefinite for a < 4: x^T A x < 0 for x = (2, -3)

Cholesky factorization 12-3
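
A minimal NumPy sketch of the three cases (not part of the slides): the sign of the smallest eigenvalue of a symmetric matrix determines its definiteness.

```python
import numpy as np

# Quadratic form x^T A x with A = [[9, 6], [6, a]]; the smallest
# eigenvalue is positive, zero, or negative for a > 4, a = 4, a < 4.
for a in [5.0, 4.0, 3.0]:
    A = np.array([[9.0, 6.0], [6.0, a]])
    eigs = np.linalg.eigvalsh(A)        # eigenvalues of symmetric A
    x = np.array([2.0, -3.0])           # the certificate vector from the slide
    print(f"a = {a}: eigenvalues = {eigs}, x^T A x at (2,-3) = {x @ A @ x}")
```

For a = 5 both eigenvalues are positive; for a = 4 the smallest is zero and x = (2, -3) gives x^T A x = 0; for a = 3 the quadratic form is negative at that x.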

4  Simple properties

every positive definite matrix A is nonsingular:

    Ax = 0  ⟹  x^T A x = 0  ⟹  x = 0

(the last step follows from positive definiteness)

every positive definite matrix A has positive diagonal elements:

    A_{ii} = e_i^T A e_i > 0

every positive semidefinite matrix A has nonnegative diagonal elements:

    A_{ii} = e_i^T A e_i ≥ 0

Cholesky factorization 12-4

5  Schur complement

partition the n×n symmetric matrix A as

    A = \begin{bmatrix} A_{11} & A_{2:n,1}^T \\ A_{2:n,1} & A_{2:n,2:n} \end{bmatrix}

the Schur complement of A_{11} is defined as the (n-1)×(n-1) matrix

    S = A_{2:n,2:n} - \frac{1}{A_{11}} A_{2:n,1} A_{2:n,1}^T

if A is positive definite, then S is positive definite

to see this, take any x ≠ 0 and define y = -(A_{2:n,1}^T x)/A_{11}; then

    x^T S x = \begin{bmatrix} y \\ x \end{bmatrix}^T \begin{bmatrix} A_{11} & A_{2:n,1}^T \\ A_{2:n,1} & A_{2:n,2:n} \end{bmatrix} \begin{bmatrix} y \\ x \end{bmatrix} > 0

because A is positive definite

Cholesky factorization 12-5

6  Singular positive semidefinite matrices

we have seen that positive definite matrices are nonsingular (page 12-4)

if A is positive semidefinite, but not positive definite, then it is singular

to see this, suppose A is positive semidefinite but not positive definite

- there exists a nonzero x with x^T A x = 0
- since A is positive semidefinite, the following function is nonnegative:

      f(t) = (x - tAx)^T A (x - tAx)
           = x^T A x - 2t\, x^T A^2 x + t^2\, x^T A^3 x
           = -2t\, ‖Ax‖^2 + t^2\, x^T A^3 x

  (the last step uses x^T A x = 0)
- f(t) ≥ 0 for all t is only possible if ‖Ax‖^2 = 0, i.e., Ax = 0

hence there exists a nonzero x with Ax = 0

Cholesky factorization 12-6

7  Exercises

- show that if A ∈ R^{n×n} is positive semidefinite, then B^T A B is positive semidefinite for any B ∈ R^{n×m}
- show that if A ∈ R^{n×n} is positive definite, then B^T A B is positive definite for any B ∈ R^{n×m} with linearly independent columns

Cholesky factorization 12-7

8  Outline

- positive definite matrices
- examples
- Cholesky factorization
- complex positive definite matrices
- kernel methods

9  Exercise: resistor circuit

(circuit diagram: a two-port resistor circuit with resistors R_1, R_2, R_3; x_1, x_2 are the currents and y_1, y_2 the source voltages)

    \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} R_1 + R_3 & R_3 \\ R_3 & R_2 + R_3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

show that

    A = \begin{bmatrix} R_1 + R_3 & R_3 \\ R_3 & R_2 + R_3 \end{bmatrix}

is positive definite if R_1, R_2, R_3 are positive

Cholesky factorization 12-8

10  Solution

Solution from physics

- x^T A x = y^T x is the power delivered by the sources and dissipated by the resistors
- the power dissipated by the resistors is positive unless both currents are zero

Algebraic solution

    x^T A x = (R_1 + R_3) x_1^2 + 2 R_3 x_1 x_2 + (R_2 + R_3) x_2^2
            = R_1 x_1^2 + R_2 x_2^2 + R_3 (x_1 + x_2)^2
            ≥ 0

and x^T A x = 0 only if x_1 = x_2 = 0

Cholesky factorization 12-9

11  Gram matrix

recall the definition of the Gram matrix of a matrix B (page 4-21):

    A = B^T B

every Gram matrix is positive semidefinite:

    x^T A x = x^T B^T B x = ‖Bx‖^2 ≥ 0  for all x

a Gram matrix is positive definite if

    x^T A x = x^T B^T B x = ‖Bx‖^2 > 0  for all x ≠ 0

in other words, if B has linearly independent columns

Cholesky factorization 12-10

12  Graph Laplacian

recall the definition of the node–arc incidence matrix of a directed graph (page 3-29):

    B_{ij} =  1   if arc j points to node i
             -1   if arc j points from node i
              0   otherwise

assume there are no self-loops and at most one arc between any two nodes

(the slide shows an example graph and its incidence matrix B)

Cholesky factorization 12-11

13  Graph Laplacian

the positive semidefinite matrix A = BB^T is called the Laplacian of the graph

    A_{ij} = degree of node i   if i = j
             -1                 if i ≠ j and there is an arc i → j or j → i
              0                 otherwise

the degree of a node is the number of arcs incident to it

(the slide shows the Laplacian A = BB^T of the example graph on the previous page)

Cholesky factorization 12-12

14  Laplacian quadratic form

recall the interpretation of matrix–vector multiplication with B^T (page 3-31):
if y is a vector of node potentials, then B^T y contains the potential differences

    (B^T y)_j = y_k - y_l   if arc j goes from node l to node k

y^T A y = y^T B B^T y is the sum of squared potential differences

    y^T A y = ‖B^T y‖^2 = \sum_{\text{arcs } i \to j} (y_j - y_i)^2

(this is also known as the Dirichlet energy function)

Example: for the graph on the previous page,

    y^T A y = (y_2 - y_1)^2 + (y_4 - y_1)^2 + (y_3 - y_2)^2 + (y_1 - y_3)^2 + (y_4 - y_3)^2

Cholesky factorization 12-13
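
The example graph can be read off from the five terms of the quadratic form (arcs 1→2, 1→4, 2→3, 3→1, 3→4). A minimal NumPy sketch that builds this incidence matrix, forms A = BB^T, and checks the identity:

```python
import numpy as np

# arcs (from, to), 1-based as on the slide: 1->2, 1->4, 2->3, 3->1, 3->4
arcs = [(1, 2), (1, 4), (2, 3), (3, 1), (3, 4)]
n, m = 4, len(arcs)

B = np.zeros((n, m))                    # node-arc incidence matrix
for j, (l, k) in enumerate(arcs):
    B[l - 1, j] = -1.0                  # arc j leaves node l
    B[k - 1, j] = +1.0                  # arc j enters node k

A = B @ B.T                             # graph Laplacian
y = np.array([1.0, -2.0, 0.5, 3.0])     # arbitrary node potentials
lhs = y @ A @ y
rhs = sum((y[k - 1] - y[l - 1]) ** 2 for l, k in arcs)
print(A)                                # diagonal = degrees, off-diagonal = -1
print(lhs, rhs)                         # equal: y^T A y = sum of squared differences
```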

15  Outline

- positive definite matrices
- examples
- Cholesky factorization
- complex positive definite matrices
- kernel methods

16  Cholesky factorization

every positive definite matrix A ∈ R^{n×n} can be factored as

    A = R^T R

where R is upper triangular with positive diagonal elements

- the complexity of computing R is (1/3)n^3 flops
- R is called the Cholesky factor of A
- R can be interpreted as a square root of a positive definite matrix
- the factorization gives a practical method for testing positive definiteness

Cholesky factorization 12-14

17  Cholesky factorization algorithm

    \begin{bmatrix} A_{11} & A_{1,2:n} \\ A_{2:n,1} & A_{2:n,2:n} \end{bmatrix}
  = \begin{bmatrix} R_{11} & 0 \\ R_{1,2:n}^T & R_{2:n,2:n}^T \end{bmatrix}
    \begin{bmatrix} R_{11} & R_{1,2:n} \\ 0 & R_{2:n,2:n} \end{bmatrix}
  = \begin{bmatrix} R_{11}^2 & R_{11} R_{1,2:n} \\ R_{11} R_{1,2:n}^T & R_{1,2:n}^T R_{1,2:n} + R_{2:n,2:n}^T R_{2:n,2:n} \end{bmatrix}

1. compute the first row of R:

       R_{11} = \sqrt{A_{11}},   R_{1,2:n} = \frac{1}{R_{11}} A_{1,2:n}

2. compute the 2,2 block R_{2:n,2:n} from

       A_{2:n,2:n} - R_{1,2:n}^T R_{1,2:n} = R_{2:n,2:n}^T R_{2:n,2:n}

   this is a Cholesky factorization of order n-1

Cholesky factorization 12-15
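
A direct transcription of this recursion into NumPy (a teaching sketch; in practice one would call numpy.linalg.cholesky, which returns the lower-triangular factor L = R^T):

```python
import numpy as np

def cholesky_upper(A):
    """Return upper-triangular R with A = R^T R, for positive definite A."""
    A = np.array(A, dtype=float)        # work on a copy
    n = A.shape[0]
    R = np.zeros((n, n))
    for i in range(n):
        if A[i, i] <= 0:
            raise ValueError("matrix is not positive definite")
        R[i, i] = np.sqrt(A[i, i])                  # R_11 = sqrt(A_11)
        R[i, i+1:] = A[i, i+1:] / R[i, i]           # R_{1,2:n} = A_{1,2:n} / R_11
        # Schur complement update: A_{2:n,2:n} - R_{1,2:n}^T R_{1,2:n}
        A[i+1:, i+1:] -= np.outer(R[i, i+1:], R[i, i+1:])
    return R

# the example worked out on pages 12-17 and 12-18
A = np.array([[25., 15., -5.], [15., 18., 0.], [-5., 0., 11.]])
R = cholesky_upper(A)
print(R)                                # [[5, 3, -1], [0, 3, 1], [0, 0, 3]]
print(np.allclose(R.T @ R, A))
```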

18  Discussion

the algorithm works for positive definite A of size n×n:

- step 1: if A is positive definite then A_{11} > 0
- step 2: if A is positive definite, then

      A_{2:n,2:n} - R_{1,2:n}^T R_{1,2:n} = A_{2:n,2:n} - \frac{1}{A_{11}} A_{2:n,1} A_{2:n,1}^T

  is positive definite (it is the Schur complement of A_{11}; see page 12-5)

hence the algorithm works for n = m if it works for n = m - 1; it obviously works for n = 1; therefore it works for all n

Cholesky factorization 12-16

19  Example

    \begin{bmatrix} 25 & 15 & -5 \\ 15 & 18 & 0 \\ -5 & 0 & 11 \end{bmatrix}
  = \begin{bmatrix} R_{11} & 0 & 0 \\ R_{12} & R_{22} & 0 \\ R_{13} & R_{23} & R_{33} \end{bmatrix}
    \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ 0 & R_{22} & R_{23} \\ 0 & 0 & R_{33} \end{bmatrix}

Cholesky factorization 12-17

20  Example (continued)

    \begin{bmatrix} 25 & 15 & -5 \\ 15 & 18 & 0 \\ -5 & 0 & 11 \end{bmatrix}
  = \begin{bmatrix} R_{11} & 0 & 0 \\ R_{12} & R_{22} & 0 \\ R_{13} & R_{23} & R_{33} \end{bmatrix}
    \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ 0 & R_{22} & R_{23} \\ 0 & 0 & R_{33} \end{bmatrix}

first row of R: R_{11} = \sqrt{25} = 5, R_{12} = 15/5 = 3, R_{13} = -5/5 = -1

second row of R: factor the Schur complement

    \begin{bmatrix} 18 & 0 \\ 0 & 11 \end{bmatrix} - \begin{bmatrix} 3 \\ -1 \end{bmatrix} \begin{bmatrix} 3 & -1 \end{bmatrix}
  = \begin{bmatrix} 9 & 3 \\ 3 & 10 \end{bmatrix}
  = \begin{bmatrix} R_{22} & 0 \\ R_{23} & R_{33} \end{bmatrix} \begin{bmatrix} R_{22} & R_{23} \\ 0 & R_{33} \end{bmatrix}

so R_{22} = \sqrt{9} = 3 and R_{23} = 3/3 = 1

third column of R: 10 - 1 = R_{33}^2, i.e., R_{33} = 3

hence

    R = \begin{bmatrix} 5 & 3 & -1 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{bmatrix}

Cholesky factorization 12-18

21  Solving equations with positive definite A

solve Ax = b with A a positive definite n×n matrix

Algorithm

- factor A as A = R^T R
- solve R^T R x = b:
  - solve R^T y = b by forward substitution
  - solve Rx = y by back substitution

Complexity: (1/3)n^3 + 2n^2 ≈ (1/3)n^3 flops

- factorization: (1/3)n^3 flops
- forward and backward substitution: 2n^2 flops

Cholesky factorization 12-19
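
In SciPy the factor–solve pattern can be written as follows (a minimal sketch; cho_factor keeps the factorization so it can be reused for further right-hand sides):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[25., 15., -5.], [15., 18., 0.], [-5., 0., 11.]])
b = np.array([1., 2., 3.])

c, low = cho_factor(A)        # (1/3)n^3 flops: factor once
x = cho_solve((c, low), b)    # 2n^2 flops: forward + back substitution
print(np.allclose(A @ x, b))
```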

22  Cholesky factorization of Gram matrix

suppose B is an m×n matrix with linearly independent columns; the Gram matrix A = B^T B is positive definite (page 4-21)

two methods for computing the Cholesky factor of A, given B:

1. compute A = B^T B, then the Cholesky factorization of A:

       A = R^T R

2. compute the QR factorization B = QR; since

       A = B^T B = R^T Q^T Q R = R^T R

   the matrix R is the Cholesky factor of A

Cholesky factorization 12-20

23  Example

    B = \begin{bmatrix} 3 & 6 \\ 4 & 8 \\ 0 & 5 \end{bmatrix},
    A = B^T B = \begin{bmatrix} 25 & 50 \\ 50 & 125 \end{bmatrix}

1. Cholesky factorization:

       A = \begin{bmatrix} 5 & 0 \\ 10 & 5 \end{bmatrix} \begin{bmatrix} 5 & 10 \\ 0 & 5 \end{bmatrix}

2. QR factorization:

       B = \begin{bmatrix} 3 & 6 \\ 4 & 8 \\ 0 & 5 \end{bmatrix}
         = \begin{bmatrix} 3/5 & 0 \\ 4/5 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 5 & 10 \\ 0 & 5 \end{bmatrix}

Cholesky factorization 12-21

24  Comparison of the two methods

Numerical stability: the QR factorization method is more stable

- see the example on page 8-16
- the QR method computes R without squaring B (i.e., without forming B^T B)
- this is important when the columns of B are almost linearly dependent

Complexity

- method 1: cost of the symmetric product B^T B plus the Cholesky factorization: mn^2 + (1/3)n^3 flops
- method 2: 2mn^2 flops for the QR factorization
- method 1 is faster, but only by a factor of at most two (if m ≫ n)

Cholesky factorization 12-22
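
A minimal NumPy sketch comparing the two routes to the Cholesky factor; numpy.linalg.qr may return an R with negative diagonal entries, so the signs are normalized before comparing:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 5))       # tall matrix, independent columns

# method 1: form the Gram matrix, then factor it
A = B.T @ B
R1 = np.linalg.cholesky(A).T            # NumPy returns L; transpose to get R

# method 2: QR factorization of B; flip row signs so diag(R) > 0
Q, R2 = np.linalg.qr(B)
R2 = np.sign(np.diag(R2))[:, None] * R2

print(np.allclose(R1, R2))              # same Cholesky factor
```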

25  Sparse positive definite matrices

Cholesky factorization of dense matrices

- (1/3)n^3 flops
- on a standard computer: a few seconds or less, for n up to several thousand

Cholesky factorization of sparse matrices

- if A is very sparse, R is often (but not always) sparse
- if R is sparse, the cost of the factorization is much less than (1/3)n^3
- the exact cost depends on n, the number of nonzero elements, and the sparsity pattern
- very large sets of equations can be solved by exploiting sparsity

Cholesky factorization 12-23

26  Sparse Cholesky factorization

if A is sparse and positive definite, it is usually factored as

    A = P R^T R P^T

where P is a permutation matrix and R is upper triangular with positive diagonal elements

Interpretation: we permute the rows and columns of A and factor

    P^T A P = R^T R

- the choice of permutation greatly affects the sparsity of R
- there exist several heuristic methods for choosing a good permutation

Cholesky factorization 12-24

27  Example

(figures: the sparsity pattern of A and of its Cholesky factor, and the pattern of P^T A P and of its Cholesky factor, for a well-chosen permutation P)

Cholesky factorization 12-25

28  Solving sparse positive definite equations

solve Ax = b with A a sparse positive definite matrix

Algorithm

1. compute the sparse Cholesky factorization A = P R^T R P^T
2. permute the right-hand side: c := P^T b
3. solve R^T y = c by forward substitution
4. solve Rz = y by back substitution
5. permute the solution: x := P z

Cholesky factorization 12-26
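
SciPy itself has no sparse Cholesky routine (the CHOLMOD wrapper in scikit-sparse is one option), but the effect of reordering can be sketched with scipy.sparse.csgraph.reverse_cuthill_mckee, one heuristic for choosing P. The "arrow" matrix below is a hypothetical example whose Cholesky factor fills in completely unless the rows and columns are permuted:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# arrow pattern: dense first row/column, otherwise diagonal
n = 8
A = sp.lil_matrix((n, n))
A.setdiag(10.0)                 # diagonally dominant, hence positive definite
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 10.0
A = A.tocsr()

perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # heuristic ordering
Ap = A[perm][:, perm].toarray()                        # P^T A P (dense, for illustration)

R  = np.linalg.cholesky(A.toarray()).T                 # factor of A: fills in completely
Rp = np.linalg.cholesky(Ap).T                          # factor of P^T A P: stays sparse
print(np.count_nonzero(R), np.count_nonzero(Rp))       # e.g. 36 versus 15 nonzeros
```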

29  Outline

- positive definite matrices
- examples
- Cholesky factorization
- complex positive definite matrices
- kernel methods

30  Quadratic form

suppose A is n×n and Hermitian (A_{ij} = \bar{A}_{ji})

    x^H A x = \sum_{i=1}^n \sum_{j=1}^n A_{ij} \bar{x}_i x_j
            = \sum_{i=1}^n A_{ii} |x_i|^2 + \sum_{i>j} (A_{ij} \bar{x}_i x_j + \bar{A}_{ij} x_i \bar{x}_j)
            = \sum_{i=1}^n A_{ii} |x_i|^2 + 2 \sum_{i>j} \mathrm{Re}(A_{ij} \bar{x}_i x_j)

note that x^H A x is real for all x ∈ C^n

Cholesky factorization 12-27

31  Complex positive definite matrices

- a Hermitian n×n matrix A is positive semidefinite if x^H A x ≥ 0 for all x ∈ C^n
- a Hermitian n×n matrix A is positive definite if x^H A x > 0 for all nonzero x ∈ C^n

Cholesky factorization: every positive definite matrix A ∈ C^{n×n} can be factored as

    A = R^H R

where R is upper triangular with positive real diagonal elements

Cholesky factorization 12-28
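
numpy.linalg.cholesky accepts complex Hermitian matrices; it returns the lower-triangular factor L = R^H with A = L L^H (a minimal sketch):

```python
import numpy as np

# a Hermitian positive definite matrix (Gram matrix of a complex B)
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
A = B.conj().T @ B

L = np.linalg.cholesky(A)                # lower triangular, A = L L^H
R = L.conj().T                           # upper triangular factor, A = R^H R
print(np.allclose(R.conj().T @ R, A))
print(np.allclose(np.diag(R).imag, 0))   # diagonal of R is positive real
```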

32  Outline

- positive definite matrices
- examples
- Cholesky factorization
- complex positive definite matrices
- kernel methods

33  Regularized least squares model fitting

we revisit the data fitting problem with a linear-in-parameters model (page 9-9):

    f̂(x) = θ_1 f_1(x) + θ_2 f_2(x) + ⋯ + θ_p f_p(x) = θ^T F(x)

where F(x) = (f_1(x), ..., f_p(x)) is a p-vector of basis functions f_1(x), ..., f_p(x)

Regularized least squares model fitting (page 10-7)

    minimize  \sum_{k=1}^N \big(θ^T F(x^{(k)}) - y^{(k)}\big)^2 + λ \sum_{j=1}^p θ_j^2

- (x^{(1)}, y^{(1)}), ..., (x^{(N)}, y^{(N)}) are N examples
- to simplify notation, we add regularization for all coefficients θ_1, ..., θ_p
- the next discussion can be modified to handle f_1(x) = 1 with regularization \sum_{j=2}^p θ_j^2

Cholesky factorization 12-29

34  Regularized least squares problem

in matrix notation:

    minimize  ‖Aθ - y_d‖^2 + λ‖θ‖^2

A has size N×p (number of examples × number of basis functions):

    A = \begin{bmatrix} F(x^{(1)})^T \\ F(x^{(2)})^T \\ \vdots \\ F(x^{(N)})^T \end{bmatrix}
      = \begin{bmatrix} f_1(x^{(1)}) & f_2(x^{(1)}) & \cdots & f_p(x^{(1)}) \\ f_1(x^{(2)}) & f_2(x^{(2)}) & \cdots & f_p(x^{(2)}) \\ \vdots & \vdots & & \vdots \\ f_1(x^{(N)}) & f_2(x^{(N)}) & \cdots & f_p(x^{(N)}) \end{bmatrix}

y_d is the N-vector y_d = (y^{(1)}, ..., y^{(N)})

- we discuss methods for problems with N ≪ p (A is very wide)
- the equivalent stacked least squares problem (page 10-3) has size (p + N) × p
- the QR factorization method may be too expensive when N ≪ p

Cholesky factorization 12-30

35  Solution of regularized LS problem

from the normal equations:

    θ̂ = (A^T A + λI)^{-1} A^T y_d = A^T (A A^T + λI)^{-1} y_d

the second expression follows from the push-through identity

    (A^T A + λI)^{-1} A^T = A^T (A A^T + λI)^{-1}

which is easily proved by writing it as

    A^T (A A^T + λI) = (A^T A + λI) A^T

from the second expression for θ̂ and the definition of A,

    f̂(x) = θ̂^T F(x) = w^T A F(x) = \sum_{i=1}^N w_i F(x^{(i)})^T F(x)

where w = (A A^T + λI)^{-1} y_d

Cholesky factorization 12-31
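
A minimal numerical check of the push-through identity, with N ≪ p so the second expression solves an N×N system instead of a p×p one:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, lam = 20, 500, 0.1
A = rng.standard_normal((N, p))
yd = rng.standard_normal(N)

theta1 = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ yd)   # p x p system
theta2 = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(N), yd)   # N x N system
print(np.allclose(theta1, theta2))
```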

36  Algorithm

1. compute the N×N matrix Q = A A^T, which has elements

       Q_{ij} = F(x^{(i)})^T F(x^{(j)}),   i, j = 1, ..., N

2. use a Cholesky factorization to solve the equation

       (Q + λI) w = y_d

Remarks

- θ̂ = A^T w is not needed; w is sufficient to evaluate the fitted function:

      f̂(y) = \sum_{i=1}^N w_i F(x^{(i)})^T F(y)

- complexity: (1/3)N^3 flops for the factorization, plus the cost of computing Q

Cholesky factorization 12-32

37  Example: multivariate polynomials

f̂(x) is a polynomial of degree d (or less) in n variables x = (x_1, ..., x_n)

f̂(x) is a linear combination of all possible monomials

    x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}

where k_1, ..., k_n are nonnegative integers with k_1 + k_2 + ⋯ + k_n ≤ d

the number of different monomials is

    \binom{n + d}{n} = \frac{(n + d)!}{n!\, d!}

Example: for n = 2, d = 3 there are ten monomials

    1, x_1, x_2, x_1^2, x_1 x_2, x_2^2, x_1^3, x_1^2 x_2, x_1 x_2^2, x_2^3

Cholesky factorization 12-33
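
The count can be checked with Python's math.comb (the second case anticipates the digit-classification example later in this lecture):

```python
from math import comb

n, d = 2, 3
print(comb(n + d, n))   # 10 monomials of degree <= 3 in 2 variables

n, d = 784, 3           # pixel vectors of 28x28 images, cubic kernel
print(comb(n + d, n))   # 80931145 basis functions: far too many to form explicitly
```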

38  Multinomial formula

    (x_0 + x_1 + ⋯ + x_n)^d = \sum_{k_0 + k_1 + ⋯ + k_n = d} \frac{d!}{k_0!\, k_1! \cdots k_n!}\, x_0^{k_0} x_1^{k_1} \cdots x_n^{k_n}

the sum is over all nonnegative integers k_0, k_1, ..., k_n with sum d

setting x_0 = 1 gives

    (1 + x_1 + x_2 + ⋯ + x_n)^d = \sum_{k_1 + ⋯ + k_n ≤ d} c_{k_1 k_2 \cdots k_n}\, x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n}

- the sum includes all monomials of degree d or less in the variables x_1, ..., x_n
- the coefficient c_{k_1 k_2 \cdots k_n} is defined as

      c_{k_1 k_2 \cdots k_n} = \frac{d!}{k_0!\, k_1!\, k_2! \cdots k_n!}   with k_0 = d - k_1 - ⋯ - k_n

Cholesky factorization 12-34

39  Vector of monomials

write a polynomial of degree d or less, with variables x ∈ R^n, as

    f̂(x) = θ^T F(x)

- F(x) is the vector of scaled basis functions \sqrt{c_{k_1 \cdots k_n}}\, x_1^{k_1} x_2^{k_2} \cdots x_n^{k_n} for all k_1 + k_2 + ⋯ + k_n ≤ d
- the length of F(x) is p = (n + d)!/(n!\, d!)

the multinomial formula gives a simple formula for the inner products F(u)^T F(v):

    F(u)^T F(v) = \sum_{k_1 + ⋯ + k_n ≤ d} c_{k_1 k_2 \cdots k_n} (u_1^{k_1} \cdots u_n^{k_n})(v_1^{k_1} \cdots v_n^{k_n}) = (1 + u_1 v_1 + ⋯ + u_n v_n)^d

only about 2n + 1 flops are needed for this inner product of vectors of length p = (n + d)!/(n!\, d!)

Cholesky factorization 12-35

40  Example

the vector of monomials of degree d = 3 or less in n = 2 variables:

    F(u)^T F(v) = (1, \sqrt{3} u_1, \sqrt{3} u_2, \sqrt{3} u_1^2, \sqrt{6} u_1 u_2, \sqrt{3} u_2^2, u_1^3, \sqrt{3} u_1^2 u_2, \sqrt{3} u_1 u_2^2, u_2^3)^T
                  (1, \sqrt{3} v_1, \sqrt{3} v_2, \sqrt{3} v_1^2, \sqrt{6} v_1 v_2, \sqrt{3} v_2^2, v_1^3, \sqrt{3} v_1^2 v_2, \sqrt{3} v_1 v_2^2, v_2^3)
                = (1 + u_1 v_1 + u_2 v_2)^3

Cholesky factorization 12-36
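
A minimal check that this explicit feature map reproduces the polynomial kernel for n = 2, d = 3:

```python
import numpy as np

def F(x):
    """Scaled monomial feature vector for n = 2, d = 3 (as on the slide)."""
    x1, x2 = x
    s3, s6 = np.sqrt(3.0), np.sqrt(6.0)
    return np.array([1.0, s3*x1, s3*x2, s3*x1**2, s6*x1*x2, s3*x2**2,
                     x1**3, s3*x1**2*x2, s3*x1*x2**2, x2**3])

u, v = np.array([0.3, -1.2]), np.array([2.0, 0.7])
print(F(u) @ F(v))                 # inner product of 10-vectors
print((1 + u @ v) ** 3)            # polynomial kernel: same value
```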

41  Least squares fitting of multivariate polynomials

fit a polynomial in n variables, of degree d, to the points (x^{(1)}, y^{(1)}), ..., (x^{(N)}, y^{(N)})

Algorithm (see page 12-32)

1. compute the N×N matrix Q with elements

       Q_{ij} = K(x^{(i)}, x^{(j)})   where K(u, v) = (1 + u^T v)^d

2. use a Cholesky factorization to solve the equation (Q + λI) w = y_d

the fitted polynomial is

    f̂(x) = \sum_{i=1}^N w_i K(x^{(i)}, x) = \sum_{i=1}^N w_i (1 + (x^{(i)})^T x)^d

complexity: nN^2 flops for computing Q, plus (1/3)N^3 for the factorization, i.e., nN^2 + (1/3)N^3 flops

Cholesky factorization 12-37
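
The complete algorithm fits in a few lines of NumPy/SciPy. This is a sketch; fit_poly_kernel and the quadratic test function are illustrative, with the rows of X holding the points x^(i):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def fit_poly_kernel(X, y, d, lam):
    """Fit f(x) = sum_i w_i (1 + x_i^T x)^d by regularized least squares."""
    Q = (1.0 + X @ X.T) ** d                                 # N x N kernel matrix, ~n N^2 flops
    w = cho_solve(cho_factor(Q + lam * np.eye(len(y))), y)   # ~(1/3) N^3 flops
    return lambda x: w @ (1.0 + X @ x) ** d                  # fitted polynomial

# fit a noisy quadratic in two variables
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))
y = 1 + X[:, 0] - 2 * X[:, 0] * X[:, 1] + 0.01 * rng.standard_normal(200)
fhat = fit_poly_kernel(X, y, d=2, lam=1e-6)
print(fhat(np.array([0.5, -1.0])), 1 + 0.5 - 2 * 0.5 * (-1.0))   # both close to 2.5
```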

42  Kernel methods

Kernel function: a generalized inner product K(u, v)

- K(u, v) is the inner product of the vectors of basis functions F(u) and F(v)
- F(u) may be infinite-dimensional
- kernel methods work with K(u, v) directly and do not require F(u)

Examples

- the polynomial kernel function

      K(u, v) = (1 + u^T v)^d

- the Gaussian radial basis function kernel

      K(u, v) = \exp\Big( -\frac{‖u - v‖^2}{2σ^2} \Big)

- kernels exist for computing with graphs, texts, strings of symbols, ...

Cholesky factorization 12-38

43  Example: handwritten digit classification

- we apply the method of page 12-37 to least squares classification
- the training set is images from the MNIST data set (about 1000 examples per digit, so N ≈ 10000)
- the vector x is the vector of pixel intensities (size n = 28^2 = 784)
- we use the polynomial kernel with degree d = 3: K(u, v) = (1 + u^T v)^3; hence F(x) has length p = (n + d)!/(n!\, d!) = 80,931,145
- we calculate ten Boolean classifiers f̂_k(x) = sign(f̃_k(x)), k = 1, ..., 10
- f̂_k(x) distinguishes digit k - 1 (outcome +1) from the other digits (outcome -1)
- the Boolean classifiers are combined in the multi-class classifier

      f̂(x) = argmax_{k=1,...,10} f̃_k(x)

Cholesky factorization 12-39

44  Least squares Boolean classifier

Algorithm: compute the Boolean classifier for digit k - 1 versus the rest

1. compute the N×N matrix Q with elements

       Q_{ij} = (1 + (x^{(i)})^T x^{(j)})^d,   i, j = 1, ..., N

2. define the N-vector y_d with elements

       y_{d,i} = +1 if x^{(i)} is an example of digit k - 1, and -1 otherwise

3. solve the equation (Q + λI) w = y_d

the solution w gives the Boolean classifier for digit k - 1 versus the rest:

    f̃_k(x) = \sum_{i=1}^N w_i (1 + (x^{(i)})^T x)^d

Cholesky factorization 12-40

45  Complexity

- the matrix Q is the same for each of the ten Boolean classifiers
- hence, only the right-hand side of the equation (Q + λI) w = y_d is different for each Boolean classifier

Complexity

- constructing Q requires N^2/2 inner products of length n: nN^2 flops
- Cholesky factorization of Q + λI: (1/3)N^3 flops
- solving the equation (Q + λI) w = y_d for the 10 right-hand sides: 20N^2 flops
- total: (1/3)N^3 + nN^2 flops

Cholesky factorization 12-41
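
A sketch of the shared-factorization idea in SciPy, on hypothetical stand-in data (random points and labels rather than MNIST images):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
N, n, d, lam = 300, 10, 3, 1e-3
X = rng.standard_normal((N, n))
labels = rng.integers(0, 10, size=N)          # stand-in for digit labels

Q = (1.0 + X @ X.T) ** d                      # polynomial kernel matrix
factor = cho_factor(Q + lam * np.eye(N))      # (1/3)N^3 flops, done once

W = np.empty((10, N))
for k in range(10):
    yd = np.where(labels == k, 1.0, -1.0)     # one-vs-rest targets
    W[k] = cho_solve(factor, yd)              # 2N^2 flops per right-hand side
```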

46  Classification error

(figure: classification error (%) versus λ, for the training set and the test set; the caption reads "percentage of misclassified digits versus λ")

Cholesky factorization 12-42

47  Confusion matrix

(table: confusion matrix of the multiclass classifier (λ = 10^{-4}) on the 10000 test examples; rows are the true digits, columns the predicted digits, with row totals)

292 digits are misclassified (2.9% error)

Cholesky factorization 12-43

48  Examples of misclassified digits

(figure: misclassified test images, shown with their true digit and the predicted digit)

Cholesky factorization 12-44

49  Examples of misclassified digits

(figure: more misclassified test images, shown with their true digit and the predicted digit)

Cholesky factorization 12-45
