Linear Algebra in Numerical Methods

Lecture on linear algebra
- MATLAB/Octave works well with linear algebra
Linear Algebra
- A pseudo-algebra that deals with systems of equations and the transformations of those equations
- This "linear algebra" is technically not an algebra per the formal definition
  - Algebra (not linear algebra) studies vectors (and vector fields), matrices, tensors (and tensor fields), quaternions, and abstract concepts like groups and rings; this is sometimes referred to as abstract algebra
  - Elementary algebra is the algebra learned in secondary school
  - BTW, there is a "linear algebra" that is more akin to abstract algebra, but it is not the linear algebra most people refer to
Linear Algebra
- A pseudo-algebra that deals with systems of equations and the transformations of those equations
- Fitting and smoothing (in a different lecture) is an application of this algebra
- Solving a system of equations is an application of this algebra
- The biggest issue is the inverse of a matrix
- Normally linear algebra works on solving the following problem, with its solution:
    $A x = y$
    $x = A^{-1} y$
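A minimal Octave sketch of this problem; the matrix A and vector y below are made-up illustrative values:

    % Solve A*x = y two ways (illustrative 2x2 example)
    A = [2 1; 1 3];
    y = [3; 5];
    x1 = A \ y;        % backslash: preferred, uses a factorization internally
    x2 = inv(A) * y;   % explicit inverse: works, but slower and less accurate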
Linear Algebra
- This linear algebra method leads to transforms
System of Equations
- Linear system of equations
  - Graphical solution (tough for big systems)
  - Gauss method
    - Uses a method similar to what we used for the simplex method
    - Just a fancy method of solving linear equations by addition and substitution (see the sketch below)
- Non-linear system of equations (not linear algebra, BTW)
  - Set the system of equations equal to zero
  - Use a root-finding method in multiple dimensions: Newton-Raphson
    - Set up in matrix form and use Gauss elimination
    - Other similar but better methods exist
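A minimal sketch of the Gauss method in Octave (naive forward elimination plus back substitution, no pivoting, so it assumes nonzero pivots; illustrative only):

    % Naive Gauss elimination with back substitution (no pivoting)
    function x = gauss_solve(A, b)
      n = length(b);
      for k = 1:n-1                        % forward elimination
        for i = k+1:n
          m = A(i,k) / A(k,k);             % multiplier (assumes A(k,k) != 0)
          A(i,k:n) = A(i,k:n) - m*A(k,k:n);
          b(i) = b(i) - m*b(k);
        end
      end
      x = zeros(n,1);
      for i = n:-1:1                       % back substitution
        x(i) = (b(i) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
      end
    end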
Eigen ("own" value)
- Eigenvalues: characteristic roots (values) of a linear system of equations
- Eigenvectors: vectors associated with a linear system of equations
- Eigenfunction: a function that is operated on by some operator and has associated eigenvalues from this operation (in essence, a transformed function that has eigenvalues from a particular operation)
- Solved using different decomposition methods
Decomposition
- There are a number of matrix decomposition methods
- Decomposes a matrix into simpler matrices in order to make a time-consuming operation easier
- Eigen decomposition
  - A matrix A can be decomposed into a matrix of its eigenvectors, V, and a diagonal matrix with the eigenvalues on the diagonal, D
  - Given that V is a square (invertible) matrix, then $A = V D V^{-1}$
- LU decomposition
  - Works with a square matrix
  - Solves a linear equation
- QR decomposition
  - Works with a rectangular matrix
  - Solves linear equations
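A short Octave check of the eigen decomposition (the matrix is an illustrative value):

    % Eigen decomposition: A = V*D*inv(V)
    A = [4 1; 2 3];
    [V, D] = eig(A);            % columns of V are eigenvectors, diag(D) the eigenvalues
    err = norm(A - V*D*inv(V))  % should be near machine precision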
Decomposition (cont.)
- More matrix decomposition methods
- Singular Value Decomposition (SVD)
- Cholesky decomposition
  - Works with symmetric positive definite matrices
  - Faster than LU decomposition if it can be used
- Schur decomposition
  - Works with a complex square matrix
- Hessenberg decomposition
  - Decomposes into a unitary matrix, $(U^*)^T = U^{-1}$ {for a real matrix this is the same as orthogonal}, and a Hessenberg matrix {a special banded matrix: the upper triangle plus one subdiagonal is nonzero}
  - For eigenvalues and eigenvectors, many applications perform a Hessenberg decomposition and then a Schur decomposition
Matrix Definitions
- A matrix can have different operations done to it that are useful in linear algebra
- Transpose is when the elements of a matrix are transposed: $B = A^T$, i.e. given $A_{ij}$, then $B_{ji} = A_{ij}$
- Adjugate (formerly called adjoint; "adjoint" now means conjugate transpose) of a matrix is the transpose of the cofactor matrix:
    $C_{ij} = (-1)^{i+j}\,\mathrm{cofactor}(A_{ij})$
    $\mathrm{adj}(A) = C^T$, i.e. $\mathrm{adj}(A)_{ij} = C_{ji}$
Inverse of a Matrix
- A matrix can have different operations done to it that are useful in linear algebra
- The inverse of a matrix is the matrix that, when multiplied by the original matrix, equals the identity matrix: $I = A^{-1} A$
- The inverse is the adjugate divided by the determinant: $A^{-1} = \mathrm{adj}(A)/\det(A)$
- The inverse of a triangular matrix is itself triangular and lends itself to an easy equation form, given the zeros on the other triangular half (many of the cofactor determinants vanish)
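A worked 2x2 example of the adjugate formula in Octave (illustrative values; for a 2x2, the adjugate just swaps the diagonal and negates the off-diagonal):

    % Inverse via adjugate/determinant for a 2x2 matrix
    A = [4 7; 2 6];
    adjA = [6 -7; -2 4];     % adjugate: swap diagonal, negate off-diagonal
    Ainv = adjA / det(A)     % same as inv(A); impractical for large matrices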
LU Decomposition
- Reduces the time-consuming forward elimination that is in Gauss elimination
- Decomposes a matrix into lower (L) and upper (U) triangular matrices
  - L is used to produce an intermediate vector through elimination
  - U is used to produce the answer from the intermediate vector
- Many variations improve on this simple description
- Very useful for matrix inversion (a very time-consuming task)
- LU reduction in computing is a parallelized version of LU decomposition
- The method is a modified Gaussian elimination called the Doolittle algorithm (except for LUP, the Crout algorithm)
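In Octave the built-in lu does this, with partial pivoting, so it returns $PA = LU$; the matrix below is an illustrative value:

    % LU decomposition with partial pivoting: P*A = L*U
    A = [2 1 1; 4 3 3; 8 7 9];
    [L, U, P] = lu(A);
    err = norm(P*A - L*U)   % should be near zero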
LU Decomposition Applications
- First, to solve a set of linear equations in two steps
- The advantage is that it avoids repeating Gaussian elimination (though to get the LU decomposition a similar process is used, so this is only a win if the A matrix is used multiple times)
- Problem and solution:
    $A x = b$, so $L U x = b$
    Solve $L y = b$, that is $y = L^{-1} b$ (forward substitution)
    Finally solve $U x = y$, that is $x = U^{-1} y$ (back substitution)
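The two-step solve in Octave (illustrative A and b; P appears because lu pivots):

    % Two-step LU solve: L*y = P*b (forward), then U*x = y (backward)
    A = [2 1 1; 4 3 3; 8 7 9];
    b = [4; 10; 24];
    [L, U, P] = lu(A);
    y = L \ (P*b);   % forward substitution (L is lower triangular)
    x = U \ y;       % back substitution (U is upper triangular)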
LU Decomposition Applications
- Solves the inverse, and this is normally how computer applications like MATLAB perform the inverse of a matrix (see example): $A^{-1} = U^{-1} L^{-1}$ (times P when pivoting is used)
- Solves determinants quickly (see example): $\det A = \det L \, \det U$
- MATLAB: [l,u,p]=lu(a), then inv(u)*inv(l)*p = inv(a), and det(a) = det(l)*det(p*u) = det(p*l)*det(u)
- Octave: [l,u]=lu(a), then inv(u)*inv(l) = inv(a)
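A quick check of those identities in Octave (the matrix is an illustrative value):

    % Verify the LU-based inverse and determinant identities
    A = [2 1 1; 4 3 3; 8 7 9];
    [L, U, P] = lu(A);
    err_inv = norm(inv(U)*inv(L)*P - inv(A))   % ~0
    err_det = abs(det(P*L)*det(U) - det(A))    % ~0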
QR Decomposition
- Used to solve least-squares problems
- Used to solve linear equations in the same manner as LU decomposition
- Can be used to get an orthonormal basis for a set of vectors
- Octave: [q,r]=qr(a), then inv(r)*inv(q) = inv(a) (since q is orthogonal, inv(q) is just q')
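A least-squares line fit via QR in Octave; the data points are made-up illustrative values:

    % Least-squares fit of a line via QR
    t = (0:5)';
    y = [0.1; 1.9; 4.2; 5.8; 8.1; 9.9];
    A = [ones(size(t)) t];     % model y ~ c0 + c1*t
    [Q, R] = qr(A, 0);         % economy-size QR
    c = R \ (Q' * y)           % same answer as A \ y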
SVD Decomposition
- Decomposes a matrix into eigenvectors and eigenvalues (in essence, really those of $AA^T$ and $A^TA$)
- Can be done on any type of matrix (non-square, sparse, singular, large, etc.)
- More than one type of SVD
- Think of this method as factoring a matrix (normally into three simpler matrices): $A = U S V^T$
  - U and V represent the eigenvectors (of $AA^T$ and $A^TA$ respectively) and S represents the square roots of the eigenvalues
- The basis for Principal Component Analysis
  - Used to reduce a complicated multi-variable data set to its principal components, that is, factor it using SVD
  - The goal would be to reduce the dimensionality of the space
  - Also known as the Karhunen-Loeve Transform (KLT), proper orthogonal decomposition (POD), empirical orthogonal functions (meteorology and geophysics), or Hotelling transform (economics and imaging), with modifications to fit the field
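A short Octave check that the singular values are the square roots of the eigenvalues of $AA^T$ (illustrative matrix):

    % SVD: A = U*S*V'
    A = [3 2 2; 2 3 -2];
    [U, S, V] = svd(A);
    err = norm(A - U*S*V')                     % ~0
    sv  = diag(S)'                             % singular values: 5, 3
    ev  = sqrt(sort(eig(A*A'), 'descend'))'    % same values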
SVD Decomposition
- Decomposes a matrix into eigenvectors and eigenvalues (in essence, really those of $AA^T$ and $A^TA$)
- The basis for Principal Component Analysis
  - Scree test: graph the eigenvalues and keep the larger-valued eigenvalues
  - Use only the most important eigenvectors and eigenvalues
Special Matrices
- Banded matrices
  - Coefficients banded about the center (diagonal)
  - LU decomposition is no good; other methods exist
- Sparse matrices
  - Very few coefficients, scattered throughout
  - Special methods: fast, handle big matrices
- Iterative elimination solutions
  - Gauss-Seidel (sketched below)
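A minimal Gauss-Seidel sketch in Octave; the matrix here is a made-up diagonally dominant example so the iteration converges:

    % Gauss-Seidel iteration for A*x = b (A should be diagonally dominant)
    A = [4 1 1; 1 5 2; 1 2 6];
    b = [6; 8; 9];
    x = zeros(3,1);
    for iter = 1:50
      for i = 1:3
        % use the newest values of x as soon as they are available
        x(i) = (b(i) - A(i,:)*x + A(i,i)*x(i)) / A(i,i);
      end
    end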
Determinants and Tensors
- Determinants
  - The sum of the signed permutations of a matrix
  - Can be used to get the inverse of a matrix (and hence to solve a system of equations using the cofactors)
  - Cramer's method: usually taught in linear algebra; in general it doesn't work for large matrices, so its value is limited
- Tensors!!!!! (Won't review, as we did this in EGR 1010)
  - Tensor 0th order: scalar
  - Tensor 1st order: vector (dot product, cross product)
  - Tensor 2nd order and greater: tensor (direct product; stresses, etc.)
Spaces
- Fields
  - A set of scalars in a region (say, all the potentials in a square area)
  - A vector field is a set of vectors in a region; not necessarily a vector space
- Vector space
  - A set of vectors with defined operations on them
  - Could think of this as like an object in programming languages
  - Spaces are useful definitions for mathematicians
- Subspace
  - A vector subspace is a subset of a vector space that is itself a vector space
  - All vector spaces have at least two subspaces: the space itself and the zero (trivial) subspace
- These ideas should be fully developed in a good linear algebra class
Eigenmath
- $A x = \lambda x$
  - Given a matrix A, x is defined as an eigenvector and $\lambda$ as an eigenvalue
- $(\lambda I - A) x = 0$, where $\det(\lambda I - A) = 0$ is the characteristic equation of A
- Each $\lambda$ has a set of eigenvectors that define the eigenspace of A
- Note that for a triangular matrix and a diagonal matrix, the eigenvalues of A are just the values on the diagonal
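The characteristic equation worked in Octave (illustrative matrix with eigenvalues 3 and 1):

    % Eigenvalues as roots of the characteristic equation det(lambda*I - A) = 0
    A = [2 1; 1 2];
    p = poly(A);    % coefficients of the characteristic polynomial of A
    roots(p)        % gives 3 and 1
    eig(A)          % same values directly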
Linear Transformations
- Transformations are the basis for systems control descriptions
- Basically you change from one space to another space
  - Remember EGR 1010: the Fourier transform changes your signal from time space to frequency space
  - Remember EGR 1010: the Laplace transform changes your signal from time space to s space
- A simple transform would be the rotation or reflection transform
  - Say you have a vector from (0,0) to (x1, y1)
  - We can rotate it or reflect it with a very simple matrix using ones and negative ones; try it (see the sketch below)
- More complicated descriptions would involve spaces
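A quick try in Octave; the vector (3, 1) is a made-up illustrative value:

    % Rotate and reflect the vector (x1, y1) with simple +/-1 matrices
    v = [3; 1];              % the vector (x1, y1)
    R90 = [0 -1; 1 0];       % rotate 90 degrees counterclockwise
    Mx  = [1 0; 0 -1];       % reflect across the x-axis
    R90 * v                  % -> (-1, 3)
    Mx  * v                  % -> (3, -1)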
Control Blocks
- General block for control [block diagram]
- Example for numerical derivative (think Taylor) [block diagram]
- Example of a derivative (transfer function) [block diagram]
- General block for summing/multiplying, etc. [block diagram]
- Example combining blocks [block diagram]
Sampling
- In the digital world we need to take what would be an analog signal and sample it
- So if we take f(t) and sample it at equal spacing, we will have a set of points $f_n$, where n goes from 1 to N (say)
- When taking these samples we only keep a few digits, which we round (as opposed to the infinite precision that only works in a theoretical world); we refer to this as quantization of the signal (that is the rounding, not the sampling)
- There are obvious problems with this with regard to error
  - We discussed the error in the numerical analysis portion of the class and will only briefly mention it here
Sampling
- This digitized sample is what we will transform
- In the numerical methods portion we already did some of these transformations, though we didn't express them in the jargon of signals
- We can smooth or fit, etc., using filters
  - This is akin to transforming the input
- A typical filter is the nonrecursive filter
Sampling
- Non-recursive filter ("transforming")
  - Filter coefficients are represented by $c_k$ (since these are constants, this is a time-invariant filter)
- Types of nonrecursive filters
  - Finite impulse response (FIR) filter
    - Easier to implement than an IIR filter (see later)
    - This is the most general name; the names below are the same thing
  - Transversal filter
  - Tapped delay line filter
  - Moving average filter (common, sketched below)
- $u_n = \sum_{k=-\infty}^{\infty} c_k f_{n-k}$
- Example: $u_n = \frac{1}{5}(f_{n-2} + f_{n-1} + f_n + f_{n+1} + f_{n+2})$
- Example 2 (flat top?): $u_n = \frac{1}{35}(-3 f_{n-2} + 12 f_{n-1} + 17 f_n + 12 f_{n+1} - 3 f_{n+2})$
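A quick Octave sketch of the 5-point moving average on noisy made-up data:

    % 5-point moving average as an FIR filter
    f = sin(2*pi*(0:99)/50) + 0.2*randn(1,100);   % noisy signal (made up)
    c = ones(1,5)/5;                              % filter coefficients c_k
    u = conv(f, c, 'same');                       % smoothed output u_n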
Sampling
- Non-recursive filter ("transforming")
  - A set of coefficients multiplies a strip of function points to create one point ($u_n$)
  - To get the next point ($u_{n+1}$ or $u_{n-1}$), the coefficients shift and multiply the function points again
  - This is referred to as a convolution
- $u_n = c_{-2} f_{n+2} + c_{-1} f_{n+1} + c_0 f_n + c_1 f_{n-1} + c_2 f_{n-2}$
- $u_{n-1} = c_{-2} f_{n+1} + c_{-1} f_n + c_0 f_{n-1} + c_1 f_{n-2} + c_2 f_{n-3}$
Sampling
- These filters can be known as windowing
  - Rectangular function (first example)
  - Bartlett window (triangular function/window)
  - Hann function (Hanning window)
  - Bartlett-Hann window
  - Hamming function/window
  - Blackman function/window
  - Lanczos function (sinc window)
  - Gaussian function/window
  - Kaiser window (Bessel function/window)
  - Tukey function/window
Sampling
- These filters can be known as windowing
  - Cosine function/window
  - Connes function/window
  - Kaiser function/window
  - Spencer window (usually used in accounting)
  - Welch window (improvement on the Bartlett method)
Sampling
- These filters can be known as windowing
  - Nuttall window
  - Blackman-Harris window
  - Blackman-Nuttall window
  - Poisson window
  - Hann-Poisson window
  - Rife-Vincent window (used for tones, music)
  - DPSS (discrete prolate spheroidal sequences) window (Slepian window)
Sampling
- Recursive filter ("transforming")
  - Filter coefficients are represented by $c_k$ and $d_k$
  - Akin to a feedback and feedforward system
- Types of recursive filters
  - Infinite Impulse Response (IIR) filter
  - Ladder filter
  - Lattice filter
  - Wave Digital Filter (WDF)
    - All coefficients are physical (spring, mass, damper or inductor, capacitor, resistor)
  - Autoregressive (integrated) moving average filter (ARMA or ARIMA)
- $u_n = \sum_{k=-\infty}^{\infty} c_k f_{n-k} + \sum_{k=1}^{\infty} d_k u_{n-k}$
- Example: $u_n = u_{n-1} + \frac{1}{2}(f_n + f_{n-1})$ (trapezoid rule)
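The trapezoid-rule example written as an IIR filter with Octave's filter (unit sample spacing assumed; the f values are made up):

    % Trapezoid rule as a recursive filter: u(n) = u(n-1) + (f(n)+f(n-1))/2
    f = [0 1 4 9 16];                  % samples of f(t) = t^2 at t = 0,1,2,3,4
    u = filter([0.5 0.5], [1 -1], f)   % -> [0 0.5 3 9.5 22]
    % since f(1) = 0 this matches cumtrapz(f) exactly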
Sampling
- All the numerical analysis that we did previously can be recast as digital filters; this includes:
  - Fitting (least-squares and more)
  - Smoothing
  - Differences and derivatives
  - Integration
Aliasing
- Problems in sampling: aliasing
  - Different frequencies are found at (aliased to) another frequency
  - Seen typically in filmed car wheel rotation, when we see the wheel slow down, stop, and maybe move backwards depending on the speed
  - Sample a sine wave at different intervals... different frequencies appear indistinguishable
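A small Octave demonstration of aliasing; the sample rate and tone frequencies are made-up values:

    % Two sines that alias to the same samples at rate fs = 10 Hz
    fs = 10; t = (0:9)/fs;
    x1 = sin(2*pi*2*t);      % 2 Hz tone
    x2 = sin(2*pi*12*t);     % 12 Hz tone, above the Nyquist rate fs/2
    max(abs(x1 - x2))        % ~0: the sampled points are identical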