Linear Algebra in Numerical Methods
Lecture on linear algebra. MATLAB/Octave works well with linear algebra.


Linear Algebra
- A pseudo-algebra that deals with systems of equations and the transformations of those equations
- This linear algebra is technically not an algebra by definition
- Algebra proper (not linear algebra) studies vectors (and vector fields), matrices, tensors (and tensor fields), quaternions, and abstract concepts like groups and rings; this is sometimes referred to as abstract algebra
- Elementary algebra is the algebra learned in secondary school
- BTW, there is a "linear algebra" that is more akin to algebra proper, but it is not the linear algebra most people refer to

Linear Algebra
- A pseudo-algebra that deals with systems of equations and the transformations of those equations
- Fitting and smoothing (covered in a different lecture) are applications of this algebra
- Solving systems of equations is an application of this algebra
- The biggest issue is the inverse of a matrix
- Normally linear algebra works on solving the following problem and its solution (sketched below):

    A x = y
    x = A^-1 y
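A minimal Octave/MATLAB sketch (the 2x2 system is a made-up example); in practice the backslash operator solves A x = y without explicitly forming the inverse:

    A = [2 1; 1 3];          % coefficient matrix (made-up values)
    y = [3; 5];              % right-hand side
    x1 = inv(A) * y;         % explicit inverse: x = A^-1 y
    x2 = A \ y;              % backslash solves A x = y without forming inv(A)
    disp(norm(x1 - x2))      % same answer; backslash is cheaper and more stable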

Linear Algebra
- This linear algebra machinery also leads to transforms (taken up later in this lecture)

System of equations
- Linear systems of equations
  - Graphical solution (tough for big systems)
  - Gauss method: uses a procedure similar to what we used for the simplex method; just a fancy method of solving linear equations by addition and substitution
- Non-linear systems of equations (not linear algebra, BTW)
  - Set the system of equations equal to zero
  - Use a root-finding method in multiple dimensions: Newton-Raphson
  - Set up in matrix form and use Gauss elimination (see the sketch below)
  - Other similar but better methods exist
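A hedged sketch of Newton-Raphson for a nonlinear system in Octave/MATLAB; the example system (x1^2 + x2^2 = 4, x1 x2 = 1) and the starting guess are made up:

    f = @(x) [x(1)^2 + x(2)^2 - 4; x(1)*x(2) - 1];  % system set equal to zero
    J = @(x) [2*x(1), 2*x(2); x(2), x(1)];          % Jacobian matrix
    x = [2; 0.5];                                   % initial guess (made up)
    for k = 1:20
      dx = J(x) \ (-f(x));     % each Newton step is a Gauss-elimination solve
      x = x + dx;
      if norm(dx) < 1e-12, break; end
    end
    disp(f(x))                 % ~0 at the root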

Eigen ("own" value)
- Eigenvalues: characteristic roots (values) of a linear system of equations
- Eigenvectors: vectors associated with a linear system of equations
- Eigenfunction: a function that is operated on by some operator and has associated eigenvalues from this operation (in essence, a transformed function that has eigenvalues from a particular operation)
- Solved using different decomposition methods

Decomposition
- There are a number of matrix decomposition methods; each decomposes a matrix into simpler matrices in order to make a time-consuming operation easier
- Eigendecomposition: matrix A is decomposed into a matrix of eigenvectors, V, and a diagonal matrix, D, with the eigenvalues on the diagonal; given that V is square and invertible (sketched below),

    A = V D V^-1

- LU decomposition: works with square matrices; solves linear equations
- QR decomposition: works with rectangular matrices; solves linear equations
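A minimal Octave/MATLAB sketch of the eigendecomposition above (the 2x2 matrix is a made-up example):

    A = [4 1; 2 3];             % made-up example matrix
    [V, D] = eig(A);            % V: eigenvector columns, D: diagonal eigenvalues
    disp(norm(A - V * D / V))   % V*D/V is V*D*inv(V); result should be ~0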

Decomposition (cont.)
- More matrix decomposition methods:
  - Singular Value Decomposition (SVD)
  - Cholesky decomposition: works with symmetric positive definite matrices; faster than LU decomposition when it can be used
  - Schur decomposition: works with complex square matrices
  - Hessenberg decomposition: decomposes into a unitary matrix, (U*)^T = U^-1 (for real matrices, the same as orthogonal), and a Hessenberg matrix (a special banded matrix: upper triangle plus one band below the diagonal is nonzero)
- For eigenvalues and eigenvectors, many applications perform a Hessenberg decomposition and then a Schur decomposition (see the sketch below)
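A sketch of the decompositions named above using built-in Octave/MATLAB calls (the matrices are made-up examples):

    S = [4 1; 1 3];             % symmetric positive definite (made up)
    R = chol(S);                % Cholesky: S = R' * R with R upper triangular
    A = [4 1 2; 2 3 1; 1 0 5];  % made-up square matrix
    [P, H] = hess(A);           % Hessenberg: A = P * H * P'
    [U, T] = schur(A);          % Schur: A = U * T * U'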

Matrix Definitions
- A matrix can have different operations done to it that are useful in linear algebra
- Transpose: the elements of the matrix are transposed; B = A^T, so given A_ij, then B_ij = A_ji
- Adjugate (formerly called the adjoint; "adjoint" now usually means the conjugate transpose): the transpose of the cofactor matrix of a matrix,

    C_ij = (-1)^(i+j) cofactor(A_ij)
    adj(A) = C^T

Inverse of Matrix
- A matrix can have different operations done to it that are useful in linear algebra
- The inverse of a matrix is the matrix that, when multiplied by the original matrix, equals the identity matrix: I = A^-1 A
- The inverse is the adjugate divided by the determinant (checked in the sketch below):

    A^-1 = adj(A) / det(A)

- The inverse of a triangular matrix is itself triangular and lends itself to an easy closed form, since the zeros in the other triangular half eliminate lots of the cofactor determinants
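A quick Octave/MATLAB check of the adjugate formula at 2x2 (the matrix values are made up):

    A = [3 1; 2 5];                            % made-up values
    adjA = [A(2,2) -A(1,2); -A(2,1) A(1,1)];   % transpose of the cofactor matrix
    disp(norm(inv(A) - adjA / det(A)))         % A^-1 = adj(A)/det(A), so ~0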

LU Decomposition
- Reduces the time-consuming forward elimination found in Gauss elimination
- Decomposes the matrix into lower (L) and upper (U) triangular matrices
  - L is used to produce an intermediate vector through forward elimination
  - U is used to produce the answer from the intermediate vector
- Many variations improve on this simple description
- Very useful for matrix inversion (a very time-consuming task)
- LU reduction in computing is a parallelized version of LU decomposition
- The method is a modified Gaussian elimination called the Doolittle algorithm (the LUP variant uses the Crout algorithm); see the sketch below
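A minimal Octave/MATLAB sketch of LU with partial pivoting (the matrix is a made-up example):

    A = [2 1 1; 4 3 3; 8 7 9];  % made-up example matrix
    [L, U, P] = lu(A);          % P*A = L*U, L lower and U upper triangular
    disp(norm(P*A - L*U))       % ~0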

LU Decomposition Applications
- First: solve a set of linear equations in two steps
- The advantage is that it avoids repeating Gaussian elimination (though getting the LU decomposition uses a similar process, so this only pays off if the A matrix is used multiple times)
- Problem and solution (sketched below):

    A x = b  becomes  L U x = b
    solve L y = b for y   (y = L^-1 b)
    then solve U x = y for x   (x = U^-1 y)
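The two-step solve as an Octave/MATLAB sketch, reusing L, U, P, and A from the sketch above (the right-hand side b is made up):

    b = [1; 2; 3];              % made-up right-hand side
    y = L \ (P * b);            % forward substitution: solve L y = P b
    x = U \ y;                  % back substitution: solve U x = y
    disp(norm(A*x - b))         % ~0; L, U, P can be reused for many b vectors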

LU Decomposition Applications
- Solves the inverse; this is normally how computer applications like MATLAB perform the inverse of a matrix, see the examples below
- Solves determinants quickly, see the examples below

    A^-1 = U^-1 L^-1 P          (with row pivoting, P A = L U)
    det(A) = det(L) det(U)      (up to the sign of the permutation P)

- MATLAB: [l,u,p] = lu(a), then inv(u)*inv(l)*p gives inv(a); det(a) equals det(l)*det(p*u) and det(p*l)*det(u)
- Octave: [l,u] = lu(a) (the permutation folded into l), then inv(u)*inv(l) = inv(a)

QR Decomposition
- Used to solve least-squares problems (see the sketch below)
- Used to solve linear equations in the same manner as LU decomposition
- Can be used to get an orthonormal basis for a set of vectors
- Octave: [q,r] = qr(a), then inv(r)*inv(q) = inv(a) (and since q is orthogonal, inv(q) is just q')
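A hedged Octave/MATLAB sketch of least squares via QR (the straight-line data below is made up):

    A = [1 1; 1 2; 1 3; 1 4];   % design matrix for a straight-line fit (made up)
    b = [1.1; 1.9; 3.2; 3.9];   % made-up observations
    [Q, R] = qr(A, 0);          % "economy" QR: A = Q*R with R square
    x = R \ (Q' * b);           % least-squares fit without forming A'*A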

SVD Decomposition
- Decomposes a matrix into eigenvectors and eigenvalues (in essence, really those of A A^T and A^T A)
- Can be done on any type of matrix (non-square, sparse, singular, large, etc.)
- More than one type of SVD exists
- Think of this method as factoring a matrix, normally into three simpler matrices (sketched below):

    A = U S V^T

  U and V hold the eigenvectors (of A A^T and A^T A respectively), and S holds the square roots of the eigenvalues (the singular values)
- The basis for Principal Component Analysis
  - Used to reduce a complicated multi-variable data set to its principal components; that is, factor it using SVD
  - The goal is to reduce the dimensionality of the space
  - Also known as the Karhunen-Loeve Transform (KLT), proper orthogonal decomposition (POD), empirical orthogonal functions (meteorology and geophysics), or Hotelling transform (economics and imaging), with modifications to fit the field
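A minimal Octave/MATLAB sketch (the rectangular matrix is a made-up example):

    A = [1 2; 3 4; 5 6];         % made-up rectangular matrix
    [U, S, V] = svd(A);          % A = U*S*V'; diag(S) are the singular values
    disp(norm(A - U*S*V'))       % ~0
    disp(sqrt(eig(A'*A)))        % matches diag(S) up to ordering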

SVD Decomposition
- Decomposes a matrix into eigenvectors and eigenvalues (in essence, really those of A A^T and A^T A)
- The basis for Principal Component Analysis
  - Scree test: graph the eigenvalues and keep the larger-valued ones
  - Use only the most important eigenvectors and eigenvalues (see the truncation sketch below)
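A sketch of keeping only the dominant singular values and vectors, with A as in the sketch above:

    [U, S, V] = svd(A);                      % A from the previous sketch
    k = 1;                                   % reduced dimensionality
    Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';  % best rank-k approximation of A
    disp(norm(A - Ak))                       % equals the first dropped singular value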

Special Matrices
- Banded matrices: coefficients banded about the center (diagonal)
  - Plain LU decomposition is no good here; other, band-aware methods exist
- Sparse matrices: very few nonzero coefficients scattered throughout
  - Special methods: fast, handle big matrices
  - Iterative elimination solutions such as Gauss-Seidel (sketched below)
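A hedged sketch of Gauss-Seidel iteration in Octave/MATLAB; the small system below is made up and diagonally dominant so the iteration converges:

    A = [4 -1 0; -1 4 -1; 0 -1 4];   % made-up, diagonally dominant
    b = [2; 6; 2];
    x = zeros(3, 1);                 % initial guess
    for iter = 1:100                 % sweep through the unknowns repeatedly
      for i = 1:3
        j = [1:i-1, i+1:3];          % all indices except i
        x(i) = (b(i) - A(i,j) * x(j)) / A(i,i);
      end
    end
    disp(norm(A*x - b))              % ~0 once converged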

Determinants and Tensors
- Determinants
  - The sum of the signed permutations of a matrix
  - Can be used to get the inverse of a matrix (and hence to solve a system of equations using the cofactors)
  - Cramer's method: usually taught in linear algebra; in general it doesn't work for large matrices, so its value is limited (see the sketch below)
- Tensors!!!!! (won't review, as we did this in EGR 1010)
  - Tensor 0th order: scalar
  - Tensor 1st order: vector (dot product, cross product)
  - Tensor 2nd order and greater: tensor (direct cross product; stresses, etc.)
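A sketch of Cramer's method at 2x2 in Octave/MATLAB (values are made up); fine at this size, far too slow for large matrices:

    A = [2 1; 1 3];  b = [3; 5];      % made-up 2x2 system
    x1 = det([b, A(:,2)]) / det(A);   % replace column 1 with b
    x2 = det([A(:,1), b]) / det(A);   % replace column 2 with b
    disp(A * [x1; x2] - b)            % ~0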

Spaces
- Scalar field: a set of scalars in a region (say, all the potentials in a square area)
- Vector field: a set of vectors in a region; not necessarily a vector space
- Vector space: a set of vectors with defined operations on them
- Subspace: a vector subspace is a subset of a vector space that is itself a vector space; you could think of this as an object in programming languages
  - All vector spaces have at least two subspaces: the space itself and the zero subspace {0}
- Spaces are useful definitions for mathematicians; these ideas should be fully developed in a good linear algebra class

Eigenmath
- Given a matrix A, x is defined as an eigenvector and lambda as its eigenvalue:

    A x = lambda x
    (lambda I - A) x = 0

  where det(lambda I - A) = 0 is the characteristic equation of A
- Each eigenvalue lambda has a set of eigenvectors that define the eigenspace of A
- Note that for a triangular matrix or a diagonal matrix, the eigenvalues of A are just the values on the diagonal (see the sketch below)
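A minimal Octave/MATLAB sketch (the triangular matrix is a made-up example):

    A = [2 1; 0 3];      % triangular and made up; eigenvalues are the diagonal
    disp(eig(A))         % -> 2 and 3
    p = poly(A);         % coefficients of the characteristic polynomial
    disp(roots(p))       % the same eigenvalues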

Linear Transformations
- Transformations are the basis for systems-control descriptions
- Basically you change from one space to another space
  - Remember EGR 1010: the Fourier transform changes your signal from time space to frequency space
  - Remember EGR 1010: the Laplace transform changes your signal from time space to s space
- A simple transform would be a rotation or reflection
  - Say you have a vector from (0,0) to (x_1, y_1)
  - We can rotate or reflect it with a very simple matrix using ones and negative ones; try it (sketched below)
- More complicated descriptions would involve spaces
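To "try it", a minimal Octave/MATLAB sketch (the vectors are made up):

    v = [1; 0];                 % made-up vector from (0,0) to (1,0)
    R90 = [0 -1; 1 0];          % rotation by 90 degrees: only zeros and +/-1
    F = [1 0; 0 -1];            % reflection across the x-axis
    disp(R90 * v)               % -> (0, 1)
    disp(F * [1; 1])            % -> (1, -1)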

Control Blocks
[Figure slides: a general control block; an example numerical-derivative block (think Taylor); a derivative transfer-function block; a general summing/multiplying block; an example of combining blocks.]

Sampling
- In the digital world we need to take what would be an analog signal and sample it
- So if we take f(t) and sample it at equal spacings, we have a set of points f_n where n goes from 1 to I (say)
- When taking these samples we keep only a few digits, which we round (as opposed to the infinite precision that only works in a theoretical world); we refer to this as quantization of the signal, meaning the rounding, not the sampling
- There are obvious error problems with this; we discussed the error in the numerical analysis portion of the class and will only briefly mention it here

Sampling
- This digitized sample is what we will transform
- In the numerical methods portion we already did some of these transformations, though we didn't express them in the jargon of signals
- We can smooth or fit, etc., using filters; this is akin to transforming the input
- A typical filter is the nonrecursive filter

Sampling
- Non-recursive filter ("transforming")
- Filter coefficients are represented by c_k (since these are constants, this is a time-invariant filter)
- Types of nonrecursive filters
  - Finite impulse response (FIR) filter: easier to implement than the IIR filter (see later); this is the most general name, and the names below mean the same thing
  - Transversal filter
  - Tapped delay line filter
  - Moving average filter (common), sketched after this list:

    u_n = sum over k of c_k f_(n-k)

    Example:                u_n = (1/5) (f_(n-2) + f_(n-1) + f_n + f_(n+1) + f_(n+2))
    Example 2 (flat top?):  u_n = (1/35) (-3 f_(n-2) + 12 f_(n-1) + 17 f_n + 12 f_(n+1) - 3 f_(n+2))
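A sketch applying the 5-point moving average above with conv in Octave/MATLAB (the noisy test signal is made up):

    f = sin(linspace(0, 2*pi, 100)) + 0.1*randn(1, 100);  % made-up noisy signal
    c = ones(1, 5) / 5;         % the 5-point moving-average coefficients c_k
    u = conv(f, c, 'same');     % smoothed output, same length as f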

Sampling
- Non-recursive filter ("transforming")
- A set of coefficients multiplies a strip of function points to create one point (u_n)
- To get the next point (u_(n+1) or u_(n-1)), the coefficients shift and multiply the function points again; this is referred to as a convolution:

    u_n     = c_(-2) f_(n+2) + c_(-1) f_(n+1) + c_0 f_n     + c_1 f_(n-1) + c_2 f_(n-2)
    u_(n-1) = c_(-2) f_(n+1) + c_(-1) f_n     + c_0 f_(n-1) + c_1 f_(n-2) + c_2 f_(n-3)

Sampling
- These filters can be known by their windows:
  - Rectangular function (the first example above)
  - Bartlett window (triangular function/window)
  - Hann function (Hanning window)
  - Bartlett-Hann window
  - Hamming function/window
  - Blackman function/window
  - Lanczos function (sinc window)
  - Gaussian function/window
  - Kaiser window (Bessel function/window)
  - Tukey function/window

Sampling
- These filters can be known by their windows (cont.):
  - Cosine function/window
  - Connes function/window
  - Kaiser function/window
  - Spencer window (usually used in accounting)
  - Welch window (an improvement on the Bartlett method)

Sampling
- These filters can be known by their windows (cont.):
  - Nuttall window
  - Blackman-Harris window
  - Blackman-Nuttall window
  - Poisson window
  - Hann-Poisson window
  - Rife-Vincent window (used for tones/music)
  - DPSS (discrete prolate spheroidal sequences) window (Slepian window)

Sampling
- Recursive filter ("transforming")
- Filter coefficients are represented by c_k and d_k; akin to a feedback and feedforward system
- Types of recursive filters
  - Infinite impulse response (IIR) filter
  - Ladder filter
  - Lattice filter
  - Wave digital filter (WDF): all coefficients are physical (spring, mass, damper or inductor, capacitor, resistor)
  - Autoregressive (integrated) moving average filter (ARMA or ARIMA), sketched after this list:

    u_n = sum over k of c_k f_(n-k) + sum over k of d_k u_(n-k)

    Example (trapezoid rule):  u_n = u_(n-1) + (1/2) [f_n + f_(n-1)]
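A hedged sketch of the trapezoid-rule example above as a recursive filter, using Octave/MATLAB's filter(b, a, f); the input signal and step size are made up:

    h = 0.01;  t = 0:h:1;       % made-up step size and time grid
    f = 2 * t;                  % integrand; the integral of 2t is t.^2
    b = [1/2, 1/2];             % c coefficients on f_n and f_(n-1)
    a = [1, -1];                % u_n - u_(n-1) on the output side
    u = h * filter(b, a, f);    % running trapezoid sum, scaled by the step size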

Sampling
- All the numerical analysis that we did previously can be recast as digital filters; this includes:
  - Fitting (least-squares and more)
  - Smoothing
  - Differences and derivatives
  - Integration

Aliasing
- Problems in sampling: aliasing
- Different frequencies show up as (are aliased to) another frequency
- Typically seen in car wheel rotation, when the wheel appears to slow down, stop, and maybe move backwards depending on the car's speed
- Sample a sine wave at different intervals: different frequencies appear indistinguishable (see the sketch below)
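A minimal Octave/MATLAB sketch of aliasing (the 1 Hz / 11 Hz / 10 Hz values are made-up illustrative choices):

    fs = 10;  n = 0:20;  t = n / fs;   % sample at 10 Hz
    s1 = sin(2*pi*1*t);                % 1 Hz sine
    s2 = sin(2*pi*11*t);               % 11 Hz sine aliases down to 1 Hz
    disp(max(abs(s1 - s2)))            % ~0: the samples are indistinguishable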
