Lecture 6. Numerical methods. Approximation of functions



Lecture 6 OUTLINE
1. Approximation and interpolation
2. Least-squares method: basis functions, design matrix, residual, weighted least squares, normal equations, Gramian matrix, examples, solution of overdetermined systems
3. Repetition

Approximation and interpolation. To approximate a function f(x) means to replace it by a function φ(x) that is, in some sense, close to f(x). We will deal with two basic types of approximation: interpolation and the least-squares method. Definition: Interpolation is an approximation in which the function φ(x) passes exactly through the given points [x_i, y_i], where y_i = f(x_i). Sometimes we also require that f and φ have the same derivatives at the points x_i.

Approximation and interpolation. Definition: The least-squares method is an approximation in which φ(x) is threaded between the given points [x_i, y_i] in such a way that the distance between the functions f and φ is minimal in some sense. Usually the function φ(x) does not pass through the points [x_i, y_i].


Least-squares method. The least-squares method is a procedure for the approximate solution of overdetermined or inaccurately defined linear systems, based on minimizing the sum of squared residuals. Curve fitting is an important class of problems that can be solved by the least-squares method; we describe it below.

Least-squares method. Let t be an independent variable, e.g. time, and let y(t) be an unknown function of t that we want to approximate. Suppose that we performed m measurements, i.e. the values y_i = y(t_i) were measured at specified times t_i, i = 1, ..., m. Our aim is to model y(t) by a linear combination of n basis functions φ_1(t), ..., φ_n(t), for some n ≤ m: R_n(t) = x_1 φ_1(t) + x_2 φ_2(t) + ... + x_n φ_n(t). We choose the basis functions according to the expected course of the unknown function y(t); we then have to estimate the parameters x_1, ..., x_n. The function R_n(t) is called (in statistics) a linear regression function.

Design matrix. The design matrix A of the model is a rectangular matrix with m rows and n columns and entries A_ij = φ_j(t_i); its j-th column a_j = (φ_j(t_1), ..., φ_j(t_m))^T collects the values of the j-th basis function at the measurement times. The matrix formulation of the model is A x ≈ y, where y = (y_1, ..., y_m)^T are the measured data and x = (x_1, ..., x_n)^T is the vector of unknown parameters.
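As a concrete illustration, here is a minimal NumPy sketch that builds a design matrix and estimates the parameters by least squares; the data values, the polynomial basis φ_j(t) = t^(j-1) and the helper name build_design_matrix are illustrative assumptions, not taken from the lecture:

import numpy as np

def build_design_matrix(t, n):
    # A[i, j] = phi_j(t_i) with the polynomial basis phi_j(t) = t**(j-1), j = 1..n
    return np.column_stack([t**j for j in range(n)])

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # measurement times t_i (made-up data)
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])    # measured values y_i (made-up data)

A = build_design_matrix(t, 2)              # m x n design matrix, here m = 5, n = 2
x, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares estimate of the parameters
r = y - A @ x                              # residual vector

Here np.linalg.lstsq minimizes ||y - A x|| directly; the following slides derive the same solution via the normal equations and, later, via the SVD and the QR decomposition.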

Least-squares method. Residuals are the differences between measured and modelled data: r_i = y_i - R_n(t_i), i = 1, ..., m; in matrix form r = y - A x. We want to determine the parameters x_j so that the residual is minimal. The least-squares method is obtained by solving the quadratic minimization problem min_x ||y - A x||^2 = min_x Σ_i r_i^2.

Least-squares method: principle of least squares (figure).

Least-squares method. The approximate solution of an overdetermined system A x = y (i.e. more equations than unknowns) that minimizes the residual r = y - A x is called the least-squares solution of the linear system.

Least-squares method. Sometimes we have to use weighted linear least squares: if the measurements are not equally reliable, we assign a weight w_i to each measurement and minimize the sum of weighted squares Σ_i w_i r_i^2. If, for example, the error of the i-th measurement is approximately e_i, we choose w_i = 1/e_i^2. Every method for solving the unweighted least-squares problem can also be used for the weighted one: it suffices to multiply y_i and the i-th row of A by sqrt(w_i).
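A small sketch of this row-scaling trick, under the convention above (weights w_i, rows scaled by sqrt(w_i)); the function name weighted_lstsq is illustrative:

import numpy as np

def weighted_lstsq(A, y, w):
    # Multiply y_i and the i-th row of A by sqrt(w_i), then solve the
    # ordinary (unweighted) least-squares problem.
    s = np.sqrt(w)
    return np.linalg.lstsq(A * s[:, None], y * s, rcond=None)[0]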


Normal equations. The solution of the minimization problem has to fulfil the necessary condition for an extremum: the partial derivatives of ||y - A x||^2 with respect to x_1, ..., x_n must equal zero. Differentiating, we obtain -2 Σ_i A_ij (y_i - Σ_k A_ik x_k) = 0 for j = 1, ..., n, and hence Σ_i Σ_k A_ij A_ik x_k = Σ_i A_ij y_i, which can be written in matrix form as A^T A x = A^T y.

Normal equations. The linear system A^T A x = A^T y is known as the normal equations. If the columns of the matrix A are linearly independent, then the matrix A^T A is positive definite and the solution x* of the normal equations is the unique solution of the minimization problem, i.e. ||y - A x*|| ≤ ||y - A x|| for all x.
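A minimal sketch of forming and solving the normal equations in NumPy, using made-up data and a straight-line model (basis 1, t); the variable names are illustrative:

import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # made-up measurement times
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])        # made-up measured values
A = np.column_stack([np.ones_like(t), t])      # design matrix for the basis 1, t

G = A.T @ A                                    # Gramian matrix A^T A (positive definite here)
b = A.T @ y                                    # right-hand side A^T y
x_star = np.linalg.solve(G, b)                 # solution of the normal equations

print(np.allclose(x_star, np.linalg.lstsq(A, y, rcond=None)[0]))  # agrees with direct least squares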

Normal equations. If we express the normal equations using the columns a_1, ..., a_n of A, we get Σ_j (a_i, a_j) x_j = (a_i, y), i = 1, ..., n, where (a_i, a_j) = a_i^T a_j and (a_i, y) = a_i^T y are scalar products of the vectors a_i, a_j and y. The matrix G of this system, with entries G_ij = (a_i, a_j), is called the Gramian matrix of the set of vectors a_1, ..., a_n.

Normal equations. When designing the approximation R_n(t) we should select the basis functions φ_j so that the columns of the matrix A are linearly independent. If they are not, it can be shown that the minimization problem has infinitely many solutions, which is obviously not desirable.

Normal equations. Two important special cases in which the columns of the matrix A are linearly independent:
1. φ_j(t) is a polynomial of degree j-1, e.g. φ_j(t) = t^(j-1), so the basis is 1, t, t^2, ..., t^(n-1), and the measurement times t_i are mutually distinct;
2. for n = 2N+1, where N is a nonnegative integer, we choose the basis 1, cos t, sin t, cos 2t, sin 2t, ..., cos Nt, sin Nt, and the measurement times t_i are chosen from an interval [c, c+2π), where c is an arbitrary number.
The approximation R_n(t) is in the first case an algebraic polynomial and in the second case a trigonometric polynomial (both design matrices are sketched below).
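Sketches of the two design matrices, assuming exactly the bases written above; poly_basis and trig_basis are illustrative helper names:

import numpy as np

def poly_basis(t, n):
    # Case 1: algebraic polynomial basis 1, t, ..., t**(n-1)
    return np.vander(t, n, increasing=True)

def trig_basis(t, N):
    # Case 2: trigonometric basis 1, cos t, sin t, ..., cos(N t), sin(N t), i.e. n = 2N+1 columns
    cols = [np.ones_like(t)]
    for k in range(1, N + 1):
        cols.append(np.cos(k * t))
        cols.append(np.sin(k * t))
    return np.column_stack(cols)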

Normal equations. If m = n and the matrix A is regular, then x* = A^(-1) y and r = 0, i.e. R_n(t_i) = y_i. However, if the measured data y_i contain errors, it is not desirable for the function R_n(t) to follow those errors. On the contrary, we want R_n(t) to faithfully reconstruct the unknown function y(t), and therefore it is desirable that R_n(t) smooths the measured data. This is possible only if the number of measurements m is much larger than the number of design parameters n, i.e. for m >> n.


Example. For the data given by the table we seek an approximation using least squares and set up the corresponding normal equations.

Example. After computing the sums in the normal equations we obtain a small linear system; its solution gives the coefficients of the sought approximation.

Example. For the data given by the table we will successively approximate the data by polynomials of the first, second and third degree. As basis functions we choose the monomials 1, t, t^2, t^3.

Example: polynomial of the first degree. Solving the normal equations for the two coefficients gives the fitted line; the approximation by a linear polynomial is not good.

Example: polynomial of the second degree. The normal equations give the three coefficients of the quadratic fit; the residual is smaller than in the previous case, but still large.

Example: polynomial of the third degree. The normal equations give the four coefficients of the cubic fit.

Example. If we increase the degree of the polynomial further, we see that the polynomial of the sixth degree passes through all the points, so we obtain the interpolating polynomial. Notice the elements of the Gramian matrices: the larger the degree, the larger the maximal entry. This means that the condition number of the Gramian matrices grows with the degree (the table in the slides shows these condition numbers).
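A NumPy sketch of the same effect on made-up data with seven points (the lecture's own data and table are not given here): the residual shrinks as the degree grows, while the condition number of the Gramian matrix increases:

import numpy as np

t = np.linspace(0.0, 6.0, 7)                         # 7 made-up measurement times
y = np.array([1.0, 2.1, 2.9, 3.5, 5.2, 6.8, 9.1])    # 7 made-up measured values

for degree in range(1, 7):
    A = np.vander(t, degree + 1, increasing=True)    # columns 1, t, ..., t**degree
    G = A.T @ A                                      # Gramian matrix
    x = np.linalg.solve(G, A.T @ y)                  # normal-equation solution
    r = y - A @ x                                    # residual
    print(degree, np.linalg.norm(r), np.linalg.cond(G))
# degree 6 reproduces the data exactly (interpolation), but cond(G) becomes very large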


Solution of overdetermined systems. The normal equations are not suitable for solving large overdetermined systems, because the condition number of the Gramian matrix is considerably larger than the condition number of the design matrix A: cond(A^T A) = cond(A)^2. A better way is to use the singular value decomposition or the QR decomposition.
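The relation cond(A^T A) = cond(A)^2 (in the 2-norm) is easy to check numerically; the random test matrix below is only an illustration:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))   # a random tall design matrix
print(np.linalg.cond(A) ** 2)       # condition number of A, squared
print(np.linalg.cond(A.T @ A))      # condition number of the Gramian; the two values agree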

Singular value decomposition - introduction. If the number of equations M is less than the number of unknowns N, or if M = N but the equations are linearly dependent, then the system has no solution or it has more than one solution. In the latter case the solution space is given by a particular solution plus an arbitrary linear combination of N - M vectors. The task of finding this solution space for a matrix A can be solved using the singular value decomposition of A.

Singular value decomposition - introduction. If the number of equations M is greater than the number of unknowns N, then in general there is no solution vector and the system is called overdetermined. However, we can find a best compromise solution, i.e. the vector that comes closest to satisfying all the equations. If "closest" is defined in the least-squares sense, i.e. the sum of squared residuals is as small as possible, then the overdetermined system reduces to a solvable problem, the least-squares problem.

Singular value decomposition - introduction. The reduced system of equations can be written as the N x N system (A^T A) x = A^T b. These equations are called the normal equations of the least-squares problem. The singular value decomposition has many features in common with the least-squares method, as we show later. The direct solution of the normal equations is in general not the best way to solve a least-squares problem.

Singular value decomposition. SVD is based on the following theorem of linear algebra: every matrix A of type M x N, whose number of rows M is greater than or equal to the number of columns N, can be decomposed into a product A = U W V^T of a matrix U of type M x N with orthonormal columns, a diagonal matrix W of type N x N with positive or zero entries (the singular values), and the transpose of an orthogonal matrix V of type N x N.

Singular value decomposition. The orthogonality of the matrices U and V can be written as U^T U = V^T V = I (the N x N identity matrix).
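A short NumPy check of the decomposition and of these orthogonality relations, using the "thin" SVD so that U has the M x N shape described in the theorem (random matrix for illustration):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))                    # M x N with M >= N

U, w, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD: U is M x N, w are the singular values
W = np.diag(w)                                     # N x N diagonal matrix of singular values

print(np.allclose(A, U @ W @ Vt))                  # A = U W V^T
print(np.allclose(U.T @ U, np.eye(3)))             # U^T U = I
print(np.allclose(Vt @ Vt.T, np.eye(3)))           # V^T V = I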

Singular value decomposition. The SVD can also be computed if M < N. In that case the singular values w_j are all zero for j = M+1, ..., N, and so are the corresponding columns of the matrix U. There are many algorithms for the SVD; a well-proven one is the subroutine svdcmp from Numerical Recipes.

Singular value decomposition for more equations than unknowns. If we have more equations than unknowns, we seek the solution in the least-squares sense. We are solving the system A x = b, where A is of type M x N with M > N.

Singular value decomposition for more equations than unknowns. After computing the SVD of the matrix A, the solution has the form x = V diag(1/w_j) U^T b. In this case it is usually not necessary to set small values w_j to zero; however, unusually small values indicate that the data are not sensitive to some of the parameters.
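A sketch of this SVD-based least-squares solution x = V diag(1/w_j) U^T b; the tolerance tol for treating tiny singular values as zero is an illustrative choice, not part of the lecture:

import numpy as np

def svd_lstsq(A, b, tol=1e-12):
    # x = V diag(1/w_j) U^T b, with singular values below tol * max(w) treated as zero
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.where(w > tol * w.max(), 1.0 / w, 0.0)
    return Vt.T @ (w_inv * (U.T @ b))

For a well-conditioned A this gives the same result as np.linalg.lstsq(A, b, rcond=None)[0].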

QR decomposition. Another useful method for solving least-squares problems uses the QR decomposition of the m x n design matrix A, with m >= n. The QR decomposition writes A as the product of an m x m unitary matrix Q and an m x n upper triangular matrix R, and can be computed using e.g. the Gram-Schmidt process. As the bottom (m - n) rows of the matrix R consist entirely of zeros, it is often useful to partition R, or both R and Q: A = Q R = [Q1 Q2] [R1; 0] = Q1 R1, where R1 is an n x n upper triangular matrix, 0 is an (m - n) x n zero matrix, Q1 is m x n, Q2 is m x (m - n), and Q1 and Q2 both have orthonormal columns. The least-squares solution is then obtained from the triangular system R1 x = Q1^T y.
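A minimal sketch of least squares via the thin QR factorization A = Q1 R1, solving R1 x = Q1^T y; qr_lstsq is an illustrative name:

import numpy as np

def qr_lstsq(A, y):
    # Reduced QR: Q1 is m x n with orthonormal columns, R1 is n x n upper triangular
    Q1, R1 = np.linalg.qr(A)
    # Solve the triangular system R1 x = Q1^T y (scipy.linalg.solve_triangular would
    # exploit the triangular structure; np.linalg.solve is used here to stay NumPy-only)
    return np.linalg.solve(R1, Q1.T @ y)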
