
Least squares: Mathematical theory

Below we provide the "vector space" formulation, and solution, of the least squares problem. While not strictly necessary until we bring in the machinery of matrix algebra, we usually think of a vector as a column with "n" entries, and use the "arrow" notation to denote a vector, e.g. $\vec{u}$, $\vec{v}$, etc. When we get to the matrix formulation, we sometimes drop the "arrow", i.e. if we write $x$ we mean a column matrix.

Basic least-squares problem: Find coefficients $c_1, \dots, c_k$ so as to approximate as closely as possible a given vector $\vec{y}$ by a vector of the specified form $c_1 \vec{v}_1 + \dots + c_k \vec{v}_k$, in the sense that the sum of the squares of the components of the error vector $\vec{e} = \vec{y} - (c_1 \vec{v}_1 + \dots + c_k \vec{v}_k)$ is as small as possible. Alternatively, we can describe the problem as that of getting as close as possible to a given vector $\vec{y}$ using a combination of vectors $\vec{v}_1, \dots, \vec{v}_k$, so that the sum of the squares of the component errors is as small as possible.

Background theory: The dot product of two vectors: If $\vec{u} = (u_1, \dots, u_n)$ and $\vec{v} = (v_1, \dots, v_n)$ then $\vec{u} \cdot \vec{v} = u_1 v_1 + \dots + u_n v_n$. Note that $\vec{u} \cdot \vec{u} = u_1^2 + \dots + u_n^2$. This is the sum of the squares of the components of $\vec{u}$. Geometrically, $\vec{u} \cdot \vec{u}$ can be interpreted as the square of the length of the vector $\vec{u}$, and we write $\vec{u} \cdot \vec{u} = \|\vec{u}\|^2$, where the non-negative symbol $\|\vec{u}\|$ is the "norm" or "length" of $\vec{u}$ and is defined through the dot product, namely $\|\vec{u}\| = (\vec{u} \cdot \vec{u})^{1/2}$. We do not, however, need any geometric arguments here; rather, geometry is simply a motivation for certain definitions. We use only the intrinsic properties of the dot product, which we enumerate below.

Properties of the dot product we wish to distinguish:
$(\vec{u} + \vec{v}) \cdot \vec{w} = \vec{u} \cdot \vec{w} + \vec{v} \cdot \vec{w}$
$(c\vec{u}) \cdot \vec{v} = c\,(\vec{u} \cdot \vec{v})$
$\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u}$
$\vec{u} \cdot \vec{u} \ge 0$, and $\vec{u} \cdot \vec{u} = 0$ if and only if $\vec{u} = \vec{0}$

Note also that $\|c\vec{u}\| = |c|\,\|\vec{u}\|$, which follows from the definition of $\|\vec{u}\|$.

Terminology: If vectors $\vec{u}$ and $\vec{v}$ satisfy $\vec{u} \cdot \vec{v} = 0$ we say that $\vec{u}$ and $\vec{v}$ are orthogonal to each other, or mutually orthogonal, or simply orthogonal. We also write $\vec{u} \perp \vec{v}$ to denote the fact that $\vec{u}$ and $\vec{v}$ are orthogonal to each other. (Orthogonality is motivated by the geometric property of two vectors being perpendicular to each other.)

Expansion formula using properties of the dot product (analogous to FOIL in algebra):
$\|\vec{u} + \vec{v}\|^2 = (\vec{u} + \vec{v}) \cdot (\vec{u} + \vec{v}) = \vec{u} \cdot \vec{u} + 2\,\vec{u} \cdot \vec{v} + \vec{v} \cdot \vec{v} = \|\vec{u}\|^2 + 2\,\vec{u} \cdot \vec{v} + \|\vec{v}\|^2$

Important special case: If $\vec{u}$ and $\vec{v}$ are orthogonal, $\|\vec{u} + \vec{v}\|^2 = \|\vec{u}\|^2 + \|\vec{v}\|^2$.
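As a quick numerical sanity check of the expansion formula and its orthogonal special case, here is a minimal sketch in Python with NumPy (the particular vectors are arbitrary illustrations, not from the notes):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])

# Expansion formula: ||u + v||^2 = ||u||^2 + 2 u.v + ||v||^2
lhs = np.dot(u + v, u + v)
rhs = np.dot(u, u) + 2 * np.dot(u, v) + np.dot(v, v)
print(np.isclose(lhs, rhs))  # True

# Pythagorean special case: for orthogonal u, v the cross term vanishes
u_perp = np.array([1.0, 0.0, 0.0])
v_perp = np.array([0.0, 2.0, 0.0])  # u_perp . v_perp == 0
print(np.isclose(np.dot(u_perp + v_perp, u_perp + v_perp),
                 np.dot(u_perp, u_perp) + np.dot(v_perp, v_perp)))  # True
```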

Next, a simple but fundamental special case of the least squares problem, with its solution:

Given a vector $\vec{b}$ and a (nonzero) vector $\vec{v}$, the minimum norm $\|\vec{e}\|$ of $\vec{e} = \vec{b} - c\vec{v}$ occurs when $c$ is chosen so that $\vec{e}$ is orthogonal to $\vec{v}$.

1) First, it's easy to find the $c$ that works, and it is unique: We set $\vec{e} \cdot \vec{v} = 0$, and obtain $(\vec{b} - c\vec{v}) \cdot \vec{v} = 0$, $\vec{b} \cdot \vec{v} - c\,\vec{v} \cdot \vec{v} = 0$, $c = \dfrac{\vec{b} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}$ as the unique value of $c$. In what follows, we let $c^* = \dfrac{\vec{b} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}$ denote this optimal value, we let $\vec{v}^* = c^* \vec{v}$ denote the optimal approximating vector, and let $\vec{e}^* = \vec{b} - c^* \vec{v}$ denote the corresponding error vector. (In short, if you see "$*$" it means we are talking about an optimal quantity.)

2) It's easy to show now that $c^*$ gives the smallest value of $\|\vec{e}\|$. For consider that in general, $\vec{e} = \vec{b} - c\vec{v} = \vec{b} - c^* \vec{v} + (c^* - c)\vec{v} = \vec{e}^* + (c^* - c)\vec{v}$. Recalling that $\vec{e}^*$ is orthogonal to $\vec{v}$ (and hence orthogonal to any scalar multiple of $\vec{v}$), we obtain from the expansion formula $\|\vec{e}\|^2 = \|\vec{e}^*\|^2 + \|(c^* - c)\vec{v}\|^2 = \|\vec{e}^*\|^2 + (c^* - c)^2 \|\vec{v}\|^2$ and see that the smallest value of $\|\vec{e}\|^2$ is obtained by choosing $c = c^*$ (so that the second term in the sum is zero). This completes the proof, but a couple of additional observations:

3) Note that if $\vec{b}$ is orthogonal to $\vec{v}$, then $c^* = \dfrac{\vec{b} \cdot \vec{v}}{\vec{v} \cdot \vec{v}} = 0$. The best approximation of $\vec{b}$ in this case is the zero vector.

4) For general $\vec{b}$, $\vec{v}$, we can write $\vec{b} = c^* \vec{v} + \vec{e}^* = \vec{v}^* + \vec{e}^*$ and since the two vectors on the right are orthogonal, we have $\|\vec{b}\|^2 = \|\vec{v}^*\|^2 + \|\vec{e}^*\|^2$, and so $\|\vec{e}^*\|^2 = \|\vec{b}\|^2 - \|\vec{v}^*\|^2$. We wish to point out, complementary to the observation in 3), that if $\vec{v}^* = \vec{0}$ then $\|\vec{e}^*\| = \|\vec{b}\|$.
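The optimal coefficient $c^*$ and the orthogonality of the residual are easy to verify numerically. A minimal sketch in Python with NumPy (vectors chosen arbitrarily for illustration):

```python
import numpy as np

b = np.array([3.0, 1.0, 2.0])
v = np.array([1.0, 1.0, 0.0])

# Optimal coefficient c* = (b . v) / (v . v)
c_star = np.dot(b, v) / np.dot(v, v)
v_star = c_star * v          # optimal approximating vector
e_star = b - v_star          # optimal error vector

print(np.isclose(np.dot(e_star, v), 0.0))  # residual orthogonal to v: True

# Any other choice of c gives a strictly larger error norm
for c in (c_star - 0.5, c_star + 1.0):
    assert np.linalg.norm(b - c * v) > np.linalg.norm(e_star)
```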

Now we can formulate the solution to the general least squares problem. It is called the Orthogonal Projection Theorem:

Given vectors $\vec{b}$ and $\vec{v}_1, \dots, \vec{v}_k$, the minimum value of $\|\vec{e}\|$, where $\vec{e} = \vec{b} - c_1 \vec{v}_1 - c_2 \vec{v}_2 - \dots - c_k \vec{v}_k$, is obtained if and only if the coefficients $c_1, \dots, c_k$ are chosen so that $\vec{e}$ is orthogonal to each of the vectors $\vec{v}_1, \dots, \vec{v}_k$. Moreover, this choice of coefficients gives the unique vector that minimizes $\|\vec{e}\|$.

Proof: 1) First, the orthogonality condition is shown to be necessary. For if the coefficients are chosen so that $\vec{e} = \vec{b} - c_1 \vec{v}_1 - c_2 \vec{v}_2 - \dots - c_k \vec{v}_k$ is not orthogonal to, say, $\vec{v}_j$, then using the special case above, in the case where $\vec{e}$ plays the role of $\vec{b}$, we see that $\|\vec{e} - c^* \vec{v}_j\| < \|\vec{e}\|$, where $c^* = \dfrac{\vec{e} \cdot \vec{v}_j}{\vec{v}_j \cdot \vec{v}_j}$, which means that the coefficient of $\vec{v}_j$ can be changed so as to reduce the magnitude of the error vector.

2) Next, the orthogonality condition is shown to be sufficient, and the optimal vector $\vec{v}^* = c_1^* \vec{v}_1 + c_2^* \vec{v}_2 + \dots + c_k^* \vec{v}_k$ is shown to be unique. For let $c_1^*, \dots, c_k^*$ be such that $\vec{e}^* = \vec{b} - c_1^* \vec{v}_1 - c_2^* \vec{v}_2 - \dots - c_k^* \vec{v}_k$ is orthogonal to each of $\vec{v}_1, \dots, \vec{v}_k$. As in the simple case above, we can calculate for a generic choice of coefficients:
$\vec{e} = \vec{b} - c_1 \vec{v}_1 - c_2 \vec{v}_2 - \dots - c_k \vec{v}_k = \vec{b} - \vec{v}$ (where $\vec{v}$ is used to replace that whole combination) $= \vec{b} - \vec{v}^* + \vec{v}^* - \vec{v}$ (where we added and subtracted our purportedly optimal $\vec{v}^*$) $= \vec{e}^* + (\vec{v}^* - \vec{v})$.
Now $\vec{v}^* - \vec{v} = (c_1^* - c_1)\vec{v}_1 + (c_2^* - c_2)\vec{v}_2 + \dots + (c_k^* - c_k)\vec{v}_k$ and we see that $\vec{e}^*$ is orthogonal to each term in this sum and so is orthogonal to $\vec{v}^* - \vec{v}$ itself. By orthogonality and the expansion formula, we then obtain $\|\vec{e}\|^2 = \|\vec{e}^*\|^2 + \|\vec{v}^* - \vec{v}\|^2$ and we see that $\|\vec{e}\|^2$ is minimized if and only if we choose $\vec{v} = \vec{v}^*$, which of course can be done by letting $c_1 = c_1^*, \dots, c_k = c_k^*$.

To completely "solve" the least squares problem it only remains to show that in fact a solution always exists (for if a solution exists it must have, and need only have, the property of the orthogonal projection theorem). This can be done either by showing the existence of an orthogonal basis using the Gram-Schmidt procedure on the vectors $\vec{v}_1, \dots, \vec{v}_k$ (if you don't know what that means, that's OK) or by appeal to some theorems of analysis.

Least-squares and linear systems

We can describe the least squares problem and the orthogonal projection theorem very succinctly using matrix algebra, and conversely, we can interpret the least-squares solution of a linear system as a least-squares problem as discussed above. We note that a (linear) combination of vectors can be written as a matrix times a vector of coefficients:
$c_1 \vec{v}_1 + \dots + c_k \vec{v}_k = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_k \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_k \end{bmatrix} = Ac$,
where the matrix $A$ is composed of the vectors as columns, and $c$ is the column matrix of unknown coefficients. Then $e = b - Ac$. Next, note that the dot product $\vec{u} \cdot \vec{v}$ of two vectors can be carried out in matrix algebra by $u^t v$. The orthogonal projection theorem states that $\|e\|$ is minimized when $e$ is orthogonal to each of $v_1, v_2, \dots, v_k$. In matrix form, this results in the equations:
$v_1^t (b - Ac) = 0$ or $v_1^t b = v_1^t A c$
$v_2^t (b - Ac) = 0$ or $v_2^t b = v_2^t A c$
$\vdots$
$v_k^t (b - Ac) = 0$ or $v_k^t b = v_k^t A c$

These are sometimes called the "normal equations" for the least squares solution. However, this system of $k$ linear equations (for the $k$ unknown coefficients in the vector $c$) can be assembled into a single matrix form. Noting that $v_1^t, \dots, v_k^t$ are simply the columns of $A$ turned into rows, we can write the system as the single matrix equation:
$A^t (b - Ac) = 0$ or $A^t A\, c = A^t b$
This system is also referred to as the "normal equations". Regardless of the matrix $A$, this represents a square system of linear equations, and it always has a solution (though the solution is not guaranteed to be unique unless the columns of $A$ are linearly independent). Now, if we wish to solve the overdetermined system $Ax = b$ so as to minimize $\|e\| = \|b - Ax\|$, it is clear that the minimum value of $\|e\|$ is obtained when $x$ satisfies the normal equations $A^t A\, x = A^t b$. This is the system that MATLAB solves when it is presented with an overdetermined system (more equations than unknowns).
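To make the matrix formulation concrete, here is a small sketch in Python with NumPy that assembles $A$ from column vectors, solves the normal equations, and checks the result against NumPy's built-in least-squares solver (the data are arbitrary illustrations):

```python
import numpy as np

# Columns v1, v2 assembled into A; b is the vector to approximate
v1 = np.array([1.0, 1.0, 1.0, 1.0])
v2 = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([v1, v2])
b = np.array([1.0, 2.0, 2.0, 4.0])

# Normal equations: (A^t A) c = A^t b
c = np.linalg.solve(A.T @ A, A.T @ b)

# Residual is orthogonal to every column of A
e = b - A @ c
print(np.allclose(A.T @ e, 0.0))  # True

# Agrees with NumPy's least-squares solver (analogous to MATLAB's backslash)
c_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(c, c_lstsq))  # True
```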

Data fitting: In curve fitting we are given a set of $(x, y)$ values, where $y$ is assumed to be a function of $x$ (or simply determined by $x$ in some way). We wish to find a function $f(x)$ from among some simple collection of functions which fairly well approximates the given data values, in the sense that $f(x) \approx y$ for each given data value. To be more specific, if we suppose our data values are $(x_i, y_i)$, $i = 1, \dots, n$, then we wish to choose a function $f(x)$ so as to minimize the pointwise errors $f(x_i) - y_i$ in the least squares sense, i.e. we want to minimize $\sum_{i=1}^n (f(x_i) - y_i)^2$. Our function $f(x)$ is assumed obtained from a (linear) combination of a simple set of functions (e.g. polynomials). (This is an important assumption!) We assume there are $k$ such functions and we write $f(x) = c_1 \phi_1(x) + \dots + c_k \phi_k(x)$. Now what we wish is that we could obtain:
$y_1 = c_1 \phi_1(x_1) + \dots + c_k \phi_k(x_1)$
$\vdots$
$y_n = c_1 \phi_1(x_n) + \dots + c_k \phi_k(x_n)$
But this is simply an overdetermined system of linear equations for the coefficients $c_1, \dots, c_k$, whose least squares solution we know how to obtain. If we define $y$ as the vector of $y$ values and $x$ as the corresponding vector of $x$ values, we can write $y = \Phi(x)\, c$ as our system, where
$\Phi(x) = \begin{bmatrix} \phi_1(x) & \phi_2(x) & \cdots & \phi_k(x) \end{bmatrix}$
is the matrix whose columns $\phi_1(x), \dots, \phi_k(x)$ are the "data vectors" of each of the functions with which we are approximating the data. Indeed, we can write the curve fitting problem in the form: Find $c_1, \dots, c_k$ which minimizes the sum of the squares of the error in approximating $y \approx c_1 \phi_1(x) + \dots + c_k \phi_k(x)$, so that we are approximating the data vector $y$ as a combination of the data vectors of the functions $\phi_1(x), \dots, \phi_k(x)$. We can then obtain the solution of this least squares problem using the normal equations.
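As an illustration, here is a sketch in Python with NumPy fitting a quadratic $f(x) = c_1 + c_2 x + c_3 x^2$ to noisy data by forming the matrix $\Phi(x)$ column by column and solving the normal equations (the data and basis choice are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(x.size)

# Columns of Phi are the "data vectors" of the basis functions 1, x, x^2
Phi = np.column_stack([np.ones_like(x), x, x**2])

# Least-squares coefficients from the normal equations
c = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
print(c)  # close to [1, 2, -3]

f = Phi @ c                   # fitted values at the data points
print(np.sum((f - y) ** 2))   # the minimized sum of squared errors
```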

Least squares function approximation (optional): Imagine now that our data are the points on an entire curve, corresponding to the points $(x, f(x))$ for all $x$ in some interval $[a, b]$. Once again we wish to approximate $y = f(x)$ by $c_1 \phi_1(x) + \dots + c_k \phi_k(x)$ for all values of $x$ on the interval. But how do we measure the error over the whole interval? Instead of expressing the size of the error (or of a vector in general) in terms of the sum of the squares of the components of the vector, in the case of functions we take the integral of the square of the function over the interval concerned; that is, we define $\|f\|^2 = \int_a^b f(x)^2\,dx$. What "dot product" would this norm come from? If we define the dot product of two functions with domain $[a, b]$ as $f \cdot g = \int_a^b f(x)\,g(x)\,dx$ then $\|f\|^2 = f \cdot f$. Note that this dot product has exactly the same general properties as the dot product for vectors, as we previously enumerated them. This gives rise, using exactly the same proof, to the orthogonal projection theorem for least squares function approximation: The smallest value of $\|e\| = \|f - p\|$, where $p(x) = c_1 \phi_1(x) + \dots + c_k \phi_k(x)$, is obtained when $e(x)$ is orthogonal to each of the functions $\phi_1(x), \dots, \phi_k(x)$ on the interval $[a, b]$. That is, the optimal values of $c_1, \dots, c_k$ are given by the solution of the system of linear equations:
$\phi_1 \cdot f = (\phi_1 \cdot \phi_1)\, c_1 + (\phi_1 \cdot \phi_2)\, c_2 + \dots + (\phi_1 \cdot \phi_k)\, c_k$
$\vdots$
$\phi_k \cdot f = (\phi_k \cdot \phi_1)\, c_1 + (\phi_k \cdot \phi_2)\, c_2 + \dots + (\phi_k \cdot \phi_k)\, c_k$

In general, least squares approximation by polynomials gives a much better overall "fit" than interpolation once the degree of the polynomials begins to grow. In a sense, we are minimizing the "average" squared error over all values of $x$, as opposed to interpolation, which forces zero error at a discrete set of points while not caring at all about the error at other values of $x$. In fact, the least squares polynomial approximations of a given function actually converge to the function as the degree of the polynomial approaches infinity, requiring only that the function satisfy a mild smoothness condition (a continuous derivative is sufficient; even just a piecewise continuous derivative suffices).
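To illustrate, here is a sketch in Python with NumPy that builds the continuous normal equations for the monomial basis $1, x, x^2$ on $[0, 1]$, approximating the integral dot products with a fine trapezoidal rule (the target function and basis are arbitrary illustrations):

```python
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 2001)         # fine grid for trapezoidal integration
f = np.exp(x)                       # function to approximate
basis = [np.ones_like(x), x, x**2]  # phi_1, phi_2, phi_3

def dot(g, h):
    # function "dot product": integral of g(x) h(x) over [a, b], trapezoidal rule
    y = g * h
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Normal equations: sum_j (phi_i . phi_j) c_j = (phi_i . f)
G = np.array([[dot(p, q) for q in basis] for p in basis])
rhs = np.array([dot(p, f) for p in basis])
c = np.linalg.solve(G, rhs)

approx = sum(ci * p for ci, p in zip(c, basis))
print(np.sqrt(dot(f - approx, f - approx)))  # small least-squares error norm
```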

In practice, one cannot exactly compute the integrals (i.e. the dot products) involved in the normal equations for least squares function approximation; one can resort to approximating the integrals involved or, for a quick and easy substitute, one can simply perform a vector least squares approximation by sampling the function at many equally spaced points on the interval $[a, b]$. If the points are not equally spaced then we are approximating a slightly different and more general type of least squares function approximation called "weighted least squares", in which the errors in different parts of the interval can be given different emphasis or "weight". In this case we are "really" looking at a norm given by $\|f\|^2 = \int_a^b f(x)^2\, w(x)\,dx$ where the "weighting function" $w(x)$ satisfies $w(x) \ge 0$. Such problems also arise naturally in probability theory when we try to minimize the "average" or "expected" squared error when different values of $x$ are given different probabilities of occurring.

There are many other "systems" of functions besides polynomials that can be used for least squares function approximation. One very important, perhaps even more important, system is the so-called trigonometric polynomials on the interval $[-\pi, \pi]$, given by $1, \cos x, \sin x, \cos 2x, \sin 2x, \cos 3x, \sin 3x, \dots$. These are particularly suitable for approximating $2\pi$-periodic functions and have the especially useful/important/fundamental property of orthogonality: $\phi_i \cdot \phi_j = \int_{-\pi}^{\pi} \phi_i(x)\,\phi_j(x)\,dx = 0$ whenever $i$ and $j$ are different. In this case the normal equations reduce very simply to:
$\phi_i \cdot f = \int_{-\pi}^{\pi} \phi_i(x)\,f(x)\,dx = \left( \int_{-\pi}^{\pi} \phi_i(x)^2\,dx \right) c_i$
and so the coefficients are immediately determined, and are in fact independent of which other functions are being used in the approximation. In the case of the trigonometric polynomials, the resulting coefficients are the so-called Fourier coefficients, and the resulting least squares approximations are the partial sums of the Fourier series of the function $f(x)$ on $[-\pi, \pi]$.
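As a final sketch in Python with NumPy, the Fourier coefficients of a $2\pi$-periodic function can be computed directly from the reduced normal equations, one basis function at a time (the target function here is an arbitrary illustration):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
f = np.sign(x)                   # square wave: odd, so only sine terms survive

def integrate(y):
    # trapezoidal rule on the grid x
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# c_i = (phi_i . f) / (phi_i . phi_i), each coefficient determined independently
partial_sum = np.zeros_like(x)
for n in range(1, 8):
    for phi in (np.cos(n * x), np.sin(n * x)):
        c = integrate(phi * f) / integrate(phi * phi)
        partial_sum += c * phi

# The leading sine coefficient matches the known value 4/pi for the square wave
print(integrate(np.sin(x) * f) / integrate(np.sin(x) ** 2))  # ~1.2732
# Squared least-squares error of the partial sum; shrinks as terms are added
print(integrate((f - partial_sum) ** 2))
```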
