Least Squares and QP Optimization
The following was implemented in Maple by Marcus Davidsson (2012) and draws on material by Stephen Boyd.
Assume that we have a system of equations y = X.w, where y is a vector, X is a square matrix, and w is a weight vector. For example:
The objective is to find the vector w. This is done by introducing an error vector r = X.w - y. Note that the above system of equations is deterministic, hence we don't actually need an error vector; it is only included to illustrate the basic logic. Our system now looks like: In order for w to be an acceptable solution, w has to minimize the total error (r[1] + r[2]). Since we are only interested in the absolute error (minus or plus does not matter), our objective becomes to minimize the sum of squares (r[1]^2 + r[2]^2). Our objective function can therefore be written as:
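As a minimal sketch of this setup in Maple (the entries of X and y below are hypothetical, since the worksheet's original values are not preserved in this transcription):

with(LinearAlgebra):
X := Matrix([[1, 2], [3, 1]]):          # hypothetical square coefficient matrix
y := Vector([10, 0]):                   # hypothetical right-hand side
r := X . Vector([w1, w2]) - y:          # error vector r = X.w - y
obj := r[1]^2 + r[2]^2;                 # sum-of-squares objective to minimize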
It turns out that the Least Squares (LS) solution to the above problem is given by the normal equations. We can show this as follows:

y = X.w                     # multiply both sides by X'
X'.y = X'.X.w               # solve for w (normal equations)
(X'.X)^(-1).X'.y = w        # caution

Note that it is important to check that the matrix X is invertible; not all matrices are invertible. An invertible matrix has the property X^(-1).X = X.X^(-1) = I, where I is the identity matrix. In our case the matrix is indeed invertible, hence the above equation holds.
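A sketch of the normal-equation solution, repeating the hypothetical X and y from above and cross-checking against the built-in LinearAlgebra[LeastSquares] command:

with(LinearAlgebra):
X := Matrix([[1, 2], [3, 1]]):                                  # hypothetical square matrix
y := Vector([10, 0]):
w_nes := MatrixInverse(Transpose(X) . X) . Transpose(X) . y;    # w = (X'.X)^(-1).X'.y
w_ls := LeastSquares(X, y);                                     # built-in solver; should agree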
We can plot the objective function as follows:

[3-D plot of the sum-of-squares objective over w1 and w2.]

Now there also exists a graphical interpretation of the above problem:
Pythagoras' theorem: for a right-angled triangle with sides A = 8 and B = 6 enclosing the 90-degree angle, A^2 + B^2 = C^2, so C = sqrt(8^2 + 6^2) = 10. C is the hypotenuse, defined as the longest side of a right-angled triangle.

Vector example: for the vector w = [4, 3] drawn from the origin [0, 0], we have 4^2 + 3^2 = ?^2, so ? = sqrt(4^2 + 3^2) = 5. The Euclidean norm (L2), ||w||, is the length of the hypotenuse due to Pythagoras' theorem:

||w|| = sqrt(w'.w) = sqrt(w[1]^2 + w[2]^2)

In our case we get:

||w|| = sqrt(w'.w) = sqrt([4, 3].[4, 3]') = sqrt(4^2 + 3^2) = 5
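The same norm computed in Maple; the built-in Norm command with second argument 2 returns the Euclidean norm:

with(LinearAlgebra):
w := Vector([4, 3]):
sqrt(Transpose(w) . w);    # sqrt(w'.w) = 5
Norm(w, 2);                # built-in Euclidean norm, also 5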
In our previous example the error vector r = X.w - y was given by: We can plot such a system of equations as follows:

[Plot of the two lines in the (w1, w2) plane; one of the plotted expressions is w1 + 2*w2 - 10.]
We know that the optimal point is [-2, 6], which is located where these two lines cross each other. The dashed black line represents the error vector; this is the vector that we want to minimize. Now, in order for us to use Pythagoras' theorem, we have to make sure that the above triangle is a right-angled triangle. For two linear functions

a1*w[1] + b1*w[2] + c1 = 0
a2*w[1] + b2*w[2] + c2 = 0

the two lines will be perpendicular (90-degree angle) if and only if a1*a2 + b1*b2 = 0.

i) Two lines or curves are orthogonal if they are perpendicular at their point of intersection. In our case we have a1*a2 + b1*b2 = 0. (1)

ii) Two vectors are orthogonal if their dot product (inner product) is equal to zero. (2)

iii) An orthogonal matrix is necessarily square, A[1..n, 1..n], and invertible, A^(-1).A = A.A^(-1) = I. This is an important property! Since the two lines were orthogonal, our problem can be written as: We can show this as follows:
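As a small illustration of the dot-product test in ii), with hypothetical direction vectors:

with(LinearAlgebra):
u := Vector([1, 2]):     # hypothetical direction vector of the first line
v := Vector([2, -1]):    # hypothetical direction vector of the second line
Transpose(u) . v;        # dot product = 1*2 + 2*(-1) = 0, so u and v are orthogonal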
Warning, there are zero degrees of freedom

In the previous example we assumed that X was a square matrix. When X is a non-square matrix X[1..m, 1..n], then we have two scenarios:

Column-Dominant Matrix = more columns than rows (there are fewer linear equations than unknown variables, i.e., the system is underdetermined, df < 0)

Row-Dominant Matrix = more rows than columns (there are more linear equations than unknown variables, i.e., the system is overdetermined, df > 0)
*Note: df is defined as the number of rows minus the number of columns in the matrix X.

In both these cases it turns out that the normal-equation solution (NES), given by w = (X'.X)^(-1).X'.y, will introduce significant estimation error. We can now note that there exist orthogonalization algorithms such as the QR and SV decompositions. From the Maple help for the command LinearAlgebra[LeastSquares]:

"For a floating-point column-dominant matrix a solution is computed by using a singular values decomposition (SVD) by default."

"For a floating-point row-dominant matrix a solution is computed by using a QR decomposition (QRD) by default."

We can compare the three different approaches as follows:
[Plots: Column-Dominant Matrix, Estimated w and Estimated y (df < 0), comparing the NES and SVD solutions against the true y.]
[Plots: Row-Dominant Matrix, Estimated w and Estimated y (df > 0), comparing the NES and QRD solutions against the true y.]
[Plots: Square Matrix, Estimated w and Estimated y (df = 0), comparing the NES and SVD/QRD solutions against the true y.]
We can now look at a more practical example, i.e., portfolio optimization. We first note that the two methods below (minimizing portfolio variance for a given portfolio expected return) will produce the same allocations if the matrix is square, i.e., the number of columns equals the number of rows, where R is the return matrix and ERR is the vector of expected returns (ER).

Note 1: The vector ERR is specified by the user and not calculated from the data!

Note 2: This equation is simply found from the normal equations, where y = expected return and X = return matrix.

We can show this as follows:
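A sketch of the normal-equation allocation w = (R'.R)^(-1).R'.ERR, with a hypothetical square return matrix (two assets over two periods) and hypothetical user-specified expected returns:

with(LinearAlgebra):
R := Matrix([[0.01, 0.03], [0.02, -0.01]]):                   # hypothetical square return matrix
ERR := Vector([0.015, 0.010]):                                # hypothetical user-specified expected returns
w := MatrixInverse(Transpose(R) . R) . Transpose(R) . ERR;    # NES allocation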
[Plot: Square Return Matrix, comparing the QP and NES allocations.]

We can now look at what happens when the return matrix is not square, i.e., column-dominant.
Warning, problem appears to be unbounded
[Plot: Column-Dominant Return Matrix, comparing the QP and NES allocations.]

We can now show that the Least Squares (LS) solution and the Quadratic Programming (QP) solution for the objective of minimizing portfolio variance for a given portfolio expected return will be the same when the return matrix is either square or column-dominant. This is a very convenient feature, because many financial datasets used for portfolio optimization have more columns than rows, i.e., the global universe might contain thousands of stocks.
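One way to see the connection in the column-dominant case: for a consistent underdetermined system, the SVD-based LeastSquares solution is the minimum-norm solution, which can equivalently be computed as a QP. The sketch below uses the minimum-norm objective w'.w for illustration (not the worksheet's variance objective), with hypothetical data:

with(LinearAlgebra): with(Optimization):
R := Matrix([[0.01, 0.03, 0.02], [0.02, -0.01, 0.01]]):    # hypothetical column-dominant return matrix
ERR := Vector([0.015, 0.010]):
w_ls := LeastSquares(R, ERR);                              # SVD-based minimum-norm solution
cons := {seq(R[i, 1]*w1 + R[i, 2]*w2 + R[i, 3]*w3 = ERR[i], i = 1 .. 2)}:
w_qp := QPSolve(w1^2 + w2^2 + w3^2, cons);                 # the same allocation posed as a QP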
The Maple procedures that we use are: We can show this as follows:
Warning, problem appears to be unbounded
Warning, necessary conditions met but sufficient conditions not satisfied
The procedures compared are: Optimization.QPSolve.M, Optimization.QPSolve.A, Optimization.LSSolve.M, Optimization.LSSolve.A, and LinearAlgebra.LeastSquares.

When the return matrix is row-dominant, the LS and QP solutions will be different.
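A minimal sketch of the algebraic calling forms of Optimization[LSSolve] and Optimization[QPSolve] on the same hypothetical row-dominant problem:

with(LinearAlgebra): with(Optimization):
R := Matrix([[0.01, 0.03], [0.02, -0.01], [-0.01, 0.02]]):    # hypothetical row-dominant return matrix
ERR := Vector([0.015, 0.010, 0.005]):                         # hypothetical expected returns
res := [seq(R[i, 1]*w1 + R[i, 2]*w2 - ERR[i], i = 1 .. 3)]:   # residuals of R.w = ERR
sol_ls := LSSolve(res);                                       # minimizes (1/2)*(sum of squared residuals)
sol_qp := QPSolve(add(res[i]^2, i = 1 .. 3));                 # the same minimizer as an unconstrained QP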
[Timing comparison of Optimization.LSSolve.M, LinearAlgebra.LeastSquares, Optimization.LSSolve.A, Optimization.QPSolve.A, and Optimization.QPSolve.M.]

We can see in the previous example that the Maple command Optimization[LSSolve] was by far the fastest one. It also produces better allocations, i.e., lower risk. In the Maple help pages for Optimization[LSSolve] we can read: "When the residuals in the objective function and the constraints are all linear (as in our case), then an active-set method is used", i.e., a good QP solver.

We can now show how we can use Optimization[LSSolve] for a column-dominant matrix and include a simple budget constraint. The analysis is now done directly on the stock prices, and we assume that our Portfolio Capital (PC) must grow 5% each period, i.e., PC(t) = PC(0)*1.05^t, where t = 0..nr. In the first period t = 0, which means that PC(t) = PC(0); hence we cannot buy more stocks than we can afford, i.e., we are constrained by our budget. The analysis is done the same way as in the previous example: PC represents the y-variable and the x-variables are given by the matrix of stock prices.
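A minimal sketch of this budget-constrained fit, assuming hypothetical stock prices, PC(0) = 100, three stocks, and four periods:

with(Optimization):
PC0 := 100.0:
P := Matrix([[10., 20., 30.],       # hypothetical stock prices: one row per period t = 0..3,
             [11., 19., 31.],       # one column per stock
             [12., 21., 29.],
             [13., 22., 33.]]):
res := [seq(P[t + 1, 1]*n1 + P[t + 1, 2]*n2 + P[t + 1, 3]*n3 - PC0*1.05^t, t = 0 .. 3)]:
budget := P[1, 1]*n1 + P[1, 2]*n2 + P[1, 3]*n3 <= PC0:       # at t = 0 we cannot spend more than PC0
LSSolve(res, {budget});                                      # constrained least squares (active-set method)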
[Results: Optimization.LSSolve.M and Optimization.LSSolve.A allocations.]

We can do some forward testing as follows:
[Plots: Portfolio Value and Tracking Error from the forward test.]
More information