Lecture: Quadratic optimization
- Rudolph Rose
- 6 years ago
Transcription
1 Lecture: Quadratic optimization
1. Positive definite and semidefinite matrices
2. LDL^T factorization
3. Quadratic optimization without constraints
4. Quadratic optimization with constraints
5. Least-squares problems
2 Quadratic optimization without constraints

minimize (1/2) x^T H x + c^T x + c_0

where H = H^T ∈ R^(n×n), c ∈ R^n, c_0 ∈ R. Common in applications, e.g.:
- linear regression (fitting of models);
- minimization of physically motivated objective functions, such as energy or variance;
- quadratic approximation of nonlinear optimization problems.
3 The quadratic term
Let f(x) = (1/2) x^T H x + c^T x + c_0, where H = H^T ∈ R^(n×n), c ∈ R^n, c_0 ∈ R.
We can assume that the matrix H is symmetric. If H is not symmetric, then since x^T H x is a scalar,

x^T H x = (1/2) x^T H x + (1/2) (x^T H x)^T = (1/2) x^T (H + H^T) x,

i.e., x^T H x = x^T H̄ x, where H̄ = (1/2)(H + H^T) is a symmetric matrix.
4 Positive definite and positive semidefinite matrices
Definition 1 (Positive semidefinite). A symmetric matrix H = H^T ∈ R^(n×n) is called positive semidefinite if x^T H x ≥ 0 for all x ∈ R^n.
Definition 2 (Positive definite). A symmetric matrix H = H^T ∈ R^(n×n) is called positive definite if x^T H x > 0 for all x ∈ R^n \ {0}.
5 How do you determine whether a symmetric matrix H = H^T ∈ R^(n×n) is positive (semi)definite?
- LDL^T factorization (suitable for calculations by hand); Chapter 27.6 in the book.
- Use the spectral factorization of the matrix, H = Q Λ Q^T, where Q^T Q = I and Λ = diag(λ_1, ..., λ_n), with λ_k the eigenvalues of the matrix. From this it follows that
  H is positive semidefinite if and only if λ_k ≥ 0, k = 1, ..., n;
  H is positive definite if and only if λ_k > 0, k = 1, ..., n.
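The spectral test above is easy to carry out numerically. The following sketch (not from the slides; it assumes NumPy and an ad hoc tolerance) classifies a symmetric matrix by its eigenvalues:

```python
import numpy as np

def definiteness(H, tol=1e-12):
    """Classify a symmetric matrix via its eigenvalues (spectral test)."""
    lam = np.linalg.eigvalsh(H)   # eigenvalues of a symmetric matrix, ascending
    if lam[0] > tol:
        return "positive definite"
    if lam[0] >= -tol:
        return "positive semidefinite"
    return "indefinite or negative (semi)definite"

H1 = np.array([[2.0, 0.0], [0.0, 1.0]])   # eigenvalues 2, 1
H2 = np.array([[1.0, 1.0], [1.0, 1.0]])   # eigenvalues 2, 0
H3 = np.array([[0.0, 1.0], [1.0, 0.0]])   # eigenvalues 1, -1
print(definiteness(H1), definiteness(H2), definiteness(H3))
```

Note that `eigvalsh` exploits symmetry and returns real eigenvalues, which is exactly what the spectral factorization H = QΛQ^T guarantees.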
6 LDL^T factorization
Theorem 1. A symmetric matrix H = H^T ∈ R^(n×n) is positive semidefinite if and only if there is a factorization H = L D L^T, where

L = [ 1                 ]      D = [ d_1             ]
    [ l_21  1           ]          [      d_2        ]
    [ l_31  l_32  1     ]          [           ...   ]
    [ ...               ]          [             d_n ]
    [ l_n1  l_n2 ...  1 ]

(L unit lower triangular, D diagonal) and d_k ≥ 0, k = 1, ..., n. H is positive definite if and only if d_k > 0, k = 1, ..., n, in the LDL^T factorization.
Proof: See Chapter 27.9 in the book.
7 The factorization LDL^T can be determined by one of the following methods:
- elementary row and column operations,
- completion of squares.
8 LDL^T factorization - Example
Determine the LDL^T factorization of a given symmetric matrix H. The idea is to perform symmetric row and column operations from the left and right in order to reduce H to a diagonal matrix D. The row operations can be performed with lower triangular invertible matrices; the product of lower triangular matrices is lower triangular, and so is the inverse.
9 LDL^T factorization - Example
Perform row operations to get zeros below the diagonal in the first column, l_1 H, and the corresponding column operations, l_1 H l_1^T.

10 LDL^T factorization - Example
Perform row operations to get zeros in the second column, l_2 l_1 H l_1^T, and the corresponding column operations, l_2 l_1 H l_1^T l_2^T.

11 LDL^T factorization - Example
Perform row operations to get zeros in the third column, l_3 l_2 l_1 H l_1^T l_2^T, and the corresponding column operations,
l_3 l_2 l_1 H l_1^T l_2^T l_3^T = D.
12 LDL^T factorization - Example
We finally obtain the diagonal matrix D. Since all elements on the diagonal of D are positive, the matrix H is positive definite. The values on the diagonal are not the eigenvalues of H, but they have the same signs as the eigenvalues, and that is all we need to know.
13 LDL^T factorization - Example
How do we determine the L matrix? Since l_3 l_2 l_1 H l_1^T l_2^T l_3^T = D, we have H = L D L^T with

L = (l_3 l_2 l_1)^(-1) = l_1^(-1) l_2^(-1) l_3^(-1).
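The elimination procedure of the example can be sketched in code. The numerical matrix from the slides was not preserved in this transcription, so the H below is a stand-in; the routine itself is the standard no-pivot symmetric elimination:

```python
import numpy as np

def ldl_no_pivot(H):
    """LDL^T factorization of a symmetric matrix by symmetric elimination.

    Returns a unit lower triangular L and a vector d with H = L diag(d) L^T.
    No pivoting is done, so a zero pivot with nonzeros below it is not handled."""
    H = np.array(H, dtype=float)   # work on a copy
    n = H.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        d[k] = H[k, k]
        if d[k] != 0.0:
            L[k+1:, k] = H[k+1:, k] / d[k]
        # symmetric rank-1 update: eliminate row and column k
        H[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k]) * d[k]
    return L, d

# Stand-in symmetric matrix (not the one from the slides)
H = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L, d = ldl_no_pivot(H)
print("d =", d)   # all d_k > 0 here, so this H is positive definite
```

Each loop iteration corresponds to one pair of row and column operations (l_k from the left, l_k^T from the right) on the remaining lower-right block.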
14 Unbounded problems
Let f(x) = (1/2) x^T H x + c^T x + c_0. Assume that H is not positive semidefinite. Then there is a d ∈ R^n such that d^T H d < 0. It follows that

f(t d) = (1/2) t^2 d^T H d + t c^T d + c_0 → −∞ as t → ∞.

Conclusion: a necessary condition for f(x) to have a minimum is that H is positive semidefinite.
15 Assume that H is positive semidefinite. Then we can use the factorization H = L D L^T, which gives

minimize (1/2) x^T H x + c^T x + c_0
= minimize (1/2) x^T L D L^T x + c^T (L^T)^(-1) L^T x + c_0
= minimize c_0 + Σ_{k=1}^n [ (1/2) d_k y_k^2 + c̄_k y_k ]

where we have introduced the new coordinates y = L^T x and c̄ = L^(-1) c. The optimization problem separates into n independent scalar quadratic optimization problems, which are easy to solve.
16 Scalar quadratic optimization without constraints

minimize over x ∈ R: (1/2) h x^2 + c x + c_0

The scalar quadratic optimization problem has a finite solution if and only if (h = 0 and c = 0), or h > 0. In the first case the objective is constant, so there is no unique optimum. In the second case

(1/2) h x^2 + c x + c_0 = (1/2) h (x + c/h)^2 + c_0 − c^2/(2h),

and the optimum is attained at x = −c/h.
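The case analysis above fits in a few lines of code. A minimal sketch (the function name and return convention are my own, not from the slides):

```python
def scalar_qp(h, c, c0):
    """Minimize (1/2)*h*x**2 + c*x + c0 over the real line.

    Returns (xmin, fmin). xmin is None when the minimizer is not unique
    (h == 0 and c == 0); raises when the problem is unbounded below."""
    if h > 0:
        x = -c / h
        return x, c0 - c * c / (2 * h)   # completed-square value
    if h == 0 and c == 0:
        return None, c0                  # constant objective: every x is optimal
    raise ValueError("unbounded below")

print(scalar_qp(2.0, -4.0, 1.0))   # minimum at x = 2, value -3
```

This is exactly the subproblem that each coordinate y_k of the separated problem on the previous slide has to solve, with h = d_k and c = c̄_k.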
17 Solving the minimization problems separately shows that ŷ = (ŷ_1, ..., ŷ_n)^T is a minimizer if
(i) d_k ≥ 0, k = 1, ..., n, i.e., H is positive semidefinite;
(ii) d_k ŷ_k = −c̄_k, i.e., D L^T x̂ = −L^(-1) c.
Condition (ii) can be replaced with L D L^T x̂ = −c, i.e., H x̂ = −c.
Furthermore, f(x) = (1/2) x^T H x + c^T x + c_0 is unbounded from below if
(i) some d_k < 0, i.e., H is not positive semidefinite; or
(ii) one of the equations d_k ŷ_k = −c̄_k has no solution (d_k = 0 and c̄_k ≠ 0), i.e., the equation H x̂ = −c has no solution.
18 The above arguments can be formalized as the following theorem.
Theorem 2. Let f(x) = (1/2) x^T H x + c^T x + c_0. Then x̂ is a minimizer if
(i) H is positive semidefinite, and
(ii) H x̂ = −c.
If H is positive definite, then x̂ = −H^(-1) c. If H is not positive semidefinite, or H x = −c has no solution, then f is not bounded from below.
Comment 1. The condition that H x = −c has a solution is equivalent to c ∈ R(H). Therefore, f has a minimizer if and only if H is positive semidefinite and c ∈ R(H).
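Theorem 2 translates directly into code for the positive definite case: solve the linear system H x = −c. A small NumPy sketch with illustrative data (not from the slides):

```python
import numpy as np

# Solve min (1/2) x^T H x + c^T x + c0 with H positive definite:
# by Theorem 2 the minimizer solves H x = -c.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])       # symmetric, eigenvalues > 0
c = np.array([1.0, -1.0])
c0 = 0.5

x_hat = np.linalg.solve(H, -c)
f_hat = 0.5 * x_hat @ H @ x_hat + c @ x_hat + c0

# the gradient H x + c vanishes at the minimizer
print(x_hat, f_hat)
```

In practice one would factor H (Cholesky or LDL^T) rather than form an inverse; `np.linalg.solve` does a factorization internally.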
19 Quadratic optimization with linear constraints

minimize (1/2) x^T H x + c^T x + c_0 subject to Ax = b    (1)

where H = H^T ∈ R^(n×n), A ∈ R^(m×n), c ∈ R^n, b ∈ R^m, and c_0 ∈ R. We assume that b ∈ R(A) and n > m, i.e., N(A) ≠ {0}.
Assume that N(A) = span{z_1, ..., z_k} and define the nullspace matrix Z = [z_1 ... z_k]. If A x̄ = b, it holds that an arbitrary solution to the linear constraint has the form x = x̄ + Z v, for some v ∈ R^k. This follows since
A(x̄ + Z v) = A x̄ + A Z v = b + 0 = b.
20 With x = x̄ + Z v inserted in f(x) = (1/2) x^T H x + c^T x + c_0 we get

f(x̄ + Z v) = (1/2) v^T Z^T H Z v + (Z^T (H x̄ + c))^T v + f(x̄).

The minimization problem (1) is thus equivalent to

minimize over v: (1/2) v^T Z^T H Z v + (Z^T (H x̄ + c))^T v + f(x̄).

We can now apply Theorem 2 to obtain the following result.
Theorem 3. x̂ is an optimal solution to (1) if
(i) Z^T H Z is positive semidefinite, and
(ii) x̂ = x̄ + Z v̂, where Z^T H Z v̂ = −Z^T (H x̄ + c).
21 Comment 2. The second condition in the theorem is equivalent to the existence of a û ∈ R^m such that

[ H  A^T ] [ x̂ ]   [ −c ]
[ A   0  ] [ û ] = [  b ]

Proof: The second condition in the theorem can be written
Z^T (H(x̄ + Z v̂) + c) = 0.
Since R(Z) = N(A), this says that H x̂ + c is orthogonal to N(A), i.e.,
H x̂ + c ∈ N(A)^⊥ = R(A^T),
so H x̂ + c = A^T û for some û ∈ R^m. Absorbing the sign into û, this is the first block row of the system above, and A x̂ = b is the second.
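The block system of Comment 2 (the KKT system) can be assembled and solved directly. A sketch with illustrative data, assuming NumPy:

```python
import numpy as np

# Equality-constrained QP: min (1/2) x^T H x + c^T x  s.t.  A x = b,
# solved via the KKT system  [H A^T; A 0] [x; u] = [-c; b].
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])      # one constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
x_hat, u_hat = sol[:n], sol[n:]
print(x_hat, u_hat)
```

The KKT matrix is nonsingular whenever A has full row rank and Z^T H Z is positive definite, which holds here.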
22 Linear constrained QP: Example
Let f(x) = (1/2) x^T H x + c^T x + c_0, where

H = [ 2  0 ]    c = [ 1 ]    c_0 = 0.
    [ 0  1 ]        [ 1 ]

Consider minimization of f over the feasible region F = {x : Ax = b}, where A = [2 1], b = 2, i.e., the minimization problem

minimize x_1^2 + (1/2) x_2^2 + x_1 + x_2
subject to 2 x_1 + x_2 = 2.
23 Linear constrained QP: Example - Nullspace method
The nullspace of A is spanned by Z = (1, −2)^T, and the reduced Hessian is given by

H̄ = Z^T H Z = 2 + 4 = 6.

Since the reduced Hessian is positive definite, the problem is strictly convex. The unique solution is then given by taking an arbitrary x̄ ∈ F, e.g. x̄ = (1, 0)^T, and solving

H̄ v̂ = c̄ = −Z^T (H x̄ + c) = −1, so v̂ = −1/6.

Then x̂ = x̄ + Z v̂ = (1, 0)^T − (1/6)(1, −2)^T = (5/6, 1/3)^T.
24 Linear constrained QP: Example - Geometric illustration
(Figure: the feasible line F, the feasible point x̄, the nullspace direction Z, and the optimum x̂.)
25 Linear constrained QP: Example - Lagrange method
Assume that we know the problem is convex; then solve

[ H  A^T ] [ x ]   [ −c ]        [ 2  0  2 ] [ x_1 ]   [ −1 ]
[ A   0  ] [ u ] = [  b ]   i.e. [ 0  1  1 ] [ x_2 ] = [ −1 ]
                                 [ 2  1  0 ] [  u  ]   [  2 ]

Solving: 2 x_1 + 2u = −1 and x_2 + u = −1 give x_1 = −1/2 − u and x_2 = −1 − u; then 2 x_1 + x_2 = 2 implies 2(−1/2 − u) + (−1 − u) = 2, i.e., −3u = 4, so u = −4/3 and then x_1 = 5/6 and x_2 = 1/3. As for the nullspace method.
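The example can be checked numerically; both solution methods should agree. A sketch assuming NumPy:

```python
import numpy as np

# The example QP: both the nullspace method and the KKT (Lagrange)
# system should give the same minimizer.
H = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([1.0, 1.0])
A = np.array([[2.0, 1.0]])
b = np.array([2.0])

# Nullspace method
Z = np.array([[1.0], [-2.0]])            # A Z = 0
x_bar = np.array([1.0, 0.0])             # a feasible point
H_red = Z.T @ H @ Z                      # reduced Hessian
v_hat = np.linalg.solve(H_red, -Z.T @ (H @ x_bar + c))
x_ns = x_bar + (Z @ v_hat).ravel()

# KKT system
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x_kkt, u = sol[:2], sol[2]
print(x_ns, x_kkt, u)
```

Both routes reduce to one small linear solve each; the nullspace solve is in dimension n − m = 1, the KKT solve in dimension n + m = 3.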
26 Linear constrained QP: Example - Sensitivity analysis
From the Lagrange method we know the multiplier u = −4/3; it tells us, in first approximation, how the objective function changes due to variations in the right-hand side b of the linear constraint. Assume that b := b + δ. Repeating the previous calculations gives u = −(4 + δ)/3, and then x_1 = 5/6 + δ/3 and x_2 = 1/3 + δ/3. Substituting into f shows that the optimal value changes at the rate df/dδ = −u = 4/3 at δ = 0.
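The first-order sensitivity claim can be verified numerically: perturb b by a small δ, re-solve, and compare the change in optimal value with the multiplier prediction. A sketch assuming NumPy:

```python
import numpy as np

# Sensitivity check for the example QP: the change in optimal value for
# b -> b + delta should be approximately -u * delta.
H = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([1.0, 1.0])
A = np.array([[2.0, 1.0]])

def solve_qp(b_val):
    """Solve the KKT system for a given right-hand side and return (f, u)."""
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.array([-c[0], -c[1], b_val]))
    x, u = sol[:2], sol[2]
    f = 0.5 * x @ H @ x + c @ x
    return f, u

delta = 1e-3
f0, u0 = solve_qp(2.0)
f1, _ = solve_qp(2.0 + delta)
# first-order prediction: f1 - f0 ≈ -u0 * delta
print(f0, u0, (f1 - f0) / delta)
```

Since the optimal value is a quadratic function of δ here, the prediction is exact up to a term of order δ².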
27 Example 2: Analysis of a resistance circuit
(Figure: a resistor network with nodes 1-5; each edge (i,j) among (1,2), (1,3), (2,3), (2,4), (3,4), (3,5), (4,5) carries a current x_ij through a resistance r_ij with voltage drop v_ij; the node potentials are u_1, ..., u_5.)
Let a current I go through the circuit from node 1 to node 5. What is the solution to the circuit equations?
28 The current x_ij goes from node i to node j if x_ij > 0. Kirchhoff's current law at each node gives

x_12 + x_13 = I            (node 1)
−x_12 + x_23 + x_24 = 0    (node 2)
−x_13 − x_23 + x_34 + x_35 = 0    (node 3)
−x_24 − x_34 + x_45 = 0    (node 4)
−x_35 − x_45 = −I          (node 5)

This can be written Ax = b, where x = [x_12 x_13 x_23 x_24 x_34 x_35 x_45]^T and

A = [  1   1   0   0   0   0   0 ]      b = [  I ]
    [ −1   0   1   1   0   0   0 ]          [  0 ]
    [  0  −1  −1   0   1   1   0 ]          [  0 ]
    [  0   0   0  −1  −1   0   1 ]          [  0 ]
    [  0   0   0   0   0  −1  −1 ]          [ −I ]

A is the node-arc incidence matrix of the network: column (i,j) has +1 in row i and −1 in row j.
29 The node potentials are denoted u_j, j = 1, ..., 5. The voltage drop over the resistor r_ij is denoted v_ij and is given by the equations

v_ij = u_i − u_j,
v_ij = r_ij x_ij (Ohm's law).

This can also be written

v = A^T u,  v = D x,

where u = [u_1 u_2 u_3 u_4 u_5]^T, v = [v_12 v_13 v_23 v_24 v_34 v_35 v_45]^T, and D = diag(r_12, r_13, r_23, r_24, r_34, r_35, r_45).
30 The circuit equations

Ax = b,  v = A^T u,  v = D x

can be combined (eliminating v) into the system

[ D  A^T ] [ x ]   [ 0 ]
[ A   0  ] [ u ] = [ b ]

where u is, up to a sign change, the vector of node potentials. Since D is positive definite (r_ij > 0) this is, see Comment 2, equivalent to the optimality conditions for the following optimization problem:

minimize (1/2) x^T D x subject to Ax = b.
31 Comment. In the lecture on network optimization we saw that the matrix A (there called Ã) has N(A) ≠ {0}, and that the nullspace corresponds to cycles in the circuit. The laws of electricity (Ohm's law + Kirchhoff's laws) determine the current distribution that minimizes the loss of power. This follows since the objective function can be written

(1/2) x^T D x = (1/2) Σ_ij r_ij x_ij^2.
32 Comment 3. It is common to connect one of the nodes to earth. Let us, e.g., connect node 5 to earth; then u_5 = 0.
(Figure: the same resistor network as before, now with node 5 grounded, u_5 = 0.)
33 With the potential in node 5 fixed at u_5 = 0, the last column in the voltage equation v = A^T u is eliminated (i.e., the last row of A). The voltage equation and the current balance can be replaced with

v = Ā^T ū and Ā x = b̄,

where Ā consists of the first four rows of A, ū = [u_1 u_2 u_3 u_4]^T, and b̄ = [I 0 0 0]^T. The advantage of this is that the rows of Ā are linearly independent. This facilitates the solution of the optimization problem.
34 Least-squares problems
Application: linear regression (model fitting).
(Figure: block diagram of a system with output y(t), measurement noise e(t), and observed signal s(t).)
Problem: fit a linear regression model to measured data.

Regression model: y(t) = Σ_{j=1}^n α_j ψ_j(t)

ψ_k(t), k = 1, ..., n, are the regressors (known functions);
α_k, k = 1, ..., n, are the model parameters (to be determined);
e(t) is the measurement noise (not known);
s(t) are the observations.
35 Idea for fitting the model: minimize the quadratic sum of the prediction errors

minimize (1/2) Σ_{i=1}^m ( Σ_{j=1}^n α_j ψ_j(t_i) − s(t_i) )^2
= minimize (1/2) Σ_{i=1}^m ( ψ(t_i)^T x − s(t_i) )^2
= minimize (1/2) (Ax − b)^T (Ax − b)    (2)

where

ψ(t) = [ψ_1(t) ... ψ_n(t)]^T,  x = [α_1 ... α_n]^T,
A = [ψ(t_1)^T; ...; ψ(t_m)^T],  b = [s(t_1) ... s(t_m)]^T.
36 The solution of the least-squares problem (LSQ)
The least-squares problem (2) can equivalently be written

minimize (1/2) x^T H x + c^T x + c_0

where H = A^T A, c = −A^T b, c_0 = (1/2) b^T b. We note that
- H = A^T A is positive semidefinite, since x^T A^T A x = ||Ax||^2 ≥ 0;
- c ∈ R(H), since c = −A^T b ∈ R(A^T) = R(A^T A) = R(H).
The conditions in Theorem 2 and Comment 1 are satisfied, and it follows that the LSQ estimate is given by

H x̂ = −c, i.e., A^T A x̂ = A^T b (the normal equations).
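The normal equations are easy to demonstrate on a small regression. A sketch assuming NumPy, with illustrative data (not from the lecture), fitting s ≈ α_1 + α_2 t:

```python
import numpy as np

# Fit a line by the normal equations  A^T A x = A^T b.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2, 3.9])
A = np.column_stack([np.ones_like(t), t])   # regressors psi_1 = 1, psi_2 = t

x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# same answer as the library least-squares routine
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat, x_ref)
```

For larger or ill-conditioned problems a QR-based solver such as `lstsq` is preferred over forming A^T A explicitly, since squaring A squares its condition number.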
37 Linear algebra interpretation
(Figure: the four fundamental subspaces; in R^n, x̂ = A^T û lies in R(A^T), orthogonal to N(A); in R^m, b decomposes as A x̂ ∈ R(A) plus b − A x̂ ∈ N(A^T).)
The normal equations can be written

A^T (b − A x̂) = 0, i.e., b − A x̂ ∈ N(A^T).

This orthogonality property has a geometric interpretation, illustrated in the figure above: A x̂ is the orthogonal projection of b onto R(A). The fundamental theorem of linear algebra explains the LSQ solution geometrically.
38 Uniqueness of the LSQ solution
Case 1: If the columns of A are linearly independent, i.e., N(A) = {0}, it holds that H = A^T A is positive definite and hence invertible. The normal equations then have the unique solution

x̂ = (A^T A)^(-1) A^T b.

Case 2: If A has linearly dependent columns, i.e., N(A) ≠ {0}, then it is natural to choose the solution of smallest length. From the figure on the previous slide it is clear that we should let

x̂ = A^T û, where A A^T û = A x̄,

and where x̄ is an arbitrary solution to the LSQ problem.
39 Our choice of x̂ in Case 2 can equivalently be interpreted as the solution to the quadratic optimization problem

minimize (1/2) x^T x subject to Ax = A x̄.

According to Comment 2 (with the sign of the multiplier flipped), the solution to this optimization problem is given by the solution to

[ I  A^T ] [ x̂  ]   [  0  ]
[ A   0  ] [ −û ] = [ A x̄ ]

i.e., x̂ = A^T û, where A A^T û = A x̄.
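The minimum-norm construction can be checked against the pseudoinverse solution. A sketch assuming NumPy, with illustrative data; the matrix has dependent columns but independent rows, so A A^T is invertible:

```python
import numpy as np

# Minimum-norm least-squares solution via  x = A^T u,  A A^T u = A x_bar.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])        # 2x3: dependent columns
b = np.array([1.0, 2.0])

# An arbitrary LSQ solution x_bar: the minimum-norm one plus a nullspace vector
z = np.array([-1.0, -1.0, 1.0])        # A z = 0
x_bar = np.linalg.pinv(A) @ b + z      # not minimum-norm itself

u = np.linalg.solve(A @ A.T, A @ x_bar)
x_hat = A.T @ u                        # projects x_bar onto R(A^T)
print(x_hat)
```

Geometrically, x_hat is the component of x_bar in R(A^T); the nullspace component z is discarded, which is exactly what makes the solution shortest.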
40 Reading instructions
Quadratic optimization without constraints: Chapters 9 and 27 in the book.
Quadratic optimization with constraints: Chapter 10 in the book.
Least-squares method: Chapter 11 in the book.
More informationLecture 6. Numerical methods. Approximation of functions
Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation
More informationOrthogonality. 6.1 Orthogonal Vectors and Subspaces. Chapter 6
Chapter 6 Orthogonality 6.1 Orthogonal Vectors and Subspaces Recall that if nonzero vectors x, y R n are linearly independent then the subspace of all vectors αx + βy, α, β R (the space spanned by x and
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationj=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.
Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lecture 18, Friday 18 th November 2016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON Mathematics
More informationFitting Linear Statistical Models to Data by Least Squares: Introduction
Fitting Linear Statistical Models to Data by Least Squares: Introduction Radu Balan, Brian R. Hunt and C. David Levermore University of Maryland, College Park University of Maryland, College Park, MD Math
More informationRow Space, Column Space, and Nullspace
Row Space, Column Space, and Nullspace MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Introduction Every matrix has associated with it three vector spaces: row space
More informationMath Linear Algebra
Math 220 - Linear Algebra (Summer 208) Solutions to Homework #7 Exercise 6..20 (a) TRUE. u v v u = 0 is equivalent to u v = v u. The latter identity is true due to the commutative property of the inner
More informationLeast Sparsity of p-norm based Optimization Problems with p > 1
Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from
More informationECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation
ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation Prof. om Overbye Dept. of Electrical and Computer Engineering exas A&M University overbye@tamu.edu Announcements
More informationSolutions to exam in SF1811 Optimization, Jan 14, 2015
Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable
More information3 Gramians and Balanced Realizations
3 Gramians and Balanced Realizations In this lecture, we use an optimization approach to find suitable realizations for truncation and singular perturbation of G. It turns out that the recommended realizations
More informationEigenvalues and Eigenvectors
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More informationN. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form:
0.1 N. L. P. Katta G. Murty, IOE 611 Lecture slides Introductory Lecture NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP does not include everything
More informationAnnounce Statistical Motivation ODE Theory Spectral Embedding Properties Spectral Theorem Other. Eigenproblems I
Eigenproblems I CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Doug James (and Justin Solomon) CS 205A: Mathematical Methods Eigenproblems I 1 / 33 Announcements Homework 1: Due tonight
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten
More informationConstrained optimization: direct methods (cont.)
Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationMath 1060 Linear Algebra Homework Exercises 1 1. Find the complete solutions (if any!) to each of the following systems of simultaneous equations:
Homework Exercises 1 1 Find the complete solutions (if any!) to each of the following systems of simultaneous equations: (i) x 4y + 3z = 2 3x 11y + 13z = 3 2x 9y + 2z = 7 x 2y + 6z = 2 (ii) x 4y + 3z =
More informationMath (P)refresher Lecture 8: Unconstrained Optimization
Math (P)refresher Lecture 8: Unconstrained Optimization September 2006 Today s Topics : Quadratic Forms Definiteness of Quadratic Forms Maxima and Minima in R n First Order Conditions Second Order Conditions
More informationLinear Systems. Math A Bianca Santoro. September 23, 2016
Linear Systems Math A4600 - Bianca Santoro September 3, 06 Goal: Understand how to solve Ax = b. Toy Model: Let s study the following system There are two nice ways of thinking about this system: x + y
More informationMagnetic Flux. Conference 8. Physics 102 General Physics II
Physics 102 Conference 8 Magnetic Flux Conference 8 Physics 102 General Physics II Monday, March 24th, 2014 8.1 Quiz Problem 8.1 Suppose we want to set up an EMF of 12 Volts in a circular loop of wire
More information9. Numerical linear algebra background
Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization
More informationPerformance Surfaces and Optimum Points
CSC 302 1.5 Neural Networks Performance Surfaces and Optimum Points 1 Entrance Performance learning is another important class of learning law. Network parameters are adjusted to optimize the performance
More informationQuantum Mechanics for Scientists and Engineers. David Miller
Quantum Mechanics for Scientists and Engineers David Miller Vector spaces, operators and matrices Vector spaces, operators and matrices Vector space Vector space We need a space in which our vectors exist
More information10-725/36-725: Convex Optimization Prerequisite Topics
10-725/36-725: Convex Optimization Prerequisite Topics February 3, 2015 This is meant to be a brief, informal refresher of some topics that will form building blocks in this course. The content of the
More informationChapter 3. Vector spaces
Chapter 3. Vector spaces Lecture notes for MA1111 P. Karageorgis pete@maths.tcd.ie 1/22 Linear combinations Suppose that v 1,v 2,...,v n and v are vectors in R m. Definition 3.1 Linear combination We say
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More information