
Dr. Pedro Vásquez, UPRM

Quadratic programming (MATH 6026)

Equality constraints. A general formulation of these problems is:

$$\min_{x \in \mathbb{R}^n} q(x) = \tfrac{1}{2} x^T Q x + x^T c \qquad (1)$$

$$\text{subject to } Ax = b$$

where $A$ is the $m \times n$ Jacobian of the constraints (with $m \le n$), whose rows are $a_i^T$, $i \in \mathcal{E}$, and $b$ is a vector in $\mathbb{R}^m$. We assume that $A$ has full row rank (rank $m$), so that the constraints are consistent.

First order necessary conditions

The FONC for $x^*$ to be a solution of (1) state that there is a vector $\lambda^*$ such that the following system of equations is satisfied:

$$\begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} x^* \\ \lambda^* \end{bmatrix} = \begin{bmatrix} -c \\ b \end{bmatrix}$$

This system can be rewritten in a form that is useful for computation by expressing $x^*$ as $x^* = x + p$, where $x$ is some estimate of the solution and $p$ is the desired step. The system above then becomes

$$\begin{bmatrix} Q & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} -p \\ \lambda^* \end{bmatrix} = \begin{bmatrix} g \\ h \end{bmatrix} \qquad (2)$$

where

$$h = Ax - b, \qquad g = c + Qx, \qquad p = x^* - x \qquad (3)$$

The matrix in (2) is called the Karush-Kuhn-Tucker (KKT) matrix, and the following result gives conditions under which it is nonsingular. We use $Z$ for the $n \times (n-m)$ matrix whose columns form a basis of the null space of $A$; that is, $Z$ has full rank and satisfies $AZ = 0$.

Lemma. Let $A$ have full row rank, and assume that the reduced-Hessian matrix $Z^T Q Z$ is positive definite. Then the KKT matrix

$$K = \begin{bmatrix} Q & A^T \\ A & 0 \end{bmatrix} \qquad (4)$$

is nonsingular, and hence there is a unique vector pair $(x^*, \lambda^*)$ satisfying (2).
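As a concrete illustration, the sketch below (Python with NumPy/SciPy; the matrices $Q$, $A$ and vectors $c$, $b$ are made-up illustration values, not taken from the slides) checks the lemma's hypothesis on the reduced Hessian and then solves the first KKT system directly:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative equality-constrained QP data (assumed, not from the slides).
Q = np.array([[6., 2., 1.],
              [2., 5., 2.],
              [1., 2., 4.]])
c = np.array([-8., -3., -3.])
A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
b = np.array([3., 0.])
n, m = Q.shape[0], A.shape[0]

# Z spans the null space of A; the lemma requires Z^T Q Z positive definite.
Z = null_space(A)                       # n x (n - m)
assert np.all(np.linalg.eigvalsh(Z.T @ Q @ Z) > 0)

# KKT system [Q  -A^T; A  0] [x*; lambda*] = [-c; b].
K = np.block([[Q, -A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x_star, lam_star = sol[:n], sol[n:]
print("x* =", x_star)                   # satisfies A x* = b
print("lambda* =", lam_star)
```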

Example

Theorem. Let $A$ have full row rank and assume that the reduced-Hessian matrix $Z^T Q Z$ is positive definite. Then the vector $x^*$ satisfying (2) is the unique global solution of (1).

Direct solution of the KKT system

One important observation is that if $m \ge 1$, the KKT matrix is always indefinite. We define the inertia of a symmetric matrix $K$ to be the triple giving the numbers $n_+$, $n_-$, and $n_0$ of positive, negative, and zero eigenvalues, respectively; that is,

$$\operatorname{inertia}(K) = (n_+, n_-, n_0)$$

Theorem. Let $K$ be defined by (4), and suppose that $A$ has rank $m$. Then

$$\operatorname{inertia}(K) = \operatorname{inertia}(Z^T Q Z) + (m, m, 0)$$

Therefore, if $Z^T Q Z$ is positive definite, $\operatorname{inertia}(K) = (n, m, 0)$.
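A quick numerical check of this theorem on the same illustrative data; computing the inertia from eigenvalues, as done here, is purely for demonstration (in practice the inertia comes as a by-product of a factorization):

```python
import numpy as np
from scipy.linalg import null_space

def inertia(M, tol=1e-10):
    # Inertia (n_plus, n_minus, n_zero) read off the eigenvalues of M.
    eig = np.linalg.eigvalsh(M)
    return (int((eig > tol).sum()),
            int((eig < -tol).sum()),
            int((np.abs(eig) <= tol).sum()))

# Same illustrative data as in the previous sketch (n = 3, m = 2).
Q = np.array([[6., 2., 1.], [2., 5., 2.], [1., 2., 4.]])
A = np.array([[1., 0., 1.], [0., 1., 1.]])
m = A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])   # the matrix K of (4)
Z = null_space(A)

# inertia(K) = inertia(Z^T Q Z) + (m, m, 0) = (1, 0, 0) + (2, 2, 0)
print(inertia(Z.T @ Q @ Z))   # (1, 0, 0): positive definite
print(inertia(K))             # (3, 2, 0) = (n, m, 0)
```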

Factoring the full KKT system

One option for solving (2) is to perform a triangular factorization of the full KKT matrix and then carry out forward and backward substitution with the triangular factors. Because of the indefiniteness, we cannot use the Cholesky factorization. We could use Gaussian elimination with partial pivoting to obtain the $L$ and $U$ factors, but this approach has the disadvantage that it ignores the symmetry. The most effective strategy in this case is to use a symmetric indefinite factorization. For a general symmetric matrix $K$, this factorization has the form

$$P^T K P = L B L^T$$

where $P$ is a permutation matrix, $L$ is unit lower triangular, and $B$ is block-diagonal with either $1 \times 1$ or $2 \times 2$ blocks.
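A sketch of this approach using scipy.linalg.ldl, a Bunch-Kaufman style routine returning the factor $L$ (row-permuted to lower triangular form), the block-diagonal $B$, and the permutation. The data is the same illustrative QP as above, and the right-hand side corresponds to taking the estimate $x = 0$ in (2) and (3), so that $g = c$ and $h = -b$:

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

# Same illustrative QP data as in the earlier sketches.
Q = np.array([[6., 2., 1.], [2., 5., 2.], [1., 2., 4.]])
c = np.array([-8., -3., -3.])
A = np.array([[1., 0., 1.], [0., 1., 1.]])
b = np.array([3., 0.])
n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])

# Symmetric indefinite factorization: K = L B L^T, B block-diagonal.
L, B, perm = ldl(K)
Lt = L[perm]                      # row permutation makes Lt lower triangular
assert np.allclose(L @ B @ L.T, K)

# Solve (2) with x = 0, i.e. K [-p; lambda*] = [g; h] = [c; -b],
# by forward substitution, a block-diagonal solve, then back substitution.
rhs = np.concatenate([c, -b])
y = solve_triangular(Lt, rhs[perm], lower=True)
z = np.linalg.solve(B, y)         # B is cheap to invert block by block
w = solve_triangular(Lt, z, lower=True, trans='T')
sol = np.empty_like(w)
sol[perm] = w                     # undo the permutation
p, lam = -sol[:n], sol[n:]
print("x* = x + p =", p, " lambda* =", lam)
```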


Null-space method

The null-space method does not require nonsingularity of $Q$ and therefore has wider applicability. It assumes only that the conditions of the lemma hold, namely that $A$ has full row rank and that $Z^T Q Z$ is positive definite. However, it requires knowledge of the null-space basis matrix $Z$.

Suppose that the vector $p$ is partitioned into two components:

$$p = Y p_Y + Z p_Z$$

where
$Z$ is the $n \times (n - m)$ null-space matrix;
$Y$ is any $n \times m$ matrix such that $[Y \mid Z]$ is nonsingular;
$p_Y$ is an $m$-vector;
$p_Z$ is an $(n - m)$-vector;
$Y p_Y$ is a particular solution of $Ap = -h$; and
$Z p_Z$ is a displacement along the constraints.

Then we obtain:

$$(AY)\, p_Y = -h$$

Since $A$ has rank $m$ and $[Y \mid Z]$ is $n \times n$ nonsingular, the product $A [Y \mid Z] = [AY \mid 0]$ has rank $m$. Therefore $AY$ is a nonsingular $m \times m$ matrix, and $p_Y$ is well determined. Substituting $p = Y p_Y + Z p_Z$ into the first block row of (2) gives

$$-Q Y p_Y - Q Z p_Z + A^T \lambda^* = g$$

and multiplying by $Z^T$:

$$\left( Z^T Q Z \right) p_Z = -Z^T Q Y p_Y - Z^T g$$

This system can be solved by performing a Cholesky factorization of the reduced-Hessian matrix $Z^T Q Z$ to determine $p_Z$; we then compute the total step $p = Y p_Y + Z p_Z$. To obtain the Lagrange multiplier, we use

$$(AY)^T \lambda^* = Y^T (g + Qp)$$

which can be solved for $\lambda^*$.
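The whole procedure fits in a few lines; in the sketch below (same illustrative data as before) the particular choice of $Y$ as an orthonormal basis for the range of $A^T$, computed by a QR factorization, is an assumption on our part, since the method only requires $[Y \mid Z]$ to be nonsingular:

```python
import numpy as np
from scipy.linalg import null_space, qr, cho_factor, cho_solve

# Same illustrative QP data as before.
Q = np.array([[6., 2., 1.], [2., 5., 2.], [1., 2., 4.]])
c = np.array([-8., -3., -3.])
A = np.array([[1., 0., 1.], [0., 1., 1.]])
b = np.array([3., 0.])
n = Q.shape[0]

x = np.zeros(n)                       # some estimate of the solution
g = c + Q @ x
h = A @ x - b

Z = null_space(A)                     # n x (n - m), satisfies AZ = 0
Y = qr(A.T, mode='economic')[0]       # n x m, makes [Y | Z] nonsingular

p_Y = np.linalg.solve(A @ Y, -h)              # (AY) p_Y = -h
ZQZ = cho_factor(Z.T @ Q @ Z)                 # Cholesky of reduced Hessian
p_Z = cho_solve(ZQZ, -Z.T @ Q @ Y @ p_Y - Z.T @ g)
p = Y @ p_Y + Z @ p_Z                         # total step
lam = np.linalg.solve((A @ Y).T, Y.T @ (g + Q @ p))
print("x* =", x + p, " lambda* =", lam)
```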

Example

Iterative solution of the KKT system

CG applied to the reduced system. Assuming that the solution of the QP is

$$x^* = Y x_Y + Z x_Z$$

for some vectors $x_Z \in \mathbb{R}^{n-m}$ and $x_Y \in \mathbb{R}^m$, the constraints $Ax = b$ yield

$$A Y x_Y = b$$

which determines the vector $x_Y$. To obtain $x_Z$, we solve the unconstrained reduced problem

$$\min_{x_Z} \ \tfrac{1}{2}\, x_Z^T Z^T Q Z x_Z + x_Z^T c_Z, \qquad \text{where } c_Z = Z^T Q Y x_Y + Z^T c$$

The solution $x_Z$ satisfies the linear system

$$Z^T Q Z\, x_Z = -c_Z$$

Since $Z^T Q Z$ is positive definite, we can apply the CG method.

Algorithm 16.1 (preconditioned CG for reduced systems)

Choose an initial point $x_Z$;
Compute $r_Z = Z^T Q Z x_Z + c_Z$, $g_Z = W_{ZZ}^{-1} r_Z$, and $d_Z = -g_Z$;
repeat
  $\alpha \leftarrow r_Z^T g_Z / d_Z^T Z^T Q Z d_Z$;
  $x_Z \leftarrow x_Z + \alpha d_Z$;
  $r_Z^+ \leftarrow r_Z + \alpha Z^T Q Z d_Z$;
  $g_Z^+ \leftarrow W_{ZZ}^{-1} r_Z^+$;
  $\beta \leftarrow (r_Z^+)^T g_Z^+ / r_Z^T g_Z$;
  $d_Z \leftarrow -g_Z^+ + \beta d_Z$;
  $g_Z \leftarrow g_Z^+$; $r_Z \leftarrow r_Z^+$;
until a termination test is satisfied.
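A sketch of Algorithm 16.1 in Python on the same illustrative data; the trivial preconditioner $W_{ZZ} = I$ and the residual-norm termination test are assumptions, since the slides leave both choices open:

```python
import numpy as np
from scipy.linalg import null_space, qr, cho_factor, cho_solve

def reduced_pcg(ZQZ, c_Z, W_ZZ, x_Z, tol=1e-10, max_iter=200):
    # Preconditioned CG for Z^T Q Z x_Z = -c_Z (Algorithm 16.1),
    # with dense matrices for simplicity; names follow the slides.
    W = cho_factor(W_ZZ)
    r = ZQZ @ x_Z + c_Z
    g = cho_solve(W, r)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:        # termination test (an assumption)
            break
        ZQZd = ZQZ @ d
        alpha = (r @ g) / (d @ ZQZd)
        x_Z = x_Z + alpha * d
        r_new = r + alpha * ZQZd
        g_new = cho_solve(W, r_new)
        beta = (r_new @ g_new) / (r @ g)
        d = -g_new + beta * d
        r, g = r_new, g_new
    return x_Z

# Same illustrative data; x_Y from the constraints, then CG for x_Z.
Q = np.array([[6., 2., 1.], [2., 5., 2.], [1., 2., 4.]])
c = np.array([-8., -3., -3.])
A = np.array([[1., 0., 1.], [0., 1., 1.]])
b = np.array([3., 0.])
Z = null_space(A)
Y = qr(A.T, mode='economic')[0]
x_Y = np.linalg.solve(A @ Y, b)            # (AY) x_Y = b
c_Z = Z.T @ Q @ Y @ x_Y + Z.T @ c
ZQZ = Z.T @ Q @ Z
W_ZZ = np.eye(ZQZ.shape[0])                # trivial preconditioner
x_Z = reduced_pcg(ZQZ, c_Z, W_ZZ, np.zeros(ZQZ.shape[0]))
print("x* =", Y @ x_Y + Z @ x_Z)
```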
