Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008
2 Outline for solution of VIs
Algorithms for general VIs. Two basic approaches:
- The first approach reformulates (and solves) the KKT conditions directly, as a complementarity problem. This is useful when K has a complex algebraic characterization.
- The second approach is an inherently primal framework that uses an appropriately defined merit function, a good idea when K is simple (next and last lecture).
Game theory: Models, Algorithms and Applications
3 KKT Conditions-based Methods
Consider the VI(K,F), which requires an $x^* \in K$ such that
$(x - x^*)^T F(x^*) \ge 0$ for all $x \in K$.
Suppose $K$ is finitely representable as
$K = \{x \in \mathbb{R}^n : h(x) = 0,\ g(x) \le 0\}$,
where $h : \mathbb{R}^n \to \mathbb{R}^\ell$ and $g : \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable. Then the KKT conditions for VI(K,F) are
(KKT)
$F(x) + Jh(x)^T \mu + Jg(x)^T \lambda = 0$
$h(x) = 0$
$0 \le \lambda \perp -g(x) \ge 0.$
For the remainder of this section we assume that $F \in C^1$ on $\mathbb{R}^n$ and $h, g \in C^2$ on $\mathbb{R}^n$.
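Checking a candidate KKT triple amounts to evaluating the three blocks of (KKT). A minimal numerical sketch, on a made-up two-dimensional instance (the matrix `M`, vector `q` and the solution below are assumptions for illustration, not from the lecture):

```python
import numpy as np

# Toy instance: K = {x >= 0} written as g(x) = -x <= 0, no equality
# constraints, and an affine map F(x) = M x + q, so VI(K, F) is an LCP.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])

def F(x):
    return M @ x + q

def g(x):
    return -x            # constraints g(x) <= 0

def Jg(x):
    return -np.eye(2)    # Jacobian of g

def kkt_residual(x, lam):
    """Largest violation of the three (KKT) blocks:
    F(x) + Jg(x)^T lam = 0,  lam >= 0 and g(x) <= 0,  lam^T g(x) = 0."""
    stationarity = np.linalg.norm(F(x) + Jg(x).T @ lam, np.inf)
    primal = max(np.max(g(x)), 0.0)
    dual = max(np.max(-lam), 0.0)
    complementarity = abs(lam @ g(x))
    return max(stationarity, primal, dual, complementarity)

# By inspection, x* = (1/2, 0) solves this LCP, with multiplier lam* = F(x*).
x_star = np.array([0.5, 0.0])
lam_star = F(x_star)
print(kkt_residual(x_star, lam_star))          # 0.0
print(kkt_residual(np.zeros(2), np.zeros(2)))  # 1.0: the origin is not a KKT point
```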
4 Using the FB function
A reformulation of (KKT) that uses the FB function $\psi_{FB}(a,b) \triangleq \sqrt{a^2+b^2} - a - b$ is given by
$0 = \Phi_{FB}(x,\mu,\lambda) \triangleq \begin{pmatrix} L(x,\mu,\lambda) \\ h(x) \\ \psi_{FB}(-g_1(x), \lambda_1) \\ \vdots \\ \psi_{FB}(-g_m(x), \lambda_m) \end{pmatrix},$
where $L(x,\mu,\lambda) \triangleq F(x) + Jh(x)^T\mu + Jg(x)^T\lambda$ is the VI-Lagrangian function. The natural merit function is given by
$\theta_{FB}(x,\mu,\lambda) \triangleq \tfrac{1}{2}\,\Phi_{FB}(x,\mu,\lambda)^T\Phi_{FB}(x,\mu,\lambda).$
By the results of the last two lectures, $\Phi_{FB}$ and $\theta_{FB}$ are, respectively, semismooth and continuously differentiable on $\mathbb{R}^{n+\ell+m}$. Moreover, if $JF$, $\nabla^2 g_i$ and $\nabla^2 h_j$ are locally Lipschitz, then $\Phi_{FB}$ is strongly semismooth.
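The reformulation above can be sketched in a few lines. The instance below (same shape as before: $g(x) = -x$, no $h$, affine $F$) is an assumption for illustration; the point is that $\theta_{FB}$ vanishes exactly at KKT pairs:

```python
import numpy as np

def psi_fb(a, b):
    """Fischer-Burmeister function psi(a, b) = sqrt(a^2 + b^2) - a - b,
    which vanishes exactly when a >= 0, b >= 0 and a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

# Toy instance: g(x) = -x <= 0, no equality constraints, F(x) = M x + q,
# so psi_fb(-g_i(x), lam_i) = psi_fb(x_i, lam_i) componentwise.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])

def Phi_fb(x, lam):
    L = M @ x + q - lam          # VI-Lagrangian F(x) + Jg(x)^T lam with Jg = -I
    return np.concatenate([L, psi_fb(x, lam)])

def theta_fb(x, lam):
    r = Phi_fb(x, lam)
    return 0.5 * r @ r           # natural merit function

x_star, lam_star = np.array([0.5, 0.0]), np.array([0.0, 1.5])
print(theta_fb(x_star, lam_star))        # 0.0 at the KKT pair
print(theta_fb(np.ones(2), np.ones(2)))  # positive away from it
```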
5 Nonsingularity of elements of the generalized Clarke Jacobian
Next, we discuss the nonsingularity of all elements in $\partial\Phi_{FB}(x,\mu,\lambda)$. We also relate these to the strong regularity of the KKT system given by (KKT).
Proposition 1 (10.1.1). The generalized Clarke Jacobian $\partial\Phi_{FB}(x,\mu,\lambda)$ is contained in the following family of matrices:
$\mathcal{A}(x,\mu,\lambda) \triangleq \left\{ \begin{pmatrix} J_xL(x,\mu,\lambda) & Jh(x)^T & Jg(x)^T \\ Jh(x) & 0 & 0 \\ -D_g(x,\lambda)\,Jg(x) & 0 & D_\lambda(x,\lambda) \end{pmatrix} \right\},$
where $D_\lambda(x,\lambda) = \mathrm{diag}(a_1(x,\lambda),\dots,a_m(x,\lambda))$ and $D_g(x,\lambda) = \mathrm{diag}(b_1(x,\lambda),\dots,b_m(x,\lambda))$ are $m \times m$ diagonal matrices whose diagonal elements are given by
$(a_i(x,\lambda), b_i(x,\lambda)) \begin{cases} = \dfrac{(\lambda_i,\, -g_i(x))}{\sqrt{g_i(x)^2 + \lambda_i^2}} - (1,1), & (g_i(x), \lambda_i) \ne (0,0) \\[4pt] \in \mathrm{cl}\,B(0,1) - (1,1), & (g_i(x), \lambda_i) = (0,0). \end{cases}$
6 Moreover, $\mathcal{A}(x,\mu,\lambda)$ is a linear Newton approximation of $\Phi_{FB}$ at $(x,\mu,\lambda)$, which is strong if $JF$ and each $\nabla^2 g_i$ and $\nabla^2 h_j$ are locally Lipschitz continuous at $x$.
Proof: Omitted (follows easily from results in Ch. 7).
7 Index sets of relevance
Next we introduce several index sets that will aid in the discussion, where $I \triangleq \{1,\dots,m\}$:
$I_0 \triangleq \{i \in I : g_i(x) = 0 \le \lambda_i\}$
$I_< \triangleq \{i \in I : g_i(x) < 0 = \lambda_i\}$
$I_R \triangleq I \setminus (I_0 \cup I_<)$
Note that $i \in I_R$ if and only if one of the following holds:
- $g_i(x) > 0$ (infeasibility of $g$)
- $\lambda_i < 0$ (infeasibility of the multipliers)
- $\min(-g_i(x), \lambda_i) > 0$ (no complementarity).
If $0 \le \lambda \perp -g(x) \ge 0$, then $I_R$ is empty. More generally, $\psi_{FB}(-g_i(x), \lambda_i) = 0 \iff i \in I_0 \cup I_<$.
Game theory: Models, Algorithms and Applications
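The partition of slide 7 is easy to mechanize. A small helper, with a numerical tolerance that the slides do not use (an assumption for finite-precision input), and 0-based indices:

```python
def index_sets(g_x, lam, tol=1e-10):
    """Partition {0, ..., m-1} into I0 (active with lam_i >= 0),
    I_lt (inactive with lam_i = 0) and IR (everything else)."""
    I0, I_lt, IR = [], [], []
    for i, (gi, li) in enumerate(zip(g_x, lam)):
        if abs(gi) <= tol and li >= -tol:
            I0.append(i)
        elif gi < -tol and abs(li) <= tol:
            I_lt.append(i)
        else:
            # infeasible g, negative multiplier, or no complementarity
            IR.append(i)
    return I0, I_lt, IR

# i=0: active, lam >= 0 -> I0.      i=1: inactive, lam = 0 -> I_lt.
# i=2: g > 0 (infeasible) -> IR.    i=3: active but lam < 0 -> IR.
print(index_sets([0.0, -2.0, 1.0, 0.0], [3.0, 0.0, 1.0, -1.0]))  # ([0], [1], [2, 3])
```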
8 We may also define $I_{00}, I_+ \subseteq I_0$ given by
$I_{00} = \{i \in I_0 : \lambda_i = 0\}$ (no strict complementarity)
$I_+ = \{i \in I_0 : \lambda_i > 0\}$ (strict complementarity).
We further refine $I_{00}$ using the particular elements $a_i(x,\lambda), b_i(x,\lambda)$ of the generalized Clarke Jacobian $\partial\Phi_{FB}(x,\mu,\lambda)$:
$I_{01} = \{i \in I_{00} : a_i(x,\lambda) = 0\}$
$I_{02} = \{i \in I_{00} : \max(a_i(x,\lambda), b_i(x,\lambda)) < 0\}$
$I_{03} = \{i \in I_{00} : b_i(x,\lambda) = 0\}$
If we let $I_{R2} \triangleq I_R \cup I_{02}$, then the following relationships hold:
$I = I_0 \cup I_< \cup I_R$ (the whole set of indices)
$I_0 = I_{00} \cup I_+$
$I_{00} = I_{01} \cup I_{02} \cup I_{03}$,
implying that $I = I_+ \cup I_{01} \cup I_{R2} \cup I_{03} \cup I_<$.
Recall that $(a_i, b_i) \in \mathrm{cl}\,B(0,1) - (1,1)$ for $i \in I_{00}$.
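The pairs $(a_i, b_i)$ driving this refinement can be evaluated directly from the formula in Proposition 1; a short sketch (the sample inputs are illustrative assumptions):

```python
import math

def ab_pair(g_i, lam_i):
    """Diagonal entries (a_i, b_i) from Proposition 1, away from the
    degenerate case; at (g_i, lam_i) = (0, 0) the pair is set-valued."""
    r = math.hypot(g_i, lam_i)
    if r == 0.0:
        raise ValueError("(a_i, b_i) lies in cl B(0,1) - (1,1); not a single pair")
    return lam_i / r - 1.0, -g_i / r - 1.0

# Strict complementarity (i in I_+): g_i = 0 < lam_i gives (a_i, b_i) = (0, -1).
print(ab_pair(0.0, 2.0))   # (0.0, -1.0)
# Inactive index (i in I_<): g_i < 0 = lam_i gives (a_i, b_i) = (-1, 0).
print(ab_pair(-2.0, 0.0))  # (-1.0, 0.0)
```

These two outputs are exactly the diagonal values $(D_\lambda)_{ii}$ and $(D_g)_{ii}$ used on the next slide.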
9 Moreover, we have $(D_g)_{ii} = 0$ and $(D_\lambda)_{ii} = -1$ for $i \in I_< \cup I_{03}$ (from the definition):
$i \in I_< \implies (a_i, b_i) = (-1, 0)$;
$i \in I_{03} \implies b_i = 0$, which forces $a_i = -1$.
Similarly, we have $(D_g)_{ii} = -1$ and $(D_\lambda)_{ii} = 0$ for $i \in I_+ \cup I_{01}$, and
$\max((D_g)_{ii}, (D_\lambda)_{ii}) < 0$ for $i \in I_{R2} = I_R \cup I_{02}$.
It is useful to define a family of matrices closely related to the strong-stability conditions. For any subset $\mathcal{J} \subseteq I_{00} \cup I_R$,
$M_{FB}(\mathcal{J}) \triangleq \begin{pmatrix} J_xL & Jh^T & Jg_{I_+}^T & Jg_{\mathcal{J}}^T \\ Jh & 0 & 0 & 0 \\ -Jg_{I_+} & 0 & 0 & 0 \\ -Jg_{\mathcal{J}} & 0 & 0 & 0 \end{pmatrix},$
where $Jg_{I_+}$ and $Jg_{\mathcal{J}}$ denote the rows of $Jg$ indexed by $I_+$ and $\mathcal{J}$.
10 Equivalent statement of strong stability
Lemma 1 (10.1.2). A KKT triple $(x^*, \mu^*, \lambda^*)$ is a strongly stable KKT point of VI(K,F) if and only if, for all subsets $\mathcal{J} \subseteq I_{00}$, the determinants of the matrices $M_{FB}(\mathcal{J})$ have the same nonzero sign.
Proof: ($\Rightarrow$) Suppose that $(x^*, \mu^*, \lambda^*)$ is a strongly stable KKT triple. By an earlier corollary, it follows that
$B \triangleq \begin{pmatrix} J_xL & Jh^T & Jg_{I_+}^T \\ Jh & 0 & 0 \\ -Jg_{I_+} & 0 & 0 \end{pmatrix}$
is nonsingular and the Schur complement
$C \triangleq \begin{pmatrix} Jg_{I_{00}} & 0 & 0 \end{pmatrix} B^{-1} \begin{pmatrix} Jg_{I_{00}} & 0 & 0 \end{pmatrix}^T$
(Why do we only need to restrict ourselves to $I_{00}$?)
11 is a P-matrix. Let $\mathcal{J}$ be an arbitrary subset of $I_{00}$. By the Schur determinantal formula, we have
$\det M_{FB}(\mathcal{J}) = \det B \cdot \det\big(M_{FB}(\mathcal{J})/B\big).$
The Schur complement $(M_{FB}(\mathcal{J})/B)$ is a principal submatrix of $C$. But $C$ is a P-matrix, so every principal submatrix has a positive determinant, i.e. $\det(M_{FB}(\mathcal{J})/B) > 0$. This implies that $\mathrm{sgn}\,\det M_{FB}(\mathcal{J}) = \mathrm{sgn}\,\det B$, independent of $\mathcal{J}$.
($\Leftarrow$) Omitted.
This suggests a possible definition of strong stability for an arbitrary triple $(x, \mu, \lambda)$. If this triple is a KKT triple, then it is equivalent to the earlier definition. The ESSC defined below extends this to the case of strong stability of a non-KKT triple.
12 Extended Strong-Stability Condition (ESSC)
Definition 1 (10.1.3). The Extended Strong-Stability Condition (ESSC) is said to hold at a triple $(x,\mu,\lambda) \in \mathbb{R}^{n+\ell+m}$ if
(a) $\mathrm{sgn}\,\det M_{FB}(\mathcal{J})$ is a nonzero constant for all index sets $\mathcal{J} \subseteq I_{00}$;
(b) $\det M_{FB}(\mathcal{J}) \cdot \det M_{FB}(\mathcal{J}') \ge 0$ for any two index sets $\mathcal{J}$ and $\mathcal{J}'$ that are subsets of $I_{00} \cup I_R$.
Equivalently, the ESSC holds at $(x,\mu,\lambda)$ if and only if
(i) the matrix $B = \begin{pmatrix} J_xL & Jh^T & Jg_{I_+}^T \\ Jh & 0 & 0 \\ -Jg_{I_+} & 0 & 0 \end{pmatrix} = M_{FB}(\emptyset)$ is nonsingular;
(ii) $M_{FB}(\mathcal{J})$ is nonsingular for all nonempty subsets $\mathcal{J} \subseteq I_{00}$;
(iii) for all subsets $\mathcal{J} \subseteq I_{00} \cup I_R$ for which $M_{FB}(\mathcal{J})$ is nonsingular, we have $\mathrm{sgn}\,\det M_{FB}(\mathcal{J}) = \mathrm{sgn}\,\det B$.
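Conditions (a) and (b) are finite checks over subsets of index sets, so they can be tested by enumeration once the determinants of the matrices $M_{FB}(\mathcal{J})$ are available. A sketch under that assumption (the determinant tables below are hypothetical, standing in for matrices assembled from a concrete instance):

```python
import numpy as np
from itertools import chain, combinations

def subsets(s):
    """All subsets of a finite index set, as tuples."""
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def essc_holds(det_mfb, I00, IR):
    """Check Definition 1 given det_mfb: frozenset J -> det M_FB(J).
    (a): one common nonzero sign over all J subset of I00;
    (b): no opposite-sign pair of determinants over subsets of I00 u IR."""
    signs = {float(np.sign(det_mfb(frozenset(J)))) for J in subsets(I00)}
    if len(signs) != 1 or 0.0 in signs:
        return False
    dets = [det_mfb(frozenset(J)) for J in subsets(I00 | IR)]
    return all(d1 * d2 >= 0.0 for d1 in dets for d2 in dets)

# Hypothetical one-constraint situation with I00 = {0} and IR empty:
good = {frozenset(): 1.0, frozenset({0}): 1.0}
bad = {frozenset(): -1.0, frozenset({0}): 1.0}
print(essc_holds(good.__getitem__, {0}, set()))  # True
print(essc_holds(bad.__getitem__, {0}, set()))   # False
```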
13 By considering the nonsingularity of the matrix $M_{FB}(I_{00})$, it follows that if the ESSC holds at $(x,\mu,\lambda)$, then the gradients
$\{\nabla h_j(x) : j = 1,\dots,\ell\} \cup \{\nabla g_i(x) : i \in I_0\}$
are linearly independent.
Essentially, the ESSC is a sufficient condition for the matrices in the generalized Jacobian to be nonsingular.
Theorem 1. Suppose the ESSC holds at a triple $(x,\mu,\lambda)$. Then all the elements in $\mathcal{A}(x,\mu,\lambda)$ (and therefore all matrices in the generalized Jacobian $\partial\Phi_{FB}(x,\mu,\lambda)$) are nonsingular.
Proof sketch: Consider an arbitrary matrix $H$ in $\mathcal{A}(x,\mu,\lambda)$. By definition, such a
14 matrix has the following structure (partitioning rows and columns by the index sets, with columns ordered as $(x, \mu, \lambda_{I_+}, \lambda_{I_{01}}, \lambda_{I_{R2}}, \lambda_{I_{03}}, \lambda_{I_<})$):
$H = \begin{pmatrix} J_xL & Jh^T & Jg_{I_+}^T & Jg_{I_{01}}^T & Jg_{I_{R2}}^T & Jg_{I_{03}}^T & Jg_{I_<}^T \\ Jh & 0 & 0 & 0 & 0 & 0 & 0 \\ Jg_{I_+} & 0 & 0 & 0 & 0 & 0 & 0 \\ Jg_{I_{01}} & 0 & 0 & 0 & 0 & 0 & 0 \\ -(D_g)_{R2}\,Jg_{I_{R2}} & 0 & 0 & 0 & (D_\lambda)_{R2} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -I & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -I \end{pmatrix},$
where $(D_g)_{R2}$ and $(D_\lambda)_{R2}$ are negative definite diagonal matrices. Moreover, $H$ is
15 nonsingular if and only if the following matrix is nonsingular (eliminate the $I_{03}$ and $I_<$ rows and columns, negate the $Jg_{I_+}$ and $Jg_{I_{01}}$ rows, and scale the $I_{R2}$ rows by the nonsingular diagonal $(D_g)^{-1}_{R2}$):
$\tilde H = \begin{pmatrix} J_xL & Jh^T & Jg_{I_+}^T & Jg_{I_{01}}^T & Jg_{I_{R2}}^T \\ Jh & 0 & 0 & 0 & 0 \\ -Jg_{I_+} & 0 & 0 & 0 & 0 \\ -Jg_{I_{01}} & 0 & 0 & 0 & 0 \\ -Jg_{I_{R2}} & 0 & 0 & 0 & (D_g)^{-1}_{R2}(D_\lambda)_{R2} \end{pmatrix} = M_{FB}(I_{01} \cup I_{R2}) + D,$
where $D$ is the diagonal matrix that vanishes except for the positive diagonal block $(D_g)^{-1}_{R2}(D_\lambda)_{R2}$ in the $I_{R2}$ positions.
Next, we employ the following determinantal formula: for any square matrix $A$ of order $r$ and any diagonal matrix $D$ of the same order,
$\det(D + A) = \sum_\alpha \det D_{\alpha\alpha} \det A_{\bar\alpha\bar\alpha},$
where the summation ranges over all subsets $\alpha$ of $\{1,\dots,r\}$ with complements $\bar\alpha$.
16 It follows that
$\det \tilde H = \sum_{\alpha \subseteq I_{R2}} \det\big((D_g)^{-1}_{R2}(D_\lambda)_{R2}\big)_{\alpha\alpha} \cdot \det M_{FB}(\mathcal{J})_{\bar\alpha\bar\alpha},$
where $\tilde H = M_{FB}(\mathcal{J}) + D$ is the transformed matrix above, the summation is over all subsets $\alpha$ of $I_{R2}$, $\mathcal{J} \triangleq I_{01} \cup I_{R2}$, $\bar\alpha \triangleq \mathcal{J} \setminus \alpha$, and $M_{FB}(\mathcal{J})_{\bar\alpha\bar\alpha} = M_{FB}(\mathcal{J} \setminus \alpha)$.
The final step invokes the ESSC: each $\det((D_g)^{-1}_{R2}(D_\lambda)_{R2})_{\alpha\alpha}$ is positive, while every nonzero $\det M_{FB}(\mathcal{J} \setminus \alpha)$ has the sign of $\det B$, so all nonzero terms in the sum share one sign and the sum cannot vanish. (Details omitted.)
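The determinantal formula used above can be verified numerically. The sketch below computes the right-hand side by enumerating subsets and compares it with a direct determinant (the random test data are an assumption, serving only as a sanity check):

```python
import numpy as np
from itertools import chain, combinations

def det_sum_formula(D, A):
    """Right-hand side of det(D + A) = sum_alpha det(D_aa) det(A_cc),
    where alpha ranges over subsets of {0, ..., r-1}, cc indexes the
    complementary principal submatrix, and an empty determinant counts as 1."""
    r = A.shape[0]
    total = 0.0
    for alpha in chain.from_iterable(combinations(range(r), k) for k in range(r + 1)):
        comp = [i for i in range(r) if i not in alpha]
        d_block = np.prod([D[i, i] for i in alpha]) if alpha else 1.0
        a_block = np.linalg.det(A[np.ix_(comp, comp)]) if comp else 1.0
        total += d_block * a_block
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
D = np.diag(rng.standard_normal(4))
print(abs(det_sum_formula(D, A) - np.linalg.det(D + A)) < 1e-8)  # True
```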
17 Necessity of the ESSC
We have seen that the ESSC is a sufficient condition for nonsingularity of the elements in the generalized Jacobian. If the triple satisfies the KKT conditions, then the ESSC is also necessary:
Theorem 2. Let $(x^*, \mu^*, \lambda^*)$ be a KKT triple of VI(K,F). Then the following are equivalent:
(a) The ESSC holds at $(x^*, \mu^*, \lambda^*)$.
(b) All matrices in $\mathcal{A}(x^*, \mu^*, \lambda^*)$ are nonsingular.
(c) All matrices in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ are nonsingular.
Proof: From the earlier result, we have (a) $\Rightarrow$ (b) $\Rightarrow$ (c). It suffices to prove that (c) $\Rightarrow$ (a). We assume that all the matrices in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ are nonsingular. We first show that the set of gradients
$\{\nabla g_i(x^*) : i \in I_{00}\}$
18 is linearly independent. Define a sequence $\{\lambda^k\}$ by
$\lambda^k_i \triangleq \begin{cases} \lambda^*_i, & i \in I \setminus I_{00} \\ 1/k, & i \in I_{00}. \end{cases}$
The sequence $\{\lambda^k\}$ converges to $\lambda^*$; in addition, for each $k$, $\Phi_{FB}$ is continuously differentiable at $(x^*, \mu^*, \lambda^k)$. The sequence of matrices $\{J\Phi_{FB}(x^*, \mu^*, \lambda^k)\}$ converges to a matrix $H \in \partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ with $I_{01} = I_{00}$ and $I_{02} = I_{03} = \emptyset$. The matrix $H$ is nonsingular by assumption, and the linear independence of the gradients follows (see the earlier proof).
Next, we show that for any subset $\mathcal{J} \subseteq I_{00}$, there exists an element in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ with $I_{01} = \mathcal{J}$, $I_{02} = \emptyset$ and $I_{03} = I_{00} \setminus \mathcal{J}$. Since the gradients are linearly independent, the classical implicit function theorem implies that there is a sequence $\{x^k\}$ converging to $x^*$ such that $-g_i(x^k) > 0$ for all $i \in I_{00}$ and all $k$. For each $k$, we define
$\lambda^k_i \triangleq \begin{cases} \lambda^*_i, & i \in I \setminus I_{00} \\ \sqrt{-g_i(x^k)}, & i \in \mathcal{J} \\ g_i(x^k)^2, & i \in I_{00} \setminus \mathcal{J}. \end{cases}$
19 The sequence $\{(x^k, \mu^*, \lambda^k)\}$ converges to $(x^*, \mu^*, \lambda^*)$, and $\Phi_{FB}$ is continuously differentiable at each $(x^k, \mu^*, \lambda^k)$ since there is no index $i$ with $g_i(x^k) = \lambda^k_i = 0$. By using Proposition 1 and the continuity of the functions involved, it follows that $\{J\Phi_{FB}(x^k, \mu^*, \lambda^k)\}$ converges to an $H$ with the desired properties.
Proposition 2 (7.1.18). Let $G : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$ be Lipschitz continuous in a neighborhood of $(\bar x, \bar y)$, for which $G(\bar x, \bar y) = 0$. Assume that all matrices in $\pi_x \partial G(\bar x, \bar y)$ are nonsingular. Then there exist open neighborhoods $U$ and $V$ of $\bar x$ and $\bar y$, respectively, such that for every $y \in V$ the equation $G(x, y) = 0$ has a unique solution $x = F(y) \in U$, with $F(\bar y) = \bar x$, and the map $F : V \to U$ is Lipschitz continuous.
Since all the matrices in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ (a convex set) are nonsingular, such matrices have the same nonzero determinantal sign. Suppose not; then there are matrices $H_1$ and $H_2$ in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ with $\det H_1 < 0$ and $\det H_2 > 0$, and by continuity of the determinant some convex combination $H$ of $H_1$ and $H_2$ satisfies $\det H = 0$. This contradicts the nonsingularity of all elements of $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$.
To complete the proof, we need to show that the matrices $M_{FB}(\mathcal{J})$ have the same nonzero determinantal sign for all $\mathcal{J} \subseteq I_{00}$. For each such index set, let $H(\mathcal{J}) \in \partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ be an element with $I_{01} = \mathcal{J}$ and $I_{03} = I_{00} \setminus \mathcal{J}$. From the
20 structure of $H(\mathcal{J})$, we have
$\mathrm{sgn}\,\det H(\mathcal{J}) = (-1)^m\,\mathrm{sgn}\,\det \begin{pmatrix} J_xL & Jh^T & Jg_{I_+}^T & Jg_{\mathcal{J}}^T \\ Jh & 0 & 0 & 0 \\ -Jg_{I_+} & 0 & 0 & 0 \\ -Jg_{\mathcal{J}} & 0 & 0 & 0 \end{pmatrix} = (-1)^m\,\mathrm{sgn}\,\det M_{FB}(\mathcal{J}).$
Since all the matrices in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ have determinants of the same nonzero sign, the ESSC holds.
Corollary 1. A KKT triple $(x^*, \mu^*, \lambda^*)$ is strongly stable if and only if all the matrices in $\partial\Phi_{FB}(x^*, \mu^*, \lambda^*)$ are nonsingular.
21 Example
Let $n = 1$, $\ell = 0$, $m = 1$ and $g_1(x) \triangleq -x$, so that $K = \{x \ge 0\}$; with affine $F$, the VI is a one-dimensional LCP$(q, M)$.
Case 1: $q = -1$, $M = 1$, i.e. $F(x) = x - 1$. Then
$\Phi_{FB}(x, \lambda) = \begin{pmatrix} x - 1 - \lambda \\ \sqrt{x^2 + \lambda^2} - x - \lambda \end{pmatrix}.$
Consider the pair $(0,0)$ (not a KKT pair here, since $L(0,0) = -1 \ne 0$; recall that the ESSC is defined at arbitrary triples). We have $I_{00} = \{1\}$ and $I_+ = I_R = \emptyset$. Moreover, $M_{FB}(\emptyset) = 1$ and
$M_{FB}(\{1\}) = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix},$
with $\det M_{FB}(\{1\}) = 1$, implying that the ESSC is satisfied. From the expression for $\Phi_{FB}$, the elements of $\partial\Phi_{FB}(0,0)$ are given by
$\begin{pmatrix} 1 & -1 \\ a - 1 & b - 1 \end{pmatrix}, \quad a^2 + b^2 \le 1.$
These matrices are nonsingular: the determinant equals $a + b - 2$, which is negative since $a + b \le \sqrt{2} < 2$.
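Case 1 can be replayed numerically. The matrices below follow the sign conventions as reconstructed in these notes, so read this as a sketch of the computation rather than the book's exact layout:

```python
import numpy as np

# Case 1 data: F(x) = x - 1, g(x) = -x, triple (0, 0), I00 = {1}.
M_empty = np.array([[1.0]])                     # M_FB(emptyset) = J_x L = F'(0) = 1
M_one = np.array([[1.0, -1.0], [1.0, 0.0]])     # M_FB({1})
same_sign = np.sign(np.linalg.det(M_empty)) == np.sign(np.linalg.det(M_one))
print(same_sign)   # True: the ESSC sign test passes

# Elements of the generalized Jacobian at (0, 0) are [[1, -1], [a-1, b-1]]
# with a^2 + b^2 <= 1; their determinant a + b - 2 is largest on the circle.
worst = max(np.cos(t) + np.sin(t) - 2.0 for t in np.linspace(0.0, 2.0 * np.pi, 2001))
print(worst)       # about sqrt(2) - 2 < 0: every element is nonsingular
```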
22 Case 2: $q = 0$, $M = -1$, i.e. $F(x) = -x$. Then
$\Phi_{FB}(x, \lambda) = \begin{pmatrix} -x - \lambda \\ \sqrt{x^2 + \lambda^2} - x - \lambda \end{pmatrix}.$
Consider the KKT pair $(0,0)$, for which $I_{00} = \{1\}$ and $I_+ = I_R = \emptyset$. Moreover, $M_{FB}(\emptyset) = -1$ and
$M_{FB}(\{1\}) = \begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix},$
with $\det M_{FB}(\{1\}) = 1$, implying that the ESSC is not satisfied (since the signs of the determinants are not the same). From the expression for $\Phi_{FB}$, the elements of $\partial\Phi_{FB}(0,0)$ are given by
$\begin{pmatrix} -1 & -1 \\ a - 1 & b - 1 \end{pmatrix}, \quad a^2 + b^2 \le 1,$
whose determinant equals $a - b$. Singularity of such a matrix follows when $a = b = 1/\sqrt{2}$.
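The same numerical replay for Case 2, again under the sign conventions as reconstructed in these notes, exhibits both the failed sign test and an explicitly singular Jacobian element:

```python
import numpy as np

# Case 2 data: F(x) = -x, g(x) = -x, degenerate KKT pair (0, 0), I00 = {1}.
M_empty = np.array([[-1.0]])                    # M_FB(emptyset) = F'(0) = -1
M_one = np.array([[-1.0, -1.0], [1.0, 0.0]])    # M_FB({1}), determinant +1
same_sign = np.sign(np.linalg.det(M_empty)) == np.sign(np.linalg.det(M_one))
print(same_sign)   # False: the ESSC fails at this KKT pair

# The element with a = b = 1/sqrt(2) (a point on the unit circle) is singular,
# since the determinant of [[-1, -1], [a-1, b-1]] equals a - b.
a = b = 1.0 / np.sqrt(2.0)
H = np.array([[-1.0, -1.0], [a - 1.0, b - 1.0]])
print(np.linalg.det(H))   # 0 (up to rounding)
```

By Corollary 1, this singular element certifies that the degenerate KKT pair is not strongly stable.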
More informationOn the Properties of Positive Spanning Sets and Positive Bases
Noname manuscript No. (will be inserted by the editor) On the Properties of Positive Spanning Sets and Positive Bases Rommel G. Regis Received: May 30, 2015 / Accepted: date Abstract The concepts of positive
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationAlgorithms to Compute Bases and the Rank of a Matrix
Algorithms to Compute Bases and the Rank of a Matrix Subspaces associated to a matrix Suppose that A is an m n matrix The row space of A is the subspace of R n spanned by the rows of A The column space
More informationA sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm
Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationMA 265 FINAL EXAM Fall 2012
MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators
More informationUniqueness of Generalized Equilibrium for Box Constrained Problems and Applications
Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Alp Simsek Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Asuman E.
More informationCHAPTER 3 REVIEW QUESTIONS MATH 3034 Spring a 1 b 1
. Let U = { A M (R) A = and b 6 }. CHAPTER 3 REVIEW QUESTIONS MATH 334 Spring 7 a b a and b are integers and a 6 (a) Let S = { A U det A = }. List the elements of S; that is S = {... }. (b) Let T = { A
More informationMath113: Linear Algebra. Beifang Chen
Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationLinear Systems and Matrices
Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......
More information4. Algebra and Duality
4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationIntroduction to Optimization Techniques. Nonlinear Optimization in Function Spaces
Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation
More information1. Introduction. Consider the following quadratically constrained quadratic optimization problem:
ON LOCAL NON-GLOBAL MINIMIZERS OF QUADRATIC OPTIMIZATION PROBLEM WITH A SINGLE QUADRATIC CONSTRAINT A. TAATI AND M. SALAHI Abstract. In this paper, we consider the nonconvex quadratic optimization problem
More informationMAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM
MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM OLGA SLYUSAREVA AND MICHAEL TSATSOMEROS Abstract. The principal pivot transform (PPT) is a transformation of a matrix A tantamount to exchanging
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More information8. Constrained Optimization
8. Constrained Optimization Daisuke Oyama Mathematics II May 11, 2018 Unconstrained Maximization Problem Let X R N be a nonempty set. Definition 8.1 For a function f : X R, x X is a (strict) local maximizer
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationSolution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark
Solution Methods Richard Lusby Department of Management Engineering Technical University of Denmark Lecture Overview (jg Unconstrained Several Variables Quadratic Programming Separable Programming SUMT
More informationExamination paper for TMA4180 Optimization I
Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted
More information1. Introduction. Consider the following parameterized optimization problem:
SIAM J. OPTIM. c 1998 Society for Industrial and Applied Mathematics Vol. 8, No. 4, pp. 940 946, November 1998 004 NONDEGENERACY AND QUANTITATIVE STABILITY OF PARAMETERIZED OPTIMIZATION PROBLEMS WITH MULTIPLE
More informationSolution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method
Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian
More informationMath 321: Linear Algebra
Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,
More informationChapter 1: Systems of Linear Equations
Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where
More informationKey words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone
ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called
More information