PENNON A Generalized Augmented Lagrangian Method for Convex NLP and SDP p.1/39
Michal Kočvara
Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, and Czech Technical University
kocvara@utia.cas.cz
PBM Method for convex NLP

Ben-Tal, Zibulevsky 1992, 1997. A combination of the (exterior) penalty method, the (interior) barrier method, and the method of multipliers.

Problem:

  (CP)   min { f(x) : g_i(x) ≤ 0, i = 1, ..., m },   x ∈ R^n

Assume:
  1. f, g_i (i = 1, ..., m) are convex
  2. the feasible set X is nonempty and compact (A1)
  3. there exists x̂ such that g_i(x̂) < 0 for all i = 1, ..., m (A2)
PBM Method for convex NLP

Choose ϕ as smooth as possible, with dom ϕ as large as possible, satisfying:
  (ϕ0) ϕ strictly convex, strictly monotone increasing and C²
  (ϕ1) dom ϕ = (−∞, b) with 0 < b ≤ ∞
  (ϕ2) ϕ(0) = 0
  (ϕ3) ϕ′(0) = 1
  (ϕ4) lim_{t→b} ϕ′(t) = ∞
  (ϕ5) lim_{t→−∞} ϕ′(t) = 0

[figure: graphs of ϕ(t) and ϕ′(t) with the asymptote at t = b]
PBM Method for convex NLP

Examples:

  ϕ_1^r(t) = { c_1 (t²/2) + c_2 t + c_3   for t ≥ r
             { c_4 log(t − c_5) + c_6      for t < r

  ϕ_2^r(t) = { c_1 (t²/2) + c_2 t + c_3   for t ≥ r
             { c_4 / (t − c_5) + c_6       for t < r,    r ∈ (−1, 1)

Properties: both are C² with bounded second derivative ⟹ improved behaviour of Newton's method; each is a composition of a barrier branch (logarithmic / reciprocal) and a penalty branch (quadratic).
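The constants c_1, ..., c_6 are fixed by the C² matching conditions at the branch point r together with (ϕ2) and (ϕ3). The following sketch uses an assumed concrete instance (r = −1/2, unit quadratic coefficients, the classical quadratic-logarithmic choice), not the exact constants from the slides:

```python
import math

R = -0.5  # branch point r (assumed value)

def phi(t):
    """Quadratic-logarithmic penalty: quadratic branch for t >= r,
    logarithmic branch for t < r; constants chosen so that phi is C^2
    at t = r and phi(0) = 0, phi'(0) = 1."""
    if t >= R:
        return t + 0.5 * t * t
    return -0.25 * math.log(-2.0 * t) - 0.375

def dphi(t):
    """First derivative phi'."""
    return 1.0 + t if t >= R else -0.25 / t

def ddphi(t):
    """Second derivative phi'' (equal to 1 on the penalty branch)."""
    return 1.0 if t >= R else 0.25 / (t * t)
```

Here dphi(0) = 1 and dphi(t) → 0 as t → −∞, matching (ϕ3) and (ϕ5), and the second derivative is bounded, which is the property exploited by Newton's method.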
PBM algorithm for convex problems

With p_i > 0 for i ∈ {1, ..., m}, we have

  g_i(x) ≤ 0   ⟺   p_i ϕ(g_i(x)/p_i) ≤ 0,   i = 1, ..., m

The corresponding augmented Lagrangian:

  F(x, u, p) := f(x) + Σ_{i=1}^m u_i p_i ϕ(g_i(x)/p_i)

PBM algorithm:

  x^{k+1}   = argmin_{x ∈ R^n} F(x, u^k, p^k)
  u_i^{k+1} = u_i^k ϕ′(g_i(x^{k+1})/p_i^k),   i = 1, ..., m
  p_i^{k+1} = π p_i^k,   i = 1, ..., m
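The three update steps can be traced on a one-dimensional toy problem. The concrete ϕ, the inner Newton solver, and the penalty factor 0.7 below are illustrative assumptions, not PENNON's actual settings:

```python
# Toy convex problem: min x^2  s.t.  g(x) = 1 - x <= 0  (solution x* = 1, u* = 2).
# phi is a quadratic-logarithmic penalty with phi(0) = 0, phi'(0) = 1.

def dphi(t):   # phi'
    return 1.0 + t if t >= -0.5 else -0.25 / t

def ddphi(t):  # phi''
    return 1.0 if t >= -0.5 else 0.25 / (t * t)

def g(x):
    return 1.0 - x

x, u, p = 0.0, 1.0, 1.0
for outer in range(25):
    # Step 1: minimize F(x) = x^2 + u*p*phi(g(x)/p) by Newton's method.
    for inner in range(50):
        grad = 2.0 * x - u * dphi(g(x) / p)        # chain rule, g'(x) = -1
        if abs(grad) < 1e-10:
            break
        hess = 2.0 + (u / p) * ddphi(g(x) / p)
        x -= grad / hess
    # Step 2: multiplier update  u <- u * phi'(g(x)/p).
    u *= dphi(g(x) / p)
    # Step 3: penalty update  p <- pi * p  (pi = 0.7 assumed here).
    p *= 0.7
```

The iterates approach the KKT pair x* = 1, u* = 2 of the toy problem, with u converging even while p stays bounded away from 0.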
Properties of the PBM method

Theory:
- {u^k}_k generated by PBM is the same as for a proximal point algorithm applied to the dual problem (⟹ convergence proof)
- any cluster point of {x^k}_k is an optimal solution to (CP)
- f(x^k) → f*, without requiring p^k → 0

Praxis:
- fast convergence thanks to the barrier branch of ϕ
- particularly suitable for large sparse problems
- robust, typically few outer iterations and Newton steps
PBM in semidefinite programming

Problem:

  min_{x ∈ R^n} { b^T x : A(x) ≼ 0 }

Question: how can the matrix constraint A(x) ≼ 0 (with A : R^n → S_d convex) be treated by the PBM approach?

Idea: find an augmented Lagrangian of the form

  F(x, U, p) = f(x) + ⟨U, Φ_p(A(x))⟩_{S_d}

Notation:
  ⟨A, B⟩_{S_d} := tr(A^T B) ... inner product on S_d
  S_d^+ = { A ∈ S_d : A positive semidefinite }
  U ∈ S_d^+ ... matrix multiplier (dual variable)
  Φ_p ... penalty function on S_d
Construction of the penalty function Φ_p, first idea

Given:
- a scalar-valued penalty function ϕ satisfying (ϕ0)-(ϕ5)
- a matrix A = S^T Λ S, where Λ = diag(λ_1, λ_2, ..., λ_d)

Define

  Φ_p : A ↦ S^T diag( pϕ(λ_1/p), pϕ(λ_2/p), ..., pϕ(λ_d/p) ) S

Every positive eigenvalue of A is penalized by ϕ.
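A minimal sketch of this construction for 2×2 symmetric matrices, using a closed-form eigendecomposition; the scalar ϕ below is an assumed quadratic-logarithmic instance (any ϕ with properties (ϕ0)-(ϕ5) could be plugged in):

```python
import math

def phi(t):
    # assumed scalar penalty satisfying (phi0)-(phi5)
    if t >= -0.5:
        return t + 0.5 * t * t
    return -0.25 * math.log(-2.0 * t) - 0.375

def eig2(a, b, c):
    """Eigendecomposition of the symmetric 2x2 matrix [[a, b], [b, c]]:
    returns (lam1, lam2, theta), theta being the eigenvector rotation angle."""
    theta = 0.5 * math.atan2(2.0 * b, a - c)
    mean = 0.5 * (a + c)
    rad = math.hypot(0.5 * (a - c), b)
    return mean + rad, mean - rad, theta

def Phi_p(a, b, c, p):
    """Matrix penalty: apply t -> p*phi(t/p) to each eigenvalue of
    [[a, b], [b, c]] and transform back with the eigenvectors."""
    l1, l2, th = eig2(a, b, c)
    f1, f2 = p * phi(l1 / p), p * phi(l2 / p)
    ct, st = math.cos(th), math.sin(th)
    # S diag(f1, f2) S^T with S = [[ct, -st], [st, ct]]
    return (f1 * ct * ct + f2 * st * st,
            (f1 - f2) * ct * st,
            f1 * st * st + f2 * ct * ct)
```

For a diagonal argument this reduces to applying pϕ(λ/p) entrywise, and since pϕ(λ/p) → λ as p → ∞, the penalty Φ_p(A) approaches A for large p.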
PBM algorithm for semidefinite problems

We have

  A(x) ≼ 0   ⟺   Φ_p(A(x)) ≼ 0

and the corresponding augmented Lagrangian

  F(x, U, p) := f(x) + ⟨U, Φ_p(A(x))⟩_{S_d}

PBM algorithm:
  (i)   x^{k+1} = argmin_{x ∈ R^n} F(x, U^k, p^k)
  (ii)  U^{k+1} = D_A Φ_p(A(x^{k+1}); U^k)
  (iii) p^{k+1} < p^k
PBM algorithm for semidefinite problems

The first idea may not be the best one. The matrix function Φ_p corresponding to ϕ is convex but may be nonmonotone; on (r, ∞) (the right branch) it may be nonconvex, so ⟨U, Φ_p(A(x))⟩_{S_d} may be nonconvex.

Complexity of Hessian assembling: O(d⁴ + d³n + d²n²). Even for a very sparse structure the complexity can be O(d⁴)!

  (n ... number of variables, d ... size of the matrix constraint)
PBM algorithm for semidefinite problems

Hessian:

  [∇²_{xx} ⟨U, Φ_p(A(x))⟩_{S_d}]_{ij} =
      Σ_{k=1}^d s_k(x)^T A_i S(x) ( [Δ²ϕ(λ_l(x), λ_m(x), λ_k(x))]_{l,m=1}^d ∘ [S(x)^T U S(x)] ) S(x)^T A_j s_k(x)

where
  S ... decomposition (eigenvector) matrix of A(x)
  s_k ... k-th column of S
  Δ^i ... divided difference of i-th order
  A* : S_d → R^n ... conjugate (adjoint) operator to A
Construction of the penalty function, second idea

Find a penalty function ϕ which allows direct computation of Φ, its gradient, and its Hessian.

Example (with A(x) = Σ_i x_i A_i):

  ϕ(t) = t²   ⟹   Φ(A) = A²

Then

  ∂/∂x_i Φ(A(x)) = A(x) A_i + A_i A(x)

and

  ∂²/∂x_i ∂x_j Φ(A(x)) = A_j A_i + A_i A_j
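The gradient formula can be checked numerically against a forward-difference quotient; the 2×2 matrices and the point x below are arbitrary illustrative data:

```python
# Check d/dx1 Phi(A(x)) = A(x)*A1 + A1*A(x) for Phi(A) = A^2, A(x) = x1*A1 + x2*A2.

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[1.0, 2.0], [2.0, 0.0]]   # illustrative symmetric data matrices
A2 = [[0.0, 1.0], [1.0, 3.0]]

def A(x):
    return [[x[0] * A1[i][j] + x[1] * A2[i][j] for j in range(2)] for i in range(2)]

x = [0.3, -0.7]
Ax = A(x)
exact = [[mmul(Ax, A1)[i][j] + mmul(A1, Ax)[i][j] for j in range(2)]
         for i in range(2)]

h = 1e-6
Axh = A([x[0] + h, x[1]])
fd = [[(mmul(Axh, Axh)[i][j] - mmul(Ax, Ax)[i][j]) / h for j in range(2)]
      for i in range(2)]
err = max(abs(exact[i][j] - fd[i][j]) for i in range(2) for j in range(2))
```

The second derivative A_j A_i + A_i A_j is constant in x, as expected for a quadratic Φ.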
Construction of the penalty function

The reciprocal barrier function in SDP (with A(x) = Σ_i x_i A_i):

  ϕ(t) = 1/(1 − t) − 1

The corresponding matrix function is

  Φ(A) = (I − A)⁻¹ − I

and we can show that (writing A for A(x))

  ∂/∂x_i Φ(A(x)) = (I − A)⁻¹ A_i (I − A)⁻¹

and

  ∂²/∂x_i ∂x_j Φ(A(x)) = (I − A)⁻¹ A_i (I − A)⁻¹ A_j (I − A)⁻¹ + (I − A)⁻¹ A_j (I − A)⁻¹ A_i (I − A)⁻¹
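A finite-difference check of the gradient formula, assuming the convention Φ(A) = (I − A)⁻¹ − I for the matrix counterpart of the reciprocal barrier (the constant −I drops out of the derivative, so it suffices to differentiate the resolvent); the 2×2 data are illustrative:

```python
# Verify d/dx1 (I - A(x))^{-1} = (I - A)^{-1} A1 (I - A)^{-1} numerically.

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A1 = [[0.2, 0.1], [0.1, -0.3]]   # illustrative symmetric data matrices
A2 = [[0.0, 0.2], [0.2, 0.1]]

def resolvent(x):
    """(I - A(x))^{-1} with A(x) = x1*A1 + x2*A2."""
    Ax = [[x[0] * A1[i][j] + x[1] * A2[i][j] for j in range(2)] for i in range(2)]
    ImA = [[1.0 - Ax[0][0], -Ax[0][1]],
           [-Ax[1][0], 1.0 - Ax[1][1]]]
    return inv2(ImA)

x = [0.4, -0.5]
R = resolvent(x)
exact = mmul(mmul(R, A1), R)      # closed-form derivative w.r.t. x1

h = 1e-6
Rh = resolvent([x[0] + h, x[1]])
fd = [[(Rh[i][j] - R[i][j]) / h for j in range(2)] for i in range(2)]
err = max(abs(exact[i][j] - fd[i][j]) for i in range(2) for j in range(2))
```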
Construction of the penalty function

Complexity of Hessian assembling:
- O(d³n + d²n²) for dense matrices
- O(n²K²) for sparse matrices (K ... maximum number of nonzeros in A_i, i = 1, ..., n)

Compare to O(d⁴ + d³n + d²n²) in the general case.

  min_{x ∈ R^n} { b^T x : A(x) ≼ 0 },   A : R^n → S_d
Handling sparsity ... essential for code efficiency

  min_{x ∈ R^n} { b^T x : A(x) ≼ 0 },   A(x) = Σ_i x_i A_i

Three basic sparsity types:
- many (small) blocks, sparse Hessian (multi-load truss/material)
- few (large) blocks: A dense, A_i sparse (most of the SDPLIB examples)

Key term in the Hessian assembly:

  (I − A)⁻¹ A_i (I − A)⁻¹ A_j (I − A)⁻¹
Handling sparsity

[figure: block structure of A(x) = x_1 A_1 + x_2 A_2 + x_3 A_3 + ...]

The full version is as inefficient as the general sparse version. Recently, three matrix-matrix multiplication routines:
- full × full
- full × sparse
- sparse × sparse
Handling sparsity ... essential for code efficiency

  min_{x ∈ R^n} { b^T x : A(x) ≼ 0 },   A(x) = Σ_i x_i A_i

Three basic sparsity types:
- many (small) blocks, sparse Hessian (multi-load truss/material)
- few (large) blocks: A dense, A_i sparse (most of the SDPLIB examples)
- A sparse (truss design with buckling/vibration, maxG, ...)

Key term in the Hessian assembly:

  (I − A)⁻¹ A_i (I − A)⁻¹ A_j (I − A)⁻¹
Handling sparsity

Fast computation with inverses of sparse matrices: Z = M⁻¹N.

- Explicit inverse of M: O(n³)
- Assume M is sparse and the Cholesky factor L of M is sparse. Then compute column by column:

    Z_i = (L⁻¹)^T L⁻¹ N_i,   i = 1, ..., n

- Complexity: n solves of cost O(nK) each ⟹ O(n²K)
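The column-by-column scheme amounts to one Cholesky factorization followed by a forward and a backward triangular solve per column of N. A dense pure-Python sketch (in PENNON the payoff comes from L being sparse, which this small illustration does not exploit):

```python
import math

def cholesky(M):
    """Lower-triangular L with L L^T = M (M symmetric positive definite)."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def solve_cholesky(L, b):
    """Solve (L L^T) z = b: forward solve L y = b, then backward solve L^T z = y."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    z = [0.0] * n
    for i in reversed(range(n)):
        z[i] = (y[i] - sum(L[k][i] * z[k] for k in range(i + 1, n))) / L[i][i]
    return z

# Illustrative data: Z = M^{-1} N computed one column of N at a time.
M = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
N = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
L = cholesky(M)
cols = [solve_cholesky(L, [N[r][c] for r in range(3)]) for c in range(2)]
Z = [[cols[c][r] for c in range(2)] for r in range(3)]
```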
Numerical results

New code called PENNON. Comparison with:
- DSDP by Benson and Ye
- SDPT3 by Toh, Todd and Tütüncü
- SeDuMi by Jos Sturm

SDPLIB problems: sdplib/
Numerical results

[table: CPU-time comparison of DSDP, SDPT3 and PENNON on SDPLIB problems (arch, control, gpp, mcp, qap, ss, theta, equalG, maxG families); numerical entries lost in transcription]
Examples from Mechanics

Multiple-load Free Material Optimization. After reformulation, discretization, and further reformulation:

  min_{α ∈ R, x ∈ (R^n)^L}  { α − Σ_{l=1}^L (c^l)^T x^l :  A_i(α, x) ≼ 0,  i = 1, ..., m }

Many (≈ 5000) small (11-19) matrices. Large dimension (nL).

In a standard form:

  min_{x ∈ (R^n)^L} { a^T x : Σ_{i=1}^{nL} x_i B_i ≼ 0 }

[figure: block structure x_1 B_1 + x_2 B_2 + ...]
Examples from Mechanics

[table: number of variables, matrix sizes, and CPU-time comparison of DSDP, SDPT3, SeDuMi and PENNON on the mater problems; numerical entries lost in transcription]
Truss design with free vibration control

The lowest eigenfrequency of the optimal structure should be bigger than a prescribed value.

  min_{t,u}  Σ_i t_i
  subject to  A(t)u = f,   g(u) ≤ c,   minimal eigenfrequency ≥ a given value
Truss design with free vibration control

Formulated as an SDP problem:

  min_t  Σ_i t_i
  subject to
    A(t) − λ̄ M(t) ⪰ 0        (λ̄ ... the prescribed eigenvalue bound)
    [ c    f^T ]
    [ f   A(t) ] ⪰ 0
    t_i ≥ 0,   i = 1, ..., n

where

  A(t) = Σ_i t_i A_i,   A_i = (E_i / l_i²) γ_i γ_i^T
  M(t) = Σ_i t_i M_i,   M_i = c · diag(γ_i γ_i^T)
Truss test problems

trto: problems from single-load truss topology design. Normally formulated as an LP; here reformulated as an SDP for testing purposes.

vibra: single-load truss topology problems with a vibration constraint. The constraint guarantees that the minimal self-vibration frequency of the optimal structure is bigger than a given value.

buck: single-load truss topology problems with a linearized global buckling constraint. Originally a nonlinear matrix inequality, the constraint should guarantee that the optimal structure is mechanically stable (does not buckle).

All problems are characterized by sparsity of the matrix operator A.
Truss test problems

[table: n, m and CPU-time comparison of DSDP, SDPT3 and PENNON on the trto, buck and vibra instances; numerical entries lost in transcription]
Benchmark tests by Hans Mittelmann. Implemented on the NEOS server.
Homepage: kocvara/pennon/
Available with a MATLAB interface through TOMLAB.
When does PCG help (SDP)?

Linear SDP, dense Hessian.

  A(x) = Σ_{i=1}^n x_i A_i,   A_i ∈ R^{m_A × m_A}

Complexity of Hessian evaluation:
- O(m_A³ n + m_A² n²) for dense matrices
- O(m_A² n + K² n²) for sparse matrices (K ... maximum number of nonzeros in A_i, i = 1, ..., n)

Complexity of the Cholesky algorithm for linear SDP: O(n³) (from PCG we expect O(n²)).

Problems with large n and small m_A: CG better than Cholesky (expected).
Iterative algorithms

Conjugate gradient method for Hd = −g, H ∈ S^n:
- one matrix-vector product y = Hx has complexity O(n²)
- exact arithmetic: convergence in n steps ⟹ overall complexity O(n³)
- praxis: may be much worse (ill-conditioned problems)
- praxis: may be much better ⟸ preconditioning
- convergence theory: the number of iterations depends on the condition number and on the distribution of the eigenvalues

Preconditioning: solve M⁻¹Hd = −M⁻¹g with M ≈ H.
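A minimal preconditioned CG sketch for the Newton system Hd = −g, monitoring the relative residual ‖Hd + g‖/‖g‖; the Jacobi (diagonal) preconditioner and the small dense system below are illustrative choices only:

```python
def matvec(H, v):
    return [sum(Hi[j] * v[j] for j in range(len(v))) for Hi in H]

def pcg(H, g, tol=1e-10, maxit=200):
    """Jacobi-preconditioned CG for H d = -g; returns (d, iterations)."""
    n = len(g)
    d = [0.0] * n
    r = [-gi for gi in g]                       # residual -g - H*d with d = 0
    Minv = [1.0 / H[i][i] for i in range(n)]    # M = diag(H)
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    gnorm = sum(t * t for t in g) ** 0.5
    for it in range(maxit):
        Hp = matvec(H, p)
        alpha = rz / sum(p[i] * Hp[i] for i in range(n))
        d = [d[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Hp[i] for i in range(n)]
        if sum(t * t for t in r) ** 0.5 / gnorm < tol:   # ||Hd + g|| / ||g||
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return d, it + 1

H = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
g = [1.0, -2.0, 0.5]
d, iters = pcg(H, g)
```

Each iteration costs one product with H (plus O(n) vector work), which is exactly why only Hessian-vector products, not Hessian elements, are needed.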
Conditioning of the Hessian

Solve Hd = −g, where H is the Hessian of

  F(x, u, U, p, P) = f(x) + ⟨U, Φ_P(A(x))⟩_{S_{m_A}}

The condition number depends on P.

Example: problem Theta2 from SDPLIB (n = 498): κ_0 = 394; κ_opt: [value lost in transcription]
Theta2 from SDPLIB (n = 498)

[figure: behaviour of CG, testing ‖Hd + g‖/‖g‖]
[figure: behaviour of QMR, testing ‖Hd + g‖/‖g‖]
[figure: QMR, effect of preconditioning (for small P)]
Control3 from SDPLIB (n = 136)

κ_0, κ_opt: [values lost in transcription]

[figure: behaviour of CG, testing ‖Hd + g‖/‖g‖]
[figure: behaviour of QMR, testing ‖Hd + g‖/‖g‖]
Preconditioners

Should be:
- efficient (obvious, but often difficult to achieve)
- simple (low complexity)
- only use Hessian-vector products (NOT Hessian elements)

Candidates:
- diagonal
- symmetric Gauss-Seidel
- L-BFGS (Morales-Nocedal, SIOPT 2000)
- A-inv (approximate inverse) (Benzi-Cullum-Tuma, SISC 2000)
Preconditioners

Monteiro, O'Neal, Nemirovski (2004): adaptive ellipsoid preconditioner. Improves the CG performance on extremely ill-conditioned systems.

Preconditioner:

  M = C_k C_k^T,   C_{k+1} = α C_k + β C_k p_k p_k^T,   C_1 = γ I

with α, β, p_k obtained from matrix-vector products only.

VERY preliminary results (MATLAB implementation).
Preconditioners, example

Example: problem Theta2 from SDPLIB (n = 498)

[figure: CG iteration counts for CG, CG-diag, CG-SGS, CG-MON]
Hessian-free methods

Use a finite-difference formula for Hessian-vector products:

  ∇²F(x_k) v ≈ ( ∇F(x_k + hv) − ∇F(x_k) ) / h,   h = √ε (1 + ‖x_k‖_2)

- Complexity: one Hessian-vector product = one gradient evaluation
- Need a preconditioner of Hessian-vector-product type
- Limited accuracy (4-5 digits)
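A sketch of the finite-difference Hessian-vector product; the step rule h = √ε(1 + ‖x_k‖_2) is an assumed standard variant, and the quadratic test function and its gradient are illustrative:

```python
def grad_f(x):
    """Gradient of the test function f(x) = 1.5*x0^2 + x1^2 + x0*x1."""
    return [3.0 * x[0] + x[1], 2.0 * x[1] + x[0]]

def hessvec_fd(grad, x, v, eps=2.2e-16):
    """Approximate Hessian-vector product (grad(x + h*v) - grad(x)) / h."""
    nx = sum(t * t for t in x) ** 0.5
    h = (eps ** 0.5) * (1.0 + nx)
    gp = grad([x[i] + h * v[i] for i in range(len(x))])
    g0 = grad(x)
    return [(gp[i] - g0[i]) / h for i in range(len(x))]

x = [0.4, -1.2]
v = [1.0, 2.0]
Hv = hessvec_fd(grad_f, x, v)   # exact Hessian is [[3, 1], [1, 2]], so Hv ~ [5, 5]
```

One such product costs exactly one extra gradient evaluation, matching the complexity remark above; the cancellation in the difference quotient is what limits the attainable accuracy.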
Test results: linear SDP, dense Hessian

Stopping criterion for PENNON:
- exact Hessian: 10⁻⁷ (7-8 digits in the objective function)
- approximate Hessian: 10⁻⁴ (4-5 digits in the objective function)

Stopping criterion for CG/QMR? For Hd = −g, stop when ‖Hd + g‖/‖g‖ ≤ ε.

Experiments: ε = 10⁻² is sufficient ⟹ often a very low (average) number of CG iterations.

Complexity: n³ → kn², k ≈ 4-8.
Practice: the effect is not that strong, due to other complexity issues.
Problems with large n and small m

Library of examples with large n and small m (courtesy of Kim Toh).
- CG-exact much better than Cholesky
- CG-approx much better than CG-exact
[table: n, m, CPU times and CG iteration counts for PENSDP, PENSDP (APCG) and SDPLR on the ham, theta, keller and sanr problems; numerical entries lost in transcription]
2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems
More informationCourse Outline. FRTN10 Multivariable Control, Lecture 13. General idea for Lectures Lecture 13 Outline. Example 1 (Doyle Stein, 1979)
Course Outline FRTN Multivariable Control, Lecture Automatic Control LTH, 6 L-L Specifications, models and loop-shaping by hand L6-L8 Limitations on achievable performance L9-L Controller optimization:
More informationOptimisation in Higher Dimensions
CHAPTER 6 Optimisation in Higher Dimensions Beyond optimisation in 1D, we will study two directions. First, the equivalent in nth dimension, x R n such that f(x ) f(x) for all x R n. Second, constrained
More informationNumerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen
Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen
More informationIPAM Summer School Optimization methods for machine learning. Jorge Nocedal
IPAM Summer School 2012 Tutorial on Optimization methods for machine learning Jorge Nocedal Northwestern University Overview 1. We discuss some characteristics of optimization problems arising in deep
More informationIP-PCG An interior point algorithm for nonlinear constrained optimization
IP-PCG An interior point algorithm for nonlinear constrained optimization Silvia Bonettini (bntslv@unife.it), Valeria Ruggiero (rgv@unife.it) Dipartimento di Matematica, Università di Ferrara December
More informationCopositive Programming and Combinatorial Optimization
Copositive Programming and Combinatorial Optimization Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria joint work with I.M. Bomze (Wien) and F. Jarre (Düsseldorf) IMA
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationLINEAR AND NONLINEAR PROGRAMMING
LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico
More informationOutline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems
Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More informationINTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE
INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More informationA nonlinear programming algorithm for solving semidefinite programs via low-rank factorization
Math. Program., Ser. B 95: 329 357 (2003) Digital Object Identifier (DOI) 10.1007/s10107-002-0352-8 Samuel Burer Renato D.C. Monteiro A nonlinear programming algorithm for solving semidefinite programs
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationLine Search Methods for Unconstrained Optimisation
Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic
More informationSparse Covariance Selection using Semidefinite Programming
Sparse Covariance Selection using Semidefinite Programming A. d Aspremont ORFE, Princeton University Joint work with O. Banerjee, L. El Ghaoui & G. Natsoulis, U.C. Berkeley & Iconix Pharmaceuticals Support
More informationA direct formulation for sparse PCA using semidefinite programming
A direct formulation for sparse PCA using semidefinite programming Alexandre d Aspremont alexandre.daspremont@m4x.org Department of Electrical Engineering and Computer Science Laurent El Ghaoui elghaoui@eecs.berkeley.edu
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug
More informationSUCCESSIVE LINEARIZATION METHODS FOR NONLINEAR SEMIDEFINITE PROGRAMS 1. Preprint 252 August 2003
SUCCESSIVE LINEARIZATION METHODS FOR NONLINEAR SEMIDEFINITE PROGRAMS 1 Christian Kanzow 2, Christian Nagel 2 and Masao Fukushima 3 Preprint 252 August 2003 2 University of Würzburg Institute of Applied
More informationInterior Point Methods for Mathematical Programming
Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained
More informationFrom Stationary Methods to Krylov Subspaces
Week 6: Wednesday, Mar 7 From Stationary Methods to Krylov Subspaces Last time, we discussed stationary methods for the iterative solution of linear systems of equations, which can generally be written
More informationLecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.
Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix
More informationLecture 3. Optimization Problems and Iterative Algorithms
Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex
More information6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC
6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility
More informationInterior-Point Methods as Inexact Newton Methods. Silvia Bonettini Università di Modena e Reggio Emilia Italy
InteriorPoint Methods as Inexact Newton Methods Silvia Bonettini Università di Modena e Reggio Emilia Italy Valeria Ruggiero Università di Ferrara Emanuele Galligani Università di Modena e Reggio Emilia
More informationALADIN An Algorithm for Distributed Non-Convex Optimization and Control
ALADIN An Algorithm for Distributed Non-Convex Optimization and Control Boris Houska, Yuning Jiang, Janick Frasch, Rien Quirynen, Dimitris Kouzoupis, Moritz Diehl ShanghaiTech University, University of
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationResearch Reports on Mathematical and Computing Sciences
ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity Hayato Waki,
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationMiscellaneous Nonlinear Programming Exercises
Miscellaneous Nonlinear Programming Exercises Henry Wolkowicz 2 08 21 University of Waterloo Department of Combinatorics & Optimization Waterloo, Ontario N2L 3G1, Canada Contents 1 Numerical Analysis Background
More informationThe moment-lp and moment-sos approaches
The moment-lp and moment-sos approaches LAAS-CNRS and Institute of Mathematics, Toulouse, France CIRM, November 2013 Semidefinite Programming Why polynomial optimization? LP- and SDP- CERTIFICATES of POSITIVITY
More informationCONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationFast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma
Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on Multi-Agent
More informationImage restoration: numerical optimisation
Image restoration: numerical optimisation Short and partial presentation Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS BINP / 6 Context
More informationComputational Science and Engineering (Int. Master s Program) Preconditioning for Hessian-Free Optimization
Computational Science and Engineering (Int. Master s Program) Technische Universität München Master s thesis in Computational Science and Engineering Preconditioning for Hessian-Free Optimization Robert
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationLecture V. Numerical Optimization
Lecture V Numerical Optimization Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Numerical Optimization p. 1 /19 Isomorphism I We describe minimization problems: to maximize
More informationConjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)
Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps
More informationAn inexact block-decomposition method for extra large-scale conic semidefinite programming
Math. Prog. Comp. manuscript No. (will be inserted by the editor) An inexact block-decomposition method for extra large-scale conic semidefinite programming Renato D. C. Monteiro Camilo Ortiz Benar F.
More informationAn Inexact Newton Method for Nonlinear Constrained Optimization
An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results
More informationA CONIC INTERIOR POINT DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING
A CONIC INTERIOR POINT DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Sivaramakrishnan Department of Mathematics NC State University kksivara@ncsu.edu http://www4.ncsu.edu/
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationAn inexact block-decomposition method for extra large-scale conic semidefinite programming
An inexact block-decomposition method for extra large-scale conic semidefinite programming Renato D.C Monteiro Camilo Ortiz Benar F. Svaiter December 9, 2013 Abstract In this paper, we present an inexact
More information