Solving the normal equations system arising from interior point methods for linear programming by iterative methods
1 Solving the normal equations system arising from interior point methods for linear programming by iterative methods
Aurelio Oliveira - aurelio@ime.unicamp.br
IMECC - UNICAMP, April
2 Partners
Alessandro Coelho, Carla Ghidini, Frederico Campos, Daniele Silva, Danny Sorensen, Jair Silva, Jordi Castro, Lino Silva, Luciana Casacio, Marilene Silva, Marta Velazco, Porfirio Suñagua, Silvana Bocanegra
3 Summary
- Introduction
- Interior Point Methods
- Iterative Methods
- Preconditioning
- Numerical Experiments
- Conclusions
- Future Work
4 Introduction
- Sophisticated codes for interior point methods: few iterations, but expensive iterations
- A linear system must be solved at each iteration
- Choice between the augmented system and the normal equations system
- Choice between direct methods and iterative methods
5 Introduction
- Improve iteration time performance
- Reduce the number of iterations
- Specialized methods for a given class of problems
6 Main Goal
Improve iteration time performance using iterative methods on the normal equations system:
- Conjugate gradient method
- Preconditioners
7 Interior Point Methods
Standard LP form:
$$\min\ c^T x \quad \text{s.t.}\quad Ax = b,\ x \ge 0,$$
where $A$ is $m \times n$ with $\operatorname{rank}(A) = m$, and $c$, $b$, $x$ are column vectors.
8 Interior Point Methods
Dual problem:
$$\max\ b^T y \quad \text{s.t.}\quad A^T y + z = c,\ z \ge 0,$$
with free dual variable $y$ and dual slack variable $z$.
9 Interior Point Methods
Optimality conditions:
$$Ax - b = 0,\qquad A^T y + z - c = 0,\qquad XZe = 0,\qquad (x, z) \ge 0,$$
where $X = \operatorname{diag}(x)$, $Z = \operatorname{diag}(z)$ and $e$ is the vector of all ones. Primal-dual methods apply a Newton-like method to these conditions.
10 Interior Point Methods
Search directions:
$$\begin{bmatrix} 0 & A^T & I \\ Z & 0 & X \\ A & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} = \begin{bmatrix} r_d \\ r_a \\ r_p \end{bmatrix},$$
where $r_d = c - A^T y - z$, $r_a = -XZe$ and $r_p = b - Ax$.
11 Primal-Dual Interior Point Methods
Given $y^0$ and $(x^0, z^0) > 0$, for $k = 0, 1, 2, \ldots$ do:
(1) Choose $\sigma_k \in [0, 1)$ and set $\mu_k = \sigma_k \dfrac{(x^k)^T z^k}{n}$.
(2) Solve the given linear system.
(3) Choose a step length $\alpha_k = \min(1, \tau_k \rho_p^k, \tau_k \rho_d^k)$ for $\tau_k \in (0, 1)$, where
$$\rho_p^k = \frac{-1}{\min_i \left( \Delta x_i^k / x_i^k \right)} \quad \text{and} \quad \rho_d^k = \frac{-1}{\min_i \left( \Delta z_i^k / z_i^k \right)}.$$
12 Primal-Dual Interior Point Methods
(4) Form the new iterate
$$(x^{k+1}, y^{k+1}, z^{k+1}) = (x^k, y^k, z^k) + \alpha_k (\Delta x^k, \Delta y^k, \Delta z^k).$$
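Step (3) above can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' code; the function name, the toy vectors and the damping choice tau = 0.9995 are assumptions.

```python
import numpy as np

# Illustrative sketch of step (3): the primal step length
# alpha = min(1, tau * rho_p) with rho_p = -1 / min_i(dx_i / x_i).
# The same routine applies to (z, dz) for the dual step length.
def step_length(x, dx, tau=0.9995):
    """Largest alpha in (0, 1], damped by tau, keeping x + alpha*dx > 0."""
    m = (dx / x).min()
    if m >= 0:                 # no component decreases: full step is safe
        return 1.0
    rho = -1.0 / m             # relative distance to the nearest bound
    return min(1.0, tau * rho)

x = np.array([1.0, 2.0, 0.5])      # current iterate, x > 0
dx = np.array([-0.5, 1.0, -1.0])   # Newton direction
alpha = step_length(x, dx)         # keeps x + alpha*dx strictly positive
```

The damping factor tau keeps the new iterate strictly in the interior rather than on the boundary.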
13 Interior Point Methods
Predictor-corrector variant:
- Affine direction: $r_a^k = -X^k Z^k e$
- Final direction: $r_m^k = \mu_k e - X^k Z^k e - \Delta X^k \Delta Z^k e$
14 Interior Point Methods
Eliminating $\Delta z$ gives the augmented system
$$\begin{bmatrix} -D & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} r_1 \\ r_p \end{bmatrix}, \qquad D = X^{-1} Z.$$
Eliminating $\Delta x$ gives the normal equations system
$$A D^{-1} A^T \Delta y = r.$$
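The elimination above can be checked on a small dense example. This is an illustrative sketch with made-up data, forming $AD^{-1}A^T$ with $D = X^{-1}Z$ and solving for $\Delta y$:

```python
import numpy as np

# Toy illustration of the normal equations A D^{-1} A^T dy = r,
# with D = X^{-1} Z stored as a vector of diagonal entries.
rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)       # primal iterate, x > 0
z = rng.uniform(0.5, 2.0, n)       # dual slack, z > 0
d = z / x                          # diagonal of D = X^{-1} Z
S = (A / d) @ A.T                  # A D^{-1} A^T (scale column j by 1/d_j)
r = rng.standard_normal(m)
dy = np.linalg.solve(S, r)         # S is symmetric positive definite
```

In real codes $S$ is sparse and is either factored by Cholesky or solved iteratively, which is the subject of the following slides.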
15 Linear Systems
Augmented system:
- Symmetric indefinite matrix
- Sparse
- Very ill-conditioned
Normal equations system:
- Symmetric positive definite matrix
- Usually sparse
- Even more ill-conditioned
16 Linear Systems
Augmented system:
- $LTL^T$ factorization
- Bunch-Parlett factorization
- Cholesky-type factorization
- MINRES, SYMMLQ, QMR
- Conjugate gradient method
Normal equations system:
- Cholesky factorization
- Conjugate gradient method
COELHO, OLIVEIRA AND VELAZCO, TEMA, accepted.
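For the normal equations branch, a minimal preconditioned conjugate gradient loop looks as follows. This is a generic textbook sketch with a Jacobi (diagonal) preconditioner and toy data, not the implementation cited on the slide:

```python
import numpy as np

# Generic preconditioned conjugate gradient (PCG) sketch with a diagonal
# (Jacobi) preconditioner, applied to a small matrix of normal-equations
# type: S = A D^{-1} A^T, which is symmetric positive definite.
def pcg(S, b, minv, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b - S @ x
    z = minv * r                    # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Sp = S @ p
        alpha = rz / (p @ Sp)
        x += alpha * p
        r -= alpha * Sp
        if np.linalg.norm(r) < tol:
            break
        z = minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 7))
d = rng.uniform(0.5, 2.0, 7)
S = (A / d) @ A.T                   # A D^{-1} A^T
b = rng.standard_normal(4)
sol = pcg(S, b, 1.0 / np.diag(S))   # Jacobi preconditioner: diag(S)^{-1}
```

Note that PCG only needs matrix-vector products with $S$, so $S = AD^{-1}A^T$ never has to be formed explicitly, a point the later slides exploit.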
17 Preconditioning
Outline: Hybrid Preconditioner, Controlled Cholesky Factorization, Splitting Preconditioner, Hybrid Approach, New Approach.
Preconditioning: given $Bx = b$, solve $(MBN^T)\tilde{x} = \tilde{b}$, where $x = N^T \tilde{x}$ and $\tilde{b} = Mb$.
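The definition on this slide can be verified on a toy system; in the sketch below, $M$ and $N$ are simple diagonal matrices chosen only for illustration:

```python
import numpy as np

# Check of the two-sided preconditioning definition: solving
# (M B N^T) xt = M b and setting x = N^T xt recovers B x = b.
rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)) + n * np.eye(n)   # a nonsingular system matrix
b = rng.standard_normal(n)
M = np.diag(rng.uniform(0.5, 2.0, n))             # left preconditioner
N = np.diag(rng.uniform(0.5, 2.0, n))             # right preconditioner
xt = np.linalg.solve(M @ B @ N.T, M @ b)          # preconditioned solve
x = N.T @ xt                                      # recover the solution
```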
18 Preconditioning
Lemma. Given a preconditioner for the Schur complement, $G(AD^{-1}A^T + E)H^T = GSH^T$, there exists an equivalent preconditioner for the augmented system:
$$\begin{bmatrix} D^{-1/2} & 0 \\ GAD^{-1} & G \end{bmatrix} \begin{bmatrix} -D & A^T \\ A & E \end{bmatrix} \begin{bmatrix} -D^{-1/2} & D^{-1}A^T H^T \\ 0 & H^T \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & GSH^T \end{bmatrix}.$$
19 Preconditioning
Lemma. Consider the augmented system
$$\begin{bmatrix} -D & A^T \\ A & E \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} r_y - X^{-1}(r_z + V r_t) \\ r_x + Y r_w \end{bmatrix}$$
and its Schur complement $S = AD^{-1}A^T + E$, where $D$ is nonsingular and $A$ has full row rank. Then any symmetric block triangular preconditioner $\begin{bmatrix} H & 0 \\ F & G \end{bmatrix}$ leads to a preconditioned system independent of both $H$ and $F$.
20 Preconditioning
Proof. The modified augmented system is
$$\begin{bmatrix} -HDH^T & B^T \\ B & C \end{bmatrix} \begin{bmatrix} \widetilde{\Delta x} \\ \widetilde{\Delta y} \end{bmatrix} = \begin{bmatrix} H(r_d - X^{-1} r_c) \\ F(r_d - X^{-1} r_c) + G r_p \end{bmatrix},$$
where $B = -FDH^T + GAH^T$ and $C = -FDF^T + FA^T G^T + GAF^T + GEG^T$. The modified Schur complement is
$$GSG^T \widetilde{\Delta y} = G(r_p + AD^{-1} r_d - AZ^{-1} r_c).$$
OLIVEIRA AND SORENSEN, Linear Algebra and Its Applications, 394 (2005).
21 Hybrid Preconditioner
- Matrix $A$ has full row rank
- Preconditioner based upon a basic solution
- First iterations: generic preconditioner
- Final iterations: specially tailored preconditioner
- Middle iterations
BOCANEGRA, CAMPOS, AND OLIVEIRA, Computational Optimization and Applications, 36 (2007).
BOCANEGRA, CASTRO, AND OLIVEIRA, European Journal of Operational Research, 231 (2013).
22 Controlled Cholesky Factorization
A kind of incomplete Cholesky factorization:
$$S = ADA^T = LL^T = \tilde{L}\tilde{L}^T + R,$$
where $L$ is the complete factor, $\tilde{L}$ the incomplete factor and $R$ the remainder matrix. Defining $E = L - \tilde{L}$, so that $L = \tilde{L} + E$,
$$\tilde{L}^{-1}(ADA^T)\tilde{L}^{-T} = (\tilde{L}^{-1}L)(\tilde{L}^{-1}L)^T = (I + \tilde{L}^{-1}E)(I + \tilde{L}^{-1}E)^T.$$
As $\tilde{L} \to L$, $E \to 0$ and $\tilde{L}^{-1}(ADA^T)\tilde{L}^{-T} \to I$.
23 Controlled Cholesky Factorization
Minimize
$$\|E\|_F^2 = \sum_{j=1}^{m} c_j, \qquad c_j = \sum_{i=1}^{m} |l_{ij} - \tilde{l}_{ij}|^2,$$
split as
$$c_j = \sum_{k=1}^{S_j + \eta} |l_{i_k j} - \tilde{l}_{i_k j}|^2 + \sum_{k = S_j + \eta + 1}^{m} |l_{i_k j}|^2,$$
where $\eta$ is the number of extra entries allowed per column. $\|E\|_F$ is minimized by increasing $\eta$ (each $c_j$ decreases) and by choosing the largest entries of column $j$.
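A much simplified sketch of the idea of keeping only the largest entries per column is given below. It is illustrative only: the real CCF drops entries during the factorization and never forms the complete factor $L$.

```python
import numpy as np

# Keep, in each column j of the Cholesky factor L, the diagonal plus the
# eta largest off-diagonal entries; ||E||_F = ||L - Ltil||_F shrinks as
# eta grows, illustrating why each c_j decreases with eta.
def truncated_factor(S, eta):
    L = np.linalg.cholesky(S)
    Ltil = np.zeros_like(L)
    for j in range(L.shape[0]):
        col = L[j + 1:, j]
        keep = np.argsort(np.abs(col))[::-1][:eta]   # eta largest entries
        Ltil[j, j] = L[j, j]
        Ltil[j + 1 + keep, j] = col[keep]
    return L, Ltil

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 8))
S = A @ A.T + 5 * np.eye(5)          # symmetric positive definite
L, L0 = truncated_factor(S, 0)       # eta = 0: diagonal factor only
_, L4 = truncated_factor(S, 4)       # eta large enough: complete factor
```

With eta = 0 the sketch degenerates to a diagonal factor, and with eta large enough it recovers the complete factor, mirroring the "versatile" range claimed on the next slide.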
24 Controlled Cholesky Factorization
- Generic: developed for symmetric positive definite systems (conjugate gradient method)
- No need to compute $S = ADA^T$
- Allows drop-out ($\eta < 0$)
- Versatile: from the diagonal preconditioner up to the complete factorization
- Predictable storage
- Loss of positive definiteness: exponential shift
F. CAMPOS, Analysis of Conjugate Gradients-type methods for solving linear equations, PhD Thesis.
CAMPOS AND BIRKETT, SIAM J. Sci. Comput., 19 (1998).
25 Splitting Preconditioner
$A = [B\ N]P$, where $P$ is a permutation matrix and $B$ is nonsingular:
$$ADA^T = BD_B B^T + ND_N N^T,$$
$$D_B^{-1/2} B^{-1} (ADA^T) B^{-T} D_B^{-1/2} = I_m + D_B^{-1/2} B^{-1} N D_N N^T B^{-T} D_B^{-1/2}.$$
Finding a permutation $P$: rules for reordering the columns of $A$:
- diagonal value of $D$
- $ADe_j$
VELAZCO, OLIVEIRA AND CAMPOS, Pesquisa Operacional, 34 (2011).
Compute an LU factorization to determine the columns of $B$: careful implementation.
GHIDINI, OLIVEIRA AND SORENSEN, Annals of Management Science, 3 (2014).
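The identity above can be confirmed numerically on a toy instance. In this sketch $P = I$ and the first $m$ columns of $A$ are taken as $B$, purely for illustration:

```python
import numpy as np

# Numerical check of:
# D_B^{-1/2} B^{-1} (A D A^T) B^{-T} D_B^{-1/2}
#   = I_m + D_B^{-1/2} B^{-1} N D_N N^T B^{-T} D_B^{-1/2}
rng = np.random.default_rng(3)
m, n = 3, 6
A = rng.standard_normal((m, n))
d = rng.uniform(0.5, 2.0, n)                 # diagonal of D
B, N = A[:, :m], A[:, m:]                    # assume these columns are independent
dB, dN = d[:m], d[m:]
W = np.diag(dB ** -0.5) @ np.linalg.inv(B)   # W = D_B^{-1/2} B^{-1}
lhs = W @ (A @ np.diag(d) @ A.T) @ W.T
rhs = np.eye(m) + W @ (N @ np.diag(dN) @ N.T) @ W.T
```

The preconditioned matrix is the identity plus a term that vanishes as the entries of $D_N$ go to zero, which is exactly the situation near an optimal solution.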
26 Splitting Preconditioner
- Normal equations system, conjugate gradient method
- No need to compute $S = ADA^T$
- Works fine near a solution: designed for the last IP iterations
- Not efficient far from a solution (first IP iterations)
Hybrid approach:
- Diagonal preconditioner, then controlled Cholesky preconditioner
- Change of phases
VELAZCO, OLIVEIRA AND CAMPOS, Optimization Methods and Software, 25 (2010).
27 Hybrid Approach
- No need to compute $ADA^T$
- Early iterations: diagonal preconditioner, then controlled Cholesky decomposition
- Transition between the phases
- Final iterations: splitting preconditioner
- Preconditioned conjugate gradient method on the normal equations system: fast
- MINRES on the indefinite system: robust
GHIDINI, OLIVEIRA AND SILVA, TEMA, 15 (2014).
28 New Approach to Compute B
- Regularization (penalization): $ADA^T + \delta I$
- Compute an LU factorization of $D^{1/2}A^T$:
  - reorder columns for sparsity
  - perform row permutations for stability (threshold parameter $\sigma$)
- UMFPACK
- More stable than the rectangular LU, but may need much more storage
P. SUÑAGUA, Uma Nova Abordagem para Encontrar uma Base do Precondicionador Separador para Sistemas Lineares no Método de Pontos Interiores, PhD Thesis, 2014.
29 for Linear programming
Consider $A \in \mathbb{R}^{m \times n}$ with $\|A_j\| = 1$:
$$Ax = 0, \qquad e^T x = 1, \qquad x \ge 0.$$
Obs.: any LP can be reduced to this form.
30 for Linear programming (figure slide; image not transcribed)
31 Von Neumann Algorithm
Simple, but with slow convergence. Find the column $A_s$ with the largest angle with the residual $b^{k-1}$. The next residual $b^k$ lies on the line connecting $b^{k-1}$ to $A_s$.
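A compact sketch of the iteration just described follows; it is an illustrative implementation under the previous slide's assumption that every column has unit norm, and the toy matrix is made up:

```python
import numpy as np

# Von Neumann-type iteration: find x >= 0 with e^T x = 1 and A x ~ 0.
def von_neumann(A, iters=2000):
    n = A.shape[1]
    x = np.full(n, 1.0 / n)
    b = A @ x                                  # current residual b^{k-1}
    for _ in range(iters):
        s = np.argmin(A.T @ b)                 # column with largest angle to b
        if A[:, s] @ b > 0:
            return None, b                     # certificate of infeasibility
        dvec = b - A[:, s]
        lam = (1.0 - A[:, s] @ b) / (dvec @ dvec)  # minimizer on the segment
        b = lam * b + (1.0 - lam) * A[:, s]        # next residual on the line
        x *= lam
        x[s] += 1.0 - lam
    return x, b

# Feasible toy instance with unit columns: 0 = 0.5*A_1 + 0.5*A_2.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
x, b = von_neumann(A)
```

The classical analysis gives $\|b^k\| \le 1/\sqrt{k}$, which is the slow convergence noted on the slide.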
32 Von Neumann Algorithm (figure slide; image not transcribed)
33 Adjustment Algorithm for p Coordinates
Given $x^0 \ge 0$ with $e^T x^0 = 1$, compute $b^0 = Ax^0$. For $k = 1, 2, 3, \ldots$ do:
1. Compute $\{A_{\eta_1^+}, \ldots, A_{\eta_{s_1}^+}\}$, the columns with largest angle with $b^{k-1}$, and $\{A_{\eta_1^-}, \ldots, A_{\eta_{s_2}^-}\}$, the columns with smallest angle with $b^{k-1}$ such that $x_i^{k-1} > 0$, with $s_1 + s_2 = p$. Set $v^{k-1} = \min_{i=1,\ldots,s_1} A_{\eta_i^+}^T b^{k-1}$.
2. If $v^{k-1} > 0$, then stop: the problem is infeasible.
34 Adjustment Algorithm for p Coordinates
3. Solve the subproblem
$$\min\ \|\bar{b}\|^2 \quad \text{s.t.}\quad \lambda_0 \ge 0,\ \ \lambda_{\eta_i^+} \ge 0 \ (i = 1, \ldots, s_1),\ \ \lambda_{\eta_j^-} \ge 0 \ (j = 1, \ldots, s_2),$$
$$\lambda_0 \Big(1 - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1}\Big) + \sum_{i=1}^{s_1} \lambda_{\eta_i^+} + \sum_{j=1}^{s_2} \lambda_{\eta_j^-} = 1,$$
where
$$\bar{b} = \lambda_0 \Big(b^{k-1} - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} A_{\eta_i^+} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1} A_{\eta_j^-}\Big) + \sum_{i=1}^{s_1} \lambda_{\eta_i^+} A_{\eta_i^+} + \sum_{j=1}^{s_2} \lambda_{\eta_j^-} A_{\eta_j^-}.$$
35 Adjustment Algorithm for p Coordinates
4. Update
$$b^k = \lambda_0 \Big(b^{k-1} - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} A_{\eta_i^+} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1} A_{\eta_j^-}\Big) + \sum_{i=1}^{s_1} \lambda_{\eta_i^+} A_{\eta_i^+} + \sum_{j=1}^{s_2} \lambda_{\eta_j^-} A_{\eta_j^-},$$
$$u^k = \|b^k\|, \qquad x_j^k = \begin{cases} \lambda_0\, x_j^{k-1}, & j \notin \{\eta_1^+, \ldots, \eta_{s_1}^+, \eta_1^-, \ldots, \eta_{s_2}^-\}, \\ \lambda_{\eta_i^+}, & j = \eta_i^+,\ i = 1, \ldots, s_1, \\ \lambda_{\eta_j^-}, & j = \eta_j^-,\ j = 1, \ldots, s_2, \end{cases}$$
and set $k = k + 1$.
36 Subproblem Solution
For $p = 2$: small number of cases in the KKT conditions. For any $p$ the number of cases is
$$1 + 2C_p^1 + 2C_p^2 + \cdots + 2C_p^p = 2^{p+1} - 1.$$
Solution: apply interior point methods; the subproblem has small dimension if $p$ is small.
GHIDINI, OLIVEIRA, SILVA AND VELAZCO, Linear Algebra and its Applications, 218 (2012).
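The closed form for the count of KKT sign cases can be verified directly:

```python
from math import comb

# 1 + 2*C(p,1) + 2*C(p,2) + ... + 2*C(p,p) = 1 + 2*(2^p - 1) = 2^(p+1) - 1
def cases(p):
    return 1 + 2 * sum(comb(p, k) for k in range(1, p + 1))
```

For $p = 2$ this gives $1 + 2\cdot 2 + 2\cdot 1 = 7$ cases, matching $2^3 - 1$; the exponential growth in $p$ is why the interior point approach is preferred for larger $p$.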
37 Subproblem - Interior Point Method
$$\min\ \tfrac{1}{2}\|W\lambda\|^2 \quad \text{s.t.}\quad c^T \lambda = 1,\ \lambda \ge 0,$$
where
$$W = \big[w \ \ A_{\eta_1^+} \cdots A_{\eta_{s_1}^+} \ \ A_{\eta_1^-} \cdots A_{\eta_{s_2}^-}\big], \qquad w = b^{k-1} - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} A_{\eta_i^+} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1} A_{\eta_j^-},$$
$$\lambda = (\lambda_0, \lambda_{\eta_1^+}, \ldots, \lambda_{\eta_{s_1}^+}, \lambda_{\eta_1^-}, \ldots, \lambda_{\eta_{s_2}^-}), \qquad c = (c_1, 1, \ldots, 1), \qquad c_1 = 1 - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1}.$$
38 Subproblem - Interior Point Method
Solve the linear system at each subproblem iteration:
$$\begin{bmatrix} W^T W & -c & -I \\ U & 0 & \Lambda \\ c^T & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta\lambda \\ \Delta l \\ \Delta\mu \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix},$$
where $U = \operatorname{diag}(\mu)$, $\Lambda = \operatorname{diag}(\lambda)$, $r_1 = \mu + cl - W^T W \lambda$, $r_2 = -\Lambda U e$, $r_3 = 1 - c^T \lambda$.
39 Subproblem - Interior Point Method
Search directions:
$$\Delta\mu = \Lambda^{-1} r_2 - \Lambda^{-1} U \Delta\lambda,$$
$$\Delta\lambda = (W^T W + \Lambda^{-1} U)^{-1} r_4 + (W^T W + \Lambda^{-1} U)^{-1} c\, \Delta l,$$
$$c^T (W^T W + \Lambda^{-1} U)^{-1} c\, \Delta l = r_3 - c^T (W^T W + \Lambda^{-1} U)^{-1} r_4.$$
Linear systems:
$$(W^T W + \Lambda^{-1} U) v_1 = c, \qquad (W^T W + \Lambda^{-1} U) v_2 = r_4, \qquad r_4 = r_1 + \Lambda^{-1} r_2.$$
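Since both systems share the matrix $W^TW + \Lambda^{-1}U$, one Cholesky factorization serves both right-hand sides. A toy NumPy sketch with made-up data (the dimensions and variable names are assumptions):

```python
import numpy as np

# Both directions require solves with M = W^T W + Lambda^{-1} U, which is
# symmetric positive definite, so factor once and reuse for c and r4.
def chol_solve(L, rhs):
    y = np.linalg.solve(L, rhs)        # solve with the lower factor L
    return np.linalg.solve(L.T, y)     # then with L^T

rng = np.random.default_rng(4)
p, n = 3, 5
W = rng.standard_normal((p, n))
lam = rng.uniform(0.5, 2.0, n)         # current lambda > 0
mu = rng.uniform(0.5, 2.0, n)          # current mu > 0
M = W.T @ W + np.diag(mu / lam)        # W^T W + Lambda^{-1} U
c = rng.standard_normal(n)
r4 = rng.standard_normal(n)
L = np.linalg.cholesky(M)              # one factorization...
v1 = chol_solve(L, c)                  # ...reused for both right-hand sides
v2 = chol_solve(L, r4)
```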
40 Numerical Experiments
Problem sets: NETLIB, QAP, Kennington. Total: 189 problems.
Medium problems: BL, BL2, chr22b, chr25a, CO5, CO9, CQ9, cre-b, cre-d, dfl001, els19, ex09, fit1p, fit2p, GE, greenbeb, ken13, NL, nug06-3rd, nug12, nug15, osa-60, pds-10, pds-20.
Large problems: pds-30, pilot87, qap12, qap15, rou20, scr20, stocfor3, ken18, kra30a(1), kra30b(1), nug07-3rd, nug20(1), pds-70, pds-80, pds-90, pds-100, ste36a(1), ste36b(1), ste36c(1).
(1) Need large storage.
41 Numerical Experiments
Running time (sec.) table comparing LUrec, LUrec+Reg, LUstd and LUstd+Reg on the largest problems (the pds, ken, kra30a, kra30b, ste36a, ste36b, ste36c and nug instances), with rows for the remaining problems, the total, and the problems solved in common. [Numeric entries not recoverable from the transcription.]
42 Numerical Experiments
Processing time:
Solved problems: 62.96% (LUrec+Reg), 78.84% (LUstd), 86.24% (LUstd+Reg)
Not solved: 37.04% (LUrec+Reg), 21.16% (LUstd), 13.76% (LUstd+Reg)
[The LUrec column and the total times (sec) are not recoverable from the transcription.]
Difference: h. LUstd is more efficient than LUrec.
43 Conclusions
- Iterative methods: suitable for very large problems
- Preconditioners: for the normal equations and the augmented system
- Sophisticated implementations: fast and robust
44 Future Work
- Heuristics for changing preconditioners
- More robust iterative methods in the change of phases
- New simple methods
- Augmented system
- New preconditioners
MULTILEVEL ALGORITHMS FOR LARGE-SCALE INTERIOR POINT METHODS MICHELE BENZI, ELDAD HABER, AND LAUREN TARALLI Abstract. We develop and compare multilevel algorithms for solving constrained nonlinear variational
More informationLecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,
More informationSVM May 2007 DOE-PI Dianne P. O Leary c 2007
SVM May 2007 DOE-PI Dianne P. O Leary c 2007 1 Speeding the Training of Support Vector Machines and Solution of Quadratic Programs Dianne P. O Leary Computer Science Dept. and Institute for Advanced Computer
More informationBlockMatrixComputations and the Singular Value Decomposition. ATaleofTwoIdeas
BlockMatrixComputations and the Singular Value Decomposition ATaleofTwoIdeas Charles F. Van Loan Department of Computer Science Cornell University Supported in part by the NSF contract CCR-9901988. Block
More informationAn interior-point gradient method for large-scale totally nonnegative least squares problems
An interior-point gradient method for large-scale totally nonnegative least squares problems Michael Merritt and Yin Zhang Technical Report TR04-08 Department of Computational and Applied Mathematics Rice
More informationComputational Methods. Least Squares Approximation/Optimization
Computational Methods Least Squares Approximation/Optimization Manfred Huber 2011 1 Least Squares Least squares methods are aimed at finding approximate solutions when no precise solution exists Find the
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationCG and MINRES: An empirical comparison
School of Engineering CG and MINRES: An empirical comparison Prequel to LSQR and LSMR: Two least-squares solvers David Fong and Michael Saunders Institute for Computational and Mathematical Engineering
More informationChapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of Regularization
In L. Adams and J. L. Nazareth eds., Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, Philadelphia, 92 100 1996. Chapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of
More informationSOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA
1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization
More informationSolving Ax = b, an overview. Program
Numerical Linear Algebra Improving iterative solvers: preconditioning, deflation, numerical software and parallelisation Gerard Sleijpen and Martin van Gijzen November 29, 27 Solving Ax = b, an overview
More informationSparse least squares and Q-less QR
Notes for 2016-02-29 Sparse least squares and Q-less QR Suppose we want to solve a full-rank least squares problem in which A is large and sparse. In principle, we could solve the problem via the normal
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationANONSINGULAR tridiagonal linear system of the form
Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal
More informationPreface to the Second Edition. Preface to the First Edition
n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................
More informationIII. Applications in convex optimization
III. Applications in convex optimization nonsymmetric interior-point methods partial separability and decomposition partial separability first order methods interior-point methods Conic linear optimization
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.
More informationImproving an interior-point algorithm for multicommodity flows by quadratic regularizations
Improving an interior-point algorithm for multicommodity flows by quadratic regularizations Jordi Castro Jordi Cuesta Dept. of Stat. and Operations Research Dept. of Chemical Engineering Universitat Politècnica
More informationLow-Rank Factorization Models for Matrix Completion and Matrix Separation
for Matrix Completion and Matrix Separation Joint work with Wotao Yin, Yin Zhang and Shen Yuan IPAM, UCLA Oct. 5, 2010 Low rank minimization problems Matrix completion: find a low-rank matrix W R m n so
More informationIterative methods for Linear System
Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationA MATRIX-FREE PRECONDITIONER FOR SPARSE SYMMETRIC POSITIVE DEFINITE SYSTEMS AND LEAST-SQUARES PROBLEMS
A MATRIX-FREE PRECONDITIONER FOR SPARSE SYMMETRIC POSITIVE DEFINITE SYSTEMS AND LEAST-SQUARES PROBLEMS STEFANIA BELLAVIA, JACEK GONDZIO, AND BENEDETTA MORINI Abstract. We analyze and discuss matrix-free
More informationSolving large Semidefinite Programs - Part 1 and 2
Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior
More informationPreconditioning via Diagonal Scaling
Preconditioning via Diagonal Scaling Reza Takapoui Hamid Javadi June 4, 2014 1 Introduction Interior point methods solve small to medium sized problems to high accuracy in a reasonable amount of time.
More informationSparse Covariance Selection using Semidefinite Programming
Sparse Covariance Selection using Semidefinite Programming A. d Aspremont ORFE, Princeton University Joint work with O. Banerjee, L. El Ghaoui & G. Natsoulis, U.C. Berkeley & Iconix Pharmaceuticals Support
More information