Linear Algebra in IPMs


1 School of Mathematics, The University of Edinburgh
Linear Algebra in IPMs
Jacek Gondzio
Email: J.Gondzio@ed.ac.uk
URL: http://www.maths.ed.ac.uk/~gondzio

2 Outline
- Linear Algebra in IPMs for LP, QP and NLP
- Definite, Indefinite and Quasidefinite Systems
  - Cholesky factorization
- Exploiting Sparsity in Gaussian Elimination
  - minimum degree ordering
  - nested dissection
- Very Large Scale Optimization
  - implicit inverse representation
  - from sparsity to block-sparsity
  - structured optimization problems
  - OOPS: Object-Oriented Parallel Solver
- Applications
  - financial planning problems (nonlinear risk measures)
  - data mining (nonlinear kernels in SVMs)

3 Summary: From LP to QP
Newton direction:
\[
\begin{bmatrix} A & 0 & 0 \\ -Q & A^T & I \\ S & 0 & X \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \\ \Delta s \end{bmatrix}
=
\begin{bmatrix} \xi_p \\ \xi_d \\ \xi_\mu \end{bmatrix},
\]
where \(\xi_p = b - Ax\), \(\xi_d = c - A^T y - s + Qx\), \(\xi_\mu = \mu e - XSe\).
Augmented system:
\[
\begin{bmatrix} -Q - \Theta^{-1} & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} \xi_d - X^{-1}\xi_\mu \\ \xi_p \end{bmatrix}.
\]
Conclusion: QP is a natural extension of LP.

4 IPMs: LP vs QP
Augmented system in LP:
\[
\begin{bmatrix} -\Theta^{-1} & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} \xi_d - X^{-1}\xi_\mu \\ \xi_p \end{bmatrix}.
\]
Eliminate \(\Delta x\) from the first equation and get the normal equations
\[ (A \Theta A^T)\,\Delta y = g. \]

5 IPMs: LP vs QP
Augmented system in QP:
\[
\begin{bmatrix} -Q - \Theta^{-1} & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} \xi_d - X^{-1}\xi_\mu \\ \xi_p \end{bmatrix}.
\]
Eliminate \(\Delta x\) from the first equation and get the normal equations
\[ \bigl(A (Q + \Theta^{-1})^{-1} A^T\bigr)\,\Delta y = g. \]
One can use normal equations in LP, but not in QP: the normal equations in QP may become almost completely dense even for sparse matrices A and Q. Thus, in QP, the indefinite augmented system form is usually used.

6 KKT systems in IPMs for LP, QP and NLP
LP:
\[ \begin{bmatrix} -\Theta^{-1} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f \\ d \end{bmatrix} \]
QP:
\[ \begin{bmatrix} -Q - \Theta^{-1} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f \\ d \end{bmatrix} \]
NLP:
\[ \begin{bmatrix} -Q(x,y) - \Theta_P^{-1} & A(x)^T \\ A(x) & \Theta_D \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f \\ d \end{bmatrix} \]

7 Cholesky factorization
Compute a decomposition
\[ L D L^T = A \Theta A^T, \]
where L is a unit lower triangular matrix and D is a diagonal matrix.
Cholesky factorization is simply the Gaussian elimination process that exploits two properties of the matrix: symmetry and positive definiteness.

8 Use of Cholesky factorization
Replace the difficult equation
\[ (A \Theta A^T)\,\Delta y = g \]
with a sequence of easy equations:
\[ L u = g, \qquad D v = u, \qquad L^T \Delta y = v. \]
Note that \( g = Lu = L(Dv) = LD(L^T \Delta y) = (L D L^T)\,\Delta y = (A \Theta A^T)\,\Delta y \).
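A minimal factor-solve sketch of this slide, assuming dense numpy/scipy matrices for clarity (production IPM codes use sparse Cholesky instead): it forms \(A \Theta A^T\), extracts the unit triangular L and diagonal D from a standard Cholesky factor, and performs the three easy solves.

```python
# Factor-solve: replace one hard solve by three easy ones (dense sketch).
import numpy as np
from scipy.linalg import solve_triangular

def ldl_solve(L, d, g):
    """Solve (L D L^T) dy = g by three easy solves."""
    u = solve_triangular(L, g, lower=True)         # L u = g
    v = u / d                                      # D v = u (d = diag of D)
    return solve_triangular(L.T, v, lower=False)   # L^T dy = v

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
theta = rng.uniform(0.5, 2.0, 8)
M = A @ np.diag(theta) @ A.T                       # M = A Theta A^T

C = np.linalg.cholesky(M)                          # M = C C^T, C lower
d = np.diag(C) ** 2                                # pivots of the LDL^T form
L = C / np.diag(C)                                 # unit lower triangular
g = rng.standard_normal(5)
dy = ldl_solve(L, d, g)
assert np.allclose(M @ dy, g)                      # (A Theta A^T) dy = g
```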

9 Symmetric Gaussian Elimination
Let \(H \in \mathbb{R}^{m \times m}\) be a symmetric positive definite matrix
\[
H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1m} \\ h_{21} & h_{22} & \cdots & h_{2m} \\ \vdots & & & \vdots \\ h_{m1} & h_{m2} & \cdots & h_{mm} \end{bmatrix}.
\]
By applying Gaussian elimination to it, we can represent it in the form
\[
H = \begin{bmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{m1} & l_{m2} & \cdots & 1 \end{bmatrix}
\begin{bmatrix} d_{11} & & & \\ & d_{22} & & \\ & & \ddots & \\ & & & d_{mm} \end{bmatrix}
\begin{bmatrix} 1 & l_{21} & \cdots & l_{m1} \\ & 1 & \cdots & l_{m2} \\ & & \ddots & \vdots \\ & & & 1 \end{bmatrix}.
\]
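The elimination loop itself is short. A sketch of symmetric Gaussian elimination producing the unit lower triangular L and diagonal D, assuming H is dense, symmetric and positive definite so no pivoting is needed:

```python
# Symmetric Gaussian elimination: H = L D L^T (dense, no-pivoting sketch).
import numpy as np

def symmetric_ge(H):
    H = H.astype(float).copy()
    m = H.shape[0]
    L = np.eye(m)
    d = np.zeros(m)
    for k in range(m):
        d[k] = H[k, k]                     # pivot d_kk
        L[k+1:, k] = H[k+1:, k] / d[k]     # multipliers l_ik
        H[k+1:, k+1:] -= np.outer(L[k+1:, k], H[k, k+1:])   # Schur complement
    return L, d

H = np.array([[4.0, 2.0], [2.0, 3.0]])
L, d = symmetric_ge(H)
assert np.allclose(L @ np.diag(d) @ L.T, H)
```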

10 Symmetric GE: Examples
Example 1 and Example 2: two numerical \(2 \times 2\) factorizations of the form \(H = L D L^T\); the numeric entries did not survive transcription, so a small illustrative instance is given below.
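An illustrative instance of the same computation (not the slide's original example):
\[ \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1/2 & 1 \end{bmatrix} \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 1 & 1/2 \\ 0 & 1 \end{bmatrix}, \]
with \(l_{21} = h_{21}/h_{11} = 1/2\), \(d_{11} = 4\) and \(d_{22} = h_{22} - l_{21}^2 d_{11} = 3 - 1 = 2\).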

11 Existence of the LDL^T factorization
Lemma: The decomposition \(H = L D L^T\) with \(d_{ii} > 0\) for all i exists iff H is positive definite (PD).
Proof: Part 1 (⇒) Let \(H = L D L^T\) with \(d_{ii} > 0\). Take any \(x \neq 0\) and let \(u = L^T x\). Since L is a unit lower triangular matrix, it is nonsingular, so \(u \neq 0\) and
\[ x^T H x = x^T L D L^T x = u^T D u = \sum_{i=1}^{m} d_{ii} u_i^2 > 0. \]

12 Proof (cont'd): Part 2 (⇐) Proof by induction on the dimension of H.
For m = 1: \(H = h_{11} = d_{11} > 0\) iff H is PD.
Assume the result is true for \(m = k - 1 \ge 1\). Let
\[ H = \begin{bmatrix} W & a \\ a^T & q \end{bmatrix} \in \mathbb{R}^{k \times k} \]
be a given positive definite matrix, with \(W \in \mathbb{R}^{(k-1)\times(k-1)}\), \(a \in \mathbb{R}^{k-1}\) and \(q \in \mathbb{R}\).
Note first that since H is PD, W is also PD. Indeed, for any \((x, 0) \in \mathbb{R}^k\) we have
\[ [x^T, 0] \begin{bmatrix} W & a \\ a^T & q \end{bmatrix} \begin{bmatrix} x \\ 0 \end{bmatrix} = x^T W x > 0 \quad \forall\, x \in \mathbb{R}^{k-1},\ x \neq 0. \]
From the inductive hypothesis we know that \(W = L D L^T\) with \(d_{ii} > 0\). Let
\[ \begin{bmatrix} W & a \\ a^T & q \end{bmatrix} = \begin{bmatrix} L & 0 \\ l^T & 1 \end{bmatrix} \begin{bmatrix} D & 0 \\ 0 & d \end{bmatrix} \begin{bmatrix} L^T & l \\ 0 & 1 \end{bmatrix}, \]

13 where l is the solution of the equation \((LD)\,l = a\) (well defined since L and D are nonsingular) and d is given by \(d = q - l^T D l\). Hence the matrix \(H = \begin{bmatrix} W & a \\ a^T & q \end{bmatrix}\) has an \(L D L^T\) decomposition. It remains to prove that \(d > 0\). Consider the vector
\[ x = \begin{bmatrix} -L^{-T} l \\ 1 \end{bmatrix}. \]
Since H is positive definite, we get
\[ 0 < x^T H x = [-l^T L^{-1}, 1] \begin{bmatrix} L & 0 \\ l^T & 1 \end{bmatrix} \begin{bmatrix} D & 0 \\ 0 & d \end{bmatrix} \begin{bmatrix} L^T & l \\ 0 & 1 \end{bmatrix} \begin{bmatrix} -L^{-T} l \\ 1 \end{bmatrix} = [0, 1] \begin{bmatrix} D & 0 \\ 0 & d \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = d, \]
which completes the proof.

14 Definite & Indefinite Systems
Cholesky factorization fails for an indefinite matrix.
Example 1: negative pivot \(d_{22} < 0\) (entries partly reconstructed from the surviving fragments):
\[ \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 2/3 & 1 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & -1/3 \end{bmatrix} \begin{bmatrix} 1 & 2/3 \\ 0 & 1 \end{bmatrix}. \]
Example 2: \(d_{11} = 0\); we cannot even start the elimination, e.g.
\[ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \ ??? \]
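This failure is easy to reproduce numerically. A one-line check with numpy, using the indefinite matrix of Example 1:

```python
# Cholesky fails on an indefinite matrix (here det = -1 < 0).
import numpy as np

H = np.array([[3.0, 2.0], [2.0, 1.0]])
try:
    np.linalg.cholesky(H)
except np.linalg.LinAlgError as err:
    print("Cholesky failed:", err)
```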

15 Definite & Indefinite Systems (cont'd)
IPMs: For the indefinite augmented system
\[ \begin{bmatrix} -\Theta^{-1} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} r \\ h \end{bmatrix} \]
one needs to use some special tricks. For the positive definite normal equations
\[ (A \Theta A^T)\,\Delta y = g \]
one can compute the Cholesky factorization.

16 Major Cholesky
André-Louis Cholesky (1875-1918), Major of the French Army, descendant of the Cholewski family of Polish immigrants.
Read: M. A. Saunders, Major Cholesky would feel proud, ORSA Journal on Computing 6 (1994), No. 1.

17 Symmetric Factorization
Two-step solution method:
- factorization to the \(L D L^T\) form,
- backsolve to compute the direction \(\Delta y\).
A symmetric nonsingular matrix H is factorizable if there exist a diagonal matrix D and a unit lower triangular matrix L such that \(H = L D L^T\).
A symmetric matrix H is strongly factorizable if for any permutation matrix P a factorization \(P H P^T = L D L^T\) exists.
A general symmetric indefinite matrix is not factorizable.

18 Factoring an Indefinite Matrix
Two options are possible:
1. Replace the diagonal matrix D with a block-diagonal one and allow \(2 \times 2\) (indefinite) pivots
\[ \begin{bmatrix} 0 & a \\ a & 0 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 0 & a \\ a & d \end{bmatrix}. \]
Hence obtain a decomposition \(H = L D L^T\) with block-diagonal D.
2. Regularize the indefinite matrix to produce a quasidefinite matrix
\[ K = \begin{bmatrix} -E & A^T \\ A & F \end{bmatrix}, \]
where \(E \in \mathbb{R}^{n \times n}\) is positive definite, \(F \in \mathbb{R}^{m \times m}\) is positive definite, and \(A \in \mathbb{R}^{m \times n}\) has full row rank.

19 Quasidefinite (QDF) Matrices
A symmetric matrix is called quasidefinite if
\[ K = \begin{bmatrix} -E & A^T \\ A & F \end{bmatrix}, \]
where \(E \in \mathbb{R}^{n \times n}\) and \(F \in \mathbb{R}^{m \times m}\) are positive definite and \(A \in \mathbb{R}^{m \times n}\) has full row rank.
QDF matrices are strongly factorizable: for any quasidefinite matrix there exists a Cholesky-like factorization
\[ K = L D L^T, \]
where D is diagonal but not positive definite; it has n negative pivots and m positive pivots.

20 From Indefinite to Quasidefinite
The indefinite matrix
\[ H = \begin{bmatrix} -Q - \Theta^{-1} & A^T \\ A & 0 \end{bmatrix} \]
in IPMs can be converted into a quasidefinite one. Regularize it using the dynamic regularization
\[ \bar{H} = \begin{bmatrix} -Q - \Theta^{-1} & A^T \\ A & 0 \end{bmatrix} + \begin{bmatrix} -R_p & 0 \\ 0 & R_d \end{bmatrix}, \]
where \(R_p \in \mathbb{R}^{n \times n}\) and \(R_d \in \mathbb{R}^{m \times m}\) are the primal and dual regularizations.
For any quasidefinite matrix there exists a Cholesky-like factorization \(\bar{H} = L D L^T\), where D is diagonal but not positive definite: n negative pivots and m positive pivots.
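A dense sketch of this regularization, assuming numpy/scipy and illustrative fixed regularization weights (real IPM codes choose R_p and R_d dynamically, pivot by pivot). By Sylvester's law of inertia the eigenvalue signs confirm the n negative / m positive pivot counts:

```python
# Regularize the indefinite augmented system into a quasidefinite matrix.
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(1)
n, m = 6, 3
A = rng.standard_normal((m, n))              # full row rank (generically)
G = rng.standard_normal((n, n))
Q = G @ G.T                                  # symmetric positive semidefinite
theta_inv = rng.uniform(0.5, 2.0, n)

H = np.block([[-(Q + np.diag(theta_inv)), A.T],
              [A, np.zeros((m, m))]])        # indefinite augmented system

Rp, Rd = 1e-6 * np.eye(n), 1e-6 * np.eye(m)  # illustrative fixed weights
K = H + np.block([[-Rp, np.zeros((n, m))],
                  [np.zeros((m, n)), Rd]])   # quasidefinite

lu, d, perm = ldl(K, lower=True)             # Cholesky-like LDL^T
eigs = np.linalg.eigvalsh(K)
print((eigs < 0).sum(), "negative,", (eigs > 0).sum(), "positive")  # n and m
```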

21 Large Problems are Sparse
Suppose a large LP is solved, with very large m and n. Can all variables be linked at the same time? No; usually only a subset of them is linked. There are typically only several nonzeros per row in an LP.
Large problems are always sparse. Exploiting sparsity in computations leads to huge savings. Exploiting sparsity mainly means avoiding useless computations: computations for which the result is known in advance, such as multiplications with zero.

22 Exploiting sparsity: Example
For a sparse vector x, the product Ax is a combination of just a few columns of A, e.g.
\[ Ax = 2 A_{\cdot 1} + 5 A_{\cdot 3} - 2 A_{\cdot j} \]
(one column index was lost in transcription). If the columns involved contain five nonzeros in total, this matrix-vector multiplication involves only five multiplications and five additions; we say it needs 5 flops.
A flop is a floating point operation: \(x := x + a \cdot b\).
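A toy sketch of this idea in Python, assuming a column-wise sparse storage (a simplified stand-in for the CSC format): only the stored nonzeros in the columns selected by the nonzeros of x cost a flop.

```python
# Sparse matrix-vector product: touch only stored nonzeros.
def sparse_matvec(cols, x, m):
    """cols[j] is a list of (row, value) pairs for column j of A."""
    y = [0.0] * m
    flops = 0
    for j, xj in enumerate(x):
        if xj == 0.0:
            continue                      # skip useless computations
        for i, aij in cols[j]:
            y[i] += aij * xj              # one flop: y_i := y_i + a_ij * x_j
            flops += 1
    return y, flops

# x has three nonzeros; the touched columns hold five nonzeros => 5 flops.
cols = {0: [(0, 1.0), (2, 4.0)], 1: [(1, 3.0)],
        2: [(0, 2.0), (3, 5.0)], 3: [(1, 7.0)]}
y, flops = sparse_matvec(cols, [2.0, 0.0, 5.0, -2.0], m=4)
print(y, flops)                           # flops == 5
```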

23 General Sparse Systems
A single step of Gaussian elimination on
\[ A = \begin{bmatrix} p & v^T \\ u & A_1 \end{bmatrix} \]
produces the Schur complement \(A_1 - \tfrac{1}{p}\, u v^T\).
Markowitz pivot choice: Let \(r_i\) and \(c_j\), \(i, j = 1, 2, \ldots, n\), be the numbers of nonzero entries in row i and column j, respectively. The elimination of the pivot \(a_{ij}\) needs
\[ f_{ij} = (r_i - 1)(c_j - 1) \]
flops, and this step creates at most \(f_{ij}\) new nonzero entries in the Schur complement.
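A sketch of the Markowitz rule in Python, assuming the sparsity pattern is given as a list of (i, j) positions; a real implementation would also screen candidates for numerical stability:

```python
def markowitz_pivot(pattern, m, n):
    """Pick the nonzero a_ij minimizing the fill estimate (r_i-1)(c_j-1)."""
    r = [0] * m
    c = [0] * n
    for i, j in pattern:
        r[i] += 1
        c[j] += 1
    return min(pattern, key=lambda ij: (r[ij[0]] - 1) * (c[ij[1]] - 1))

# Entry (1, 1) sits in a singleton column: f_11 = (2-1)(1-1) = 0, no fill.
pattern = [(0, 0), (0, 3), (1, 1), (1, 2), (2, 0), (2, 2), (3, 3)]
print(markowitz_pivot(pattern, 4, 4))   # -> (1, 1)
```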

24 General Sparse Systems
The effect of pivot elimination on the sparsity pattern: (two 8x8 sparsity-pattern diagrams, before and after elimination; p = pivot, x = nonzero, f = fill-in. Eliminating the pivot p creates fill-in entries f wherever a row and a column that each share a nonzero with the pivot row and column intersect in the Schur complement.)

25 Markowitz Pivot Choice: Example
Markowitz: choose the pivot with \(\min_{i,j} f_{ij}\). (Two 8x8 sparsity-pattern diagrams: eliminating the chosen pivot p, which sits in a sparse row and column and has a small Markowitz count, creates only two fill-in entries f.)

26 Exploiting Sparsity in Cholesky Factorization
(Sparsity-pattern diagrams.) The matrix H has a dense first row and column; its Cholesky factor L fills in completely. After a symmetric permutation that moves the dense row and column to the end, the reordered matrix \(P H P^T\) has a Cholesky factor L that stays as sparse as the matrix itself.

27 From Sparsity to Block-Sparsity
(Sparsity-pattern diagrams.) The same reordering idea applies at two levels: for a sparse matrix H, a symmetric permutation \(P H P^T\) keeps the factor L sparse; for a block-sparse matrix, the analogous block permutation keeps the factor block-sparse.

28 Minimum Degree Ordering (MDO)
In the symmetric positive definite case pivots are chosen from the diagonal and \(r_i = c_i\); hence choose the pivot with \(\min_i r_i\).
Minimum degree ordering: choose an element with the minimum number of nonzeros in its row, that is, choose a node with the minimum number of neighbours (a node of minimum degree) in the graph associated with the sparsity pattern of the matrix.

29 Minimum Degree Ordering (MDO)
(Sparsity-pattern diagrams showing the elimination of pivots \(h_{11}\) and \(h_{22}\), with fill-in marked f.)
Minimum degree ordering: choose a diagonal element corresponding to a row with the minimum number of nonzeros, and permute the rows and columns of H accordingly. MDO is simply the symmetric version of the Markowitz pivot rule.
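A greedy sketch of MDO on the elimination graph, in Python. The adjacency structure below is an illustrative assumption: it encodes the "arrow" matrix from the Cholesky example, whose dense row/column MDO defers to the end so the factor stays sparse.

```python
# Greedy minimum degree ordering on the elimination graph.
def minimum_degree(adj):
    adj = {v: set(nb) for v, nb in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))   # node of minimum degree
        nbrs = adj.pop(v)
        order.append(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}   # eliminating v pairwise-connects its
                                   # neighbours: the fill-in
    return order

adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(minimum_degree(adj))   # -> [1, 2, 3, 4, 0]: the dense node comes last
```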

30 From Sparsity to Block-Sparsity
Apply minimum degree ordering to (sparse) blocks. (Block-sparsity diagrams showing the elimination of pivot blocks \(H_{11}\) and \(H_{22}\).)
Choose a diagonal block-pivot corresponding to a block-row with the minimum number of blocks. Permute the block-rows and block-columns of H accordingly.

31 Nested Dissection
(Sparsity-pattern diagrams of an 11x11 matrix, original and reordered.) The nonzeros of rows/columns 4 and 7 couple the remaining rows. Reordering so that these separator rows and columns come last, i.e. in the order 1, 2, 3, 5, 6, 8, 9, 10, 11, 4, 7, splits the rest of the matrix into independent diagonal blocks bordered by the separator.

32 Structured Problems
Observation: Truly large scale problems are not only sparse; such problems are structured.
Structure is displayed in: the Jacobian matrix A and the Hessian matrix Q.
Structure can be exploited in: the IPM algorithm, and the linear algebra of the IPM (the focus of this lecture).

33 Primal Block-Angular Structure
(Sparsity-pattern diagrams of Q, A and \(A^T\); the pattern entries did not survive transcription.) Reorder blocks: {1,3; 2,4; 5}, giving the permuted augmented matrix \(P H P^T\).

34 Dual Block-Angular Structure
(Sparsity-pattern diagrams of Q, A and \(A^T\); the pattern entries did not survive transcription.) Reorder blocks: {1,4; 2,5; 3}, giving the permuted augmented matrix \(P H P^T\).

35 Row & Column Bordered Block-Diagonal Structure
(Sparsity-pattern diagrams of Q, A and \(A^T\); the pattern entries did not survive transcription.) Reorder blocks: {1,4; 2,5; 3,6}, giving the permuted augmented matrix \(P H P^T\).

36 Example: Bordered Block-Diagonal Structure
\[
\Phi = \begin{bmatrix} \Phi_1 & & & B_1^T \\ & \ddots & & \vdots \\ & & \Phi_n & B_n^T \\ B_1 & \cdots & B_n & \Phi_0 \end{bmatrix}
= \underbrace{\begin{bmatrix} L_1 & & & \\ & \ddots & & \\ & & L_n & \\ L_{1,0} & \cdots & L_{n,0} & L_0 \end{bmatrix}}_{L}\;
\underbrace{\begin{bmatrix} D_1 & & & \\ & \ddots & & \\ & & D_n & \\ & & & D_0 \end{bmatrix}}_{D}\;
\underbrace{\begin{bmatrix} L_1^T & & & L_{1,0}^T \\ & \ddots & & \vdots \\ & & L_n^T & L_{n,0}^T \\ & & & L_0^T \end{bmatrix}}_{L^T}
\]
The blocks \(\Phi_i\), \(i = 0, 1, \ldots, n\), are KKT systems.

37 Example: Bordered Block-Diagonal Structure
Cholesky-like factors obtained by the Schur complement:
\[ \Phi_i = L_i D_i L_i^T, \qquad L_{i,0} = B_i L_i^{-T} D_i^{-1}, \qquad i = 1, \ldots, n, \]
\[ C = \Phi_0 - \sum_{i=1}^{n} L_{i,0} D_i L_{i,0}^T = L_0 D_0 L_0^T. \]
The system \(\Phi x = b\) is then solved by
\[ z_i = L_i^{-1} b_i, \qquad z_0 = L_0^{-1}\Bigl(b_0 - \sum_i L_{i,0} z_i\Bigr), \qquad y_i = D_i^{-1} z_i \ (i = 0, 1, \ldots, n), \]
\[ x_0 = L_0^{-T} y_0, \qquad x_i = L_i^{-T}\bigl(y_i - L_{i,0}^T x_0\bigr). \]
The operations (Cholesky, Solve, Product) are performed on sub-blocks.
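A dense numpy sketch of this block factor/solve scheme. For simplicity it assumes the blocks \(\Phi_i\) are symmetric positive definite so plain Cholesky applies (the IPM blocks are themselves KKT systems and would need the Cholesky-like LDL^T instead), and it forms the Schur complement C explicitly:

```python
# Bordered block-diagonal factor/solve via the Schur complement (dense sketch).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def factor(Phi, B, Phi0):
    facs = [cho_factor(P) for P in Phi]
    # C = Phi_0 - sum_i B_i Phi_i^{-1} B_i^T
    C = Phi0 - sum(Bi @ cho_solve(f, Bi.T) for f, Bi in zip(facs, B))
    return facs, cho_factor(C)

def solve(facs, C_fac, B, b, b0):
    z = [cho_solve(f, bi) for f, bi in zip(facs, b)]
    x0 = cho_solve(C_fac, b0 - sum(Bi @ zi for Bi, zi in zip(B, z)))
    x = [cho_solve(f, bi - Bi.T @ x0) for f, Bi, bi in zip(facs, B, b)]
    return x, x0

# Small random instance: two 3x3 diagonal blocks, a 2x2 corner block.
rng = np.random.default_rng(2)
spd = lambda k: (lambda G: G @ G.T + k * np.eye(k))(rng.standard_normal((k, k)))
Phi = [spd(3), spd(3)]
B = [0.1 * rng.standard_normal((2, 3)) for _ in range(2)]
Phi0 = spd(2)
facs, C_fac = factor(Phi, B, Phi0)
b = [rng.standard_normal(3) for _ in range(2)]
b0 = rng.standard_normal(2)
x, x0 = solve(facs, C_fac, B, b, b0)
Full = np.block([[Phi[0], np.zeros((3, 3)), B[0].T],
                 [np.zeros((3, 3)), Phi[1], B[1].T],
                 [B[0], B[1], Phi0]])
assert np.allclose(Full @ np.concatenate(x + [x0]), np.concatenate(b + [b0]))
```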

38 Abstract Linear Algebra for IPMs
Execute the operation: solve the (reduced) KKT system in IPMs for LP, QP and NLP. It works like the backslash operator in MATLAB.
Assumption: Q and A are block-structured.

39 Linear Algebra of IPMs
\[
\underbrace{\begin{bmatrix} -Q - \Theta_P^{-1} & A^T \\ A & \Theta_D \end{bmatrix}}_{\Phi(\mathrm{NLP})}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f \\ d \end{bmatrix}
\]
Tree representation of the matrix A: (diagram of a nested block structure with leaf blocks \(D_{ij}\), \(B_{ij}\), \(C_{3j}\); children with primal and dual block-angular structures combine into the overall primal block-angular matrix A.)

40 The structures of A and Q imply the structure of \(\Phi\). (Block-sparsity diagrams: the block patterns of Q and A are assembled into the augmented matrix, and a block permutation P regroups it into a nested bordered block-diagonal form.)

41 OOPS: Object-Oriented Linear Algebra for IPMs
Every node in the block elimination tree has its own linear algebra implementation (depending on its type). Each implementation is a realisation of an abstract linear algebra interface; different implementations are available for different structures.
Matrix interface operations: Factorize, SolveL, SolveLt, y = Mx, y = M^T x.
Implementations:
- PrimalBlockAng: implicit factorization (blocks A_i, B_i)
- DualBlockAng: implicit factorization (blocks A_i, C_i)
- BorderedBlockDiag: implicit factorization (blocks A_i, B_i, C_i)
- RankCorrector: rank corrector implementation (D, R)
- SparseMatrix: general sparse linear algebra
- DenseMatrix: general dense linear algebra
Rebuild the block elimination tree with matrix interface structures.
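A Python sketch of the interface idea (OOPS itself is not Python; the operation names follow the slide, everything else is an illustrative assumption). Each tree node implements the same small interface, so a structured node can delegate to arbitrary child nodes:

```python
# Abstract matrix interface with a leaf and a structured implementation.
from abc import ABC, abstractmethod
import numpy as np

class Matrix(ABC):
    @abstractmethod
    def factorize(self): ...
    @abstractmethod
    def solve(self, b): ...          # SolveL followed by SolveLt
    @abstractmethod
    def mult(self, x): ...           # y = M x

class DenseMatrix(Matrix):
    """Leaf node: general dense linear algebra."""
    def __init__(self, M):
        self.M = np.asarray(M, dtype=float)
    def factorize(self):
        self.L = np.linalg.cholesky(self.M)
    def solve(self, b):
        y = np.linalg.solve(self.L, b)        # L y = b
        return np.linalg.solve(self.L.T, y)   # L^T x = y
    def mult(self, x):
        return self.M @ x

class BorderedBlockDiag(Matrix):
    """Interior node: implicit factorization via the Schur complement,
    delegating all block operations to child Matrix objects."""
    def __init__(self, blocks, borders, corner):
        self.blocks, self.borders, self.corner = blocks, borders, corner
    def factorize(self):
        for blk in self.blocks:
            blk.factorize()                   # recurse down the tree
        S = self.corner.M - sum(B @ blk.solve(B.T)
                                for blk, B in zip(self.blocks, self.borders))
        self.S = DenseMatrix(S)
        self.S.factorize()
    def solve(self, b):                       # b = ([b_1, ..., b_n], b_0)
        bs, b0 = b
        z = [blk.solve(bi) for blk, bi in zip(self.blocks, bs)]
        x0 = self.S.solve(b0 - sum(B @ zi for B, zi in zip(self.borders, z)))
        xs = [blk.solve(bi - B.T @ x0)
              for blk, B, bi in zip(self.blocks, self.borders, bs)]
        return xs, x0
    def mult(self, x):
        xs, x0 = x
        ys = [blk.mult(xi) + B.T @ x0
              for blk, B, xi in zip(self.blocks, self.borders, xs)]
        y0 = sum(B @ xi for B, xi in zip(self.borders, xs)) + self.corner.mult(x0)
        return ys, y0

node = BorderedBlockDiag([DenseMatrix(np.eye(2) * 4), DenseMatrix(np.eye(2) * 3)],
                         [np.ones((1, 2)), np.ones((1, 2))],
                         DenseMatrix(np.eye(1) * 10))
node.factorize()
xs, x0 = node.solve(([np.ones(2), np.ones(2)], np.ones(1)))
```

A BorderedBlockDiag node whose children are themselves structured nodes gives exactly the nested elimination tree of the previous slides.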

42 OOPS: Matrix Revolutions, Matrix Reloaded

43 Structured Problems ... are present everywhere.

44 Sources of Structure: Dynamics
Staircase structure: (sparsity diagram of blocks A, B, I repeating down the diagonal.)
\[ x_{t+1} = A_t x_t + B_t u_t, \qquad x_{t+1} = A^{t+1}_t x_t + \cdots + A^{t+1}_{t-p} x_{t-p} + B_t u_t. \]

45 Sources of Structure: Uncertainty
Block-angular structure: (sparsity diagram of repeated W blocks with linking columns.)
\[ T_i x_1 + W_i y_i = b_i, \qquad T_{l_t} x_{a(l_t)} + W_{l_t} x_{l_t} = b_{l_t}. \]

46 Sources of Structure
Common resource constraint
\[ \sum_{i=1}^{k} B_i x_i = b. \]
Dantzig-Wolfe structure: (sparsity diagram of diagonal A blocks with a linking row of B blocks.)

47 Sources of Structure
Other types of near-separability: row and column bordered block-diagonal structure. (Sparsity diagram of diagonal A blocks with a border row of B blocks, a border column of C blocks, and a corner block D.)

48 Sources of Structure
(Low) rank-corrector:
\[ A + V V^T = C, \]
and networks, ODE- or PDE-discretizations, etc.

49 Example Applications
- financial planning problems (nonlinear risk measures)
- machine learning (nonlinear kernels in SVMs)

50 Financial Planning Problems (ALM)
A set of assets J = {1..J} is given (bonds, stock, real estate). At every stage t = 1, ..., T we can buy or sell the different assets; the return of asset j at stage t is uncertain.
Investment decisions: what to buy or sell, and at which time stage.
Objectives: maximize the final wealth, minimize the associated risk.
Mean-variance formulation: \(\max\ \mathbb{E}(X) - \rho\,\mathrm{Var}(X)\).
Stochastic program: formulate the deterministic equivalent; a standard QP, but huge. Extensions: nonlinear risk measures (log utility, skewness).

51 ALM: Largest Problem Attempted
Optimization of 21 assets (stock market indices) over 7 time stages, using multistage stochastic programming. Scenario tree geometry: a first level of 128 nodes, branching further by 30 and 30 at subsequent levels (the scenario and per-node variable counts were lost in transcription). The scenario tree was generated using geometric Brownian motion: about 1 billion variables and 353 million constraints.

52 Sparsity of Linear Algebra
8127 columns for the Schur complement: prohibitively expensive. One needs a facility to exploit the nested structure, and care that the Schur complement calculations stay sparse on the second level.

53 Results (ALM: Mean-Variance QP formulation)

Prob  Stgs  Asts  Scen  Rows   Cols    iter  time  procs  machine
ALM   ...   ...   ...   64M    154M    ...   ...   ...    BlueGene
ALM   ...   ...   ...   96M    269M    ...   ...   ...    BlueGene
ALM   ...   ...   ...   180M   500M    ...   ...   ...    BlueGene
ALM   ...   ...   ...   353M   1,011M  ...   ...   ...    HPCx

(entries marked ... were lost in transcription)

The problem with 353 million constraints and 1 billion variables was solved in 50 minutes using 1280 processors. Equation systems of dimension over a billion were solved with the direct (implicit) factorization. One IPM iteration takes less than a minute.

54 Support Vector Machines
Formulated as the (dual) quadratic program:
\[ \min_y\ -e^T y + \tfrac12 y^T K y, \quad \text{s.t.}\ d^T y = 0,\ 0 \le y \le \lambda e. \]
(Ferris & Munson, SIAM J. on Optimization 13 (2003).)
Kernel function: \(K(x, z) = \langle \phi(x), \phi(z) \rangle\), where \(\phi\) is a (nonlinear) mapping from the input space X to the feature space F.
Matrix K: \(K_{ij} = K(x_i, x_j)\).
- Linear kernel: \(K(x, z) = x^T z\).
- Polynomial kernel: \(K(x, z) = (x^T z + 1)^d\).
- Gaussian kernel: \(K(x, z) = e^{-\gamma \|x - z\|^2}\).
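A small numpy sketch of the three kernels acting on row-wise sample matrices; note that even for sparse inputs the resulting matrix K is dense:

```python
# The three kernels of the slide, vectorized over sample matrices.
import numpy as np

def linear_kernel(X, Z):
    return X @ Z.T                              # K(x,z) = x^T z

def polynomial_kernel(X, Z, d=3):
    return (X @ Z.T + 1.0) ** d                 # K(x,z) = (x^T z + 1)^d

def gaussian_kernel(X, Z, gamma=0.5):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # ||x_i - z_j||^2
    return np.exp(-gamma * sq)                  # K(x,z) = exp(-gamma ||x-z||^2)

X = np.random.default_rng(3).standard_normal((4, 2))
K = gaussian_kernel(X, X)    # K_ij = K(x_i, x_j); dense even for sparse data
```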

55 SVMs with Nonlinear Kernels
K is very large and dense! Approximate: \(K \approx L L^T\) or \(K \approx L L^T + D\).
Introduce \(v = L^T y\) and get a separable QP:
\[ \min_y\ -e^T y + \tfrac12 \bigl(v^T v + y^T D y\bigr), \quad \text{s.t.}\ d^T y = 0,\ v - L^T y = 0,\ 0 \le y \le \lambda e. \]
(Diagram: the dense Hessian \(K = L L^T\) is replaced by a structured augmented system.)
Structure can be exploited in the linear algebra of the IPM.
Kristian Woodsend: PhD Thesis, Edinburgh.
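A sketch of the low-rank idea, using a truncated eigendecomposition as an illustrative stand-in for the pivoted partial Cholesky used in practice (which never forms the full dense K):

```python
# Low-rank approximation K ~ L L^T of a dense kernel matrix.
import numpy as np

def low_rank_factor(K, r):
    w, V = np.linalg.eigh(K)                   # K is symmetric PSD
    idx = np.argsort(w)[::-1][:r]              # keep the r largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                          # dense Gaussian kernel matrix
L = low_rank_factor(K, r=20)                   # tall-thin factor: 200 x 20
print(np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))   # relative error
```

The tall, thin L then enters only through the linear constraint \(v - L^T y = 0\), so the Hessian of the reformulated QP stays diagonal and the IPM linear algebra stays cheap.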

56 References
Gondzio and Sarkissian, Parallel interior point solver for structured linear programs, Mathematical Programming 96 (2003).
Gondzio and Grothey, Reoptimization with the primal-dual IPM, SIAM J. on Optimization 13 (2003).
Gondzio and Grothey, Parallel IPM solver for structured QPs: application to financial planning problems, Annals of Operations Research 152 (2007).
Woodsend and Gondzio, Hybrid MPI/OpenMP parallel linear support vector machine training, J. of Machine Learning Research 10 (2009).
Papers available from http://www.maths.ed.ac.uk/~gondzio
OOPS: Object-Oriented Parallel Solver

57 Conclusions
Interior point methods are well-suited to large scale optimization. Direct methods are well-suited to structure exploitation. Use IPMs in your research!

58 Implementation of IPMs
Andersen, Gondzio, Mészáros and Xu, Implementation of IPMs for large scale LP, in: Interior Point Methods in Mathematical Programming, T. Terlaky (ed.), Kluwer Academic Publishers, 1996.
Altman and Gondzio, Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization, Optimization Methods and Software (1999).
Recent survey on IPMs (easy reading): Gondzio, Interior point methods 25 years later, European J. of Operational Research 218 (2012).
