Parallel implementation of primal-dual interior-point methods for semidefinite programs


1  Parallel implementation of primal-dual interior-point methods for semidefinite programs
Masakazu Kojima, Kazuhide Nakata, Katsuki Fujisawa and Makoto Yamashita
3rd Annual McMaster Optimization Conference: Theory and Applications (MOPTA 03), July 30 - August 1, 2003, Hamilton, Ontario

2
1. SDP (semidefinite program) and its dual
2. Existing numerical methods for SDPs
3. Outline of our parallel implementation, SDPARA and SDPARA-C
4. Computation of search directions in SDPARA
5. Numerical results on SDPARA
6. Positive definite matrix completion used in SDPARA-C
7. Numerical results on SDPARA-C
8. Conclusions


4  SDP (semidefinite program) and its dual:
P: minimize A_0 • X subject to A_p • X = b_p (1 ≤ p ≤ m), X ⪰ O;
D: maximize Σ_{p=1}^m b_p y_p subject to Σ_{p=1}^m y_p A_p + S = A_0, S ⪰ O.
X, S ∈ S^n, y_p ∈ R (1 ≤ p ≤ m): variables; A_0, A_p ∈ S^n, b_p ∈ R (1 ≤ p ≤ m): given data.
S^n: the set of n × n symmetric matrices; U • V = Σ_{i=1}^n Σ_{j=1}^n U_ij V_ij for every U, V ∈ R^{n×n}.
X ⪰ O ⇔ X ∈ S^n is positive semidefinite; X ≻ O ⇔ X ∈ S^n is positive definite.
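The inner product U • V and the positive semidefiniteness test above can be sketched in a few lines; the data matrices and the feasible point below are hypothetical values chosen only to illustrate the definitions:

```python
import numpy as np

def inner(U, V):
    """Matrix inner product U • V = sum_ij U_ij * V_ij."""
    return float(np.sum(U * V))

def is_psd(X, tol=1e-10):
    """X ⪰ O: all eigenvalues of the symmetric matrix X are >= -tol."""
    return bool(np.min(np.linalg.eigvalsh(X)) >= -tol)

# Tiny SDP data (n = 2, m = 1): objective A0 • X, constraint A1 • X = b1.
A0 = np.array([[2.0, 1.0], [1.0, 2.0]])
A1 = np.eye(2)
b1 = 2.0

X = np.eye(2)                              # feasible: A1 • X = trace(X) = 2, X ≻ O
assert abs(inner(A1, X) - b1) < 1e-12
assert is_psd(X)
```

The same `is_psd` test (applied to X and to S) is exactly the membership condition X ⪰ O, S ⪰ O in P and D.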

5  Important features of SDPs:
- n × n matrix variables X, S ∈ S^n, each of which involves n(n + 1)/2 real variables; for example, n = 2000 → n(n + 1)/2 ≈ 2 million.
- m linear equality constraints in P, where m can be as large as n(n + 1)/2.
→ SDPs can easily become large scale. Solving large-scale SDPs (accurately) is a challenging subject:
- special techniques for exploiting structure and sparsity;
- enormous computational power → parallel computation.


8  Some existing numerical methods for SDPs:
IPMs (interior-point methods):
- Primal-dual scaling: CSDP (Borchers), SDPA (Fujisawa-Kojima-Nakata), SDPT3 (Todd-Toh-Tutuncu), SeDuMi (J. F. Sturm).
- Dual scaling: DSDP (Benson-Ye).
Nonlinear programming approaches:
- Spectral bundle method (Helmberg-Kiwiel).
- Gradient-based log-barrier method (Burer-Monteiro-Zhang).
- PENNON (Kocvara): generalized augmented Lagrangian method.
These methods compete with each other:
- Large-scale SDPs (e.g., n = 10,000) and low accuracy → spectral bundle, gradient-based log-barrier, or IPMs using CG.
- Medium-scale SDPs (e.g., n, m ≈ 1,000) and high accuracy → IPMs.
Parallel implementations exist for SDPA, DSDP and the spectral bundle method.


10  Advantages of primal-dual IPMs (interior-point methods):
- Based on deep and strong theory (Nesterov-Nemirovskii).
- Highly accurate solutions (cf. spectral bundle and gradient-based methods).
- The number of iterations is small in practice and essentially independent of the size of the SDP.
Disadvantage of primal-dual IPMs:
- Heavy computation in each iteration.
→ Parallel execution of the heavy computation in each iteration.


13  Generic primal-dual IPM on a single CPU (SDPA):
Step 0: Choose (X, y, S) = (X^0, y^0, S^0) with X^0 ≻ O and S^0 ≻ O.
Step 1: Compute a search direction (dX, dy, dS).
Step 2: Choose step lengths α_p and α_d such that X + α_p dX ≻ O and S + α_d dS ≻ O; let X = X + α_p dX and (y, S) = (y, S) + α_d (dy, dS).
Step 3: Go to Step 1.
Outline of our parallel implementation:
- MPI (Message Passing Interface) for communication between CPUs.
- Myrinet-2000 between nodes, with a higher transmission rate than Gigabit Ethernet.
- ScaLAPACK (Scalable Linear Algebra PACKage) for the parallel Cholesky factorization.
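Step 2 above keeps the iterates inside the cone. One standard way to compute the largest admissible step (not necessarily the exact rule SDPA uses) works through a Cholesky factor of X: with X = L Lᵀ, X + α dX ≻ O iff I + α L⁻¹ dX L⁻ᵀ ≻ O, so the boundary step is −1/λ_min of the scaled direction. The damping factor 0.9 below is an illustrative assumption:

```python
import numpy as np

def max_step(X, dX, gamma=0.9):
    """Damped step alpha in (0, 1] keeping X + alpha * dX ≻ O."""
    L = np.linalg.cholesky(X)
    Li = np.linalg.inv(L)
    lam_min = np.min(np.linalg.eigvalsh(Li @ dX @ Li.T))
    if lam_min >= 0:
        return 1.0                     # direction never leaves the cone: full step
    return min(1.0, gamma * (-1.0 / lam_min))

X = np.eye(2)
dX = np.array([[-2.0, 0.0], [0.0, 1.0]])  # boundary reached at alpha = 0.5
alpha = max_step(X, dX)                    # 0.9 * 0.5 = 0.45
assert np.min(np.linalg.eigvalsh(X + alpha * dX)) > 0
```

The same routine, applied to (S, dS), gives α_d.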

14  Parallelizing the generic primal-dual IPM above: parallel SDPA ⇒ SDPARA. The two heavy steps parallelized are the computation of the Schur complement matrix B and the Cholesky factorization of B; this suits problems with large m (up to about 30,000) but small n (about 1,000). Adding the positive definite matrix completion technique gives SDPARA-C, for larger n.



18  Computing the HRVW/KSH/M search direction in SDPARA:
(X, y, S) with X ≻ O and S ≻ O: the current point; (dX, dy, dS): the HRVW/KSH/M search direction.
B dy = r, where B ∈ S^m_++ and r ∈ R^m are computed from (X, y, S), A_p and b_p;
dS = D − Σ_{j=1}^m dy_j A_j;  dX̃ = (K − X dS) S⁻¹,  dX = (dX̃ + dX̃ᵀ)/2,
where D ∈ S^n and K ∈ R^{n×n} are computed from (X, y, S), A_p and b_p.
The computation of B requires O(m²n²) arithmetic operations, and the solution of B dy = r requires O(m³) arithmetic operations → SDPARA.
The computation of dX̃ = (K − X dS) S⁻¹ and dX = (dX̃ + dX̃ᵀ)/2 is expensive when n is large → SDPARA-C.


20  B dy = r, where B ∈ S^m_++ and B_pq = Trace(S⁻¹ A_p X A_q) (p, q = 1, …, m).
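The formula for B can be sketched directly. This is a dense, purely illustrative evaluation (the data matrices are hypothetical); note that the factor S⁻¹ A_p X is shared by the whole pth row, which is what SDPARA exploits when it assigns rows to CPUs:

```python
import numpy as np

def schur_complement_matrix(A, X, S):
    """B_pq = Trace(S^{-1} A_p X A_q), p, q = 1, ..., m (dense sketch)."""
    Sinv = np.linalg.inv(S)
    m = len(A)
    B = np.empty((m, m))
    for p in range(m):
        T = Sinv @ A[p] @ X          # common factor for the entire p-th row
        for q in range(m):
            B[p, q] = np.trace(T @ A[q])
    return B

A = [np.eye(3), np.diag([1.0, 2.0, 3.0])]   # m = 2 hypothetical data matrices
X = 2.0 * np.eye(3)
S = 0.5 * np.eye(3)
B = schur_complement_matrix(A, X, S)
assert np.allclose(B, B.T)                   # B is symmetric ...
assert np.all(np.linalg.eigvalsh(B) > 0)     # ... and positive definite
```

The two nested loops over p and q make the O(m²) structure of the cost visible; each entry additionally costs matrix products in n.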

21  In SDPARA:
- Compute each row of B on one CPU; the factor S⁻¹ A_p X is common to all elements of the pth row.
- Redistribute the elements of B in a 2-dimensional block-cyclic distribution.
- Apply the parallel Cholesky factorization to B to solve B dy = r.
(Figure: the rows of a 9 × 9 matrix B are computed cyclically by 4 CPUs — processor 1 gets rows 1, 5, 9; processor 2 gets rows 2, 6; and so on — and then redistributed in the 2-dimensional block-cyclic distribution.)
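The 2-dimensional block-cyclic layout, the distribution ScaLAPACK works on, can be sketched as a small index-mapping function (block size and grid shape below are illustrative choices, not the values SDPARA uses):

```python
def owner(i, j, nb, P_r, P_c):
    """Processor coordinates owning entry (i, j) of a matrix stored in a
    2-dimensional block-cyclic layout: nb x nb blocks are dealt out
    cyclically over a P_r x P_c processor grid."""
    return ((i // nb) % P_r, (j // nb) % P_c)

# 9x9 matrix, 2x2 blocks, 2x2 processor grid: neighbouring blocks land on
# different processors, so the Cholesky workload is spread evenly.
grid = [[owner(i, j, 2, 2, 2) for j in range(9)] for i in range(9)]
assert grid[0][0] == (0, 0) and grid[0][2] == (0, 1)
assert grid[2][0] == (1, 0) and grid[2][2] == (1, 1)
```

Cycling blocks rather than assigning contiguous slabs is what keeps all processors busy as the Cholesky factorization eliminates leading rows and columns.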


23  Numerical results on SDPARA:
- PC cluster: 1.6 GHz CPU and 768 MB memory in each node; Myrinet-2000 communication between the nodes.
- All data A_p, b_p are distributed to every node. Iterates {(X^k, y^k, S^k)} are stored and updated in each node.
- Some heavy computations are done in parallel and their results are distributed to all nodes; all other computations are done individually and independently in each node.
- Primal and dual feasibilities and relative duality gaps: 1.0e-6 to 1.0e-7.


25  Numerical results on SDPARA — test problems:
Problems with large m but small A_p (diagonal blocks):
- control11 (SDPLIB): m = 1596, block sizes (110, 55).
- theta6 (SDPLIB): m = 4375, n = 300.
- thetag51 (SDPLIB).
- SDP from quantum chemistry 1: m = 15,313, block sizes (120, 120, 256).
- SDP from quantum chemistry 2: m = 24,503, block sizes (153, 153, 324).
Problems with large A_p:
- maxg51 (SDPLIB): m = 1000, n = 1000.
- qpg11 (SDPLIB).
- qpg51 (SDPLIB).
- torusg3-15 (DIMACS): m = 3,375, n = 3,375.
- norm min.

26  Numerical results on SDPARA: computation time in seconds on 1, 4, 16 and 64 CPUs (M = lack of memory).
Problems with large m: control11 (m = 1596, blocks (110, 55)), theta6, thetag51, SDP from quantum chemistry 1 (m = 15,313, blocks (120, 120, 256)) and SDP from quantum chemistry 2 (m = 24,503, blocks (153, 153, 324)); the larger problems run out of memory on small numbers of CPUs.
Problems with large A_p: maxg51, qpg11, qpg51, torusg3-15 (m = 3,375, n = 3,375) and norm min; qpg51, torusg3-15 and norm min run out of memory on every number of CPUs.



30  Weak point of primal-dual interior-point methods:
The primal variable X is dense in general, even when all the A_p are sparse; the dual variable S = A_0 − Σ_{p=1}^m y_p A_p inherits the sparsity of the A_p.
This difference causes a critical disadvantage of primal-dual interior-point methods compared to dual interior-point methods for large-scale SDPs.
To overcome this disadvantage, we exploit the aggregate sparsity pattern over all the A_p,
E = {(i, j) : [A_p]_ij ≠ 0 for some p},
based on some fundamental results about positive definite matrix completion (Fukuda-Kojima-Murota-Nakata).

31  Example: m = 2, n = 4.
min C • X subject to A_1 • X = 6, A_2 • X = 5, X = (X_ij) ∈ S^4, X ⪰ O. Recall C • X = Σ_{i,j} C_ij X_ij.
E: the aggregate sparsity pattern over all the data matrices (the entries marked in red on the slide).
The entries X_ij with (i, j) ∉ E are unnecessary for evaluating the objective function and the equality constraints, but they are necessary for X ⪰ O.
Using positive definite matrix completion, we can generate each iterate (X, y, S) such that both X⁻¹ and S have the same sparsity pattern as E when E is nicely sparse, as in this example. In general, we need to expand E to a nicely sparse Ē.
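The aggregate sparsity pattern E is just the union of the nonzero patterns of the data matrices. A minimal sketch, with hypothetical data matrices standing in for the red-entry example on the slide:

```python
import numpy as np

def aggregate_pattern(mats):
    """E = {(i, j) : [A_p]_ij != 0 for some p}."""
    E = set()
    for A in mats:
        I, J = np.nonzero(A)
        E.update(zip(I.tolist(), J.tolist()))
    return E

# Two sparse 4x4 data matrices sharing an "arrow" pattern (illustrative values).
A1 = np.diag([1.0, 2.0, 3.0, 4.0]); A1[0, 3] = A1[3, 0] = 1.0
A2 = np.zeros((4, 4)); A2[1, 3] = A2[3, 1] = 5.0; A2[3, 3] = 2.0
E = aggregate_pattern([A1, A2])
assert (0, 3) in E and (1, 3) in E and (0, 1) not in E
```

Only the entries X_ij with (i, j) ∈ E ever appear in the objective or the equality constraints; the rest matter only through X ⪰ O, which is where matrix completion enters.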


33  Technical details of the positive definite matrix completion:
E = {(i, j) : [A_p]_ij ≠ 0 for some p} (the aggregate sparsity pattern).
In each iteration of the IPM we need to
(I) compute a step length α_p such that X + α_p dX ≻ O;
(II) compute a search direction (dX, dy, dS).
By applying the positive definite matrix completion technique, we can execute (I) and (II) using only X_ij, (i, j) ∈ Ē, for some nice and small Ē ⊇ E (a chordal extension of E).
Positive definite matrix completion problem: given values X̄_ij = X_ij for (i, j) ∈ Ē, assign values to the other entries X_ij, (i, j) ∉ Ē, so that X ≻ O.

34  (I) Computation of a step length α_p — example (m = 2, n = 4):
min C • X subject to A_1 • X = 6, A_2 • X = 5, X = (X_ij) ∈ S^4, X ⪰ O, with C • X = Σ_{i,j} C_ij X_ij.
Given X_ij, (i, j) ∈ E, there exist X_ij, (i, j) ∉ E, such that X ≻ O iff
[ X_ii  X_i4 ; X_4i  X_44 ] ≻ O for i = 1, 2, 3
(every maximal principal submatrix with entries in E is positive definite).
Given X_ij and dX_ij, (i, j) ∈ E, a step length α_p is determined such that
[ X_ii  X_i4 ; X_4i  X_44 ] + α_p [ dX_ii  dX_i4 ; dX_4i  dX_44 ] ≻ O for i = 1, 2, 3.
The entries X_ij, dX_ij with (i, j) ∉ E are unnecessary!
We can generalize this method to any aggregate sparsity pattern E characterized as a chordal graph.

35  A 7 × 7 example with the aggregate sparsity pattern E = {(i, j) : A_ij ≠ 0}, where A is a 7 × 7 symmetric matrix (⋆: a nonzero element); E is represented by a graph on the vertices 1, …, 7.
Let Ē be a chordal extension of E, i.e., Ē ⊇ E such that every minimal cycle in the graph of Ē has no more than 3 edges; equivalently, take the aggregate sparsity pattern Ē induced by the symbolic Cholesky factorization of A.
Here Ē = E ∪ {(4, 5)} is a chordal extension of E.
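The chordal extension via symbolic Cholesky factorization can be sketched as graph elimination: eliminating vertex k connects all of its higher-numbered neighbours, and the added edges are the fill-in Ē \ E. The edge list below is reconstructed (0-based) so as to be consistent with the slide's example, whose only fill-in edge is (4, 5) in 1-based numbering:

```python
def symbolic_cholesky_fill(n, edges):
    """Fill-in produced by symbolic Cholesky factorization in the
    natural order 0..n-1; E ∪ fill is the chordal extension E-bar."""
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j); adj[j].add(i)
    fill = set()
    for k in range(n):
        higher = sorted(v for v in adj[k] if v > k)
        for a in range(len(higher)):
            for b in range(a + 1, len(higher)):
                u, v = higher[a], higher[b]
                if v not in adj[u]:          # new edge: fill-in
                    adj[u].add(v); adj[v].add(u)
                    fill.add((u, v))
    return fill

# 7-vertex graph (0-based); vertices 3, 4 correspond to 4, 5 on the slide.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 6), (4, 6), (5, 6)]
assert symbolic_cholesky_fill(7, edges) == {(3, 4)}
```

The quality of the extension depends on the elimination order; real implementations choose a fill-reducing ordering first.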


37  Ē: the chordal extension above.
Theorem (Grone-Johnson-Sá-Wolkowicz, 1984): given X̄_ij = X_ij, (i, j) ∈ Ē, there exist X_ij, (i, j) ∉ Ē, such that X ≻ O iff X̄_CC ≻ O for every maximal clique C of Ē.
In the example above, the maximal cliques are C_1 = {1, 2, 3}, C_2 = {3, 4, 5}, C_3 = {4, 5, 7}, C_4 = {6, 7}.
Hence α_p is chosen such that X̄_CC + α_p dX̄_CC ≻ O for every maximal clique C of Ē, instead of X + α_p dX ≻ O. This requires less CPU time and memory: we need only X̄_ij = X_ij and dX̄_ij = dX_ij for (i, j) ∈ Ē.
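The clique-wise step-length rule can be sketched by applying the per-block eigenvalue bound to each maximal clique; the iterate, the direction and the 0.9 damping below are illustrative assumptions, and the cliques are those of the 7 × 7 example (shifted to 0-based indices):

```python
import numpy as np

def clique_max_step(X, dX, cliques, gamma=0.9):
    """Damped step alpha with X_CC + alpha * dX_CC ≻ O for every maximal
    clique C; only entries inside the cliques are ever touched."""
    alpha = 1.0
    for C in cliques:
        idx = np.ix_(C, C)
        L = np.linalg.cholesky(X[idx])
        Li = np.linalg.inv(L)
        lam = np.min(np.linalg.eigvalsh(Li @ dX[idx] @ Li.T))
        if lam < 0:
            alpha = min(alpha, gamma * (-1.0 / lam))
    return alpha

cliques = [[0, 1, 2], [2, 3, 4], [3, 4, 6], [5, 6]]   # slide's cliques, 0-based
X = np.eye(7)
dX = np.zeros((7, 7)); dX[0, 0] = -4.0; dX[5, 5] = -1.0
alpha = clique_max_step(X, dX, cliques)               # binding clique: {0, 1, 2}
assert abs(alpha - 0.9 * 0.25) < 1e-12
```

Each Cholesky and eigenvalue computation is on a |C| × |C| block instead of the full n × n matrix, which is where the CPU-time and memory savings come from.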

38  (II) Computation of a search direction (dX, dy, dS).
Let X̂ ∈ S^n_++ be the matrix that maximizes the determinant among all positive definite completions of a given partial matrix with values X̄_ij, (i, j) ∈ Ē. Then X̂⁻¹ and the Cholesky factorization L Lᵀ of X̂⁻¹, which we can compute easily, enjoy the same sparsity pattern as Ē.
Store the Cholesky factorization L Lᵀ of X̂⁻¹ instead of X̂ itself, and the Cholesky factorization N Nᵀ of S. Then all the A_p, N and L enjoy the same sparsity pattern as Ē.
Use L⁻ᵀ L⁻¹ and N Nᵀ instead of X̂ and S, respectively. In particular,
B_pq = Trace(A_p X̂ A_q S⁻¹) = Trace(A_p L⁻ᵀ L⁻¹ A_q N⁻ᵀ N⁻¹)
for computing the m × m coefficient matrix B of the Schur complement equation B dy = r for (dX, dy, dS).
→ Considerable savings in memory; suitable for parallel computation to reduce computation time.
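The identity Trace(A_p X̂ A_q S⁻¹) = Trace(A_p L⁻ᵀ L⁻¹ A_q N⁻ᵀ N⁻¹) follows from X̂ = L⁻ᵀ L⁻¹ (since X̂⁻¹ = L Lᵀ) and S⁻¹ = N⁻ᵀ N⁻¹, and can be checked numerically; the random SPD matrices below are stand-ins for X̂⁻¹ and S, not actual iterates:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
# Random well-conditioned SPD stand-ins for Xhat^{-1} and S.
G = rng.standard_normal((n, n)); Xhat_inv = G @ G.T + n * np.eye(n)
H = rng.standard_normal((n, n)); S = H @ H.T + n * np.eye(n)
L = np.linalg.cholesky(Xhat_inv)       # Xhat^{-1} = L Lᵀ, so Xhat = L^{-T} L^{-1}
N = np.linalg.cholesky(S)              # S = N Nᵀ

Ap = np.diag(rng.standard_normal(n))   # sparse data matrices (here diagonal)
Aq = np.diag(rng.standard_normal(n))

Xhat, Sinv = np.linalg.inv(Xhat_inv), np.linalg.inv(S)
lhs = np.trace(Ap @ Xhat @ Aq @ Sinv)                        # dense form
Li, Ni = np.linalg.inv(L), np.linalg.inv(N)
rhs = np.trace(Ap @ Li.T @ Li @ Aq @ Ni.T @ Ni)              # factored form
assert abs(lhs - rhs) < 1e-8
```

The point of the factored form is that L and N keep the sparsity pattern Ē, so B can be assembled without ever forming the dense X̂.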



42  Numerical results on SDPARA-C (computation time in seconds on 1, 4, 16 and 64 CPUs; M = lack of memory).
Problems with large m: theta6, thetag51.
Problems with large A_p: maxg51, qpg11, qpg51, torusg3-15 (m = 3,375, n = 3,375) and norm min — all of which SDPARA-C solves.

43  Comparison of real computation time (sec) among SDPARA-C, SDPARA and PDSDP (Benson) for 1 to 64 CPUs:
- control10 (SDPLIB): m = 1326, block sizes (100, 50).
- theta6 (SDPLIB): m = 4375, n = 300.

44  Comparison, continued:
- maxg51 (SDPLIB, max cut): m = 1000, n = 1000.
- qpg51 (SDPLIB): m = 1000 — SDPARA runs out of memory (M, about 970 MB) on every number of CPUs, while SDPARA-C solves the problem.
- torusg3-15 (DIMACS, max cut): m = 3,375, n = 3,375 — SDPARA again runs out of memory (M, about 920 MB), while SDPARA-C solves the problem.


46  Conclusions. Two types of parallel primal-dual interior-point methods for SDPs:
(a) SDPARA, a parallel implementation of SDPA, suitable for large m and smaller n. The largest problem solved: the SDP from quantum chemistry 2, m = 24,503, block sizes (153, 153, 324); it runs out of memory on small numbers of CPUs but is solved on larger numbers of CPUs.
(b) SDPARA-C = SDPARA + the positive definite matrix completion technique, suitable for larger n. The largest problems solved: torusg3-15 (m = 3,375, n = 3,375) and norm min.


More information

Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming*

Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming* Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming* Notes for CAAM 564, Spring 2012 Dept of CAAM, Rice University Outline (1) Introduction (2) Formulation & a complexity

More information

Consider the following example of a linear system:

Consider the following example of a linear system: LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x + 2x 2 + 3x 3 = 5 x + x 3 = 3 3x + x 2 + 3x 3 = 3 x =, x 2 = 0, x 3 = 2 In general we want to solve n equations

More information

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin Sensitivity Analysis in LP and SDP Using Interior-Point Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity Hayato Waki,

More information

Second Order Cone Programming Relaxation of Positive Semidefinite Constraint

Second Order Cone Programming Relaxation of Positive Semidefinite Constraint Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

Second Order Cone Programming Relaxation of Nonconvex Quadratic Optimization Problems

Second Order Cone Programming Relaxation of Nonconvex Quadratic Optimization Problems Second Order Cone Programming Relaxation of Nonconvex Quadratic Optimization Problems Sunyoung Kim Department of Mathematics, Ewha Women s University 11-1 Dahyun-dong, Sudaemoon-gu, Seoul 120-750 Korea

More information

A robust multilevel approximate inverse preconditioner for symmetric positive definite matrices

A robust multilevel approximate inverse preconditioner for symmetric positive definite matrices DICEA DEPARTMENT OF CIVIL, ENVIRONMENTAL AND ARCHITECTURAL ENGINEERING PhD SCHOOL CIVIL AND ENVIRONMENTAL ENGINEERING SCIENCES XXX CYCLE A robust multilevel approximate inverse preconditioner for symmetric

More information

Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming*

Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming* Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming* Yin Zhang Dept of CAAM, Rice University Outline (1) Introduction (2) Formulation & a complexity theorem (3)

More information

Exploiting Structure in SDPs with Chordal Sparsity

Exploiting Structure in SDPs with Chordal Sparsity Exploitig Structure i SDPs with Chordal Sparsity Atois Papachristodoulou Departmet of Egieerig Sciece, Uiversity of Oxford Joit work with Yag Zheg, Giovai Fatuzzi, Paul Goulart ad Adrew Wy CDC 06 Pre-coferece

More information

Research Reports on. Mathematical and Computing Sciences. Department of. Mathematical and Computing Sciences. Tokyo Institute of Technology

Research Reports on. Mathematical and Computing Sciences. Department of. Mathematical and Computing Sciences. Tokyo Institute of Technology Department of Mathematical and Computing Sciences Tokyo Institute of Technology SERIES B: Operations Research ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity

More information

A Julia JuMP-based module for polynomial optimization with complex variables applied to ACOPF

A Julia JuMP-based module for polynomial optimization with complex variables applied to ACOPF JuMP-dev workshop 018 A Julia JuMP-based module for polynomial optimization with complex variables applied to ACOPF Gilles Bareilles, Manuel Ruiz, Julie Sliwak 1. MathProgComplex.jl: A toolbox for Polynomial

More information

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Journal of the Operations Research Society of Japan 2003, Vol. 46, No. 2, 164-177 2003 The Operations Research Society of Japan A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Masakazu

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 433 (2010) 1101 1109 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Minimal condition number

More information

Incomplete Cholesky preconditioners that exploit the low-rank property

Incomplete Cholesky preconditioners that exploit the low-rank property anapov@ulb.ac.be ; http://homepages.ulb.ac.be/ anapov/ 1 / 35 Incomplete Cholesky preconditioners that exploit the low-rank property (theory and practice) Artem Napov Service de Métrologie Nucléaire, Université

More information

A LINEAR PROGRAMMING APPROACH TO SEMIDEFINITE PROGRAMMING PROBLEMS

A LINEAR PROGRAMMING APPROACH TO SEMIDEFINITE PROGRAMMING PROBLEMS A LINEAR PROGRAMMING APPROACH TO SEMIDEFINITE PROGRAMMING PROBLEMS KARTIK KRISHNAN AND JOHN E. MITCHELL Abstract. Until recently, the study of interior point methods has doated algorithmic research in

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on Multi-Agent

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

PENNON A Generalized Augmented Lagrangian Method for Nonconvex NLP and SDP p.1/22

PENNON A Generalized Augmented Lagrangian Method for Nonconvex NLP and SDP p.1/22 PENNON A Generalized Augmented Lagrangian Method for Nonconvex NLP and SDP Michal Kočvara Institute of Information Theory and Automation Academy of Sciences of the Czech Republic and Czech Technical University

More information

PENNON: Software for linear and nonlinear matrix inequalities

PENNON: Software for linear and nonlinear matrix inequalities PENNON: Software for linear and nonlinear matrix inequalities Michal Kočvara 1 and Michael Stingl 2 1 School of Mathematics, University of Birmingham, Birmingham B15 2TT, UK and Institute of Information

More information

Minimum cost transportation problem

Minimum cost transportation problem Minimum cost transportation problem Complements of Operations Research Giovanni Righini Università degli Studi di Milano Definitions The minimum cost transportation problem is a special case of the minimum

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

The Graph Realization Problem

The Graph Realization Problem The Graph Realization Problem via Semi-Definite Programming A. Y. Alfakih alfakih@uwindsor.ca Mathematics and Statistics University of Windsor The Graph Realization Problem p.1/21 The Graph Realization

More information

Contents. Preface... xi. Introduction...

Contents. Preface... xi. Introduction... Contents Preface... xi Introduction... xv Chapter 1. Computer Architectures... 1 1.1. Different types of parallelism... 1 1.1.1. Overlap, concurrency and parallelism... 1 1.1.2. Temporal and spatial parallelism

More information

Further Relaxations of the SDP Approach to Sensor Network Localization

Further Relaxations of the SDP Approach to Sensor Network Localization Further Relaxations of the SDP Approach to Sensor Network Localization Zizhuo Wang, Song Zheng, Stephen Boyd, and inyu e September 27, 2006 Abstract Recently, a semidefinite programming (SDP) relaxation

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

Sparse Linear Programming via Primal and Dual Augmented Coordinate Descent

Sparse Linear Programming via Primal and Dual Augmented Coordinate Descent Sparse Linear Programg via Primal and Dual Augmented Coordinate Descent Presenter: Joint work with Kai Zhong, Cho-Jui Hsieh, Pradeep Ravikumar and Inderjit Dhillon. Sparse Linear Program Given vectors

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Exact SDP Relaxations with Truncated Moment Matrix for Binary Polynomial Optimization Problems

MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Exact SDP Relaxations with Truncated Moment Matrix for Binary Polynomial Optimization Problems MATHEMATICAL ENGINEERING TECHNICAL REPORTS Exact SDP Relaxations with Truncated Moment Matrix for Binary Polynomial Optimization Problems Shinsaku SAKAUE, Akiko TAKEDA, Sunyoung KIM, and Naoki ITO METR

More information

Frist order optimization methods for sparse inverse covariance selection

Frist order optimization methods for sparse inverse covariance selection Frist order optimization methods for sparse inverse covariance selection Katya Scheinberg Lehigh University ISE Department (joint work with D. Goldfarb, Sh. Ma, I. Rish) Introduction l l l l l l The field

More information

Preconditioned Parallel Block Jacobi SVD Algorithm

Preconditioned Parallel Block Jacobi SVD Algorithm Parallel Numerics 5, 15-24 M. Vajteršic, R. Trobec, P. Zinterhof, A. Uhl (Eds.) Chapter 2: Matrix Algebra ISBN 961-633-67-8 Preconditioned Parallel Block Jacobi SVD Algorithm Gabriel Okša 1, Marián Vajteršic

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Chordal Sparsity in Interior-Point Methods for Conic Optimization

Chordal Sparsity in Interior-Point Methods for Conic Optimization University of California Los Angeles Chordal Sparsity in Interior-Point Methods for Conic Optimization A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy

More information

Porting a sphere optimization program from LAPACK to ScaLAPACK

Porting a sphere optimization program from LAPACK to ScaLAPACK Porting a sphere optimization program from LAPACK to ScaLAPACK Mathematical Sciences Institute, Australian National University. For presentation at Computational Techniques and Applications Conference

More information

A SPECIALIZED INTERIOR-POINT ALGORITHM FOR MULTICOMMODITY NETWORK FLOWS

A SPECIALIZED INTERIOR-POINT ALGORITHM FOR MULTICOMMODITY NETWORK FLOWS SIAM J OPTIM Vol 10, No 3, pp 852 877 c 2000 Society for Industrial and Applied Mathematics A SPECIALIZED INTERIOR-POINT ALGORITHM FOR MULTICOMMODITY NETWORK FLOWS JORDI CASTRO Abstract Despite the efficiency

More information

SDPTools: A High Precision SDP Solver in Maple

SDPTools: A High Precision SDP Solver in Maple MM Research Preprints, 66 73 KLMM, AMSS, Academia Sinica Vol. 28, December 2009 SDPTools: A High Precision SDP Solver in Maple Feng Guo Key Laboratory of Mathematics Mechanization Institute of Systems

More information

SEMIDEFINITE PROGRAM BASICS. Contents

SEMIDEFINITE PROGRAM BASICS. Contents SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

A FRAMEWORK FOR SOLVING MIXED-INTEGER SEMIDEFINITE PROGRAMS

A FRAMEWORK FOR SOLVING MIXED-INTEGER SEMIDEFINITE PROGRAMS A FRAMEWORK FOR SOLVING MIXED-INTEGER SEMIDEFINITE PROGRAMS TRISTAN GALLY, MARC E. PFETSCH, AND STEFAN ULBRICH ABSTRACT. Mixed-integer semidefinite programs arise in many applications and several problem-specific

More information

Exploiting hyper-sparsity when computing preconditioners for conjugate gradients in interior point methods

Exploiting hyper-sparsity when computing preconditioners for conjugate gradients in interior point methods Exploiting hyper-sparsity when computing preconditioners for conjugate gradients in interior point methods Julian Hall, Ghussoun Al-Jeiroudi and Jacek Gondzio School of Mathematics University of Edinburgh

More information

Convex and Nonsmooth Optimization: Assignment Set # 5 Spring 2009 Professor: Michael Overton April 23, 2009

Convex and Nonsmooth Optimization: Assignment Set # 5 Spring 2009 Professor: Michael Overton April 23, 2009 Eduardo Corona Convex and Nonsmooth Optimization: Assignment Set # 5 Spring 29 Professor: Michael Overton April 23, 29. So, let p q, and A be a full rank p q matrix. Also, let A UV > be the SVD factorization

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Doubly Nonnegative Relaxations for Quadratic and Polynomial Optimization Problems with Binary and Box Constraints Sunyoung Kim, Masakazu

More information

Large-scale semidefinite programs in electronic structure calculation

Large-scale semidefinite programs in electronic structure calculation Math. Program., Ser. B (2007) 109:553 580 DOI 10.1007/s10107-006-0027-y FULL LENGTH PAPER Large-scale semidefinite programs in electronic structure calculation Mituhiro Fukuda Bastiaan J. Braams Maho Nakata

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program

More information

Sparse Optimization Lecture: Basic Sparse Optimization Models

Sparse Optimization Lecture: Basic Sparse Optimization Models Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

Linear Algebra in IPMs

Linear Algebra in IPMs School of Mathematics H E U N I V E R S I Y O H F E D I N B U R G Linear Algebra in IPMs Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio Paris, January 2018 1 Outline Linear

More information

18. Primal-dual interior-point methods

18. Primal-dual interior-point methods L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Stanford University (Joint work with Persi Diaconis, Arpita Ghosh, Seung-Jean Kim, Sanjay Lall, Pablo Parrilo, Amin Saberi, Jun Sun, Lin

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization /

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization / Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

Parallelization of the Molecular Orbital Program MOS-F

Parallelization of the Molecular Orbital Program MOS-F Parallelization of the Molecular Orbital Program MOS-F Akira Asato, Satoshi Onodera, Yoshie Inada, Elena Akhmatskaya, Ross Nobes, Azuma Matsuura, Atsuya Takahashi November 2003 Fujitsu Laboratories of

More information

Randomized Coordinate Descent Methods on Optimization Problems with Linearly Coupled Constraints

Randomized Coordinate Descent Methods on Optimization Problems with Linearly Coupled Constraints Randomized Coordinate Descent Methods on Optimization Problems with Linearly Coupled Constraints By I. Necoara, Y. Nesterov, and F. Glineur Lijun Xu Optimization Group Meeting November 27, 2012 Outline

More information

Homework 4. Convex Optimization /36-725

Homework 4. Convex Optimization /36-725 Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches

More information

Truncated Newton Method

Truncated Newton Method Truncated Newton Method approximate Newton methods truncated Newton methods truncated Newton interior-point methods EE364b, Stanford University minimize convex f : R n R Newton s method Newton step x nt

More information

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with

More information

Solving the Inverse Toeplitz Eigenproblem Using ScaLAPACK and MPI *

Solving the Inverse Toeplitz Eigenproblem Using ScaLAPACK and MPI * Solving the Inverse Toeplitz Eigenproblem Using ScaLAPACK and MPI * J.M. Badía and A.M. Vidal Dpto. Informática., Univ Jaume I. 07, Castellón, Spain. badia@inf.uji.es Dpto. Sistemas Informáticos y Computación.

More information