Explicit Sensor Network Localization using Semidefinite Programming and Clique Reductions

Explicit Sensor Network Localization using Semidefinite Programming and Clique Reductions

Nathan Krislock and Henry Wolkowicz
Dept. of Combinatorics and Optimization, University of Waterloo
Haifa Matrix Theory Conference, May 18-21, 2009

Outline
1. The Sensor Network Localization (SNL) Problem; Semidefinite Programming (SDP); Facial Structure
2. Single Clique Reduction (Exploit Degeneracy); Two Clique Reduction and EDM Completion; Reduction and Completion: Singular Case
3. Clique Unions and Node Absorptions; Results (low CPU time, high accuracy)

The Sensor Network Localization (SNL) Problem

Wireless Sensor Network:
- n ad hoc wireless sensors (nodes) to locate in R^r (r is the embedding dimension); sensors p_i ∈ R^r, i ∈ V := {1, ..., n}
- m of the sensors are anchors: p_i, i = n − m + 1, ..., n (e.g., located using GPS)
- pairwise distances known within radio range R: D_ij = ||p_i − p_j||^2, ij ∈ E

Current techniques (nearest, weighted, SDP approximation) solve the SDP
    min_{Y ⪰ 0, Y ∈ Ω} ||H ∘ (K(Y) − D)||.
This SDP program is expensive, of low accuracy, and implicitly highly degenerate (cliques restrict the ranks of feasible Y's).

Applications

Tracking humans, animals, and equipment:
- monitoring: natural habitat; earthquakes and volcanoes; weather and ocean currents
- military; tracking of goods; vehicle positions; surveillance (where open-air positioning is not feasible); random deployment in inaccessible terrain or disaster relief operations

Future applications?
- bicycles, car/computer parts, guns (to prevent theft)
- students/teenagers (to prevent class absence)

Graph G = (V, E, ω):
- node set V = {1, ..., n}; edge set E; ω_ij = ||p_i − p_j||^2 known approximately for (i, j) ∈ E
- the anchors form a clique (complete subgraph)
- a realization of G in R^r: a mapping of nodes v_i → p_i ∈ R^r with squared distances given by ω

Corresponding partial Euclidean distance matrix (EDM):
    D_ij = d_ij^2 if (i, j) ∈ E, and 0 otherwise,
where the d_ij^2 = ω_ij are the known squared Euclidean distances between sensors p_i, p_j; the anchors correspond to a clique.
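As a concrete illustration of a partial EDM, here is a minimal sketch (random points and my own variable names, not from the talk) that builds D from positions in R^2, keeping only distances within radio range:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, R = 8, 2, 0.7                  # sensors, embedding dimension, radio range
P = rng.random((n, r))               # one sensor position per row

# Squared pairwise distances, then keep only entries within radio range
full = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=2)
E = full < R ** 2                    # (i, j) is an edge iff within radio range
D = np.where(E, full, 0.0)           # partial EDM: unknown entries stored as 0
```

The zero entries of D off the diagonal mark the missing distances that the completion results below recover.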

Sensor Localization Problem / Partial EDM

[Figure: initial positions of sensors and anchors]
# sensors n = 300, # anchors m = 9, radio range R = 1.2

SDP: Cone and Faces

S^k_+: the cone of positive semidefinite matrices in S^k, with inner product ⟨A, B⟩ = trace AB and the Löwner (psd) partial order A ⪰ B, A ≻ B.

Faces of a cone K:
- F is a face of K, denoted F ⊴ K, if (x, y ∈ K, ½(x + y) ∈ F) ⟹ (cone{x, y} ⊆ F).
- F ◁ K if F ⊴ K and F ≠ K; F is a proper face if {0} ≠ F ◁ K.
- F ⊴ K is exposed if it is the intersection of K with a hyperplane.
- face(S) denotes the smallest face of K that contains the set S.

Facial Structure of the SDP Cone

S^k_+ is a facially exposed cone: all of its faces are exposed.

Faces of S^k_+ correspond to subspaces of R^k: a face F ⊴ S^k_+ is determined by the range of any S ∈ relint F. Let S = UΓU^T be a compact spectral decomposition, with diagonal matrix of eigenvalues Γ ∈ S^t_{++}. Then
    F = U S^t_+ U^T,   dim F = t(t + 1)/2,
i.e., F is associated with R(U).
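This face/subspace correspondence is easy to check numerically. A small sketch (my own construction, assuming a rank-t matrix S in the relative interior of the face):

```python
import numpy as np

rng = np.random.default_rng(1)
k, t = 6, 2
M = rng.standard_normal((k, t))
S = M @ M.T                          # rank-t PSD matrix, S in relint F

# Compact spectral decomposition S = U Γ U^T keeps only positive eigenvalues
w, V = np.linalg.eigh(S)
U = V[:, w > 1e-10]                  # orthonormal basis for R(S)

# Any U Z U^T with Z PSD lies in the face F = U S^t_+ U^T
Z = np.diag(rng.random(t) + 1.0)
Y = U @ Z @ U.T
```

Every matrix of the form Y = U Z U^T stays PSD with range inside R(U), which is exactly what restricting to the face means.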

Matrices with a Fixed Principal Submatrix

For Y ∈ S^n and α ⊆ {1, ..., n}, Y[α] denotes the principal submatrix formed from the rows and columns with indices in α.

Sets with fixed principal submatrices: if |α| = k and Ȳ ∈ S^k, then
    S^n(α, Ȳ) := {Y ∈ S^n : Y[α] = Ȳ},
    S^n_+(α, Ȳ) := {Y ∈ S^n_+ : Y[α] = Ȳ},
i.e., the subset of matrices Y ∈ S^n (resp. Y ∈ S^n_+) with principal submatrix Y[α] fixed to Ȳ.

SNL Connection to SDP (Linear Transformations, Adjoints)

- v = diag(S) ∈ R^n, S = Diag(v) ∈ S^n; diag* = Diag (adjoint)
- for B ∈ S^n, offDiag(B) := B − Diag(diag(B))
- D = K(B) ∈ E^n, B = K†(D) ∈ S^n ∩ S_C

Let P^T = [p_1 p_2 ... p_n] ∈ M^{r×n}, B := PP^T ∈ S^n, and let D ∈ E^n be the corresponding EDM. Then (from S^n)
    K(B) := D_e(B) − 2B := diag(B) e^T + e diag(B)^T − 2B
          = (p_i^T p_i + p_j^T p_j − 2 p_i^T p_j)_{i,j=1}^n
          = (||p_i − p_j||_2^2)_{i,j=1}^n = D   (in E^n).
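The identity K(PP^T) = D can be verified directly in a few lines; a minimal numpy sketch (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 6, 2
P = rng.standard_normal((n, r))
B = P @ P.T                          # Gram matrix of the positions
e = np.ones((n, 1))

def K(B):
    # K(B) = diag(B) e^T + e diag(B)^T - 2B
    d = np.diag(B).reshape(-1, 1)
    return d @ e.T + e @ d.T - 2 * B

D = K(B)
# Entry (i, j) should be ||p_i - p_j||^2
D_direct = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=2)
```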

K : S^n_+ ∩ S_C → E^n

Linear transformations D_v(B), K(B), T(D):
- D_v(B) := diag(B) v^T + v diag(B)^T;  D_v(y) := y v^T + v y^T
- adjoint: K*(D) = 2(Diag(De) − D)
- K is one-to-one and onto between the centered and hollow subspaces:
    S_C := {B ∈ S^n : Be = 0};  S_H := {D ∈ S^n : diag(D) = 0} = R(offDiag)
- J := I − (1/n) e e^T (the orthogonal projection onto M := {e}^⊥);
    T(D) := −(1/2) J offDiag(D) J  (= K†(D))

Properties of the Linear Transformations K, T, Diag, D_e

- R(K) = S_H;  N(K) = R(D_e)
- R(K*) = R(T) = S_C;  N(K*) = N(T) = R(Diag)
- S^n = S_H ⊕ R(Diag) = S_C ⊕ R(D_e)
- T(E^n) = S^n_+ ∩ S_C and K(S^n_+ ∩ S_C) = E^n
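These adjoint and pseudoinverse relations can be sanity-checked numerically under the trace inner product; a sketch with my own helper names:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
e = np.ones((n, 1))
J = np.eye(n) - (e @ e.T) / n        # orthogonal projection onto {e}^perp

def K(B):
    d = np.diag(B).reshape(-1, 1)
    return d @ e.T + e @ d.T - 2 * B

def K_adj(D):
    return 2 * (np.diag((D @ e).ravel()) - D)   # K*(D) = 2(Diag(De) - D)

def offDiag(D):
    return D - np.diag(np.diag(D))

def T(D):
    return -0.5 * J @ offDiag(D) @ J            # T = K^dagger on E^n

B = rng.standard_normal((n, n)); B = B + B.T    # arbitrary symmetric matrix
D = offDiag(rng.standard_normal((n, n))); D = (D + D.T) / 2  # hollow symmetric

lhs = np.trace(K(B) @ D)             # <K(B), D>
rhs = np.trace(B @ K_adj(D))         # <B, K*(D)>

Bc = J @ B @ J                       # centered part of B (Bc e = 0)
```

The adjoint identity ⟨K(B), D⟩ = ⟨B, K*(D)⟩ holds for all B and D, and T inverts K on the centered subspace S_C.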

Single Clique / Facial Reduction

Let D̄ ∈ E^k and α ⊆ 1:n with |α| = k. Define
    E^n(α, D̄) := {D ∈ E^n : D[α] = D̄}.
Given D̄: find a corresponding B ⪰ 0; find the corresponding face; find the corresponding subspace.

Basic Theorem for Single Clique / Facial Reduction

THEOREM 1 (Single Clique Reduction). Let D̄ := D[1:k] ∈ E^k, k < n, with embedding dimension t ≤ r, and let B̄ := K†(D̄) = Ū_B S Ū_B^T, where Ū_B ∈ M^{k×t}, Ū_B^T Ū_B = I_t, and S ∈ S^t_{++}. Furthermore, let U_B := [Ū_B  (1/√k) e] ∈ M^{k×(t+1)}, let
    U := [U_B 0; 0 I_{n−k}],
and let V be such that [V  (U^T e)/||U^T e||] ∈ M^{n−k+t+1} is orthogonal. Then
    face K†(E^n(1:k, D̄)) = (U S^{n−k+t+1}_+ U^T) ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.

Note that we add (1/√k) e to represent N(K); then we use V to eliminate e to recover the centered face.

Positive Integers for Intersecting Cliques

[Figure: two intersecting cliques α_1, α_2, with |α_1 \ α_2| = k̄_1, |α_1 ∩ α_2| = k̄_2, |α_2 \ α_1| = k̄_3]

For each clique α with |α| = k, we get a corresponding face/subspace (k × r matrix) representation. We now see how to handle two cliques, α_1 and α_2, that intersect.

Two (Intersecting) Clique Reduction / Subspace Representation

THEOREM 2 (Clique Intersection Reduction / Subspace Representation). Let
    α_1 := 1:(k̄_1 + k̄_2),  α_2 := (k̄_1 + 1):(k̄_1 + k̄_2 + k̄_3) ⊆ 1:n,
    k_1 := |α_1| = k̄_1 + k̄_2,  k_2 := |α_2| = k̄_2 + k̄_3,  k := k̄_1 + k̄_2 + k̄_3.
For i = 1, 2, let D̄_i := D[α_i] ∈ E^{k_i}, with embedding dimension t_i, and B̄_i := K†(D̄_i) = Ū_i S_i Ū_i^T, where Ū_i ∈ M^{k_i × t_i}, Ū_i^T Ū_i = I_{t_i}, and S_i ∈ S^{t_i}_{++}. Furthermore, for i = 1, 2, let U_i := [Ū_i  (1/√k_i) e] ∈ M^{k_i × (t_i+1)}, and let Ū ∈ M^{k×(t+1)} satisfy
    R(Ū) = R([U_1 0; 0 I_{k̄_3}]) ∩ R([I_{k̄_1} 0; 0 U_2]),  with Ū^T Ū = I_{t+1}.
cont...

Two (Intersecting) Clique Reduction, cont...

THEOREM 2 (Nonsingular Clique Intersection, cont...). With Ū as above, let
    U := [Ū 0; 0 I_{n−k}] ∈ M^{n×(n−k+t+1)},
and let V be such that [V  (U^T e)/||U^T e||] ∈ M^{n−k+t+1} is orthogonal. Then
    ∩_{i=1}^{2} face K†(E^n(α_i, D̄_i)) = (U S^{n−k+t+1}_+ U^T) ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.

Basic work: find Ū to represent the intersection of the two subspaces.
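The basic computation named above — an orthonormal Ū spanning the intersection of two ranges — can be obtained from the nullspace of a stacked matrix. A minimal sketch under my own helper names (the authors may implement this step differently):

```python
import numpy as np

def range_intersection(A, B, tol=1e-10):
    """Orthonormal basis for R(A) ∩ R(B), assuming A, B have full column rank."""
    # (x, y) in the nullspace of [A, -B]  <=>  A x = B y lies in both ranges
    M = np.hstack([A, -B])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    N = Vt[rank:].T                     # nullspace basis, one column per direction
    W = A @ N[: A.shape[1]]             # map back: columns span the intersection
    Q, _ = np.linalg.qr(W)              # orthonormalize
    return Q

# Two 3-dimensional subspaces of R^5 sharing exactly one direction
rng = np.random.default_rng(4)
common = rng.standard_normal((5, 1))
A = np.hstack([common, rng.standard_normal((5, 2))])
B = np.hstack([common, rng.standard_normal((5, 2))])
U = range_intersection(A, B)
```

For generic data the intersection here is the line spanned by the shared column, which the routine recovers up to orthonormalization.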

Two (Intersecting) Clique Reduction — Figure

[Figure: cliques C_i and C_j with a common intersection]

Completion: the missing distances can be recovered if desired.

Two (Intersecting) Clique Explicit Completion

COROLLARY (Intersection with Embedding Dimension r / Completion). Suppose the hypotheses of Theorem 2 hold. Let D̄_i := D[α_i] ∈ E^{k_i} for i = 1, 2, β ⊆ α_1 ∩ α_2, γ := α_1 ∪ α_2, D̄ := D[β], B̄ := K†(D̄), and Ū_β := Ū(β, :), where Ū ∈ M^{k×(t+1)} satisfies the intersection equation of Theorem 2. Let V̄ be such that [V̄  (Ū^T e)/||Ū^T e||] ∈ M^{t+1} is orthogonal, and let
    Z := (J Ū_β V̄)† B̄ ((J Ū_β V̄)†)^T.
If the embedding dimension of D̄ is r, THEN t = r in Theorem 2, Z ∈ S^r_+ is the unique solution of the equation
    (J Ū_β V̄) Z (J Ū_β V̄)^T = B̄,
and the exact completion is D[γ] = K((Ū V̄) Z (Ū V̄)^T).

Two (Intersecting) Clique Reduction — Figure / Singular Case

[Figure: cliques C_i and C_j whose intersection is too small for a rigid union]

Use R as a lower bound in the singular/nonrigid case.

Two (Intersecting) Clique Explicit Completion; Singular Case

COROLLARY (Singular Clique Intersection; Embedding Dimension r − 1). Suppose the hypotheses of the previous corollary hold. For i = 1, 2, let β ⊆ δ_i ⊆ α_i, A_i := J Ū_{δ_i} V̄, where Ū_{δ_i} := Ū(δ_i, :), and B̄_i := K†(D[δ_i]). Let Ẑ ∈ S^t be a particular solution of the linear systems
    A_1 Z A_1^T = B̄_1,
    A_2 Z A_2^T = B̄_2.
If the embedding dimension of D[δ_i] is r for i = 1, 2, but the embedding dimension of D̄ := D[β] is r − 1, then the following holds. cont...

Two (Intersecting) Clique Explicit Completion; Singular Case, cont...

COROLLARY (cont...). The following holds:
1. dim N(A_i) = 1, for i = 1, 2.
2. For i = 1, 2, let n_i ∈ N(A_i) with ||n_i||_2 = 1, and let Z̄ := n_1 n_2^T + n_2 n_1^T. Then Z is a solution of the linear systems if and only if Z = Ẑ + τ Z̄ for some τ ∈ R.
3. There are at most two nonzero solutions, τ_1 and τ_2, of the generalized eigenvalue problem Ẑ v = −τ Z̄ v, v ≠ 0. Set Z_i := Ẑ + τ_i Z̄, for i = 1, 2. Then the exact completion is one of
    D[γ] ∈ { K(Ū V̄ Z_i V̄^T Ū^T) : i = 1, 2 }.

Clique Unions and Node Absorptions — Algorithm

Initialize: C_i := { j : (D_p)_ij < (R/2)^2 }, for i = 1, ..., n.

Iterate:
- if |C_i ∩ C_j| ≥ r + 1, do Rigid Clique Union;
- if |C_i ∩ N(j)| ≥ r + 1, do Rigid Node Absorption;
- if |C_i ∩ C_j| = r, do Non-Rigid Clique Union (lower bounds);
- if |C_i ∩ N(j)| = r, do Non-Rigid Node Absorption (lower bounds).

Finalize: when a clique contains all the anchors, use the computed facial representation and the positions of the anchors to solve for X.
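The Initialize step and the rigid-union test can be sketched on a toy instance (my own variable names; the absorptions and non-rigid cases are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, R = 30, 2, 0.5
P = rng.random((n, r))
Dp = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=2)  # squared distances

# Initialize: node i starts a clique of all nodes within R/2 of it; by the
# triangle inequality any two such nodes are within R of each other, so all
# of C_i's pairwise distances are known and C_i is a clique of the graph.
cliques = [set(np.flatnonzero(Dp[i] < (R / 2) ** 2)) for i in range(n)]

# Pairs of cliques sharing at least r+1 nodes qualify for Rigid Clique Union
rigid_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
               if len(cliques[i] & cliques[j]) >= r + 1]
```

The r + 1 common nodes are what make the union rigid: they pin down the relative position of the two cliques in R^r.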

Clique Union and Node Absorption — Figure

[Figure: rigid clique union of C_i and C_j; rigid absorption of node j into C_i; non-rigid counterparts of both operations]

Results

[Tables: for Rigid Clique Union alone, and for Rigid Clique Union with Node Absorption — remaining cliques, CPU seconds, and max log(error), indexed by problem size n and radio range R]

The SDP relaxation of SNL is highly (implicitly) degenerate: the feasible set of this SDP is restricted to a low-dimensional face of the SDP cone, causing the Slater constraint qualification (strict feasibility) to fail.

We take advantage of this degeneracy by finding explicit representations of intersections of faces of the SDP cone corresponding to unions of intersecting cliques.

Without using an SDP solver (e.g., SeDuMi or SDPT3), we quickly compute the exact solution to the SDP relaxation (except for round-off error from computing eigenvectors, etc.).

Thanks for your attention!

Explicit Sensor Network Localization using Semidefinite Programming and Clique Reductions — Nathan Krislock and Henry Wolkowicz, Dept. of Combinatorics and Optimization, University of Waterloo; Haifa Matrix Theory Conference, May 18-21, 2009


Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

L26: Advanced dimensionality reduction

L26: Advanced dimensionality reduction L26: Advanced dimensionality reduction The snapshot CA approach Oriented rincipal Components Analysis Non-linear dimensionality reduction (manifold learning) ISOMA Locally Linear Embedding CSCE 666 attern

More information

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October Finding normalized and modularity cuts by spectral clustering Marianna Bolla Institute of Mathematics Budapest University of Technology and Economics marib@math.bme.hu Ljubjana 2010, October Outline Find

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

LINEAR ALGEBRA KNOWLEDGE SURVEY

LINEAR ALGEBRA KNOWLEDGE SURVEY LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

MATRIX ANALYSIS HOMEWORK

MATRIX ANALYSIS HOMEWORK MATRIX ANALYSIS HOMEWORK VERN I. PAULSEN 1. Due 9/25 We let S n denote the group of all permutations of {1,..., n}. 1. Compute the determinants of the elementary matrices: U(k, l), D(k, λ), and S(k, l;

More information

Real solving polynomial equations with semidefinite programming

Real solving polynomial equations with semidefinite programming .5cm. Real solving polynomial equations with semidefinite programming Jean Bernard Lasserre - Monique Laurent - Philipp Rostalski LAAS, Toulouse - CWI, Amsterdam - ETH, Zürich LAW 2008 Real solving polynomial

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

Sensor Network Localization from Local Connectivity : Performance Analysis for the MDS-MAP Algorithm

Sensor Network Localization from Local Connectivity : Performance Analysis for the MDS-MAP Algorithm Sensor Network Localization from Local Connectivity : Performance Analysis for the MDS-MAP Algorithm Sewoong Oh, Amin Karbasi, and Andrea Montanari August 27, 2009 Abstract Sensor localization from only

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

Spectral Clustering. Spectral Clustering? Two Moons Data. Spectral Clustering Algorithm: Bipartioning. Spectral methods

Spectral Clustering. Spectral Clustering? Two Moons Data. Spectral Clustering Algorithm: Bipartioning. Spectral methods Spectral Clustering Seungjin Choi Department of Computer Science POSTECH, Korea seungjin@postech.ac.kr 1 Spectral methods Spectral Clustering? Methods using eigenvectors of some matrices Involve eigen-decomposition

More information

Computing the Nearest Euclidean Distance Matrix with Low Embedding Dimensions

Computing the Nearest Euclidean Distance Matrix with Low Embedding Dimensions Computing the Nearest Euclidean Distance Matrix with Low Embedding Dimensions Hou-Duo Qi and Xiaoming Yuan September 12, 2012 and Revised May 12, 2013 Abstract Euclidean distance embedding appears in many

More information

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS There will be eight problems on the final. The following are sample problems. Problem 1. Let F be the vector space of all real valued functions on

More information

On the eigenvalues of Euclidean distance matrices

On the eigenvalues of Euclidean distance matrices Volume 27, N. 3, pp. 237 250, 2008 Copyright 2008 SBMAC ISSN 00-8205 www.scielo.br/cam On the eigenvalues of Euclidean distance matrices A.Y. ALFAKIH Department of Mathematics and Statistics University

More information

Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler.

Math 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler. Math 453 Exam 3 Review The syllabus for Exam 3 is Chapter 6 (pages -2), Chapter 7 through page 37, and Chapter 8 through page 82 in Axler.. You should be sure to know precise definition of the terms we

More information

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name:

Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition. Name: Assignment #10: Diagonalization of Symmetric Matrices, Quadratic Forms, Optimization, Singular Value Decomposition Due date: Friday, May 4, 2018 (1:35pm) Name: Section Number Assignment #10: Diagonalization

More information

Further Relaxations of the SDP Approach to Sensor Network Localization

Further Relaxations of the SDP Approach to Sensor Network Localization Further Relaxations of the SDP Approach to Sensor Network Localization Zizhuo Wang, Song Zheng, Stephen Boyd, and Yinyu Ye September 27, 2006; Revised May 2, 2007 Abstract Recently, a semi-definite programming

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

15 Singular Value Decomposition

15 Singular Value Decomposition 15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem

A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem A Cholesky LR algorithm for the positive definite symmetric diagonal-plus-semiseparable eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana Slovenia Ellen Van Camp and Marc Van

More information

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Jan van den Heuvel and Snežana Pejić Department of Mathematics London School of Economics Houghton Street,

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Parallel Singular Value Decomposition. Jiaxing Tan

Parallel Singular Value Decomposition. Jiaxing Tan Parallel Singular Value Decomposition Jiaxing Tan Outline What is SVD? How to calculate SVD? How to parallelize SVD? Future Work What is SVD? Matrix Decomposition Eigen Decomposition A (non-zero) vector

More information

https://goo.gl/kfxweg KYOTO UNIVERSITY Statistical Machine Learning Theory Sparsity Hisashi Kashima kashima@i.kyoto-u.ac.jp DEPARTMENT OF INTELLIGENCE SCIENCE AND TECHNOLOGY 1 KYOTO UNIVERSITY Topics:

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Continuous Optimisation, Chpt 9: Semidefinite Problems

Continuous Optimisation, Chpt 9: Semidefinite Problems Continuous Optimisation, Chpt 9: Semidefinite Problems Peter J.C. Dickinson DMMP, University of Twente p.j.c.dickinson@utwente.nl http://dickinson.website/teaching/2016co.html version: 21/11/16 Monday

More information

Chapter 3. Some Applications. 3.1 The Cone of Positive Semidefinite Matrices

Chapter 3. Some Applications. 3.1 The Cone of Positive Semidefinite Matrices Chapter 3 Some Applications Having developed the basic theory of cone programming, it is time to apply it to our actual subject, namely that of semidefinite programming. Indeed, any semidefinite program

More information

Lecture 14: Optimality Conditions for Conic Problems

Lecture 14: Optimality Conditions for Conic Problems EE 227A: Conve Optimization and Applications March 6, 2012 Lecture 14: Optimality Conditions for Conic Problems Lecturer: Laurent El Ghaoui Reading assignment: 5.5 of BV. 14.1 Optimality for Conic Problems

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Strong Duality and Minimal Representations for Cone Optimization

Strong Duality and Minimal Representations for Cone Optimization Strong Duality and Minimal Representations for Cone Optimization Levent Tunçel Henry Wolkowicz August 2008, revised: December 2010 University of Waterloo Department of Combinatorics & Optimization Waterloo,

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Positive Semidefinite Matrix Completion, Universal Rigidity and the Strong Arnold Property

Positive Semidefinite Matrix Completion, Universal Rigidity and the Strong Arnold Property Positive Semidefinite Matrix Completion, Universal Rigidity and the Strong Arnold Property M. Laurent a,b, A. Varvitsiotis a, a Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 XG Amsterdam,

More information

Compression, Matrix Range and Completely Positive Map

Compression, Matrix Range and Completely Positive Map Compression, Matrix Range and Completely Positive Map Iowa State University Iowa-Nebraska Functional Analysis Seminar November 5, 2016 Definitions and notations H, K : Hilbert space. If dim H = n

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

Chordally Sparse Semidefinite Programs

Chordally Sparse Semidefinite Programs Chordally Sparse Semidefinite Programs Raphael Louca Abstract We consider semidefinite programs where the collective sparsity pattern of the data matrices is characterized by a given chordal graph. Acyclic,

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Houduo Qi February 1, 008 and Revised October 8, 008 Abstract Let G = (V, E) be a graph

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

Optimizing Extremal Eigenvalues of Weighted Graph Laplacians and Associated Graph Realizations

Optimizing Extremal Eigenvalues of Weighted Graph Laplacians and Associated Graph Realizations Optimizing Extremal Eigenvalues of Weighted Graph Laplacians and Associated Graph Realizations DISSERTATION submitted to Department of Mathematics at Chemnitz University of Technology in accordance with

More information

Linear vector spaces and subspaces.

Linear vector spaces and subspaces. Math 2051 W2008 Margo Kondratieva Week 1 Linear vector spaces and subspaces. Section 1.1 The notion of a linear vector space. For the purpose of these notes we regard (m 1)-matrices as m-dimensional vectors,

More information