Explicit Sensor Network Localization using Semidefinite Programming and Facial Reduction


1 Explicit Sensor Network Localization using Semidefinite Programming and Facial Reduction
Nathan Krislock and Henry Wolkowicz
Dept. of Combinatorics and Optimization, University of Waterloo
INFORMS, San Diego, Oct. 2009

2 Outline
1 Preliminaries
   SNL ↔ GR ↔ EDM ↔ SDP
   Further Notation/Preliminaries; Facial Structure of Cones
2 Basic Single Clique Reduction
   Two Clique Reduction and EDM DELAYED Completion
   Completing SNL; DELAYED use of Anchor Locations
3 Clique Unions and Node Absorptions
   Results (low CPU time; high accuracy)


6 Sensor Network Localization (SNL) Problem
SNL is a fundamental problem of distance geometry; easy to describe; dates back to Grassmann 1886.
- n ad hoc wireless sensors (nodes) to locate in R^r (r is the embedding dimension); sensors p_i ∈ R^r, i ∈ V := {1,...,n}
- m of the sensors are anchors, p_i, i = n−m+1,...,n (positions known, e.g. using GPS)
- pairwise squared distances D_ij = ‖p_i − p_j‖^2, ij ∈ E, are known within radio range R > 0
- the positions are collected in
      P = [ p_1^T ; ... ; p_n^T ] = [ X ; A ] ∈ R^{n×r},
  where the rows of X are the unknown sensor positions and the rows of A are the anchor positions
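The data model above is easy to simulate. The following NumPy sketch (illustrative only, not code from the talk; the sizes and the radio-range value are made up) builds a random instance: points in the unit square, the last m rows treated as anchors, and a partial squared-distance matrix that keeps only the pairs within radio range.

    import numpy as np

    def random_snl_instance(n=100, m=9, r=2, radio_range=0.3, seed=0):
        """Random SNL instance: n points in [0,1]^r, the last m rows are anchors,
        and a partial squared-distance matrix (0 marks an unknown distance)."""
        rng = np.random.default_rng(seed)
        P = rng.random((n, r))                        # true positions, rows p_i^T
        B = P @ P.T                                   # Gram matrix B = P P^T
        d = np.diag(B)
        D_full = d[:, None] + d[None, :] - 2.0 * B    # full squared EDM K(B)
        within = D_full <= radio_range ** 2           # edges of the graph G
        np.fill_diagonal(within, False)
        D_partial = np.where(within, D_full, 0.0)
        return P, D_partial, within, P[n - m:, :]     # positions, partial EDM, edges, anchors

    P, D, E, anchors = random_snl_instance()
    print(D.shape, int(E.sum()) // 2, "known squared distances")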

7 Applications
Horst Störmer (Nobel Prize, Physics, 1998), "21 Ideas for the 21st Century", Business Week, 8/23-30, 1999:
   "Untethered micro sensors will go anywhere and measure anything - traffic flow, water level, number of people walking by, temperature. This is developing into something like a nervous system for the earth, a skin for the earth. The world will evolve this way."
- tracking humans/animals/equipment/weather (smart dust)
- geographic routing; data aggregation; topological control; soil humidity; earthquakes and volcanoes; weather and ocean currents
- military; tracking of goods; vehicle positions; surveillance; random deployment in inaccessible terrains

8 Conferences/Journals/Research Groups/Books/Theses/Codes
- conference: MELT 2008; International Journal of Sensor Networks
- research groups include: CENS at UCLA, Berkeley WEBS
- recent related theses and books include: [10, 16, 8, 7, 11, 12, 6, 14, 17]
- recent algorithms specific for SNL: [1, 2, 3, 4, 5, 9, 15, 18, 13]

9 Underlying Graph Realization / Partial EDM
Graph G = (V, E, ω):
- node set V = {1,...,n}; edge set (i,j) ∈ E; ω_ij = ‖p_i − p_j‖^2 known approximately
- the anchors form a clique (complete subgraph)
Realization of G in R^r: a mapping of nodes v_i → p_i ∈ R^r with squared distances given by ω.
Corresponding partial Euclidean Distance Matrix (EDM):
    D_ij = d_ij^2 if (i,j) ∈ E, and 0 otherwise (unknown distance),
where the d_ij^2 = ω_ij are the known squared Euclidean distances between sensors p_i, p_j; the anchors correspond to a clique. The problem is NP-hard.


11 Sensor Localization Problem / Partial EDM
[Figure: sensors and anchors, n = 100, m = 9, R = 2]

12 Connections to Semidefinite Programming (SDP)
- S^n_+: cone of (symmetric) positive semidefinite matrices in S^n (x^T A x ≥ 0 for all x); inner product ⟨A, B⟩ = trace(AB); Löwner (psd) partial order A ⪰ B, A ≻ B
- D = K(B) ∈ E^n, B = K†(D) ∈ S^n ∩ S_C (centered: Be = 0)
- Let P^T = [ p_1 p_2 ... p_n ] ∈ M^{r×n}, B := P P^T ∈ S^n_+ with rank B = r, and let D ∈ E^n be the corresponding EDM. Then
      D = ( ‖p_i − p_j‖^2 )_{i,j=1}^n
        = ( p_i^T p_i + p_j^T p_j − 2 p_i^T p_j )_{i,j=1}^n
        = diag(B) e^T + e diag(B)^T − 2B
        =: D_e(B) − 2B =: K(B)      (from B ∈ S^n_+).

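A minimal NumPy sketch of the two maps K and K† used above (illustrative code, not from the talk), using K(B) = diag(B) e^T + e diag(B)^T − 2B and, for a hollow D, K†(D) = −(1/2) J D J; the final check confirms that K† recovers a centered Gram matrix.

    import numpy as np

    def K(B):
        """Map a Gram matrix B = P P^T to the squared EDM: diag(B) e^T + e diag(B)^T - 2B."""
        d = np.diag(B)
        return d[:, None] + d[None, :] - 2.0 * B

    def K_dagger(D):
        """Generalized inverse of K on hollow matrices: the centered Gram matrix -1/2 J D J."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n           # projection onto {e}^perp
        return -0.5 * J @ D @ J

    rng = np.random.default_rng(1)
    P = rng.random((6, 2))
    P = P - P.mean(axis=0)                            # center the points so that B e = 0
    B = P @ P.T
    assert np.allclose(K_dagger(K(B)), B)             # K_dagger inverts K on centered B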

14 Current Techniques; SDP Relaxation; Highly Degenerate
Nearest, weighted SDP approximation (relax rank B):
    min_{B ⪰ 0, B ∈ Ω} ‖ H ∘ (K(B) − D) ‖,   rank B = r;   typical weights: H_ij = 1/D_ij if ij ∈ E.
- With the rank constraint this is a non-convex, NP-hard program.
- The SDP relaxation is convex, BUT: expensive / low accuracy / implicitly highly degenerate (cliques restrict the ranks of feasible B's).
Instead: take advantage of the degeneracy! A clique α, |α| = k (corresponding D[α]), with embedding dimension t ≤ r < k gives
    rank K†(D[α]) = t  ⟹  rank B[α] ≤ rank K†(D[α]) + 1  ⟹  rank B = rank K†(D) ≤ n − (k − t − 1)
⟹ Slater's CQ (strict feasibility) fails.


16 Linear Transformations: D_v(B), K(B), T(D)
- D_v(B) := diag(B) v^T + v diag(B)^T; for a vector y, D_v(y) := y v^T + v y^T
- K(B) := D_e(B) − 2B maps S^n into S_H, with adjoint K*(D) = 2(Diag(De) − D)
- K is one-to-one and onto between the centered and hollow subspaces:
      S_C := {B ∈ S^n : Be = 0},   S_H := {D ∈ S^n : diag(D) = 0} = R(offDiag)
- J := I − (1/n) e e^T (orthogonal projection onto M := {e}^⊥);  T(D) := −(1/2) J offDiag(D) J (= K†(D))
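A short sketch (again illustrative, not the talk's code) of the adjoint formula K*(D) = 2(Diag(De) − D), checking the defining identity ⟨K(B), D⟩ = ⟨B, K*(D)⟩ on random symmetric matrices.

    import numpy as np

    def K(B):
        d = np.diag(B)
        return d[:, None] + d[None, :] - 2.0 * B

    def K_star(D):
        """Adjoint of K: K*(D) = 2 (Diag(D e) - D)."""
        return 2.0 * (np.diag(D.sum(axis=1)) - D)

    rng = np.random.default_rng(2)
    n = 5
    B = rng.random((n, n)); B = (B + B.T) / 2
    D = rng.random((n, n)); D = (D + D.T) / 2
    lhs = np.trace(K(B) @ D)                          # <K(B), D>
    rhs = np.trace(B @ K_star(D))                     # <B, K*(D)>
    assert np.isclose(lhs, rhs)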

17 Properties of the Linear Transformations K, T, Diag, D_e
- R(K) = S_H;  N(K) = R(D_e)
- R(K*) = R(T) = S_C;  N(K*) = N(T) = R(Diag)
- S^n = S_H ⊕ R(Diag) = S_C ⊕ R(D_e)
- T(E^n) = S^n_+ ∩ S_C  and  K(S^n_+ ∩ S_C) = E^n

18 Semidefinite Cone, Faces
Faces of a cone K:
- F ⊆ K is a face of K, denoted F ⊴ K, if (x, y ∈ K, (1/2)(x + y) ∈ F) ⟹ (cone{x, y} ⊆ F)
- F ◁ K if F ⊴ K and F ≠ K; F is a proper face if {0} ≠ F ◁ K
- F ⊴ K is exposed if it is the intersection of K with a hyperplane
- face(S) denotes the smallest face of K that contains the set S
S^n_+ is a facially exposed cone: all of its faces are exposed.


20 Facial Structure of the SDP Cone; Equivalent SUBSPACES
Face F ⊴ S^n_+  ↔  subspace R(U) of R^n:
- F ⊴ S^n_+ is determined by the range of any S ∈ relint F: let S = U Γ U^T be a compact spectral decomposition, with Γ ∈ S^t_{++} the diagonal matrix of positive eigenvalues; then F = U S^t_+ U^T (F is associated with R(U)) and dim F = t(t+1)/2.
- Conversely, a face F is represented by a subspace L = R(T), with T an n×t full-column-rank matrix: F := T S^t_+ T^T ⊴ S^n_+.

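The face/subspace correspondence can be checked numerically. In this sketch (names and sizes are illustrative), a rank-t PSD matrix S determines U from its positive eigenpairs; S then lies in the face U S^t_+ U^T, since the compressed t×t block is PSD and expands back to S.

    import numpy as np

    rng = np.random.default_rng(3)
    n, t = 6, 2
    W = rng.random((n, t))
    S = W @ W.T                                       # a rank-t PSD matrix in S^n_+

    w, V = np.linalg.eigh(S)
    U = V[:, w > 1e-10]                               # orthonormal basis of R(S)
    Gamma = U.T @ S @ U                               # compressed t x t block

    assert U.shape[1] == t
    assert np.all(np.linalg.eigvalsh(Gamma) > -1e-10) # Gamma is PSD
    assert np.allclose(U @ Gamma @ U.T, S)            # S = U Gamma U^T lies in U S^t_+ U^T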

22 Further Notation
Matrix with fixed principal submatrix: for Y ∈ S^n and α ⊆ {1,...,n}, Y[α] denotes the principal submatrix formed from the rows and columns with indices in α.
Sets with fixed principal submatrices: if |α| = k and Ȳ ∈ S^k, then
    S^n(α, Ȳ) := { Y ∈ S^n : Y[α] = Ȳ },   S^n_+(α, Ȳ) := { Y ∈ S^n_+ : Y[α] = Ȳ },
i.e. the subsets of matrices Y ∈ S^n (resp. Y ∈ S^n_+) whose principal submatrix Y[α] is fixed to Ȳ.


24 Basic Single Clique / Facial Reduction
Let D̄ ∈ E^k, α ⊆ 1:n, |α| = k. Define
    E^n(α, D̄) := { D ∈ E^n : D[α] = D̄ }.
Given D̄: find a corresponding B ⪰ 0; find the corresponding face; find the corresponding subspace.
If α = 1:k and the embedding dimension of D̄ is t ≤ r, then D has the block form
    D = [ D̄  *  ;  *  * ],
with the blocks marked * unknown.

25 BASIC THEOREM for Single Clique / Facial Reduction
(Note: we append (1/√k) e to represent N(K); then we use V to eliminate e and recover a centered face.)
THEOREM 1 (Single Clique / Facial Reduction). Let D̄ := D[1:k] ∈ E^k, k < n, with embedding dimension t ≤ r, and
    B̄ := K†(D̄) = Ū_B S Ū_B^T,   Ū_B ∈ M^{k×t},   Ū_B^T Ū_B = I_t,   S ∈ S^t_{++}.
Furthermore, let
    U_B := [ Ū_B  (1/√k) e ] ∈ M^{k×(t+1)},   U := [ U_B  0 ; 0  I_{n−k} ] ∈ M^{n×(n−k+t+1)},
and let [ V  (U^T e)/‖U^T e‖ ] ∈ M^{n−k+t+1} be orthogonal. Then
    face K†( E^n(1:k, D̄) ) = U S^{n−k+t+1}_+ U^T ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.
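A sketch of the construction in Theorem 1 for a clique occupying the first k indices of an n-node problem: compute B̄ = K†(D̄), take Ū_B from its positive eigenpairs, append e/√k, and embed into the block matrix U. Function and variable names are illustrative; this is not the talk's implementation.

    import numpy as np

    def K_dagger(D):
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        return -0.5 * J @ D @ J

    def single_clique_face(D_clique, n, tol=1e-9):
        """U with R(U) containing the range of every feasible Gram matrix, for a
        clique on indices 0..k-1 of an n-node problem (sketch of Theorem 1)."""
        k = D_clique.shape[0]
        w, V = np.linalg.eigh(K_dagger(D_clique))
        U_bar = V[:, w > tol]                               # k x t eigenvectors, t = embedding dim
        t = U_bar.shape[1]
        U_B = np.hstack([U_bar, np.ones((k, 1)) / np.sqrt(k)])
        U = np.zeros((n, n - k + t + 1))
        U[:k, :t + 1] = U_B                                  # block diag(U_B, I_{n-k})
        U[k:, t + 1:] = np.eye(n - k)
        return U

    rng = np.random.default_rng(4)
    pts = rng.random((4, 2))                                 # a 4-node clique of points in R^2
    Dc = np.square(pts[:, None, :] - pts[None, :, :]).sum(-1)
    print(single_clique_face(Dc, n=10).shape)                # (10, 10 - 4 + 2 + 1) = (10, 9)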

26 Sets for Intersecting Cliques / Faces
For each clique with |α| = k we get a corresponding face/subspace (k×r matrix) representation. We now see how to handle two cliques, α_1 and α_2, that intersect:
    α_1 := 1:(k̄_1 + k̄_2),   α_2 := (k̄_1 + 1):(k̄_1 + k̄_2 + k̄_3),
so the two cliques overlap in the k̄_2 middle indices.
[Figure: two overlapping cliques α_1, α_2 with block sizes k̄_1, k̄_2, k̄_3.]

27 Two (Intersecting) Clique Reduction / Subspace Representation
THEOREM 2 (Clique/Facial Intersection Using Subspace Intersection). Let α_1, α_2 ⊆ 1:n and k := |α_1 ∪ α_2|. For i = 1, 2 let
    D̄_i := D[α_i] ∈ E^{k_i} with embedding dimension t_i;
    B̄_i := K†(D̄_i) = Ū_i S_i Ū_i^T,   Ū_i ∈ M^{k_i×t_i},   Ū_i^T Ū_i = I_{t_i},   S_i ∈ S^{t_i}_{++};
    U_i := [ Ū_i  (1/√k_i) e ] ∈ M^{k_i×(t_i+1)};
and let Ū ∈ M^{k×(t+1)} satisfy
    R(Ū) = R([ U_1 0 ; 0 I_{k̄_3} ]) ∩ R([ I_{k̄_1} 0 ; 0 U_2 ]),   with Ū^T Ū = I_{t+1}   (cont...)

28 Two (Intersecting) Clique Reduction, cont...
THEOREM 2 (Nonsingular Clique/Facial Intersection), cont... With Ū as above, let
    U := [ Ū  0 ; 0  I_{n−k} ] ∈ M^{n×(n−k+t+1)},
and let [ V  (U^T e)/‖U^T e‖ ] ∈ M^{n−k+t+1} be orthogonal. Then
    ∩_{i=1}^{2} face K†( E^n(α_i, D̄_i) ) = U S^{n−k+t+1}_+ U^T ∩ S_C = (UV) S^{n−k+t}_+ (UV)^T.

29 Expense/Work of (Two) Clique/Facial Reductions
Subspace intersection for two intersecting cliques/faces. Partition the two expanded matrices as
    Û_1 := [ U_1 0 ; 0 I ] = [ U_1' 0 ; U_1'' 0 ; 0 I ]   and   Û_2 := [ I 0 ; 0 U_2 ] = [ I 0 ; 0 U_2' ; 0 U_2'' ],
where the middle block rows correspond to the common nodes α_1 ∩ α_2. Then either
    U := [ U_1' ; U_1'' ; U_2'' (U_2')† U_1'' ]   or   U := [ U_1' (U_1'')† U_2' ; U_2' ; U_2'' ]
(efficiently) satisfies R(U) = R(Û_1) ∩ R(Û_2).
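The pseudoinverse formula can be tested on a small point configuration. The sketch below (illustrative names, sizes, and seed) builds U_1, U_2 for two overlapping cliques of random planar points, forms U by the first formula above, and checks that its rank and its columns agree with a generic computation of R(Û_1) ∩ R(Û_2).

    import numpy as np

    def K_dagger(D):
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        return -0.5 * J @ D @ J

    def clique_subspace(D_clique, tol=1e-9):
        """[U_bar  e/sqrt(k)] for one clique, as in Theorem 2 (sketch)."""
        k = D_clique.shape[0]
        w, V = np.linalg.eigh(K_dagger(D_clique))
        return np.hstack([V[:, w > tol], np.ones((k, 1)) / np.sqrt(k)])

    def sqdist(P):
        return np.square(P[:, None, :] - P[None, :, :]).sum(-1)

    rng = np.random.default_rng(5)
    k1, k2, k3, r = 3, 4, 5, 2                        # the cliques overlap in k2 nodes
    P = rng.random((k1 + k2 + k3, r))
    U1 = clique_subspace(sqdist(P[:k1 + k2]))         # clique alpha_1 = 1:(k1+k2)
    U2 = clique_subspace(sqdist(P[k1:]))              # clique alpha_2 = (k1+1):(k1+k2+k3)

    U1p, U1pp = U1[:k1], U1[k1:]                      # partition: non-overlap / overlap rows
    U2p, U2pp = U2[:k2], U2[k2:]
    U = np.vstack([U1p, U1pp, U2pp @ np.linalg.pinv(U2p) @ U1pp])

    Ahat = np.block([[U1, np.zeros((k1 + k2, k3))], [np.zeros((k3, U1.shape[1])), np.eye(k3)]])
    Bhat = np.block([[np.eye(k1), np.zeros((k1, U2.shape[1]))], [np.zeros((k2 + k3, k1)), U2]])
    dim_inter = (np.linalg.matrix_rank(Ahat) + np.linalg.matrix_rank(Bhat)
                 - np.linalg.matrix_rank(np.hstack([Ahat, Bhat])))
    assert np.linalg.matrix_rank(U) == dim_inter      # R(U) has the intersection's dimension
    assert np.allclose(Ahat @ np.linalg.lstsq(Ahat, U, rcond=None)[0], U)   # columns of U in R(Ahat)
    assert np.allclose(Bhat @ np.linalg.lstsq(Bhat, U, rcond=None)[0], U)   # columns of U in R(Bhat)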

30 Two (Intersecting) Clique Reduction Figure
[Figure: the union of two intersecting cliques.] Completion: the missing distances can be recovered if desired.

31 Two (Intersecting) Clique Explicit Delayed Completion
COROLLARY (Intersection with Embedding Dimension r / Completion). Suppose the hypotheses of Theorem 2 hold. Let D̄_i := D[α_i] ∈ E^{k_i} for i = 1, 2, β ⊆ α_1 ∩ α_2, γ := α_1 ∪ α_2, D̄ := D[β], B̄ := K†(D̄), and Ū_β := Ū(β, :), where Ū ∈ M^{k×(t+1)} satisfies the intersection equation of Theorem 2. Let [ V  (Ū^T e)/‖Ū^T e‖ ] ∈ M^{t+1} be orthogonal, and let Z := (J Ū_β V)† B̄ ((J Ū_β V)†)^T. If the embedding dimension of D̄ is r, THEN t = r in Theorem 2, Z ∈ S^r_+ is the unique solution of the equation (J Ū_β V) Z (J Ū_β V)^T = B̄, and the exact completion is
    D[γ] = K(P P^T),   where P := Ū V Z^{1/2} ∈ R^{|γ|×r}.

32 Two (Intersecting) Clique Reduction Figure / Singular Case
Use R as a lower bound in the singular/non-rigid case.
[Figure: two intersecting cliques C_i, C_j (singular case).]

33 Two (Intersecting) Clique Explicit Completion; Singular Case
COROLLARY (Singular Clique Intersection; Embedding Dimension r − 1). Suppose the hypotheses of the previous corollary hold. For i = 1, 2, let β ⊆ δ_i ⊆ α_i, A_i := J Ū_{δ_i} V, where Ū_{δ_i} := Ū(δ_i, :), and B̄_i := K†(D[δ_i]). Let Z̄ ∈ S^t be a particular solution of the linear systems
    A_1 Z A_1^T = B̄_1,   A_2 Z A_2^T = B̄_2.
If the embedding dimension of D[δ_i] is r for i = 1, 2, but the embedding dimension of D̄ := D[β] is r − 1, then the following holds (cont...)

34 Two (Intersecting) Clique Explicit Completion; Degenerate Case, cont...
COROLLARY (Singular Clique Intersection), cont... The following holds:
1. dim N(A_i) = 1, for i = 1, 2.
2. For i = 1, 2, let n_i ∈ N(A_i) with ‖n_i‖_2 = 1, and let ΔZ := n_1 n_2^T + n_2 n_1^T. Then Z is a solution of the linear systems if and only if Z = Z̄ + τ ΔZ for some τ ∈ R.
3. There are at most two nonzero solutions, τ_1 and τ_2, of the generalized eigenvalue problem Z̄ v = τ ΔZ v, v ≠ 0. Set Z_i := Z̄ + (1/τ_i) ΔZ, for i = 1, 2. Then the exact completion is one of
    D[γ] ∈ { K(Ū V Z_i V^T Ū^T) : i = 1, 2 }.

35 Completing SNL (Delayed Use of Anchor Locations)
Rotate to align the anchor positions. Given P = [ P_1 ; P_2 ] ∈ R^{n×r} such that D = K(P P^T), solve the orthogonal Procrustes problem
    min_Q ‖ A − P_2 Q ‖   s.t.   Q^T Q = I :
compute the SVD P_2^T A = U Σ V^T and set Q = U V^T (the orthogonal Procrustes algorithm of Golub/Van Loan); then set X := P_1 Q.
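A sketch of the alignment step (illustrative names; it assumes both the localized configuration and the true anchor coordinates are centered at the origin, so the translation part of the alignment is omitted).

    import numpy as np

    def align_to_anchors(P, anchors_true):
        """Solve min_Q ||A - P_2 Q|| s.t. Q^T Q = I via one SVD and return X := P_1 Q."""
        m = anchors_true.shape[0]
        P2 = P[-m:, :]                                # computed positions of the anchor nodes
        U, _, Vt = np.linalg.svd(P2.T @ anchors_true)
        Q = U @ Vt                                    # optimal orthogonal transformation
        return P[:-m, :] @ Q

    # tiny check: a rotated copy of the true configuration is rotated back exactly
    rng = np.random.default_rng(8)
    A = rng.random((4, 2)); A -= A.mean(axis=0)       # centered "true" anchors
    S_true = rng.random((6, 2)) - 0.5                 # "unknown" sensor positions
    c, s = np.cos(0.7), np.sin(0.7)
    Q0 = np.array([[c, -s], [s, c]])                  # an arbitrary rotation
    P = np.vstack([S_true, A]) @ Q0.T                 # localized configuration, rotated
    assert np.allclose(align_to_anchors(P, A), S_true)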

36 Algorithm: Four Cases
[Figure: the four cases, Rigid / Non-rigid × Clique Union / Node Absorption, illustrated on cliques C_i, C_j.]

37 ALGORITHM: clique union; facial reduction; delayed completion
Initialize: find an initial set of cliques, C_i := { j : (D_p)_ij < (R/2)^2 }, for i = 1,...,n.
Iterate:
- for |C_i ∩ C_j| ≥ r + 1, do Rigid Clique Union
- for |C_i ∩ N(j)| ≥ r + 1, do Rigid Node Absorption
- for |C_i ∩ C_j| = r, do Non-Rigid Clique Union (lower bounds)
- for |C_i ∩ N(j)| = r, do Non-Rigid Node Absorption (lower bounds)
Finalize: when there is a clique containing all the anchors, use the computed facial representation and the positions of the anchors to solve for X.

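A sketch of the initialization step and the rigid-clique-union test |C_i ∩ C_j| ≥ r + 1, on a small random instance (all sizes, the radio range, and the seed are made up; the remaining three cases of the algorithm are not sketched).

    import numpy as np

    def initial_cliques(D_partial, R):
        """C_i := { j : (D_p)_ij < (R/2)^2 }: any two nodes within R/2 of node i are
        within R of each other (triangle inequality), so each C_i is a clique."""
        n = D_partial.shape[0]
        thresh = (R / 2.0) ** 2
        return [{i} | {j for j in range(n) if j != i and 0.0 < D_partial[i, j] < thresh}
                for i in range(n)]

    rng = np.random.default_rng(7)
    P = rng.random((60, 2))
    D_full = np.square(P[:, None, :] - P[None, :, :]).sum(-1)
    R = 0.4
    D_partial = np.where(D_full <= R ** 2, D_full, 0.0)      # 0 marks an unknown distance
    cliques = initial_cliques(D_partial, R)

    r = 2                                                    # rigid union: overlap of >= r + 1 nodes
    rigid_pairs = [(i, j) for i in range(len(cliques)) for j in range(i + 1, len(cliques))
                   if len(cliques[i] & cliques[j]) >= r + 1]
    print(len(cliques), "initial cliques,", len(rigid_pairs), "rigid-union candidates")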

40 Results - Data for Random Noiseless Problems
- 2.16 GHz Intel Core 2 Duo, 2 GB of RAM
- dimension r = 2; square region [0, 1] × [0, 1]; m = 9 anchors
- using only Rigid Clique Union and Rigid Node Absorption
- error measure: Root Mean Square Deviation,
      RMSD = ( (1/n) Σ_{i=1}^{n} ‖p_i − p_i^true‖^2 )^{1/2}
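The error measure, transcribed as a small NumPy helper (illustrative):

    import numpy as np

    def rmsd(P_est, P_true):
        """Root mean square deviation between estimated and true positions."""
        return np.sqrt(np.mean(np.sum((P_est - P_true) ** 2, axis=1)))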

41 Results - Large n (SDP size O(n^2))
[Tables: number of sensors located, CPU seconds, and RMSD (over located sensors), indexed by the number of sensors n and the radio range R; the RMSD values are on the order of 1e-16.]

42 Results - N Huge SDPs Solved: Large-Scale Problems
[Table: # sensors, # anchors, radio range, RMSD, time, for four problem sizes; RMSD on the order of 1e-16, times 25s, 1m 23s, 3m 13s, 9m 8s.]
Size of the SDPs solved: N = C(n, 2) variables; the density of G is |E|/C(n, 2) ≈ πR^2, so M = |E| ≈ πR^2 N constraints. For these problems M is in the millions and N is on the order of 10^9.

43 Locally Recover Exact EDMs: Nearest EDM
Given a clique α with corresponding noisy EDM D_ε = D + N_ε (N_ε noise), we need to find the smallest face containing E^n(α, D). Solve
    min ‖ K(X) − D_ε ‖   s.t.   rank(X) = r,  Xe = 0,  X ⪰ 0.
Eliminate the constraints: with V^T e = 0, V^T V = I, and K_V(X) := K(V X V^T),
    U_r ∈ argmin (1/2) ‖ K_V(U U^T) − D_ε ‖_F^2   s.t.   U ∈ M^{(n−1)×r}.
The nearest EDM is D = K_V(U_r (U_r)^T).

44 Solve an Overdetermined Nonlinear Least Squares Problem
Newton (expensive) or Gauss-Newton (less accurate):
    F(U) := us2vec( K_V(U U^T) − D_ε ),   min_U f(U) := (1/2) ‖F(U)‖^2.
Derivatives (gradient and Hessian):
    ∇f(U)(ΔU) = 2 ⟨ K_V^*[ K_V(U U^T) − D_ε ] U, ΔU ⟩,
    ∇^2 f(U) = 2 vec( L_U^* K_V^* K_V S_Σ L_U + K_V^*( K_V(U U^T) − D_ε ) ) Mat,
where L_U(ΔU) := ΔU U^T and S_Σ(U) := (1/2)(U + U^T).
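The gradient formula above already supports a crude first-order method. The sketch below (illustrative only; the talk uses Newton or Gauss-Newton, not gradient descent) minimizes f by plain gradient descent, with the columns of V spanning {e}^⊥ so that the centering constraint is eliminated.

    import numpy as np

    def K(B):
        d = np.diag(B)
        return d[:, None] + d[None, :] - 2.0 * B

    def K_star(D):
        return 2.0 * (np.diag(D.sum(axis=1)) - D)

    def nearest_edm(D_eps, r, steps=5000, lr=2e-3, seed=0):
        """Gradient descent on f(U) = 1/2 ||K(V U U^T V^T) - D_eps||_F^2,
        using grad f(U) = 2 V^T K*[K(V U U^T V^T) - D_eps] V U."""
        n = D_eps.shape[0]
        V = np.linalg.svd(np.ones((1, n)))[2][1:].T   # n x (n-1), orthonormal columns, V^T e = 0
        U = 0.1 * np.random.default_rng(seed).standard_normal((n - 1, r))
        for _ in range(steps):
            Res = K(V @ U @ U.T @ V.T) - D_eps        # residual K_V(U U^T) - D_eps
            U -= lr * (2.0 * V.T @ K_star(Res) @ V @ U)
        return K(V @ U @ U.T @ V.T)

    rng = np.random.default_rng(1)
    P = rng.random((8, 2))                            # 8 random points in the plane
    D = np.square(P[:, None, :] - P[None, :, :]).sum(-1)
    D_eps = D + 1e-3 * rng.standard_normal(D.shape)
    D_eps = (D_eps + D_eps.T) / 2
    np.fill_diagonal(D_eps, 0.0)
    D_hat = nearest_edm(D_eps, r=2)
    print(np.linalg.norm(D_hat - D), "vs noise", np.linalg.norm(D_eps - D))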

45 Random Noisy Problems; r = 2, m = 9, nf = 1e-6
Using only Rigid Clique Union, preliminary results:
[Tables: remaining cliques, CPU seconds, and max-log-error, indexed by n and R; some max-log-error entries are Inf.]

46
- The SDP relaxation of SNL is highly (implicitly) degenerate: the feasible set of this SDP is restricted to a low-dimensional face of the SDP cone, causing the Slater constraint qualification (strict feasibility) to fail.
- We take advantage of this degeneracy by finding explicit representations of the intersections of faces of the SDP cone corresponding to unions of intersecting cliques.
- Without using an SDP solver (e.g. SeDuMi or SDPT3), we quickly compute the exact solution of the SDP relaxation.


49 References
[1] P. BISWAS, T.-C. LIANG, T.-C. WANG, and Y. YE, Semidefinite programming based algorithms for sensor network localization, ACM Trans. Sen. Netw. 2 (2006), no. 2.
[2] P. BISWAS, T.-C. LIANG, Y. YE, K.-C. TOH, and T.-C. WANG, Semidefinite programming approaches for sensor network localization with noisy distance measurements, IEEE Transactions on Automation Science and Engineering 3 (2006), no. 4.
[3] P. BISWAS and Y. YE, Semidefinite programming for ad hoc wireless sensor network localization, IPSN '04: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks (New York, NY, USA), ACM, 2004.

50
[4] P. BISWAS and Y. YE, A distributed method for solving semidefinite programs arising from ad hoc wireless sensor network localization, Multiscale Optimization Methods and Applications, Nonconvex Optim. Appl., vol. 82, Springer, New York, 2006.
[5] M.W. CARTER, H.H. JIN, M.A. SAUNDERS, and Y. YE, SpaseLoc: an adaptive subproblem algorithm for scalable wireless sensor network localization, SIAM J. Optim. 17 (2006), no. 4.
[6] A. CASSIOLI, Global optimization of highly multimodal problems, Ph.D. thesis, Università di Firenze, Dipartimento di Sistemi e Informatica, Firenze, Italy.
[7] K. CHAKRABARTY and S.S. IYENGAR, Springer, London, 2005.

51
[8] J. DATTORRO, Convex Optimization & Euclidean Distance Geometry, Meboo Publishing, USA.
[9] Y. DING, N. KRISLOCK, J. QIAN, and H. WOLKOWICZ, Sensor network localization, Euclidean distance matrix completions, and graph realization, Optimization and Engineering (2006), to appear.
[10] B. HENDRICKSON, The molecule problem: Determining conformation from pairwise distances, Ph.D. thesis, Cornell University.
[11] H. JIN, Scalable sensor localization algorithms for wireless sensor networks, Ph.D. thesis, University of Toronto, Toronto, Ontario, Canada, 2005.

52
[12] D.S. KIM, Sensor network localization based on natural phenomena, Ph.D. thesis, Dept. of Electrical Engineering and Computer Science, MIT.
[13] N. KRISLOCK and H. WOLKOWICZ, Explicit sensor network localization using semidefinite representations and clique reductions, Tech. Report CORR, University of Waterloo, Waterloo, Ontario, 2009.
[14] S. NAWAZ, Anchor free localization for ad-hoc wireless sensor networks, Ph.D. thesis, University of New South Wales.
[15] T.K. PONG and P. TSENG, (Robust) edge-based semidefinite programming relaxation of sensor network localization, Tech. Report Jan-09, University of Washington, Seattle, WA, 2009.

53
[16] K. ROMER, Time synchronization and localization in sensor networks, Ph.D. thesis, ETH Zurich.
[17] S. URABL, Cooperative localization in wireless sensor networks, Master's thesis, University of Klagenfurt, Klagenfurt, Austria.
[18] Z. WANG, S. ZHENG, S. BOYD, and Y. YE, Further relaxations of the semidefinite programming approach to sensor network localization, SIAM J. Optim. 19 (2008), no. 2.

54 Thanks for your attention!
Explicit Sensor Network Localization using Semidefinite Programming and Facial Reduction
Nathan Krislock and Henry Wolkowicz, Dept. of Combinatorics and Optimization, University of Waterloo
INFORMS, San Diego, Oct. 2009
