On SDP and ESDP Relaxation of Sensor Network Localization
Paul Tseng, Mathematics, University of Washington, Seattle
IASI, Roma, September 16, 2008 (joint work with Ting Kei Pong)
Talk Outline
- Sensor network localization
- SDP, ESDP relaxations: formulation and properties
- A robust version of ESDP to handle noise
- BCGD-barrier method
- Numerical simulations
- Conclusion & ongoing work
Sensor Network Localization

Basic problem: $n$ points in $\mathbb{R}^2$.
Known: the last $n-m$ points ("anchors") $x_{m+1}, \dots, x_n$; pairs of neighboring points, with a Euclidean distance estimate $d_{ij} \ge 0$ for each $(i,j) \in \mathcal{A}$, where $\mathcal{A} \subseteq \{(i,j) : 1 \le i < j \le n\}$.
Goal: estimate the first $m$ points ("sensors").

History: graph realization, position estimation in wireless sensor networks, ...
Optimization Problem Formulation

$$\upsilon_{\rm opt} := \min_{x_1,\dots,x_m} \ \sum_{(i,j)\in\mathcal{A}} \left| \,\|x_i - x_j\|^2 - d_{ij}^2\, \right|$$

The objective function is nonconvex, and $m$ can be large ($m > 1000$). The problem is NP-hard (by reduction from PARTITION). Approach: use a convex (SDP, SOCP) relaxation; low solution accuracy is acceptable; distributed methods are desirable.
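As a concrete illustration (not from the slides), the objective above can be evaluated for a candidate sensor configuration in a few lines of NumPy; the one-sensor, two-anchor instance below is hypothetical:

```python
import numpy as np

def localization_objective(X, anchors, A, d):
    """Sum over (i, j) in A of | ||p_i - p_j||^2 - d_ij^2 |,
    where p_k is sensor X[k] for k < m and anchor anchors[k - m] otherwise."""
    m = X.shape[0]
    pos = lambda k: X[k] if k < m else anchors[k - m]
    return sum(abs(np.sum((pos(i) - pos(j)) ** 2) - d[(i, j)] ** 2)
               for (i, j) in A)

# Toy instance: one sensor at the origin, two anchors at (+-1, 0),
# exact unit distances, so the optimal value is 0.
X = np.array([[0.0, 0.0]])
anchors = np.array([[1.0, 0.0], [-1.0, 0.0]])
A = [(0, 1), (0, 2)]
d = {(0, 1): 1.0, (0, 2): 1.0}
print(localization_objective(X, anchors, A, d))  # 0.0
```

Perturbing the sensor position makes the objective strictly positive, which is the quantity the relaxations below try to drive to zero.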
SDP Relaxation

Let $X := [x_1 \cdots x_m]$. Then
$$Z = \begin{bmatrix} X \\ I \end{bmatrix}^T \begin{bmatrix} X \\ I \end{bmatrix} \iff Z = \begin{bmatrix} Y & X^T \\ X & I \end{bmatrix} \succeq 0, \quad \operatorname{rank} Z = 2.$$

SDP relaxation (Biswas, Ye '03):
$$\upsilon_{\rm sdp} := \min_Z \ \sum_{(i,j)\in\mathcal{A},\, j>m} \left| Y_{ii} - 2x_j^T x_i + \|x_j\|^2 - d_{ij}^2 \right| + \sum_{(i,j)\in\mathcal{A},\, j\le m} \left| Y_{ii} - 2Y_{ij} + Y_{jj} - d_{ij}^2 \right|$$
$$\text{s.t. } Z = \begin{bmatrix} Y & X^T \\ X & I \end{bmatrix} \succeq 0.$$

Adding the nonconvex constraint $\operatorname{rank} Z = 2$ recovers the original problem. But the SDP relaxation is still expensive to solve when $m$ is large.
ESDP Relaxation

ESDP relaxation (Wang, Zheng, Boyd, Ye '06):
$$\upsilon_{\rm esdp} := \min_Z \ \sum_{(i,j)\in\mathcal{A},\, j>m} \left| Y_{ii} - 2x_j^T x_i + \|x_j\|^2 - d_{ij}^2 \right| + \sum_{(i,j)\in\mathcal{A},\, j\le m} \left| Y_{ii} - 2Y_{ij} + Y_{jj} - d_{ij}^2 \right|$$
$$\text{s.t. } Z = \begin{bmatrix} Y & X^T \\ X & I \end{bmatrix}, \quad
\begin{bmatrix} Y_{ii} & Y_{ij} & x_i^T \\ Y_{ij} & Y_{jj} & x_j^T \\ x_i & x_j & I \end{bmatrix} \succeq 0 \ \ \forall (i,j)\in\mathcal{A},\ j\le m, \quad
\begin{bmatrix} Y_{ii} & x_i^T \\ x_i & I \end{bmatrix} \succeq 0 \ \ \forall i \le m.$$

Then $0 \le \upsilon_{\rm esdp} \le \upsilon_{\rm sdp} \le \upsilon_{\rm opt}$. In simulations, ESDP is nearly as strong as SDP, and solvable much faster by interior-point methods.
An Example

$n = 3$, $m = 1$, $d_{12} = d_{13} = 2$, anchors at $(1,0)$ and $(-1,0)$.

Problem:
$$0 = \min_{x_1 \in \mathbb{R}^2} \ \left| \,\|x_1 - (1,0)\|^2 - 4\, \right| + \left| \,\|x_1 - (-1,0)\|^2 - 4\, \right|$$
SDP/ESDP relaxation of the example:
$$0 = \min_{x_1=(\alpha,\beta)\in\mathbb{R}^2,\ Y_{11}\in\mathbb{R}} \ \left| Y_{11} - 2\alpha - 3 \right| + \left| Y_{11} + 2\alpha - 3 \right| \quad \text{s.t. } \begin{bmatrix} Y_{11} & \alpha & \beta \\ \alpha & 1 & 0 \\ \beta & 0 & 1 \end{bmatrix} \succeq 0.$$

If the SDP/ESDP is solved by an interior-point method, one likely obtains the analytic center.
Properties of SDP & ESDP Relaxations

Assume each $i \le m$ is connected to some $j > m$ in the graph $(\{1,\dots,n\}, \mathcal{A})$.

Fact 0: Sol(SDP) and Sol(ESDP) are nonempty, closed, convex, and bounded. If $d_{ij} = \|x_i^{\rm true} - x_j^{\rm true}\|$ for all $(i,j) \in \mathcal{A}$ (the noiseless case, with $x_i^{\rm true} = x_i$ for $i > m$), then $\upsilon_{\rm opt} = \upsilon_{\rm sdp} = \upsilon_{\rm esdp} = 0$ and
$$Z^{\rm true} := \begin{bmatrix} X^{\rm true} \\ I \end{bmatrix}^T \begin{bmatrix} X^{\rm true} \\ I \end{bmatrix}$$
is a solution of both SDP and ESDP (i.e., $Z^{\rm true} \in \operatorname{Sol(SDP)} \cap \operatorname{Sol(ESDP)}$).
Let $\operatorname{tr}_i[Z] := Y_{ii} - \|x_i\|^2$, $i = 1,\dots,m$ (the $i$th trace).

Fact 1 (Biswas, Ye '03; T '07; Wang et al. '06): For each $i$,
$$\operatorname{tr}_i[Z] = 0 \text{ with } Z \in \operatorname{ri}(\operatorname{Sol(ESDP)}) \iff x_i \text{ is invariant over } \operatorname{Sol(ESDP)}$$
(so $x_i = x_i^{\rm true}$ in the noiseless case). This remains true with ESDP replaced by SDP.

Fact 2 (Pong, T '08): Suppose $\upsilon_{\rm opt} = 0$. For each $i$,
$$\operatorname{tr}_i[Z] = 0 \text{ for all } Z \in \operatorname{Sol(ESDP)} \iff x_i \text{ is invariant over } \operatorname{Sol(ESDP)}.$$
The proof is by induction, starting from sensors that neighbor anchors. (Q: Is this true for SDP?)
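The trace test is cheap to evaluate from a candidate $Z$; a minimal sketch (the positions and variable names are hypothetical, not from the talk's code):

```python
import numpy as np

def traces(Y, X):
    """tr_i[Z] = Y_ii - ||x_i||^2 for each sensor i; zero traces flag
    sensors whose positions are invariant over the solution set (Facts 1-2)."""
    return np.diag(Y) - np.sum(X ** 2, axis=1)

# If Z = [X; I]^T [X; I] exactly (i.e., Y = X X^T), every trace is zero.
X = np.array([[0.3, -0.2], [0.1, 0.4]])   # rows = sensor positions
Y = X @ X.T
print(np.allclose(traces(Y, X), 0.0))  # True
```

Inflating any diagonal entry of $Y$ above $\|x_i\|^2$ makes the corresponding trace positive, which is how an ill-determined sensor shows up.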
In practice there are measurement noises:
$$d_{ij}^2 = \|x_i^{\rm true} - x_j^{\rm true}\|^2 + \delta_{ij} \quad \forall (i,j) \in \mathcal{A}.$$
When $\delta := (\delta_{ij})_{(i,j)\in\mathcal{A}} \approx 0$, does $\operatorname{tr}_i[Z] = 0$ (with $Z \in \operatorname{ri}(\operatorname{Sol(ESDP)})$) imply $x_i \approx x_i^{\rm true}$?

No!

Fact 3 (Pong, T '08): For $\delta \approx 0$ and for each $i$, $\operatorname{tr}_i[Z] = 0$ with $Z \in \operatorname{ri}(\operatorname{Sol(ESDP)})$ does not imply $x_i \approx x_i^{\rm true}$. This remains true with ESDP replaced by SDP. The proof is by counter-example.
An example of the sensitivity of ESDP solutions to measurement noise. Input distance data, with $\epsilon > 0$:
$$d_{12} = \sqrt{4 + (1-\epsilon)^2}, \quad d_{13} = 1 + \epsilon, \quad d_{14} = 1 - \epsilon, \quad d_{25} = d_{26} = 2; \qquad m = 2,\ n = 6.$$
Thus, even when $Z \in \operatorname{Sol(ESDP)}$ is unique, $\operatorname{tr}_i[Z] = 0$ fails to certify the accuracy of $x_i$ in the noisy case!
Robust ESDP

Fix any $\rho_{ij} > |\delta_{ij}|$ for all $(i,j) \in \mathcal{A}$ (componentwise, $\rho > |\delta|$). Let $\operatorname{Sol}(\rho\text{ESDP})$ denote the set of $Z = \begin{bmatrix} Y & X^T \\ X & I \end{bmatrix}$ satisfying
$$\begin{bmatrix} Y_{ii} & Y_{ij} & x_i^T \\ Y_{ij} & Y_{jj} & x_j^T \\ x_i & x_j & I \end{bmatrix} \succeq 0 \quad \forall (i,j)\in\mathcal{A},\ j\le m, \qquad
\begin{bmatrix} Y_{ii} & x_i^T \\ x_i & I \end{bmatrix} \succeq 0 \quad \forall i\le m,$$
$$\left| Y_{ii} - 2x_j^T x_i + \|x_j\|^2 - d_{ij}^2 \right| \le \rho_{ij} \quad \forall (i,j)\in\mathcal{A},\ j>m, \qquad
\left| Y_{ii} - 2Y_{ij} + Y_{jj} - d_{ij}^2 \right| \le \rho_{ij} \quad \forall (i,j)\in\mathcal{A},\ j\le m.$$

Note: $Z^{\rm true} = \begin{bmatrix} X^{\rm true} \\ I \end{bmatrix}^T \begin{bmatrix} X^{\rm true} \\ I \end{bmatrix} \in \operatorname{Sol}(\rho\text{ESDP})$.
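To make the constraint set concrete, here is a sketch (on a made-up toy instance) of checking that $Z^{\rm true}$ satisfies the $\rho$ESDP constraints, testing each PSD block via its Schur complement:

```python
import numpy as np

def rho_esdp_feasible(X, anchors, A, d, rho, tol=1e-9):
    """Check Z = [X; I]^T [X; I] against the rho-ESDP constraints."""
    m = len(X)
    Y = X @ X.T
    for (i, j) in A:
        if j < m:  # sensor-sensor edge: 3x3-block PSD via Schur complement
            B = np.vstack([X[i], X[j]])
            S = np.array([[Y[i, i], Y[i, j]],
                          [Y[i, j], Y[j, j]]]) - B @ B.T
            if np.linalg.eigvalsh(S).min() < -tol:
                return False
            resid = Y[i, i] - 2 * Y[i, j] + Y[j, j] - d[(i, j)] ** 2
        else:      # sensor-anchor edge: 2x2-block PSD via Schur complement
            if Y[i, i] - X[i] @ X[i] < -tol:
                return False
            xj = anchors[j - m]
            resid = Y[i, i] - 2 * xj @ X[i] + xj @ xj - d[(i, j)] ** 2
        if abs(resid) > rho[(i, j)] + tol:   # interval constraint
            return False
    return True

# Toy instance: 2 sensors, 1 anchor; squared-distance noise |delta| = 0.01.
X = np.array([[0.0, 0.0], [0.2, 0.1]])
anchors = np.array([[0.5, 0.0]])
A = [(0, 1), (1, 2)]
d = {(0, 1): np.sqrt(0.05 + 0.01), (1, 2): np.sqrt(0.10 - 0.01)}
rho = {(0, 1): 0.02, (1, 2): 0.02}   # rho > |delta| componentwise
print(rho_esdp_feasible(X, anchors, A, d, rho))  # True
```

Shrinking any $\rho_{ij}$ below $|\delta_{ij}|$ makes $Z^{\rm true}$ infeasible, which is why the method needs a noise-level estimate.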
Let
$$Z^{\rho,\delta} := \arg\min_{Z \in \operatorname{Sol}(\rho\text{ESDP})} \ \left\{ -\sum_{i\le m} \ln\det \begin{bmatrix} Y_{ii} & x_i^T \\ x_i & I \end{bmatrix} - \sum_{(i,j)\in\mathcal{A},\, j\le m} \ln\det \begin{bmatrix} Y_{ii} & Y_{ij} & x_i^T \\ Y_{ij} & Y_{jj} & x_j^T \\ x_i & x_j & I \end{bmatrix} \right\}.$$

Fact 4 (Pong, T '08): There exist $\eta > 0$ and $\bar\rho > 0$ such that, for each $i$:
- $\operatorname{tr}_i[Z^{\rho,\delta}] < \eta$ for all $|\delta| < \rho \le \bar\rho e$ $\implies$ $\lim_{|\delta| < \rho \to 0} x_i^{\rho,\delta} = x_i^{\rm true}$;
- $\operatorname{tr}_i[Z^{\rho,\delta}] > \frac{\eta}{10}$ for all $|\delta| < \rho \le \bar\rho e$ $\implies$ $x_i$ is not invariant over $\operatorname{Sol(ESDP)}$ when $\delta = 0$.

Moreover, $\|x_i^{\rho,\delta} - x_i^{\rm true}\|^2 \le (|\mathcal{A}| + m)\,\operatorname{tr}_i[Z^{\rho,\delta}]$ whenever $|\delta| < \rho$.
BCGD-Barrier Method

How to compute $Z^{\rho,\delta}$? Let
$$h_a(t) := \tfrac12\left( (t-a)_+^2 + (-t-a)_+^2 \right)$$
and
$$f_\mu(Z) := \sum_{(i,j)\in\mathcal{A},\, j>m} h_{\rho_{ij}}\!\left( Y_{ii} - 2x_j^T x_i + \|x_j\|^2 - d_{ij}^2 \right) + \sum_{(i,j)\in\mathcal{A},\, j\le m} h_{\rho_{ij}}\!\left( Y_{ii} - 2Y_{ij} + Y_{jj} - d_{ij}^2 \right)$$
$$\qquad - \mu \sum_{i\le m} \ln\det \begin{bmatrix} Y_{ii} & x_i^T \\ x_i & I \end{bmatrix} - \mu \sum_{(i,j)\in\mathcal{A},\, j\le m} \ln\det \begin{bmatrix} Y_{ii} & Y_{ij} & x_i^T \\ Y_{ij} & Y_{jj} & x_j^T \\ x_i & x_j & I \end{bmatrix}.$$
For each $(i,j)\in\mathcal{A}$ with $j > m$ (resp. $j \le m$),
$$h_{\rho_{ij}}\!\left( Y_{ii} - 2x_j^T x_i + \|x_j\|^2 - d_{ij}^2 \right) = 0 \iff \left| Y_{ii} - 2x_j^T x_i + \|x_j\|^2 - d_{ij}^2 \right| \le \rho_{ij}$$
(resp. $h_{\rho_{ij}}( Y_{ii} - 2Y_{ij} + Y_{jj} - d_{ij}^2 ) = 0 \iff | Y_{ii} - 2Y_{ij} + Y_{jj} - d_{ij}^2 | \le \rho_{ij}$).

$f_\mu$ is partially separable, strictly convex, and differentiable on its domain. For each fixed $\rho > |\delta|$, $\arg\min f_\mu \to Z^{\rho,\delta}$ as $\mu \to 0$. In the noiseless case ($\delta = 0$), if we set $\rho = 0$, then $\arg\min f_\mu \to \bar Z$ as $\mu \to 0$, for some $\bar Z \in \operatorname{ri}(\operatorname{Sol(ESDP)})$.

Idea: approximately minimize $f_\mu$ by block-coordinate gradient descent (BCGD). (T, Yun '06)
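The dead-zone penalty $h_a$ is easy to verify numerically; a quick sketch:

```python
def h(t, a):
    """Dead-zone quadratic penalty: exactly zero for |t| <= a, with
    smooth quadratic growth outside, matching h_a(t) above."""
    return 0.5 * (max(t - a, 0.0) ** 2 + max(-t - a, 0.0) ** 2)

print(h(0.5, 1.0))   # 0.0  (inside the dead zone |t| <= a)
print(h(3.0, 1.0))   # 2.0  ((3 - 1)^2 / 2)
print(h(-2.0, 1.0))  # 0.5  ((2 - 1)^2 / 2)
```

The zero set $|t| \le a$ is what turns the interval constraints of $\rho$ESDP into an exact penalty term in $f_\mu$.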
BCGD-Barrier Method: Given $Z$ in $\operatorname{dom} f_\mu$, compute the gradient $\nabla_{Z_i} f_\mu$ of $f_\mu$ with respect to $Z_i := \{x_i, Y_{ii}, Y_{ij} : (i,j)\in\mathcal{A}\}$ for each $i$. If $\|\nabla_{Z_i} f_\mu\| \ge \max\{\mu, 10^{-6}\}$ for some $i$, update $Z_i$ by moving along the Newton direction $-\left(\nabla^2_{Z_i Z_i} f_\mu\right)^{-1} \nabla_{Z_i} f_\mu$ with the Armijo stepsize rule. Decrease $\mu$ when $\|\nabla_{Z_i} f_\mu\| < \max\{\mu, 10^{-6}\}$ for all $i$.

$\mu_{\rm initial} = 100$; $\mu$ is decreased by a factor of 10 each time, down to $\mu_{\rm final}$. Coded in Fortran. The computation distributes easily.
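The block-coordinate idea can be sketched on a toy smooth convex quadratic (an illustrative stand-in, not the talk's Fortran code; it takes plain gradient steps per block with Armijo backtracking, whereas the actual method scales by the per-block Hessian):

```python
import numpy as np

def bcgd(Q, b, blocks, tol=1e-8, max_iter=1000):
    """Minimize f(z) = 0.5 z^T Q z - b^T z by cycling over coordinate
    blocks, each updated along its negative block gradient with an
    Armijo backtracking stepsize."""
    f = lambda z: 0.5 * z @ Q @ z - b @ z
    z = np.zeros(len(b))
    for _ in range(max_iter):
        if np.linalg.norm(Q @ z - b) < tol:   # full gradient small: done
            break
        for blk in blocks:
            g = (Q @ z - b)[blk]              # gradient w.r.t. this block
            d = np.zeros_like(z)
            d[blk] = -g
            t, fz = 1.0, f(z)
            while f(z + t * d) > fz - 0.5 * t * (g @ g):  # Armijo rule
                t *= 0.5
            z = z + t * d
    return z

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])
z = bcgd(Q, b, blocks=[[0], [1]])
print(np.allclose(Q @ z, b, atol=1e-6))  # True (z solves Qz = b)
```

In the actual method each block $Z_i$ gathers one sensor's variables, so the per-block subproblems are small and can be processed in a distributed fashion.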
Simulation Results

Compare $\rho$ESDP solved by the BCGD-barrier method with ESDP solved by SeDuMi 1.05 (Sturm), with the interface to SeDuMi coded by Wang et al.

Anchors and sensors $x_1^{\rm true},\dots,x_n^{\rm true}$ are uniformly distributed in $[-0.5, 0.5]^2$, with $m = 0.9n$. $(i,j)\in\mathcal{A}$ whenever $\|x_i^{\rm true} - x_j^{\rm true}\| < rr$. Set
$$d_{ij} = \|x_i^{\rm true} - x_j^{\rm true}\| \cdot \left(1 + \mathrm{nf}\,\epsilon_{ij}\right)_+, \qquad \epsilon_{ij} \sim N(0,1).$$

Sensor $i$ is judged as accurately positioned if $\operatorname{tr}_i[Z^{\rm found}] < \mathrm{nf}$.
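The simulation setup above can be reproduced with a short generator (the function name, the seed, and the restriction of edges to pairs involving at least one sensor are my assumptions):

```python
import numpy as np

def make_instance(n, rr, nf, seed=0):
    """Random instance per the slide's setup: points uniform in
    [-0.5, 0.5]^2, first 90% are sensors, edges within radio range rr,
    distances with multiplicative Gaussian noise of factor nf."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-0.5, 0.5, size=(n, 2))
    m = int(0.9 * n)                       # first m points are sensors
    A, d = [], {}
    for i in range(m):                     # edges with at least one sensor
        for j in range(i + 1, n):
            dist = np.linalg.norm(pts[i] - pts[j])
            if dist < rr:
                A.append((i, j))
                # multiplicative noise (1 + nf * eps)_+, clipped at zero
                d[(i, j)] = max(dist * (1 + nf * rng.standard_normal()), 0.0)
    return pts, m, A, d

pts, m, A, d = make_instance(n=100, rr=0.3, nf=0.01)
print(m)  # 90
```

At $\mathrm{nf} = 0.01$ the noise is small, so each $d_{ij}$ stays close to the true distance and the clipping at zero essentially never triggers.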
Results ($\rho$ESDP by BCGD-barrier vs. ESDP by SeDuMi); several table entries were lost in transcription:

    rhoESDP BCGD-barrier          ESDP SeDuMi
    cpu / m_ap / err_ap           cpu (cpus) / m_ap / err_ap
       / 574  / 3.4e-4            189 (106) / 626  / 2.2e
       / 520  / 2.8e-3            170 (89)  / 624  / 3.1e
       / 517  / 1.1e-2            128 (48)  / 664  / 1.5e
       / 1626 / 1.7e                  (397) / 1689 / 2.7e
       / 1596 / 8.5e                  (503) / 1653 / 1.3e
       / 1602 / 6.2e                  (417) / 1689 / 1.2e-2

cpu (sec) times are on an HP DL360 workstation running Linux 3.5. ESDP is solved by SeDuMi; cpus := run time spent inside SeDuMi. Set $\rho_{ij} = d_{ij}^2\left((1 - 2\,\mathrm{nf})^{-2} - 1\right)$. $m_{\rm ap}$ := number of accurately positioned sensors. $\mathrm{err}_{\rm ap} := \max_{i \text{ accurately positioned}} \|x_i - x_i^{\rm true}\|$.
[Figure: 1000 sensors, 100 anchors, $rr = 0.06$, $\mathrm{nf} = 0.01$; $\rho$ESDP solved by BCGD-barrier. Each $x_i^{\rm true}$ and the corresponding $x_i^{\rho,\delta}$ are joined by a blue line segment; anchors are also marked.]
Conclusion & Ongoing Work

- SDP and ESDP solutions are sensitive to measurement noise and lack a solution-accuracy certificate (though the trace test works well enough in simulation).
- $\rho$ESDP has more stable solutions and has a solution-accuracy certificate (which works well enough in simulation). It needs an estimate of the noise level $\delta$ to set $\rho$.
- Open questions: can $\rho > |\delta|$ be relaxed? Approximation bounds? Extension to the max-min dispersion problem.

Thanks for coming!
Recent Developments of Alternating Direction Method of Multipliers with Multi-Block Variables Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong 2014 Workshop
More informationPreliminaries Overview OPF and Extensions. Convex Optimization. Lecture 8 - Applications in Smart Grids. Instructor: Yuanzhang Xiao
Convex Optimization Lecture 8 - Applications in Smart Grids Instructor: Yuanzhang Xiao University of Hawaii at Manoa Fall 2017 1 / 32 Today s Lecture 1 Generalized Inequalities and Semidefinite Programming
More informationA derivative-free nonmonotone line search and its application to the spectral residual method
IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral
More informationBeyond Convex Relaxation: A Polynomial Time Non Convex Optimization Approach to Network Localization
Beyond Convex Relaxation: A Polynomial Time Non Convex Optimization Approach to Network Localization Senshan Ji, Kam Fung Sze, Zirui Zhou, Anthony Man Cho So and Yinyu Ye Department of Systems Engineering
More informationContinuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation
Continuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation Peter J.C. Dickinson DMMP, University of Twente p.j.c.dickinson@utwente.nl http://dickinson.website/teaching/2017co.html version:
More informationWelfare Maximization with Friends-of-Friends Network Externalities
Welfare Maximization with Friends-of-Friends Network Externalities Extended version of a talk at STACS 2015, Munich Wolfgang Dvořák 1 joint work with: Sayan Bhattacharya 2, Monika Henzinger 1, Martin Starnberger
More informationFEL3330 Networked and Multi-Agent Control Systems. Lecture 11: Distributed Estimation
FEL3330, Lecture 11 1 June 8, 2011 FEL3330 Networked and Multi-Agent Control Systems Lecture 11: Distributed Estimation Distributed Estimation Distributed Localization where R X is the covariance of X.
More informationDouglas-Rachford splitting for nonconvex feasibility problems
Douglas-Rachford splitting for nonconvex feasibility problems Guoyin Li Ting Kei Pong Jan 3, 015 Abstract We adapt the Douglas-Rachford DR) splitting method to solve nonconvex feasibility problems by studying
More informationECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis
ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear
More informationarxiv: v1 [math.st] 10 Sep 2015
Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees Department of Statistics Yudong Chen Martin J. Wainwright, Department of Electrical Engineering and
More informationDistributed online optimization over jointly connected digraphs
Distributed online optimization over jointly connected digraphs David Mateos-Núñez Jorge Cortés University of California, San Diego {dmateosn,cortes}@ucsd.edu Mathematical Theory of Networks and Systems
More informationGradient-based Adaptive Stochastic Search
1 / 41 Gradient-based Adaptive Stochastic Search Enlu Zhou H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology November 5, 2014 Outline 2 / 41 1 Introduction
More informationImportance Sampling for Minibatches
Importance Sampling for Minibatches Dominik Csiba School of Mathematics University of Edinburgh 07.09.2016, Birmingham Dominik Csiba (University of Edinburgh) Importance Sampling for Minibatches 07.09.2016,
More informationOptimization. Benjamin Recht University of California, Berkeley Stephen Wright University of Wisconsin-Madison
Optimization Benjamin Recht University of California, Berkeley Stephen Wright University of Wisconsin-Madison optimization () cost constraints might be too much to cover in 3 hours optimization (for big
More informationELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications
ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March
More informationLecture 5. Theorems of Alternatives and Self-Dual Embedding
IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c
More informationPositive semidefinite matrix approximation with a trace constraint
Positive semidefinite matrix approximation with a trace constraint Kouhei Harada August 8, 208 We propose an efficient algorithm to solve positive a semidefinite matrix approximation problem with a trace
More informationProvable Alternating Minimization Methods for Non-convex Optimization
Provable Alternating Minimization Methods for Non-convex Optimization Prateek Jain Microsoft Research, India Joint work with Praneeth Netrapalli, Sujay Sanghavi, Alekh Agarwal, Animashree Anandkumar, Rashish
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationThe Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences
The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences Yuxin Chen Emmanuel Candès Department of Statistics, Stanford University, Sep. 2016 Nonconvex optimization
More information