A SEMIDEFINITE PROGRAMMING APPROACH TO DISTANCE-BASED COOPERATIVE LOCALIZATION IN WIRELESS SENSOR NETWORKS


A SEMIDEFINITE PROGRAMMING APPROACH TO DISTANCE-BASED COOPERATIVE LOCALIZATION IN WIRELESS SENSOR NETWORKS

By

NING WANG

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2010

© 2010 Ning Wang

I dedicate this thesis to my dear parents.

ACKNOWLEDGMENTS

Foremost, I would like to express my sincere gratitude to my advisor, Dr. Liuqing Yang, for her financial support, continuous encouragement, and academic guidance during the past one and a half years. Dr. Yang introduced me to the fascinating GPS-alternative indoor localization and navigation problems, helped me develop an understanding of this topic, and encouraged me to develop my immature ideas into well-organized research papers. I would also like to thank Dr. Dapeng Wu and Dr. Janise McNair for serving on my thesis committee. I would like to thank all members of the Signal processing, Communications and Networking (SCaN) Group of the University of Florida: Mr. Rui Cao, Mr. Robert Griffin, Dr. Huilin Xu, Dr. Fengzhong Qu, Mr. Xilin Cheng, Mr. Dongliang Duan, Ms. Wenshu Zhang, Mr. Bo Yu, and Mr. Pan Deng, for all the help they have offered me during the completion of this thesis. Thanks go to all my friends, both at the University of Florida and elsewhere, who have made my life full of happiness and growth. This thesis is dedicated to my parents, for everything they did for me.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Localization Systems
  1.2 Non-cooperative vs. Cooperative Localization
  1.3 Contributions

2 PRELIMINARY
  2.1 Problem Statement
  2.2 Popular Cooperative Localization Metrics
    2.2.1 Time-of-Arrival (ToA) Metric
    2.2.2 Received Signal Strength (RSS) Metric
    2.2.3 Other Metrics
    2.2.4 Metric Comparisons
  2.3 Semidefinite Programming (SDP) Algorithm

3 MINIMAX SDP LOCALIZATION ALGORITHM
  3.1 ToA
    3.1.1 Semidefinite Approximation for Target-Anchor ToA Constraints
    3.1.2 Semidefinite Approximation for Target-Target ToA Constraints
    3.1.3 ToA Minimax SDP Algorithm
  3.2 RSS
    3.2.1 Semidefinite Approximation for Target-Anchor RSS Constraints
    3.2.2 Semidefinite Approximation for Target-Target RSS Constraints
    3.2.3 RSS Minimax SDP Algorithm

4 INVESTIGATIONS INTO MINIMAX SDP ALGORITHMS
  4.1 Introduction
  4.2 ToA
    4.2.1 Rank-d ToA Minimax SDP Algorithms
      4.2.1.1 Standard SDP (SSDP)
      4.2.1.2 Further relaxations to SSDP
    4.2.2 Rank-N ToA Minimax SDP Algorithm
  4.3 RSS
    4.3.1 Rank-d RSS Minimax SDP Algorithms
      4.3.1.1 SSDP
      4.3.1.2 Further relaxations to SSDP
    4.3.2 Rank-N RSS Minimax SDP Algorithm
  4.4 Minimax SDP Algorithm Analysis
  4.5 Summary
    4.5.1 Semidefinite Cones
    4.5.2 Existing SDP Algorithm Comparisons

5 SIMULATIONS
  5.1 Minimax SDP Algorithm Comparisons
    5.1.1 ToA
    5.1.2 RSS
    5.1.3 Minimax SDP Algorithm Discussions
  5.2 A More Challenging Case
    5.2.1 Non-Cooperative vs. Cooperative Localization
      5.2.1.1 ToA
      5.2.1.2 RSS
    5.2.2 Virtual Anchors
      5.2.2.1 ToA
      5.2.2.2 RSS
  5.3 The Most Challenging Case
    5.3.1 Non-Cooperative vs. Cooperative Localization
      5.3.1.1 ToA
      5.3.1.2 RSS
    5.3.2 Further Discussions

6 CONCLUSIONS AND FUTURE WORK

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1 Localization metric comparisons
4-1 Semidefinite cones
4-2 SDP algorithm comparisons

LIST OF FIGURES

1-1 Localization system block diagram
1-2 Localization graph (a general case)
5-1 ToA cooperative localization results (a general case)
5-2 RSS cooperative localization results (a general case)
5-3 Localization graph (a more challenging case)
5-4 ToA localization results (a more challenging case)
5-5 RSS localization results (a more challenging case)
5-6 ToA localization results with virtual anchors
5-7 RSS localization results with virtual anchors
5-8 Localization graph (the most challenging case)
5-9 ToA localization results (the most challenging case)
5-10 RSS localization results (the most challenging case)

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

A SEMIDEFINITE PROGRAMMING APPROACH TO DISTANCE-BASED COOPERATIVE LOCALIZATION IN WIRELESS SENSOR NETWORKS

By Ning Wang

December 2010

Chair: Liuqing Yang
Major: Electrical and Computer Engineering

The semidefinite programming (SDP) approach has been widely used in recent years to convert nonconvex problems into convex ones. In this thesis, we apply the SDP approach to cooperative localization, where inter-target communication capability is exploited for the purpose of coverage extension and accuracy enhancement. Rank-d Time-of-Arrival (ToA) Minimax Component-wise SDP (CSDP) and rank-d Received Signal Strength (RSS) Minimax CSDP algorithms are proposed. Compared to Standard SDP (SSDP), the semidefinite cones of CSDP are thoroughly decentralized. Rank-d and rank-N algorithms are compared, showing that rank-d algorithms are more suitable for cooperative localization. Within the rank-d scheme, the SSDP, Edge-based SDP (ESDP), Node-based SDP (NSDP) and CSDP algorithms are compared and analyzed for both ToA and RSS metrics, demonstrating that CSDP can achieve satisfactory accuracy with reasonable complexity. Finally, we propose a virtual anchor concept to further improve the localization accuracy, especially in outside-of-the-convex-hull situations.

CHAPTER 1
INTRODUCTION

1.1 Localization Systems

GPS-alternative positioning and localization techniques are of vital importance to reliable location-aware sensor networks with low cost and low power consumption. In addition, GPS is unfeasible in heavily-foliaged or indoor environments. Recently, GPS-complementary location-aware sensor networks have attracted increasing interest in a wide range of applications such as earthquake detection, weather forecasting, current flow measurement, etc. [9]. Such networks typically consist of two types of sensor nodes: the anchors or base stations, whose positions are known to the system, and the targets, whose positions are unknown. The localization problem is to estimate the locations of the targets from the known positions of the anchors and the geometric relationships among the sensors. Figure 1-1 [1] is a block diagram illustrating the main components of a localization system.

Figure 1-1. Localization system block diagram (received signal → location sensing → location metrics such as ToA, AoA, RSS → positioning algorithm → location coordinates (x, y, z) → display system).

As shown in Figure 1-1, the localization system works as follows. Location metrics, such as distance or angle information, are extracted from the received waveforms. Then, the positioning algorithm incorporates these location metrics to determine a coordinate estimate.

1.2 Non-cooperative vs. Cooperative Localization

A major difference between traditional localization and localization in Wireless Sensor Networks (WSNs) is the cooperative scheme. In conventional wireless systems, such as cellular networks, target localization is realized in a non-cooperative manner, meaning that the position of each target is estimated only by the anchors inside its communication range. This requires a high density of anchors with self-positioning capability. Sensor networks, however, adopt an alternative cooperative mode (Figure 1-2; solid triangle: anchor, solid circle: target; blue solid line: non-cooperative target-anchor connection, red dashed line: cooperative target-target connection), in which inter-target communications are permitted, so that single-hop connections to anchors are no longer necessary and targets can even utilize location information from anchors outside of their communication range. While not all targets can estimate their locations non-cooperatively, all of them may cooperatively locate themselves. Cooperative localization can not only reduce the density of expensive anchors but also provide improved localization accuracy [11, 15]. However, exploiting cooperative information among targets requires more computational resources, which are already precious. To address this problem, a semidefinite programming (SDP) technique [3, 4, 10] is adopted to reduce the computational complexity.

1.3 Contributions

Standard SDP (SSDP) has been applied to non-cooperative Time-of-Arrival (ToA) localization problems in [5, 8], and to non-cooperative ToA and Received Signal Strength (RSS) localization problems in [10]. ToA cooperative localization with the SSDP approach was analyzed in [3]. As the complexity of SDP is at least $O(d^3)$, where $d$ is the dimension of the semidefinite cones [14], an SDP algorithm that relaxes a set of low-dimensional cones instead of a single high-dimensional cone is computationally desirable. In [14], Edge-based SDP (ESDP) and Node-based SDP (NSDP) algorithms with the ToA metric were developed to further reduce the complexity, especially in large-scale sensor networks.

Figure 1-2. Localization graph (a general case).

In this thesis, we apply the SDP algorithm to the cooperative scenario, where inter-target communication capability is exploited for the purpose of performance enhancement. In addition, by utilizing a minimax relaxation, our approach is also applicable to the RSS metric. In Chapter 3, rank-d ToA Minimax Component-wise SDP (CSDP) and rank-d RSS Minimax CSDP algorithms are proposed [13]. This component-wise semidefinite relaxation combined with the minimax relaxation scheme is efficient and still retains crucial theoretical properties of the SSDP proposed in [3]. Compared to SSDP, the semidefinite cones of CSDP are thoroughly decentralized. In Chapter 4, rank-d and rank-N algorithms are compared, showing that rank-d algorithms are more applicable to cooperative localization. Within the rank-d scheme, the SSDP, ESDP, NSDP and CSDP algorithms are compared and analyzed for both ToA and RSS metrics, demonstrating that CSDP

can achieve satisfactory accuracy with reasonable complexity. Simulation results are shown in Chapter 5. With noisy measurements, cooperative localization may occasionally suffer from flip ambiguity. In order to solve this problem and achieve higher accuracy, we propose a virtual anchor concept. A location-unaware target that can communicate with more than the minimum number¹ of location-aware anchors can be upgraded to a virtual anchor once its location is determined. Our simulation results demonstrate that virtual anchors can be used to improve the localization accuracy, especially in outside-of-the-convex-hull situations.

¹ Theoretically, to locate a target, at least 3 noncollinear anchors are needed in 2-D cases, and 4 noncoplanar anchors in 3-D cases.

CHAPTER 2
PRELIMINARY

2.1 Problem Statement

In a sensor network localization system with $M$ anchors and $N$ targets in a $d$-dimensional real Euclidean space $\mathbb{R}^d$, the locations of the anchors constitute a known vector set $V_a := \{a_1, a_2, \ldots, a_M\}$, and the locations of the targets form an unknown vector set $V_x := \{x_1, x_2, \ldots, x_N\}$, with all vectors $d$-dimensional (usually, $d = 2$ or $3$); define $V := V_a \cup V_x$. Furthermore, we are given Euclidean distances $d_{mn}$ between $a_m$ and $x_n$ for some $(m, n)$, and $\bar{d}_{ij}$ between $x_i$ and $x_j$ for some $(i, j)$. Specifically, let $E_a := \{(m, n) : a_m \in V_a,\ x_n \in V_x,\ \text{and } d_{mn} \text{ is specified}\}$, $E_x := \{(i, j) : x_i, x_j \in V_x,\ \bar{d}_{ij} \text{ is specified and } i < j\}$, and $E := E_a \cup E_x$. The localization problem in $\mathbb{R}^d$ for the graph $G := (V, E)$ is to determine the coordinates of the unknown target positions $V_x$ from the known anchor positions $V_a$ and the partial distance measurements $E$.

Notation: $|\cdot|$ denotes the absolute value and $\|\cdot\|$ denotes the $\ell_2$ vector norm. $\mathrm{tr}\{\cdot\}$ is the trace of a square matrix and $(\cdot)^T$ is the matrix transpose operator. Boldface lowercase letters denote vectors and boldface uppercase letters denote matrices. Specifically, $\mathbf{0}$ denotes an all-zero vector, $e_i$ denotes an $N$-dimensional vector with all entries zero except a unit entry at the $i$-th position, and $I_{d\times d}$ denotes a $d \times d$ identity matrix. Vector and matrix dimensions will be clear from the context and will be specified whenever necessary. $Y \succeq 0$ means that the square matrix $Y$ is positive semidefinite (PSD). $Y_{ij}$ denotes the $(i, j)$-th entry of matrix $Y$. $Z_{(i_1, \ldots, i_k)}$ denotes the principal submatrix of $Z$ formed from its rows and columns indexed by $i_1, \ldots, i_k$.

2.2 Popular Cooperative Localization Metrics

2.2.1 Time-of-Arrival (ToA) Metric

Ultrawide-band ToA localization has the potential to generate very accurate position estimates with centimeter resolution, and adopting a two-way transmission

technique can avoid anchor-target synchronization issues; ToA localization has therefore become increasingly popular. In this model, target-anchor distances and target-target distances are measured via ToA, so long as the node pairs are within the radio range of each other. Suppose there are $\bar{K}$ available distances, out of which $K$ distances are between targets and anchors, and the remaining distances are among targets. Without loss of generality, we can denote these distances as
\[
d_{mn}^2 = \|a_m - x_n\|^2 + n_{mn}, \quad (m, n) \in E_a, \tag{2-1}
\]
and
\[
\bar{d}_{ij}^2 = \|x_i - x_j\|^2 + n_{ij}, \quad (i, j) \in E_x \text{ and } i < j, \tag{2-2}
\]
where the noises $n_{mn}$ and $n_{ij}$ are i.i.d. Gaussian with zero mean and variance $\sigma^2$. Denote the residue vector as $\varepsilon = [\varepsilon_1, \ldots, \varepsilon_K, \varepsilon_{K+1}, \ldots, \varepsilon_{\bar{K}}] = [\ldots, \varepsilon_{mn}, \ldots, \varepsilon_{ij}, \ldots] = [\ldots, d_{mn}^2 - \|a_m - x_n\|^2, \ldots, \bar{d}_{ij}^2 - \|x_i - x_j\|^2, \ldots]$, and $X_{d\times N} = [x_1, x_2, \ldots, x_N]$; then the conditional probability density of the distances is
\[
p(d_{mn}^2, \ldots, \bar{d}_{ij}^2, \ldots \mid X) = \frac{1}{(2\pi\sigma^2)^{\bar{K}/2}} \exp\!\left( -\frac{\sum_{(m,n)\in E_a} \varepsilon_{mn}^2 + \sum_{(i,j)\in E_x,\, i<j} \varepsilon_{ij}^2}{2\sigma^2} \right), \tag{2-3}
\]
where $\sum_{(m,n)\in E_a} \varepsilon_{mn}^2$ represents the traditional non-cooperative information and $\sum_{(i,j)\in E_x,\, i<j} \varepsilon_{ij}^2$ represents the cooperative information. The Maximum Likelihood (ML) estimate of $X$ is
\[
\hat{X} = \arg\min_X \left( \sum_{(m,n)\in E_a} \varepsilon_{mn}^2 + \sum_{(i,j)\in E_x,\, i<j} \varepsilon_{ij}^2 \right) = \arg\min_X \|\varepsilon\|^2. \tag{2-4}
\]
Note that $\hat{X}$ is optimal in the ML sense.
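To make the measurement model and the ML objective concrete, the following Python/NumPy sketch generates noisy squared-distance (ToA) measurements for a small hypothetical 2-D network and evaluates the residual norm of (2-4) for a candidate estimate. The network layout, noise level, and variable names are illustrative assumptions of this sketch, not the thesis's simulation setup (the thesis's own experiments use MATLAB/CVX).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                      # noise standard deviation (illustrative assumption)
d = 2                            # dimension of the Euclidean space

# Hypothetical network: 3 anchors (V_a) and 2 targets (V_x)
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # rows are a_1, ..., a_M
targets = np.array([[1.0, 1.0], [2.5, 2.0]])               # rows are the true x_1, ..., x_N
E_a = [(0, 0), (1, 0), (2, 0), (1, 1), (2, 1)]             # target-anchor edges (m, n), 0-based
E_x = [(0, 1)]                                             # target-target edges (i, j), i < j

# Noisy squared distances, Eqs. (2-1) and (2-2)
d2_a = {(m, n): np.sum((anchors[m] - targets[n]) ** 2) + sigma * rng.standard_normal()
        for (m, n) in E_a}
d2_x = {(i, j): np.sum((targets[i] - targets[j]) ** 2) + sigma * rng.standard_normal()
        for (i, j) in E_x}

def ml_objective(X):
    """Squared residual norm ||eps||^2 of Eq. (2-4) for a candidate d x N estimate X."""
    eps_a = [d2_a[m, n] - np.sum((anchors[m] - X[:, n]) ** 2) for (m, n) in E_a]
    eps_x = [d2_x[i, j] - np.sum((X[:, i] - X[:, j]) ** 2) for (i, j) in E_x]
    return np.sum(np.square(eps_a)) + np.sum(np.square(eps_x))

# At the true locations the residuals reduce to the noise terms, so the objective is small
print(ml_objective(targets.T))
```

The edge lists and measurement dictionaries defined here are reused by the optimization sketches in Chapter 3.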

The optimization problem (2-4) is highly nonlinear and nonconvex in $X$, and thus difficult to solve. In this thesis, we adopt a semidefinite programming (SDP) approach to reduce the computational complexity.

2.2.2 Received Signal Strength (RSS) Metric

The RSS metric provides a low-cost solution that avoids additional hardware installation. However, the distance estimates obtained with this technique lack accuracy due to their reliance on path loss models and the indirect relationship between received power and distance; that is, the performance of this technique depends on the accuracy of the path loss model. Empirically, training is required to obtain the path loss parameter $\beta$, which only represents the average path loss effect. In a practical environment, different sensors may have significantly different $\beta$ values. This is a well-recognized drawback of the RSS metric, especially if the path loss parameters vary over time [1, 10]. In the ideal case, each target node emits a signal with power $P$, and anchor nodes within the radio range receive a signal with energy strength
\[
s_{mn} = \frac{P}{\|a_m - x_n\|^{\beta}}, \quad (m, n) \in E_a, \tag{2-5}
\]
while target nodes within the radio range receive a signal with energy strength
\[
s_{ij} = \frac{P}{\|x_i - x_j\|^{\beta}}, \quad (i, j) \in E_x \text{ and } i < j, \tag{2-6}
\]
where $\beta$ is the path loss coefficient. Under lognormal fading, the RSS in dB can be modeled as
\[
10 \log s_{mn} = 10 \log P - 10\beta \log(\|a_m - x_n\|) + n_{mn}, \quad (m, n) \in E_a, \tag{2-7}
\]
for anchor nodes and
\[
10 \log s_{ij} = 10 \log P - 10\beta \log(\|x_i - x_j\|) + n_{ij}, \quad (i, j) \in E_x \text{ and } i < j, \tag{2-8}
\]

for target nodes, where $n_{mn}$ and $n_{ij}$ are assumed to be i.i.d. Gaussian noises with zero mean and variance $\sigma^2$. Denote the residue vector as $\epsilon = [\epsilon_1, \ldots, \epsilon_K, \epsilon_{K+1}, \ldots, \epsilon_{\bar{K}}] = [\ldots, \epsilon_{mn}, \ldots, \epsilon_{ij}, \ldots] = [\ldots, \ln s_{mn} - \ln\frac{P}{\|a_m - x_n\|^{\beta}}, \ldots, \ln s_{ij} - \ln\frac{P}{\|x_i - x_j\|^{\beta}}, \ldots]$, and $X = [x_1, x_2, \ldots, x_N]$; then the conditional probability density of the RSS in dB is
\[
p(\ln s_{mn}, \ldots, \ln s_{ij}, \ldots \mid X) = \frac{1}{\left(2\pi \left(\tfrac{\sigma \ln 10}{10}\right)^2\right)^{\bar{K}/2}} \exp\!\left(-\frac{\sum_{(m,n)\in E_a} \epsilon_{mn}^2 + \sum_{(i,j)\in E_x,\, i<j} \epsilon_{ij}^2}{2\left(\tfrac{\sigma \ln 10}{10}\right)^2}\right), \tag{2-9}
\]
where $\sum_{(m,n)\in E_a} \epsilon_{mn}^2$ represents the traditional non-cooperative information and $\sum_{(i,j)\in E_x,\, i<j} \epsilon_{ij}^2$ represents the cooperative information. The ML estimate of $X$ can be obtained as
\[
\hat{X} = \arg\min_X \left(\sum_{(m,n)\in E_a} \epsilon_{mn}^2 + \sum_{(i,j)\in E_x,\, i<j} \epsilon_{ij}^2\right) = \arg\min_X \|\epsilon\|^2. \tag{2-10}
\]
Note that $\hat{X}$ is optimal in the ML sense. The optimization problem (2-10) is highly nonlinear and nonconvex in $X$, and thus difficult to solve. Similar to the ToA case, we adopt an SDP approach to reduce the computational complexity.

2.2.3 Other Metrics

Time-Difference-of-Arrival (TDoA) has the advantage of only requiring anchor-anchor synchronization, thus avoiding impractical target-anchor synchronization. However, cooperative TDoA necessarily involves synchronization among targets and anchors, which makes TDoA lose this advantage; hence TDoA is not well suited to cooperative localization, and a cooperative TDoA algorithm is not covered in this thesis. Note that even though purely TDoA-based cooperative localization is not reasonable, a mixed ToA and TDoA cooperative localization technique is indeed applicable. The Angle-of-Arrival (AoA) metric is accurate, but can be very expensive. Unlike ToA, RSS and TDoA, which are all distance-based, AoA is not distance-based and is thus beyond the scope of this thesis. Interested readers can refer to [2] for details.

2.2.4 Metric Comparisons

The advantages and disadvantages of these popular localization metrics are summarized in Table 2-1.

Table 2-1. Localization metric comparisons
Location metric | Advantage      | Disadvantage
ToA             | Accurate       | Expensive
TDoA            | Not applicable | Not applicable
RSS             | Cheap          | Not accurate
AoA             | Accurate       | Can be very expensive

Considering accuracy, hardware cost and power consumption constraints, target localization usually adopts the ToA and RSS metrics.

2.3 Semidefinite Programming (SDP) Algorithm

It has long been recognized that convex optimization problems are tractable, since the theory of convex optimization is far more complete and straightforward than that of general nonlinear optimization [4]. As Rockafellar stated in his 1993 SIAM Review survey paper [12], "In fact the great watershed in optimization isn't between linearity and nonlinearity, but convexity and nonconvexity." SDP is a powerful programming approach that makes the above statement truly convincing. In SDP, the nonconvex equality $Y = X^T X$ is relaxed to the convex condition $Y \succeq X^T X$, which, by the Schur complement theorem [7], is equivalent to
\[
\begin{bmatrix} I_{d\times d} & X_{d\times N} \\ X^T_{N\times d} & Y_{N\times N} \end{bmatrix} \succeq 0.
\]
Instead of searching over the original nonlinear and nonconvex curves, the SDP approach searches over the relaxed, still nonlinear, but convex semidefinite cones. A global search is a must for solving nonconvex optimization problems, whereas local minima/maxima are guaranteed to be global in convex optimization problems. Thus, semidefinite relaxation has the potential to significantly reduce the computational complexity by avoiding a time-consuming global search. Optimization research has shown that interior-point methods, previously only used to solve linear

programming problems, are able to solve SDP problems with a number of operations that grows polynomially with the problem dimension and with $\log(1/\epsilon)$, where $\epsilon > 0$ is the desired accuracy [4]. This low polynomial-time complexity of interior-point methods for SDP is in sharp contrast to general nonlinear problems, which require a number of operations that is exponential in the problem dimensions, and it has made SDP solvers increasingly popular both in theory and in practice. In this thesis, the CVX package [6] is used to solve the SDP localization problems.
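As a concrete illustration of this relaxation, here is a minimal CVXPY sketch (the thesis itself uses the MATLAB CVX package [6]; CVXPY is assumed here only because the examples in this rewrite are in Python). It imposes $Y \succeq X^T X$ through the Schur-complement block matrix; the dimensions, pinned column, and toy objective are assumptions chosen only to make the problem well posed.

```python
import numpy as np
import cvxpy as cp

d, N = 2, 3
X = cp.Variable((d, N))
Y = cp.Variable((N, N), symmetric=True)

# Schur complement: Y >= X^T X  <=>  [[I_d, X], [X^T, Y]] is positive semidefinite
Z = cp.bmat([[np.eye(d), X], [X.T, Y]])
constraints = [Z >> 0, X[:, 0] == np.array([1.0, 2.0])]   # pin one column for illustration

# Toy objective: keep the relaxed Gram matrix as small as possible
prob = cp.Problem(cp.Minimize(cp.trace(Y)), constraints)
prob.solve()                              # any SDP-capable solver (e.g., SCS) will do

# The relaxation is tight here: Y[0, 0] matches ||x_1||^2 = 5 up to solver tolerance
print(Y.value[0, 0], X.value[:, 0] @ X.value[:, 0])
```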

CHAPTER 3
MINIMAX SDP LOCALIZATION ALGORITHM

3.1 ToA

To solve the ToA localization problem more efficiently with the SDP approach, we first replace $\hat{X}$ in (2-4) by its minimax approximation
\[
\tilde{X} = \arg\min_X \max_{k=1,2,\ldots,\bar{K}} |\varepsilon_k|. \tag{3-1}
\]
This approximation is supported by the commonly accepted equivalence between the $\ell_2$ and $\ell_\infty$ vector norms [7]. Eq. (3-1) can be equivalently reformulated as
\[
\min \ t \quad \text{s.t.} \quad -t < \varepsilon_k < t, \quad k = 1, 2, \ldots, \bar{K}. \tag{3-2}
\]
Then
\[
\min \ t \quad \text{s.t.} \quad -t < \|a_m - x_n\|^2 - d_{mn}^2 < t, \ (m, n) \in E_a; \quad -t < \|x_i - x_j\|^2 - \bar{d}_{ij}^2 < t, \ (i, j) \in E_x \text{ and } i < j. \tag{3-3}
\]

3.1.1 Semidefinite Approximation for Target-Anchor ToA Constraints

\[
\min \ t \quad \text{s.t.} \quad -t < \|a_m - x_n\|^2 - d_{mn}^2 < t, \quad (m, n) \in E_a. \tag{3-4}
\]
Eq. (3-4) can be written as
\[
\min \ t \quad \text{s.t.} \quad -t < \begin{pmatrix} a_m^T & -1 \end{pmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & x_n^T x_n \end{bmatrix} \begin{pmatrix} a_m \\ -1 \end{pmatrix} - d_{mn}^2 < t, \quad (m, n) \in E_a. \tag{3-5}
\]

Using the basic property $x^T A x = \mathrm{tr}\{x x^T A\}$, (3-5) can be written in the equivalent matrix form
\[
\min \ t \quad \text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{bmatrix} a_m a_m^T & -a_m \\ -a_m^T & 1 \end{bmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & x_n^T x_n \end{bmatrix} \right\} - d_{mn}^2 < t, \quad (m, n) \in E_a. \tag{3-6}
\]
To illustrate the idea of SDP, we denote $Y_{nn} = x_n^T x_n$ and use semidefinite relaxation to approximate the nonconvex equality constraint $Y_{nn} = x_n^T x_n$ by the disciplined convex form $\begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \succeq 0$; then (3-6) can be reformulated as
\[
\min \ t \quad \text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{bmatrix} a_m a_m^T & -a_m \\ -a_m^T & 1 \end{bmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \right\} - d_{mn}^2 < t, \quad \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \succeq 0, \quad (m, n) \in E_a. \tag{3-7}
\]
In the optimization literature, (3-7) is a legitimate SDP problem, for which efficient software packages are available.

3.1.2 Semidefinite Approximation for Target-Target ToA Constraints

\[
\min \ t \quad \text{s.t.} \quad -t < \|x_i - x_j\|^2 - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j. \tag{3-8}
\]
Eq. (3-8) is equivalent to
\[
\min \ t \quad \text{s.t.} \quad -t < (x_i - x_j)^T (x_i - x_j) - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j. \tag{3-9}
\]

Similarly, denoting $Y_{ij} = x_i^T x_j$, which implies $Y_{ji} = x_j^T x_i$, we have
\[
\min \ t \quad \text{s.t.} \quad -t < \begin{pmatrix} 0_{1\times d} & 1 & -1 \end{pmatrix} \begin{bmatrix} I_{d\times d} & x_i & x_j \\ x_i^T & Y_{ii} & Y_{ij} \\ x_j^T & Y_{ji} & Y_{jj} \end{bmatrix} \begin{pmatrix} 0_{d\times 1} \\ 1 \\ -1 \end{pmatrix} - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j. \tag{3-10}
\]
Using the basic property $x^T A x = \mathrm{tr}\{x x^T A\}$, (3-10) can be written in the equivalent matrix form
\[
\min \ t \quad \text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{pmatrix} 0_{d\times 1} \\ 1 \\ -1 \end{pmatrix} \begin{pmatrix} 0_{1\times d} & 1 & -1 \end{pmatrix} \begin{bmatrix} I_{d\times d} & x_i & x_j \\ x_i^T & Y_{ii} & Y_{ij} \\ x_j^T & Y_{ji} & Y_{jj} \end{bmatrix} \right\} - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j. \tag{3-11}
\]
Applying the semidefinite relaxation technique and expanding the matrix expressions, we get
\[
\min \ t \quad \text{s.t.} \quad -t < Y_{ii} - Y_{ij} - Y_{ji} + Y_{jj} - \bar{d}_{ij}^2 < t, \quad \begin{bmatrix} I_{d\times d} & x_i & x_j \\ x_i^T & Y_{ii} & Y_{ij} \\ x_j^T & Y_{ji} & Y_{jj} \end{bmatrix} \succeq 0, \quad (i, j) \in E_x \text{ and } i < j. \tag{3-12}
\]
Note that for the target-target constraints, relaxing the equality $Y_{ij} = x_i^T x_j$ directly with a lower-dimensional cone would be computationally more efficient, but that relaxation yields poor accuracy performance due to over-relaxation and the loss of the target-target distance constraints. Semidefinite relaxation of the target-target constraints over the higher-dimensional matrices above, however, preserves the relevant information with reasonable computational complexity.

3.1.3 ToA Minimax SDP Algorithm

Putting the pieces together, the ToA minimax SDP algorithm is
\[
\min_{t,\, X,\, \{Y_{nn}\}_{\forall n},\, \{Y_{ij}\}_{(i,j)\in E_x,\, i<j}} \ t \tag{3-13}
\]
\[
\text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{bmatrix} a_m a_m^T & -a_m \\ -a_m^T & 1 \end{bmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \right\} - d_{mn}^2 < t, \quad (m, n) \in E_a,
\]
\[
-t < Y_{ii} - Y_{ij} - Y_{ji} + Y_{jj} - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
\begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \succeq 0 \ \ \forall n, \qquad \begin{bmatrix} I_{d\times d} & x_i & x_j \\ x_i^T & Y_{ii} & Y_{ij} \\ x_j^T & Y_{ji} & Y_{jj} \end{bmatrix} \succeq 0 \ \ \forall (i, j) \in E_x,
\]
where $\{Y_{nn}\}_{\forall n}$ and $\{Y_{ij}\}_{(i,j)\in E_x,\, i<j}$¹ are the sets of nuisance scalar variables introduced by the semidefinite relaxation, and $Y_{ij} = Y_{ji}$ due to the symmetry constraints. Note that the per-target cone constraint of CSDP is redundant for those $i$ such that there exists $j$ with $(i, j) \in E_x$ and $i < j$ (i.e., target $x_i$ has other target(s) as neighbor(s)), since it is then implied by the per-edge cone constraint. Once the solution of (3-13) is found, $X$ is taken out as the semidefinite-approximated location estimate $\tilde{X}$.

¹ We suppose all targets are localizable. Localizability itself is a challenging research area that is beyond the scope of this thesis.
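The algorithm (3-13) translates almost line by line into a solver call. Below is a CVXPY sketch of the ToA minimax CSDP using the expanded form of the trace constraints; the thesis's own implementation uses MATLAB/CVX, so treat this as an illustrative re-implementation under the same formulation, with the network data (anchors, squared-distance dictionaries, and edge lists) assumed to be given as in the Chapter 2 measurement sketch.

```python
import numpy as np
import cvxpy as cp

def csdp_cones(X, Y, E_x, N, d):
    """Component-wise cones of (3-13): one (d+1)-cone per target, one (d+2)-cone per edge."""
    def col(n):
        return cp.reshape(X[:, n], (d, 1))
    cones = []
    for n in range(N):
        cones.append(cp.bmat([[np.eye(d), col(n)],
                              [col(n).T, cp.reshape(Y[n, n], (1, 1))]]) >> 0)
    for (i, j) in E_x:
        cols = cp.hstack([col(i), col(j)])
        Ysub = cp.bmat([[cp.reshape(Y[i, i], (1, 1)), cp.reshape(Y[i, j], (1, 1))],
                        [cp.reshape(Y[j, i], (1, 1)), cp.reshape(Y[j, j], (1, 1))]])
        cones.append(cp.bmat([[np.eye(d), cols], [cols.T, Ysub]]) >> 0)
    return cones

def toa_minimax_csdp(anchors, d2_a, d2_x, E_a, E_x, N, d=2):
    """ToA minimax CSDP (3-13); returns the d x N semidefinite-approximated estimate."""
    X = cp.Variable((d, N))
    Y = cp.Variable((N, N), symmetric=True)    # relaxed entries Y_ij approximating x_i^T x_j
    t = cp.Variable()
    cons = csdp_cones(X, Y, E_x, N, d)

    # Target-anchor constraints: | ||a_m||^2 - 2 a_m^T x_n + Y_nn - d_mn^2 | <= t
    for (m, n) in E_a:
        r = anchors[m] @ anchors[m] - 2 * anchors[m] @ X[:, n] + Y[n, n] - d2_a[m, n]
        cons += [r <= t, r >= -t]
    # Target-target constraints: | Y_ii - 2 Y_ij + Y_jj - d_ij^2 | <= t
    for (i, j) in E_x:
        r = Y[i, i] - 2 * Y[i, j] + Y[j, j] - d2_x[i, j]
        cons += [r <= t, r >= -t]

    cp.Problem(cp.Minimize(t), cons).solve()
    return X.value
```

With the hypothetical data from the Chapter 2 sketch, `toa_minimax_csdp(anchors, d2_a, d2_x, E_a, E_x, N=2)` directly returns the estimate, mirroring the step of taking out $X$ as $\tilde{X}$.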

3.2 RSS

We first reformulate the original ML problem (2-10) into the minimax problem
\[
\tilde{X} = \arg\min_X \max_{k=1,2,\ldots,\bar{K}} |\epsilon_k|, \tag{3-14}
\]
and transform (3-14) into its equivalent form
\[
\min \ t \quad \text{s.t.} \quad -t < \epsilon_k < t, \quad k = 1, 2, \ldots, \bar{K}. \tag{3-15}
\]
Denote $q_{mn} = \left(\tfrac{s_{mn}}{P}\right)^{2/\beta}$, $(m, n) \in E_a$, and $q_{ij} = \left(\tfrac{s_{ij}}{P}\right)^{2/\beta}$, $(i, j) \in E_x$ and $i < j$; then
\[
\min \ t \quad \text{s.t.} \quad -t < \ln\!\left(q_{mn} \|a_m - x_n\|^2\right) < t, \ (m, n) \in E_a; \quad -t < \ln\!\left(q_{ij} \|x_i - x_j\|^2\right) < t, \ (i, j) \in E_x \text{ and } i < j. \tag{3-16}
\]

3.2.1 Semidefinite Approximation for Target-Anchor RSS Constraints

Using the semidefinite relaxation technique, we obtain the new problem
\[
\min \ t \quad \text{s.t.} \quad -t < \ln\!\left( q_{mn} \, \mathrm{tr}\!\left\{ \begin{bmatrix} a_m a_m^T & -a_m \\ -a_m^T & 1 \end{bmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \right\} \right) < t, \quad \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \succeq 0, \quad (m, n) \in E_a. \tag{3-17}
\]
An approximation can be made to further simplify the first constraint in (3-17): since $\ln(u) \simeq u - 1$ as $u \to 1$, where $\simeq$ denotes asymptotic equivalence in the first-order infinitesimal sense, and since this condition is satisfied by feasible SDP minima, (3-17) is equivalent to

\[
\min \ t \quad \text{s.t.} \quad -t < q_{mn} \, \mathrm{tr}\!\left\{ \begin{bmatrix} a_m a_m^T & -a_m \\ -a_m^T & 1 \end{bmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \right\} - 1 < t, \quad \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \succeq 0, \quad (m, n) \in E_a. \tag{3-18}
\]

3.2.2 Semidefinite Approximation for Target-Target RSS Constraints

Similarly,
\[
\min \ t \quad \text{s.t.} \quad -t < q_{ij} \left( Y_{ii} - Y_{ij} - Y_{ji} + Y_{jj} \right) - 1 < t, \quad \begin{bmatrix} I_{d\times d} & x_i & x_j \\ x_i^T & Y_{ii} & Y_{ij} \\ x_j^T & Y_{ji} & Y_{jj} \end{bmatrix} \succeq 0, \quad (i, j) \in E_x \text{ and } i < j. \tag{3-19}
\]

3.2.3 RSS Minimax SDP Algorithm

Putting the pieces together, the RSS minimax SDP algorithm is
\[
\min_{t,\, X,\, \{Y_{nn}\}_{\forall n},\, \{Y_{ij}\}_{(i,j)\in E_x,\, i<j}} \ t \tag{3-20}
\]
\[
\text{s.t.} \quad -t < q_{mn} \, \mathrm{tr}\!\left\{ \begin{bmatrix} a_m a_m^T & -a_m \\ -a_m^T & 1 \end{bmatrix} \begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \right\} - 1 < t, \quad (m, n) \in E_a,
\]
\[
-t < q_{ij} \left( Y_{ii} - Y_{ij} - Y_{ji} + Y_{jj} \right) - 1 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
\begin{bmatrix} I_{d\times d} & x_n \\ x_n^T & Y_{nn} \end{bmatrix} \succeq 0 \ \ \forall n, \qquad \begin{bmatrix} I_{d\times d} & x_i & x_j \\ x_i^T & Y_{ii} & Y_{ij} \\ x_j^T & Y_{ji} & Y_{jj} \end{bmatrix} \succeq 0 \ \ \forall (i, j) \in E_x,
\]
where $\{Y_{nn}\}_{\forall n}$ and $\{Y_{ij}\}_{(i,j)\in E_x,\, i<j}$ are the nuisance scalar variables introduced by the semidefinite relaxation, and $Y_{ij} = Y_{ji}$ due to the symmetry constraints. As before, the per-target cone constraint of CSDP is redundant for those $i$ such that there exists $j$ with $(i, j) \in E_x$ and $i < j$ (i.e., target $x_i$ has other target(s) as neighbor(s)), since it is then implied by the per-edge cone constraint. Similarly, we take out $X$ as the semidefinite-approximated location estimate $\tilde{X}$. While the algorithm in [10] requires postprocessing techniques (Gaussian randomization, etc.) to convert the semidefinite-relaxed solution into an approximate solution of the original problem, our method obtains an approximate solution of the original problem by simply taking out $X$ as the location estimate. This property is obtained by applying semidefinite relaxation only to the nonconvex terms in $X$ while keeping the convex terms unaltered.
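Only the two families of inequality constraints change between (3-13) and (3-20); the semidefinite cones are identical. A CVXPY sketch of the RSS minimax CSDP follows, reusing the csdp_cones() helper from the ToA sketch above; the received-power dictionaries s_a and s_x, the transmit power P, and the path loss exponent beta are assumed inputs with hypothetical names.

```python
import cvxpy as cp

def rss_minimax_csdp(anchors, s_a, s_x, E_a, E_x, N, P, beta, d=2):
    """RSS minimax CSDP (3-20); cones are shared with the ToA version via csdp_cones()."""
    # q_mn = (s_mn / P)^(2/beta) and q_ij likewise, as defined before Eq. (3-16)
    q_a = {e: (s_a[e] / P) ** (2.0 / beta) for e in E_a}
    q_x = {e: (s_x[e] / P) ** (2.0 / beta) for e in E_x}

    X = cp.Variable((d, N))
    Y = cp.Variable((N, N), symmetric=True)
    t = cp.Variable()
    cons = csdp_cones(X, Y, E_x, N, d)

    # Target-anchor constraints: | q_mn (||a_m||^2 - 2 a_m^T x_n + Y_nn) - 1 | <= t
    for (m, n) in E_a:
        r = q_a[m, n] * (anchors[m] @ anchors[m] - 2 * anchors[m] @ X[:, n] + Y[n, n]) - 1
        cons += [r <= t, r >= -t]
    # Target-target constraints: | q_ij (Y_ii - 2 Y_ij + Y_jj) - 1 | <= t
    for (i, j) in E_x:
        r = q_x[i, j] * (Y[i, i] - 2 * Y[i, j] + Y[j, j]) - 1
        cons += [r <= t, r >= -t]

    cp.Problem(cp.Minimize(t), cons).solve()
    return X.value
```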

CHAPTER 4
INVESTIGATIONS INTO MINIMAX SDP ALGORITHMS

4.1 Introduction

Rank-d ToA Standard SDP (SSDP) was proposed by Biswas et al. in [3]. However, SSDP is not inherently suitable for large-scale wireless sensor networks. The complexity of SDP is $O(n^3)$, where $n$ is the dimension of the semidefinite cones, which indicates that the choice of semidefinite cones plays an important role in further reducing the complexity of SDP. In [14], rank-d ToA Edge-based SDP (ESDP) and rank-d ToA Node-based SDP (NSDP) algorithms were proposed as further relaxations of SSDP. A rank-N Minimax SDP applicable to both ToA and RSS metrics was proposed in [10]. The algorithms proposed in Chapter 3 (also in [13]), named Minimax Component-wise SDP (CSDP), are likewise applicable to both ToA and RSS metrics. In this chapter, we investigate the effects of the semidefinite cones on the accuracy and efficiency of cooperative localization algorithms with ToA and RSS metrics. Rank-N and rank-d SDP algorithms are compared. Within the rank-d family, SSDP, ESDP, NSDP, and CSDP are investigated in detail to illustrate how to choose the form of the semidefinite cones so as to reduce the complexity without sacrificing the quality of localization.

4.2 ToA

Target-anchor distances and target-target distances are measured via the ToA metric so long as the node pairs are within radio range of each other. Taking computational efficiency into account, the expressions in (3-13) were expanded appropriately for the simulations. In this section, however, the algorithms are expressed in compressed matrix or submatrix form in order to conveniently compare the different semidefinite cones.

4.2.1 Rank-d ToA Minimax SDP Algorithms

4.2.1.1 Standard SDP (SSDP)

The rank-d semidefinite approximation is
\[
Y = X^T X \ \longrightarrow \ Y \succeq X^T X, \tag{4-1}
\]
i.e., the semidefinite cone of rank-d SDP, denoted as $Z$, is
\[
Z = \begin{bmatrix} I_{d\times d} & X_{d\times N} \\ X^T_{N\times d} & Y_{N\times N} \end{bmatrix} \succeq 0. \tag{4-2}
\]
If a solution $Z$ is of rank $d$, i.e., $Y = X^T X$, then $X$ is a solution to the localization problem. This is how the rank-d algorithm gets its name. The Minimax ToA SSDP algorithm is as follows:
\[
\min_{t,\, Z} \ t \tag{4-3}
\]
\[
\text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{pmatrix} a_m \\ -e_n \end{pmatrix} \begin{pmatrix} a_m^T & -e_n^T \end{pmatrix} Z \right\} - d_{mn}^2 < t, \quad (m, n) \in E_a,
\]
\[
-t < \mathrm{tr}\!\left\{ \begin{pmatrix} 0_{d\times 1} \\ e_i - e_j \end{pmatrix} \begin{pmatrix} 0_{1\times d} & (e_i - e_j)^T \end{pmatrix} Z \right\} - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z \succeq 0.
\]

4.2.1.2 Further relaxations to SSDP

Edge-based SDP (ESDP):
\[
\min_{t,\, Z} \ t \tag{4-4}
\]
\[
\text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{pmatrix} a_m \\ -e_n \end{pmatrix} \begin{pmatrix} a_m^T & -e_n^T \end{pmatrix} Z \right\} - d_{mn}^2 < t, \quad (m, n) \in E_a,
\]
\[
-t < \mathrm{tr}\!\left\{ \begin{pmatrix} 0_{d\times 1} \\ e_i - e_j \end{pmatrix} \begin{pmatrix} 0_{1\times d} & (e_i - e_j)^T \end{pmatrix} Z \right\} - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z_{(1:d,\, i,\, j)} \succeq 0, \ (i, j) \in E_x \text{ and } i < j.
\]

Node-based SDP (NSDP): the same objective and inequality constraints as (4-4), with the cone constraints replaced by
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z_{(1:d,\, i,\, \mathcal{N}_i)} \succeq 0, \ \forall x_i \in V_x, \tag{4-5}
\]
where $\mathcal{N}_i = \{j : (i, j) \in E_x \text{ and } i < j\}$ is the set formed by all the targets connected to target $x_i$.

Component-wise SDP (CSDP): the same objective and inequality constraints as (4-4), with the cone constraints replaced by
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z_{(1:d,\, i)} \succeq 0, \ \forall x_i \in V_x, \qquad Z_{(1:d,\, i,\, j)} \succeq 0, \ (i, j) \in E_x \text{ and } i < j. \tag{4-6}
\]
The per-target cone constraint of CSDP is redundant for those $x_i$ such that there exists $j$ with $(i, j) \in E_x$ and $i < j$ (i.e., a target that has other target(s) as neighbor(s)), since it is then implied by the per-edge cone constraint. CSDP results naturally from the component-wise formulation of the SDP models, and thus keeps all the involved and necessary relaxed variables, and only those variables.
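The four relaxations (4-3) through (4-6) share their objective and inequality constraints and differ only in which principal submatrices of $Z$ are forced to be positive semidefinite. The CVXPY sketch below builds exactly that difference, assuming $Z$ is declared as the full $(d+N)\times(d+N)$ symmetric variable of (4-2); it is a hedged illustration of the cone choices summarized later in Table 4-1, not the thesis's implementation.

```python
import numpy as np
import cvxpy as cp

def cone_constraints(Z, E_x, N, d, variant="CSDP"):
    """PSD constraints on principal submatrices of Z for SSDP/ESDP/NSDP/CSDP."""
    def sub(idx):                        # principal submatrix Z_(idx)
        return Z[idx, :][:, idx]
    base = list(range(d))                # rows/columns of the leading I_d block
    cons = [Z[:d, :d] == np.eye(d)]      # Z_(1:d) = I_d
    if variant == "SSDP":
        cons += [Z >> 0]
    elif variant == "ESDP":
        cons += [sub(base + [d + i, d + j]) >> 0 for (i, j) in E_x]
    elif variant == "NSDP":
        for i in range(N):
            nbrs = [j for (k, j) in E_x if k == i]      # N_i = {j : (i, j) in E_x, i < j}
            cons += [sub(base + [d + i] + [d + j for j in nbrs]) >> 0]
    elif variant == "CSDP":
        cons += [sub(base + [d + i]) >> 0 for i in range(N)]
        cons += [sub(base + [d + i, d + j]) >> 0 for (i, j) in E_x]
    return cons

# Usage sketch:
#   Z = cp.Variable((d + N, d + N), symmetric=True)
#   constraints = cone_constraints(Z, E_x, N, d, variant="ESDP")
```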

4.2.2 Rank-N ToA Minimax SDP Algorithm

In the rank-d semidefinite relaxation,
\[
Z = \begin{bmatrix} I_{d\times d} & x_1 & \cdots & x_N \\ x_1^T & Y_{11} & \cdots & Y_{1N} \\ \vdots & \vdots & \ddots & \vdots \\ x_N^T & Y_{N1} & \cdots & Y_{NN} \end{bmatrix} \succeq 0. \tag{4-7}
\]
The semidefinite relaxation terms $Y_{ij}$ are used to approximate $x_i^T x_j$, $(i, j) \in E_x$ and $i < j$, and $Y_{nn}$ are used to approximate $x_n^T x_n$, $(m, n) \in E_a$. In the rank-N semidefinite relaxation, however,
\[
Z = \begin{bmatrix} Y & X \\ X^T & I_{N\times N} \end{bmatrix} \succeq 0, \quad X = [x_1, \ldots, x_N], \tag{4-8}
\]
where $Y = \sum_{i=1}^{N} x_i x_i^T$ is a $d \times d$ matrix. It is not possible to use a single matrix $Y$ to approximate all the nonconvex terms, unless $N = 1$. Setting $N = 1$, the rank-N semidefinite approximation is
\[
Y = X X^T \ \longrightarrow \ Y \succeq X X^T, \tag{4-9}
\]
i.e., the semidefinite cone of rank-N SDP is
\[
Z = \begin{bmatrix} Y_{d\times d} & X_{d\times N} \\ X^T_{N\times d} & I_{N\times N} \end{bmatrix} \succeq 0. \tag{4-10}
\]
If a solution $Z$ is of rank $N$, i.e., $Y = X X^T$, then $X$ is a solution to the localization problem. This is how the rank-N algorithm gets its name. The rank-N algorithm is adopted in [10] with $N = 1$, i.e., locating one target. Non-cooperative localization can use rank-N SDP by locating one target at a time. However, the rank-N SDP localization algorithm is not in a good form for cooperative localization, and is therefore not investigated in detail.

There do exist rank-N ToA Minimax ESDP and CSDP algorithms, obtained by approximating $Y_{ij} = (x_i - x_j)(x_i - x_j)^T$ with $Y_{ij} \succeq (x_i - x_j)(x_i - x_j)^T$, that work for cooperative localization. The CSDP algorithm is shown below; the ESDP algorithm has a similar form:
\[
\min_{t,\, X,\, \{Y_{nn}\}_{\forall n},\, \{Y_{ij}\}_{(i,j)\in E_x,\, i<j}} \ t \tag{4-11}
\]
\[
\text{s.t.} \quad -t < \mathrm{tr}\!\left\{ \begin{bmatrix} I_{d\times d} & -a_m \\ -a_m^T & a_m^T a_m \end{bmatrix} \begin{bmatrix} Y_{nn} & x_n \\ x_n^T & 1 \end{bmatrix} \right\} - d_{mn}^2 < t, \quad (m, n) \in E_a,
\]
\[
-t < \mathrm{tr}\{Y_{ij}\} - \bar{d}_{ij}^2 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
\begin{bmatrix} Y_{nn} & x_n \\ x_n^T & 1 \end{bmatrix} \succeq 0, \qquad \begin{bmatrix} Y_{ij} & x_i - x_j \\ (x_i - x_j)^T & 1 \end{bmatrix} \succeq 0,
\]
where $\{Y_{nn}\}_{\forall n}$ and $\{Y_{ij}\}_{(i,j)\in E_x,\, i<j}$ are nuisance $d \times d$ matrices introduced by the semidefinite relaxation, and the meanings of all other notations remain the same. In this algorithm, however, the relations between $\{Y_{nn}\}_{\forall n}$ and $\{Y_{ij}\}_{(i,j)\in E_x,\, i<j}$, i.e., $Y_{ij} = Y_{ii} + Y_{jj} - x_i x_j^T - x_j x_i^T$, are not enforced, making this algorithm less accurate than its rank-d counterpart.

4.3 RSS

Target-anchor distances and target-target distances are measured via the RSS metric so long as the node pairs are within radio range of each other. Taking computational efficiency into account, the expressions in (3-20) were expanded appropriately for the simulations. In this section, however, the algorithms are expressed in compressed matrix or submatrix form in order to conveniently compare the different semidefinite cones.

4.3.1 Rank-d RSS Minimax SDP Algorithms

4.3.1.1 SSDP

The rank-d semidefinite approximation is
\[
Y = X^T X \ \longrightarrow \ Y \succeq X^T X, \tag{4-12}
\]
i.e., the semidefinite cone of rank-d SDP, denoted as $Z$, is
\[
Z = \begin{bmatrix} I_{d\times d} & X_{d\times N} \\ X^T_{N\times d} & Y_{N\times N} \end{bmatrix} \succeq 0. \tag{4-13}
\]
If a solution $Z$ is of rank $d$, i.e., $Y = X^T X$, then $X$ is a solution to the localization problem. The Minimax RSS SSDP algorithm is as follows:
\[
\min_{t,\, Z} \ t \tag{4-14}
\]
\[
\text{s.t.} \quad -t < q_{mn} \, \mathrm{tr}\!\left\{ \begin{pmatrix} a_m \\ -e_n \end{pmatrix} \begin{pmatrix} a_m^T & -e_n^T \end{pmatrix} Z \right\} - 1 < t, \quad (m, n) \in E_a,
\]
\[
-t < q_{ij} \, \mathrm{tr}\!\left\{ \begin{pmatrix} 0_{d\times 1} \\ e_i - e_j \end{pmatrix} \begin{pmatrix} 0_{1\times d} & (e_i - e_j)^T \end{pmatrix} Z \right\} - 1 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z \succeq 0.
\]

4.3.1.2 Further relaxations to SSDP

Edge-based SDP (ESDP):

\[
\min_{t,\, Z} \ t \tag{4-15}
\]
\[
\text{s.t.} \quad -t < q_{mn} \, \mathrm{tr}\!\left\{ \begin{pmatrix} a_m \\ -e_n \end{pmatrix} \begin{pmatrix} a_m^T & -e_n^T \end{pmatrix} Z \right\} - 1 < t, \quad (m, n) \in E_a,
\]
\[
-t < q_{ij} \, \mathrm{tr}\!\left\{ \begin{pmatrix} 0_{d\times 1} \\ e_i - e_j \end{pmatrix} \begin{pmatrix} 0_{1\times d} & (e_i - e_j)^T \end{pmatrix} Z \right\} - 1 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z_{(1:d,\, i,\, j)} \succeq 0, \ (i, j) \in E_x \text{ and } i < j.
\]

Node-based SDP (NSDP): the same objective and inequality constraints as (4-15), with the cone constraints replaced by
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z_{(1:d,\, i,\, \mathcal{N}_i)} \succeq 0, \ \forall x_i \in V_x, \tag{4-16}
\]
where $\mathcal{N}_i = \{j : (i, j) \in E_x \text{ and } i < j\}$ is the set formed by all the targets connected to target $x_i$.

Component-wise SDP (CSDP): the same objective and inequality constraints as (4-15), with the cone constraints replaced by
\[
Z_{(1:d)} = I_{d\times d}, \qquad Z_{(1:d,\, i)} \succeq 0, \ \forall x_i \in V_x, \qquad Z_{(1:d,\, i,\, j)} \succeq 0, \ (i, j) \in E_x \text{ and } i < j. \tag{4-17}
\]
As in the ToA case, the per-target cone constraint of CSDP is redundant for those $x_i$ such that there exists $j$ with $(i, j) \in E_x$ and $i < j$ (i.e., a target that has other target(s) as neighbor(s)), since it is then implied by the per-edge cone constraint. CSDP results naturally from the component-wise formulation of the SDP models, and thus keeps all the involved and necessary relaxed variables, and only those variables.

4.3.2 Rank-N RSS Minimax SDP Algorithm

Similar to the ToA case, the rank-N SDP localization algorithm is not in a good form for cooperative localization. Rank-N RSS Minimax ESDP and CSDP algorithms do exist and work. The CSDP algorithm is shown below; the ESDP algorithm has a similar form:
\[
\min_{t,\, X,\, \{Y_{nn}\}_{\forall n},\, \{Y_{ij}\}_{(i,j)\in E_x,\, i<j}} \ t \tag{4-18}
\]
\[
\text{s.t.} \quad -t < q_{mn} \, \mathrm{tr}\!\left\{ \begin{bmatrix} I_{d\times d} & -a_m \\ -a_m^T & a_m^T a_m \end{bmatrix} \begin{bmatrix} Y_{nn} & x_n \\ x_n^T & 1 \end{bmatrix} \right\} - 1 < t, \quad (m, n) \in E_a,
\]
\[
-t < q_{ij} \, \mathrm{tr}\{Y_{ij}\} - 1 < t, \quad (i, j) \in E_x \text{ and } i < j,
\]
\[
\begin{bmatrix} Y_{nn} & x_n \\ x_n^T & 1 \end{bmatrix} \succeq 0, \qquad \begin{bmatrix} Y_{ij} & x_i - x_j \\ (x_i - x_j)^T & 1 \end{bmatrix} \succeq 0,
\]
where $q_{mn} = \left(\tfrac{s_{mn}}{P}\right)^{2/\beta}$, $(m, n) \in E_a$, $q_{ij} = \left(\tfrac{s_{ij}}{P}\right)^{2/\beta}$, $(i, j) \in E_x$ and $i < j$, and $\{Y_{nn}\}_{\forall n}$, $\{Y_{ij}\}_{(i,j)\in E_x,\, i<j}$ are nuisance $d \times d$ matrices introduced by the semidefinite relaxation; the meanings of all other notations remain the same. In this algorithm, however, the relations between $\{Y_{nn}\}_{\forall n}$ and $\{Y_{ij}\}_{(i,j)\in E_x,\, i<j}$, i.e., $Y_{ij} = Y_{ii} + Y_{jj} - x_i x_j^T - x_j x_i^T$, are not enforced, making this algorithm less accurate than its rank-d counterpart.

4.4 Minimax SDP Algorithm Analysis

Generally speaking, the rank-d algorithms are in a suitable form for cooperative localization, deliver accurate performance, and are recommended for their convenient reformulation process. Theoretically, the relations among SSDP, ESDP, NSDP and CSDP are
\[
\mathcal{F}_{\mathrm{SSDP}} \subseteq \mathcal{F}_{\mathrm{NSDP}} \subseteq \mathcal{F}_{\mathrm{CSDP}} \subseteq \mathcal{F}_{\mathrm{ESDP}}, \tag{4-19}
\]

where $\mathcal{F}$ represents the solution set of the corresponding SDP relaxation. ESDP, NSDP and CSDP are all further relaxations of SSDP. However, SSDP is in a centralized form and is not suitable for large-scale networks [14], even though SSDP may possess better efficiency for small-scale networks. The semidefinite cones of NSDP are not completely decentralized. The cones of ESDP are completely decentralized, but ESDP is not robust enough and blows up in severely noisy cases. Our simulation results show that CSDP achieves comparable accuracy with reasonable complexity. Considering that the cones of CSDP are thoroughly decentralized, they are suitable semidefinite cones for distributed algorithms.

4.5 Summary

4.5.1 Semidefinite Cones

Since the objective functions and all inequality constraints essentially remain the same, the constraints on the semidefinite cones, shared by the ToA and RSS metrics, are summarized in Table 4-1.

Table 4-1. Semidefinite cones
Semidefinite cones | Mathematical expressions
Standard           | $Z \succeq 0$
Edge-based         | $Z_{(1:d,\, i,\, j)} \succeq 0$, $(i, j) \in E_x$ and $i < j$
Node-based         | $Z_{(1:d,\, i,\, \mathcal{N}_i)} \succeq 0$, $\forall x_i \in V_x$
Component-wise     | $Z_{(1:d,\, i)} \succeq 0$, $\forall x_i \in V_x$; $Z_{(1:d,\, i,\, j)} \succeq 0$, $(i, j) \in E_x$ and $i < j$

4.5.2 Existing SDP Algorithm Comparisons

In this section, the characteristics of typical SDP algorithms are listed and compared in Table 4-2.

Table 4-2. SDP algorithm comparisons
Algorithm               | Coop or non-coop | Targets | Metric      | Solution rank  | Cone type    | Cone number | Cone dimension
Biswas, Lian et al. [3] | Cooperative      | N       | ToA         | Rank-d         | SSDP         | 1           | N + d
Wang, Zheng et al. [14] | Cooperative      | N       | ToA         | Rank-d         | ESDP, NSDP   | Multiple    | Depends
Meng, Ding et al. [10]  | Non-cooperative  | 1       | ToA and RSS | Rank-N (N = 1) | Minimax SDP  | 1           | d + 1
Wang, Yang [13]         | Cooperative      | N       | ToA and RSS | Rank-d         | Minimax CSDP | N + |E_x|   | d + 1, d + 2

Note: The algorithm proposed in [10] locates only one target, so it raises no cooperation issues (it is trivially non-cooperative) and no semidefinite cone decomposition issues. |E_x| denotes the number of elements in the set E_x.

CHAPTER 5
SIMULATIONS

In this chapter, we use the CVX package [6] as our SDP solver. Localization accuracy is evaluated by the mean square error of the location estimates, $\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N} \|\tilde{x}_n - x_n\|^2$, and localization efficiency is indicated by the average time used to locate all the targets in the sensor network. All simulation results are averaged over 2000 Monte Carlo runs.

5.1 Minimax SDP Algorithm Comparisons

As shown in Figure 1-2, anchors and targets are located on a 2-D map, and connections are specified as edges of the graph. Cases inside the convex hull, on the edge of the convex hull, and outside the convex hull, as well as cooperation between two targets, are all included in this localization system. Specifically, the anchors are located at $a_1 = [0, 6]^T$, $a_2 = [0, 0]^T$, $a_3 = [2, 4]^T$, $a_4 = [6, 0]^T$, $a_5 = [7, 7]^T$, $a_6 = [8, 0]^T$, i.e., $V_a = \{[0, 6]^T, [0, 0]^T, [2, 4]^T, [6, 0]^T, [7, 7]^T, [8, 0]^T\}$. The targets are placed at $x_1 = [0, 4]^T$, $x_2 = [3, 8]^T$, $x_3 = [2, 2]^T$, $x_4 = [6, 5]^T$, $x_5 = [7, 3]^T$, i.e., $V_x = \{[0, 4]^T, [3, 8]^T, [2, 2]^T, [6, 5]^T, [7, 3]^T\}$. Communications are allowed between the node pairs $(a_1, x_1)$, $(a_2, x_1)$, $(a_1, x_2)$, $(a_3, x_1)$, $(a_2, x_3)$, $(a_3, x_2)$, $(a_3, x_3)$, $(a_5, x_2)$, $(a_3, x_4)$, $(a_4, x_3)$, $(a_5, x_4)$, $(x_4, x_5)$, $(a_4, x_5)$, and $(a_6, x_5)$, i.e., $E_a = \{(1, 1), (2, 1), (1, 2), (3, 1), (2, 3), (3, 2), (3, 3), (5, 2), (3, 4), (4, 3), (5, 4), (4, 5), (6, 5)\}$ and $E_x = \{(4, 5)\}$. In this case, $x_4$ and $x_5$ are not localizable without cooperative localization.

5.1.1 ToA

The ToA metric is adopted for localization. By adopting cooperative localization, we successfully located all the targets that are otherwise not localizable. See Figure 5-1A for the accuracy performance and Figure 5-1B for the efficiency performance of the algorithms.
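For reference, the evaluation loop behind the accuracy curves can be sketched as follows, reusing toa_minimax_csdp() from the Chapter 3 sketch. The anchor/target layout and edge sets are the ones specified above (converted to 0-based indices); the trial count is reduced for illustration, whereas the thesis averages over 2000 Monte Carlo runs.

```python
import numpy as np

anchors = np.array([[0, 6], [0, 0], [2, 4], [6, 0], [7, 7], [8, 0]], dtype=float)
targets = np.array([[0, 4], [3, 8], [2, 2], [6, 5], [7, 3]], dtype=float)   # true positions
E_a = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 2), (2, 1), (2, 2), (4, 1),
       (2, 3), (3, 2), (4, 3), (3, 4), (5, 4)]          # E_a above, 0-based (m, n)
E_x = [(3, 4)]                                          # cooperative edge (x_4, x_5), 0-based
N, d = targets.shape[0], 2

def mse_toa_csdp(sigma, trials=100, seed=1):
    """Average MSE = (1/N) sum_n ||xhat_n - x_n||^2 over noisy ToA realizations."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        d2_a = {(m, n): np.sum((anchors[m] - targets[n]) ** 2) + sigma * rng.standard_normal()
                for (m, n) in E_a}
        d2_x = {(i, j): np.sum((targets[i] - targets[j]) ** 2) + sigma * rng.standard_normal()
                for (i, j) in E_x}
        Xhat = toa_minimax_csdp(anchors, d2_a, d2_x, E_a, E_x, N, d)
        total += np.mean(np.sum((Xhat - targets.T) ** 2, axis=0))
    return total / trials
```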

Figure 5-1. ToA cooperative localization results (a general case). A) Accuracy: MSE (m²) versus 1/σ² for SSDP, ESDP, NSDP and CSDP. B) Efficiency: time (s) versus 1/σ².

5.1.2 RSS

The RSS metric is adopted for localization. By adopting cooperative localization, we successfully located all the targets that are otherwise not localizable. See Figure 5-2A for the accuracy performance and Figure 5-2B for the efficiency performance of the algorithms.

5.1.3 Minimax SDP Algorithm Discussions

Generally speaking, ESDP, NSDP and CSDP are all further relaxations of SSDP for both ToA and RSS metrics. Even though SSDP is more accurate and may possess better efficiency for small-scale networks, it is not suitable for large-scale networks since it involves a single high-dimensional semidefinite cone [14]. The cones of NSDP are not completely decentralized. While the cones of ESDP are completely distributed, ESDP is not robust enough and blows up in severely noisy cases. The cones of CSDP are thoroughly decentralized, and our simulation results show that CSDP achieves comparable accuracy with reasonable complexity. Thus, the CSDP algorithm is suitable for large-scale wireless sensor networks. We use the component-wise SDP (CSDP) algorithms for both Section 5.2 and Section 5.3.

5.2 A More Challenging Case

According to the simulation results in Section 5.1, localization of a target with access to at least three anchors on a 2-D map is accurate no matter where the target is located: inside, outside, or on the boundary of the convex hull. But how well does the localization algorithm perform when locating multiple targets cooperatively, especially in the outside-of-the-convex-hull situation? In this section, we investigate how the cooperative localization algorithm performs in detail. As in Figure 5-3A, the anchors are located on a 2-D map at $a_1 = [0, 2]^T$, $a_2 = [-2, -2]^T$, $a_3 = [2, -2]^T$, i.e., $V_a = \{[0, 2]^T, [-2, -2]^T, [2, -2]^T\}$. The targets are placed at $x_1 = [0, 1]^T$, $x_2 = [0, -1]^T$, i.e., $V_x = \{[0, 1]^T, [0, -1]^T\}$ (inside the convex hull). As in Figure 5-3B, the anchor locations remain the same, while the targets are placed at $x_1 = [0, 3]^T$, $x_2 = [0, 5]^T$, i.e., $V_x = \{[0, 3]^T, [0, 5]^T\}$ (outside the convex hull).

Figure 5-2. RSS cooperative localization results (a general case). A) Accuracy: MSE (m²) versus 1/σ² for SSDP, ESDP, NSDP and CSDP. B) Efficiency: time (s) versus 1/σ².

Communications are allowed between the node pairs $(a_1, x_1)$, $(a_2, x_1)$, $(a_3, x_1)$, $(a_2, x_2)$, $(a_3, x_2)$, and $(x_1, x_2)$, i.e., $E_a = \{(1, 1), (2, 1), (3, 1), (2, 2), (3, 2)\}$ and $E_x = \{(1, 2)\}$, for both Figure 5-3A and Figure 5-3B. In this case, $x_2$ is not localizable without cooperative localization.

5.2.1 Non-Cooperative vs. Cooperative Localization

5.2.1.1 ToA

The ToA metric is adopted for localization. For the inside-of-the-convex-hull situation, the localization accuracy improves significantly by incorporating the cooperative information between the two targets. For the outside-of-the-convex-hull situation, cooperative localization works, but is not accurate enough; non-cooperative localization completely fails to produce an acceptable estimate and undergoes serious performance degradation in this case. See Figure 5-4A for the accuracy performance and Figure 5-4B for the efficiency performance of the ToA Minimax SDP algorithm.

5.2.1.2 RSS

The RSS metric is adopted for localization, with $P = 1000$ and $\beta = 3$. In the inside-of-the-convex-hull situation, the localization accuracy improves significantly by incorporating the cooperative information between the two targets. For the outside-of-the-convex-hull situation, cooperative localization works, but is not accurate enough; non-cooperative localization completely fails to produce an acceptable estimate and undergoes serious performance degradation in this case. See Figure 5-5A for the accuracy performance and Figure 5-5B for the efficiency performance of the RSS Minimax SDP algorithm.

5.2.2 Virtual Anchors

A location-unaware target that can communicate with more than the minimum number of location-aware anchors can be upgraded to a virtual anchor once its location is determined. If not all of the targets are located inside the convex hull, localization performance is poor even with the cooperative algorithms, as shown in Figures 5-4A and 5-5A. We therefore introduce virtual anchors to improve the localization accuracy in the outside-of-the-convex-hull situations; a sketch of the upgrade rule follows.
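A minimal sketch of that upgrade rule, under the stated assumptions: after one cooperative pass, any target with at least three anchor links (the 2-D minimum) is treated as a virtual anchor at its estimated position, its cooperative edges are rewired into target-anchor measurements for its neighbors, and the SDP is solved again. The function builds on toa_minimax_csdp() from the Chapter 3 sketch; all names are illustrative.

```python
import numpy as np

def localize_with_virtual_anchors(anchors, d2_a, d2_x, E_a, E_x, N, d=2, min_links=3):
    """Two-pass ToA localization: well-connected targets serve as virtual anchors in pass 2."""
    Xhat = toa_minimax_csdp(anchors, d2_a, d2_x, E_a, E_x, N, d)       # first pass
    links = {n: sum(1 for (_, k) in E_a if k == n) for n in range(N)}
    virtual = [n for n in range(N) if links[n] >= min_links]           # promotable targets

    # Each promoted target contributes an extra anchor at its estimated position; its
    # cooperative edges provide new target-anchor measurements for the remaining targets.
    M = anchors.shape[0]
    anchors2 = np.vstack([anchors] + [Xhat[:, n] for n in virtual])
    E_a2, d2_a2 = list(E_a), dict(d2_a)
    for a_idx, n in enumerate(virtual, start=M):
        for (i, j) in E_x:
            if n in (i, j):
                other = j if i == n else i
                if other not in virtual:
                    E_a2.append((a_idx, other))
                    d2_a2[a_idx, other] = d2_x[i, j]

    return toa_minimax_csdp(anchors2, d2_a2, d2_x, E_a2, E_x, N, d)    # second pass
```

In the scenario of Figure 5-3B, for example, $x_1$ has three anchor links and is promoted, which gives $x_2$ a third, virtual, anchor in the second pass.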

Figure 5-3. Localization graph (a more challenging case). A) Inside the convex hull formed by the anchors. B) Outside the convex hull; $x_1$ serves as a virtual anchor for $x_2$.

Figure 5-4. ToA localization results (a more challenging case). A) Accuracy: MSE (m²) versus 1/σ² for the outside/inside, cooperative/non-cooperative cases. B) Efficiency: time (s) versus 1/σ².

Figure 5-5. RSS localization results (a more challenging case). A) Accuracy: MSE (m²) versus 1/σ² for the outside/inside, cooperative/non-cooperative cases. B) Efficiency: time (s) versus 1/σ².

5.2.2.1 ToA

The ToA metric is adopted for localization. Even though the computational efficiency is lower (see Figure 5-6B), introducing virtual anchors improves the accuracy performance (see Figure 5-6A).

5.2.2.2 RSS

The RSS metric is adopted for localization, with $P = 1000$ and $\beta = 3$. Even though the computational efficiency is lower (see Figure 5-7B), introducing virtual anchors improves the accuracy performance (see Figure 5-7A).

5.3 The Most Challenging Case

As in Figure 5-8A, the anchors are located on a 2-D map at $a_1 = [0, 2]^T$, $a_2 = [-2, -2]^T$, $a_3 = [2, -2]^T$, i.e., $V_a = \{[0, 2]^T, [-2, -2]^T, [2, -2]^T\}$. The targets are placed at $x_1 = [0, 1]^T$, $x_2 = [-1, -1]^T$, $x_3 = [1, -1]^T$, i.e., $V_x = \{[0, 1]^T, [-1, -1]^T, [1, -1]^T\}$ (inside the convex hull). As in Figure 5-8B, the anchor locations remain the same, while the targets are placed at $x_1 = [0, 3]^T$, $x_2 = [-1, 4]^T$, $x_3 = [1, 4]^T$, i.e., $V_x = \{[0, 3]^T, [-1, 4]^T, [1, 4]^T\}$ (outside the convex hull). Communications are allowed between the node pairs $(a_1, x_1)$, $(a_2, x_2)$, $(a_3, x_3)$, $(x_1, x_2)$, $(x_2, x_3)$, and $(x_3, x_1)$, i.e., $E_a = \{(1, 1), (2, 2), (3, 3)\}$ and $E_x = \{(1, 2), (2, 3), (1, 3)\}$, for both Figure 5-8A and Figure 5-8B. In this case, none of the targets is localizable without cooperative localization, and no virtual anchors are available to assist the localization.

5.3.1 Non-Cooperative vs. Cooperative Localization

5.3.1.1 ToA

The ToA metric is adopted for localization. For the inside-of-the-convex-hull situation, the localization accuracy improves by incorporating the cooperative information among the targets. For the outside-of-the-convex-hull situation, both non-cooperative and cooperative localization fail to produce an acceptable estimate and undergo serious

Figure 5-6. ToA localization results with virtual anchors. A) Accuracy: MSE (m²) versus 1/σ² with and without the virtual anchor. B) Efficiency: time (s) versus 1/σ².

Figure 5-7. RSS localization results with virtual anchors. A) Accuracy: MSE (m²) versus 1/σ² with and without the virtual anchor. B) Efficiency: time (s) versus 1/σ².

Figure 5-8. Localization graph (the most challenging case). A) Inside the convex hull formed by the anchors. B) Outside the convex hull.

performance degradation, with cooperative localization being slightly more accurate than non-cooperative localization. See Figure 5-9A for the accuracy performance and Figure 5-9B for the efficiency performance of the ToA Minimax SDP algorithm.

5.3.1.2 RSS

The RSS metric is adopted for localization, with $P = 1000$ and $\beta = 3$. For the inside-of-the-convex-hull situation, the localization accuracy improves by incorporating the cooperative information among the targets. For the outside-of-the-convex-hull situation, both non-cooperative and cooperative localization fail to produce an acceptable estimate and undergo serious performance degradation, with cooperative localization being slightly more accurate than non-cooperative localization. See Figure 5-10A for the accuracy performance and Figure 5-10B for the efficiency performance of the RSS Minimax SDP algorithm.

5.3.2 Further Discussions

Suppose all targets have only one-hop or two-hop connections to anchors. This is the most challenging case in the sense that none of the targets is localizable without cooperative localization and no virtual anchors can be used to assist the localization. When the targets are located outside the convex hull, all existing and proposed SDP algorithms, including SSDP, ESDP, NSDP and CSDP, fail to generate satisfactory performance, and new algorithms are needed to solve this problem. Fortunately, this case is rare in practice: when a target has access to a faraway anchor, it usually also has access to nearer anchors. In Figure 5-8B, for example, if communication is allowed between the node pair $(a_1, x_1)$, then communications between the node pairs $(a_2, x_1)$ and $(a_3, x_1)$ would usually also exist.


More information

Further Relaxations of the SDP Approach to Sensor Network Localization

Further Relaxations of the SDP Approach to Sensor Network Localization Further Relaxations of the SDP Approach to Sensor Network Localization Zizhuo Wang, Song Zheng, Stephen Boyd, and inyu e September 27, 2006 Abstract Recently, a semidefinite programming (SDP) relaxation

More information

The Trust Region Subproblem with Non-Intersecting Linear Constraints

The Trust Region Subproblem with Non-Intersecting Linear Constraints The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region

More information

certain class of distributions, any SFQ can be expressed as a set of thresholds on the sufficient statistic. For distributions

certain class of distributions, any SFQ can be expressed as a set of thresholds on the sufficient statistic. For distributions Score-Function Quantization for Distributed Estimation Parvathinathan Venkitasubramaniam and Lang Tong School of Electrical and Computer Engineering Cornell University Ithaca, NY 4853 Email: {pv45, lt35}@cornell.edu

More information

Inderjit Dhillon The University of Texas at Austin

Inderjit Dhillon The University of Texas at Austin Inderjit Dhillon The University of Texas at Austin ( Universidad Carlos III de Madrid; 15 th June, 2012) (Based on joint work with J. Brickell, S. Sra, J. Tropp) Introduction 2 / 29 Notion of distance

More information

Lecture 3: Semidefinite Programming

Lecture 3: Semidefinite Programming Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality

More information

Localization with Imprecise Distance Information in Sensor Networks

Localization with Imprecise Distance Information in Sensor Networks Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 005 Seville, Spain, December -5, 005 TuB03.3 Localization with Imprecise Distance Information in Sensor

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

Universal Rigidity and Edge Sparsification for Sensor Network Localization

Universal Rigidity and Edge Sparsification for Sensor Network Localization Universal Rigidity and Edge Sparsification for Sensor Network Localization Zhisu Zhu Anthony Man Cho So Yinyu Ye September 23, 2009 Abstract Owing to their high accuracy and ease of formulation, there

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

Distributed Optimization over Networks Gossip-Based Algorithms

Distributed Optimization over Networks Gossip-Based Algorithms Distributed Optimization over Networks Gossip-Based Algorithms Angelia Nedić angelia@illinois.edu ISE Department and Coordinated Science Laboratory University of Illinois at Urbana-Champaign Outline Random

More information

A Linear Estimator for Joint Synchronization and Localization in Wireless Sensor Networks

A Linear Estimator for Joint Synchronization and Localization in Wireless Sensor Networks 2014 IEEE Global Communications Conference A Linear Estimator for Joint Synchronization and Localization in Wireless Sensor Networks Reza Monir Vaghefi and R. Michael Buehrer Wireless @ Virginia Tech,

More information

On SDP and ESDP Relaxation of Sensor Network Localization

On SDP and ESDP Relaxation of Sensor Network Localization On SDP and ESDP Relaxation of Sensor Network Localization Paul Tseng Mathematics, University of Washington Seattle Southern California Optimization Day, UCSD March 19, 2009 (joint work with Ting Kei Pong)

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Using the Johnson-Lindenstrauss lemma in linear and integer programming

Using the Johnson-Lindenstrauss lemma in linear and integer programming Using the Johnson-Lindenstrauss lemma in linear and integer programming Vu Khac Ky 1, Pierre-Louis Poirion, Leo Liberti LIX, École Polytechnique, F-91128 Palaiseau, France Email:{vu,poirion,liberti}@lix.polytechnique.fr

More information

Algorithm-Hardware Co-Optimization of Memristor-Based Framework for Solving SOCP and Homogeneous QCQP Problems

Algorithm-Hardware Co-Optimization of Memristor-Based Framework for Solving SOCP and Homogeneous QCQP Problems L.C.Smith College of Engineering and Computer Science Algorithm-Hardware Co-Optimization of Memristor-Based Framework for Solving SOCP and Homogeneous QCQP Problems Ao Ren Sijia Liu Ruizhe Cai Wujie Wen

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

arxiv: v1 [math.oc] 20 Nov 2017

arxiv: v1 [math.oc] 20 Nov 2017 Solution of network localization problem with noisy distances and its convergence Ananya Saha Buddhadeb Sau arxiv:.v [math.oc] Nov Abstract The network localization problem with convex and non-convex distance

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

BALANCED LEAST SQUARES: ESTIMATION IN LINEAR SYSTEMS WITH NOISY INPUTS AND MULTIPLE OUTPUTS

BALANCED LEAST SQUARES: ESTIMATION IN LINEAR SYSTEMS WITH NOISY INPUTS AND MULTIPLE OUTPUTS BALANCED LEAST SQUARES: ESTIMATION IN LINEAR SYSTEMS WITH NOISY INPUTS AND MULTIPLE OUTPUTS Javier Vía and Ignacio Santamaría University of Cantabria. Spain {jvia,nacho}@gtas.dicom.unican.es ABSTRACT This

More information

Solving Corrupted Quadratic Equations, Provably

Solving Corrupted Quadratic Equations, Provably Solving Corrupted Quadratic Equations, Provably Yuejie Chi London Workshop on Sparse Signal Processing September 206 Acknowledgement Joint work with Yuanxin Li (OSU), Huishuai Zhuang (Syracuse) and Yingbin

More information

Handout 8: Dealing with Data Uncertainty

Handout 8: Dealing with Data Uncertainty MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite

More information

COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION

COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION By Mazin Abdulrasool Hameed A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for

More information

EE 227A: Convex Optimization and Applications October 14, 2008

EE 227A: Convex Optimization and Applications October 14, 2008 EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider

More information

Introduction to Convex Optimization

Introduction to Convex Optimization Introduction to Convex Optimization Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Optimization

More information

5682 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE

5682 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE 5682 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Hyperplane-Based Vector Quantization for Distributed Estimation in Wireless Sensor Networks Jun Fang, Member, IEEE, and Hongbin

More information

Further Relaxations of the SDP Approach to Sensor Network Localization

Further Relaxations of the SDP Approach to Sensor Network Localization Further Relaxations of the SDP Approach to Sensor Network Localization Zizhuo Wang, Song Zheng, Stephen Boyd, and Yinyu Ye September 27, 2006; Revised May 2, 2007 Abstract Recently, a semi-definite programming

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Fast and Robust Phase Retrieval

Fast and Robust Phase Retrieval Fast and Robust Phase Retrieval Aditya Viswanathan aditya@math.msu.edu CCAM Lunch Seminar Purdue University April 18 2014 0 / 27 Joint work with Yang Wang Mark Iwen Research supported in part by National

More information

Relaxations and Randomized Methods for Nonconvex QCQPs

Relaxations and Randomized Methods for Nonconvex QCQPs Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be

More information

A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs

A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs Adam N. Letchford Daniel J. Grainger To appear in Operations Research Letters Abstract In the literature

More information

Math 123, Week 2: Matrix Operations, Inverses

Math 123, Week 2: Matrix Operations, Inverses Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices

More information

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems

Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems 2382 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 5, MAY 2011 Characterization of Convex and Concave Resource Allocation Problems in Interference Coupled Wireless Systems Holger Boche, Fellow, IEEE,

More information

Convex hull of two quadratic or a conic quadratic and a quadratic inequality

Convex hull of two quadratic or a conic quadratic and a quadratic inequality Noname manuscript No. (will be inserted by the editor) Convex hull of two quadratic or a conic quadratic and a quadratic inequality Sina Modaresi Juan Pablo Vielma the date of receipt and acceptance should

More information

Module 04 Optimization Problems KKT Conditions & Solvers

Module 04 Optimization Problems KKT Conditions & Solvers Module 04 Optimization Problems KKT Conditions & Solvers Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

arxiv: v1 [math.oc] 23 Nov 2012

arxiv: v1 [math.oc] 23 Nov 2012 arxiv:1211.5406v1 [math.oc] 23 Nov 2012 The equivalence between doubly nonnegative relaxation and semidefinite relaxation for binary quadratic programming problems Abstract Chuan-Hao Guo a,, Yan-Qin Bai

More information

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3 MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

MATH 320, WEEK 7: Matrices, Matrix Operations

MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

More information

INTERIOR-POINT METHODS ROBERT J. VANDERBEI JOINT WORK WITH H. YURTTAN BENSON REAL-WORLD EXAMPLES BY J.O. COLEMAN, NAVAL RESEARCH LAB

INTERIOR-POINT METHODS ROBERT J. VANDERBEI JOINT WORK WITH H. YURTTAN BENSON REAL-WORLD EXAMPLES BY J.O. COLEMAN, NAVAL RESEARCH LAB 1 INTERIOR-POINT METHODS FOR SECOND-ORDER-CONE AND SEMIDEFINITE PROGRAMMING ROBERT J. VANDERBEI JOINT WORK WITH H. YURTTAN BENSON REAL-WORLD EXAMPLES BY J.O. COLEMAN, NAVAL RESEARCH LAB Outline 2 Introduction

More information

APOCS: A RAPIDLY CONVERGENT SOURCE LOCALIZATION ALGORITHM FOR SENSOR NETWORKS. Doron Blatt and Alfred O. Hero, III

APOCS: A RAPIDLY CONVERGENT SOURCE LOCALIZATION ALGORITHM FOR SENSOR NETWORKS. Doron Blatt and Alfred O. Hero, III APOCS: A RAPIDLY CONVERGENT SOURCE LOCALIZATION ALGORITHM FOR SENSOR NETWORKS Doron Blatt and Alfred O. Hero, III Department of Electrical Engineering and Computer Science, University of Michigan, Ann

More information

Review of Optimization Methods

Review of Optimization Methods Review of Optimization Methods Prof. Manuela Pedio 20550 Quantitative Methods for Finance August 2018 Outline of the Course Lectures 1 and 2 (3 hours, in class): Linear and non-linear functions on Limits,

More information

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature

More information

Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under Imperfect Transmit Front-End Chain

Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under Imperfect Transmit Front-End Chain DRAFT 1 Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under Imperfect Transmit Front-End Chain Shuqiao Jia and Behnaam Aazhang, arxiv:1506.00330v1 [cs.it] 1 Jun 2015 Abstract We derive

More information

ROBUST BLIND CALIBRATION VIA TOTAL LEAST SQUARES

ROBUST BLIND CALIBRATION VIA TOTAL LEAST SQUARES ROBUST BLIND CALIBRATION VIA TOTAL LEAST SQUARES John Lipor Laura Balzano University of Michigan, Ann Arbor Department of Electrical and Computer Engineering {lipor,girasole}@umich.edu ABSTRACT This paper

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

Design of Spectrally Shaped Binary Sequences via Randomized Convex Relaxations

Design of Spectrally Shaped Binary Sequences via Randomized Convex Relaxations Design of Spectrally Shaped Binary Sequences via Randomized Convex Relaxations Dian Mo Department of Electrical and Computer Engineering University of Massachusetts Amherst, MA 3 mo@umass.edu Marco F.

More information

Distributionally Robust Convex Optimization

Distributionally Robust Convex Optimization Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Robust linear optimization under general norms

Robust linear optimization under general norms Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Measure-Transformed Quasi Maximum Likelihood Estimation

Measure-Transformed Quasi Maximum Likelihood Estimation Measure-Transformed Quasi Maximum Likelihood Estimation 1 Koby Todros and Alfred O. Hero Abstract In this paper, we consider the problem of estimating a deterministic vector parameter when the likelihood

More information

LETTER A Semidefinite Relaxation Approach to Spreading Sequence Estimation for DS-SS Signals

LETTER A Semidefinite Relaxation Approach to Spreading Sequence Estimation for DS-SS Signals IEICE TRANS. COMMUN., VOL.E94 B, NO.11 NOVEMBER 2011 3163 LETTER A Semidefinite Relaxation Approach to Spreading Sequence Estimation for DS-SS Signals Hua Guo ZHANG a),qingmou, Nonmembers, Hong Shu LIAO,

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

ECE 275A Homework 6 Solutions

ECE 275A Homework 6 Solutions ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =

More information

An Optimization-based Approach to Decentralized Assignability

An Optimization-based Approach to Decentralized Assignability 2016 American Control Conference (ACC) Boston Marriott Copley Place July 6-8, 2016 Boston, MA, USA An Optimization-based Approach to Decentralized Assignability Alborz Alavian and Michael Rotkowitz Abstract

More information

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method Advances in Operations Research Volume 01, Article ID 357954, 15 pages doi:10.1155/01/357954 Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction

More information

SIGNAL STRENGTH LOCALIZATION BOUNDS IN AD HOC & SENSOR NETWORKS WHEN TRANSMIT POWERS ARE RANDOM. Neal Patwari and Alfred O.

SIGNAL STRENGTH LOCALIZATION BOUNDS IN AD HOC & SENSOR NETWORKS WHEN TRANSMIT POWERS ARE RANDOM. Neal Patwari and Alfred O. SIGNAL STRENGTH LOCALIZATION BOUNDS IN AD HOC & SENSOR NETWORKS WHEN TRANSMIT POWERS ARE RANDOM Neal Patwari and Alfred O. Hero III Department of Electrical Engineering & Computer Science University of

More information

Second Order Cone Programming Relaxation of Positive Semidefinite Constraint

Second Order Cone Programming Relaxation of Positive Semidefinite Constraint Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

Linear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,

Linear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,

More information

Interactive Interference Alignment

Interactive Interference Alignment Interactive Interference Alignment Quan Geng, Sreeram annan, and Pramod Viswanath Coordinated Science Laboratory and Dept. of ECE University of Illinois, Urbana-Champaign, IL 61801 Email: {geng5, kannan1,

More information