Near-Optimal Linear Recovery from Indirect Observations
Near-Optimal Linear Recovery from Indirect Observations
joint work with A. Nemirovski, Georgia Tech
nemirovs/statopt LN.pdf
Les Houches, April 2017
Situation: In nature there exists a signal x known to belong to a given convex compact set X ⊂ R^n. We observe a corrupted-by-noise affine image of the signal:
ω = Ax + σξ ∈ Ω = R^m
• A: given m×n sensing matrix
• ξ: random observation noise
Our goal is to recover the image Bx of x under a given affine mapping B: R^n → R^ν.
The risk of a candidate estimate x̂(·): Ω → R^ν is defined as
Risk[x̂|X] = sup_{x∈X} ( E_ξ{ ‖Bx − x̂(Ax + σξ)‖_2^2 } )^{1/2},
so that Risk^2 is the worst-case, over x ∈ X, expected ‖·‖_2^2 recovery error.
Agenda: Under appropriate assumptions on X, we are to show that
• one can build, in a computationally efficient fashion, the (nearly) best, in terms of risk, estimate from the family of linear estimates
x̂(ω) = x̂_H(ω) = H^T ω,   H ∈ R^{m×ν};
• the resulting linear estimate is nearly optimal among all estimates, linear and nonlinear alike.
Linear estimation of signal in Gaussian noise:
• Kuks & Olman, 1971, 1972; Rao, 1972, 1973; Pilz, 1981, 1986, ...; Drygas, 1996; Christopeit & Helmes, 1996; Arnold & Stahlecker, 2000, ...
• Pinsker, 1980; Efromovich & Pinsker, 1981, 1982, 1996; Golubev, Levit & Tsybakov, 1996, ...; Efromovich, 2008, ...
• Donoho, Liu, McGibbon, ...
Risk of linear estimation
Assuming that ξ is zero mean with unit covariance matrix, we can easily compute the risk of a linear estimate x̂_H(ω) = H^T ω:
Risk^2[x̂_H|X] = max_{x∈X} E_ξ{ ‖[B − H^T A]x − σH^T ξ‖_2^2 }
              = max_{x∈X} { ‖[B − H^T A]x‖_2^2 + σ^2 E_ξ{Tr(H^T ξξ^T H)} }
              = σ^2 Tr(H^T H) + max_{x∈X} Tr( xx^T [B^T − A^T H][B − H^T A] ).
Note: building the minimum-risk linear estimate reduces to solving the convex minimization problem
min_H [ φ(H) := max_{x∈X} Tr( xx^T [B^T − A^T H][B − H^T A] ) + σ^2 Tr(H^T H) ].   (*)
The convex function φ is given implicitly and can be difficult to compute, making (*) difficult as well.
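The risk decomposition above is easy to probe numerically. Below is a small numpy sketch (random A, B, H and a fixed x, all purely illustrative) comparing the closed-form value σ^2 Tr(H^T H) + ‖(B − H^T A)x‖_2^2 with a Monte Carlo estimate of the expected squared error for ξ ∼ N(0, I_m):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, nu, sigma = 5, 4, 3, 0.3
A = rng.standard_normal((m, n))
B = rng.standard_normal((nu, n))
H = rng.standard_normal((m, nu))
x = rng.standard_normal(n)

# Closed form: ||(B - H^T A) x||^2 + sigma^2 Tr(H^T H)
bias = B @ x - H.T @ (A @ x)
closed_form = bias @ bias + sigma**2 * np.trace(H.T @ H)

# Monte Carlo estimate of E_xi ||B x - H^T (A x + sigma xi)||^2, xi ~ N(0, I_m)
N = 200_000
xi = rng.standard_normal((N, m))
errors = (B @ x)[None, :] - (A @ x + sigma * xi) @ H   # rows: Bx - H^T omega_i
monte_carlo = np.mean(np.sum(errors**2, axis=1))
```

With 2·10^5 samples the two values agree to within Monte Carlo error.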
Fact: essentially, the only cases when (*) is known to be easy are those when
• X is given as the convex hull of a finite set of moderate cardinality;
• X is an ellipsoid: for W ∈ S^n and S ≻ 0,
max_{x^T Sx ≤ 1} Tr(xx^T W) = λ_max( S^{−1/2} W S^{−1/2} ),
where λ_max(·) is the maximal eigenvalue.
• When X is a box, computing φ is NP-hard...
When φ is difficult to compute, we can replace φ in the design problem (*) with an efficiently computable convex upper bound ϕ(H). We are about to consider a family of sets X — ellitopes — for which reasonably tight bounds ϕ of the desired type are available.
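The ellipsoid formula can be sanity-checked in a few lines of numpy (random S ≻ 0 and symmetric W, illustration only): after the whitening x = S^{−1/2}z the problem becomes maximizing z^T S^{−1/2} W S^{−1/2} z over the unit ball, so x* = S^{−1/2}v with v the top eigenvector attains λ_max.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
W = rng.standard_normal((n, n)); W = (W + W.T) / 2        # symmetric quadratic form
G = rng.standard_normal((n, n)); S = G @ G.T + np.eye(n)  # S positive definite

w, U = np.linalg.eigh(S)
S_half_inv = U @ np.diag(w**-0.5) @ U.T                   # S^{-1/2}
evals, evecs = np.linalg.eigh(S_half_inv @ W @ S_half_inv)
opt = evals[-1]                                           # lambda_max(S^{-1/2} W S^{-1/2})

# x* = S^{-1/2} v (v: top eigenvector) lies on the ellipsoid boundary and attains opt
x_star = S_half_inv @ evecs[:, -1]
attained = x_star @ W @ x_star

# Random boundary points x = S^{-1/2} z, ||z|| = 1, never exceed opt
Z = rng.standard_normal((2000, n))
Xb = (Z / np.linalg.norm(Z, axis=1, keepdims=True)) @ S_half_inv.T
sampled_max = np.max(np.einsum('ij,jk,ik->i', Xb, W, Xb))
```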
An ellitope is a set X ⊂ R^n given as
X = { x ∈ R^n : ∃ y ∈ R^N, t ∈ T : x = Py, y^T S_k y ≤ t_k, 1 ≤ k ≤ K }
where
• P is a given n×N matrix (we can assume that P = I_n),
• S_k ⪰ 0 are positive semidefinite matrices with Σ_k S_k ≻ 0,
• T is a convex compact subset of the K-dimensional nonnegative orthant R^K_+ such that
  — T contains some positive vector,
  — T is monotone: if 0 ≤ t′ ≤ t and t ∈ T, then t′ ∈ T as well.
Note: every ellitope is a convex compact set symmetric w.r.t. the origin.
Examples
A. An ellipsoid centered at the origin (K = 1, T = [0,1]).
B. A (bounded) intersection of K ellipsoids/elliptic cylinders centered at the origin (T = {t ∈ R^K : 0 ≤ t_k ≤ 1, k ≤ K}).
C. The box {x ∈ R^n : −1 ≤ x_i ≤ 1} (T = {t ∈ R^n : 0 ≤ t_k ≤ 1, k ≤ K = n}, x^T S_k x = x_k^2).
D. X = {x ∈ R^n : ‖x‖_p ≤ 1} with p ≥ 2 (T = {t ∈ R^n_+ : ‖t‖_{p/2} ≤ 1}, x^T S_k x = x_k^2, k ≤ K = n).
Ellitopes admit a fully algorithmic calculus: if X_i, 1 ≤ i ≤ I, are ellitopes, so are
• linear images of the X_i,
• inverse linear images of the X_i under linear embeddings,
• ∩_i X_i,
• X_1 × ... × X_I,
• X_1 + ... + X_I.
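Example D can be made concrete in a few lines: for X the ℓ_p ball with p ≥ 2 and S_k = e_k e_k^T, the vector t with t_k = x_k^2 certifies membership, since ‖t‖_{p/2} = ‖x‖_p^2. A sketch (the helper names below are mine, not the talk's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 4.0

def in_lp_ball(x, p):
    return np.sum(np.abs(x)**p) <= 1.0 + 1e-12

def ellitope_witness(x, p):
    """Witness t for the ellitope representation of the l_p ball, p >= 2:
    take t_k = x_k^2, so x^T S_k x = x_k^2 <= t_k holds with equality, and
    t lies in T = {t >= 0 : ||t||_{p/2} <= 1} iff ||x||_p <= 1."""
    t = x**2
    in_T = np.sum(t**(p / 2)) <= 1.0 + 1e-12   # ||t||_{p/2} <= 1
    return t, in_T

x_in = rng.uniform(-1, 1, n)
x_in = 0.9 * x_in / np.sum(np.abs(x_in)**p)**(1 / p)   # scaled inside the ball
x_out = (1.5 / 0.9) * x_in                             # scaled outside the ball

_, witness_in = ellitope_witness(x_in, p)
_, witness_out = ellitope_witness(x_out, p)
```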
Observation: Let X = {x ∈ R^n : ∃t ∈ T : x^T S_k x ≤ t_k, 1 ≤ k ≤ K} be an ellitope. Given a quadratic form x^T Wx, W ∈ S^n, one has
max_{x∈X} x^T Wx = max_{x∈X} Tr(xx^T W) ≤ max_{Q∈Q} Tr(QW),
where
Q := { Q ∈ S^n : Q ⪰ 0, ∃t ∈ T : Tr(QS_k) ≤ t_k, k ≤ K }.
We conclude that
φ(H) ≤ ϕ(H) := σ^2 Tr(H^T H) + max_{Q∈Q} Tr( Q(A^T H − B^T)(H^T A − B) ),
and consequently
Risk^2[x̂_H|X] ≤ ϕ(H), whence the best risk bound of this type is min_H ϕ(H).
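One direction of the relaxation is immediate: for every x ∈ X the rank-one matrix Q = xx^T is feasible for Q (since Tr(xx^T S_k) = x^T S_k x), so the relaxation value dominates the true maximum. A numpy sketch for the box ellitope of Example C, with a PSD form W so that the true maximum sits at a vertex; purely illustrative:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 4
G = rng.standard_normal((n, n))
W = G @ G.T                       # convex quadratic form => box maximum at a vertex

# Box {x : |x_i| <= 1} as an ellitope: S_k = e_k e_k^T, T = [0,1]^n;
# the relaxation feasible set is Q = {Q >= 0 : Q_kk <= 1, all k}.
corners = np.array(list(product([-1.0, 1.0], repeat=n)))
true_max = max(c @ W @ c for c in corners)

# The rank-one lift of the best corner is feasible for Q and matches true_max,
# hence max_{Q in Q} Tr(QW) >= max_{x in X} x^T W x.
c_best = max(corners, key=lambda c: c @ W @ c)
Q = np.outer(c_best, c_best)
Q_is_feasible = (np.all(np.diag(Q) <= 1 + 1e-12)
                 and np.linalg.eigvalsh(Q).min() >= -1e-12)
relaxation_value_at_Q = np.trace(Q @ W)
```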
This attracts our attention to the optimization problem
Opt_P = min_H { ϕ(H) = max_{Q∈Q} Φ(H,Q) },   Φ(H,Q) := σ^2 Tr(H^T H) + Tr( Q(A^T H − B^T)(H^T A − B) ).   (P)
Note that (P) is the primal problem
min_H max_{Q∈Q} Φ(H,Q)
associated with the convex-concave saddle point function Φ(H,Q). The dual problem associated with Φ(H,Q) is
max_{Q∈Q} min_H Φ(H,Q),
that is, the problem
Opt_D = max_{Q∈Q} { ψ(Q) := min_H [ σ^2 Tr(H^T H) + Tr( Q(A^T H − B^T)(H^T A − B) ) ] }.   (D)
By the Sion-Kakutani theorem, (P) and (D) are solvable with equal optimal values: Opt_D = Opt_P = Opt.
Note that the minimizer H(Q) of Φ(·,Q) can be easily computed:
H(Q) = (σ^2 I_m + AQA^T)^{−1} AQB^T,
so that
ψ(Q) = Tr( B[Q − QA^T(σ^2 I_m + AQA^T)^{−1}AQ]B^T ),
and the dual problem reads
Opt_D = max_{Q,t} { Tr( B[Q − QA^T(σ^2 I_m + AQA^T)^{−1}AQ]B^T ) : Q ⪰ 0, t ∈ T, Tr(QS_k) ≤ t_k, k ≤ K }.   (D)
In fact, both (P) and (D) can be cast as semidefinite optimization problems. In particular, (P) can be rewritten as
Opt = min_{H,λ} { σ^2 Tr(H^T H) + φ_T(λ) : [ Σ_k λ_k S_k , B^T − A^T H ; B − H^T A , I_ν ] ⪰ 0, λ ≥ 0 },   (P′)
where the block matrix is written row by row and φ_T: R^K → R is the support function of T: φ_T(λ) = max_{t∈T} λ^T t.
Note: (P′) is efficiently solvable whenever T is computationally tractable.
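The formula for H(Q) is just the first-order condition ∇_H Φ = 2σ^2 H + 2AQ(A^T H − B^T) = 0 solved for H. A quick numpy check (random data, illustration only) that Φ(H(Q), Q) equals ψ(Q) and is not improved by perturbing H:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, nu, sigma = 5, 4, 3, 0.5
A = rng.standard_normal((m, n))
B = rng.standard_normal((nu, n))
G = rng.standard_normal((n, n)); Q = G @ G.T           # some Q >= 0

def Phi(H):
    # Phi(H, Q) = sigma^2 Tr(H^T H) + Tr(Q (A^T H - B^T)(H^T A - B))
    E = H.T @ A - B
    return sigma**2 * np.trace(H.T @ H) + np.trace(Q @ E.T @ E)

Mcov = sigma**2 * np.eye(m) + A @ Q @ A.T
H_Q = np.linalg.solve(Mcov, A @ Q @ B.T)               # H(Q)
psi_Q = np.trace(B @ (Q - Q @ A.T @ np.linalg.solve(Mcov, A @ Q)) @ B.T)

min_value = Phi(H_Q)                                   # should equal psi(Q)
no_improvement = all(Phi(H_Q + 1e-3 * rng.standard_normal((m, nu))) >= min_value
                     for _ in range(25))
```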
Bottom line: Given matrices A ∈ R^{m×n}, B ∈ R^{ν×n} and an ellitope
X = { x ∈ R^n : ∃t ∈ T : x^T S_k x ≤ t_k, 1 ≤ k ≤ K },   (*)
consider the convex optimization problems
Opt_P = min_H ϕ(H)  and  Opt_D = max_{Q∈Q} ψ(Q),  where Q := { Q ∈ S^n : Q ⪰ 0, ∃t ∈ T : Tr(QS_k) ≤ t_k, k ≤ K }.
• The optimal values of the two problems coincide: Opt_P = Opt_D = Opt.
• When the noise ξ satisfies E{ξ} = 0 and E{ξξ^T} = I_m, the risk of the linear estimate x̂_{H*}(·) induced by the optimal solution H* to the problem (this solution clearly exists provided that σ > 0) satisfies the risk bound
Risk[x̂_{H*}|X] ≤ √Opt.
We are to compare this bound on the risk of x̂_{H*} to the minimax risk
Risk_Opt[X] = inf_{x̂(·)} Risk[x̂|X].
Bayesian risks
• The minimax risk Risk_Opt[X] is defined through the worst, over the signals of interest, performance of an estimate x̂(·).
• A Bayesian risk is the average performance, with the average taken over some prior probability distribution on the signals.
For the problem of ‖·‖_2-recovering Bx via the noisy observation ω = Ax + σξ, ξ ∼ P, this alternative reads as follows:
(!) Given a probability distribution π of signals x ∈ R^n, find an estimate x̂(·) which minimizes
Risk^2(x̂|π) := ∫_{R^n} ∫_{R^m} ‖Bx − x̂(Ax + σξ)‖_2^2 P(dξ) π(dx),
the average, over the distribution π of signals x, of the expected ‖·‖_2^2 error of estimating Bx via the observation Ax + σξ.
Let P_{x,ω} be the joint distribution of (x, ω = Ax + σξ) on R^n_x × R^m_ω induced by π and P_ξ. P_{x,ω} gives rise to
• the marginal distribution P_ω of ω,
• the conditional distribution P_{x|ω} of x given ω.
We have
Risk^2(x̂|π) = ∫_{R^n_x × R^m_ω} ‖Bx − x̂(ω)‖_2^2 P_{x,ω}(dx,dω) = ∫_{R^m} [ ∫_{R^n} ‖Bx − x̂(ω)‖_2^2 P_{x|ω}(dx) ] P_ω(dω).
Assuming that the probability distribution π possesses finite second moments, one has
min_{x̂(·)} ∫ ‖Bx − x̂(ω)‖_2^2 P_{x,ω}(dx,dω) = ∫ ‖Bx − x̂_*(ω)‖_2^2 P_{x,ω}(dx,dω),
where
x̂_*(ω) = ∫_{R^n} Bx P_{x|ω}(dx)
is the conditional expectation of Bx given ω.
Corollary [Gauss-Markov theorem]: Let x ∈ R^n and ξ ∈ R^m be independent zero-mean Gaussian random vectors. Assuming σ > 0 and the covariance matrix of ξ positive definite, the conditional, given ω, distribution of x is normal, and the conditional expectation x̂_*(ω) is a linear function of ω. As a result, an optimal solution x̂_*(·) to the risk minimization problem
min_{x̂(·)} E_{x∼π,ξ}{ ‖Bx − x̂(Ax + σξ)‖_2^2 }
exists and is a linear function of ω = Ax + σξ. In particular, when ξ ∼ N(0, I_m) and x ∼ N(0, Q), one has
x̂_*(ω) = [ (σ^2 I_m + AQA^T)^{−1} AQB^T ]^T ω,
Risk^2(x̂_*|N(0,Q)) = Tr( B[Q − QA^T(σ^2 I_m + AQA^T)^{−1}AQ]B^T ).
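The closed-form risk of the posterior-mean estimate is easy to confirm by simulation. A sketch (small random instance, illustration only) drawing x ∼ N(0,Q), forming ω = Ax + σξ, applying the linear estimate x̂_*(ω) = BQA^T(σ^2 I_m + AQA^T)^{−1}ω, and comparing the empirical squared error with the trace formula:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, nu, sigma = 4, 3, 2, 0.4
A = rng.standard_normal((m, n))
B = rng.standard_normal((nu, n))
G = rng.standard_normal((n, n)); Q = G @ G.T + 0.1 * np.eye(n)  # prior covariance

Mcov = sigma**2 * np.eye(m) + A @ Q @ A.T
K = B @ Q @ A.T @ np.linalg.inv(Mcov)          # x_hat*(omega) = K omega is linear
risk_formula = np.trace(B @ (Q - Q @ A.T @ np.linalg.solve(Mcov, A @ Q)) @ B.T)

N = 200_000
L = np.linalg.cholesky(Q)
x = rng.standard_normal((N, n)) @ L.T          # x ~ N(0, Q), one sample per row
omega = x @ A.T + sigma * rng.standard_normal((N, m))
err = x @ B.T - omega @ K.T                    # rows: Bx - x_hat*(omega)
risk_mc = np.mean(np.sum(err**2, axis=1))
```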
Course of actions (Pinsker's program)
• Let N(0,Q) be a Gaussian prior for the signal x which sits on X with high probability. Then, by the Gauss-Markov theorem, the ("slightly reduced") quantity
ψ(Q) = Tr( B[Q − QA^T(σ^2 I_m + AQA^T)^{−1}AQ]B^T )
would be a lower bound on Risk^2_Opt.
• Note that E_{η∼N(0,Q)}{η^T Sη} = Tr(SQ). Thus, selecting Q ⪰ 0 according to
∃t ∈ T : Tr(QS_k) ≤ t_k, k ≤ K,
we ensure that η ∼ N(0,Q) sits in X "on average."
• Imposing on Q ⪰ 0 the restriction
∃t ∈ T : Tr(QS_k) ≤ ρ t_k, k ≤ K   [ρ > 0],
we enforce η ∼ N(0,Q) to take values in X with probability controlled by ρ and approaching 1 as ρ → +0.
The above considerations give rise to the parametric optimization problem
Opt_*(ρ) = max_{Q⪰0} { ψ(Q) : ∃t ∈ T : Tr(QS_k) ≤ ρ t_k, 1 ≤ k ≤ K }.   (P_ρ)
• We may expect that for small ρ a slightly corrected Opt_*(ρ) is a lower bound on Risk^2_Opt.
• As we have just seen, Opt_*(1) = Opt (!).
• Since the optimal value of the (concave) optimization problem (P_ρ) is a concave function of ρ, we have
Opt_*(ρ) ≥ ρ Opt,   0 < ρ < 1.
Now, all we need is the following simple result.
Lemma: Let S and Q be positive semidefinite n×n matrices with ρ := Tr(SQ) ≤ 1, and let η ∼ N(0,Q). Then
Prob{ η^T Sη > 1 } ≤ exp{ −(1 − ρ + ρ ln ρ)/(2ρ) }.
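The lemma is easy to probe numerically. The sketch below (random S and Q, with S rescaled so that Tr(SQ) = ρ; illustration only) compares the empirical frequency of {η^T Sη > 1} with the exponential bound:

```python
import numpy as np

rng = np.random.default_rng(6)
n, rho = 4, 0.2
G1 = rng.standard_normal((n, n)); S = G1 @ G1.T
G2 = rng.standard_normal((n, n)); Q = G2 @ G2.T + 0.1 * np.eye(n)
S *= rho / np.trace(S @ Q)                     # normalize so that Tr(SQ) = rho

# The bound of the lemma: exp{-(1 - rho + rho ln rho) / (2 rho)}
bound = np.exp(-(1 - rho + rho * np.log(rho)) / (2 * rho))

N = 100_000
L = np.linalg.cholesky(Q)
eta = rng.standard_normal((N, n)) @ L.T        # eta ~ N(0, Q)
freq = np.mean(np.einsum('ij,jk,ik->i', eta, S, eta) > 1.0)
```

With ρ = 0.2 the bound evaluates to about 0.30, while the empirical frequency sits well below it (by Markov's inequality alone it cannot exceed 0.2 up to sampling error).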
We arrive at the following
Theorem: Let us associate with the ellitope X = {x ∈ R^n : ∃t ∈ T : x^T S_k x ≤ t_k, k ≤ K} the convex compact set
Q = { Q ∈ S^n : Q ⪰ 0, ∃t ∈ T : Tr(QS_k) ≤ t_k, k ≤ K }
and the quantity
M = max_{Q∈Q} Tr(BQB^T).
Then the linear estimate x̂_{H*}(ω) = H*^T ω of Bx, x ∈ X, via observation ω = Ax + σξ, ξ ∼ N(0, I_m), given by the optimal solution H* to the convex optimization problem
Opt = min_{H,λ} { φ_T(λ) + σ^2 Tr(HH^T) : [ Σ_k λ_k S_k , B^T − A^T H ; B − H^T A , I_ν ] ⪰ 0, λ ≥ 0 }
satisfies the risk bound
Risk[x̂_{H*}|X] ≤ √Opt ≤ √( 6 ln( 8M^2 K / Risk^2_Opt[X] ) ) · Risk_Opt[X].
Numerical illustration
In these experiments B is the n×n identity matrix, and the n×n sensing matrix A is a randomly rotated matrix with singular values λ_j, 1 ≤ j ≤ n, forming a geometric progression with λ_1 = 1 and λ_n = ...
In the first experiment the signal set X_1 is the ellipsoid
X_1 = { x ∈ R^n : Σ_{j=1}^n j^2 x_j^2 ≤ 1 },
that is, K = 1, S_1 = Σ_{j=1}^n j^2 e_j e_j^T (e_j are the basic orths), and T = [0,1]. The theoretical suboptimality factor in this experiment lies in the interval [31.6, 73.7].
In the second experiment the signal set X is the box
X = { x ∈ R^n : j|x_j| ≤ 1, 1 ≤ j ≤ n }   [K = n, S_k = k^2 e_k e_k^T, k = 1, ..., K, T = [0,1]^K].
The theoretical suboptimality factor lies in the interval [73.2, 115.4].
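A sensing matrix of the kind used in these experiments can be reproduced along the following lines; note that the condition ratio λ_1/λ_n below is an assumed placeholder, not the value used in the talk:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 32
cond = 1.0e3                                   # assumed lambda_1 / lambda_n (placeholder)
svals = np.geomspace(1.0, 1.0 / cond, n)       # geometric progression, lambda_1 = 1

def random_rotation(k, rng):
    """Orthogonal matrix from the QR factorization of a Gaussian matrix."""
    Z = rng.standard_normal((k, k))
    Qm, R = np.linalg.qr(Z)
    return Qm * np.sign(np.diag(R))            # sign fix: columns scaled by +/-1

U, V = random_rotation(n, rng), random_rotation(n, rng)
A = U @ np.diag(svals) @ V.T                   # randomly rotated sensing matrix
recovered = np.linalg.svd(A, compute_uv=False) # singular values, descending
```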
[Figure] Recovery on ellipsoids: risk bounds as functions of the noise level σ, dimension n = 32. Left plot: upper and lower risk bounds; right plot: suboptimality ratios.
[Figure] Recovery on ellipsoids: risk bounds as functions of the problem dimension n, fixed noise level σ. Left plot: upper and lower risk bounds; right plot: suboptimality ratios.
[Figure] Recovery on a box: risk bounds as functions of the noise level σ, dimension n = 32. Left plot: upper and lower risk bounds; right plot: suboptimality ratios.
[Figure] Recovery on a box: risk bounds as functions of the problem dimension n, fixed noise level σ. Left plot: upper and lower risk bounds; right plot: suboptimality ratios.
Extensions
1. Relative risks
When very large signals are allowed, one may switch from the usual risk to its relative version, the S-risk, defined as follows: given a positive semidefinite "risk calibrating" matrix S, we set
RiskS[x̂|X] = min{ τ : E_ξ{ ‖Bx − x̂(Ax + σξ)‖_2^2 } ≤ τ[1 + x^T Sx] ∀x ∈ X }.
Note: setting S = 0 recovers the usual plain risk (squared).
Results on the design of near-optimal, in terms of plain risk, linear estimates extend directly to the case of S-risk.
The design of a near-optimal linear estimate x̂_{H*}(ω) = H*^T ω is given by an optimal solution (H*, τ*, λ*) to the convex optimization problem
Opt = min_{H,τ,λ} { τ : [ Σ_k λ_k S_k + τS , B^T − A^T H ; B − H^T A , I_ν ] ⪰ 0, σ^2 Tr(HH^T) + φ_T(λ) ≤ τ, λ ≥ 0 }.
For the resulting estimate it holds RiskS[x̂_{H*}|X] ≤ Opt, provided ξ is zero mean with unit covariance matrix.
Near-optimality properties of the estimate x̂_{H*} remain the same as in the case of plain risk: when ξ ∼ N(0, I_m), one has
RiskS[x̂_{H*}|X] ≤ √( 6 ln( 8KM^2 / RiskS^2_Opt[X] ) ) · RiskS_Opt[X],
where
M = max_Q { Tr(BQB^T) : Q ⪰ 0, ∃t ∈ T : Tr(QS_k) ≤ t_k, 1 ≤ k ≤ K }
and RiskS_Opt[X] = inf_{x̂(·)} RiskS[x̂|X].
In the case X = R^n, the best linear estimate is yielded by the optimal solution to the convex problem
Opt = min_{H,τ} { τ : [ τS , B^T − A^T H ; B − H^T A , I_ν ] ⪰ 0, σ^2 Tr(HH^T) ≤ τ }.   (*)
A feasible solution (τ, H) to (*) gives rise to the linear estimate x̂_H(ω) = H^T ω such that RiskS[x̂_H|R^n] ≤ τ, provided ξ is zero mean with unit covariance matrix.
Proposition: Assume that B ≠ 0 and (*) is feasible. Then the problem is solvable, and its optimal solution (Opt, H*) gives rise to the linear estimate x̂_{H*}(ω) = H*^T ω with S-risk ≤ Opt. When ξ ∼ N(0, I_m), this estimate is minimax optimal:
RiskS[x̂_{H*}|R^n] = Opt = RiskS_Opt[R^n].
2. Spectratopes
We say that a set X ⊂ R^n is a basic spectratope if it can be represented in the form
X = { x ∈ R^n : ∃t ∈ T : R_k^2[x] ⪯ t_k I_{d_k}, 1 ≤ k ≤ K }
where
[S1] R_k[x] = Σ_{i=1}^n x_i R^{ki} are symmetric d_k×d_k matrices linearly depending on x ∈ R^n (i.e., the matrix coefficients R^{ki} belong to S^{d_k});
[S2] T ⊂ R^K_+ is a convex compact subset of R^K_+ which contains a positive vector and is monotone: 0 ≤ t′ ≤ t ∈ T ⇒ t′ ∈ T;
[S3] whenever x ≠ 0, it holds that R_k[x] ≠ 0 for at least one k ≤ K.
A spectratope is a linear image Y = PX of a basic spectratope. We refer to D = Σ_k d_k as the size of the spectratope Y.
Examples
A. Every ellitope is a spectratope.
B. Let L be a positive definite d×d matrix. Then the matrix box
X = { X ∈ S^d : −L ⪯ X ⪯ L } = { X ∈ S^d : R^2[X] := [L^{−1/2} X L^{−1/2}]^2 ⪯ I_d }
is a basic spectratope. As a result, a bounded set X ⊂ R^n given by a system of two-sided LMIs, specifically,
X = { x ∈ R^n : ∃t ∈ T : −√t_k L_k ⪯ S_k[x] ⪯ √t_k L_k, 1 ≤ k ≤ K },
where the S_k[x] are symmetric d_k×d_k matrices linearly depending on x, L_k ≻ 0, and T satisfies [S2], is a basic spectratope:
X = { x ∈ R^n : ∃t ∈ T : R_k^2[x] ⪯ t_k I_{d_k}, k ≤ K },   R_k[x] = L_k^{−1/2} S_k[x] L_k^{−1/2}.
Same as ellitopes, spectratopes admit a fully algorithmic calculus.
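The matrix-box equivalence −L ⪯ X ⪯ L ⇔ [L^{−1/2} X L^{−1/2}]^2 ⪯ I_d is a one-liner to check with eigenvalues (random instances, illustration only):

```python
import numpy as np

rng = np.random.default_rng(8)
d = 4
G = rng.standard_normal((d, d)); L = G @ G.T + np.eye(d)  # L positive definite

w, U = np.linalg.eigh(L)
L_half_inv = U @ np.diag(w**-0.5) @ U.T                   # L^{-1/2}

def in_matrix_box(X):
    """-L <= X <= L in the semidefinite order, checked directly."""
    return (np.linalg.eigvalsh(L - X).min() >= -1e-9
            and np.linalg.eigvalsh(L + X).min() >= -1e-9)

def spectratope_membership(X):
    """R^2[X] = (L^{-1/2} X L^{-1/2})^2 <= I_d."""
    R = L_half_inv @ X @ L_half_inv
    return np.linalg.eigvalsh(R @ R).max() <= 1 + 1e-9

def random_sym(scale):
    Y = scale * rng.standard_normal((d, d))
    return Y + Y.T

samples = [random_sym(s) for s in (0.1, 0.3, 1.0, 3.0) for _ in range(100)]
agree = all(in_matrix_box(X) == spectratope_membership(X) for X in samples)
```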
Bounding quadratic forms over spectratopes
Proposition: Let G be a symmetric n×n matrix, let X ⊂ R^n be given by a spectratopic representation, and let
Opt_* = max_{x∈X} x^T Gx
and
Opt = min_{Λ={Λ_k}_{k≤K}} { φ_T(λ[Λ]) : Λ_k ⪰ 0, P^T GP ⪯ Σ_k R_k^*[Λ_k] },   (QPR)
where R_k^*: S^{d_k} → S^n is the conjugate linear mapping,
[R_k^*(Λ)]_{ij} = ½ Tr( Λ[R^{ki}R^{kj} + R^{kj}R^{ki}] ),   1 ≤ i,j ≤ n,
φ_T is the support function of T, and for Λ = {Λ_k ∈ S^{d_k}}_{k≤K}, λ[Λ] = [Tr(Λ_1); ...; Tr(Λ_K)]. Then (QPR) is solvable, and
Opt_* ≤ Opt ≤ 2 max[ln(2D), 1] · Opt_*.
Remark: The result of the proposition has some history.
• Nemirovski, Roos and Terlaky, 1999: X is an intersection of ellipsoids/elliptic cylinders centered at the origin.
• J. and Nemirovski, 2016: X is an ellitope, with the tighter bound Opt_* ≤ Opt ≤ 4 ln(5K) · Opt_*.
Note that in the case of an ellitope, (QPR) results in a somewhat worse suboptimality factor O(1) ln( Σ_{k=1}^K Rank(S_k) ).
Building the linear estimate
Proposition: Consider the convex optimization problem
Opt = min_{H,Λ,τ} { τ : (B − H^T A)^T(B − H^T A) ⪯ Σ_k R_k^*(Λ_k), σ^2 Tr(H^T H) + φ_T(λ[Λ]) ≤ τ }.   (*)
Problem (*) is solvable, and its feasible solution (H, Λ, τ) induces a linear estimate x̂_H = H^T ω of Bx, x ∈ X, via observation ω = Ax + σξ, ξ ∼ N(0, I), with the maximal over X risk not exceeding √τ.
Proposition: Let X be a spectratope, and let
Q = { Q ∈ S^n_+ : ∃t ∈ T : R_k[Q] ⪯ t_k I_{d_k}, k ≤ K }.
The set Q is a nonempty convex compact set containing a neighbourhood of the origin, so that the quantity
M = max_{Q∈Q} Tr(BQB^T)
is well defined and positive. The efficiently computable linear estimate x̂_{H*}(ω) = H*^T ω yielded by the optimal solution of (*) is nearly optimal in terms of the risk:
Risk[x̂_{H*}|X] ≤ 2√( 2 ln( 8DM^2 / Risk^2_Opt[X] ) ) · Risk_Opt[X],
where Risk_Opt[X] = inf_{x̂(·)} Risk[x̂|X] is the minimax risk associated with X, and D = Σ_k d_k.
3. Norms
We say that the norm ‖·‖ is spectratopic-representable if the unit ball B_* of the conjugate norm ‖·‖_* is a spectratope:
B_* = MY,   Y = { y ∈ R^n : ∃r ∈ R : S_l^2[y] ⪯ r_l I_{f_l}, 1 ≤ l ≤ L },
where S_l[y] = Σ_i y_i S^{li} with S^{li} ∈ S^{f_l}, l = 1, ..., L, and R is valid spectratopic data (playing the role of T). We denote by F = Σ_l f_l the size of B_*.
Examples:
• the ℓ_p-norm with 1 ≤ p ≤ 2 — B_* is the unit ball of the ℓ_q-norm with 1/p + 1/q = 1;
• a norm whose conjugate unit ball B_* is an affine image of the direct product of unit balls of the norms ‖·‖_∞ and ‖·‖_2;
• the combined norm min_{x=u+v} [ ‖M_1 u‖_1 + ‖M_2 v‖_2 ] — B_* is the intersection B_1 ∩ B_2 of the unit balls of the norms ‖M_1^T ·‖_∞ and ‖M_2^T ·‖_2;
• the nuclear norm ‖·‖_{Sh,1} — B_* is the unit ball of the spectral norm ‖·‖_{Sh,∞};
• ... the spectral norm ‖·‖_{Sh,∞} itself is difficult.
For an estimate x̂ of Bx, let
Risk_{‖·‖}[x̂|X] = sup_{x∈X} E_{ξ∼N(0,I_m)}{ ‖Bx − x̂(Ax + σξ)‖ }.
Proposition: Consider the convex optimization problem
Opt = min_{H,Λ,Υ,Υ′,Θ} { φ_T(λ[Λ]) + φ_R(λ[Υ]) + φ_R(λ[Υ′]) + σ Tr(Θ) :
  Λ = {Λ_k ⪰ 0, k ≤ K}, Υ = {Υ_l ⪰ 0, l ≤ L}, Υ′ = {Υ′_l ⪰ 0, l ≤ L},
  [ Σ_k R_k^*[Λ_k] , ½[B^T − A^T H]M ; ½M^T[B − H^T A] , Σ_l S_l^*[Υ_l] ] ⪰ 0,
  [ Θ , ½HM ; ½M^T H^T , Σ_l S_l^*[Υ′_l] ] ⪰ 0 }.
Here, for Λ = {Λ_i ∈ S^{m_i}}_{i≤I}, λ[Λ] = [Tr(Λ_1); ...; Tr(Λ_I)],
[R_k^*[Λ_k]]_{ij} = ½ Tr( Λ_k[R^{ki}R^{kj} + R^{kj}R^{ki}] ), where R_k[x] = Σ_i x_i R^{ki},
[S_l^*[Υ_l]]_{ij} = ½ Tr( Υ_l[S^{li}S^{lj} + S^{lj}S^{li}] ), where S_l[y] = Σ_i y_i S^{li},
and φ_T, φ_R are the support functions of T and R. The problem is solvable, and the H-component H* of its optimal solution yields the linear estimate x̂_{H*}(ω) = H*^T ω such that
Risk_{‖·‖}[x̂_{H*}|X] ≤ Opt.
Near-optimality of linear estimation on spectratopes
Proposition: Let
M^2 = max_W { E_{η∼N(0,I_n)}{ ‖BW^{1/2}η‖^2 } : W ∈ Q := { W ∈ S^n_+ : ∃t ∈ T : R_k[W] ⪯ t_k I_{d_k}, 1 ≤ k ≤ K } }.
Then there is an efficiently computable linear estimate x̂_{H*} = H*^T ω which satisfies
Risk_{‖·‖}[x̂_{H*}|X] ≤ Opt ≤ C √( ln(2F) ln( 2DM^2 / Risk^2_{‖·‖,Opt}[X] ) ) · Risk_{‖·‖,Opt}[X],
where C is a positive absolute constant,
Risk_{‖·‖,Opt}[X] = inf_{x̂(·)} sup_{x∈X} E_{ξ∼N(0,I_m)}{ ‖Bx − x̂(Ax + σξ)‖ },
the infimum being taken over all estimates, and D = Σ_k d_k, F = Σ_l f_l.
The key component
Lemma: Let Y be an N×ν matrix, let ‖·‖ be a norm on R^ν such that the unit ball B_* of the conjugate norm is a spectratope, and let ζ ∼ N(0,Q) for some positive semidefinite N×N matrix Q. Then the upper bound on
ψ_Q(Y) := E{ ‖Y^T ζ‖ }
yielded by the SDP relaxation — that is, the optimal value Opt[Q] of the convex optimization problem
Opt[Q] = min_{Θ,Υ} { φ_R(λ[Υ]) + Tr(Θ) : Υ = {Υ_l ⪰ 0, 1 ≤ l ≤ L}, Θ ∈ S^N,
  [ Θ , ½Q^{1/2}YM ; ½M^T Y^T Q^{1/2} , Σ_l S_l^*[Υ_l] ] ⪰ 0 }
— is tight, namely,
ψ_Q(Y) ≤ Opt[Q] ≤ √( 4 ln( 8F · 2e^{1/4}/(2 − e^{1/4}) ) ) · ψ_Q(Y),
where F = Σ_l f_l is the size of the spectratope B_*.
Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization Robert M. Freund M.I.T. June, 2010 from papers in SIOPT, Mathematics
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More information(Linear Programming program, Linear, Theorem on Alternative, Linear Programming duality)
Lecture 2 Theory of Linear Programming (Linear Programming program, Linear, Theorem on Alternative, Linear Programming duality) 2.1 Linear Programming: basic notions A Linear Programming (LP) program is
More informationROBUST CONVEX OPTIMIZATION
ROBUST CONVEX OPTIMIZATION A. BEN-TAL AND A. NEMIROVSKI We study convex optimization problems for which the data is not specified exactly and it is only known to belong to a given uncertainty set U, yet
More informationapproximation algorithms I
SUM-OF-SQUARES method and approximation algorithms I David Steurer Cornell Cargese Workshop, 201 meta-task encoded as low-degree polynomial in R x example: f(x) = i,j n w ij x i x j 2 given: functions
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationDifficult Geometry Convex Programs and Conditional Gradient Algorithms
Difficult Geometry Convex Programs and Conditional Gradient Algorithms Arkadi Nemirovski H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology Joint research with
More informationAdditional Homework Problems
Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.
More informationLecture Notes 9: Constrained Optimization
Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form
More informationRobust and Stochastic Optimization Notes. Kevin Kircher, Cornell MAE
Robust and Stochastic Optimization Notes Kevin Kircher, Cornell MAE These are partial notes from ECE 6990, Robust and Stochastic Optimization, as taught by Prof. Eilyan Bitar at Cornell University in the
More informationarxiv: v2 [math.st] 24 Feb 2017
On sequential hypotheses testing via convex optimization Anatoli Juditsky Arkadi Nemirovski September 12, 2018 arxiv:1412.1605v2 [math.st] 24 Feb 2017 Abstract We propose a new approach to sequential testing
More informationDeterminant maximization with linear. S. Boyd, L. Vandenberghe, S.-P. Wu. Information Systems Laboratory. Stanford University
Determinant maximization with linear matrix inequality constraints S. Boyd, L. Vandenberghe, S.-P. Wu Information Systems Laboratory Stanford University SCCM Seminar 5 February 1996 1 MAXDET problem denition
More informationGauge optimization and duality
1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationInverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology
Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x
More informationc 2000 Society for Industrial and Applied Mathematics
SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.
More informationLecture 8. Strong Duality Results. September 22, 2008
Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation
More informationA Unified Theorem on SDP Rank Reduction. yyye
SDP Rank Reduction Yinyu Ye, EURO XXII 1 A Unified Theorem on SDP Rank Reduction Yinyu Ye Department of Management Science and Engineering and Institute of Computational and Mathematical Engineering Stanford
More informationWe describe the generalization of Hazan s algorithm for symmetric programming
ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,
More informationThe moment-lp and moment-sos approaches
The moment-lp and moment-sos approaches LAAS-CNRS and Institute of Mathematics, Toulouse, France CIRM, November 2013 Semidefinite Programming Why polynomial optimization? LP- and SDP- CERTIFICATES of POSITIVITY
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationA strongly polynomial algorithm for linear systems having a binary solution
A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th
More informationConvex Stochastic and Large-Scale Deterministic Programming via Robust Stochastic Approximation and its Extensions
Convex Stochastic and Large-Scale Deterministic Programming via Robust Stochastic Approximation and its Extensions Arkadi Nemirovski H. Milton Stewart School of Industrial and Systems Engineering Georgia
More informationA SHORT INTRODUCTION TO BANACH LATTICES AND
CHAPTER A SHORT INTRODUCTION TO BANACH LATTICES AND POSITIVE OPERATORS In tis capter we give a brief introduction to Banac lattices and positive operators. Most results of tis capter can be found, e.g.,
More informationInterval solutions for interval algebraic equations
Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya
More informationLinear and non-linear programming
Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)
More informationIteration-complexity of first-order penalty methods for convex programming
Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)
More informationOn the Existence and Convergence of the Central Path for Convex Programming and Some Duality Results
Computational Optimization and Applications, 10, 51 77 (1998) c 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. On the Existence and Convergence of the Central Path for Convex
More informationA Concise Course on Stochastic Partial Differential Equations
A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original
More informationELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications
ELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications Professor M. Chiang Electrical Engineering Department, Princeton University February
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationOn duality theory of conic linear problems
On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu
More informationQuadratic reformulation techniques for 0-1 quadratic programs
OSE SEMINAR 2014 Quadratic reformulation techniques for 0-1 quadratic programs Ray Pörn CENTER OF EXCELLENCE IN OPTIMIZATION AND SYSTEMS ENGINEERING ÅBO AKADEMI UNIVERSITY ÅBO NOVEMBER 14th 2014 2 Structure
More informationDuality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities
Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form
More informationDistributionally Robust Convex Optimization
Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,
More informationConvex Optimization & Lagrange Duality
Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT
More informationExercises. Exercises. Basic terminology and optimality conditions. 4.2 Consider the optimization problem
Exercises Basic terminology and optimality conditions 4.1 Consider the optimization problem f 0(x 1, x 2) 2x 1 + x 2 1 x 1 + 3x 2 1 x 1 0, x 2 0. Make a sketch of the feasible set. For each of the following
More informationLecture 7: Weak Duality
EE 227A: Conve Optimization and Applications February 7, 2012 Lecture 7: Weak Duality Lecturer: Laurent El Ghaoui 7.1 Lagrange Dual problem 7.1.1 Primal problem In this section, we consider a possibly
More informationDistributionally robust optimization techniques in batch bayesian optimisation
Distributionally robust optimization techniques in batch bayesian optimisation Nikitas Rontsis June 13, 2016 1 Introduction This report is concerned with performing batch bayesian optimization of an unknown
More informationConvex Optimization and l 1 -minimization
Convex Optimization and l 1 -minimization Sangwoon Yun Computational Sciences Korea Institute for Advanced Study December 11, 2009 2009 NIMS Thematic Winter School Outline I. Convex Optimization II. l
More informationApplications of Robust Optimization in Signal Processing: Beamforming and Power Control Fall 2012
Applications of Robust Optimization in Signal Processing: Beamforg and Power Control Fall 2012 Instructor: Farid Alizadeh Scribe: Shunqiao Sun 12/09/2012 1 Overview In this presentation, we study the applications
More informationOn the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems
MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in
More informationSDP Relaxation of Quadratic Optimization with Few Homogeneous Quadratic Constraints
SDP Relaxation of Quadratic Optimization with Few Homogeneous Quadratic Constraints Paul Tseng Mathematics, University of Washington Seattle LIDS, MIT March 22, 2006 (Joint work with Z.-Q. Luo, N. D. Sidiropoulos,
More information