Orientation Assignment in Cryo-EM


1 Orientation Assignment in Cryo-EM. Amit Singer, Princeton University, Department of Mathematics and Program in Applied and Computational Mathematics. July 24, 2014.

2 Single Particle Reconstruction using cryo-EM. [Figures: a schematic drawing of the imaging process, and an illustration of the cryo-EM problem.]

3 Orientation Estimation: Fourier projection-slice theorem. [Figure: the Fourier transform Î_i of each projection I_i is a central slice of the 3D Fourier transform of the molecule φ; any two slices intersect along a common line.] Each projection I_i is taken at an unknown rotation R_i ∈ SO(3). In the coordinates of image i, the common line with image j is c_ij = (x_ij, y_ij, 0)^T, and the common line equation is R_i c_ij = R_j c_ji. Cryo-EM inverse problem: find φ (and R_1, ..., R_n) given I_1, ..., I_n. n = 3: Vainshtein and Goncharov 1986, van Heel 1987. n > 3: S, Shkolnisky, SIAM Imaging 2011; Bandeira, Charikar, Chen, S, in preparation.

4 Assumptions for today's talk: trivial point-group symmetry; homogeneity.

5 Experiments with simulated noisy projections. Each projection is 129x129 pixels. SNR = Var(Signal) / Var(Noise). [Figure: (a) a clean projection; (b)-(j) the same projection at SNR = 2^0, 2^{-1}, 2^{-2}, ..., 2^{-8}.]
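
A minimal sketch (not part of the talk) of how such noisy projections can be synthesized: white Gaussian noise is scaled so that Var(Signal)/Var(Noise) matches the target SNR. The array clean is an assumed input.

```python
import numpy as np

def add_noise(clean, snr, rng=None):
    """Add white Gaussian noise so that Var(signal) / Var(noise) == snr."""
    rng = rng or np.random.default_rng()
    noise_var = clean.var() / snr
    return clean + rng.normal(0.0, np.sqrt(noise_var), clean.shape)

# Example: the SNR levels shown on the slide, from SNR = 2^0 down to 2^-8.
# clean = ...  (a 129x129 projection image)
# noisy = [add_noise(clean, 2.0 ** (-k)) for k in range(9)]
```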

6 Fraction of correctly identified common lines and the SNR. A common line is defined as correctly identified if both radial lines deviate by no more than 10 degrees from the true directions. The fraction p of correctly identified common lines is increased by applying PCA to the images. [Table: p as a function of log_2(SNR).]
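
As an illustration of the 10-degree criterion (the function names are mine, not from the slides), one can compare each estimated radial line with its true direction:

```python
import numpy as np

def correctly_identified(c_est, c_true, tol_deg=10.0):
    """A common line counts as correctly identified if BOTH radial lines
    (one per image) deviate by at most tol_deg degrees from the truth.
    c_est and c_true are pairs (c_ij, c_ji) of 2D direction vectors."""
    def angle_deg(u, v):
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    return all(angle_deg(e, t) <= tol_deg for e, t in zip(c_est, c_true))
```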

7 Least Squares Approach. [Figure: the common-line geometry of slide 3.] Least squares: min_{R_1, R_2, ..., R_n ∈ SO(3)} Σ_{i≠j} ||R_i c_ij - R_j c_ji||^2. The search space is exponentially large and non-convex.
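
A direct transcription of the objective into code, to fix notation (a sketch; the data structure for the common lines is my choice):

```python
import numpy as np

def ls_objective(R, c):
    """Least-squares common-line objective: sum_{i != j} ||R_i c_ij - R_j c_ji||^2.

    R: list of n rotation matrices (3x3 numpy arrays).
    c: dict mapping (i, j) -> c_ij = (x_ij, y_ij, 0), the common line
       between images i and j expressed in the coordinates of image i.
    """
    n = len(R)
    return sum(np.sum((R[i] @ c[(i, j)] - R[j] @ c[(j, i)]) ** 2)
               for i in range(n) for j in range(n) if i != j)
```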

8 Quadratic Optimization Under Orthogonality Constraints. Quadratic cost: Σ_{i≠j} ||R_i c_ij - R_j c_ji||^2. Quadratic constraints: R_i^T R_i = I_{3x3} (the det(R_i) = +1 constraint is ignored). We approximate the solution using SDP and rounding (S, Shkolnisky 2011). Related to: MAX-CUT (Goemans and Williamson 1995); the Generalized Orthogonal Procrustes Problem (Nemirovski 2007); PhaseLift (Candès, Strohmer, Voroninski 2013); the Non-commutative Grothendieck Problem (Naor, Regev, Vidick 2013). Robust version, Least Unsquared Deviations (Wang, S, Wen 2013): min_{R_1, R_2, ..., R_n ∈ SO(3)} Σ_{i≠j} ||R_i c_ij - R_j c_ji||.

9 Detour: MAX-CUT. Given a weighted undirected graph with n vertices and non-negative weights w_ij ≥ 0, split the vertices into two sets such that the total weight of the edges across the cut is maximized. [Figure: a 5-vertex example with edges w_12, w_14, w_24, w_25, w_34, w_35, w_45.] CUT({1,2,3}, {4,5}) = w_14 + w_24 + w_25 + w_34 + w_35.

10 The MAX-CUT Approximation of Goemans & Williamson. OPT = max_{y_1, y_2, ..., y_n ∈ {-1,+1}} (1/4) Σ_{i,j} w_ij (1 - y_i y_j), a combinatorial problem that is known to be NP-complete. SDP relaxation of Goemans-Williamson (1995): the n x n Gram matrix G = y y^T (G_ij = y_i y_j) is PSD (G ⪰ 0) with G_ii = 1 and rank(G) = 1; dropping the rank constraint gives the SDP
max_G (1/4) Σ_{i,j} w_ij (1 - G_ij) s.t. G ⪰ 0, G_ii = 1, i = 1, 2, ..., n.
Rounding: draw a random vector z with i.i.d. standard Gaussian entries z_i ~ N(0,1), compute Yz, where G = Y Y^T is the Cholesky decomposition of G, and round y_i = sign[(Yz)_i]. Theorem (G-W): E[CUT_GW] > 0.87 OPT.
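
A compact sketch of the relaxation and rounding just described, using cvxpy as the SDP solver (my choice of library; any SDP solver works). One deliberate substitution: G = Y Y^T is factored by eigendecomposition rather than Cholesky, which is more robust when the solver returns a nearly rank-deficient G.

```python
import numpy as np
import cvxpy as cp

def gw_maxcut(W, rng=None):
    """Goemans-Williamson: SDP relaxation of MAX-CUT + hyperplane rounding.
    W is a symmetric matrix of non-negative edge weights."""
    n = W.shape[0]
    G = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - G)))
    cp.Problem(objective, [cp.diag(G) == 1]).solve()
    # Factor G = Y Y^T (eigendecomposition instead of Cholesky).
    vals, vecs = np.linalg.eigh(G.value)
    Y = vecs * np.sqrt(np.clip(vals, 0, None))
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n)      # random hyperplane direction
    y = np.sign(Y @ z)              # rounded cut: y_i in {-1, +1}
    cut_value = 0.25 * np.sum(W * (1 - np.outer(y, y)))
    return y, cut_value
```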

11 SDP Relaxation for the Common-Lines Problem. Least squares is equivalent to maximizing the sum of inner products (the squared norms are constants):
min_{R_1,...,R_n ∈ SO(3)} Σ_{i≠j} ||R_i c_ij - R_j c_ji||^2 ⟺ max_{R_1,...,R_n ∈ SO(3)} Σ_{i≠j} ⟨R_i c_ij, R_j c_ji⟩ = max_{R_1,...,R_n ∈ SO(3)} Σ_{i≠j} Tr(c_ji c_ij^T R_i^T R_j) = max_{R_1,...,R_n ∈ SO(3)} Tr(CG).
Since the third coordinate of c_ij is zero, only the first two columns of each rotation enter: R_i c_ij = R̃_i c̃_ij, where R̃_i is the 3 x 2 matrix of the first two columns of R_i and c̃_ij = (x_ij, y_ij)^T. C is the 2n x 2n matrix ("the common lines matrix") with 2 x 2 blocks
C_ij = c̃_ij c̃_ji^T = [ x_ij x_ji  x_ij y_ji ; y_ij x_ji  y_ij y_ji ],  C_ii = 0_{2x2}.
G is the 2n x 2n Gram matrix G = R̃^T R̃ with R̃ = [R̃_1 R̃_2 ... R̃_n], so that G_ij = R̃_i^T R̃_j and, in particular, G_ii = I_{2x2}.

12 SDP Relaxation and Rounding. SDP relaxation: replace max_{R_1,...,R_n ∈ SO(3)} Tr(CG) by
max_{G ∈ R^{2n x 2n}} Tr(CG) s.t. G ⪰ 0, G_ii = I_{2x2}, i = 1, 2, ..., n.
Missing is the non-convex constraint rank(G) = 3. Rounding: randomize a 2n x 3 orthogonal matrix Q using (careful) QR factorization of a 2n x 3 matrix with i.i.d. standard Gaussian entries; compute the Cholesky decomposition G = Y Y^T; round using SVD: writing the i-th 2 x 3 block of YQ as (YQ)_i = U_i Σ_i V_i^T, set R̃_i^T = U_i V_i^T. Use the cross product to find the third column of R_i. Loss of handedness.
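
A sketch of the rounding step (the SDP solve itself is omitted; it can be set up much like the MAX-CUT example, with blockwise identity constraints on G):

```python
import numpy as np

def round_gram(G, rng=None):
    """Round a 2n x 2n PSD solution G of the common-lines SDP to rotations."""
    rng = rng or np.random.default_rng()
    two_n = G.shape[0]
    # Factor G = Y Y^T; eigendecomposition handles rank-deficient G gracefully.
    vals, vecs = np.linalg.eigh(G)
    Y = vecs * np.sqrt(np.clip(vals, 0, None))
    # Random 2n x 3 matrix with orthonormal columns, via QR.
    Q, _ = np.linalg.qr(rng.standard_normal((two_n, 3)))
    B = Y @ Q
    R = []
    for i in range(two_n // 2):
        # Project the i-th 2x3 block onto matrices with orthonormal rows.
        U, _, Vt = np.linalg.svd(B[2*i:2*i+2], full_matrices=False)
        r1, r2 = U @ Vt                 # rows = first two columns of R_i
        R.append(np.column_stack([r1, r2, np.cross(r1, r2)]))
    # Recovered up to a global rotation; handedness is not determined.
    return R
```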

13 Spectral Relaxation for Uniformly Distributed Rotations. Write the first two columns of R_i as
[R_i^1 R_i^2] = [ x_i^1 x_i^2 ; y_i^1 y_i^2 ; z_i^1 z_i^2 ],  i = 1, ..., n,
and define three vectors of length 2n:
x = (x_1^1, x_1^2, x_2^1, x_2^2, ..., x_n^1, x_n^2)^T,
y = (y_1^1, y_1^2, y_2^1, y_2^2, ..., y_n^1, y_n^2)^T,
z = (z_1^1, z_1^2, z_2^1, z_2^2, ..., z_n^1, z_n^2)^T.
Rewrite the least squares objective function as
max_{R_1,...,R_n ∈ SO(3)} Σ_{i≠j} ⟨R_i c_ij, R_j c_ji⟩ = max_{R_1,...,R_n ∈ SO(3)} x^T C x + y^T C y + z^T C z.
By symmetry, if the rotations are uniformly distributed over SO(3), then the top eigenvalue of C has multiplicity 3 and the corresponding eigenvectors are x, y, z, from which we recover R_1, R_2, ..., R_n!
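
A sketch of the spectral recovery, assuming a common-lines matrix C built from reasonably well detected common lines of roughly uniformly distributed rotations:

```python
import numpy as np

def spectral_rotations(C):
    """Recover rotations from the top three eigenvectors of the 2n x 2n
    common-lines matrix C (up to a global rotation/reflection)."""
    vals, vecs = np.linalg.eigh(C)
    V = vecs[:, -3:]                    # eigenvectors of the 3 largest eigenvalues
    R = []
    for i in range(C.shape[0] // 2):
        A = V[2*i:2*i+2].T              # 3x2 block ~ first two columns of R_i
        U, _, Wt = np.linalg.svd(A, full_matrices=False)
        B = U @ Wt                      # nearest matrix with orthonormal columns
        R.append(np.column_stack([B[:, 0], B[:, 1], np.cross(B[:, 0], B[:, 1])]))
    return R
```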

14 Spectrum of C. Numerical simulation with n = 1000 rotations sampled from the Haar measure; no noise. [Figure: bar plots of the positive (left) and negative (right) eigenvalues of C.] Eigenvalues: λ_l ≈ n (-1)^{l+1} / (l(l+1)), l = 1, 2, 3, ..., that is, n/2, -n/6, n/12, .... Multiplicities: 2l + 1. Two basic questions: 1. Why this spectrum? Answer: representation theory of SO(3) (Hadani, S, 2011). 2. Is it stable to noise? Answer: yes, due to random matrix theory.

15 Common Lines Kernel. Common line equation: R_i c_ij = R_j c_ji = ± (R_i^3 × R_j^3) / ||R_i^3 × R_j^3||, where R_i^3 is the third column of R_i (the viewing direction); equivalently,
c_ij = ± R_i^{-1} (R_i^3 × R_j^3) / ||R_i^3 × R_j^3||,  c_ji = ± R_j^{-1} (R_i^3 × R_j^3) / ||R_i^3 × R_j^3||.
By basic properties of the cross product, C_ij is only a function of the ratio U_ij = R_i^{-1} R_j:
C_ij = [ x_ij x_ji  x_ij y_ji ; y_ij x_ji  y_ij y_ji ] = (P c_ij)(P c_ji)^T = P (e_3 × U_ij^3) ((U_ij^{-1})^3 × e_3)^T P^T / ||e_3 × U_ij^3||^2,
where P = [1 0 0; 0 1 0] projects onto the first two coordinates and e_3 = (0,0,1)^T. (The product is independent of the sign choice of the common-line direction.) Common lines kernel: C_ij = k(R_i, R_j) = k(R_i^{-1} R_j) : R^2 → R^2.
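
The two expressions for C_ij can be checked against each other numerically; a small sketch (helper names are mine):

```python
import numpy as np

P = np.array([[1., 0., 0.], [0., 1., 0.]])   # projection onto first two coords
e3 = np.array([0., 0., 1.])

def block_from_rotations(Ri, Rj):
    """C_ij = (P c_ij)(P c_ji)^T computed directly from the common line."""
    v = np.cross(Ri[:, 2], Rj[:, 2])
    v /= np.linalg.norm(v)
    return np.outer(P @ (Ri.T @ v), P @ (Rj.T @ v))

def block_from_ratio(U):
    """The same block computed only from the ratio U_ij = R_i^{-1} R_j."""
    a = np.cross(e3, U[:, 2])       # e3 x (third column of U_ij)
    b = np.cross(U.T[:, 2], e3)     # (third column of U_ij^{-1}) x e3
    return np.outer(P @ a, P @ b) / (a @ a)

# For any rotations Ri, Rj with distinct viewing directions:
# np.allclose(block_from_rotations(Ri, Rj), block_from_ratio(Ri.T @ Rj)) -> True
```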

16 Common Lines Operator. Common lines kernel: C_ij = k(R_i, R_j) = k(R_i^{-1} R_j) : R^2 → R^2. Suppose f : SO(3) → R^2; then, by the law of large numbers,
(1/n) (Cf)(R_i) = (1/n) Σ_{j=1}^n C_ij f(R_j) → ∫_{SO(3)} k(R_i, R) f(R) dμ(R) = (Cf)(R_i).
The common lines integral operator C is a convolution over SO(3), and its eigenfunction equation reads
(Cf)(R_i) = ∫_{SO(3)} k(R_i^{-1} R) f(R) dμ(R) = λ f(R_i).
Representation theory of SO(3) explains the eigenvalues and eigenfunctions. The eigenfunctions are tangent vector fields to the 2-sphere (sections of the tangent bundle); they are the projected gradients of the scalar spherical harmonics.

17 Probabilistic Model and Wigner's Semi-Circle Law. Simplistic model: every common line is detected correctly with probability p, independently of all other common lines; with probability 1 - p the common line is falsely detected, and the false detections are uniformly distributed over the unit circle. Let C_clean be the matrix C when all common lines are detected correctly (p = 1). The expected value of the noisy matrix C is E[C] = p C_clean, as the contribution of the falsely detected common lines to the expected value vanishes. Decompose C as C = p C_clean + W, where W is a 2n x 2n zero-mean random matrix. The eigenvalues of W are distributed according to Wigner's semi-circle law, whose support, up to O(p) and finite-sample fluctuations, is [-√(2n), √(2n)].

18 Threshold probability. A sufficient condition for the top three eigenvalues to be pushed away from the semi-circle, with no other eigenvalue crossings, is
p Δ(C_clean) > (1/2) λ_1(W)
(rank-1 and finite-rank deformed Wigner matrices: Füredi and Komlós 1981, Féral and Péché 2007, ...). The spectral gap Δ(C_clean) grows linearly in n, while the spectral norm satisfies λ_1(W) ≈ √(2n). The threshold probability therefore scales as p_c ∝ 1/√n.
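
The threshold can be observed numerically. The sketch below (my construction, following the slide's model) builds C from Haar-random rotations, replaces each common line by a uniformly random one with probability 1 - p, and prints the top eigenvalues against the semi-circle edge:

```python
import numpy as np

def haar_rotations(n, rng):
    """Sample n rotations from the Haar measure on SO(3) via QR."""
    out = []
    for _ in range(n):
        Q, Rr = np.linalg.qr(rng.standard_normal((3, 3)))
        Q = Q * np.sign(np.diag(Rr))        # fix the QR sign ambiguity
        if np.linalg.det(Q) < 0:
            Q[:, 2] = -Q[:, 2]              # push O(3) samples into SO(3)
        out.append(Q)
    return out

def common_lines_matrix(R, p, rng):
    """2n x 2n matrix C; each common line is kept with probability p and
    otherwise replaced by uniformly random directions."""
    n = len(R)
    C = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for j in range(i + 1, n):
            v = np.cross(R[i][:, 2], R[j][:, 2])
            v /= np.linalg.norm(v)
            cij, cji = (R[i].T @ v)[:2], (R[j].T @ v)[:2]
            if rng.random() > p:            # misdetection
                a, b = rng.uniform(0, 2 * np.pi, 2)
                cij = np.array([np.cos(a), np.sin(a)])
                cji = np.array([np.cos(b), np.sin(b)])
            C[2*i:2*i+2, 2*j:2*j+2] = np.outer(cij, cji)
            C[2*j:2*j+2, 2*i:2*i+2] = np.outer(cji, cij)
    return C

rng = np.random.default_rng(0)
n = 400
R = haar_rotations(n, rng)
for p in (0.02, 0.05, 0.2, 1.0):
    top = np.linalg.eigvalsh(common_lines_matrix(R, p, rng))[-4:]
    print(p, np.round(top, 1), "semi-circle edge ~", round(np.sqrt(2 * n), 1))
# At p = 1 the top eigenvalue is near n/2, as predicted on slide 14; as p
# drops toward const/sqrt(n), the top three eigenvalues sink into the bulk.
```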

19 Numerical Spectra of C, n = 1000. [Figure: bar plots of the eigenvalues λ of C at SNR = 2^0, 2^{-1}, ..., 2^{-8}, panels (a)-(i).]

20 Estimation Error. True rotations: R_1, ..., R_n. Estimated rotations: R̂_1, ..., R̂_n. Registration: Ô = argmin_{O ∈ SO(3)} Σ_{i=1}^n ||R_i - O R̂_i||_F^2. Mean squared error: MSE = (1/n) Σ_{i=1}^n ||R_i - Ô R̂_i||_F^2.
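
The registration has a closed form via SVD (the Procrustes/Kabsch solution); a sketch:

```python
import numpy as np

def register_and_mse(R_true, R_est):
    """Find O = argmin_{O in SO(3)} sum_i ||R_i - O Rhat_i||_F^2 and the MSE.
    Minimizing is equivalent to maximizing Tr(O A) with A = sum_i Rhat_i R_i^T,
    solved by the SVD of A."""
    A = sum(Rh @ Rt.T for Rt, Rh in zip(R_true, R_est))
    U, _, Vt = np.linalg.svd(A)
    # The diagonal factor forces det(O) = +1 (a proper rotation).
    O = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    mse = np.mean([np.linalg.norm(Rt - O @ Rh, 'fro') ** 2
                   for Rt, Rh in zip(R_true, R_est)])
    return O, mse
```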

21 MSE for n = 1000. [Table: MSE as a function of the SNR and of the fraction p of correctly detected common lines.] The model fails at low SNR. Why? The Wigner model is too simplistic: one cannot have n^2 independent random variables coming from just n images. Rather, C_ij = K(I_i, I_j) is a random kernel matrix (Koltchinskii and Giné 2000, El Karoui 2010), and here the kernel is discontinuous (Cheng, S, 2013).

22 From Common Lines to Maximum Likelihood. The images contain more information than that expressed by the common lines, and common-line based algorithms can succeed only at high SNR. (Quasi) maximum likelihood: we would like to try all possible rotations R_1, ..., R_n and choose the combination for which the agreement with the common lines observed in the images is maximal. This is computationally intractable: the search space is exponentially large.

23 Maximum Likelihood Solution using SDP. Main idea: lift SO(3) to Sym(S^2), the permutations of a discretized sphere. Suppose ω_1, ω_2, ..., ω_L ∈ S^2 are evenly distributed points over the sphere (e.g., a spherical t-design). Each g ∈ SO(3) then corresponds to a permutation π ∈ S_L, via assignment/Procrustes matching of the rotated points (in practice no explicit construction is needed).
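
An illustrative lifting (my construction; as the slide notes, it is not needed in practice): map g to the permutation sending each sample point to the sample point nearest to its rotated image.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lift_to_permutation(g, omega):
    """Lift a rotation g in SO(3) to a permutation of the L sample points
    omega (an L x 3 array of unit vectors), by optimal assignment."""
    rotated = omega @ g.T                      # g applied to each point
    d2 = ((rotated[:, None, :] - omega[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(d2)     # Hungarian matching
    return cols                                # pi with g(omega_i) ~ omega_pi(i)
```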

24 Convex Relaxation of Permutations arising from Rotations. The convex hull of the permutation matrices is the set of doubly stochastic matrices (the Birkhoff-von Neumann polytope): Π ∈ R^{L x L}, Π ≥ 0, 1^T Π = 1^T, Π 1 = 1. Rotation by an element of SO(3) should map nearby points to nearby points. More precisely, SO(3) preserves inner products: with Ω the L x L Gram matrix Ω_ij = ⟨ω_i, ω_j⟩, we have Ω_ij ≈ ⟨ω_π(i), ω_π(j)⟩ = Ω_{π(i)π(j)} up to ε, hence Π Ω Π^T ≈ Ω, i.e., Π Ω ≈ Ω Π (up to ε). Axis of rotation (Euler's rotation theorem): Tr(Π) ≈ 2 (up to ε). Notice: we are discretizing S^2, not SO(3), a substantial gain in computational complexity.

25 Convex Relaxation of Cycle Consistency: SDP. Let G be a block matrix of size n x n with blocks G_ij ∈ R^{L x L}. We want G_ij = Π_i Π_j^T; such a G is PSD with G_ii = I_{L x L} and rank(G) = L (the rank constraint is dropped). Convex relaxation of the search space (putting it all together): find G ∈ R^{nL x nL} with
G ⪰ 0, G_ii = I_{L x L} (cycle consistency);
G_ij ≥ 0, G_ij 1 = 1, 1^T G_ij = 1^T (doubly stochastic);
G_ij Ω ≈ Ω G_ij and Tr(G_ij) ≈ 2, up to ε (SO(3)).
Related work: Charikar, Makarychev, Makarychev, Near-Optimal Algorithms for Unique Games (STOC 2006).
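
A sketch of this constraint set in cvxpy (my encoding, feasible only for small n and L since the variable has (nL)^2 entries). The matrices f_ij encoding the linearized log-likelihood of the next slide are assumed given:

```python
import numpy as np
import cvxpy as cp

def ml_sdp(n, L, Omega, f, eps):
    """Omega: L x L Gram matrix of the sphere points omega_i.
    f: dict (i, j) -> L x L matrix; objective = sum_{i != j} Tr(f_ij^T G_ij)."""
    G = cp.Variable((n * L, n * L), PSD=True)
    blk = lambda i, j: G[i*L:(i+1)*L, j*L:(j+1)*L]
    one = np.ones((L, 1))
    cons = [blk(i, i) == np.eye(L) for i in range(n)]   # cycle consistency
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            B = blk(i, j)
            cons += [B >= 0, B @ one == one, one.T @ B == one.T,  # doubly stochastic
                     cp.abs(B @ Omega - Omega @ B) <= eps,        # commutation with Omega
                     cp.abs(cp.trace(B) - 2) <= eps]              # rotation axis
    obj = cp.Maximize(sum(cp.trace(f[(i, j)].T @ blk(i, j))
                          for i in range(n) for j in range(n) if i != j))
    return cp.Problem(obj, cons)
```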

26 Maximum Likelihood: Linear Objective Function. The common line depends on R_i R_j^T, and the log-likelihood function is of the form Σ_{i≠j} f_ij(R_i R_j^T). This is nonlinear in R_i R_j^T, but linear in G. Proof by picture (next slide).

27 Linear Objective: Proof by Picture. [Figure only.]

28 Tightness of the SDP. [Figure: fraction of trials (among 100) on which rank recovery was observed, as a function of the noise level σ and the probability p. Left: Generalized Orthogonal Procrustes. Right: multi-reference alignment (registration).] The solution of the SDP has the desired rank up to a certain level of noise (w.h.p.). In other words, even though the search space is exponentially large, we typically find the MLE in polynomial time. This is a viable alternative to heuristic methods such as EM and alternating minimization, and the SDP gives a certificate whenever it finds the MLE.

29 Are we there yet? The SDP approach can be used in a variety of problems where the objective function is a sum of pairwise interactions. We need a better theoretical understanding of the phase-transition behavior and of the conditions for tightness of the SDP. We also need better computational tools for solving large scale SDPs (we use ADMM).

30 References.
A. Singer, Y. Shkolnisky, Three-Dimensional Structure Determination from Common Lines in Cryo-EM by Eigenvectors and Semidefinite Programming, SIAM Journal on Imaging Sciences, 4(2) (2011).
R. Hadani, A. Singer, Representation Theoretic Patterns in Three Dimensional Cryo-Electron Microscopy I: The Intrinsic Reconstitution Algorithm, Annals of Mathematics, 174(2), pp. 1219-1241 (2011).
L. Wang, A. Singer, Z. Wen, Orientation Determination of Cryo-EM Images Using Least Unsquared Deviations, SIAM Journal on Imaging Sciences, 6(4) (2013).
A. S. Bandeira, M. Charikar, A. Singer, A. Zhu, Multireference Alignment Using Semidefinite Programming, 5th Innovations in Theoretical Computer Science (ITCS 2014).
A. S. Bandeira, Y. Khoo, A. Singer, Open Problem: Tightness of Maximum Likelihood Semidefinite Relaxations, COLT 2014.
