Dimensionality reduction of SDPs through sketching


Technische Universität München. Workshop on "Probabilistic techniques and Quantum Information Theory", Institut Henri Poincaré. Joint work with Andreas Bluhm, arXiv:1707.09863.

Semidefinite Programs (SDPs)

Semidefinite programs are constrained optimization problems of the form:

maximize tr(AX)
subject to tr(B_i X) ≤ γ_i, i ∈ [m],
X ⪰ 0,

where A, B_1, ..., B_m ∈ M_D^sym are symmetric matrices and γ_1, ..., γ_m ∈ R. They can be seen as a generalization of linear programs and have many applications throughout QIT. They can be written in many equivalent forms; we will call this one the sketchable SDP. Any SDP can be brought into this form.
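A minimal concrete instance may help fix the notation. With a single constraint B_1 = I and γ_1 = 1, the sketchable SDP "maximize tr(AX) subject to tr(X) ≤ 1, X ⪰ 0" has the well-known closed-form value max(λ_max(A), 0), attained at X = vvᵀ for a top eigenvector v when λ_max(A) ≥ 0. The sketch below (illustrative; the matrix A is random) verifies this candidate numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small "sketchable SDP": maximize tr(A X) s.t. tr(X) <= 1, X >= 0 (PSD).
# For this instance (B_1 = I, gamma_1 = 1) the optimum is max(lambda_max(A), 0),
# attained at X = v v^T for a top eigenvector v when lambda_max(A) >= 0.
D = 6
M = rng.standard_normal((D, D))
A = (M + M.T) / 2                      # random symmetric objective matrix

eigvals, eigvecs = np.linalg.eigh(A)   # ascending eigenvalues
v = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
X = np.outer(v, v)                     # rank-one feasible candidate

# Feasibility: X is PSD and satisfies the trace constraint.
assert np.all(np.linalg.eigvalsh(X) >= -1e-12)
assert np.trace(X) <= 1 + 1e-12

value = np.trace(A @ X)
print(value, eigvals[-1])              # the candidate attains lambda_max(A)
```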

Semidefinite Programs (SDPs)

Good news: in most cases they can be solved in polynomial time! Using the ellipsoid method we can solve them in O(max{m, D²} D⁶ log(1/ζ)) time, where ζ is the error tolerance.

Bad news: the scaling is still prohibitive for high-dimensional problems, especially when it comes to memory. Try running an SDP with D ≈ 10³ on your laptop and you will already run out of memory.

We need techniques to solve larger problems, ideally using available solvers.
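A back-of-envelope estimate supports the memory claim. The variable X is a D × D matrix, so it lives in a D²-dimensional space; a method that maintains dense second-order information over that space (an assumption about the implementation, for illustration only) needs on the order of (D²)² floats:

```python
# Rough memory estimate for solving an SDP of dimension D directly:
# the vectorized variable has D^2 entries, and a method that keeps a dense
# matrix over that space (e.g. an ellipsoid or Hessian) needs (D^2)^2 floats.
# This is an illustrative assumption about the solver, not a precise count.
D = 10**3
bytes_per_float = 8

variable_entries = D**2                    # entries of X itself
ellipsoid_entries = (D**2) ** 2            # dense D^2 x D^2 matrix

gib = 2**30
print(variable_entries * bytes_per_float / gib)   # X alone: well under 1 GiB
print(ellipsoid_entries * bytes_per_float / gib)  # dense second-order data: thousands of GiB
```

So even at D = 10³, storing X is harmless, but the dense D² × D² object is far beyond laptop memory.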

Sketch of Idea

Apply a positive linear map Φ : M_D → M_d to the constraints such that tr(Φ(B_i)Φ(X*)) ≈ tr(B_i X*) holds with high probability. Here X* is a solution of the SDP.

Solve the SDP defined by the Φ(B_i) and show that its value is not far from the value of the original problem.

If d ≪ D and computing the Φ(B_i) is cheap, this gives a computational advantage.

SDPs cannot be sketched

Theorem (Not all SDPs can be sketched). Let Φ : M_{2D} → R^d be a random linear map such that, for all sketchable SDPs, there exists an algorithm which allows us to estimate the value of the SDP up to a constant factor τ, 1 ≤ τ < 3/2, given the sketch {Φ(A), Φ(B_1), ..., Φ(B_m)} with probability at least 9/10. Then d = Ω(D²).

Johnson-Lindenstrauss transforms

Definition (Johnson-Lindenstrauss transform). A random matrix S ∈ M_{d,D} is a Johnson-Lindenstrauss transform (JLT) with parameters (ε, δ, k) if, with probability at least 1 − δ, for any k-element subset V ⊂ R^D and all v, w ∈ V it holds that

|⟨Sv, Sw⟩ − ⟨v, w⟩| ≤ ε ‖v‖₂ ‖w‖₂.

Example: S = (1/√d) R ∈ M_{d,D}, where the entries of R are i.i.d. standard Gaussian random variables. If d = Ω(ε⁻² log(k/δ)), then S is an (ε, δ, k)-JLT.
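The Gaussian example above can be checked empirically: sketch a small set of vectors and compare inner products before and after. This is an illustrative sketch; the tolerance below is an empirical choice for this seed, not the theorem's constant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian Johnson-Lindenstrauss transform: S = R / sqrt(d), with R having
# i.i.d. standard normal entries. We check that <Sv, Sw> tracks <v, w>
# up to a small multiple of |v|_2 * |w|_2 for a set of k vectors.
D, d, k = 2000, 400, 10
S = rng.standard_normal((d, D)) / np.sqrt(d)
V = rng.standard_normal((k, D))

G = V @ V.T                     # exact Gram matrix of inner products
SV = V @ S.T                    # sketched vectors, one per row
G_sketch = SV @ SV.T            # inner products after sketching

norms = np.linalg.norm(V, axis=1)
distortion = np.max(np.abs(G_sketch - G) / np.outer(norms, norms))
print(distortion)               # small, on the order of sqrt(log(k)/d)
```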

Sketching the HS scalar product

Lemma (Sketching the Hilbert-Schmidt scalar product). Let B_1, ..., B_m ∈ M_D and let S ∈ M_{d,D} be an (ε, δ, k)-JLT with ε ≤ 1 and k such that

k ≥ Σ_{i=1}^m rank(B_i).

Then, with probability at least 1 − δ, for all i, j ∈ [m]:

|tr(S B_i S^T S B_j S^T) − tr(B_i B_j)| ≤ 3ε ‖B_i‖₁ ‖B_j‖₁.
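The lemma can also be probed numerically with low-rank symmetric matrices: the observed deviation, normalized by the product of trace norms, plays the role of 3ε. The tolerance asserted below is an empirical choice for this seed, not the lemma's constant.

```python
import numpy as np

rng = np.random.default_rng(2)

def trace_norm(B):
    # Sum of absolute eigenvalues; equals the trace norm for symmetric B.
    return np.abs(np.linalg.eigvalsh(B)).sum()

# Empirical check: for low-rank symmetric B_i and a Gaussian JLT S,
# tr(S B_i S^T S B_j S^T) approximates tr(B_i B_j) up to a small multiple
# of |B_i|_1 * |B_j|_1.
D, d, r, m = 500, 200, 3, 4
S = rng.standard_normal((d, D)) / np.sqrt(d)

Bs = []
for _ in range(m):
    U = rng.standard_normal((D, r))
    Bs.append(U @ U.T / r)       # random rank-r PSD "constraint" matrix

worst = 0.0
for Bi in Bs:
    for Bj in Bs:
        exact = np.trace(Bi @ Bj)
        sketched = np.trace(S @ Bi @ S.T @ S @ Bj @ S.T)
        worst = max(worst, abs(sketched - exact) / (trace_norm(Bi) * trace_norm(Bj)))
print(worst)   # the empirical counterpart of 3*eps in the bound
```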

Bad scaling

|tr(S B_i S^T S B_j S^T) − tr(B_i B_j)| ≤ 3ε ‖B_i‖₁ ‖B_j‖₁

Scaling with the trace norm ‖·‖₁ is undesirable; the usual JL guarantee scales with the Hilbert-Schmidt norm ‖·‖₂. The proof of the inequality is admittedly crude. Can we improve the inequality?

No Johnson-Lindenstrauss with positive maps

Theorem (No Johnson-Lindenstrauss with positive maps). Let Φ : M_D → M_d be a random positive map such that, with strictly positive probability, for all Y_1, ..., Y_{D+1} ∈ M_D and 0 < ε < 1/4 we have

|tr(Φ(Y_i)^T Φ(Y_j)) − tr(Y_i^T Y_j)| ≤ ε ‖Y_i‖₂ ‖Y_j‖₂.

Then d = Ω(D).

The Algorithm

Assumptions: uniform bounds on ‖A‖₁, ‖B_1‖₁, ..., ‖B_m‖₁ and ‖X*‖₁, where X* is an optimal point of the SDP, plus standard regularity assumptions on the SDP.

Consider the sketchable SDP of dimension D:

maximize tr(AX)
subject to tr(B_i X) ≤ γ_i, i ∈ [m],
X ⪰ 0.

The Algorithm

Now pick an (ε, δ, k) JL-transform S ∈ M_{d,D}, where

k ≥ rank(X*) + rank(A) + Σ_{i=1}^m rank(B_i),

and consider the SDP of dimension d:

maximize tr(S A S^T Y)
subject to tr(S B_i S^T Y) ≤ γ_i, i ∈ [m],
Y ⪰ 0.

The Algorithm

Relax the constraints:

maximize tr(S A S^T Y)
subject to tr(S B_i S^T Y) ≤ γ_i + 3ε ‖B_i‖₁ ‖X*‖₁, i ∈ [m],
Y ⪰ 0.

Call this SDP the sketched SDP, and solve it!
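Assembling the sketched SDP data is a few lines of linear algebra. The helper below is hypothetical (names `sketch_sdp`, `trace_norm` are not from the talk); any off-the-shelf SDP solver could then be run on the d-dimensional output instead of the D-dimensional input:

```python
import numpy as np

rng = np.random.default_rng(3)

def trace_norm(B):
    # Sum of absolute eigenvalues; equals the trace norm for symmetric B.
    return np.abs(np.linalg.eigvalsh(B)).sum()

def sketch_sdp(A, Bs, gammas, d, eps, eta, rng):
    """Build the data (objective, constraints, relaxed bounds) of the sketched SDP.

    eta is the assumed bound on the trace norm of an optimal point X*.
    """
    D = A.shape[0]
    S = rng.standard_normal((d, D)) / np.sqrt(d)   # Gaussian JLT
    A_sk = S @ A @ S.T
    Bs_sk = [S @ B @ S.T for B in Bs]
    # Relaxed right-hand sides: gamma_i + 3 * eps * |B_i|_1 * eta.
    gammas_sk = [g + 3 * eps * trace_norm(B) * eta for g, B in zip(gammas, Bs)]
    return A_sk, Bs_sk, gammas_sk, S

D, d, m = 300, 40, 5
M = rng.standard_normal((D, D))
A = (M + M.T) / 2
Bs = [np.diag(rng.uniform(0, 1, D)) for _ in range(m)]
gammas = [1.0] * m

A_sk, Bs_sk, gammas_sk, S = sketch_sdp(A, Bs, gammas, d, eps=0.1, eta=1.0, rng=rng)
print(A_sk.shape, len(Bs_sk))   # the problem data shrinks from 300x300 to 40x40
```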

The Algorithm

The bound on the HS scalar product gives that S X* S^T is a feasible point of the relaxed problem with probability 1 − δ, and we have

|tr(S A S^T S X* S^T) − tr(A X*)| ≤ 3ε ‖X*‖₁ ‖A‖₁.

But tr(A X*) = α is the value of the sketchable SDP! We therefore obtain

α_S + 3ε ‖X*‖₁ ‖A‖₁ ≥ α,

where α_S is the value of the sketched SDP.
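The feasibility step can be checked numerically for one constraint: Y = S X* S^T is PSD (since X ↦ S X S^T is a positive map), and tr(S B S^T Y) stays below the relaxed bound. Here eps = 0.2 is an empirical choice and the constraint is made tight on purpose:

```python
import numpy as np

rng = np.random.default_rng(4)

# Verify that Y = S X* S^T is feasible for the relaxed (sketched) constraint:
# Y is PSD, and tr(S B S^T Y) <= gamma + 3*eps*|B|_1*|X*|_1.
D, d, r = 400, 200, 2
S = rng.standard_normal((d, D)) / np.sqrt(d)

U = rng.standard_normal((D, r))
X_star = U @ U.T
X_star /= np.trace(X_star)            # PSD with trace norm exactly 1

W = rng.standard_normal((D, r))
B = W @ W.T
B /= np.trace(B)                      # PSD constraint matrix with |B|_1 = 1

gamma = np.trace(B @ X_star)          # make the original constraint tight
eps = 0.2
Y = S @ X_star @ S.T

assert np.all(np.linalg.eigvalsh(Y) >= -1e-10)    # Y is PSD
lhs = np.trace(S @ B @ S.T @ Y)
assert lhs <= gamma + 3 * eps * 1.0 * 1.0         # relaxed constraint holds
print(lhs, gamma)
```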

Upper bound through sketch

Theorem. Let A, B_1, ..., B_m ∈ M_D^sym, η, γ_1, ..., γ_m ∈ R and ε > 0. Denote by α the value of the sketchable SDP and assume it is attained at an optimal point X* which satisfies tr(X*) ≤ η. Moreover, let S ∈ M_{d,D} be an (ε, δ, k)-JLT with

k ≥ rank(X*) + rank(A) + Σ_{i=1}^m rank(B_i).

Let α_S be the value of the sketched SDP defined by A, the B_i and S. Then

α_S + 3ε η ‖A‖₁ ≥ α

with probability at least 1 − δ.

Lower Bound

Can it be the case that α_S ≫ α? That depends on how stable your SDP is!

Lower Bound

Let Y* be an optimal point of the sketched SDP, that is, a solution of:

maximize tr(S A S^T Y)
subject to tr(S B_i S^T Y) ≤ γ_i + 3ε ‖B_i‖₁ ‖X*‖₁, i ∈ [m],
Y ⪰ 0.

By the cyclicity of the trace, S^T Y* S is a feasible point of

maximize tr(AX)
subject to tr(B_i X) ≤ γ_i + 3ε ‖B_i‖₁ ‖X*‖₁, i ∈ [m],
X ⪰ 0

with value α_S. This is just a perturbed version of the original SDP!

Lower Bound for positive γ_i

Theorem (Lower bound in terms of α_S). For a sketchable SDP with γ_i = 1 and κ = max_{i ∈ [m]} ‖B_i‖₁, we have that

α_S / (1 + ν) ≤ α,

where ν = 3ε η κ. Moreover, denoting by X*_S an optimal point of the sketched SDP, (1/(1+ν)) S^T X*_S S is a feasible point of the sketchable SDP that attains this lower bound.

Summary

Theorem. For a sketchable SDP with γ_i = 1 and κ = max_{i ∈ [m]} ‖B_i‖₁, we have

α_S / (1 + ν) ≤ α ≤ α_S + 3ε η ‖A‖₁,

where ν = 3ε η κ.
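The two-sided estimate is easy to evaluate once the sketched SDP has been solved. The helper below (a hypothetical wrapper around the theorem's formula, not code from the talk) turns the sketched value into an interval that contains α with high probability:

```python
def value_interval(alpha_S, eps, eta, kappa, A_trace_norm):
    """Interval [alpha_S/(1+nu), alpha_S + 3*eps*eta*|A|_1] containing alpha,
    with nu = 3*eps*eta*kappa, per the summary theorem."""
    nu = 3 * eps * eta * kappa
    lower = alpha_S / (1 + nu)
    upper = alpha_S + 3 * eps * eta * A_trace_norm
    return lower, upper

# Example: sketched value 10, accuracy eps = 0.01, eta = kappa = 1, |A|_1 = 5.
lo, hi = value_interval(alpha_S=10.0, eps=0.01, eta=1.0, kappa=1.0, A_trace_norm=5.0)
print(lo, hi)   # a narrow bracket around the true value alpha
```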

Complexity and Memory Considerations

Assuming ‖A‖₁, ‖B_1‖₁, ..., ‖B_m‖₁, ‖X*‖₁ = O(1) and ε, δ, ζ fixed, we obtain:

Theorem. Let the data A, B_1, ..., B_m ∈ M_D^sym of a sketchable SDP be given. Furthermore, let SDP(m, d) be the complexity of solving a sketchable SDP of dimension d with m constraints up to some given precision. Then O(D² m log(k) + SDP(m, log(k))) operations suffice to generate and solve the sketched SDP, where k ≤ (m + 2)D² is defined as before.

Complexity and Memory Considerations

Assuming ε, δ fixed, sketching gives a speedup as long as the complexity of solving the SDP directly is Ω(m D^(2+µ)) for some µ > 0. Moreover, only O(m ε⁻⁴ log(mk/δ)²) entries need to be stored to solve the sketched problem.
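For concreteness, the storage counts can be compared for sample parameter values (illustrative choices; the sketched count is taken literally from the O(·) expression, i.e. up to unspecified constants):

```python
import math

# Compare storage for the original constraint matrices (m * D^2 entries)
# with the O(m * eps^-4 * log(m*k/delta)^2) entries of the sketched problem.
D, m = 10**4, 100
eps, delta = 0.1, 0.01
k = (m + 2) * D**2          # the rank bound used for the JLT parameter

original_entries = m * D**2
sketched_entries = m * eps**-4 * math.log(m * k / delta)**2   # up to constants
print(original_entries, int(sketched_entries))   # sketched count is far smaller
```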

Uncertainty Relations

Given observables A, B ∈ M_D^sym, consider uncertainty relations of the form

tr(A² ρ) + tr(B² ρ) ≥ c

for all states ρ such that tr(Aρ) ∈ (a − ε, a + ε) and tr(Bρ) ∈ (b − ε, b + ε). Finding the optimal c can easily be cast as an SDP.

Uncertainty Relations

minimize tr((A² + B²) X)
subject to tr(AX) ∈ a ± ε,
tr(BX) ∈ b ± ε,
tr(X) = 1,
X ⪰ 0.

We can't handle tr(X) = 1 as a constraint, so we relax the problem and drop it:

minimize tr((A² + B²) X)
subject to tr(AX) ∈ a ± ε,
tr(BX) ∈ b ± ε,
X ⪰ 0.

If ‖A‖₁, ‖B‖₁ = O(1) and their nonzero spectrum is flat, we can show ‖X*‖₁ = O(1). Example: A, B of fixed rank with nonzero spectrum contained in some compact interval.
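The feasible set of the uncertainty-relation SDP can be explored by Monte-Carlo sampling: every state whose expectation values land in the allowed windows gives an upper bound on the optimal constant c. The observables below are toy choices for illustration, not examples from the talk:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample random pure states, keep those with tr(A rho) and tr(B rho) in the
# allowed windows, and record the smallest observed tr(A^2 rho) + tr(B^2 rho).
# Any feasible state gives an upper bound on the optimal c, so the sampled
# minimum over-estimates c.
D = 4
A = np.diag([1.0, -1.0, 0.0, 0.0])                 # toy observable
Bm = np.zeros((D, D)); Bm[0, 1] = Bm[1, 0] = 1.0   # toy off-diagonal observable
a, b, eps = 0.3, 0.3, 0.2

best = np.inf
for _ in range(20000):
    psi = rng.standard_normal(D) + 1j * rng.standard_normal(D)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())                # random pure state
    ea = np.real(np.trace(A @ rho))
    eb = np.real(np.trace(Bm @ rho))
    if abs(ea - a) < eps and abs(eb - b) < eps:    # both windows satisfied
        best = min(best, np.real(np.trace((A @ A + Bm @ Bm) @ rho)))
print(best)   # an upper bound on the optimal c for these windows
```

Solving the (sketched) SDP instead of sampling would give the exact constant, up to the error bounds of the previous sections.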

Numerical Results

Table: For each combination of the sketchable dimension D and the dimension of the sketch d, we generated 40 instances of the uncertainty relation SDP. Columns: D, d, Value, Error, L.B., M.R.T. Sketchable [s], M.R.T. Sketch [s]. Here M.R.T. stands for mean running time, L.B. for the lower bound obtained from the sketch, and Value for the optimal value of the sketchable SDP. (The numerical entries of the table were not preserved in this transcription.)

50 Thanks!

Dimensionality reduction of SDPs through sketching. Andreas Bluhm and Daniel Stilck França, Department of Mathematics, Technical University of Munich, 85748. arXiv:1707.09863v2 [math.OC], 15 Nov 2018.


More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs

More information

Towards stability and optimality in stochastic gradient descent

Towards stability and optimality in stochastic gradient descent Towards stability and optimality in stochastic gradient descent Panos Toulis, Dustin Tran and Edoardo M. Airoldi August 26, 2016 Discussion by Ikenna Odinaka Duke University Outline Introduction 1 Introduction

More information

The query register and working memory together form the accessible memory, denoted H A. Thus the state of the algorithm is described by a vector

The query register and working memory together form the accessible memory, denoted H A. Thus the state of the algorithm is described by a vector 1 Query model In the quantum query model we wish to compute some function f and we access the input through queries. The complexity of f is the number of queries needed to compute f on a worst-case input

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

Convex optimization problems. Optimization problem in standard form

Convex optimization problems. Optimization problem in standard form Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality

More information

Constrained optimization

Constrained optimization Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Exercises Chapter II.

Exercises Chapter II. Page 64 Exercises Chapter II. 5. Let A = (1, 2) and B = ( 2, 6). Sketch vectors of the form X = c 1 A + c 2 B for various values of c 1 and c 2. Which vectors in R 2 can be written in this manner? B y

More information

March 2002, December Introduction. We investigate the facial structure of the convex hull of the mixed integer knapsack set

March 2002, December Introduction. We investigate the facial structure of the convex hull of the mixed integer knapsack set ON THE FACETS OF THE MIXED INTEGER KNAPSACK POLYHEDRON ALPER ATAMTÜRK Abstract. We study the mixed integer knapsack polyhedron, that is, the convex hull of the mixed integer set defined by an arbitrary

More information

arzelier

arzelier COURSE ON LMI OPTIMIZATION WITH APPLICATIONS IN CONTROL PART II.1 LMIs IN SYSTEMS CONTROL STATE-SPACE METHODS STABILITY ANALYSIS Didier HENRION www.laas.fr/ henrion henrion@laas.fr Denis ARZELIER www.laas.fr/

More information

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology

More information

Branch-and-cut (and-price) for the chance constrained vehicle routing problem

Branch-and-cut (and-price) for the chance constrained vehicle routing problem Branch-and-cut (and-price) for the chance constrained vehicle routing problem Ricardo Fukasawa Department of Combinatorics & Optimization University of Waterloo May 25th, 2016 ColGen 2016 joint work with

More information

Signal Recovery from Permuted Observations

Signal Recovery from Permuted Observations EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,

More information

Applications of Linear Programming

Applications of Linear Programming Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal

More information

1. Consider the following polyhedron of an LP problem: 2x 1 x 2 + 5x 3 = 1 (1) 3x 2 + x 4 5 (2) 7x 1 4x 3 + x 4 4 (3) x 1, x 2, x 4 0

1. Consider the following polyhedron of an LP problem: 2x 1 x 2 + 5x 3 = 1 (1) 3x 2 + x 4 5 (2) 7x 1 4x 3 + x 4 4 (3) x 1, x 2, x 4 0 MA Linear Programming Tutorial 3 Solution. Consider the following polyhedron of an LP problem: x x + x 3 = ( 3x + x 4 ( 7x 4x 3 + x 4 4 (3 x, x, x 4 Identify all active constraints at each of the following

More information

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs 2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar

More information

Lecture 8: Semidefinite programs for fidelity and optimal measurements

Lecture 8: Semidefinite programs for fidelity and optimal measurements CS 766/QIC 80 Theory of Quantum Information (Fall 0) Lecture 8: Semidefinite programs for fidelity and optimal measurements This lecture is devoted to two examples of semidefinite programs: one is for

More information

Handout 8: Dealing with Data Uncertainty

Handout 8: Dealing with Data Uncertainty MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57

More information

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ.

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ. Sparsity Matters Robert J. Vanderbei 2017 September 20 http://www.princeton.edu/ rvdb IDA: Center for Communications Research Princeton NJ The simplex method is 200 times faster... The simplex method is

More information

A Quantum Interior Point Method for LPs and SDPs

A Quantum Interior Point Method for LPs and SDPs A Quantum Interior Point Method for LPs and SDPs Iordanis Kerenidis 1 Anupam Prakash 1 1 CNRS, IRIF, Université Paris Diderot, Paris, France. September 26, 2018 Semi Definite Programs A Semidefinite Program

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

SGD and Randomized projection algorithms for overdetermined linear systems

SGD and Randomized projection algorithms for overdetermined linear systems SGD and Randomized projection algorithms for overdetermined linear systems Deanna Needell Claremont McKenna College IPAM, Feb. 25, 2014 Includes joint work with Eldar, Ward, Tropp, Srebro-Ward Setup Setup

More information

1 Review of last lecture and introduction

1 Review of last lecture and introduction Semidefinite Programming Lecture 10 OR 637 Spring 2008 April 16, 2008 (Wednesday) Instructor: Michael Jeremy Todd Scribe: Yogeshwer (Yogi) Sharma 1 Review of last lecture and introduction Let us first

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

Notes taken by Graham Taylor. January 22, 2005

Notes taken by Graham Taylor. January 22, 2005 CSC4 - Linear Programming and Combinatorial Optimization Lecture : Different forms of LP. The algebraic objects behind LP. Basic Feasible Solutions Notes taken by Graham Taylor January, 5 Summary: We first

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

Handout 6: Some Applications of Conic Linear Programming

Handout 6: Some Applications of Conic Linear Programming ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and

More information

On the computation of Hermite-Humbert constants for real quadratic number fields

On the computation of Hermite-Humbert constants for real quadratic number fields Journal de Théorie des Nombres de Bordeaux 00 XXXX 000 000 On the computation of Hermite-Humbert constants for real quadratic number fields par Marcus WAGNER et Michael E POHST Abstract We present algorithms

More information

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis.

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

An Approximation Algorithm for Approximation Rank

An Approximation Algorithm for Approximation Rank An Approximation Algorithm for Approximation Rank Troy Lee Columbia University Adi Shraibman Weizmann Institute Conventions Identify a communication function f : X Y { 1, +1} with the associated X-by-Y

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information