A First-Order Framework for Solving Ellipsoidal Inclusion and Optimal Design Problems. Selin Damla Ahipaşaoğlu.


Singapore University of Technology and Design, November 23, 2012.

1. Statistical Motivation. Given m regression vectors {x_1, x_2, ..., x_m} ⊂ R^n which span R^n, we want to estimate the parameter θ ∈ R^n in a linear model such as y = X^T θ + ε, where X := [x_1, x_2, ..., x_m] ∈ R^{n×m} and ε ∼ N(0, σ² I) ∈ R^m. An estimator θ̂ is unbiased if E(θ̂) = θ.

2. Motivation. It is well known that the optimal unbiased estimator is θ̂ = (XX^T)^{-1} X y, and D := σ² (XX^T)^{-1}, the dispersion matrix, is a measure of the variance of the model. In classical estimation, X and y are known. In optimal design, we can choose X so that the dispersion is minimized with respect to some criterion, i.e., the accuracy of the model can be increased by designing the experiments carefully at the start.
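
The following sketch illustrates the estimator and dispersion matrix above on synthetic data; the data, dimensions, and variable names are invented for the example and are not from the talk.

```python
import numpy as np

# Synthetic illustration (made-up data) of the least-squares estimator
# theta_hat = (X X^T)^{-1} X y and the dispersion matrix D = sigma^2 (X X^T)^{-1}.
rng = np.random.default_rng(0)
n, m, sigma = 3, 50, 0.1
X = rng.standard_normal((n, m))           # columns x_1, ..., x_m span R^n
theta_true = np.array([1.0, -2.0, 0.5])
y = X.T @ theta_true + sigma * rng.standard_normal(m)

theta_hat = np.linalg.solve(X @ X.T, X @ y)   # unbiased estimator
D = sigma**2 * np.linalg.inv(X @ X.T)         # dispersion matrix
print(theta_hat, np.diag(D))
```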

3. The Optimal Design Problem. An experimental design is a set of vectors (support vectors) {x_1, ..., x_m} ⊂ X and non-zero integers n_1, ..., n_m such that Σ_{i=1}^m n_i = N, where n_i is the number of repetitions at regression point x_i. The corresponding dispersion matrix is
D = σ² (Σ_{i=1}^m n_i x_i x_i^T)^{-1} = (σ²/N) (Σ_{i=1}^m (n_i/N) x_i x_i^T)^{-1}.

4. The Optimal Design Problem. The optimal design problem in very general form is to find a distribution function which in some sense maximizes the information matrix M = Σ_{i=1}^m u_i x_i x_i^T ∝ D^{-1}. We call φ : S^n_+ → R an information function if it is positively homogeneous, superadditive, nonnegative, non-constant, and upper semi-continuous. Information functions provide real-valued criteria with respect to which we can evaluate designs.

5. Matrix means. Definition. Let λ(C) denote the eigenvalues of a matrix C. If C is a positive definite matrix, i.e., C ≻ 0, the matrix mean φ_p is defined as
φ_p(C) = λ_max(C) for p = ∞; ((1/n) Tr C^p)^{1/p} for p ≠ 0, ±∞; (det C)^{1/n} for p = 0; λ_min(C) for p = −∞.
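
A small sketch of this definition, assuming a symmetric positive definite input; the function name and the identity-matrix example are mine, not from the slides.

```python
import numpy as np

def matrix_mean(C, p):
    """Matrix mean phi_p of a symmetric positive definite matrix C.

    p = +inf -> largest eigenvalue, p = -inf -> smallest eigenvalue,
    p = 0    -> (det C)^(1/n),      otherwise ((1/n) Tr C^p)^(1/p).
    """
    lam = np.linalg.eigvalsh(C)   # eigenvalues, ascending order
    n = C.shape[0]
    if p == np.inf:
        return lam[-1]
    if p == -np.inf:
        return lam[0]
    if p == 0:
        return float(np.prod(lam)) ** (1.0 / n)
    return float(np.mean(lam ** p)) ** (1.0 / p)

# Every matrix mean of the identity equals 1.
print(matrix_mean(np.eye(4), 0), matrix_mean(np.eye(4), -1))
```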

6. The Optimal Design Problem. For p ≤ 1, the general optimal design problem can be written as follows:
(D_p)  max_u g_p(u) := ln φ_p(XUX^T)  subject to  e^T u = 1,  u ≥ 0,
where U := Diag(u). Each value of the parameter p gives rise to a different criterion with different applications.

7. Geometric Motivation. The set E(x̄, H) := {x ∈ R^n : (x − x̄)^T H (x − x̄) ≤ n} for x̄ ∈ R^n and H ≻ 0 is an ellipsoid in R^n with center x̄ and shape defined by H. We have vol(E(x̄, H)) = const(n)/√(det H), and minimizing the volume of E(x̄, H) is equivalent to minimizing −ln det H. This is equivalent to minimizing the matrix mean φ_0(H^{-1}). Each parameter corresponds to a different geometric feature.

8. The Fritz-John Theorem. Theorem (John, 1948). For any point set X = {x_1, ..., x_m} ⊂ R^n, there is an ellipsoid E which satisfies
x̄ + (1/n) E ⊆ conv(X) ⊆ x̄ + E;
furthermore, if X = −X,
(1/√n) E ⊆ conv(X) ⊆ E.

9. The Ellipsoidal Inclusion Problem. For q ≤ 1, consider the following problem:
(P_q)  min_H f_q(H) := −ln φ_q(H)  subject to  x_i^T H x_i ≤ n, i = 1, ..., m,  H ≻ 0.
H defines an ellipsoid which encloses the points x_1, ..., x_m. (P_q) is a geometric optimization problem.

10. Weak Duality. Lemma. Let p and q be conjugate numbers in (−∞, 1]. Then we have f_q(H) ≥ g_p(u) for any H and u feasible in (P_q) and (D_p), respectively.
f_q(H) − g_p(u) = −ln φ_q(H) − ln φ_p(XUX^T) = −ln(φ_q(H) φ_p(XUX^T)) ≥ −ln((1/n) H • XUX^T) ≥ −ln 1 = 0.

11. Strong Duality and Optimality Conditions. Let p and q be conjugate numbers in (−∞, 1]. Then (P_q) and (D_p) are dual problems. Let H* and u* be optimal solutions for (P_q) and (D_p), respectively; then we must have
1. H* = (n / Tr((XU*X^T)^p)) (XU*X^T)^{p−1},
2. if u*_i > 0, then x_i^T H* x_i = n.

12. Approximate Solutions. Let ω_i(u) := x_i^T (XUX^T)^{p−1} x_i. Definition. Given a positive ε, we call a dual feasible point u
1. an ε-primal feasible solution if ω_i(u) ≤ u^T ω(u)(1 + ε) for all i,
2. and say that it is an ε-approximate optimal solution if moreover ω_i(u) ≥ u^T ω(u)(1 − ε) whenever u_i > 0.
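
A minimal sketch of these two tests, assuming X has full row rank and u lies on the unit simplex; the function names and the numerical tolerance on the support test are my own choices.

```python
import numpy as np

def omega(X, u, p=0):
    """omega_i(u) = x_i^T (X U X^T)^(p-1) x_i for all i (p = 0 is the MVEE/D case)."""
    M = (X * u) @ X.T                    # X U X^T with U = Diag(u)
    lam, V = np.linalg.eigh(M)           # symmetric positive definite
    Mpow = (V * lam ** (p - 1)) @ V.T    # matrix power M^(p-1)
    return np.einsum('ij,jk,ki->i', X.T, Mpow, X)

def check_quality(X, u, eps, p=0):
    """Return (eps-primal feasible?, eps-approximate optimal?) for a dual feasible u."""
    w = omega(X, u, p)
    bar = u @ w                          # u^T omega(u)
    primal = bool(np.all(w <= (1.0 + eps) * bar))
    approx = primal and bool(np.all(w[u > 1e-12] >= (1.0 - eps) * bar))
    return primal, approx
```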

13. Quality of Approximate Solutions. Lemma. Let p and q be a pair of conjugate numbers in (−∞, 1]. Given a dual feasible solution u which is ε-primal feasible,
1. H = (n / ((1 + ε) Tr((XUX^T)^p))) (XUX^T)^{p−1} is feasible in (P_q),
2. 0 ≤ g_p* − g_p(u) ≤ ln(1 + ε), where g_p* is the optimal objective function value of (D_p).

14. Initial Solutions with Provable Quality. Lemma. û = (1/m)(1, 1, ..., 1) is an (m − 1)-primal feasible solution for (D_p) for p < 1. Lemma. If u^0 is a δ-primal feasible solution for (D_0), then it is also an (n + nδ − 1)-primal feasible solution for (D_p) for p < 1. There is an O(n log n) algorithm by Kumar-Yıldırım that produces a 1-primal feasible solution for the (D_0) problem.

15. A First-Order Framework. Note that the objective function g_p of (D_p) is a concave function with gradient ω(u) := ∇g_p(u) = (x_i^T (XUX^T)^{p−1} x_i)_{i=1}^m. Consider the following update:
u_+ := (1 − τ)u + τ e_j, where
- j := arg max_i ω_i(u) and τ > 0, or
- j := arg min_i ω_i(u) and τ < 0 (such that u_+ ≥ 0).
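
A sketch of one such update, assuming u lies on the unit simplex and ω(u) has already been computed; the sign of τ selects between the two choices of j listed above, and the cap that keeps u_+ nonnegative is made explicit. Names and tolerances are mine.

```python
import numpy as np

def frank_wolfe_step(u, w, tau):
    """One update u_+ = (1 - tau) u + tau e_j of the first-order framework.

    tau > 0 pairs with j = argmax_i omega_i(u) (increase step);
    tau < 0 pairs with j = argmin over the support of u (away/decrease step),
    with tau capped so that u_+ stays nonnegative.
    """
    if tau >= 0:
        j = int(np.argmax(w))
    else:
        support = np.where(u > 1e-12)[0]
        j = int(support[np.argmin(w[support])])
        tau = max(tau, -u[j] / max(1.0 - u[j], 1e-16))   # keep u_+ >= 0
    u_plus = (1.0 - tau) * u
    u_plus[j] += tau
    return u_plus, j
```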

16. A First-Order Framework. (Illustrative figures of the update steps; images not recovered in the transcription.)

17. T-criterion. When p = 1 and q = −∞, the design problem (D) and its dual (P) become:
(D)  max_u ln Tr(XUX^T)  subject to  e^T u = 1,  u ≥ 0;
(P)  min_H −ln λ_n(H)  subject to  x_i^T H x_i ≤ n for all i,  H ≻ 0,
where λ_1(H) ≥ λ_2(H) ≥ ... ≥ λ_n(H) are the eigenvalues of H. (P) corresponds to the Minimum Enclosing Ball problem, which is trivial when centered at the origin.

18. A-criterion. When p = −1 and q = 1/2, the design problem (D) and its dual (P) become:
(D)  max_u −ln Tr((XUX^T)^{-1})  subject to  e^T u = 1,  u ≥ 0;
(P)  min_H −2 ln Tr(H^{1/2})  subject to  x_i^T H x_i ≤ n for all i,  H ≻ 0.
(P) corresponds to the problem of finding an enclosing ellipsoid which has the greatest sum of the inverses of the semi-axes. (D) generates a design with the least average dispersion.

19. D-criterion. When p = 0 and q = 0, the design problem (D) and its dual (P) become:
(D)  max_u ln det(XUX^T)  subject to  e^T u = 1,  u ≥ 0;
(P)  min_H −ln det H  subject to  x_i^T H x_i ≤ n for all i,  H ≻ 0.
(P) corresponds to the Minimum-Volume Enclosing Ellipsoid problem. The MVEE is also the Fritz-John ellipsoid! (D) generates a design with the least maximum dispersion.

20. D-Criterion. When p = 0, the problem is the well-known MVEE problem, with gradient ω(u) := ∇g(u) = (x_i^T (XUX^T)^{-1} x_i)_{i=1}^m. Consider the following update: u_+ := (1 − τ)u + τ e_i; then it is easy to update ω(u) and g(u), as in
det(XU_+X^T) = (1 − τ)^{n−1} [1 − τ + τ ω_i(u)] det(XUX^T),
and the optimal stepsize is (Khachiyan (1996))
τ = (ω_i(u)/n − 1) / (ω_i(u) − 1).
21. First-Order Framework. Khachiyan (1996) developed and analyzed the following algorithm:
1. Start with u = (1/m)e and calculate ω(u).
2. Check for ε-primal feasibility.
3. Let i := arg max_j ω_j(u) and calculate the best step size τ > 0.
4. Update u to u_+ with τ > 0.
5. Update ω and go to step 2.
Each iteration takes O(mn) operations. The total number of iterations is N(ε) = O(n(ε^{−1} + log n + log log m)). This algorithm was also proposed by Fedorov (1972) and is very similar to that of Wynn (1970). It is a special case of the Frank-Wolfe (1956) algorithm on the dual problem.
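
The sketch below implements this loop for the D-criterion (p = 0), using the ε-primal-feasibility test (which for p = 0 reduces to ω_i(u) ≤ n(1 + ε), since u^T ω(u) = n) and Khachiyan's step size from the previous slide. For readability it recomputes XUX^T from scratch instead of using the rank-one updates, so it is a reference sketch, not the O(mn)-per-iteration version; function and variable names are my own.

```python
import numpy as np

def khachiyan_weights(X, eps=1e-3, max_iter=100000):
    """Frank-Wolfe / Fedorov-Wynn iteration for the D-optimal design dual
    (the MVEE of the columns of X, centered at the origin)."""
    n, m = X.shape
    u = np.full(m, 1.0 / m)                       # step 1: uniform start
    for _ in range(max_iter):
        M = (X * u) @ X.T                         # X U X^T
        w = np.einsum('ij,jk,ki->i', X.T, np.linalg.inv(M), X)   # omega(u)
        i = int(np.argmax(w))
        if w[i] <= n * (1.0 + eps):               # step 2: eps-primal feasible
            break
        tau = (w[i] / n - 1.0) / (w[i] - 1.0)     # steps 3-4: Khachiyan's step
        u *= 1.0 - tau
        u[i] += tau
    # Feasible enclosing-ellipsoid shape matrix, cf. the quality lemma above.
    H = np.linalg.inv((X * u) @ X.T) / (1.0 + eps)
    return u, H
```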

22. First-Order Framework. Kumar and Yıldırım (2005) proposed an initialization scheme which improved the complexity for m ≫ n. Finally, Todd and Yıldırım (2007) modified the algorithm by also considering i := arg min_{j : u_j > 0} ω_j(u) and possibly updating with τ < 0. They seek ε-approximate optimality. This version was also proposed by Atwood (1973) and coincides with the Frank-Wolfe algorithm with Wolfe's away steps (1970). N(ε) = O(n(ε^{−1} + log n)). This algorithm guarantees the construction of a small core set.

23. First-Order Framework. In Ahipaşaoğlu et al. (2008), we showed that (for data-dependent constants M and Q) N(ε) = O(Q + M log(ε^{−1})). A similar result is proven by Wolfe (1970) and Guélat and Marcotte (1986) when g is strongly and boundedly concave, but this assumption does not hold for (D). We work with a perturbation of (P) and use Robinson's second-order constraint qualification. For the general concave problem over the unit simplex, we prove local linear convergence if g is twice differentiable and there exists an optimal solution of (D) which satisfies the second-order sufficient condition.

24. First-Order Framework. For any ε-approximate solution u, no point x_i such that ω_i(u) < n(1 + ε/2 − √(ε(4 + ε − 4/n))/2) can be a support point (Harman-Pronzato (2005)). We incorporate this elimination technique and active-set strategies into the Todd-Yıldırım algorithm; the result is very fast compared to the plain Todd-Yıldırım algorithm. It inspired a similar algorithm for the related Minimum Enclosing Ball problem which decreases the run time by 90% (Ahipaşaoğlu and Yıldırım, 2008).
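
A sketch of the elimination test, assuming the D-criterion (p = 0) and a current iterate u; the threshold is my reconstruction of the Harman-Pronzato bound quoted above, and the function name is invented.

```python
import numpy as np

def harman_pronzato_keep(X, u, eps):
    """Indices of points that may still be support points of the D-optimal design;
    points below the threshold can be dropped from further iterations."""
    n = X.shape[0]
    Minv = np.linalg.inv((X * u) @ X.T)
    w = np.einsum('ij,jk,ki->i', X.T, Minv, X)        # omega(u)
    threshold = n * (1.0 + eps / 2.0 - np.sqrt(eps * (4.0 + eps - 4.0 / n)) / 2.0)
    return np.where(w >= threshold)[0]
```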

25. Computational Study. Table: Average running time of different versions of the Todd-Yıldırım algorithm on exponentially distributed data sets (columns: n, m, FO, FO+ACT, FO+ELIM, FO+ELIM+ACT; numerical entries not recovered).

26. Computational Study. (Figures not recovered in the transcription.)

27. Minimum Area Enclosing Ellipsoidal Cylinders. Given m points {x_1, x_2, ..., x_m} ⊂ R^n which span R^n and k ≤ n, the Minimum Area Enclosing Ellipsoidal Cylinder (MAEC) problem finds an ellipsoidal cylinder which is centered at the origin, covers all points, and has minimum-area intersection with Π := {[y; z] ∈ R^k × R^{n−k} : z = 0}.

27. Geometry. The set C(E, H) := {[y; z] ∈ R^n : (y + Ez)^T H (y + Ez) ≤ k} for E ∈ R^{k×(n−k)} and H ≻ 0 is a cylinder in R^n defined by shape matrix H and axis direction matrix E. Note that C(E, H) ∩ Π is an ellipsoid in R^k with vol(C(E, H) ∩ Π) = const(k)/√(det H), and minimizing the volume of C(E, H) ∩ Π is equivalent to minimizing −ln det H.

28. MAEC Formulation. The MAEC problem can be formulated as follows:
min_{E, H}  −ln det H  subject to  (y_i + E z_i)^T H (y_i + E z_i) ≤ k,  i = 1, ..., m,
or equivalently
(P)  min_H  f(H) := −ln det H_YY  subject to  x_i^T H x_i ≤ k,  i = 1, ..., m,  H ⪰ 0,
where H = (H_YY, H_YZ; H_YZ^T, H_ZZ) is partitioned conformally with x = [y; z].

29. The D_k-optimal Design Problem. The dual problem can be stated as
(D)  max_{u, K}  g(u, K) := ln det K  subject to  XUX^T − (K, 0; 0, 0) ⪰ 0,  e^T u = 1,  u ≥ 0.
(D) is the statistical problem of finding a D_k-optimal design measure on the columns of X that maximizes the determinant of a Schur complement in the Fisher information matrix, which is related to estimating the first k parameters θ_1, ..., θ_k in the linear model ỹ ≈ X^T θ.

30. Duality. Lemma. For any H feasible for (P) and (u, K) feasible for (D), we have g(u, K) ≤ f(H). Furthermore, optimal solutions Ĥ, û, and K̂ exist and satisfy the following necessary and sufficient conditions:
(a) Ĥ • (XÛX^T − (K̂, 0; 0, 0)) = 0,
(b) û_i > 0 only if x_i^T Ĥ x_i = (y_i + Ê z_i)^T K̂^{-1} (y_i + Ê z_i) = k,
(c) Ĥ_YY = K̂^{-1}.

31. Optimality Conditions. We have strong duality if
(a) H • (XUX^T − (K, 0; 0, 0)) = 0,
(b) u_i > 0 only if x_i^T H x_i = (y_i + E z_i)^T K^{-1} (y_i + E z_i) = k,
(c) H_YY = K^{-1}.
For optimal (u, E, K), condition (a) implies E(ZUZ^T) = −(YUZ^T) and K = YUY^T − E(ZUZ^T)E^T.
Definition. We say (u, E, K) is an ε-primal feasible solution if
(a) (y_i + E z_i)^T K^{-1} (y_i + E z_i) ≤ (1 + ε)k,  i = 1, ..., m.
Furthermore, it is an ε-approximate optimal solution if also
(b) u_i > 0 implies (y_i + E z_i)^T K^{-1} (y_i + E z_i) ≥ (1 − ε)k.

32. A First-Order Algorithm. Using u_+ := (1 − τ)u + τ e_i and rank-one update formulae leads to an algorithm:
1. Find a feasible u, E, and K and calculate ω^k(u), where ∂g(u)/∂u_i = ω^k_i(u) := (y_i + E z_i)^T K^{-1} (y_i + E z_i).
2. Check for ε-approximate optimality.
3. Choose i that improves the objective function or the optimality conditions.
4. Update u to u_+, where the step size τ is a solution of a quadratic equation.
5. Update E, K, and ω^k and go to step 2.
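
A sketch of how E, K, and ω^k(u) in step 1 can be recovered from the current weights, using the relations E(ZUZ^T) = −(YUZ^T) and K = YUY^T − E(ZUZ^T)E^T from the optimality-condition slide; it assumes ZUZ^T is nonsingular and recomputes everything instead of using the rank-one updates mentioned above. Names are mine.

```python
import numpy as np

def maec_quantities(Y, Z, u):
    """Return E, K, and omega^k_i(u) = (y_i + E z_i)^T K^{-1} (y_i + E z_i)."""
    YUZ = (Y * u) @ Z.T
    ZUZ = (Z * u) @ Z.T                       # assumed nonsingular
    E = -np.linalg.solve(ZUZ.T, YUZ.T).T      # solves E (Z U Z^T) = -(Y U Z^T)
    K = (Y * u) @ Y.T - E @ ZUZ @ E.T         # Schur complement of the Z-block
    R = Y + E @ Z                             # columns y_i + E z_i
    w = np.einsum('ij,jk,ki->i', R.T, np.linalg.inv(K), R)
    return E, K, w
```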

33. Why is the MAEC harder than the MVEE? Example: Let X = [Y; Z] (a small point set with n = 2, m = 3, and a single z-coordinate; the entries did not survive transcription), k = 1, and u = [0, 0, 1]. We have XUX^T = (1, 0; 0, 0), and E(ZUZ^T) = −(YUZ^T) becomes E · 0 = 0, so the optimality condition does not determine E. For E ≤ 1 this cylinder contains X, but for E > 1 it does not. (Illustrative figures for 0 < E < 1, E = 1, and E > 1 not recovered.)

34. Why is the MAEC harder than the MVEE? For a given iterate u, when ZUZ^T is not positive definite, it is hard to choose a matrix E which satisfies E(ZUZ^T) = −(YUZ^T). This causes computational and theoretical complications. We modify the algorithm so that ZUZ^T never becomes singular until the last iteration. Unlike the MVEE case, choosing the right pivot is not trivial.

35. Complexity Analysis. Assuming ZUZ^T ≻ 0, ω(u) < C_1, and ω^k(u) < C_2, we have: O(k(ln k + k ln ln m + ε^{−1}) + m) iterations. Each iteration takes O(nm) operations. O(Q̄ + M̄ log(ε^{−1})) iterations under technical assumptions. Away steps are necessary for rapid convergence.

36. Computational Study. Table: Geometric mean of running time and average number of iterations required by the algorithm (with away steps) to obtain an approximate solution for the Sun-Freund data sets (columns: n, k, m, iterations, time in seconds; numerical entries not recovered).

37. Computational Study. MAEC → MVEE as k → n: Can we find a good warm-start strategy? Can we prove any non-trivial core-set results? Can we identify and eliminate non-support points?

38. State of the Art. First-order algorithms are very efficient in solving optimal design and ellipsoidal inclusion problems! (Summary table comparing the T-optimal, D-optimal, and A-optimal criteria, the uncentered MEB, and the MVEE with respect to global convergence, local convergence, warm-start, elimination techniques, and cylindrical inclusion; entries not recovered.)

References
S. Damla Ahipaşaoğlu, Peng Sun, and Michael J. Todd. Linear Convergence of a Modified Frank-Wolfe Algorithm for Computing Minimum Volume Enclosing Ellipsoids. October 5, 2006.
S. Damla Ahipaşaoğlu and Michael J. Todd. A Modified Frank-Wolfe Algorithm for Computing Minimum-Area Enclosing Ellipsoidal Cylinders: Theory and Algorithms. September 11, 2009.
