Project Discussions: SNL/ADMM, MDP/Randomization, Quadratic Regularization, and Online Linear Programming


1 Project Discussions: SNL/ADMM, MDP/Randomization, Quadratic Regularization, and Online Linear Programming
Yinyu Ye
Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, U.S.A.
http://www.stanford.edu/~yyye

2 Project-I: SNL
$\|x_i - x_j\|^2 = d_{ij}^2,\ (i,j) \in N_x,\ i < j,\qquad \|a_k - x_j\|^2 = \hat d_{kj}^2,\ (k,j) \in N_a.$  (1)
SDP relaxation for solving (1): find a symmetric matrix $Z \in S^{d+n}$ such that
$\min\ 0 \bullet Z$
s.t. $Z_{1:d,\,1:d} = I$,
$(0; e_i - e_j)(0; e_i - e_j)^T \bullet Z = d_{ij}^2,\ (i,j) \in N_x,\ i < j$,
$(a_k; -e_j)(a_k; -e_j)^T \bullet Z = \hat d_{kj}^2,\ (k,j) \in N_a$,
$Z \succeq 0.$  (2)
Also a simple nonlinear least squares (NLS) approach to solve (1):
$\min \sum_{(i,j)\in N_x} \big( \|x_i - x_j\|^2 - d_{ij}^2 \big)^2 + \sum_{(k,j)\in N_a} \big( \|a_k - x_j\|^2 - \hat d_{kj}^2 \big)^2.$  (3)

3 P-I: Questions
Run some randomly generated problems (in 1D, 2D and 3D) with a few (2, 3 and 4) anchors and tens of sensors to compare the SOCP, SDP, and NLS approaches.
Use the SDP solution $X = [x_1, x_2, \ldots, x_n]$ obtained from $Z$ as the initial solution for model (3) and apply the Steepest Descent Method for a number of steps. How does the final solution come out after steepest descent?
Apply ADMM to the split nonlinear least squares:
$\min \sum_{(i,j)\in N_x} \big[ (x_i - x_j)^T (y_i - y_j) - d_{ij}^2 \big]^2 + \sum_{(k,j)\in N_a} \big[ (a_k - x_j)^T (a_k - y_j) - \hat d_{kj}^2 \big]^2$
s.t. $x_j - y_j = 0,\ \forall j.$  (4)
Apply the Steepest Descent and Feasible Projection Methods to the SDP Relaxation (Slide 11 of Lecture 11).
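
As a concrete starting point for the steepest-descent experiment on model (3), here is a minimal Python sketch; the data layout (an anchor array plus edge lists carrying squared distances) and the fixed step size are my own illustrative choices, not part of the project statement, and X0 could be the point matrix extracted from the SDP solution Z or a random start.

```python
# A hedged sketch: NLS objective (3) with its gradient, plus a fixed-step
# steepest-descent loop. Nx holds sensor-sensor edges (i, j, d2) and Na holds
# anchor-sensor edges (k, j, d2), where d2 is the squared target distance.
import numpy as np

def nls_value_grad(X, anchors, Nx, Na):
    # X: (n, d) sensor positions; anchors: (K, d) anchor positions
    f, G = 0.0, np.zeros_like(X)
    for i, j, d2 in Nx:
        r = np.sum((X[i] - X[j])**2) - d2
        f += r**2
        G[i] += 4 * r * (X[i] - X[j])
        G[j] -= 4 * r * (X[i] - X[j])
    for k, j, d2 in Na:
        r = np.sum((anchors[k] - X[j])**2) - d2
        f += r**2
        G[j] -= 4 * r * (anchors[k] - X[j])
    return f, G

def steepest_descent(X0, anchors, Nx, Na, step=1e-3, iters=500):
    X = X0.copy()
    for _ in range(iters):
        _, G = nls_value_grad(X, anchors, Nx, Na)
        X -= step * G
    return X
```

A backtracking line search would be a natural replacement for the fixed step when comparing against the SOCP and SDP solutions.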

4 P-I: Objective Regularization for SDP Rank-Reduction for SNL
Consider Anchor-Free SNL:
minimize $\sum_{(i,j)\in E} |\alpha_{ij}|$
subject to $(e_i - e_j)(e_i - e_j)^T \bullet Y = d_{ij}^2 + \alpha_{ij},\ (i,j) \in E,\quad Y \succeq 0;$
and its regularized version
minimize $\sum_{(i,j)\in E} |\alpha_{ij}| + \lambda\, r(Y)$
subject to $(e_i - e_j)(e_i - e_j)^T \bullet Y = d_{ij}^2 + \alpha_{ij},\ (i,j) \in E,\quad Y \succeq 0,$
where $r(\cdot)$ is a nonnegative regularization function. For example, the matrix p-norm function:
$r(Y) = \|Y\|_p = \Big( \sum_j |\lambda(Y)_j|^p \Big)^{1/p},\quad 0 < p \le 1.$
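
For experimenting with this regularizer, a small sketch of the matrix p-norm of a symmetric matrix (numpy assumed; for a PSD matrix this is the Schatten-p quasi-norm of its eigenvalues):

```python
# r(Y) = ( sum_j |lambda_j(Y)|^p )^(1/p), 0 < p <= 1, for a symmetric matrix Y.
import numpy as np

def schatten_p(Y, p=0.5):
    lam = np.linalg.eigvalsh(Y)          # eigenvalues of the symmetric matrix Y
    return np.sum(np.abs(lam) ** p) ** (1.0 / p)
```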

5 P-I: Tensegrity (Tensional-Integrity) Objective for 1-D SNL
An Anchor-Free SNL example: a Unit-Distance Chain
$(e_i - e_{i+1})(e_i - e_{i+1})^T \bullet Y = 1,\ i = 1, \ldots, n-1,\qquad Y \succeq 0.$
For certain graphs, selecting a subset of edges to maximize and/or a subset of edges to minimize is guaranteed to find the lowest-rank SDP solution: the Tensegrity Method.

6 P-I: The Chain Graph Example
Consider:
max $e_3 e_3^T \bullet Y$
s.t. $e_1 e_1^T \bullet Y = 1,\ (e_1 - e_2)(e_1 - e_2)^T \bullet Y = 1,\ (e_2 - e_3)(e_2 - e_3)^T \bullet Y = 1,\ Y \succeq 0 \in S^3.$
The dual is
min $y_1 + y_2 + y_3$
s.t. $y_1 e_1 e_1^T + y_2 (e_1 - e_2)(e_1 - e_2)^T + y_3 (e_2 - e_3)(e_2 - e_3)^T - S = e_3 e_3^T,\ S \succeq 0 \in S^3.$
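
One way to experiment with this example is the following cvxpy sketch (cvxpy is an assumption here; any SDP solver would do): maximizing $Y_{33}$ over the unit-distance chain constraints should return the rank-1 "stretched" solution $Y = xx^T$ with $x = (1, 2, 3)$, and the equality-constraint duals are the stresses discussed on the next slide.

```python
# A sketch of the chain-graph SDP with the tensegrity objective max Y_33.
import numpy as np
import cvxpy as cp

e = np.eye(3)
Y = cp.Variable((3, 3), PSD=True)
cons = [
    cp.trace(np.outer(e[0], e[0]) @ Y) == 1,                  # Y_11 = 1
    cp.trace(np.outer(e[0] - e[1], e[0] - e[1]) @ Y) == 1,    # (x1 - x2)^2 = 1
    cp.trace(np.outer(e[1] - e[2], e[1] - e[2]) @ Y) == 1,    # (x2 - x3)^2 = 1
]
prob = cp.Problem(cp.Maximize(Y[2, 2]), cons)
prob.solve()
print(np.round(Y.value, 3))                     # ~ outer([1, 2, 3], [1, 2, 3])
print("dual stresses:", [float(c.dual_value) for c in cons])
```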

7 P-I: Interpretation of the Dual
What is the interpretation of the dual? What is the interpretation of complementarity?
Dual variables are stresses (internal forces) on edges, and the objective is the total potential of the graph. At the optimal solution, the stresses reach an equilibrium or balanced state with an external force added at node 3.
$(1; 2; 3)(1; 2; 3)^T \bullet S = 0$ implies $S(1; 2; 3) = 0$; or
$\big( 3 e_1 e_1^T + 3 (e_1 - e_2)(e_1 - e_2)^T + 3 (e_2 - e_3)(e_2 - e_3)^T - e_3 e_3^T \big)(1; 2; 3) = 0,$
that is, $3 e_1 - 3(e_1 - e_2) - 3(e_2 - e_3) = 3 e_3.$
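
A quick numerical check of this complementarity statement (numpy only): with stresses (3, 3, 3) the slack matrix S is positive semidefinite and annihilates the position vector (1; 2; 3), i.e., the internal stresses and the external force at node 3 balance.

```python
import numpy as np

e = np.eye(3)
S = (3 * np.outer(e[0], e[0])
     + 3 * np.outer(e[0] - e[1], e[0] - e[1])
     + 3 * np.outer(e[1] - e[2], e[1] - e[2])
     - np.outer(e[2], e[2]))
x = np.array([1.0, 2.0, 3.0])
print(np.linalg.eigvalsh(S))   # all eigenvalues >= 0 (one of them is 0)
print(S @ x)                   # the zero vector: S(1; 2; 3) = 0
```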

8 P-I: The Kissing Number Problem
Given a unit sphere centered at the origin, what is the maximum number of other unit spheres, in dimension d, that can touch or kiss the centered unit sphere?
The Kissing Problem as a Graph Realization:
$(e_i - e_j)^T Y (e_i - e_j) \ge 4,\ \forall i \ne j,\qquad e_i^T Y e_i = 4,\ \forall i,\qquad Y \succeq 0;\quad (\mathrm{Rank}(Y) = d)$
A Tensegrity regularization objective function is constructed to find the lowest-rank solution, up to dimension 3.

9 Figure 1: Kissing Localization in 2D

10 Project-II: First-Order Methods and Value-Iteration for MDP
MDP problem with m states and n total actions:
$\min_x\ \sum_{j \in A_1} c_j x_j + \cdots + \sum_{j \in A_m} c_j x_j$
s.t. $\sum_{j \in A_1} (e_1 - \gamma p_j) x_j + \cdots + \sum_{j \in A_m} (e_m - \gamma p_j) x_j = e,\quad x_j \ge 0,\ \forall j.$  (5)
Its dual:
$\max_y\ \sum_{i=1}^m y_i$
s.t. $y_i - \gamma p_j^T y \le c_j,\ \forall j \in A_i,\ \forall i,$
where $y_i$ represents the cost-to-go value in state i.
Question 1: Prove that in (5) every basic feasible solution represents a policy, i.e., the basic variables contain exactly one variable from each state i. Furthermore, prove that each basic variable value is no less than 1, and that the sum of all basic variable values is $\frac{m}{1-\gamma}$.
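
A minimal sketch for checking the last claim of Question 1 numerically (scipy and numpy assumed; the data layout — one cost c[j] and one transition row P[j] per action, with A[i] listing the actions available in state i — is my own convention): summing the m equality constraints of (5) gives $(1-\gamma)\sum_j x_j = m$ for any feasible x, so the LP solution should satisfy $\sum_j x_j = m/(1-\gamma)$.

```python
# Build a random discounted MDP and solve the primal LP (5) with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, k, gamma = 5, 3, 0.9                                    # m states, k actions per state
A = [list(range(i * k, (i + 1) * k)) for i in range(m)]    # action index sets A_i
c = rng.uniform(0.0, 1.0, m * k)                           # cost c_j of each action
P = rng.dirichlet(np.ones(m), size=m * k)                  # transition row p_j of each action

# Column j (an action of state i) of the constraint matrix is e_i - gamma * p_j.
M = np.zeros((m, m * k))
for i in range(m):
    for j in A[i]:
        M[:, j] = -gamma * P[j]
        M[i, j] += 1.0

res = linprog(c, A_eq=M, b_eq=np.ones(m), bounds=(0, None), method="highs")
print("optimal discounted cost:", res.fun)
print("sum of x:", res.x.sum(), "vs m/(1-gamma):", m / (1 - gamma))
```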

11 P-II: First-Order Methods and Value-Iteration for MDP
The Value-Iteration (VI) Method is, starting from any $y^0$:
$y_i^{k+1} = \min_{j \in A_i} \{ c_j + \gamma p_j^T y^k \},\ \forall i.$
Question 2: Prove the contraction result:
$\|y^{k+1} - y^*\|_\infty \le \gamma \|y^k - y^*\|_\infty,\ \forall k,$
where $y^*$ is the fixed-point or optimal value vector, that is, $y_i^* = \min_{j \in A_i} \{ c_j + \gamma p_j^T y^* \},\ \forall i.$
Question 3: In the VI method, if starting with any vector $y^0 \ge y^*$ and assuming $y^1 \le y^0$, then prove the following entry-wise monotone property:
$y^* \le y^{k+1} \le y^k,\ \forall k.$
On the other hand, if we start from a vector such that $y_i^0 < \min_{j \in A_i} \{ c_j + \gamma p_j^T y^0 \},\ \forall i$ ($y^0$ in the interior of the feasible region), then prove the entry-wise monotone property:
$y^* \ge y^{k+1} \ge y^k,\ \forall k.$
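
For the computational parts of Questions 2 and 3, a small value-iteration sketch on the same data layout (c[j], P[j], A[i]) as the LP sketch above; the commented usage shows how the contraction inequality can be checked empirically, with a nearly converged iterate standing in for $y^*$.

```python
import numpy as np

def value_iteration(c, P, A, gamma, iters=200):
    m = len(A)
    y = np.zeros(m)
    history = [y.copy()]
    for _ in range(iters):
        # synchronous (Jacobi) update: every state uses the previous iterate y^k
        y = np.array([min(c[j] + gamma * P[j] @ y for j in A[i]) for i in range(m)])
        history.append(y.copy())
    return y, history

# Example check of ||y^{k+1} - y*||_inf <= gamma * ||y^k - y*||_inf:
# y_star, hist = value_iteration(c, P, A, 0.9)
# for y_prev, y_next in zip(hist, hist[1:]):
#     assert (np.linalg.norm(y_next - y_star, np.inf)
#             <= 0.9 * np.linalg.norm(y_prev - y_star, np.inf) + 1e-9)
```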

12 P-II: Randomized VI
Rather than going through all state values in each iteration, we modify the VI method and call it RandomVI: in the kth iteration, randomly select a subset of states $B_k$ and do
$y_i^{k+1} = \min_{j \in A_i} \{ c_j + \gamma p_j^T y^k \},\ i \in B_k.$  (6)
In RandomVI, we only update a subset of state values at random in each iteration.
Question 4: What can you say about the convergence of the RandomVI method? Does it make a difference compared with the classical VI method? How does the sample size affect the performance? Use simulated computational experiments to verify your claims.
Rather than randomly selecting a subset of all states in each iteration, suppose we build an influence tree from a given subset of states, say B: let I(B) denote all states that are connected to some state in B. Then, once the states in B are updated in the current iteration, select a subset of states in I(B) for updating in the next iteration. Redo the computational experiments using this strategy for a sparsely connected MDP network ($p_j$ is a very sparse distribution vector for each action j). In doing so, many unimportant or irrelevant states may be avoided, which results in a state reduction.
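
A hedged sketch of RandomVI for the experiments in Question 4 (same data layout as above; states outside the sampled subset simply keep their current values):

```python
import numpy as np

def random_vi(c, P, A, gamma, sample_size, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    m = len(A)
    y = np.zeros(m)
    for _ in range(iters):
        B = rng.choice(m, size=sample_size, replace=False)   # random subset B_k
        y_new = y.copy()
        for i in B:
            y_new[i] = min(c[j] + gamma * P[j] @ y for j in A[i])
        y = y_new
    return y
```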

13 P-II: Cyclic VI
Question 5: Here is another modification, called CyclicVI. In the kth iteration do:
Initialize $\tilde y^k = y^k$. For i = 1 to m:
$\tilde y_i^k = \min_{j \in A_i} \{ c_j + \gamma p_j^T \tilde y^k \}.$  (7)
Then set $y^{k+1} = \tilde y^k$.
In the CyclicVI method, as soon as a state value is updated, we use it to update the rest of the state values. What can you say about the convergence of the CyclicVI method? Does it make a difference compared with other VI methods? Use simulated computational experiments to verify your claims. How is this cyclic method related to the method at the bottom of Question 4?
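
A short sketch of CyclicVI in the same setting; setting shuffle=True turns the same loop into a sketch of the randomly permuted variant (RPCyclicVI) described on the next slide.

```python
import numpy as np

def cyclic_vi(c, P, A, gamma, iters=200, shuffle=False, seed=0):
    rng = np.random.default_rng(seed)
    m = len(A)
    y = np.zeros(m)
    for _ in range(iters):
        order = rng.permutation(m) if shuffle else range(m)
        for i in order:
            # Gauss-Seidel style: freshly updated entries of y are used immediately
            y[i] = min(c[j] + gamma * P[j] @ y for j in A[i])
    return y
```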

14 P-II: Randomly Permuted Cyclic VI
In the CyclicVI method, rather than using the fixed cyclic order from 1 to m, we follow a random permutation order, i.e., sample without replacement, to update the state values. More precisely, in the kth iteration do:
0. Initialize $\tilde y^k = y^k$ and $B_k = \{1, 2, \ldots, m\}$.
1. Randomly select $i \in B_k$, set $\tilde y_i^k = \min_{j \in A_i} \{ c_j + \gamma p_j^T \tilde y^k \}$,  (8)
remove i from $B_k$, and return to Step 1 until $B_k$ is empty.
2. Set $y^{k+1} = \tilde y^k$.
We call it the randomly permuted CyclicVI, or RPCyclicVI for short. What can you say about the convergence of the RPCyclicVI method? How does it compare with other VI methods? Use simulated computational experiments to verify your claims.

15 Project-III: Quadratic Regularization and Path-Following Algorithms
Question 1: Implement the Quadratic Regularization Method described in Lecture #12, and try it on your favorite nonlinear and nonconvex minimization problem.
$d^k(\lambda) = \arg\min_d\ (c^k)^T d + \tfrac{1}{2} d^T Q^k d + \tfrac{\lambda}{2} \|d\|^2,$  (9)
where the parameter $\lambda \ge \max\{0, -\lambda_{\min}(Q^k)\}$. Then consider the one-variable function $\varphi(\lambda) := f(x^k + d^k(\lambda))$ and do the one-variable minimization of $\varphi(\lambda)$. Let $\lambda^k = \arg\min \varphi(\lambda)$ and do the update $x^{k+1} = x^k + d^k(\lambda^k)$.
First use the direct method to solve the convex QP problem (9) for any given $\lambda$. Then use first-order QP algorithms to solve (9) to accuracy error $\epsilon_k$. What is a good strategy to select $\epsilon_k$? Decrease $\epsilon_k$ as k increases? At what rate?
How can the one-variable minimization problem be done fast? Try different approaches: Golden Section Search, Bisection Search, Newton's Method, or simply the back-tracking strategy presented on page 28 of Lecture #10. Also, is the final $\lambda^k$ a useful initial point for searching for $\lambda^{k+1}$?
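
One possible implementation of a single step, as a hedged sketch (the bracket [lam_lo, lam_hi] for the one-variable search and the use of scipy's bounded scalar minimizer are my own choices; f, grad and hess are user-supplied callables):

```python
# Quadratic-regularization step: solve (Q_k + lambda I) d = -c_k directly for a
# given lambda, then minimize phi(lambda) = f(x_k + d(lambda)) over lambda.
import numpy as np
from scipy.optimize import minimize_scalar

def reg_step(f, grad, hess, x_k, lam_hi=1e3):
    c_k, Q_k = grad(x_k), hess(x_k)
    lam_lo = max(0.0, -np.linalg.eigvalsh(Q_k)[0]) + 1e-8   # keep Q_k + lambda*I positive definite

    def d_of(lam):
        return np.linalg.solve(Q_k + lam * np.eye(len(x_k)), -c_k)

    phi = lambda lam: f(x_k + d_of(lam))
    res = minimize_scalar(phi, bounds=(lam_lo, lam_hi), method="bounded")
    return x_k + d_of(res.x), res.x
```

Replacing the direct solve by a first-order QP solver with tolerance $\epsilon_k$, and the bounded search by golden section, bisection, Newton, or backtracking, gives the variants the question asks to compare.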

16 P-III: Path-Following Algorithms for Unconstrained Optimization
Question 2: Now consider the Path-Following Method described in Lecture #12. Start from a solution $x^k$ that approximately satisfies
$\nabla f(x) + \lambda x = 0, \text{ with } \lambda = \lambda^k > 0.$  (10)
When f is convex, such a solution $x(\lambda)$ exists for any $\lambda > 0$; we let
$x(\lambda) = \arg\min_x\ f(x) + \tfrac{\lambda}{2} \|x\|^2,$
and these solutions form a path down to $x(0)$.
Sub-Question 2.1: What would happen if f is not convex but is bounded from below?

17 P-III: Path-Following Algorithms for Unconstrained Optimization
Let the approximation path error at $x^k$ with $\lambda = \lambda^k$ be
$\|\nabla f(x^k) + \lambda^k x^k\| \le \tfrac{1}{2\beta} \lambda^k.$
Then, we would like to compute a new iterate $x^{k+1}$ such that
$\|\nabla f(x^{k+1}) + \lambda^{k+1} x^{k+1}\| \le \tfrac{1}{2\beta} \lambda^{k+1},$
where $0 \le \lambda^{k+1} < \lambda^k$.
Sub-Question 2.2: What is a good strategy to decrease $\lambda$?

18 P-III: Path-Following Algorithms for Unconstrained Optimization
After $\lambda^k$ is replaced by a smaller $\lambda^{k+1}$, we aim to find a solution x such that $\nabla f(x) + \lambda^{k+1} x = 0$. One way is to start from $x^k$ and apply Newton's Method:
$\nabla f(x^k) + \nabla^2 f(x^k)\, d + \lambda^{k+1} (x^k + d) = 0,$ or
$(\nabla^2 f(x^k) + \lambda^{k+1} I)\, d = -\nabla f(x^k) - \lambda^{k+1} x^k,$  (11)
which is also a convex quadratic minimization problem if f is convex.
Sub-Question 2.3: Again, use first-order QP algorithms to solve the convex QP problem to accuracy error $\epsilon_k$. What is a good strategy to select $\epsilon_k$? Decrease $\epsilon_k$ as k increases? At what rate?
Sub-Question 2.4: Finally, how should the path-following algorithm be modified for nonconvex f minimization? What should we do if the matrix of (11) is singular? Could some random perturbation be helpful here?
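
A minimal sketch of this corrector step (grad and hess are user-supplied callables; the direct linear solve below is exactly the place where a first-order QP algorithm with tolerance $\epsilon_k$ would be substituted for Sub-Question 2.3):

```python
# One Newton step (11) toward grad f(x) + lambda_{k+1} x = 0, starting from x_k.
import numpy as np

def path_newton_step(grad, hess, x_k, lam_next):
    H = hess(x_k) + lam_next * np.eye(len(x_k))
    rhs = -(grad(x_k) + lam_next * x_k)
    d = np.linalg.solve(H, rhs)
    return x_k + d
```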

19 Project-IV: An Online Linear Programming/Resource Allocation Example
[Table: columns order 1 (t = 1), order 2 (t = 2), ..., Inventory (b); rows Price ($\pi_t$): $100, $30, ...; Decision: $x_1$, $x_2$, ...; and the per-order resource requirements of Pants, Shoes, T-shirts, Jacket, Socks.]

20 P-IV: Offline Formulation
LP: $\max\ \pi^T x$ s.t. $A^T x \le b,\ x \le e,\ x \ge 0.$
CPCAM: $\max\ \pi^T x + U(s)$ s.t. $A^T x + s = b,\ s \ge 0,\ x \le e,\ x \ge 0.$
$U(\cdot)$, in the form $U(s) = \sum_i u(s_i)$, is a strictly concave (risk aversion) and increasing value function of the possible slack variables, valuing the uncertain revenue of the remaining resources.
Question 1: Relate the optimal slack s to the multipliers/prices of the first set of constraints.
Examples of u:
Exponential: $u(s_i) = b(1 - \exp(-s_i/b))$ for some positive constant b.
Logarithmic: $u(s_i) = b \log(s_i)$ or $u(s_i) = b \log(1 + s_i)$ for some positive constant b.
Quadratic: $u(s_i) = b(1 - (1 - s_i/b)^2)$ for $0 \le s_i \le b$, and $u(s_i) = b$ for $s_i \ge b$, for some positive constant b.

21 P-IV: Online Formulation and SCPM
Given the already-made (t-1) order-fill decisions $x_1, \ldots, x_{t-1}$, the t-th order-fill decision solves
$\max\ \pi_t x_t + \sum_i u(s_i) + \sum_{j=1}^{t-1} \pi_j x_j$
s.t. $a_t x_t + s = b - \sum_{j=1}^{t-1} a_j x_j,\quad 0 \le x_t \le 1,$
where $\sum_{j=1}^{t-1} \pi_j x_j$ is the collected revenue and $\sum_{j=1}^{t-1} a_j x_j$ is the outstanding allocation before the new arrival. Dropping the constant revenue term gives
$\max\ \pi_t x_t + \sum_i u(s_i)$ s.t. $a_t x_t + s = \bar b^{t-1},\ 0 \le x_t \le 1,$
or equivalently
$\max\ \pi_t x_t + \sum_i u(\bar b_i^{t-1} - a_{it} x_t)$ s.t. $0 \le x_t \le 1,$
where $\bar b^{t-1} = b - \sum_{j=1}^{t-1} a_j x_j$.
Questions 2 and 3: Derive the KKT conditions of the simplified problem and run experiments.
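
Since the simplified problem is a one-variable concave maximization over [0, 1], a small sketch for the experiments in Questions 2 and 3 could look as follows (scipy assumed; the exponential utility from the previous slide is used purely as an example).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def u_exp(s, const_b=1.0):
    # exponential utility u(s) = b * (1 - exp(-s/b)), applied componentwise
    return const_b * (1.0 - np.exp(-s / const_b))

def order_decision(pi_t, a_t, b_bar):
    # maximize pi_t * x + sum_i u(b_bar_i - a_it * x) over 0 <= x <= 1
    obj = lambda x: -(pi_t * x + np.sum(u_exp(b_bar - a_t * x)))
    return minimize_scalar(obj, bounds=(0.0, 1.0), method="bounded").x

# e.g. order_decision(pi_t=3.0, a_t=np.array([0.5, 1.0]), b_bar=np.array([2.0, 4.0]))
```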

22 P-IV: Price Mechanism and SLPM
The problem would be easy if there were an ideal price vector p:
[Table: each bid t has a bid price $\pi_t$ ($100, $30, ...) and a decision $x_t$; the inventory column lists the goods with their ideal prices p: Pants $45, Shoes $45, T-shirts $10, Jackets $55, Hats $15.]
Could such an ideal price vector be learned?

23 P-IV: Model Assumptions
Main assumptions:
The columns $(\pi_t, a_t)$ arrive in a random order.
We know the total number of columns n a priori.
Other technical assumptions: $0 \le a_{it} \le 1$ for all (i, t); $\pi_t \ge 0$ for all t.
The algorithm/mechanism quality is evaluated on the expected performance over all permutations, compared to the offline optimal solution; i.e., an algorithm A is c-competitive if and only if
$E_\sigma \Big[ \sum_{t=1}^n \pi_t x_t(\sigma, A) \Big] \ge c \cdot OPT(A, \pi),$
where $OPT(A, \pi)$ is the maximal objective value of the offline model.

24 P-IV: Comments and Theorems on the Online Model
The online approach is distribution-free, so it allows for great robustness in practical problems.
The second assumption is necessary for one to obtain a near-optimal solution. However, it can be relaxed to an approximate knowledge of n or of the length of the decision horizon.
Both assumptions are reasonable and standard in many engineering and science applications.
Theorem 1. For any fixed $0 < \epsilon < 1$, there is no online algorithm for solving the linear program with competitive ratio $1 - \epsilon$ if $B < \frac{\log(m)}{\epsilon^2}$.
Theorem 2. For any fixed $0 < \epsilon < 1$, there is a $1 - \epsilon$ competitive online algorithm for solving the linear program if $B \ge \Omega\big( \frac{m \log(n/\epsilon)}{\epsilon^2} \big)$.
Agrawal, Wang and Ye [Operations Research 2014]

25 P-IV: Key Ideas to Prove the Negative and Positive Results
Consider m = 1 and inventory level B: one can construct an instance where $OPT = B$ and there will be a loss of $\sqrt{B}$ with high probability, which gives an approximation ratio of $1 - \frac{1}{\sqrt{B}}$.
Consider general m and inventory level B for each good. We are able to construct an instance that decomposes the problem into $\log(m)$ separable problems, each of which has an inventory level $B/\log(m)$ on a composite single good and $OPT = B/\log(m)$. Then, with high probability, each single-good case has a loss of $\sqrt{B/\log(m)}$, for a total loss of $\sqrt{B \log(m)}$. Thus, the approximation ratio is at best $1 - \sqrt{\frac{\log(m)}{B}}$.
The proof of the positive result is constructive and based on a learning policy. No distribution is known, so stochastic optimization models are not applicable. Unlike dynamic programming, the decision maker does not have full information/data, so a backward recursion cannot be carried out to find an optimal sequential decision policy. Thus, the online algorithm needs to be learning-based, in particular, learning-while-doing.

26 P-IV: One-Time Learning Algorithm
We start with a simple one-time learning algorithm:
Set $x_t = 0$ for all $1 \le t \le \epsilon n$;
Solve the $\epsilon$ portion of the problem
maximize $\sum_{t=1}^{\epsilon n} \pi_t x_t$
subject to $\sum_{t=1}^{\epsilon n} a_{it} x_t \le (1 - \epsilon)\epsilon b_i,\ i = 1, \ldots, m,\quad 0 \le x_t \le 1,\ t = 1, \ldots, \epsilon n,$
and get the optimal Lagrange/dual solution $\hat p$;
Determine the future allocations $x_t$ as
$x_t = 0$ if $\pi_t \le \hat p^T a_t$, and $x_t = 1$ if $\pi_t > \hat p^T a_t$,
as long as $a_{it} x_t \le b_i - \sum_{j=1}^{t-1} a_{ij} x_j$ for all i; otherwise, set $x_t = 0$.
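
For the numerical experiments in Question 4, a hedged sketch of this mechanism (cvxpy is assumed for convenience because it exposes the dual prices of the inventory constraints directly; the data layout, with the columns of Amat as the vectors $a_t$, is my own):

```python
import numpy as np
import cvxpy as cp

def one_time_learning(pi, Amat, b, eps):
    m, n = Amat.shape
    s = int(np.ceil(eps * n))                     # length of the learning phase
    x0 = cp.Variable(s)
    cons = [Amat[:, :s] @ x0 <= (1 - eps) * eps * b, x0 >= 0, x0 <= 1]
    cp.Problem(cp.Maximize(pi[:s] @ x0), cons).solve()
    p_hat = cons[0].dual_value                    # learned prices

    x, used = np.zeros(n), np.zeros(m)            # the first eps*n orders are rejected
    for t in range(s, n):
        if pi[t] > p_hat @ Amat[:, t] and np.all(used + Amat[:, t] <= b):
            x[t] = 1.0
            used += Amat[:, t]
    return x, p_hat
```

The dynamic price updating algorithm of the later slide re-solves this kind of LP at times $\epsilon n, 2\epsilon n, 4\epsilon n, \ldots$ with the scaled budget $(1 - h_l)\frac{l}{n} b$, instead of learning the prices only once.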

27 P-IV: One-Time Learning Algorithm Result
Theorem 3. For any fixed $\epsilon > 0$, the one-time learning algorithm is $(1 - \epsilon)$-competitive for solving the linear program when $B \ge \Omega\big( \frac{m \log(n/\epsilon)}{\epsilon^3} \big)$.
Outline of the proof:
With high probability, we clear the market;
With high probability, the revenue is near-optimal if we include the initial $\epsilon$ portion of the revenue;
With high probability, the first $\epsilon$ portion of the revenue, a learning cost, doesn't contribute too much.
Then, we prove that the one-time learning algorithm is $(1 - \epsilon)$-competitive under the condition $B \ge \frac{6 m \log(n/\epsilon)}{\epsilon^3}$. But this is one $\epsilon$ factor higher than the lower bound...
Question 4: numerical experiments.

28 P-IV: Dynamic Price Updating Algorithm
In the dynamic price learning algorithm, we update the price at times $\epsilon n, 2\epsilon n, 4\epsilon n, \ldots$, until $2^k \epsilon \ge 1$. At time $l \in \{\epsilon n, 2\epsilon n, \ldots\}$, the price vector is the optimal Lagrange/dual solution to the following linear program:
maximize $\sum_{t=1}^{l} \pi_t x_t$
subject to $\sum_{t=1}^{l} a_{it} x_t \le (1 - h_l)\frac{l}{n} b_i,\ i = 1, \ldots, m,\quad 0 \le x_t \le 1,\ t = 1, \ldots, l,$
where $h_l = \epsilon \sqrt{n/l}$; this price vector is used to determine the allocation for the next immediate period, whose length is doubled at each update.
Question 5: numerical experiments.
In the dynamic algorithm, we update the prices $\log_2(1/\epsilon)$ times during the entire time horizon. The numbers $h_l$ basically balance the probability that the inventory ever gets violated against the loss of revenue due to the factor $1 - h_l$. By choosing a larger $h_l$ (more conservative) in the beginning periods and a smaller $h_l$ (more aggressive) in the later periods, one can control the loss of revenue.

29 P-IV: Related Work on Random-Permutation Models
Paper                  | Sufficient condition                                                                                   | Learning
Kleinberg [2005]       | $B \ge \frac{1}{\epsilon^2}$ (for m = 1)                                                               | Dynamic
Devanur et al [2009]   | $B \ge \frac{m \log n}{\epsilon^3}$                                                                    | One-time
Feldman et al [2010]   | $B \ge \frac{m \log n}{\epsilon^2}$                                                                    | One-time
Agrawal et al [2010]   | $OPT \ge \frac{m^2 \log(n)}{\epsilon^3}$ and $OPT \ge \frac{m \log n}{\epsilon}$, or $OPT \ge \frac{m^2 \log n}{\epsilon^2}$ | Dynamic
Molinaro/Ravi [2013]   | $B \ge \frac{m^2 \log m}{\epsilon^2}$                                                                  | Dynamic
Kesselheim et al [2014]| $B \ge \frac{\log m}{\epsilon^2}$                                                                      | Dynamic*
Gupta/Molinaro [2014]  | $B \ge \frac{\log m}{\epsilon^2}$                                                                      | Dynamic*
Agrawal/Devanur [2014] | $B \ge \frac{\log m}{\epsilon^2}$                                                                      | Dynamic*
Table 1: Comparison of several existing results

30 P-IV: Summary and Future Questions on OLP
$B \ge \frac{\log m}{\epsilon^2}$ is now a necessary and sufficient condition (up to a constant factor). Thus, these are near-optimal online algorithms for a very general class of online linear programs.
The algorithms are distribution-free and/or non-parametric, and thereby robust to distribution/data uncertainty.
The dynamic learning has the feature of learning-while-doing, and is provably better than one-time learning by an $\epsilon$ factor in the required inventory.
Buy-and-sell or double market? Price-posting multi-good model? Online utility formulation for resource allocation?
Question 6: multi-layer resource allocation/supply-chain management.
