Written Examination


Division of Scientific Computing
Department of Information Technology
Uppsala University

Optimization
Written Examination

Time: 14:00-19:00
Allowed tools: pocket calculator, one A4 paper with notes (machine written, font size minimum 10 pt)
Maximum number of points: 36 (18 points to pass)
All answers must be motivated to get full points.

1. Consider the linear program:

   max f(x) = 9x_1 + 7x_2
   s.t. -x_1 + 3x_2 <= 6
        3x_1 + 2x_2 <= 21                                        (1)
        x_1, x_2 >= 0.

a) Rewrite the program into standard form. [0.5 pt]

Solution

   min g(x) = -f(x) = -9x_1 - 7x_2
   s.t. -x_1 + 3x_2 + x_3 = 6
        3x_1 + 2x_2 + x_4 = 21                                   (2)
        x_1, x_2, x_3, x_4 >= 0.

b) Solve the problem by using the simplex method. Use the slack variables as initial basic variables and employ Bland's rule to determine the entering and leaving variables. Give the solution for the optimizer x* and the value of the objective function at the optimizer, f(x*). Note: state clearly what your basic and non-basic variables are in every step. [4 pt]

Solution

1) First iteration: x_B = (x_3, x_4)^T, x_N = (x_1, x_2)^T,

   B = [1 0; 0 1],   N = [-1 3; 3 2],
   c_B = (0, 0)^T,   c_N = (-9, -7)^T,
   b̂ = B^{-1} b = (6, 21)^T,
   ŷ^T = c_B^T B^{-1} = (0, 0),
   ĉ_r = c_N^T - ŷ^T N = (-9, -7)^T.

The basis is not optimal. The entering variable is x_1 (the smallest index i with ĉ_{r,i} < 0, as required by Bland's rule). On the other hand,

   Â_1 = B^{-1} A_1 = (-1, 3)^T,

and only Â_{1,2} > 0, so the leaving variable is x_4.

2) Second iteration: x_B = (x_3, x_1)^T, x_N = (x_4, x_2)^T,

   B = [1 -1; 0 3],   N = [0 3; 1 2],   B^{-1} = [1 1/3; 0 1/3],
   c_B = (0, -9)^T,   c_N = (0, -7)^T,
   b̂ = B^{-1} b = (13, 7)^T,
   ŷ^T = c_B^T B^{-1} = (0, -3),
   ĉ_r = c_N^T - ŷ^T N = (3, -1)^T.

The basis is not optimal since not all components of ĉ_r are nonnegative. The negative component corresponds to x_2, which is the entering variable. On the other hand,

   Â_2 = B^{-1} A_2 = (11/3, 2/3)^T,

and

   min_i { b̂_i / Â_{2,i} : Â_{2,i} > 0 } = min { 39/11, 21/2 } = 39/11,

and thus the leaving variable is x_3.

3) Third iteration: x_B = (x_1, x_2)^T, x_N = (x_4, x_3)^T,

   B = [-1 3; 3 2],   N = [0 1; 1 0],   B^{-1} = (1/11) [-2 3; 3 1],
   c_B = (-9, -7)^T,  c_N = (0, 0)^T,
   ŷ^T = c_B^T B^{-1} = (-3/11, -34/11),
   ĉ_r = c_N^T - ŷ^T N = (34/11, 3/11)^T.

The basis is now optimal since ĉ_r > 0. Stop here. The optimizer is

   x* = b̂ = B^{-1} b = (51/11, 39/11)^T,

and the value of the objective function (with the primal problem written in standard form) is -732/11, i.e. f(x*) = 732/11 for the original maximization problem.
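As a quick numerical cross-check of the hand calculation above, the LP can also be handed to an off-the-shelf solver. This is only a sketch (it assumes SciPy is available) and is of course not part of the required exam work:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-9.0, -7.0])            # minimize -f(x) = -9*x1 - 7*x2
A_ub = np.array([[-1.0, 3.0],
                 [ 3.0, 2.0]])
b_ub = np.array([6.0, 21.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x)       # expected [51/11, 39/11] ~ [4.6364, 3.5455]
print(-res.fun)    # expected 732/11 ~ 66.545
```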

c) Write down the dual of problem (1) and give the solution of the dual problem (the optimizer y* and also the value of the dual objective). [1.5 pt]

Solution

The explicit expression of the dual problem is not unique (in the same way that the primal problem can also be expressed in different forms). For example, if we directly calculate the dual from the primal expressed as in (1) we obtain (see p. 4 on the slides for Chapter 6, Duality and Sensitivity, for an identical example)

   Primal:                              Dual:
   max f(x) = 9x_1 + 7x_2               min w(y) = 6y_1 + 21y_2
   s.t. -x_1 + 3x_2 <= 6                s.t. -y_1 + 3y_2 >= 9
        3x_1 + 2x_2 <= 21                    3y_1 + 2y_2 >= 7           (3)
        x_1, x_2 >= 0.                       y_1, y_2 >= 0.

If, on the other hand, we decide to write problem (1) first in canonical form, the primal and the dual problems can be equivalently expressed as (see p. 2 on the slides for Chapter 6, Duality and Sensitivity)

   Primal (canonical form):             Dual:
   min g(x) = -f(x) = -9x_1 - 7x_2      max v(y) = -w(y) = -6y_1 - 21y_2
   s.t. x_1 - 3x_2 >= -6                s.t. y_1 - 3y_2 <= -9
        -3x_1 - 2x_2 >= -21                  -3y_1 - 2y_2 <= -7         (4)
        x_1, x_2 >= 0,                       y_1, y_2 >= 0.

Another alternative is to express the primal in standard form, as in (2). Then the dual problem is

   Primal (standard form):              Dual:
   min g(x) = -f(x) = -9x_1 - 7x_2      max w(y) = 6y_1 + 21y_2
   s.t. -x_1 + 3x_2 + x_3 = 6           s.t. -y_1 + 3y_2 <= -9
        3x_1 + 2x_2 + x_4 = 21               3y_1 + 2y_2 <= -7          (5)
        x_1, x_2, x_3, x_4 >= 0.             y_1 <= 0,  y_2 <= 0.

In this last formulation, as the primal constraints are equalities, the variables y_1 and y_2 would in principle be unrestricted, but the constraints y_1 <= 0 and y_2 <= 0 (coming from the nonnegativity of the primal slack variables x_3 and x_4) are more restrictive. Note also that the dual in (5) is, of course, equivalent to the dual in (4) simply by the change of variables y_1 -> -y_1, y_2 -> -y_2.

From all the previous equivalent formulations, we have to use the last one (with the primal expressed in standard form), as the theoretical results of the weak and strong duality theory are based on the primal problem being written in that way (see p. 2 on the slides for Chapter 6, Duality and Sensitivity, and the book by Griva, Nash and Sofer (2009)). Then the optimizer is just the simplex multiplier from the last iteration, i.e.

   y* = ŷ = (-3/11, -34/11)^T,

and the objective value of the dual, written in standard form as in (5), is the same -732/11 as for the primal problem in standard form, by the strong duality theorem (equivalently, the dual of formulation (3) attains the value 732/11 = f(x*) at y = (3/11, 34/11)^T).
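The dual optimum can likewise be checked numerically: since x_1*, x_2* > 0, both dual constraints of (3) are active at the optimum, so it suffices to solve a 2x2 linear system. A small sketch, assuming NumPy is available:

```python
import numpy as np

# Dual of (3): min 6*y1 + 21*y2  s.t.  -y1 + 3*y2 >= 9,  3*y1 + 2*y2 >= 7,  y >= 0.
# Both dual constraints are active at the optimum, so solve the 2x2 system:
A_active = np.array([[-1.0, 3.0],
                     [ 3.0, 2.0]])
y = np.linalg.solve(A_active, np.array([9.0, 7.0]))
print(y)                           # expected [3/11, 34/11]
print(6 * y[0] + 21 * y[1])        # expected 732/11, equal to the primal optimum
```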

2. The time evolution of a cooling process in a harmonic quantum system can be described by

   n(t) = e^{-Wt} (1 - s) + s.

Here W is the effective cooling rate and s the steady state of the system. Based on the measured values for the average quantum number n in the table below, the optimal values for W and s can be found.

   (table of measured values t_i, n_i, i = 1, 2, 3)

a) Formulate the problem as a non-linear least squares problem. Write down the objective function f(x), the gradient ∇f(x), the Jacobian matrix ∇F(x), and the expression for the Hessian matrix (without putting in the numbers). [2 pt]

Solution

The problem is formulated in the common form for nonlinear least-squares problems:

   min_x f(x) = (1/2) Σ_{i=1}^{3} f_i(x)^2 = (1/2) Σ_{i=1}^{3} [n(t_i, x) - n_i]^2 = (1/2) F(x)^T F(x),

where x = (W, s)^T and F(x) = (f_1(x), f_2(x), f_3(x))^T. The residuals f_i(x) are defined as

   f_i(x) = e^{-W t_i} (1 - s) + s - n_i.
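For reference, the residuals and the derivatives worked out below translate directly into code. The sketch below assumes NumPy; the arrays t and n stand for the measured values from the table, which are not reproduced here, and the function names are mine:

```python
import numpy as np

def residuals(x, t, n):
    """f_i(x) = n(t_i; W, s) - n_i, with n(t; W, s) = exp(-W*t)*(1 - s) + s."""
    W, s = x
    return np.exp(-W * t) * (1.0 - s) + s - n

def jacobian(x, t):
    """One row per data point; columns are (df_i/dW, df_i/ds).
    Note: this is the transpose of the matrix called grad F(x) in the text."""
    W, s = x
    dW = -t * np.exp(-W * t) * (1.0 - s)
    ds = 1.0 - np.exp(-W * t)
    return np.column_stack([dW, ds])

def objective(x, t, n):
    F = residuals(x, t, n)
    return 0.5 * F @ F
```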

The partial derivatives with respect to the parameters are:

   ∂f_i/∂W = -t_i e^{-W t_i} (1 - s),
   ∂f_i/∂s = -e^{-W t_i} + 1.

By using the Jacobian

   ∇F(x) = [ ∂f_1/∂W  ∂f_2/∂W  ∂f_3/∂W ;
             ∂f_1/∂s   ∂f_2/∂s   ∂f_3/∂s ],

the derivative of the objective function can be expressed in a more compact way as

   ∇f(x) = ∇F(x) F(x).

The resulting Hessian matrix for the problem is, then,

   ∇²f(x) = ∇F(x) ∇F(x)^T + Σ_{i=1}^{3} f_i(x) ∇²f_i(x).

b) Perform one iteration with the Gauss-Newton method with a step length of one. Start with the initial guess W = 1.46, s = 0.09 (use 4 digits in the calculation). Check if the Armijo condition for a relaxation parameter µ = 0.1 is fulfilled. Give the value of the objective function. [5 pt]

Solution

We calculate all the components first (the numerical values follow from the measured data in the table). The vector of residuals at the initial point is

   F(x^(0)) = (f_1(x^(0)), f_2(x^(0)), f_3(x^(0)))^T,

and the value of the objective function at x^(0) is f(x^(0)) = (1/2) F(x^(0))^T F(x^(0)). The Jacobian evaluated at x^(0) is ∇F(x^(0)), and the gradient evaluated at x^(0) is

   ∇f(x^(0)) = ∇F(x^(0)) F(x^(0)).

The approximated Hessian, according to the Gauss-Newton algorithm, is

   ∇²f(x^(0)) ≈ H(x^(0)) = ∇F(x^(0)) ∇F(x^(0))^T.

Now, having all necessary components, we can calculate the quasi-Newton search direction

   p = -H(x^(0))^{-1} ∇f(x^(0)) = -[∇F(x^(0)) ∇F(x^(0))^T]^{-1} ∇F(x^(0)) F(x^(0)),

and obtain a new point x^(1) as

   x^(1) = x^(0) + p,    with x^(0) = (1.46, 0.09)^T,

together with its function value f(x^(1)). Using the Armijo condition means making a relaxed linear prediction for the change of the function value:

   f(x^(1)) <= f(x^(0)) + µ p^T ∇f(x^(0)).

We see that the condition is fulfilled, so the step gives a sufficient decrease.
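The whole Gauss-Newton step, including the Armijo check, can be written compactly using the helper functions sketched after problem 2a. Again, t_data and n_data are placeholders for the measured table values, which are not reproduced here:

```python
import numpy as np

def gauss_newton_step(x, t, n, mu=0.1):
    F = residuals(x, t, n)            # residual vector at the current point
    J = jacobian(x, t)                # Jacobian, one row per residual
    grad = J.T @ F                    # gradient of 0.5*||F||^2
    H = J.T @ J                       # Gauss-Newton approximation of the Hessian
    p = np.linalg.solve(H, -grad)     # search direction
    x_new = x + p                     # step length 1
    armijo_ok = objective(x_new, t, n) <= objective(x, t, n) + mu * (p @ grad)
    return x_new, armijo_ok

# Example call with the exam's initial guess:
# x1, ok = gauss_newton_step(np.array([1.46, 0.09]), t_data, n_data)
```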

3. Consider the following problem:

   min f(x) = (1/4) x_1^2 + (1/8) x_2^2.

a) Write the algorithm for the steepest-descent method without line search. [0.5 pt]

b) Solve the problem using the algorithm in (3a).

   i. Perform the first five iterations of the method starting from x_0 = (1, 1)^T. Give a 6-digit approximation to the value of the norm of the gradient at each iteration. Write the results in a table with columns k, x, p and ||∇f(x)||. [1.5 pt]
   ii. What can you say about the convergence rate of the iterates? (Do not compute numbers here, just give an intuitive interpretation.) [1 pt]
   iii. Plot the iterates on the x_1-x_2 plane, together with a sketch of some contour lines of f(x). [1 pt]

c) Perform again the first iterate as in (3b), but using an exact line search. Discuss the optimum value obtained for α. [1 pt]

Solution

a) The algorithm for the steepest-descent method without line search is simply:

   Input: select an initial value for the variable x_0; set k = 0.
   Repeat:
     Step 1: if x_k is optimal, stop.
     Step 2: determine a search direction p_k = -∇f(x_k) and update the iterate
             x_{k+1} = x_k + p_k;  k = k + 1.
   Until convergence.

b) The gradient of the objective function is

   ∇f(x) = (x_1/2, x_2/4)^T.

   i. The first iterate is easily computed as

      x_1 = x_0 + p_0 = x_0 - ∇f(x_0) = (1, 1)^T - (1/2, 1/4)^T = (1/2, 3/4)^T.

   The rest of the iterates are given in the table below.

      k   x                  p                       ||∇f(x)||
      0   (1, 1)             (-1/2, -1/4)            0.559017
      1   (1/2, 3/4)         (-1/4, -3/16)           0.312500
      2   (1/4, 9/16)        (-1/8, -9/64)           0.188150
      3   (1/8, 27/64)       (-1/16, -27/256)        0.122597
      4   (1/16, 81/256)     (-1/32, -81/1024)       0.085051
      5   (1/32, 243/1024)   (-1/64, -243/4096)      0.061349

   ii. The convergence rate is linear, which is typical of the behavior of the steepest-descent method. (The exact calculations give a convergence rate r = 1 and a constant C = 3/4.)

   iii. The plot with the iterates and some contour lines is given below.

      (figure: iterates on the x_1-x_2 plane with contour lines of f)
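The table above can be reproduced with a few lines of code (a sketch assuming NumPy is available):

```python
import numpy as np

def grad(x):
    # gradient of f(x) = x1^2/4 + x2^2/8
    return np.array([x[0] / 2.0, x[1] / 4.0])

x = np.array([1.0, 1.0])
for k in range(6):
    g = grad(x)
    print(k, x, -g, np.linalg.norm(g))   # k, x_k, p_k, ||grad f(x_k)||
    x = x - g                            # steepest descent with unit step length
```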

c) We now perform the first iterate again, but using an exact line search. To obtain the value of the optimum step length α_opt, we use the exact formula

   α = - ∇f(x_0)^T p_0 / (p_0^T Q p_0),                              (6)

where Q is the Hessian matrix of the quadratic function (1/2) x^T Q x - c^T x. In this case,

   Q = ∇²f(x) = [1/2 0; 0 1/4],

so the computation is straightforward and α = 20/9 ≈ 2.2222. Should we take this whole step, the new point would be at (-1/9, 4/9) (red line on the plot), with a gradient norm equal to 0.1242, so we would be improving the performance of the algorithm. However, had we used a backtracking strategy, we would have started with the initial value α = 1, which corresponds to the first step already calculated in (3b).

Note: In general, when the objective function is not quadratic, (6) cannot be used, and we should solve the following one-dimensional unconstrained optimization problem to obtain α:

   min_α φ(α) = f(x_0 + α p_0),

with x_0 = (1, 1)^T and p_0 = (-1/2, -1/4)^T, as previously calculated. In this case the expression for φ(α) is simple,

   φ(α) = (1/4)(1 - α/2)^2 + (1/8)(1 - α/4)^2,

so the minimum is easily calculated at α = 20/9.
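A short numerical check of the exact step length (a sketch assuming NumPy is available):

```python
import numpy as np

Q = np.diag([0.5, 0.25])              # Hessian of f(x) = x1^2/4 + x2^2/8
x0 = np.array([1.0, 1.0])
g0 = Q @ x0                           # gradient at x0 (f is quadratic, no linear term)
p0 = -g0
alpha = -(g0 @ p0) / (p0 @ Q @ p0)    # formula (6)
print(alpha)                          # expected 20/9 ~ 2.2222
print(x0 + alpha * p0)                # expected (-1/9, 4/9)
```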

4. Consider an inequality-constrained problem with two constraints:

   min f(x)
   s.t. g_1(x) = x_1^2 + x_2^2 + x_3^2 - 3 >= 0
        g_2(x) = 2x_1 - 4x_2 + x_3^2 + 1 >= 0.

a) State the necessary optimality conditions for this kind of problem, without using the explicit expressions for g_1 and g_2. [0.5 pt]

Consider now the points x_a = (1, 1, 1)^T and x_b = (3, 2, 1)^T.

b) Which of them (if any) is feasible? [0.5 pt]
c) Which of them (if any) is regular? [1 pt]
d) Compute a null-space matrix Z(x) (as stated in 4a) for each point. [1.5 pt]
e) What is the range of admissible values for the Lagrange multipliers if we want the necessary conditions to be fulfilled at x_a? And at x_b? Can degeneracy happen in any of these cases? [1.5 pt]

Solution

a) The necessary optimality conditions for inequality-constrained problems (the KKT conditions) are stated on p. 506 of the book by Griva, Nash and Sofer (2009).

   Theorem (Necessary Conditions, Inequality Constraints). Let x* be a local minimum point of f subject to the constraints g(x) >= 0. Let the columns of Z(x*) form a basis for the null space of the Jacobian of the active constraints at x*. If x* is a regular point for the constraints, then there exists a vector of Lagrange multipliers λ* such that

      ∇_x L(x*; λ*) = 0, or equivalently Z(x*)^T ∇f(x*) = 0,
      λ* >= 0,
      (λ*)^T g(x*) = 0,

   and Z(x*)^T ∇²_xx L(x*; λ*) Z(x*) is positive semidefinite.

b) x_a is feasible, since it satisfies both constraints:

      g_1(x_a) = 0 (active),    g_2(x_a) = 0 (active).

   x_b is also feasible, since it also satisfies both constraints:

      g_1(x_b) = 11 > 0 (inactive),    g_2(x_b) = 0 (active).

c) To check for regularity, we need the Jacobian matrix of the active constraints only. The general expression of the Jacobian is

      ∇g(x)^T = [2x_1  2x_2  2x_3 ;  2  -4  2x_3].

At x_a both constraints are active, so the Jacobian evaluated at x_a is

      ∇g(x_a)^T = [2  2  2 ;  2  -4  2].

The columns of ∇g(x_a) (i.e. the rows of ∇g(x_a)^T) are linearly independent, so x_a is a regular point. Regarding x_b, only the second constraint is active, so we only need to check the corresponding column of the Jacobian, i.e.

      ∇g_2(x_b)^T = (2, -4, 2),

which is a nonzero vector; thus x_b is also a regular point.

d) Z is a basis for the null space of the Jacobian of the active constraints at x. If we use the variable reduction method, the matrices B and N for ∇g(x_a) can be taken as

      B = [2  2 ;  2  -4],   N = [2 ; 2],   B^{-1} = (1/12) [4  2 ;  2  -2],

      Z(x_a) = [ -B^{-1} N ;  I ] = (-1, 0, 1)^T.

Regarding ∇g_2(x_b), the matrices B, N and Z are

      B = (2),   N = (-4, 2),   B^{-1} = 1/2,

      Z(x_b) = [ -B^{-1} N ;  I ] = [2  -1 ;  1  0 ;  0  1].

e) As the two constraints are active at x_a, the Lagrange multipliers must be nonnegative (λ_1, λ_2 >= 0). Degeneracy happens if λ_1 and/or λ_2 are equal to zero. For x_b only the second constraint is active, so it must hold that λ_1 = 0 and λ_2 >= 0. Degeneracy arises if λ_2 is equal to zero.
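The feasibility, activity and null-space claims above are easy to verify numerically. A small sketch, assuming NumPy is available and using the points as reconstructed above:

```python
import numpy as np

def g(x):
    return np.array([x[0]**2 + x[1]**2 + x[2]**2 - 3,
                     2*x[0] - 4*x[1] + x[2]**2 + 1])

def jac(x):
    return np.array([[2*x[0], 2*x[1], 2*x[2]],
                     [2.0,   -4.0,    2*x[2]]])

xa = np.array([1.0, 1.0, 1.0])
xb = np.array([3.0, 2.0, 1.0])
print(g(xa), g(xb))                                      # activity: [0, 0] and [11, 0]

Za = np.array([[-1.0], [0.0], [1.0]])                    # null-space basis at xa (both active)
Zb = np.array([[2.0, -1.0], [1.0, 0.0], [0.0, 1.0]])     # null-space basis at xb (only g2 active)
print(jac(xa) @ Za)                                      # both rows map to zero
print(jac(xb)[1] @ Zb)                                   # the active row maps to zero
```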

5. Solve the following problem in two ways analytically:

   min f(x) = x_1^2 + x_1 x_2 + x_2^2 - 2x_2
   s.t. x_1 + x_2 = 2.

a) With the necessary conditions.

   i. What is x*? [0.5 pt]
   ii. What is λ*? [0.5 pt]
   iii. Is x* a strict local minimizer? [0.5 pt]
   iv. Is x* also a global minimizer? [0.5 pt]

   Note: solving the problem by simply expressing one variable in terms of the other from the constraint, plugging it into the objective function and minimizing this (now one-dimensional) function will NOT be evaluated.

b) Either with a logarithmic barrier function or with a quadratic penalty function. Motivate your choice.

   i. What is x(µ) (or x(ρ), depending on your choice)? [0.5 pt]
   ii. What is λ(µ) (or λ(ρ), depending on your choice)? [0.5 pt]
   iii. What is x*? [0.5 pt]
   iv. What is λ*? [0.5 pt]

c) Compute the Hessian matrix B of the logarithmic barrier function for µ = 10^{-4} (or of the quadratic penalty function for ρ = 10^4, depending on your choice). What is the condition number of B? What is B^{-1}? [2 pt]

   Help: for a 2x2 nonsingular symmetric matrix Q the condition number is cond(Q) = |λ_max| / |λ_min|, where λ_min, λ_max are, respectively, the smallest and largest (in moduli) eigenvalues of Q.

Solution

This is a linear equality constrained problem with one single constraint. The gradient and the Hessian of the objective function are

   ∇f(x) = (2x_1 + x_2, x_1 + 2x_2 - 2)^T,   ∇²f(x) = [2 1; 1 2].

The constraint matrix is A = (1  1), so we can choose (from the variable reduction method) a basis matrix for the null space of the constraint,

   Z = (-1, 1)^T.

a) If we now impose the first-order optimality conditions, we get

   ∇f(x*) = A^T λ*:   2x_1 + x_2 = λ,   x_1 + 2x_2 - 2 = λ,

together with the constraint x_1 + x_2 - 2 = 0. We solve the gradient condition for x_1 and x_2 in terms of λ, and plug x_1(λ) and x_2(λ) into the constraint to obtain the value λ* = 2 which, in turn, yields x_1 = 0 and x_2 = 2.
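The three linear conditions just derived (two gradient equations plus the constraint) can also be solved as a single linear system. A small sketch, assuming NumPy is available:

```python
import numpy as np

# grad f(x) = A^T * lambda and x1 + x2 = 2, written as one linear system in (x1, x2, lambda):
#   2*x1 +   x2 - lambda = 0
#     x1 + 2*x2 - lambda = 2
#     x1 +   x2          = 2
K = np.array([[2.0, 1.0, -1.0],
              [1.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
rhs = np.array([0.0, 2.0, 2.0])
print(np.linalg.solve(K, rhs))   # expected (x1, x2, lambda) = (0, 2, 2)
```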

Hence:

   i. x* = (0, 2)^T.
   ii. λ* = 2.
   iii. Yes, x* is a strict local minimizer, since the reduced Hessian at the solution is

        Z^T ∇²f(x*) Z = (-1  1) [2 1; 1 2] (-1, 1)^T = 2 > 0,

   satisfying therefore the second-order sufficient condition.
   iv. x* is also a global minimizer, since the objective function is convex and the feasible set is convex.

b) We solve the problem using a quadratic penalty function, because the problem has only equality constraints. With

   ψ(x) = (1/2) Σ_{i=1}^{m} g_i(x)^2,

the original constrained problem is transformed into the unconstrained problem

   min π_ρ(x) = f(x) + ρ ψ(x) = x_1^2 + x_1 x_2 + x_2^2 - 2x_2 + (ρ/2)(x_1 + x_2 - 2)^2.

We now impose the first-order optimality conditions on π_ρ(x):

   ∂π_ρ/∂x_1 = 2x_1 + x_2 + ρ(x_1 + x_2 - 2) = 0,
   ∂π_ρ/∂x_2 = x_1 + 2x_2 - 2 + ρ(x_1 + x_2 - 2) = 0.

Subtracting both equations, we get x_1 = x_2 - 2, and plugging this back into either of the two equations we obtain:

   i. The solution of the unconstrained problem, expressed in terms of ρ, is

      x(ρ) = (x_1(ρ), x_2(ρ)) = ( -2/(2ρ+3), (4ρ+4)/(2ρ+3) ).

   ii. The estimate of the Lagrange multiplier for the constraint is

      λ(ρ) = -ρ g(x(ρ)) = -ρ (x_1 + x_2 - 2) = -ρ · (-4/(2ρ+3)) = 4ρ/(2ρ+3).

   iii. In the limit when ρ -> ∞, x* = (0, 2)^T.
   iv. In the limit when ρ -> ∞, λ* = 2.
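The penalty trajectory x(ρ) and the multiplier estimate λ(ρ) can be traced numerically; since ∇π_ρ is linear in x, each penalty minimization is just a 2x2 linear solve. A small sketch, assuming NumPy is available:

```python
import numpy as np

def x_of_rho(rho):
    # grad pi_rho(x) = 0 is linear in x, so the penalty minimizer is a 2x2 solve
    H = np.array([[2.0 + rho, 1.0 + rho],
                  [1.0 + rho, 2.0 + rho]])
    b = np.array([2.0 * rho, 2.0 + 2.0 * rho])
    return np.linalg.solve(H, b)

for rho in [1.0, 10.0, 100.0, 1e4]:
    x = x_of_rho(rho)
    lam = -rho * (x[0] + x[1] - 2.0)      # multiplier estimate lambda(rho)
    print(rho, x, lam)                     # x -> (0, 2) and lambda -> 2 as rho grows
```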

c) We now need to compute the Hessian matrix of the quadratic penalty function π_ρ(x), obtaining

   B = ∇²_x π_ρ(x) = [2+ρ  1+ρ ;  1+ρ  2+ρ],

which for ρ = 10^4 is [10002  10001 ; 10001  10002]. The eigenvalues of this matrix are σ = {1, 2ρ+3} = {1, 20003}, so the condition number is

   cond(B) = 2ρ + 3 = 20003.

Finally, the inverse of B is

   B^{-1} = [ (ρ+2)/(2ρ+3)   -(ρ+1)/(2ρ+3) ;  -(ρ+1)/(2ρ+3)   (ρ+2)/(2ρ+3) ],

which for ρ = 10^4 is approximately [0.50002  -0.49998 ; -0.49998  0.50002] and is very close to being singular.

The graphical representation of the problem is given below.
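The ill-conditioning for ρ = 10^4 can be confirmed directly (a sketch assuming NumPy is available):

```python
import numpy as np

rho = 1e4
B = np.array([[2.0 + rho, 1.0 + rho],
              [1.0 + rho, 2.0 + rho]])
print(np.linalg.eigvalsh(B))     # expected {1, 2*rho + 3} = {1, 20003}
print(np.linalg.cond(B))         # expected 20003
print(np.linalg.inv(B))          # entries close to +/-0.5; the inverse is nearly singular
```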

6. A cardboard box for packing quantities of small foam balls is to be manufactured as shown in the figure. The top, bottom, and front faces must be of double weight (i.e., two pieces of cardboard). We want to find the dimensions of such a box that maximize the volume for a given amount of cardboard, equal to 72 m^2.

   (figure: box with edge lengths x_1, x_2, x_3; the Top, Front and Bottom faces are marked)

a) Write the optimization problem. State clearly all the constraints (if any). [0.5 pt]

b) By geometric reasoning, the problem is guaranteed to have a solution x* = (x_1*, x_2*, x_3*) such that x_1*, x_2*, x_3* are strictly positive. Does this fact change your formulation of the optimization problem? Motivate. [0.5 pt]

c) Through a suitable change of variables, it is possible to transform the problem in (6b) into a linear equality constrained problem. Formulate one such change of variables and the corresponding optimization problem. State again clearly all the constraints (if any), making use of the assumption in (6b). Hint: for f(x) > 0, minimizing √(f(x)) is equivalent to minimizing f(x). [1 pt]

d) State the first-order necessary conditions for the problem in (6c). [1 pt]

e) Find x* = (x_1*, x_2*, x_3*). [3 pt]

f) Verify the second-order sufficient condition for x*. [1 pt]

Solution

a) The problem can be formulated as

   max f(x) = x_1 x_2 x_3
   s.t. 4 x_1 x_2 + 3 x_1 x_3 + 2 x_2 x_3 = 72                       (7)
        x_1, x_2, x_3 >= 0,

because we need double weight on the front face (which has an area equal to x_1 x_3) and on the top and bottom faces, which each have an area of x_1 x_2.

b) As we know that the constraints x_1, x_2, x_3 >= 0 will not be active at the solution, we can remove them, and we can thus express problem (7) as

   max f(x) = x_1 x_2 x_3
   s.t. 4 x_1 x_2 + 3 x_1 x_3 + 2 x_2 x_3 = 72.                      (8)

Problem (8) can be equivalently expressed as a minimization problem:

   min f(x) = -x_1 x_2 x_3
   s.t. 4 x_1 x_2 + 3 x_1 x_3 + 2 x_2 x_3 - 72 = 0.                  (9)

c) The gradient and the Hessian for problem (9) are

   ∇f(x) = -(x_2 x_3, x_1 x_3, x_1 x_2)^T,
   ∇²f(x) = -[0 x_3 x_2 ; x_3 0 x_1 ; x_2 x_1 0].                    (10)

The nonlinear constraint is

   g(x) = 4 x_1 x_2 + 3 x_1 x_3 + 2 x_2 x_3 - 72 = 0,                (11)

with Jacobian

   ∇g(x) = (4x_2 + 3x_3, 4x_1 + 2x_3, 3x_1 + 2x_2)^T.                (12)

However, we can write problem (9) in a different (and simpler) way by making use of the following change of variables:

   y_1 = x_1 x_2,   y_2 = x_1 x_3,   y_3 = x_2 x_3.                  (13)

Of course, the point y* = (y_1*, y_2*, y_3*) corresponding to the original solution x* = (x_1*, x_2*, x_3*) will also be guaranteed to be strictly positive at the solution. Hence, problem (9) now reads

   min f(y) = -√(y_1 y_2 y_3)
   s.t. 4y_1 + 3y_2 + 2y_3 - 72 = 0.

Since the square root is a monotonically increasing function, minimizing -√(y_1 y_2 y_3) is equivalent to minimizing -y_1 y_2 y_3, so we finally face the following linear equality constrained optimization problem:

   min f(y) = -y_1 y_2 y_3
   s.t. 4y_1 + 3y_2 + 2y_3 - 72 = 0.                                 (14)

The gradient and the Hessian of the objective function in problem (14) are

   ∇f(y) = -(y_2 y_3, y_1 y_3, y_1 y_2)^T,
   ∇²f(y) = -[0 y_3 y_2 ; y_3 0 y_1 ; y_2 y_1 0].

We have only one linear equality constraint,

   g(y) = 4y_1 + 3y_2 + 2y_3 - 72 = 0,                               (15)

so the constraint matrix is A = (4  3  2), and a choice for Z (computed through the variable reduction method) would be

   Z = [ -3/4  -1/2 ;  1  0 ;  0  1 ].

d) We can now move to the first-order optimality condition,

   ∇f(y*) = A^T λ*:   -(y_2 y_3, y_1 y_3, y_1 y_2)^T = (4, 3, 2)^T λ,          (16)

which has to be solved together with the feasibility requirement (15) in order to obtain the stationary points of problem (14).

e) If we solve each equation of the gradient condition (16) for λ we obtain

   λ = -y_2 y_3 / 4,   λ = -y_1 y_3 / 3,   λ = -y_1 y_2 / 2.                   (17)

If we compare the right-hand sides of the first and second relationships in (17), this yields

   y_2 y_3 / 4 = y_1 y_3 / 3   =>   y_2 = (4/3) y_1                            (18)

(here we can divide by y_3 as y_1, y_2, y_3 are guaranteed to be strictly positive). Similarly, if we compare the right-hand sides of the first and third relationships in (17), we get

   y_2 y_3 / 4 = y_1 y_2 / 2   =>   y_3 = 2 y_1.                               (19)

We can now plug (18) and (19) into the constraint (15) and solve for y_1:

   4y_1 + 3 (4y_1/3) + 2 (2y_1) - 72 = 0   =>   12 y_1 = 72   =>   y_1 = 6.    (20)

Replacing y_1 into (17), (18) and (19) yields y_2 = 8 and y_3 = 12, so we obtain the solution y* = (6, 8, 12), λ* = -24. The solution in terms of the original variables is obtained from the original relationships (13),

   x_1 x_2 = 6,   x_1 x_3 = 8,   x_2 x_3 = 12,

dividing the second and the third equations (to solve for x_2 in terms of x_1: x_2 = (3/2) x_1), and the first and the third equations (to solve for x_3 in terms of x_1: x_3 = 2 x_1), and plugging the result into the original constraint (11):

   4 x_1 (3/2 x_1) + 3 x_1 (2 x_1) + 2 (3/2 x_1)(2 x_1) - 72 = 0   =>   x_1^2 = 4   =>   x_1 = 2,

and, from here, x_2 = 3 and x_3 = 4, so the solution is x* = (2, 3, 4). We now have to check that this point is actually a stationary point of the original problem (9). To do so, we check the first-order optimality condition for problem (9), and we observe that the gradient ∇f(x*) and the Jacobian ∇g(x*) are parallel at the solution:

   ∇f(x*) = ∇g(x*) λ*:   -(12, 8, 6)^T = (24, 16, 12)^T λ,

obtaining a Lagrange multiplier λ* = -1/2.
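A quick numerical check of the recovered box dimensions (a sketch assuming NumPy is available):

```python
import numpy as np

x = np.array([2.0, 3.0, 4.0])
print(x[0] * x[1] * x[2])                                   # volume: 24
print(4*x[0]*x[1] + 3*x[0]*x[2] + 2*x[1]*x[2])              # cardboard used: 72

# Stationarity of (9): grad f(x*) must be parallel to grad g(x*).
grad_f = -np.array([x[1]*x[2], x[0]*x[2], x[0]*x[1]])       # (-12, -8, -6)
grad_g = np.array([4*x[1] + 3*x[2], 4*x[0] + 2*x[2], 3*x[0] + 2*x[1]])   # (24, 16, 12)
print(grad_f / grad_g)                                      # constant ratio -0.5 = lambda*
```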

f) To check whether x* = (2, 3, 4) is actually a strict local minimizer, we now move to the second-order sufficient optimality condition for the original problem (9), which requires the reduced Hessian Z^T ∇²f(x*) Z to be positive definite at x*. Here, Z is a null-space matrix of the Jacobian ∇g(x*) evaluated at the solution, i.e.

   ∇g(x*) = (24, 16, 12)^T,   Z = [ -2/3  -1/2 ;  1  0 ;  0  1 ].

With this, it is straightforward to check that the reduced Hessian

   Z^T ∇²f(x*) Z = [ 16/3  2 ;  2  3 ]

is positive definite, since its principal minors are 16/3 > 0 and det = (16/3)·3 - 2·2 = 12 > 0, so the point x* = (2, 3, 4) is a strict local minimizer of problem (9) and, in consequence, a strict local maximizer of problem (7).

Good Luck! Javier & Markus
