Lagrange multipliers. Portfolio optimization. The Lagrange multipliers method for finding constrained extrema of multivariable functions.


Chapter 9

Lagrange multipliers. Portfolio optimization.

The Lagrange multipliers method for finding constrained extrema of multivariable functions.

9.1 Lagrange multipliers

Optimization problems often require finding extrema of multivariable functions subject to various constraints. One method for solving such problems is the method of Lagrange multipliers, as outlined below.

Let $U \subseteq \mathbb{R}^n$ be an open set, and let $f: U \to \mathbb{R}$ be a smooth function, e.g., infinitely many times differentiable. We want to find the extrema of $f(x)$ subject to $m$ constraints given by $g(x) = 0$, where $g: U \to \mathbb{R}^m$ is a smooth function, i.e.,

Find $x_0 \in U$ such that
$$\max_{\substack{x \in U \\ g(x) = 0}} f(x) \;=\; f(x_0) \quad \text{or} \quad \min_{\substack{x \in U \\ g(x) = 0}} f(x) \;=\; f(x_0). \qquad (9.1)$$

Problem (9.1) is called a constrained optimization problem. For this problem to be well posed, a natural assumption is that the number of constraints is smaller than the number of degrees of freedom, i.e., $m < n$. A point $x_0 \in U$ satisfying (9.1) is called a constrained extremum point of the function $f(x)$ with respect to the constraint function $g(x)$.

To solve the constrained optimization problem (9.1), let $\lambda = (\lambda_i)_{i=1:m}$ be a column vector of the same size, $m$, as the number of constraints; $\lambda$ is called the Lagrange multipliers vector. The Lagrangian associated to problem (9.1) is the function $F: U \times \mathbb{R}^m \to \mathbb{R}$ given by
$$F(x, \lambda) \;=\; f(x) + \lambda^t g(x). \qquad (9.2)$$

If $g(x) = (g_1(x), \ldots, g_m(x))$, then $F(x, \lambda)$ can be written explicitly as
$$F(x, \lambda) \;=\; f(x) + \sum_{i=1}^m \lambda_i\, g_i(x).$$

Example: For a better understanding of the Lagrange multipliers method, in parallel with presenting the general theory, we will solve the following constrained optimization problem:

Find the maximum and minimum values of the function $f(x_1, x_2, x_3) = 4x_2 - 2x_3$ subject to the constraints $2x_1 = x_2 + x_3$ and $x_1^2 + x_2^2 = 13$.

We reformulate the problem by introducing the functions $f: \mathbb{R}^3 \to \mathbb{R}$ and $g: \mathbb{R}^3 \to \mathbb{R}^2$ given by
$$f(x) \;=\; 4x_2 - 2x_3; \qquad g(x) \;=\; \begin{pmatrix} 2x_1 - x_2 - x_3 \\ x_1^2 + x_2^2 - 13 \end{pmatrix}, \qquad (9.3)$$
where $x = (x_1, x_2, x_3)$. We want to find $x_0 \in \mathbb{R}^3$ such that
$$\max_{\substack{x \in \mathbb{R}^3 \\ g(x) = 0}} f(x) \;=\; f(x_0) \quad \text{or} \quad \min_{\substack{x \in \mathbb{R}^3 \\ g(x) = 0}} f(x) \;=\; f(x_0).$$

Let $\lambda = \begin{pmatrix} \lambda_1 \\ \lambda_2 \end{pmatrix}$ be the Lagrange multiplier. The Lagrangian associated to this problem is
$$F(x, \lambda) \;=\; f(x) + \lambda_1 g_1(x) + \lambda_2 g_2(x) \;=\; 4x_2 - 2x_3 + \lambda_1 (2x_1 - x_2 - x_3) + \lambda_2 (x_1^2 + x_2^2 - 13). \qquad (9.4)$$

In the Lagrange multipliers method, the constrained extremum point $x_0$ is found by identifying the critical points of the Lagrangian $F(x, \lambda)$. For the method to work, the following necessary condition must be satisfied:

The gradient $\nabla g(x)$ has full rank at any point $x$ where the constraint $g(x) = 0$ is satisfied, i.e.,
$$\operatorname{rank}(\nabla g(x)) \;=\; m, \quad \forall\, x \in U \text{ such that } g(x) = 0. \qquad (9.5)$$

Example (continued): In order to use the Lagrange multipliers method for our example, we first check that $\operatorname{rank}(\nabla g(x)) = 2$ for any $x \in \mathbb{R}^3$ such that $g(x) = 0$. Recall that the function $g(x)$ is given by (9.3). Then,
$$\nabla g(x) \;=\; \begin{pmatrix} 2 & -1 & -1 \\ 2x_1 & 2x_2 & 0 \end{pmatrix}.$$

Note that $\operatorname{rank}(\nabla g(x)) = 1$ if and only if $x_1 = x_2 = 0$. Also, $g(x) = 0$ if and only if $2x_1 - x_2 - x_3 = 0$ and $x_1^2 + x_2^2 = 13$. However, if $x_1^2 + x_2^2 = 13$, then it is not possible to have $x_1 = x_2 = 0$. We conclude that, if $g(x) = 0$, then $\operatorname{rank}(\nabla g(x)) = 2$.

The gradient¹ of $F(x, \lambda)$ with respect to both $x$ and $\lambda$ will be denoted by $\nabla_{(x,\lambda)} F(x, \lambda)$ and is the following row vector:
$$\nabla_{(x,\lambda)} F(x, \lambda) \;=\; \big( \nabla_x F(x, \lambda) \;\;\; \nabla_\lambda F(x, \lambda) \big); \qquad (9.6)$$
cf. (1.38). It is easy to see that
$$\frac{\partial F}{\partial x_j}(x, \lambda) \;=\; \frac{\partial f}{\partial x_j}(x) + \sum_{i=1}^m \lambda_i \frac{\partial g_i}{\partial x_j}(x), \quad j = 1:n; \qquad (9.7)$$
$$\frac{\partial F}{\partial \lambda_i}(x, \lambda) \;=\; g_i(x), \quad i = 1:m. \qquad (9.8)$$

Denote by $\nabla f(x)$ and $\nabla g(x)$ the gradients of $f: U \to \mathbb{R}$ and $g: U \to \mathbb{R}^m$, i.e.,
$$\nabla f(x) \;=\; \left( \frac{\partial f}{\partial x_1}(x) \;\; \cdots \;\; \frac{\partial f}{\partial x_n}(x) \right); \qquad (9.9)$$
$$\nabla g(x) \;=\; \begin{pmatrix} \frac{\partial g_1}{\partial x_1}(x) & \frac{\partial g_1}{\partial x_2}(x) & \cdots & \frac{\partial g_1}{\partial x_n}(x) \\ \frac{\partial g_2}{\partial x_1}(x) & \frac{\partial g_2}{\partial x_2}(x) & \cdots & \frac{\partial g_2}{\partial x_n}(x) \\ \vdots & \vdots & & \vdots \\ \frac{\partial g_m}{\partial x_1}(x) & \frac{\partial g_m}{\partial x_2}(x) & \cdots & \frac{\partial g_m}{\partial x_n}(x) \end{pmatrix}; \qquad (9.10)$$
cf. (1.38) and (1.40), respectively. Then, from (9.7)-(9.8), it follows that
$$\nabla_x F(x, \lambda) \;=\; \left( \frac{\partial F}{\partial x_1}(x, \lambda) \;\; \cdots \;\; \frac{\partial F}{\partial x_n}(x, \lambda) \right) \;=\; \nabla f(x) + \lambda^t\, \nabla g(x); \qquad (9.11)$$
$$\nabla_\lambda F(x, \lambda) \;=\; \left( \frac{\partial F}{\partial \lambda_1}(x, \lambda) \;\; \cdots \;\; \frac{\partial F}{\partial \lambda_m}(x, \lambda) \right) \;=\; (g(x))^t. \qquad (9.12)$$
From (9.6), (9.11), and (9.12), we conclude that
$$\nabla_{(x,\lambda)} F(x, \lambda) \;=\; \big( \nabla f(x) + \lambda^t\, \nabla g(x) \;\;\; (g(x))^t \big). \qquad (9.13)$$

The following theorem gives necessary conditions for a point $x_0 \in U$ to be a constrained extremum point for $f(x)$. Its proof involves the Inverse Function Theorem and is beyond the scope of this book.

¹ In this section, we use the notation $\nabla F$, instead of $DF$, for the gradient of $F$.

Theorem 9.1. Assume that the constraint function $g(x)$ satisfies the condition (9.5). If $x_0 \in U$ is a constrained extremum point of $f(x)$ with respect to the constraint $g(x) = 0$, then there exists a Lagrange multiplier $\lambda_0 \in \mathbb{R}^m$ such that the point $(x_0, \lambda_0)$ is a critical point for the Lagrangian function $F(x, \lambda)$, i.e.,
$$\nabla_{(x,\lambda)} F(x_0, \lambda_0) \;=\; 0. \qquad (9.14)$$

We note that $\nabla_{(x,\lambda)} F(x, \lambda)$ is a function from $\mathbb{R}^{m+n}$ into $\mathbb{R}^{m+n}$. Thus, solving (9.14) to find the critical points of $F(x, \lambda)$ requires, in general, using the $N$-dimensional Newton's method for solving nonlinear equations; see section 5.2.1 for details. Interestingly enough, for some financial applications such as finding minimum variance portfolios, problem (9.14) is a linear system which can be solved without using Newton's method; see section 9.3 for details.

Example (continued): Since we already checked that the condition (9.5) is satisfied for this example, we proceed to find the critical points of the Lagrangian $F(x, \lambda)$. Using formula (9.4) for $F(x, \lambda)$, it is easy to see that
$$\nabla_{(x,\lambda)} F(x, \lambda) \;=\; \begin{pmatrix} 2\lambda_1 + 2\lambda_2 x_1 \\ 4 - \lambda_1 + 2\lambda_2 x_2 \\ -2 - \lambda_1 \\ 2x_1 - x_2 - x_3 \\ x_1^2 + x_2^2 - 13 \end{pmatrix}^{\!t}.$$
Let $x_0 = (x_{0,1}, x_{0,2}, x_{0,3})$ and $\lambda_0 = (\lambda_{0,1}, \lambda_{0,2})$. Then, solving $\nabla_{(x,\lambda)} F(x_0, \lambda_0) = 0$ is equivalent to solving the following system:
$$\begin{aligned} 2\lambda_{0,1} + 2\lambda_{0,2}\, x_{0,1} &= 0 \\ 4 - \lambda_{0,1} + 2\lambda_{0,2}\, x_{0,2} &= 0 \\ -2 - \lambda_{0,1} &= 0 \\ 2x_{0,1} - x_{0,2} - x_{0,3} &= 0 \\ x_{0,1}^2 + x_{0,2}^2 - 13 &= 0 \end{aligned} \qquad (9.15)$$

From the third equation of (9.15), we find that $\lambda_{0,1} = -2$. Then, the system (9.15) can be written as
$$\lambda_{0,2}\, x_{0,1} \;=\; 2; \quad \lambda_{0,2}\, x_{0,2} \;=\; -3; \quad x_{0,3} \;=\; 2x_{0,1} - x_{0,2}; \quad x_{0,1}^2 + x_{0,2}^2 \;=\; 13. \qquad (9.16)$$
Since $\lambda_{0,2} \neq 0$, we find from (9.16) that
$$x_{0,1} \;=\; \frac{2}{\lambda_{0,2}}; \quad x_{0,2} \;=\; \frac{-3}{\lambda_{0,2}}; \quad x_{0,3} \;=\; \frac{7}{\lambda_{0,2}}; \quad x_{0,1}^2 + x_{0,2}^2 = 13 \;\Longleftrightarrow\; \frac{13}{\lambda_{0,2}^2} \;=\; 13.$$
Thus, $\lambda_{0,2}^2 = 1$ and the system (9.15) has two solutions, one corresponding to $\lambda_{0,2} = 1$, and another one corresponding to $\lambda_{0,2} = -1$, as follows:
$$x_{0,1} = 2; \;\; x_{0,2} = -3; \;\; x_{0,3} = 7; \;\; \lambda_{0,1} = -2; \;\; \lambda_{0,2} = 1 \qquad (9.17)$$

and
$$x_{0,1} = -2; \;\; x_{0,2} = 3; \;\; x_{0,3} = -7; \;\; \lambda_{0,1} = -2; \;\; \lambda_{0,2} = -1. \qquad (9.18)$$

From (9.17) and (9.18), we conclude that the Lagrangian $F(x, \lambda)$ has the following two critical points:
$$x_0 \;=\; \begin{pmatrix} 2 \\ -3 \\ 7 \end{pmatrix}; \quad \lambda_0 \;=\; \begin{pmatrix} -2 \\ 1 \end{pmatrix} \qquad (9.19)$$
and
$$x_0 \;=\; \begin{pmatrix} -2 \\ 3 \\ -7 \end{pmatrix}; \quad \lambda_0 \;=\; \begin{pmatrix} -2 \\ -1 \end{pmatrix}. \qquad (9.20)$$

Finding sufficient conditions for a critical point $(x_0, \lambda_0)$ of the Lagrangian $F(x, \lambda)$ to correspond to a constrained extremum point $x_0$ for $f(x)$ is somewhat more complicated (and rather rarely checked in practice). Consider the function $F_0: U \to \mathbb{R}$ given by
$$F_0(x) \;=\; F(x, \lambda_0) \;=\; f(x) + \lambda_0^t\, g(x).$$
Let $D^2 F_0(x_0)$ be the Hessian of $F_0(x)$ evaluated at the point $x_0$, i.e.,
$$D^2 F_0(x_0) \;=\; \begin{pmatrix} \frac{\partial^2 F_0}{\partial x_1^2}(x_0) & \frac{\partial^2 F_0}{\partial x_2 \partial x_1}(x_0) & \cdots & \frac{\partial^2 F_0}{\partial x_n \partial x_1}(x_0) \\ \frac{\partial^2 F_0}{\partial x_1 \partial x_2}(x_0) & \frac{\partial^2 F_0}{\partial x_2^2}(x_0) & \cdots & \frac{\partial^2 F_0}{\partial x_n \partial x_2}(x_0) \\ \vdots & \vdots & & \vdots \\ \frac{\partial^2 F_0}{\partial x_1 \partial x_n}(x_0) & \frac{\partial^2 F_0}{\partial x_2 \partial x_n}(x_0) & \cdots & \frac{\partial^2 F_0}{\partial x_n^2}(x_0) \end{pmatrix}. \qquad (9.21)$$
Note that $D^2 F_0(x_0)$ is an $n \times n$ matrix. Let $q(v)$ be the quadratic form associated to the matrix $D^2 F_0(x_0)$, i.e.,
$$q(v) \;=\; v^t\, D^2 F_0(x_0)\, v \;=\; \sum_{1 \le i,j \le n} \frac{\partial^2 F_0}{\partial x_i \partial x_j}(x_0)\, v_i v_j, \qquad (9.22)$$
where $v = (v_i)_{i=1:n}$. We restrict our attention to the vectors $v$ satisfying $\nabla g(x_0)\, v = 0$, i.e., to the vector space
$$V_0 \;=\; \{ v \in \mathbb{R}^n \;:\; \nabla g(x_0)\, v = 0 \}. \qquad (9.23)$$
Note that $\nabla g(x_0)$ is a matrix with $m$ rows and $n$ columns, where $m < n$; cf. (9.10). If the condition (9.5) is satisfied, it follows that $\operatorname{rank}(\nabla g(x_0)) = m$, i.e., the matrix $\nabla g(x_0)$ has $m$ linearly independent columns. Assume, without losing any generality, that the first $m$ columns of $\nabla g(x_0)$ are linearly independent.

By solving the linear system $\nabla g(x_0)\, v = 0$, we obtain that the entries $v_1, v_2, \ldots, v_m$ of the vector $v$ can be written as linear combinations of $v_{m+1}, v_{m+2}, \ldots, v_n$, the other $n - m$ entries of $v$. Let $v_{\mathrm{red}} = (v_{m+1}, \ldots, v_n)$. Then, by restricting $q(v)$ to the vector space $V_0$, we can write $q(v)$ as a quadratic form depending only on the entries of the vector $v_{\mathrm{red}}$, i.e.,
$$q(v) \;=\; q_{\mathrm{red}}(v_{\mathrm{red}}) \;=\; \sum_{m+1 \le i,j \le n} q_{\mathrm{red}}(i,j)\, v_i v_j, \quad \forall\, v \in V_0. \qquad (9.24)$$

Example (continued): We compute the reduced quadratic forms corresponding to the critical points of the Lagrangian function from our example. Recall that there are two such critical points, given by (9.19) and (9.20), respectively.

The first critical point of $F(x, \lambda)$ is $x_0 = (2, -3, 7)$ and $\lambda_0 = (-2, 1)$; cf. (9.19). The function $F_0(x) = f(x) + \lambda_0^t g(x)$ is equal to
$$F_0(x) \;=\; 4x_2 - 2x_3 + (-2)(2x_1 - x_2 - x_3) + 1 \cdot (x_1^2 + x_2^2 - 13) \;=\; x_1^2 + x_2^2 - 4x_1 + 6x_2 - 13. \qquad (9.25)$$
Recall that the gradient of the constraint function $g(x)$ is
$$\nabla g(x) \;=\; \begin{pmatrix} 2 & -1 & -1 \\ 2x_1 & 2x_2 & 0 \end{pmatrix}. \qquad (9.26)$$
From (9.25) and (9.26), we find that
$$D^2 F_0(x_0) \;=\; \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad \nabla g(x_0) \;=\; \begin{pmatrix} 2 & -1 & -1 \\ 4 & -6 & 0 \end{pmatrix}.$$
If $v = (v_i)_{i=1:3}$, then
$$q(v) \;=\; v^t\, D^2 F_0(x_0)\, v \;=\; 2v_1^2 + 2v_2^2,$$
and the condition $\nabla g(x_0)\, v = 0$ is equivalent to
$$\begin{cases} 2v_1 - v_2 - v_3 = 0 \\ 4v_1 - 6v_2 = 0 \end{cases} \;\Longleftrightarrow\; \begin{cases} v_3 = 2v_1 - v_2 \\ v_1 = \frac{3}{2} v_2 \end{cases} \;\Longleftrightarrow\; \begin{cases} v_3 = 2v_2 \\ v_1 = \frac{3}{2} v_2. \end{cases}$$
Let $v_{\mathrm{red}} = v_2$. The reduced quadratic form $q_{\mathrm{red}}(v_{\mathrm{red}})$ is
$$q_{\mathrm{red}}(v_{\mathrm{red}}) \;=\; 2v_1^2 + 2v_2^2 \;=\; \frac{13}{2}\, v_2^2. \qquad (9.27)$$
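The computation of (9.25)-(9.27) can also be reproduced symbolically. The sketch below is not part of the original text; it is a minimal check assuming the SymPy package is available. It builds $F_0$, its Hessian, and the reduced quadratic form for the first critical point by eliminating $v_1$ and $v_3$ from $\nabla g(x_0)\, v = 0$, exactly as in the elimination above.

```python
import sympy as sp

x1, x2, x3, v2 = sp.symbols('x1 x2 x3 v2')
f = 4*x2 - 2*x3
g = sp.Matrix([2*x1 - x2 - x3, x1**2 + x2**2 - 13])
lam0 = sp.Matrix([-2, 1])                      # multipliers at the first critical point (9.19)

F0 = f + (lam0.T * g)[0]                       # F0(x) = f(x) + lam0^t g(x), cf. (9.25)
H = sp.hessian(F0, (x1, x2, x3))               # D^2 F0, cf. (9.21)

x0 = {x1: 2, x2: -3, x3: 7}
G = g.jacobian([x1, x2, x3]).subs(x0)          # grad g(x0), cf. (9.26)
H0 = H.subs(x0)

# Solve G v = 0 for v1, v3 in terms of v2 and substitute into q(v) = v^t H0 v.
v1s, v3s = sp.symbols('v1 v3')
sol = sp.solve(list(G * sp.Matrix([v1s, v2, v3s])), [v1s, v3s])
v = sp.Matrix([sol[v1s], v2, sol[v3s]])
print(sp.expand((v.T * H0 * v)[0]))            # prints 13*v2**2/2, cf. (9.27)
```

Running the same computation at the second critical point (9.20) should return the negative of this expression.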

The second critical point of $F(x, \lambda)$ is $x_0 = (-2, 3, -7)$ and $\lambda_0 = (-2, -1)$; cf. (9.20). The function $F_0(x) = f(x) + \lambda_0^t g(x)$ is equal to
$$F_0(x) \;=\; -x_1^2 - x_2^2 - 4x_1 + 6x_2 + 13. \qquad (9.28)$$
From (9.28) and (9.26), we find that
$$D^2 F_0(x_0) \;=\; \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad \nabla g(x_0) \;=\; \begin{pmatrix} 2 & -1 & -1 \\ -4 & 6 & 0 \end{pmatrix}.$$
If $v = (v_i)_{i=1:3}$, then
$$q(v) \;=\; v^t\, D^2 F_0(x_0)\, v \;=\; -2v_1^2 - 2v_2^2,$$
and the condition $\nabla g(x_0)\, v = 0$ is equivalent to
$$\begin{cases} 2v_1 - v_2 - v_3 = 0 \\ -4v_1 + 6v_2 = 0 \end{cases} \;\Longleftrightarrow\; \begin{cases} v_3 = 2v_1 - v_2 \\ v_1 = \frac{3}{2} v_2 \end{cases} \;\Longleftrightarrow\; \begin{cases} v_3 = 2v_2 \\ v_1 = \frac{3}{2} v_2. \end{cases}$$
Let $v_{\mathrm{red}} = v_2$. The reduced quadratic form $q_{\mathrm{red}}(v_{\mathrm{red}})$ is
$$q_{\mathrm{red}}(v_{\mathrm{red}}) \;=\; -2v_1^2 - 2v_2^2 \;=\; -\frac{13}{2}\, v_2^2. \qquad (9.29)$$

Whether the point $x_0$ is a constrained extremum for $f(x)$ will depend on the nonzero quadratic form $q_{\mathrm{red}}$ being either positive semidefinite, i.e.,
$$q_{\mathrm{red}}(v_{\mathrm{red}}) \;\ge\; 0, \quad \forall\, v_{\mathrm{red}} \in \mathbb{R}^{n-m},$$
or negative semidefinite, i.e.,
$$q_{\mathrm{red}}(v_{\mathrm{red}}) \;\le\; 0, \quad \forall\, v_{\mathrm{red}} \in \mathbb{R}^{n-m}.$$

Theorem 9.2. Assume that the constraint function $g(x)$ satisfies condition (9.5). Let $x_0 \in U \subseteq \mathbb{R}^n$ and $\lambda_0 \in \mathbb{R}^m$ be such that the point $(x_0, \lambda_0)$ is a critical point for the Lagrangian function $F(x, \lambda) = f(x) + \lambda^t g(x)$.

If the reduced quadratic form (9.24) corresponding to the point $(x_0, \lambda_0)$ is positive semidefinite, then $x_0$ is a constrained minimum for $f(x)$ with respect to the constraint $g(x) = 0$.

If the reduced quadratic form (9.24) corresponding to the point $(x_0, \lambda_0)$ is negative semidefinite, then $x_0$ is a constrained maximum for $f(x)$ with respect to the constraint $g(x) = 0$.

If the reduced quadratic form (9.24) corresponding to $(x_0, \lambda_0)$ is nonzero, and it is neither positive semidefinite nor negative semidefinite, then $x_0$ is not a constrained extremum point for $f(x)$ with respect to the constraint $g(x) = 0$.
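Steps 3.1-3.2 and Theorem 9.2 can also be carried out numerically. The sketch below is not from the original text; it assumes NumPy and SciPy are available and classifies both critical points of the running example by restricting the Hessian $D^2 F_0(x_0)$ to a basis of the null space of $\nabla g(x_0)$.

```python
import numpy as np
from scipy.linalg import null_space

# Running example: f(x) = 4*x2 - 2*x3, g(x) = (2*x1 - x2 - x3, x1^2 + x2^2 - 13).
critical_points = [
    (np.array([ 2.0, -3.0,  7.0]), np.array([-2.0,  1.0])),   # cf. (9.19)
    (np.array([-2.0,  3.0, -7.0]), np.array([-2.0, -1.0])),   # cf. (9.20)
]

for x0, lam in critical_points:
    # Only lam[1]*(x1^2 + x2^2 - 13) contributes to the Hessian of F0; f and g1 are affine.
    H = np.diag([2.0 * lam[1], 2.0 * lam[1], 0.0])             # D^2 F0(x0), cf. (9.21)
    G = np.array([[2.0, -1.0, -1.0],
                  [2.0 * x0[0], 2.0 * x0[1], 0.0]])            # grad g(x0), cf. (9.26)
    N = null_space(G)                                          # columns span V0, cf. (9.23)
    eig = np.linalg.eigvalsh(N.T @ H @ N)                      # reduced form, cf. (9.24)
    print(x0, "min" if eig.min() > 0 else "max" if eig.max() < 0 else "inconclusive")
```

Only the signs of the eigenvalues matter here; the numerical value of the reduced form depends on the choice of null-space basis, but its sign does not.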

Example (continued): We use Theorem 9.2 to classify the critical points of the Lagrangian function corresponding to our example.

Recall from (9.27) that the reduced quadratic form corresponding to the critical point $x_0 = (2, -3, 7)$ and $\lambda_0 = (-2, 1)$ of $F(x, \lambda)$ is $q_{\mathrm{red}}(v_{\mathrm{red}}) = \frac{13}{2} v_2^2$, which is positive semidefinite. Using Theorem 9.2, we conclude that the point $(2, -3, 7)$ is a constrained minimum point for $f(x)$. By direct computation, we find that $f(2, -3, 7) = -26$.

Recall from (9.29) that the reduced quadratic form corresponding to the critical point $x_0 = (-2, 3, -7)$ and $\lambda_0 = (-2, -1)$ of $F(x, \lambda)$ is $q_{\mathrm{red}}(v_{\mathrm{red}}) = -\frac{13}{2} v_2^2$, which is negative semidefinite. Using Theorem 9.2, we conclude that the point $(-2, 3, -7)$ is a constrained maximum point for $f(x)$. By direct computation, we find that $f(-2, 3, -7) = 26$.

It is important to note that the solution to the constrained optimization problem presented in Theorem 9.2 is significantly simpler, and does not involve computing the reduced quadratic form (9.24), if the matrix $D^2 F_0(x_0)$ is either positive definite or negative definite.

Corollary 9.1. Assume that the constraint function $g(x)$ satisfies condition (9.5). Let $x_0 \in U \subseteq \mathbb{R}^n$ and $\lambda_0 \in \mathbb{R}^m$ be such that the point $(x_0, \lambda_0)$ is a critical point for the Lagrangian function $F(x, \lambda) = f(x) + \lambda^t g(x)$. Let $F_0(x) = f(x) + \lambda_0^t g(x)$, and let $D^2 F_0(x_0)$ be the Hessian of $F_0$ evaluated at the point $x_0$.

If the matrix $D^2 F_0(x_0)$ is positive definite, i.e., if all the eigenvalues of the matrix $D^2 F_0(x_0)$ are strictly greater than 0, then $x_0$ is a constrained minimum for $f(x)$ with respect to the constraint $g(x) = 0$.

If the matrix $D^2 F_0(x_0)$ is negative definite, i.e., if all the eigenvalues of the matrix $D^2 F_0(x_0)$ are strictly less than 0, then $x_0$ is a constrained maximum for $f(x)$ with respect to the constraint $g(x) = 0$.

The result of Corollary 9.1 will be used in sections 9.3 and 9.4 to find minimum variance portfolios and maximum return portfolios, respectively.

Summarizing, the steps required to solve a constrained optimization problem using Lagrange multipliers are:

Step 1: Check that $\operatorname{rank}(\nabla g(x)) = m$ for all $x$ such that $g(x) = 0$.

Step 2: Find $(x_0, \lambda_0) \in U \times \mathbb{R}^m$ such that $\nabla_{(x,\lambda)} F(x_0, \lambda_0) = 0$.

Step 3.1: Compute $q(v) = v^t D^2 F_0(x_0)\, v$, where $F_0(x) = f(x) + \lambda_0^t g(x)$.

Step 3.2: Compute $q_{\mathrm{red}}(v_{\mathrm{red}})$ by restricting $q(v)$ to the vectors $v$ satisfying the condition $\nabla g(x_0)\, v = 0$. Decide whether $q_{\mathrm{red}}(v_{\mathrm{red}})$ is positive semidefinite or negative semidefinite.

Step 4: Use Theorem 9.2 to decide whether $x_0$ is a constrained minimum point or a constrained maximum point.

Note: If the matrix $D^2 F_0(x_0)$ is either positive definite or negative definite, skip Step 3.2 and go from Step 3.1 to the following version of Step 4:

Step 4: Use Corollary 9.1 to decide whether $x_0$ is a constrained minimum point or a constrained maximum point.

Several examples of solving constrained optimization problems using Lagrange multipliers are given in section 9.1.1; the steps above are outlined for each example. Applications of the Lagrange multipliers method to portfolio optimization problems are presented in sections 9.3 and 9.4.

9.1.1 Examples

The first example below illustrates the fact that, if condition (9.5) is not satisfied, then Theorem 9.1 may not hold.

Example: Find the minimum value of $x_1^2 + x_1 + x_2^2$, subject to the constraint $(x_1 - x_2)^2 = 0$.

Answer: This problem can be solved without using Lagrange multipliers as follows: If $(x_1 - x_2)^2 = 0$, then $x_1 = x_2$. Then, the function to minimize becomes
$$x_1^2 + x_1 + x_2^2 \;=\; 2x_1^2 + x_1 \;=\; 2\left( x_1 + \frac{1}{4} \right)^2 - \frac{1}{8},$$
which achieves its minimum when $x_1 = -\frac{1}{4}$. We conclude that there exists a unique constrained minimum point $x_1 = x_2 = -\frac{1}{4}$, and that the corresponding minimum value is $-\frac{1}{8}$.

Although we solved the problem directly, we attempt to find an alternative solution using the Lagrange multipliers method. According to the framework outlined in section 9.1, we want to find the minimum of the function $f(x_1, x_2) = x_1^2 + x_1 + x_2^2$ for $(x_1, x_2) \in \mathbb{R}^2$, such that the constraint $g(x_1, x_2) = 0$ is satisfied, where $g(x_1, x_2) = (x_1 - x_2)^2$. By definition,
$$\nabla g(x) \;=\; \left( \frac{\partial g}{\partial x_1} \;\;\; \frac{\partial g}{\partial x_2} \right) \;=\; \big( 2(x_1 - x_2) \;\;\; -2(x_1 - x_2) \big).$$
Note that $\nabla g(x) = 0$ if and only if $x_1 = x_2$. Thus, $\nabla g(x) = (0 \;\; 0)$ (and therefore $\operatorname{rank}(\nabla g(x)) = 0$) for all $x$ such that $g(x) = 0$. We conclude that condition (9.5) is not satisfied at any point $x$ such that $g(x) = 0$.

Since the problem has one constraint, we only have one Lagrange multiplier, which we denote by $\lambda \in \mathbb{R}$. From (9.2), it follows that the Lagrangian is
$$F(x_1, x_2, \lambda) \;=\; f(x_1, x_2) + \lambda\, g(x_1, x_2) \;=\; x_1^2 + x_1 + x_2^2 + \lambda (x_1 - x_2)^2.$$
The gradient $\nabla_{(x,\lambda)} F(x, \lambda)$ is computed as in (9.13) and has the form
$$\nabla_{(x,\lambda)} F(x, \lambda) \;=\; \big( 2x_1 + 1 + 2\lambda(x_1 - x_2) \;\;\; 2x_2 - 2\lambda(x_1 - x_2) \;\;\; (x_1 - x_2)^2 \big).$$
Finding the critical points for $F(x, \lambda)$ requires solving $\nabla_{(x,\lambda)} F(x, \lambda) = 0$, which is equivalent to the following system of equations:
$$\begin{cases} 2x_1 + 1 + 2\lambda(x_1 - x_2) \;=\; 0; \\ 2x_2 - 2\lambda(x_1 - x_2) \;=\; 0; \\ (x_1 - x_2)^2 \;=\; 0. \end{cases}$$
This system does not have a solution: from the third equation, we obtain that $x_1 = x_2$. Then, from the second equation, we find that $x_2 = 0$, which implies that $x_1 = 0$. Substituting $x_1 = x_2 = 0$ into the first equation, we obtain $1 = 0$, which is a contradiction. In other words, the Lagrangian $F(x, \lambda)$ has no critical points. However, we showed before that the point $(x_1, x_2) = \left( -\frac{1}{4}, -\frac{1}{4} \right)$ is a constrained minimum point for $f(x)$ given the constraint $g(x) = 0$. The reason Theorem 9.1 does not apply in this case is that condition (9.5), which was required in order for Theorem 9.1 to hold, is not satisfied.

Example: Find the positive numbers $x_1$, $x_2$, $x_3$ such that $x_1 x_2 x_3 = 1$ and $x_1 x_2 + x_2 x_3 + x_3 x_1$ is minimized.

Answer: We first reformulate the problem as a constrained optimization problem. Let $U = (0, \infty)^3 \subset \mathbb{R}^3$ and let $x = (x_1, x_2, x_3) \in U$. The functions $f: U \to \mathbb{R}$ and $g: U \to \mathbb{R}$ are defined as
$$f(x) \;=\; x_1 x_2 + x_2 x_3 + x_3 x_1; \qquad g(x) \;=\; x_1 x_2 x_3 - 1.$$
We want to minimize $f(x)$ over the set $U$ subject to the constraint $g(x) = 0$.

Step 1: Check that $\operatorname{rank}(\nabla g(x)) = 1$ for any $x$ such that $g(x) = 0$.

Let $x = (x_1, x_2, x_3) \in U$. It is easy to see that
$$\nabla g(x) \;=\; \big( x_2 x_3 \;\;\; x_1 x_3 \;\;\; x_1 x_2 \big). \qquad (9.30)$$
Note that $\nabla g(x) \neq 0$, since $x_i > 0$, $i = 1:3$. Therefore, $\operatorname{rank}(\nabla g(x)) = 1$ for all $x \in U$, and condition (9.5) is satisfied.

Step 2: Find $(x_0, \lambda_0)$ such that $\nabla_{(x,\lambda)} F(x_0, \lambda_0) = 0$.

The Lagrangian associated to this problem is
$$F(x, \lambda) \;=\; x_1 x_2 + x_2 x_3 + x_3 x_1 + \lambda (x_1 x_2 x_3 - 1), \qquad (9.31)$$

where $\lambda \in \mathbb{R}$ is the Lagrange multiplier. Let $x_0 = (x_{0,1}, x_{0,2}, x_{0,3})$. From (9.31), we find that $\nabla_{(x,\lambda)} F(x_0, \lambda_0) = 0$ is equivalent to the following system:
$$\begin{cases} x_{0,2} + x_{0,3} + \lambda_0\, x_{0,2} x_{0,3} \;=\; 0; \\ x_{0,1} + x_{0,3} + \lambda_0\, x_{0,1} x_{0,3} \;=\; 0; \\ x_{0,1} + x_{0,2} + \lambda_0\, x_{0,1} x_{0,2} \;=\; 0; \\ x_{0,1} x_{0,2} x_{0,3} \;=\; 1. \end{cases}$$
By multiplying the first three equations by $x_{0,1}$, $x_{0,2}$, and $x_{0,3}$, respectively, and using the fact that $x_{0,1} x_{0,2} x_{0,3} = 1$, we obtain that
$$-\lambda_0 \;=\; x_{0,1} x_{0,2} + x_{0,1} x_{0,3} \;=\; x_{0,1} x_{0,2} + x_{0,2} x_{0,3} \;=\; x_{0,1} x_{0,3} + x_{0,2} x_{0,3}.$$
Since $x_{0,i} \neq 0$, $i = 1:3$, we find that $x_{0,1} = x_{0,2} = x_{0,3}$. Since $x_{0,1} x_{0,2} x_{0,3} = 1$, we conclude that $x_{0,1} = x_{0,2} = x_{0,3} = 1$ and $\lambda_0 = -2$.

Step 3.1: Compute $q(v) = v^t D^2 F_0(x_0)\, v$.

Since $\lambda_0 = -2$, we find that $F_0(x) = f(x) + \lambda_0\, g(x)$ is given by
$$F_0(x) \;=\; x_1 x_2 + x_2 x_3 + x_3 x_1 - 2 x_1 x_2 x_3 + 2.$$
The Hessian of $F_0(x)$ is
$$D^2 F_0(x_1, x_2, x_3) \;=\; \begin{pmatrix} 0 & 1 - 2x_3 & 1 - 2x_2 \\ 1 - 2x_3 & 0 & 1 - 2x_1 \\ 1 - 2x_2 & 1 - 2x_1 & 0 \end{pmatrix},$$
and therefore
$$D^2 F_0(1, 1, 1) \;=\; \begin{pmatrix} 0 & -1 & -1 \\ -1 & 0 & -1 \\ -1 & -1 & 0 \end{pmatrix}.$$
From (9.22), we find that the quadratic form $q(v)$ is
$$q(v) \;=\; v^t\, D^2 F_0(1, 1, 1)\, v \;=\; -2 v_1 v_2 - 2 v_2 v_3 - 2 v_1 v_3. \qquad (9.32)$$

Step 3.2: Compute $q_{\mathrm{red}}(v_{\mathrm{red}})$.

We first solve formally the equation $\nabla g(1,1,1)\, v = 0$, where $v = (v_i)_{i=1:3}$ is an arbitrary vector. From (9.30), we find that $\nabla g(1,1,1) = (1 \;\; 1 \;\; 1)$, and
$$\nabla g(1,1,1)\, v \;=\; v_1 + v_2 + v_3 \;=\; 0.$$
By solving for $v_1$ in terms of $v_2$ and $v_3$ we obtain that $v_1 = -v_2 - v_3$. Let $v_{\mathrm{red}} = \begin{pmatrix} v_2 \\ v_3 \end{pmatrix}$. We substitute $-v_2 - v_3$ for $v_1$ in (9.32), to obtain the reduced quadratic form $q_{\mathrm{red}}(v_{\mathrm{red}})$, i.e.,
$$q_{\mathrm{red}}(v_{\mathrm{red}}) \;=\; 2v_2^2 + 2v_2 v_3 + 2v_3^2 \;=\; v_2^2 + v_3^2 + (v_2 + v_3)^2.$$
Therefore, $q_{\mathrm{red}}(v_{\mathrm{red}}) > 0$ for all $(v_2, v_3) \neq (0, 0)$, which means that $q_{\mathrm{red}}$ is a positive definite form.

Step 4: From Theorem 9.2, we conclude that the point $x_0 = (1, 1, 1)$ is a constrained minimum for the function $f(x) = x_1 x_2 + x_2 x_3 + x_3 x_1$, with $x_1, x_2, x_3 > 0$, subject to the constraint $x_1 x_2 x_3 = 1$.
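As a quick sanity check of this result (not part of the original text), the same constrained minimization can be handed to a general-purpose solver. The sketch below assumes NumPy and SciPy are available and uses the SLSQP method with an equality constraint and positivity bounds; the starting point is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] * x[1] + x[1] * x[2] + x[2] * x[0]
constraint = {"type": "eq", "fun": lambda x: x[0] * x[1] * x[2] - 1.0}
bounds = [(1e-6, None)] * 3            # enforce x_i > 0

res = minimize(f, x0=[0.5, 2.0, 1.5], method="SLSQP",
               bounds=bounds, constraints=[constraint])
print(np.round(res.x, 6), round(res.fun, 6))   # expect approximately [1. 1. 1.] and 3.0
```

The solver should return a point close to $(1, 1, 1)$ with objective value $3$, in agreement with Step 4.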
