Math 164-1: Optimization Instructor: Alpár R. Mészáros

Math 164-1: Optimization, Instructor: Alpár R. Mészáros
First Midterm, April 20, 2016

Name (use a pen):
Student ID (use a pen):
Signature (use a pen):

Rules:
- Duration of the exam: 50 minutes.
- By writing your name and signature on this exam paper, you attest that you are the person indicated and will adhere to the UCLA Student Conduct Code.
- You may use either a pen or a pencil to write your solutions. However, if you use a pencil I will withhold your paper for two weeks after grading it.
- No calculators, computers, cell phones (all cell phones should be turned off during the exam), notes, books or other outside material are permitted on this exam. If you want to use scratch paper, ask one of the supervisors for it. Do not use your own scratch paper!
- Please justify all your answers with mathematical precision and write rigorous and clear proofs. You may lose points for lack of justification of your answers.
- Theorems from the lectures and homework may be used to justify your solution. In this case, state the theorem you are using.
- This exam has 3 problems and is worth 20 points. Adding up the indicated points, you can observe that there are 27 points in total, which means that there are 7 bonus points. This makes it possible to obtain the highest score of 20 even if you do not answer some of the questions. On the other hand, nobody can be bored during the exam. All scores higher than 20 will be recorded as 20 in the gradebook.

I wish you success!

Problem | Exercise 1 | Exercise 2 | Exercise 3 | Total
Score   |            |            |            |

Exercise 1 (13 points). Let us consider the rhombus $\Omega \subset \mathbb{R}^2$ bounded by the straight lines $y = x$, $y = -x$, $y = \sqrt{2} + x$ and $y = \sqrt{2} - x$. We consider the function $f : \mathbb{R}^2 \to \mathbb{R}$ given as $f(x, y) = x^2 + y^2 - \sqrt{2}\,y + 9/2$. We aim to find all the local minimizers and maximizers of $f$ on $\Omega$. Note that by the Weierstrass theorem ($\Omega$ is compact and $f$ is continuous on $\Omega$), both global minimizers and maximizers exist.

(1) Select all the candidates from the interior of $\Omega$, both for local minimizers and maximizers.

(2) Using second order sufficient conditions, decide whether the selected points in (1) are local minimizers, maximizers or neither.

(3) Compute the feasible directions at $(0, 0)$. Using first and second order necessary conditions, decide whether $(0, 0)$ is a good candidate for a local maximizer of $f$ on $\Omega$ or not. Compute the value of $f$ at $(0, 0)$ and compare it with the values of $f$ at the other three vertices of $\Omega$.

(4) Find the global maximizers of $f$ on the side of $\Omega$ where $y = x$. Compute the value of $f$ at these points. Hint: do not use Lagrange multipliers!

(5) Characterize and draw the following level sets of $f$: $LS_5 := \{(x, y) \in \mathbb{R}^2 : f(x, y) = 5\}$ and $LS_6 := \{(x, y) \in \mathbb{R}^2 : f(x, y) = 6\}$.

(6) Find all the global maximizers and minimizers of $f$ on $\Omega$. Hint: you may use some of the geometrical properties of $f$ from the previous point and point (3).

Solutions

(1) For interior points the first order necessary condition reads as $\nabla f(x, y) = 0$, which is equivalent to $(x, y) = (0, \sqrt{2}/2)$, and clearly this is an interior point of $\Omega$.

(2) Clearly $D^2 f(x, y) = 2I_2$ for all $(x, y) \in \mathbb{R}^2$ (where $I_2$ is the identity matrix in $\mathbb{R}^{2 \times 2}$), which is a positive definite matrix. Since the point $(x, y) = (0, \sqrt{2}/2)$ selected in (1) is an interior point (at which $\nabla f$ vanishes), by the second order sufficient condition it is a strict local minimizer of $f$ on $\Omega$.

(3) Feasible directions at $(0, 0)$ are those vectors $e = (e_1, e_2) \in \mathbb{R}^2$ for which $e_2 \ge e_1$ and $e_2 \ge -e_1$ (hence they lie in the intersection of the two half-planes), from where one obtains $e_2 \ge 0$ and $-e_2 \le e_1 \le e_2$. At $(0, 0)$ the first order necessary condition for a local maximizer reads as $\nabla f(0, 0) \cdot e \le 0$ for all feasible directions $e$ at $(0, 0)$. This is equivalent to $(0, -\sqrt{2}) \cdot e \le 0$, i.e. $-\sqrt{2}\, e_2 \le 0$, which holds true, since for the feasible directions at $(0, 0)$ one has $e_2 \ge 0$. As seen in (2), $D^2 f(x, y) = 2I_2$ for all $(x, y) \in \mathbb{R}^2$, and the second order necessary condition (for $(0, 0)$ to be a local maximizer) would require $e^\top D^2 f(0, 0)\, e \le 0$ for all feasible directions $e$ at $(0, 0)$ for which $\nabla f(0, 0) \cdot e = 0$. This amounts to $2(e_1^2 + e_2^2) \le 0$ for all feasible directions $e$ at $(0, 0)$ with $e_1 = e_2 = 0$; this is only the trivial direction, hence the second order necessary condition for $(0, 0)$ to be a local maximizer holds true. $f(0, 0) = 9/2 = f(\sqrt{2}/2, \sqrt{2}/2) = f(-\sqrt{2}/2, \sqrt{2}/2) = f(0, \sqrt{2})$, thus the values of the function are the same at all the vertices of $\Omega$.

(4) On the side of $\Omega$ where $y = x$ the function reduces to a function of one variable, i.e. $g(x) = f(x, x) = 2x^2 - \sqrt{2}\,x + 9/2$, which has to be optimized on $x \in [0, \sqrt{2}/2]$. If there is a local extremizer in the interior of the interval, one should have $g'(x) = 0$, which is equivalent to $x = \sqrt{2}/4$, an interior point. But the parabola is convex, hence it has a global minimizer at this point ($f(\sqrt{2}/4, \sqrt{2}/4) = 17/4$). Since the parabola $g$ is symmetric on $[0, \sqrt{2}/2]$, the maximum of $f$ on this side is achieved at both endpoints of the line segment, $(0, 0)$ and $(\sqrt{2}/2, \sqrt{2}/2)$, where the value of $f$ was already computed in (3): it is $9/2$.
(5) $f$ can be written as $f(x, y) = x^2 + (y - \sqrt{2}/2)^2 + 4$. Hence all the level sets of $f$ (for levels greater than 4) are circles around the point $(0, \sqrt{2}/2)$. In particular $LS_5 = \{(x, y) \in \mathbb{R}^2 : x^2 + (y - \sqrt{2}/2)^2 = 1\}$ and $LS_6 = \{(x, y) \in \mathbb{R}^2 : x^2 + (y - \sqrt{2}/2)^2 = 2\}$.

(6) Using the previous point and having in mind that the level sets of $f$ are circles, the value of $f$ at a point $(x, y)$ is the squared distance of this point from the center $(0, \sqrt{2}/2)$ plus 4, i.e. $f(x, y) = \|(x, y) - (0, \sqrt{2}/2)\|^2 + 4$. Moreover, $\Omega$ is symmetric w.r.t. $(0, \sqrt{2}/2)$ and $f$ is radially increasing w.r.t. the center (the farther a point lies from the center, the larger the value of $f$), so it is clear that its maximum is attained on the level set with the largest radius still intersecting $\Omega$ (i.e. precisely $\sqrt{2}/2$). These points (the global maximizers of $f$) are precisely the 4 vertex points (which lie on $LS_{9/2}$), and the value of the function at these points was computed in (3). By the same reasoning, the global minimum of the function is attained at the local minimizer computed in (1), namely the center, since it has distance 0 from the center. The value of $f$ there is 4.
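
As a quick numerical sanity check of this solution (not part of the original exam), the short numpy sketch below samples $f$ on a fine grid over the rhombus and compares the extreme values with the analytic answers; the grid resolution and variable names are arbitrary choices of this illustration.

```python
# Sanity check for Exercise 1: sample f on a grid over the rhombus
# |x| <= y <= sqrt(2) - |x| and compare min/max with the analytic values.
import numpy as np

sqrt2 = np.sqrt(2.0)

def f(x, y):
    # f(x, y) = x^2 + y^2 - sqrt(2) y + 9/2 = x^2 + (y - sqrt(2)/2)^2 + 4
    return x**2 + y**2 - sqrt2 * y + 9 / 2

xs = np.linspace(-sqrt2 / 2, sqrt2 / 2, 2001)
ys = np.linspace(0.0, sqrt2, 2001)
X, Y = np.meshgrid(xs, ys)
inside = (Y >= np.abs(X)) & (Y <= sqrt2 - np.abs(X))
vals = f(X[inside], Y[inside])

print(vals.min())  # ~4.0, attained near the center (0, sqrt(2)/2)
print(vals.max())  # ~4.5 = 9/2, attained near the four vertices
```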

Exercise 2 (8 points). Let $A \in \mathbb{R}^{n \times n}$, $n \ge 1$, be a symmetric matrix. We would like to find the minimum and the maximum of the so-called Rayleigh quotient associated to the matrix $A$. To do so, we define the Rayleigh quotient as the function $R : \mathbb{R}^n \setminus \{0\} \to \mathbb{R}$, defined as $R(x) = \frac{x^\top A x}{\|x\|^2}$. Here $\|x\| := \sqrt{x_1^2 + \dots + x_n^2}$ denotes the usual Euclidean norm on $\mathbb{R}^n$.

(1) Show that $R(cx) = R(x)$ for all $x \in \mathbb{R}^n \setminus \{0\}$ and $c \in \mathbb{R} \setminus \{0\}$.

(2) Using (1), show that looking for the local maximizers and minimizers of $R$ on $\mathbb{R}^n \setminus \{0\}$ is equivalent to looking for the local extremizers of $R$ on the set $\Omega := \{x \in \mathbb{R}^n : \|x\|^2 = 1\}$.

(3) Using the theory of Lagrange multipliers, show that the possible candidates for the local extremizers of $R$ on $\Omega$ are the eigenvectors (normalized to be points in $\Omega$) of $A$.

(4) Characterize the tangent space $T(x^1)$ and the normal space $N(x^1)$ to $\Omega$ at $x^1$, where $x^1 \in \Omega$ is an eigenvector of $A$ associated to the smallest eigenvalue $\lambda_1$ of $A$.

(5) Assuming that one can order the eigenvalues (with possible multiplicities) of $A$ as $\lambda_1 < \lambda_2 \le \dots \le \lambda_{n-1} < \lambda_n$, find the global minimizers and maximizers of $R$ on $\Omega$. Compute the minimum and the maximum of $R$. Hint: you may plug the candidates from (3) into $R$ to find the global extremizers. Another possibility is to work with second order sufficient conditions from the Lagrangian theory.

Solutions

(1) By direct computation one checks $R(cx) = \frac{(cx)^\top A (cx)}{\|cx\|^2} = \frac{c^2\, x^\top A x}{c^2 \|x\|^2} = R(x)$.

(2) From (1) we see that $R$ depends only on the direction of the vector $x$ and not on its length; in particular, for all $x \in \mathbb{R}^n \setminus \{0\}$ one has $R((1/\|x\|)x) = R(x)$ and $\|x/\|x\|\| = 1$, hence it is enough to look for the local minimizers and maximizers of $R$ on the unit sphere $\Omega$.

(3) By (2), optimizing $R$ on $\mathbb{R}^n \setminus \{0\}$ is equivalent to optimizing it on $\Omega$. Hence the local extremizers of $R$ (and the optimal values) are the same as those of the function $P : \mathbb{R}^n \to \mathbb{R}$, $P(x) = x^\top A x$ on $\Omega$. The first order necessary Lagrangian condition says that if $x \in \Omega$ is a local extremizer of $P$ on $\Omega$, then there exists a constant $\lambda \in \mathbb{R}$ such that $\nabla P(x) = \lambda \nabla h(x)$, where $h : \mathbb{R}^n \to \mathbb{R}$, $h(x) = \|x\|^2 - 1$. Computing the gradients, one has $2Ax = 2\lambda x$, which is clearly the condition for $x \in \Omega$ to be an eigenvector of $A$ associated to the eigenvalue $\lambda \in \mathbb{R}$ of $A$.

(4) By definition $T(x^1) = \{y \in \mathbb{R}^n : \nabla h(x^1) \cdot y = 0\} = \{y \in \mathbb{R}^n : 2 x^1 \cdot y = 0\}$, which is the $(n-1)$-dimensional hyperplane in $\mathbb{R}^n$ orthogonal to $x^1$. Since $T(x^1)$ and $N(x^1)$ are orthogonal complements, $N(x^1)$ is the line passing through $0$ with direction $x^1$.

(5) Since any local extremizer of $P$ (hence of $R$) is an eigenvector of $A$ (in $\Omega$), let us compute $P(x^i)$, where $x^i$ is any unit eigenvector associated to $\lambda_i$, $i \in \{1, \dots, n\}$: $P(x^i) = (x^i)^\top A x^i = (x^i)^\top \lambda_i x^i = \lambda_i$. This implies in particular that $P(x^i) \le P(x^j)$ if $i \le j$, and in particular $P(x^1) < P(x^j) < P(x^n)$ for all $j \in \{2, \dots, n-1\}$. We know that one can choose a set of $n$ eigenvectors forming an orthonormal basis of $\mathbb{R}^n$, hence every $x \in \mathbb{R}^n$ can be written as a linear combination of these vectors. Thus, if $x \in \mathbb{R}^n$ with $\|x\| = 1$, one has $P(x) = x^\top A x = c_1^2 \lambda_1 + \dots + c_n^2 \lambda_n$, where $c_i$ is the coefficient of $x^i$ when writing $x$ in the orthonormal basis of eigenvectors. In particular $c_1^2 + \dots + c_n^2 = 1$. Hence it is clear that $P(x^1) = \lambda_1 \le P(x) \le \lambda_n = P(x^n)$ for all $\|x\| = 1$, with equality on either side only when $x$ is a corresponding unit eigenvector (recall $\lambda_1 < \lambda_2$ and $\lambda_{n-1} < \lambda_n$),

hence clearly the eigenvectors corresponding to $\lambda_1$ are strict global minimizers of $P$ (hence of $R$) and the eigenvectors corresponding to $\lambda_n$ are strict global maximizers of $P$ (hence of $R$) on $\Omega$. The minimal and maximal values of $P$ and $R$ are $\lambda_1$ and $\lambda_n$ respectively. If one wants to work with the second order sufficient Lagrangian conditions instead (which is more difficult), one has to consider the matrices $L(x^i, \lambda_i) = 2A - 2\lambda_i I_n$ and check the sign of $y^\top L(x^i, \lambda_i)\, y$ for all $y \in T(x^i)$, where $I_n \in \mathbb{R}^{n \times n}$ denotes the identity matrix. Let us check it for $i = 1$ (dropping the positive factor 2): $y^\top L(x^1, \lambda_1)\, y = y^\top A y - \lambda_1 \|y\|^2$. Since $y \in T(x^1)$, $y$ is orthogonal to $x^1$, hence when writing $y$ in the orthonormal basis formed by eigenvectors of $A$ there is no $x^1$ component, i.e. $y = c_2 x^2 + \dots + c_n x^n$ (with $c_2^2 + \dots + c_n^2 = \|y\|^2$), which implies for $y \neq 0$ that $y^\top A y - \lambda_1 \|y\|^2 = c_2^2 \lambda_2 + \dots + c_n^2 \lambda_n - \lambda_1 \|y\|^2 \ge \|y\|^2 (\lambda_2 - \lambda_1) > 0$. Thus, by the second order sufficient condition, all the eigenvectors associated to $\lambda_1$ are strict local minimizers of $P$ (hence of $R$) on $\Omega$. The fact that they are global follows from the same arguments as in the first approach. The global maximality of the eigenvectors associated to $\lambda_n$ follows from a similar argument.
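
The conclusion of this exercise is easy to verify numerically. The sketch below (not part of the exam) builds a random symmetric matrix with numpy, evaluates the Rayleigh quotient at the extreme eigenvectors returned by np.linalg.eigh, and checks that random unit vectors never leave the interval $[\lambda_1, \lambda_n]$; the seed and dimension are arbitrary.

```python
# Numerical illustration of Exercise 2: the Rayleigh quotient of a symmetric
# matrix attains its min/max at the extreme eigenvectors.
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                       # a random symmetric matrix

def rayleigh(x):
    return x @ A @ x / (x @ x)

eigvals, eigvecs = np.linalg.eigh(A)    # eigenvalues in increasing order
x1, xn = eigvecs[:, 0], eigvecs[:, -1]  # unit eigenvectors for lambda_1, lambda_n

print(rayleigh(x1), eigvals[0])   # both equal lambda_1, the minimum of R
print(rayleigh(xn), eigvals[-1])  # both equal lambda_n, the maximum of R

# R at random nonzero vectors stays within [lambda_1, lambda_n].
samples = rng.standard_normal((1000, n))
vals = np.array([rayleigh(x) for x in samples])
assert vals.min() >= eigvals[0] - 1e-10
assert vals.max() <= eigvals[-1] + 1e-10
```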

Exercise 3 (6 points). Let us consider the domain $\Omega \subset \mathbb{R}^2$ defined as $\Omega := \{(x, y) \in \mathbb{R}^2 : x \in [1, e],\ 0 \le y \le \ln(x)\}$, where $e$ denotes the base of the natural logarithm $\ln$. We aim to find the global minimizers and maximizers of the function $f : \mathbb{R}^2 \to \mathbb{R}$, $f(x, y) = e^{xy}$ on $\Omega$. Here also we observe that we are optimizing a continuous function on a compact set, hence the existence of a global minimizer and a global maximizer is given by the theorem of Weierstrass.

(1) Show that there are no candidates for local extremizers of $f$ in the interior of $\Omega$ and deduce that the global extremizers lie on $\partial\Omega$.

(2) Decompose $\partial\Omega$ into three pieces and find the global maximizers and minimizers of $f$ on each of the pieces. Explain why we cannot use the theory of Lagrange multipliers for equality constraints on each piece of the boundary separately! Observe then that finding the extremizers on each piece of the boundary reduces the problem to a 1D problem.

(3) Deduce from (2) the global minimizers and maximizers of $f$ on $\Omega$. Compute the maximal and minimal values as well.

Solutions

(1) For a local extremizer $(x, y)$ in the interior of $\Omega$ one should have $\nabla f(x, y) = 0$, which is equivalent to $(x, y) = (0, 0)$. Since this point is outside of $\Omega$, one deduces that there are no interior local extremizers of $f$, and all the extremizers lie on $\partial\Omega$.

(2) Clearly $\partial\Omega$ decomposes into 3 pieces: $C_1 := \{(x, y) \in \mathbb{R}^2 : y = \ln(x),\ x \in [1, e]\}$, $C_2 := \{(e, y) \in \mathbb{R}^2 : 0 \le y \le \ln(e) = 1\}$ and $C_3 := \{(x, 0) \in \mathbb{R}^2 : x \in [1, e]\}$. One cannot use the Lagrangian theory with equality constraints on the 3 pieces of the boundary, because describing these pieces requires inequalities as well. This situation can be handled by the so-called KKT theory, which was not the subject of this exam. Let us optimize $f$ on each $C_i$, $i \in \{1, 2, 3\}$. On $C_1$ the function $f$ can be written as $g_1(x) = e^{x \ln x}$, which has to be optimized on $[1, e]$. Clearly $g_1'(x) = e^{x \ln x}(\ln x + 1) > 0$ for all $x \in [1, e]$, thus $g_1$ is strictly increasing on this interval, having its global minimizer at $x = 1$, where $g_1(1) = 1$, and its global maximizer at $x = e$, where $g_1(e) = e^e$. Hence the global minimizer of $f$ on $C_1$ is the point $(1, 0)$ and the global maximizer is $(e, 1)$, with the corresponding values. On $C_2$, $f$ reduces to the function $g_2(y) = e^{ey}$, which has to be optimized on $[0, 1]$. Clearly $g_2'(y) = e \cdot e^{ey} > 0$ for all $y \in [0, 1]$, which means that $g_2$ is strictly increasing on $[0, 1]$; hence its global minimizer and maximizer are $y = 0$ and $y = 1$ respectively, with the optimal values $g_2(0) = 1$ and $g_2(1) = e^e$. The corresponding global minimizer and maximizer of $f$ are the points $(e, 0)$ and $(e, 1)$ respectively. Similarly, on $C_3$ the problem is to optimize $g_3(x) = 1$ on the interval $[1, e]$, for which all the points are both global minimizers and maximizers. The same is true for $f$ on $C_3$.

(3) By (1) one knows that the global extremizers lie on $\partial\Omega$. In (2) we computed all the global extremizers on each piece of the boundary. Hence one has to select, among the points found in (2), those with the smallest and the largest value. We see that the global maximizer of $f$ is unique: it is the point $(e, 1)$, with maximal value $f(e, 1) = e^e$. There are infinitely many global minimizers, namely the points $(x, 0)$ with $x \in [1, e]$. The minimal value of the function is 1.
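
Here too a grid search confirms the answer. The sketch below (again, not from the exam) evaluates $f(x, y) = e^{xy}$ on a fine grid of $\Omega$ and compares the extremes with $1$ and $e^e$; the resolution is an arbitrary choice.

```python
# Sanity check for Exercise 3: sample f(x, y) = exp(x*y) on a grid over
# Omega = {1 <= x <= e, 0 <= y <= ln(x)}.
import numpy as np

xs = np.linspace(1.0, np.e, 1501)
ys = np.linspace(0.0, 1.0, 1501)
X, Y = np.meshgrid(xs, ys)
inside = Y <= np.log(X)                 # y >= 0 is enforced by the grid range
vals = np.exp(X[inside] * Y[inside])

print(vals.min())                # 1.0, attained along the whole segment y = 0
print(vals.max(), np.e ** np.e)  # both ~15.15, the maximum at (e, 1)
```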