ECE580 Fall 2015 Solution to Midterm Exam 1, October 23, 2015


Name: Solution    Score: /100

This exam is closed-book. You must show ALL of your work for full credit. Please read the questions carefully. Please check your answers carefully. Calculators may NOT be used. Please leave fractions as fractions, but simplify them, etc.; I do not want the decimal equivalents. Cell phones and other electronic communication devices must be turned off and stowed under your desk. Please do not write on the backs of the exam or additional pages; the instructor will grade only one side of each page. Extra paper is available from the instructor. Please write your name on every page that you would like graded.

Grading boxes: SD1  SD2  N1  N2  Q  G1  G2  G3  G4  SNC1  SNC2  SNC3  SQ

(25 points) Method of Steepest Descent: Algorithm and Application

1. State the update algorithm for the method of steepest descent. Make sure that you have not left any variables undefined.

   x^(k+1) = x^(k) − α_k ∇f(x^(k)),   where   α_k = arg min_{α ≥ 0} f(x^(k) − α ∇f(x^(k))).

2. Apply the method of steepest descent to the minimization problem

   minimize f(x)   subject to x ∈ Ω,

   where f(x) = 1 + 2x + 3x², and Ω = ℝ. Use x^(0) = 0 and find α_0. Find x^(1).

   We will need several quantities: ∇f(x) = f′(x) = 2 + 6x, so f′(x^(0)) = 2, hence x^(0) − α f′(x^(0)) = −2α, and

   f(−2α) = 1 + 2(−2α) + 3(−2α)² = 1 − 4α + 12α².

   Now we let φ(α) = f(−2α) and find the minimizing α_0 of φ(α). First,

   dφ/dα (α) = −4 + 24α.

   Equating the derivative to zero and solving for α_0, we obtain from

   dφ/dα (α_0) = −4 + 24α_0 = 0

   that α_0 = 1/6. This is nonnegative, so it is an acceptable minimizer... provided the second derivative (the Hessian of φ) evaluated at α_0 is positive. Since

   d²φ/dα² (α_0) = 24 > 0,

   we have indeed found the minimizer. Finally we calculate x^(1):

   x^(1) = x^(0) − (1/6)(2 + 6x^(0)) = −1/3.
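The following is a minimal numerical sketch of this calculation (added for illustration; it is not part of the original exam). It runs steepest descent with an exact line search on f(x) = 1 + 2x + 3x²; for a one-dimensional quadratic the exact line-search step is always 1/f″(x) = 1/6, which is exactly the α_0 found above.

    # Steepest descent with exact line search on f(x) = 1 + 2x + 3x^2.
    def f(x):
        return 1 + 2*x + 3*x**2

    def grad(x):
        return 2 + 6*x          # f'(x)

    q = 6.0                     # f''(x), constant for this quadratic
    x = 0.0                     # x^(0)
    for k in range(3):
        g = grad(x)
        alpha = 1.0 / q         # minimizes phi(a) = f(x - a*g) for a 1-D quadratic
        x = x - alpha * g
        print(k, alpha, x, f(x))
    # The first iteration gives x = -1/3, matching alpha_0 = 1/6 and x^(1) above.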

(20 points) Newton's Method for Optimization of Scalar Functions

1. State the update algorithm for Newton's method.

   x^(k+1) = x^(k) − f′(x^(k))/f″(x^(k)),   f″(x^(k)) ≠ 0.

2. Consider the minimization problem

   minimize f(x)   subject to x ∈ Ω,

   where f(x) = 1 + 2x + 3x², and Ω = ℝ. Use x^(0) = 0 and apply Newton's method to find x^(1).

   We need the following: f′(x) = 2 + 6x, f′(x^(0)) = 2, f″(x) = 6, f″(x^(0)) = 6. Then

   x^(1) = x^(0) − f′(x^(0))/f″(x^(0)) = −2/6 = −1/3.

   Note that we would not, in general, expect to get the same x^(k+1) from the steepest descent and Newton's algorithms.

(5 points) Quadratic Forms

Give 2 × 2 examples of

1. a positive definite matrix
2. a positive semidefinite matrix that is not positive definite
3. an indefinite matrix
4. a negative definite matrix

The simplest way to determine the definiteness of a matrix is to find the eigenvalues. If all are positive, the matrix is positive definite; if all are nonnegative, the matrix is positive semidefinite. Note that this means that positive definite matrices are also positive semidefinite. If all eigenvalues are negative, the matrix is negative definite; if all are nonpositive, the matrix is negative semidefinite. If a matrix has both positive and negative eigenvalues, it is indefinite. A diagonal matrix has its eigenvalues on its diagonal, so the simplest choice for our examples is to provide diagonal matrices with appropriate eigenvalues: for instance, diag(1, 1) = I is positive definite, diag(1, 0) is positive semidefinite but not positive definite, diag(1, −1) is indefinite, and diag(−1, −1) = −I is negative definite. Note that the simplest positive definite matrix is the identity I; the simplest negative definite matrix is −I.

(25 points) Geometry: Sets, Set Boundaries and Set Interiors, Convexity

1. State the definition of a boundary point of a set.

   A point x is a boundary point of a set S if every neighborhood of x contains a point in S and a point not in S. (When S is a connected region of ℝⁿ, any neighborhood of a boundary point x will contain infinitely many points in S and infinitely many not in S. This is a property of the real numbers.)

2. State the definition of an interior point of a set.

   A point x ∈ S is an interior point of the set S if S contains a neighborhood of x.

3. Consider the set C = {x : x₁ ≥ 0, x₁² + x₂² < 1}.

   (a) Complete the mathematical description of the set of boundary points of C:

       ∂C = {x : x₁ = 0, −1 < x₂ < 1} ∪ {x : x₁ ≥ 0, x₁² + x₂² = 1}.

   (b) Complete the mathematical description of the set of interior points of C:

       C° = {x : x₁ > 0, x₁² + x₂² < 1}.

   (c) Consider the set A = {x ∈ ℝ² : x₁ = 0}. Show that A is or is not convex.

       Let u, v be arbitrary points in the set A. Then u₁ = 0 and v₁ = 0. Let w = αu + (1 − α)v for any α ∈ [0, 1]. Then w₁ = αu₁ + (1 − α)v₁ = 0, so w ∈ A, and hence A is convex.

(20 points) Sufficient and Necessary Conditions

Be sure to define all terms and be as complete as possible.

1. State the SOSC for interior points.

   For a C² function f : Ω → ℝ, where Ω ⊂ ℝⁿ, if, at an interior point x* of Ω, ∇f(x*) = 0 and the Hessian F(x*) > 0 (positive definite), then x* is a strict local minimizer of the function f.

2. State the FONC for general (not just interior) points.

   Let Ω ⊂ ℝⁿ and let the C¹ function f be real valued on Ω. If x* is a local minimizer of f over Ω, then for every feasible direction d at x*, dᵀ∇f(x*) ≥ 0.

3. State the definition of the term feasible direction.

   A nonzero vector d ∈ ℝⁿ is a feasible direction at x ∈ Ω if there is an α₀ > 0 such that for all α ∈ [0, α₀], x + αd ∈ Ω.

(5 points) Short Questions

All answers require justification.

1. In analyzing the application of gradient algorithms to quadratic functions, what use was made of the variable γ_k? You need not give a formula for calculating γ_k.

   Lemma 8.1 defines

   V(x) = f(x) + (1/2) x*ᵀ Q x*,   where f(x) = (1/2) xᵀQx − bᵀx

   and x* denotes the minimizer of f, and asserts that the iterates of the steepest descent algorithm satisfy

   V(x^(k+1)) = (1 − γ_k) V(x^(k)).

   A formula is given for γ_k. The proof establishes that, except when γ_k = 1 (at which point the search for a minimizer is over), γ_k < 1. Thus if γ_k > 0, the value of V(x^(k)) is decreasing, and so f, which differs from V by only a constant, is also decreasing.

2. Is the sequence {e^(−k) cos k}_{k=0}^∞ decreasing?

   No, the sequence is not decreasing. The sequence approaches zero as k increases without bound. While the terms at k = 0 and k = 1 are positive, the term at k = 2 is negative. The factor e^(−k) is always positive, so we need not examine it further. Since 1/(2π) < 1/4, the angle 1 rad lies in the first quadrant, so cos 1 is positive. On the other hand, 2/(2π) = 1/π is around 1/3, so the angle 2 rad lies in the second quadrant and cos 2 is thus negative. Since the limit is zero and the term at k = 2 is negative, some later term must exceed the value at k = 2 as the sequence increases back towards zero; hence the sequence is not decreasing.

3. Is the sequence {e^(−k) sin k}_{k=0}^∞ convergent?

   By definition, a sequence of real numbers {x_k}_{k=0}^∞ converges to a value x if for any ɛ > 0, there exists an N such that for all k > N, |x_k − x| < ɛ. Noting that 0 ≤ |sin k| ≤ 1, we now show that the sequence converges to zero. We have

   |e^(−k) sin k − 0| = e^(−k) |sin k| ≤ e^(−k) < ɛ

   whenever −k < ln ɛ, i.e., k > −ln ɛ. Thus choosing N ≥ −ln ɛ suffices to guarantee that |e^(−k) sin k − 0| < ɛ for all k > N.

4. Name three desirable properties for algorithms to have.

   (a) convergence
   (b) short run time
   (c) low computational demands
   (d) accuracy
   (e) etc.

© 2015 S. Koskie