FALL 2018 MATH 4211/6211 Optimization Homework 4

This homework assignment is open to the textbook, reference books, slides, and online resources, excluding any direct solution to the problems (such as a solution manual). Copying others' solutions or programs is strictly prohibited and will result in a grade of 0 for all students involved.

Please type your answers in LaTeX and submit a single PDF file on iCollege before the due time. Do not include your name anywhere in the submitted PDF file. Instead, name your PDF file hw4_123456789.pdf (replace 123456789 with your own Panther ID number). It is recommended to use notation consistent with the lectures. By default, all vectors are treated as column vectors.

Problem 1. (1 point) Let $f(x)$, $x = [x_1, x_2]^T \in \mathbb{R}^2$, be given by
$$f(x) = \frac{5}{2}x_1^2 + \frac{1}{2}x_2^2 + 2x_1 x_2 - 3x_1 - x_2.$$

(a) Express $f(x)$ in the form $f(x) = \frac{1}{2}x^T Q x - x^T b$.

(b) Find the minimizer of $f$ using the conjugate gradient algorithm. Use the starting point $x^{(0)} = [0, 0]^T$.

(c) Calculate the minimizer of $f$ analytically from $Q$ and $b$, and check it against your answer in part (b).
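For reference on part (b), below is a minimal sketch of the conjugate gradient algorithm for a quadratic $f(x) = \frac{1}{2}x^T Q x - x^T b$; the function name and interface are illustrative only and are not prescribed by the assignment.

```python
import numpy as np

def cg_quadratic(Q, b, x0, tol=1e-12):
    """Conjugate gradient for f(x) = 0.5 x'Qx - x'b, with Q symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    g = Q @ x - b                  # gradient of f at x
    d = -g                         # first direction: negative gradient
    for _ in range(len(b)):        # converges in at most n steps in exact arithmetic
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ Q @ d)           # exact line search for a quadratic
        x = x + alpha * d
        g_new = Q @ x - b
        beta = (g_new @ (Q @ d)) / (d @ Q @ d)   # makes the next direction Q-conjugate to d
        d = -g_new + beta * d
        g = g_new
    return x
```

Calling `cg_quadratic(Q, b, np.zeros(2))` with the $Q$ and $b$ found in part (a) should reproduce the hand computation.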

Problem 2. (1 point) Given $f: \mathbb{R}^n \to \mathbb{R}$, $f \in C^1$, consider the algorithm
$$x^{(k+1)} = x^{(k)} + \alpha_k d^{(k)},$$
where $d^{(1)}, d^{(2)}, \ldots$ are vectors in $\mathbb{R}^n$, and $\alpha_k \geq 0$ is chosen to minimize $f(x^{(k)} + \alpha d^{(k)})$; that is,
$$\alpha_k = \arg\min_{\alpha \geq 0} f(x^{(k)} + \alpha d^{(k)}).$$
Note that this general algorithm encompasses almost all algorithms discussed in this part, including the steepest descent, Newton, conjugate gradient, and quasi-Newton algorithms. Let $g^{(k)} = \nabla f(x^{(k)})$, and assume $d^{(k)T} g^{(k)} < 0$.

(a) Show that $d^{(k)}$ is a descent direction for $f$, in the sense that there exists $\bar{\alpha} > 0$ such that for all $\alpha \in (0, \bar{\alpha}]$, $f(x^{(k)} + \alpha d^{(k)}) < f(x^{(k)})$.

(b) Show that $\alpha_k > 0$.

(c) Show that $d^{(k)T} g^{(k+1)} = 0$.

(d) Show that the following algorithms all satisfy the condition $d^{(k)T} g^{(k)} < 0$ if $g^{(k)} \neq 0$:
1. the steepest descent algorithm;
2. Newton's method, assuming the Hessian is positive definite;
3. the conjugate gradient algorithm;
4. the quasi-Newton algorithm, assuming $H^{(k)} \succ 0$.

(e) For the case where $f(x) = \frac{1}{2}x^T Q x - x^T b$, with $Q$ symmetric and positive definite, derive an expression for $\alpha_k$ in terms of $Q$, $d^{(k)}$, and $g^{(k)}$.
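As a small illustration of the generic iteration above (not part of the assignment), the sketch below performs one step $x^{(k+1)} = x^{(k)} + \alpha_k d^{(k)}$ with $d^{(k)} = -g^{(k)}$ and the step size obtained by a numerical one-dimensional minimization; `scipy.optimize.minimize_scalar`, the bound on $\alpha$, and the toy function are my own choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_search_step(f, grad_f, x):
    """One step of x_{k+1} = x_k + alpha_k d_k with d_k = -grad f(x_k),
    where alpha_k approximately minimizes f(x_k + alpha d_k) over alpha >= 0."""
    g = grad_f(x)
    d = -g                            # steepest-descent direction, so d'g < 0 whenever g != 0
    phi = lambda a: f(x + a * d)      # one-dimensional restriction of f along d
    res = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded")
    return x + res.x * d

# toy example (illustrative only): f(x) = x1^2 + 4 x2^2
f = lambda x: x[0]**2 + 4 * x[1]**2
grad_f = lambda x: np.array([2 * x[0], 8 * x[1]])
x_next = line_search_step(f, grad_f, np.array([1.0, 1.0]))
```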

Problem 3. (1 point) Consider the optimization problem:
$$\text{minimize } c_1|x_1| + c_2|x_2| + \cdots + c_n|x_n| \quad \text{subject to } Ax = b,$$
where $c_i \geq 0$, $i = 1, \ldots, n$. Convert the above problem into an equivalent standard-form linear programming problem.

Hint: Given any $x \in \mathbb{R}$, we can find unique numbers $x^+, x^- \in \mathbb{R}$, $x^+, x^- \geq 0$, such that $|x| = x^+ + x^-$ and $x = x^+ - x^-$.
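A quick numerical check of the hint (my own example, not part of the problem): for the scalar $x = -2$,
$$x^+ = \max(x, 0) = 0, \qquad x^- = \max(-x, 0) = 2, \qquad x^+ + x^- = 2 = |x|, \qquad x^+ - x^- = -2 = x.$$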

Problem 4. (1 point) Consider the linear program:
$$\begin{aligned}
\text{maximize } \ & 2x_1 + x_2 \\
\text{subject to } \ & 0 \leq x_1 \leq 5, \\
& 0 \leq x_2 \leq 7, \\
& x_1 + x_2 \leq 9.
\end{aligned}$$
Convert the problem to standard form and solve it using the simplex method.
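A minimal sketch for sanity-checking the hand computation numerically, assuming `scipy.optimize.linprog` is available; this only verifies the optimum and does not replace the required simplex steps.

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the objective 2*x1 + x2
c = [-2.0, -1.0]
A_ub = [[1.0, 1.0]]           # x1 + x2 <= 9
b_ub = [9.0]
bounds = [(0, 5), (0, 7)]     # 0 <= x1 <= 5, 0 <= x2 <= 7

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)        # optimal point and value of the original maximization problem
```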

Problem 5. (1 point) Write a MATLAB (or Python) routine implementing the conjugate gradient algorithm for general functions. Use the secant method for the line search. Test the three different formulas for $\beta_k$, Hestenes-Stiefel, Polak-Ribière, and Fletcher-Reeves, on Rosenbrock's function
$$f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$$
with the initial condition $x^{(0)} = [-2, 2]^T$. For this exercise, reinitialize the update direction to the negative gradient every 6 iterations. Set the stopping criterion to $\|g^{(k)}\| < \varepsilon$ with $\varepsilon = 10^{-7}$. For each of the three formulas, report the number of iterations needed for the algorithm to terminate, and create a table of $x^{(k)}$, $\|x^{(k)} - x^*\|$, $\|g^{(k)}\|$, and $f(x^{(k)})$ for $k = 0, 1, 2, 3, 4$, where $x^*$ is the minimizer of $f$, using the following template (keep 4 digits in all numbers):

k    $x^{(k)}$    $\|x^{(k)} - x^*\|$    $\|g^{(k)}\|$    $f(x^{(k)})$
0
1
2
3
4
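As a starting point, here is a rough Python skeleton of the requested routine with the three $\beta_k$ formulas; the secant line search is a simplified sketch and may differ from the exact version given in lecture, and names such as `nonlinear_cg` are my own.

```python
import numpy as np

def secant_line_search(grad_f, x, d, a0=0.0, a1=1e-3, tol=1e-8, max_iter=100):
    """Secant method applied to phi'(a) = grad_f(x + a*d)'d to approximate
    argmin_a f(x + a*d). A rough sketch; the lecture's exact variant may differ."""
    dphi = lambda a: grad_f(x + a * d) @ d
    g0, g1 = dphi(a0), dphi(a1)
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-16 or abs(g1) < tol:
            break
        a0, a1, g0 = a1, a1 - g1 * (a1 - a0) / (g1 - g0), g1
        g1 = dphi(a1)
    return a1

def nonlinear_cg(f, grad_f, x0, formula="FR", eps=1e-7, restart=6, max_iter=10000):
    """Nonlinear conjugate gradient with Hestenes-Stiefel (HS), Polak-Ribiere (PR),
    or Fletcher-Reeves (FR) beta, restarting to -gradient every `restart` iterations."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < eps:
            return x, k
        alpha = secant_line_search(grad_f, x, d)
        x = x + alpha * d
        g_new = grad_f(x)
        if (k + 1) % restart == 0:
            d = -g_new                           # periodic restart to the negative gradient
        else:
            y = g_new - g
            if formula == "HS":
                beta = (g_new @ y) / (d @ y)
            elif formula == "PR":
                beta = (g_new @ y) / (g @ g)
            else:                                # "FR"
                beta = (g_new @ g_new) / (g @ g)
            d = -g_new + beta * d
        g = g_new
    return x, max_iter

# Rosenbrock test, as in the problem statement
f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad_f = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                             200 * (x[1] - x[0]**2)])
x_final, iters = nonlinear_cg(f, grad_f, np.array([-2.0, 2.0]), formula="FR")
```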

Problem 6. (1 point) Write a MATLAB (or Python) routine implementing the quasi-Newton algorithm for general functions. Use the secant method for the line search. Test the three different formulas for $H^{(k)}$ on Rosenbrock's function
$$f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$$
with the initial condition $x^{(0)} = [-2, 2]^T$. For this exercise, reinitialize the update direction to the negative gradient every 6 iterations. Set the stopping criterion to $\|g^{(k)}\| < \varepsilon$ with $\varepsilon = 10^{-7}$. For each of the three formulas, report the number of iterations needed for the algorithm to terminate, and create a table of $x^{(k)}$, $\|x^{(k)} - x^*\|$, $\|g^{(k)}\|$, and $f(x^{(k)})$ for $k = 0, 1, 2, 3, 4$, where $x^*$ is the minimizer of $f$, using the following template (keep 4 digits in all numbers):

k    $x^{(k)}$    $\|x^{(k)} - x^*\|$    $\|g^{(k)}\|$    $f(x^{(k)})$
0
1
2
3
4
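A rough Python skeleton of a quasi-Newton routine is given below; it assumes the three formulas for $H^{(k)}$ are the rank-one, DFP, and BFGS inverse-Hessian updates, which may need adjusting to match the versions covered in lecture. The secant line search is the same simplified sketch used for Problem 5, and all names are my own.

```python
import numpy as np

def secant_line_search(grad_f, x, d, a0=0.0, a1=1e-3, tol=1e-8, max_iter=100):
    """Secant sketch for phi'(a) = grad_f(x + a*d)'d, approximating argmin_a f(x + a*d)."""
    dphi = lambda a: grad_f(x + a * d) @ d
    g0, g1 = dphi(a0), dphi(a1)
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-16 or abs(g1) < tol:
            break
        a0, a1, g0 = a1, a1 - g1 * (a1 - a0) / (g1 - g0), g1
        g1 = dphi(a1)
    return a1

def quasi_newton(f, grad_f, x0, update="BFGS", eps=1e-7, restart=6, max_iter=10000):
    """Quasi-Newton iteration x_{k+1} = x_k + alpha_k d_k with d_k = -H_k g_k,
    where H_k is updated by the rank-one, DFP, or BFGS formula (assumed set of three)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)
    g = grad_f(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < eps:
            return x, k
        if k % restart == 0:
            H = np.eye(n)                      # reset: direction becomes the negative gradient
        d = -H @ g
        alpha = secant_line_search(grad_f, x, d)
        s = alpha * d                          # s_k = x_{k+1} - x_k
        x = x + s
        g_new = grad_f(x)
        y = g_new - g                          # y_k = g_{k+1} - g_k
        if update == "rank1":
            v = s - H @ y
            H = H + np.outer(v, v) / (v @ y)
        elif update == "DFP":
            Hy = H @ y
            H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        else:                                  # "BFGS" (inverse-Hessian form)
            rho = 1.0 / (y @ s)
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        g = g_new
    return x, max_iter

# Rosenbrock test, as in the problem statement
f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad_f = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                             200 * (x[1] - x[0]**2)])
x_final, iters = quasi_newton(f, grad_f, np.array([-2.0, 2.0]), update="BFGS")
```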