Minimization of Static Cost Functions


Robert Stengel
Optimal Control and Estimation, MAE 546, Princeton University, 2017

- J = static cost function with constant control parameter vector, u
- Conditions for a minimum in J with respect to u
- Analytical and numerical solutions

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only.

Static Cost Function

- Minimum value of the cost, J, is fixed but may be unknown
- Corresponding control parameter, u*, also is fixed but possibly unknown
- Cost function preferably has a single minimum and locally smooth, monotonic contours away from the minimum

Vector Norms for Real Variables

Norm = measure of the length or magnitude of a vector, x; a scalar quantity.

x = [x_1, x_2, ..., x_n]^T, dim(x) = (n x 1)

- Taxicab or Manhattan norm: L_1 norm = ||x||_1 = Σ_{i=1}^n |x_i|
- Euclidean or quadratic norm: L_2 norm = ||x||_2 = (x^T x)^{1/2} = (x_1^2 + x_2^2 + ... + x_n^2)^{1/2}
- p-norm: L_p norm = ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p}
- Infinity norm: p → ∞ (expanded on a later slide)

p-Norm Example

[Figure: p-norms of a 15-dimensional vector, tabulated against p.] As p increases, the norm approaches the value of the maximum component of the vector.
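As a concrete check of these definitions, the following sketch (Python/NumPy, with an arbitrary example vector rather than the 15-dimensional one from the slide) computes the L_1, L_2, and infinity norms and shows the p-norm approaching the maximum component magnitude as p grows:

```python
import numpy as np

x = np.array([1.0, -3.0, 2.5, 0.5])   # arbitrary example vector

l1 = np.sum(np.abs(x))                # Taxicab / Manhattan norm
l2 = np.sqrt(x @ x)                   # Euclidean norm, (x' x)^{1/2}
linf = np.max(np.abs(x))              # infinity (Chebyshev) norm

# As p increases, the p-norm approaches the maximum component magnitude
for p in [1, 2, 5, 10, 50]:
    lp = np.sum(np.abs(x) ** p) ** (1.0 / p)
    print(f"p = {p:3d}: ||x||_p = {lp:.4f}")
print(f"||x||_inf = {linf:.4f}")
```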

Apples and Oranges Problem

Suppose the elements of x represent quantities with different units, e.g.,
x = [x_1, x_2, ..., x_n]^T = [velocity (m/s), angle (rad), temperature (K), ...]^T
What is a reasonable definition for the norm?

One solution: a vector, y, normalized by the ranges of the elements of x:
y = [y_1, y_2, ..., y_n]^T, with y_i = d_i x_i = x_i / (x_i,max - x_i,min)
i.e., y = Dx, with D = diag(d_1, d_2, ..., d_n)

Weighted Euclidean (Quadratic) Norm of x

||y||_2 = (y^T y)^{1/2} = (y_1^2 + y_2^2 + ... + y_m^2)^{1/2} = ||Dx||_2 = (x^T D^T D x)^{1/2}
dim(y) = (m x 1), dim(x) = (n x 1), dim(D) = (m x n)

- If m = n, D is square, and D^T D is square and symmetric
- If D is diagonal, D^T D is diagonal
- If m ≠ n, D is not square, but D^T D is still square and symmetric
- Rank(D) ≤ m for n > m; Rank(D) ≤ n for n < m
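A minimal sketch of the range-normalization idea, using made-up ranges for the velocity/angle/temperature example; D is diagonal with d_i = 1/(x_i,max - x_i,min):

```python
import numpy as np

# Hypothetical ranges for velocity (m/s), angle (rad), temperature (K)
x = np.array([50.0, 0.3, 300.0])
x_min = np.array([0.0, -np.pi, 200.0])
x_max = np.array([100.0, np.pi, 400.0])

D = np.diag(1.0 / (x_max - x_min))    # diagonal scaling matrix
y = D @ x                             # normalized vector

# Weighted Euclidean norm: ||Dx||_2 = sqrt(x' D'D x) = ||y||_2
weighted_norm = np.sqrt(x @ D.T @ D @ x)
print(weighted_norm, np.linalg.norm(y))   # identical values
```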

Rank of Matrix D

- Maximal number of linearly independent rows or columns of D
- Order of the largest non-zero minor determinant of D

Examples:
D = [a b; c d; e f] (3 x 2): maximal rank 2
D = [a b c; d e f] (2 x 3): maximal rank 2

Quadratic Forms

x^T D^T D x ≜ x^T Q x = quadratic form
Q ≜ D^T D = defining matrix of the quadratic form, dim(Q) = (n x n)
Q is symmetric; x^T Q x is a scalar
Rank(Q) ≤ m for n > m; Rank(Q) ≤ n for n < m

Useful identity for the trace of the quadratic form:
x^T Q x = Tr(x^T Q x) = Tr(x x^T Q) = Tr(Q x x^T)
(1 x n)(n x n)(n x 1) = (1 x 1) = Tr(1 x 1) = Tr(n x n) = scalar
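The trace identity is easy to verify numerically; a small sketch with a randomly generated symmetric Q:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Q = A.T @ A                          # symmetric (and positive semi-definite)
x = rng.standard_normal(n)

quad = x @ Q @ x                     # scalar quadratic form x' Q x
t1 = np.trace(np.outer(x, x) @ Q)    # Tr(x x' Q)
t2 = np.trace(Q @ np.outer(x, x))    # Tr(Q x x')
print(np.allclose(quad, t1), np.allclose(quad, t2))   # True True
```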

Why Are Quadratic Forms Useful?

2 x 2 example:
J(x) ≜ x^T Q x = [x_1 x_2] [q_11 q_12; q_21 q_22] [x_1; x_2]
     = q_11 x_1^2 + q_22 x_2^2 + (q_12 + q_21) x_1 x_2
     = q_11 x_1^2 + q_22 x_2^2 + 2 q_12 x_1 x_2 for a symmetric matrix

A quadratic form is the 2nd-order term of the Taylor series expansion of any vector function f(x):
f(x_0 + Δx) = f(x_0) + (∂f/∂x)|_{x=x_0} Δx + (1/2) Δx^T (∂^2 f/∂x^2)|_{x=x_0} Δx + ... (higher-order terms)

The gradient (as a row vector) is well-defined everywhere:
∂J/∂x = [∂J/∂x_1  ∂J/∂x_2] = [2 q_11 x_1 + 2 q_12 x_2   2 q_22 x_2 + 2 q_12 x_1]

- Large values are weighted more heavily than small values
- Some terms can count more than others
- Coupling between terms can be considered
- An asymmetric Q can always be replaced by a symmetric Q

Definiteness

Definiteness of a Matrix

- If x^T Q x > 0 for all x ≠ 0, (a) the scalar is definitely positive, and (b) the matrix Q is a positive-definite matrix
- If x^T Q x ≥ 0 for all x ≠ 0, Q is a positive semi-definite matrix
- If x^T Q x < 0 for all x ≠ 0, Q is a negative-definite matrix

Positive-Definite Matrix

Q is positive-definite if
- all leading principal minor determinants are positive, and
- all eigenvalues are real and positive

3 x 3 example: Q = [q_11 q_12 q_13; q_21 q_22 q_23; q_31 q_32 q_33]
det(sI - Q) = s^3 + a_2 s^2 + a_1 s + a_0 = (s - λ_1)(s - λ_2)(s - λ_3), with λ_1, λ_2, λ_3 > 0
q_11 > 0;  det[q_11 q_12; q_21 q_22] > 0;  det[q_11 q_12 q_13; q_21 q_22 q_23; q_31 q_32 q_33] > 0

What's an eigenvalue?
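Both tests are straightforward to code; a minimal sketch (for a symmetric Q, the leading-minor and eigenvalue tests agree):

```python
import numpy as np

def is_positive_definite(Q):
    """Check positive definiteness two equivalent ways (Q assumed symmetric)."""
    Q = np.asarray(Q, dtype=float)
    # Test 1: all leading principal minor determinants positive
    minors_ok = all(np.linalg.det(Q[:k, :k]) > 0
                    for k in range(1, Q.shape[0] + 1))
    # Test 2: all eigenvalues real and positive (eigvalsh exploits symmetry)
    eigs_ok = bool(np.all(np.linalg.eigvalsh(Q) > 0))
    return minors_ok, eigs_ok

Q = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(is_positive_definite(Q))   # (True, True)
```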

Characteristic Polynomial

Characteristic polynomial of a matrix, F:
Δ(s) = det(sI - F) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 = s^n - Tr(F) s^{n-1} + ... + a_1 s + a_0 (a scalar)

where
sI - F = [(s - f_11)  -f_12  ...  -f_1n;
          -f_21  (s - f_22)  ...  -f_2n;
          ...
          -f_n1  -f_n2  ...  (s - f_nn)]   (n x n)

Eigenvalues

Characteristic equation of the matrix:
Δ(s) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 = 0 = (s - λ_1)(s - λ_2)(...)(s - λ_n)

The λ_i are the solutions that set Δ(s) = 0; they are called the eigenvalues of F, the roots of the characteristic polynomial.
a_{n-1} = -Tr(F);  a_0 = Π_{i=1}^n (-λ_i)

Examples of Positive-Definite Matrices

Inertia matrix of a rigid body:
I = ∫_Body [(y^2 + z^2)  -xy  -xz;  -xy  (x^2 + z^2)  -yz;  -xz  -yz  (x^2 + y^2)] dm
  = [I_xx  -I_xy  -I_xz;  -I_xy  I_yy  -I_yz;  -I_xz  -I_yz  I_zz]

A diagonal matrix with positive elements.

Rotation (Direction Cosine) Matrix

y = Hx
- Projections of the unit-vector components of one Cartesian reference frame on another
- Rotational orientation of one Cartesian reference frame with respect to another

H = [cos θ_11  cos θ_21  cos θ_31;  cos θ_12  cos θ_22  cos θ_32;  cos θ_13  cos θ_23  cos θ_33]
  = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33]

Euler Angles

Body attitude is measured with respect to the inertial frame. The three-angle orientation is expressed by a sequence of three orthogonal single-angle rotations from the inertial to the body frame:
ψ: yaw angle;  θ: pitch angle;  φ: roll angle;  det(H) = 1

H = [cos θ cos ψ                        cos θ sin ψ                        -sin θ;
     -cos φ sin ψ + sin φ sin θ cos ψ    cos φ cos ψ + sin φ sin θ sin ψ    sin φ cos θ;
     sin φ sin ψ + cos φ sin θ cos ψ    -sin φ cos ψ + cos φ sin θ sin ψ    cos φ cos θ]

Rotational Transformation and the Euclidean Norm

Position of a particle in two coordinate frames with a common origin:
y = Hx,  x = H^{-1} y
- Orthonormal transformation [H^{-1} = H^T]
- Not a positive-definite matrix
- Non-singular matrix [det(H) = 1 ≠ 0]

||y||_2 = (y^T y)^{1/2} = (y_1^2 + y_2^2 + y_3^2)^{1/2} = ||Hx||_2 = (x^T H^T H x)^{1/2} = ||x||_2 = (x_1^2 + x_2^2 + x_3^2)^{1/2}
since H^T H = H^{-1} H = I_3 (positive-definite)
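A quick numerical check, with arbitrary angles, that this 3-2-1 (yaw-pitch-roll) rotation matrix is orthonormal, has unit determinant, and preserves the Euclidean norm:

```python
import numpy as np

def dcm_321(psi, theta, phi):
    """Direction cosine matrix for a yaw-pitch-roll (3-2-1) rotation sequence."""
    cps, sps = np.cos(psi), np.sin(psi)
    cth, sth = np.cos(theta), np.sin(theta)
    cph, sph = np.cos(phi), np.sin(phi)
    return np.array([
        [cth*cps,                 cth*sps,                -sth],
        [-cph*sps + sph*sth*cps,  cph*cps + sph*sth*sps,   sph*cth],
        [ sph*sps + cph*sth*cps, -sph*cps + cph*sth*sps,   cph*cth]])

H = dcm_321(0.4, -0.2, 0.7)                      # arbitrary angles (rad)
x = np.array([1.0, -2.0, 0.5])
print(np.allclose(H.T @ H, np.eye(3)))           # orthonormal: True
print(np.isclose(np.linalg.det(H), 1.0))         # det(H) = 1: True
print(np.linalg.norm(H @ x), np.linalg.norm(x))  # equal norms
```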

Conditions for a Static Minimum

The Static Minimization Problem

Find the value of a continuous control parameter, u, that minimizes a continuous scalar cost function, J.

Single control parameter:
min over u of J(u);  dim(J) = 1, dim(u) = 1
u* = argmin_u J(u), the argument that minimizes J

Many control parameters:
min over u of J(u);  dim(J) = 1, dim(u) = (m x 1)
u* = argmin_u J(u)

Necessary Condition for Static Optimality

Single control:
dJ/du|_{u=u*} = 0, i.e., the slope is zero at the optimum point
Example: J = (u - 4)^2;  dJ/du = 2(u - 4) = 0 when u* = 4
The optimum point is a stationary point.

Multiple controls:
∂J/∂u|_{u=u*} = [∂J/∂u_1  ∂J/∂u_2  ...  ∂J/∂u_m]|_{u=u*} = 0 (the gradient, defined as a row vector)
i.e., all the slopes are concurrently zero at the optimum point
Example: J = (u_1 - 4)^2 + (u_2 - 8)^2
∂J/∂u_1 = 2(u_1 - 4) = 0 when u_1* = 4
∂J/∂u_2 = 2(u_2 - 8) = 0 when u_2* = 8
∂J/∂u|_{u=u*} = [0 0] at u* = [4; 8]
The optimum point is a stationary point.
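A sketch verifying the stationary point for the two-control example; the finite-difference check at an arbitrary point is an added illustration, not part of the slide:

```python
import numpy as np

def J(u):
    return (u[0] - 4.0) ** 2 + (u[1] - 8.0) ** 2

def grad_J(u):
    # Gradient components, analytic
    return np.array([2.0 * (u[0] - 4.0), 2.0 * (u[1] - 8.0)])

u_star = np.array([4.0, 8.0])
print(grad_J(u_star))   # [0. 0.] -- a stationary point

# Central-difference check of the gradient at an arbitrary point
u, eps = np.array([1.0, 2.0]), 1e-6
fd = [(J(u + eps * np.eye(2)[i]) - J(u - eps * np.eye(2)[i])) / (2 * eps)
      for i in range(2)]
print(fd, grad_J(u))    # approximately equal
```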

But the Slope Can Be Zero for More than One Reason

Minimum. Maximum. Either. Neither (inflection point).

But the Gradient Can Be Zero for More than One Reason

Minimum. Maximum. Either. Neither (saddle point).

Sufficient Condition for an Extremum

Single control. Minimum: satisfy the necessary condition, plus
d^2J/du^2|_{u=u*} > 0, i.e., the curvature is positive at the optimum point
Example: J = (u - 4)^2;  dJ/du = 2(u - 4);  d^2J/du^2 = 2 > 0

Maximum: satisfy the necessary condition, plus
d^2J/du^2|_{u=u*} < 0, i.e., the curvature is negative at the optimum point
Example: J = -(u - 4)^2;  dJ/du = -2(u - 4);  d^2J/du^2 = -2 < 0

Sufficient Condition for a Local Minimum

Multiple controls: satisfy the necessary condition, plus the Hessian matrix must be positive-definite at u*:

∂^2J/∂u^2|_{u=u*} = [∂^2J/∂u_1^2     ∂^2J/∂u_1∂u_2  ...  ∂^2J/∂u_1∂u_m;
                     ∂^2J/∂u_2∂u_1   ∂^2J/∂u_2^2    ...  ∂^2J/∂u_2∂u_m;
                     ...
                     ∂^2J/∂u_m∂u_1   ∂^2J/∂u_m∂u_2  ...  ∂^2J/∂u_m^2]|_{u=u*} > 0

i.e., J must be convex for a minimum.
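For the same two-control example, the Hessian is constant and positive-definite, confirming a minimum; a minimal check:

```python
import numpy as np

# Hessian of J = (u1 - 4)^2 + (u2 - 8)^2 is constant:
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
eigs = np.linalg.eigvalsh(H)
print(eigs, bool(np.all(eigs > 0)))   # [2. 2.] True -> local minimum
```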

Minimized Cost Function, J*

The gradient is zero at the minimum; the Hessian matrix is positive-definite at the local minimum.

J(u* + Δu) ≈ J(u*) + ΔJ(u*) + Δ^2 J(u*) + ...

The first variation is zero at the minimum:
ΔJ(u*) = (∂J/∂u)|_{u=u*} Δu = 0

The second variation is positive at the minimum:
Δ^2 J(u*) = (1/2) Δu^T (∂^2 J/∂u^2)|_{u=u*} Δu > 0

Equality Constraints

Static Cost Functions with Equality Constraints

Minimize J(ũ), subject to c(ũ) = 0, with dim(c) = (n x 1) and dim(ũ) = ((m + n) x 1).

Two Approaches to Static Optimization with a Constraint

1. Use the constraint to reduce the control dimension.
Example: min over ũ = [u_1; u_2] of J(ũ) = J(u_1, u_2), subject to c(ũ) = c(u_1, u_2) = 0.
If u_2 = fcn(u_1), then J(ũ) = J(u_1, u_2) = J(u_1, fcn(u_1)) = J(u_1).

2. Augment the cost function to recognize the constraint.
The Lagrange multiplier, λ, is an unknown constant, dim(λ) = dim(c) = (n x 1):
J_A(ũ) = J(ũ) + λ^T c(ũ)

Warning: the Lagrange multiplier, λ, is not the same as an eigenvalue, λ.

Solution Example: First Approach

Cost function: J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40
Constraint: c = u_2 - u_1 - 2 = 0, so u_2 = u_1 + 2

Solution Example: Reduced Control Dimension

Cost function and gradient with the constraint substituted:
J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40 = u_1^2 - 2 u_1 (u_1 + 2) + 3 (u_1 + 2)^2 - 40 = 2 u_1^2 + 8 u_1 - 28
∂J/∂u_1 = 4 u_1 + 8 = 0

Optimal solution: u_1* = -2, u_2* = 0, J* = -36
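The same answer can be reproduced numerically by substituting the constraint and minimizing the resulting one-variable cost; a sketch using SciPy's scalar minimizer:

```python
from scipy.optimize import minimize_scalar

def J_reduced(u1):
    u2 = u1 + 2.0                  # constraint: u2 - u1 - 2 = 0
    return u1**2 - 2.0*u1*u2 + 3.0*u2**2 - 40.0

res = minimize_scalar(J_reduced)   # 1-D minimization of the reduced cost
u1 = res.x
print(u1, u1 + 2.0, res.fun)       # approx -2.0, 0.0, -36.0
```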

Solution: Second Approach

Partition ũ into a state, x, and a control, u, such that
dim(x) = (n x 1) = dim(c), dim(u) = (m x 1):
ũ = [x; u]

Add the constraint to the cost function, weighted by the Lagrange multiplier, λ:
J_A(ũ) = J(ũ) + λ^T c(ũ), i.e., J_A(x, u) = J(x, u) + λ^T c(x, u)
c(x, u) is required to be zero when J_A is a minimum.

Solution: Adjoin the Constraint with a Lagrange Multiplier

Gradients with respect to x, u, and λ are zero at the optimum point:
∂J_A/∂x = ∂J/∂x + λ^T (∂c/∂x) = 0
∂J_A/∂u = ∂J/∂u + λ^T (∂c/∂u) = 0
∂J_A/∂λ = c^T = 0

Simultaneous Solutions for State and Control

(2n + m) values must be found: (x, u, λ).

First equation: find the optimal Lagrange multiplier (n scalar equations):
λ*^T = -(∂J/∂x)(∂c/∂x)^{-1}  [row], or  λ* = -[(∂c/∂x)^{-1}]^T (∂J/∂x)^T  [column]

Second and third equations: specify the state and control ((n + m) scalar equations):
∂J/∂u + λ*^T (∂c/∂u) = ∂J/∂u - (∂J/∂x)(∂c/∂x)^{-1}(∂c/∂u) = 0
c(x, u) = 0

Solution Example: Lagrange Multiplier

Cost function: J = u^2 - 2xu + 3x^2 - 40
Constraint: c = x - u - 2 = 0

Partial derivatives:
∂J/∂x = -2u + 6x;  ∂J/∂u = 2u - 2x;  ∂c/∂x = 1;  ∂c/∂u = -1

Solution Example: Lagrange Multiplier (continued)

From the first equation: λ* = 2u - 6x
From the second equation: (2u - 2x) + (2u - 6x)(-1) = 4x = 0, so x = 0
From the constraint: u = -2

Optimal solution: x* = 0, u* = -2, J* = -36

Numerical Optimization
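The three stationarity conditions are linear in (x, u, λ) for this example, so they can be solved directly as a 3 x 3 system; a sketch:

```python
import numpy as np

# Stationarity of J_A = (u^2 - 2xu + 3x^2 - 40) + lam*(x - u - 2):
#   dJA/dx   =  6x - 2u + lam = 0
#   dJA/du   = -2x + 2u - lam = 0
#   dJA/dlam =   x -  u       = 2
A = np.array([[6.0, -2.0, 1.0],
              [-2.0, 2.0, -1.0],
              [1.0, -1.0, 0.0]])
b = np.array([0.0, 0.0, 2.0])
x, u, lam = np.linalg.solve(A, b)
J = u**2 - 2*x*u + 3*x**2 - 40.0
print(x, u, lam, J)   # 0.0, -2.0, -4.0, -36.0
```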

Numerical Optimization

What if J is too complicated to find an analytical solution for the minimum, or J has multiple minima? Use numerical optimization to find local and/or global solutions.

Two Approaches to Numerical Minimization

Gradient-free search: evaluate J directly and search for min J, starting from a guess J(u_0) and generating successive estimates J(u_n) such that J(u_{n+1}) < J(u_n).

Gradient-based search: evaluate ∂J/∂u and search for ∂J/∂u = 0, starting from the gradient at a guess, (∂J/∂u)|_{u=u_0}, and generating successive gradients (∂J/∂u)|_{u=u_n} that approach zero.

Gradient-Free Search (Based on J)

- Exhaustive grid search
- Unstructured random search
- Structured random search

Structured random search methods:
- Nelder-Mead (downhill simplex) algorithm
- Simulated annealing
- Genetic algorithm
- Particle-swarm optimization

Downhill Simplex Search (Nelder-Mead Algorithm)*

Simplex: an N-dimensional figure in control space defined by
- N + 1 vertices
- (N + 1) N / 2 straight edges between vertices

* Basis for MATLAB's fminsearch

Search Procedure for the Downhill Simplex Method

- Select a starting set of vertices
- Evaluate the cost at each vertex
- Determine the vertex with the largest cost (e.g., J_1)
- Project the search from this vertex through the middle of the opposite face (or edge for N = 2)
- Evaluate the cost at the new vertex (e.g., J_4)
- Drop the J_1 vertex, and form a simplex with the new vertex
- Repeat until the cost is small
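SciPy's Nelder-Mead option implements the same downhill simplex idea as fminsearch; a sketch applying it to the earlier two-control example:

```python
import numpy as np
from scipy.optimize import minimize

def J(u):
    return (u[0] - 4.0) ** 2 + (u[1] - 8.0) ** 2

u0 = np.zeros(2)                              # starting point (initial simplex built around it)
res = minimize(J, u0, method="Nelder-Mead")   # derivative-free simplex search
print(res.x, res.fun)                         # approx [4, 8], 0
```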

Gradient Search to Minimize a Quadratic Function

Cost function, gradient, and Hessian matrix of a quadratic function:
J = (1/2)(u - u*)^T R (u - u*), R > 0
  = (1/2)(u^T R u - u^T R u* - u*^T R u + u*^T R u*)
∂J/∂u = (u - u*)^T R = 0 when u = u*
∂^2J/∂u^2 = R = symmetric constant matrix

Guess a starting value of u, u_0, and solve the gradient equation:
(∂J/∂u)|_{u=u_0} = (u_0 - u*)^T R = (u_0 - u*)^T (∂^2J/∂u^2)|_{u=u_0}
(u_0 - u*) = R^{-1} (∂J/∂u)^T|_{u=u_0}
u* = u_0 - R^{-1} (∂J/∂u)^T|_{u=u_0}  [the gradient (row) is transposed to a column]

Optimal Value Found in a Single Step

For a quadratic cost function:
u* = u_0 - R^{-1} (∂J/∂u)^T|_{u=u_0} = u_0 - [(∂^2J/∂u^2)|_{u=u_0}]^{-1} (∂J/∂u)^T|_{u=u_0}

- The gradient establishes the general search direction
- The Hessian fine-tunes the direction and tells exactly how far to go

Newton-Raphson Iteration

Many cost functions are not quadratic; however, the surface is well approximated by a quadratic in the vicinity of the optimum, u*:
J(u* + Δu) ≈ J(u*) + ΔJ(u*) + Δ^2 J(u*) + ...
ΔJ(u*) = (∂J/∂u)|_{u=u*} Δu = 0
Δ^2 J(u*) = (1/2) Δu^T (∂^2 J/∂u^2)|_{u=u*} Δu > 0

The optimal solution requires multiple steps. The Newton-Raphson algorithm is an iterative search patterned after the quadratic search:
u_{k+1} = u_k - [(∂^2J/∂u^2)|_{u=u_k}]^{-1} (∂J/∂u)^T|_{u=u_k}
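A minimal Newton-Raphson sketch; the quartic example cost is an illustrative assumption, not one from the slides:

```python
import numpy as np

def newton_raphson(grad, hess, u0, tol=1e-10, max_iter=50):
    """Iterate u_{k+1} = u_k - [d2J/du2]^{-1} (dJ/du)^T until the gradient is small."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g = grad(u)
        if np.linalg.norm(g) < tol:
            break
        u = u - np.linalg.solve(hess(u), g)   # full Newton step
    return u

# Hypothetical non-quadratic cost: J = (u1 - 1)^4 + (u2 + 2)^2
grad = lambda u: np.array([4.0 * (u[0] - 1.0) ** 3, 2.0 * (u[1] + 2.0)])
hess = lambda u: np.array([[12.0 * (u[0] - 1.0) ** 2, 0.0],
                           [0.0, 2.0]])
print(newton_raphson(grad, hess, [3.0, 3.0]))   # approaches [1, -2] over several steps
```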

Difficulties with Newton-Raphson Iteration

u_{k+1} = u_k - [(∂^2J/∂u^2)|_{u=u_k}]^{-1} (∂J/∂u)^T|_{u=u_k}

Good when close to the optimum, but the Hessian matrix (i.e., the curvature) may be
- difficult to estimate from local measurements of the cost,
- of the wrong sign (e.g., not positive-definite), and
- prone to producing large errors in the incremental control variation.

Steepest-Descent Algorithm Multiplies the Gradient by a Scalar Constant ("Gain")

u_{k+1} = u_k - ε (∂J/∂u)^T|_{u=u_k}

Replace the Hessian matrix by a scalar constant gain, ε. The gradient is orthogonal to equal-cost contours.
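A fixed-gain steepest-descent sketch; the quadratic cost matrix, minimizer, and gain values are made-up illustrations of the slow-versus-oscillatory behavior discussed on the next slide:

```python
import numpy as np

R = np.array([[3.0, 1.0],
              [1.0, 2.0]])             # Hessian of the quadratic cost (assumed)
u_star = np.array([1.0, -1.0])         # known minimizer, for this demo only

def grad(u):
    # dJ/du (as a column) for J = 1/2 (u - u*)' R (u - u*)
    return R @ (u - u_star)

for eps in (0.05, 0.4):                # small gain: slow; large gain: oscillatory
    u = np.array([4.0, 7.0])
    for k in range(100):
        u = u - eps * grad(u)
    print(eps, u)                      # both approach u_star, at different rates
```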

Choice of Steepest-Descent Gain

- If the gain is too small, convergence is slow
- If the gain is too large, convergence oscillates or may fail
- Solution: make the gain adaptive

Optimal Steepest-Descent Gain

Find the optimal gain by evaluating the cost, J, for intermediate solutions with the same ∂J/∂u. Adjustment rule for ε:
- Starting estimate, J_0
- First estimate, J_1, using ε
- Second estimate, J_2, using 2ε
- If J_2 > J_1, fit a quadratic through the three points to find ε*
- Else, make a third estimate, J_3, using 4ε, and so on
Use the optimal gain, ε*, on each major iteration.

Generalized Direct Search Algorithm

u_{k+1}(t) = u_k(t) - K_k [∂J/∂u(t)]^T|_k

Choose the optimal elements of the diagonal gain matrix K_k by sequential line search before recalculating the gradient.

Ad Hoc Modifications to the Newton-Raphson Search

u_{k+1} = u_k - ε [(∂^2J/∂u^2)|_{u=u_k}]^{-1} (∂J/∂u)^T|_{u=u_k},  ε < 1

u_{k+1} = u_k - diag(∂^2J/∂u_1^2, ..., ∂^2J/∂u_m^2)^{-1} (∂J/∂u)^T|_{u=u_k}  (diagonal-Hessian approximation)

u_{k+1} = u_k - [(∂^2J/∂u^2)|_{u=u_k} + K]^{-1} (∂J/∂u)^T|_{u=u_k},  K > 0 and diagonal

u_{k+1} = u_k - [(∂^2J/∂u^2)|_{u=u_k} + λI]^{-1} (∂J/∂u)^T|_{u=u_k}

With varying λ, this is the Levenberg-Marquardt algorithm.
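A sketch of the last modification, damping the Hessian with λI and adapting λ; the accept/reject schedule and the use of the Rosenbrock test function (introduced later in these notes) are illustrative assumptions:

```python
import numpy as np

def lm_minimize(cost, grad, hess, u0, lam=1.0, max_iter=100, tol=1e-8):
    """Damped Newton step u_{k+1} = u_k - (d2J/du2 + lam*I)^{-1} dJ/du, adapting lam."""
    u = np.asarray(u0, dtype=float)
    J = cost(u)
    for _ in range(max_iter):
        g = grad(u)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(u) + lam * np.eye(len(u)), g)
        if cost(u - step) < J:
            u, J, lam = u - step, cost(u - step), lam * 0.5   # accept step, damp less
        else:
            lam *= 2.0                                        # reject step, damp more
    return u

cost = lambda u: (1 - u[0])**2 + 100.0 * (u[1] - u[0]**2)**2   # Rosenbrock function
grad = lambda u: np.array([-2*(1 - u[0]) - 400*u[0]*(u[1] - u[0]**2),
                           200*(u[1] - u[0]**2)])
hess = lambda u: np.array([[2 - 400*(u[1] - 3*u[0]**2), -400*u[0]],
                           [-400*u[0], 200.0]])
print(lm_minimize(cost, grad, hess, [-1.2, 1.0]))   # approaches [1, 1]
```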

Next Time: Principles for Optimal Control of Dynamic Systems
Reading: OCE, Sections 3.1 and 3.2; Supplemental Material

Infinity Norm

As p → ∞, the norm approaches the value of the maximum component:
L_p norm = ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p} → |x_i|_max = L_∞ norm

x = [x_1, x_2, ..., x_n]^T, dim(x) = (n x 1)

Also called the supremum norm, Chebyshev norm, or uniform norm:
L_∞ norm = ||x||_∞ = max{|x_1|, |x_2|, ..., |x_n|}

Gradient-Free vs. Gradient-Based Searches

Gradient-free search:
- J is a scalar
- J provides no search direction
- The search may provide a global minimum

Gradient-based search:
- ∂J/∂u is a vector
- ∂J/∂u indicates a feasible search direction
- The search defines a local minimum

MATLAB Implementations of Gradient-Free Search

- Nelder-Mead (downhill simplex) algorithm: fminsearch.html
- Simulated annealing: simulated-annealing.html
- Genetic algorithm: genetic-algorithm.html (Example: edit gaconstrained)
- Particle-swarm optimization: particle-swarm.html

Numerical Example

Cost function and derivatives (quadratic, as before):
J = (1/2)(u - u*)^T R (u - u*), R > 0
∂J/∂u = (u - u*)^T R

First guess at the optimal control (details of the actual cost function are unknown):
u_0 = [u_1; u_2]_0 = [4; 7]

The derivatives are evaluated at the starting point, and the solution follows from the starting point in a single step:
u* = u_0 - R^{-1} (∂J/∂u)^T|_{u=u_0}

Conjugate Gradient Algorithm

First step (steepest descent):
u_1(t) = u_0(t) - K (∂H/∂u)^T(t)|_0

Calculate the gradient of the improved trajectory, (∂H/∂u)(t)|_1, and the ratio of integrated squared gradient magnitudes:
b = ∫_{t_0}^{t_f} (∂H/∂u)(t)|_1 (∂H/∂u)^T(t)|_1 dt / ∫_{t_0}^{t_f} (∂H/∂u)(t)|_0 (∂H/∂u)^T(t)|_0 dt

Second step:
u_{k+1}(t) = u_2(t) = u_1(t) - K [(∂H/∂u)(t)|_1 + b (∂H/∂u)(t)|_0]^T

Rosenbrock Function

A typical test function for numerical optimization algorithms:
J(u_1, u_2) = (1 - u_1)^2 + 100 (u_2 - u_1^2)^2

Wolfram Alpha:
Plot[{D[(1-u1)^2 + 100 (u2 - u1^2)^2, u1], D[(1-u1)^2 + 100 (u2 - u1^2)^2, u2]}]
Minimize[(1-u1)^2 + 100 (u2 - u1^2)^2, {u1, u2}]
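A sketch confirming the Rosenbrock minimum, (u_1, u_2) = (1, 1) with J = 0, using the derivative-free simplex search described earlier:

```python
import numpy as np
from scipy.optimize import minimize

rosen = lambda u: (1 - u[0])**2 + 100.0 * (u[1] - u[0]**2)**2

res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})
print(res.x, res.fun)   # approx [1, 1], 0
```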

How Many Maxima/Minima Does the Mexican Hat [z = (sin R)/R] Have?

Necessary condition:
∂J/∂u|_{u=u*} = [∂J/∂u_1  ∂J/∂u_2  ...  ∂J/∂u_m]|_{u=u*} = 0

Sufficient condition (prior example):
∂^2J/∂u^2|_{u=u*} > 0 (minimum) or < 0 (maximum)

One maximum at the origin; the other stationary points lie on circular rings, along which the curvature is zero, so the Hessian test is inconclusive there.

Wolfram Alpha:
(sin(sqrt(u1^2 + u2^2))) / (sqrt(u1^2 + u2^2))
maximize (sin(sqrt(u1^2 + u2^2))) / (sqrt(u1^2 + u2^2))

Minimization of Static! Cost Functions!

Minimization of Static! Cost Functions! Minimization of Static Cost Functions Robert Stengel Optimal Control and Estimation, MAE 546, Princeton University, 2018 J = Static cost function with constant control parameter vector, u Conditions for

More information

Introduction to Optimization!

Introduction to Optimization! Introduction to Optimization! Robert Stengel! Robotics and Intelligent Systems, MAE 345, Princeton University, 2017 Optimization problems and criteria Cost functions Static optimality conditions Examples

More information

Minimization with Equality Constraints

Minimization with Equality Constraints Path Constraints and Numerical Optimization Robert Stengel Optimal Control and Estimation, MAE 546, Princeton University, 2015 State and Control Equality Constraints Pseudoinverse State and Control Inequality

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

Translational and Rotational Dynamics!

Translational and Rotational Dynamics! Translational and Rotational Dynamics Robert Stengel Robotics and Intelligent Systems MAE 345, Princeton University, 217 Copyright 217 by Robert Stengel. All rights reserved. For educational use only.

More information

Minimization with Equality Constraints!

Minimization with Equality Constraints! Path Constraints and Numerical Optimization Robert Stengel Optimal Control and Estimation, MAE 546, Princeton University, 2017 State and Control Equality Constraints Pseudoinverse State and Control Inequality

More information

Gradient Descent. Dr. Xiaowei Huang

Gradient Descent. Dr. Xiaowei Huang Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Optimal Control and Estimation MAE 546, Princeton University Robert Stengel, Preliminaries!

Optimal Control and Estimation MAE 546, Princeton University Robert Stengel, Preliminaries! Optimal Control and Estimation MAE 546, Princeton University Robert Stengel, 2018 Copyright 2018 by Robert Stengel. All rights reserved. For educational use only. http://www.princeton.edu/~stengel/mae546.html

More information

Optimization. Totally not complete this is...don't use it yet...

Optimization. Totally not complete this is...don't use it yet... Optimization Totally not complete this is...don't use it yet... Bisection? Doing a root method is akin to doing a optimization method, but bi-section would not be an effective method - can detect sign

More information

Time Response of Dynamic Systems! Multi-Dimensional Trajectories Position, velocity, and acceleration are vectors

Time Response of Dynamic Systems! Multi-Dimensional Trajectories Position, velocity, and acceleration are vectors Time Response of Dynamic Systems Robert Stengel Robotics and Intelligent Systems MAE 345, Princeton University, 217 Multi-dimensional trajectories Numerical integration Linear and nonlinear systems Linearization

More information

Lagrange Multipliers

Lagrange Multipliers Optimization with Constraints As long as algebra and geometry have been separated, their progress have been slow and their uses limited; but when these two sciences have been united, they have lent each

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Multidimensional Unconstrained Optimization Suppose we have a function f() of more than one

More information

MATHEMATICS FOR COMPUTER VISION WEEK 8 OPTIMISATION PART 2. Dr Fabio Cuzzolin MSc in Computer Vision Oxford Brookes University Year

MATHEMATICS FOR COMPUTER VISION WEEK 8 OPTIMISATION PART 2. Dr Fabio Cuzzolin MSc in Computer Vision Oxford Brookes University Year MATHEMATICS FOR COMPUTER VISION WEEK 8 OPTIMISATION PART 2 1 Dr Fabio Cuzzolin MSc in Computer Vision Oxford Brookes University Year 2013-14 OUTLINE OF WEEK 8 topics: quadratic optimisation, least squares,

More information

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained

More information

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation

More information

OR MSc Maths Revision Course

OR MSc Maths Revision Course OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Optimization Methods

Optimization Methods Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available

More information

LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION

LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION 15-382 COLLECTIVE INTELLIGENCE - S19 LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION TEACHER: GIANNI A. DI CARO WHAT IF WE HAVE ONE SINGLE AGENT PSO leverages the presence of a swarm: the outcome

More information

, b = 0. (2) 1 2 The eigenvectors of A corresponding to the eigenvalues λ 1 = 1, λ 2 = 3 are

, b = 0. (2) 1 2 The eigenvectors of A corresponding to the eigenvalues λ 1 = 1, λ 2 = 3 are Quadratic forms We consider the quadratic function f : R 2 R defined by f(x) = 2 xt Ax b T x with x = (x, x 2 ) T, () where A R 2 2 is symmetric and b R 2. We will see that, depending on the eigenvalues

More information

Optimization and Root Finding. Kurt Hornik

Optimization and Root Finding. Kurt Hornik Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding

More information

ENGI Partial Differentiation Page y f x

ENGI Partial Differentiation Page y f x ENGI 3424 4 Partial Differentiation Page 4-01 4. Partial Differentiation For functions of one variable, be found unambiguously by differentiation: y f x, the rate of change of the dependent variable can

More information

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23 Optimization: Nonlinear Optimization without Constraints Nonlinear Optimization without Constraints 1 / 23 Nonlinear optimization without constraints Unconstrained minimization min x f(x) where f(x) is

More information

Linearized Equations of Motion!

Linearized Equations of Motion! Linearized Equations of Motion Robert Stengel, Aircraft Flight Dynamics MAE 331, 216 Learning Objectives Develop linear equations to describe small perturbational motions Apply to aircraft dynamic equations

More information

Deep Learning. Authors: I. Goodfellow, Y. Bengio, A. Courville. Chapter 4: Numerical Computation. Lecture slides edited by C. Yim. C.

Deep Learning. Authors: I. Goodfellow, Y. Bengio, A. Courville. Chapter 4: Numerical Computation. Lecture slides edited by C. Yim. C. Chapter 4: Numerical Computation Deep Learning Authors: I. Goodfellow, Y. Bengio, A. Courville Lecture slides edited by 1 Chapter 4: Numerical Computation 4.1 Overflow and Underflow 4.2 Poor Conditioning

More information

Optimal Control and Estimation MAE 546, Princeton University Robert Stengel, Preliminaries!

Optimal Control and Estimation MAE 546, Princeton University Robert Stengel, Preliminaries! Optimal Control and Estimation MAE 546, Princeton University Robert Stengel, 2017 Copyright 2017 by Robert Stengel. All rights reserved. For educational use only. http://www.princeton.edu/~stengel/mae546.html

More information

Numerical Optimization: Basic Concepts and Algorithms

Numerical Optimization: Basic Concepts and Algorithms May 27th 2015 Numerical Optimization: Basic Concepts and Algorithms R. Duvigneau R. Duvigneau - Numerical Optimization: Basic Concepts and Algorithms 1 Outline Some basic concepts in optimization Some

More information

Upon successful completion of MATH 220, the student will be able to:

Upon successful completion of MATH 220, the student will be able to: MATH 220 Matrices Upon successful completion of MATH 220, the student will be able to: 1. Identify a system of linear equations (or linear system) and describe its solution set 2. Write down the coefficient

More information

Numerical Optimization. Review: Unconstrained Optimization

Numerical Optimization. Review: Unconstrained Optimization Numerical Optimization Finding the best feasible solution Edward P. Gatzke Department of Chemical Engineering University of South Carolina Ed Gatzke (USC CHE ) Numerical Optimization ECHE 589, Spring 2011

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

CHAPTER 2: QUADRATIC PROGRAMMING

CHAPTER 2: QUADRATIC PROGRAMMING CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

More information

1 Numerical optimization

1 Numerical optimization Contents 1 Numerical optimization 5 1.1 Optimization of single-variable functions............ 5 1.1.1 Golden Section Search................... 6 1.1. Fibonacci Search...................... 8 1. Algorithms

More information

Lecture V. Numerical Optimization

Lecture V. Numerical Optimization Lecture V Numerical Optimization Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Numerical Optimization p. 1 /19 Isomorphism I We describe minimization problems: to maximize

More information

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester Physics 403 Numerical Methods, Maximum Likelihood, and Least Squares Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Review of Last Class Quadratic Approximation

More information

NonlinearOptimization

NonlinearOptimization 1/35 NonlinearOptimization Pavel Kordík Department of Computer Systems Faculty of Information Technology Czech Technical University in Prague Jiří Kašpar, Pavel Tvrdík, 2011 Unconstrained nonlinear optimization,

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

Symmetric Matrices and Eigendecomposition

Symmetric Matrices and Eigendecomposition Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

Multivariate Newton Minimanization

Multivariate Newton Minimanization Multivariate Newton Minimanization Optymalizacja syntezy biosurfaktantu Rhamnolipid Rhamnolipids are naturally occuring glycolipid produced commercially by the Pseudomonas aeruginosa species of bacteria.

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

Introduction to gradient descent

Introduction to gradient descent 6-1: Introduction to gradient descent Prof. J.C. Kao, UCLA Introduction to gradient descent Derivation and intuitions Hessian 6-2: Introduction to gradient descent Prof. J.C. Kao, UCLA Introduction Our

More information

Gradient Methods Using Momentum and Memory

Gradient Methods Using Momentum and Memory Chapter 3 Gradient Methods Using Momentum and Memory The steepest descent method described in Chapter always steps in the negative gradient direction, which is orthogonal to the boundary of the level set

More information

Math 118, Fall 2014 Final Exam

Math 118, Fall 2014 Final Exam Math 8, Fall 4 Final Exam True or false Please circle your choice; no explanation is necessary True There is a linear transformation T such that T e ) = e and T e ) = e Solution Since T is linear, if T

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

II. An Application of Derivatives: Optimization

II. An Application of Derivatives: Optimization Anne Sibert Autumn 2013 II. An Application of Derivatives: Optimization In this section we consider an important application of derivatives: finding the minimum and maximum points. This has important applications

More information

AB-267 DYNAMICS & CONTROL OF FLEXIBLE AIRCRAFT

AB-267 DYNAMICS & CONTROL OF FLEXIBLE AIRCRAFT FLÁIO SILESTRE DYNAMICS & CONTROL OF FLEXIBLE AIRCRAFT LECTURE NOTES LAGRANGIAN MECHANICS APPLIED TO RIGID-BODY DYNAMICS IMAGE CREDITS: BOEING FLÁIO SILESTRE Introduction Lagrangian Mechanics shall be

More information

Unconstrained Multivariate Optimization

Unconstrained Multivariate Optimization Unconstrained Multivariate Optimization Multivariate optimization means optimization of a scalar function of a several variables: and has the general form: y = () min ( ) where () is a nonlinear scalar-valued

More information

NUMERICAL MATHEMATICS AND COMPUTING

NUMERICAL MATHEMATICS AND COMPUTING NUMERICAL MATHEMATICS AND COMPUTING Fourth Edition Ward Cheney David Kincaid The University of Texas at Austin 9 Brooks/Cole Publishing Company I(T)P An International Thomson Publishing Company Pacific

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Optimality Conditions

Optimality Conditions Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.

More information

Lecture 5 : Projections

Lecture 5 : Projections Lecture 5 : Projections EE227C. Lecturer: Professor Martin Wainwright. Scribe: Alvin Wan Up until now, we have seen convergence rates of unconstrained gradient descent. Now, we consider a constrained minimization

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Outline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems

Outline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Gradient Descent. Sargur Srihari

Gradient Descent. Sargur Srihari Gradient Descent Sargur srihari@cedar.buffalo.edu 1 Topics Simple Gradient Descent/Ascent Difficulties with Simple Gradient Descent Line Search Brent s Method Conjugate Gradient Descent Weight vectors

More information

Introduction to unconstrained optimization - direct search methods

Introduction to unconstrained optimization - direct search methods Introduction to unconstrained optimization - direct search methods Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Structure of optimization methods Typically Constraint handling converts the

More information

Rotational Motion. Chapter 4. P. J. Grandinetti. Sep. 1, Chem P. J. Grandinetti (Chem. 4300) Rotational Motion Sep.

Rotational Motion. Chapter 4. P. J. Grandinetti. Sep. 1, Chem P. J. Grandinetti (Chem. 4300) Rotational Motion Sep. Rotational Motion Chapter 4 P. J. Grandinetti Chem. 4300 Sep. 1, 2017 P. J. Grandinetti (Chem. 4300) Rotational Motion Sep. 1, 2017 1 / 76 Angular Momentum The angular momentum of a particle with respect

More information

1 Numerical optimization

1 Numerical optimization Contents Numerical optimization 5. Optimization of single-variable functions.............................. 5.. Golden Section Search..................................... 6.. Fibonacci Search........................................

More information

Functions of Several Variables

Functions of Several Variables Functions of Several Variables The Unconstrained Minimization Problem where In n dimensions the unconstrained problem is stated as f() x variables. minimize f()x x, is a scalar objective function of vector

More information

Machine Learning CS 4900/5900. Lecture 03. Razvan C. Bunescu School of Electrical Engineering and Computer Science

Machine Learning CS 4900/5900. Lecture 03. Razvan C. Bunescu School of Electrical Engineering and Computer Science Machine Learning CS 4900/5900 Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Machine Learning is Optimization Parametric ML involves minimizing an objective function

More information

Basics of Calculus and Algebra

Basics of Calculus and Algebra Monika Department of Economics ISCTE-IUL September 2012 Basics of linear algebra Real valued Functions Differential Calculus Integral Calculus Optimization Introduction I A matrix is a rectangular array

More information

PHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review

PHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review 1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body - a system of point particles fixed in space i r ij j subject to a holonomic constraint:

More information

10.34 Numerical Methods Applied to Chemical Engineering Fall Quiz #1 Review

10.34 Numerical Methods Applied to Chemical Engineering Fall Quiz #1 Review 10.34 Numerical Methods Applied to Chemical Engineering Fall 2015 Quiz #1 Review Study guide based on notes developed by J.A. Paulson, modified by K. Severson Linear Algebra We ve covered three major topics

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

ECE 595, Section 10 Numerical Simulations Lecture 7: Optimization and Eigenvalues. Prof. Peter Bermel January 23, 2013

ECE 595, Section 10 Numerical Simulations Lecture 7: Optimization and Eigenvalues. Prof. Peter Bermel January 23, 2013 ECE 595, Section 10 Numerical Simulations Lecture 7: Optimization and Eigenvalues Prof. Peter Bermel January 23, 2013 Outline Recap from Friday Optimization Methods Brent s Method Golden Section Search

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

Positive Definite Matrix

Positive Definite Matrix 1/29 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Positive Definite, Negative Definite, Indefinite 2/29 Pure Quadratic Function

More information

Chapter 4: Interpolation and Approximation. October 28, 2005

Chapter 4: Interpolation and Approximation. October 28, 2005 Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

SOLUTIONS TO THE FINAL EXAM. December 14, 2010, 9:00am-12:00 (3 hours)

SOLUTIONS TO THE FINAL EXAM. December 14, 2010, 9:00am-12:00 (3 hours) SOLUTIONS TO THE 18.02 FINAL EXAM BJORN POONEN December 14, 2010, 9:00am-12:00 (3 hours) 1) For each of (a)-(e) below: If the statement is true, write TRUE. If the statement is false, write FALSE. (Please

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Optimization Methods

Optimization Methods Optimization Methods Categorization of Optimization Problems Continuous Optimization Discrete Optimization Combinatorial Optimization Variational Optimization Common Optimization Concepts in Computer Vision

More information

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS Math 5B Final Exam Review Session Spring 5 SOLUTIONS Problem Solve x x + y + 54te 3t and y x + 4y + 9e 3t λ SOLUTION: We have det(a λi) if and only if if and 4 λ only if λ 3λ This means that the eigenvalues

More information

Trajectory-based optimization

Trajectory-based optimization Trajectory-based optimization Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2012 Emo Todorov (UW) AMATH/CSE 579, Winter 2012 Lecture 6 1 / 13 Using

More information

Math (P)refresher Lecture 8: Unconstrained Optimization

Math (P)refresher Lecture 8: Unconstrained Optimization Math (P)refresher Lecture 8: Unconstrained Optimization September 2006 Today s Topics : Quadratic Forms Definiteness of Quadratic Forms Maxima and Minima in R n First Order Conditions Second Order Conditions

More information

Statistics 580 Optimization Methods

Statistics 580 Optimization Methods Statistics 580 Optimization Methods Introduction Let fx be a given real-valued function on R p. The general optimization problem is to find an x ɛ R p at which fx attain a maximum or a minimum. It is of

More information

The Steepest Descent Algorithm for Unconstrained Optimization

The Steepest Descent Algorithm for Unconstrained Optimization The Steepest Descent Algorithm for Unconstrained Optimization Robert M. Freund February, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 1 Steepest Descent Algorithm The problem

More information

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes) AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.

More information

Alternative Expressions for the Velocity Vector Velocity restricted to the vertical plane. Longitudinal Equations of Motion

Alternative Expressions for the Velocity Vector Velocity restricted to the vertical plane. Longitudinal Equations of Motion Linearized Longitudinal Equations of Motion Robert Stengel, Aircraft Flig Dynamics MAE 33, 008 Separate solutions for nominal and perturbation flig paths Assume that nominal path is steady and in the vertical

More information

Partial Derivatives. w = f(x, y, z).

Partial Derivatives. w = f(x, y, z). Partial Derivatives 1 Functions of Several Variables So far we have focused our attention of functions of one variable. These functions model situations in which a variable depends on another independent

More information

Unconstrained Optimization

Unconstrained Optimization 1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation

More information

Line Search Methods for Unconstrained Optimisation

Line Search Methods for Unconstrained Optimisation Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

4 Newton Method. Unconstrained Convex Optimization 21. H(x)p = f(x). Newton direction. Why? Recall second-order staylor series expansion:

4 Newton Method. Unconstrained Convex Optimization 21. H(x)p = f(x). Newton direction. Why? Recall second-order staylor series expansion: Unconstrained Convex Optimization 21 4 Newton Method H(x)p = f(x). Newton direction. Why? Recall second-order staylor series expansion: f(x + p) f(x)+p T f(x)+ 1 2 pt H(x)p ˆf(p) In general, ˆf(p) won

More information

Robot Control Basics CS 685

Robot Control Basics CS 685 Robot Control Basics CS 685 Control basics Use some concepts from control theory to understand and learn how to control robots Control Theory general field studies control and understanding of behavior

More information

x + ye z2 + ze y2, y + xe z2 + ze x2, z and where T is the

x + ye z2 + ze y2, y + xe z2 + ze x2, z and where T is the 1.(8pts) Find F ds where F = x + ye z + ze y, y + xe z + ze x, z and where T is the T surface in the pictures. (The two pictures are two views of the same surface.) The boundary of T is the unit circle

More information

Krzysztof Tesch. Continuous optimisation algorithms

Krzysztof Tesch. Continuous optimisation algorithms Krzysztof Tesch Continuous optimisation algorithms Gdańsk 16 GDAŃSK UNIVERSITY OF TECHNOLOGY PUBLISHERS CHAIRMAN OF EDITORIAL BOARD Janusz T. Cieśliński REVIEWER Krzysztof Kosowski COVER DESIGN Katarzyna

More information

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by: Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion

More information

Matrix Derivatives and Descent Optimization Methods

Matrix Derivatives and Descent Optimization Methods Matrix Derivatives and Descent Optimization Methods 1 Qiang Ning Department of Electrical and Computer Engineering Beckman Institute for Advanced Science and Techonology University of Illinois at Urbana-Champaign

More information

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2

EC /11. Math for Microeconomics September Course, Part II Problem Set 1 with Solutions. a11 a 12. x 2 LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 2010/11 Math for Microeconomics September Course, Part II Problem Set 1 with Solutions 1. Show that the general

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2

More information

Lagrange Multipliers

Lagrange Multipliers Lagrange Multipliers (Com S 477/577 Notes) Yan-Bin Jia Nov 9, 2017 1 Introduction We turn now to the study of minimization with constraints. More specifically, we will tackle the following problem: minimize

More information

Robotics 1 Inverse kinematics

Robotics 1 Inverse kinematics Robotics 1 Inverse kinematics Prof. Alessandro De Luca Robotics 1 1 Inverse kinematics what are we looking for? direct kinematics is always unique; how about inverse kinematics for this 6R robot? Robotics

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information