Minimization of Static Cost Functions


Minimization of Static Cost Functions
Robert Stengel, Optimal Control and Estimation, MAE 546, Princeton University, 2018
- J = static cost function with constant control parameter vector, u
- Conditions for a minimum of J with respect to u
- Analytical and numerical solutions
Copyright 2018 by Robert Stengel. All rights reserved. For educational use only.

Static Cost Function
- Minimum value of the cost, J, is fixed but may be unknown
- Corresponding control parameter, u*, also is fixed but possibly unknown
- Cost function preferably has a single minimum and locally smooth, monotonic contours away from the minimum

Vector Norms for Real Variables
- Norm = measure of the length or magnitude of a vector, x: a scalar quantity
  x = [x_1 x_2 ... x_n]^T, dim(x) = n x 1
- Taxicab or Manhattan norm: L_1 norm = ||x||_1 = sum_{i=1}^n |x_i|
- Euclidean or quadratic norm: L_2 norm = ||x||_2 = (x^T x)^{1/2} = (x_1^2 + x_2^2 + ... + x_n^2)^{1/2}
- p norm: L_p norm = ||x||_p = (sum_{i=1}^n |x_i|^p)^{1/p}
- Infinity norm: L_inf norm = ||x||_inf = |x_i|_max

p Norm (L_p Norm) Example
- ||x||_p = (sum_{i=1}^n |x_i|^p)^{1/p} evaluated for a 15-dimensional vector [plot of L_p vs. p]
- As p increases, the norm approaches the value of the maximum component of the vector
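A minimal NumPy sketch (my addition, not part of the original slides) of the norms above; the example vector is arbitrary, chosen only to show the L_p norm approaching the largest component magnitude as p grows:

```python
import numpy as np

# Arbitrary example vector (the slide's 15-dimensional vector is not reproduced here)
x = np.array([3.0, -1.0, 4.0, 1.0, -5.0, 9.0, 2.0, -6.0,
              5.0, 3.0, -5.0, 8.0, 9.0, -7.0, 9.0])

print("L1 (taxicab)   :", np.sum(np.abs(x)))      # sum of absolute values
print("L2 (Euclidean) :", np.sqrt(x @ x))         # (x^T x)^(1/2)
for p in [1, 2, 4, 10, 50]:
    print(f"L{p:<2d} norm       :", np.sum(np.abs(x)**p)**(1.0 / p))
print("L-infinity     :", np.max(np.abs(x)))      # limit as p -> infinity
```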

Apples and Oranges Problem
- Suppose the elements of x represent quantities with different units, e.g., velocity (m/s), angle (rad), temperature (K)
- One solution: a vector, y, normalized by the ranges of the elements of x:
  y_i = d_i x_i = x_i / (x_i,max - x_i,min), i = 1, ..., n, i.e., y = D x with D = diag(d_1, ..., d_n)
- What is a reasonable definition for the norm?

Weighted Euclidean (Quadratic) Norm of x
||y||_2 = (y^T y)^{1/2} = (y_1^2 + y_2^2 + ... + y_m^2)^{1/2} = ||D x||_2 = (x^T D^T D x)^{1/2}
dim(y) = m x 1, dim(x) = n x 1, dim(D) = m x n
- If m = n, D is square, and D^T D is square and symmetric
- If D is diagonal, D^T D is diagonal
- If m ≠ n, D is not square, but D^T D is still square and symmetric
  Rank(D) <= m for n > m; Rank(D) <= n for n < m

Rank of Matrix D
- Maximal number of linearly independent rows or columns of D
- Order of the largest non-zero minor determinant of D
Examples:
D = [a b; c d; e f] (3 x 2): maximal rank = 2
D = [a b c; d e f] (2 x 3): maximal rank = 2

Quadratic Forms
x^T D^T D x = x^T Q x = quadratic form
D^T D = Q = defining matrix of the quadratic form; dim(Q) = n x n; Q is symmetric
x^T Q x is a scalar: (1 x n)(n x n)(n x 1) = (1 x 1)
Rank(Q) <= m for n > m; Rank(Q) <= n for n < m
Useful identity for the trace of the quadratic form:
x^T Q x = Tr(x^T Q x) = Tr(x x^T Q) = Tr(Q x x^T) = scalar

Why are Quadratic Forms Useful?
2 x 2 example, for symmetric Q:
J(x) = x^T Q x = [x_1 x_2] [q_11 q_12; q_21 q_22] [x_1; x_2]
     = q_11 x_1^2 + q_22 x_2^2 + (q_12 + q_21) x_1 x_2 = q_11 x_1^2 + q_22 x_2^2 + 2 q_12 x_1 x_2
- 2nd-order term of the Taylor series expansion of any vector function f(x):
  f(x_0 + Δx) ≈ f(x_0) + (∂f/∂x)|_{x=x_0} Δx + (1/2) Δx^T (∂^2f/∂x^2)|_{x=x_0} Δx + higher-order terms
- Gradient (as a row vector) is well defined everywhere:
  ∂J/∂x = [∂J/∂x_1  ∂J/∂x_2] = [2 q_11 x_1 + 2 q_12 x_2   2 q_22 x_2 + 2 q_12 x_1]
- Large values are weighted more heavily than small values
- Some terms can count more than others
- Coupling between terms can be considered
- An asymmetric Q can always be replaced by a symmetric Q

Definiteness

Definiteness of a Matrix
- If J = x^T Q x > 0 for all x ≠ 0, the scalar, J, is positive, and Q is a positive-definite matrix
- If J = x^T Q x >= 0 for all x ≠ 0, Q is a positive semi-definite matrix
- If J = x^T Q x < 0 for all x ≠ 0, Q is a negative-definite matrix

Positive-Definite Matrix
Q is positive-definite if
- All leading principal minor determinants are positive
- All eigenvalues are real and positive
3 x 3 example: Q = [q_11 q_12 q_13; q_21 q_22 q_23; q_31 q_32 q_33]
det(sI - Q) = s^3 + a_2 s^2 + a_1 s + a_0 = (s - λ_1)(s - λ_2)(s - λ_3), with λ_1, λ_2, λ_3 > 0
q_11 > 0; det[q_11 q_12; q_21 q_22] > 0; det[q_11 q_12 q_13; q_21 q_22 q_23; q_31 q_32 q_33] > 0
What's an eigenvalue?
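A small NumPy sketch (my addition) of the two tests just listed, checking leading principal minors and eigenvalues of a symmetric matrix; the test matrix Q is arbitrary:

```python
import numpy as np

def is_positive_definite(Q, tol=1e-12):
    """Check symmetry, leading principal minors, and eigenvalues of Q."""
    Q = np.asarray(Q, dtype=float)
    assert np.allclose(Q, Q.T), "Q must be symmetric"
    minors = [np.linalg.det(Q[:k, :k]) for k in range(1, Q.shape[0] + 1)]
    eigenvalues = np.linalg.eigvalsh(Q)      # real for symmetric Q
    positive = all(m > tol for m in minors) and bool(np.all(eigenvalues > tol))
    return positive, minors, eigenvalues

Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])              # arbitrary symmetric test matrix
pd, minors, eigenvalues = is_positive_definite(Q)
print("positive definite:", pd)
print("leading principal minors:", minors)
print("eigenvalues:", eigenvalues)
```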

Characteristic Polynomial
Characteristic polynomial of a matrix, F:
det(sI - F) = Δ(s) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0
            = s^n - Tr(F) s^{n-1} + ... + a_1 s + a_0 = scalar
where (sI - F) = [(s - f_11), -f_12, ..., -f_1n; -f_21, (s - f_22), ..., -f_2n; ...; -f_n1, -f_n2, ..., (s - f_nn)]  (n x n)

Eigenvalues
Characteristic equation of the matrix:
Δ(s) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 = 0 = (s - λ_1)(s - λ_2)(...)(s - λ_n)
λ_i are the solutions that set Δ(s) = 0; they are called the eigenvalues of F, the roots of the characteristic polynomial
a_{n-1} = -Tr(F), a_0 = Π_{i=1}^n (-λ_i)

Examples of Positive-Definite Matrices
- Diagonal matrix with positive elements
- Inertia matrix of a rigid body:
  I = ∫_Body [(y^2 + z^2), -xy, -xz; -xy, (x^2 + z^2), -yz; -xz, -yz, (x^2 + y^2)] dm
    = [I_xx, -I_xy, -I_xz; -I_xy, I_yy, -I_yz; -I_xz, -I_yz, I_zz]

Rotation (Direction Cosine) Matrix
y = H x
- Projections of the unit-vector components of one Cartesian reference frame on another
- Rotational orientation of one Cartesian reference frame with respect to another
H = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33], each element the cosine of the angle between the corresponding axes of the two frames

Euler Angles
- Body attitude measured with respect to an inertial frame
- Three-angle orientation expressed by a sequence of three orthogonal single-angle rotations from the inertial to the body frame:
  ψ: yaw angle, θ: pitch angle, φ: roll angle
H = [ cosθ cosψ,                     cosθ sinψ,                    -sinθ;
     -cosφ sinψ + sinφ sinθ cosψ,     cosφ cosψ + sinφ sinθ sinψ,    sinφ cosθ;
      sinφ sinψ + cosφ sinθ cosψ,    -sinφ cosψ + cosφ sinθ sinψ,    cosφ cosθ ]
det(H) = 1

Euclidean Norm and Rotational Transformation
Position of a particle in two coordinate frames with a common origin:
y = H x, x = H^{-1} y
- Orthonormal transformation [H^{-1} = H^T]
- Non-singular matrix [det(H) = 1 ≠ 0]
- Not a positive-definite matrix
||y||_2 = (y^T y)^{1/2} = (y_1^2 + y_2^2 + y_3^2)^{1/2} = ||Hx||_2 = (x^T H^T H x)^{1/2}
        = ||x||_2 = (x_1^2 + x_2^2 + x_3^2)^{1/2}, as H^T H = H^{-1} H = I_3 (positive-definite)

Conditions for a Static Minimum

The Static Minimization Problem
Find the value of a continuous control parameter, u, that minimizes a continuous scalar cost function, J
- Single control parameter:
  min_u J(u); dim(J) = 1, dim(u) = 1
  u* = argmin_u J(u) = argument that minimizes J
- Many control parameters:
  min_u J(u); dim(J) = 1, dim(u) = m x 1
  u* = argmin_u J(u)

Necessary Condition for Static Optimality
Single control:
dJ/du|_{u=u*} = 0, i.e., the slope is zero at the optimum point
Example: J = (u - 4)^2; dJ/du = 2(u - 4) = 0 when u* = 4
The optimum point is a stationary point

Necessary Condition for Static Optimality
Multiple controls:
∂J/∂u|_{u=u*} = [∂J/∂u_1  ∂J/∂u_2  ...  ∂J/∂u_m]|_{u=u*} = 0 (gradient, defined as a row vector)
i.e., all the slopes are concurrently zero at the optimum point
Example: J = (u_1 - 4)^2 + (u_2 - 8)^2
∂J/∂u_1 = 2(u_1 - 4) = 0 when u_1* = 4
∂J/∂u_2 = 2(u_2 - 8) = 0 when u_2* = 8
∂J/∂u|_{u=u*} = [0  0] at u* = [4  8]^T
The optimum point is a stationary point
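A brief numerical check (my addition) of the multiple-control example above, confirming that the gradient vanishes at u* = (4, 8) and not elsewhere:

```python
import numpy as np

def J(u):
    return (u[0] - 4.0)**2 + (u[1] - 8.0)**2

def gradient(u):
    # Analytical gradient from the slide, written as a row vector
    return np.array([2.0 * (u[0] - 4.0), 2.0 * (u[1] - 8.0)])

u_star = np.array([4.0, 8.0])
print("J(u*) =", J(u_star))                        # 0
print("dJ/du at u*      =", gradient(u_star))      # [0, 0] -> stationary point
print("dJ/du at (0, 0)  =", gradient(np.zeros(2))) # nonzero away from the optimum
```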

But the Slope Can Be Zero for More than One Reason
- Minimum
- Maximum
- Either
- Neither (inflection point)

But the Gradient Can Be Zero for More than One Reason
- Minimum
- Maximum
- Either
- Neither (saddle point)

Sufficient Condition for an Extremum
Single control:
- Minimum: satisfy the necessary condition plus d^2J/du^2|_{u=u*} > 0, i.e., the curvature is positive at the optimum point
  Example: J = (u - 4)^2; dJ/du = 2(u - 4); d^2J/du^2 = 2 > 0
- Maximum: satisfy the necessary condition plus d^2J/du^2|_{u=u*} < 0, i.e., the curvature is negative at the optimum point
  Example: J = -(u - 4)^2; dJ/du = -2(u - 4); d^2J/du^2 = -2 < 0

Sufficient Condition for a Local Minimum
Multiple controls: satisfy the necessary condition plus a positive-definite Hessian matrix,
∂^2J/∂u^2|_{u=u*} = [∂^2J/∂u_1^2, ∂^2J/∂u_1∂u_2, ..., ∂^2J/∂u_1∂u_m;
                     ∂^2J/∂u_2∂u_1, ∂^2J/∂u_2^2, ..., ∂^2J/∂u_2∂u_m;
                     ...;
                     ∂^2J/∂u_m∂u_1, ∂^2J/∂u_m∂u_2, ..., ∂^2J/∂u_m^2]|_{u=u*} > 0
i.e., J must be convex for a minimum

Minimized Cost Function, J*
- Gradient is zero at the minimum
- Hessian matrix is positive-definite at the local minimum
J(u* + Δu) ≈ J(u*) + ΔJ(u*) + Δ^2J(u*) + ...
ΔJ(u*) = (∂J/∂u)|_{u=u*} Δu = 0: the first variation is zero at the minimum
Δ^2J(u*) = (1/2) Δu^T (∂^2J/∂u^2)|_{u=u*} Δu > 0: the second variation is positive at the minimum
so J(u* + Δu) ≈ J(u*) + Δ^2J(u*) + ...

Numerical Optimization

Numerical Optimization
- What if J is too complicated to find an analytical solution for the minimum, or J has multiple minima?
- Use numerical optimization to find local and/or global solutions

Two Approaches to Numerical Minimization
- Gradient-free search: evaluate J and search directly for min J
- Gradient-based search: evaluate ∂J/∂u and search for zero
  ∂J/∂u|_{u=u_0}: starting guess
  Iterate: ∂J/∂u|_{u=u_n} = ∂J/∂u|_{u=u_{n-1}} + Δ(∂J/∂u)|_{u=u_{n-1}}, such that ||∂J/∂u|_{u=u_n}|| < ||∂J/∂u|_{u=u_{n-1}}||

Gradient-Free Search [Based on J(u)]
- Exhaustive grid search
- Unstructured random search
- Structured random search

Gradient-Free Search: Structured Random Search
- Nelder-Mead (downhill simplex) algorithm
- Simulated annealing
- Genetic algorithm
- Particle-swarm optimization

Downhill Simplex Search (Nelder-Mead Algorithm)
Simplex: an N-dimensional figure in control space defined by
- N + 1 vertices
- (N + 1) N / 2 straight edges between vertices

Search Procedure for the Downhill Simplex Method
- Select a starting set of vertices, u(0)
- Evaluate the cost at each vertex
- Determine the vertex with the largest cost (e.g., J_1 at right)
- Project the search from this vertex through the middle of the opposite face (or edge for N = 2):
  reflection [equal distance along the direction], expansion [longer distance], contraction [shorter distance], or shrink [replace all but the best point with points contracted toward the best point]
- Evaluate the cost at the new vertex (e.g., J_4)
- Drop the J_1 vertex, and form a simplex with the new vertex
- Repeat until the cost is small enough (termination)
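As an illustration (my addition), SciPy's implementation of the Nelder-Mead method applied to the Rosenbrock test function that appears later in the supplemental material; the starting guess and tolerances are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(u):
    # Test function from the supplemental material: single minimum at (1, 1)
    return (1.0 - u[0])**2 + 100.0 * (u[1] - u[0]**2)**2

u0 = np.array([-1.2, 1.0])                      # arbitrary starting guess
result = minimize(rosenbrock, u0, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
print("u* ~", result.x, "  J* ~", result.fun)   # approaches (1, 1) and 0
```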

Simulated Annealing Algorithm
- Goal: find the global minimum among local minima
- Approach: randomized search, with convergence that emulates physical annealing
- Evaluate the cost, J_i, with u(i)
  Accept if J_i < J_{i-1}; accept with probability Pr(E) if J_i > J_{i-1}
- Probability distribution of the energy state, E (Boltzmann distribution): Pr(E) ∝ e^{-E/kT}
  k: Boltzmann's constant; T: temperature
- The algorithm's cooling schedule accepts many bad guesses at first, fewer as the iteration number, i, increases

Application of the Annealing Principle to Search
- If the cost decreases (J_2 < J_1), always accept the new point
- If the cost increases (J_2 > J_1), accept the new point with probability proportional to a generalized Boltzmann distribution: Pr(Accept) ∝ e^{-(J_2 - J_1)/kT}
- Occasional diversion from the convergent path is intended to prevent entrapment by a local minimum
- As the search progresses, decrease kT, making the probability of accepting a cost increase smaller
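A minimal sketch (my addition) of the acceptance rule and cooling schedule just described; the random-walk proposal, step size, and geometric cooling rate are assumptions chosen for illustration, and the cost is the earlier quadratic example:

```python
import numpy as np

def simulated_annealing(J, u0, temp0=1.0, cooling=0.99, step=0.1, iters=5000, seed=0):
    """Minimal SA sketch: random-walk proposals with Boltzmann-style acceptance."""
    rng = np.random.default_rng(seed)
    u, Ju, temp = np.array(u0, dtype=float), J(u0), temp0
    u_best, J_best = u.copy(), Ju
    for _ in range(iters):
        u_new = u + step * rng.standard_normal(u.shape)    # candidate point
        J_new = J(u_new)
        # Always accept improvements; accept increases with Boltzmann probability
        if J_new < Ju or rng.random() < np.exp(-(J_new - Ju) / temp):
            u, Ju = u_new, J_new
            if Ju < J_best:
                u_best, J_best = u.copy(), Ju
        temp *= cooling                                     # cooling schedule
    return u_best, J_best

u_best, J_best = simulated_annealing(lambda u: (u[0] - 4.0)**2 + (u[1] - 8.0)**2, [0.0, 0.0])
print(u_best, J_best)
```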

Broad Characteristics of Genetic Algorithms
- Search is based on a coding of the parameter set, not the parameters themselves
- Search evolves from a population of points
- Blind search, i.e., without a gradient
- Probabilistic transitions from one control state to another (using a random-number generator)
- Control parameters assembled as genes of a single chromosome strand (example: four 4-bit parameters = four genes)
  (John Holland)

Progression of a Genetic Algorithm
- The most fit chromosome evolves from a sequence of reproduction, crossover, and mutation
- Initialize the algorithm with N (even) random chromosomes, c_n (two 8-bit genes, or parameters, in the example)
- Evaluate the fitness, F_n, of each chromosome
- Compute the total fitness of the chromosome population: F_total = Σ_{n=1}^N F_n (bigger F is better)

Genetic Algorithm: Reproduction
- Reproduce N additional copies of the N originals with probabilistic weighting based on the relative fitness, F_n / F_total, of the originals (survival of the fittest)
- Roulette-wheel selection: Pr(c_n) = F_n / F_total
  Multiple copies of the most-fit chromosomes; no copies of the least-fit chromosomes
[Figure: starting set -> reproduced set]

Genetic Algorithm: Crossover
- Arrange the N new chromosomes in N/2 pairs chosen at random
- Interchange tails that are cut at random locations
[Figure: head | tail, head | tail]
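A compact sketch (my addition) of reproduction, crossover, and mutation on bit-string chromosomes; the encoding to the range [0, 10], population size, mutation rate, and a fitness built from the earlier quadratic cost are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, BITS = 20, 16                     # even population size; two 8-bit genes per chromosome

def decode(c):
    """Map two 8-bit genes to two parameters on [0, 10]."""
    g1 = int("".join(map(str, c[:8])), 2) / 255.0 * 10.0
    g2 = int("".join(map(str, c[8:])), 2) / 255.0 * 10.0
    return g1, g2

def fitness(c):
    u1, u2 = decode(c)
    return 1.0 / (1.0 + (u1 - 4.0)**2 + (u2 - 8.0)**2)      # bigger F is better

pop = rng.integers(0, 2, size=(N, BITS))
for generation in range(200):
    F = np.array([fitness(c) for c in pop])
    # Reproduction: roulette-wheel selection, Pr(c_n) = F_n / F_total
    parents = pop[rng.choice(N, size=N, p=F / F.sum())]
    rng.shuffle(parents)
    # Crossover: pair at random, interchange tails cut at a random location
    children = parents.copy()
    for i in range(0, N, 2):
        cut = rng.integers(1, BITS)
        children[i, cut:] = parents[i + 1, cut:]
        children[i + 1, cut:] = parents[i, cut:]
    # Mutation: flip an occasional bit
    flips = rng.random(children.shape) < 0.002
    children[flips] ^= 1
    pop = children

print("best parameters ~", decode(max(pop, key=fitness)))
```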

Create New Generations by Reproduction, Crossover, and Mutation Until the Solution Converges
- Chromosomes narrow in on the best values with advancing generations
- F_max and F_total increase with advancing generations
- MATLAB: minimizes cost rather than maximizing fitness

Particle Swarm Optimization
- Converse of the GA: uses multiple cost evaluations to guide the parameter search directly
- Stochastic, population-based algorithm
- Search for optimizing parameters modeled on the social behavior of groups that possess cognitive consistency
- Particles = parameter vectors; particles have position and velocity
- Projection of own best (local best)
- Knowledge of the swarm's best: neighborhood best, global best
[Photos: peregrine falcon hunting; murmuration of starlings in Rome]

Particle Swarm Optimization
Find min_u J(u) = J*(u*)
Jargon: argmin_u J(u) = u*, i.e., the argument of J that minimizes J
Recursive algorithm to find the best particle, or configuration of particle values
- u: parameter vector, the position of a particle
- v: velocity of u
- dim(u) = dim(v) for each of the swarm's particles

Particle Swarm Optimization
- Local best: RNG, downhill simplex, or SA step for each particle
- Neighborhood best: argmin of the closest n neighboring points
- Global best: argmin of all particles
v_k = b v_{k-1} + c (u_bestlocal,k-1 - u_{k-1}) + d (u_bestneighborhood,k-1 - u_{k-1}) + e (u_bestglobal,k-1 - u_{k-1})
u_k = u_{k-1} + a v_{k-1}
u_0: starting value from a random-number generator; v_0: zero
a, b, c, d, e: search tuning parameters
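A minimal sketch (my addition) of the position/velocity update above. For brevity the neighborhood-best term is omitted, and random weighting factors are added to the attraction terms, as is common in PSO implementations; the tuning parameters and bounds are arbitrary:

```python
import numpy as np

def pso(J, dim=2, n_particles=30, iters=300, a=1.0, b=0.7, c=1.5, e=1.5, seed=2):
    """Minimal particle-swarm sketch: local-best and global-best attraction only."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-10.0, 10.0, size=(n_particles, dim))   # u_0: random starting values
    v = np.zeros_like(u)                                     # v_0: zero
    u_best_local = u.copy()
    J_best_local = np.array([J(p) for p in u])
    u_best_global = u_best_local[np.argmin(J_best_local)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(u.shape), rng.random(u.shape)
        v = b * v + c * r1 * (u_best_local - u) + e * r2 * (u_best_global - u)
        u = u + a * v
        Ju = np.array([J(p) for p in u])
        improved = Ju < J_best_local
        u_best_local[improved] = u[improved]
        J_best_local[improved] = Ju[improved]
        u_best_global = u_best_local[np.argmin(J_best_local)].copy()
    return u_best_global, J(u_best_global)

u_star, J_star = pso(lambda u: (u[0] - 4.0)**2 + (u[1] - 8.0)**2)
print(u_star, J_star)
```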

Gradient Search

Gradient Search to Minimize a Quadratic Function
Cost function, gradient, and Hessian matrix of a quadratic function:
J = (1/2) (u - u*)^T R (u - u*), R > 0
  = (1/2) (u^T R u - u^T R u* - u*^T R u + u*^T R u*)
∂J/∂u = (u - u*)^T R = 0 when u = u*
∂^2J/∂u^2 = R = symmetric constant
Guess a starting value of u, u_0, and solve the gradient equation:
∂J/∂u|_{u=u_0} = (u_0 - u*)^T R = (u_0 - u*)^T (∂^2J/∂u^2)|_{u=u_0}
(u_0 - u*) = R^{-1} (∂J/∂u|_{u=u_0})^T
u* = u_0 - R^{-1} (∂J/∂u|_{u=u_0})^T   [row gradient transposed to a column]
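A small NumPy check (my addition) that the single-step formula recovers the minimizer of a quadratic cost; the weighting matrix, the "unknown" u*, and the starting guess are arbitrary:

```python
import numpy as np

R = np.array([[3.0, 1.0],
              [1.0, 2.0]])                    # arbitrary positive-definite weighting matrix
u_true = np.array([1.0, -2.0])                # the "unknown" minimizer u*

def grad(u):
    # Row gradient dJ/du = (u - u*)^T R, returned as a 1-D array
    return (u - u_true) @ R

u0 = np.array([10.0, 10.0])                   # starting guess
u_star = u0 - np.linalg.solve(R, grad(u0))    # u* = u_0 - R^{-1} (dJ/du)^T
print(u_star)                                 # recovers u_true in a single step
```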

Optimal Value Found in a Single Step
For a quadratic cost function:
u* = u_0 - R^{-1} (∂J/∂u|_{u=u_0})^T = u_0 - (∂^2J/∂u^2|_{u=u_0})^{-1} (∂J/∂u|_{u=u_0})^T
- The gradient establishes the general search direction
- The Hessian fine-tunes the direction and tells exactly how far to go

Newton-Raphson Iteration
- Many cost functions are not quadratic
- However, the surface is well approximated by a quadratic in the vicinity of the optimum, u*:
  J(u* + Δu) ≈ J(u*) + ΔJ(u*) + Δ^2J(u*) + ...
  ΔJ(u*) = (∂J/∂u)|_{u=u*} Δu = 0
  Δ^2J(u*) = (1/2) Δu^T (∂^2J/∂u^2)|_{u=u*} Δu > 0
- The optimal solution requires multiple steps

Newton-Raphson Iteration
The Newton-Raphson algorithm is an iterative search patterned after the quadratic search:
u_{k+1} = u_k - (∂^2J/∂u^2|_{u=u_k})^{-1} (∂J/∂u|_{u=u_k})^T

Difficulties with Newton-Raphson Iteration
u_{k+1} = u_k - (∂^2J/∂u^2|_{u=u_k})^{-1} (∂J/∂u|_{u=u_k})^T
Good when close to the optimum, but the Hessian matrix (i.e., the curvature) may be
- Difficult to estimate from local measurements of the cost
- Of the wrong sign (e.g., not positive-definite)
- A source of large errors in the incremental control variation
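A sketch (my addition) of the Newton-Raphson iteration applied to the Rosenbrock test function from the supplemental material, using analytic derivatives. This is the pure, unsafeguarded step; it happens to converge from this starting point, but the difficulties noted above can require the modifications discussed at the end of the lecture:

```python
import numpy as np

def grad(u):
    x, y = u
    return np.array([-2.0 * (1.0 - x) - 400.0 * x * (y - x**2),
                     200.0 * (y - x**2)])

def hess(u):
    x, y = u
    return np.array([[2.0 - 400.0 * (y - x**2) + 800.0 * x**2, -400.0 * x],
                     [-400.0 * x, 200.0]])

u = np.array([-1.2, 1.0])                        # arbitrary starting guess
for k in range(50):
    step = np.linalg.solve(hess(u), grad(u))     # (d2J/du2)^{-1} (dJ/du)^T
    u = u - step
    if np.linalg.norm(step) < 1e-10:
        break
print(k, u)                                      # converges to (1, 1)
```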

Steepest-Descent Algorithm Multiplies the Gradient by a Scalar Constant ("Gain")
u_{k+1} = u_k - ε (∂J/∂u|_{u=u_k})^T
- Replace the Hessian matrix by a scalar constant gain, ε
- The gradient is orthogonal to equal-cost contours

Choice of Steepest-Descent Gain
- If the gain is too small, convergence is slow
- If the gain is too large, convergence oscillates or may fail
- Solution: make the gain adaptive

Optimal Steepest-Descent Gain
Find the optimal gain by evaluating the cost, J, for intermediate solutions with the same ∂J/∂u
Adjustment rule for ε:
- Starting estimate, J_0
- First estimate, J_1, using ε
- Second estimate, J_2, using 2ε
- If J_2 > J_1, fit a quadratic through the three points to find ε*
- Else, third estimate, J_3, using 4ε, and so on
- Use the optimal gain, ε*, on each major iteration (a code sketch of this rule follows)
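A hedged sketch (my addition) of one "major iteration" of steepest descent with the gain-adjustment rule above: double the gain while the cost keeps falling, then fit a quadratic through the last three samples to estimate ε*; the starting gain and the example cost are arbitrary:

```python
import numpy as np

def adaptive_gain_step(J, grad, u, eps=1e-2, max_doublings=20):
    """One major iteration of steepest descent with a quadratic fit for the gain."""
    d = -grad(u)                                  # steepest-descent direction
    J0 = J(u)
    J1, J2 = J(u + eps * d), J(u + 2.0 * eps * d)
    # Keep doubling the gain while the cost keeps improving
    while J2 <= J1 and max_doublings > 0:
        eps *= 2.0
        J1, J2 = J2, J(u + 2.0 * eps * d)
        max_doublings -= 1
    # Quadratic fit through (0, J0), (eps, J1), (2*eps, J2) gives the estimated best gain
    denom = J0 - 2.0 * J1 + J2
    eps_star = eps * (3.0 * J0 - 4.0 * J1 + J2) / (2.0 * denom) if abs(denom) > 1e-15 else eps
    return u + eps_star * d

# Example on the earlier quadratic cost J = (u1 - 4)^2 + (u2 - 8)^2
J = lambda u: (u[0] - 4.0)**2 + (u[1] - 8.0)**2
grad = lambda u: np.array([2.0 * (u[0] - 4.0), 2.0 * (u[1] - 8.0)])
u = np.zeros(2)
for _ in range(10):
    u = adaptive_gain_step(J, grad, u)
print(u)                                          # approaches (4, 8)
```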

Equality Constraints

Static Cost Functions with Equality Constraints
Minimize J(ũ), subject to c(ũ) = 0
dim(c) = n x 1, dim(ũ) = (m + n) x 1

Two Approaches to Optimization with a Constraint
1. Use the constraint to reduce the control dimension
   Example: ũ = [u_1; u_2]; min_{u_1, u_2} J(ũ) subject to c(ũ) = c(u_1, u_2) = 0
   If u_2 = fcn(u_1), then J(ũ) = J(u_1, u_2) = J(u_1, fcn(u_1)) = J(u_1)
2. Augment the cost function to recognize the constraint
   J_A(ũ) = J(ũ) + λ^T c(ũ)
   The Lagrange multiplier, λ, is an unknown constant; dim(λ) = dim(c) = n x 1
   Warning: the Lagrange multiplier, λ, is not the same as an eigenvalue, λ

Solution Example: First Approach
Cost function: J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40
Constraint: c = u_2 - u_1 - 2 = 0, so u_2 = u_1 + 2

Solution Example: Reduced Control Dimension
Cost function and gradient with the substitution:
J = u_1^2 - 2 u_1 u_2 + 3 u_2^2 - 40 = u_1^2 - 2 u_1 (u_1 + 2) + 3 (u_1 + 2)^2 - 40 = 2 u_1^2 + 8 u_1 - 28
∂J/∂u_1 = 4 u_1 + 8 = 0
Optimal solution: u_1* = -2, u_2* = 0, J* = -36
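A brief SymPy check (my addition) of the reduced-dimension solution above:

```python
import sympy as sp

u1, u2 = sp.symbols("u1 u2", real=True)
J = u1**2 - 2*u1*u2 + 3*u2**2 - 40
constraint = sp.Eq(u2 - u1 - 2, 0)

# Substitute the constraint u2 = u1 + 2 and minimize over u1 alone
J_reduced = sp.expand(J.subs(u2, sp.solve(constraint, u2)[0]))
u1_star = sp.solve(sp.diff(J_reduced, u1), u1)[0]
u2_star = u1_star + 2
print(J_reduced)                                        # 2*u1**2 + 8*u1 - 28
print(u1_star, u2_star, J_reduced.subs(u1, u1_star))    # -2, 0, -36
```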

Solution: Second Approach
Partition ũ into a state, x, and a control, u, such that
dim(x) = n x 1 = dim(c), dim(u) = m x 1, ũ = [x; u]
Add the constraint to the cost function, weighted by the Lagrange multiplier, λ:
J_A(ũ) = J(ũ) + λ^T c(ũ), i.e., J_A(x, u) = J(x, u) + λ^T c(x, u)
c(x, u) is required to be zero when J_A is a minimum

Solution: Adjoin the Constraint with a Lagrange Multiplier
Gradients with respect to x, u, and λ are zero at the optimum point:
∂J_A/∂x = ∂J/∂x + λ^T ∂c/∂x = 0
∂J_A/∂u = ∂J/∂u + λ^T ∂c/∂u = 0
∂J_A/∂λ = c^T = 0

Simultaneous Solutions for State and Control
(2n + m) values must be found: (x, u, λ)
- First equation: find the optimal Lagrange multiplier (n scalar equations)
  λ*^T = -(∂J/∂x)(∂c/∂x)^{-1}  [row], or  λ* = -[(∂c/∂x)^{-1}]^T (∂J/∂x)^T  [column]
- Second and third equations: specify the state and control [(n + m) scalar equations]
  ∂J/∂u + λ*^T ∂c/∂u = 0
  c(x, u) = 0

Solution Example: Lagrange Multiplier
Cost function: J = u^2 - 2xu + 3x^2 - 40
Constraint: c = x - u - 2 = 0
Partial derivatives:
∂J/∂x = -2u + 6x, ∂J/∂u = 2u - 2x
∂c/∂x = 1, ∂c/∂u = -1

Solution Example: Lagrange Multiplier
- From the first equation: λ* = 2u - 6x
- From the second equation: (2u - 2x) + (2u - 6x)(-1) = 4x = 0, so x = 0
- From the constraint: u = -2
Optimal solution: x* = 0, u* = -2, J* = -36

Next Time: Principles for Optimal Control of Dynamic Systems
Reading: OCE, Sections 3.1, 3.2
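A short SymPy check (my addition) of the Lagrange-multiplier solution above, solving the three stationarity conditions simultaneously:

```python
import sympy as sp

x, u, lam = sp.symbols("x u lambda", real=True)
J = u**2 - 2*x*u + 3*x**2 - 40
c = x - u - 2

# Stationarity of the augmented cost J_A = J + lambda*c with respect to x, u, and lambda
equations = [sp.diff(J + lam * c, var) for var in (x, u, lam)]
solution = sp.solve(equations, (x, u, lam), dict=True)[0]
print(solution)                       # {x: 0, u: -2, lambda: -4}
print(J.subs(solution))               # -36
```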

Supplemental Material

Infinity Norm
L_p norm = ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p}, x = [x_1 x_2 ... x_n]^T, dim(x) = n x 1
As p → ∞, the norm is the value of the maximum component:
lim_{p→∞} (Σ_{i=1}^n |x_i|^p)^{1/p} = |x_i|_max = L_∞ norm
Also called the supremum norm, Chebyshev norm, or uniform norm
L_∞ norm = ||x||_∞ = max{|x_1|, |x_2|, ..., |x_n|}

Gradient-Free vs. Gradient-Based Searches
- J is a scalar; ∂J/∂u is a vector
- J provides no search direction; ∂J/∂u indicates a feasible search direction
- A gradient-free search may provide a global minimum; a gradient-based search defines a local minimum

Numerical Example
Cost function and derivatives:
J = (1/2)(u - u*)^T R (u - u*), R > 0, with R = [1 2; 2 9]
First guess at the optimal control (details of the actual cost function are unknown): u_0 = [4; 7]
Derivatives are evaluated at the starting point, giving ∂J/∂u|_{u=u_0} and ∂^2J/∂u^2|_{u=u_0} = R
Solution from the starting point:
u* = u_0 - R^{-1} (∂J/∂u|_{u=u_0})^T, with R^{-1} = [9/5 -2/5; -2/5 1/5]

Conjugate Gradient Algorithm
First step: u_1(t) = u_k(t) - K (∂H/∂u(t))_0^T
Calculate the ratio of integrated gradient magnitudes squared:
b = ∫_{t_0}^{t_f} (∂H/∂u(t))_1 (∂H/∂u(t))_1^T dt / ∫_{t_0}^{t_f} (∂H/∂u(t))_0 (∂H/∂u(t))_0^T dt
Calculate the gradient of the improved trajectory, (∂H/∂u(t))_1
Second step: u_{k+1}(t) = u_2(t) = u_1(t) - K [(∂H/∂u(t))_1 + b (∂H/∂u(t))_0]^T

Rosenbrock Function
Typical test function for numerical optimization algorithms:
J(u_1, u_2) = (1 - u_1)^2 + 100 (u_2 - u_1^2)^2
One minimum

How Many Maxima/Minima Does the Mexican Hat Have?
z = sinc(R) = sin(R)/R, with R = (u^T u)^{1/2}
Apply the static optimality conditions:
∂J/∂u|_{u=u*} = [∂J/∂u_1  ∂J/∂u_2  ...  ∂J/∂u_m]|_{u=u*} = 0
∂^2J/∂u^2|_{u=u*} (Hessian matrix) > 0 or < 0
Prior example: one maximum

Physical Annealing
- Produce a strong, hard object made of crystalline material
- High temperature allows molecules to redistribute to relieve stress and remove dislocations
- Gradual cooling allows large, strong crystals to form
- Low-temperature working (e.g., squeezing, bending, drawing, shearing, and hammering) produces the desired crystal structure and shape
[Figures: turbojet engine turbines, turbine blade casting, single-crystal turbine blade]

Combination of Simulated Annealing with the Downhill Simplex Method
- Introduce a random "wobble" to the simplex search: add random components to the costs evaluated at the vertices
  J_1,SA = J_1 + ΔJ_1(rng), J_2,SA = J_2 + ΔJ_2(rng), J_3,SA = J_3 + ΔJ_3(rng), ...
- Project the new vertex as before, based on the modified costs
- With large random components, this becomes a random search
- Decrease the random components on a cooling schedule
- Same annealing strategy as before, with acceptance probability e^{-(J_2 - J_1)/kT}:
  if the cost decreases (J_2 < J_1), always accept the new point; if the cost increases (J_2 > J_1), accept the new point probabilistically; as the search progresses, decrease kT
[Figure: SA "Mona Lisa"]

Genetic Coding: Replication, Recombination, and Mutation of Chromosomes

Crossover Creates a New Chromosome Population Containing Old Gene Sequences
[Figure: reproduced set -> crossover set]

Reproduction Eliminates the Least-Fit Chromosomes Probabilistically
[Figure: starting set -> reproduced set]

Genetic Algorithm: Mutation
- Flip a bit, 0 -> 1 or 1 -> 0, at random every 1,000 to 5,000 bits
[Figure: crossover set -> mutated set]

Comments on GA
- Short, fit genes tend to survive crossover
- The random location of crossover produces large and small variations in genes and interchanges genes between chromosomes
- Multiple copies of the best genes evolve
- Alternative implementations: real numbers rather than binary numbers; retention of elite chromosomes; clustering in fit regions to produce elites
[Figure: GA "Mona Lisa"]

Particle Swarm Optimization
- Particle Swarm Toolboxes

Comparison of Algorithms in a Caterpillar Gait-Training Example
- Performance measure: distance traveled
- Y. Bourquin, U. Sussex, BIRG

Generalized Direct Search Algorithm
u_{k+1}(t) = u_k(t) - K_k (∂J/∂u(t))_k^T
Choose optimal elements of K by sequential line search before recalculating the gradient:
K_k = diag(ε_1, ..., ε_m)_k

Ad Hoc Modifications to the Newton-Raphson Search
u_{k+1} = u_k - ε (∂^2J/∂u^2|_{u=u_k})^{-1} (∂J/∂u|_{u=u_k})^T, ε < 1
u_{k+1} = u_k - [diag(∂^2J/∂u_1^2, ..., ∂^2J/∂u_m^2)|_{u=u_k}]^{-1} (∂J/∂u|_{u=u_k})^T
u_{k+1} = u_k - (∂^2J/∂u^2|_{u=u_k} + K)^{-1} (∂J/∂u|_{u=u_k})^T, K > 0, diagonal
u_{k+1} = u_k - (∂^2J/∂u^2|_{u=u_k} + εI)^{-1} (∂J/∂u|_{u=u_k})^T
With varying ε, this is the Levenberg-Marquardt algorithm
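A hedged sketch (my addition) of the last modification above, a Levenberg-Marquardt-style step on the Rosenbrock function with a simple rule for varying ε (shrink it when the step lowers the cost, grow it otherwise); the adaptation factors are arbitrary choices:

```python
import numpy as np

def lm_newton_step(grad, hess, u, eps):
    """Newton-Raphson step with the (Hessian + eps*I) modification."""
    H = hess(u) + eps * np.eye(len(u))            # keeps the modified Hessian invertible
    return u - np.linalg.solve(H, grad(u))

J = lambda u: (1.0 - u[0])**2 + 100.0 * (u[1] - u[0]**2)**2

def grad(u):
    x, y = u
    return np.array([-2.0 * (1.0 - x) - 400.0 * x * (y - x**2), 200.0 * (y - x**2)])

def hess(u):
    x, y = u
    return np.array([[2.0 - 400.0 * (y - x**2) + 800.0 * x**2, -400.0 * x],
                     [-400.0 * x, 200.0]])

u, eps = np.array([-1.2, 1.0]), 1.0
for _ in range(200):
    u_new = lm_newton_step(grad, hess, u, eps)
    if J(u_new) < J(u):
        u, eps = u_new, max(eps * 0.5, 1e-12)     # accept the step, reduce damping
    else:
        eps *= 2.0                                # reject the step, increase damping
print(u)                                          # approaches (1, 1)
```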
