Numerical Optimization


Numerical Optimization

General Setup

Let f(.) be a function such that f(b): R^n -> R, where b is a vector of unknown parameters. In many cases, b will not have a closed-form solution. We estimate b by minimizing some loss function.

Popular loss function: a sum of squares. Let {y_t, x_t} be a set of measurements/constraints. That is, we fit f(.), in general a nice smooth function, to the data by solving:

  min_b Σ_t (y_t - f(x_t, b))^2,   or   min_b Σ_t ε_t^2   with   ε_t = y_t - f(x_t, b)

Finding a Minimum - Review

When f is twice differentiable, a local minimum b_min is characterized by:
1. ∇f(b_min) = 0 (f.o.c.)
2. h' H(b_min) h ≥ 0 for all small enough h (s.o.c.)

Usual approach: solve ∇f(b_min) = 0 for b_min and check that H(b_min) is positive definite.

Sometimes, an analytical solution to ∇f(b_min) = 0 is not possible or is hard to find. For these situations, a numerical solution is needed.

Notation: ∇f(.): gradient (of a scalar field). ∇: vector differential operator ("del").

Finding a Univariate Minimum

Finding an analytic minimum of a simple univariate function is sometimes easy given the usual f.o.c. and s.o.c.

Example: Quadratic function
  U(x) = 5x^2 - 4x + 2
  U'(x) = 10x - 4 = 0   =>   x* = 2/5
  U''(x) = 10 > 0       =>   x* = 2/5 is a minimum

Analytical solution (from the f.o.c.): x* = 0.4. The function is globally convex, so x* = 2/5 is a global minimum.

Note: To find x*, we find the zeroes of the first-derivative function.

Finding a Univariate Minimum

Minimization in R: We can easily use the popular R optimizer, optim, to find the minimum of U(x) -- in this case, not a very interesting application!

Example: Code in R
> f <- function(x) (5*x^2 - 4*x + 2)
> optim(10, f, method="Brent", lower=-10, upper=10)$par
[1] 0.4
> curve(f, from=-5, to=5)

Finding a Univariate Minimum

Straightforward transformations add little additional complexity:
  U(x) = exp(5x^2 - 4x + 2)
  U'(x) = (10x - 4) exp(5x^2 - 4x + 2) = 0   =>   x* = 2/5

Again, we get an analytical solution from the f.o.c. => x* = 2/5. Since U(.) is globally convex, x* = 2/5 is a global minimum.

Usual problems: discontinuities, unboundedness.

Finding a Univariate Minimum - Discontinuity

Be careful of discontinuities. We may need to restrict the range of x.

Example:  f(x) = (5x^2 - 4x + 2)/(x + 10)

This function has a discontinuity at a point in its range. If we restrict the search to points where x > -10, then the function is defined at all points.

  f'(x) = [(10x - 4)(x + 10) - (5x^2 - 4x + 2)]/(x + 10)^2 = 0

After restricting the range of x, we find an analytical solution as usual, i.e., by finding the zeroes of f'(x).

Finding a Univariate Minimum - Unboundedness

[Graph of an unbounded function and its derivative f'(x) omitted.]

Finding a Univariate Minimum - No Analytical Solution

So far, we have presented examples where an analytical or closed-form solution was easy to obtain from the f.o.c. In many situations, finding an analytical solution is not possible or is impractical; for example, functions that combine trigonometric terms like sin(x) with higher-order polynomial terms.

In these situations, we rely on numerical methods to find a solution. The most popular numerical methods are based on iterative methods. These methods provide an approximation to the exact solution, x*. We will concentrate on the details of these iterative methods.

Numerical Solutions: Iterative Algorithm

Example: We want to minimize f(x) = x^6 - 4x^5 - 2x^3 + 2x + 40. We use the flexible R minimizer, optim.

> f <- function(x) (x^6 - 4*x^5 - 2*x^3 + 2*x + 40)
> curve(f, from=-3, to=5)
> optim(0.5, f, method="Brent", lower=-10, upper=10)$par
> curve(f, from=-3, to=5)

We get the solution x*. optim offers many choices to do the iterative optimization. In our example, we used the method "Brent", a mixture of a bisection search and a secant method. We will discuss the details of both methods in the next slides.

Numerical Solutions: Iterative Algorithm

Old method: there is a Babylonian method (for square roots).

Iterative algorithms provide a sequence of approximations {x_0, x_1, ..., x_n} such that in the limit they converge to the exact solution, x*. Computing x_{k+1} from x_k is called one iteration of the algorithm. Each iteration typically requires one evaluation of the objective function f (or of f and f' evaluated at x_k).

An algorithm needs a convergence criterion (or, more generally, a stopping criterion). For example: stop the algorithm if |f(x_{k+1}) - f(x_k)| < pre-specified tolerance.

Iterative Algorithm

Goal of the algorithm: produce a sequence {x_k} = {x_0, x_1, ..., x_n} such that
  f(x_0) > f(x_1) > ... > f(x_n)

Algorithm: Sketch
- Start at an initial position x_0, usually an initial guess.
- Generate a sequence {x_k}, which hopefully converges to x*.
- {x_k} is generated according to an iteration function g, such that x_{k+1} = g(x_k).

Algorithm: Speed. The speed of the algorithm depends on:
- the cost (in flops) of evaluating f(x) (and, likely, f'(x));
- the number of iterations.

Iterative Algorithm

Characteristics of algorithms:
- We need to provide a subroutine to compute f and, likely, f' at x_k.
- The evaluation of f and f' can be expensive. For example, it may require a simulation or numerical integration.

Limitations of algorithms:
- There is no algorithm that guarantees finding all solutions.
- Most algorithms find at most one (local) solution.
- They need prior information from the user: an initial guess, an interval that contains a zero, etc.

Speed of Algorithm

Suppose x_k -> x* with f(x*) = 0 -- i.e., we find the roots of f. Q: How fast does x_k go to x*?

Error after k iterations:
- absolute error: |x_k - x*|
- relative error: |x_k - x*| / |x*|  (defined if x* ≠ 0)

The number of correct digits is given by -log10[|x_k - x*| / |x*|] (when it is well defined, i.e., x* ≠ 0 and |x_k - x*|/|x*| ≤ 1).

Speed of Algorithm - Rates of Convergence

Rates of convergence of a sequence x_k with limit x*:
- Linear convergence: there exists a c in (0, 1) such that |x_{k+1} - x*| ≤ c |x_k - x*| for sufficiently large k.
- R-linear convergence: there exist c in (0, 1) and M > 0 such that |x_k - x*| ≤ M c^k for sufficiently large k.
- Quadratic convergence: there exists a c > 0 such that |x_{k+1} - x*| ≤ c |x_k - x*|^2 for sufficiently large k.
- Superlinear convergence: there exists a sequence c_k, with c_k -> 0, such that |x_{k+1} - x*| ≤ c_k |x_k - x*| for sufficiently large k.

Speed of Algorithm - Interpretation

Assume x* ≠ 0. Let r_k = -log10[|x_k - x*| / |x*|]:
- r_k is the number of correct digits at iteration k.
- Linear convergence: we gain roughly -log10(c) correct digits per step: r_{k+1} ≥ r_k - log10(c).
- Quadratic convergence: for sufficiently large k, the number of correct digits roughly doubles in one step: r_{k+1} ≥ -log10(c |x*|) + 2 r_k.
- Superlinear convergence: the number of correct digits gained per step increases with k: r_{k+1} - r_k -> ∞.

Speed of Algorithm - Examples

Let x* = 1. The number of correct digits at iteration k can be approximated by r_k = -log10[|x_k - x*| / |x*|].

We define 3 sequences for x_k with different types of convergence:
1. Linear convergence:      x_{k+1} = 1 + 0.5 (x_k - 1)
2. Quadratic convergence:   x_{k+1} = 1 + (x_k - 1)^2
3. Superlinear convergence: x_{k+1} = 1 + (1/(k+1)) (x_k - 1)

For each sequence we calculate r_k for k = 0, 1, ..., 10 (an R sketch follows below).
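A minimal R sketch of this calculation, assuming the three update rules exactly as written above and a starting error of 0.5; it tracks the error e_k = x_k - x* directly (x* = 1) to avoid round-off once x_k gets very close to 1:

e_lin <- e_quad <- e_sup <- numeric(11)
e_lin[1] <- e_quad[1] <- e_sup[1] <- 0.5          # assumed starting error x_0 - x*
for (k in 1:10) {
  e_lin[k+1]  <- 0.5 * e_lin[k]                   # linear:      x_{k+1} = 1 + 0.5 (x_k - 1)
  e_quad[k+1] <- e_quad[k]^2                      # quadratic:   x_{k+1} = 1 + (x_k - 1)^2
  e_sup[k+1]  <- e_sup[k] / (k + 1)               # superlinear: x_{k+1} = 1 + (x_k - 1)/(k+1)
}
r <- function(e) -log10(abs(e))                   # correct digits, since x* = 1
round(cbind(linear = r(e_lin), quadratic = r(e_quad), superlinear = r(e_sup)), 2)

The printed columns show r_k growing by about 0.3 per step for the linear sequence, roughly doubling for the quadratic one, and accelerating for the superlinear one.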

Speed of Algorithm - Examples

Sequence 1: we gain roughly -log10(c) = 0.3 correct digits per step:
  (x_{k+1} - 1)/(x_k - 1) = 0.5 = c   (c = 0.5)

Sequence 2: r_k almost doubles at each step.

Sequence 3: the gain in r_k per step increases slowly, but it gets up to speed later:
  (x_{k+1} - 1)/(x_k - 1) = 1/(k + 1) -> 0

Convergence Criteria

An iterative algorithm needs a stopping rule. Ideally, it is stopped because the solution is sufficiently accurate. Several rules check for accuracy:
- X vector criterion: |x_{k+1} - x_k| < tolerance
- Function criterion: |f(x_{k+1}) - f(x_k)| < tolerance
- Gradient criterion: |f'(x_{k+1})| < tolerance

If none of the convergence criteria is met, the algorithm will stop by hitting the stated maximum number of iterations. This is not convergence! If a very large number of iterations is allowed, you may want to stop the algorithm if the solution diverges and/or cycles.

Numerical Derivatives

Many methods rely on the first derivative, f'(x). Example: Newton's method requires f'(x) and f''(x).

Newton's method algorithm: x_{k+1} = x_k - λ f'(x_k)/f''(x_k)

It is best to use the analytical expression for f'(x). But it may not be easy to calculate and/or it may be expensive to evaluate. In these situations it may be appropriate to approximate f'(x) numerically by using the difference quotient:

  f'(x) ≈ [f(x + h) - f(x)]/h,   for small h.

Then, pick a small h and compute f' at x_k using:
  f'(x_k) ≈ [f(x_k + h) - f(x_k)]/h

Numerical Derivatives - Graphical View

Approximation errors will occur. For example, using the traditional definition, the secant line differs from the slope of the tangent line by an amount that is approximately proportional to h.

Numerical Derivatives - Two-point Approximation

We can approximate f'(x) with another, two-point (central) approximation:

  f'(x) ≈ [f(x + h) - f(x - h)]/(2h)

Errors may cancel out on both sides. For small values of h this is a more accurate approximation to the tangent line than the one-sided estimate.

Q: What is an appropriate h? Floating-point considerations (and cancellation errors) point to choosing an h that is not too small. On the other hand, if h is too large, the first derivative is only a secant line. A popular choice is sqrt(ε), where ε is of the order of 10^-16. If working with returns (in %), a somewhat larger h is fine.

Source of Errors - Computation

We use a computer to execute iterative algorithms. Recall that iterative algorithms provide a sequence of approximations that in the limit converge to the exact solution, x*. Errors will occur.

(1) Round-off errors: a practical problem, because a computer cannot represent all real numbers exactly. Example: evaluate two expressions for the same function at x = 0.0002:
  f(x) = [1 - cos(x)]/x^2  and  g(x) = [sin(x)]^2/[x^2 (1 + cos(x))]
Mathematically f = g, but the computed values at x = 0.0002 differ because of round-off.

(2) Cancellation errors: a and b should be the same (or almost the same), but errors during the calculations make them different. When we subtract them, the difference is no longer zero.
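A quick R illustration of the one-sided versus two-point formulas (a sketch only; the test function f(x) = x^2 and the evaluation point are assumptions made for illustration):

f  <- function(x) x^2
x0 <- 1                                         # true derivative at x0 is 2
h  <- sqrt(.Machine$double.eps)                 # popular choice, about 1.5e-8
fwd <- (f(x0 + h) - f(x0)) / h                  # one-sided difference quotient
ctr <- (f(x0 + h) - f(x0 - h)) / (2 * h)        # two-point (central) approximation
c(forward = fwd, central = ctr, exact = 2)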

Source of Errors - Truncation and Propagation

(3) Truncation errors: they occur when the iterative method is terminated, usually after a convergence criterion is met.

(4) Propagation of errors: once an error is generated, it will propagate. This problem can arise in the calculation of standard errors, confidence intervals, etc. Example: when we numerically calculate a second derivative, we use the numerically calculated first derivative. We potentially have a compounded error: an approximation based on another approximation.

Source of Errors - Data

Ill-conditioned problem: this is a data problem. Any small error or change in the data produces a large error in the solution. There is no clear definition of what a "large error" means (absolute or relative, norm used, etc.). Usually, the condition number of a function relative to the data (in general, a matrix X), κ(X), is used to evaluate this problem. A small κ(X), say close to 1, is good.

Example: a solution to a set of linear equations, Ax = b.
Now, we change b to (b + Δb) => the new solution is (x + Δx), which satisfies A(x + Δx) = (b + Δb).
- The change in x is Δx = A^{-1} Δb.
- We say the equations are well-conditioned if a small Δb results in a small Δx. (The conditioning of the solution depends on A.)

Source of Errors - Algorithm

Numerical stability refers to the accuracy of an algorithm in the presence of small errors, say a round-off error. Again, there is no clear definition of "accurate" or "small". Examples: a small error grows during the calculation and significantly affects the solution; a small change in initial values produces a different solution.

Line Search

Line search techniques are simple optimization algorithms for one-dimensional minimization problems. They are considered the backbone of non-linear optimization algorithms. Typically, these techniques search a bracketed interval. Often, unimodality is assumed.

Usual steps:
- Start with a bracket [x_L, x_R] such that the minimum x* lies inside.
- Evaluate f(x) at two points inside the bracket.
- Reduce the bracket in the descent direction of f(x).
- Repeat the process.

Note: Line search techniques can be applied to any function; differentiability and continuity are not essential.

Line Search - Basic Line Search (Exhaustive Search)

1. Start with [x_L, x_R] and divide it into T intervals.
2. Evaluate f in each interval (T evaluations of f). Choose the interval where f is smallest.
3. Set [x_L, x_R] as the endpoints of the chosen interval => back to 2. Continue until convergence.

Key step: sequential reduction of the brackets. The fewer evaluations of the function, the better for the line search algorithm. There are many techniques to reduce the brackets:
- Dichotomous (i.e., divide-in-2-parts) search
- Fibonacci series
- Golden section

Line Search: Dichotomous Bracketing

Very simple idea: divide the bracket in half in each iteration. Start at the middle point of the interval [a, b]: (a + b)/2. Then evaluate f(.) at two points near the middle, x_1 and x_2. Then, evaluate the function at x_1 and x_2, and move the interval in the direction of the lower value of the function.

The interval size after 1 iteration (and 2 evaluations): b_1 - a_1 ≈ (1/2)(b_0 - a_0).

Line Search: Dichotomous Bracketing - Steps

Steps (ε is the resolution):
0. Assume an interval [a, b].
1. Find x_1 = a + (b - a)/2 - ε/2 and x_2 = a + (b - a)/2 + ε/2.
2. Compare f(x_1) and f(x_2).
3. If f(x_1) < f(x_2), then eliminate x > x_2 and set b = x_2.
   If f(x_1) > f(x_2), then eliminate x < x_1 and set a = x_1.
   If f(x_1) = f(x_2), then pick another pair of points.
4. Continue until interval < 2ε (tolerance).

Line Search: Fibonacci Numbers

Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, ... That is, each number is the sum of the last two: F_n = F_{n-1} + F_{n-2} (F_0 = F_1 = 1).

The Fibonacci sequence becomes the basis for choosing sequentially N points such that the discrepancy |x_{k+1} - x_k| is minimized.

Q: How is the interval reduced? Start at the final interval, after K iterations, and go backwards:
  b_{K-1} - a_{K-1} = 2 (b_K - a_K);  b_{K-2} - a_{K-2} = 3 (b_K - a_K);  ... ;  b_{K-J} - a_{K-J} = F_{J+1} (b_K - a_K)
so that b_0 - a_0 = F_{K+1} (b_K - a_K).

Line Search: Fibonacci Numbers

Let Δ = b - a. Then, evaluate f(x) at x_L = a + F_{k-2}(b - a)/F_k and x_R = a + F_{k-1}(b - a)/F_k. We keep the smaller functional value and its corresponding opposite end point.

Example: f(x) = x/(1 + x^2), x in [-0.6, 0.75]
  x_1 = -0.6 + (0.75 + 0.6)*1/3 = -0.15   =>  f(x_1) = -0.147
  x_2 = -0.6 + (0.75 + 0.6)*2/3 = 0.30    =>  f(x_2) = 0.275
New interval: x in [-0.6, 0.30]
  x_1 = -0.6 + (0.3 + 0.6)*2/5 = -0.24    =>  f(x_1) = -0.227
  x_2 = -0.6 + (0.3 + 0.6)*3/5 = -0.06    =>  f(x_2) = -0.060
New interval: x in [-0.6, -0.06]
Continue until bracket < tolerance to obtain x*.

Search Methods - Examples

[Diagrams comparing the bracket reductions of the dichotomous and Fibonacci searches omitted.]

Line Search: Golden Section

In the golden section, with segment lengths a and b (a > b, and the shorter piece a - b is discarded), you try to have
  b/(a - b) = a/b,   i.e.,   b^2 = a^2 - ab
Solving gives a = b (1 ± sqrt(5))/2, so a/b = 1.618 (the golden section ratio). Note that 1/1.618 = 0.618.

Note: lim_n F_{n-2}/F_n = 0.382 and lim_n F_{n-1}/F_n = 0.618. The golden section method uses a constant interval-reduction ratio of 0.618.

Line Search: Golden Section - Example

Initialize:
  x_1 = a + (b - a)*0.382;  f_1 = f(x_1)
  x_2 = a + (b - a)*0.618;  f_2 = f(x_2)
Loop:
  if f_1 > f_2 then
    a = x_1; x_1 = x_2; f_1 = f_2
    x_2 = a + (b - a)*0.618; f_2 = f(x_2)
  else
    b = x_2; x_2 = x_1; f_2 = f_1
    x_1 = a + (b - a)*0.382; f_1 = f(x_1)
  end if

Example: f(x) = x/(1 + x^2), x in [-0.6, 0.75]
  x_1 = -0.6 + (0.75 + 0.6)*0.382 = -0.084   =>  f(x_1) = -0.084
  x_2 = -0.6 + (0.75 + 0.6)*0.618 = 0.234    =>  f(x_2) = 0.222
New interval: x in [-0.6, 0.234]
  x_1 = -0.6 + (0.234 + 0.6)*0.382 = -0.281  =>  f(x_1) = -0.261
  x_2 = -0.6 + (0.234 + 0.6)*0.618 = -0.085  =>  f(x_2) = -0.084
New interval: x in [-0.6, -0.085]
Continue until bracket < tolerance to obtain x*.
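The loop above translates almost directly into R. This is a sketch only: the tolerance is an assumption, and the test function and interval are the ones used in the example as written above.

golden <- function(f, a, b, tol = 1e-6) {
  x1 <- a + (b - a) * 0.382;  f1 <- f(x1)
  x2 <- a + (b - a) * 0.618;  f2 <- f(x2)
  while ((b - a) > tol) {
    if (f1 > f2) {                    # minimum lies to the right: drop [a, x1]
      a <- x1; x1 <- x2; f1 <- f2
      x2 <- a + (b - a) * 0.618; f2 <- f(x2)
    } else {                          # minimum lies to the left: drop [x2, b]
      b <- x2; x2 <- x1; f2 <- f1
      x1 <- a + (b - a) * 0.382; f1 <- f(x1)
    }
  }
  (a + b) / 2
}
golden(function(x) x / (1 + x^2), -0.6, 0.75)   # shrinks towards the left end of the bracket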

Pure Line Search - Remarks

Since only f is evaluated, a line search is also called a 0th-order method.
- Line searches work best for unimodal functions.
- Line searches can be applied to any function. They work very well for discontinuous and non-differentiable functions.
- Very easy to program. Robust.
- They are less efficient than higher-order methods, i.e., methods that evaluate f' and f''.
- Golden section is very simple and tends to work well (in general, fewer evaluations than most line search methods).

Line Search: Bisection

The bisection method involves shrinking an interval [a, b] known to contain the root (zero) of the function, using f'(x) (=> a 1st-order method). Bisection: the interval is replaced by either its left or right half at each step.

Method:
- Start with an interval [a, b] that satisfies f'(a) f'(b) < 0. Since f' is continuous, the interval contains at least one solution of f'(x) = 0.
- In each iteration, evaluate f' at the midpoint of the interval [a, b], f'((a + b)/2).
- Reduce the interval: depending on the sign of f', we replace a or b with the midpoint value, (a + b)/2. (The new interval still satisfies f'(a) f'(b) < 0.)

Note: after each iteration, the interval [a, b] is halved: |a_k - b_k| = (1/2)^k |a_0 - b_0|.

Line Search: Bisection - Example

Example: f(x) = (5x^2 - 4x + 2)/(x + 10). We restrict the range of x to x > -10.

Steps:
(1) Start with the interval [a = -8, b = 20]. The bisection of that interval is 6. At x = 6, f'(x = 6) is 2.88 > 0 (f increasing).
(1a) Exclude the subinterval [6, 20] from the search: new interval [a = -8, b = 6]. Bisection at -1.
(2) New interval [-8, 6]. At x = -1, f'(x = -1) < 0 (f decreasing).
(3) The new interval becomes [a = -1, b = 6]. Bisection at 2.5.
Continue until the interval < 2ε (tolerance).

Since at each iteration the interval is halved, |x_k - x*| ≤ (1/2)^k |a_0 - b_0| => R-linear convergence.

Line Search: Bisection - Example

[Graph of the function f(x) = (5x^2 - 4x + 2)/(x + 10) omitted.]

Line Search: Bisection - Example

[Iteration table omitted: columns Lower, Upper, Midpoint, Gradient, and the endpoint replaced (Upper/Lower) at each step.]

We can calculate the number of iterations k required for convergence, say with ε = 0.002:
  k ≥ log[(b_0 - a_0)/ε] [1/log(2)] = log(28/0.002)/log(2) ≈ 13.8

Line Search: Bisection - Example in R

bisect <- function(fn, lower, upper, tol = .0002, ...) {
  f.lo <- fn(lower, ...)
  f.hi <- fn(upper, ...)
  feval <- 2
  if (f.lo * f.hi > 0) stop("Root is not bracketed in the specified interval\n")
  chg <- upper - lower
  while (abs(chg) > tol) {
    x.new <- (lower + upper) / 2
    f.new <- fn(x.new, ...)
    if (abs(f.new) <= tol) break
    if (f.lo * f.new < 0) upper <- x.new
    if (f.hi * f.new < 0) lower <- x.new
    chg <- upper - lower
    feval <- feval + 1
  }
  list(x = x.new, value = f.new, fevals = feval)
}

# Example => function to minimize = (a*x^2 - 4*x + 2)/(x + 10)
# first derivative => fn1 = (2*a*x - 4)/(x + 10) - (a*x^2 - 4*x + 2)/(x + 10)^2
fn1 <- function(x, a) {(2*a*x - 4)/(x + 10) - (a*x^2 - 4*x + 2)/(x + 10)^2}
bisect(fn1, -8, 20, a = 5)

Descent Methods

Typically, the line search pays little attention to the direction of change of the function, i.e., f'(x). Given a starting location x_0, descent methods examine f'(x) and move in the downhill direction to generate a new estimate, x_1 = x_0 + δ.

Q: How do we determine the step δ? Use the first derivative, f'(x).

Steepest (Gradient) Descent

Basic algorithm:
1. Start at an initial position x_0.
2. Until convergence:
   - find the minimizing step δ_k, which will be a function of f'(x_k);
   - x_{k+1} = x_k + δ_k.

In general, it pays to multiply the direction δ_k by a constant, λ, called the step-size. That is, x_{k+1} = x_k + λ δ_k.

To determine the direction δ_k, note that a small change proportional to f'(x_k) multiplied by (-1) decreases f(x). That is,
  x_{k+1} = x_k - λ f'(x_k)

Steepest Descent - Theorem

Theorem: Given a function f: R^n -> R, differentiable at x_0, the direction of steepest descent is the vector -∇f(x_0).

Proof: Consider the function
  z(λ) = f(x_0 + λ u)   (u: unit vector, ||u|| = 1)
By the chain rule:
  z'(λ) = (∂f/∂x_1) u_1 + (∂f/∂x_2) u_2 + ... + (∂f/∂x_n) u_n = ∇f(x_0 + λ u)' u
Thus,
  z'(0) = ∇f(x_0)' u = ||∇f(x_0)|| ||u|| cos(θ) = ||∇f(x_0)|| cos(θ)   (dot product; θ: angle between ∇f(x_0) and u)
Then, z'(0) is minimized when θ = π:
  z'(0) = -||∇f(x_0)||
Since u is a unit vector (||u|| = 1):
  u = -∇f(x_0)/||∇f(x_0)||

Steepest Descent - Algorithm

Since u is a unit vector, u = -∇f(x_k)/||∇f(x_k)||. The method becomes steepest descent, due to Cauchy (1847).

The problem of minimizing a function of N variables can be reduced to a single-variable minimization problem by finding the minimum of z(λ) for this choice of u:
  Find λ that minimizes z(λ) = f(x_k - λ ∇f(x_k))

Algorithm:
1. Start with x_0. Evaluate ∇f(x_0).
2. Find λ_0 and set x_1 = x_0 - λ_0 ∇f(x_0).
3. Continue until convergence. Sequence for {x_k}: x_{k+1} = x_k - λ_k ∇f(x_k) (an R sketch follows below).
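A minimal R sketch of the iteration x_{k+1} = x_k - λ ∇f(x_k), assuming a simple quadratic test function, a fixed (non-optimal) step-size and a gradient stopping rule (all assumptions made for illustration):

f  <- function(x) x[1]^2 + 5 * x[2]^2
gr <- function(x) c(2 * x[1], 10 * x[2])      # analytic gradient
x  <- c(3, 1)                                 # initial guess (assumed)
lambda <- 0.05                                # fixed step-size (assumed)
for (k in 1:200) {
  g <- gr(x)
  if (sqrt(sum(g^2)) < 1e-8) break            # gradient convergence criterion
  x <- x - lambda * g
}
x                                             # close to the minimizer (0, 0)

With an optimal λ_k (an exact line search at each step) the same loop needs far fewer iterations; the fixed step-size simply keeps the sketch short.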

Steepest Descent - Algorithm

Note: There are two ways to determine λ_k in the general step x_{k+1} = x_k - λ_k ∇f(x_k):
1. Optimal λ_k: select the λ that minimizes g(λ) = f(x_k - λ ∇f(x_k)).
2. A non-optimal λ_k can be calculated using line search methods.

Good property: the method of steepest descent is guaranteed to make at least some progress toward x* during each iteration. (Show that z'(0) < 0, which guarantees there is a λ > 0 such that z(λ) < z(0).)

It can be shown that the steepest descent directions from x_{k+1} and x_k are orthogonal, that is, ∇f(x_{k+1})' ∇f(x_k) = 0. For some functions, this property makes the method "zig-zag" (or ping-pong) from x_0 to x*.

Steepest Descent - Graphical Example

[Contour plot of a bivariate minimization showing the zig-zag steepest-descent path omitted.]

Steepest Descent - Limitations

Limitations:
- In general, a global minimum is not guaranteed. (This issue is also a fundamental problem for the other methods.)
- Steepest descent can be relatively slow close to the minimum: going in the steepest downhill direction is not efficient. Technically, its asymptotic rate of convergence is inferior to many other methods.
- For poorly conditioned convex problems, steepest descent increasingly "zigzags" as the gradients point nearly orthogonally to the shortest direction to a minimum point.

Steepest Descent - Example (Wikipedia)

Steepest descent has problems with pathological functions such as the Rosenbrock function:
  f(x_1, x_2) = 100 (x_2 - x_1^2)^2 + (1 - x_1)^2

This function has a narrow curved valley, which contains x*. The bottom of the valley is very flat. Because of the curved flat valley, the optimization zig-zags slowly with small step-sizes towards x* = (1, 1).

Newton-Raphson Method (Newton's Method)

It is a method to iteratively find roots of functions (or a solution to a system of equations), f(x) = 0.

Idea: Approximate f(x) near the current guess x_k by a function f_k(x) for which the system of equations f_k(x) = 0 is easy to solve. Then use the solution as the next guess x_{k+1}.

A good choice for f_k(x) is the linear approximation of f(x) at x_k:
  f(x) ≈ f_k(x) = f(x_k) + ∇f(x_k)(x - x_k)
Solve the equation f_k(x_{k+1}) = 0 for the next x_{k+1}. Setting f_k(x) = 0:
  0 = f(x_k) + ∇f(x_k)(x_{k+1} - x_k)
or
  x_{k+1} = x_k - [∇f(x_k)]^{-1} f(x_k)   (NR iterative method)

Newton-Raphson Method - History

History:
- Heron of Alexandria (10-70 AD) described a method (called the Babylonian method) to iteratively approximate a square root.
- François Viète (1540-1603) developed a method to approximate roots of polynomials.
- Isaac Newton (1643-1727) in 1669 (published in 1711) improved upon Viète's method. A simplified version of Newton's method was published by Joseph Raphson (1648-1715) in 1690. Though, Newton (and Raphson) did not see the connection between the method and calculus.
- The modern treatment is due to Thomas Simpson (1710-1761).

Newton-Raphson Method - Graphical View

We want to find the roots of a function f(x). From the graph, using the triangle ABC formed by the tangent line at x_k:
  tan(α) = AB/AC = f(x_k)/(x_k - x_{k+1}) = f'(x_k)
  =>  x_{k+1} = x_k - f(x_k)/f'(x_k)

Newton-Raphson Method - Univariate Version

Alternative derivation: start with the tangent line at x_k,
  y = f(x_k) + f'(x_k)(x - x_k)
Set y = 0 and solve for x:
  0 = f(x_k) + f'(x_k)(x_{k+1} - x_k)   =>   x_{k+1} = x_k - f(x_k)/f'(x_k)

Note: At each step, we need to evaluate f and f'.

Iterations: starting at x_0, inscribe triangles along f(x) = 0.

Newton-Raphson Method - Minimization

We can use the NR method to minimize a function. Recall that f'(x*) = 0 at a minimum or maximum; thus, stationary points can be found by applying the NR method to the derivative. The iteration becomes:

  x_{k+1} = x_k - f'(x_k)/f''(x_k)

We need f''(x_k) ≠ 0; otherwise, the iterations are undefined. Usually, we add a step-size, λ, in the updating step of x:
  x_{k+1} = x_k - λ f'(x_k)/f''(x_k)

Note: NR uses information from the second derivative. This information is ignored by the steepest descent method. But it requires more computations.

NR Method - As a Quadratic Approximation

When used for minimization, the NR method approximates f(x) by its quadratic approximation near x_k. Expand f(x) locally using a 2nd-order Taylor series:

  f(x_k + δ) = f(x_k) + f'(x_k) δ + ½ f''(x_k) δ^2 + o(δ^2)

Find the δ which minimizes this local quadratic approximation:
  δ = -f'(x_k)/f''(x_k)
Update: x_{k+1} = x_k - λ f'(x_k)/f''(x_k). (A small R sketch follows below.)
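A small R sketch of this univariate Newton-Raphson minimization step, applied to the quadratic U(x) = 5x^2 - 4x + 2 used earlier (the starting point and tolerance are assumptions):

dU  <- function(x) 10 * x - 4          # U'(x)
d2U <- function(x) 10                  # U''(x), constant for a quadratic
x <- 10                                # initial guess (assumed)
lambda <- 1                            # full Newton step
for (k in 1:50) {
  step <- dU(x) / d2U(x)
  x <- x - lambda * step
  if (abs(step) < 1e-10) break         # tolerance assumed
}
x                                      # 0.4, the analytical minimizer found earlier

Because U(x) is quadratic, the very first iteration already lands on x* = 0.4, which illustrates the one-step property for quadratic functions discussed below.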

NR Method - N-variate Version

The N-variate version of the NR method is based on the 1st-order Taylor series expansion of ∇f(x) = 0:

  ∇f(x*) ≈ ∇f(x_k) + H(x_k)(x* - x_k) = 0
  =>  x_{k+1} = x_k - H(x_k)^{-1} ∇f(x_k)

It is always convenient to add a step-size λ.

Algorithm:
1. Start with x_0. Evaluate ∇f(x_0) and H(x_0).
2. Find λ_0 and set x_1 = x_0 - λ_0 H(x_0)^{-1} ∇f(x_0).
3. Continue until convergence.

NR Method - Bivariate Version (Graph)

[Contour plot of a bivariate minimization showing the Newton step -H^{-1}∇f omitted.]

NR Method - Properties

- If H(x_k) is pd, the critical point is also guaranteed to be the unique strict global minimizer of the local quadratic approximation f_k(x).
- For quadratic functions, the NR method applied to ∇f(x) converges in one step; that is, x_{k=1} = x*.
- If f(x) is not a quadratic function, then the NR method will generally not compute a minimizer of f(x) in one step, even if its H(x) is pd. But, in this case, the NR method is guaranteed to make progress.
- Under certain conditions, the NR method has quadratic convergence, given a sufficiently close initial guess.

NR Method - Example

Find the roots of the following function (the cubic used in the R code below):
  f(x) = x^3 - 0.165 x^2 + 3.993e-4

Set x_0 = 0.05. Use Newton's method of finding roots of equations to find
a) the root,
b) the absolute relative approximate error at the end of each iteration, and
c) the number of significant digits at least correct at the end of each iteration.

NR Method - Example

Calculate f'(x):  f'(x) = 3x^2 - 0.33x

Iterations:  x_{k+1} = x_k - f(x_k)/f'(x_k)

Iteration 1 (x_0 = 0.05):  x_1 = x_0 - f(x_0)/f'(x_0)

NR Method - Example

Estimate of the root and absolute relative error for the first iteration. The absolute relative approximate error ε_a at the end of Iteration 1 is
  ε_a = |(x_1 - x_0)/x_1| × 100%
Number of significant digits at least correct: 0. (Suppose we use as stopping rule ε_a < 0.05%.)

NR Method - Example

Iteration 2 (starting from x_1):  x_2 = x_1 - f(x_1)/f'(x_1)
The absolute relative approximate error ε_a at the end of Iteration 2 is
  ε_a = |(x_2 - x_1)/x_2| × 100%
Number of significant digits at least correct: 2.

NR Method - Example

Iteration 3 (starting from x_2):  x_3 = x_2 - f(x_2)/f'(x_2)
The absolute relative approximate error ε_a at the end of Iteration 3 is
  ε_a = |(x_3 - x_2)/x_3| × 100%
Number of significant digits at least correct: 4. (ε_a < 0.05% => stop.)

NR Method - Example (Code in R)

# Newton-Raphson Method
f1 <- function(x) {
  return(x^3 - .165*x^2 + 3.993e-4)   # constant term assumed for this classic example
}
df1 <- function(x) {
  return(3*x^2 - .33*x)
}
NR.method <- function(func, dfunc, x) {
  if (abs(dfunc(x)) < 1e-12) {        # guard against division by zero (tolerance assumed)
    return(x)
  } else {
    return(x - func(x)/dfunc(x))
  }
}
curve(f1, -.05, .15)

iter <- 20                            # maximum iterations (value assumed)
xn <- NULL
xn[1] <- .05
n <- 1
while (n < iter) {
  n <- n + 1
  xn <- c(xn, NR.method(f1, df1, xn[n-1]))
  if (abs(xn[n] - xn[n-1]) < 1e-8) break   # tolerance assumed
}
xn

NR Method - Convergence

Under certain conditions, the NR method has quadratic convergence, given a sufficiently close initial guess.

Expand f around x_k; the Mean Value Theorem truncates the Taylor series (ξ lies between x_k and x*):
  0 = f(x*) = f(x_k) + f'(x_k)(x* - x_k) + ½ f''(ξ)(x* - x_k)^2
But, by Newton's definition,
  x_{k+1} = x_k - f(x_k)/f'(x_k)   =>   f(x_k) = f'(x_k)(x_k - x_{k+1})
Subtracting, and then solving for (x_{k+1} - x*):
  x_{k+1} - x* = [f''(ξ)/(2 f'(x_k))] (x_k - x*)^2

NR Method - Convergence

From
  x_{k+1} - x* = [f''(ξ)/(2 f'(x_k))] (x_k - x*)^2,
let
  K = |f''(ξ)/(2 f'(x_k))|,
then
  |x_{k+1} - x*| ≤ K |x_k - x*|^2.

Local Convergence Theorem: If
(a) f' is bounded away from zero, and
(b) f'' is bounded,
then K is bounded and the NR method converges given a sufficiently close initial guess, and the convergence is quadratic.

Convergence check: check f(x_k) to avoid false convergence. Check more than one criterion.

[Figure comparing the convergence paths of a first-order method and NR omitted.]

NR Method - Convergence

Local convergence: convergence depends on initial values. Ideally, select x_0 close to x*.

Note: Inflection points or flat regions can create problems. (demo)

NR Method: Limitations - Local Results

Multiple roots: different initial values take us to different roots.

For the cubic equation from the example above, running the R code with different starting values gives different limits of xn:
- for x_0 = 0.05, the iterations converge to one root;
- for x_0 = 0.15, they converge to a second root;
- for x_0 = -0.05, they converge to a third root.

NR Method: Limitations - f'(x) = 0 or f'(x) ≈ 0

Division by zero: during the iteration we may get x_k such that f'(x_k) = 0. When f'(x_k) ≈ 0 (a flat region of f), the algorithm diverges. Both problems can be solved by using a different x_0 (the popular solution).

[Figure illustrating the f'(x) = 0 and f'(x) ≈ 0 cases omitted; courtesy Alessandra Nardi, UCB.]

NR Method: Limitations - Division by Zero

For the equation x^3 - 0.03x^2 + 2.4e-6 = 0, the NR method reduces to
  x_{i+1} = x_i - (x_i^3 - 0.03 x_i^2 + 2.4e-6)/(3 x_i^2 - 0.06 x_i)
For x_0 = 0, or x_0 = 0.02, the denominator will equal zero.

NR Method: Limitations - Inflection Points

Divergence at inflection points: a choice of x_0, or an iterate, close to the inflection point of the function f(x) may start diverging away from the root in the Newton-Raphson method.

Example: find the root of the equation f(x) = (x - 1)^3 + 0.512 = 0. The NR method reduces to
  x_{i+1} = x_i - [(x_i - 1)^3 + 0.512]/[3 (x_i - 1)^2]
The root starts to diverge at Iteration 6 because the previous estimate of x is close to the inflection point of f at x = 1. Note: after more iterations, the estimate converges to the root x* = 0.2.

NR Method: Limitations - Inflection Points

[Table of the divergence near the inflection point (iteration number vs. x_i) omitted.]

NR Method: Limitations - Oscillations

It is possible that f'(x_k) and f'(x_{k+1}) have opposite values (or close enough) and the algorithm becomes locked in an infinite loop. Again, a different x_0 can help to move the algorithm away from this problem.

NR Method: Limitations - Oscillations

Oscillations near a local maximum and minimum: results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, converging instead on the local maximum or minimum. Eventually, this may lead to division by a number close to zero and the method may diverge. For example, for an equation with no real roots such as x^2 + 2 = 0.

NR Method: Limitations - Oscillations

[Table and figure of the oscillations near the local minimum (iteration number, x_i, f(x_i), ε_a) omitted.]

NR Method: Limitations - Oscillations

Note: Let's add a step-size, λ, in the updating step of x:
  x_{k+1} = x_k - λ f'(x_k)/f''(x_k),    λ = (0.8, 0.9, 1, 1.1, 1.2)

[Table comparing x_i, f(x_i), and ε_a across the different values of λ omitted.]

=> The step-size improves the value of the function!

NR Method: Limitations - Root Jumping

Root jumping: in some cases where the function f(x) is oscillating and has a number of roots, one may choose x_0 close to a root. However, the guesses may jump and converge to some other root.

Example: f(x) = sin(x) = 0. Choosing x_0 near the root at 2π, the iterations may converge to x = 0, not to 2π.

[Figure of root jumping from the intended location of the root for sin(x) = 0 omitted.]

NR Method: Advantages and Drawbacks

Advantages:
- Converges fast (quadratic convergence), if it converges.
- Requires only one guess.

Drawbacks:
- Initial values are important.
- If f'(x_k) = 0, the algorithm fails.
- Inflection points are a problem.
- If f'(x) is not continuous, the method may fail to converge.
- It requires the calculation of first and second derivatives.
- H(x) is not easy to compute.
- No guarantee of a global minimum.

NR Method: Numerical Derivatives

NR algorithm: x_{k+1} = x_k - λ f'(x_k)/f''(x_k). It requires f'(x) and f''(x).

We can use the difference quotient as a starting point, which delivers a one-point approximation:
  f'(x_k) ≈ [f(x_k + h) - f(x_k)]/h

We can approximate f'(x) with a two-point approximation, where errors may cancel out:
  f'(x_k) ≈ [f(x_k + h) - f(x_k - h)]/(2h)

This approximation produces the secant method formula for x_{k+1}:
  x_{k+1} = x_k - f(x_k)/f'(x_k)
          ≈ x_k - f(x_k)/{[f(x_k) - f(x_{k-1})]/[x_k - x_{k-1}]}
          = x_k - f(x_k)[x_k - x_{k-1}]/[f(x_k) - f(x_{k-1})]   (secant method)
with a slower convergence than the NR method (order 1.618 relative to 2). (An R sketch follows below.)

NR Method: Numerical Derivatives

We also need f''(x). A similar approximation is made for the second derivative f''(x). Suppose we use a one-point approximation:
  f''(x_k) ≈ [f'(x_k + h) - f'(x_k)]/h

The approximation problems for the 2nd derivative become more serious: the approximation of the 1st derivative is used to approximate the 2nd derivative. A typical propagation-of-errors situation.

Example: In the previous R code, we can use:
> # num derivatives
> df1 <- function(x, h = 0.001) {
+   return((f1(x + h) - f1(x))/h)
+ }
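A minimal R sketch of the secant iteration above, applied to the cubic from the earlier root-finding example (the constant term of the cubic and the two starting points are assumptions):

f1 <- function(x) x^3 - 0.165 * x^2 + 3.993e-4    # constant term assumed
secant <- function(f, x0, x1, tol = 1e-10, max_it = 50) {
  for (k in 1:max_it) {
    x2 <- x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))   # secant update, no derivatives needed
    if (abs(x2 - x1) < tol) break
    x0 <- x1; x1 <- x2
  }
  x2
}
secant(f1, 0.05, 0.06)                            # converges to the root near 0.062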

Multivariate Case - Example

N-variate NR-method iteration:
  x_{k+1} = x_k - H(x_k)^{-1} ∇f(x_k)

Example: maximize U(x) = (100 - 2x_1 - 3x_2 - x_3)^0.2 · x_1^0.3 · x_2^0.4 · x_3^0.1 (a constrained problem with the budget constraint already substituted in), starting with x_0 = (1, 1, 1). The solution x* is given by the last iteration of the output below.

Multivariate Case - Example in R

library(numDeriv)
f <- function(z) {
  y <- -((100 - 2*z[1] - 3*z[2] - z[3])^.2 * z[1]^.3 * z[2]^.4 * z[3]^.1)
  return(y)
}
x <- c(1, 1, 1)
# numerical gradient & hessian
# df1 <- grad(f, x, method="Richardson")
# df2 <- hessian(f, x, method="complex")
max_ite <- 10; tol <- .0001

# NR
NR_num <- function(f, tol, x, n) {
  i <- 1; x.new <- x
  p <- matrix(1, nrow = n, ncol = 4)
  while (i < n) {
    df1 <- grad(f, x, method = "Richardson")
    df2 <- hessian(f, x, method = "complex")
    x.new <- x - solve(df2) %*% df1
    p[i,] <- rbind(x.new, f(x.new))
    i <- i + 1
    if (abs(f(x.new) - f(x)) < tol) break
    x <- x.new
  }
  return(p[1:(i-1),])
}
NR_num(f, tol, x, max_ite)

[Output: a matrix with one row per iteration, columns x_1, x_2, x_3 and f(x); values omitted.]

Multivariate Case - Example in R

Note: H(x)^{-1} can create problems. If we change the calculation method to the Richardson extrapolation, with x_0 = (1, 1, 1), we get a NaN result for x.new after 3 iterations => H(x) is not pd!

> df2 <- hessian(f, x, method = "Richardson")

But if we use the Richardson extrapolation with x_0 = (2, 2, 2), we get a valid sequence of iterates (output omitted).

Lots of computational tricks are devoted to dealing with these situations.

Multivariate Case - Computational Drawbacks

Basic N-variate NR-method iteration:
  x_{k+1} = x_k - H(x_k)^{-1} ∇f(x_k)

As illustrated before, H(x_k)^{-1} can be difficult to compute. In general, the inverse of H is time-consuming. (In addition, in the presence of many parameters, evaluating H can be impractical or costly.)

In the basic algorithm, it is better not to compute H(x_k)^{-1}. Instead, solve
  H(x_k)(x_{k+1} - x_k) = -∇f(x_k)

Each iteration requires:
- evaluation of ∇f(x_k);
- computation of H(x_k);
- solution of a linear system of equations, with coefficient matrix H(x_k) and RHS -∇f(x_k).

Multivariate Case - H Matrix

In practice, H(x_k) can be difficult to calculate. Many times, H(x_k) is just not pd. There are many tricks to deal with this situation:
- A popular trick is to add a matrix E_k (usually δI, where δ is a constant) that ensures H(x_k) is pd. That is, H(x_k) ≈ H(x_k) + E_k.
- The algorithm can be structured to take a different step when H(x_k) is not pd, for example, a steepest descent step. That is, H(x_k) ≈ I.

Note: Before using the Hessian to calculate standard errors, make sure it is pd. This can be done by computing the eigenvalues and checking they are all positive.

Multivariate Case - H Matrix

The NR method is computationally expensive. The structure of the NR algorithm does not help (there is no re-use of data from one iteration to the other). To avoid computing the Hessian -- i.e., second derivatives -- we approximate it.

Theory-based approximations:
- Gauss-Newton:  H(θ) ≈ E[(∂L/∂θ)(∂L/∂θ)']
- BHHH:          H(θ) ≈ Σ_{t=1}^{T} (∂L_t/∂θ)(∂L_t/∂θ)'

Note: In the case we are doing MLE, for each algorithm, -H(θ)^{-1} can serve as an estimator of the asymptotic covariance matrix for the maximum likelihood estimator of θ.

Modified Newton Methods

The modified Newton method for finding an extreme point is
  x_{k+1} = x_k - λ S_k ∇y(x_k)
Note that:
- if S_k = I, then we have the method of steepest descent;
- if S_k = H^{-1}(x_k) and λ = 1, then we have the "pure" Newton method;
- if y(x) = 0.5 x'Qx - b'x, then S_k = H^{-1}(x_k) = Q^{-1} (quadratic case).

Classical Modified Newton's Method:
  x_{k+1} = x_k - λ H^{-1}(x_0) ∇y(x_k)
Note that the Hessian is only evaluated at the initial point x_0.

Quasi-Newton Methods

The central idea underlying quasi-Newton methods (a variable metric method) is to use an approximation of the inverse Hessian (H^{-1}). By using approximate partial derivatives, there is a slightly slower convergence resulting from such an approximation, but there is an improved efficiency in each iteration.

Idea: Since H(x_k) consists of the partial derivatives evaluated at an element of a convergent sequence, intuitively Hessian matrices from consecutive iterations are "close" to one another. Then, it should be possible to cheaply update an approximate H(x_k) from one iteration to the other. With an N×N matrix D:
  D = A + A_u,   A_u: update, usually of the form u v'.

Quasi-Newton Methods

Or
  D_{k+1} = A_k + A_u,   A_u: update, usually of the form u v',
where u and v are N×1 given vectors in R^n. (This modification of A to obtain D is called a rank-one update, since u v' has rank one.)

In quasi-Newton methods, instead of the true Hessian, an initial matrix H_0 is chosen (usually H_0 = I), which is subsequently updated by an update formula:
  H_{k+1} = H_k + H_u,   where H_u is the update matrix.

Since in the NR method we really care about H^{-1}, not H, the updating is done for H^{-1}. Let B = H^{-1}; then the updating formula for H^{-1} is also of the form:
  B_{k+1} = B_k + B_u

Quasi-Newton Methods - Conjugate Gradient

Conjugate method for solving Ax = b:
- Two non-zero vectors u and v are conjugate (with respect to A) if u'Av = 0 (A = H symmetric and pd => <u, Av> = u'Av).
- Suppose we want to solve Ax = b. We have n mutually conjugate directions p_k (a basis of R^n). Then, x* = Σ α_i p_i.
- Thus, b = Ax* = Σ α_i A p_i.
- For any p_K in P,
  p_K' b = p_K' A x* = Σ α_i p_K' A p_i = α_K p_K' A p_K
or
  α_K = p_K' b / (p_K' A p_K)

Method for solving Ax = b: find a sequence of n conjugate directions, and then compute the coefficients α_K.

Quasi-Newton Methods - Conjugate Gradient

Conjugate gradient methods build up information on H. From our standard starting point, we take a Taylor series expansion around the point x_k + s_k:
  ∇f(x_k + s_k) ≈ ∇f(x_k) + H(x_k) s_k
or
  ∇f(x_{k+1}) - ∇f(x_k) ≈ H(x_k) s_k

Note: H(x_k), scaled by s_k, can be approximated by the change in the gradient: q_k = ∇f(x_{k+1}) - ∇f(x_k).

Hessian Matrix Updates

Define g_k = ∇f(x_k), p_k = x_{k+1} - x_k, and q_k = g_{k+1} - g_k. Then,
  q_k = g_{k+1} - g_k ≈ H(x_k) p_k   (secant condition)

If the Hessian is constant, H(x_k) = H, then q_k = H p_k. If H is constant, then the following condition would hold as well:
  H_{k+1}^{-1} q_i = p_i,   0 ≤ i ≤ k
This is called the quasi-Newton condition (also, the inverse secant condition). Let B = H^{-1}; then the quasi-Newton condition becomes:
  p_i = B_{k+1} q_i,   0 ≤ i ≤ k.

Rank One and Rank Two Updates

Substitute the updating formula B_{k+1} = B_k + B_u and we get:
  p_i = B_k q_i + B_u q_i     (1)
(remember: p_i = x_{i+1} - x_i and q_i = g_{i+1} - g_i)

Note: There is no unique solution to finding the update matrix B_u. A general form is
  B_u = a u u' + b v v'
where a and b are scalars and u and v are vectors satisfying condition (1). Note: a u u' and b v v' are symmetric matrices of (at most) rank one.

Quasi-Newton methods that take b = 0 are using rank one updates. Quasi-Newton methods that take b ≠ 0 are using rank two updates. Note that b ≠ 0 provides more flexibility.

Update Formulas: Rank One

Simple approach: add new information to the current B_k. For example, using a rank one update:
  B_u = B_{k+1} - B_k = u v'
  => p_i = (B_k + u v') q_i
  => p_i - B_k q_i = u v' q_i
  => u = [1/(v' q_i)] (p_i - B_k q_i)
Set v = (p_i - B_k q_i):
  => B_{k+1} = B_k + [1/(v' q_i)] (p_i - B_k q_i) v'
  => B_{k+1} = B_k + [1/((p_i - B_k q_i)' q_i)] (p_i - B_k q_i)(p_i - B_k q_i)'

No systems of linear equations need to be solved during an iteration; only matrix-vector multiplications are required, which are computationally simpler. (A small R check of this update appears below.)
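A quick R check of the rank-one update formula, using the identity as B_k and two small example vectors (all values are assumptions); by construction the updated matrix satisfies the inverse secant condition B_{k+1} q = p:

rank1_update <- function(B, p, q) {
  u <- p - B %*% q                                  # u = p - B_k q
  B + (u %*% t(u)) / as.numeric(t(u) %*% q)         # B_{k+1} = B_k + u u' / (u'q)
}
B <- diag(2)                                        # start from B_0 = I
p <- c(0.5, -0.2); q <- c(0.4, -0.1)                # example p_k, q_k (assumed)
B_new <- rank1_update(B, p, q)
B_new %*% q                                         # reproduces p: the secant condition holds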

Rank One and Rank Two Updates

Rank one updates are simple, but have limitations. It is difficult to ensure that the approximate H is pd. Rank two updates are the most widely used schemes, since it is easier to keep the approximate H pd. Two popular rank two update formulas:
- Davidon-Fletcher-Powell (DFP) formula
- Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula

Davidon-Fletcher-Powell (DFP) Formula

The earliest (and one of the most clever) schemes for constructing the inverse Hessian, H^{-1}, was originally proposed by Davidon (1959) and later developed by Fletcher and Powell (1963). It has the nice property that, for a quadratic objective, it simultaneously generates the directions of the conjugate gradient method while constructing H^{-1} (or B).

Sketch of derivation:
- Rank two update for B:  B_{k+1} = B_k + a u u' + b v v'    (*)
- Recall B_{k+1} must satisfy the inverse secant condition: B_{k+1} q_k = p_k.
- Post-multiply (*) by q_k:  p_k - B_k q_k = a u u' q_k + b v v' q_k   (= 0!)
- The RHS must be a linear combination of p_k and B_k q_k, and it is already a linear combination of u and v. Set u = p_k and v = B_k q_k.
- This makes a u' q_k = 1 and b v' q_k = -1.
- DFP update formula:
  B_{k+1} = B_k + (p_k p_k')/(p_k' q_k) - (B_k q_k q_k' B_k)/(q_k' B_k q_k)

DFP Formula - Remarks

It can be shown that if p_k is a descent direction, then each B_k is pd. The DFP method benefits from picking an arbitrary pd B_0, instead of evaluating H(x_0)^{-1}; in this case the benefit is greater because computing an inverse matrix is very expensive. If you select B_0 = I, the first step uses the steepest descent direction.

Once B_k is computed, the DFP method computes
  x_{k+1} = x_k - λ_k B_k ∇f(x_k),
where λ_k > 0 is chosen to make sure f(x_{k+1}) < f(x_k) (use an optimal search or line search).

Broyden-Fletcher-Goldfarb-Shanno (BFGS) Formula

Remember the secant conditions: q_i = H_{k+1} p_i and B_{k+1} q_i = p_i, 0 ≤ i ≤ k. Both equations have exactly the same form, except that q_i and p_i are interchanged and H is replaced by B (or vice versa).

Observation: any update formula for B can be transformed into a corresponding complementary formula for H by interchanging the roles of B and H and of q and p. The reverse is also true.

BFGS formula update of H_k: take the complementary formula of DFP:
  H_{k+1} = H_k + (q_k q_k')/(q_k' p_k) - (H_k p_k p_k' H_k)/(p_k' H_k p_k)

By taking the inverse, the BFGS update formula for B_{k+1} is obtained:
  B_{k+1} = B_k + [1 + (q_k' B_k q_k)/(q_k' p_k)] (p_k p_k')/(p_k' q_k) - (p_k q_k' B_k + B_k q_k p_k')/(q_k' p_k)

Some Comments on Broyden Methods

The BFGS formula is more complicated than DFP, but straightforward to apply. Under BFGS, if B_k is psd, then B_{k+1} is also psd.

The BFGS update formula can be used exactly like the DFP formula. Numerical experiments have shown that the BFGS formula's performance is superior to the DFP formula.

Both DFP and BFGS updates have symmetric rank two corrections that are constructed from the vectors p_k and B_k q_k. Weighted combinations of these formulae will therefore also have the same properties. This observation leads to a whole collection of updates, known as the Broyden family, defined by:
  B = (1 - w) B_DFP + w B_BFGS
where w is a parameter that may take any real value.

Quasi-Newton Algorithm

1. Input x_0, B_0 (say, I), termination criteria.
2. For any k, set S_k = -B_k g_k.
3. Compute a step size λ_k (e.g., by line search on f(x_k + λ S_k)) and set x_{k+1} = x_k + λ_k S_k.
4. Compute the update matrix B_u according to a given formula (say, DFP or BFGS) using the values q_k = g_{k+1} - g_k, p_k = x_{k+1} - x_k, and B_k.
5. Set B_{k+1} = B_k + B_u.
6. Continue with the next k until the termination criteria are satisfied.

Note: You do have to calculate the vector of first-order derivatives g_k for each iteration (see the R sketch below).
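In practice these updates are rarely coded by hand; R's optim implements the BFGS update. A sketch using the Rosenbrock function from the steepest-descent discussion (the starting point is an assumption):

rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
rosen_gr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                          200 * (x[2] - x[1]^2))    # analytic gradient g_k
out <- optim(c(-1.2, 1), rosen, rosen_gr, method = "BFGS")
out$par       # close to the minimizer (1, 1)
out$counts    # number of function and gradient evaluations used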

Some Closing Remarks

Both DFP and BFGS methods have theoretical properties that guarantee a superlinear (fast) convergence rate and global convergence under certain conditions. However, both methods could fail for general nonlinear problems. In particular:
- DFP is highly sensitive to inaccuracies in line searches.
- Both methods can get stuck on a saddle-point. In the NR method, a saddle-point can be detected during modifications of the (true) Hessian. Therefore, search around the final point when using quasi-Newton methods.
- The update of the Hessian becomes "corrupted" by round-off and other inaccuracies.

All kinds of "tricks" such as scaling and preconditioning exist to boost the performance of the methods.

Iterative Approach

In economics and finance, many maximization problems involve sums of squares:
  arg min_β S(β) = Σ_i (y_i - f(x_i; β))^2
where x is a known data set and β is a set of unknown parameters.

The above problem can be solved by many nonlinear optimization algorithms:
- Steepest descent
- Newton-Raphson
- Gauss-Newton

Gauss-Newton Method

Gauss-Newton takes advantage of the quadratic nature of the problem.

Algorithm:
Step 1: Initialize β = β_0.
Step 2: Update the parameter β. Determine the optimal update, Δβ:

  arg min_β S(β) = Σ_i (y_i - f(x_i; β))^2
  ≈ arg min_Δβ Σ_i (y_i - f(x_i; β_k) - J_i Δβ)^2    (Taylor series expansion)
  = arg min_Δβ (ε - J Δβ)'(ε - J Δβ)

where J is the Jacobian of f(β) and ε_i = y_i - f(x_i; β_k).

Note: This is a quadratic function of Δβ. Straightforward solution. Setting the gradient equal to zero:
  Δβ = (J'J)^{-1} J'ε   =>  LS solution!

Gauss-Newton Method

Notice that the setting looks like the familiar linear model:
  ε (T×1) ≈ J (T×K) Δβ (K×1)
Or, J Δβ = ε (a linear system of T equations in K unknowns).

Gauss-Newton Method

From the LS solution, the updating step involves an OLS regression:
  β_{k+1} = β_k + (J'J)^{-1} J'ε
A step-size, λ, can be easily added:
  β_{k+1} = β_k + λ (J'J)^{-1} J'ε

Note: The Gauss-Newton method can be derived from the Newton-Raphson method. NR's updating step:
  β_{k+1} = β_k - λ H^{-1} ∇f(β_k)
where H ≈ 2 (J'J) and ∇f(β_k) = -2 J'ε -- i.e., second derivatives are ignored.

Gauss-Newton Method - Application

Non-Linear Least Squares (NLLS) framework:  y_i = h(x_i; β) + ε_i
- Minimization problem:
  arg min_β S(β) = Σ_i (y_i - h(x_i; β))^2
- Iteration:
  b_{NLLS,k+1} = b_{NLLS,k} + λ (J'J)^{-1} J'ε
where
  J'ε = -Σ_i [∂h(x_i; β)/∂β] ε_i
  J'J = Σ_i [∂h(x_i; β)/∂β][∂h(x_i; β)/∂β]'

Note: J'J ignores the term {Σ_i [∂²h(x_i; β)/∂β∂β'] ε_i}.

Or, with the "pseudo-regressors" x^0 = J(b_{NLLS,k}):
  b_{NLLS,k+1} = b_{NLLS,k} + λ (x^0' x^0)^{-1} x^0' ε
(An R sketch follows below.)
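A compact Gauss-Newton loop in R for an assumed NLLS model y = b1*exp(b2*x) + ε (the data are simulated and the starting values are assumptions); each step is the OLS regression of the current residuals on the pseudo-regressors J:

set.seed(1)
x <- seq(0, 2, length.out = 50)
y <- 2 * exp(0.8 * x) + rnorm(50, sd = 0.1)          # simulated data (assumed model)
b <- c(1, 0.5)                                       # starting values (assumed)
lambda <- 1                                          # full step
for (k in 1:20) {
  h   <- b[1] * exp(b[2] * x)                        # h(x_i; b)
  eps <- y - h                                       # current residuals
  J   <- cbind(exp(b[2] * x), b[1] * x * exp(b[2] * x))  # pseudo-regressors dh/db
  db  <- solve(t(J) %*% J, t(J) %*% eps)             # OLS step: (J'J)^{-1} J' eps
  b   <- b + lambda * as.numeric(db)
  if (sqrt(sum(db^2)) < 1e-10) break
}
b    # close to (2, 0.8); compare with nls(y ~ b1 * exp(b2 * x), start = list(b1 = 1, b2 = 0.5))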

General Purpose Optimization Routines in R

The most popular optimizers are optim and nlm:
- optim gives you a choice of different algorithms, including quasi-Newton (BFGS), conjugate gradient, Nelder-Mead and simulated annealing. The last two do not need gradient information, but tend to be slower. (The option method="L-BFGS-B" allows for parameter constraints.)
- nlm uses a Newton algorithm. This can be fast, but if f(x) is far from quadratic, it can be slow or take you to a bad solution. (nlminb can be used in the presence of parameter constraints.)

Both optim and nlm have an option to calculate the Hessian, which is needed to calculate standard errors.
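A short usage sketch; the test function and starting values are assumptions, and the "standard errors" line only makes sense when the objective is a negative log-likelihood:

f <- function(b) (b[1] - 1)^2 + 3 * (b[2] + 2)^2

out1 <- optim(c(0, 0), f, method = "BFGS", hessian = TRUE)
out1$par                          # estimates
sqrt(diag(solve(out1$hessian)))   # std. errors if f were a negative log-likelihood

out2 <- nlm(f, c(0, 0), hessian = TRUE)
out2$estimate
out2$hessian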


More information

Mat 267 Engineering Calculus III Updated on 9/19/2010

Mat 267 Engineering Calculus III Updated on 9/19/2010 Chapter 11 Partial Derivatives Section 11.1 Functions o Several Variables Deinition: A unction o two variables is a rule that assigns to each ordered pair o real numbers (, ) in a set D a unique real number

More information

Steepest descent method implementation on unconstrained optimization problem using C++ program

Steepest descent method implementation on unconstrained optimization problem using C++ program IOP Conerence Series: Materials Science and Engineering PAPER OPEN ACCESS Steepest descent method implementation on unconstrained optimization problem using C++ program o cite this article: H Napitupulu

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Optimization II: Unconstrained Multivariable

Optimization II: Unconstrained Multivariable Optimization II: Unconstrained Multivariable CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Justin Solomon CS 205A: Mathematical Methods Optimization II: Unconstrained Multivariable 1

More information

A Primer on Multidimensional Optimization

A Primer on Multidimensional Optimization A Primer on Multidimensional Optimization Prof. Dr. Florian Rupp German University of Technology in Oman (GUtech) Introduction to Numerical Methods for ENG & CS (Mathematics IV) Spring Term 2016 Eercise

More information

ECS550NFB Introduction to Numerical Methods using Matlab Day 2

ECS550NFB Introduction to Numerical Methods using Matlab Day 2 ECS550NFB Introduction to Numerical Methods using Matlab Day 2 Lukas Laffers lukas.laffers@umb.sk Department of Mathematics, University of Matej Bel June 9, 2015 Today Root-finding: find x that solves

More information

2. ETA EVALUATIONS USING WEBER FUNCTIONS. Introduction

2. ETA EVALUATIONS USING WEBER FUNCTIONS. Introduction . ETA EVALUATIONS USING WEBER FUNCTIONS Introduction So ar we have seen some o the methods or providing eta evaluations that appear in the literature and we have seen some o the interesting properties

More information

Optimization Methods

Optimization Methods Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available

More information

Increasing and Decreasing Functions and the First Derivative Test. Increasing and Decreasing Functions. Video

Increasing and Decreasing Functions and the First Derivative Test. Increasing and Decreasing Functions. Video SECTION and Decreasing Functions and the First Derivative Test 79 Section and Decreasing Functions and the First Derivative Test Determine intervals on which a unction is increasing or decreasing Appl

More information

y2 = 0. Show that u = e2xsin(2y) satisfies Laplace's equation.

y2 = 0. Show that u = e2xsin(2y) satisfies Laplace's equation. Review 1 1) State the largest possible domain o deinition or the unction (, ) = 3 - ) Determine the largest set o points in the -plane on which (, ) = sin-1( - ) deines a continuous unction 3) Find the

More information

Math 2412 Activity 1(Due by EOC Sep. 17)

Math 2412 Activity 1(Due by EOC Sep. 17) Math 4 Activity (Due by EOC Sep. 7) Determine whether each relation is a unction.(indicate why or why not.) Find the domain and range o each relation.. 4,5, 6,7, 8,8. 5,6, 5,7, 6,6, 6,7 Determine whether

More information

Order of convergence. MA3232 Numerical Analysis Week 3 Jack Carl Kiefer ( ) Question: How fast does x n

Order of convergence. MA3232 Numerical Analysis Week 3 Jack Carl Kiefer ( ) Question: How fast does x n Week 3 Jack Carl Kiefer (94-98) Jack Kiefer was an American statistician. Much of his research was on the optimal design of eperiments. However, he also made significant contributions to other areas of

More information

Mathematical Notation Math Calculus & Analytic Geometry III

Mathematical Notation Math Calculus & Analytic Geometry III Mathematical Notation Math 221 - alculus & Analytic Geometry III Use Word or WordPerect to recreate the ollowing documents. Each article is worth 10 points and should be emailed to the instructor at james@richland.edu.

More information

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science

EAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Multidimensional Unconstrained Optimization Suppose we have a function f() of more than one

More information

«Develop a better understanding on Partial fractions»

«Develop a better understanding on Partial fractions» «Develop a better understanding on Partial ractions» ackground inormation: The topic on Partial ractions or decomposing actions is irst introduced in O level dditional Mathematics with its applications

More information

Basic properties of limits

Basic properties of limits Roberto s Notes on Dierential Calculus Chapter : Limits and continuity Section Basic properties o its What you need to know already: The basic concepts, notation and terminology related to its. What you

More information

Exact and Approximate Numbers:

Exact and Approximate Numbers: Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.

More information

EC5555 Economics Masters Refresher Course in Mathematics September 2013

EC5555 Economics Masters Refresher Course in Mathematics September 2013 EC5555 Economics Masters Reresher Course in Mathematics September 3 Lecture 5 Unconstraine Optimization an Quaratic Forms Francesco Feri We consier the unconstraine optimization or the case o unctions

More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428

More information

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes)

Motivation: We have already seen an example of a system of nonlinear equations when we studied Gaussian integration (p.8 of integration notes) AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 5: Nonlinear Equations Dianne P. O Leary c 2001, 2002, 2007 Solving Nonlinear Equations and Optimization Problems Read Chapter 8. Skip Section 8.1.1.

More information

5 Quasi-Newton Methods

5 Quasi-Newton Methods Unconstrained Convex Optimization 26 5 Quasi-Newton Methods If the Hessian is unavailable... Notation: H = Hessian matrix. B is the approximation of H. C is the approximation of H 1. Problem: Solve min

More information

Economics 205 Exercises

Economics 205 Exercises Economics 05 Eercises Prof. Watson, Fall 006 (Includes eaminations through Fall 003) Part 1: Basic Analysis 1. Using ε and δ, write in formal terms the meaning of lim a f() = c, where f : R R.. Write the

More information

Methods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent

Methods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent Nonlinear Optimization Steepest Descent and Niclas Börlin Department of Computing Science Umeå University niclas.borlin@cs.umu.se A disadvantage with the Newton method is that the Hessian has to be derived

More information

Rolle s Theorem and the Mean Value Theorem. Rolle s Theorem

Rolle s Theorem and the Mean Value Theorem. Rolle s Theorem 0_00qd //0 0:50 AM Page 7 7 CHAPTER Applications o Dierentiation Section ROLLE S THEOREM French mathematician Michel Rolle irst published the theorem that bears his name in 9 Beore this time, however,

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

Extreme Values of Functions

Extreme Values of Functions Extreme Values o Functions When we are using mathematics to model the physical world in which we live, we oten express observed physical quantities in terms o variables. Then, unctions are used to describe

More information

On High-Rate Cryptographic Compression Functions

On High-Rate Cryptographic Compression Functions On High-Rate Cryptographic Compression Functions Richard Ostertág and Martin Stanek Department o Computer Science Faculty o Mathematics, Physics and Inormatics Comenius University Mlynská dolina, 842 48

More information

3. Several Random Variables

3. Several Random Variables . Several Random Variables. Two Random Variables. Conditional Probabilit--Revisited. Statistical Independence.4 Correlation between Random Variables. Densit unction o the Sum o Two Random Variables. Probabilit

More information

YURI LEVIN AND ADI BEN-ISRAEL

YURI LEVIN AND ADI BEN-ISRAEL Pp. 1447-1457 in Progress in Analysis, Vol. Heinrich G W Begehr. Robert P Gilbert and Man Wah Wong, Editors, World Scientiic, Singapore, 003, ISBN 981-38-967-9 AN INVERSE-FREE DIRECTIONAL NEWTON METHOD

More information

We would now like to turn our attention to a specific family of functions, the one to one functions.

We would now like to turn our attention to a specific family of functions, the one to one functions. 9.6 Inverse Functions We would now like to turn our attention to a speciic amily o unctions, the one to one unctions. Deinition: One to One unction ( a) (b A unction is called - i, or any a and b in the

More information

Chapter 8 Gradient Methods

Chapter 8 Gradient Methods Chapter 8 Gradient Methods An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Introduction Recall that a level set of a function is the set of points satisfying for some constant. Thus, a point

More information

Continuity and Differentiability

Continuity and Differentiability Continuity and Dierentiability Tis capter requires a good understanding o its. Te concepts o continuity and dierentiability are more or less obvious etensions o te concept o its. Section - INTRODUCTION

More information

Lecture 17: Numerical Optimization October 2014

Lecture 17: Numerical Optimization October 2014 Lecture 17: Numerical Optimization 36-350 22 October 2014 Agenda Basics of optimization Gradient descent Newton s method Curve-fitting R: optim, nls Reading: Recipes 13.1 and 13.2 in The R Cookbook Optional

More information

Lecture : Feedback Linearization

Lecture : Feedback Linearization ecture : Feedbac inearization Niola Misovic, dipl ing and Pro Zoran Vuic June 29 Summary: This document ollows the lectures on eedbac linearization tought at the University o Zagreb, Faculty o Electrical

More information

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23 Optimization: Nonlinear Optimization without Constraints Nonlinear Optimization without Constraints 1 / 23 Nonlinear optimization without constraints Unconstrained minimization min x f(x) where f(x) is

More information

Multivariate Newton Minimanization

Multivariate Newton Minimanization Multivariate Newton Minimanization Optymalizacja syntezy biosurfaktantu Rhamnolipid Rhamnolipids are naturally occuring glycolipid produced commercially by the Pseudomonas aeruginosa species of bacteria.

More information

Scattered Data Approximation of Noisy Data via Iterated Moving Least Squares

Scattered Data Approximation of Noisy Data via Iterated Moving Least Squares Scattered Data Approximation o Noisy Data via Iterated Moving Least Squares Gregory E. Fasshauer and Jack G. Zhang Abstract. In this paper we ocus on two methods or multivariate approximation problems

More information

Math-3 Lesson 8-5. Unit 4 review: a) Compositions of functions. b) Linear combinations of functions. c) Inverse Functions. d) Quadratic Inequalities

Math-3 Lesson 8-5. Unit 4 review: a) Compositions of functions. b) Linear combinations of functions. c) Inverse Functions. d) Quadratic Inequalities Math- Lesson 8-5 Unit 4 review: a) Compositions o unctions b) Linear combinations o unctions c) Inverse Functions d) Quadratic Inequalities e) Rational Inequalities 1. Is the ollowing relation a unction

More information

Gradient Descent. Dr. Xiaowei Huang

Gradient Descent. Dr. Xiaowei Huang Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,

More information

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015

CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 2, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

OPTIMAL PLACEMENT AND UTILIZATION OF PHASOR MEASUREMENTS FOR STATE ESTIMATION

OPTIMAL PLACEMENT AND UTILIZATION OF PHASOR MEASUREMENTS FOR STATE ESTIMATION OPTIMAL PLACEMENT AND UTILIZATION OF PHASOR MEASUREMENTS FOR STATE ESTIMATION Xu Bei, Yeo Jun Yoon and Ali Abur Teas A&M University College Station, Teas, U.S.A. abur@ee.tamu.edu Abstract This paper presents

More information

GENG2140, S2, 2012 Week 7: Curve fitting

GENG2140, S2, 2012 Week 7: Curve fitting GENG2140, S2, 2012 Week 7: Curve fitting Curve fitting is the process of constructing a curve, or mathematical function, f(x) that has the best fit to a series of data points Involves fitting lines and

More information

Optimization. Totally not complete this is...don't use it yet...

Optimization. Totally not complete this is...don't use it yet... Optimization Totally not complete this is...don't use it yet... Bisection? Doing a root method is akin to doing a optimization method, but bi-section would not be an effective method - can detect sign

More information

MATH1901 Differential Calculus (Advanced)

MATH1901 Differential Calculus (Advanced) MATH1901 Dierential Calculus (Advanced) Capter 3: Functions Deinitions : A B A and B are sets assigns to eac element in A eactl one element in B A is te domain o te unction B is te codomain o te unction

More information

AP Calculus BC Chapter 8: Integration Techniques, L Hopital s Rule and Improper Integrals

AP Calculus BC Chapter 8: Integration Techniques, L Hopital s Rule and Improper Integrals AP Calculus BC Chapter 8: Integration Techniques, L Hopital s Rule and Improper Integrals 8. Basic Integration Rules In this section we will review various integration strategies. Strategies: I. Separate

More information

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester Physics 403 Numerical Methods, Maximum Likelihood, and Least Squares Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Review of Last Class Quadratic Approximation

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information