Minimization with Equality Constraints
Path Constraints and Numerical Optimization
Robert Stengel, Optimal Control and Estimation, MAE 546, Princeton University, 2015

- State and control equality constraints
- Pseudoinverse
- State and control inequality constraints
- Numerical methods: penalty function and collocation, neighboring optimal, quasilinearization, gradient, Newton-Raphson, steepest descent
- Optimal treatment of an infection

Copyright 2015 by Robert Stengel. All rights reserved. For educational use only.
Minimization with Equality Constraint on State and Control

$$\min_{u(t)} J = \phi[x(t_f)] + \int_{t_o}^{t_f} L[x(t),u(t)]\,dt$$

subject to the dynamic constraint

$$\dot{x}(t) = f[x(t),u(t)], \quad x(t_o)\ \text{given};\quad \dim(x) = \dim(f) = (n \times 1),\ \dim(u) = (m \times 1)$$

and to the state/control equality constraint

$$c[x(t),u(t)] = 0\ \text{in}\ (t_o,t_f); \quad \dim(c) = (r \times 1),\ r \le m$$

Aeronautical Example: Longitudinal Point-Mass Dynamics

$$\dot{V} = \frac{T - C_D\,\tfrac{1}{2}\rho V^2 S}{m} - g\sin\gamma$$
$$\dot{\gamma} = \frac{C_L\,\tfrac{1}{2}\rho V^2 S}{mV} - \frac{g}{V}\cos\gamma$$
$$\dot{h} = V\sin\gamma, \quad \dot{r} = V\cos\gamma, \quad \dot{m} = -\mathrm{SFC}\cdot T$$

State: x1 = V, velocity, m/s; x2 = gamma, flight path angle, rad; x3 = h, height, m; x4 = r, range, m; x5 = m, mass, kg
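The point-mass model above is straightforward to evaluate numerically. The sketch below implements the five state derivatives; every numeric parameter value (wing area, thrust, drag polar, SFC, atmosphere scale height) is an illustrative placeholder, not a value from the lecture.

```python
import math

def point_mass_dynamics(x, u,
                        S=50.0, rho_SL=1.225, beta=1.0 / 9000.0,
                        T_max_SL=1.0e5, CD0=0.02, eps=0.05,
                        CL_alpha=5.0, g=9.81, SFC=2.0e-4):
    """Longitudinal point-mass dynamics, xdot = f(x, u).
    x = [V, gamma, h, r, m]; u = [dT (throttle, 0-1), alpha (rad)].
    All parameter values are illustrative placeholders."""
    V, gamma, h, r, m = x
    dT, alpha = u
    rho = rho_SL * math.exp(-beta * h)        # exponential atmosphere
    T = T_max_SL * dT * math.exp(-beta * h)   # assumed thrust lapse with altitude
    CL = CL_alpha * alpha                     # lift coefficient
    CD = CD0 + eps * CL ** 2                  # parabolic drag polar
    q = 0.5 * rho * V ** 2                    # dynamic pressure
    Vdot = (T - CD * q * S) / m - g * math.sin(gamma)
    gammadot = (CL * q * S) / (m * V) - (g / V) * math.cos(gamma)
    hdot = V * math.sin(gamma)
    rdot = V * math.cos(gamma)
    mdot = -SFC * T                           # fuel burn proportional to thrust
    return [Vdot, gammadot, hdot, rdot, mdot]
```

The kinematic rows (h-dot and r-dot) depend only on the state, which is why a pure state constraint such as constant altitude has to be differentiated before the control appears, as shown later in the lecture.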
Aeronautical Example: Longitudinal Point-Mass Dynamics (continued)

Controls: u1 = delta_T, throttle setting, %; u2 = alpha, angle of attack, rad

$$T = T_{max,SL}\,\delta_T\,e^{-\beta h}: \ \text{thrust, N}$$
$$C_D = C_{D_0} + \varepsilon C_L^2: \ \text{drag coefficient}$$
$$C_L = C_{L_\alpha}\alpha: \ \text{lift coefficient}$$

S = reference area, m^2; m = vehicle mass, kg; rho = rho_SL e^{-beta h} = air density, kg/m^3; g = gravitational acceleration, m/s^2; SFC = specific fuel consumption, g/kN-s

Path Constraint Included in the Cost Function

The dimension of the constraint must not exceed the dimension of the control. The constraint is adjoined to the cost function,

$$J_1 = \phi[x(t_f)] + \int_{t_o}^{t_f}\left\{L + \lambda^T(t)\left[f - \dot{x}(t)\right] + \mu^T c\right\}dt, \quad c[x(t),u(t)] = 0\ \text{in}\ (t_o,t_f)$$

and to the Hamiltonian,

$$H \triangleq L + \lambda^T f + \mu^T c$$

dim(x) = dim(f) = dim(lambda) = (n x 1); dim(u) = (m x 1); dim(c) = dim(mu) = (r x 1), r <= m
Euler-Lagrange Equations Including Equality Constraint

$$\lambda(t_f) = \left.\frac{\partial\phi}{\partial x}\right|_{t_f}^T$$
$$\dot{\lambda}^T(t) = -\frac{\partial H}{\partial x} = -\left[\frac{\partial L}{\partial x} + \lambda^T F + \mu^T\frac{\partial c}{\partial x}\right], \quad F \triangleq \frac{\partial f}{\partial x}$$
$$\frac{\partial H}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T G + \mu^T\frac{\partial c}{\partial u} = 0, \quad G \triangleq \frac{\partial f}{\partial u}$$
$$c[x(t),u(t)] = 0\ \text{in}\ (t_o,t_f)$$

dim(F) = (n x n); dim(G) = (n x m); dim(c) = dim(mu) = (r x 1), r <= m

No Optimization When r = m

With m unknowns and m equations, c[x(t),u(t)] = 0 determines u(t) = fcn[x(t)] directly. Example:

$$c = A\,x(t) + B\,u(t) = 0; \quad \dim(A) = m\times n,\ \dim(B) = m\times m$$
$$u(t) = -B^{-1}A\,x(t)$$

The constraint Lagrange multiplier is irrelevant to the solution but can be recovered from dH/du = 0, provided that dc/du is square and non-singular:

$$\mu^T = -\left(\frac{\partial L}{\partial u} + \lambda^T G\right)\left(\frac{\partial c}{\partial u}\right)^{-1}$$
No Optimization When r = m: Maintain Constant Velocity and Flight Path Angle

$$c[x(t),u(t)] = 0:$$
$$0 = \dot{V} = \frac{\delta_T T_{max} - \left(C_{D_0} + \varepsilon C_L^2\right)\tfrac{1}{2}\rho V^2 S}{m} - g\sin\gamma$$
$$0 = \dot{\gamma} = \frac{C_L\,\tfrac{1}{2}\rho V^2 S}{mV} - \frac{g}{V}\cos\gamma$$

u(t) = fcn[x(t)], with V(0) = V_desired, gamma(0) = gamma_desired

Effect of Constraint Dimensionality: r < m

Minimize fuel and control use while maintaining constant flight path angle:

$$\min_{u(t)} J = \int_{t_o}^{t_f}\left(q\,\dot{m}^2 + r_1 u_1^2 + r_2 u_2^2\right)dt$$
$$c[x(t),u(t)] = 0 = \dot{\gamma} = \frac{C_L\,\tfrac{1}{2}\rho V^2(t)S}{m(t)V(t)} - \frac{g}{V(t)}\cos\gamma(t), \quad \gamma(0) = \gamma_{desired}$$
Effect of Constraint Dimensionality: r < m

With dim(c) = (r x 1) and dim(u) = (m x 1), dc/du is not square when r < m, and therefore not strictly invertible. Three approaches to constrained optimization:

- Algebraic solution for r control variables using an invertible subset of the constraint
- Pseudoinverse of the control effect
- Soft constraint (penalty function)

Algebraic solution using an invertible subset of the constraint:

Example 1: dim(A_r) = r x n, dim(u_r) = r x 1, dim(B_r) = r x r

$$c = A_r\,x(t) + B_r\,u_r(t) = 0;\ \det(B_r)\neq 0 \implies u_r(t) = -B_r^{-1}A_r\,x(t)$$

Example 2: dim(B_1) = r x r

$$c = \begin{bmatrix} B_1 & B_2 \end{bmatrix}\begin{bmatrix} u_r \\ u_{m-r} \end{bmatrix} = B_1 u_r + B_2 u_{m-r} = 0;\ \det(B_1)\neq 0 \implies u_r(t) = -B_1^{-1}B_2\,u_{m-r}(t)$$
Second Approach: Satisfy Constraint Using the Left Pseudoinverse (r < m)

The Lagrange multiplier is obtained from the optimality condition using the pseudoinverse of the control sensitivity dc/du:

$$\mu^T = -\left(\frac{\partial L}{\partial u} + \lambda^T G\right)\left(\frac{\partial c}{\partial u}\right)^{\#}$$

Pseudoinverse of a Matrix

$$y = Ax, \quad \dim(x) = (r\times 1),\ \dim(y) = (m\times 1)$$

If r = m, A is square and non-singular, and x = A^{-1} y. If r != m, A is not square; use the pseudoinverse of A:

$$x = A^{\#}y, \quad (r\times 1) = (r\times m)(m\times 1)$$

The maximum rank of A is r or m, whichever is smaller.
Left Pseudoinverse

dim(x) = (r x 1), dim(y) = (m x 1); dim(A^T A) = r x r, dim(A A^T) = m x m; the maximum rank of A is r or m, whichever is smaller.

For r < m (more equations than unknowns), the left pseudoinverse is appropriate:

$$Ax = y \implies A^T A\,x = A^T y \implies x = \left(A^T A\right)^{-1}A^T y \triangleq A^L y$$

This is an averaging (least-squares) solution.

Right Pseudoinverse

For r > m (more unknowns than equations), the right pseudoinverse is appropriate:

$$Ax = y = \left(AA^T\right)\left(AA^T\right)^{-1}y \implies x = A^T\left(AA^T\right)^{-1}y \triangleq A^R y$$

This is the minimum-Euclidean-error-norm solution.
Left Pseudoinverse Example (r < m)

$$A = \begin{bmatrix} 1 \\ 3 \end{bmatrix}, \quad y = \begin{bmatrix} 2 \\ 6 \end{bmatrix}$$
$$x = \left(A^T A\right)^{-1}A^T y = \frac{1}{10}(2 + 18) = 2$$

Unique (least-squares) solution.

Right Pseudoinverse Example (r > m)

$$A = \begin{bmatrix} 1 & 3 \end{bmatrix}, \quad y = 14$$

At least two solutions satisfy the equation, e.g., x = [2, 4]^T, with ||x||^2 = 20. The right pseudoinverse gives

$$x = A^T\left(AA^T\right)^{-1}y = \begin{bmatrix} 1 \\ 3 \end{bmatrix}\frac{14}{10} = \begin{bmatrix} 1.4 \\ 4.2 \end{bmatrix}, \quad \|x\|^2 = 19.6$$

the minimum-norm solution.
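Both worked examples can be checked with NumPy; the sketch below reconstructs them (the y vectors are inferred from the slide's arithmetic) and confirms that the explicit formulas agree with the Moore-Penrose pseudoinverse.

```python
import numpy as np

# Left pseudoinverse: A is 2x1 -- more equations than unknowns
A = np.array([[1.0], [3.0]])
y = np.array([2.0, 6.0])
x_left = np.linalg.inv(A.T @ A) @ A.T @ y          # (1/10)(2 + 18) = 2
assert np.allclose(np.linalg.pinv(A) @ y, x_left)  # matches Moore-Penrose

# Right pseudoinverse: A is 1x2 -- more unknowns than equations
A2 = np.array([[1.0, 3.0]])
y2 = np.array([14.0])
x_right = A2.T @ np.linalg.inv(A2 @ A2.T) @ y2     # minimum-norm solution
x_alt = np.array([2.0, 4.0])                       # another solution of A2 @ x = 14
```

The minimum-norm property shows up directly: x_right has squared norm 19.6, smaller than the 20 of the alternative solution [2, 4].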
Necessary Conditions Using the Left Pseudoinverse (r < m)

$$\lambda(t_f) = \left.\frac{\partial\phi}{\partial x}\right|_{t_f}^T$$

Optimality conditions:

$$\dot{\lambda}^T = -\left[\frac{\partial L}{\partial x} + \lambda^T F + \mu^T\frac{\partial c}{\partial x}\right]$$
$$\frac{\partial L}{\partial u} + \lambda^T G + \mu^T\frac{\partial c}{\partial u} = 0, \quad \text{with}\ c[x(t),u(t)] = 0$$

Third Approach: Penalty Function Provides Soft State-Control Equality Constraint (r < m)

$$L = L_{original} + k\,c^T c, \quad k:\ \text{scalar penalty weight}$$

The Euler-Lagrange equations are adjusted accordingly:

$$\lambda(t_f) = \left.\frac{\partial\phi}{\partial x}\right|_{t_f}^T$$
$$\frac{\partial L_{orig}}{\partial u} + 2k\,c^T\frac{\partial c}{\partial u} + \lambda^T G = 0$$
$$\dot{\lambda}^T = -\left[\frac{\partial L_{orig}}{\partial x} + 2k\,c^T\frac{\partial c}{\partial x} + \lambda^T F\right], \quad c[x(t),u(t)] = 0\ \text{in}\ (t_o,t_f)$$
Equality Constraint on State Alone

$$c[x(t)] = 0\ \text{in}\ (t_o,t_f)$$
$$J_1 = \phi[x(t_f)] + \int_{t_o}^{t_f}\left\{L + \lambda^T(t)\left[f - \dot{x}(t)\right] + \mu^T c\right\}dt$$

Hamiltonian: H = L + lambda^T f + mu^T c. The constraint is insensitive to the control:

$$\Delta c = \frac{\partial c}{\partial x}\Delta x + \frac{\partial c}{\partial u}\Delta u = \frac{\partial c}{\partial x}\Delta x, \quad \text{since}\ \frac{\partial c}{\partial u} = 0$$

Example of Equality Constraint on State Alone

Minimize fuel and control use while maintaining constant altitude:

$$\min_{u(t)} J = \int_{t_o}^{t_f}\left(q\,\dot{m}^2 + r_1 u_1^2 + r_2 u_2^2\right)dt$$
$$c[x(t),u(t)] = c[x(t)] = 0 = h(t) - h_{desired}$$
Introduce Time-Derivative of Equality Constraint

Because dc/du = 0, the state equality constraint has no effect on the optimality condition:

$$\frac{\partial H}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T G + \mu^T\frac{\partial c}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T G$$

Solution: incorporate the time-derivative of c[x(t)] in the optimization. Treat c[x(t)] as the zeroth-order equality constraint,

$$c[x(t)] \triangleq c^{(0)}[x(t)] = 0$$

and compute its derivative along the trajectory:

$$\frac{dc^{(0)}[x(t)]}{dt} = \frac{\partial c^{(0)}[x(t)]}{\partial t} + \frac{\partial c^{(0)}[x(t)]}{\partial x}f[x(t),u(t)] \triangleq c^{(1)}[x(t),u(t)] = 0$$
Time-Derivative of Equality Constraint

The optimality condition now includes the derivative of the equality constraint,

$$\frac{\partial H}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T G + \mu^T\frac{\partial c^{(1)}}{\partial u}$$

subject to c^{(0)}[x(t_o)] = 0 or c^{(0)}[x(t_f)] = 0; along the trajectory, c^{(1)} = 0. If dc^{(1)}/du = 0, differentiate again, and again, until the control appears.

State Equality Constraint Example

$$c[x(t)] \triangleq c^{(0)}[x(t)] = 0 = h(t) - h_{desired}$$

No control appears in the constraint; differentiate:

$$\frac{dc^{(0)}[x(t)]}{dt} = \frac{\partial c^{(0)}}{\partial t} + \frac{\partial c^{(0)}}{\partial x}f[x(t),u(t)] \triangleq c^{(1)}[x(t)] = 0 = \dot{h}(t) = V(t)\sin\gamma(t)$$

Still no control in the constraint; differentiate again.
State Equality Constraint Example (continued)

$$\frac{dc^{(1)}[x(t)]}{dt} = \frac{\partial c^{(1)}}{\partial t} + \frac{\partial c^{(1)}}{\partial x}f[x(t),u(t)] \triangleq c^{(2)}[x(t),u(t)] = 0 = \ddot{h}(t) = \dot{V}(t)\sin\gamma(t) + V(t)\dot{\gamma}(t)\cos\gamma(t)$$

The control appears in the second-order equality constraint:

$$c^{(2)}[x(t),u(t)] = \left[\frac{\delta_T T_{max,SL}\,e^{-\beta h} - C_D\,\tfrac{1}{2}\rho(h)V^2(t)S}{m(t)} - g\sin\gamma(t)\right]\sin\gamma(t) + \left[\frac{C_L\,\tfrac{1}{2}\rho(h)V^2(t)S}{m(t)} - g\cos\gamma(t)\right]\cos\gamma(t) = 0$$

$$H \triangleq L + \lambda^T f + \mu^T c^{(2)}$$

The 0th- and 1st-order constraints must be satisfied at some point on the trajectory (e.g., t_0):

$$c^{(0)}[x(t_0)] = 0 \implies h(t_0) = h_{desired}; \quad c^{(1)}[x(t_0)] = 0 \implies \gamma(t_0) = 0$$
Minimization with Inequality Constraints

Hard Inequality Constraints

Inequality Constraints

[Figures illustrating hard and soft inequality constraints are not transcribed.]
Soft Control Inequality Constraint

$$L = L_{original} + k\,c, \quad k:\ \text{scalar penalty weight}$$

Scalar example:

$$c[u(t)] = \begin{cases} (u - u_{max})^2, & u \ge u_{max} \\ 0, & u_{min} < u < u_{max} \\ (u - u_{min})^2, & u \le u_{min} \end{cases}$$
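The scalar penalty above is a one-line function; a minimal sketch with illustrative bounds (the actual bounds would come from the problem):

```python
def control_penalty(u, u_min=-1.0, u_max=1.0):
    """Quadratic penalty on control outside [u_min, u_max], zero inside.
    Bound values are illustrative placeholders."""
    if u >= u_max:
        return (u - u_max) ** 2
    if u <= u_min:
        return (u - u_min) ** 2
    return 0.0
```

Because the penalty and its first derivative vanish at the boundaries, adding k*c to the Lagrangian leaves the interior optimality conditions untouched while increasingly discouraging limit violations as k grows.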
Numerical Optimization Methods
Parametric Optimization

$$\min_{u(t)} J = \phi[x(t_f)] + \int_{t_o}^{t_f} L[x(t),u(t)]\,dt \quad \text{subject to}\ \dot{x}(t) = f[x(t),u(t)],\ x(t_o)\ \text{given}$$

The control is specified by a parameter vector k; no adjoint equations are required. Examples:

$$u(t) = k \ \text{(constant)}$$
$$u(t) = k_0 + k_1 t + k_2 t^2 + \cdots, \quad k = \begin{bmatrix} k_0 & k_1 & \cdots & k_m \end{bmatrix}^T$$
$$u(t) = k_0 + k_1\sin\frac{t}{t_f - t_0} + k_2\cos\frac{t}{t_f - t_0}, \quad k = \begin{bmatrix} k_0 & k_1 & k_2 \end{bmatrix}^T$$

Conditions for a minimizing control parameter vector k:

$$\frac{\partial J}{\partial k} = 0, \quad \frac{\partial^2 J}{\partial k^2} > 0$$
Parametric Optimization Example

Let the angle of attack be a quadratic function of time, u(t) = alpha(t) = k_0 + k_1 t + k_2 t^2, in the point-mass dynamics:

$$\dot{V} = \frac{\delta_T T_{max} - C_D\,\tfrac{1}{2}\rho V^2 S}{m} - g\sin\gamma$$
$$\dot{\gamma} = \frac{C_{L_\alpha}\left(k_0 + k_1 t + k_2 t^2\right)\tfrac{1}{2}\rho V^2 S}{mV} - \frac{g}{V}\cos\gamma$$
$$\dot{h} = V\sin\gamma, \quad \dot{r} = V\cos\gamma, \quad \dot{m} = -\mathrm{SFC}\cdot\delta_T T_{max}$$

Find k = [k_0, k_1, k_2]^T such that

$$\frac{\partial J}{\partial k} = 0, \quad \frac{\partial^2 J}{\partial k^2} > 0$$

Legendre Polynomials

Solutions to Legendre's differential equation.
Legendre Polynomials

The polynomials can be generated by Rodrigues's formula:

$$P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}\left(x^2 - 1\right)^n$$

$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}\left(3x^2 - 1\right), \quad P_3(x) = \tfrac{1}{2}\left(5x^3 - 3x\right)$$
$$P_4(x) = \tfrac{1}{8}\left(35x^4 - 30x^2 + 3\right), \quad P_5(x) = \tfrac{1}{8}\left(63x^5 - 70x^3 + 15x\right)$$

Optimizing control: find the minimizing values of k_n in

$$u(x) = k_0 P_0(x) + k_1 P_1(x) + k_2 P_2(x) + k_3 P_3(x) + k_4 P_4(x) + k_5 P_5(x) + \cdots, \quad x \triangleq \frac{t}{t_f - t_o}$$

Control History Optimized with Legendre Polynomials

The optimized control could equivalently be expressed as a simple power series:

$$u^*(x) = k_0^* P_0(x) + k_1^* P_1(x) + k_2^* P_2(x) + \cdots = a_0^* + a_1^* x + a_2^* x^2 + a_3^* x^3 + a_4^* x^4 + a_5^* x^5 + \cdots$$

with

$$a_0^* = k_0^* - \tfrac{1}{2}k_2^* + \tfrac{3}{8}k_4^*, \quad a_1^* = k_1^* - \tfrac{3}{2}k_3^* + \tfrac{15}{8}k_5^*, \quad a_2^* = \tfrac{3}{2}k_2^* - \tfrac{30}{8}k_4^*, \quad \ldots$$
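The Legendre-to-power-series conversion above can be verified with NumPy's polynomial module; a sketch with arbitrary coefficient values:

```python
import numpy as np
from numpy.polynomial import legendre, polynomial

k = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # arbitrary Legendre coefficients k_0..k_5
a = legendre.leg2poly(k)                        # equivalent power-series coefficients a_0..a_5

# the conversion formulas quoted above
a0 = k[0] - 0.5 * k[2] + (3.0 / 8.0) * k[4]
a1 = k[1] - 1.5 * k[3] + (15.0 / 8.0) * k[5]
a2 = 1.5 * k[2] - (30.0 / 8.0) * k[4]

# both representations evaluate to the same control history
x = np.linspace(-1.0, 1.0, 7)
u_leg = legendre.legval(x, k)
u_pow = polynomial.polyval(x, a)
```

Since the conversion is linear and invertible, optimizing over the Legendre coefficients k_n is equivalent to optimizing over the power-series coefficients a_n; the Legendre basis is often better conditioned numerically.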
Parametric Optimization: Collocation

- Admissible controls occur at discrete times, k
- Cost and dynamic constraint are discretized
- Pseudospectral optimal control: state and adjoint points may be connected by basis functions, e.g., Legendre polynomials
- The continuous solution is approached as the time interval is decreased

$$\min_{u_k} J = \phi(x_{k_f}) + \sum_{k=0}^{k_f - 1} L[x_k, u_k] \quad \text{subject to}\ x_{k+1} = f_k[x_k, u_k],\ x_0\ \text{given}$$

Penalty Function Method (Balakrishnan's Epsilon Technique)

No integration of the dynamic equation; parametric optimization of the state and control histories:

$$x(t) \approx x(k_x, t), \quad u(t) \approx u(k_u, t), \quad \dim(k_x) \ge n,\ \dim(k_u) \ge m$$

Augment the integral cost function by the dynamic equation error:

$$\min_{u(t),\,x(t)} J = \phi[x(t_f), t_f] + \int_{t_o}^{t_f}\left[L[x(t),u(t),t] + \frac{1}{\varepsilon}\left\{f[x(t),u(t),t] - \dot{x}(t)\right\}^T\left\{f[x(t),u(t),t] - \dot{x}(t)\right\}\right]dt$$

1/epsilon is the penalty for not satisfying the dynamic constraint.
Penalty Function Method

- Choose reasonable starting values of the state and control parameters, e.g., such that state and control satisfy the boundary conditions
- Evaluate the cost function:

$$J_0 = \phi[x_0(t_f)] + \int_{t_o}^{t_f}\left[L[x_0(t),u_0(t)] + \frac{1}{\varepsilon}\left\{f[x_0(t),u_0(t)] - \dot{x}_0(t)\right\}^T\left\{f[x_0(t),u_0(t)] - \dot{x}_0(t)\right\}\right]dt$$

- Update the state and control parameters (e.g., by steepest descent):

$$k_{x_{i+1}} = k_{x_i} - \alpha\left.\frac{\partial J}{\partial k_x}\right|_{k_x = k_{x_i}}^T, \quad k_{u_{i+1}} = k_{u_i} - \alpha\left.\frac{\partial J}{\partial k_u}\right|_{k_u = k_{u_i}}^T$$

(Taylor, Smith, and Iliff, 1969)

- Re-evaluate the cost with a higher penalty; repeat to convergence:

$$x_{i+1}(t) \approx x(k_{x_{i+1}}, t), \quad u_{i+1}(t) \approx u(k_{u_{i+1}}, t)$$
$$J_i \ge J_{i+1} \to J^*, \quad \varepsilon \to 0, \quad f[x(t),u(t),t] - \dot{x}(t) \to 0$$

Neighboring Extremal (Shooting) Method

Integrate both state and adjoint vector forward in time,

$$\dot{x}_{k+1}(t) = f[x_{k+1}(t), u_k(t)], \quad x_0(t_0)\ \text{given, with an initial guess for}\ u_0(t)$$
$$\dot{\lambda}_{k+1}^T(t) = -\left[\left.\frac{\partial L}{\partial x}\right|_k(t) + \lambda_{k+1}^T(t)F_k(t)\right], \quad \lambda_k(t_0)\ \text{given}$$

with u_{k+1}(t) defined by

$$\frac{\partial H[x_k(t), u_k(t), \lambda_{k+1}(t), t]}{\partial u} = \left.\frac{\partial L}{\partial u}\right|_k(t) + \lambda_{k+1}^T(t)G_k(t) = 0$$

... but how do you know the initial value of the adjoint vector?
Neighboring Extremal Method

All such trajectories are optimal (i.e., extremals) for some cost function, because

$$\frac{\partial H}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T G = 0$$

is enforced. Integrating the state equation computes a terminal value and a terminal error:

$$x(t_f) = x(t_0) + \int_{t_0}^{t_f} f[x_{k+1}(t), u_k(t)]\,dt; \quad \Delta x(t_f) = x(t_f) - x(t_f)_{desired}$$

Use a learning rule to estimate the initial value of the adjoint vector, e.g.,

$$\lambda_{k+1}(t_0) = \lambda_k(t_0) - \varepsilon\left[x(t_f) - x(t_f)_{desired}\right]$$

Gradient-Based Methods
Gradient-Based Search Algorithms

Steepest descent:

$$u_{k+1}(t) = u_k(t) - \varepsilon\left.\frac{\partial H}{\partial u}\right|_k^T(t)$$

Newton-Raphson:

$$u_{k+1}(t) = u_k(t) - \left[\left.\frac{\partial^2 H}{\partial u^2}\right|_k(t)\right]^{-1}\left.\frac{\partial H}{\partial u}\right|_k^T(t)$$

Generalized direct search:

$$u_{k+1}(t) = u_k(t) - K\left.\frac{\partial H}{\partial u}\right|_k^T(t)$$
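The difference between the first two update rules is easy to see on a scalar example; a sketch using an illustrative quadratic "Hamiltonian" H(u) = (u - 2)^2, whose minimizer is u = 2:

```python
def grad(u):
    """dH/du for the illustrative H(u) = (u - 2)**2."""
    return 2.0 * (u - 2.0)

def hess(u):
    """d2H/du2, constant for a quadratic."""
    return 2.0

# Steepest descent: fixed scalar gain epsilon, many small steps
u_sd = 0.0
for _ in range(50):
    u_sd -= 0.1 * grad(u_sd)

# Newton-Raphson: gradient scaled by inverse curvature;
# exact in a single step for a quadratic
u_nr = 0.0
u_nr -= grad(u_nr) / hess(u_nr)
```

Steepest descent creeps toward the minimum geometrically (here the error shrinks by the factor 0.8 per step), while Newton-Raphson lands on it immediately; the generalized direct search interpolates between the two by choosing the gain matrix K.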
Numerical Optimization Using the Steepest-Descent Algorithm

Iterative bidirectional procedure:

- Forward solution to find the state, x(t)
- Backward solution to find the adjoint vector, lambda(t)
- Steepest-descent adjustment of the control history, u(t)

Forward pass (use an educated guess for u_0):

$$\dot{x}_k(t) = f[x_k(t), u_{k-1}(t)], \quad x(t_o)\ \text{given}$$

Backward pass [Euler-Lagrange equations #2 and #1], using x_{k-1}(t) and u_{k-1}(t) from the previous step:

$$\dot{\lambda}_k^T(t) = -\frac{\partial H}{\partial x} = -\left[\frac{\partial L}{\partial x}(t) + \lambda_k^T(t)F(t)\right], \quad \lambda(t_f) = \left.\frac{\partial\phi}{\partial x}\right|_{t_f}^T$$
Numerical Optimization Using the Steepest-Descent Algorithm (continued)

Control gradient [Euler-Lagrange equation #3], using x(t), lambda(t), and u(t) from the previous step:

$$\left.\frac{\partial H}{\partial u}\right|_k = \frac{\partial L}{\partial u}(t) + \lambda_k^T(t)G(t)$$
$$u_{k+1}(t) = u_k(t) - \varepsilon\left.\frac{\partial H}{\partial u}\right|_{u(t)=u_k(t)}^T = u_k(t) - \varepsilon\left[\frac{\partial L}{\partial u} + \lambda_k^T(t)G(t)\right]^T$$

Finding the Best Steepest-Descent Gain

Starting from u_k(t), 0 < t < t_f, the best solution from the previous iteration, calculate the gradient dH/du|_k and evaluate the cost for three trial gains:

- J_0k: cost with gain 0 (previous solution)
- J_1k: steepest-descent calculation of cost with gain epsilon_k
- J_2k: steepest-descent calculation of cost with gain 2 epsilon_k

Fit the quadratic J(epsilon) = a_0 + a_1 epsilon + a_2 epsilon^2 through the three points:

$$\begin{bmatrix} 1 & 0 & 0 \\ 1 & \varepsilon_k & \varepsilon_k^2 \\ 1 & 2\varepsilon_k & 4\varepsilon_k^2 \end{bmatrix}\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} J_{0k} \\ J_{1k} \\ J_{2k} \end{bmatrix}$$

Solve for a_0, a_1, and a_2; find the gain epsilon* that minimizes J(epsilon); apply the best steepest-descent step,

$$u_{k+1}(t) = u_k(t) - \varepsilon^*\left.\frac{\partial H}{\partial u}\right|_k^T(t), \quad 0 < t < t_f$$

and go to the next iteration.
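The three-point quadratic fit has a closed-form solution, so no linear solver is needed; a minimal sketch:

```python
def best_gain(J0, J1, J2, eps):
    """Fit J(e) = a0 + a1*e + a2*e**2 through J(0) = J0, J(eps) = J1,
    and J(2*eps) = J2, then return the minimizing gain e* = -a1/(2*a2).
    Assumes a2 > 0, i.e., the fitted parabola opens upward."""
    a2 = (J2 - 2.0 * J1 + J0) / (2.0 * eps ** 2)
    a1 = (J1 - J0) / eps - a2 * eps
    return -a1 / (2.0 * a2)
```

For example, sampling the test cost J(e) = (e - 1)^2 + 5 at e = 0, 0.5, and 1.0 gives J values 6, 5.25, and 5, and the fit recovers the exact minimizer e* = 1. In practice a sanity check on a2 (and a fallback step size) guards against a locally concave fit.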
Steepest-Descent Algorithm for a Problem with Terminal Constraint

$$\min_{u(t)} J = \phi[x(t_f)] + \int_{t_o}^{t_f} L[x(t),u(t)]\,dt \quad \text{subject to}\ \psi[x(t_f)] = 0\ \text{(scalar)}$$

The constrained control gradient combines the cost gradient and the terminal-constraint gradient (see Lecture 3 for a and b):

$$\frac{\partial H_C}{\partial u} = \frac{\partial H^0}{\partial u} - \frac{a}{b}\frac{\partial H^1}{\partial u} = 0$$

Choose u_{k+1}(t) such that

$$u_{k+1}(t) = u_k(t) - \varepsilon\left.\frac{\partial H_C}{\partial u}\right|_{u(t)=u_k(t)}^T$$

Zero-Gradient Algorithm for Quadratic Control Cost

$$\min_{u(t)} J = \phi[x(t_f)] + \int_{t_o}^{t_f}\left[L[x(t)] + \tfrac{1}{2}u^T(t)R\,u(t)\right]dt$$
$$H[x(t),u(t),\lambda(t)] = L[x(t)] + \tfrac{1}{2}u^T(t)R\,u(t) + \lambda^T(t)f[x(t),u(t)]$$

Optimality condition:

$$\frac{\partial H}{\partial u}(t) = u^T(t)R + \lambda^T(t)G(t) = 0$$
Zero-Gradient Algorithm for Quadratic Control Cost (continued)

From the optimality condition, the optimal control is

$$u^{*T}(t)R = -\lambda^{*T}(t)G^*(t) \implies u^*(t) = -R^{-1}G^{*T}(t)\lambda^*(t)$$

But G_k(t) and lambda_k(t) are sub-optimal before convergence, so the optimal control cannot be computed in a single step. Choose u_{k+1}(t) such that

$$u_{k+1}(t) = (1 - \varepsilon)u_k(t) - \varepsilon R^{-1}G_k^T(t)\lambda_k(t), \quad \varepsilon < 1:\ \text{relaxation parameter}$$

Stopping Conditions for Numerical Optimization

- Computed total cost, J, reaches a theoretical minimum, e.g., zero: J_{k+1} = 0+
- Convergence of J is essentially complete: J_{k+1} > J_k (no further improvement)
- Control gradient, H_u(t), is essentially zero throughout [t_o, t_f]:

$$H_{u_{k+1}}(t) = 0\pm\delta\ \text{in}\ (t_o, t_f), \quad \text{or} \quad \int_{t_o}^{t_f} H_{u_{k+1}}(t)\,H_{u_{k+1}}^T(t)\,dt = 0+$$

- Terminal cost/constraint is good enough:

$$\phi_{k+1}(t_f) = 0+, \quad \text{or} \quad \psi_{k+1}(t_f) = 0\pm, \ \text{and}\ \int_{t_o}^{t_f} L[x(t),u(t)]\,dt < \delta$$
Optimal Treatment of an Infection

Model of infection and immune response:

- x1 = concentration of a pathogen, which displays antigen
- x2 = concentration of plasma cells, which are carriers and producers of antibodies
- x3 = concentration of antibodies, which recognize antigen and kill the pathogen
- x4 = relative characteristic of a damaged organ [0 = healthy, 1 = dead]
Infection Dynamics

Fourth-order ordinary differential equation, including the effects of therapy (control):

$$\dot{x}_1 = (a_{11} - a_{12}x_3)x_1 + b_1 u_1 + w_1$$
$$\dot{x}_2 = a_{21}(x_4)\,a_{22}\,x_1 x_3 - a_{23}(x_2 - x_2^*) + b_2 u_2 + w_2$$
$$\dot{x}_3 = a_{31}x_2 - (a_{32} + a_{33}x_1)x_3 + b_3 u_3 + w_3$$
$$\dot{x}_4 = a_{41}x_1 - a_{42}x_4 + b_4 u_4 + w_4$$

Uncontrolled Response to Infection
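The uncontrolled response can be sketched by integrating the model with u = 0. In the code below, all coefficient values, the control-effect signs b_i, and the damage-dependent factor a21(x4) are illustrative assumptions, not the lecture's parameters.

```python
import math

def infection_dynamics(x, u=(0.0, 0.0, 0.0, 0.0), w=(0.0, 0.0, 0.0, 0.0)):
    """State derivative for the fourth-order infection model.
    All numeric coefficients are illustrative placeholders."""
    a11, a12, a22, a23 = 1.0, 1.0, 3.0, 1.0
    a31, a32, a33 = 1.0, 1.5, 0.5
    a41, a42 = 0.5, 1.0
    b = (-1.0, 1.0, 1.0, -1.0)                 # assumed control-effect signs
    x2_star = 2.0                              # assumed nominal plasma-cell level
    x1, x2, x3, x4 = x
    a21 = max(0.0, math.cos(math.pi * x4))     # assumed: immune response fades with organ damage
    return [
        (a11 - a12 * x3) * x1 + b[0] * u[0] + w[0],
        a21 * a22 * x1 * x3 - a23 * (x2 - x2_star) + b[1] * u[1] + w[1],
        a31 * x2 - (a32 + a33 * x1) * x3 + b[2] * u[2] + w[2],
        a41 * x1 - a42 * x4 + b[3] * u[3] + w[3],
    ]

def rk4_step(f, x, dt):
    """One fixed-step fourth-order Runge-Kutta step for xdot = f(x)."""
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (k1i + 2 * k2i + 2 * k3i + k4i)
            for xi, k1i, k2i, k3i, k4i in zip(x, k1, k2, k3, k4)]

# uncontrolled response (u = 0) from a mild initial infection
x = [1.5, 2.0, 4.0 / 3.0, 0.0]
for _ in range(200):
    x = rk4_step(infection_dynamics, x, 0.01)
```

Note the bilinear structure: the pathogen grows or decays at a rate set by the antibody level x3, while organ damage x4 accumulates in proportion to the pathogen and heals at rate a42.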
Cost Function to be Minimized by Optimal Therapy

$$J = \frac{1}{2}\left(p_{11}x_{1_f}^2 + p_{44}x_{4_f}^2\right) + \frac{1}{2}\int_{t_o}^{t_f}\left(q_{11}x_1^2 + q_{44}x_4^2 + r\,u^2\right)dt$$

The cost function includes weighted square values of:

- Final concentration of the pathogen
- Final health of the damaged organ (0 is good, 1 is bad)
- Integral of pathogen concentration
- Integral health of the damaged organ (0 is good, 1 is bad)
- Integral of drug usage

Examples of Optimal Therapy (unit cost weights)

- u1 = pathogen killer
- u2 = plasma cell enhancer
- u3 = antibody enhancer
- u4 = organ health enhancer
Effects of Increased Drug Cost (r = 100)

Next Time: Minimum-Time and -Fuel Problems

Reading: OCE, Sections 3.5 and 3.6
Supplemental Material

Examples of Equality Constraints

- c[x(t), u(t)] = 0: Pitch moment = 0 = fcn(Mach number, stabilator trim angle)
- c[u(t)] = 0: Stabilator trim angle - constant = 0
- c[x(t)] = 0: Altitude - constant = 0
Minimum-Error-Norm Solution

dim(x) = (r x 1), dim(y) = (m x 1), r > m. Euclidean error norm for the linear equation:

$$\|Ax - y\|_2^2 = \left[Ax - y\right]^T\left[Ax - y\right]$$

Necessary condition for minimum error:

$$\frac{\partial}{\partial x}\|Ax - y\|_2^2 = 2\left[Ax - y\right]^T A = 0$$

Express x using the right pseudoinverse, x = A^T(AA^T)^{-1}y; then

$$Ax - y = AA^T\left(AA^T\right)^{-1}y - y = y - y = 0$$

Therefore, x is the minimizing solution, as long as AA^T is non-singular.
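The minimum-norm property of the right-pseudoinverse solution is easy to verify numerically: any other exact solution differs from it by a null-space vector of A and therefore has a larger norm. A sketch with arbitrarily chosen dimensions and a seeded random system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))   # m = 2 equations, r = 4 unknowns (r > m)
y = rng.standard_normal(2)

# right-pseudoinverse solution: satisfies Ax = y exactly, minimum norm
x_star = A.T @ np.linalg.inv(A @ A.T) @ y

# any other solution differs from x_star by a null-space vector of A
_, _, Vt = np.linalg.svd(A)
N = Vt[2:].T                      # orthonormal basis of the null space of A
x_other = x_star + N @ np.array([0.7, -0.3])
```

Because x_star lies in the row space of A, it is orthogonal to the null-space perturbation, so the norms add Pythagorean-style and x_star is strictly the shortest exact solution.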
Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a
More informationDeterministic Dynamic Programming
Deterministic Dynamic Programming 1 Value Function Consider the following optimal control problem in Mayer s form: V (t 0, x 0 ) = inf u U J(t 1, x(t 1 )) (1) subject to ẋ(t) = f(t, x(t), u(t)), x(t 0
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More informationMinimum Time Ascent Phase Trajectory Optimization using Steepest Descent Method
IJCTA, 9(39), 2016, pp. 71-76 International Science Press Closed Loop Control of Soft Switched Forward Converter Using Intelligent Controller 71 Minimum Time Ascent Phase Trajectory Optimization using
More informationChapter 6: Derivative-Based. optimization 1
Chapter 6: Derivative-Based Optimization Introduction (6. Descent Methods (6. he Method of Steepest Descent (6.3 Newton s Methods (NM (6.4 Step Size Determination (6.5 Nonlinear Least-Squares Problems
More informationThe Ascent Trajectory Optimization of Two-Stage-To-Orbit Aerospace Plane Based on Pseudospectral Method
Available online at www.sciencedirect.com ScienceDirect Procedia Engineering 00 (014) 000 000 www.elsevier.com/locate/procedia APISAT014, 014 Asia-Paciic International Symposium on Aerospace Technology,
More informationPrinciples of Optimal Control Spring 2008
MIT OpenCourseWare http://ocw.mit.edu 16.323 Principles of Optimal Control Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.323 Lecture
More informationPseudoinverse & Moore-Penrose Conditions
ECE 275AB Lecture 7 Fall 2008 V1.0 c K. Kreutz-Delgado, UC San Diego p. 1/1 Lecture 7 ECE 275A Pseudoinverse & Moore-Penrose Conditions ECE 275AB Lecture 7 Fall 2008 V1.0 c K. Kreutz-Delgado, UC San Diego
More informationThere are six more problems on the next two pages
Math 435 bg & bu: Topics in linear algebra Summer 25 Final exam Wed., 8/3/5. Justify all your work to receive full credit. Name:. Let A 3 2 5 Find a permutation matrix P, a lower triangular matrix L with
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationNumerical Algorithms as Dynamical Systems
A Study on Numerical Algorithms as Dynamical Systems Moody Chu North Carolina State University What This Study Is About? To recast many numerical algorithms as special dynamical systems, whence to derive
More informationCruising Flight Envelope Robert Stengel, Aircraft Flight Dynamics, MAE 331, 2018
Cruising Flight Envelope Robert Stengel, Aircraft Flight Dynamics, MAE 331, 018 Learning Objectives Definitions of airspeed Performance parameters Steady cruising flight conditions Breguet range equations
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationScientific Computing: Optimization
Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture
More informationTwo hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45
Two hours MATH20602 To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER NUMERICAL ANALYSIS 1 29 May 2015 9:45 11:45 Answer THREE of the FOUR questions. If more
More informationConstrained optimization
Constrained optimization In general, the formulation of constrained optimization is as follows minj(w), subject to H i (w) = 0, i = 1,..., k. where J is the cost function and H i are the constraints. Lagrange
More informationLecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra
Lecture: Linear algebra. 1. Subspaces. 2. Orthogonal complement. 3. The four fundamental subspaces 4. Solutions of linear equation systems The fundamental theorem of linear algebra 5. Determining the fundamental
More information10.34 Numerical Methods Applied to Chemical Engineering Fall Quiz #1 Review
10.34 Numerical Methods Applied to Chemical Engineering Fall 2015 Quiz #1 Review Study guide based on notes developed by J.A. Paulson, modified by K. Severson Linear Algebra We ve covered three major topics
More informationNumerical solutions of nonlinear systems of equations
Numerical solutions of nonlinear systems of equations Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan E-mail: min@math.ntnu.edu.tw August 28, 2011 Outline 1 Fixed points
More informationx 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7
Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)
More informationLecture 3. Optimization Problems and Iterative Algorithms
Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex
More informationOrdinary Differential Equation Theory
Part I Ordinary Differential Equation Theory 1 Introductory Theory An n th order ODE for y = y(t) has the form Usually it can be written F (t, y, y,.., y (n) ) = y (n) = f(t, y, y,.., y (n 1) ) (Implicit
More informationFirst-Order Low-Pass Filter!
Filters, Cost Functions, and Controller Structures! Robert Stengel! Optimal Control and Estimation MAE 546! Princeton University, 217!! Dynamic systems as low-pass filters!! Frequency response of dynamic
More informationSolutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015
Solutions to Final Practice Problems Written by Victoria Kala vtkala@math.ucsb.edu Last updated /5/05 Answers This page contains answers only. See the following pages for detailed solutions. (. (a x. See
More informationTrust Region Methods. Lecturer: Pradeep Ravikumar Co-instructor: Aarti Singh. Convex Optimization /36-725
Trust Region Methods Lecturer: Pradeep Ravikumar Co-instructor: Aarti Singh Convex Optimization 10-725/36-725 Trust Region Methods min p m k (p) f(x k + p) s.t. p 2 R k Iteratively solve approximations
More information1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:
Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion
More informationNumerical Optimization Algorithms
Numerical Optimization Algorithms 1. Overview. Calculus of Variations 3. Linearized Supersonic Flow 4. Steepest Descent 5. Smoothed Steepest Descent Overview 1 Two Main Categories of Optimization Algorithms
More informationRecitation 1. Gradients and Directional Derivatives. Brett Bernstein. CDS at NYU. January 21, 2018
Gradients and Directional Derivatives Brett Bernstein CDS at NYU January 21, 2018 Brett Bernstein (CDS at NYU) Recitation 1 January 21, 2018 1 / 23 Initial Question Intro Question Question We are given
More informationQuasi-Newton Methods
Newton s Method Pros and Cons Quasi-Newton Methods MA 348 Kurt Bryan Newton s method has some very nice properties: It s extremely fast, at least once it gets near the minimum, and with the simple modifications
More informationKINGS COLLEGE OF ENGINEERING DEPARTMENT OF MATHEMATICS ACADEMIC YEAR / EVEN SEMESTER QUESTION BANK
KINGS COLLEGE OF ENGINEERING MA5-NUMERICAL METHODS DEPARTMENT OF MATHEMATICS ACADEMIC YEAR 00-0 / EVEN SEMESTER QUESTION BANK SUBJECT NAME: NUMERICAL METHODS YEAR/SEM: II / IV UNIT - I SOLUTION OF EQUATIONS
More informationMath 54. Selected Solutions for Week 5
Math 54. Selected Solutions for Week 5 Section 4. (Page 94) 8. Consider the following two systems of equations: 5x + x 3x 3 = 5x + x 3x 3 = 9x + x + 5x 3 = 4x + x 6x 3 = 9 9x + x + 5x 3 = 5 4x + x 6x 3
More informationPreliminary Examination in Numerical Analysis
Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationVolumes of Solids of Revolution Lecture #6 a
Volumes of Solids of Revolution Lecture #6 a Sphereoid Parabaloid Hyperboloid Whateveroid Volumes Calculating 3-D Space an Object Occupies Take a cross-sectional slice. Compute the area of the slice. Multiply
More informationUpon successful completion of MATH 220, the student will be able to:
MATH 220 Matrices Upon successful completion of MATH 220, the student will be able to: 1. Identify a system of linear equations (or linear system) and describe its solution set 2. Write down the coefficient
More informationR. Balan. Splaiul Independentei 313, Bucharest, ROMANIA D. Aur
An On-line Robust Stabilizer R. Balan University "Politehnica" of Bucharest, Department of Automatic Control and Computers, Splaiul Independentei 313, 77206 Bucharest, ROMANIA radu@karla.indinf.pub.ro
More informationMATH 4211/6211 Optimization Quasi-Newton Method
MATH 4211/6211 Optimization Quasi-Newton Method Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 Quasi-Newton Method Motivation:
More informationMath 302 Outcome Statements Winter 2013
Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a
More informationECE7850 Lecture 7. Discrete Time Optimal Control and Dynamic Programming
ECE7850 Lecture 7 Discrete Time Optimal Control and Dynamic Programming Discrete Time Optimal control Problems Short Introduction to Dynamic Programming Connection to Stabilization Problems 1 DT nonlinear
More information