AN EMBEDDED FUNCTION TOOL FOR MODELING AND SIMULATING ESTIMATION PROBLEMS IN AEROSPACE ENGINEERING


AAS -8

D. Todd Griffith*, James D. Turner, and John L. Junkins

An automatic differentiation-based embedded function tool, OCEA (Object Oriented Coordinate Embedding Method), is presented for solving common estimation problems in Aerospace Engineering. The orbit determination and ballistic projectile parameter estimation problems have been chosen as examples. OCEA is extremely useful for computing nth-order partial derivatives of scalar, vector, matrix, and higher-dimension tensor functions for these applications. Both applications consider algorithm performance and robustness issues associated with applying high-order generalizations of the classical first-order optimization and estimation algorithms. OCEA-based tools are expected to have broad applicability for Aerospace problems in particular and engineering problems in general.

INTRODUCTION

An automatic differentiation-based embedded function tool is presented for solving some common estimation problems in Aerospace Engineering. The two problems considered are orbit determination and ballistic projectile parameter estimation. The embedded tool, OCEA (Object Oriented Coordinate Embedding Method), has broad potential for solving engineering design and optimization problems.1-6 OCEA is extremely useful for computing nth-order partial derivatives of scalar, vector, matrix, and higher-dimension tensor functions. A considerable advantage is found for applications requiring partial derivative calculations (e.g., gradient, Jacobian, Hessian, state transition matrices). Hidden operator-overloading tools completely free the analyst from the time-consuming and error-prone tasks of deriving, coding, and validating analytical partial derivative models. The user merely needs to define the embedded functions for the problem, using standard programming language tools. The partial derivatives are automatically computed and evaluated. The user can select first- through fourth-order partial derivative models. Extensive use of operator overloading provides a greatly simplified modeling development environment, because vector, matrix, and tensor equations can be expressed and manipulated in a form that closely resembles the way an analyst derives the results by hand.

* Graduate Research Assistant, Department of Aerospace Engineering, Texas A&M University, College Station, TX 778-, griffith@tamu.edu, Student member AAS and AIAA.
Adjunct Faculty, Department of Aerospace Engineering, Texas A&M University, and President, Amdyn Systems, White, GA.
George Eppright Chair, Distinguished Professor, Department of Aerospace Engineering, Texas A&M University, College Station, TX 778-, junkins@tamu.edu, AAS and AIAA Fellow.
Copyright (c) by the authors. Permission to publish granted to The American Astronautical Society.

The first problem is a ballistic projectile identification problem, which involves estimation of pitch and yaw angles. Computation of the required Jacobian for the first-order method requires considerable attention for a problem with many unknown parameters. Computation of the Hessian and higher-order Hessians would typically not be attempted; however, these higher-order methods are quickly and easily implemented in OCEA. At the heart of this problem lies the desire to estimate parameters for nonlinear systems. The ability to implement higher-order methods is very promising for expediting convergence rates for any system, especially nonlinear systems, without the overhead of computing partial derivatives by hand or using mathematical software packages.

Orbit determination is the second problem. The objective here is to determine the orbit of a spacecraft from range and line-of-sight measurements. The goal of orbit determination is to determine the initial conditions of the orbit and obtain parameter estimates for quantities such as drag and perturbing accelerations. The procedure here is no different than the well-known Gaussian Differential Correction method; however, by using OCEA, considerable time is saved by the analyst because the partial derivatives required in the calculation of the state transition matrices need not be derived and coded by hand. The analyst simply specifies the dynamical model (an embedded function) and the measurement model (an embedded function) and proceeds with the simulation. Tedious calculus and algebra are not required, and the analyst can focus his time on the algorithm itself. Advanced optimization algorithms are considered for first- through fourth-order generalized state transition matrix algorithms.

Both applications consider algorithm performance and robustness issues associated with applying high-order generalizations of the classical first-order optimization and estimation algorithms. OCEA-based tools are expected to have broad applicability for Aerospace problems in particular and engineering problems in general.

OVERVIEW OF OCEA AND AUTOMATIC DIFFERENTIATION

The computational tool used in this paper is the OCEA (Object Oriented Coordinate Embedding Method) extension for FORTRAN 90 (F90). The OCEA package is an object-oriented equation manipulation package. OCEA defines embedded variables that represent abstract data types. OCEA replaces each scalar variable in the problem with a differential n-tuple consisting of the following variables for a second-order OCEA method:

f := { f, ∇f, ∇²f }    (1)

where ∇f and ∇²f denote first- and second-order gradient tensors with respect to a user-defined set of independent variables. The introduction of the abstract differential n-tuple allows the computer to continue to manipulate each scalar variable as a conventional scalar variable, even though the first- and higher-order partial derivatives are attached to the scalar variable in a hidden way. The individual objects are extracted using structure constructor types (%) as follows: f = f%E, ∇f = f%V, and ∇²f = f%T.

The automatic computation of the partial derivatives is achieved by operator-overloading methodologies that redefine the intrinsic mathematical operators and functions using the rules of calculus. For example, addition and multiplication are redefined as follows:

a + b := { a + b, ∇a + ∇b, ∇²a + ∇²b }    (2)

a * b := { ab, a∇b + b∇a, a∇²b + ∇a(∇b)^T + ∇b(∇a)^T + b∇²a }    (3)

Additional operations for the standard mathematical library functions, such as trigonometric and exponential functions, are redefined to account for the known rules of differentiation. In essence, this approach pre-codes, once and for all, all of the partial derivatives required for any problem. At compile time, and without user intervention, the OCEA-based approach links the subroutines and functions required for evaluating the system and partial derivative models.
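To make the operator-overloading idea concrete, the short Fortran 90 sketch below is added for this transcription; it is not OCEA source code. It shows a derived type that carries a value and its first derivative with respect to a single independent variable, together with overloaded +, *, and cos that apply the sum, product, and chain rules. OCEA generalizes the same mechanism to many independent variables and to second- and higher-order partials; the module, type, and program names here are illustrative only.

  module dual_mod
    implicit none
    integer, parameter :: dp = kind(1.0d0)

    type :: dual
       real(dp) :: e   ! function value
       real(dp) :: v   ! first derivative w.r.t. the single independent variable
    end type dual

    interface operator(+)
       module procedure add_dd
    end interface
    interface operator(*)
       module procedure mul_dd
    end interface
    interface cos
       module procedure cos_d     ! extends the intrinsic COS to type(dual)
    end interface

  contains

    function add_dd(a, b) result(c)
      type(dual), intent(in) :: a, b
      type(dual) :: c
      c%e = a%e + b%e
      c%v = a%v + b%v              ! sum rule
    end function add_dd

    function mul_dd(a, b) result(c)
      type(dual), intent(in) :: a, b
      type(dual) :: c
      c%e = a%e*b%e
      c%v = a%v*b%e + a%e*b%v      ! product rule
    end function mul_dd

    function cos_d(a) result(c)
      type(dual), intent(in) :: a
      type(dual) :: c
      c%e = cos(a%e)
      c%v = -sin(a%e)*a%v          ! chain rule
    end function cos_d

  end module dual_mod

  program demo
    use dual_mod
    implicit none
    type(dual) :: x, f
    x = dual(0.5_dp, 1.0_dp)       ! seed dx/dx = 1
    f = x*x + cos(x)               ! f(x) = x**2 + cos(x)
    print *, 'f     = ', f%e       ! 0.25 + cos(0.5)
    print *, 'df/dx = ', f%v       ! 2*0.5 - sin(0.5)
  end program demo

Compiling and running the demo prints f = x**2 + cos(x) and df/dx = 2x - sin(x) at x = 0.5, with the derivative produced entirely by the overloaded operators rather than by hand-coded partials.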

REVERSION OF SERIES SOLUTION

In this section we present the reversion of series solution, which was previously reported by Turner.5,6 This solution provides the correction to be applied to the current guess of the unknown parameters in the estimation problem, where g(x) = 0 defines the necessary condition for the root of the equation. In order to develop the reversion of series solution, the necessary condition is defined as a root-solving problem with the following parameter embedding problem:

G(x(s), s) = g(x(s)) - (1 - s) g(x_guess) = 0    (4)

where s is a scalar embedding parameter, x_guess is the starting guess, and x = x(s), with x(0) = x_guess and x(1) the desired root. The reversion of series solution is given by

x ≈ x_guess + δx = x_guess + dx/ds + (1/2!) d²x/ds² + (1/3!) d³x/ds³ + (1/4!) d⁴x/ds⁴    (5)

The differential rates appearing in Eq. (5) are obtained by repeatedly differentiating G(x(s), s) with respect to s. The developments leading to computing these rates are presented in Reference 5 and are repeated here for completeness. The first- through fourth-order terms are given by

dx/ds = -[∇G]^{-1} g(x_guess)

d²x/ds² = -[∇G]^{-1} [ ∇²G (dx/ds)(dx/ds) ]

d³x/ds³ = -[∇G]^{-1} [ 3 ∇²G (dx/ds)(d²x/ds²) + ∇³G (dx/ds)(dx/ds)(dx/ds) ]

d⁴x/ds⁴ = -[∇G]^{-1} [ 4 ∇²G (dx/ds)(d³x/ds³) + 3 ∇²G (d²x/ds²)(d²x/ds²) + 6 ∇³G (dx/ds)(dx/ds)(d²x/ds²) + ∇⁴G (dx/ds)(dx/ds)(dx/ds)(dx/ds) ]    (6)

where all of the derivatives are evaluated at s = 0. The gradient terms (∇G, ∇²G, and so on) are understood to be taken with respect to the unknown parameters to be estimated. Equations (5) and (6) are used to update the state estimate. These gradient terms, or better yet sensitivities, are explained in greater detail in the estimation algorithm section.
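As a minimal illustration of how the terms of Eq. (6) are used, the scalar Fortran 90 sketch below is added for this transcription, with an arbitrarily chosen test equation g(x) = x**2 - 2 = 0; for a scalar unknown the gradient operators reduce to ordinary derivatives.

  program reversion_demo
    implicit none
    integer, parameter :: dp = kind(1.0d0)
    real(dp) :: x, g, dg, d2g, dx1, dx2
    integer  :: k

    x = 3.0_dp                        ! deliberately poor starting guess
    do k = 1, 5
       g   = x*x - 2.0_dp             ! g(x)
       dg  = 2.0_dp*x                 ! first derivative of g
       d2g = 2.0_dp                   ! second derivative of g
       dx1 = -g/dg                    ! first-order term, dx/ds of Eq. (6)
       dx2 = -(d2g*dx1*dx1)/dg        ! second-order term, d2x/ds2 of Eq. (6)
       x   = x + dx1 + 0.5_dp*dx2     ! Eq. (5) truncated at second order
       print '(a,i2,2(a,es12.4))', 'iter ', k, '  x =', x, '  |g| =', abs(x*x - 2.0_dp)
    end do
  end program reversion_demo

Keeping only dx1 recovers the ordinary Newton iteration; the added second-order term is what the higher-order algorithms developed below supply, via OCEA-generated partials, for vector problems.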

Weight Matrix Issues

We note here that some special attention must be given to weighting observations for higher-order corrections. For uncorrelated measurements, the optimal choice for weighting is w_i = 1/σ_i², where σ_i² is the variance of the i-th measurement. This weighting approach results in a diagonal weight matrix, W, which can be factored by the Cholesky decomposition as W = L L^T. Thus, for the optimal choice, L = L^T = diag(1/σ_i). The simplest way to implement weighted observations is to pre-multiply the observations and the measurement model predictions, which in this work includes the evaluations of the measurement model and its first and higher-order partial derivatives, by L^T.

NONLINEAR LEAST SQUARES

Review of Nonlinear Least Squares

It is a fact of life that most estimation problems are nonlinear. A description of the Nonlinear Least Squares algorithm can be found in many books on estimation.7 In summary, given a set of observations or measurements and a model for these measurements, the task is to estimate a set of measurement model parameters which best fit the observations. For a nonlinear problem, an iterative solution procedure must be employed. First, a starting guess for the unknown model parameters is supplied, and one iteration of the algorithm produces a correction to the starting guess. This process repeats until the estimate for the model parameters has converged. Of course, the algorithm is considered to have failed if the residual errors increase during the iteration process or the residual error remains essentially unchanged for many iterations of the algorithm.

Typically, one begins by finding the Least Squares estimate x̂ which minimizes the following cost function

J = (ỹ - h(x̂))^T W (ỹ - h(x̂))    (7)

where ỹ is the vector of measurements, h(x̂) is the measurement model, and W is an assumed weighting matrix. Taking the gradient of Eq. (7) with respect to the unknown state x̂ results in the necessary condition for minimizing the cost function, leading to

(∂h/∂x)^T W (ỹ - h(x̂)) = 0

which is equivalent to the necessary condition of Eq. (4). Linearization of the measurement model produces h(x̂_{k+1}) = h(x̂_k) + (∂h/∂x)|_{x̂_k} Δx̂_k, which, when substituted into the necessary condition, produces the well-known normal equations

Δx̂_k = (H^T W H)^{-1} H^T W Δy_k    (8)

where H = (∂h/∂x)|_{x̂_k} and Δy_k = ỹ - h(x̂_k). The normal equations represent exactly the first equation given in Eq. (6), which is the first-order correction term. In the remainder of this paper, the higher-order solutions given in Eq. (6) will be used to compute the updated state as

x̂_{k+1} = x̂_k + Δx̂_k    (9)
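The Cholesky-factored weighting described above combines with the normal equations in a simple way; the short identity below is added here to make the connection explicit. With W = L L^T, and defining H_L = L^T H and Δy_L = L^T Δy_k,

Δx̂_k = (H^T W H)^{-1} H^T W Δy_k = (H_L^T H_L)^{-1} H_L^T Δy_L

so pre-multiplying the residuals and the measurement partials by L^T reduces the weighted problem to unit-weight form, and the same substitution carries over to the higher-order corrections.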

The automatic differentiation capability of OCEA is well suited for computing the sensitivities for the Nonlinear Least Squares algorithm. These sensitivities are the first- through fourth-order partial derivatives of the measurement model with respect to the unknown model parameters. The benefit for the analyst is quite significant with respect to the time saved in computing these sensitivities by hand or by symbolic manipulation. As well, the analyst is freed from validating and hard coding these partial derivative expressions. This capability is particularly advantageous when there are a large number of model parameters to be estimated. For example, given a measurement model with n unknown model parameters, n partials are required for a first-order correction model. When going to higher order, the number of partial derivatives to be computed explodes to n^o, where o is the order of the correction model. In addition, the benefit of having the capability to change the measurement model without recomputing or revalidating the sensitivities cannot be overstated. An example is presented in the following section.

Ballistic Projectile Identification Example

As an example, consider the orientation of an aerodynamically and inertially symmetric projectile. Along the trajectory, measurements are taken of the pitch and yaw angles. The following model is assumed for the pitch and yaw angles, respectively:

θ(t, x) = h_1(t, x) = k_1 e^{λ_1 t} cos(ω_1 t + δ_1) + k_2 e^{λ_2 t} cos(ω_2 t + δ_2) + k_3 e^{λ_3 t} cos(ω_3 t + δ_3) + k_4    (10)

ψ(t, x) = h_2(t, x) = k_1 e^{λ_1 t} sin(ω_1 t + δ_1) + k_2 e^{λ_2 t} sin(ω_2 t + δ_2) + k_3 e^{λ_3 t} sin(ω_3 t + δ_3) + k_5    (11)

where x = (k_1, k_2, k_3, k_4, k_5, λ_1, λ_2, λ_3, ω_1, ω_2, ω_3, δ_1, δ_2, δ_3) are the unknown constant model parameters. Therefore, first- through fourth-order sensitivities would require n, n^2, n^3, and n^4 partial derivatives to be computed and validated.
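For concreteness, the following counts are added here (they are not in the original text): the model of Eqs. (10)-(11) has n = 14 unknown parameters, so a naive count of the partials of one scalar measurement function gives

n = 14:  n^1 = 14,  n^2 = 196,  n^3 = 2744,  n^4 = 38416

and the two measurement functions double these raw counts before any symmetry of the gradient tensors is exploited.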

Future research will dramatically reduce the number of higher-order partial derivatives required by exploiting the symmetry and sparsity of the gradient tensor operators.

The Nonlinear Least Squares algorithm described above can be coded once and for all. In order to solve the problem at hand, only the measurement model and starting guess must be specified. For this identification problem, the FORTRAN 90 subroutine containing the measurement model given in Eqs. (10) and (11) is shown in Appendix A. An important remark to be made about the measurement model subroutine given in Appendix A is that the analyst can invoke automatic differentiation by standard FORTRAN programming. A USE statement (USE EB_HANDLING) is included in order to invoke the automatic differentiation tool. The embedded variables (the model parameters) and embedded functions (the measurement model) are defined as embedded objects and coded using standard FORTRAN arithmetic operators. The output of this subroutine is the measurement model, and the gradient and higher-order partials, evaluated at the current state and time.

The automatic differentiation capability makes higher-order computational methods readily available. Results for norms of the measurement residuals for first- and second-order solutions for the ballistic projectile identification example are shown in Table 1.

Table 1. Cost for Ballistic Projectile Identification Problem
Iteration    First order    Second order

The results of Table 1 show that for an initial guess of more than 5% error, the first-order method converges in 7 iterations, while the second-order method shows rapid convergence in 5 iterations.

GAUSSIAN LEAST SQUARES DIFFERENTIAL CORRECTION

Higher-order Generalizations of GLSDC

The problem of determining the orbits of the heavenly bodies has been studied in great detail for many hundreds of years. Just over two hundred years ago, Gauss devised a method for solving this problem which bears his name, Gaussian Least Squares Differential Correction, or GLSDC. The essence of this method involves estimating the position and velocity at some time, usually the initial time at which the first measurement was taken. Along with a model for the dynamics of the body of interest, the complete orbit can be reconstructed from the estimated position and velocity at that time. In addition, uncertain model parameters can be estimated. These parameters include force model parameters such as drag, solar radiation pressure, and gravitational constants.

One aim of this paper is to present higher-order solution methods within the GLSDC framework. In addition, the necessary higher-order derivatives are shown to be computed automatically using automatic differentiation. As was the case for the Nonlinear Least Squares algorithm, the result is a general computation tool which can be coded once and for all, in this case for validating dynamical models against a set of measurements of the state of the system at various times.

The following sections present the GLSDC algorithm. The first-order GLSDC algorithm7 is a standard topic in many textbooks on estimation; however, it will be presented again in order to proceed logically to the second- and higher-order GLSDC algorithms. Simply put, the distinction between a Nonlinear Least Squares problem and a GLSDC problem is that in the latter case the measurements are chosen to fit a dynamical model as opposed to a set of algebraic equations. Both the dynamical model and the measurement model are nonlinear functions of the state and force model parameters. The dynamical model and measurement model are written in general form in Eqs. (12) and (13), respectively:

ẋ(t) = f(t, x(t));  x_0 = x(t_0)    (12)

ỹ = h(x(t))    (13)

Here, the state vector x contains the position and velocity states as well as any force model parameters in the model. A nonlinear estimation problem is defined because the dynamics and measurement models are nonlinear functions of the state and force model parameters. Therefore, an iteration procedure identical to that of Nonlinear Least Squares is utilized, as is done in Eq. (9). Unlike the Nonlinear Least Squares problem, GLSDC is an explicit function of the unknown initial position and velocity parameters. Therefore, the sensitivity calculations require state transition matrix calculations.

First, we begin with the necessary sensitivity calculations. For the time being, we focus on the first- and second-order differential corrections. The reversion of series provides the following corrections for first and second order, respectively, which follow from the first two equations of Eq. (6):

Δ^1 x_0 = (∇_{x_0} h)^{-1} Δy    (14)

Δ^2 x_0 = (∇_{x_0} h)^{-1} Δy - (1/2) (∇_{x_0} h)^{-1} [ ∇²_{x_0} h ((∇_{x_0} h)^{-1} Δy) ((∇_{x_0} h)^{-1} Δy) ]    (15)
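Equations (14) and (15) are simply the first two lines of Eq. (6) specialized to the measurement residual condition; the identification below is added here as a bridge, with the inverse read in the weighted least squares sense of Eq. (8):

g(x_0) = h(x_0) - ỹ,   ∇G = ∇_{x_0} h,   ∇²G = ∇²_{x_0} h

dx/ds = -(∇_{x_0} h)^{-1} (h - ỹ) = (∇_{x_0} h)^{-1} Δy = Δ^1 x_0

d²x/ds² = -(∇_{x_0} h)^{-1} [ ∇²_{x_0} h (dx/ds)(dx/ds) ]

Δ^2 x_0 = dx/ds + (1/2!) d²x/ds²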

It can be seen that first- and second-order sensitivities are needed in Eq. (15), and they are computed as follows:

∇_{x_0} h = ∂h/∂x(t_0) = [∂h/∂x(t)] [∂x(t)/∂x(t_0)] = ∇_x h · Φ(t, t_0)    (16)

∇²_{x_0} h = ∂²h/∂x(t_0)∂x(t_0) = [∂²h/∂x(t)∂x(t)] [∂x(t)/∂x(t_0)] [∂x(t)/∂x(t_0)] + [∂h/∂x(t)] [∂²x(t)/∂x(t_0)∂x(t_0)] = ∇²_x h · Φ(t, t_0) · Φ(t, t_0) + ∇_x h · Φ(t, t_0, t_0)    (17)

where the symbol Φ(t, t_0) represents a first-order state transition matrix and Φ(t, t_0, t_0) denotes a second-order state transition matrix. These sensitivities are written here in vector/matrix form for simplicity. A more thorough interpretation of how to carry out the implied multiplications in the first- through fourth-order sensitivity calculations is given in Appendix B in indicial notation.

Now we develop the necessary state transition matrix differential equations. For first- through fourth-order generalizations, we must introduce a number of state transition matrices. In this paper we adopt the following notation for first- through fourth-order state transition matrices:

First order:   Φ(t, t_0) = ∂x(t)/∂x(t_0)
Second order:  Φ(t, t_0, t_0) = ∂²x(t)/∂x(t_0)∂x(t_0)
Third order:   Φ(t, t_0, t_0, t_0) = ∂³x(t)/∂x(t_0)∂x(t_0)∂x(t_0)
Fourth order:  Φ(t, t_0, t_0, t_0, t_0) = ∂⁴x(t)/∂x(t_0)∂x(t_0)∂x(t_0)∂x(t_0)
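A simple illustration, added here, of what these objects are: for a force-free particle with state x = (r, v), dynamics ṙ = v and v̇ = 0, the solution r(t) = r_0 + v_0 (t - t_0), v(t) = v_0 gives

Φ(t, t_0) = ∂x(t)/∂x(t_0) = [ 1   t - t_0 ;  0   1 ],    Φ(t, t_0, t_0) = ∂²x(t)/∂x(t_0)∂x(t_0) = 0

(rows separated by semicolons), so for a linear system only the first-order state transition matrix is nontrivial, which is exactly the behavior exploited in the zero-drag orbit determination case below.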

The state transition matrices must be computed at each instant in time by solving differential equations associated with each. These state transition matrices are developed in the following by beginning with the integral form of Eq. (12):

x(t) = x(t_0) + ∫_{t_0}^{t} f(τ, x(τ)) dτ    (18)

We then differentiate Eq. (18) with respect to the initial state to compute

Φ(t, t_0) = ∂x(t)/∂x(t_0) = I + ∫_{t_0}^{t} [∂f(τ, x(τ))/∂x] [∂x(τ)/∂x(t_0)] dτ    (19)

The first-order state transition matrix differential equation is obtained upon time differentiation of Eq. (19):

dΦ(t, t_0)/dt = [∂f(t, x(t))/∂x] Φ(t, t_0)    (20)

Upon differentiating Eq. (19) once again with respect to the initial state, and then time differentiating this expression, we arrive at the following second-order state transition matrix differential equation:

dΦ(t, t_0, t_0)/dt = [∂f(t, x(t))/∂x] · Φ(t, t_0, t_0) + [∂²f(t, x(t))/∂x(t)∂x(t)] · Φ(t, t_0) · Φ(t, t_0)    (21)

Again, it is obvious that the state transition matrix differential equations can be extended to higher order by continuing along the path described above. Developments in indicial notation are given for the first- through fourth-order state transition matrix differential equations in Appendix C.

The utility of automatic differentiation is even more profound for the GLSDC algorithm. Here, the measurement model must be differentiated in order to compute the sensitivities, and the dynamical equations must be differentiated in order to solve the state transition matrix differential equations. The GLSDC algorithm can be summarized as follows:

1. Given measurements, ỹ, and an initial guess for the state, x̂.
2. Integrate the state transition matrix differential equations along with the dynamical equations until the next measurement is available: Eqs. (20) and (21), and (12).
3. Compute the sensitivities for each measurement time: Eqs. (16) and (17).
4. Once the final measurement time is reached, compute the differential correction Δx̂_k.
5. Update the state: x̂_{k+1} = x̂_k + Δx̂_k.
6. Check convergence.
7. If not converged, then repeat steps 2-6. If converged, then done.

Orbit Determination Example

In order to demonstrate higher-order corrections for the GLSDC algorithm, we consider as an example the planar motion of a projectile in a constant gravity field. We also consider a quadratic drag model of the form f_drag = -p |V| V, where p is the drag constant, V = [x_3  x_4]^T is the velocity vector, and |V| is the magnitude of the velocity. The equations of motion in first-order form are thus given by

ẋ = [ẋ_1, ẋ_2, ẋ_3, ẋ_4, ṗ]^T = [x_3, x_4, -p|V|x_3, -g - p|V|x_4, 0]^T    (22)

where x_1 and x_2 denote the position components, x_3 and x_4 the corresponding velocity components, and g the gravitational acceleration. Range and line-of-sight measurements are taken along the projectile's trajectory. The measurement model is given by

h = [r, θ]^T = [ sqrt(x_1² + x_2²),  tan^{-1}(x_2/x_1) ]^T    (23)

OCEA automatically computes the partial derivatives of Eq. (22) which are required for integrating the state transition matrix differential equations given in Appendix C, and also computes the partial derivatives of Eq. (23) required for the sensitivity calculations shown in Appendix B. Therefore, a generalized estimation tool can be coded once and for all, since partial derivatives need not be hand coded for each dynamics model or measurement model. For a particular problem, the analyst can simply change these models and the required sensitivities are automatically computed.
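The self-contained Fortran 90 sketch below is added for this transcription to show how step 2 of the GLSDC summary looks for this example: it integrates the dynamics of Eq. (22) together with the first-order state transition matrix equation, Eq. (20), using a fixed-step fourth-order Runge-Kutta scheme. The Jacobian ∂f/∂x is hand coded here purely for illustration; in the OCEA-based tool it is generated automatically from the embedded dynamics routine. The numerical values (gravity, drag constant, initial state, step size) are illustrative only.

  module projectile_stm
    implicit none
    integer, parameter :: dp = kind(1.0d0)
    integer, parameter :: n = 5                       ! states: x1, x2, x3, x4, p
    real(dp), parameter :: grav = 9.81_dp
  contains

    subroutine f_and_jac(x, f, a)
      ! Dynamics of Eq. (22) and Jacobian A = df/dx (assumes nonzero speed).
      real(dp), intent(in)  :: x(n)
      real(dp), intent(out) :: f(n), a(n,n)
      real(dp) :: v, p
      p = x(5)
      v = sqrt(x(3)**2 + x(4)**2)
      f(1) = x(3)
      f(2) = x(4)
      f(3) = -p*v*x(3)
      f(4) = -grav - p*v*x(4)
      f(5) = 0.0_dp
      a = 0.0_dp
      a(1,3) = 1.0_dp
      a(2,4) = 1.0_dp
      a(3,3) = -p*(v + x(3)**2/v);  a(3,4) = -p*x(3)*x(4)/v;      a(3,5) = -v*x(3)
      a(4,3) = -p*x(3)*x(4)/v;      a(4,4) = -p*(v + x(4)**2/v);  a(4,5) = -v*x(4)
    end subroutine f_and_jac

    subroutine rates(x, phi, xdot, phidot)
      ! Combined rates: xdot = f(x), phidot = A*phi  (Eq. (20)).
      real(dp), intent(in)  :: x(n), phi(n,n)
      real(dp), intent(out) :: xdot(n), phidot(n,n)
      real(dp) :: a(n,n)
      call f_and_jac(x, xdot, a)
      phidot = matmul(a, phi)
    end subroutine rates

    subroutine rk4_step(x, phi, h)
      ! One fixed-step RK4 update of the state and the first-order STM.
      real(dp), intent(inout) :: x(n), phi(n,n)
      real(dp), intent(in)    :: h
      real(dp) :: k1(n), k2(n), k3(n), k4(n)
      real(dp) :: p1(n,n), p2(n,n), p3(n,n), p4(n,n)
      call rates(x,               phi,               k1, p1)
      call rates(x + 0.5_dp*h*k1, phi + 0.5_dp*h*p1, k2, p2)
      call rates(x + 0.5_dp*h*k2, phi + 0.5_dp*h*p2, k3, p3)
      call rates(x + h*k3,        phi + h*p3,        k4, p4)
      x   = x   + (h/6.0_dp)*(k1 + 2.0_dp*k2 + 2.0_dp*k3 + k4)
      phi = phi + (h/6.0_dp)*(p1 + 2.0_dp*p2 + 2.0_dp*p3 + p4)
    end subroutine rk4_step

  end module projectile_stm

  program propagate
    use projectile_stm
    implicit none
    real(dp) :: x(n), phi(n,n), h
    integer  :: i, k
    x   = (/ 0.0_dp, 0.0_dp, 60.0_dp, 40.0_dp, 0.001_dp /)   ! illustrative initial state
    phi = 0.0_dp
    do i = 1, n
       phi(i,i) = 1.0_dp                                     ! Phi(t0, t0) = I
    end do
    h = 0.01_dp
    do k = 1, 100                                            ! propagate one second
       call rk4_step(x, phi, h)
    end do
    print *, 'state at t = 1 s:', x
    print *, 'dx1(t)/dx3(t0) =', phi(1,3)
  end program propagate

At each measurement time the propagated Φ feeds the sensitivity contractions of Eqs. (16)-(17); extending the sketch to the second-order matrix of Eq. (21) only requires carrying the additional array and its rate.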

The objective here is to estimate the initial state, whose true value is given by x_0 = [ m  5 m  m/s  m/s ]^T. It is assumed that the standard deviation of the range measurement and the line-of-sight measurement is meters and . rad, respectively. For these simulations, the projectile is observed for a total of seconds at second intervals. Results are shown for two cases: I) without drag and II) with drag. When drag is present, the equations of motion are nonlinear, as seen in Eq. (22); however, without drag this is a linear system. It can be observed from Appendix C that for a linear system, second- and higher-order state transition matrices are zero for all time; however, first-order state transition matrices are not. Therefore, second- and higher-order sensitivities are not zero, since they are a function of the first-order state transition matrices, as can be seen in Appendix B.

In each of the cases, we are primarily interested in evaluating rate of convergence and domain of convergence; or, put another way, we want to know how fast the algorithms converge and from how poor a guess they will converge.

Case I: Zero Drag Trajectory

Here we consider the case when the drag parameter p is removed from Eq. (22) and we estimate only the initial position and velocity. Since the measurements are a function of only position (no velocity dependence), we expect to have a better guess for the initial position than for the velocity. For this reason, and for practical issues dealing with the large number of possible guesses, we simulate the first- and second-order algorithms with an initial position guess in % error of the truth and varied initial velocity guess error. The results for the number of iterations required for convergence for the first- and second-order GLSDC algorithms are given in Table 2. The stopping criterion used for these simulations is 6-digit consistency of the measurement residual error.

Table 2. Convergence Study for Case I
Initial guess    First-order iteration count    Second-order iteration count

The results of Table 2 show that for a large practical range of poor guesses in the initial velocity, the second-order algorithm converges in fewer iterations. The state convergence history for an initial velocity guess of . ẋ_true is given in Table 3 for the first-order algorithm and in Table 4 for the second-order algorithm.

13 Table. First Order Algorithm State History Results Iteration X Z Ẋ Ż Cost Table. Second Order Algorithm State History Results Iteration X Z Ẋ Ż Cost Case II: Drag Trajectory Now we look at the case of estimating five states including initial position and velocity, and the drag parameter. The results of Table 5 show that the first-order algorithm converges faster than the second-order algorithm with the exception of starting guesses very close to the truth. For Case II, the trajectory is very sensitive to the estimate of the drag parameter on the first iteration. The first-order algorithm consistently produces a better drag parameter estimate on the first iteration, which results in faster convergence. One would expect that a second-order algorithm would show rapid convergence near the solution. The results here indicate that algorithm performance is problem dependent. Second-order algorithms are well known to have some potential difficulties related to reduction in sensitivity and diminished domain of convergence. Whereas first-order algorithms are typically insensitive to the starting guess, second-order algorithms can have a diminished domain of convergence because some starting guesses outside the

Whereas first-order algorithms are typically insensitive to the starting guess, second-order algorithms can have a diminished domain of convergence because some starting guesses outside the region of convergence have second-order sensitivity (or curvature) which has the wrong sign. Simply put, if the initial guess lies where the curvature is wrong, the predicted corrections to the state are not necessarily in the correct direction. Convergence depends upon the ability to extrapolate from one state to another state with a reduction in the performance index, which does not happen when the curvature has the wrong sign. It is anticipated that simulation of third- and fourth-order generalizations of the GLSDC algorithm will provide additional insight. With the second-order method, one term is added to the correction (extrapolation), and there is no guarantee that curvature conditions are satisfied for any choice of initial guess. When the state is corrected by including third- and fourth-order terms as well, one would expect an improvement in the prediction of the state for the next iteration, in the correct direction, over the addition of only one additional second-order term.
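A scalar version of this curvature argument, added here for illustration: for a single unknown, the first two terms of Eq. (6) give the correction

Δx = -g/g' - (1/2) g'' g² / (g')³

The first term is the ordinary Newton correction; the sign and size of the added term are governed by the curvature g'' at the current guess, so a starting point where the curvature has the wrong sign can push the second-order extrapolation away from the direction of decreasing residual even though the first-order term points toward it.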

Table 5. Convergence Study for Case II
Initial guess    First-order iteration count    Second-order iteration count

Tables 6 and 7 show results for one particular optimistic initial guess in which all state guesses are in 5% error of the truth. The results show that the second-order algorithm produces a better estimate of the drag parameter on the first iteration and converges in one less iteration.

Table 6. First-Order State Time History
Iteration    X    Z    Ẋ    Ż    p    Cost

Table 7. Second-Order State Time History
Iteration    X    Z    Ẋ    Ż    p    Cost

CONCLUSION

Higher-order generalizations of the commonly used Nonlinear Least Squares and Gaussian Least Squares Differential Correction estimation algorithms have been presented in this paper. The automatic differentiation tool, OCEA, was utilized to compute the partial derivatives required in these algorithms. The automatic differentiation capability permits coding these algorithms once and for all. New problems are solved by simply changing the appropriate dynamical and measurement models for the problem. An example for each case was presented. An improvement in convergence was found for the ballistic projectile identification problem with the second-order algorithm. Some of the difficulties associated with these higher-order generalizations, such as domain of convergence and reduction in sensitivity, were addressed with an orbit determination example. Overall, these higher-order generalizations offer new algorithms which show promise for improving convergence.

An important contribution of this paper is the development of the differential equations for higher-order state transition matrices. This development is essential in computing sensitivities for the higher-order GLSDC estimation algorithms. The higher-order state transition matrices will prove useful in higher-order methods for propagation of uncertainty or covariance. Future work includes simulating third- and fourth-order estimation algorithms, as well as higher-order methods for propagation of uncertainty.

REFERENCES

1. J. D. Turner, "Quaternion-Based Partial Derivative and State Transition Matrix Calculations for Design Optimization," paper presented to the AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, Jan.
2. J. D. Turner, "Object Oriented Coordinate Embedding Algorithm for Automatically Generating the Jacobian and Hessian Partials of Nonlinear Vector Functions," Invention Disclosure, University of Iowa, May.
3. J. D. Turner, "The Application of Clifford Algebras for Computing the Sensitivity Partial Derivatives of Linked Mechanical Systems," invited paper presented to Mini-Symposium: Nonlinear Dynamics and Control, USNCTAM: Fourteenth U.S. National Congress of Theoretical and Applied Mechanics, Blacksburg, Virginia, USA, June.
4. J. D. Turner, "Automated Generation of High-Order Partial Derivative Models," to appear, AIAA Journal, August.
5. J. D. Turner, "Generalized Gradient Search and Newton's Methods for Multilinear Algebra Root-Solving and Optimization Applications," Invited Paper No. AAS -6, to appear in the Proceedings of The John L. Junkins Astrodynamics Symposium, George Bush Conference Center, College Station, Texas, May.
6. J. D. Turner, "Generalized Gradient Search and Newton's Methods for Multilinear Algebra Root-Solving and Optimization Applications," Paper No. AAS -6, to appear in a special issue of The Journal of the Astronautical Sciences commemorating The John L. Junkins Astrodynamics Symposium, held at the George Bush Conference Center, College Station, Texas, May.

7. Crassidis, J. L., and Junkins, J. L., An Introduction to Optimal Estimation of Dynamical Systems, textbook in press, CRC Press, March.

APPENDIX A: Fortran 90 Nonlinear Least Squares Measurement Model

SUBROUTINE NONLINEAR_FX( T, EB_VAR, EB_FCTN )
! THIS PROGRAM EVALUATES A VECTOR FUNCTION USING EMBEDDED PROCESSING.
! THE USER INPUTS A VECTOR OF OCEA-INITIALIZED INDEPENDENT VARIABLES
! AND EVALUATES A VECTOR FUNCTION.
!
! INPUT:
!   EB_VAR:  NVx1 VECTOR OF OCEA-INITIALIZED INDEPENDENT VARIABLES
! OUTPUT:
!   EB_FCTN: NFx1 VECTOR OF OCEA-EVALUATED NONLINEAR FUNCTIONS
!            = [ F, DEL(F), DEL^2(F) ] = [function, gradient, hessian]
!=====================================
! COPYRIGHT (C) JAMES D. TURNER
!=====================================
  USE EB_HANDLING
  IMPLICIT NONE
! ARGUMENT LIST VARIABLES
  REAL(DP)::T
  TYPE(EB), DIMENSION(NV), INTENT(IN)   :: EB_VAR
  TYPE(EB), DIMENSION(NF), INTENT(INOUT):: EB_FCTN
! DEFINE LOCAL + EMBEDDED VARIABLES
  REAL(DP), DIMENSION(NF):: FX, DELX
  REAL(DP), DIMENSION(NF,NV):: JAC
  REAL(DP), DIMENSION(NF,NV,NV):: HES
  REAL(DP), DIMENSION(NV,NV):: A
  TYPE(EB):: K1, K2, K3, K4, K5, LAM1, LAM2, LAM3
  TYPE(EB):: OMEG1, OMEG2, OMEG3, DEL1, DEL2, DEL3
! ASSIGN LOCAL VARIABLES
  K1=EB_VAR(1);K2=EB_VAR(2);K3=EB_VAR(3);K4=EB_VAR(4);K5=EB_VAR(5)
  LAM1=EB_VAR(6);LAM2=EB_VAR(7);LAM3=EB_VAR(8)
  OMEG1=EB_VAR(9);OMEG2=EB_VAR(10);OMEG3=EB_VAR(11)
  DEL1=EB_VAR(12);DEL2=EB_VAR(13);DEL3=EB_VAR(14)
! COMPUTE NONLINEAR FUNCTION USING EMBEDDED ALGEBRA
  EB_FCTN(1) = K1*EXP(LAM1*T)*COS(OMEG1*T+DEL1) + K2*EXP(LAM2*T)*&
               COS(OMEG2*T+DEL2) + K3*EXP(LAM3*T)*COS(OMEG3*T+DEL3) + K4
  EB_FCTN(2) = K1*EXP(LAM1*T)*SIN(OMEG1*T+DEL1) + K2*EXP(LAM2*T)*&
               SIN(OMEG2*T+DEL2) + K3*EXP(LAM3*T)*SIN(OMEG3*T+DEL3) + K5
END SUBROUTINE NONLINEAR_FX

APPENDIX B: Sensitivities

First order:

h_{i,j} = h_{i,s} Φ_{sj}    (B.1)

h_{i,j} = ∂h_i(t)/∂x_j(t_0)    (B.2)

Second order:

h_{i,jk} = h_{i,s} Φ_{sjk} + h_{i,st} Φ_{tk} Φ_{sj}    (B.3)

h_{i,jk} = ∂²h_i(t)/∂x_j(t_0)∂x_k(t_0)    (B.4)

Third order:

h_{i,jkl} = h_{i,s} Φ_{sjkl} + h_{i,su} Φ_{ul} Φ_{sjk} + h_{i,st} Φ_{tk} Φ_{sjl} + h_{i,st} Φ_{tkl} Φ_{sj} + h_{i,stu} Φ_{ul} Φ_{tk} Φ_{sj}    (B.5)

h_{i,jkl} = ∂³h_i(t)/∂x_j(t_0)∂x_k(t_0)∂x_l(t_0)    (B.6)

Fourth order:

h_{i,jklm} = h_{i,s} Φ_{sjklm} + h_{i,sv} Φ_{vm} Φ_{sjkl}
           + h_{i,su} Φ_{ul} Φ_{sjkm} + h_{i,su} Φ_{ulm} Φ_{sjk} + h_{i,suv} Φ_{vm} Φ_{ul} Φ_{sjk}
           + h_{i,st} Φ_{tk} Φ_{sjlm} + h_{i,st} Φ_{tkm} Φ_{sjl} + h_{i,stv} Φ_{vm} Φ_{tk} Φ_{sjl}
           + h_{i,st} Φ_{tkl} Φ_{sjm} + h_{i,st} Φ_{tklm} Φ_{sj} + h_{i,stv} Φ_{vm} Φ_{tkl} Φ_{sj}
           + h_{i,stu} Φ_{ul} Φ_{tk} Φ_{sjm} + h_{i,stu} Φ_{ul} Φ_{tkm} Φ_{sj} + h_{i,stu} Φ_{ulm} Φ_{tk} Φ_{sj}
           + h_{i,stuv} Φ_{vm} Φ_{ul} Φ_{tk} Φ_{sj}    (B.7)

h_{i,jklm} = ∂⁴h_i(t)/∂x_j(t_0)∂x_k(t_0)∂x_l(t_0)∂x_m(t_0)    (B.8)

i = 1, 2, ..., n_m;  j, k, l, m, s, t, u, v = 1, 2, ..., n_s
n_m = number of measurements
n_s = number of states
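As a sketch of how the implied summations are carried out in code (added here; the array names and calling convention are illustrative, not OCEA's), the routine below evaluates the second-order contraction of Eq. (B.3) with plain loops:

  subroutine second_order_sensitivity(nm, ns, h1, h2, phi1, phi2, hxx0)
    ! Implements Eq. (B.3):  h_{i,jk} = h_{i,s} Phi_{sjk} + h_{i,st} Phi_{tk} Phi_{sj}
    implicit none
    integer, parameter :: dp = kind(1.0d0)
    integer, intent(in)   :: nm, ns
    real(dp), intent(in)  :: h1(nm,ns)          ! h_{i,s}   = dh_i/dx_s(t)
    real(dp), intent(in)  :: h2(nm,ns,ns)       ! h_{i,st}  = d2h_i/dx_s(t)dx_t(t)
    real(dp), intent(in)  :: phi1(ns,ns)        ! Phi_{sj}  = first-order STM
    real(dp), intent(in)  :: phi2(ns,ns,ns)     ! Phi_{sjk} = second-order STM
    real(dp), intent(out) :: hxx0(nm,ns,ns)     ! h_{i,jk}  = d2h_i/dx_j(t0)dx_k(t0)
    integer :: i, j, k, s, t

    hxx0 = 0.0_dp
    do i = 1, nm
       do j = 1, ns
          do k = 1, ns
             do s = 1, ns
                hxx0(i,j,k) = hxx0(i,j,k) + h1(i,s)*phi2(s,j,k)
                do t = 1, ns
                   hxx0(i,j,k) = hxx0(i,j,k) + h2(i,s,t)*phi1(t,k)*phi1(s,j)
                end do
             end do
          end do
       end do
    end do
  end subroutine second_order_sensitivity

The higher-order contractions of Eqs. (B.5) and (B.7) follow the same pattern with additional nested loops.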

APPENDIX C: State Transition Matrix Differential Equations

The state transition matrix differential equations presented in Eqs. (20) and (21) are here written in indicial notation. Note: all indices run from 1 to n_s, where n_s is the number of states. Initial conditions are the identity matrix for the first-order state transition matrix differential equation, and zeros for the second- and higher-order state transition matrix differential equations.

First order:

dΦ_{ij}/dt = f_{i,s} Φ_{sj}    (C.1)

Φ_{ij} = Φ_{ij}(t, t_0) = ∂x_i(t)/∂x_j(t_0)    (C.2)

Second order:

dΦ_{ijk}/dt = f_{i,s} Φ_{sjk} + f_{i,st} Φ_{tk} Φ_{sj}    (C.3)

Φ_{ijk} = Φ_{ijk}(t, t_0, t_0) = ∂²x_i(t)/∂x_j(t_0)∂x_k(t_0)    (C.4)

Third order:

dΦ_{ijkl}/dt = f_{i,s} Φ_{sjkl} + f_{i,su} Φ_{ul} Φ_{sjk} + f_{i,st} Φ_{tk} Φ_{sjl} + f_{i,st} Φ_{tkl} Φ_{sj} + f_{i,stu} Φ_{ul} Φ_{tk} Φ_{sj}    (C.5)

Φ_{ijkl} = Φ_{ijkl}(t, t_0, t_0, t_0) = ∂³x_i(t)/∂x_j(t_0)∂x_k(t_0)∂x_l(t_0)    (C.6)

Fourth order:

dΦ_{ijklm}/dt = f_{i,s} Φ_{sjklm} + f_{i,sv} Φ_{vm} Φ_{sjkl}
              + f_{i,su} Φ_{ul} Φ_{sjkm} + f_{i,su} Φ_{ulm} Φ_{sjk} + f_{i,suv} Φ_{vm} Φ_{ul} Φ_{sjk}
              + f_{i,st} Φ_{tk} Φ_{sjlm} + f_{i,st} Φ_{tkm} Φ_{sjl} + f_{i,stv} Φ_{vm} Φ_{tk} Φ_{sjl}
              + f_{i,st} Φ_{tkl} Φ_{sjm} + f_{i,st} Φ_{tklm} Φ_{sj} + f_{i,stv} Φ_{vm} Φ_{tkl} Φ_{sj}
              + f_{i,stu} Φ_{ul} Φ_{tk} Φ_{sjm} + f_{i,stu} Φ_{ul} Φ_{tkm} Φ_{sj} + f_{i,stu} Φ_{ulm} Φ_{tk} Φ_{sj}
              + f_{i,stuv} Φ_{vm} Φ_{ul} Φ_{tk} Φ_{sj}    (C.7)

Φ_{ijklm} = Φ_{ijklm}(t, t_0, t_0, t_0, t_0) = ∂⁴x_i(t)/∂x_j(t_0)∂x_k(t_0)∂x_l(t_0)∂x_m(t_0)    (C.8)


Optimization of Orbital Transfer of Electrodynamic Tether Satellite by Nonlinear Programming Optimization of Orbital Transfer of Electrodynamic Tether Satellite by Nonlinear Programming IEPC-2015-299 /ISTS-2015-b-299 Presented at Joint Conference of 30th International Symposium on Space Technology

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

Simulation of CESR-c Luminosity from Beam Functions

Simulation of CESR-c Luminosity from Beam Functions Simulation of CESR-c Luminosity from Beam Functions Abhijit C. Mehta Trinity College, Duke University, Durham, North Carolina, 27708 (Dated: August 13, 2004) It is desirable to have the ability to compute

More information

carroll/notes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general

carroll/notes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general http://pancake.uchicago.edu/ carroll/notes/ has a lot of good notes on GR and links to other pages. General Relativity Philosophy of general relativity. As with any major theory in physics, GR has been

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Identifying Safe Zones for Planetary Satellite Orbiters

Identifying Safe Zones for Planetary Satellite Orbiters AIAA/AAS Astrodynamics Specialist Conference and Exhibit 16-19 August 2004, Providence, Rhode Island AIAA 2004-4862 Identifying Safe Zones for Planetary Satellite Orbiters M.E. Paskowitz and D.J. Scheeres

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations Solving Linear Systems of Equations Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:

More information

A Few Concepts from Numerical Analysis

A Few Concepts from Numerical Analysis 2 A Few Concepts from Numerical Analysis A systematic treatment of numerical methods is provided in conventional courses and textbooks on numerical analysis. But a few very common issues, that emerge in

More information

1 Gauss integral theorem for tensors

1 Gauss integral theorem for tensors Non-Equilibrium Continuum Physics TA session #1 TA: Yohai Bar Sinai 16.3.216 Index Gymnastics: Gauss Theorem, Isotropic Tensors, NS Equations The purpose of today s TA session is to mess a bit with tensors

More information

Rigorous Global Optimization of Impulsive Space Trajectories

Rigorous Global Optimization of Impulsive Space Trajectories Rigorous Global Optimization of Impulsive Space Trajectories P. Di Lizia, R. Armellin, M. Lavagna K. Makino, M. Berz Fourth International Workshop on Taylor Methods Boca Raton, December 16 19, 2006 Motivation

More information

Math and Numerical Methods Review

Math and Numerical Methods Review Math and Numerical Methods Review Michael Caracotsios, Ph.D. Clinical Associate Professor Chemical Engineering Department University of Illinois at Chicago Introduction In the study of chemical engineering

More information

Deep Learning. Authors: I. Goodfellow, Y. Bengio, A. Courville. Chapter 4: Numerical Computation. Lecture slides edited by C. Yim. C.

Deep Learning. Authors: I. Goodfellow, Y. Bengio, A. Courville. Chapter 4: Numerical Computation. Lecture slides edited by C. Yim. C. Chapter 4: Numerical Computation Deep Learning Authors: I. Goodfellow, Y. Bengio, A. Courville Lecture slides edited by 1 Chapter 4: Numerical Computation 4.1 Overflow and Underflow 4.2 Poor Conditioning

More information

Terminal Convergence Approximation Modified Chebyshev Picard Iteration for efficient numerical integration of orbital trajectories

Terminal Convergence Approximation Modified Chebyshev Picard Iteration for efficient numerical integration of orbital trajectories Terminal Convergence Approximation Modified Chebyshev Picard Iteration for efficient numerical integration of orbital trajectories Austin B. Probe Texas A&M University Brent Macomber, Donghoon Kim, Robyn

More information

Neural Networks Learning the network: Backprop , Fall 2018 Lecture 4

Neural Networks Learning the network: Backprop , Fall 2018 Lecture 4 Neural Networks Learning the network: Backprop 11-785, Fall 2018 Lecture 4 1 Recap: The MLP can represent any function The MLP can be constructed to represent anything But how do we construct it? 2 Recap:

More information

MATHEMATICAL MODELLING, MECHANICS AND MOD- ELLING MTHA4004Y

MATHEMATICAL MODELLING, MECHANICS AND MOD- ELLING MTHA4004Y UNIVERSITY OF EAST ANGLIA School of Mathematics Main Series UG Examination 2017 18 MATHEMATICAL MODELLING, MECHANICS AND MOD- ELLING MTHA4004Y Time allowed: 2 Hours Attempt QUESTIONS 1 and 2, and ONE other

More information

Nonlinear Filtering. With Polynomial Chaos. Raktim Bhattacharya. Aerospace Engineering, Texas A&M University uq.tamu.edu

Nonlinear Filtering. With Polynomial Chaos. Raktim Bhattacharya. Aerospace Engineering, Texas A&M University uq.tamu.edu Nonlinear Filtering With Polynomial Chaos Raktim Bhattacharya Aerospace Engineering, Texas A&M University uq.tamu.edu Nonlinear Filtering with PC Problem Setup. Dynamics: ẋ = f(x, ) Sensor Model: ỹ = h(x)

More information

Process Model Formulation and Solution, 3E4

Process Model Formulation and Solution, 3E4 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

More information

SUN INFLUENCE ON TWO-IMPULSIVE EARTH-TO-MOON TRANSFERS. Sandro da Silva Fernandes. Cleverson Maranhão Porto Marinho

SUN INFLUENCE ON TWO-IMPULSIVE EARTH-TO-MOON TRANSFERS. Sandro da Silva Fernandes. Cleverson Maranhão Porto Marinho SUN INFLUENCE ON TWO-IMPULSIVE EARTH-TO-MOON TRANSFERS Sandro da Silva Fernandes Instituto Tecnológico de Aeronáutica, São José dos Campos - 12228-900 - SP-Brazil, (+55) (12) 3947-5953 sandro@ita.br Cleverson

More information

Parallel Methods for ODEs

Parallel Methods for ODEs Parallel Methods for ODEs Levels of parallelism There are a number of levels of parallelism that are possible within a program to numerically solve ODEs. An obvious place to start is with manual code restructuring

More information

Tensor Analysis in Euclidean Space

Tensor Analysis in Euclidean Space Tensor Analysis in Euclidean Space James Emery Edited: 8/5/2016 Contents 1 Classical Tensor Notation 2 2 Multilinear Functionals 4 3 Operations With Tensors 5 4 The Directional Derivative 5 5 Curvilinear

More information

CHAPTER 4 ROOTS OF EQUATIONS

CHAPTER 4 ROOTS OF EQUATIONS CHAPTER 4 ROOTS OF EQUATIONS Chapter 3 : TOPIC COVERS (ROOTS OF EQUATIONS) Definition of Root of Equations Bracketing Method Graphical Method Bisection Method False Position Method Open Method One-Point

More information

Extension of the Sparse Grid Quadrature Filter

Extension of the Sparse Grid Quadrature Filter Extension of the Sparse Grid Quadrature Filter Yang Cheng Mississippi State University Mississippi State, MS 39762 Email: cheng@ae.msstate.edu Yang Tian Harbin Institute of Technology Harbin, Heilongjiang

More information

Partial J 2 -Invariance for Spacecraft Formations

Partial J 2 -Invariance for Spacecraft Formations Partial J 2 -Invariance for Spacecraft Formations Louis Breger and Jonathan P. How MIT Department of Aeronautics and Astronautics Kyle T. Alfriend Texas A&M University Department of Aerospace Engineering

More information

Performance of a Dynamic Algorithm For Processing Uncorrelated Tracks

Performance of a Dynamic Algorithm For Processing Uncorrelated Tracks Performance of a Dynamic Algorithm For Processing Uncorrelated Tracs Kyle T. Alfriend Jong-Il Lim Texas A&M University Tracs of space objects, which do not correlate, to a nown space object are called

More information

A Fast Algorithm for Computing High-dimensional Risk Parity Portfolios

A Fast Algorithm for Computing High-dimensional Risk Parity Portfolios A Fast Algorithm for Computing High-dimensional Risk Parity Portfolios Théophile Griveau-Billion Quantitative Research Lyxor Asset Management, Paris theophile.griveau-billion@lyxor.com Jean-Charles Richard

More information

Adaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation

Adaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation Adaptive Unscented Kalman Filter with Multiple Fading Factors for Pico Satellite Attitude Estimation Halil Ersin Söken and Chingiz Hajiyev Aeronautics and Astronautics Faculty Istanbul Technical University

More information

Compute the behavior of reality even if it is impossible to observe the processes (for example a black hole in astrophysics).

Compute the behavior of reality even if it is impossible to observe the processes (for example a black hole in astrophysics). 1 Introduction Read sections 1.1, 1.2.1 1.2.4, 1.2.6, 1.3.8, 1.3.9, 1.4. Review questions 1.1 1.6, 1.12 1.21, 1.37. The subject of Scientific Computing is to simulate the reality. Simulation is the representation

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information