Sensitivity Analysis AA222 - Multidisciplinary Design Optimization Joaquim R. R. A. Martins Durand


joaquim.martins@stanford.edu

1 Introduction

Sensitivity analysis consists of computing derivatives of one or more quantities (outputs) with respect to one or several independent variables (inputs). Although there are various uses for sensitivity information, our main motivation is its use in gradient-based optimization. Since the calculation of gradients is often the most costly step in the optimization cycle, using efficient methods that accurately calculate sensitivities is extremely important.

There are several different methods for sensitivity analysis, but since none of them is the clear choice for all cases, it is important to understand their relative merits. When choosing a method for computing sensitivities, one is mainly concerned with its accuracy and computational expense. In certain cases it is also important that the method be easy to implement: a method which is efficient but difficult to implement may never be finalized, while an easier, though computationally more costly, method would actually give some result. Factors that affect the choice of method include the ratio of the number of outputs to the number of inputs, the importance of computational efficiency, and the degree of laziness of the programmer.

Consider a general constrained optimization problem of the form:

    minimize    f(x_i)
    w.r.t.      x_i,  i = 1, 2, ..., n
    subject to  g_j(x_i) <= 0,  j = 1, 2, ..., m

where f is a nonlinear function of the n design variables x_i and the g_j are the m nonlinear inequality constraints we have to satisfy. In order to solve this problem, a gradient-based optimization algorithm usually requires:

- the sensitivities of the objective function, ∂f/∂x_i (an n x 1 vector);
- the sensitivities of all the active constraints at the current design point, ∂g_j/∂x_i (an m x n matrix).

2 Finite-Differences

Finite-difference formulae are very commonly used to estimate sensitivities. Although these approximations are neither particularly accurate nor efficient, this method's biggest advantage resides in the fact that it is extremely easy to implement.

All the finite-difference formulae can be derived by truncating a Taylor series expanded about a given point x. A common estimate for the first derivative is the forward difference, which can be derived from the expansion of f(x + h),

    f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + ...    (1)

Solving for f' we get the finite-difference formula,

    f'(x) = [f(x + h) - f(x)] / h + O(h),    (2)

where h is called the finite-difference interval. The truncation error is O(h), and hence this is a first-order approximation. For a second-order estimate we can use the expansion of f(x - h),

    f(x - h) = f(x) - h f'(x) + (h²/2!) f''(x) - (h³/3!) f'''(x) + ...,    (3)

and subtract it from the expansion given in Equation (1). The resulting equation can then be solved for the derivative of f to obtain the central-difference formula,

    f'(x) = [f(x + h) - f(x - h)] / (2h) + O(h²).    (4)

When estimating sensitivities using finite-difference formulae we are faced with the step-size dilemma, i.e. the desire to choose a small step size to minimize truncation error while avoiding the use of a step so small that errors due to subtractive cancellation become dominant.

The cost of calculating sensitivities with finite differences is proportional to the number of design variables, since f must be calculated for each perturbation of x_i. This means that if we use forward differences, for example, the cost would be n + 1 times the cost of calculating f.

3 The Complex-Step Derivative Approximation

3.1 Background

The use of complex variables to develop estimates of derivatives originated with the work of Lyness and Moler [1] and Lyness [2]. Their work produced several methods that made use of complex variables, including a reliable method for calculating the n-th derivative of an analytic function. However, only recently has some of this theory been rediscovered by Squire and Trapp [3] and used to obtain a very simple expression for estimating the first derivative. This estimate is suitable for use in modern numerical computing and has been shown to be very accurate, extremely robust and surprisingly easy to implement, while retaining a reasonable computational cost.

3.2 Basic Theory

We will now see that a very simple formula for the first derivative of real functions can be obtained using complex calculus. Consider a function, f = u + iv, of the complex variable, z = x + iy. If f is analytic, the Cauchy-Riemann equations apply, i.e.,

    ∂u/∂x = ∂v/∂y    (5)
    ∂u/∂y = -∂v/∂x.    (6)

These equations establish the exact relationship between the real and imaginary parts of the function. We can use the definition of a derivative in the right-hand side of the first Cauchy-Riemann Equation (5) to obtain,

    ∂u/∂x = lim_{h→0} [v(x + i(y + h)) - v(x + iy)] / h,    (7)

where h is a small real number. Since the functions that we are interested in are real functions of a real variable, we restrict ourselves to the real axis, in which case y = 0, u(x) = f(x) and v(x) = 0. Equation (7) can then be re-written as,

    ∂f/∂x = lim_{h→0} Im[f(x + ih)] / h.    (8)

For a small discrete h, this can be approximated by,

    ∂f/∂x ≈ Im[f(x + ih)] / h.    (9)

We will call this the complex-step derivative approximation. This estimate is not subject to subtractive cancellation error, since it does not involve a difference operation. This constitutes a tremendous advantage over the finite-difference approaches expressed in Equations (2) and (4).

In order to determine the error involved in this approximation, we will show an alternative derivation based on a Taylor series expansion. Rather than using a real step h, we now use a pure imaginary step, ih. If f is a real function of a real variable and it is also analytic, we can expand it in a Taylor series about a real point x as follows,

    f(x + ih) = f(x) + ih f'(x) - (h²/2!) f''(x) - (ih³/3!) f'''(x) + ...    (10)

Taking the imaginary parts of both sides of Equation (10) and dividing the equation by h yields

    f'(x) = Im[f(x + ih)] / h + (h²/3!) f'''(x) - ...    (11)

Hence the approximation is an O(h²) estimate of the derivative of f.
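As a sketch of Equation (9), and of the finite-difference formulas (2) and (4) it will be compared against, the following Python snippet estimates a derivative with all three methods; the test function and step sizes are chosen for illustration only:

```python
import math
import cmath

def forward_diff(f, x, h):
    # Equation (2): first-order forward difference
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Equation (4): second-order central difference
    return (f(x + h) - f(x - h)) / (2 * h)

def complex_step(f, x, h=1e-20):
    # Equation (9): f'(x) ~ Im[f(x + ih)] / h -- no subtraction, so no cancellation
    return f(complex(x, h)).imag / h

# d/dx sin(x) at x = 0.5; the exact value is cos(0.5)
exact = math.cos(0.5)
fwd = forward_diff(math.sin, 0.5, 1e-8)
ctr = central_diff(math.sin, 0.5, 1e-5)
cs = complex_step(cmath.sin, 0.5)   # h can be tiny: no cancellation error
```

Even with h = 10⁻²⁰ the complex-step estimate matches cos(0.5) to machine precision, while the finite-difference estimates are limited by the step-size dilemma.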

3.3 A Simple Numerical Example

Because the complex-step approximation does not involve a difference operation, we can choose extremely small step sizes with no loss of accuracy due to subtractive cancellation. To illustrate this, consider the following analytic function:

    f(x) = e^x / (sin³x + cos³x)    (12)

The exact derivative at x = 1.5 was computed analytically to 16 digits and then compared to the results given by the complex-step approximation (9) and the forward- and central-difference approximations.

Figure 1: Relative error in the sensitivity estimates given by the finite-difference and complex-step methods, with the analytic result as the reference; ε = |f' - f'_ref| / |f'_ref|.

The forward-difference estimate initially converges to the exact result at a linear rate, since its truncation error is O(h), while the central difference converges quadratically, as expected. However, as the step is reduced below a value of about 10⁻⁸ for the forward difference and 10⁻⁵ for the central difference, subtractive cancellation errors become significant and the estimates are unreliable. When the interval h is so small that no difference exists in the output, the finite-difference estimates eventually yield zero, and then ε = 1.

The complex-step estimate converges quadratically with decreasing step size, as predicted by the truncation error estimate. The estimate is practically insensitive to small step sizes, and below an h of the order of 10⁻⁸ it achieves the accuracy of the function evaluation. Comparing the best accuracy of each of these approaches, we can see that by using finite differences we only achieve a fraction of the accuracy that is obtained with the complex-step approximation.

As we can see, the complex-step size can be made extremely small. However, there is a lower limit on the step size when using finite-precision arithmetic: the range of real numbers that can be handled in numerical computing depends on the particular compiler and hardware that are used. If a number falls below the smallest representable value, underflow occurs and the number drops to zero, and the derivative estimate then results in NaN. In general, the smallest possible h is the one below which underflow occurs somewhere in the algorithm. When it comes to comparing the relative accuracy of complex and real computations, there is an increased error in basic arithmetic operations when using complex numbers, more specifically when dividing and multiplying.

3.4 Complex Function Definitions

In the derivation of the complex-step derivative approximation (9) for a function f we have assumed that f was an analytic function, i.e. that the Cauchy-Riemann equations apply. It is therefore important to examine to what extent this assumption holds when the value of the function is calculated by a numerical algorithm. In addition, it is also useful to explain how we can convert real functions and operators so that they can take complex numbers as arguments. Fortunately, in the case of Fortran, complex numbers are a standard data type and many intrinsic functions are already defined for them.

Any algorithm can be broken down into a sequence of basic operations. Two main types of operations are relevant when converting a real algorithm to a complex one: relational operators, and arithmetic functions and operators.

Relational operators such as greater than and less than are not defined for complex numbers in Fortran. These operators are usually used in conjunction with if statements in order to redirect the execution thread. The original algorithm and its complexified version must obviously follow the same execution thread, so defining these operators to compare only the real parts of the arguments is the correct approach. Functions that choose one argument, such as max and min, are based on relational operators. Therefore, according to our previous discussion, we should once more choose a number based on its real part alone and let the imaginary part tag along.

Any algorithm that uses conditional statements is likely to be a discontinuous function of its inputs: either the function value itself is discontinuous or the discontinuity is in the first or higher derivatives. When using a finite-difference method, the derivative estimate will be incorrect if the two function evaluations are within h of the discontinuity location. However, if the complex step is used, the resulting derivative estimate will be correct right up to the discontinuity. At the discontinuity a derivative does not exist by definition, but if the function is defined at that point, the approximation will still return a value that depends on how the function is defined there.

Arithmetic functions and operators include addition, multiplication, and trigonometric functions, to name only a few, and most of these have a standard complex definition that is analytic almost everywhere. Many of these definitions are implemented in Fortran; whether they are or not depends on the compiler and libraries that are used, so the user should check the documentation of the particular Fortran compiler in order to determine which intrinsic functions need to be redefined.

Functions of the complex variable are merely extensions of their real counterparts. By requiring that the extended function satisfy the Cauchy-Riemann equations, i.e. analyticity, and that its properties be the same as those of the real function, we can obtain a unique complex function definition. Since these complex functions are analytic, the complex-step approximation is valid and will yield the correct result. Some of the functions, however, have singularities or branch cuts on which they are not analytic. This does not pose a problem since, as previously observed, the complex-step approximation will return a correct one-sided derivative. As for the case of a function that is not defined at a given point, the algorithm will not return a function value, so a derivative cannot be obtained; the derivative estimate will, however, be correct in the neighborhood of that point.
The only standard complex function definition that is non-analytic is the absolute value function, or modulus. When the argument of this function is a complex number, the function returns a positive real number, |z| = sqrt(x² + y²). This definition was not derived by imposing analyticity, and therefore it will not yield the correct derivative when using the complex-step estimate. In order to derive an analytic definition of abs we start by satisfying the Cauchy-Riemann equations. From Equation (5), since we know what the value of the derivative must be, we can write,

    ∂u/∂x = ∂v/∂y = { -1,  x < 0
                    { +1,  x > 0.    (13)

From Equation (6), since ∂v/∂x = 0 on the real axis, we get that ∂u/∂y = 0 on the axis, so the real part of the result must be independent of the imaginary part of the variable. Therefore, the new sign of the imaginary part depends only on the sign of the real part of the complex number, and an analytic absolute value function can be defined as:

    abs(x + iy) = { -x - iy,  x < 0
                  { +x + iy,  x > 0.    (14)

Note that this is not analytic at x = 0, since a derivative does not exist for the real absolute value there. Once again, the complex-step approximation will give the correct value of the first derivative right up to the discontinuity. Later the x > 0 condition will be substituted by x >= 0 so that we not only obtain a function value for x = 0, but are also able to calculate the correct right-hand-side derivative at that point.

3.5 Implementation Procedure

The complex-step method can be implemented in many different programming languages. The following is a general procedure that applies to any language which supports complex arithmetic:

1. Substitute all real type variable declarations with complex declarations. It is not strictly necessary to declare all variables complex, but it is much easier to do so.
2. Define all functions and operators that are not defined for complex arguments, and re-define abs.
3. Change input and output statements if necessary.
4. A complex step can then be added to the desired x, and ∂f/∂x can be estimated using Equation (9).

The complex-step method can be implemented with or without operator overloading; the former results in a more elegant implementation.

Fortran: Fortunately, in Fortran 90, intrinsic functions and operators (including comparison operators) can be overloaded, and this makes it possible to use the operator-overloading type of implementation. This means that if a particular function or operator does not take complex arguments, one can extend it by writing another definition that takes this type of argument. This feature makes it much easier to implement the complex-step method since, once we overload the functions and operators, there is no need to change the function calls or conditional statements. The compiler will automatically determine the argument type and choose the correct function or operation.
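In a language like Python, where complex arithmetic is built in but abs and comparisons are not complexified, the definitions above can be sketched as follows (the function names cabs and cmax are ours, not from any standard library):

```python
def cabs(z):
    # Analytic absolute value, Equation (14): flip the sign of BOTH the
    # real and the imaginary part when the real part is negative.
    z = complex(z)
    return -z if z.real < 0 else z

def cmax(a, b):
    # Choose based on real parts only, so the complexified code follows
    # the same execution thread as the original real-valued code.
    return a if complex(a).real >= complex(b).real else b

# Derivative of |x| at x = -2 via the complex step: should be exactly -1
h = 1e-20
d = cabs(complex(-2.0, h)).imag / h
```

Because no subtraction is involved, the sign of the imaginary part carries the one-sided derivative exactly, even for a step of 10⁻²⁰.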
A module with the necessary definitions and a script that converts the original source code automatically are available on the web [9].

C/C++: Since C++ also supports overloading, the implementation is analogous to the Fortran one. An include file contains the definition of a new variable type called cmplx, as well as all the functions that are necessary for the complex-step method. The inclusion of this file and the replacement of double or float declarations with cmplx is nearly all that is required.

Matlab: As in the case of Fortran, one must redefine functions such as abs, max and min. All differentiable functions are defined for complex variables. Results for the simple example in the previous section were computed using Matlab. The standard transpose operation, represented by an apostrophe ('), poses a problem as it takes the complex conjugate of the elements of the matrix, so one should use the non-conjugate transpose, represented by dot apostrophe (.'), instead.

Java: Complex arithmetic is not standardized at the moment, but there are plans for its implementation. Although function overloading is possible, operator overloading is currently not supported.

Python: When using the Numerical Python module (NumPy), we have access to complex arithmetic, and implementation is as straightforward as in Matlab.

4 Algorithmic Differentiation

Algorithmic differentiation, also known as computational differentiation or automatic differentiation, is a well-known method based on the systematic application of the differentiation chain rule to computer programs. Although this approach is as accurate as an analytic method, it is potentially much easier to implement, since this can be done automatically.

4.1 How it Works

The method is based on the application of the chain rule of differentiation to each operation in the program flow. The derivatives given by the chain rule can be propagated forward (forward mode) or backward (reverse mode). When using the forward mode, for each intermediate variable in the algorithm, a variation due to one input variable is carried through. This is very similar to the way the complex-step method works. To illustrate this, suppose we want to differentiate the multiplication operation, f = x_1 x_2, with respect to x_1.
Table 1 compares how the differentiation would be performed using either algorithmic differentiation or the complex-step method. As we can see, algorithmic differentiation stores the derivative value in a separate set of variables, while the complex step carries the derivative information in the imaginary part of the variables. Note that in this case the complex-step method performs one additional operation, the calculation of the term h_1 h_2, which, for the purposes of calculating the derivative, is superfluous. The complex-step method will nearly always include these superfluous computations, which correspond to the higher-order terms in the Taylor series expansion (10). For very small h, when using finite-precision arithmetic, these terms have no effect on the real part of the result.

    Algorithmic                        Complex-Step
    Δx_1 = 1                           h_1 = (small step)
    Δx_2 = 0                           h_2 = 0
    f = x_1 x_2                        f = (x_1 + i h_1)(x_2 + i h_2)
    Δf = x_1 Δx_2 + x_2 Δx_1           f = x_1 x_2 - h_1 h_2 + i(x_1 h_2 + x_2 h_1)
    df/dx_1 = Δf                       df/dx_1 = Im f / h_1

Table 1: The differentiation of the multiplication operation f = x_1 x_2 with respect to x_1 using algorithmic differentiation and the complex-step derivative approximation.

Although this example involves only one operation, both methods work for an algorithm involving an arbitrary sequence of operations by propagating the variation of one input forward throughout the code. This means that in order to calculate n derivatives, the differentiated code must be executed n times.

The other mode, the reverse mode, has no equivalent in the complex-step method. When using the reverse mode, the code is executed forwards and then backwards to calculate derivatives of one output with respect to n inputs. The total number of operations is independent of n, but the memory requirements may be prohibitive, especially in the case of large iterative algorithms.

There is nothing like an example, so we will now use both the forward and reverse modes to compute the derivatives of the function,

    f(x_1, x_2) = x_1 x_2 + sin(x_1).    (15)

The algorithm that would calculate this function is shown below, together with the derivative calculation using the forward mode:

    t_1 = x_1          Δt_1 = 1
    t_2 = x_2          Δt_2 = 0
    t_3 = t_1 t_2      Δt_3 = Δt_1 t_2 + t_1 Δt_2
    t_4 = sin(t_1)     Δt_4 = Δt_1 cos(t_1)
    t_5 = t_3 + t_4    Δt_5 = Δt_3 + Δt_4

The reverse mode is also based on the chain rule. Let t_j denote all the intermediate variables in an algorithm that calculates f(x_i). We set t_1, ..., t_n to x_1, ..., x_n and the last intermediate variable, t_m, to f. Then the chain rule can be written as,

    ∂t_j/∂t_i = Σ_{k ∈ K_j} (∂t_j/∂t_k)(∂t_k/∂t_i),    (16)

Figure 2: Graph of the algorithm that calculates f(x_1, x_2) = x_1 x_2 + sin(x_1).

which is applied for each intermediate and output variable to obtain its gradient. K_j denotes the set of indices k < j such that the variable t_j in the code depends explicitly on t_k. In order to know in advance what these indices are, we have to form the graph of the algorithm when it is first executed. This provides information on the interdependence of all the intermediate variables. A graph for our sample algorithm is shown in Figure 2.

The sequence of calculations shown below corresponds to the application of the reverse mode to our simple function:

    ∂t_5/∂t_5 = 1
    ∂t_5/∂t_4 = 1
    ∂t_5/∂t_3 = 1
    ∂t_5/∂t_2 = (∂t_5/∂t_3)(∂t_3/∂t_2) = t_1
    ∂t_5/∂t_1 = (∂t_5/∂t_3)(∂t_3/∂t_1) + (∂t_5/∂t_4)(∂t_4/∂t_1) = t_2 + cos(t_1)

The following matrix helps to visualize the sensitivities of all the variables with respect to each other,

    [ 1                                                  ]
    [ 0           1                                      ]
    [ ∂t_3/∂t_1   ∂t_3/∂t_2   1                          ]    (17)
    [ ∂t_4/∂t_1   ∂t_4/∂t_2   0           1              ]
    [ ∂t_5/∂t_1   ∂t_5/∂t_2   ∂t_5/∂t_3   ∂t_5/∂t_4   1  ]

In the case of the example we are considering, we have:

    [ 1                                        ]
    [ 0                1                       ]
    [ t_2              t_1     1               ]    (18)
    [ cos(t_1)         0       0      1        ]
    [ t_2 + cos(t_1)   t_1     1      1     1  ]

The cost of calculating the derivatives of one output with respect to many inputs is thus proportional not to the number of inputs but to the number of outputs. However, since when using the reverse mode we need to store all the intermediate variables as well as the complete graph of the algorithm, the amount of memory that is necessary increases dramatically. In the case of a three-dimensional iterative solver, the cost of using this mode can be prohibitive.

4.2 Existing Tools

There are two main methods for implementing algorithmic differentiation: by source code transformation or by using derived datatypes and operator overloading.
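The derived-datatype approach can be sketched in Python with a minimal forward-mode "dual number" class (an illustration of the idea, not one of the tools discussed below); it reproduces the forward-mode trace for f(x_1, x_2) = x_1 x_2 + sin(x_1):

```python
import math

class Dual:
    """Carries a value and its derivative through every operation (forward mode)."""
    def __init__(self, val, dot=0.0):
        self.val = val   # the function value
        self.dot = dot   # the derivative carried alongside it

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: d(uv) = u' v + u v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def dual_sin(t):
    # Chain rule for sin: d(sin t) = cos(t) t'
    return Dual(math.sin(t.val), math.cos(t.val) * t.dot)

# Seed dot = 1 on x1 to get df/dx1 (one pass per input, as in the text)
x1 = Dual(2.0, 1.0)
x2 = Dual(3.0, 0.0)
f = x1 * x2 + dual_sin(x1)
# f.dot now holds df/dx1 = x2 + cos(x1)
```

Overloading the operators means the function itself is written exactly as in the undifferentiated code, which is the elegance argument made below for the derived-type tools.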

To implement algorithmic differentiation by source transformation, the whole source code must be processed with a parser, and all the derivative calculations are introduced as additional lines of code. The resulting source code is greatly enlarged and becomes practically unreadable. This constitutes an implementation disadvantage, as it becomes impractical to debug this new extended code. One has to work with the original source, and every time it is changed (or if different derivatives are desired) one must rerun the parser before compiling a new version.

In order to use derived types, we need languages that support this feature, such as Fortran 90 or C++. To implement algorithmic differentiation using this feature, a new type of structure is created that contains both the value and its derivative. All the existing operators are then re-defined (overloaded) for the new type. The new operator has exactly the same behavior as before for the value part of the new type, but uses the definition of the derivative of the operator to calculate the derivative portion. This results in a very elegant implementation, since very few changes are required in the original code.

Many tools for automatic algorithmic differentiation of programs in different languages exist. They have been extensively developed and provide the user with great functionality, including the calculation of higher-order derivatives and reverse-mode options.

Fortran: Tools that use the source transformation approach include ADIFOR [11], TAMC, DAFOR, GRESS, Odyssée and PADRE2. The necessary changes to the source code are made automatically. The derived-datatype approach is used in the following tools: AD01, ADOL-F, IMAS and OPTIMA90. Although it is in theory possible to have a script make the necessary changes in the source code automatically, none of these tools have this facility and the changes must be done manually.
C/C++: Established tools for automatic algorithmic differentiation also exist for C/C++ [10]. These include ADIC, an implementation mirroring ADIFOR, and ADOL-C, a free package that uses operator overloading and can operate in the forward or reverse mode and compute higher-order derivatives.

References

[1] Lyness, J. N., and C. B. Moler, Numerical Differentiation of Analytic Functions, SIAM J. Numer. Anal., Vol. 4, 1967.

[2] Lyness, J. N., Numerical Algorithms Based on the Theory of Complex Variables, Proc. ACM 22nd Nat. Conf., Thompson Book Co., Washington DC, 1967.

[3] Squire, W., and G. Trapp, Using Complex Variables to Estimate Derivatives of Real Functions, SIAM Review, Vol. 40, No. 1, March 1998.

[4] Martins, J. R. R. A., I. M. Kroo, and J. J. Alonso, An Automated Method for Sensitivity Analysis Using Complex Variables, Proceedings of the 38th Aerospace Sciences Meeting, AIAA Paper, Reno, NV, January.

[5] Martins, J. R. R. A., and P. Sturdza, The Connection Between the Complex-Step Derivative Approximation and Algorithmic Differentiation, Proceedings of the 39th Aerospace Sciences Meeting, AIAA Paper, Reno, NV, January.

[6] Anderson, W. K., J. C. Newman, D. L. Whitfield, and E. J. Nielsen, Sensitivity Analysis for the Navier-Stokes Equations on Unstructured Meshes Using Complex Variables, AIAA Paper, Proceedings of the 17th Applied Aerodynamics Conference, June.

[7] Newman, J. C., W. K. Anderson, and D. L. Whitfield, Multidisciplinary Sensitivity Derivatives Using Complex Variables, MSSU-COE-ERC-98-08, July.

[8]

[9]

[10] /AD/subject.html

[11] Bischof, C., A. Carle, G. Corliss, A. Griewank, and P. Hovland, ADIFOR: Generating Derivative Codes from Fortran Programs, Scientific Programming, Vol. 1, No. 1, 1992.

5 Analytic Sensitivity Analysis

Analytic methods are the most accurate and efficient methods available for sensitivity analysis. They are, however, more involved than the other methods we have seen so far, since they require knowledge of the governing equations and of the algorithm that is used to solve them. In this section we will learn how to compute analytic sensitivities with direct and adjoint methods. We will start with single-discipline systems and then generalize to the case of multiple systems, such as we would encounter in MDO.

5.1 Single Systems

5.1.1 Notation

    f_i    functions of interest/outputs, i = 1, ..., n_f
    R_k    residuals of the governing equations, k = 1, ..., n_R
    x_j    design/independent/input variables, j = 1, ..., n_x
    y_k    state variables, k = 1, ..., n_R
    ψ_k    adjoint vector, k = 1, ..., n_R

5.1.2 Basic Equations

Consider the residuals of the governing equations of a given system,

    R_k(x_j, y_k(x_j)) = 0,    (19)

where x_j are the independent variables (the design variables) and y_k are the state variables, which depend on the independent ones through the solution of the governing equations. Note that the number of equations must equal the number of unknowns (the state variables). Any perturbation of the variables of this system of equations must result in no variation of the residuals if the governing equations are to remain satisfied. Therefore, we can write,

    δR_k = 0  ⇒  (∂R_k/∂x_j) δx_j + (∂R_k/∂y_k) δy_k = 0,    (20)

since there is a variation due to the change in the design variables as well as a variation due to the change in the state vector. This equation applies to all k = 1, ..., n_R and j = 1, ..., n_x. Dividing the equation by δx_j, we can get it in another form, which involves the total derivative dy_k/dx_j,

    ∂R_k/∂x_j + (∂R_k/∂y_k)(dy_k/dx_j) = 0.    (21)

Our final objective is to obtain the sensitivity of the function of interest, f_i, which can be the objective function or a set of constraints. The function f_i also depends on both x_j and y_k, and hence the total variation of f_i is,

    δf_i = (∂f_i/∂x_j) δx_j + (∂f_i/∂y_k) δy_k.    (22)

Note that δy_k cannot be found explicitly, since y_k varies implicitly with respect to x_j through the solution of the governing equations. We can also divide this equation by δx_j to get the alternate form,

    df_i/dx_j = ∂f_i/∂x_j + (∂f_i/∂y_k)(dy_k/dx_j),    (23)

where j = 1, ..., n_x and k = 1, ..., n_R. The first term on the right-hand side represents the explicit variation of the function of interest with respect to the design variables, due to the presence of these variables in the expression for f_i. The second term is the variation of the function due to the change of the state variables when the governing equations are solved.

Figure 3: Schematic representation of the governing equations (R = 0), design variables or inputs (x_j), state variables (y_k) and the function of interest or output (f_i).

A graphical representation of the system of governing equations is shown in Figure 3, with the design variables x_j as the input and f_i as the output. The two arrows leading to f_i illustrate the fact that f_i depends on x_j not only explicitly but also through the state variables. The following two sections describe two different ways of calculating df_i/dx_j, which is what we need to perform gradient-based optimization.
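To make these relations concrete, here is a minimal scalar sketch in Python; the governing equation R(x, y) = y³ + y - x = 0 and the output f(x, y) = x + y² are invented for illustration:

```python
def solve_state(x):
    # Solve the governing equation R(x, y) = y**3 + y - x = 0 for the
    # state y using Newton's method (dR/dy = 3 y**2 + 1 is never zero).
    y = 0.0
    for _ in range(100):
        y -= (y**3 + y - x) / (3 * y**2 + 1)
    return y

x = 2.0
y = solve_state(x)        # governing equation satisfied: R(x, y) = 0

# Partial derivatives of the residual and of f(x, y) = x + y**2
dR_dy = 3 * y**2 + 1
dR_dx = -1.0
df_dx_partial = 1.0       # explicit dependence of f on x
df_dy = 2 * y

# Differentiated governing equation:  dR/dx + (dR/dy)(dy/dx) = 0
dy_dx = -dR_dx / dR_dy
# Total derivative of the function of interest:
df_dx = df_dx_partial + df_dy * dy_dx
```

A centered finite difference on the solved system gives the same value, which is a handy sanity check when developing analytic sensitivities.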

5.1.3 Direct Sensitivity Equations

The direct approach first calculates the total variation of the state variables, y_k, by solving the differentiated governing equations (21) for dy_k/dx_j, the total derivative of the state variables with respect to a given design variable. This means solving the linear system of equations,

    (∂R_k/∂y_k)(dy_k/dx_j) = -∂R_k/∂x_j.    (24)

The solution procedure usually involves factorizing the square matrix ∂R_k/∂y_k and then back-solving to obtain the solution. Note that we have to choose one x_j each time we back-solve, since the right-hand-side vector is different for each j. We can then substitute the result for dy_k/dx_j into equation (23) to get df_i/dx_j for all i = 1, ..., n_f.

5.1.4 Adjoint Sensitivity Equations

The adjoint approach adjoins the variations of the governing equations (20) to the variation of the function of interest (22),

    δf_i = (∂f_i/∂x_j) δx_j + (∂f_i/∂y_k) δy_k + ψ_k^T [ (∂R_k/∂x_j) δx_j + (∂R_k/∂y_k) δy_k ],    (25)

where ψ_k is the adjoint vector and the bracketed term is δR_k. The values of the components of this vector are arbitrary, because we only consider variations for which the governing equations are satisfied, i.e., δR_k = 0. If we collect the terms multiplying each of the variations, we obtain,

    δf_i = (∂f_i/∂x_j + ψ_k^T ∂R_k/∂x_j) δx_j + (∂f_i/∂y_k + ψ_k^T ∂R_k/∂y_k) δy_k.    (26)

Since ψ_k is arbitrary, we can choose its values to be those for which the term multiplying δy_k is zero, i.e., we solve,

    ψ_k^T (∂R_k/∂y_k) = -∂f_i/∂y_k   ⇔   [∂R_k/∂y_k]^T ψ_k = -[∂f_i/∂y_k]^T    (27)

for the adjoint vector ψ_k. The adjoint vector is the same for any x_j, but it is different for each f_i. The term in equation (26) that multiplies δx_j then corresponds to the total derivative of f_i with respect to x_j,

    df_i/dx_j = ∂f_i/∂x_j + ψ_k^T (∂R_k/∂x_j),    (28)

which are the sensitivities we want to calculate.

5.1.5 Direct vs. Adjoint

In the previous two sections, the direct and adjoint sensitivity equations were both derived independently from the same two equations (20, 22). We will now attempt to unify the derivation of these two methods by expressing them in the same equation. This will help us gain a better understanding of how these two approaches are related.

If we want to solve for the total sensitivity of the state variables with respect to the design variables, we have to solve equation (24). Assuming that we have the sensitivity matrix of the residuals with respect to the state variables, ∂R_k/∂y_k, and that it is invertible, the solution is,

    dy_k/dx_j = -[∂R_k/∂y_k]^{-1} (∂R_k/∂x_j).    (29)

Note that the matrix of partial derivatives of the residuals with respect to the state variables, ∂R_k/∂y_k, is square, since the number of governing equations must equal the number of state variables. Substituting equation (29) into the expression for the total derivative of the function of interest (23), we get,

    df_i/dx_j = ∂f_i/∂x_j - (∂f_i/∂y_k) [∂R_k/∂y_k]^{-1} (∂R_k/∂x_j),    (30)

where the last two factors together form dy_k/dx_j. Both the direct and adjoint methods can be seen in this equation. Using the direct method, we would start by solving for this under-braced term, i.e. the solution of,

    (∂R_k/∂y_k)(dy_k/dx_j) = -∂R_k/∂x_j,    (31)

which is the total sensitivity of the state variables. Note that each set of these total sensitivities is valid for only one design variable, x_j. Once we have these sensitivities, we can use this result in equation (30), i.e.,

    df_i/dx_j = ∂f_i/∂x_j + (∂f_i/∂y_k)(dy_k/dx_j),    (32)

to get the desired sensitivities. To use the adjoint method, we instead define the adjoint vector as the first two factors of the last term in equation (30),

    df_i/dx_j = ∂f_i/∂x_j - [ (∂f_i/∂y_k) [∂R_k/∂y_k]^{-1} ] (∂R_k/∂x_j),    (33)

where the bracketed term is ψ_k^T.

The adjoint vector is then the solution of the system,

    \left[\frac{\partial R_k}{\partial y_k}\right]^T \psi_k = -\left[\frac{\partial f_i}{\partial y_k}\right]^T,    (34)

which we have to solve once for each f_i. The adjoint vector is the same for any chosen design variable, since j does not appear in this equation. We can substitute the resulting adjoint vector into equation (33) to get,

    \frac{d f_i}{d x_j} = \frac{\partial f_i}{\partial x_j} + \psi_k^T \frac{\partial R_k}{\partial x_j}.    (35)

Unlike the direct method, where each dy_k/dx_j can be used for any function f_i, we must compute a different adjoint vector \psi_k for each function of interest.

A comparison of the cost of computing sensitivities with the direct versus the adjoint method is shown in the following table.

    Step             Direct       Adjoint
    Factorization    same         same
    Back-solve       n_x times    n_f times
    Multiplication   same         same

With either method, we must factorize the same matrix, \partial R_k / \partial y_k. The difference in cost comes from the back-solve step used to solve equations (31) and (34), respectively. The direct method requires one back-solve for each design variable (i.e., for each j), while the adjoint method requires one for each function of interest (i.e., for each i). The multiplication step is simply the evaluation of the final sensitivity expressed in equations (32) and (35), respectively; when computing the same set of sensitivities, its cost is the same for both methods.

The final conclusion is the established rule: if the number of design variables (inputs) is greater than the number of functions of interest (outputs), the adjoint method is more efficient than the direct method, and vice-versa. If the number of outputs is similar to the number of inputs, either method will be costly.

In this discussion, we have assumed that the governing equations have been discretized. The same kind of procedure can be applied to continuous governing equations. The principle is the same, but the notation would have to be more general. In the end, the equations have to be discretized in order to be solved numerically.
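The back-solve counts in the table can be illustrated with a sketch (all matrices below are random placeholder data): with many design variables and few functions of interest, the direct method must solve against n_x right-hand sides while the adjoint method needs only n_f.

```python
import numpy as np

# Illustrative shapes only: many inputs, few outputs, so the adjoint wins.
rng = np.random.default_rng(1)
n_y, n_x, n_f = 50, 200, 2
dR_dy = rng.standard_normal((n_y, n_y)) + 10.0 * np.eye(n_y)
dR_dx = rng.standard_normal((n_y, n_x))
df_dy = rng.standard_normal((n_f, n_y))
df_dx = rng.standard_normal((n_f, n_x))

# Direct, eq. (31): back-solve against n_x = 200 right-hand sides
# (one column per design variable; solve() factorizes dR/dy once internally)
dy_dx = np.linalg.solve(dR_dy, -dR_dx)
g_direct = df_dx + df_dy @ dy_dx          # eq. (32)

# Adjoint, eq. (34): back-solve against only n_f = 2 right-hand sides
# (one column per function of interest)
psi = np.linalg.solve(dR_dy.T, -df_dy.T)
g_adjoint = df_dx + psi.T @ dR_dx         # eq. (35)

assert np.allclose(g_direct, g_adjoint)
```

Both methods produce the same n_f-by-n_x sensitivity matrix; only the number of right-hand sides against which the factorized matrix is back-solved differs.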
Figure 4 shows the two ways of arriving at the discrete sensitivity equations. We can either differentiate the continuous governing equations first and then discretize them, or discretize the governing equations first and differentiate them in a second step. The resulting sensitivity equations should be equivalent, but they are not necessarily the same. Differentiating the continuous governing equations first is usually more involved. In addition, applying boundary conditions to the differentiated equations can be non-intuitive, because the boundary conditions of the continuous sensitivity equations are non-physical.

[Figure 4: The two ways of obtaining the discretized sensitivity equations: the continuous governing equations can be differentiated and then discretized, or discretized and then differentiated.]

5.2 Example: Structural Sensitivity Analysis

The discretized governing equations for a finite-element structural model are,

    R_k = K_{kk'} u_{k'} - f_k = 0,    (36)

where K_{kk'} is the stiffness matrix, u_k is the vector of displacements (the state) and f_k is the vector of applied forces (not to be confused with the function of interest from the previous section!). We are interested in the sensitivities of the stresses, which are related to the displacements by,

    \sigma_i = S_{ik} u_k.    (37)

We take the design variables to be the cross-sectional areas of the elements, A_j. We now look at the terms needed to use the generalized total sensitivity equation (30). The matrix of sensitivities of the governing equations with respect to the state variables is simply the stiffness matrix, i.e.,

    \frac{\partial R_k}{\partial u_{k'}} = \frac{\partial (K_{kl} u_l - f_k)}{\partial u_{k'}} = K_{kk'}.    (38)

Next, consider the sensitivity of the residuals with respect to the design variables (the cross-sectional areas, in our case). Neither the displacements nor the applied forces vary explicitly with the element sizes. The only term that depends directly on A_j is the stiffness matrix, so we get,

    \frac{\partial R_k}{\partial x_j} = \frac{\partial (K_{kl} u_l - f_k)}{\partial A_j} = \frac{\partial K_{kl}}{\partial A_j} u_l.    (39)

The partial derivative of the stresses with respect to the displacements is simply given by the matrix in equation (37), i.e.,

    \frac{\partial f_i}{\partial y_k} = \frac{\partial \sigma_i}{\partial u_k} = S_{ik}.    (40)

Finally, the explicit variation of the stresses with respect to the cross-sectional areas is zero, since the stresses depend only on the displacement field,

    \frac{\partial f_i}{\partial x_j} = \frac{\partial \sigma_i}{\partial A_j} = 0.    (41)

Substituting these terms into the generalized total sensitivity equation (30), we get,

    \frac{d \sigma_i}{d A_j} = -S_{ik} \left[K^{-1}\right]_{kk'} \frac{\partial K_{k'l}}{\partial A_j} u_l.    (42)

Referring to the theory presented previously, if we were to use the direct method, we would solve,

    K_{kk'} \frac{d u_{k'}}{d A_j} = -\frac{\partial K_{kl}}{\partial A_j} u_l    (43)

and then substitute the result into,

    \frac{d \sigma_i}{d A_j} = \frac{\partial \sigma_i}{\partial A_j} + \frac{\partial \sigma_i}{\partial u_k} \frac{d u_k}{d A_j}    (44)

to calculate the desired sensitivities. The adjoint method could also be used, in which case we would solve equation (34) for the structural case,

    K_{k'k} \psi_{k'} = -S_{ik}.    (45)

Then we would substitute the adjoint vector into,

    \frac{d \sigma_i}{d A_j} = \frac{\partial \sigma_i}{\partial A_j} + \psi_k \frac{\partial K_{kl}}{\partial A_j} u_l    (46)

to calculate the desired sensitivities.
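A minimal numerical sketch of this example (the material properties, loads and element layout are assumed, not taken from the notes): a one-dimensional three-bar chain fixed at both ends, for which K is linear in the areas so that each dK/dA_j is a constant element matrix; the adjoint sensitivities of the first bar stress are checked against central differences.

```python
import numpy as np

# Assumed toy structure: three bars in series, fixed at both ends,
# two free nodes; K(A) is linear in the cross-sectional areas A_j.
E, L = 70e9, 1.0                       # assumed modulus and element length
A = np.array([2e-4, 1e-4, 1e-4])       # design variables: bar areas
f = np.array([1e4, 0.0])               # loads on the two free nodes

def K_of(A):
    """Assemble the 2x2 reduced stiffness matrix, eq. (36)."""
    k = E * A / L
    return np.array([[k[0] + k[1], -k[1]],
                     [-k[1],       k[1] + k[2]]])

# dK/dA_j = (E/L) * B_j, since K is linear in the areas (eq. 39)
B = [np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[1.0, -1.0], [-1.0, 1.0]]),
     np.array([[0.0, 0.0], [0.0, 1.0]])]

S = (E / L) * np.array([[1.0, 0.0],    # sigma = S u, eq. (37)
                        [-1.0, 1.0],
                        [0.0, -1.0]])

u = np.linalg.solve(K_of(A), f)        # displacements (state variables)

# Adjoint for the first stress: K^T psi = -[d(sigma_0)/du]^T, eq. (45)
psi = np.linalg.solve(K_of(A).T, -S[0])
dsig_dA = np.array([psi @ ((E / L) * B[j] @ u) for j in range(3)])  # eq. (46)

# Central-difference check of the adjoint sensitivities
h = 1e-9
for j in range(3):
    dA = np.zeros(3); dA[j] = h
    fd = ((S @ np.linalg.solve(K_of(A + dA), f))[0]
          - (S @ np.linalg.solve(K_of(A - dA), f))[0]) / (2 * h)
    assert np.isclose(dsig_dA[j], fd, rtol=1e-4)
```

Because the structure is statically indeterminate, the stress in the first bar depends on all three areas, and a single adjoint solve gives its sensitivity to every area at once.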

[Figure 5: Schematic representation of the aero-structural governing equations, showing the coupling between the aerodynamic analysis, R_A = 0 (with flow state w), and the structural analysis, R_S = 0 (with displacements u), for design variables x and applied forces f.]

5.3 Multidisciplinary Sensitivity Analysis

The analysis done in the previous section for single-discipline systems can be generalized to multiple, coupled systems. The same total sensitivity equation (30) applies, but now the governing equations and state variables of all disciplines are included in R and y, respectively. To illustrate this, consider a coupled aero-structural system, where both an aerodynamic (A) and a structural (S) analysis are involved, and the state variables are the flow state, w, and the structural displacements, u. Figure 5 shows how such a system is coupled. Equation (31) can then be written for this coupled system as,

    \begin{bmatrix} \frac{\partial R_A}{\partial w} & \frac{\partial R_A}{\partial u} \\ \frac{\partial R_S}{\partial w} & \frac{\partial R_S}{\partial u} \end{bmatrix} \begin{bmatrix} \frac{d w}{d x_j} \\ \frac{d u}{d x_j} \end{bmatrix} = -\begin{bmatrix} \frac{\partial R_A}{\partial x_j} \\ \frac{\partial R_S}{\partial x_j} \end{bmatrix}    (47)

In addition to the diagonal terms of the matrix, which would appear when solving each single system, we now have cross terms expressing the sensitivity of one system to the other's state variables. These equations are sometimes called the Global Sensitivity Equations (GSE) [4]. Similarly, we can write a coupled adjoint based on equation (34),

    \begin{bmatrix} \frac{\partial R_A}{\partial w} & \frac{\partial R_A}{\partial u} \\ \frac{\partial R_S}{\partial w} & \frac{\partial R_S}{\partial u} \end{bmatrix}^T \begin{bmatrix} \psi_A \\ \psi_S \end{bmatrix} = -\begin{bmatrix} \left[\frac{\partial f_i}{\partial w}\right]^T \\ \left[\frac{\partial f_i}{\partial u}\right]^T \end{bmatrix}    (48)

In addition to the GSE expressed in equation (47), Sobieski [4] also introduced an alternative method, which he called GSE2, for calculating total sensitivities. Instead of looking at the variation of the residuals, we look at the variation in the states, y_k(x, y_{k'}(x)), where k' \neq k,

    \delta y_k = \frac{\partial y_k}{\partial x_j}\,\delta x_j + \frac{\partial y_k}{\partial y_{k'}} \frac{d y_{k'}}{d x_j}\,\delta x_j.    (49)

Dividing this by \delta x_j, we get,

    \frac{d y_k}{d x_j} = \frac{\partial y_k}{\partial x_j} + \frac{\partial y_k}{\partial y_{k'}} \frac{d y_{k'}}{d x_j}.    (50)

For all k, this can be rearranged as,

    \left(\delta_{kk'} - \frac{\partial y_k}{\partial y_{k'}}\right) \frac{d y_{k'}}{d x_j} = \frac{\partial y_k}{\partial x_j}.    (51)

Writing this in matrix form for the two-discipline example, we get,

    \begin{bmatrix} I & -\frac{\partial w}{\partial u} \\ -\frac{\partial u}{\partial w} & I \end{bmatrix} \begin{bmatrix} \frac{d w}{d x_j} \\ \frac{d u}{d x_j} \end{bmatrix} = \begin{bmatrix} \frac{\partial w}{\partial x_j} \\ \frac{\partial u}{\partial x_j} \end{bmatrix}.    (52)

One advantage of this formulation is that the size of the matrix to be factorized might be reduced, because the state variables of one system do not always depend on all the state variables of the other system. For example, in the case of the coupled aero-structural system, only the surface aerodynamic pressures affect the structural analysis, so we could replace the full flow state, w, with a much smaller vector of surface pressures. Similarly, we could use only the surface structural displacements rather than all of them, since only these influence the aerodynamics. An adjoint version of this alternative can also be derived; the system to be solved is, in this case,

    \begin{bmatrix} I & -\frac{\partial w}{\partial u} \\ -\frac{\partial u}{\partial w} & I \end{bmatrix}^T \begin{bmatrix} \psi_A \\ \psi_S \end{bmatrix} = \begin{bmatrix} \left[\frac{\partial f_i}{\partial w}\right]^T \\ \left[\frac{\partial f_i}{\partial u}\right]^T \end{bmatrix}    (53)

Since factorizing the full residual sensitivity matrix is in many cases impractical, the method can be slightly modified as follows. Equation (48) can be re-written as,

    \begin{bmatrix} \left[\frac{\partial R_A}{\partial w}\right]^T & \left[\frac{\partial R_S}{\partial w}\right]^T \\ \left[\frac{\partial R_A}{\partial u}\right]^T & \left[\frac{\partial R_S}{\partial u}\right]^T \end{bmatrix} \begin{bmatrix} \psi_A \\ \psi_S \end{bmatrix} = -\begin{bmatrix} \left[\frac{\partial f_i}{\partial w}\right]^T \\ \left[\frac{\partial f_i}{\partial u}\right]^T \end{bmatrix}    (54)

Since the factorization of the matrix in equation (54) would be extremely costly, we set up an iterative procedure, much like the one used for our aero-structural solution, where the adjoint vectors are lagged and two different sets of equations are solved separately. For the calculation of the adjoint vector of one discipline, we use the adjoint vector of the other discipline from the previous iteration, i.e., we solve,

    \left[\frac{\partial R_A}{\partial w}\right]^T \psi_A = -\left[\frac{\partial f_i}{\partial w}\right]^T - \left[\frac{\partial R_S}{\partial w}\right]^T \tilde{\psi}_S,    (55)

    \left[\frac{\partial R_S}{\partial u}\right]^T \psi_S = -\left[\frac{\partial f_i}{\partial u}\right]^T - \left[\frac{\partial R_A}{\partial u}\right]^T \tilde{\psi}_A,    (56)

where \tilde{\psi}_A and \tilde{\psi}_S are the lagged adjoint vectors. The final result, after convergence, is the same as that of equation (48). We will call this the lagged coupled adjoint method for computing sensitivities of coupled

systems. Note that these equations look like the single-discipline adjoint equations for aerodynamics and structures, except that a forcing term is subtracted from the right-hand side. Once the solutions for both adjoint vectors have converged, we can compute the final sensitivities of a given cost function using,

    \frac{d f_i}{d x_j} = \frac{\partial f_i}{\partial x_j} + \psi_A^T \frac{\partial R_A}{\partial x_j} + \psi_S^T \frac{\partial R_S}{\partial x_j}.    (57)

References

[1] Adelman, H. M. and Haftka, R. T., "Sensitivity Analysis of Discrete Structural Systems," AIAA Journal, Vol. 24, No. 5, May.

[2] Barthelemy, B., Haftka, R. T. and Cohen, G. A., "Physically Based Sensitivity Derivatives for Structural Analysis Programs," Computational Mechanics, Springer-Verlag.

[3] Belegundu, A. D. and Arora, J. S., "Sensitivity Interpretation of Adjoint Variables in Optimal Design," Computer Methods in Applied Mechanics and Engineering, Vol. 48.

[4] Sobieszczanski-Sobieski, J., "Sensitivity of Complex, Internally Coupled Systems," AIAA Journal, Vol. 28, No. 1, January.

[5] Hajela, P., Bloebaum, C. L. and Sobieszczanski-Sobieski, J., "Application of Global Sensitivity Equations in Multidisciplinary Aircraft Synthesis," Journal of Aircraft, Vol. 27, No. 12, December.
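As a closing sketch, the lagged coupled adjoint of equations (55)-(56) can be demonstrated on two made-up linear "disciplines" with weak off-diagonal coupling (all matrices below are illustrative assumptions); the iterates converge to the solution of the monolithic coupled adjoint system (48).

```python
import numpy as np

# Two assumed linear disciplines with strong diagonal blocks and weak
# coupling blocks, so the lagged (block-Jacobi-style) iteration converges.
rng = np.random.default_rng(2)
n_w, n_u = 5, 4
Aww = rng.standard_normal((n_w, n_w)) + 8.0 * np.eye(n_w)  # dR_A/dw
Awu = 0.3 * rng.standard_normal((n_w, n_u))                # dR_A/du (coupling)
Asw = 0.3 * rng.standard_normal((n_u, n_w))                # dR_S/dw (coupling)
Ass = rng.standard_normal((n_u, n_u)) + 8.0 * np.eye(n_u)  # dR_S/du
cw = rng.standard_normal(n_w)                              # df_i/dw
cu = rng.standard_normal(n_u)                              # df_i/du

# Reference: solve the monolithic coupled adjoint system (48) directly
M = np.block([[Aww, Awu], [Asw, Ass]])
ref = np.linalg.solve(M.T, -np.concatenate([cw, cu]))

# Lagged iteration: each discipline solves its own adjoint, using the other
# discipline's adjoint vector from the previous iteration as a forcing term
psi_A, psi_S = np.zeros(n_w), np.zeros(n_u)
for _ in range(200):
    psi_A = np.linalg.solve(Aww.T, -cw - Asw.T @ psi_S)    # eq. (55)
    psi_S = np.linalg.solve(Ass.T, -cu - Awu.T @ psi_A)    # eq. (56)

assert np.allclose(np.concatenate([psi_A, psi_S]), ref)
```

The strong diagonal blocks make the fixed point contractive here; in practice, each solve inside the loop would be a single-discipline adjoint solution rather than a dense factorization.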


More information

Accurate numerical orbit propagation using Polynomial Algebra Computational Engine PACE. ISSFD 2015 congress, Germany. Dated: September 14, 2015

Accurate numerical orbit propagation using Polynomial Algebra Computational Engine PACE. ISSFD 2015 congress, Germany. Dated: September 14, 2015 Accurate numerical orbit propagation using Polynomial Algebra Computational Engine PACE Emmanuel Bignon (), Pierre mercier (), Vincent Azzopardi (), Romain Pinède () ISSFD 205 congress, Germany Dated:

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Elements of Floating-point Arithmetic

Elements of Floating-point Arithmetic Elements of Floating-point Arithmetic Sanzheng Qiao Department of Computing and Software McMaster University July, 2012 Outline 1 Floating-point Numbers Representations IEEE Floating-point Standards Underflow

More information

A computational architecture for coupling heterogeneous numerical models and computing coupled derivatives

A computational architecture for coupling heterogeneous numerical models and computing coupled derivatives This is a preprint of the following article, which is available from http://mdolab.engin.umich.edu John T. Hwang and J. R. R. A. Martins. A computational architecture for coupling heterogeneous numerical

More information

B Elements of Complex Analysis

B Elements of Complex Analysis Fourier Transform Methods in Finance By Umberto Cherubini Giovanni Della Lunga Sabrina Mulinacci Pietro Rossi Copyright 21 John Wiley & Sons Ltd B Elements of Complex Analysis B.1 COMPLEX NUMBERS The purpose

More information

Improving the Verification and Validation Process

Improving the Verification and Validation Process Improving the Verification and Validation Process Mike Fagan Rice University Dave Higdon Los Alamos National Laboratory Notes to Audience I will use the much shorter VnV abbreviation, rather than repeat

More information

Inverse Kinematics. Mike Bailey.

Inverse Kinematics. Mike Bailey. Inverse Kinematics This work is licensed under a Creative Commons Attribution-NonCommercial- NoDerivatives 4.0 International License Mike Bailey mjb@cs.oregonstate.edu inversekinematics.pptx Inverse Kinematics

More information

STCE. Adjoint Code Design Patterns. Uwe Naumann. RWTH Aachen University, Germany. QuanTech Conference, London, April 2016

STCE. Adjoint Code Design Patterns. Uwe Naumann. RWTH Aachen University, Germany. QuanTech Conference, London, April 2016 Adjoint Code Design Patterns Uwe Naumann RWTH Aachen University, Germany QuanTech Conference, London, April 2016 Outline Why Adjoints? What Are Adjoints? Software Tool Support: dco/c++ Adjoint Code Design

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Notes on Complex Analysis

Notes on Complex Analysis Michael Papadimitrakis Notes on Complex Analysis Department of Mathematics University of Crete Contents The complex plane.. The complex plane...................................2 Argument and polar representation.........................

More information

NUMERICAL SOLUTION OF ODE IVPs. Overview

NUMERICAL SOLUTION OF ODE IVPs. Overview NUMERICAL SOLUTION OF ODE IVPs 1 Quick review of direction fields Overview 2 A reminder about and 3 Important test: Is the ODE initial value problem? 4 Fundamental concepts: Euler s Method 5 Fundamental

More information

PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9.

PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9. PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9.4) We will consider two cases 1. f(x) = 0 1-dimensional 2. f(x) = 0 d-dimensional

More information

A Brief Introduction to Numerical Methods for Differential Equations

A Brief Introduction to Numerical Methods for Differential Equations A Brief Introduction to Numerical Methods for Differential Equations January 10, 2011 This tutorial introduces some basic numerical computation techniques that are useful for the simulation and analysis

More information

Final year project. Methods for solving differential equations

Final year project. Methods for solving differential equations Final year project Methods for solving differential equations Author: Tej Shah Supervisor: Dr. Milan Mihajlovic Date: 5 th May 2010 School of Computer Science: BSc. in Computer Science and Mathematics

More information

An Efficient Low Memory Implicit DG Algorithm for Time Dependent Problems

An Efficient Low Memory Implicit DG Algorithm for Time Dependent Problems An Efficient Low Memory Implicit DG Algorithm for Time Dependent Problems P.-O. Persson and J. Peraire Massachusetts Institute of Technology 2006 AIAA Aerospace Sciences Meeting, Reno, Nevada January 9,

More information

Matrix Assembly in FEA

Matrix Assembly in FEA Matrix Assembly in FEA 1 In Chapter 2, we spoke about how the global matrix equations are assembled in the finite element method. We now want to revisit that discussion and add some details. For example,

More information

Notes for Expansions/Series and Differential Equations

Notes for Expansions/Series and Differential Equations Notes for Expansions/Series and Differential Equations In the last discussion, we considered perturbation methods for constructing solutions/roots of algebraic equations. Three types of problems were illustrated

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models Mixed effects models - Part IV Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

Two-Point Boundary Value Problem and Optimal Feedback Control based on Differential Algebra

Two-Point Boundary Value Problem and Optimal Feedback Control based on Differential Algebra Two-Point Boundary Value Problem and Optimal Feedback Control based on Differential Algebra Politecnico di Milano Department of Aerospace Engineering Milan, Italy Taylor Methods and Computer Assisted Proofs

More information

Waves in a Shock Tube

Waves in a Shock Tube Waves in a Shock Tube Ivan Christov c February 5, 005 Abstract. This paper discusses linear-wave solutions and simple-wave solutions to the Navier Stokes equations for an inviscid and compressible fluid

More information

Automatic Differentiation Algorithms and Data Structures. Chen Fang PhD Candidate Joined: January 2013 FuRSST Inaugural Annual Meeting May 8 th, 2014

Automatic Differentiation Algorithms and Data Structures. Chen Fang PhD Candidate Joined: January 2013 FuRSST Inaugural Annual Meeting May 8 th, 2014 Automatic Differentiation Algorithms and Data Structures Chen Fang PhD Candidate Joined: January 2013 FuRSST Inaugural Annual Meeting May 8 th, 2014 1 About 2 3 of the code for physics in simulators computes

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

I. Numerical Computing

I. Numerical Computing I. Numerical Computing A. Lectures 1-3: Foundations of Numerical Computing Lecture 1 Intro to numerical computing Understand difference and pros/cons of analytical versus numerical solutions Lecture 2

More information

Modeling and Experimentation: Compound Pendulum

Modeling and Experimentation: Compound Pendulum Modeling and Experimentation: Compound Pendulum Prof. R.G. Longoria Department of Mechanical Engineering The University of Texas at Austin Fall 2014 Overview This lab focuses on developing a mathematical

More information

Ordinary differential equations - Initial value problems

Ordinary differential equations - Initial value problems Education has produced a vast population able to read but unable to distinguish what is worth reading. G.M. TREVELYAN Chapter 6 Ordinary differential equations - Initial value problems In this chapter

More information

Derivatives for Time-Spectral Computational Fluid Dynamics using an Automatic Differentiation Adjoint

Derivatives for Time-Spectral Computational Fluid Dynamics using an Automatic Differentiation Adjoint Derivatives for Time-Spectral Computational Fluid Dynamics using an Automatic Differentiation Adjoint Charles A. Mader University of Toronto Institute for Aerospace Studies Toronto, Ontario, Canada Joaquim

More information

Automatic Differentiation and Neural Networks

Automatic Differentiation and Neural Networks Statistical Machine Learning Notes 7 Automatic Differentiation and Neural Networks Instructor: Justin Domke 1 Introduction The name neural network is sometimes used to refer to many things (e.g. Hopfield

More information

Multidisciplinary Design Optimization of Aerospace Systems

Multidisciplinary Design Optimization of Aerospace Systems Please cite this document as: Martins, Joaquim R. R. A., Multidisciplinary Design Optimization of Aerospace Systems, Advances and Trends in Optimization with Engineering Applications, SIAM, 2017, pp. 249

More information

FINITE DIFFERENCES. Lecture 1: (a) Operators (b) Forward Differences and their calculations. (c) Backward Differences and their calculations.

FINITE DIFFERENCES. Lecture 1: (a) Operators (b) Forward Differences and their calculations. (c) Backward Differences and their calculations. FINITE DIFFERENCES Lecture 1: (a) Operators (b) Forward Differences and their calculations. (c) Backward Differences and their calculations. 1. Introduction When a function is known explicitly, it is easy

More information

Finite difference method for elliptic problems: I

Finite difference method for elliptic problems: I Finite difference method for elliptic problems: I Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in/~praveen

More information

Mathematical Foundations

Mathematical Foundations Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for

More information

Number Systems III MA1S1. Tristan McLoughlin. December 4, 2013

Number Systems III MA1S1. Tristan McLoughlin. December 4, 2013 Number Systems III MA1S1 Tristan McLoughlin December 4, 2013 http://en.wikipedia.org/wiki/binary numeral system http://accu.org/index.php/articles/1558 http://www.binaryconvert.com http://en.wikipedia.org/wiki/ascii

More information