
Sensitivity Analysis
AA222 - Multidisciplinary Design Optimization
Joaquim R. R. A. Martins
Durand 165, email: joaquim.martins@stanford.edu

1 Introduction

Sensitivity analysis consists of computing derivatives of one or more quantities (outputs) with respect to one or several independent variables (inputs). Although there are various uses for sensitivity information, our main motivation is its use in gradient-based optimization. Since the calculation of gradients is often the most costly step in the optimization cycle, it is extremely important to use methods that compute sensitivities both accurately and efficiently.

There are several different methods for sensitivity analysis, but since none of them is the clear choice for all cases, it is important to understand their relative merits. When choosing a method for computing sensitivities, one is mainly concerned with its accuracy and computational expense. In certain cases it is also important that the method be easy to implement: a method which is efficient but difficult to implement may never be finalized, while an easier, though computationally more costly, method would actually give some result. Factors that affect the choice of method include the ratio of the number of outputs to the number of inputs, the importance of computational efficiency, and the degree of laziness of the programmer.

Consider a general constrained optimization problem of the form:

    minimize    f(x_i)
    w.r.t.      x_i,  i = 1, 2, ..., n
    subject to  g_j(x_i) >= 0,  j = 1, 2, ..., m

where f is a nonlinear function of the n design variables x_i and the g_j are the m nonlinear inequality constraints we have to satisfy. In order to solve this problem, a gradient-based optimization algorithm usually requires:

- The sensitivities of the objective function, ∂f/∂x_i (an n × 1 vector).
- The sensitivities of all the active constraints at the current design point, ∂g_j/∂x_i (an m × n matrix).

2 Finite-Differences

Finite-difference formulae are very commonly used to estimate sensitivities. Although these approximations are neither particularly accurate nor efficient, this method's biggest advantage resides in the fact that it is extremely easy to implement.

All the finite-differencing formulae can be derived by truncating a Taylor series expanded about a given point x. A common estimate for the first derivative is the forward difference, which can be derived from the expansion of f(x + h),

    f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + ...          (1)

Solving for f'(x) we get the finite-difference formula,

    f'(x) = [f(x + h) − f(x)] / h + O(h),                                       (2)

where h is called the finite-difference interval. The truncation error is O(h), and hence this is a first-order approximation. For a second-order estimate we can use the expansion of f(x − h),

    f(x − h) = f(x) − h f'(x) + (h²/2!) f''(x) − (h³/3!) f'''(x) + ...,         (3)

and subtract it from the expansion given in Equation (1). The resulting equation can then be solved for the derivative of f to obtain the central-difference formula,

    f'(x) = [f(x + h) − f(x − h)] / (2h) + O(h²).                               (4)

When estimating sensitivities using finite-difference formulae we are faced with the step-size dilemma, i.e. the desire to choose a small step size to minimize truncation error while avoiding the use of a step so small that errors due to subtractive cancellation become dominant.

The cost of calculating sensitivities with finite differences is proportional to the number of design variables, since f must be calculated for each perturbation of x_i. This means that if we use forward differences, for example, the cost would be n + 1 times the cost of calculating f.
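To make Equations (2) and (4) concrete, here is a short Python sketch (an illustration added to these notes, not part of the original text; the test function and step sizes are arbitrary choices):

    import numpy as np

    def f(x):
        # Arbitrary smooth test function with a known exact derivative.
        return np.exp(x) * np.sin(x)

    def df_exact(x):
        return np.exp(x) * (np.sin(x) + np.cos(x))

    def forward_diff(f, x, h):
        # First-order forward difference, Equation (2)
        return (f(x + h) - f(x)) / h

    def central_diff(f, x, h):
        # Second-order central difference, Equation (4)
        return (f(x + h) - f(x - h)) / (2 * h)

    x = 1.5
    for h in [1e-2, 1e-4, 1e-8, 1e-12]:
        err_fwd = abs(forward_diff(f, x, h) - df_exact(x))
        err_cen = abs(central_diff(f, x, h) - df_exact(x))
        print(f"h = {h:8.0e}   forward error = {err_fwd:.2e}   central error = {err_cen:.2e}")

Running this shows the step-size dilemma directly: the errors first decrease with h at the expected rates and then grow again once subtractive cancellation dominates.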

3 The Complex-Step Derivative Approximation

3.1 Background

The use of complex variables to develop estimates of derivatives originated with the work of Lyness and Moler [1] and Lyness [2]. Their work produced several methods that made use of complex variables, including a reliable method for calculating the n-th derivative of an analytic function. However, only recently has some of this theory been rediscovered by Squire and Trapp [3] and used to obtain a very simple expression for estimating the first derivative. This estimate is suitable for use in modern numerical computing and has been shown to be very accurate, extremely robust and surprisingly easy to implement, while retaining a reasonable computational cost.

3.2 Basic Theory

We will now see that a very simple formula for the first derivative of real functions can be obtained using complex calculus. Consider a function, f = u + iv, of the complex variable, z = x + iy. If f is analytic the Cauchy-Riemann equations apply, i.e.,

    ∂u/∂x = ∂v/∂y                                                               (5)
    ∂u/∂y = −∂v/∂x.                                                             (6)

These equations establish the exact relationship between the real and imaginary parts of the function. We can use the definition of a derivative in the right-hand side of the first Cauchy-Riemann equation (5) to obtain,

    ∂u/∂x = lim_{h→0} [v(x + i(y + h)) − v(x + iy)] / h,                        (7)

where h is a small real number. Since the functions that we are interested in are real functions of a real variable, we restrict ourselves to the real axis, in which case y = 0, u(x) = f(x) and v(x) = 0. Equation (7) can then be re-written as,

    ∂f/∂x = lim_{h→0} Im[f(x + ih)] / h.                                        (8)

For a small discrete h, this can be approximated by,

    ∂f/∂x ≈ Im[f(x + ih)] / h.                                                  (9)

We will call this the complex-step derivative approximation. This estimate is not subject to subtractive cancellation error, since it does not involve a difference operation. This constitutes a tremendous advantage over the finite-difference approaches expressed in Equations (2) and (4).

In order to determine the error involved in this approximation, we will show an alternative derivation based on a Taylor series expansion. Rather than using a real step h, we now use a pure imaginary step, ih. If f is a real function in real variables and it is also analytic, we can expand it in a Taylor series about a real point x as follows,

    f(x + ih) = f(x) + ih f'(x) − (h²/2!) f''(x) − (ih³/3!) f'''(x) + ...       (10)

Taking the imaginary parts of both sides of Equation (10) and dividing the equation by h yields

    f'(x) = Im[f(x + ih)] / h + (h²/3!) f'''(x) + ...                           (11)

Hence the approximation is an O(h²) estimate of the derivative of f.

3.3 A Simple Numerical Example

Because the complex-step approximation does not involve a difference operation, we can choose extremely small step sizes with no loss of accuracy due to subtractive cancellation. To illustrate this, consider the following analytic function:

    f(x) = e^x / √(sin³(x) + cos³(x)).                                          (12)

The exact derivative at x = 1.5 was computed analytically to 16 digits and then compared to the results given by the complex-step approximation (9) and the forward and central finite-difference approximations.

Figure 1: Relative error in the sensitivity estimates given by the finite-difference and complex-step methods with the analytic result as the reference; ε = |f' − f'_ref| / |f'_ref|.

The forward-difference estimate initially converges to the exact result at a linear rate, since its truncation error is O(h), while the central difference converges quadratically, as expected. However, as the step is reduced below a value of about 10⁻⁸ for the forward difference and 10⁻⁵ for the central difference, subtractive cancellation errors become significant and the estimates are unreliable. When the interval h is so small that no difference exists in the output (for steps smaller than 10⁻¹⁶), the finite-difference estimates eventually yield zero and then ε = 1.

The complex-step estimate converges quadratically with decreasing step size, as predicted by the truncation error estimate. The estimate is practically insensitive to small step sizes and below an h of the order of 10⁻⁸ it achieves the accuracy of the function evaluation.
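The comparison can be reproduced with a minimal Python sketch (added here for illustration; NumPy's built-in complex arithmetic stands in for the complexified code described in the following sections, and the complex-step value is used as the reference instead of the analytically computed derivative):

    import numpy as np

    def f(x):
        # Test function from Equation (12); works for real or complex x.
        return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

    def complex_step(f, x, h=1e-200):
        # Complex-step derivative approximation, Equation (9)
        return np.imag(f(x + 1j * h)) / h

    def central_diff(f, x, h):
        # Central difference, Equation (4)
        return (f(x + h) - f(x - h)) / (2 * h)

    x = 1.5
    d_ref = complex_step(f, x)      # accurate to roughly machine precision
    for h in [1e-4, 1e-8, 1e-12]:
        d_cd = central_diff(f, x, h)
        print(f"h = {h:6.0e}   central-difference relative error = {abs(d_cd - d_ref) / abs(d_ref):.2e}")
    print(f"complex step (h = 1e-200): {d_ref:.16f}")

Note how the complex step tolerates a step of 10⁻²⁰⁰ without any cancellation, while the central difference degrades as h is reduced.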

Comparing the best accuracy of each of these approaches, we can see that by using finite differences we only achieve a fraction of the accuracy that is obtained by using the complex-step approximation.

As we can see, the complex-step size can be made extremely small. However, there is a lower limit on the step size when using finite-precision arithmetic. The range of real numbers that can be handled in numerical computing is dependent on the particular compiler that is used. In this case, the smallest non-zero number that can be represented is about 10⁻³⁰⁸. If a number falls below this value, underflow occurs and the number drops to zero. Note that the estimate is still accurate down to a step of the order of 10⁻³⁰⁷. Below this, underflow occurs and the estimate results in NaN. In general, the smallest possible h is the one below which underflow occurs somewhere in the algorithm.

When it comes to comparing the relative accuracy of complex and real computations, there is an increased error in basic arithmetic operations when using complex numbers, more specifically when dividing and multiplying.

3.4 Complex Function Definitions

In the derivation of the complex-step derivative approximation (9) for a function f we have assumed that f was an analytic function, i.e. that the Cauchy-Riemann equations apply. It is therefore important to examine to what extent this assumption holds when the value of the function is calculated by a numerical algorithm. In addition it is also useful to explain how we can convert real functions and operators such that they can take complex numbers as arguments. Fortunately, in the case of Fortran, complex numbers are a standard data type and many intrinsic functions are already defined for them.

Any algorithm can be broken down into a sequence of basic operations. Two main types of operations are relevant when converting a real algorithm to a complex one:

- Relational operators
- Arithmetic functions and operators.

Relational logic operators such as "greater than" and "less than" are not defined for complex numbers in Fortran. These operators are usually used in conjunction with if statements in order to redirect the execution thread. The original algorithm and its complexified version must obviously follow the same execution thread. Therefore, defining these operators to compare only the real parts of the arguments is the correct approach. Functions that choose one argument, such as max and min, are based on relational operators. Therefore, according to our previous discussion, we should once more choose a number based on its real part alone and let the imaginary part tag along.

Any algorithm that uses conditional statements is likely to be a discontinuous function of its inputs: either the function value itself is discontinuous or the discontinuity is in the first or higher derivatives.

When using a finite-difference method, the derivative estimate will be incorrect if the two function evaluations are within h of the discontinuity location. However, if the complex step is used, the resulting derivative estimate will be correct right up to the discontinuity. At the discontinuity, a derivative does not exist by definition, but if the function is defined at that point, the approximation will still return a value that depends on how the function is defined there.

Arithmetic functions and operators include addition, multiplication, and trigonometric functions, to name only a few, and most of these have a standard complex definition that is analytic almost everywhere. Many of these definitions are implemented in Fortran. Whether they are or not depends on the compiler and libraries that are used. The user should check the documentation of the particular Fortran compiler being used in order to determine which intrinsic functions need to be redefined.

Functions of the complex variable are merely extensions of their real counterparts. By requiring that the extended function satisfy the Cauchy-Riemann equations, i.e. analyticity, and that its properties be the same as those of the real function, we can obtain a unique complex function definition. Since these complex functions are analytic, the complex-step approximation is valid and will yield the correct result. Some of the functions, however, have singularities or branch cuts on which they are not analytic. This does not pose a problem since, as previously observed, the complex-step approximation will return a correct one-sided derivative. As for the case of a function that is not defined at a given point, the algorithm will not return a function value, so a derivative cannot be obtained. However, the derivative estimate will be correct in the neighborhood of the discontinuity.

The only standard complex function definition that is non-analytic is the absolute value function, or modulus. When the argument of this function is a complex value, the function returns a positive real number, |z| = √(x² + y²). This function's definition was not derived by imposing analyticity and therefore it will not yield the correct derivative when using the complex-step estimate. In order to derive an analytic definition of abs we start by satisfying the Cauchy-Riemann equations. From Equation (5), since we know what the value of the derivative must be, we can write,

    ∂u/∂x = ∂v/∂y = { −1,  x < 0
                      +1,  x > 0.                                               (13)

From Equation (6), since ∂v/∂x = 0 on the real axis, we get that ∂u/∂y = 0 on the axis, so the real part of the result must be independent of the imaginary part of the variable. Therefore, the new sign of the imaginary part depends only on the sign of the real part of the complex number, and an analytic absolute value function can be defined as:

    abs(x + iy) = { −x − iy,  x < 0
                    +x + iy,  x > 0.                                            (14)

Note that this is not analytic at x = 0, since a derivative does not exist there for the real absolute value. Once again, the complex-step approximation will give the correct value of the first derivative right up to the discontinuity. Later the x > 0 condition will be substituted by x ≥ 0, so that we not only obtain a function value for x = 0, but are also able to calculate the correct right-hand-side derivative at that point.
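A minimal Python sketch of this redefinition (an illustration added here; the names cabs and complex_step are arbitrary) shows how the standard modulus breaks the complex-step estimate while the definition in Equation (14) preserves it:

    def cabs(z):
        # Analytic absolute value, Equation (14), with x >= 0 taking the + branch.
        if z.real < 0:
            return -z
        return z

    def complex_step(f, x, h=1e-30):
        return (f(x + 1j * h)).imag / h

    # d|x|/dx at x = -2.0 should be -1.
    x = -2.0
    print("built-in abs :", complex_step(abs, x))    # wrong: the modulus is real, so the estimate is 0
    print("analytic abs :", complex_step(cabs, x))   # correct: -1.0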

3.5 Implementation Procedure

The complex-step method can be implemented in many different programming languages. The following is a general procedure that applies to any language which supports complex arithmetic:

1. Substitute all real type variable declarations with complex declarations. It is not strictly necessary to declare all variables complex, but it is much easier to do so.
2. Define all functions and operators that are not defined for complex arguments and re-define abs.
3. Change input and output statements if necessary.
4. A complex step can then be added to the desired x and ∂f/∂x can be estimated using Equation (9).

The complex-step method can be implemented with or without operator overloading, but the latter results in a more elegant implementation.

Fortran: Fortunately, in Fortran 90, intrinsic functions and operators (including comparison operators) can be overloaded, and this makes it possible to use the operator-overloading type of implementation. This means that if a particular function or operator does not take complex arguments, one can extend it by writing another definition that takes this type of argument. This feature makes it much easier to implement the complex-step method since, once we overload the functions and operators, there is no need to change the function calls or conditional statements. The compiler will automatically determine the argument type and choose the correct function or operation. A module with the necessary definitions and a script that converts the original source code automatically are available on the web [9].

C/C++: Since C++ also supports overloading, the implementation is analogous to the Fortran one. An include file contains the definition of a new variable type called cmplx as well as all the functions that are necessary for the complex-step method.

The inclusion of this file and the replacement of double or float declarations with cmplx is nearly all that is required.

Matlab: As in the case of Fortran, one must redefine functions such as abs, max and min. All differentiable functions are defined for complex variables. The results for the simple example in the previous section were computed using Matlab. The standard transpose operation represented by an apostrophe (') poses a problem, as it takes the complex conjugate of the elements of the matrix, so one should use the non-conjugate transpose represented by dot-apostrophe (.') instead.

Java: Complex arithmetic is not standardized at the moment but there are plans for its implementation. Although function overloading is possible, operator overloading is currently not supported.

Python: When using the Numerical Python module (NumPy), we have access to complex number arithmetic and the implementation is as straightforward as in Matlab.
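As a sketch of the procedure above in Python (added for illustration; the toy functions analysis and cs_max are invented for this example), note that step 1 reduces to passing a complex argument, and step 2 reduces to making relational tests and max-like functions use only the real part so that the execution path matches the original real-valued code:

    import cmath

    def cs_max(a, b):
        # max based on the real parts only; the imaginary part "tags along"
        return a if a.real >= b.real else b

    def analysis(x):
        # A toy "analysis code" with a conditional and a max, already complexified:
        # the branch and the max compare real parts, so the execution thread is
        # the same as in the original real-valued version.
        y = cs_max(x**3, 0.1 * x)
        if x.real > 0.0:
            y = y + cmath.sin(x)
        return y

    def derivative(func, x, h=1e-30):
        # Step 4: add a complex step to the input and apply Equation (9)
        return func(x + 1j * h).imag / h

    print(derivative(analysis, 2.0))   # d/dx (x^3 + sin x) at x = 2: 12 + cos(2) = 11.5838...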

4 Algorithmic Differentiation

Algorithmic differentiation, also known as computational differentiation or automatic differentiation, is a well-known method based on the systematic application of the differentiation chain rule to computer programs. Although this approach is as accurate as an analytic method, it is potentially much easier to implement since this can be done automatically.

4.1 How it Works

The method is based on the application of the chain rule of differentiation to each operation in the program flow. The derivatives given by the chain rule can be propagated forward (forward mode) or backwards (reverse mode).

When using the forward mode, for each intermediate variable in the algorithm, a variation due to one input variable is carried through. This is very similar to the way the complex-step method works. To illustrate this, suppose we want to differentiate the multiplication operation, f = x_1 x_2, with respect to x_1. Table 1 compares how the differentiation would be performed using either algorithmic differentiation or the complex-step method.

    Algorithmic                           Complex-Step
    -----------------------------         --------------------------------------------
    Δx_1 = 1                              h_1 = 10⁻²⁰
    Δx_2 = 0                              h_2 = 0
    f = x_1 x_2                           f = (x_1 + ih_1)(x_2 + ih_2)
    Δf = x_1 Δx_2 + x_2 Δx_1              f = x_1 x_2 − h_1 h_2 + i(x_1 h_2 + x_2 h_1)
    df/dx_1 = Δf                          df/dx_1 = Im(f)/h_1

    Table 1: The differentiation of the multiplication operation f = x_1 x_2 with respect to x_1 using algorithmic differentiation and the complex-step derivative approximation.

As we can see, algorithmic differentiation stores the derivative value in a separate set of variables while the complex step carries the derivative information in the imaginary part of the variables. In this case, the complex-step method performs one additional operation, the calculation of the term h_1 h_2, which, for the purposes of calculating the derivative, is superfluous. The complex-step method will nearly always include these superfluous computations, which correspond to the higher-order terms in the Taylor series expansion of Equation (9). For very small h, when using finite-precision arithmetic, these terms have no effect on the real part of the result.

Although this example involves only one operation, both methods work for an algorithm involving an arbitrary sequence of operations by propagating the variation of one input forward throughout the code. This means that in order to calculate n derivatives, the differentiated code must be executed n times.

The other mode, the reverse mode, has no equivalent in the complex-step method. When using the reverse mode, the code is executed forwards and then backwards to calculate derivatives of one output with respect to n inputs. The total number of operations is independent of n, but the memory requirements may be prohibitive, especially for the case of large iterative algorithms.

There is nothing like an example, so we will now use both the forward and reverse modes to compute the derivatives of the function,

    f(x_1, x_2) = x_1 x_2 + sin(x_1).                                           (15)

The algorithm that would calculate this function is shown below, together with the derivative calculation using the forward mode.

    t_1 = x_1           Δt_1 = 1
    t_2 = x_2           Δt_2 = 0
    t_3 = t_1 t_2       Δt_3 = Δt_1 t_2 + t_1 Δt_2
    t_4 = sin(t_1)      Δt_4 = Δt_1 cos(t_1)
    t_5 = t_3 + t_4     Δt_5 = Δt_3 + Δt_4
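The forward sweep above can be mimicked with a small operator-overloading class, which is the derived-datatype idea described later in Section 4.2. This Python sketch is an illustration added to these notes (the class name Dual and its methods are not part of any of the tools listed later):

    import math

    class Dual:
        """Carries a value and its derivative (the forward-mode seed)."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot

        def __add__(self, other):
            return Dual(self.val + other.val, self.dot + other.dot)

        def __mul__(self, other):
            # product rule: d(uv) = u dv + v du
            return Dual(self.val * other.val,
                        self.val * other.dot + other.val * self.dot)

    def sin(t):
        # chain rule for sin: d(sin t) = cos(t) dt
        return Dual(math.sin(t.val), math.cos(t.val) * t.dot)

    def f(x1, x2):
        return x1 * x2 + sin(x1)      # Equation (15)

    # Seed x1 with a unit derivative to get df/dx1 (one pass per input).
    x1, x2 = Dual(1.0, 1.0), Dual(2.0, 0.0)
    print(f(x1, x2).dot)              # x2 + cos(x1) at (1, 2): 2.5403...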

Figure 2: Graph of the algorithm that calculates f(x_1, x_2) = x_1 x_2 + sin(x_1). The inputs x_1 and x_2 feed t_1 and t_2; t_1 and t_2 feed the product node t_3; t_1 also feeds the sin node t_4; and t_3 and t_4 feed the sum t_5.

The reverse mode is also based on the chain rule. Let t_j denote all the intermediate variables in an algorithm that calculates f(x_i). We set t_1, ..., t_n to x_1, ..., x_n and the last intermediate variable, t_m, to f. Then the chain rule can be written as,

    dt_j/dt_i = Σ_{k ∈ K_j} (∂t_j/∂t_k)(dt_k/dt_i),    i = 1, 2, ..., m,        (16)

for j = n + 1, ..., m, to obtain the gradients of the intermediate and output variables. K_j denotes the set of indices k < j such that the variable t_j in the code depends explicitly on t_k. In order to know in advance what these indices are, we have to form the graph of the algorithm when it is first executed. This provides information on the interdependence of all the intermediate variables. A graph for our sample algorithm is shown in Figure 2. In the reverse mode, this chain rule is applied starting from the output t_m and working backwards, accumulating the derivatives of t_m with respect to each earlier variable. The sequence of calculations shown below corresponds to the application of the reverse mode to our simple function.

    dt_5/dt_5 = 1

    dt_5/dt_4 = (dt_5/dt_5)(∂t_5/∂t_4) = 1

    dt_5/dt_3 = (dt_5/dt_5)(∂t_5/∂t_3) + (dt_5/dt_4)(∂t_4/∂t_3) = 1·1 + 1·0 = 1

    dt_5/dt_2 = (dt_5/dt_3)(∂t_3/∂t_2) + (dt_5/dt_4)(∂t_4/∂t_2) = 1·t_1 + 1·0 = t_1

    dt_5/dt_1 = (dt_5/dt_2)(∂t_2/∂t_1) + (dt_5/dt_3)(∂t_3/∂t_1) + (dt_5/dt_4)(∂t_4/∂t_1)
              = t_1·0 + 1·t_2 + 1·cos(t_1) = t_2 + cos(t_1)

The following matrix helps to visualize the sensitivities of all the variables with respect to each other:

    | 1           0           0           0           0 |
    | 0           1           0           0           0 |
    | dt_3/dt_1   dt_3/dt_2   1           0           0 |                       (17)
    | dt_4/dt_1   dt_4/dt_2   dt_4/dt_3   1           0 |
    | dt_5/dt_1   dt_5/dt_2   dt_5/dt_3   dt_5/dt_4   1 |

In the case of the example we are considering we have:

    | 1                 0     0   0   0 |
    | 0                 1     0   0   0 |
    | t_2               t_1   1   0   0 |                                       (18)
    | cos(t_1)          0     0   1   0 |
    | t_2 + cos(t_1)    t_1   1   1   1 |

The cost of calculating the derivatives of one output with respect to many inputs is proportional not to the number of inputs but to the number of outputs. However, since when using the reverse mode we need to store all the intermediate variables as well as the complete graph of the algorithm, the amount of memory that is necessary increases dramatically. In the case of a three-dimensional iterative solver, the cost of using this mode can be prohibitive.
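To make the reverse sweep concrete, here is a small tape-based sketch in Python (an illustration added to these notes, not one of the tools listed in the next section; all names are arbitrary). It records each operation during the forward pass and then propagates derivatives of the output backwards, reproducing df/dx_1 = x_2 + cos(x_1) and df/dx_2 = x_1:

    import math

    _tape = []   # records every Var in creation order (the forward pass)

    class Var:
        def __init__(self, val, parents=()):
            self.val = val
            self.parents = parents   # tuples of (parent Var, local partial d(self)/d(parent))
            self.bar = 0.0           # derivative of the output w.r.t. this Var
            _tape.append(self)

        def __mul__(self, other):
            return Var(self.val * other.val, ((self, other.val), (other, self.val)))

        def __add__(self, other):
            return Var(self.val + other.val, ((self, 1.0), (other, 1.0)))

    def sin(t):
        return Var(math.sin(t.val), ((t, math.cos(t.val)),))

    def reverse_sweep(output):
        # Walk the tape backwards, accumulating derivatives of the output.
        output.bar = 1.0
        for node in reversed(_tape):
            for parent, local in node.parents:
                parent.bar += node.bar * local

    x1, x2 = Var(1.0), Var(2.0)
    f = x1 * x2 + sin(x1)            # Equation (15)
    reverse_sweep(f)
    print(x1.bar, x2.bar)            # x2 + cos(x1) = 2.5403..., x1 = 1.0

The tape stored in _tape plays the role of the graph discussed above, and the single backward pass yields the derivatives with respect to both inputs at once.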

4.2 Existing Tools

There are two main methods for implementing algorithmic differentiation: by source code transformation or by using derived datatypes and operator overloading.

To implement algorithmic differentiation by source transformation, the whole source code must be processed with a parser and all the derivative calculations are introduced as additional lines of code. The resulting source code is greatly enlarged and becomes practically unreadable. This fact constitutes an implementation disadvantage, as it becomes impractical to debug this new extended code. One has to work with the original source, and every time it is changed (or if different derivatives are desired) one must rerun the parser before compiling a new version.

In order to use derived types, we need languages that support this feature, such as Fortran 90 or C++. To implement algorithmic differentiation using this feature, a new type of structure is created that contains both the value and its derivative. All the existing operators are then re-defined (overloaded) for the new type. The new operator has exactly the same behavior as before for the value part of the new type, but uses the definition of the derivative of the operator to calculate the derivative portion. This results in a very elegant implementation, since very few changes are required in the original code.

Many tools for automatic algorithmic differentiation of programs in different languages exist. They have been extensively developed and provide the user with great functionality, including the calculation of higher-order derivatives and reverse-mode options.

Fortran: Tools that use the source transformation approach include ADIFOR [11], TAMC, DAFOR, GRESS, Odyssée and PADRE2. The necessary changes to the source code are made automatically. The derived datatype approach is used in the following tools: AD01, ADOL-F, IMAS and OPTIMA90. Although it is in theory possible to have a script make the necessary changes in the source code automatically, none of these tools have this facility and the changes must be done manually.

C/C++: Established tools for automatic algorithmic differentiation also exist for C/C++ [10]. These include ADIC, an implementation mirroring ADIFOR, and ADOL-C, a free package that uses operator overloading and can operate in the forward or reverse modes and compute higher-order derivatives.

References

[1] Lyness, J. N., and C. B. Moler, "Numerical Differentiation of Analytic Functions," SIAM J. Numer. Anal., Vol. 4, 1967, pp. 202-210.

[2] Lyness, J. N., "Numerical Algorithms Based on the Theory of Complex Variables," Proc. ACM 22nd Nat. Conf., Thompson Book Co., Washington DC, 1967, pp. 124-134.

[3] Squire, W., and G. Trapp, "Using Complex Variables to Estimate Derivatives of Real Functions," SIAM Review, Vol. 40, No. 1, March 1998, pp. 110-112.

[4] Martins, J. R. R. A., I. M. Kroo, and J. J. Alonso, "An Automated Method for Sensitivity Analysis Using Complex Variables," Proceedings of the 38th Aerospace Sciences Meeting, AIAA Paper 2000-0689, Reno, NV, January 2000.

[5] Martins, J. R. R. A., and P. Sturdza, "The Connection Between the Complex-Step Derivative Approximation and Algorithmic Differentiation," Proceedings of the 39th Aerospace Sciences Meeting, AIAA Paper 2001-0921, Reno, NV, January 2001.

[6] Anderson, W. K., J. C. Newman, D. L. Whitfield, and E. J. Nielsen, "Sensitivity Analysis for the Navier-Stokes Equations on Unstructured Meshes Using Complex Variables," AIAA Paper 99-3294, Proceedings of the 17th Applied Aerodynamics Conference, June 1999.

[7] Newman, J. C., W. K. Anderson, and D. L. Whitfield, "Multidisciplinary Sensitivity Derivatives Using Complex Variables," MSSU-COE-ERC-98-08, July 1998.

[8] http://www.python.org

[9] http://aero-comlab.stanford.edu/jmartins

[10] http://www.sc.rwth-aachen.de/research/AD/subject.html

[11] Bischof, C., A. Carle, G. Corliss, A. Griewank, and P. Hovland, "ADIFOR: Generating Derivative Codes from Fortran Programs," Scientific Programming, Vol. 1, No. 1, 1992, pp. 11-29.

5 Analytic Sensitivity Analysis

Analytic methods are the most accurate and efficient methods available for sensitivity analysis. They are, however, more involved than the other methods we have seen so far, since they require knowledge of the governing equations and of the algorithm that is used to solve those equations. In this section we will learn how to compute analytic sensitivities with direct and adjoint methods. We will start with single-discipline systems and then generalize to the case of multiple systems such as we would encounter in MDO.

5.1 Single Systems

5.1.1 Notation

    f_i   function of interest / output,           i = 1, ..., n_f
    R_k   residuals of the governing equations,    k = 1, ..., n_R
    x_j   design / independent / input variables,  j = 1, ..., n_x
    y_k   state variables,                         k = 1, ..., n_R
    ψ_k   adjoint vector,                          k = 1, ..., n_R

5.1.2 Basic Equations

Consider the residuals of the governing equations of a given system,

    R_k(x_j, y_k(x_j)) = 0,                                                     (19)

where x_j are the independent variables (the design variables) and y_k are the state variables that depend on the independent ones through the solution of the governing equations. Note that the number of equations must equal the number of unknowns (the state variables).

Any perturbation of the variables of this system of equations must result in no variation of the residuals, if the governing equations are to be satisfied. Therefore, we can write,

    δR_k = 0   ⇒   (∂R_k/∂x_j) δx_j + (∂R_k/∂y_k) δy_k = 0,                     (20)

since there is a variation due to the change in the design variables as well as a variation due to the change in the state vector. This equation applies to all k = 1, ..., n_R and j = 1, ..., n_x. Dividing the equation by δx_j, we can get it in another form which involves the total derivative dy_k/dx_j,

    ∂R_k/∂x_j + (∂R_k/∂y_k)(dy_k/dx_j) = 0.                                     (21)

Our final objective is to obtain the sensitivity of the function of interest, f_i, which can be the objective function or a set of constraints.

The function f_i also depends on both x_j and y_k, and hence the total variation of f_i is,

    δf_i = (∂f_i/∂x_j) δx_j + (∂f_i/∂y_k) δy_k.                                 (22)

Note that δy_k cannot be found explicitly, since y_k varies implicitly with respect to x_j through the solution of the governing equations. We can also divide this equation by δx_j to get the alternate form,

    df_i/dx_j = ∂f_i/∂x_j + (∂f_i/∂y_k)(dy_k/dx_j),                             (23)

where j = 1, ..., n_x and k = 1, ..., n_R. The first term on the right-hand side represents the explicit variation of the function of interest with respect to the design variables, due to the presence of these variables in the expression for f_i. The second term is the variation of the function due to the change of the state variables when the governing equations are solved.

Figure 3: Schematic representation of the governing equations (R = 0), design variables or inputs (x_j), state variables (y_k) and the function of interest or output (f_i).

A graphical representation of the system of governing equations is shown in Figure 3, with the design variables x_j as the input and f_i as the output. The two arrows leading to f_i illustrate the fact that f_i depends on x_j not only explicitly but also through the state variables.

The following two sections describe two different ways of calculating df_i/dx_j, which is what we need to perform gradient-based optimization.

5.1.3 Direct Sensitivity Equations

The direct approach first calculates the total variation of the state variables, y_k, by solving the differentiated governing equations (21) for dy_k/dx_j, the total derivative of the state variables with respect to a given design variable. This means solving the linear system of equations,

    (∂R_k/∂y_k)(dy_k/dx_j) = −∂R_k/∂x_j.                                        (24)

The solution procedure usually involves factorizing the square matrix ∂R_k/∂y_k and then back-solving to obtain the solution. Note that we have to choose one x_j each time we back-solve, since the right-hand-side vector is different for each j. We can then take the result for dy_k/dx_j and substitute it in equation (23) to get df_i/dx_j for all i = 1, ..., n_f.

5.1.4 Adjoint Sensitivity Equations

The adjoint approach adjoins the variations of the governing equations (20) to the variation of the function of interest (22),

    δf_i = (∂f_i/∂x_j) δx_j + (∂f_i/∂y_k) δy_k + ψ_k^T [ (∂R_k/∂x_j) δx_j + (∂R_k/∂y_k) δy_k ],      (25)

where ψ_k is the adjoint vector and the term in brackets is δR_k = 0. The values of the components of this vector are arbitrary, because we only consider variations for which the governing equations are satisfied, i.e., δR_k = 0. If we collect the terms multiplying each of the variations we obtain,

    δf_i = ( ∂f_i/∂x_j + ψ_k^T ∂R_k/∂x_j ) δx_j + ( ∂f_i/∂y_k + ψ_k^T ∂R_k/∂y_k ) δy_k.              (26)

Since ψ_k is arbitrary, we can choose its values to be those for which the term multiplying δy_k is zero, i.e., we solve

    ψ_k^T (∂R_k/∂y_k) = −∂f_i/∂y_k   ⟺   [∂R_k/∂y_k]^T ψ_k = −[∂f_i/∂y_k]^T                          (27)

for the adjoint vector ψ_k. An adjoint vector is the same for any x_j, but it is different for each f_i. The term in equation (26) that multiplies δx_j then corresponds to the total derivative of f_i with respect to x_j,

    df_i/dx_j = ∂f_i/∂x_j + ψ_k^T (∂R_k/∂x_j),                                  (28)

which are the sensitivities we want to calculate.

5.1.5 Direct vs. Adjoint

In the previous two sections, the direct and adjoint sensitivity equations were both derived independently from the same two equations, (20) and (22). We will now attempt to unify the derivation of these two methods by expressing them in the same equation. This will help us gain a better understanding of how the two approaches are related.

If we want to solve for the total sensitivity of the state variables with respect to the design variables, we have to solve equation (24). Assuming that we have the sensitivity matrix of the residuals with respect to the state variables, ∂R_k/∂y_k, and that it is invertible, the solution is,

    dy_k/dx_j = −[∂R_k/∂y_k]^{-1} (∂R_k/∂x_j).                                  (29)

Note that the matrix of partial derivatives of the residuals with respect to the state variables, ∂R_k/∂y_k, is square, since the number of governing equations must equal the number of state variables. Substituting equation (29) into the expression for the total derivative of the function of interest (23), we get,

    df_i/dx_j = ∂f_i/∂x_j − (∂f_i/∂y_k) [∂R_k/∂y_k]^{-1} (∂R_k/∂x_j),           (30)

where the product of the last two factors is dy_k/dx_j. Both the direct and adjoint methods can be seen in this equation.

Using the direct method, we would start by solving for the term identified as dy_k/dx_j in equation (30), i.e., the solution of,

    (∂R_k/∂y_k)(dy_k/dx_j) = −∂R_k/∂x_j,                                        (31)

which is the total sensitivity of the state variables. Note that each set of these total sensitivities is valid for only one design variable, x_j. Once we have these sensitivities, we can use the result in equation (30), i.e.,

    df_i/dx_j = ∂f_i/∂x_j + (∂f_i/∂y_k)(dy_k/dx_j),                             (32)

to get the desired sensitivities.

To use the adjoint method, we would instead define the adjoint vector as the first two factors of the second term in equation (30),

    df_i/dx_j = ∂f_i/∂x_j − (∂f_i/∂y_k) [∂R_k/∂y_k]^{-1} (∂R_k/∂x_j),   with   ψ_k^T = −(∂f_i/∂y_k) [∂R_k/∂y_k]^{-1}.      (33)

The adjoint vector is then the solution of the system,

    [∂R_k/∂y_k]^T ψ_k = −[∂f_i/∂y_k]^T,                                         (34)

which we have to solve for each f_i. The adjoint vector is the same for any chosen design variable, since j does not appear in the equation. We can substitute the resulting adjoint vector into equation (33) to get,

    df_i/dx_j = ∂f_i/∂x_j + ψ_k^T (∂R_k/∂x_j).                                  (35)

Unlike the direct method, where each dy_k/dx_j can be used for any function f_i, we must compute a different adjoint vector ψ_k for each function of interest.

A comparison of the cost of computing sensitivities with the direct versus the adjoint method is shown in Table 2.

    Step             Direct       Adjoint
    --------------   ----------   ----------
    Factorization    same         same
    Back-solve       n_x times    n_f times
    Multiplication   same         same

    Table 2: Comparison of the cost of computing sensitivities with the direct and adjoint methods.

With either method, we must factorize the same matrix, ∂R_k/∂y_k. The difference in cost comes from the back-solve step for solving equations (31) and (34) respectively. The direct method requires that we perform this step for each design variable (i.e. for each j), while the adjoint method requires it to be done for each function of interest (i.e. for each i). The multiplication step is simply the calculation of the final sensitivity expressed in equations (32) and (35) respectively. The cost involved in this step when computing the same set of sensitivities is the same for both methods.

The final conclusion is the established rule that if the number of design variables (inputs) is greater than the number of functions of interest (outputs), the adjoint method is more efficient than the direct method, and vice-versa. If the number of outputs is similar to the number of inputs, either method will be costly.
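A numerical illustration of equations (31)-(35) is given below (added to these notes; the system is a made-up linear model with residuals R = A y − B x and outputs f = C y + D x, where all matrices are random). Both methods give the same total sensitivities; they differ only in the number of back-solves:

    import numpy as np

    rng = np.random.default_rng(0)
    n_R, n_x, n_f = 4, 3, 2          # states, design variables, functions of interest

    A = rng.standard_normal((n_R, n_R)) + 4.0 * np.eye(n_R)
    B = rng.standard_normal((n_R, n_x))
    C = rng.standard_normal((n_f, n_R))
    D = rng.standard_normal((n_f, n_x))

    dRdy, dRdx = A, -B               # partial derivatives of the residuals
    dfdy, dfdx = C, D                # partial derivatives of the outputs

    # Direct method: one back-solve per design variable, Equations (31) and (32).
    dydx = np.linalg.solve(dRdy, -dRdx)            # n_x right-hand sides
    df_direct = dfdx + dfdy @ dydx

    # Adjoint method: one back-solve per function of interest, Equations (34) and (35).
    psi = np.linalg.solve(dRdy.T, -dfdy.T)         # n_f right-hand sides
    df_adjoint = dfdx + psi.T @ dRdx

    print(np.allclose(df_direct, df_adjoint))      # True: both give df_i/dx_j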

In this discussion, we have assumed that the governing equations have been discretized. The same kind of procedure can be applied to continuous governing equations. The principle is the same, but the notation would have to be more general. The equations, in the end, have to be discretized in order to be solved numerically. Figure 4 shows the two ways of arriving at the discrete sensitivity equations: we can either differentiate the continuous governing equations first and then discretize them, or discretize the governing equations and differentiate them in the second step. The resulting sensitivity equations should be equivalent, but are not necessarily the same. Differentiating the continuous governing equations first is usually more involved. In addition, applying boundary conditions to the differentiated equations can be non-intuitive, because the boundary conditions of the continuous sensitivity equations are non-physical.

Figure 4: The two ways of obtaining the discretized sensitivity equations: continuous governing equations → continuous sensitivity equations → discrete sensitivity equations (route 1), or discrete governing equations → discrete sensitivity equations (route 2).

5.2 Example: Structural Sensitivity Analysis

The discretized governing equations for a finite-element structural model are,

    R_k = K_kk' u_k' − f_k = 0,                                                 (36)

where K_kk' is the stiffness matrix, u_k is the vector of displacements (the state) and f_k is the vector of applied forces (not to be confused with the function of interest from the previous section!). We are interested in finding the sensitivities of the stress, which is related to the displacements by the equation,

    σ_i = S_ik u_k.                                                             (37)

We will consider the design variables to be the cross-sectional areas of the elements, A_j.

We will now look at the terms that we need in order to use the generalized total sensitivity equation (30). For the matrix of sensitivities of the governing equations with respect to the state variables, we find that it is simply the stiffness matrix, i.e.,

    ∂R_k/∂u_k' = ∂(K_kk' u_k' − f_k)/∂u_k' = K_kk'.                             (38)

Let us consider the sensitivity of the residuals with respect to the design variables (the cross-sectional areas in our case). Neither the displacements nor the applied forces vary explicitly with the element sizes. The only term that depends on A_j directly is the stiffness matrix, so we get,

    ∂R_k/∂x_j = ∂(K_kk' u_k' − f_k)/∂A_j = (∂K_kk'/∂A_j) u_k'.                  (39)

The partial derivative of the stress with respect to the displacements is simply given by the matrix in equation (37), i.e.,

    ∂f_i/∂y_k = ∂σ_i/∂u_k = S_ik.                                               (40)

Finally, the explicit variation of the stress with respect to the cross-sectional areas is zero, since the stresses depend only on the displacement field,

    ∂f_i/∂x_j = ∂σ_i/∂A_j = 0.                                                  (41)

Substituting these terms into the generalized total sensitivity equation (30) we get:

    dσ_i/dA_j = −(∂σ_i/∂u_k) K_kk'^{-1} (∂K_kk'/∂A_j) u_k'.                     (42)

Referring to the theory presented previously, if we were to use the direct method, we would solve,

    K_kk' (du_k'/dA_j) = −(∂K_kk'/∂A_j) u_k',                                   (43)

and then substitute the result in,

    dσ_i/dA_j = ∂σ_i/∂A_j + (∂σ_i/∂u_k)(du_k/dA_j),                             (44)

to calculate the desired sensitivities.

The adjoint method could also be used, in which case we would solve equation (34) for the structures case,

    [K_kk']^T ψ_k = −[∂σ_i/∂u_k]^T.                                             (45)

Then we would substitute the adjoint vector into the equation,

    dσ_i/dA_j = ∂σ_i/∂A_j + ψ_k^T (∂K_kk'/∂A_j) u_k',                           (46)

to calculate the desired sensitivities.
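As a concrete check of equations (43)-(46), the sketch below (added for illustration; the two-element bar, its element stiffness contributions K1 and K2, the stress matrix S, the areas A and the load f are all made-up numbers, not taken from the notes) computes the stress sensitivities with both the direct and the adjoint method:

    import numpy as np

    # Two-element axial bar fixed at the left end: 2 free DOFs, 2 element areas,
    # with K(A) = A1*K1 + A2*K2 so that ∂K/∂A_j is simply K_j.
    K1 = np.array([[1.0, 0.0], [0.0, 0.0]])          # element 1: support to DOF 1
    K2 = np.array([[1.0, -1.0], [-1.0, 1.0]])        # element 2: DOF 1 to DOF 2
    A = np.array([2.0, 1.0])                          # cross-sectional areas (design variables)
    f = np.array([0.0, 1.0])                          # applied loads
    S = np.array([[1.0, 0.0], [-1.0, 1.0]])           # maps displacements to element "stresses"

    K = A[0] * K1 + A[1] * K2
    u = np.linalg.solve(K, f)
    dKdA = [K1, K2]                                    # ∂K/∂A_j

    # Direct method: Equations (43) and (44)
    dudA = np.column_stack([np.linalg.solve(K, -dK @ u) for dK in dKdA])
    dsig_direct = S @ dudA          # plus the explicit term ∂σ/∂A_j, which is zero, Equation (41)

    # Adjoint method: Equations (45) and (46), one adjoint vector per stress component
    psi = np.linalg.solve(K.T, -S.T)
    dsig_adjoint = np.array([[psi[:, i] @ (dK @ u) for dK in dKdA]
                             for i in range(S.shape[0])])

    print(np.allclose(dsig_direct, dsig_adjoint))      # True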

5.3 Multidisciplinary Sensitivity Analysis

The analysis done in the previous section for single-discipline systems can be generalized to multiple, coupled systems. The same total sensitivity equation (30) applies, but now the governing equations and state variables of all disciplines are included in R and y respectively. To illustrate this, consider for example a coupled aero-structural system where both aerodynamic (A) and structural (S) analyses are involved and the state variables are the flow state, w, and the structural displacements, u. Figure 5 shows how such a system is coupled. Equation (31) can then be written for this coupled system as,

    | ∂R_A/∂w   ∂R_A/∂u | | dw/dx_j |      | ∂R_A/∂x_j |
    | ∂R_S/∂w   ∂R_S/∂u | | du/dx_j |  = − | ∂R_S/∂x_j |.                       (47)

Figure 5: Schematic representation of the aero-structural governing equations (R_A = 0 and R_S = 0), with the design variables x as inputs, the coupled state variables w and u, and the function of interest f as output.

In addition to the diagonal terms of the matrix, which would appear when solving the single systems, we have cross terms expressing the sensitivity of one system to the other's state variables. These equations are sometimes called the Global Sensitivity Equations (GSE) [4].

Similarly, we can write a coupled adjoint based on equation (34),

    | ∂R_A/∂w   ∂R_A/∂u |^T | ψ_A |      | [∂f_i/∂w]^T |
    | ∂R_S/∂w   ∂R_S/∂u |   | ψ_S |  = − | [∂f_i/∂u]^T |.                       (48)

In addition to the GSE expressed in equation (47), Sobieski [4] also introduced an alternative method, which he called GSE2, for calculating total sensitivities. Instead of looking at the variation of the residuals, we look at the variation of the states, writing y_k(x_j, y_k'(x_j)), where k' ≠ k. The total variation of the state of one discipline is,

    δy_k = (∂y_k/∂x_j) δx_j + (∂y_k/∂y_k') δy_k'.                               (49)

Dividing this by δx_j, we get,

    dy_k/dx_j = ∂y_k/∂x_j + (∂y_k/∂y_k')(dy_k'/dx_j).                           (50)

For all k, this can be rearranged as,

    (δ_kk' − ∂y_k/∂y_k') (dy_k'/dx_j) = ∂y_k/∂x_j.                              (51)

Writing this in matrix form for the two-discipline example, we get,

    |    I       −∂w/∂u | | dw/dx_j |     | ∂w/∂x_j |
    | −∂u/∂w        I   | | du/dx_j |  =  | ∂u/∂x_j |.                          (52)

One advantage of this formulation is that the size of the matrix to be factorized might be reduced. This is due to the fact that the state variables of one system do not always depend on all the state variables of the other system. For example, in the case of the coupled aero-structural system, only the surface aerodynamic pressures affect the structural analysis, and we could substitute the full flow state, w, by a much smaller vector of surface pressures. Similarly, we could use only the surface structural displacements rather than all of them, since only these influence the aerodynamics.

An adjoint version of this alternative can also be derived, and the system to be solved is in this case,

    |    I       −∂w/∂u |^T | ψ_A |     | [∂f_i/∂w]^T |
    | −∂u/∂w        I   |   | ψ_S |  =  | [∂f_i/∂u]^T |.                        (53)

Since factorizing the full residual sensitivity matrix is in many cases impractical, the method can be slightly modified as follows. Equation (48) can be re-written as,

    | [∂R_A/∂w]^T   [∂R_S/∂w]^T | | ψ_A |      | [∂f_i/∂w]^T |
    | [∂R_A/∂u]^T   [∂R_S/∂u]^T | | ψ_S |  = − | [∂f_i/∂u]^T |.                 (54)

Since the factorization of the matrix in equation (54) would be extremely costly, we set up an iterative procedure, much like the one used for our aero-structural solution, where the adjoint vectors are lagged and two different sets of equations are solved separately. For the calculation of the adjoint vector of one discipline, we use the adjoint vector of the other discipline from the previous iteration, i.e., we solve,

    [∂R_A/∂w]^T ψ_A = −[∂f_i/∂w]^T − [∂R_S/∂w]^T ψ_S,                           (55)

    [∂R_S/∂u]^T ψ_S = −[∂f_i/∂u]^T − [∂R_A/∂u]^T ψ_A,                           (56)

whose final result, after convergence, is the same as that of equation (48). We will call this the lagged coupled adjoint method for computing sensitivities of coupled systems. Note that these equations look like the single-discipline ones for the aerodynamic and structural adjoint, except that a forcing term is subtracted from the right-hand side. Once the solutions for both adjoint vectors have converged, we are able to compute the final sensitivities of a given cost function by using,

    df_i/dx_j = ∂f_i/∂x_j + ψ_A^T (∂R_A/∂x_j) + ψ_S^T (∂R_S/∂x_j).              (57)
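The following sketch (an illustration added to these notes; the coupled system is a made-up pair of small linear "disciplines", not an aero-structural model) applies the lagged iteration of equations (55) and (56) and checks it against the directly solved coupled adjoint of equation (48) and the total derivative of equation (57):

    import numpy as np

    rng = np.random.default_rng(1)
    n_w, n_u, n_x = 3, 2, 2

    # Made-up coupled linear disciplines:
    #   R_A = A w + C u - a x = 0,   R_S = D w + B u - b x = 0,   f = p.w + q.u
    A = rng.standard_normal((n_w, n_w)) + 5.0 * np.eye(n_w)
    B = rng.standard_normal((n_u, n_u)) + 5.0 * np.eye(n_u)
    C = 0.3 * rng.standard_normal((n_w, n_u))      # coupling: structure -> aero
    D = 0.3 * rng.standard_normal((n_u, n_w))      # coupling: aero -> structure
    a = rng.standard_normal((n_w, n_x))
    b = rng.standard_normal((n_u, n_x))
    p = rng.standard_normal(n_w)
    q = rng.standard_normal(n_u)

    dRA_dw, dRA_du, dRA_dx = A, C, -a
    dRS_dw, dRS_du, dRS_dx = D, B, -b
    df_dw, df_du, df_dx = p, q, np.zeros(n_x)

    # Lagged coupled adjoint, Equations (55) and (56)
    psi_A, psi_S = np.zeros(n_w), np.zeros(n_u)
    for _ in range(100):
        psi_A = np.linalg.solve(dRA_dw.T, -df_dw - dRS_dw.T @ psi_S)
        psi_S = np.linalg.solve(dRS_du.T, -df_du - dRA_du.T @ psi_A)

    # Reference: solve the coupled adjoint system of Equation (48) directly
    K = np.block([[dRA_dw, dRA_du], [dRS_dw, dRS_du]])
    psi = np.linalg.solve(K.T, -np.concatenate([df_dw, df_du]))
    print(np.allclose(np.concatenate([psi_A, psi_S]), psi))   # True after convergence

    # Total sensitivity, Equation (57)
    dfdx = df_dx + psi_A @ dRA_dx + psi_S @ dRS_dx
    print(dfdx)

The lagged loop is a block Gauss-Seidel iteration on the coupled adjoint system; it converges here because the made-up disciplines are diagonally dominant and only weakly coupled.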

References

[1] Adelman, H. M., and R. T. Haftka, "Sensitivity Analysis of Discrete Structural Systems," AIAA Journal, Vol. 24, No. 5, May 1986.

[2] Barthelemy, B., R. T. Haftka and G. A. Cohen, "Physically Based Sensitivity Derivatives for Structural Analysis Programs," Computational Mechanics, pp. 465-476, Springer-Verlag, 1989.

[3] Belegundu, A. D., and J. S. Arora, "Sensitivity Interpretation of Adjoint Variables in Optimal Design," Computer Methods in Applied Mechanics and Engineering, Vol. 48, pp. 81-90, 1985.

[4] Sobieszczanski-Sobieski, J., "Sensitivity of Complex, Internally Coupled Systems," AIAA Journal, Vol. 28, No. 1, January 1990.

[5] Hajela, P., C. L. Bloebaum and J. Sobieszczanski-Sobieski, "Application of Global Sensitivity Equations in Multidisciplinary Aircraft Synthesis," Journal of Aircraft, Vol. 27, No. 12, pp. 1002-1010, December 1990.