Module III: Partial differential equations and optimization
Martin Berggren
Department of Information Technology, Uppsala University

Optimization for differential equations

Martin Berggren (UU) Opt. for DE / 41

Contents
- Introduction, generalities
- Sensitivity analysis (in finite dimensions)
- Optimizing forcing terms/boundary conditions for elliptic PDEs (inverse or control problems)
- Optimizing coefficients: material and topology optimization
- Optimizing geometry: shape optimization
Introduction

What is the use of optimization for PDEs?
- Weather forecasting: weather models (PDEs) need initial conditions at each spatial point, but the only available data are a limited set of local measurements at different times. Find through optimization the initial condition that best matches the given observations ("4D-Var").
- Parameter estimation (e.g. material properties), nondestructive evaluation.
- Optimizing geometrical properties: shapes and topologies.

Application examples

Example I: Redesign of the ONERA M6 wing. Reduce drag while keeping the lift and pitch moment constant. (PDE: the Euler equations of gas dynamics.) Computations by Olivier Amoignon (2005).
[Figures: initial wing (pressure); initial (gray) and optimized (yellow) wing design; pressure on optimized wing]
Example II: Topology optimization (Borrvall & Petersson, Linköping, 2001). Objective: cutting out (say) 50% of the material in a way that maximizes the stiffness of the remaining structure.

Problem structure

Module I viewed objective functions or constraints as direct functions of the design variables (decision variables, control variables, parameters): φ ↦ J(φ). Here there is an intermediate state:

    φ  →  u  →  J
    design variable → state → objective, constraint

- The discrete design space may be of small or large dimension.
- The discrete state space is typically large (can have millions of degrees of freedom).
- The number of objectives and constraints is typically small.
- Often there are more intermediate steps.
Linear-algebra-type example

State equation: Au = Bφ
Objective function: j(u, φ) = c^T u + (ɛ/2) ‖φ‖²
φ ∈ U ⊂ R^m, u ∈ R^n, n large; A: n-by-n, B: n-by-m.

A. Non-nested optimization formulation, viewing the state equation as a constraint:

    min over (φ, u) of j(u, φ)  subject to  Au = Bφ,  φ ∈ U.

B. Nested optimization formulation, viewing the state equation only as an intermediate step. Define J(φ) = j(u, φ), where Au = Bφ:

    min over φ of J(φ)  subject to  φ ∈ U.
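The nested formulation can be made concrete with a small numpy sketch of the linear-algebra example above; the matrix sizes, the random data, and the function name `J` are illustrative choices of mine, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 3                 # state dimension n, design dimension m
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # nonsingular n-by-n state matrix
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)
eps = 1e-2                   # regularization weight

def J(phi):
    """Nested objective: solve the state equation A u = B phi, then evaluate j(u, phi)."""
    u = np.linalg.solve(A, B @ phi)
    return c @ u + 0.5 * eps * (phi @ phi)

phi = rng.standard_normal(m)
print(J(phi))
```

Each evaluation of J hides one state solve, which is the defining feature of the nested (NAND) viewpoint discussed next.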
Nested and non-nested formulations

The formulations suggest different algorithms.
- Nested: only φ is the optimization decision variable. The states u are kept feasible at each iteration. ("NAND": Nested Analysis and Design)
- Non-nested: both φ and u are optimization decision variables. The states are feasible only at convergence. ("SAND": Simultaneous Analysis and Design)

SAND vs. NAND

SAND-type algorithms solve the state (and adjoint, see below) equations simultaneously with the optimization problem. Potentially very fast (goal: a cost corresponding to a small fixed multiple of state solves). The algorithms need to be run to full convergence; otherwise the results are meaningless (non-feasible states).

NAND-type algorithms are more standard. Iterations can be stopped when the result is good enough. They can be very costly for expensive state equations.

This module: only NAND-type algorithms.
Directional derivatives

The optimization algorithms from Module I need derivatives of the objective function and the constraints. It is convenient to use directional derivatives (evaluations of the differential in an arbitrary direction) in the derivations.

[Figure: black graph: the function f; red graph: the differential. The increment f(φ + δφ) − f(φ) is approximated by δf(φ; δφ), the evaluation of the differential at φ in direction δφ.]

Let f: U → R be an objective function or constraint, with U a convex subset of R^n or of a function space (e.g. bounded or square-integrable functions).

A design variation: δφ = φ̃ − φ, for φ, φ̃ ∈ U. If φ ∈ U then, by convexity, φ(s) = φ + s δφ stays in U for s ∈ [0, 1]. The directional derivative is

    δf(φ; δφ) = lim_{s→0+} [ f(φ + s δφ) − f(φ) ] / s.

If f is differentiable, δf(φ; δφ) = ⟨δf(φ), δφ⟩, which reads
- f′(φ) δφ for U ⊂ R,
- ∇f(φ)^T δφ for U ⊂ R^n,
- Df(φ) δφ for U ⊂ L²(Ω).
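The defining limit can be checked numerically: for a smooth f on R^n, the one-sided difference quotient should approach ∇f(φ)^T δφ. A minimal sketch, with a test function and hand-computed gradient of my own choosing:

```python
import numpy as np

def f(phi):
    # a simple smooth function on R^3
    return np.sin(phi[0]) * phi[1] + 0.5 * (phi @ phi)

def ddf(f, phi, dphi, s=1e-7):
    # one-sided difference quotient approximating the limit s -> 0+
    return (f(phi + s * dphi) - f(phi)) / s

phi  = np.array([0.3, -1.2, 0.7])
dphi = np.array([1.0, 0.5, -2.0])

# hand-computed gradient of f, so we can check  δf(φ; δφ) = ∇f(φ)^T δφ
grad = np.array([np.cos(phi[0]) * phi[1] + phi[0],
                 np.sin(phi[0]) + phi[1],
                 phi[2]])
print(ddf(f, phi, dphi), grad @ dphi)
```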
The definition is analogous for functions with values in R^n or any other normed space V (u: U → V):

    δu(φ; δφ) = lim_{s→0+} [ u(φ + s δφ) − u(φ) ] / s,

where the limit is in the sense of the norm on V; that is, δu(φ; δφ) is a directional derivative if

    lim_{s→0+} ‖ [ u(φ + s δφ) − u(φ) ] / s − δu(φ; δφ) ‖_V = 0.

Two ways to compute objective-function gradients

State equation (A a square, nonsingular matrix): Au = φ
Objective function: J(φ) = c^T u

Differentiate: A δu = δφ, so

    δJ = ∇J^T δφ = c^T δu = c^T (A^{-1} δφ) = (A^{-T} c)^T δφ.
    δJ = ∇J^T δφ = Σ_i (∂J/∂φ_i) δφ_i
       = c^T (A^{-1} δφ)        [direct sensitivities]
       = (A^{-T} c)^T δφ        [adjoint equation]

Direct sensitivities: compute each component of ∇J by choosing successively δφ = each unit vector. Computational complexity:
- the number of state-equation solves grows linearly with the number of design variables;
- no extra state solves when changing the objective function (i.e. c).

Adjoint equation: compute all components of ∇J at once from A^{-T} c. Computational complexity:
- independent of the number of design variables;
- grows linearly with the number of objective functions.

Algorithms for gradient computations

Many state equations are of the abstract form

    a(u) = b(φ).

(Often the left-hand side also depends on φ: a(u, φ) = b(φ); this case is treated in the exercises.) We consider several objective functions (say lift, drag, ...) or constraints

    J_i(φ) = f_i(u(φ), φ),   i = 1, ..., m,

and want to compute ∇J_i, i = 1, ..., m, at some φ.
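The complexity contrast above can be seen directly in code. A numpy sketch for the linear model problem Au = φ, J = c^T u (matrix size and data are my own): direct sensitivities take n solves with A, the adjoint takes one solve with A^T.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # square, nonsingular
c = rng.standard_normal(n)

# Direct sensitivities: one solve per design variable (δφ = each unit vector)
grad_direct = np.array([c @ np.linalg.solve(A, e) for e in np.eye(n)])

# Adjoint: a single solve with the transposed matrix gives the whole gradient
grad_adjoint = np.linalg.solve(A.T, c)

print(np.max(np.abs(grad_direct - grad_adjoint)))
```

Both routes compute the same vector ∇J = A^{-T} c; they differ only in the number of linear solves.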
Finite-differenced gradients

Let e_k = (0, ..., 0, 1, 0, ...)^T, with the nonzero component at position k. Then

    ∂J_i/∂φ_k = lim_{ɛ→0} [ J_i(φ + ɛ e_k) − J_i(φ) ] / ɛ,   i = 1, ..., m; k = 1, ..., n.

1. Solve a(u) = b(φ).
2. For k = 1, ..., n do  (loop over design variables)
   2.1 Solve a(u_k) = b(φ + ɛ e_k).
   2.2 For i = 1, ..., m do  (loop over objective functions)
       Set ∂J_i/∂φ_k = [ f_i(u_k, φ) − f_i(u, φ) ] / ɛ + ∂f_i(u, φ)/∂φ_k.

Easy to implement with existing black-box software for solving the state equation. Computational effort: essentially n state solves for each calculation of the gradient, that is, for each iteration of a gradient-based optimization algorithm. Note that multiple objective functions do not cause any additional state solves.

How to select ɛ? Too large an ɛ yields an inaccurate gradient; too small an ɛ yields cancellation of significant digits. One can show that the optimal trade-off between accuracy and round-off occurs for ɛ ≈ √ɛ_u, where ɛ_u is the unit round-off ("machine epsilon") of the floating-point system (ɛ_u = 2⁻⁵³ ≈ 1.1 × 10⁻¹⁶ in IEEE double precision).
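The procedure above can be sketched in a few lines of numpy. Here the "black box" is a linear solve standing in for a(u) = b(φ), the objective has no explicit φ-dependence, and ɛ = √ɛ_u as recommended; all names and sizes are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)

def solve_state(phi):      # stands in for a black-box solver of a(u) = b(phi)
    return np.linalg.solve(A, B @ phi)

def f(u, phi):             # objective; no explicit phi-dependence here
    return c @ u

def fd_gradient(phi, eps=np.sqrt(np.finfo(float).eps)):
    u = solve_state(phi)                    # step 1: state solve
    g = np.zeros(m)
    for k in range(m):                      # step 2: loop over design variables
        e = np.zeros(m); e[k] = 1.0
        g[k] = (f(solve_state(phi + eps * e), phi) - f(u, phi)) / eps
    return g                                # (+ explicit phi-term, zero here)

phi = rng.standard_normal(m)
exact = B.T @ np.linalg.solve(A.T, c)       # gradient computed analytically
print(np.max(np.abs(fd_gradient(phi) - exact)))
```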
The complex-variable trick

A finite-difference technique that avoids cancellation effects. Let f = f(x) be a real-valued function of a single real variable; we want to approximate f′(x). Let f = f(z) be the analytic continuation of f in a complex neighborhood of x. (This always exists if f is (real) analytic at x.)

Using that dⁿf/dzⁿ = dⁿf/dxⁿ ∈ R on the real line, a Taylor-series expansion in the imaginary direction at x ∈ R yields

    f(x + iɛ) = f(x) + iɛ f′(x) + ((iɛ)²/2) f″(x) + ((iɛ)³/6) f‴(x) + ...
              = f(x) − (ɛ²/2) f″(x) + O(ɛ⁴) + iɛ ( f′(x) − (ɛ²/6) f‴(x) + O(ɛ⁴) ).

Thus,

    Re f(x + iɛ) = f(x) − (ɛ²/2) f″(x) + O(ɛ⁴),
    Im f(x + iɛ) / ɛ = f′(x) − (ɛ²/6) f‴(x) + O(ɛ⁴).
Hence f′(x) ≈ Im f(x + iɛ) / ɛ to second order in ɛ, and ɛ can be selected without concern for cancellation!

Procedure:
1. Solve a(u) = b(φ).
2. For k = 1, ..., n do  (loop over design variables)
   2.1 Solve a(u_k) = b(φ + iɛ e_k)  (in complex arithmetic).
   2.2 For i = 1, ..., m do  (loop over objective functions)
       Set ∂J_i/∂φ_k = Im f_i(u_k, φ) / ɛ + ∂f_i(u, φ)/∂φ_k.

The derivative f′ can be obtained almost in full precision by choosing ɛ very small. The trick requires only minor changes in the code: basically changing from real to complex arithmetic. Most operations and functions in a computer program possess analytic extensions. Watch out for:
- Absolute value: change |x| to √(x²) (the complex absolute value is not the analytic extension of the real absolute value).
- Conditionals, max, min: operate on the real part (differentiable and analytic if not exactly on the switch).

The computational complexity is the same as for finite differences, plus the increased cost of using complex arithmetic. Convenient in languages with built-in complex arithmetic such as Fortran or Matlab; less convenient and efficient in languages lacking built-in complex-arithmetic support. In Matlab, watch out for the transpose operator on vectors: v' means v^H = conj(v)^T. To obtain v^T for a complex v, use v.'.

Great for checking a code that computes exact gradients (by state sensitivities or the adjoint method).
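A minimal single-variable sketch of the trick in numpy (the test function is a standard analytic expression of my own choosing, not from the slides): the same code path is evaluated at a complex point, and because no subtraction occurs, ɛ can be taken far below the finite-difference optimum.

```python
import numpy as np

def f(x):
    # an analytic expression: real and complex arguments use the same code
    return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

def complex_step(f, x, eps=1e-30):
    # no subtraction, hence no cancellation; eps can be tiny
    return np.imag(f(x + 1j * eps)) / eps

x = 0.5
fd = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6     # central difference for comparison
print(complex_step(f, x), fd)
```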
State sensitivities

This approach uses the chain rule and computes explicitly the sensitivity of the states with respect to design changes. Linearize the state equation a(u) = b(φ) with respect to a design variation:

    A(u) δu = B(φ) δφ,   where A_ij = ∂a_i/∂u_j, B_ij = ∂b_i/∂φ_j  (Jacobian matrices).

Differentiating the objective functions yields

    δJ_i(φ) = ∇J_i(φ)^T δφ = δf_i(u(φ), φ) = ∇_u f_i(u, φ)^T δu + ∇_φ f_i(u, φ)^T δφ,

where

    ∇_u f_i^T = (∂f_i/∂u_1, ∂f_i/∂u_2, ..., ∂f_i/∂u_N),
    ∇_φ f_i^T = (∂f_i/∂φ_1, ∂f_i/∂φ_2, ..., ∂f_i/∂φ_n).
Procedure:
1. Solve a(u) = b(φ).
2. For k = 1, ..., n do  (loop over design variables)
   2.1 Solve A u_k = ∂b/∂φ_k  (linearized state equation).
   2.2 For i = 1, ..., m do  (loop over objective functions)
       Set ∂J_i/∂φ_k = ∇_u f_i(u, φ)^T u_k + ∂f_i/∂φ_k.

There is no parameter to choose: the method yields the exact gradient. It is not easily implemented with black-box software, since the linearized state equations must be coded. If the code uses Newton's method to solve the nonlinear state equation, the Jacobian A(u) is already there (but probably not B(φ)). The computational complexity is the same as for finite differences and the complex-variable trick.
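The procedure can be sketched for a small nonlinear state equation. The operator a(u) = u + 0.1 u³ (componentwise), the problem sizes, and all variable names are assumptions of mine; note how the Newton Jacobian A(u) is reused for the sensitivity solves, exactly as remarked above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 3
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)

def a(u):                         # nonlinear state operator; state eq. a(u) = B phi
    return u + 0.1 * u**3

def A_jac(u):                     # Jacobian A(u) = da/du (diagonal for this a)
    return np.diag(1.0 + 0.3 * u**2)

def solve_state(phi):
    u = np.zeros(n)
    for _ in range(50):           # Newton's method
        r = a(u) - B @ phi
        if np.linalg.norm(r) < 1e-12:
            break
        u -= np.linalg.solve(A_jac(u), r)
    return u

phi = rng.standard_normal(m)
u = solve_state(phi)

# Step 2.1: one linearized solve A(u) u_k = B e_k per design variable (f = c^T u)
grad = np.array([c @ np.linalg.solve(A_jac(u), B[:, k]) for k in range(m)])

# finite-difference check of the exact sensitivity gradient
h = 1e-6
fd = np.array([(c @ solve_state(phi + h * np.eye(m)[k]) - c @ u) / h
               for k in range(m)])
print(np.max(np.abs(grad - fd)))
```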
The adjoint-equation approach

Recall the differentiated objective functions (i = 1, ..., m)

    δJ_i(φ) = ∇J_i(φ)^T δφ = ∇_u f_i(u, φ)^T δu + ∇_φ f_i(u, φ)^T δφ    (1)

and the differentiated state equation (a(u) = b(φ)):

    A(u) δu = B(φ) δφ.    (2)

Multiply equation (2) with an arbitrary vector p:

    0 = p^T A δu − p^T B δφ = δu^T A^T p − δφ^T B^T p.    (3)

Letting p_i, i = 1, ..., m, satisfy the adjoint equations

    A^T p_i = ∇_u f_i(u, φ),

equation (3) yields that

    δu^T ∇_u f_i(u, φ) = δφ^T B(φ)^T p_i.    (4)

Substituting expression (4) into (1), we find

    δJ_i(φ) = ∇J_i(φ)^T δφ = ( B(φ)^T p_i )^T δφ + ∇_φ f_i(u, φ)^T δφ,

and we may identify the gradient:

    ∇J_i(φ) = B(φ)^T p_i + ∇_φ f_i(u, φ).
Adjoint variables = Lagrange multipliers

Consider U = R^m and the optimization problem (non-nested form)

    min over (φ, u) of f(u, φ)  subject to  a(u) = b(φ).    (P)

Define the Lagrangian

    L(φ, u; p) = f(u, φ) − p^T ( a(u) − b(φ) ).

From Module I: the first-order necessary condition for optimality is ∇L(φ, u; p) = 0, or

    δφ^T ∇_φ L(φ, u; p) + δu^T ∇_u L(φ, u; p) + δp^T ∇_p L(φ, u; p) = 0

for each δφ ∈ R^m and δu, δp ∈ R^n. Differentiating with respect to each variable:

    δp^T ∇_p L(φ, u; p) = −δp^T ( a(u) − b(φ) ) = 0              [state equation]
    δu^T ∇_u L(φ, u; p) = δu^T ( ∇_u f(u, φ) − A(u)^T p ) = 0    [adjoint equation]
    δφ^T ∇_φ L(φ, u; p) = δφ^T ( ∇_φ f(u, φ) + B(φ)^T p ) = 0    [gradient expression]

This set of equations is often called the optimality system in the present context.
Gradient computations with adjoints. Procedure:
1. Solve a(u) = b(φ)  (state equation).
   For i = 1, ..., m do  (loop over objective functions)
   1.1 Solve A^T p_i = ∇_u f_i  (adjoint equation).
   1.2 Set ∇J_i(φ) = B(φ)^T p_i + ∇_φ f_i(u, φ).

Computational work: one state solve and m adjoint solves for each gradient evaluation. The computational work is independent of n, the size of the design space! Best computational efficiency when there are few objective functions (or constraints) but many design variables. The method yields the exact gradient, with no parameter to choose. It is not easily implemented with black-box software: implementing the adjoint equations is a major coding effort, generally much more difficult than implementing the linearized state equations.
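The adjoint procedure can be sketched for a linear state equation with two linear objectives J_i = C_i u (so ∇_u f_i = C_i^T and ∇_φ f_i = 0); sizes, names, and data are illustrative assumptions of mine. Two adjoint solves produce gradients with respect to all eight design variables.

```python
import numpy as np

rng = np.random.default_rng(4)
n, nphi, nobj = 30, 8, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # (linearized) state matrix
B = rng.standard_normal((n, nphi))
C = rng.standard_normal((nobj, n))                 # objectives J_i = C[i] @ u

phi = rng.standard_normal(nphi)
u = np.linalg.solve(A, B @ phi)        # 1. state equation

grads = []
for i in range(nobj):                  # loop over objective functions
    p = np.linalg.solve(A.T, C[i])     # 1.1 adjoint equation  A^T p_i = grad_u f_i
    grads.append(B.T @ p)              # 1.2 gradient; grad_phi f_i = 0 for this f_i
grads = np.array(grads)

# sanity check against the full sensitivity matrix  dJ/dphi = C A^{-1} B
print(np.max(np.abs(grads - C @ np.linalg.solve(A, B))))
```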
Automatic (or Algorithmic) Differentiation (AD)

Both the state-sensitivity and the adjoint method require coding effort. AD requires access to the source code but, in principle, no coding. The observation behind AD: each line in a computer program is easy to differentiate, and the differentiation rules (e.g. product rule, chain rule) are completely mechanical. So let the computer analyze each line of the program and calculate the derivative simultaneously with the function.

Assume we have a computer program that computes a function f: R^n → R. AD software turns this program into another program returning f(x) and ∇f(x). There are two ways to implement AD:
- source transformation (compiler technology),
- operator overloading.
AD with operator overloading

Convenient in languages such as C++ and Java. Redefine real variables, and redefine the arithmetic operations, to include derivative information. Let u, v, w be real variables (their values depend on the input vector x) and α, β constants (they do not depend on x). For u, v, w, replace the real data structure with an abstract data type that also contains directional derivatives du, dv, dw:

    u → (u, du),   v → (v, dv),   w → (w, dw).

For simplicity, assume scalar du, dv, dw (the input x is a scalar); the extension to vector du, dv, dw is straightforward.

Examples:
- Operation v = αu (in code: v = α*u), redefined as (v, dv) = (αu, α du).
- Operation w = uv (in code: w = u*v), redefined as (w, dw) = (uv, u dv + v du).
- Operation w = u/v (in code: w = u/v), redefined as (w, dw) = (u/v, (1/v) du − (u/v²) dv).
Change the data type of each variable that may depend on x in the program calculating f. Providing the input (x, dx) = (x₀, 1) yields the output (f, df) = (f(x₀), f′(x₀)). This yields exact derivatives (up to machine precision). The computational effort grows linearly with the dimension of x, similarly to finite differences.

The technique is similar to the complex-variable trick, where the real part contains the function value and the imaginary part the derivative. However, AD with operator overloading yields the whole gradient vector, not just one component (when using vector du), and needs fewer floating-point operations than the complex-variable trick. Example, multiplication w = uv:

    AD:                (w, dw) = (uv, u dv + v du)
    Complex variables: w = uv = (u_r + iu_i)(v_r + iv_i) = u_r v_r − u_i v_i + i(u_r v_i + v_r u_i),    (5)

where the cross term u_i v_i is unnecessary work.

AD with operator overloading is easy to use in C++ using FAD (see page 41).
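The operator-overloading idea can be sketched in a few lines of Python (the slides use C++/FAD; this dual-number class and the test function are my own minimal illustration). Each overloaded operation carries (value, derivative) pairs exactly as in the rules above; seeding dx = 1 returns f′(x₀) alongside f(x₀).

```python
import math

class Dual:
    """Forward-mode AD by operator overloading: carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + o.val * self.dot)   # product rule
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    self.dot / o.val - self.val * o.dot / o.val**2)  # quotient rule

def sin(u):
    return Dual(math.sin(u.val), math.cos(u.val) * u.dot)  # chain rule

# f(x) = x * sin(x) / (1 + x); seed dx = 1 to obtain f'(x0)
x = Dual(0.8, 1.0)
y = x * sin(x) / (1 + x)
print(y.val, y.dot)
```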
AD using source translation

The AD tool is a compiler-like program. It associates with each scalar floating-point program variable v (also temporary ones) an n-vector dv. Each statement that assigns a value to a floating-point variable is preceded by a statement that assigns, according to the chain rule, values to the associated derivatives. Example: the statement

    y = z*sin(φ)

will be replaced by the statements

    dy = z*cos(φ)*dφ + sin(φ)*dz
    y = z*sin(φ)

The compiler technology allows optimization of the produced code (as opposed to operator overloading). The version of AD with source translation described above is known as forward mode; its computational complexity grows linearly with the dimension of x. AD in forward mode is essentially state sensitivities applied line by line in the code. There is also an adjoint version of AD, the backward mode, whose computational complexity is independent of the dimension of x; however, backward mode may need excessive amounts of storage.
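The transformed statements above can be executed directly. A sketch in Python (the seed values dφ = 1, dz = 0, i.e. differentiation with respect to φ, and all names are my own):

```python
import math

# Original statement:          y = z*sin(phi)
# Source-transformed version:  derivative statement precedes the assignment.
phi, z = 0.7, 1.3
dphi, dz = 1.0, 0.0      # seed: differentiate with respect to phi

dy = z * math.cos(phi) * dphi + math.sin(phi) * dz
y = z * math.sin(phi)

print(y, dy)   # with this seed, dy equals z*cos(phi)
```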
Examples of free AD software:
- TAPENADE (formerly Odyssée): source transformation; Fortran 77 and 95; forward and reverse (INRIA Sophia Antipolis, France).
- ADOL-C: operator overloading; C/C++ (callable from Fortran); forward and reverse (Dresden University of Technology, Germany).
- FAD: simply a header file to be added to an existing C/C++ program to provide AD in forward mode by operator overloading (P. Aubert & N. Dicesare).
- ADIFOR: source transformation; Fortran 77; forward mode (Argonne National Laboratory, Rice University, USA).
- ADIC: source transformation; C/C++; forward mode (Argonne National Laboratory).

Martin Berggren (UU) Opt. for DE / 41
More informationLinear equations in linear algebra
Linear equations in linear algebra Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra Pearson Collections Samy T. Linear
More informationLecture Notes: Geometric Considerations in Unconstrained Optimization
Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections
More informationThe Steepest Descent Algorithm for Unconstrained Optimization
The Steepest Descent Algorithm for Unconstrained Optimization Robert M. Freund February, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 1 Steepest Descent Algorithm The problem
More informationDerivatives for Time-Spectral Computational Fluid Dynamics using an Automatic Differentiation Adjoint
Derivatives for Time-Spectral Computational Fluid Dynamics using an Automatic Differentiation Adjoint Charles A. Mader University of Toronto Institute for Aerospace Studies Toronto, Ontario, Canada Joaquim
More informationIn view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written
11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function
More informationEXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS
EXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS JIANWEI DIAN AND R BAKER KEARFOTT Abstract Traditional computational fixed point theorems, such as the Kantorovich theorem (made rigorous
More informationAM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization
AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing
More informationJim Lambers MAT 610 Summer Session Lecture 2 Notes
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the
More information42. Change of Variables: The Jacobian
. Change of Variables: The Jacobian It is common to change the variable(s) of integration, the main goal being to rewrite a complicated integrand into a simpler equivalent form. However, in doing so, the
More informationAM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α
More informationThe answer in each case is the error in evaluating the taylor series for ln(1 x) for x = which is 6.9.
Brad Nelson Math 26 Homework #2 /23/2. a MATLAB outputs: >> a=(+3.4e-6)-.e-6;a- ans = 4.449e-6 >> a=+(3.4e-6-.e-6);a- ans = 2.224e-6 And the exact answer for both operations is 2.3e-6. The reason why way
More informationLecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September
More informationWeek 4: Differentiation for Functions of Several Variables
Week 4: Differentiation for Functions of Several Variables Introduction A functions of several variables f : U R n R is a rule that assigns a real number to each point in U, a subset of R n, For the next
More informationHigher Order Taylor Methods
Higher Order Taylor Methods Marcelo Julio Alvisio & Lisa Marie Danz May 6, 2007 Introduction Differential equations are one of the building blocks in science or engineering. Scientists aim to obtain numerical
More informationAdjoint code development and optimization using automatic differentiation (AD)
Adjoint code development and optimization using automatic differentiation (AD) Praveen. C Computational and Theoretical Fluid Dynamics Division National Aerospace Laboratories Bangalore - 560 037 CTFD
More informationCompute the behavior of reality even if it is impossible to observe the processes (for example a black hole in astrophysics).
1 Introduction Read sections 1.1, 1.2.1 1.2.4, 1.2.6, 1.3.8, 1.3.9, 1.4. Review questions 1.1 1.6, 1.12 1.21, 1.37. The subject of Scientific Computing is to simulate the reality. Simulation is the representation
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationMultiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions
Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................
More informationMachine Learning Support Vector Machines. Prof. Matteo Matteucci
Machine Learning Support Vector Machines Prof. Matteo Matteucci Discriminative vs. Generative Approaches 2 o Generative approach: we derived the classifier from some generative hypothesis about the way
More informationNumerical optimization
Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal
More informationAutomatic Differentiation for Optimum Design, Applied to Sonic Boom Reduction
Automatic Differentiation for Optimum Design, Applied to Sonic Boom Reduction Laurent Hascoët, Mariano Vázquez, Alain Dervieux Laurent.Hascoet@sophia.inria.fr Tropics Project, INRIA Sophia-Antipolis AD
More informationAM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization
AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing
More informationTangent linear and adjoint models for variational data assimilation
Data Assimilation Training Course, Reading, -4 arch 4 Tangent linear and adjoint models for variational data assimilation Angela Benedetti with contributions from: arta Janisková, Philippe Lopez, Lars
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationTHEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)
4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M
More informationMATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003
MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space
More informationLecture 23: 6.1 Inner Products
Lecture 23: 6.1 Inner Products Wei-Ta Chu 2008/12/17 Definition An inner product on a real vector space V is a function that associates a real number u, vwith each pair of vectors u and v in V in such
More informationMath Advanced Calculus II
Math 452 - Advanced Calculus II Manifolds and Lagrange Multipliers In this section, we will investigate the structure of critical points of differentiable functions. In practice, one often is trying to
More informationMon Jan Improved acceleration models: linear and quadratic drag forces. Announcements: Warm-up Exercise:
Math 2250-004 Week 4 notes We will not necessarily finish the material from a given day's notes on that day. We may also add or subtract some material as the week progresses, but these notes represent
More information17 Solution of Nonlinear Systems
17 Solution of Nonlinear Systems We now discuss the solution of systems of nonlinear equations. An important ingredient will be the multivariate Taylor theorem. Theorem 17.1 Let D = {x 1, x 2,..., x m
More informationWe wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x 1, x 2,...,x N, that are expressed in the general form
Linear algebra This chapter discusses the solution of sets of linear algebraic equations and defines basic vector/matrix operations The focus is upon elimination methods such as Gaussian elimination, and
More informationNumerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems
1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of
More informationECE133A Applied Numerical Computing Additional Lecture Notes
Winter Quarter 2018 ECE133A Applied Numerical Computing Additional Lecture Notes L. Vandenberghe ii Contents 1 LU factorization 1 1.1 Definition................................. 1 1.2 Nonsingular sets
More informationChapter 1 Computer Arithmetic
Numerical Analysis (Math 9372) 2017-2016 Chapter 1 Computer Arithmetic 1.1 Introduction Numerical analysis is a way to solve mathematical problems by special procedures which use arithmetic operations
More informationA COUPLED-ADJOINT METHOD FOR HIGH-FIDELITY AERO-STRUCTURAL OPTIMIZATION
A COUPLED-ADJOINT METHOD FOR HIGH-FIDELITY AERO-STRUCTURAL OPTIMIZATION Joaquim Rafael Rost Ávila Martins Department of Aeronautics and Astronautics Stanford University Ph.D. Oral Examination, Stanford
More informationLecture II: Vector and Multivariate Calculus
Lecture II: Vector and Multivariate Calculus Dot Product a, b R ' ', a ( b = +,- a + ( b + R. a ( b = a b cos θ. θ convex angle between the vectors. Squared norm of vector: a 3 = a ( a. Alternative notation:
More informationSection Summary. Sequences. Recurrence Relations. Summations. Examples: Geometric Progression, Arithmetic Progression. Example: Fibonacci Sequence
Section 2.4 1 Section Summary Sequences. Examples: Geometric Progression, Arithmetic Progression Recurrence Relations Example: Fibonacci Sequence Summations 2 Introduction Sequences are ordered lists of
More informationOptimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30
Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained
More informationLecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,
More information