Introduction to Compact Dynamical Modeling. II.1 Steady State Simulation. Luca Daniel, Massachusetts Institute of Technology.
Course Outline (Quick Sneak Preview):
I. Assembling Models from Physical Problems
II. Simulating Models
  II.1 Steady State Simulation
    II.1.a Linear Systems
    II.1.b Non-Linear Systems
  II.2 Time Domain Integration of Dynamical Systems
III. Model Order Reduction for Linear Systems
IV. Model Order Reduction for Non-Linear Systems
V. Parameterized Model Order Reduction

Steady State Analysis of Dynamical Linear Models. Given the state-space model

  dx/dt = A x(t) + B u(t),   y(t) = C x(t),

at steady state the input is constant, u(t) = u, and dx/dt = 0, so the analysis boils down to solving a linear system of equations:

  A x = -B u = b,   y = C x.

Outline:
- Motivations
- Assembling Network Equations... Automatically
- Steady State Analysis of Networks
  - Linear networks: existence and uniqueness of steady state; LU decomposition; analysis of networks with local connectivity; analysis of networks with global connectivity
  - Nonlinear networks
- Dynamical Analysis of Networks
- Model Order Reduction
Network Steady State Equations. Solving A x = b means finding a set of weights x1, ..., xN so that the weighted sum of the columns of the matrix A is equal to the right-hand side b:

  x1 a1 + x2 a2 + ... + xN aN = b.

Given A x = b, where A is square: if the columns of A are linearly independent (nonsingular matrix A), then a solution always exists (and of course is unique).

Existence and Uniqueness of Steady State:
- linearly independent columns: a solution exists and is unique, for any b;
- linearly dependent columns, with b in range{A}: infinitely many solutions exist;
- linearly dependent columns, with b not in range{A}: no solutions.

Gaussian Elimination Basics, reminder by example: the key idea. Start from

  a11 x1 + a12 x2 = b1
  a21 x1 + a22 x2 = b2.

Use Eqn 1 to eliminate x1 from Eqn 2: x1 = (b1 - a12 x2) / a11, so that

  (a22 - (a21/a11) a12) x2 = b2 - (a21/a11) b1.
Gaussian Elimination Basics, reminder by example: simplify the notation (matrix view). The ratios l_j1 = a_j1 / a_11 used to eliminate the first column are the MULTIPLIERS; a_11 is the pivot. Each row j below the pivot row is updated as row_j := row_j - l_j1 * row_1, and likewise for the right-hand side.

Gaussian Elimination Basics, reminder by example: apply recursively. Once the first column is eliminated, the same procedure is applied to the trailing (N-1) x (N-1) block, with the updated a_22 as pivot and second-loop multipliers l_j2, and so on until the matrix is upper triangular.

Gaussian Elimination Basics: the right-hand side update. The same multipliers are applied to b (forward elimination):

  y_1 = b_1
  y_2 = b_2 - l_21 y_1
  y_3 = b_3 - l_31 y_1 - l_32 y_2

Note: l_21 and l_31 are first-loop multipliers; l_32 is a second-loop multiplier.
Gaussian Elimination, putting it all back together: LU decomposition basics. Collecting the multipliers into a unit lower triangular matrix L and the eliminated system into an upper triangular matrix U gives A = LU.

LU decomposition basics: fitting the pieces together. With A = LU, the system A x = b becomes L(U x) = b, solved in two triangular sweeps: first L y = b, then U x = y.

LU decomposition basics: an in-place implementation. The multipliers l_ji are stored in the lower triangle of A (below the diagonal) as they are computed, and the upper triangle is overwritten with U, so no extra storage is needed.
LU decomposition basics: heat-conducting bar example. An incoming heat source I_s drives a chain of thermal resistors R with node temperatures T_1, ..., T_N; the resulting nodal matrix is tridiagonal. Group activity: carry out the in-place LU decomposition of this nodal matrix.

LU decomposition basics: the three steps to solving a linear system A x = b.
- Step 1: LU decomposition, A = LU, O(N^3);
- Step 2: forward elimination, solve L y = b, O(N^2);
- Step 3: backward substitution, solve U x = y, O(N^2).

Group activity: discuss in your team in what situations you think it is practically convenient to separate and organize the computation in 3 steps, as opposed to the standard Gaussian Elimination procedure. Discuss if that situation may apply, and what it means, for each of the case studies of the members of your team.
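The three-step procedure above can be sketched as follows; this is a minimal illustration (no pivoting, dense storage), with function names of my choosing, not code from the slides:

```python
import numpy as np

def lu_decompose(A):
    """Step 1: in-place LU decomposition (no pivoting). The multipliers
    l_ji are stored below the diagonal, U on and above it."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for i in range(n - 1):                    # for each pivot row
        for j in range(i + 1, n):             # for each target row below
            A[j, i] /= A[i, i]                # store multiplier l_ji
            A[j, i+1:] -= A[j, i] * A[i, i+1:]
    return A

def lu_solve(LU, b):
    """Steps 2 and 3: forward elimination L y = b, then back
    substitution U x = y. Both are O(N^2)."""
    n = LU.shape[0]
    y = b.astype(float).copy()
    for i in range(1, n):                     # forward elimination
        y[i] -= LU[i, :i] @ y[:i]
    x = y.copy()
    for i in range(n - 1, -1, -1):            # backward substitution
        x[i] = (y[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
    return x
```

The point of the separation shows up here: `lu_decompose` is called once, and `lu_solve` can then be reused cheaply for many right-hand sides.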
Sparse Matrices, tridiagonal example: matrix form. A tridiagonal matrix has nonzeros only on the main diagonal and on the two adjacent diagonals.

Sparse Matrices, tridiagonal example: LU decomposition. The general elimination loop is

  For i = 1 to N-1            (for each row)
    For j = i+1 to N          (for each target row below the source)
      l_ji = a_ji / a_ii
      For k = i+1 to N        (for each row element beyond the pivot)
        a_jk = a_jk - l_ji * a_ik

but for a tridiagonal matrix only one row below each pivot and one element beyond each pivot are nonzero, so the decomposition takes O(N) operations!

Sparse Matrices, circuit grid example: an m x m grid of resistors has (m+1)^2 unknowns (the node voltages) and one current-balance equation per node.
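The O(N) tridiagonal elimination described above can be sketched as the classic three-array solver below; the array names and the no-pivoting assumption (safe for diagonally dominant nodal matrices) are my choices for illustration:

```python
def solve_tridiagonal(a, d, c, b):
    """Solve a tridiagonal system in O(N).
    a: sub-diagonal (length N-1), d: diagonal (length N),
    c: super-diagonal (length N-1), b: right-hand side (length N).
    No pivoting, so the updated diagonal must stay nonzero."""
    n = len(d)
    d, b = list(d), list(b)
    for i in range(n - 1):             # one multiplier per row: O(N) total
        m = a[i] / d[i]
        d[i + 1] -= m * c[i]
        b[i + 1] -= m * b[i]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):     # back substitution over the band
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```

Contrast with the dense triple loop: both inner loops collapse to a single update because only one entry below the pivot and one beyond it are nonzero.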
Sparse Matrices, circuit grid example: the matrix non-zero locations for the resistor grid form a banded pattern. Sparse Matrices, temperature-in-a-cube example: the temperature is known on the surface; determine the interior temperature using the equivalent circuit model.

Steady State Analysis of Networks with local (sparse) connectivity, when neglecting sparsity. Dense LU costs O(N^3):
- 1-D (m-pt grid): N = m, so O(m^3) ops;
- 2-D (m x m grid): N = m^2, so O(m^6) ops;
- 3-D (m x m x m grid): N = m^3, so O(m^9) ops.
E.g. on a Gflops computer, with m = 100 and a constant c: roughly milliseconds in 1-D, minutes in 2-D, and years in 3-D.

Banded LU computational analysis. If the nonzeros lie within a band of width b around the diagonal, the elimination loops only need to run over the band:

  For i = 1 to N-1
    For j = i+1 to i+b-1
      l_ji = a_ji / a_ii
      For k = i+1 to i+b-1
        a_jk = a_jk - l_ji * a_ik

The cost is the sum over i of (min(b, N - i))^2, i.e. O(b^2 N) multiply-adds.
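The banded loop above can be sketched directly; this is an illustrative version that keeps dense storage for clarity (a real banded solver would also compress the storage), with a half-bandwidth convention of my choosing:

```python
import numpy as np

def banded_lu_inplace(A, bw):
    """In-place LU of a matrix whose nonzeros satisfy |i - j| < bw,
    restricting both inner elimination loops to the band. Work is
    O(bw^2 * N) instead of O(N^3). No pivoting, so fill-in stays
    inside the band."""
    n = A.shape[0]
    for i in range(n - 1):
        hi = min(i + bw, n)
        for j in range(i + 1, hi):                  # rows inside the band
            A[j, i] /= A[i, i]                      # multiplier l_ji
            A[j, i+1:hi] -= A[j, i] * A[i, i+1:hi]  # columns inside the band
    return A
```

For the tridiagonal case (bw = 2) this reduces to the O(N) elimination of the previous slide.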
Steady State Analysis of Networks with local (sparse) connectivity, exploiting banded information:

  dimension   matrix size N   band b
  1-D         N = m           b = 2
  2-D         N = m^2         b = m
  3-D         N = m^3         b = m^2

With cost O(b^2 N):
- 1-D (m-pt grid): O(m) ops;
- 2-D (m x m grid): O(m^4) ops;
- 3-D (m x m x m grid): O(m^7) ops.
E.g. on a Gflops computer, with m = 100 and a constant c: roughly microseconds in 1-D, a fraction of a second in 2-D, and about 7 hours in 3-D.

Sparse Matrices, struts-and-joints frame example. For a space frame the unknowns are the joint positions and the equations are the force balances; the nodal matrix has a block sparsity pattern.

Sparse Matrices: fill-in. Eliminating a row can create new nonzeros (fill-ins) at previously zero locations, and fill-ins propagate: fill-ins from step 1 result in fill-ins in step 2. Where can fill-in occur? Only at the intersections of the multiplier positions with the nonzeros of the pivot row, in the not-yet-factored part of the matrix. Fill-in estimate for factoring row i:

  (nonzeros in the unfactored part of row i - 1) x (nonzeros in the unfactored part of column i - 1),

the so-called Markowitz product.
Sparse Matrices: fill-in reordering algorithm, pivoting for sparsity (Markowitz reordering):

  For i = 1 to N
    find the diagonal j >= i with the minimum Markowitz product
    swap rows j and i, and columns j and i
    factor the new row i and determine the fill-ins
  End

It is a greedy algorithm! The pattern of a filled-in matrix typically stays very sparse in the early rows and becomes dense in the final block.

Computational complexity of steady state analysis of networks with local (sparse) connectivity:

  Dimension   Banded LU   Sparse LU   GCR (no preconditioner)
  1-D         O(m)        O(m)        O(m^2)
  2-D         O(m^4)      O(m^3)      O(m^3)
  3-D         O(m^7)      O(m^6)      O(m^4)

E.g. on a Gflops computer, with m = 100 and a constant c:

  Dimension   Banded LU   Sparse LU   GCR (no preconditioner)
  1-D         ~0.1 us     ~0.1 us     ~0.1 ms
  2-D         ~0.1 sec    ~ms         ~ms
  3-D         ~7 hours    ~7 min      ~sec
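The pivot-selection step of the Markowitz reordering can be sketched on a boolean non-zero pattern; the function name and the plain list-of-lists pattern representation are illustrative choices, not from the slides:

```python
def markowitz_pivot(pattern, k):
    """Among the not-yet-factored diagonals i >= k of a boolean
    non-zero pattern, return the index minimizing the Markowitz product
      (nnz in unfactored part of row i - 1) *
      (nnz in unfactored part of col i - 1),
    an upper bound on the fill-in created by factoring that row."""
    n = len(pattern)

    def product(i):
        row_nnz = sum(pattern[i][j] for j in range(k, n))
        col_nnz = sum(pattern[j][i] for j in range(k, n))
        return (row_nnz - 1) * (col_nnz - 1)

    return min(range(k, n), key=product)
```

On an "arrow" matrix (dense first row and column, diagonal elsewhere) this picks a sparse diagonal rather than the dense corner, which is exactly why the greedy reordering avoids catastrophic fill-in there.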
Sparse Matrices: data structures. A common scheme stores a vector of N row pointers; each row points to an array of (Val, Col) pairs holding the matrix entries of that row together with their column indices. Only the nonzeros are stored and touched.

Steady State Analysis for Linear "Dense" Problems: iterative methods, the general idea. Problem: solve A x = b.

  guess x^0
  REPEAT
    how good was my guess? calculate the residual r^k = b - A x^k
    pick a search direction based on the previous guess history
    find the next guess x^{k+1} along the search direction which minimizes the next residual r^{k+1}
  UNTIL residual < desired accuracy

Advantages over the direct method (i.e. LU decomposition): great control on accuracy (can stop and save computation when the desired accuracy is achieved); only a matrix-vector product is needed per iteration, O(N^2) for a dense matrix.

Steady State Analysis for Linear "Dense" Problems: selection of the search direction vectors. Assume A = A^T (symmetric) and A > 0 (positive definite). Observation: the solution of A x = b then corresponds to the location of the minimum of

  f(x) = (1/2) x^T A x - x^T b,

whose gradient is grad f(x) = A x - b, i.e. minus the residual at x. Idea: search along the gradient, i.e. the steepest descent direction for f(x), i.e. along the current residual direction. Note: a different choice is needed for the non-symmetric or non-positive-definite case (e.g. look up the GMRES algorithm).
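The row-pointer / (Val, Col) storage described above can be sketched as a small class; the class name and constructor-from-dense API are illustrative choices, not from the slides:

```python
class CSRMatrix:
    """Compressed sparse row storage: one (values, column-indices) pair
    per row, mirroring the 'vector of row pointers' picture."""

    def __init__(self, dense):
        self.n = len(dense)
        self.rows = []
        for row in dense:
            vals = [v for v in row if v != 0.0]
            cols = [j for j, v in enumerate(row) if v != 0.0]
            self.rows.append((vals, cols))

    def matvec(self, x):
        """Matrix-vector product in O(nnz): only stored entries are touched,
        which is what makes one iteration of an iterative solver cheap."""
        return [sum(v * x[j] for v, j in zip(vals, cols))
                for vals, cols in self.rows]
```

For a sparse matrix this makes the per-iteration cost of an iterative method O(nnz) rather than the dense O(N^2).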
Steady State Analysis for Linear "Dense" Problems: the Generalized Conjugate Residual (GCR) algorithm. Start from x^0 = 0, r^0 = b.

  REPEAT (for k = 0, 1, 2, ...)
    p_k = r_k
    for j = 0 to k-1:   (orthogonalize the image A p_k against the previous A p_j)
      beta = <A p_k, A p_j>;  p_k = p_k - beta p_j;  A p_k = A p_k - beta A p_j
    normalize: p_k = p_k / ||A p_k||;  A p_k = A p_k / ||A p_k||
    alpha = <r_k, A p_k>
    x_{k+1} = x_k + alpha p_k
    r_{k+1} = r_k - alpha A p_k
  UNTIL the residual is small enough

GCR builds the solution as a projection onto a Krylov subspace:

  x_k in span{ r_0, A r_0, A^2 r_0, ..., A^{k-1} r_0 }.

Steady State Analysis for Linear Problems, comparison direct vs. iterative. Theorem: the number of iterations K is never larger than N (in exact arithmetic). Hence GCR is slightly worse than LU for SPARSE problems and no worse than LU for DENSE problems:

  Method                         Sparse              Dense
  GCR                            O(KN) <= O(N^2)     O(KN^2) <= O(N^3)
  Direct (LU decomposition)      O(N^1.1 - N^1.8)    O(N^3)
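The GCR pseudocode above translates almost line for line into code; this is a minimal sketch assuming x^0 = 0 and a matrix whose symmetric part is positive definite (so the iteration does not break down):

```python
import numpy as np

def gcr(A, b, tol=1e-10, max_iter=None):
    """Generalized Conjugate Residual: orthonormalize the images A p_j,
    then step along p_k by the alpha that minimizes ||r_{k+1}||.
    Only matrix-vector products with A are required."""
    n = len(b)
    x = np.zeros(n)
    r = b.astype(float).copy()
    P, AP = [], []                      # search directions and their images
    for _ in range(max_iter or n):
        p, Ap = r.copy(), A @ r
        for pj, Apj in zip(P, AP):      # orthogonalize A p against earlier A p_j
            beta = Ap @ Apj
            p -= beta * pj
            Ap -= beta * Apj
        nrm = np.linalg.norm(Ap)
        p /= nrm
        Ap /= nrm
        P.append(p)
        AP.append(Ap)
        alpha = r @ Ap                  # minimizes the next residual norm
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
    return x
```

Note how the stored lists P and AP embody the memory trade-off discussed later: one extra pair of vectors per iteration.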
Steady State Analysis for Linear Problems, comparison direct vs. iterative. With a good preconditioner, the number of iterations K can typically be made about constant and small for 3-4 digits of accuracy: GCR becomes slightly better than LU for SPARSE problems and SIGNIFICANTLY better than LU for DENSE problems:

  Method                         Sparse              Dense
  GCR (K small constant)         O(N)                O(N^2)
  Direct (LU decomposition)      O(N^1.1 - N^1.8)    O(N^3)

Course Outline (recap): this concludes II.1.a Linear Systems; next is II.1.b Non-Linear Systems.

Nonlinear Steady State Analysis, struts example. A load force f_l is applied to a free joint connected by struts to fixed supports. Given the support positions and the load, find the free-joint position (x, y). Each strut force depends nonlinearly on the joint positions through the strut strain, so the force balance at the free joint in the x and y directions gives a nonlinear system of two equations in the two unknowns:

  f_x(x, y) = (sum of strut force x-components) + f_{l,x} = 0
  f_y(x, y) = (sum of strut force y-components) + f_{l,y} = 0.
Nonlinear Steady State Analysis, circuit example. Valves in the venous system can be modeled as diodes. For a source V_s driving a resistor r in series with a diode (diode voltage v_d, current I_d), the conservation law is I_r = I_d, and the branch equations are

  I_r = (V_s - v_d) / r,   I_d = I_s (e^{v_d / V_t} - 1).

Substituting the branch equations into the conservation law gives a non-linear system of equations:

  f(v_d) = I_s (e^{v_d / V_t} - 1) - (V_s - v_d) / r = 0.

Steady State Analysis (nonlinear). For the nonlinear dynamical model

  dx/dt = F(x(t), u(t)),   y(t) = G(x(t), u(t)),

at steady state u(t) = u is constant and dx/dt = 0, so the analysis boils down to solving a non-linear system of equations:

  F(x, u) = 0,   y = G(x, u).

Nonlinear Steady State Analysis: Newton's method, 1D, graphically. At each step the function is replaced by its tangent line at the current iterate, and the next iterate is where the tangent crosses zero.
Multi-Dimensional Newton's Method, general setting. Problem: find x* such that f(x*) = 0, where f: R^N -> R^N. Approximate f with its first-order Taylor series about x^k:

  f(x) ≈ f(x^k) + J_f(x^k) (x - x^k),

and find the solution of the approximation:

  x^{k+1} = x^k - [J_f(x^k)]^{-1} f(x^k).

Multi-D Newton's Method: the Jacobian matrix. From the first-order Taylor series expansion f(x + Δx) = f(x) + J_f(x) Δx + ..., the Jacobian is the N x N matrix of partial derivatives

  [J_f(x)]_{ij} = ∂f_i / ∂x_j.

Multi-D Newton's Method, nodal analysis, struts-and-joints example: the Jacobian entries come from differentiating the joint force balances with respect to the joint coordinates.

Multi-D Newton's Method, the algorithm (naive form):

  Initial guess x^0
  Repeat {
    compute f(x^k), J_f(x^k)
    x^{k+1} = x^k - [J_f(x^k)]^{-1} f(x^k)
    k = k + 1
  }

Why is this REALLY NOT the way to do it? Because the matrix inverse should never be formed explicitly; a linear system should be solved instead.
Multi-D Newton's Method, the algorithm (proper form):

  Initial guess x^0
  Repeat {
    compute f(x^k), J_f(x^k)
    solve J_f(x^k) Δx^k = -f(x^k) for Δx^k
    x^{k+1} = x^k + Δx^k
  } Until converged

Until what? Candidate convergence checks: ||Δx^{k+1}|| < threshold? ||f(x^{k+1})|| < threshold?

Nonlinear Steady State Analysis, Newton's Method convergence checks. A small update alone is not enough: an ||f(x)|| check is needed to avoid false convergence, since Δx can be small while f is still far from zero (e.g. where the function is very steep). Conversely, a small residual alone is not enough: a "delta-x" check is needed to avoid false convergence, since ||f(x)|| can be small while x is still far from the root (e.g. where the function is very flat). So require both:

  ||Δx^k|| < ε_x  AND  ||f(x^{k+1})|| < ε_f.
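The proper form of the algorithm, with both convergence checks, can be sketched as below; the demonstration function in the test is an illustrative example, not one of the slide case studies:

```python
import numpy as np

def newton(f, jac, x0, eps_f=1e-10, eps_dx=1e-10, max_iter=50):
    """Multi-dimensional Newton: solve J(x^k) dx = -f(x^k) for the
    update (never form the inverse), and require BOTH a small step
    and a small residual before declaring convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        dx = np.linalg.solve(jac(x), -fx)   # LU solve of the Newton system
        x = x + dx
        if np.linalg.norm(dx) < eps_dx and np.linalg.norm(f(x)) < eps_f:
            break
    return x
```

The double check in the loop is exactly the pairing argued for above: either test alone can report false convergence.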
Nonlinear Steady State Analysis, Newton's Method convergence checks. A relative delta-x check is also needed to avoid false convergence: with absolute thresholds only, both checks may pass while the relative error in x is still large (zooming in on the slide example shows a significant remaining percentage error). Use

  ||Δx^k|| < ε_a + ε_r ||x^{k+1}||.

Group activity: for each nonlinear case study in your group, discuss which convergence check you should apply.

Outline:
- Motivations
- Modeling of Networks
- Steady State Analysis of Networks
  - Linear Networks
  - Nonlinear Networks: 1D Newton method; Multi-Dimensional Newton method; Convergence Analysis; Improving Convergence; Newton-GCR (Jacobian-free) method; Case Study
- Dynamical Analysis of Networks
Newton's Method convergence example (1D): the iterates gain 1.5 digits, then 3 digits, then 6 digits; the number of correct digits doubles at each iteration (quadratic convergence).

Newton's Method convergence theorem (1D). Suppose that, for all x in the region of interest, |d^2 f / dx^2| and |1 / (df/dx)| are bounded, and let C be the constant built from those bounds. If C |x^0 - x*| = γ < 1, then x^k converges (quadratically) to x*. The smaller the bounds on d^2 f / dx^2 and 1 / (df/dx), the worse the initial guess can be and still get guaranteed convergence!

Newton's Method convergence issues: an initial guess far from the region where the bounds are small may cause non-convergence.

Newton's Method with limiting:

  Initial guess x^0
  Repeat {
    compute f(x^k), J_f(x^k)
    solve J_f(x^k) Δx^k = -f(x^k) for Δx^k
    x^{k+1} = x^k + limited(Δx^k)
  } Until ||Δx^k|| and ||f(x^{k+1})|| small enough

E.g. limiting the changes in x might improve convergence.
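One concrete choice of "limited(Δx)" is to clip the update to a maximum length; the clipping rule and the `max_step` value below are illustrative assumptions, not prescribed by the slides:

```python
import numpy as np

def damped_newton(f, jac, x0, max_step=1.0, max_iter=100, tol=1e-10):
    """Newton with a simple limiter: clip each update to a maximum
    length so that a poor initial guess cannot fling the iterate far
    away. Near the solution the steps are small, so the limiter
    becomes inactive and quadratic convergence is recovered."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))
        nrm = np.linalg.norm(dx)
        if nrm > max_step:
            dx *= max_step / nrm            # limited(dx)
        x = x + dx
        if nrm < tol and np.linalg.norm(f(x)) < tol:
            break
    return x
```

A classic stress test is f(x) = arctan(x): plain Newton diverges from x^0 = 2, while the limited version walks back to the root at 0.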
Newton's Method continuation schemes: source and load stepping. Newton converges given a close initial guess, so generate a sequence of problems and make sure each problem's solution provides a good initial guess for the next problem. Heat-conducting bar example:
1. Start with no heat: T = 0 is a very close initial guess.
2. Increase the heat slightly: the previous T is a good initial guess.
3. Increase the heat again, and so on.

Newton's Method continuation schemes: source/load stepping examples. Diode circuit (source stepping), ramping the source with λ from 0 to 1:

  f(v(λ), λ) = i_diode(v) + (v - λ V_s) / R = 0.

Struts (load stepping), ramping the load:

  f((x, y), λ) = f(x, y) + λ f_l = 0.

Note that λ multiplies only terms that do not depend on the unknowns, so source/load stepping does not alter the Jacobian.

Newton's Method continuation schemes: Newton homotopy methods. Define

  F(x(λ), λ) = λ f(x) + (1 - λ)(x - x^0).

Observations:
- at λ = 0: F(x, 0) = x - x^0, so the problem is easy to solve and the Jacobian is definitely nonsingular (it is the identity);
- at λ = 1: F(x, 1) = f(x), back to the original problem and the original Jacobian.
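Source stepping on the diode circuit can be sketched as below; the parameter values (Is, Vt, Vs, R) and the ten-step λ ramp are illustrative choices, not from the slides:

```python
import numpy as np

def source_step_newton(residual, dresidual, lambdas, x0, tol=1e-12, max_iter=50):
    """Source stepping: solve residual(x, lam) = 0 for an increasing
    sequence of lam values, using each converged x as the initial
    guess for the next, larger lam."""
    x = float(x0)
    for lam in lambdas:
        for _ in range(max_iter):            # plain 1-D Newton per step
            dx = -residual(x, lam) / dresidual(x, lam)
            x += dx
            if abs(dx) < tol and abs(residual(x, lam)) < tol:
                break
    return x

# Diode + resistor example from the slides, with illustrative values:
# f(v, lam) = i_diode(v) + (v - lam*Vs)/R, ramping the source lam*Vs.
Is, Vt, Vs, R = 1e-12, 0.025, 1.0, 1e3
f = lambda v, lam: Is * (np.exp(v / Vt) - 1.0) + (v - lam * Vs) / R
df = lambda v, lam: (Is / Vt) * np.exp(v / Vt) + 1.0 / R
v = source_step_newton(f, df, np.linspace(0.1, 1.0, 10), 0.0)
```

Each 0.1 increment of λ moves the solution only slightly, so the previous answer keeps Newton inside its convergence region; note also that λ never enters df, consistent with the remark that stepping does not alter the Jacobian.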
Newton's Method continuation schemes: Newton homotopy examples. For the diode circuit, the homotopy blends the easy problem (λ = 0, where F(v, 0) = v - v^0) into the original circuit equations (λ = 1); an analogous construction applies to the struts-and-joints example.

Case Study: which linear solver inside Newton, direct or iterative?
- Computation time: LU roughly O(N^1.5) for these sparse problems; GCR O(qN), where q is the number of GCR steps.
- Memory: LU must store the factors (including fill-in); GCR O(qN), since it just stores the q vectors J p.
- How accurately should we solve the linear system? LU: no choice, you get all 16 digits; GCR: can stop after one digit!
- Do we really need the Jacobian? LU: YES; GCR: NO!

Newton-GCR method:

  Initial guess x^0
  Repeat {
    stamp f(x^k) and J(x^k)
    solve J(x^k) Δx^k = -f(x^k) for Δx^k, using GCR
    x^{k+1} = x^k + Δx^k
  } Until ||Δx^k|| and ||f(x^{k+1})|| small enough
Newton-GCR method, with the GCR inner loop written out:

  Initial guess x^0
  Repeat {
    stamp f(x^k) and J(x^k); set r = -f(x^k)
    Repeat {
      compute J(x^k) p
      orthonormalize in the image space
      update the solution Δx and the residual r
    } Until r is small
    x^{k+1} = x^k + Δx^k
  } Until ||Δx^k|| and ||f(x^{k+1})|| small enough

Do we REALLY need to assemble the Jacobian?

Newton-GCR method, matrix-free idea. Consider applying GCR to the Newton iterate equation

  J(x^k) Δx = -f(x^k).

At each iteration GCR forms a matrix-vector product, and that product can be approximated by a finite difference:

  J(x^k) r ≈ [ f(x^k + ε r) - f(x^k) ] / ε.

It is possible to use Newton-GCR without Jacobians! One only needs to select a good ε.

Matrix-free Newton-GCR:

  Initial guess x^0
  Repeat {
    compute f(x^k); set r = -f(x^k)
    Repeat {
      compute [ f(x^k + ε r) - f(x^k) ] / ε in place of J(x^k) r
      orthonormalize in the image space
      update the solution Δx and the residual r
    } Until r is small
    x^{k+1} = x^k + Δx^k
  } Until ||Δx^k|| and ||f(x^{k+1})|| small enough

Do we REALLY need to assemble the Jacobian? NO. Remaining question: how accurately should we solve with GCR?
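The finite-difference Jacobian-vector product at the heart of the matrix-free idea can be sketched as below; the ε heuristic (scaled by the square root of machine precision and the sizes of x and v) is one common choice, an assumption on my part rather than a prescription from the slides:

```python
import numpy as np

def jacobian_free_matvec(f, x, v, eps=None):
    """Approximate J(x) @ v without assembling J:
        J v ~= (f(x + eps*v) - f(x)) / eps.
    eps balances truncation error (wants small eps) against
    floating-point cancellation (wants large eps)."""
    if eps is None:
        eps = (np.sqrt(np.finfo(float).eps)
               * (1.0 + np.linalg.norm(x))
               / max(np.linalg.norm(v), 1e-30))
    return (f(x + eps * v) - f(x)) / eps
```

Plugged into the GCR inner loop in place of `J @ p`, this gives a Jacobian-free Newton-Krylov solver: only evaluations of f are ever needed.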
More information18-660: Numerical Methods for Engineering Design and Optimization
8-66: Numerical Methods or Engineering Design and Optimization Xin Li Department o ECE Carnegie Mellon University Pittsburgh, PA 53 Slide Overview Linear Regression Ordinary least-squares regression Minima
More informationAdvanced Computational Methods for VLSI Systems. Lecture 4 RF Circuit Simulation Methods. Zhuo Feng
Advanced Computational Methods for VLSI Systems Lecture 4 RF Circuit Simulation Methods Zhuo Feng 6. Z. Feng MTU EE59 Neither ac analysis nor pole / zero analysis allow nonlinearities Harmonic balance
More informationThe conjugate gradient method
The conjugate gradient method Michael S. Floater November 1, 2011 These notes try to provide motivation and an explanation of the CG method. 1 The method of conjugate directions We want to solve the linear
More informationCLASS NOTES Computational Methods for Engineering Applications I Spring 2015
CLASS NOTES Computational Methods for Engineering Applications I Spring 2015 Petros Koumoutsakos Gerardo Tauriello (Last update: July 2, 2015) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material
More informationM.A. Botchev. September 5, 2014
Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev
More informationThere is a unique function s(x) that has the required properties. It turns out to also satisfy
Numerical Analysis Grinshpan Natural Cubic Spline Let,, n be given nodes (strictly increasing) and let y,, y n be given values (arbitrary) Our goal is to produce a function s() with the following properties:
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationAP Calculus Notes: Unit 1 Limits & Continuity. Syllabus Objective: 1.1 The student will calculate limits using the basic limit theorems.
Syllabus Objective:. The student will calculate its using the basic it theorems. LIMITS how the outputs o a unction behave as the inputs approach some value Finding a Limit Notation: The it as approaches
More informationMath 471 (Numerical methods) Chapter 3 (second half). System of equations
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationMatrix Assembly in FEA
Matrix Assembly in FEA 1 In Chapter 2, we spoke about how the global matrix equations are assembled in the finite element method. We now want to revisit that discussion and add some details. For example,
More informationAn artificial neural networks (ANNs) model is a functional abstraction of the
CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly
More informationTopics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems
Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate
More informationNotes for CS542G (Iterative Solvers for Linear Systems)
Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationAnalog Computing Technique
Analog Computing Technique by obert Paz Chapter Programming Principles and Techniques. Analog Computers and Simulation An analog computer can be used to solve various types o problems. It solves them in
More informationk is a product of elementary matrices.
Mathematics, Spring Lecture (Wilson) Final Eam May, ANSWERS Problem (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation
More informationCourse Notes: Week 4
Course Notes: Week 4 Math 270C: Applied Numerical Linear Algebra 1 Lecture 9: Steepest Descent (4/18/11) The connection with Lanczos iteration and the CG was not originally known. CG was originally derived
More informationComputational Methods. Least Squares Approximation/Optimization
Computational Methods Least Squares Approximation/Optimization Manfred Huber 2011 1 Least Squares Least squares methods are aimed at finding approximate solutions when no precise solution exists Find the
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationMA3232 Numerical Analysis Week 9. James Cooley (1926-)
MA umerical Analysis Week 9 James Cooley (96-) James Cooley is an American mathematician. His most significant contribution to the world of mathematics and digital signal processing is the Fast Fourier
More informationMath 60. Rumbos Spring Solutions to Assignment #17
Math 60. Rumbos Spring 2009 1 Solutions to Assignment #17 a b 1. Prove that if ad bc 0 then the matrix A = is invertible and c d compute A 1. a b Solution: Let A = and assume that ad bc 0. c d First consider
More informationNonlinear Optimization for Optimal Control
Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationConjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)
Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps
More informationNumerical Methods - Lecture 2. Numerical Methods. Lecture 2. Analysis of errors in numerical methods
Numerical Methods - Lecture 1 Numerical Methods Lecture. Analysis o errors in numerical methods Numerical Methods - Lecture Why represent numbers in loating point ormat? Eample 1. How a number 56.78 can
More informationExact and Approximate Numbers:
Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD
More informationANONSINGULAR tridiagonal linear system of the form
Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches
More informationLinear Methods for Regression. Lijun Zhang
Linear Methods for Regression Lijun Zhang zlj@nju.edu.cn http://cs.nju.edu.cn/zlj Outline Introduction Linear Regression Models and Least Squares Subset Selection Shrinkage Methods Methods Using Derived
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationExample: Current in an Electrical Circuit. Solving Linear Systems:Direct Methods. Linear Systems of Equations. Solving Linear Systems: Direct Methods
Example: Current in an Electrical Circuit Solving Linear Systems:Direct Methods A number of engineering problems or models can be formulated in terms of systems of equations Examples: Electrical Circuit
More information0.1. Linear transformations
Suggestions for midterm review #3 The repetitoria are usually not complete; I am merely bringing up the points that many people didn t now on the recitations Linear transformations The following mostly
More informationSolving Linear Systems of Equations
Solving Linear Systems of Equations Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:
More informationBlock Bidiagonal Decomposition and Least Squares Problems
Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Jason E. Hicken Aerospace Design Lab Department of Aeronautics & Astronautics Stanford University 14 July 2011 Lecture Objectives describe when CG can be used to solve Ax
More informationChapter 3 Numerical Methods
Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2
More information