Introduction to Compact Dynamical Modeling. II.1 Steady State Simulation. Luca Daniel, Massachusetts Institute of Technology.

Course Outline (Quick Sneak Preview)

I. Assembling Models from Physical Problems
II. Simulating Models
   II.1 Steady State Simulation
      II.1.a Linear Systems
      II.1.b Non-Linear Systems
   II.2 Time Domain Integration of Dynamical Systems
III. Model Order Reduction for Linear Systems
IV. Model Order Reduction for Non-Linear Systems
V. Parameterized Model Order Reduction

Steady State Analysis of Dynamical Linear Models

  dx/dt = A x(t) + B u(t),   y(t) = C^T x(t)

At steady state the input is constant, u(t) = u0, and dx/dt = 0. The analysis then boils down to solving a linear system of equations:

  A x = -B u0 = b,   y = C^T x

Outline

- Motivations
- Assembling Network Equations... Automatically
- Steady State Analysis of Networks
  - Linear networks
    - Existence and Uniqueness of Steady State
    - LU decomposition
    - Analysis of networks with local connectivity
    - Analysis of networks with global connectivity
  - Nonlinear networks
- Dynamical Analysis of Networks
- Model Order Reduction

Network Steady State Equations

  A x = b,  i.e.  x1 a1 + x2 a2 + ... + xN aN = b

Find a set of weights x1, ..., xN so that the weighted sum of the columns ak of the matrix A equals the right-hand side b.

Existence and Uniqueness of Steady State

Given A x = b, where A is square:
- linearly independent columns and b in range{A}: a solution exists and is unique
- linearly dependent columns and b in range{A}: infinitely many solutions exist
- b not in range{A}: no solutions

If the columns of A are linearly independent (nonsingular matrix), then a solution always exists (and of course is unique).

Gaussian Elimination Basics
Reminder by example: the key idea

  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3

Use equation 1 to eliminate x1 from equations 2 and 3: subtract (a21/a11) times equation 1 from equation 2, and (a31/a11) times equation 1 from equation 3:

  (a22 - (a21/a11) a12) x2 + (a23 - (a21/a11) a13) x3 = b2 - (a21/a11) b1
  (a32 - (a31/a11) a12) x2 + (a33 - (a31/a11) a13) x3 = b3 - (a31/a11) b1

Gaussian Elimination Basics
Reminder by example: the key idea (matrix view)

The first diagonal entry a11 is the PIVOT; the ratios a21/a11 and a31/a11 are the MULTIPLIERS. Eliminating x1 zeroes out the first column below the pivot and updates the remaining 2x2 block and the right-hand side.

Reminder by example: simplify the notation

After the first elimination step, rename the updated entries: a~22 = a22 - (a21/a11) a12, and so on for the rest of the 2x2 block and for b~2, b~3.

Reminder by example: apply recursively

Now a~22 is the pivot and a~32/a~22 is the multiplier for the second step: eliminate x2 from the third equation the same way. The result is an upper triangular system.

The right-hand side update

The same multipliers that transform A also transform b. Storing them lets the right-hand side update be replayed for any new b:

  y1 = b1
  y2 = b2 - (a21/a11) y1                       (first-loop multipliers)
  y3 = b3 - (a31/a11) y1 - (a~32/a~22) y2      (second-loop multiplier)

Gaussian Elimination: putting it all back together. LU decomposition basics

  A x = b  is equivalent to  L U x = b:  first solve L y = b, then solve U x = y.

U is the upper triangular matrix produced by elimination; L is lower triangular with unit diagonal, holding the multipliers:

  L = [ 1          ]      U = [ u11 u12 u13 ]
      [ l21  1     ]          [     u22 u23 ]
      [ l31 l32  1 ]          [         u33 ]

LU decomposition basics: fitting the pieces together

Multiplying L and U back together recovers A entry by entry, which confirms that Gaussian elimination computes a factorization A = L U.

LU decomposition basics: an in-place implementation

Since L has a unit diagonal, both factors fit in the storage of A: overwrite the strictly lower triangle with the multipliers l_ij and the upper triangle (including the diagonal) with U.
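As a concrete reference, here is a minimal sketch of the in-place factorization just described (my own illustration, not the lecture's code): the strictly lower triangle of A is overwritten with the multipliers and the upper triangle with U. No pivoting is performed, so nonzero pivots are assumed.

```python
def lu_in_place(A):
    """Overwrite the dense matrix A (a list of rows) with its LU factors:
    multipliers of L below the diagonal, U on and above it."""
    N = len(A)
    for i in range(N - 1):              # pivot (source) row
        for j in range(i + 1, N):       # each target row below the pivot
            A[j][i] = A[j][i] / A[i][i]       # multiplier l_ji = a_ji / a_ii
            for k in range(i + 1, N):   # update the rest of the target row
                A[j][k] -= A[j][i] * A[i][k]
    return A
```

For example, factoring [[2, 1], [4, 5]] stores the multiplier 2 in the (2,1) slot and leaves U = [[2, 1], [0, 3]] in the upper triangle.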

LU decomposition basics: heat-conducting bar example

A bar with incoming heat I_s injected at one end, node temperatures T1 ... TN, and thermal resistances R between nodes gives a tridiagonal nodal matrix; performing the LU decomposition in place illustrates the algorithm. (Group activity.)

LU decomposition basics: the three steps to solving a linear system

Solve A x = b:
  Step 1: LU decomposition, A = L U             O(N^3)
  Step 2: forward elimination, solve L y = b    O(N^2)
  Step 3: backward substitution, solve U x = y  O(N^2)

Group activity: discuss in your team in what situations it is practically convenient to separate and organize the computation in three steps, as opposed to the standard Gaussian elimination procedure. Discuss whether that situation may apply, and what it means, for each of the case studies of the members of your team.
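The three steps can be sketched as follows (an illustrative version under the same no-pivoting assumption; all names are mine). Factoring once and reusing L and U is what makes multiple right-hand sides cheap: steps 2 and 3 cost only O(N^2) each.

```python
def lu_solve(A, b):
    """Solve A x = b by LU factorization (Step 1), forward elimination
    (Step 2), and backward substitution (Step 3). A is overwritten."""
    N = len(A)
    # Step 1: in-place LU decomposition, A = L U
    for i in range(N - 1):
        for j in range(i + 1, N):
            A[j][i] /= A[i][i]
            for k in range(i + 1, N):
                A[j][k] -= A[j][i] * A[i][k]
    # Step 2: forward elimination, L y = b (L has unit diagonal)
    y = b[:]
    for i in range(N):
        for j in range(i):
            y[i] -= A[i][j] * y[j]
    # Step 3: backward substitution, U x = y
    x = y[:]
    for i in range(N - 1, -1, -1):
        for j in range(i + 1, N):
            x[i] -= A[i][j] * x[j]
        x[i] /= A[i][i]
    return x
```

Once Step 1 has run, a new right-hand side costs only Steps 2 and 3.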

Sparse Matrices: tridiagonal example, matrix form

A chain of m nodes gives a tridiagonal matrix: nonzeros only on the main diagonal and the two adjacent diagonals.

Sparse Matrices: tridiagonal example, LU decomposition

  for i = 1 to N-1 {              /* for each source row i */
      for j = i+1 to N {          /* for each target row below the source */
          m_ji = a_ji / a_ii
          for k = i+1 to N {      /* for each row element beyond the pivot */
              a_jk = a_jk - m_ji * a_ik
          }
      }
  }

For a tridiagonal matrix only one target row, and one element in it, is touched at each step, so the whole factorization takes O(N) operations!

Sparse Matrices: circuit grid example

An (m+1) x (m+1) grid of resistors.
Unknowns: node voltages. Equations: the sum of currents at each node.
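The O(N) tridiagonal elimination above is worth seeing concretely: each elimination step touches a single sub-diagonal entry, so the triple loop collapses into one pass (the classic Thomas algorithm; this compact version and its argument names are mine).

```python
def thomas(a, d, c, b):
    """Solve a tridiagonal system with sub-diagonal a, diagonal d,
    super-diagonal c, and right-hand side b. O(N) operations total."""
    N = len(d)
    d, b = d[:], b[:]
    for i in range(1, N):            # forward elimination: one multiplier per row
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * N
    x[-1] = b[-1] / d[-1]
    for i in range(N - 2, -1, -1):   # back substitution: one update per row
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```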

Sparse Matrices: circuit grid example

The matrix nonzero locations for the resistor grid form a banded pattern: each node couples only to its neighbors, so each row has at most five nonzeros, all within a band of width about m.

Sparse Matrices: temperature in a cube example

Temperature known on the surface; determine the interior temperature. The circuit model is a 3-D resistor grid.

Steady State Analysis of Networks with Local (Sparse) Connectivity: neglecting sparsity

Treating the grid matrices as dense, LU costs O(N^3):
  1-D (m-point grid):    N = m,    O(N^3) = O(m^3) ops
  2-D (m x m grid):      N = m^2,  O(N^3) = O(m^6) ops
  3-D (m x m x m grid):  N = m^3,  O(N^3) = O(m^9) ops

E.g., on a Gflop/s computer with m = 100: about 1 ms (1-D), about 17 minutes (2-D), about 30 years (3-D).

Banded LU computational analysis

For a matrix with bandwidth b, elimination only touches entries inside the band:

  for i = 1 to N-1 {                /* for each source row */
      for j = i+1 to i+b-1 {        /* target rows within the band */
          m_ji = a_ji / a_ii
          for k = i+1 to i+b-1 {    /* row elements within the band */
              a_jk = a_jk - m_ji * a_ik
          }
      }
  }

Total: roughly b^2 multiply-adds per source row, i.e. O(b^2 N).
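A quick back-of-the-envelope script (my own, with m = 100 and a 1 Gflop/s machine as on the slides) reproduces the dense-LU costs above and previews the banded O(b^2 N) savings:

```python
m = 100
# (N, bandwidth) for an m-point chain, an m x m grid, and an m x m x m grid;
# the 1-D bandwidth 3 stands in for "a small constant" (tridiagonal).
cases = {"1-D": (m, 3), "2-D": (m ** 2, m), "3-D": (m ** 3, m ** 2)}
for dim, (N, b) in cases.items():
    full = N ** 3          # dense LU: about N^3 operations
    banded = N * b * b     # banded LU: about N b^2 operations
    print(dim, full / 1e9, "s full,", banded / 1e9, "s banded at 1 Gflop/s")
```

The 3-D case drops from about 1e18 operations (decades) to about 1e14 (roughly a day), which is the point of the next slide.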

Steady State Analysis of Networks with Local (Sparse) Connectivity: exploiting banded information

       matrix size   band
  1-D:  N = m        b = const   O(b^2 N) = O(m) ops
  2-D:  N = m^2      b = m       O(b^2 N) = O(m^4) ops
  3-D:  N = m^3      b = m^2     O(b^2 N) = O(m^7) ops

E.g., on a Gflop/s computer with m = 100: about 0.1 us (1-D), about 0.1 s (2-D), about 27 hours (3-D).

Sparse Matrices: struts and joints frame example

A space frame of struts connecting joints 1, 2, ..., 9, ...
Unknowns: joint positions. Equations: force equilibrium at each joint. The nodal matrix has 2x2 blocks coupling neighboring joints.

Sparse Matrices: fill-in

Fill-ins are new nonzeros created by elimination at previously zero locations, and they propagate: fill-ins from step 1 can cause further fill-ins in step 2.

Where can fill-in occur? If row j holds a multiplier in column i (the already-factored part) and row i has a nonzero in column k of the unfactored part, position (j, k) fills in.

Fill-in estimate for pivot k (the Markowitz product):

  (nonzeros in the unfactored part of row k - 1) x (nonzeros in the unfactored part of column k - 1)

Sparse Matrices: fill-in reordering algorithm (pivoting for sparsity)

Markowitz reordering:
  for i = 1 to N
      find the diagonal j >= i with minimum Markowitz product
      swap rows j and i, and columns j and i
      factor the new row i and determine fill-ins
  end

A greedy algorithm!

Pattern of a filled-in matrix: the rows eliminated early stay very sparse; the trailing block becomes dense.

Computational complexity of steady state analysis of networks with local (sparse) connectivity:

  Dimension   Banded LU   Sparse LU   GCR (no preconditioner)
  1-D         O(m)        O(m)        O(m^2)
  2-D         O(m^4)      O(m^3)      O(m^3)
  3-D         O(m^7)      O(m^6)      O(m^4)

E.g., on a Gflop/s computer with m = 100 (approximate):

  Dimension   Banded LU   Sparse LU   GCR (no preconditioner)
  1-D         0.1 us      0.1 us      10 us
  2-D         0.1 s       1 ms        1 ms
  3-D         27 hours    17 min      0.1 s

Sparse Matrices: data structures for sparse matrices

A vector of N row pointers; each row stores only its nonzero entries as an array of (Val, Col) pairs: the matrix entry Val and its column index Col.

Steady State Analysis for Linear Dense Problems
Iterative methods: the general idea

Problem: solve A x = b.
  Guess x0.
  REPEAT:
    How good was my guess? Calculate the residual r_k = b - A x_k.
    Pick a search direction based on the previous guess history.
    Find the next guess x_{k+1} along the search direction which minimizes the next residual r_{k+1}.
  UNTIL residual < desired accuracy.

Advantages over the direct method (i.e., LU decomposition):
- great control on accuracy: can stop, and save computation, as soon as the desired accuracy is achieved
- only need a matrix-vector product per iteration: O(N^2) for a dense matrix (O(N) if sparse)

Selection of search direction vectors

Assume A = A^T (symmetric) and A > 0 (positive definite). Observation: the solution of A x = b then corresponds to the location of the minimum of

  f(x) = x^T A x - 2 b^T x

whose gradient, grad f(x) = 2 (A x - b), is minus twice the residual at x.

Idea: search along the gradient, i.e. the steepest-descent direction for f(x), i.e. along the current residual direction.

Note: a different choice is needed for the non-symmetric or non-positive-definite case (e.g., look up the GMRES algorithm).
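A minimal version of the row-pointer storage described above (CSR-style; the names row_ptr, col, val are mine). A matrix-vector product, the workhorse of the iterative methods that follow, then visits only the nonzeros:

```python
def csr_matvec(row_ptr, col, val, x):
    """y = A x for a sparse A stored as row pointers plus (value, column)
    arrays; costs O(number of nonzeros)."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):   # nonzeros of row i
            s += val[k] * x[col[k]]
        y.append(s)
    return y
```

For the 2 x 3 matrix [[2, 0, 1], [0, 3, 0]] the arrays are row_ptr = [0, 2, 3], col = [0, 2, 1], val = [2, 1, 3].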

Steady State Analysis for Linear Dense Problems
The Generalized Conjugate Residual (GCR) algorithm

  x0 = 0, r0 = b
  REPEAT (for k = 0, 1, 2, ...):
    p_k = r_k
    for j = 0 to k-1:                 /* orthogonalize A p_k against the previous A p_j */
        beta = <A p_k, A p_j>
        p_k = p_k - beta p_j
    normalize: p_k = p_k / ||A p_k||
    alpha_k = <r_k, A p_k>
    x_{k+1} = x_k + alpha_k p_k
    r_{k+1} = r_k - alpha_k A p_k
  UNTIL residual small enough

GCR builds the solution as a projection onto a Krylov subspace:

  x_k in span{ r0, A r0, A^2 r0, ..., A^{k-1} r0 }

Comparison direct vs. iterative

Theorem: the number of iterations K is never larger than N. Hence GCR is slightly worse than LU for SPARSE problems and no worse than LU for DENSE problems:

                          Sparse           Dense
  GCR                     O(K N)           O(K N^2)
  GCR (Theorem: K <= N)   O(N^2)           O(N^3)
  Direct method (LU)      O(N^1.1-1.8)     O(N^3)
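A compact runnable sketch of the GCR loop above (my own implementation; matvec stands for the product with A, and A is assumed nonsingular):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gcr(matvec, b, tol=1e-10, max_iter=50):
    """Generalized Conjugate Residual: minimize ||b - A x|| over a growing
    Krylov subspace, keeping the images A p_j orthonormal."""
    n = len(b)
    x, r = [0.0] * n, b[:]
    P, AP = [], []                       # directions p_j and their images A p_j
    for _ in range(max_iter):
        if dot(r, r) ** 0.5 < tol:       # residual small enough
            break
        p, Ap = r[:], matvec(r)          # new direction: the current residual
        for pj, Apj in zip(P, AP):       # orthogonalize A p against earlier A p_j
            beta = dot(Ap, Apj)
            p = [a - beta * c for a, c in zip(p, pj)]
            Ap = [a - beta * c for a, c in zip(Ap, Apj)]
        nrm = dot(Ap, Ap) ** 0.5
        if nrm == 0.0:
            break
        p = [a / nrm for a in p]
        Ap = [a / nrm for a in Ap]
        P.append(p); AP.append(Ap)
        alpha = dot(r, Ap)               # optimal step along p
        x = [a + alpha * c for a, c in zip(x, p)]
        r = [a - alpha * c for a, c in zip(r, Ap)]
    return x
```

Note the theorem in action: on an N-dimensional system the loop terminates in at most N iterations (in exact arithmetic), since the Krylov subspace then spans the whole space.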

Steady State Analysis for Linear Problems: comparison direct vs. iterative

With a good preconditioner the number of iterations K can typically be made about constant and small (around 10-20 for 3-4 digits of accuracy): GCR becomes slightly better than LU for SPARSE problems and SIGNIFICANTLY better than LU for DENSE problems:

                          Sparse           Dense
  GCR (K small const.)    O(N)             O(N^2)
  Direct method (LU)      O(N^1.1-1.8)     O(N^3)

Nonlinear Steady State Analysis: struts example

Two struts anchored at (x1, y1) and (x2, y2) join at (x, y), where a load force l is applied. Given (x1, y1), (x2, y2), and l, find (x, y) such that the strut forces balance the load:

  f1(x, y) + f2(x, y) + l = 0

Each strut force depends on its elongation, which is a nonlinear function of the joint position through the strut length sqrt((x - xi)^2 + (y - yi)^2). The force balance is therefore a nonlinear system of equations.

Nonlinear Steady State Analysis: circuit example

Valves in the venous system can be modeled as diodes (see http://www.vsegypt.org/index.php/vascular-surgery-cme/articles/44-chronic-venous-insufficiency-.html). Consider a voltage source V_s, a resistor R, and a diode in a loop, with diode voltage v_d.

Conservation law (KCL): I_r = I_d.
Branch equations: I_d = I_s (e^{v_d / V_t} - 1),  I_r = (V_s - v_d) / R.

Substituting the branch equations into the conservation law yields a nonlinear system of equations.

Nonlinear Steady State Analysis

  dx/dt = f(x(t), u(t)),   y(t) = G(x(t), u(t))

At steady state the input is constant, u(t) = u0, and dx/dt = 0, so the analysis boils down to solving a nonlinear system of equations:

  f(x, u0) = 0,   y = G(x, u0)

Newton's Method, 1-D, graphically: linearize f at the current guess, follow the tangent line to its zero crossing, and repeat.

Multi-Dimensional Newton's Method: general setting

Problem: find x* such that F(x*) = 0, where F: R^N -> R^N.

Approximate F with its Taylor series about the current iterate x^k:

  F(x) ~ F(x^k) + J_F(x^k) (x - x^k)

and find the solution of the approximation:

  x^{k+1} = x^k - [J_F(x^k)]^{-1} F(x^k)

Multi-D Newton's Method: the Jacobian matrix

First-order Taylor series expansion: F(x + Dx) = F(x) + J_F(x) Dx + ..., where the Jacobian collects the sensitivities

  [J_F(x)]_ij = dF_i / dx_j

Multi-D Newton's Method: nodal analysis, struts-and-joints example

The residual at each joint, e.g. F(x) = f1(x, y) + f2(x, y) + l, is stamped into the system; the Jacobian entries follow by differentiating the strut force expressions with respect to each joint coordinate. (Which derivative goes in each block?)

Multi-D Newton's Method: the algorithm (first attempt)

  Initial guess x^0
  Repeat {
    Compute F(x^k), J_F(x^k)
    x^{k+1} = x^k - [J_F(x^k)]^{-1} F(x^k)
  }

Why is this REALLY NOT the way to do it?

Multi-D Newton's Method: the algorithm

Never form the inverse explicitly; solve a linear system for the update instead:

  Initial guess x^0
  Repeat {
    Compute F(x^k), J_F(x^k)
    Solve J_F(x^k) Dx^{k+1} = -F(x^k) for Dx^{k+1}
    x^{k+1} = x^k + Dx^{k+1}
  } Until convergence:  ||Dx^{k+1}|| < threshold?  ||F(x^{k+1})|| < threshold?
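The algorithm above, as a runnable sketch on a toy two-unknown system (the example F and all names are mine): at each step solve J Dx = -F, here by Cramer's rule since N = 2. The system F(x, y) = (x^2 + y^2 - 4, x - y) has the positive root x = y = sqrt(2).

```python
def newton2(x, y, tol=1e-12, max_iter=50):
    """Multi-dimensional Newton on F(x, y) = (x^2 + y^2 - 4, x - y)."""
    for _ in range(max_iter):
        F = [x * x + y * y - 4.0, x - y]
        if max(abs(F[0]), abs(F[1])) < tol:
            break
        J = [[2.0 * x, 2.0 * y], [1.0, -1.0]]            # Jacobian dF_i/dx_j
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # solve J [dx, dy]^T = -F by Cramer's rule (fine for N = 2 only)
        dx = (-F[0] * J[1][1] + J[0][1] * F[1]) / det
        dy = (-J[0][0] * F[1] + J[1][0] * F[0]) / det
        x, y = x + dx, y + dy
    return x, y
```

From the initial guess (1, 1) the iterates 1.5, 1.4167, 1.41422, ... show the quadratic convergence discussed below.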

Nonlinear Steady State Analysis: Newton's method convergence checks

Need an "F(x)" check to avoid false convergence: the step Dx^{k+1} can be small while F(x^{k+1}) is still large (e.g. in a flat region far from the root), so ||Dx^{k+1}|| < threshold alone is NOT enough.

Need a "delta-x" check to avoid false convergence: conversely, ||F(x^{k+1})|| can be small while x is still far from the solution (a nearly flat F passing close to zero), so the residual check alone is NOT enough either.

Need also a relative delta-x check: with only an absolute check ||Dx^{k+1}|| < eps_a, a solution whose magnitude is much larger than eps_a can stop with, say, 1% error still remaining (zoom in on the crossing to see it). Require also ||Dx^{k+1}|| < eps_r ||x^{k+1}||.

Group activity: for each nonlinear case study in your group, discuss which convergence check you should apply.

Outline

- Motivations
- Modeling of Networks
- Steady State Analysis of Networks
  - Linear networks
  - Nonlinear networks
    - 1-D Newton method
    - Multi-Dimensional Newton method
    - Convergence analysis
    - Improving convergence
    - Newton-GCR (Jacobian-free method)
    - Case study
- Dynamical Analysis of Networks
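The three checks combine naturally into one predicate (a sketch; the tolerance names are my own): declare convergence only when the step is small both absolutely and relative to ||x||, AND the residual ||F|| is small.

```python
def converged(dx_norm, x_norm, f_norm, abstol=1e-9, reltol=1e-6, ftol=1e-9):
    """All three Newton convergence checks at once:
    absolute + relative delta-x check, and the F(x) residual check."""
    return dx_norm < abstol + reltol * x_norm and f_norm < ftol
```

Either check alone admits the false-convergence cases sketched on the slides; requiring both closes them off.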

Nonlinear Steady State Analysis: Newton's method convergence example

Typical behavior: gained 0.5 digits, then gained 3 digits, then gained 6 digits. The number of correct digits roughly doubles at each iteration: quadratic convergence.

Newton's method convergence theorem (1-D)

Suppose f is twice continuously differentiable, with bounds |d^2 f / dx^2| <= C and |1 / (df/dx)| <= gamma near the solution x*. If

  C gamma |x^0 - x*| < 1

then x^k converges to x*, and quadratically so. The smaller the bounds on |d^2 f / dx^2| and |1 / (df/dx)|, the worse the initial guess can be and still get guaranteed convergence!

Newton's method convergence issues

An initial guess far from the region where the bounds are small may cause non-convergence.

Newton's method with limiting

  Initial guess x^0
  Repeat {
    Compute F(x^k), J_F(x^k)
    Solve J_F(x^k) Dx^{k+1} = -F(x^k) for Dx^{k+1}
    x^{k+1} = x^k + limited(Dx^{k+1})
  } Until ||Dx^{k+1}|| and ||F(x^{k+1})|| small enough

E.g., limiting the changes in x might improve convergence.
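A 1-D illustration of limiting (the example and the clip value are my own choices): plain Newton on f(x) = arctan(x) diverges from x0 = 2 because the tangent overshoots farther each step, but clipping |dx| rescues it.

```python
import math

def newton_limited(x, limit=1.0, tol=1e-12, max_iter=100):
    """Newton on f(x) = atan(x) with a simple step limiter."""
    for _ in range(max_iter):
        f = math.atan(x)
        if abs(f) < tol:
            break
        dx = -f * (1.0 + x * x)           # -f / f', since f'(x) = 1/(1 + x^2)
        dx = max(-limit, min(limit, dx))  # limited(dx)
        x += dx
    return x
```

From x = 2 the unlimited step would be about -5.5 (overshooting to -3.5, then worse); the clipped iterates 2, 1, 0 land on the root immediately.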

Newton's Method Continuation Schemes: source and load stepping

Newton converges given a close initial guess. So: generate a sequence of problems, making sure each problem's solution is a good initial guess for the next.

Heat-conducting bar example:
1. Start with the heat off; the cold solution is a very close initial guess.
2. Increase the heat slightly; the previous temperature profile is a good initial guess.
3. Increase the heat again, and so on.

Source/load stepping examples

Diode circuit: f(v, lambda) = i_diode(v) + (v - lambda V_s) / R, stepping the source from lambda = 0 to lambda = 1.

Struts: f((x, y), lambda) = f1(x, y) + f2(x, y) + lambda l, stepping the load.

Note that lambda multiplies only the source/load term, which does not appear in the Jacobian: source/load stepping does NOT alter the Jacobian.

Newton's Method Continuation Schemes: Newton homotopy methods

Define

  F(x, lambda) = lambda F(x) + (1 - lambda)(x - x^0)

Observations:
- At lambda = 0 the problem is easy to solve, and the Jacobian (the identity) is definitely nonsingular.
- At lambda = 1 we are back to the original problem and the original Jacobian.
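A runnable sketch of source stepping on the diode circuit (the component values Is, Vt, Vs, R and the step count are my own choices, not the lecture's): ramp lambda from 0 to 1 and start each Newton solve from the previous lambda's solution.

```python
import math

# f(v, lam) = Is*(exp(v/Vt) - 1) + (v - lam*Vs)/R  (diode + resistor KCL)
Is, Vt, Vs, R = 1e-12, 0.025, 5.0, 1e3

def f(v, lam):
    return Is * (math.exp(v / Vt) - 1.0) + (v - lam * Vs) / R

def df(v):
    return Is / Vt * math.exp(v / Vt) + 1.0 / R

def solve_by_stepping(steps=50, tol=1e-12):
    v = 0.0                              # lam = 0: v = 0 solves exactly
    for k in range(1, steps + 1):
        lam = k / steps
        for _ in range(100):             # Newton at this source level
            r = f(v, lam)
            if abs(r) < tol:
                break
            v -= r / df(v)
        # v now seeds the Newton solve at the next lambda
    return v
```

Jumping straight to lambda = 1 from v = 0 makes Newton overshoot into the steep exponential; ramping the source keeps every initial guess close, exactly as the slide argues.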

Newton's Method Continuation Schemes: Newton homotopy examples

The same diode circuit and struts examples can be embedded in a homotopy: blend the original nonlinear equations with an easily solved problem through lambda, solving a short sequence of problems as lambda goes from 0 to 1.

Case Study: which linear solver inside Newton, direct or iterative?

- Computation time: sparse LU costs roughly O(N^1.5); GCR costs O(qN), where q is the number of GCR steps.
- Memory: LU must store the factors; GCR needs O(qN), just storing the q vectors J p.
- How accurately should we solve the linear system? With LU there is no choice: full precision (16 digits). GCR can stop after one digit!
- Do we really need the Jacobian? LU: YES. GCR: NO!

Newton-GCR method

  Initial guess x^0
  Repeat {
    Stamp F(x^k) and J(x^k)
    Solve (using GCR) J(x^k) Dx^{k+1} = -F(x^k)
    x^{k+1} = x^k + Dx^{k+1}
  } Until ||Dx^{k+1}|| and ||F(x^{k+1})|| small enough

Newton-GCR method (inner loop written out)

  Initial guess x^0
  Repeat {
    Stamp F(x^k); set r = -F(x^k)
    Repeat {                          /* inner GCR loop */
      Compute the product J(x^k) p
      Orthonormalize in the image space
      Update the solution Dx and the residual r
    } Until ||r|| small
    x^{k+1} = x^k + Dx^{k+1}
  } Until ||Dx^{k+1}|| and ||F(x^{k+1})|| small enough

Do we REALLY need to assemble the Jacobian?

Newton-GCR method: the matrix-free idea

Consider applying GCR to the Newton iterate equation J(x^k) Dx = -F(x^k). At each iteration GCR forms a matrix-vector product J(x^k) r, which can be approximated by a finite difference:

  J(x^k) r ~ [ F(x^k + eps r) - F(x^k) ] / eps

It is possible to use Newton-GCR without Jacobians! (Need to select a good eps.)

Matrix-free Newton-GCR

  Initial guess x^0
  Repeat {
    Compute F(x^k); set r = -F(x^k)
    Repeat {
      Compute J(x^k) p ~ [ F(x^k + eps p) - F(x^k) ] / eps
      Orthonormalize in the image space
      Update the solution Dx and the residual r
    } Until ||r|| small
    x^{k+1} = x^k + Dx^{k+1}
  } Until ||Dx^{k+1}|| and ||F(x^{k+1})|| small enough

Do we REALLY need to assemble the Jacobian? NO.
How accurately should we solve with GCR?
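Putting the pieces together, here is a toy matrix-free Newton-GCR (my own sketch; eps = 1e-7 is a typical, not universal, choice): the inner solver only ever calls F and never assembles J.

```python
def axpy(a, u, v):
    return [a * ui + vi for ui, vi in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def jv(F, x, Fx, r, eps=1e-7):
    """Finite-difference Jacobian-vector product: J(x) r ~ (F(x+eps r)-F(x))/eps."""
    Fp = F(axpy(eps, r, x))
    return [(a - b) / eps for a, b in zip(Fp, Fx)]

def newton_gcr(F, x, tol=1e-8, max_newton=30):
    """Outer Newton loop; inner GCR on J dx = -F(x), matrix-free."""
    for _ in range(max_newton):
        Fx = F(x)
        if dot(Fx, Fx) ** 0.5 < tol:
            break
        dx, r, P, AP = [0.0] * len(x), [-a for a in Fx], [], []
        for _ in range(len(x)):              # at most N inner GCR iterations
            p, Ap = r[:], jv(F, x, Fx, r)    # J*r via finite difference
            for pj, Apj in zip(P, AP):       # orthonormalize in the image space
                beta = dot(Ap, Apj)
                p, Ap = axpy(-beta, pj, p), axpy(-beta, Apj, Ap)
            nrm = dot(Ap, Ap) ** 0.5
            if nrm == 0.0:
                break
            p = [a / nrm for a in p]; Ap = [a / nrm for a in Ap]
            P.append(p); AP.append(Ap)
            alpha = dot(r, Ap)               # update solution and residual
            dx, r = axpy(alpha, p, dx), axpy(-alpha, Ap, r)
        x = [a + b for a, b in zip(x, dx)]
    return x
```

On the earlier toy system F(x, y) = (x^2 + y^2 - 4, x - y) this converges to x = y = sqrt(2) with no Jacobian ever stamped, which is the whole point of the slide.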