Numerical Solutions of Finite Volume Equations: Iteration Solutions
Contents: Equation to Be Solved; Grid i,j Notation; A Small Grid (N = 6, M = 5); Solving the Equations; General Equation in a Matrix


Numerical Solutions of Finite Volume Equations
Larry Caretto, Mechanical Engineering, Computational Fluid Dynamics
Iteration Solutions, March 7

Equation to Be Solved
- Have a set of simultaneous linear equations to be solved algebraically
- Coefficients differ for u, v, and p, but all the equations seen here link the central (P) node to its nearest neighbors
- Sparse matrix system; look at iterative methods for solution

A Small Grid (N = 6, M = 5)
- Boundary nodes surround the interior nodes and the computational molecule [grid figure not recoverable from the transcription]

Grid i,j Notation
- For the system we typically use i,j notation in combination with compass-point notation
- a_P is the general coefficient; a_{i,j} refers to a particular node
- N (north), S (south), E (east), and W (west) refer to neighboring nodes by direction
- General equation (subscripts reconstructed from the compass-point convention):
  a_S phi_{i,j-1} + a_W phi_{i-1,j} + a_P phi_{i,j} + a_E phi_{i+1,j} + a_N phi_{i,j+1} = b_{i,j}

Solving the Equations
- Typically have a large number of equations forming a sparse matrix
- For a fine grid [the slide's counts were lost in the transcription] only a small fraction of the potential coefficients are nonzero
- Want a data structure and an algorithm for handling sparse matrices
- Gauss elimination uses storage schemes for banded matrices; iterative methods are used for solutions

General Equation in a Matrix
- Look at the separation between coefficients
- a_S coefficients are not present in the first rows of the matrix, and a_N coefficients are not present in the last rows
- a_W is not present in the first equation and is zero once per grid line thereafter; a_E is zero once per grid line and is not present in the last equation
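The sparsity pattern described above can be checked directly. Below is a minimal sketch, assuming the five-point Laplace-type stencil (a_P = 4, neighbor coefficients = -1) and the 5 x 4 block of interior unknowns that an N = 6, M = 5 grid provides; the `index` helper and the coefficient values are illustrative choices, not the lecture's code.

```python
# Assemble the five-point coefficient matrix for the interior unknowns of a
# small grid and measure its sparsity. Stencil values (aP = 4, neighbors = -1)
# and the grid size are illustrative assumptions.

NX, NY = 5, 4                     # interior nodes of an N = 6, M = 5 grid
n = NX * NY                       # number of unknowns (20)

def index(i, j):
    """Map interior grid point (i, j) to its unknown number."""
    return j * NX + i

A = [[0.0] * n for _ in range(n)]
for j in range(NY):
    for i in range(NX):
        k = index(i, j)
        A[k][k] = 4.0             # central (aP) coefficient
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < NX and 0 <= jj < NY:  # neighbor is an unknown;
                A[k][index(ii, jj)] = -1.0     # boundary values go to b

nonzero = sum(1 for row in A for a in row if a != 0.0)
print(f"{nonzero} nonzeros out of {n * n} ({100 * nonzero / (n * n):.1f}%)")
```

Every link to a boundary node moves to the right-hand side, which is why the count comes out below the five-per-equation maximum.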

Sparse Matrix Structure
- Each of the equations could in principle have a coefficient for every unknown; here each equation has no more than five coefficients
- Boundaries give additional zero coefficients (18 in this example, from the counts below)
- Thus, for the N = 6, M = 5 grid we have 82 nonzero coefficients out of 400, so nearly 80% of the coefficients are zero
- The fraction of zeros increases for larger grids

How Sparse is the Matrix?
- The N by M grid has (N-1)(M-1) nodes with equations, giving [(N-1)(M-1)]^2 possible coefficients
- Without boundaries we would have 5(N-1)(M-1) nonzero coefficients
- Boundaries give 2(N-1) + 2(M-1) additional zero coefficients
- Nonzero fraction = [5(N-1)(M-1) - 2(N-1) - 2(M-1)] / [(N-1)(M-1)]^2

What Makes Sparseness?
- Each node is connected only to a small number of nearest neighbors; the problem here has four neighbors
- Higher-order schemes and finite-volume equations can have more neighbors
- Can have complex coefficients so long as the number of neighbors is limited
- Convection-diffusion coefficients with uneven grid spacing are an example of complex coefficients in a sparse matrix

Iterative Solutions
- Simplest examples are Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR)
- Move from iteration n to iteration n+1; iteration 0 is the initial guess (often all zero)
- Straightforward approach: solve the equation for phi_P and use this as the basis for iteration, with coefficients normalized by a_P (a' = a/a_P, b' = b/a_P):
  phi_P = b' - a'_E phi_E - a'_W phi_W - a'_N phi_N - a'_S phi_S

Iterative Solutions II
- Use superscript (n) for the iteration number
- Jacobi iteration uses all old values:
  phi_P^(n+1) = b' - a'_E phi_E^(n) - a'_W phi_W^(n) - a'_N phi_N^(n) - a'_S phi_S^(n)
- Gauss-Seidel uses the most recent values wherever they are available
- Relaxation basis: Gauss-Seidel provides a correction that can be adjusted:
  phi^(n+1) = phi^(n) + omega [phi^(n+1),GS - phi^(n)]

Example Problem
- Look at a simple system of two equations; could solve it exactly for x and y, but use it to illustrate iteration
- Jacobi general form: solve the first equation for x, the second for y, and substitute the previous iterates [the slide's coefficients and first iteration steps were lost in the transcription]
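The Jacobi and Gauss-Seidel updates described above can be sketched on a small system. The lecture's 2 x 2 example lost its coefficients in transcription, so the system 8x + y = 10, x + 8y = 17 (exact solution x = 1, y = 2) is a hypothetical stand-in with the same diagonally dominant structure.

```python
# Jacobi uses only the previous iterate; Gauss-Seidel reuses the newest value
# as soon as it is computed. Both are applied to a made-up 2x2 system.

def jacobi(x, y):
    return (10 - y) / 8, (17 - x) / 8       # both updates use old values

def gauss_seidel(x, y):
    x_new = (10 - y) / 8                    # uses the old y
    y_new = (17 - x_new) / 8                # uses the new x immediately
    return x_new, y_new

def solve(step, tol=1e-10):
    x = y = 0.0                             # initial guess: all zero
    for n in range(1, 1000):
        x_new, y_new = step(x, y)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new, n
        x, y = x_new, y_new

xj, yj, nj = solve(jacobi)
xg, yg, ng = solve(gauss_seidel)
print(f"Jacobi:       ({xj:.6f}, {yj:.6f}) in {nj} iterations")
print(f"Gauss-Seidel: ({xg:.6f}, {yg:.6f}) in {ng} iterations")
```

Both reach the same answer; Gauss-Seidel needs fewer iterations here because its iteration matrix has a smaller spectral radius.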

Jacobi Example Continued
- Substitute the previous iterates into the rearranged equations to get the next (x, y) pair [the slide's worked numbers were lost in the transcription]
- What is the next iteration? How do we know we are finished?

Concluding Iterations
- In general, we do not know the correct answers; two common measures:
- Residual: r_i = sum_j a_ij x_j - b_i
- Difference in one iteration: x^(n+1) - x^(n)
- Can use a relative or an absolute measure
- Need a vector norm such as the maximum absolute value or the root mean square

Jacobi Iteration History
[Table of n, x_n, y_n, x and y residuals, and x and y changes per iteration; the values did not survive the transcription]

Gauss-Seidel Iteration
- Apply Gauss-Seidel iteration to the same set of equations
- Gauss-Seidel general form and first step (uses most recent values): solve the first equation for x^(n+1) using y^(n), then solve the second for y^(n+1) using the just-computed x^(n+1)
- Faster convergence in Gauss-Seidel [the worked steps were lost in the transcription]

Relaxation Methods
- A relaxation factor omega greater than or less than 1 gives over- or under-relaxation
- Underrelaxation provides stability in problems that will not otherwise converge
- Overrelaxation provides speed in well-behaved problems
- phi^(n+1) = phi^(n) + omega [phi^(n+1),GS - phi^(n)] = (1 - omega) phi^(n) + omega [b' - sum of a'_nb times the most recent neighbor values]
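The relaxation correction above, phi_new = phi_old + omega (phi_GS - phi_old), can be sketched with the same hypothetical stand-in system (8x + y = 10, x + 8y = 17); the omega values scanned here are arbitrary illustrations.

```python
# Gauss-Seidel with a relaxation factor applied to each component as soon as
# its provisional value is computed. System and omegas are illustrative.

def sor_solve(omega, tol=1e-10, max_iter=10000):
    x = y = 0.0
    for n in range(1, max_iter + 1):
        x_gs = (10 - y) / 8                 # provisional Gauss-Seidel value
        x_new = x + omega * (x_gs - x)      # relaxed update
        y_gs = (17 - x_new) / 8             # uses the relaxed new x
        y_new = y + omega * (y_gs - y)
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return x_new, y_new, n
        x, y = x_new, y_new
    return x, y, max_iter

for omega in (0.8, 1.0, 1.2):
    x, y, n = sor_solve(omega)
    print(f"omega = {omega}: ({x:.6f}, {y:.6f}) in {n} iterations")
```

For a system this well conditioned, plain Gauss-Seidel (omega = 1) is already near optimal; the benefit of overrelaxation shows up on larger, more slowly converging grids.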

Relaxation Code
(reconstructed from the garbled listing; loop-bound names ni and nj, and the sign conventions on the normalized coefficients, are assumptions)

    do iter = 1, maxiter
       maxresid = 0.0
       do i = 2, ni - 1
          do j = 2, nj - 1
             old = f(i,j)
             f(i,j) = (1.0 - omega) * f(i,j) + omega * ( aN(i,j) * f(i,j+1) &
                      + aE(i,j) * f(i+1,j) + aS(i,j) * f(i,j-1)             &
                      + aW(i,j) * f(i-1,j) - b(i,j) )
             resid = abs( ( f(i,j) - old ) / f(i,j) )
             if ( resid > maxresid ) then
                maxresid = resid
             end if
          end do
       end do
       if ( maxresid < errtol ) exit
    end do
    if ( maxresid > errtol ) then
       print *, 'Not converged'
    else
       call doOutput( f, ni, nj )
    end if

(The slides split this over two charts, with the inner update "omitted on the next page"; it is shown in full here.)

Converging Iterations
- Have three different solutions: the correct solution to the differential equation, the exact solution to the finite-difference equations, and the current and previous iteration values
- Iterations should approach the exact solution of the finite-difference equations
- Since neither correct solution is known, we use norms of error estimates: the residual in the finite-difference equations and the change in the iteration value

Converging Iterations II
- At each grid node we can compute a relative change or a residual; both are zero at convergence
- Relative change: [phi_P^(n+1) - phi_P^(n)] / phi_P^(n+1)
- Residual: b' - a'_E phi_E - a'_W phi_W - a'_N phi_N - a'_S phi_S - phi_P

Converging Iterations III
- Need some overall measure of the convergence error
- Consider the error (relative change or residual) at each point as one component of a vector
- Use a vector norm for the overall error: the maximum absolute value, or the root-mean-squared error (two norm):
  eps_overall = sqrt( sum over all nodes of eps_node^2 / N_nodes )

Simple Numerical Example
- Look at a simple, two-dimensional case with diffusion only (velocities are zero)
- Dirichlet (fixed phi) boundary conditions
- Use the finite-volume equation from the original work on diffusion with a source term
- Set the source term to zero and use constant grid sizes and constant Gamma
- Solve the finite-volume equation for this case (v = 0; delta-x, delta-y fixed; constant Gamma)
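The two overall error norms mentioned above, maximum absolute value and root mean square, can be sketched as small helper functions applied to a residual vector r = b - A phi; the matrix, right-hand side, and partially converged iterate are made-up values.

```python
# Reduce a per-node error vector (here a residual) to a single overall
# convergence measure with two different vector norms.

from math import sqrt

def residual(A, b, phi):
    return [bi - sum(aij * pj for aij, pj in zip(row, phi))
            for row, bi in zip(A, b)]

def max_norm(v):
    return max(abs(x) for x in v)

def rms_norm(v):
    return sqrt(sum(x * x for x in v) / len(v))

A = [[4.0, -1.0], [-1.0, 4.0]]
b = [1.0, 11.0]                  # exact solution: phi = (1, 3)
phi = [0.9, 3.2]                 # a partially converged iterate

r = residual(A, b, phi)
print("residual:", r, "max:", max_norm(r), "rms:", rms_norm(r))
```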

v = 0; delta-x, delta-y, Gamma Constant
- The diffusion fluxes Gamma delta-y (dphi/dx) at the east and west faces and Gamma delta-x (dphi/dy) at the north and south faces balance; divide through by Gamma delta-y / delta-x
[the detailed flux-balance algebra on this slide was lost in the transcription]

v = 0; delta-x, delta-y, Gamma Constant II
- Define beta = delta-x / delta-y and rearrange terms:
  phi_E + phi_W + beta^2 (phi_N + phi_S) - 2 (1 + beta^2) phi_P = 0

Finite-Difference Equation
- This finite-volume form is typical of the two-dimensional Laplace equation
- If beta = delta-x / delta-y = 1, phi_P is the average of its four nearest neighbors:
  phi_{i,j} = (phi_{i+1,j} + phi_{i-1,j} + phi_{i,j+1} + phi_{i,j-1}) / 4
- Consider Dirichlet boundary conditions: phi is known at all boundary nodes
- Need to find the (N-1)(M-1) unknown values of phi on the grid

A Small Grid (N = 6, M = 5)
- Boundary nodes surround the interior nodes and the computational molecule [grid figure not recoverable from the transcription]

Grid Equations (beta = 1)
- The grid gives (N-1)(M-1) equations; only eight are shown on the slide, and the diagonal structure drawn there is noted as incorrect [the slide's equation display was lost in the transcription]

Execution Times and Errors
- Examine a square region with zero boundary conditions at x = 0, x = x_max, and y = 0; two cases for y = y_max:
- Case 1: constant value of phi(x); Case 2: phi(x) = sin(pi x)
- The first case has a discontinuity at y = y_max for x = 0 and x = x_max
- Use overrelaxation (SOR) with variable relaxation factors
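The beta = 1 averaging form above can be exercised on a Dirichlet problem. phi = x^2 - y^2 is harmonic and is reproduced exactly by the five-point scheme on a uniform grid, so a Gauss-Seidel solve should recover it to iteration tolerance; the grid size here is an arbitrary choice.

```python
# Gauss-Seidel sweeps on the five-point Laplace equation with Dirichlet
# boundaries taken from the harmonic function x^2 - y^2. Because the scheme
# is exact for this function, the converged interior should match it.

N = 8                                 # nodes per side, uniform spacing
h = 1.0 / (N - 1)
exact = lambda i, j: (i * h) ** 2 - (j * h) ** 2

# Boundary nodes carry the exact values; interior starts at zero.
phi = [[exact(i, j) if i in (0, N - 1) or j in (0, N - 1) else 0.0
        for j in range(N)] for i in range(N)]

for sweep in range(500):              # Gauss-Seidel: overwrite in place
    change = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                          + phi[i][j + 1] + phi[i][j - 1])
            change = max(change, abs(new - phi[i][j]))
            phi[i][j] = new
    if change < 1e-12:
        break

err = max(abs(phi[i][j] - exact(i, j))
          for i in range(N) for j in range(N))
print(f"converged after {sweep + 1} sweeps, max error {err:.2e}")
```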

Execution Times and Errors II
- Iterate until the maximum iteration difference in phi is about machine error
- Case 1: constant value of phi(x); Case 2: phi(x) = sin(pi x)
- The first case has a discontinuity at y = y_max for x = 0 and x = x_max
- Compare solutions to the exact solution of the differential equation and to the exact solution of the finite-difference equations

[Figure: Effect of Relaxation Factor on Execution Time. Square (L = H) region; zero boundary on left, right, and bottom; top boundary has phi(x,H) = sin(pi x / L); "Other" is a different code with phi(x,H) = 1. Curves of execution time (seconds) versus relaxation factor for several grid sizes; the data values did not survive the transcription]

Effect of Iterations on Errors
- Compare three error measures using the maximum value on the grid:
- True iteration error: the difference between the current value and the value found by an exact solution of the difference equations
- Difference in phi between two iterations
- Residual
- The exact error is the difference between the iteration value and the exact solution of the differential equation

[Figure: Effects of Iterations on Laplace Equation Errors. Square (L = H) region; zero boundary on left, right, and bottom; top boundary has u(x,H) = sin(pi x / L). Log-scale curves of iteration difference, residual, iteration error, and exact error versus iterations; the data values did not survive the transcription]

Will Iterations Converge?
- How do we ensure that an iterative process converges?
- Look at a general example of solving a system of simultaneous equations by iteration
- Write the equations in matrix form: A phi = b
- Develop a general iteration algorithm in matrix form and look at the criterion for the error to decrease

Matrix Equation Form
- Advanced solution techniques treat the matrix for the finite-difference equations, which leads to dimensional confusion
- Start with the grid (x and y indices); treat the problem as a matrix equation where the unknowns form a column vector (one-dimensional)
- The coefficients in the matrix form a two-dimensional display
- Examine the small-grid example [the slide's sample coefficient values were lost in the transcription]
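The three error measures compared above can be sketched for a single iterate of a small system whose exact discrete solution is known; all the numbers here are illustrative.

```python
# For one iterate of A*phi = b, compute the true iteration error (possible
# only because we know the exact discrete solution), the change between two
# iterates, and the residual, each reduced with the max-abs norm.

A = [[4.0, -1.0], [-1.0, 4.0]]
b = [1.0, 11.0]
phi_exact = [1.0, 3.0]               # exact solution of A*phi = b

phi_old = [0.8, 2.9]                 # two successive iterates (made up)
phi_new = [0.95, 2.98]

true_error = max(abs(p - e) for p, e in zip(phi_new, phi_exact))
change = max(abs(pn - po) for pn, po in zip(phi_new, phi_old))
residual = max(abs(bi - sum(a * p for a, p in zip(row, phi_new)))
               for row, bi in zip(A, b))
print(f"true error {true_error:.3g}, change {change:.3g}, "
      f"residual {residual:.3g}")
```

In practice only the change and the residual are computable; the point of the comparison is how well they track the true error.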

General Matrix Structure
- Confusion arises from two two-dimensional representations: the grid has two space dimensions with (N-1)(M-1) unknown nodes, while phi forms a one-dimensional column matrix of unknowns
- The coefficient matrix has five diagonals; the right-hand side has the boundary values

Previous Solution Matrix
- Complex coefficients have the same structure but different values
- Elements occur on diagonals; a_P is the principal diagonal

General Solution Matrix
- Like the one on the previous chart, but with more rows for more grid points
- Coefficients may not be the same; the matrix will be generally sparse
- Has a regular structure for simple grids; unstructured grids do not give a simple structure, but keep a sparse matrix
- We want to solve A phi = b, where phi is a vector of all the unknowns on the grid

General Iteration Approaches
- We want to solve A phi = b by iteration
- phi is the solution to the finite-difference equations; it has truncation error even with a perfect iteration solution
- Define the iteration error eps^(n) = phi^(n) - phi and the residual r^(n) = b - A phi^(n)
- Combine the equations: r^(n) = b - A phi^(n) = A (phi - phi^(n)) = -A eps^(n)
- r^(n) = -A eps^(n) relates the computable r^(n) to the eps^(n) that we want to control but cannot compute

General Iteration Approaches II
- One iteration step takes the old values phi^(n) to the new values phi^(n+1)
- General iteration (the two splitting matrices are written P and Q here; the original symbols were lost in the transcription): P phi^(n+1) = Q phi^(n) + b
- Methods select P and Q to accelerate the convergence of the iterations
- At convergence phi^(n+1) = phi^(n) = phi^(inf), so that (P - Q) phi^(inf) = b
- We are solving A phi = b, so we must have P - Q = A

Example of P and Q Matrices
- Heat conduction with constant properties and no source term, with delta-x = delta-y
- System of equations with nine unknowns; boundary values known; solve A phi = b
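The splitting idea above can be sketched for the Jacobi choice, where P is the diagonal of A and Q holds the negated off-diagonal terms. The names P and Q follow the reconstruction here, and the 2 x 2 matrix is illustrative.

```python
# Split A = P - Q (Jacobi: P = diagonal of A) and iterate
# P*phi_new = Q*phi_old + b. The fixed point must solve A*phi = b.

A = [[4.0, -1.0], [-1.0, 4.0]]
b = [1.0, 11.0]
n = len(A)

P = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
Q = [[P[i][j] - A[i][j] for j in range(n)] for i in range(n)]

# Consistency requirement from the notes: P - Q = A.
assert all(abs(P[i][j] - Q[i][j] - A[i][j]) < 1e-14
           for i in range(n) for j in range(n))

phi = [0.0] * n
for _ in range(200):          # P is diagonal, so each "solve" is a division
    phi = [(sum(Q[i][j] * phi[j] for j in range(n)) + b[i]) / P[i][i]
           for i in range(n)]
print("fixed point:", [round(x, 6) for x in phi])
```

Gauss-Seidel and SOR come from different choices of P (lower-triangular parts of A, scaled by the relaxation factor), but the same P - Q = A requirement applies.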

Example of P and Q Matrices II
- Iterate Gauss-Seidel from the lower left to the upper right using the newest values
- When solving for phi_{i,j}^(n+1) we will already have current-iteration values for the south and west neighbors, so those coefficients belong in P and the remaining ones in Q
[the slide's explicit P and Q matrices, and the corresponding SOR matrices with relaxation factor omega and modified diagonal entries, were lost in the transcription]

Next Steps
- Look at the general iteration equation: P phi^(n+1) = Q phi^(n) + b
- Get an equation for the evolution of the error vector eps^(n), representing the error in each unknown at step n
- How does the error at the new step, eps^(n+1), depend on the error at the old step, eps^(n)?
- How can we guarantee that the error does not grow at each step?

Convergence
- Start with P phi^(n+1) = Q phi^(n) + b and subtract P phi from each side, using (P - Q) phi = b
- Result: P (phi^(n+1) - phi) = Q (phi^(n) - phi), that is, P eps^(n+1) = Q eps^(n)
- Define the update delta^(n) = phi^(n+1) - phi^(n); then delta^(n) and r^(n) are our two computable measures of the error eps^(n)
- What makes the error decrease?

Does the Error Decrease?
- The new error is given by eps^(n+1) = P^{-1} Q eps^(n)
- Does the error go to zero as we take more iterations?

Matrix Eigenvalues: A x = lambda x
- Used to determine convergence
- If a matrix A multiplies a vector x and produces a constant lambda times x, then x is an eigenvector of A and lambda is the eigenvalue associated with x
- An n by n matrix can have up to n linearly independent eigenvectors
- If the n eigenvectors are linearly independent, we can expand any n-component vector in terms of the eigenvectors

Error Decrease Depends on P^{-1}Q
- Assume that P^{-1}Q has a complete set of eigenvectors x^(k), so we can expand the initial error vector in terms of these eigenvectors: eps^(0) = sum_k a_k x^(k)
- The iteration process then gives the following results, where lambda_k is the eigenvalue (P^{-1}Q x^(k) = lambda_k x^(k)):
  eps^(1) = sum_k a_k lambda_k x^(k),  eps^(2) = sum_k a_k lambda_k^2 x^(k), ...
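The eigenvalue argument above can be checked numerically for the Jacobi iteration matrix of a small illustrative system: the max-norm of the error vector shrinks by roughly the spectral radius at each step.

```python
# For Jacobi on A = [[4,-1],[-1,4]] the iteration matrix is
# M = P^{-1} Q with entries -A[i][j]/A[i][i] off the diagonal.
# Its eigenvalues are +/- 0.25, so errors shrink by a factor 4 per step.

A = [[4.0, -1.0], [-1.0, 4.0]]
M = [[0.0, 0.25], [0.25, 0.0]]       # P^{-1} Q for Jacobi

def apply(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Estimate the size of the dominant eigenvalue from the max-norm ratio of
# successive iterates (a power-iteration-style estimate).
v = [1.0, 0.7]
for _ in range(50):
    w = apply(M, v)
    rho = max(abs(x) for x in w) / max(abs(x) for x in v)
    v = w
print(f"estimated spectral radius: {rho:.6f}")

# The error after n steps shrinks like rho**n.
eps = [1.0, -0.3]
for _ in range(10):
    eps = apply(M, eps)
print(f"error norm after 10 steps: {max(abs(x) for x in eps):.3e}")
```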

Iteraton olutons arch 7, onvergence II rom last chart: ( (n) (n) ) r (n) ε (n) efne update δ (n) (n) (n) ( (n) (n) )δ (n) r (n) ε (n) Have two computable measures of error, ε (n) ; these are δ (n) and r (n) What makes error decrease? oes the rror ecrease? Iteraton equaton: (n) (n) b t convergence, (n) (n), so that b ubtract b from (n) (n) b gvng ( (n) ) ( (n) ), whch gves ε (n) ε (n) ew error gven by ε (n) - ε (n) oes the error go to zero as we take more teratons? 9 atrx genvalues: x λx Used to determne convergence If a matrx,, multples a vector x and produces a constant λ tmes x x s an egenvector of λ s the egenvalue assocated wth x n n by n matrx can have up to n lnearly ndependent egenvalues If the n egenvectors are lnearly ndependent we can expand any n component vector n terms of the egenvectors ε ε rror ecrease epends on - () () ssume that - has a complete set of egenvalues, x (k) so we can expand the ntal error vector n terms of these egenvectors ε () Σ k a k x (k) Iteraton process gves followng results where λ k s egenvalue ( - x (k) λ k x (k) ) ε ε () () akx( k ) k akλkx( k ) k a x k k a λ x k k k a λ x ( k ) k k ( k ) k ( k ) akλkx( k ) k General rror quaton Reasonng by nducton from the last two equatons gves ε (n) as follows ε k a λ x k n k ( k ) or error to become small as teratons ncrease, we must have all λ k < argest λ k λ, called spectral radus, wll domnate sum for large n ε (n) a λ n x () General rror quaton II To control error n ε (n) a λ n x () requre factor a λ n ln δ or λ n δ/a n Take logs of both sdes and ln solve for n Recall that ln( x) x for small x When λ s close to, ln λ wll be a small number and n wll be large eek teraton matrces wth small λ δ a λ Want ths to reach desred error, δ 9 omputatonal lud ynamcs 9

Iteraton olutons arch 7, OR pectral Radus Use T to compute the spectral radus, λ maxmum λ for OR nd optmum ω (mnmum λ ) by tral and error ω..7.8 λ.9.79.7.8 ω.7.7.7 λ.9778.7.7.7 ω.8.8.87 λ.99.898.8.87 agonal omnance The real requrement s that the largest egenvalue be less than one n absolute value Ths s guaranteed n the soluton matrx s dagonally domnant Ths means that the dagonal coeffcent (n absolute value) s greater than or equal to the sum of the absolute values of all other coeffcents n the equaton agonal omnance II If the coeffcents n the matrx are a km, the rules for dagonal domnance n an x matrx are a kk > Σ m a km for k a kk > Σ m a km for at least one value of k General fnte-dfference equatons satsfy the > condton and boundary condtons satsfy the > condton Upwnd dfference dagonally domnant dvanced ethods ee text for greater dscusson ethods use dfferent teraton matrces to get faster convergence lternatng recton Implct (I) tone s ethod onjugate Gradent ultgrd ultgrd generally consdered fastest method for calculatons 7 8 ultgrd ethod olve equatons on a set of dfferent grds nalyss of error shows that convergence rate depends on grd sze Gettng a soluton to a coarse grd then usng those results for the fne grd gves soluton faster Use prolongaton and restrcton to get results between fne and coarse grde ultgrd ethod II fferent patterns used; example below tart wth fne grd fter partal convergence on fnd grd use coarser grd and do teratons to get more convergence on that grd ontnue to coarsest grd and get convergence there rolong soluton to fner grds and get converged soluton on each grd nally get converged soluton on fnest grd 9 9 omputatonal lud ynamcs

Iteraton olutons arch 7, 9 omputatonal lud ynamcs Thomas lgorthm Used for smple soluton of onedmensonal problems an be extended to mproved teraton approach for two- or three-dmensonal problems asc problem: a W W a a b Generalzed one-dmensonal problem k f k- k f k k f k k ook at matrx form Thomas lgorthm II General format for trdagonal equatons O Thomas lgorthm III The matrx s called a trdagonal matrx Has prncpal dagonal, one dagonal above prncpal dagonal, and one dagonal below prncpal dagonal an apply tradtonal Gauss elmnaton for soluton of smultaneous lnear equatons to get smple upper trangular form mple equatons to obtan ths Thomas lgorthm IV O Gauss elmnaton upper trangular form Thomas lgorthm V orward computatons Intal: / / or, -: x ack substtute: x x Get last x value frst Thomas xample 8 9 8 7 ( ) ( ) ( ) ontnue to fnd,,, and

Iteraton olutons arch 7, ack substtuton (shows and results) Orgnal equaton set shows results are correct Thomas xample II 7 9 ( ) 8 8 7 Thomas for Two mensons Two dmensonal equaton: a a - a j a a W -j b ook at one-dmensonal approach n x drecton: a j a a W -j b a a - Use Thomas algorthm n x-drecton a ) W j a a b a a ) ) j ) ) ) j j ext apply algorthm n y drecton 8 Thomas for Two mensons II y-drecton form a ) a a b a a ) ) j ) ) ) W j Ths approach nvolves more calculatons per teraton, but t can reduce error more quckly by gettng smultaneous solutons of results along one coordnate drecton an be extended to three dmensons Unstructured Grds o not have k ndexng system that regular grds have odes numbered sequentally wth sngle ndex ust store nformaton on numbers of nearest neghbors for each node quaton matrx s stll sparse, but not so well structured o not have all coeffcents on or 7 dagonals 9 7 onlnear roblems equatons are nonlnear system of dfference equatons Have terms lke u and uu Have to solve for u, v, w, T, p, etc. Typcally lnearze problem by wrtng terms lke u as u (n) (n) to solve for (n) Once teraton n s complete, update lnearzed terms Usually requres underrelaxaton 7 9 omputatonal lud ynamcs