Gregory Capra. April 7, 2016
1 Occidental College, April 7, 2016
2 Outline
Anomalous diffusion
Crank-Nicolson Method
Gaussian Elimination
Convergence, Memory, & Time Complexity
Conjugate Gradient Squared (CGS) Algorithm
Concluding remarks
3-4 Standard Diffusion
You should be familiar with these:
food coloring in water
cigarette smoke into the air
water into cooking noodles (pasta)
and maybe these:
oxygen from plant cells into the air
oxygen from blood cells in the human body's blood stream into muscles
5-8 Everything isn't Standard
Standard diffusion: random motion & mean-free path; Gaussian distribution.
$$\sigma_r^2 \propto Dt, \qquad \frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2}$$
Anomalous diffusion: inter-dependencies & random fluctuations; Pareto distribution.
$$\sigma_r^2 \propto Dt^\alpha, \qquad \frac{\partial u}{\partial t} = D\,\frac{\partial^\alpha u}{\partial x^\alpha}$$
9 Gaussian vs. Pareto Distribution [figure]
10 Anomalous Diffusion
Definition: Anomalous diffusion is a diffusion process that has a non-linear relationship to time, i.e., it is affected by random fluctuations and dependencies. [1]
Examples include: transport of electrons in a photocopier; foraging behavior of animals; trapping of contaminants in groundwater; proteins across cell membranes.
Super-diffusion: exceeds the linear relationship with time, faster than classical diffusion: $\sigma_r^2 \propto Dt^\alpha$, $\alpha > 1$.
Sub-diffusion: less than linear, slower than classical diffusion: $\sigma_r^2 \propto Dt^\alpha$, $\alpha < 1$.
11 Fractional Derivatives, Anybody?
The basic definition of the whole derivative of a function $f(x)$:
$$\frac{d}{dx}f(x) = \lim_{h \to 0} \frac{f(x) - f(x-h)}{h}$$
Repeated composition of this operation leads to
$$\frac{d^n}{dx^n}f(x) = \lim_{h \to 0} \frac{1}{h^n} \sum_{j=0}^{n} (-1)^j \binom{n}{j} f(x - jh), \qquad n \in \mathbb{Z}^+$$
The Grünwald-Letnikov fractional $p$-th order derivative of $f(x)$ [2]:
$$\frac{d^p}{dx^p}f(x) = \lim_{h \to 0} \frac{1}{h^p} \sum_{j=0}^{\lfloor (x - x_0)/h \rfloor} \frac{\Gamma(j-p)}{\Gamma(-p)\,\Gamma(j+1)}\, f(x - jh), \qquad p \in \mathbb{R}^+$$
12 Half-Derivative of x [figure]
Blue: $f(x) = x$. Red: $f'(x) = 1$. Purple: $f^{(1/2)}(x) = 2\sqrt{x/\pi}$.
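The Grünwald-Letnikov limit above can be checked numerically by truncating it at a finite step $h$ and using the recursive form of its weights. A minimal Python sketch (the function name and the step size are illustrative choices, not from the slides), reproducing the half-derivative of $f(x) = x$:

```python
import math

def gl_fractional_derivative(f, x, p, h):
    """Grunwald-Letnikov p-th derivative of f at x (lower terminal 0),
    truncated at step h; weights follow w_0 = 1, w_j = w_{j-1}*(j-1-p)/j,
    which equals (-1)^j * binomial(p, j)."""
    n = int(x / h)
    w = 1.0
    total = w * f(x)
    for j in range(1, n + 1):
        w *= (j - 1 - p) / j
        total += w * f(x - j * h)
    return total / h**p

# Half-derivative of f(x) = x at x = 1; the exact value is 2*sqrt(x/pi).
half = gl_fractional_derivative(lambda t: t, 1.0, 0.5, 1e-3)
exact = 2 / math.sqrt(math.pi)
```

With $h = 10^{-3}$ the truncated sum agrees with $2\sqrt{x/\pi}$ to within about $10^{-2}$, consistent with the method's first-order ($O(h)$) accuracy.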
13-17
$$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \qquad n \neq -1$$
The result of integrating a function is non-local; it depends on how the function behaves over the range for which the integration is performed, not just at a single point.
$$\frac{d}{dx}x^n = n x^{n-1}$$
However, differentiation is thought of as local, because whole derivatives happen to possess this attribute.
The apparent paradoxes of fractional derivatives arise from the fact that, in general, differentiation is non-local, just as integration is!
18-23 Where Do We Go From Here?
Anomalous diffusion is modeled with Partial Differential Equations (PDEs) that incorporate these fractional derivatives.
Non-local → knowledge of previous solutions → need to store these solutions → full matrices.
Most problems cannot be solved directly; instead, you have to approximate the solution.
24-25
Often the framework for setting up an approximate solution is with a discretization.
$$\int_a^b f(x)\,dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i)\,\Delta x$$
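The Riemann-sum limit above becomes computable once the limit is truncated at a finite $n$. A small sketch (the integrand and $n$ are illustrative):

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n right-endpoint
    rectangles of width (b - a) / n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

# Integral of x^2 over [0, 1]; the exact value is 1/3.
approx = riemann_sum(lambda x: x**2, 0.0, 1.0, 10000)
```

The error of the right-endpoint rule shrinks like $1/n$, so $n = 10{,}000$ already gives several correct digits.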
26 Typical Initial-Boundary Value Problem (IBVP)
The two-sided space-fractional diffusion equation, $1 < \alpha < 2$:
$$\frac{\partial u(x,t)}{\partial t} - d_+(x,t)\,\frac{\partial^\alpha u(x,t)}{\partial_+ x^\alpha} - d_-(x,t)\,\frac{\partial^\alpha u(x,t)}{\partial_- x^\alpha} = f(x,t).$$
Domain: $x_L \le x \le x_R$, $0 < t \le T$.
Boundary conditions: $u(x_L, t) = 0$, $u(x_R, t) = 0$.
Initial condition: $u(x, 0) = u_0(x)$.
27 IBVP cont.
Discretization [10]:
step size $h = (x_R - x_L)/N$
time step $\Delta t = T/M$
spatial partition: $x_i = x_L + ih$ for $i = 0, 1, \ldots, N$
temporal partition: $t_m = m\,\Delta t$ for $m = 0, 1, \ldots, M$
notation: $u_i^m = u(x_i, t_m)$, $d_{+,i}^m = d_+(x_i, t_m)$, $d_{-,i}^m = d_-(x_i, t_m)$, and $f_i^m = f(x_i, t_m)$.
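The discretization above translates directly into arrays. A small sketch (the concrete values of $N$ and $M$ are arbitrary placeholders):

```python
# Uniform space-time grid for the IBVP; N spatial intervals, M time steps.
x_L, x_R, T = 0.0, 1.0, 1.0
N, M = 10, 20
h = (x_R - x_L) / N                        # spatial step size
dt = T / M                                 # time step
x = [x_L + i * h for i in range(N + 1)]    # x_0, ..., x_N
t = [m * dt for m in range(M + 1)]         # t_0, ..., t_M
```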
28 Simplifying the Equation
To replace the fractional derivatives we use the Grünwald approximations, shown earlier. The resulting equation is:
$$\frac{\partial u(x,t)}{\partial t} - \frac{d_+(x,t)}{h^\alpha}\sum_{k=0}^{i+1} g_k^{(\alpha)}\, u_{i-k+1}^m - \frac{d_-(x,t)}{h^\alpha}\sum_{k=0}^{N-i+1} g_k^{(\alpha)}\, u_{i+k-1}^m + O(h) = f(x,t).$$
The Grünwald weights are defined by $g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}$, where $\binom{\alpha}{k}$ are binomial coefficients of order $\alpha$. These weights satisfy the following recursive relation [14]:
$$g_0^{(\alpha)} = 1, \qquad g_k^{(\alpha)} = \left(1 - \frac{\alpha + 1}{k}\right) g_{k-1}^{(\alpha)} \quad \text{for } k \ge 1.$$
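The recursive relation for the Grünwald weights is cheap to evaluate; a short sketch (function name is illustrative), using the presentation's $\alpha = 1.8$:

```python
def grunwald_weights(alpha, K):
    """Weights g_0..g_K via g_0 = 1, g_k = (1 - (alpha+1)/k) * g_{k-1};
    each g_k equals (-1)^k * binomial(alpha, k)."""
    g = [1.0]
    for k in range(1, K + 1):
        g.append((1 - (alpha + 1) / k) * g[-1])
    return g

g = grunwald_weights(1.8, 6)
# g[1] = -alpha, and g[2] = alpha*(alpha-1)/2 = 0.72 for alpha = 1.8.
```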
29-33 Crank-Nicolson (CN) Method
The general diffusion equation:
$$\frac{\partial u}{\partial t} = F\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^\alpha u}{\partial x^\alpha}\right)$$
The Crank-Nicolson finite difference scheme [11]:
$$\frac{u_i^{m+1} - u_i^m}{\Delta t} = \frac{1}{2}\left[ F_i^{m+1}\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^\alpha u}{\partial x^\alpha}\right) + F_i^m\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^\alpha u}{\partial x^\alpha}\right) \right]$$
The purpose of the finite difference method is to approximate the solution to a differential equation by approximating the derivatives with finite differences.
This method is second-order in time, i.e., $O((\Delta t)^2)$, and first-order in space, i.e., $O(\Delta x)$. These are convergence rates. [11]
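For the classical case ($\alpha = 2$, $F = D\,\partial^2 u/\partial x^2$) each Crank-Nicolson step reduces to one tridiagonal solve. A self-contained sketch with zero boundary values (the Thomas-solver helper and all parameter values are illustrative, not from the slides):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(u, r):
    """One Crank-Nicolson step for u_t = D u_xx with zero boundaries;
    u holds the interior unknowns u_1..u_{N-1}, r = D*dt/h^2."""
    n = len(u)
    rhs = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        rhs.append(u[i] + 0.5 * r * (left - 2.0 * u[i] + right))
    return thomas([-0.5 * r] * n, [1.0 + r] * n, [-0.5 * r] * n, rhs)

N, D, dt, steps = 50, 1.0, 0.001, 100
h = 1.0 / N
u = [math.sin(math.pi * i * h) for i in range(1, N)]   # interior nodes only
for _ in range(steps):
    u = cn_step(u, D * dt / h**2)
```

Against the exact decaying mode $e^{-D\pi^2 t}\sin(\pi x)$, the scheme tracks the solution closely on this grid, as the $O((\Delta t)^2 + h^2)$ rate predicts.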
34-36 Richardson Extrapolation
It was stated earlier that the CN method converges at a rate of $O((\Delta t)^2) + O(\Delta x)$.
Convergence is dragged down by the $O(\Delta x)$ term; it will follow a linear rate.
In order to improve to second order in space, we will use the well-known Richardson extrapolation [11], so $O(\Delta x) \to O((\Delta x)^2)$.
37-38 How it Works
The extrapolated solution:
$$2\,u_i^{m+1}\!\left(\tfrac{h}{2}\right) - u_i^{m+1}(h)$$
For example, the regular solution ($N = 5$) and the twice-refined solution ($N = 10$): interior nodes $u_1, \ldots, u_4$ on the coarse grid coincide with $v_2, v_4, v_6, v_8$ among $v_1, \ldots, v_9$ on the fine grid.
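The same $2 \cdot (\text{fine}) - (\text{coarse})$ cancellation works on any first-order approximation. Here it is demonstrated on a forward-difference derivative rather than the presentation's PDE solver (an analogy; the names are illustrative):

```python
import math

def forward_diff(f, x, h):
    """First-order forward difference for f'(x); leading error is O(h)."""
    return (f(x + h) - f(x)) / h

h = 0.1
d_coarse = forward_diff(math.sin, 1.0, h)       # step h
d_fine = forward_diff(math.sin, 1.0, h / 2)     # step h/2
extrap = 2 * d_fine - d_coarse                  # O(h) terms cancel -> O(h^2)
exact = math.cos(1.0)
```

The extrapolated value is roughly two orders of magnitude closer to $\cos(1)$ than either raw difference, illustrating the jump from first- to second-order accuracy.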
39 Test Problem
Fractional order: $\alpha = 1.8$.
Spatial domain: $x_L = 0$, $x_R = 1$. Time interval: $0 \le t \le 1$.
$d_+$ coefficient: $\Gamma(1.2)\,x^\alpha$. $d_-$ coefficient: $\Gamma(1.2)\,(1-x)^\alpha$.
Source term: $f(x,t)$ chosen so that the equation is satisfied by the true solution below.
Initial condition: $u_0(x) = 16x^2(1-x)^2$.
True solution: $u(x,t) = 16\,e^{-t}\,x^2(1-x)^2$.
40 Measuring Error
Since we know the true solution, we can generate an error vector by taking the difference between the true solution vector and our generated approximation. The two most common ways to analyze that error vector are with the $L^2$ or $L^\infty$ norm, defined below:
$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}, \qquad \|x\|_\infty = \max_{i=1,\ldots,n} |x_i|$$
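Both norms are one-liners over the error vector; a minimal sketch (function names are illustrative):

```python
def l2_norm(v):
    """Euclidean (L2) norm: sqrt of the sum of squared entries."""
    return sum(vi * vi for vi in v) ** 0.5

def linf_norm(v):
    """Max (L-infinity) norm: largest entry in absolute value."""
    return max(abs(vi) for vi in v)

e = [3.0, -4.0, 0.0]   # example error vector: L2 norm 5, L-inf norm 4
```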
41 Non-Extrapolated Results
Without Richardson extrapolation: [table: $N = M$, $\Delta x$, error, error ratio]
42 Extrapolated Results
With Richardson extrapolation: [table: $N = M$, $\Delta x$, error, error ratio]
43 Non-Extrapolated Graph [figure]
44 Extrapolated Graph [figure]
45-49 Crank-Nicolson cont.
When applying Crank-Nicolson to our PDE from earlier, the resulting equation can be expressed in matrix form as:
$$\left(I + \frac{\Delta t}{2h^\alpha} A^{m+1}\right) u^{m+1} = \left(I - \frac{\Delta t}{2h^\alpha} A^m\right) u^m + \frac{\Delta t}{2}\left(f^m + f^{m+1}\right).$$
This has the form $Ax = b$. What can we use to solve this? Gaussian elimination: $x = A^{-1}b$.
50-53 Gaussian Elimination
In order to get $A^{-1}$ from $A$, we must use Gaussian elimination. An overview of this process:
$A$ → (series of row operations) → $I$ (identity matrix); simultaneously, $I$ → (same series of row operations) → $A^{-1}$. [9]
How efficient is this process?
54-56 Gaussian Elimination cont.
Gaussian elimination requires $O(N^3)$ time, demonstrated by counting the number of arithmetic operations needed to produce each entry of the reduced matrix. [9]
What does this mean? Then we must compute $A^{-1}b$; this matrix-vector product is $O(N^2)$. Total computational cost per time step: $O(N^3)$.
Conclusion: this is VERY inefficient. Can we do better?
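The $O(N^3)$ count comes from the triple loop in the elimination phase. A direct sketch of Gaussian elimination with partial pivoting, solving $Ax = b$ without explicitly forming $A^{-1}$ (function name is illustrative):

```python
def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    The nested k/i/j loops below are the source of the O(N^3) work."""
    n = len(b)
    A = [row[:] for row in A]      # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest pivot entry.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution, O(N^2).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

sol = gaussian_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```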
57-59 Room for Improvement
We saw earlier how matrix inversion is expensive: $O(N^3)$ computation and $O(N^2)$ memory.
To replace this costly inversion process done via Gaussian elimination, we will use an iterative method known as the Conjugate Gradient Squared (CGS) Method.
The goal is to get the resultant vector at each time step with significantly less computation.
60-63 What Is An Iterative Method?
Start with an initial guess $x_0$, evaluate the residual, make some modification, and generate the next approximation. The residual: $b - Ax_i$.
After every approximation, a residual is calculated and compared to the convergence criterion, also known as the tolerance. A fixed tolerance was used in the test cases.
Error analysis is crucial in carrying out an iterative method: the $L^2$ and $L^\infty$ norms.
64 Trust the Process
The iterative process [7]:
$$x_i = x_{i-1} + \alpha\, r_{i-1}$$
$x_i$: current solution; $x_{i-1}$: previously generated solution; $r_{i-1}$: the residual vector $r$ (lying in the direction of steepest descent); $\alpha$: the factor that identifies how far down that line to go.
The residual in this case is $b - Ax_i$.
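The update $x_i = x_{i-1} + \alpha r_{i-1}$ above can be sketched directly. This version chooses $\alpha$ by the steepest-descent rule $\alpha = (r^\top r)/(r^\top A r)$, following Shewchuk's exposition [7] (the function name, test matrix, and iteration count are illustrative; $A$ must be symmetric positive definite for this step choice):

```python
def steepest_descent(A, b, x0, iters):
    """Residual iteration x <- x + alpha*r with the steepest-descent
    step alpha = (r.r)/(r.Ar); A symmetric positive definite."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        rr = sum(ri * ri for ri in r)
        if rr == 0.0:
            break
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(r[i] * Ar[i] for i in range(n))
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x

sd = steepest_descent([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0], 50)
# Converges toward the exact solution (1/11, 7/11).
```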
65 CG vs. CGS
Conjugate Gradient (CG) Method:
requires a symmetric matrix
simpler algorithm, requires less calculation
Conjugate Gradient Squared (CGS) Method:
does not require symmetry
requires more scalar computations, including 6 matrix-vector multiplications
an extended version of CG
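The slides do not list the CGS recurrences themselves; the sketch below follows the standard Sonneveld formulation (an assumption on my part; see [7] and [8] for background on these Krylov methods). The matrix enters only through a user-supplied `matvec`, which is what later makes the FFT-based fast multiply drop in cleanly:

```python
def cgs(matvec, b, x0, tol=1e-10, max_iter=200):
    """Conjugate Gradient Squared; matvec(v) returns A v. The matrix
    need not be symmetric. Standard (unpreconditioned) recurrences."""
    n = len(b)
    x = x0[:]
    r = [b[i] - matvec(x)[i] for i in range(n)]
    rtilde = r[:]                       # fixed shadow residual
    rho_old = 1.0
    p = [0.0] * n
    q = [0.0] * n
    for it in range(max_iter):
        rho = sum(rtilde[i] * r[i] for i in range(n))
        if rho == 0.0:
            break                       # breakdown
        if it == 0:
            u = r[:]
            p = u[:]
        else:
            beta = rho / rho_old
            u = [r[i] + beta * q[i] for i in range(n)]
            p = [u[i] + beta * (q[i] + beta * p[i]) for i in range(n)]
        v = matvec(p)
        alpha = rho / sum(rtilde[i] * v[i] for i in range(n))
        q = [u[i] - alpha * v[i] for i in range(n)]
        uq = [u[i] + q[i] for i in range(n)]
        x = [x[i] + alpha * uq[i] for i in range(n)]
        Auq = matvec(uq)
        r = [r[i] - alpha * Auq[i] for i in range(n)]
        rho_old = rho
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
    return x

# Nonsymmetric 2x2 example: exact solution is (0.1, 0.6).
sol = cgs(lambda v: [4 * v[0] + v[1], 2 * v[0] + 3 * v[1]], [1.0, 2.0], [0.0, 0.0])
```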
66 CGS Benefit
The CGS method has a computational cost of $O(N^2)$ per time step, as opposed to $O(N^3)$.
Note: did you catch that the matrix $A$ is still being used in CGS? Still $O(N^2)$ memory.
67-70 Less is More, Or Just Better
Claim: $A^{m+1}$ can be stored using only $O(N)$ memory. Consider the following decomposition:
$$A^{m+1} = \operatorname{diag}\!\left(d_{+,i}^{m+1}\right) A_L^{m+1} + \operatorname{diag}\!\left(d_{-,i}^{m+1}\right) A_R^{m+1}$$
The diffusion coefficients line the main diagonal of their own matrices.
$A_L^{m+1}$ and $A_R^{m+1}$ are Toeplitz matrices made up of the Grünwald weights.
Definition: Toeplitz matrices are matrices that are constant, left-to-right, along all diagonals.
71 Circulant Embedding
Definition: Circulant matrices are matrices with the property that each row or column is a rotation of another.
A Toeplitz matrix $T_N$ can be embedded into a circulant matrix $C_{2N}$ with the help of a companion matrix $B_N$ built from the remaining diagonals:
$$C_{2N} = \begin{pmatrix} T_N & B_N \\ B_N & T_N \end{pmatrix}$$
(the slide illustrated this with a $3 \times 3$ Toeplitz matrix $T_3$, its companion $B_3$, and the resulting $C_6$).
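The embedding can be written down concretely: the circulant's first column stacks the Toeplitz first column, one arbitrary entry, and the Toeplitz first row reversed. A sketch verifying that $T x$ equals the first $N$ entries of $C\,[x; 0]$ (all helper names and the sample entries are illustrative):

```python
def toeplitz(col, row):
    """T[i][j] = col[i-j] for i >= j, row[j-i] for i < j (col[0] == row[0])."""
    n = len(col)
    return [[col[i - j] if i >= j else row[j - i] for j in range(n)]
            for i in range(n)]

def circulant(c):
    """C[i][j] = c[(i-j) mod n]: each column is a rotation of the first."""
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

col = [1.0, 2.0, 3.0]                 # first column of T_3
row = [1.0, 4.0, 5.0]                 # first row of T_3
T = toeplitz(col, row)
c = col + [0.0] + row[:0:-1]          # first column of C_6: [1,2,3,0,5,4]
C = circulant(c)

x = [1.0, 1.0, 1.0]
y_direct = matvec(T, x)                       # T x directly
y_embed = matvec(C, x + [0.0] * 3)[:3]        # first N entries of C [x; 0]
```

The zero-padded product through $C_6$ reproduces $T_3 x$ exactly, which is what lets Toeplitz multiplies piggyback on circulant structure.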
72-77 Fast Matrix-Vector Multiplication
A circulant matrix $C_{2N}$ can be decomposed as
$$C_{2N} = F_{2N}^{-1}\operatorname{diag}(F_{2N}\,c)\,F_{2N}$$
where $F_{2N}$ is the discrete Fourier transform matrix and $c$ is the first column of $C_{2N}$. This decomposition helps reduce the complexity of matrix-vector multiplications:
We want to compute $A^{m+1}u^{m+1}$, which means we will need $C_{2N}^{m+1}u_{2N}^{m+1}$.
$F_{2N}\,u_{2N}^{m+1}$ can be carried out in $O(N \log N)$ operations via the FFT.
$C_{2N}^{m+1}u_{2N}^{m+1}$ can therefore be evaluated in $O(N \log N)$ operations.
$A_{L,N}^{m+1}u_N^{m+1}$ and $A_{R,N}^{m+1}u_N^{m+1}$ can be evaluated in $O(N \log N)$ operations.
$A^{m+1}u^{m+1}$ can be evaluated in $O(N \log N)$ operations!
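The decomposition above says a circulant matvec is a pointwise product in Fourier space. The sketch below demonstrates the identity $Cx = F^{-1}\big((Fc)\odot(Fx)\big)$ with a naive $O(n^2)$ DFT for self-containment; in practice a true FFT replaces it to reach the claimed $O(N\log N)$ (function names are illustrative):

```python
import cmath

def dft(v, inverse=False):
    """Naive O(n^2) discrete Fourier transform; an FFT does the same
    transform in O(n log n)."""
    n = len(v)
    s = 1 if inverse else -1
    out = [sum(v[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    if inverse:
        out = [z / n for z in out]
    return out

def circulant_matvec(c, x):
    """C x where c is the first column of the circulant C:
    C x = F^{-1}( (F c) * (F x) ), elementwise product in Fourier space."""
    fc, fx = dft(c), dft(x)
    y = dft([a * b for a, b in zip(fc, fx)], inverse=True)
    return [z.real for z in y]

# Multiplying by the unit vector e_0 recovers the first column of C.
y = circulant_matvec([1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 0.0, 0.0])
```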
78-79
This process brings the computational cost of CGS from $O(N^2)$ to $O(N \log N)$ per time step.
$$C_{2N} = F_{2N}^{-1}\operatorname{diag}(F_{2N}\,c)\,F_{2N}$$
Notice how all we need is a diagonal from each circulant matrix? $O(N)$ memory achieved!
Overview: decompose $A$ → embed the Toeplitz parts into circulant matrices → decompose the circulant matrices using the FFT → evaluate $C_L u$ and $C_R u$.
Reduced memory and computational complexity!
80 Complexity Analysis
Naive vs. fast CGS: [table: $N = M$, naive time (s), fast time (s), and timing ratios; at the largest sizes the naive solver grew to roughly 8 hours and eventually did not finish (DNF), versus about 14 minutes for the fast solver]
81 What Did We Accomplish?
Richardson Extrapolation: overall convergence increased from first to second order in space.
Properties of Toeplitz & circulant matrices: memory required reduced from $O(N^2)$ to $O(N)$.
Iterative method (CGS) & FFT decomposition: time complexity improved from $O(N^3)$ to $O(N \log N)$ per time step.
82 Future Advances
Although this algorithm handles the one-dimensional case well (variables $x$ and $t$), most problems in nature are two- or three-dimensional in space, so $u = u(x, y, t)$ or $u = u(x, y, z, t)$.
Therefore the Crank-Nicolson Method, CGS, and the fast matrix-vector multiplication technique all need to be applied to these extended differential equations.
Additional concern: the moving boundary problem.
83 Acknowledgments
Special thanks to: Occidental College Library Computers; Professor Treena Basu.
84-87 References
[1] Joseph Klafter and Igor M. Sokolov. Anomalous diffusion spreads its wings. Physics World (physicsweb.org), August 2005.
[2] Donato Cafagna. Fractional calculus: a mathematical tool from the past for present engineers. IEEE Industrial Electronics Magazine, 2007.
[3] Treena S. Basu and Hong Wang. A fast second-order finite difference method for space-fractional diffusion equations. International Journal of Numerical Analysis and Modeling, Volume 9, Number 3, 2012.
[4] Hong Wang and Treena S. Basu. A fast finite difference method for two-dimensional space-fractional diffusion equations. SIAM Journal on Scientific Computing, Vol. 34, No. 5, 2012.
[5] H. O. A. Wold and P. Whittle. A model explaining the Pareto distribution of wealth. Econometrica, Volume 25, Number 4, October 1957.
[6] G. E. Uhlenbeck and L. S. Ornstein. On the theory of the Brownian motion. Physical Review, Volume 36, Number 5, September 1930.
[7] Jonathan R. Shewchuk. An Introduction to the Conjugate Gradient Method Without the Agonizing Pain. Carnegie Mellon University, Pittsburgh, PA, 1994.
[8] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, third edition, 1996.
[9] John H. Mathews and Kurtis D. Fink. Numerical Methods Using MATLAB. Prentice Hall, Upper Saddle River, NJ, third edition.
[10] Mark M. Meerschaert, Hans-Peter Scheffler, and Charles Tadjeran. Finite difference methods for two-dimensional fractional dispersion equation. Journal of Computational Physics, 211, 2006.
[11] Charles Tadjeran and Mark M. Meerschaert. A second-order accurate numerical method for the two-dimensional fractional diffusion equation. Journal of Computational Physics, 220, 2007.
[12] Philip J. Davis. Circulant Matrices. John Wiley & Sons, New York, 1979.
[13] Robert M. Gray. Toeplitz and circulant matrices: a review. Foundations and Trends in Communications and Information Theory, 2, 2006.
[14] Mark M. Meerschaert and Charles Tadjeran. Finite difference approximations for two-sided space-fractional partial differential equations. Applied Numerical Mathematics, Volume 56, Issue 1, January 2006.
A Comparison of Solving the Poisson Equation Using Several Numerical Methods in Matlab and Octave on the Cluster maya Sarah Swatski, Samuel Khuvis, and Matthias K. Gobbert (gobbert@umbc.edu) Department
More informationCache Oblivious Stencil Computations
Cache Oblivious Stencil Computations S. HUNOLD J. L. TRÄFF F. VERSACI Lectures on High Performance Computing 13 April 2015 F. Versaci (TU Wien) Cache Oblivious Stencil Computations 13 April 2015 1 / 19
More informationLinear Systems of Equations. ChEn 2450
Linear Systems of Equations ChEn 450 LinearSystems-directkey - August 5, 04 Example Circuit analysis (also used in heat transfer) + v _ R R4 I I I3 R R5 R3 Kirchoff s Laws give the following equations
More informationHow might we evaluate this? Suppose that, by some good luck, we knew that. x 2 5. x 2 dx 5
8.4 1 8.4 Partial Fractions Consider the following integral. 13 2x (1) x 2 x 2 dx How might we evaluate this? Suppose that, by some good luck, we knew that 13 2x (2) x 2 x 2 = 3 x 2 5 x + 1 We could then
More informationPreconditioning for Nonsymmetry and Time-dependence
Preconditioning for Nonsymmetry and Time-dependence Andy Wathen Oxford University, UK joint work with Jen Pestana and Elle McDonald Jeju, Korea, 2015 p.1/24 Iterative methods For self-adjoint problems/symmetric
More informationGaussian Elimination and Back Substitution
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving
More informationFast Multipole Methods: Fundamentals & Applications. Ramani Duraiswami Nail A. Gumerov
Fast Multipole Methods: Fundamentals & Applications Ramani Duraiswami Nail A. Gumerov Week 1. Introduction. What are multipole methods and what is this course about. Problems from physics, mathematics,
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationMath 411 Preliminaries
Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon's method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector
More informationSparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations
Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system
More informationLecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.
Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix
More informationKasetsart University Workshop. Multigrid methods: An introduction
Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available
More informationGeneralized Newton-Type Method for Energy Formulations in Image Processing
Generalized Newton-Type Method for Energy Formulations in Image Processing Leah Bar and Guillermo Sapiro Department of Electrical and Computer Engineering University of Minnesota Outline Optimization in
More informationNext topics: Solving systems of linear equations
Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:
More informationParallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1
Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations
More informationSolving PDEs with Multigrid Methods p.1
Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration
More informationParallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen
Parallel Iterative Methods for Sparse Linear Systems Lehrstuhl für Hochleistungsrechnen www.sc.rwth-aachen.de RWTH Aachen Large and Sparse Small and Dense Outline Problem with Direct Methods Iterative
More informationReview of matrices. Let m, n IN. A rectangle of numbers written like A =
Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an
More information(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by
1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How
More informationNotes for CS542G (Iterative Solvers for Linear Systems)
Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,
More informationIterative methods for Linear System
Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and
More informationLinear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4
Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix
More informationIDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht
IDR(s) Master s thesis Goushani Kisoensingh Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht Contents 1 Introduction 2 2 The background of Bi-CGSTAB 3 3 IDR(s) 4 3.1 IDR.............................................
More informationEAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science
EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Taylor s Theorem Can often approximate a function by a polynomial The error in the approximation
More informationAn Inexact Newton Method for Optimization
New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)
More informationDepartment of Applied Mathematics and Theoretical Physics. AMA 204 Numerical analysis. Exam Winter 2004
Department of Applied Mathematics and Theoretical Physics AMA 204 Numerical analysis Exam Winter 2004 The best six answers will be credited All questions carry equal marks Answer all parts of each question
More informationPART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435
PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu
More informationDirect Methods for Solving Linear Systems. Matrix Factorization
Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011
More informationTrace balancing with PEF plane annihilators
Stanford Exploration Project, Report 84, May 9, 2001, pages 1 333 Short Note Trace balancing with PEF plane annihilators Sean Crawley 1 INTRODUCTION Land seismic data often shows large variations in energy
More informationScientific Computing
Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting
More informationComputational Economics and Finance
Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:
More informationBoundary Value Problems - Solving 3-D Finite-Difference problems Jacob White
Introduction to Simulation - Lecture 2 Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Reminder about
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.
More informationLab 1: Iterative Methods for Solving Linear Systems
Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as
More informationTikhonov Regularization of Large Symmetric Problems
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi
More informationMethods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent
Nonlinear Optimization Steepest Descent and Niclas Börlin Department of Computing Science Umeå University niclas.borlin@cs.umu.se A disadvantage with the Newton method is that the Hessian has to be derived
More informationApplied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.2: LU and Cholesky Factorizations 2 / 82 Preliminaries 3 / 82 Preliminaries
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD
More informationMath 671: Tensor Train decomposition methods II
Math 671: Tensor Train decomposition methods II Eduardo Corona 1 1 University of Michigan at Ann Arbor December 13, 2016 Table of Contents 1 What we ve talked about so far: 2 The Tensor Train decomposition
More informationCyclic Reduction History and Applications
ETH Zurich and Stanford University Abstract. We discuss the method of Cyclic Reduction for solving special systems of linear equations that arise when discretizing partial differential equations. In connection
More informationComputational Methods. Systems of Linear Equations
Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations
More informationToday s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn
Today s class Linear Algebraic Equations LU Decomposition 1 Linear Algebraic Equations Gaussian Elimination works well for solving linear systems of the form: AX = B What if you have to solve the linear
More information434 CHAP. 8 NUMERICAL OPTIMIZATION. Powell's Method. Powell s Method
434 CHAP. 8 NUMERICAL OPTIMIZATION Powell's Method Powell s Method Let X be an initial guess at the location of the minimum of the function z = f (x, x 2,...,x N ). Assume that the partial derivatives
More informationSpectral Methods for Subgraph Detection
Spectral Methods for Subgraph Detection Nadya T. Bliss & Benjamin A. Miller Embedded and High Performance Computing Patrick J. Wolfe Statistics and Information Laboratory Harvard University 12 July 2010
More informationJae Heon Yun and Yu Du Han
Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose
More informationSolving linear systems (6 lectures)
Chapter 2 Solving linear systems (6 lectures) 2.1 Solving linear systems: LU factorization (1 lectures) Reference: [Trefethen, Bau III] Lecture 20, 21 How do you solve Ax = b? (2.1.1) In numerical linear
More informationCME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.
CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax
More informationIterative Methods for Smooth Objective Functions
Optimization Iterative Methods for Smooth Objective Functions Quadratic Objective Functions Stationary Iterative Methods (first/second order) Steepest Descent Method Landweber/Projected Landweber Methods
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationLecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C.
Lecture 9 Approximations of Laplace s Equation, Finite Element Method Mathématiques appliquées (MATH54-1) B. Dewals, C. Geuzaine V1.2 23/11/218 1 Learning objectives of this lecture Apply the finite difference
More informationLinear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4
Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x
More informationMATH 350: Introduction to Computational Mathematics
MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH
More informationConjugate Gradient Tutorial
Conjugate Gradient Tutorial Prof. Chung-Kuan Cheng Computer Science and Engineering Department University of California, San Diego ckcheng@ucsd.edu December 1, 2015 Prof. Chung-Kuan Cheng (UC San Diego)
More informationPreface to the Second Edition. Preface to the First Edition
n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................
More information