Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods


1 Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods. Lars Eldén, Department of Mathematics, Linköping University, Sweden. Joint work with Valeria Simoncini. February 2010.

2 Motivating example: ilmenite iron melting furnace. [Figure: furnace cross-section with an electrode and thermocouples at levels K to D (level K highest), placed at the center, under the electrodes, and between the electrodes.] The furnace material properties are temperature dependent. Problem: find the inner shape of the furnace. Nonlinear, and (rather) complex geometry. PhD thesis: I. M. Skaar, Monitoring the Lining of a Melting Furnace, NTNU, Trondheim, 2001.

3 Inverse Heat Conduction Problem. Steady-state heat conduction problem: the upper boundary is unavailable for measurements. A 3D problem! See also Egger et al., Inverse Problems, 2009.

4 Outline. 1. Cauchy problem: regularization; error estimate. 2. Rational Krylov method: Krylov/regularization. 3. Numerical examples: Example 1; animation; Example 2. 4. Conclusions; future work.

5 Ill-Posed Cauchy Problem. $\Omega$: connected domain in $\mathbb{R}^2$ with smooth boundary $\partial\Omega$; $L$: linear, self-adjoint, positive definite elliptic operator in $\Omega$.
$u_{zz} - Lu = 0$, $(x, y) \in \Omega$, $z \in [0, z_1]$,
$u(x, y, z) = 0$, $(x, y) \in \partial\Omega$, $z \in [0, z_1]$,
$u(x, y, 0) = g(x, y)$, $(x, y) \in \Omega$,
$u_z(x, y, 0) = 0$, $(x, y) \in \Omega$.
Sought: $f(x, y) = u(x, y, z_1)$, $(x, y) \in \Omega$. Formal solution: $u(x, y, z) = \cosh(z\sqrt{L})\,g$. BUT: $L$ is a positive definite unbounded operator, so the problem is ILL-POSED!
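
To see the ill-posedness concretely: an eigencomponent of $g$ belonging to the eigenvalue $\lambda$ of $L$ is multiplied by $\cosh(z\sqrt{\lambda})$ in the formal solution. A minimal numerical sketch in Python (the unit-square Laplacian spectrum is an illustrative assumption, not taken from the slides):

```python
import numpy as np

# In u(z) = cosh(z*sqrt(L)) g, the eigencomponent of g with eigenvalue lam
# of L is scaled by cosh(z*sqrt(lam)).  Illustrative spectrum: for
# L = -Laplacian on the unit square, lam_{mn} = pi^2 (m^2 + n^2).
z = 0.1
for m in (1, 5, 10, 20, 40):
    lam = np.pi**2 * (m**2 + m**2)      # diagonal modes m = n
    print(f"m = {m:2d}:  amplification = {np.cosh(z * np.sqrt(lam)):.3e}")
# The factor grows like exp(z*sqrt(lam))/2: high-frequency perturbations in
# the data are blown up exponentially, which is exactly the ill-posedness.
```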

6 Standard Iterative Procedure. Guess $f^{(1)}$; for $k = 1, 2, \ldots$ until convergence:
1. Solve
$u_{zz} - Lu = 0$, $(x, y) \in \Omega$, $z \in [0, z_1]$,
$u(x, y, z) = 0$, $(x, y) \in \partial\Omega$, $z \in [0, z_1]$,
$u(x, y, z_1) = f^{(k)}$, $(x, y) \in \Omega$,
$u_z(x, y, 0) = 0$, $(x, y) \in \Omega$,
giving $u^{(k)}$.
2. Evaluate $g(\cdot, \cdot) - u^{(k)}(\cdot, \cdot, 0)$ and adjust $f^{(k)} \to f^{(k+1)}$.
In every iteration a 3D well-posed problem is solved. Often slow convergence. (A sketch of this loop follows below.)
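
A minimal sketch of this procedure, assuming a hypothetical black-box routine solve_3d_wellposed(f) that performs step 1 and returns the trace $u^{(k)}(\cdot,\cdot,0)$; the slide leaves the adjustment rule unspecified, so a plain relaxed residual correction stands in for it here:

```python
import numpy as np

def standard_iteration(g_m, f0, solve_3d_wellposed, gamma=1.0,
                       tol=1e-3, maxit=500):
    """Sketch of the standard iterative procedure.  solve_3d_wellposed(f)
    (hypothetical) solves the well-posed 3D problem with u(.,.,z1) = f and
    u_z(.,.,0) = 0, returning u(.,.,0).  The update below is an assumed
    simple choice; Landweber-type variants would use the adjoint."""
    f = f0.copy()
    for k in range(maxit):
        residual = g_m - solve_3d_wellposed(f)   # data misfit at z = 0
        if np.linalg.norm(residual) < tol:
            break
        f = f + gamma * residual                 # adjust f^(k) -> f^(k+1)
    return f
```

Each pass costs one full 3D solve, which is why slow convergence is expensive.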

7 Other possible methods? Tikhonov regularization? Impossible, because we do not know the integral operator explicitly for equations with variable coefficients and/or complicated geometry. Replace the unbounded $L$ by a bounded approximation? Possible in connection with a finite difference approximation, but more difficult with finite elements. BUT: a Krylov method works!

8 Regularization. Formal solution: $u(x, y, z) = \cosh(z\sqrt{L})\,g$. BUT: $L$ is a positive definite unbounded operator, so the problem is ill-posed: high-frequency perturbations in $g$ are blown up. Regularization: replace the unbounded operator by a bounded one!

9 Cut Off High Frequencies. Eigenvalues of $L$: $\lambda_j^2$, $j = 1, 2, \ldots$, with $\lambda_j \to +\infty$ as $j \to \infty$. General approach: compute the $k$ eigenvalues of smallest modulus, $L X_k = X_k D_k$, where $X_k$ holds orthonormal eigenvectors, and approximate by projection: $\cosh(z\sqrt{L})\,g \approx \cosh(z\sqrt{L})\,X_k X_k^* g = X_k \cosh(z\sqrt{D_k})\,X_k^* g$.
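
A compact sketch of this projection approach on a model problem (assumptions: $L$ is the five-point finite-difference Laplacian on the unit square, and SciPy's shift-invert eigsh supplies the smallest eigenpairs; like the method discussed here, it works with $L^{-1}$):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Model operator: L = -Laplacian (5-point stencil) on the unit square.
n = 40
h = 1.0 / (n + 1)
T1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
L = (sp.kron(I, T1) + sp.kron(T1, I)).tocsc()

g = np.random.randn(n * n)   # stand-in for the measured data
k, z = 30, 0.1

# k eigenvalues of smallest modulus via shift-invert (operates with L^{-1}).
D, X = spla.eigsh(L, k=k, sigma=0.0, which='LM')

# Projection: cosh(z*sqrt(L)) g  ~=  X_k cosh(z*sqrt(D_k)) X_k^T g.
u_at_z = X @ (np.cosh(z * np.sqrt(D)) * (X.T @ g))
```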

10 Eigenvalues of $L$. $L$ is large and sparse (of the order …, say). To compute the smallest eigenvalues, operate with $L^{-1}$, i.e., solve many standard 2D elliptic problems $Lw = v$.

11 Error Estimate. $L^2(\Omega)$ setting; $u$ is an exact solution and $v$ an approximate solution computed from perturbed data $g_m$. Theorem. Assume that $\|u(\cdot, \cdot, z_1)\| \le M$ and that the data perturbation satisfies $\|g - g_m\| \le \epsilon$. If $v$ is computed by projection using the eigenvalues satisfying $\lambda_j \le \lambda_c$, where $\lambda_c = (1/z_1)\log(M/\epsilon)$, then $\|u(\cdot, \cdot, z) - v(\cdot, \cdot, z)\| \le 3\,\epsilon^{1 - z/z_1} M^{z/z_1}$ for $0 \le z \le z_1$. This is an optimal error bound.
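
As a worked illustration of the cut-off recipe (the values $M = 1$, $\epsilon = 10^{-3}$, $z_1 = 0.1$ are assumed here, not taken from the talk):

$$\lambda_c = \frac{1}{z_1}\log\frac{M}{\epsilon} = \frac{1}{0.1}\log 10^{3} \approx 69,$$

so only eigencomponents with $\lambda_j \le 69$, i.e. eigenvalues $\lambda_j^2 \lesssim 4800$ of $L$, would be retained in the projected solution.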

12 Eigenvalue method. Is it necessary to compute the eigenvalues and eigenvectors accurately? Do we need all the information that we get in the eigenvalues? Can we take advantage of the fact that we want to compute an approximation of $\cosh(z\sqrt{L})\,g$ for this particular vector?

13 Eigenvalue method. Is it necessary to compute the eigenvalues and eigenvectors accurately? NO! Do we need all the information that we get in the eigenvalues? NO! Can we take advantage of the fact that we want to compute an approximation of $\cosh(z\sqrt{L})\,g$ for this particular vector? YES!

14 Lanczos tridiagonalization. Choose $q_1$ and iterate $L^{-1} q_k = q_{k-1}\beta_{k-1} + q_k\alpha_k + q_{k+1}\beta_k$, $k = 1, 2, \ldots$, with $\alpha_k = q_k^* L^{-1} q_k$ and $\beta_k = q_{k+1}^* L^{-1} q_k$. One matrix-vector multiply $L^{-1} q_k$ per step, i.e., one standard 2D elliptic solve (black box) $Lw = q_k$ per step.
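
A self-contained sketch of this recursion (assumptions: $L$ is a SciPy sparse matrix, the black-box elliptic solve is modeled by a reusable sparse factorization, full reorthogonalization is added for robustness, and the shift $\tau$ of the next slide is exposed as an optional parameter):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lanczos_inv(L, g_m, k, tau=0.0):
    """Lanczos tridiagonalization of (L - tau*I)^{-1} (tau = 0 gives L^{-1}).
    Each step costs one 'standard 2D elliptic solve' L w = q_j, realized
    here by a reusable sparse LU factorization.  Returns Q_k with
    orthonormal columns and the symmetric tridiagonal T_k."""
    n = g_m.size
    solve = spla.factorized((L - tau * sp.identity(n)).tocsc())
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    Q[:, 0] = g_m / np.linalg.norm(g_m)           # q_1 = g_m / ||g_m||
    for j in range(k):
        w = solve(Q[:, j])                        # one elliptic solve per step
        alpha[j] = Q[:, j] @ w                    # alpha_j = q_j^* L^{-1} q_j
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)           # beta_j = ||w||
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return Q, T
```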

15 Lanczos properties. Initial convergence is influenced by the starting vector: choose $q_1 = (1/\beta)\,g_m$ with $\beta = \|g_m\|$. Convergence is faster for the largest eigenvalues of $L^{-1}$, hence fast for some of the smallest eigenvalues of $L$. Optional: to get faster convergence for eigenvalues in $[0, \lambda_c]$, operate with $(L - \tau I)^{-1}$, where $\tau = \lambda_c / 2$.

16 Lanczos reduction. $L^{-1} Q_k = Q_k T_k + \beta_{k+1} q_{k+1} e_k^* \approx Q_k T_k$. Approximation: $\cosh(z\sqrt{L})\,g \approx Q_k \cosh(z T_k^{-1/2})\,Q_k^* g$. Problem: we cannot prevent $T_k$ from approximating large eigenvalues! Solution: regularize $T_k$ by cutting off the large eigenvalues. (Krylov + regularization: O'Leary & Simmons (1981); Björck, Grimme & Van Dooren (1994).)

17 Projected and Truncated Approximation. Let $((\theta_j^{(k)})^2, y_j^{(k)})$, $j = 1, \ldots, k$, be the eigenpairs of $T_k^{-1}$. Define $F(z, \lambda) = \cosh(z\lambda^{1/2})$ and $S_k = T_k^{-1}$. Truncated approximation: $v_k(z) = Q_k F(z, S_k^c)\,Q_k^* g_m := Q_k \sum_{\theta_j^{(k)} \le \lambda_c} y_j^{(k)} \cosh\bigl(z\theta_j^{(k)}\bigr)\,\bigl(y_j^{(k)}\bigr)^* e_1 \|g_m\|$.
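
Given $Q_k$, $T_k$ from the Lanczos sketch above, the truncated evaluation is a few lines (note $T_k \approx Q_k^* L^{-1} Q_k$, so an eigenvalue $\mu_j$ of $T_k$ yields $\theta_j = \mu_j^{-1/2}$, and $Q_k^* g_m = \|g_m\| e_1$ because $q_1 = g_m/\|g_m\|$):

```python
import numpy as np

def truncated_approx(Q, T, z, lam_c, beta):
    """Evaluate v_k(z) = Q_k F(z, S_k^c) Q_k^* g_m with S_k = T_k^{-1}.
    Eigenpairs of S_k with theta_j <= lam_c are kept; the rest are cut off
    (the regularization).  beta = ||g_m||."""
    mu, Y = np.linalg.eigh(T)            # T_k is SPD in exact arithmetic
    theta = 1.0 / np.sqrt(mu)            # theta_j^2 = eigenvalues of T_k^{-1}
    keep = theta <= lam_c                # cut off the large (unstable) modes
    coef = beta * Y[0, keep]             # (y_j)^* e_1 * ||g_m||
    return Q @ (Y[:, keep] @ (np.cosh(z * theta[keep]) * coef))
```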

18 Error Estimate. Recall $F(z, \lambda) = \cosh(z\lambda^{1/2})$ and $S_k = T_k^{-1}$. Theorem. Let $u$ be the exact solution and $v_k(z) = Q_k F(z, S_k^c)\,Q_k^* g_m$. Under the same hypotheses as earlier, $\|u(z) - v_k(z)\| \le 3\,\epsilon^{1 - z/z_1} M^{z/z_1} + \bigl\|[F(z, L^c) - Q_k F(z, S_k^c) Q_k^*]\,g\bigr\|$.

19 Krylov/Regularization, First Version.
Starting vector $q_1 = (1/\beta)\,g_m$.
for $k = 2, 3, \ldots$ until stable:
  $[Q_k, T_k] = \mathrm{krylovstep}(L^{-1}, Q_{k-1}, T_{k-1})$
  compute $v_k(z) = Q_k F(z, S_k^c)\,Q_k^* g_m$
end
Check the residual: $\|Kv_k - g_m\| < \epsilon$. Here $Kv_k$ is the solution of the 3D problem with $u = v_k$ at the upper boundary and $u_z = 0$ at the lower. Expensive!

20 Residual: $\|Kv_k - g_m\| < \epsilon$. Solve the 3D problem (denote its solution $u_k$):
$u_{zz} - Lu = 0$, $(x, y) \in \Omega$, $z \in [0, z_1]$,
$u(x, y, z) = 0$, $(x, y) \in \partial\Omega$, $z \in [0, z_1]$,
$u(x, y, z_1) = v_k(x, y)$, $(x, y) \in \Omega$,
$u_z(x, y, 0) = 0$, $(x, y) \in \Omega$.
Well-posed but expensive! Then $Kv_k = u_k(x, y, 0)$. We only want to compute this once we are reasonably sure that $\|Kv_k - g_m\| < \epsilon$ will hold.

21 Krylov/Regularization: stable.
Starting vector $q_1 = (1/\beta)\,g_m$.
for $k = 1, 2, \ldots$ until stable:
  $[Q_k, T_k] = \mathrm{krylovstep}(L^{-1}, Q_{k-1}, T_{k-1})$
  compute $v_k(z) = Q_k F(z, S_k^c)\,Q_k^* g_m$
end
Check the residual: $\|Kv_k - g_m\| < \epsilon$. How can we quantify "stable"?

22 Krylov/Regularization. Recall $F(z, \lambda) = \cosh(z\lambda^{1/2})$ and $S_k = T_k^{-1}$. Cheap (2D) approximate residual: $r_k^{(k+p)}$.
Starting vector $q_1 = (1/\beta)\,g_m$.
for $k = 1, 2, \ldots, \mathrm{maxit}$:
  $[Q_k, T_k] = \mathrm{krylovstep}(L^{-1}, Q_{k-1}, T_{k-1})$
  compute $w_k(z) = F(z, S_k^c)\,Q_k^* g_m$
  if $|r_k^{(k+p)} - r_{k-1}^{(k-1+p)}| \,/\, r_k^{(k+p)} < \mathrm{tol}$ then
    if $\|Kv_k - g_m\| < \epsilon$ then stop iterating
  end
Compute $v_k = Q_k w_k$. (A runnable sketch of this safeguarded loop follows below.)
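
A sketch of the safeguarded loop, combining the two sketches above. The hypothetical K_apply performs the expensive 3D solve of slide 20; the precise definition of $r_k^{(k+p)}$ is in the Eldén-Simoncini paper, and the $z = 0$ mismatch of the $(k{+}p)$-step truncated expansion is used here as an assumed stand-in. Rebuilding the basis each sweep is for clarity only; a real krylovstep extends $Q_k$, $T_k$ by one column.

```python
import numpy as np

def krylov_regularization(L, g_m, z1, lam_c, K_apply, p=2,
                          tol=1e-2, eps=1e-3, maxit=100):
    """Monitor a cheap (2D) approximate residual; evaluate the expensive
    true residual ||K v_k - g_m|| only once the approximate residual has
    stabilized (relative change below tol)."""
    beta = np.linalg.norm(g_m)
    r_old, v_k = np.inf, None
    for k in range(2, maxit):
        Q, T = lanczos_inv(L, g_m, k + p)               # see the sketch above
        v_k = truncated_approx(Q[:, :k], T[:k, :k], z1, lam_c, beta)
        # assumed surrogate for r_k^{(k+p)}: truncated reconstruction of g_m
        r_new = np.linalg.norm(truncated_approx(Q, T, 0.0, lam_c, beta) - g_m)
        if abs(r_new - r_old) / r_new < tol:            # 'stable' test
            if np.linalg.norm(K_apply(v_k) - g_m) < eps:
                break                                   # safeguarded stop
        r_old = r_new
    return v_k
```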

23 Test example 1: Laplace equation. $\Omega$: unit square.
$u_{zz} + \Delta u = 0$, $(x, y, z) \in \Omega \times [0, 0.1]$,
$u(x, y, z) = 0$, $(x, y, z) \in \partial\Omega \times [0, 0.1]$,
$u(x, y, 0) = g(x, y)$, $(x, y) \in \Omega$,
$u_z(x, y, 0) = 0$, $(x, y) \in \Omega$.
Determine the values at the upper boundary: $f(x, y) = u(x, y, 0.1)$, $(x, y) \in \Omega$. Data perturbation: $\|g - g_m\|/\|g\| = \ldots$. 98 eigenvalues are smaller than the tolerance; eigs performs approximately 300 2D elliptic solves.
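
A sketch of how perturbed data for such a test can be generated (the slide's noise level was lost in transcription; the relative level $10^{-2}$ below is an assumed value):

```python
import numpy as np

# Smooth stand-in for the exact data g on an n x n grid of the unit square,
# perturbed to a prescribed relative noise level ||g - g_m|| / ||g||.
n = 40
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
g = (np.sin(np.pi * X) * np.sin(np.pi * Y)).ravel()

rng = np.random.default_rng(0)
noise = rng.standard_normal(g.shape)
level = 1e-2                                          # assumed noise level
g_m = g + level * np.linalg.norm(g) * noise / np.linalg.norm(noise)
print(np.linalg.norm(g - g_m) / np.linalg.norm(g))    # equals level
```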

24 Solution and Exact Data. [Figure]

25 Convergence history (cut-off $0.8\lambda_c$). [Plot: true error, residual, approximate residual $\mathrm{Res}_{k+2}$, and $\epsilon/\|g_m\|$ versus step $k$.]

26-53 Solutions as a function of the iteration index. [Animation: one frame per slide for $k = 3, 4, \ldots, 30$, each plotting the approximate solution against the exact solution. The stopping criterion is satisfied at $k = 25$ (slide 48).]

54 Example 2. Finite element discretization of $L$ with variable coefficients on an ellipse: $\Omega \times [0, z_1] = \{(x, y, z) : x^2 + y^2/4 \le 1,\ 0 \le z \le z_1 = 0.6\}$. The stiffness matrix has dimension 865. Data perturbation: 3%.

55 Solution and Exact Data. [Figure]

56 Perturbed Data. [Figure]

57 Convergence history (cut-off $0.6\lambda_c$); solution after 9 steps. [Plot: true error, residual, $\mathrm{Res}_{k+2}$, and $\epsilon/\|g_m\|$ versus step $k$.]

58 Conclusions. 3D Cauchy problem: complex 2D geometry + cylinder in $z$. Krylov method + black-box 2D elliptic solver. The stability theory gives a recipe for the cut-off level. Only the exponential (cosh) of a small matrix is computed (cheap). Safeguarded stopping criterion: approximate residual (cheap) + true residual (rather expensive). Far fewer 2D elliptic solves than an eigenvalue computation: 98 eigenvalues were smaller than the tolerance; MATLAB's eigs took about 300 2D solves, the Krylov method 18. Highly accurate eigenvalues are not needed, and the data influence the basis (projection) vectors.

59 Extensions. Variable coefficients in $z$? Other Cauchy problems: parabolic (Zohreh Ranjbar's thesis); Helmholtz; transient electromagnetics? Paper: Eldén & Simoncini, Inverse Problems, 2009.
