Solving matrix equations with infinite quasi-Toeplitz matrices
1 Solving matrix equations with infinite quasi-Toeplitz matrices. Leonardo Robol, ISTI-CNR, Pisa, Italy. Joint work with: D. A. Bini, B. Meini (UniPi), S. Massei (EPFL). Montecatini, 16 Feb.
2 GNCS Project / Participants. Project: Metodi numerici avanzati per equazioni e funzioni di matrici con struttura, coordinated by B. Meini. Participants: Dario Bini, Gianna Del Corso, Dario Fasino, Luca Gemignani, Bruno Iannazzo, Nicola Mastronardi, Federico Poloni, Valeria Simoncini, Massimiliano Fasi, Stefano Massei, Davide Palitta, Leonardo Robol.
4 Random walks on a quadrant. We consider random walks on N × N: allowed moves only to adjacent states; the probabilities of going up/down/left/right are eventually independent of the position (i, j). The problem can also be stated on the infinite strip {1, ..., m} × N.
5 The link with Toeplitz matrices. Several queueing problems can be recast in this framework. The simplest 1D case considers movements on a line. The probability transition matrix is

    P = [ ρ_0 + ρ_{-1}   ρ_1                       ]
        [ ρ_{-1}         ρ_0      ρ_1              ]
        [                ρ_{-1}   ρ_0     ρ_1      ]
        [                         ...     ...  ... ]

where ρ_{-1} is the probability of moving left, ρ_1 of moving right, and ρ_0 = 1 - ρ_{-1} - ρ_1: a quasi-Toeplitz semi-infinite matrix.
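As a concrete sanity check, here is a minimal Python sketch (not part of the talk) that builds a finite n × n section of this transition matrix; the probability values are illustrative. Every interior row sums to 1, while the last row is a truncation artifact.

```python
import numpy as np

def transition_section(n, rho_m, rho_p):
    """Finite n x n truncation of the semi-infinite 1D transition matrix P.

    rho_m: probability of moving left, rho_p: of moving right,
    rho_0 = 1 - rho_m - rho_p of staying put (illustrative values below).
    """
    rho_0 = 1.0 - rho_m - rho_p
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = rho_0
        if i > 0:
            P[i, i - 1] = rho_m
        if i + 1 < n:
            P[i, i + 1] = rho_p
    P[0, 0] = rho_0 + rho_m     # the top-left correction: the "quasi-" part
    return P

P = transition_section(6, 0.3, 0.5)
# all rows except the last (a truncation artifact) sum to 1
print(P[:-1].sum(axis=1))
```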
6 Toeplitz matrices. Toeplitz matrices have constant diagonals. In our case the entry (1, 1) is different: we have a top-left correction (quasi-Toeplitz structure). In the 2D case, the matrix P is (quasi-)TBBT (Toeplitz-block block-Toeplitz):

    P = [ Â_0     A_1                  ]
        [ A_{-1}  A_0     A_1          ]
        [         A_{-1}  A_0     ...  ]
        [                 ...     ...  ]

with the A_i and Â_0 quasi-Toeplitz (and semi-infinite).
7 Computing the invariant vector. The steady-state probability vector solving π^T P = π^T can be recovered by finding the minimal non-negative solutions of

    A_{-1} + A_0 G + A_1 G^2 = G,      R^2 A_{-1} + R A_0 + A_1 = R.

Cyclic reduction (CR) is a matrix iteration that converges to the correct solutions. It needs to perform multiplications, sums, and inversions. Complexity O(m^3) if the matrices A_i are m × m, for instance for a random walk on {1, ..., m} × N.
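The minimal non-negative solution G can be illustrated with the simplest possible method: the fixed-point iteration G ← A_{-1} + A_0 G + A_1 G², started from G = 0. It converges far more slowly than cyclic reduction, but it is enough to see the equation at work. All matrix data below is illustrative.

```python
import numpy as np

m = 2
A_m1 = np.array([[0.2, 0.1], [0.1, 0.2]])   # down  (level -1)
A_0  = np.array([[0.2, 0.1], [0.1, 0.2]])   # same level
A_p1 = np.array([[0.2, 0.2], [0.2, 0.2]])   # up    (level +1)
# rows of A_m1 + A_0 + A_p1 sum to 1: a valid QBD transition structure

G = np.zeros((m, m))
for _ in range(2000):
    # natural fixed-point iteration; monotone from G = 0 to the
    # minimal non-negative solution
    G = A_m1 + A_0 @ G + A_p1 @ G @ G

residual = np.linalg.norm(A_m1 + A_0 @ G + A_p1 @ G @ G - G)
print(residual)
```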
9 A quick look at the matrix iteration.

    A_0^{(h+1)}    = A_0^{(h)} + A_1^{(h)} S^{(h)} A_{-1}^{(h)} + A_{-1}^{(h)} S^{(h)} A_1^{(h)}
    A_1^{(h+1)}    = A_1^{(h)} S^{(h)} A_1^{(h)}
    A_{-1}^{(h+1)} = A_{-1}^{(h)} S^{(h)} A_{-1}^{(h)}
    S^{(h)}        = (I - A_0^{(h)})^{-1}

The initial matrices are A_i^{(0)} := A_i, so they are Toeplitz. Is the structure preserved? Experimentally, approximately, up to a top-left correction. This is non-trivial to exploit in practice. In fact, typical implementations ignore the structure and require O(m^3) flops, so the case m = ∞ is unsolved.
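For 1 × 1 blocks the iteration can be run directly. The sketch below also carries the standard auxiliary sequence Â^{(h+1)} = Â^{(h)} + A_1^{(h)} S^{(h)} A_{-1}^{(h)}, from which G = (I - Â^{(∞)})^{-1} A_{-1}; this recovery formula is the textbook recipe for CR applied to QBDs and is not spelled out on the slide. Probabilities are illustrative.

```python
import numpy as np

# Scalar QBD: a_{-1} + a_0 g + a_1 g^2 = g with illustrative probabilities.
a_m1, a_0, a_1 = 0.2, 0.3, 0.5
A_m1, A_0, A_p1 = (np.array([[x]]) for x in (a_m1, a_0, a_1))
Ahat = A_0.copy()
I = np.eye(1)

for _ in range(10):
    S = np.linalg.inv(I - A_0)
    # all right-hand sides use the step-h matrices
    Ahat = Ahat + A_p1 @ S @ A_m1
    A_0 = A_0 + A_p1 @ S @ A_m1 + A_m1 @ S @ A_p1
    A_m1, A_p1 = A_m1 @ S @ A_m1, A_p1 @ S @ A_p1

# textbook recovery formula: G = (I - Ahat)^{-1} A_{-1}^{(0)}
G = np.linalg.solve(I - Ahat, np.array([[a_m1]]))
# minimal non-negative root of 0.2 + 0.3 g + 0.5 g^2 = g is g = 0.4
print(G[0, 0])
```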
10 Basic facts about Toeplitz matrices.

    T(a(z)) = [ a_0     a_1           ]
              [ a_{-1}  a_0     a_1   ]
              [         a_{-1}  ...   ],      a(z) = Σ_{j ∈ Z} a_j z^j.

a(z) is in the Wiener algebra W if Σ_{j ∈ Z} |a_j| < ∞; moreover, a(z) ∈ W_1 if and only if a'(z) ∈ W, that is, Σ_{j ∈ Z} |j| |a_j| < ∞.
11 Hankel matrices. Given a power series f(z) = Σ_{j ≥ 0} f_j z^j, we have

    H(f(z)) = [ f_1  f_2  f_3  ... ]
              [ f_2  f_3  ...      ]
              [ f_3  ...           ]
              [ ...                ]

the Hankel matrix defined by f(z). Often we use the notation a_+(z) = Σ_{j ≥ 1} a_j z^j, a_-(z) = Σ_{j ≥ 1} a_{-j} z^j.
14 Toeplitz matrices are not an algebra. The power series in W and W_1 form an algebra. However,

    T(a(z)) T(b(z)) ≠ T(c(z)),      c(z) = a(z) b(z).

The link between Toeplitz matrices and Laurent series is an isomorphism only in the bi-infinite case.

Theorem (Gohberg, Feldman). Let a(z), b(z) ∈ W. Then T(a) T(b) = T(c) - H(a_-) H(b_+), where H(f) denotes the Hankel matrix of f. The correction H(a_-) H(b_+) is a compact operator on ℓ^2.

Toeplitz matrices are an algebra up to compact corrections.
15 The Hankel correction. Since a(z), b(z) are in W, a_j, b_j → 0 as |j| → ∞, so: T(a), T(b) are numerically banded; H(a_-), H(b_+) have their support in the top-left corner. Therefore T(a) T(b) = T(c) - H(a_-) H(b_+) is, numerically, a banded Toeplitz matrix plus a small top-left correction. [figure: banded product = banded Toeplitz matrix + corner correction]
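The Gohberg-Feldman identity can be checked numerically on finite sections: for symbols of bandwidth w, truncation to n × n leaves the leading n - w rows of the product exact. A sketch with arbitrarily chosen coefficients:

```python
import numpy as np

n, w = 12, 2
a = {-2: 0.1, -1: 0.3, 0: 0.5, 1: 0.2, 2: 0.1}   # coefficients a_j, j = -2..2
b = {-2: 0.2, -1: 0.1, 0: 0.4, 1: 0.3, 2: 0.2}

def T(sym, n):
    # (T(sym))_{ij} = sym_{j-i}
    return np.array([[sym.get(j - i, 0.0) for j in range(n)] for i in range(n)])

def H_minus(sym, n):
    # (H(a_-))_{ij} = a_{-(i+j+1)}
    return np.array([[sym.get(-(i + j + 1), 0.0) for j in range(n)] for i in range(n)])

def H_plus(sym, n):
    # (H(b_+))_{ij} = b_{i+j+1}
    return np.array([[sym.get(i + j + 1, 0.0) for j in range(n)] for i in range(n)])

c = {}   # Laurent coefficients of c(z) = a(z) b(z), by convolution
for i, ai in a.items():
    for j, bj in b.items():
        c[i + j] = c.get(i + j, 0.0) + ai * bj

lhs = T(a, n) @ T(b, n)
rhs = T(c, n) - H_minus(a, n) @ H_plus(b, n)
# the two sides agree on the rows unaffected by truncation
print(np.allclose(lhs[: n - w], rhs[: n - w]))
```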
18 A new class of matrices. Consider the norm ‖E‖_F := Σ_{i,j} |E_{ij}|, and F := {E ∈ R^{N×N} : ‖E‖_F < ∞}.

Definition. A = T(a(z)) + E_a is quasi-Toeplitz (QT) if: a(z) ∈ W_1 (hence in W); E_a satisfies ‖E_a‖_F < ∞. We define ‖A‖_QT := ‖a(z)‖_W + ‖a'(z)‖_W + ‖E_a‖_F.

QT-matrices form a (computationally friendly) algebra; ‖E_a‖_F < ∞ implies that E_a is a compact operator on ℓ^2.
19 Numerical representation of a QT matrix. We can represent QT matrices with a finite number of parameters and arbitrary accuracy. If A = T(a) + E_a: T(a) can be stored through a (truncated) approximation of its symbol, since the tail Σ_{|i| > j} |a_i| → 0 as j → ∞; E_a ≈ U V^T by means of the SVD, where U, V have compact support and σ_j(E_a) → 0, since E_a is compact and therefore numerically low-rank. In our case we use ‖E_a‖_F < ∞ to pick a top-left block as approximation of E_a.
22 QT is an algebra. We can check that A, B ∈ QT implies A + B ∈ QT:

    T(a) + E_a + T(b) + E_b = T(a + b) + (E_a + E_b),

and that A, B ∈ QT implies AB ∈ QT:

    (T(a) + E_a)(T(b) + E_b) = T(ab) + (T(a) E_b + E_a T(b) + E_a E_b - H(a_-) H(b_+)).

A = T(a) + E_a implies A^{-1} = T(a^{-1}) + ?
25 QT is an algebra (cont'd). If a(z), b(z) ∈ W_1 and E_a, E_b ∈ F, then the corrections obtained by sums and multiplications are in F. Proof: some (not so interesting) computations; the assumption a(z) ∈ W_1 plays an important role. And the inverse?
26 Computing the inverse. For the inverse of T(a) we need a few steps. Factor a(z) = u(z) l(z^{-1}), where u, l are power series (Wiener-Hopf factorization). This corresponds to the UL factorization T(a) = T(u(z)) T(l(z^{-1})). Triangular Toeplitz matrices have Toeplitz inverses:

    T(a(z))^{-1} = T(l^{-1}(z^{-1})) T(u^{-1}(z)),

and we can use what we know about multiplications.
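For a tridiagonal symbol the Wiener-Hopf factorization reduces to a scalar quadratic. The sketch below (with illustrative coefficients) factors a(z) = a_{-1} z^{-1} + a_0 + a_1 z as u(z) l(z^{-1}) with u(z) = u_0 + u_1 z and l(z^{-1}) = 1 + l_1 z^{-1}; matching coefficients gives u_1 = a_1, u_0 l_1 = a_{-1}, u_0 + u_1 l_1 = a_0, i.e. the quadratic a_1 l² - a_0 l + a_{-1} = 0, whose root with |l_1| < 1 yields the canonical factorization. On a finite section only the last row is polluted by truncation.

```python
import numpy as np

a_m1, a_0, a_1 = -0.2, 1.0, -0.5          # illustrative tridiagonal symbol
roots = np.roots([a_1, -a_0, a_m1])       # a_1 l^2 - a_0 l + a_{-1} = 0
l_1 = roots[np.abs(roots) < 1][0]         # root inside the unit disk
u_1 = a_1
u_0 = a_m1 / l_1

n = 10
def toep(sym, n):
    return np.array([[sym.get(j - i, 0.0) for j in range(n)] for i in range(n)])

Tu = toep({0: u_0, 1: u_1}, n)            # upper triangular factor T(u(z))
Tl = toep({-1: l_1, 0: 1.0}, n)           # lower triangular factor T(l(z^{-1}))
Ta = toep({-1: a_m1, 0: a_0, 1: a_1}, n)
# UL factorization is exact on all rows but the last (a truncation effect)
print(np.allclose((Tu @ Tl)[: n - 1], Ta[: n - 1]))
```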
28 Computational remarks. We can operate on quasi-Toeplitz matrices by combining: operations/factorizations on power series (FFT-based); Toeplitz-vector multiplications (again FFT-based); compression of low-rank matrices of the form H(a_-) H(b_+). For the last item a fast matvec is available, so we can run either Lanczos or random sampling to compress the products of Hankel matrices. The ranks of the corrections are small in typical applications.
33 Keeping the rank bounded. When we perform arithmetic operations, the rank of the correction in the representation increases. It can be kept low by recompression. Assume E_a = U_a V_a^T, with U_a, V_a having k columns.

    [Q_U, R_U] = qr(U_a), [Q_V, R_V] = qr(V_a);
    [U_1, Σ_1, V_1] = svd(R_U R_V^T) (truncated SVD);
    E_a ≈ (Q_U U_1 Σ_1^{1/2}) (Q_V V_1 Σ_1^{1/2})^T.

The new representation has k' < k columns. The cost of the compression is O(n k^2 + k^3) flops, where n is the size of the support of E_a.
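The three steps above can be sketched directly in Python (an illustration of the scheme, not the toolbox code); the test correction has true rank 6 but is stored with k = 12 columns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 12
# a correction whose true rank (6) is lower than the stored width k = 12
U_a = rng.standard_normal((n, 6)) @ rng.standard_normal((6, k))
V_a = rng.standard_normal((n, k))
tol = 1e-12

# step 1: thin QR of both factors
Q_U, R_U = np.linalg.qr(U_a)
Q_V, R_V = np.linalg.qr(V_a)
# step 2: SVD of the small k x k core, truncated at relative tol
U1, s, V1t = np.linalg.svd(R_U @ R_V.T)
kp = int(np.sum(s > tol * s[0]))              # new, smaller rank k'
# step 3: split sqrt(Sigma) between the two new factors
sq = np.sqrt(s[:kp])
U_new = Q_U @ (U1[:, :kp] * sq)
V_new = Q_V @ (V1t[:kp].T * sq)               # E_a ~= U_new V_new^T

err = np.linalg.norm(U_a @ V_a.T - U_new @ V_new.T, 2)
print(kp, err)
```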
34 Keeping the rank low (pictorial version). [figure: the recompression step illustrated]
36 A Banach algebra. We can say even more about QT.

Theorem. The set of QT-matrices, endowed with the norm

    ‖T(a) + E_a‖ := ‖a‖_W + ‖a'‖_W + ‖E_a‖_F,

is a Banach algebra. This means that it is an algebra, it is a Banach space, and ‖AB‖ ≤ ‖A‖ ‖B‖.
37 Back to our matrix iteration. Recall that we wanted to use cyclic reduction to solve

    A_{-1} + A_0 G + A_1 G^2 = G,      R^2 A_{-1} + R A_0 + A_1 = R.

Theorem. Let A_i ∈ B, a Banach algebra. If φ(z) = z^{-1} A_{-1} + (A_0 - I) + z A_1 is invertible on {t^{-1} < |z| < t} for some t > 1, then CR converges quadratically, and G, R can be obtained from the limits of the A_i^{(h)}.
38 Good news. The hypotheses of the previous result are always satisfied in the Markov-chain setting. QT-matrices are a Banach algebra, so cyclic reduction works in this class, and we know how to compute the iteration. The semi-infinite solutions G, R will also be represented in the QT format. We have a MATLAB toolbox that makes it easy to play with these matrices: you can google cqt-toolbox.
39 A simple example. Consider a tandem Jackson queue. It has been shown that finite truncation may fail to approximate the behavior of the infinite-dimensional case.¹

¹ Motyer, A.J. and Taylor, P.G., Decay rates for quasi-birth-and-death processes with countably many phases and tridiagonal block generators. Advances in Applied Probability, 38(2).
40 Explicit solution. We can compute the steady-state vector π using CR. [table: Problem / Time (s) / Residual / Band / Support / Rank] The residual is ‖π^T P - π^T‖, and each problem corresponds to different rates and parameters. Rank and support refer to the corrections in G, R, and the band to their Toeplitz part.
41 Conclusions and future outlook. Computing with infinite matrices can be fun, if you have the right structure. Some hypotheses on the norms can be relaxed; this is work in progress. We can do much fancier things: compute matrix functions, solve other kinds of matrix equations, and so on. This has applications to other fields, such as finance. The finite case is handled as well. A MATLAB toolbox is available for you to try: feedback is very welcome!
42 Thank you for your attention!
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 7: More on Householder Reflectors; Least Squares Problems Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 15 Outline
More information4.8 Arnoldi Iteration, Krylov Subspaces and GMRES
48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate
More informationPractical Linear Algebra: A Geometry Toolbox
Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla
More informationA CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES
A CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES NICOLAE POPA Abstract In this paper we characterize the Schur multipliers of scalar type (see definition below) acting on scattered
More informationMath 671: Tensor Train decomposition methods
Math 671: Eduardo Corona 1 1 University of Michigan at Ann Arbor December 8, 2016 Table of Contents 1 Preliminaries and goal 2 Unfolding matrices for tensorized arrays The Tensor Train decomposition 3
More informationKarhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques
Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationReview of similarity transformation and Singular Value Decomposition
Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm
More informationBackground Mathematics (2/2) 1. David Barber
Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationMathematics I. Exercises with solutions. 1 Linear Algebra. Vectors and Matrices Let , C = , B = A = Determine the following matrices:
Mathematics I Exercises with solutions Linear Algebra Vectors and Matrices.. Let A = 5, B = Determine the following matrices: 4 5, C = a) A + B; b) A B; c) AB; d) BA; e) (AB)C; f) A(BC) Solution: 4 5 a)
More informationLifted approach to ILC/Repetitive Control
Lifted approach to ILC/Repetitive Control Okko H. Bosgra Maarten Steinbuch TUD Delft Centre for Systems and Control TU/e Control System Technology Dutch Institute of Systems and Control DISC winter semester
More informationNumerical Solution Techniques in Mechanical and Aerospace Engineering
Numerical Solution Techniques in Mechanical and Aerospace Engineering Chunlei Liang LECTURE 3 Solvers of linear algebraic equations 3.1. Outline of Lecture Finite-difference method for a 2D elliptic PDE
More informationKrylov Subspace Methods that Are Based on the Minimization of the Residual
Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean
More informationAn Introduction to NeRDS (Nearly Rank Deficient Systems)
(Nearly Rank Deficient Systems) BY: PAUL W. HANSON Abstract I show that any full rank n n matrix may be decomposento the sum of a diagonal matrix and a matrix of rank m where m < n. This decomposition
More informationModelling Complex Queuing Situations with Markov Processes
Modelling Complex Queuing Situations with Markov Processes Jason Randal Thorne, School of IT, Charles Sturt Uni, NSW 2795, Australia Abstract This article comments upon some new developments in the field
More information1 GSW Sets of Systems
1 Often, we have to solve a whole series of sets of simultaneous equations of the form y Ax, all of which have the same matrix A, but each of which has a different known vector y, and a different unknown
More informationMath 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm
Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm References: Trefethen & Bau textbook Eigenvalue problem: given a matrix A, find
More informationSimultaneous Transient Analysis of QBD Markov Chains for all Initial Configurations using a Level Based Recursion
Simultaneous Transient Analysis of QBD Markov Chains for all Initial Configurations using a Level Based Recursion J Van Velthoven, B Van Houdt and C Blondia University of Antwerp Middelheimlaan 1 B- Antwerp,
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More informationOn the reduction of matrix polynomials to Hessenberg form
Electronic Journal of Linear Algebra Volume 3 Volume 3: (26) Article 24 26 On the reduction of matrix polynomials to Hessenberg form Thomas R. Cameron Washington State University, tcameron@math.wsu.edu
More information