Analysis of a Fast Hankel Eigenvalue Algorithm

Franklin T. Luk (a) and Sanzheng Qiao (b)

(a) Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York, USA
(b) Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7, Canada
ABSTRACT

This paper analyzes the important steps of an O(n^2 log n) algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. We present an error analysis of the three steps, as well as results from numerical experiments.

Keywords: Hankel matrix, eigenvalue decomposition, Lanczos tridiagonalization, Hankel matrix-vector multiplication, complex-orthogonal transformations, error analysis.

1. INTRODUCTION

The eigenvalue decomposition of a structured matrix has important applications in signal processing. In this paper, we consider a complex Hankel matrix H ∈ C^{n×n}:

    H = [ h_1      h_2      ...  h_{n-1}   h_n      ]
        [ h_2      h_3      ...  h_n       h_{n+1}  ]
        [ ...      ...           ...       ...      ]
        [ h_{n-1}  h_n      ...  h_{2n-3}  h_{2n-2} ]
        [ h_n      h_{n+1}  ...  h_{2n-2}  h_{2n-1} ]    (1)

The authors [4] proposed a fast algorithm for finding the eigenvalues of H. The key step is a fast Lanczos tridiagonalization algorithm, which employs a fast Hankel matrix-vector multiplication based on the fast Fourier transform (FFT). The algorithm then performs a QR-like procedure, using complex-orthogonal transformations in the diagonalization, to find the eigenvalues. In this paper, we present an error analysis and discuss the stability properties of the key steps of the algorithm.

This paper is organized as follows. In Section 2, we discuss eigenvalue computation using complex-orthogonal transformations and describe the overall eigenvalue procedure and its three key steps. In Section 3, we introduce notation and give an error analysis of some basic complex arithmetic operations.
We analyze the orthogonality of the Lanczos vectors in Section 4 and the accuracy of the Hankel matrix-vector multiplication in Section 5. In Section 6, we derive the condition number of a complex-orthogonal plane rotation and provide a backward error analysis of the application of this transformation to a vector. Numerical results are given in Section 7.

This work was supported by the Natural Sciences and Engineering Research Council of Canada under grant OGP463.
2. COMPLEX-ORTHOGONAL TRANSFORMATIONS

An eigenvalue decomposition of a nondefective matrix H is given by

    H = X D X^{-1}.    (2)

We will pick the matrix X to be complex-orthogonal; that is, X X^T = I. So H = X D X^T. We apply a special Lanczos tridiagonalization to the Hankel matrix (assuming that the Lanczos process does not terminate prematurely):

    H = Q J Q^T,    (3)

where Q is complex-orthogonal and J is complex-symmetric tridiagonal:

    J = [ α_1  β_1                        ]
        [ β_1  α_2   β_2                  ]
        [      β_2   ...    ...           ]
        [            ...    ...   β_{n-1} ]
        [                 β_{n-1}  α_n    ]    (4)

Let Q = ( q_1  q_2  ...  q_n ). Since Q^T Q = I, we see that

    q_i^T q_j = δ_{ij}.    (5)

We call the property defined by (5) complex-orthogonality (c-orthogonality for short).

The dominant cost of the Lanczos method is matrix-vector multiplication. We propose an O(n log n) FFT-based multiplication scheme so that we can tridiagonalize a Hankel matrix in O(n^2 log n) operations. Given w ∈ C^n, we want to compute the product p = Hw. Define ĉ ∈ C^{2n-1} by

    ĉ = ( h_n, h_{n+1}, ..., h_{2n-1}, h_1, h_2, ..., h_{n-1} )^T.    (6)

Let w = ( w_1, w_2, ..., w_n )^T. Define ŵ ∈ C^{2n-1} by

    ŵ = ( w_n, w_{n-1}, ..., w_1, 0, ..., 0 )^T.    (7)

Let fft(v) denote the one-dimensional FFT of a vector v, ifft(v) the one-dimensional inverse FFT of v, and ".*" the componentwise multiplication of two vectors. The desired product is given by the first n elements of the vector ifft(fft(ĉ) .* fft(ŵ)). The authors [4] showed that Algorithm 2 requires O(n log n) flops (real floating-point operations).

To maintain the symmetry and tridiagonality of J, we apply complex-orthogonal rotations in the QR iterations: J = W D W^T, where W is complex-orthogonal and D is diagonal. Thus we get (2) with X = QW. The workhorse of our QR-like iteration [3] is a complex-orthogonal "plane rotation" P ∈ C^{n×n}. That is, P is an identity matrix except for four strategic positions formed by the intersection of the ith and jth rows and columns:

    G = [ p_ii  p_ij ] = [  c  s ],    (8)
        [ p_ji  p_jj ]   [ -s  c ]

where c^2 + s^2 = 1. The matrix G is constructed to annihilate the second component of a 2-vector x = (x_1, x_2)^T. We present our computational schemes in the following.
Overall Procedure (Hankel Eigenvalue Computation). Given H, compute all its eigenvalues.
1. Apply a Lanczos procedure (Algorithm 1) to reduce H to a tridiagonal form J, using Algorithm 2 to compute the Hankel matrix-vector products.
2. Apply a QR-type procedure (Algorithm 3) to find the eigenvalues of J.
Algorithm 1 (Lanczos Tridiagonalization). Given H, this algorithm computes a tridiagonal matrix J.

    Initialize q_1 such that q_1^T q_1 = 1; v_1 = H q_1;
    for j = 1 : n
        α_j = q_j^T v_j;  r_j = v_j - α_j q_j;
        if j < n
            β_j = sqrt(r_j^T r_j);
            if β_j = 0, break; end;
            q_{j+1} = r_j / β_j;
            v_{j+1} = H q_{j+1} - β_j q_j;
        end
    end

Algorithm 2 (Fast Hankel Matrix-Vector Multiplication). Given w ∈ C^n, compute p = Hw.

    Construct the vectors ĉ and ŵ as in (6) and (7).
    Calculate y ∈ C^{2n-1} by y = ifft(fft(ĉ) .* fft(ŵ)).
    The desired p ∈ C^n is given by the first n elements of y: p = ( y_1, y_2, ..., y_{n-1}, y_n )^T.

Algorithm 3 (Complex-Symmetric QR Step). Given J ∈ C^{m×m}, this algorithm implements one step of the QR method with the Wilkinson shift.

    Initialize Q = I;
    Find the eigenvalue μ of J_{m-1:m, m-1:m} that is closer to J_{mm};
    Set x_1 = J_{11} - μ;  x_2 = J_{21};
    for k = 1 : m-1
        Find a complex-orthogonal matrix P_k^T to annihilate x_2 using x_1;
        J = P_k^T J P_k;  Q = Q P_k;
        if k < m-1
            x_1 = J_{k+1,k};  x_2 = J_{k+2,k};
        end if
    end for

3. ERROR ANALYSIS FUNDAMENTALS

Let u denote the unit roundoff. Since the objective of this paper is to analyze the stability of our algorithm [4] and not to derive precise error bounds, we carry out first-order error analysis. Sometimes we ignore moderate constant coefficients. The following results for the basic complex arithmetic operations are from Higham [2]:

    fl(x ± y) = (x ± y)(1 + δ),   |δ| ≤ u,       (9)
    fl(xy)    = xy(1 + δ),        |δ| ≤ √2 γ_2,  (10)
    fl(x/y)   = (x/y)(1 + δ),     |δ| ≤ √2 γ_4,  (11)

where γ_k = ku/(1 - ku). Using (10) and, as in the real case [2], repeatedly applying (9), we can show that for two n-vectors x and y

    fl(x^T y) = x^T (y + Δy) = (x + Δx)^T y,    (12)

where

    |Δy| ≤ γ_{n+1} |y|,  |Δx| ≤ γ_{n+1} |x|,  γ_{n+1} = (n+1)u / (1 - (n+1)u).

We assume that nu ≪ 1, so we can think of γ_n as approximately nu.
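Before turning to the analysis, the fast product of Algorithm 2 can be made concrete. The following is a minimal NumPy sketch (the function and variable names are ours; the paper's MATLAB implementation is not shown). It embeds the 2n-1 defining entries of H in the vector ĉ of (6), reverses and zero-pads w as in (7), and multiplies in the Fourier domain:

```python
import numpy as np

def hankel_matvec_fft(col, row, w):
    """Compute p = H w for the Hankel matrix H with first column `col`
    (h_1, ..., h_n) and last row `row` (h_n, ..., h_{2n-1}), using three
    FFTs of length 2n - 1 as in Algorithm 2."""
    n = len(w)
    h = np.concatenate((col, row[1:]))              # h_1, ..., h_{2n-1}
    c_hat = np.concatenate((h[n - 1:], h[:n - 1]))  # eq. (6): h_n..h_{2n-1}, h_1..h_{n-1}
    w_hat = np.concatenate((w[::-1], np.zeros(n - 1)))  # eq. (7): w reversed, zero-padded
    y = np.fft.ifft(np.fft.fft(c_hat) * np.fft.fft(w_hat))
    return y[:n]                                    # first n elements of y
```

The product costs O(n log n) flops instead of the O(n^2) of an explicit matrix-vector product; note that `ifft` returns a complex vector even for real input data.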
4. LANCZOS TRIDIAGONALIZATION

Analogous to the analysis of the conventional Lanczos tridiagonalization [5], we shall show that the column vectors q_j can lose c-orthogonality. Let

    α̂_j = fl(q̂_j^T v̂_j) = q̂_j^T (v̂_j + Δv)

for some Δv. It follows immediately from (12) that ‖Δv‖_2 ≤ γ_{n+1} ‖v̂_j‖_2. Thus

    α̂_j = q̂_j^T v̂_j + Δα,  |Δα| ≤ γ_{n+1} ‖q̂_j‖_2 ‖v̂_j‖_2,    (13)

and

    |α̂_j| ≤ (1 + γ_{n+1}) ‖q̂_j‖_2 ‖v̂_j‖_2.    (14)

If

    r̂_j = fl(v̂_j - α̂_j q̂_j) = (v̂_j + Δv) - α̂_j (q̂_j + Δq)    (15)

for some Δv and Δq, then (9) and (10) show that

    ‖Δq‖_2 ≤ √2 γ_2 ‖q̂_j‖_2,  ‖Δv‖_2 ≤ u ‖v̂_j - α̂_j q̂_j‖_2.

Denoting by Δr = Δv - α̂_j Δq the first-order error in r̂_j and using (14), we get

    ‖Δr‖_2 ≤ u ‖v̂_j - α̂_j q̂_j‖_2 + |α̂_j| √2 γ_2 ‖q̂_j‖_2 ≲ (u + (1 + 2√2) u ‖q̂_j‖_2^2) ‖v̂_j‖_2.    (16)

Now, from (11), we have

    q̂_{j+1} = fl(r̂_j / β̂_j) = (r̂_j + Δr′)/β̂_j,  ‖Δr′‖_2 ≤ √2 γ_4 ‖r̂_j‖_2;

that is,

    β̂_j q̂_{j+1} = r̂_j + Δr′,  ‖Δr′‖_2 ≤ √2 γ_4 ‖r̂_j‖_2.    (17)

To check c-orthogonality, we multiply both sides of the above equation by q̂_j^T:

    β̂_j q̂_j^T q̂_{j+1} = q̂_j^T (r̂_j + Δr′) ≈ q̂_j^T v̂_j - α̂_j q̂_j^T q̂_j + q̂_j^T (Δr + Δr′).

Since q̂_j is normalized and q̂_j^T v̂_j = α̂_j - Δα from (13), we get

    β̂_j q̂_j^T q̂_{j+1} = -Δα + q̂_j^T (Δr + Δr′).    (18)

Next we derive a bound for ‖Δr′‖_2 by using (17), (15), and (14):

    ‖Δr′‖_2 ≤ √2 γ_4 ‖v̂_j - α̂_j q̂_j‖_2 ≲ √2 γ_4 (1 + ‖q̂_j‖_2^2) ‖v̂_j‖_2.    (19)

It follows from (16) and (19) that, for a moderate constant c_1,

    ‖Δr + Δr′‖_2 ≲ c_1 u (1 + ‖q̂_j‖_2^2) ‖v̂_j‖_2.

Finally, (13) and (18) imply that

    |β̂_j q̂_j^T q̂_{j+1}| ≤ |Δα| + ‖Δr + Δr′‖_2 ‖q̂_j‖_2 ≲ (γ_{n+1} + c_1 u (1 + ‖q̂_j‖_2^2)) ‖v̂_j‖_2 ‖q̂_j‖_2.

Using the approximation γ_n ≈ nu when nu ≪ 1 and ignoring the moderate constant coefficients, we obtain

    |q̂_j^T q̂_{j+1}| ≲ ( (n + ‖q̂_j‖_2^2) ‖q̂_j‖_2 ‖v̂_j‖_2 / |β̂_j| ) u.    (20)

The above inequality says that q̂_j can lose c-orthogonality when either |β̂_j| is small or ‖q̂_j‖_2 is large (recall that q̂_j and β̂_j denote computed results).
Hence the loss is due either to cancellation in computing β_j or to growth in ‖q̂_j‖_2, and not to the accumulation of roundoff errors. In the real case, ‖q̂_j‖_2 = 1 and (20) is consistent with the results of Paige [5], whereas in the complex case the norm ‖q̂_j‖_2 can be arbitrarily large: for example, q = (√(1+t^2), it)^T satisfies q^T q = 1 for any real t, while ‖q‖_2 grows with t. This is an additional factor that causes the loss of c-orthogonality.
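The recursion of Algorithm 1, whose rounding behavior was just analyzed, can be sketched in a few lines of NumPy (our own naming; breakdown and reorthogonalization are not handled, and the unconjugated bilinear form q^T q is used throughout, exactly as in the algorithm):

```python
import numpy as np

def lanczos_sym(H, q1):
    """Unconjugated Lanczos tridiagonalization of a complex-symmetric H,
    following Algorithm 1. Returns Q and tridiagonal J with H ~ Q J Q^T.
    Uses the bilinear form q^T q, not the Hermitian q^H q, so ||q_j||_2
    may exceed 1 and beta_j may nearly vanish (breakdown)."""
    n = H.shape[0]
    Q = np.zeros((n, n), dtype=complex)
    alpha = np.zeros(n, dtype=complex)
    beta = np.zeros(n - 1, dtype=complex)
    q = q1 / np.sqrt(np.dot(q1, q1))          # normalize so that q^T q = 1
    v = H @ q
    for j in range(n):
        Q[:, j] = q
        alpha[j] = np.dot(q, v)               # alpha_j = q_j^T v_j
        r = v - alpha[j] * q
        if j < n - 1:
            beta[j] = np.sqrt(np.dot(r, r))   # beta_j = (r_j^T r_j)^(1/2)
            q = r / beta[j]
            v = H @ q - beta[j] * Q[:, j]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, J
```

On a real symmetric Hankel matrix this reduces to the classical Lanczos process; on complex data, monitoring ‖Q^T Q − I‖_F during the run reproduces the loss of c-orthogonality discussed above.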
5. HANKEL MATRIX-VECTOR MULTIPLICATION

The only computation in the fast Hankel matrix-vector multiplication is

    y = ifft(fft(ĉ) .* fft(ŵ)).

Denoting the computed y by

    ŷ = fl(ifft(fft(ĉ) .* fft(ŵ))),

we first consider the computation of fft(ŵ). Assume that the weights ω_k^j in the FFT are precalculated and that the computed weights ω̂_k^j satisfy

    ω̂_k^j = ω_k^j + ε_{kj},  |ε_{kj}| ≤ μ for all j and k.

Depending on the method that computes the weights, we can take μ = cu, μ = cu log j, or μ = cuj, where c denotes a constant dependent on the method [6]. Following Higham [2], we let

    η = μ + γ_4 (1 + μ) ≈ μ + 4u.

Let

    x = fft(ŵ) and x̂ = fl(fft(ŵ)).

We have

    ‖x - x̂‖_2 ≤ ( η log_2(2n-1) / (1 - η log_2(2n-1)) ) ‖x‖_2.

Approximately,

    ‖x - x̂‖_2 ≲ η log_2(2n-1) ‖x‖_2.    (21)

Similarly, let

    v = fft(ĉ) and v̂ = fl(fft(ĉ)).

Then

    ‖v - v̂‖_2 ≲ η log_2(2n-1) ‖v‖_2.    (22)

Next, we consider the componentwise multiplication z = v̂ .* x̂. Denoting the computed z by ẑ = fl(v̂ .* x̂), we have, approximately,

    ‖z - ẑ‖_2 ≲ 2√2 u ‖z‖_2.    (23)

Finally, we consider

    y = ifft(ẑ) and ŷ = fl(ifft(ẑ)).

Since F_{2n-1} is the FFT matrix, we have

    F_{2n-1}^{-1} = (1/(2n-1)) F̄_{2n-1},

where F̄ denotes the conjugate of F. Thus, similarly to (21) and (22), we get

    ‖y - ŷ‖_2 ≲ η log_2(2n-1) ‖y‖_2.    (24)
As (2n-1)^{-1/2} F_{2n-1} is unitary, it follows that

    ‖y‖_2 = ‖ifft(ẑ)‖_2 = ‖F̄_{2n-1} ẑ‖_2 / (2n-1) = ‖ẑ‖_2 / √(2n-1).

Consequently, using (23) and (24), we get

    ‖y - ŷ‖_2 ≲ ( η log_2(2n-1) / √(2n-1) ) ‖ẑ‖_2 ≲ ( η log_2(2n-1) / √(2n-1) ) (1 + 2√2 u) ‖z‖_2.

Ignoring the second and higher order terms in u, we have

    ‖y - ŷ‖_2 ≲ ( η log_2(2n-1) / √(2n-1) ) ‖v̂ .* x̂‖_2.    (25)

It remains to express ‖v̂ .* x̂‖_2 in terms of ‖ĉ‖_2 and ‖ŵ‖_2. Using the inequality

    ‖a .* b‖_2 ≤ max_i |a_i| ‖b‖_2 ≤ ‖a‖_2 ‖b‖_2

and ignoring the second and higher order terms in u, we obtain

    ‖v̂ .* x̂‖_2 ≤ ‖v .* x‖_2 + ‖(v - v̂) .* x‖_2 + ‖v̂ .* (x - x̂)‖_2
               ≤ ‖v‖_2 ‖x‖_2 + ‖v - v̂‖_2 ‖x‖_2 + ‖v‖_2 ‖x - x̂‖_2.

It follows from (21) and (22) that

    ‖v̂ .* x̂‖_2 ≲ (1 + 2 η log_2(2n-1)) ‖v‖_2 ‖x‖_2.    (26)

Note that

    ‖v‖_2 = ‖fft(ĉ)‖_2 ≤ √(2n-1) ‖ĉ‖_2 and ‖x‖_2 ≤ √(2n-1) ‖ŵ‖_2.

Using (25) and (26) and ignoring the second and higher order terms, we obtain

    ‖y - ŷ‖_2 ≲ √(2n-1) η log_2(2n-1) ‖ĉ‖_2 ‖ŵ‖_2.

What does the above error bound tell us about the accuracy of the fast multiplication? For simplicity, we assume that the entries of the Hankel matrix H are about the same size. Recall that ĉ consists of the first column and the last row of H, so

    ‖ĉ‖_2 ≈ √(2/n) ‖H‖_F,

and we conclude that

    ‖y - ŷ‖_2 ≲ 2 η log_2(2n-1) ‖H‖_F ‖ŵ‖_2.

For conventional matrix-vector multiplication Hw, we have

    ‖Hw - fl(Hw)‖_2 ≲ γ_n ‖H‖_2 ‖w‖_2.

Since γ_n ≈ nu, η is of size u, and ‖H‖_F ≤ √n ‖H‖_2, our fast multiplication is more accurate than conventional multiplication by roughly a factor of log_2 n / √n. In summary, the fast Hankel matrix-vector multiplication scheme is stable. This is consistent with the error bound for the FFT, which is smaller than the bound for conventional multiplication by the same factor as the reduction in the complexity of the method [2].
6. CONDITION OF A COMPLEX-ORTHOGONAL TRANSFORMATION

In this section, we derive the condition number of the G of (8) and present a backward error analysis of an application of G. First, consider cond(G). We have

    G^H G = [  t   iv ],    (27)
            [ -iv   t ]

where t = |c|^2 + |s|^2 and v = 2 Im(c̄ s). It can be verified that the two eigenvalues of G^H G are t + v and t - v, and that

    cond(G) = √( (t + |v|) / (t - |v|) ).

Suppose that G is constructed to annihilate the second entry of x = (x_1, x_2)^T. Then, in (8),

    c = x_1/α and s = x_2/α, where α^2 = x^T x.

Consequently, in (27),

    t = x^H x / |x^T x| and v = x^H [0 -i; i 0] x / |x^T x|.

The two eigenvalues t ± v are therefore

    x^H [1 -i; i 1] x / |x^T x| and x^H [1 i; -i 1] x / |x^T x|.

The condition number of G can be arbitrarily large for some complex x. For example, when x = (1 + ε, i)^T with small real ε > 0, the two eigenvalues are

    |ε| / |2 + ε| and |2 + ε| / |ε|,

so cond(G) = |2 + ε| / |ε| ≈ 2/ε. This also shows that ‖G‖_2 can be arbitrarily large. For any real x ≠ 0, however, t = 1, v = 0, and cond(G) = ‖G‖_2 = 1.

Now we consider the application of G to y = (y_1, y_2)^T. Let

    ẑ = fl(Gy) = fl( ( c y_1 + s y_2, -s y_1 + c y_2 )^T ).

From (9) and (10), we get

    ẑ = ( (c y_1 (1 + ε_1) + s y_2 (1 + ε_2))(1 + ε_3), (-s y_1 (1 + ε_4) + c y_2 (1 + ε_5))(1 + ε_6) )^T,

where |ε_i| ≤ √2 γ_2 for i = 1, 2, 4, 5 and |ε_i| ≤ u for i = 3, 6. Using first-order error analysis, we derive

    ẑ = (G + ΔG) y, where ΔG = [  c(ε_1 + ε_3)  s(ε_2 + ε_3) ]
                               [ -s(ε_4 + ε_6)  c(ε_5 + ε_6) ]

and

    |ΔG| ≲ (u + √2 γ_2) |G| ≈ (1 + 2√2) u |G|.

This shows that the application of G is perfectly stable in the real case but can be unstable in the complex case, because |G| can be large for some complex x.
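The construction and its ill conditioning are easy to reproduce numerically. The NumPy sketch below (the helper name is ours) builds G for x = (1 + ε, i)^T with ε = 10⁻⁶; the computed G annihilates x_2 and satisfies G G^T = I to working accuracy, yet its condition number grows like 2/ε:

```python
import numpy as np

def rotation_for(x):
    """Complex-orthogonal plane rotation (8) annihilating x[1]:
    c = x_1/alpha, s = x_2/alpha with alpha^2 = x^T x (unconjugated),
    so c^2 + s^2 = 1 while |c|^2 + |s|^2 may be large."""
    alpha = np.sqrt(np.dot(x, x))   # bilinear "norm"; may nearly cancel
    c, s = x[0] / alpha, x[1] / alpha
    return np.array([[c, s], [-s, c]])

eps = 1e-6
x = np.array([1 + eps, 1j])
G = rotation_for(x)
cond = np.linalg.cond(G)            # approximately 2/eps, i.e. about 2e6
```

For any real x the same construction gives cond(G) = 1, matching the analysis above.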
Figure 1. Errors in the eigenvalues of one hundred random Hankel matrices.

7. NUMERICAL EXPERIMENTS

We implemented the fast eigenvalue algorithm for Hankel matrices in MATLAB and tested the following examples on a SUN Ultra workstation.

Example 1. We generated one hundred 20 × 20 random complex Hankel matrices. Each matrix was constructed by choosing two random complex vectors as its first column and its last row. Both the real and imaginary parts of each component of the random vectors were uniformly distributed over [-1, 1]. For each matrix, we measured errors by assuming that the eigenvalues computed by the MATLAB function eig are exact. Let λ̂_i and λ_i be the eigenvalues computed by our program and by MATLAB, respectively. Figure 1 plots the square root of the sum of squares of the relative errors:

    E_eig = ( Σ_{i=1}^{20} |λ̂_i - λ_i|^2 / |λ_i|^2 )^{1/2}.    (28)

We observe that the relative accuracy is generally better than 10^{-12}.

Example 2. A rank-deficient complex Hankel matrix H was generated using the Vandermonde decomposition:

    H = [ 1          1          ...  1          ] [ a_1          ] [ 1  z_1  ...  z_1^{n-1} ]
        [ z_1        z_2        ...  z_k        ] [     ...      ] [ 1  z_2  ...  z_2^{n-1} ]
        [ ...        ...             ...        ] [         a_k  ] [ ...                    ]
        [ z_1^{n-1}  z_2^{n-1}  ...  z_k^{n-1}  ]                  [ 1  z_k  ...  z_k^{n-1} ]    (29)

The n × n matrix H has rank k. In this example, we chose n = 10, k = 6, the z_i as random complex numbers with modulus one, and the a_i as random real numbers uniformly distributed on [-1, 1]:

    z = ( 0.8585 - 0.528i, 0.995 - 0.3i, 0.838 + 0.5565i, 0.9 - 0.9959i, 0.9855 - 0.696i, 0. - 0.9299i ),
    a = ( 0.8436, 0.4764, 0.6475, 0.886, 0.879, 0.8338 ).

Using (29), we generated the first column and the last row of a complex Hankel matrix. Then we slightly perturbed these two vectors by adding two small random noise vectors of size 10^{-6} with components normally distributed with
zero mean. We obtained the following two vectors as the first column and the last row of our test Hankel matrix:

    H_{1:n,1} = ( 2.887 - 0.i, 0.846 - 0.394i, 0.9 - 0.29i, 0.4866 - 2.6229i, 0.9623 - 2.77i, 0.738 - 0.599i, 0.2395 - 0.255i, 0.23 + 0.427i, 0. - 0.759i, 0.35 - 0.676i )^T,
    H_{n,1:n} = ( 0.35 - 0.676i, 0.7279 - 0.566i, 0.542 - 0.7927i, 0.2653 - 0.88i, 0.793 - 2.353i, 0.696 - 3.342i, 0.2362 - 2.884i, 0.845 - 0.9649i, 0.269 + 0.3874i, 2. - 0.6284i )^T.

In the sixth and seventh Lanczos iterations, we got small subdiagonal elements:

    β_6 = (3.436 + 0.38i) × 10^{-4} and β_7 = (2. - 0.32i) × 10^{-6}.

The c-orthogonality of the matrix Q computed by the Lanczos method was lost:

    ‖Q^T Q - I‖_F ≈ 2.

Our fast algorithm computed the eigenvalues

    ( 0.368 - 9.27i, 4.339 - 7.236i, 0. - 0.754i, 0.86 + 0.94i, 0.442 - 0.355i, 0.6 + 0.25i, ≈0, ≈0, 4.3588 - 7.929i, 0.428 - 9.222i )^T.

The last two eigenvalues are spurious. For comparison, MATLAB's eig gave

    ( 0.368 - 9.27i, 4.339 - 7.236i, 0. - 0.754i, 0.86 + 0.94i, 0.442 - 0.355i, 0.6 + 0.25i, ≈0, ≈0, ≈0, ≈0 )^T.

The errors in the first six eigenvalues were of magnitude 10^{-13}. So, if we know the rank k or an estimate of it, we can stop the Lanczos process after k iterations. This leads to an algorithm for computing the k dominant eigenvalues of a complex Hankel matrix. In this example, we quit the Lanczos procedure after six iterations and computed the eigenvalues of the 6-by-6 tridiagonal matrix. All six computed eigenvalues were correct to at least four digits; the three largest eigenvalues were correct to at least nine digits.
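The Vandermonde construction (29) is straightforward to reproduce. The sketch below (our own variable names, with freshly drawn random z_i and a_i rather than the exact values above) builds a rank-6 complex Hankel matrix of order 10 and confirms its numerical rank with an SVD:

```python
import numpy as np

# Build H = Z diag(a) Z^T as in (29): a complex symmetric Hankel matrix
# of algebraic rank k, since H[p, q] = sum_i a_i z_i^(p+q) depends only on p+q.
rng = np.random.default_rng(1)
n, k = 10, 6
z = np.exp(2j * np.pi * rng.random(k))    # random nodes of modulus one
a = rng.uniform(-1.0, 1.0, k)             # random real weights in [-1, 1]
Z = z[None, :] ** np.arange(n)[:, None]   # n-by-k Vandermonde matrix
H = Z @ np.diag(a) @ Z.T
s = np.linalg.svd(H, compute_uv=False)    # s[k:] sit at roundoff level
```

Adding zero-mean noise of size 10⁻⁶, as in the example, lifts the trailing singular values to roughly the noise level, which is consistent with the small but nonzero β_7 observed above.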
Figure 2. Errors in the eigenvalues of one hundred random Hermitian Toeplitz matrices.

Example 3. Our fast algorithm for complex Hankel matrices can be adapted to Hermitian Toeplitz matrices by incorporating the following simple changes:
- Replace the fast matrix-vector multiplication algorithm for Hankel matrices with one for Toeplitz matrices.
- Replace the modified Lanczos method with a conventional one for Hermitian matrices.
- Replace the modified QR method with a conventional one for Hermitian matrices.

We generated one hundred 20-element random complex vectors as the first columns of the Hermitian Toeplitz matrices. Figure 2 shows the error parameter E_eig as defined in (28). Not surprisingly, the errors here are smaller than those in Example 1; specifically, the accuracy is better than 10^{-13}.

REFERENCES

1. G. H. Golub and C. F. Van Loan. Matrix Computations, 3rd Ed. The Johns Hopkins University Press, Baltimore, MD, USA, 1996.
2. N. J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1996.
3. F. T. Luk and S. Qiao. Using Complex-Orthogonal Transformations to Diagonalize a Complex-Symmetric Matrix. In Advanced Signal Processing Algorithms, Architectures, and Implementations VII, Franklin T. Luk, Editor, Proceedings of SPIE Vol. 3162, pp. 418-425, 1997.
4. F. T. Luk and S. Qiao. A Fast Eigenvalue Algorithm for Hankel Matrices. In Advanced Signal Processing Algorithms, Architectures, and Implementations VIII, Franklin T. Luk, Editor, Proceedings of SPIE Vol. 3461, pp. 249-256, 1998.
5. C. C. Paige. Error Analysis of the Lanczos Algorithm for Tridiagonalizing a Symmetric Matrix. J. Inst. Maths Applics, Vol. 18, pp. 341-349, 1976.
6. C. F. Van Loan. Computational Frameworks for the Fast Fourier Transform. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992.
More informationChapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices
Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay
More informationEigenvalue Problems and Singular Value Decomposition
Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software
More informationDense LU factorization and its error analysis
Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationLinear algebra & Numerical Analysis
Linear algebra & Numerical Analysis Eigenvalues and Eigenvectors Marta Jarošová http://homel.vsb.cz/~dom033/ Outline Methods computing all eigenvalues Characteristic polynomial Jacobi method for symmetric
More information4.8 Arnoldi Iteration, Krylov Subspaces and GMRES
48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate
More informationComputation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.
Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces
More informationChapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers.
MATH 434/534 Theoretical Assignment 3 Solution Chapter 4 No 40 Answer True or False to the following Give reasons for your answers If a backward stable algorithm is applied to a computational problem,
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to
More informationAM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization
AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing
More informationSome inequalities for sum and product of positive semide nite matrices
Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME MS&E 38 (CME 338 Large-Scale Numerical Optimization Course description Instructor: Michael Saunders Spring 28 Notes : Review The course teaches
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationHomework 1 Elena Davidson (B) (C) (D) (E) (F) (G) (H) (I)
CS 106 Spring 2004 Homework 1 Elena Davidson 8 April 2004 Problem 1.1 Let B be a 4 4 matrix to which we apply the following operations: 1. double column 1, 2. halve row 3, 3. add row 3 to row 1, 4. interchange
More informationEigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis
Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationAM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition
AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l
More informationOutline Background Schur-Horn Theorem Mirsky Theorem Sing-Thompson Theorem Weyl-Horn Theorem A Recursive Algorithm The Building Block Case The Origina
A Fast Recursive Algorithm for Constructing Matrices with Prescribed Eigenvalues and Singular Values by Moody T. Chu North Carolina State University Outline Background Schur-Horn Theorem Mirsky Theorem
More informationMath 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm
Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm References: Trefethen & Bau textbook Eigenvalue problem: given a matrix A, find
More informationAn Analog of the Cauchy-Schwarz Inequality for Hadamard. Products and Unitarily Invariant Norms. Roger A. Horn and Roy Mathias
An Analog of the Cauchy-Schwarz Inequality for Hadamard Products and Unitarily Invariant Norms Roger A. Horn and Roy Mathias The Johns Hopkins University, Baltimore, Maryland 21218 SIAM J. Matrix Analysis
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give
More informationEIGENVALUE PROBLEMS (EVP)
EIGENVALUE PROBLEMS (EVP) (Golub & Van Loan: Chaps 7-8; Watkins: Chaps 5-7) X.-W Chang and C. C. Paige PART I. EVP THEORY EIGENVALUES AND EIGENVECTORS Let A C n n. Suppose Ax = λx with x 0, then x is a
More informationOutline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St
Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic
More informationLinear Algebra: Characteristic Value Problem
Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number
More informationMatrices, Moments and Quadrature, cont d
Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that
More informationSection 4.5 Eigenvalues of Symmetric Tridiagonal Matrices
Section 4.5 Eigenvalues of Symmetric Tridiagonal Matrices Key Terms Symmetric matrix Tridiagonal matrix Orthogonal matrix QR-factorization Rotation matrices (plane rotations) Eigenvalues We will now complete
More informationThe in uence of orthogonality on the Arnoldi method
Linear Algebra and its Applications 309 (2000) 307±323 www.elsevier.com/locate/laa The in uence of orthogonality on the Arnoldi method T. Braconnier *, P. Langlois, J.C. Rioual IREMIA, Universite de la
More informationDaily Register Q7 w r\ i i n i i /"\ HTT /"\I AIM MPmf>riAnpn ^ ^ oikiar <mn ^^^*»^^*^^ _.. *-.,..,» * * w ^.. i nr\r\
$ 000 w G K G y' F: w w w y' w-y k Yk w k ' V 0 8 y Q w \ /"\ /"\ > ^ ^ k < ^^^*»^^*^^ _*-» * * w ^ \\ Y 88 Y Y 8 G b k =-- K w' K y J KKY G - - v v -y- K -y- x- y bb K' w k y k y K y y w b v w y F y w
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview
More informationMATH 425-Spring 2010 HOMEWORK ASSIGNMENTS
MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 3: Positive-Definite Systems; Cholesky Factorization Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 11 Symmetric
More informationBlock Bidiagonal Decomposition and Least Squares Problems
Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition
More informationIndex. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)
page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation
More informationMatrix Factorization and Analysis
Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their
More informationNumerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??
Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement
More informationFast Hankel Tensor-Vector Products and Application to Exponential Data Fitting
Fast Hankel Tensor-Vector Products and Application to Exponential Data Fitting Weiyang Ding Liqun Qi Yimin Wei arxiv:1401.6238v1 [math.na] 24 Jan 2014 January 27, 2014 Abstract This paper is contributed
More informationA TOUR OF LINEAR ALGEBRA FOR JDEP 384H
A TOUR OF LINEAR ALGEBRA FOR JDEP 384H Contents Solving Systems 1 Matrix Arithmetic 3 The Basic Rules of Matrix Arithmetic 4 Norms and Dot Products 5 Norms 5 Dot Products 6 Linear Programming 7 Eigenvectors
More informationA Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationEXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)
EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily
More informationHomework For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable.
Math 5327 Fall 2018 Homework 7 1. For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable. 3 1 0 (a) A = 1 2 0 1 1 0 x 3 1 0 Solution: 1 x 2 0
More informationCS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3
CS137 Introduction to Scientific Computing Winter Quarter 2004 Solutions to Homework #3 Felix Kwok February 27, 2004 Written Problems 1. (Heath E3.10) Let B be an n n matrix, and assume that B is both
More informationI.C.F. Ipsen small magnitude. Our goal is to give some intuition for what the bounds mean and why they hold. Suppose you have to compute an eigenvalue
Acta Numerica (998), pp. { Relative Perturbation Results for Matrix Eigenvalues and Singular Values Ilse C.F. Ipsen Department of Mathematics North Carolina State University Raleigh, NC 7695-85, USA ipsen@math.ncsu.edu
More information