Performance Analysis of Modified Gram-Schmidt Cholesky Implementation on a 16-bit DSP Chip
Int. J. Com. Dig. Sys. 2, No. 1, 1-7 (2013)

International Journal of Computing and Digital Systems, 2013, UOB CSP, University of Bahrain

Performance Analysis of Modified Gram-Schmidt Cholesky Implementation on a 16-bit DSP Chip

Rabah Maoudj, Luc Fety and Christophe Alexandre
Lab. CEDRIC / LAETITIA, CNAM, Paris, France
rabah.maoudj@cnam.fr, luc.fety@cnam.fr, christophe.alexandre@cnam.fr

Received 9 Oct. 2012, Revised 20 Nov. 2012, Accepted 5 Dec. 2012

Abstract: This paper focuses on the performance analysis of linear system solving based on Cholesky decomposition and QR factorization, implemented on a 16-bit fixed-point DSP chip (TMS320C6474). The classical method of Cholesky decomposition has the advantage of low execution time. However, the modified Gram-Schmidt QR factorization performs better in terms of robustness against round-off error propagation. In this study, we propose a third method, called Modified Gram-Schmidt Cholesky decomposition, and show that it provides a compromise between the two performance criteria cited above. A joint theoretical and experimental analysis of the global performance of the three methods is presented and discussed.

Keywords: fixed-point; Cholesky; QR; system solving; DSP; signal processing.

I. INTRODUCTION

This paper presents the implementation results, on a 16-bit fixed-point DSP chip (the TMS320C6474, which is optimized for 16-bit arithmetic operations) [1], of linear system solving performed by Cholesky decomposition and the modified Gram-Schmidt process. Algorithms that contain a matrix inversion or a least-squares problem are nowadays frequently used in a vast range of domains, but wireless telecommunications is the main user of linear system solving, as the several works published in the IEEE attest.
In the field of wireless telecommunications, linear system solving is found mainly in the algorithmic part of propagation channel equalization and interference cancellation for MIMO (Multi Input Multi Output) systems, where the least square problem is often encountered [2, 3] as a result of solving MMSE (Minimum Mean Square Estimation) or LSE (Least Square Estimation) algorithms. In the case of the least square problem, the most popular techniques are the classical Cholesky decomposition and QR factorization, as will be detailed in section II. We can show that the lower triangular matrix in the classical Cholesky decomposition can be obtained by the QR factorization and vice versa. This property can be used to perform system solving using only the upper triangular matrix issued from the QR factorization, without using the orthogonal matrix Q as in the classical method. We call this method Modified Gram-Schmidt Cholesky (MGS-Cholesky). In this work, we evaluate the efficiency of this method against round-off error, compared to the classical Cholesky method. Essentially, it is known that QR factorization using the modified Gram-Schmidt process (MGS) is less sensitive to round-off error than the classical Gram-Schmidt process when using a fixed-point calculation format [4, 5]. Therefore, the study focuses on evaluating the robustness of the methods against round-off errors in intermediate calculations. In the second section, we give an overview of the classical Cholesky decomposition and MGS-QR factorization algorithms that are implemented, along with the analytical aspects. In the third section, the 16-bit implementation of system solving based on Cholesky decomposition, QR factorization and MGS-Cholesky is detailed. In the fourth section, a theoretical study followed by an experimental evaluation is presented; the performance in terms of execution time and robustness against the round-off errors introduced by the 16-bit resolution of intermediate calculations is discussed.

II.
LINEAR SYSTEM SOLVING BASED ON CHOLESKY DECOMPOSITION AND QR FACTORIZATION

The Cholesky decomposition is among the best-known and most popular methods for linear system solving. This method applies only if the matrix is Hermitian positive definite, which is often the case when dealing with least
square problems. The classical Cholesky decomposition can be achieved in two ways, as we shall develop below.

A. Least square problem

The least square problem often results from an overdetermined linear system [6]: the number M of equations is greater than the number N of unknowns. This situation occurs in the case of pilot-aided propagation channel estimation under a low-rank channel assumption [2, 7]. The following mathematical expressions summarize the situation. We assume a linear system given by

Ax − b = ε,  A ∈ C^(M×N), b ∈ C^M and ε ∈ C^M  (1)

The two most popular methods to resolve equation (1) for min ‖ε‖ are given in the following sections 1) and 2).

1) Normal equation method

The normal equation method is based on the search for the minimum modulus of the residual error ε, as expressed in the following equation:

min ‖ε‖²:  d[(Ax − b)^H (Ax − b)] / dx = 0

Therefore,

A^H A x = A^H b  (2)

If we set Γ = A^H A and β = A^H b, then

Γx = β and x = Γ^(−1) β  (3)

where Γ ∈ C^(N×N) is a Hermitian positive definite matrix and β ∈ C^N. Γ can be decomposed by the Cholesky method, and equation (3) becomes

x = (L L^H)^(−1) β  (4)

where L ∈ C^(N×N) is a lower triangular matrix.

2) QR factorization

This method consists in a QR decomposition of A. Equation (1) then becomes

QRx = b + ε,  x = R^(−1) Q^H b, ε = 0  (5)

where A = QR, R ∈ C^(N×N) is an upper triangular matrix and Q ∈ C^(M×N) is an orthogonal matrix.

From sections 1) and 2) we have

Γ = A^H A = L L^H = R^H R,  hence L = R^H

Q can be obtained from the Cholesky decomposition by the following system solving:

A = Q L^H,  Q = A (L^H)^(−1)  (6)

The operation in equation (6) is commonly called the orthogonalization of matrix A. We conclude that the Cholesky decomposition is a way to obtain the QR decomposition and vice versa. This observation confirms that the QR factorization requires more execution time than the Cholesky decomposition, since QR can be considered as a Cholesky decomposition plus the construction of matrix Q.
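As an illustration of the two solution routes (4) and (5), the following sketch solves the same overdetermined system by the normal-equation/Cholesky route and by the QR route. Floating-point NumPy is assumed here purely for illustration; the paper's actual target is 16-bit fixed-point DSP code.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 4                                 # overdetermined: M equations, N unknowns
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Normal-equation route (4): Gamma x = beta with Gamma = A^H A, beta = A^H b
Gamma = A.conj().T @ A
beta = A.conj().T @ b
L = np.linalg.cholesky(Gamma)                # Gamma = L L^H
y = np.linalg.solve(L, beta)                 # forward substitution: L y = beta
x_chol = np.linalg.solve(L.conj().T, y)      # back substitution: L^H x = y

# QR route (5): x = R^{-1} Q^H b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.conj().T @ b)

assert np.allclose(x_chol, x_qr)             # both routes give the least-squares solution
```

In exact arithmetic the two routes coincide; the paper's point is that they degrade differently under 16-bit rounding.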
In the rest of the paper, we focus on the experimental comparison of robustness against the round-off errors introduced by the intermediate calculations in 16-bit fixed-point for the Cholesky decomposition. For this purpose, we evaluate the classical Cholesky method and the modified Gram-Schmidt process, taking the Gram-Schmidt QR decomposition as reference.

B. Classical Cholesky decomposition

This method needs to compute initially the Hermitian matrix Γ = A^H A before performing the classical Cholesky decomposition process as given below. Let us denote by γ_ij the elements of the matrix Γ, and by l_ij the elements of the matrix L, which is the Cholesky decomposition of Γ, with 1 ≤ i, j ≤ N. The elements l_ji are then obtained by the following algorithm:

for i = 1 to N
    l_ii = sqrt( γ_ii − Σ_{k=1..i−1} |l_ik|² )
    for j = i + 1 to N
        l_ji = ( γ_ji − Σ_{k=1..i−1} l_jk l_ik^H ) / l_ii

C. Modified Gram-Schmidt process

This second method does not require pre-computing the Hermitian matrix Γ, but uses the matrix A directly as input. The matrix Γ is implicitly obtained in a sequential manner during the QR decomposition process. This decomposition uses the modified Gram-Schmidt process, as given by the following algorithm.
We denote by r_ij the elements of the matrix R and by q_mj the elements of the matrix Q. The algorithm starts from Q = A:

for i = 1 to N
    r_ii = sqrt( Σ_{m=1..M} q_mi^H q_mi )
    for m = 1 to M
        q_mi = q_mi / r_ii
    for j = i + 1 to N
        r_ij = Σ_{m=1..M} q_mi^H q_mj
        for m = 1 to M
            q_mj = q_mj − q_mi r_ij

The number of mathematical operations required in floating point for the Cholesky and QR decompositions, associated with the system solving proposed in (4) and (5), is given in the following tables. The number of square roots is not included in the tables because it is the same for the different methods, and equal to N. It should be noted that the multiplications and additions are complex operations. All the divisions used in the different algorithms are divisions of a complex number by a real number.

Table 1. Cost in number of multiplication and addition operations

  Multiplications and additions      | Cholesky      | QR              | MGS-Cholesky
  A^H A                              | MN(N + 1)/2   | —               | —
  A^H b                              | MN            | —               | MN
  Classical Cholesky factorization   | (N³ − N)/6    | —               | —
  Modified Gram-Schmidt process      | —             | MN²             | MN²
  x = (L L^H)^(−1) A^H b             | N(N + 1)      | —               | N(N + 1)
  x = R^(−1) Q^H b                   | —             | MN + N(N + 1)/2 | —

Table 2. Cost in number of division operations

  Divisions                          | Cholesky      | QR              | MGS-Cholesky
  A^H A                              | —             | —               | —
  A^H b                              | —             | —               | —
  Classical Cholesky factorization   | (N² − N)/2    | —               | —
  Modified Gram-Schmidt process      | —             | MN + N          | MN + N
  x = (L L^H)^(−1) A^H b             | 2N            | —               | 2N
  x = R^(−1) Q^H b                   | —             | N               | —

III. DSP IMPLEMENTATION

This section is dedicated to the practical 16-bit fixed-point implementation of the system solving algorithms described in the previous section. In order to compare the robustness of these various algorithms, we give the exact code embedded in the DSP through flowcharts 1 to 3 shown below. The main functions are given first, followed by the final DSP implementation of each algorithm. The Q_x/y notation of fixed-point format is widely used in fixed-point arithmetic [8] and is briefly explained in [9]. We note that Q_x/y does not denote the classical representation Q_a.b (where a is the number of integer bits and b the number of fractional bits). The Q_x/y representation here means that x is the number of fractional bits and y the total number of available bits, including the sign bit.
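As a small numeric illustration of this Q_x/y convention, the sketch below quantizes a value to Q_15/16 and back. Python is assumed here for illustration only; the DSP code manipulates the raw 16-bit integers directly.

```python
import numpy as np

def to_q(v, x, y):
    """Quantize v to Q_x/y: a y-bit signed integer with x fractional bits (saturating)."""
    scale = 1 << x
    lo, hi = -(1 << (y - 1)), (1 << (y - 1)) - 1
    return int(np.clip(round(v * scale), lo, hi))

def from_q(q, x):
    """Back to a real value: divide by 2^x."""
    return q / (1 << x)

v = 0.654321
q = to_q(v, 15, 16)                          # Q_15/16: 15 fractional bits, 16 bits total
assert abs(from_q(q, 15) - v) <= 2 ** -16    # round-to-nearest error is at most half an LSB
```

Values at or above full scale saturate to the largest representable code, which is why intermediate results must be rescaled to a Q_Z format with extra headroom, as explained next.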
These two representations are related by y = 1 + a + b and x = b. The initial and final vectors and matrices are expressed in Q_15/16 format. Intermediate vectors and matrices are in Q_Z/16 format in order to prevent overflow in the multiplication process. In the following flowcharts, the real positive number Z is calculated by the following equation:

Z = 15 − round(log₂(M))  (7)

In the following, Figures 1 and 3 show respectively the classical Cholesky decomposition and the Modified Gram-Schmidt factorization processes as implemented on the DSP. Once these processes are performed, the resulting triangular matrices are carried to the final step of the system solving, where the solution is obtained as depicted in Figure 2.
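Equation (7) can be checked numerically: with M = 16 rows, Z = 15 − round(log₂ 16) = 11 fractional bits leave enough headroom for an M-term accumulation of Q_15/16 × Q_Z/16 products in a 32-bit Q_(15+Z)/32 accumulator. This is a sketch under the stated assumptions; the bound below is the worst-case raw magnitude.

```python
import math

M = 16                                # number of terms accumulated in one inner product
Z = 15 - round(math.log2(M))          # equation (7): Z = 11 for M = 16

# Worst case: M products of full-scale Q15 and QZ operands, kept in a
# Q_(15+Z)/32 accumulator (raw integer arithmetic).
worst_case = M * (2 ** 15 - 1) * (2 ** Z - 1)
assert worst_case < 2 ** 31           # fits a signed 32-bit accumulator: no overflow
```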
Figure 1. Flowchart (1) of the classical Cholesky decomposition

Figure 2. Flowchart (2) of the triangular system solving

Figure 3. Flowchart (3) of the modified Gram-Schmidt QR factorization

Finally, the three system solvers implemented on the DSP are given as follows:
A. System solving using the classical Cholesky method, as depicted in Fig. 4-a
B. System solving using the MGS-Cholesky method, as depicted in Fig. 4-b
C. System solving using the QR method, as depicted in Fig. 4-c
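Flowchart 2 is the standard pair of triangular solves. In floating point (NumPy assumed here for illustration), the two passes used by solvers (a) and (b) read:

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b with L lower triangular (first pass of the triangular solving)."""
    y = np.zeros(len(b), dtype=complex)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y with U upper triangular (second pass, with U = L^H)."""
    N = len(y)
    x = np.zeros(N, dtype=complex)
    for i in range(N - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# x = (L L^H)^{-1} beta as in (4): forward pass then backward pass
# x = back_sub(L.conj().T, forward_sub(L, beta))
```

On the DSP, each division by the real diagonal element is the main source of the Q_Z rescaling seen in the flowcharts.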
Figure 4. System solving algorithms: (a) Classical Cholesky: Flowchart 1 followed by two passes of Flowchart 2; (b) MGS-Cholesky: Flowchart 3 followed by two passes of Flowchart 2; (c) QR: Flowchart 3, computation of Q^H b, then Flowchart 2.

IV. NUMERICAL RESULTS

A. Theoretical analysis of round-off errors

In this section, we assume that the absolute values of all real and imaginary parts are less than 1, and that all calculations are performed without overflow. The additions and subtractions are made with t + Z bits (t = 15). This means that the following error model for the arithmetic operations used in the various algorithms given in the last section is accurate [10]. Let us denote by x, y and w complex variables whose real and imaginary components have absolute values less than 1, and by ε_x, ε_y, ε_w the corresponding complex errors.

1) Addition and subtraction

w + ε_w = (x + ε_x) + (y + ε_y),  |ε_x| ≤ 2^(−t), |ε_y| ≤ 2^(−t)  ⇒  |ε_w| ≤ 2^(1−t)

2) Multiplication

w + ε_w = (x + ε_x)(y + ε_y)  ⇒  |ε_w| ≤ 2^(1−t)

In the case of the scalar product of two vectors, the model is

w + ε_w = Σ_{k=1..K} (x_k + ε_xk)(y_k + ε_yk)  ⇒  |ε_w| ≤ K·2^(1−t)

3) Division

w + ε_w = (x + ε_x) / (y + ε_y)

By Taylor development of 1/(y + ε_y) around y, we have 1/(y + ε_y) ≈ 1/y − ε_y/y², hence

ε_w ≈ (y ε_x − x ε_y) / y²  ⇒  |ε_w| ≤ (|y| + |x|) 2^(−t) / y²

4) Square root

w + ε_w = sqrt(x + ε_x)

By Taylor development of sqrt(x + ε_x) around x, sqrt(x + ε_x) ≈ sqrt(x) + ε_x / (2 sqrt(x)), hence

|ε_w| ≤ 2^(−t) / (2 sqrt(x))

Now we apply this error model to the Cholesky decomposition and the QR factorization.

5) Cholesky decomposition

According to Flowchart 1, Γ is obtained by

Γ + E_Γ = (A + E_A)^H (A + E_A),  |ε_A(i,j)| ≤ 2^(−t)

where E_Γ and E_A denote the error matrices corresponding to Γ and A respectively. Therefore, by the scalar product model with K = M,

|ε_Γ(i,j)| ≤ M·2^(1−t),  1 ≤ i, j ≤ N

The diagonal elements of the matrix L are given by

l_ii + ε_lii = sqrt( γ_ii + ε_γii − Σ_{k=1..i−1} (l_ik + ε_lik)(l_ik + ε_lik)^H )

For i = 1, applying the square root model gives |ε_l11| ≤ M·2^(−t) / l_11. For i > 1,

|ε_lii| ≤ ( M·2^(−t) + i·2^(−t) + Σ_{k=1..i−1} |l_ik| |ε_lik| ) / l_ii

where ε_lik is calculated iteratively as shown below. The non-diagonal element l_ji of the matrix L is given by
l_ji + ε_lji = ( γ_ji + ε_γji − Σ_{k=1..i−1} (l_jk + ε_ljk)(l_ik + ε_lik)^H ) / ( l_ii + ε_lii )

so that

|ε_lji| ≤ ( |ε_γji| + Σ_{k=1..i−1} ( |l_ik| |ε_ljk| + |l_jk| |ε_lik| ) + |l_ji| |ε_lii| + i·2^(−t) ) / l_ii

A closed-form upper bound on the round-off error of the components of the matrix L is too difficult to derive in general. But if we assume a calculation with only two recursive iterations, we find that |ε_l22| and |ε_l21| are amplified with respect to |ε_l11| by factors growing with |l_21|/l_11 and 1/l_11: the error compounds with the recursion depth, i.e., with N.

6) QR factorization

According to Flowchart 3, the diagonal elements of the matrix R can be written as

r_ii + ε_rii = sqrt( Σ_{m=1..M} (q_mi + ε_qmi)^H (q_mi + ε_qmi) )

Then the round-off error ε_rii can be bounded by

|ε_rii| ≤ ( M·2^(−t) + Σ_{m=1..M} |q_mi| |ε_qmi| ) / r_ii

For the non-diagonal elements of the matrix R, the round-off error is given by

r_ij + ε_rij = Σ_{m=1..M} (q_mi + ε_qmi)^H (q_mj + ε_qmj)

so that

|ε_rij| ≤ Σ_{m=1..M} ( |q_mi| |ε_qmj| + |q_mj| |ε_qmi| ) + M·2^(1−t)

Finally, we calculate the round-off error of the matrix Q. The columns of Q are updated by

q_mj + ε'_qmj = ( q_mj + ε_qmj ) − ( q_mi + ε_qmi )( r_ij + ε_rij )

so that

|ε'_qmj| ≤ |ε_qmj| + |ε_qmi| |r_ij| + |q_mi| |ε_rij| + 2^(1−t)

7) Observation

From sections 5) and 6), we observe that the round-off error is greater in the case of the Cholesky decomposition than in the case of the QR factorization. This round-off error depends on the values of M and N, but the Cholesky decomposition is more sensitive to N than the QR factorization is to M.

B. Simulation results

The fixed-point implementation is done according to flowcharts 1, 2 and 3. The target is a TMS320C6474 with a 1 GHz clock rate. The experimental results depicted in Figures 5, 6 and 7 are obtained using a Monte Carlo statistical method, solving the system Ax = b with well-conditioned A^H A matrices (condition number of A^H A ≤ 30). Simulations are performed for a fixed row number M = 16 and several values of the column number N = 4, 6, 8, 10, 12, 14. Figure 5 depicts the relative norm of the rounding errors on the triangular matrix L, ‖L − L₀‖ / ‖L₀‖. The reference triangular matrix L₀ is obtained with full-precision IEEE floating-point format. Figure 6 depicts the square root of the mean square error, ‖Ax − b‖.
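This Monte Carlo experiment can be reproduced in miniature by rounding every intermediate result to t = 15 fractional bits. The sketch below is a simplified model of the fixed-point behaviour, not the DSP code; matrix sizes and the trial count are illustrative.

```python
import numpy as np

T = 15
def rnd(v):
    """Round real and imaginary parts to T fractional bits (Q15-style storage)."""
    s = float(1 << T)
    return (np.round(np.real(v) * s) + 1j * np.round(np.imag(v) * s)) / s

def chol_fx(G):
    """Classical Cholesky with rounding after every elementary operation."""
    N = G.shape[0]
    L = np.zeros((N, N), dtype=complex)
    for i in range(N):
        acc = G[i, i]
        for k in range(i):
            acc = rnd(acc - L[i, k] * np.conj(L[i, k]))
        L[i, i] = rnd(np.sqrt(max(acc.real, 2.0 ** -T)))  # guard: rounding must not go below 0
        for j in range(i + 1, N):
            acc = G[j, i]
            for k in range(i):
                acc = rnd(acc - L[j, k] * np.conj(L[i, k]))
            L[j, i] = rnd(acc / L[i, i])
    return L

def mgs_fx(A):
    """Modified Gram-Schmidt with the same per-operation rounding; returns R (L = R^H)."""
    Q = A.astype(complex)
    M, N = Q.shape
    R = np.zeros((N, N), dtype=complex)
    for i in range(N):
        acc = 0.0
        for m in range(M):
            acc = rnd(acc + Q[m, i] * np.conj(Q[m, i]))
        R[i, i] = rnd(np.sqrt(acc.real))
        Q[:, i] = rnd(Q[:, i] / R[i, i])
        for j in range(i + 1, N):
            acc = 0.0
            for m in range(M):
                acc = rnd(acc + np.conj(Q[m, i]) * Q[m, j])
            R[i, j] = acc
            Q[:, j] = rnd(Q[:, j] - Q[:, i] * R[i, j])
    return R

rng = np.random.default_rng(0)
M, N = 16, 8
e_chol, e_qr = [], []
for _ in range(50):
    A = rng.uniform(-0.2, 0.2, (M, N)) + 1j * rng.uniform(-0.2, 0.2, (M, N))
    G = A.conj().T @ A
    L0 = np.linalg.cholesky(G)               # full-precision reference L0
    e_chol.append(np.linalg.norm(chol_fx(rnd(G)) - L0) / np.linalg.norm(L0))
    e_qr.append(np.linalg.norm(mgs_fx(rnd(A)).conj().T - L0) / np.linalg.norm(L0))
```

Comparing the two error lists gives a rough analogue of Figure 5; exact numbers depend on the random draw, but both errors stay at the scale predicted by the 2^(−t) model.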
These experimental results confirm the theoretical ones and show that MGS-QR is less sensitive to round-off errors than classical Cholesky and MGS-Cholesky; MGS-Cholesky is in turn more robust than classical Cholesky. This classification, based on round-off error immunity, is inverted when applying the execution-time criterion: classical Cholesky requires about half the execution time of MGS-QR and around 1.5 times less than MGS-Cholesky, as depicted in Figure 7.

Figure 5. 2-norm of the error on the triangular matrix L, ‖L − L₀‖ / ‖L₀‖
Figure 6. Mean square error, ‖Ax − b‖

Figure 7. Execution time (µs)

V. CONCLUSION

In this paper, we have analyzed, theoretically and experimentally using Matlab simulations, system solving using classical Cholesky, MGS-QR and MGS-Cholesky. Both analyses are concordant. As expected from the theoretical analysis, the experimental analysis has confirmed that MGS-QR is more robust than classical Cholesky and MGS-Cholesky. However, this robustness has a cost in terms of execution time: classical Cholesky requires about half the execution time of MGS-QR and around 1.5 times less than MGS-Cholesky. With the introduction of MGS-Cholesky, we provide an alternative to MGS-QR when good robustness of the system solving against additive round-off error in the final solution is sought, at the price of a slight degradation in execution time.

REFERENCES

[1] TMS320C6474 Multicore Digital Signal Processor Data Manual (Rev. H), April 2010.
[2] Rabah Maoudj, Christophe Alexandre, Denis Popielski, Michel Terré, "DSP Implementation of Interference Cancellation Algorithm for a SIMO System," ISWCS 2012, Paris, 2012.
[3] Saqib Saleem, Qamar-ul-Islam, "Optimization of LMMSE and LSE Channel Estimation Algorithms based on CIR Samples and Channel Taps," IJCSI International Journal of Computer Science Issues, vol. 8, issue 1, January 2011.
[4] Å. Björck, "Numerics of Gram-Schmidt Orthogonalization," Linear Algebra and Its Applications, vol. 197–198, pp. 297–316, 1994.
[5] Chitranjan K. Singh, Sushma Honnavara Prasad, and Poras T. Balsara, "VLSI Architecture for Matrix Inversion using Modified Gram-Schmidt based QR Decomposition," 20th International Conference on VLSI Design, held jointly with the 6th International Conference on Embedded Systems, Jan. 2007.
[6] Charles L. Lawson, Richard J. Hanson, Solving Least Squares Problems, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1974.
[7] Chang Dong Lee, Jin Sam Kwak, and Jae Hong Lee, "Low-Rank Pilot-Symbol-Aided Channel Estimation for MIMO-OFDM Systems," IEEE VTC 2004, vol.
7, pp.
[8] Israel Koren, Computer Arithmetic Algorithms, Brookside Court Publishers, 1998.
[9] Ali Irturk, Shahnam Mirzaei and Ryan Kastner, "FPGA Implementation of Adaptive Weight Calculation Core Using QRD-RLS Algorithm," Technical Report CS 0-0 08, Mar.
[10] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press Inc., New York, 1965.
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationBasic Concepts in Linear Algebra
Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear
More informationMTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education
MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationOrthonormal Bases; Gram-Schmidt Process; QR-Decomposition
Orthonormal Bases; Gram-Schmidt Process; QR-Decomposition MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 205 Motivation When working with an inner product space, the most
More informationECE133A Applied Numerical Computing Additional Lecture Notes
Winter Quarter 2018 ECE133A Applied Numerical Computing Additional Lecture Notes L. Vandenberghe ii Contents 1 LU factorization 1 1.1 Definition................................. 1 1.2 Nonsingular sets
More informationG1110 & 852G1 Numerical Linear Algebra
The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the
More informationLinear Algebra. and
Instructions Please answer the six problems on your own paper. These are essay questions: you should write in complete sentences. 1. Are the two matrices 1 2 2 1 3 5 2 7 and 1 1 1 4 4 2 5 5 2 row equivalent?
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationInteger Least Squares: Sphere Decoding and the LLL Algorithm
Integer Least Squares: Sphere Decoding and the LLL Algorithm Sanzheng Qiao Department of Computing and Software McMaster University 28 Main St. West Hamilton Ontario L8S 4L7 Canada. ABSTRACT This paper
More informationNumerical Linear Algebra
Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost
More informationProgram Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects
Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen
More informationGaussian Elimination for Linear Systems
Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationA fast randomized algorithm for overdetermined linear least-squares regression
A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationarxiv: v2 [math.na] 27 Dec 2016
An algorithm for constructing Equiangular vectors Azim rivaz a,, Danial Sadeghi a a Department of Mathematics, Shahid Bahonar University of Kerman, Kerman 76169-14111, IRAN arxiv:1412.7552v2 [math.na]
More informationBasic Math Review for CS1340
Basic Math Review for CS1340 Dr. Mihail January 15, 2015 (Dr. Mihail) Math Review for CS1340 January 15, 2015 1 / 34 Sets Definition of a set A set is a collection of distinct objects, considered as an
More informationLinear Algebra Review
Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite
More informationApplications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices
Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.
More information11 Adjoint and Self-adjoint Matrices
11 Adjoint and Self-adjoint Matrices In this chapter, V denotes a finite dimensional inner product space (unless stated otherwise). 11.1 Theorem (Riesz representation) Let f V, i.e. f is a linear functional
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD
More informationDense LU factorization and its error analysis
Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationNext topics: Solving systems of linear equations
Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:
More informationMatrix Factorizations
1 Stat 540, Matrix Factorizations Matrix Factorizations LU Factorization Definition... Given a square k k matrix S, the LU factorization (or decomposition) represents S as the product of two triangular
More informationMAT 610: Numerical Linear Algebra. James V. Lambers
MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 7: More on Householder Reflectors; Least Squares Problems Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 15 Outline
More informationWe will discuss matrix diagonalization algorithms in Numerical Recipes in the context of the eigenvalue problem in quantum mechanics, m A n = λ m
Eigensystems We will discuss matrix diagonalization algorithms in umerical Recipes in the context of the eigenvalue problem in quantum mechanics, A n = λ n n, (1) where A is a real, symmetric Hamiltonian
More information8.3 Householder QR factorization
83 ouseholder QR factorization 23 83 ouseholder QR factorization A fundamental problem to avoid in numerical codes is the situation where one starts with large values and one ends up with small values
More informationMath 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008
Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures
More informationTotal 170. Name. Final Examination M340L-CS
1 10 2 10 3 15 4 5 5 10 6 10 7 20 8 10 9 20 10 25 11 10 12 10 13 15 Total 170 Final Examination Name M340L-CS 1. Use extra paper to determine your solutions then neatly transcribe them onto these sheets.
More informationAn Introduction to NeRDS (Nearly Rank Deficient Systems)
(Nearly Rank Deficient Systems) BY: PAUL W. HANSON Abstract I show that any full rank n n matrix may be decomposento the sum of a diagonal matrix and a matrix of rank m where m < n. This decomposition
More informationComputing least squares condition numbers on hybrid multicore/gpu systems
Computing least squares condition numbers on hybrid multicore/gpu systems M. Baboulin and J. Dongarra and R. Lacroix Abstract This paper presents an efficient computation for least squares conditioning
More informationNumerical Linear Algebra
Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to
More information1 Positive definiteness and semidefiniteness
Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +
More informationA fast randomized algorithm for the approximation of matrices preliminary report
DRAFT A fast randomized algorithm for the approximation of matrices preliminary report Yale Department of Computer Science Technical Report #1380 Franco Woolfe, Edo Liberty, Vladimir Rokhlin, and Mark
More information