FAST IMPLEMENTATION OF THE QR FACTORIZATION IN SUBSPACE IDENTIFICATION

N. Mastronardi, P. Van Dooren, S. Van Huffel
Università della Basilicata, Potenza, Italy; Katholieke Universiteit Leuven, Leuven, Belgium; Catholic University of Louvain, Louvain-la-Neuve, Belgium

Two recent approaches [, ] in subspace identification problems require the computation of the R factor of the QR factorization of a block Hankel matrix H which, in general, has a huge number of rows. Since the data are perturbed by noise, the matrix H is, in general, of full rank. It is well known that, from a theoretical point of view, the R factor of the QR factorization of H is equal to the Cholesky factor of the correlation matrix H^T H, up to multiplication by a sign matrix. In [] a fast Cholesky factorization of the correlation matrix, exploiting the block Hankel structure of H, is described. In this paper we consider a fast algorithm to compute the R factor based on the generalized Schur algorithm. The proposed algorithm also handles the rank deficient case.

Keywords: identification methods, least squares solutions, multivariable systems, singular values.

INTRODUCTION

Subspace based system identification techniques have been studied intensively in the last decades []. The major drawbacks of these direct state space identification techniques are the high computation and storage costs. Let u_k and y_k be the m-dimensional input vector and the l-dimensional output vector, respectively, of the basic linear time invariant state space model

    x_{k+1} = A x_k + B u_k + w_k,
    y_k     = C x_k + D u_k + v_k,

where x_k is the n-dimensional state vector at time k, {w_k} and {v_k} are state and output disturbances or noise sequences, and A, B, C and D are unknown real matrices of appropriate dimensions.
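The equivalence between the R factor and the Cholesky factor stated in the abstract can be checked numerically. The sketch below (plain NumPy, with a hypothetical random data matrix) compares the two factors, which agree up to the sign of each row:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((200, 6))      # tall full-rank data matrix (many more rows than columns)

R_qr = np.linalg.qr(H, mode="r")       # R factor of the QR factorization of H
R_ch = np.linalg.cholesky(H.T @ H).T   # upper triangular Cholesky factor of H^T H

# The two factors agree up to a diagonal matrix of signs (a sign matrix).
assert np.allclose(np.abs(R_qr), np.abs(R_ch))
```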
For non sequential data processing, the N × (m+l)s matrix H = [U_{sN}^T  Y_{sN}^T], with N ≫ (m+l)s, is constructed, where U_{sN} and Y_{sN} are block Hankel matrices defined in terms of the input and output data, respectively:

    U_{sN} = [ u_1    u_2      ...  u_N
               u_2    u_3      ...  u_{N+1}
               ...
               u_s    u_{s+1}  ...  u_{N+s-1} ],

    Y_{sN} = [ y_1    y_2      ...  y_N
               y_2    y_3      ...  y_{N+1}
               ...
               y_s    y_{s+1}  ...  y_{N+s-1} ].

Then the R factor of a QR factorization H = QR is used for data compression. In [] a fast Cholesky factorization of the correlation matrix, exploiting the block Hankel structure of H, is described. In this paper we consider a fast algorithm to compute the R factor of the QR factorization of H based on the generalized Schur algorithm, exploiting its displacement structure.

The paper is organized as follows. First the generalized Schur algorithm to compute the Cholesky factor of a symmetric positive definite matrix is described. Then the generalized Schur algorithm applied to the matrix H is considered. The rank deficient case is also discussed. Finally, some numerical experiments are reported.

THE GENERALIZED SCHUR ALGORITHM FOR SYMMETRIC POSITIVE DEFINITE MATRICES

In this section the key properties of the generalized Schur algorithm to compute the Cholesky factor of a symmetric positive definite (s.p.d.) matrix, which will be used in the next section, are summarized. More details can be found in [7, 8]. Let Â and Ẑ be an s.p.d. matrix and a shift matrix of order
N̂, respectively. Define the displacement of Â with respect to Ẑ as

    ∇Â = Â − Ẑ Â Ẑ^T.

Then rank(∇Â) is called the displacement rank of Â. If the displacement rank of Â is M̂, then

    ∇Â = Σ_{i=1}^{p̂} ĝ_i^{(+)} ĝ_i^{(+)T} − Σ_{i=1}^{q̂} ĝ_i^{(−)} ĝ_i^{(−)T},    M̂ = p̂ + q̂.    (1)

The vectors ĝ_i^{(+)}, ĝ_i^{(−)} ∈ R^{N̂} are called the positive and the negative generators of Â, respectively. Let

    Ĝ = [ ĝ_1^{(+)}, ..., ĝ_{p̂}^{(+)}, ĝ_1^{(−)}, ..., ĝ_{q̂}^{(−)} ]^T.

The generator matrix Ĝ is said to be in proper form if the entries in its first non zero column are equal to zero, except for a non-zero entry in the first row. Denote by I_r the identity matrix of order r and let J_{p̂q̂} = I_{p̂} ⊕ (−I_{q̂}), where the symbol ⊕ denotes the direct sum, i.e.,

    A ⊕ B = [ A  0
              0  B ].

Then (1) can be written in the following way: ∇Â = Ĝ^T J_{p̂q̂} Ĝ. A matrix Q̂ is said to be J_{p̂q̂}-orthogonal if Q̂^T J_{p̂q̂} Q̂ = J_{p̂q̂}. A generator matrix is not unique: if Q̂ is a J_{p̂q̂}-orthogonal matrix, then Q̂ Ĝ is a generator matrix, too. The following theorem holds [].

Theorem 1. Let Â be an s.p.d. matrix and let Ĝ^{(1)} = [ĝ_1^{(+)}, ..., ĝ_{p̂}^{(+)}, ĝ_1^{(−)}, ..., ĝ_{q̂}^{(−)}]^T be a generator matrix of Â in proper form with respect to the shift matrix Ẑ. Then ĝ_1^{(+)T} is the first row of R̂, the Cholesky factor of Â. Furthermore, a generator matrix for the Schur complement Â − ĝ_1^{(+)} ĝ_1^{(+)T} is given by

    Ĝ^{(2)} = [ Ẑ ĝ_1^{(+)}, ĝ_2^{(+)}, ..., ĝ_{p̂}^{(+)}, ĝ_1^{(−)}, ..., ĝ_{q̂}^{(−)} ]^T.

Let F be a full rank rectangular matrix. Then F^T F is an s.p.d. matrix, with Cholesky decomposition F^T F = R̃^T R̃, R̃ upper triangular. If R is the R factor of the QR decomposition of F, it is well known that R = Λ R̃, where Λ = diag(±1, ..., ±1). Hence the problem of computing the R factor of the QR factorization of F and that of computing the Cholesky factor of F^T F are equivalent.

We observe that the entries of the first row (and of the first column) of the Schur complement Â − ĝ_1^{(+)} ĝ_1^{(+)T} are equal to 0, and so are the entries of the first column of Ĝ^{(2)}. To compute the second row of the Cholesky factor, we need to find a J_{p̂q̂}-orthogonal matrix Q̂ such that G̃^{(2)} = Q̂ Ĝ^{(2)} is in proper form.
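For a concrete instance of the displacement definition above: a symmetric positive definite Toeplitz matrix has displacement rank 2 with respect to the lower shift matrix, with one positive and one negative generator. A small numerical check (the matrix below is a hypothetical example, not data from the paper):

```python
import numpy as np

n = 6
c = 0.5 ** np.arange(n)                      # first column of a diagonally dominant Toeplitz matrix
A = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])  # s.p.d. Toeplitz
Z = np.diag(np.ones(n - 1), -1)              # lower shift matrix, Z e_i = e_{i+1}

D = A - Z @ A @ Z.T                          # displacement of A
assert np.linalg.matrix_rank(D) == 2         # displacement rank M = p + q = 2

# one positive and one negative generator, as in the definition of the displacement
g_pos = c / np.sqrt(c[0])
g_neg = g_pos.copy(); g_neg[0] = 0.0
assert np.allclose(D, np.outer(g_pos, g_pos) - np.outer(g_neg, g_neg))
```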
Then the first row of G̃^{(2)} becomes the second row of R̂. If we set G̃^{(2)} = [g̃_1^{(+)}, ..., g̃_{p̂}^{(+)}, g̃_1^{(−)}, ..., g̃_{q̂}^{(−)}]^T, then a generator matrix for the next Schur complement is given by

    Ĝ^{(3)} = [ Ẑ g̃_1^{(+)}, g̃_2^{(+)}, ..., g̃_{p̂}^{(+)}, g̃_1^{(−)}, ..., g̃_{q̂}^{(−)} ]^T.

By successively repeating the described procedure, we obtain the triangular factor R̂.

Generalized Schur algorithm
Input: Ĝ^{(1)} = [ĝ_1^{(+)}, ..., ĝ_{p̂}^{(+)}, ĝ_1^{(−)}, ..., ĝ_{q̂}^{(−)}]^T
Output: R̂
for i = 1 : N̂
    Find a J_{p̂q̂}-orthogonal matrix Q̂_i such that G̃^{(i)} = Q̂_i Ĝ^{(i)} is in proper form.
    i-th row of R̂ := first row of G̃^{(i)}
    Ĝ^{(i+1)} := [Ẑ g̃_1^{(+)}, g̃_2^{(+)}, ..., g̃_{p̂}^{(+)}, g̃_1^{(−)}, ..., g̃_{q̂}^{(−)}]^T
end

The computational complexity is O(M̂ N̂²). Hence the generalized Schur algorithm can be used efficiently when M̂ ≪ N̂, i.e., when Â is a Toeplitz-like matrix. The next section describes the particular choice of Q̂_i, Ĝ^{(i)} and Ẑ for solving the system identification problem.

REDUCTION OF THE GENERATOR MATRIX TO PROPER FORM

At each iteration of the generalized Schur algorithm a J_{p̂q̂}-orthogonal matrix Q_i is chosen in order to reduce the generator matrix to proper form. This can be accomplished in a few different ways (see for instance [10, 8, 13]). However, for the sake of stability, we consider an implementation that allows us to compute the R factor also when the matrix H is rank deficient. First of all, we observe that the product of two J_{p̂q̂}-orthogonal matrices is a J_{p̂q̂}-orthogonal matrix. Furthermore, if Q_1 and Q_2 are orthogonal matrices of order p̂ and q̂ respectively, then the orthogonal matrix Q = Q_1 ⊕ Q_2 is a J_{p̂q̂}-orthogonal matrix, too.
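The generalized Schur loop above can be sketched for the simplest case, a scalar s.p.d. Toeplitz matrix with one positive and one negative generator (p̂ = q̂ = 1), using a hyperbolic rotation in mixed (factored) form for the proper-form reduction. This is an illustrative sketch under those assumptions, not the paper's block implementation:

```python
import numpy as np

def schur_cholesky_toeplitz(c):
    """Cholesky factor R (A = R^T R) of the s.p.d. Toeplitz matrix with first
    column c, via the generalized Schur algorithm with one positive and one
    negative generator."""
    n = len(c)
    g_pos = np.asarray(c, dtype=float) / np.sqrt(c[0])   # positive generator
    g_neg = g_pos.copy(); g_neg[0] = 0.0                 # negative generator
    R = np.zeros((n, n))
    for i in range(n):
        # proper form: annihilate g_neg[i] against g_pos[i]; |rho| < 1 for s.p.d. A
        rho = g_neg[i] / g_pos[i]
        t = np.sqrt(1.0 - rho * rho)
        new_pos = (g_pos - rho * g_neg) / t              # mixed-form hyperbolic rotation
        g_neg = -rho * new_pos + t * g_neg               # now g_neg[i] == 0
        g_pos = new_pos
        R[i] = g_pos                                     # i-th row of the Cholesky factor
        g_pos = np.concatenate(([0.0], g_pos[:-1]))      # multiply by the shift matrix Z
    return R

c = 0.5 ** np.arange(6)
R = schur_cholesky_toeplitz(c)
A = np.array([[c[abs(i - j)] for j in range(6)] for i in range(6)])
assert np.allclose(R.T @ R, A)
```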
If |ρ| < 1, the matrix product (called a modified hyperbolic rotation [])

    [  1   0 ] [ 1/√(1−ρ²)     0     ] [ 1  −ρ ]
    [ −ρ   1 ] [     0      √(1−ρ²) ] [ 0   1 ]

is J-orthogonal with respect to the matrix

    J = [ 1   0
          0  −1 ].

Let Ĝ^{(i)} be the generator matrix at the beginning of the i-th iteration of the generalized Schur algorithm. The J_{p̂q̂}-orthogonal matrix Q_i such that G̃^{(i)} = Q_i Ĝ^{(i)} is in proper form can be constructed in the following way. Let

    Ĝ^{(i)} = [ Ĝ^{(i,+)}
                Ĝ^{(i,−)} ],

where Ĝ^{(i,+)} = [ĝ_1^{(+)}, ..., ĝ_{p̂}^{(+)}]^T and Ĝ^{(i,−)} = [ĝ_1^{(−)}, ..., ĝ_{q̂}^{(−)}]^T. Let Q_{i,1} and Q_{i,2} be two orthogonal matrices of order p̂ and q̂, respectively, such that G̃^{(i,+)} = Q_{i,1} Ĝ^{(i,+)} and G̃^{(i,−)} = Q_{i,2} Ĝ^{(i,−)} are both in proper form, and let Q̂_i = Q_{i,1} ⊕ Q_{i,2}. Q_{i,1} and Q_{i,2} can be either Householder matrices or products of Givens rotations, chosen in order to annihilate all the entries of the i-th column, except the first one, of the matrices Ĝ^{(i,+)} and Ĝ^{(i,−)}, respectively. Let v ∈ R² be the vector whose components are the nonzero entries in the i-th column of G̃^{(i,+)} and G̃^{(i,−)}, respectively. Consider the J-orthogonal matrix Θ̂ such that the second component of the vector ṽ = Θ̂ v is 0, and let

    Θ̃_i = [ Θ̂_{11}    0        Θ̂_{12}    0
             0        I_{p̂−1}  0        0
             Θ̂_{21}    0        Θ̂_{22}    0
             0        0        0        I_{q̂−1} ],

where I_{p̂−1} and I_{q̂−1} are identity matrices of order p̂−1 and q̂−1, respectively, and the 0's in Θ̃_i are null matrices of appropriate dimensions. It is easy to see that Θ̃_i is a J_{p̂q̂}-orthogonal matrix, and that Θ̃_i Q̂_i Ĝ^{(i)} is in proper form. Hence Q_i = Θ̃_i Q̂_i is the sought matrix.

FAST COMPUTATION OF THE R FACTOR OF THE QR FACTORIZATION OF H

In this section the computation of the R factor of the QR factorization of the matrix H defined above, exploiting the displacement structure of H^T H, is described. We first show that the displacement rank of H^T H is at most 2(m+l+1), with the same number of positive and negative generators. Hence the generalized Schur algorithm to compute the R factor requires O(s²(m+l)³) flops. Furthermore the computation of the generators requires O(s(m+l)²N) flops.
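The modified hyperbolic rotation used in the reduction above can be verified numerically: the factored form equals the plain hyperbolic rotation and satisfies the J-orthogonality condition. A minimal sketch with an arbitrary |ρ| < 1:

```python
import numpy as np

rho = 0.6
t = np.sqrt(1.0 - rho * rho)

# factored (mixed) form of the modified hyperbolic rotation
L = np.array([[1.0, 0.0], [-rho, 1.0]])
D = np.diag([1.0 / t, t])
U = np.array([[1.0, -rho], [0.0, 1.0]])
Hrot = L @ D @ U

# it equals the plain hyperbolic rotation (1/sqrt(1-rho^2)) [[1, -rho], [-rho, 1]]
assert np.allclose(Hrot, np.array([[1.0, -rho], [-rho, 1.0]]) / t)

# and it is J-orthogonal with respect to J = diag(1, -1)
J = np.diag([1.0, -1.0])
assert np.allclose(Hrot.T @ J @ Hrot, J)
```

The factored form is preferred in practice because applying the three factors in sequence avoids the cancellation that can occur when the dense hyperbolic rotation is applied directly.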
Let

    W = H^T H = [ U_{sN} U_{sN}^T   U_{sN} Y_{sN}^T
                  Y_{sN} U_{sN}^T   Y_{sN} Y_{sN}^T ].

Let M_s be the shift matrix of order s. Denote Z_{sm} = M_s ⊗ I_m, Z_{sl} = M_s ⊗ I_l, and Z = Z_{sm} ⊕ Z_{sl}. Partition U_{sN} and Y_{sN} in the following way:

    U_{sN} = [ û_{11}  û_{12}       Y_{sN} = [ ŷ_{11}  ŷ_{12}
               û_{21}  û_{22} ],               ŷ_{21}  ŷ_{22} ],

with û_{11} ∈ R^{m×(N−1)}, û_{12} ∈ R^{m}, û_{21} ∈ R^{(s−1)m×(N−1)}, û_{22} ∈ R^{(s−1)m}, and ŷ_{11} ∈ R^{l×(N−1)}, ŷ_{12} ∈ R^{l}, ŷ_{21} ∈ R^{(s−1)l×(N−1)}, ŷ_{22} ∈ R^{(s−1)l}. Then

    ∇W = W − Z W Z^T = Ŵ + g_{m+l+1}^{(+)} g_{m+l+1}^{(+)T} − g_{m+l+1}^{(−)} g_{m+l+1}^{(−)T},

where

    g_{m+l+1}^{(+)} = [ û_{12}^T  û_{22}^T  ŷ_{12}^T  ŷ_{22}^T ]^T,    g_{m+l+1}^{(−)} = Z H(1,:)^T,

and the symmetric matrix Ŵ, whose nonzero entries lie only in the first block row and first block column of each of the four blocks of W, has rank at most 2(m+l). Hence the displacement rank of W is at most 2(m+l+1), i.e.,

    ∇W = W − Z W Z^T = Σ_{i=1}^{m+l+1} g_i^{(+)} g_i^{(+)T} − Σ_{i=1}^{m+l+1} g_i^{(−)} g_i^{(−)T}.

The computation of the remaining generators requires the computation of Ŵ, that is, of the first m rows of W and of the rows sm+i, i = 1 : l. This can be accomplished in O(s(m+l)²N) flops. We observe that the computation of the whole matrix W would require O(s²(m+l)²N) flops. Furthermore we point out that the matrix H involved in the computation of Ŵ is not explicitly stored: the computation is carried out considering only the vectors u and y, taking into account the block Hankel structure of H.
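The displacement rank bound claimed above can be checked numerically for small random data. In the sketch below the sizes are hypothetical, the block Hankel matrices are built directly from the input and output sequences, and the shift matrix Z = Z_{sm} ⊕ Z_{sl} is assembled as a block direct sum:

```python
import numpy as np

def block_hankel(data, s):
    # s block rows of a block Hankel matrix from the samples data[0], data[1], ...
    data = np.asarray(data)
    T, m = data.shape
    N = T - s + 1
    return np.vstack([data[i:i + N].T for i in range(s)])

def block_shift(s, m):
    # shifts down by one block of size m (the matrix M_s (x) I_m)
    Z = np.zeros((s * m, s * m))
    Z[m:, :-m] = np.eye((s - 1) * m)
    return Z

rng = np.random.default_rng(2)
m, l, s, N = 2, 1, 4, 40
u = rng.standard_normal((N + s - 1, m))           # input samples
y = rng.standard_normal((N + s - 1, l))           # output samples
H = np.hstack([block_hankel(u, s).T, block_hankel(y, s).T])   # N x (m+l)s
W = H.T @ H

# Z = Z_{sm} (+) Z_{sl}: block shifts acting on the U- and Y-parts of W
Z = np.block([[block_shift(s, m), np.zeros((s * m, s * l))],
              [np.zeros((s * l, s * m)), block_shift(s, l)]])

# the displacement of W has rank at most 2(m + l + 1)
assert np.linalg.matrix_rank(W - Z @ W @ Z.T) <= 2 * (m + l + 1)
```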
COMPUTATIONAL BURDEN

To compute the generators of H^T H, the product

    H^T H(:, [1 : m, sm+1 : sm+l])    (2)

is needed. This can be accomplished in O(s(m+l)²N) flops. All the techniques described in the appendix require O(s(m+l)) flops to compute the generators from this product. If Householder matrices are chosen as the orthogonal matrices involved at each iteration, the total number of flops of the generalized Schur algorithm is O(s²(m+l)³). As N is significantly larger than s, m and l, the bulk of the algorithm is the computation of (2).

THE GENERALIZED SCHUR ALGORITHM FOR RANK DEFICIENT MATRICES

In this section we describe how the generalized Schur algorithm can be modified to compute the R factor in case H is a rank deficient matrix. More details can be found in [5]. We show that the computational complexity is reduced to O(rs(m+l)²) in this case, where r is the rank of H. Without loss of generality, we suppose that the first k columns of H, k ∈ {1, ..., s(m+l)}, are linearly independent and that the (k+1)-st column depends linearly on the first k columns, i.e.,

    H = [ H_1   H_1 w   H_2 ],

where H_1 ∈ R^{N×k}, w ∈ R^k, H_2 ∈ R^{N×((m+l)s−k−1)}. Then

    H^T H = [ H_1^T H_1       H_1^T H_1 w       H_1^T H_2
              w^T H_1^T H_1   w^T H_1^T H_1 w   w^T H_1^T H_2
              H_2^T H_1       H_2^T H_1 w       H_2^T H_2 ].

The Schur complement of H^T H with respect to H_1^T H_1 is

    (H^T H)^{(k)} = [H_1 w, H_2]^T (I − H_1 (H_1^T H_1)^{−1} H_1^T) [H_1 w, H_2].

We observe that the first row (and column) of (H^T H)^{(k)} is a null vector. Let G̃^{(k,+)} and G̃^{(k,−)} be the matrices of positive and negative generators of (H^T H)^{(k)}, respectively, in proper form. Then the first row of G̃^{(k,+)} is equal to the first row of G̃^{(k,−)}. Hence these rows can be dropped from the corresponding matrices, reducing the number of generators by 2, and the (k+1)-st row of the R factor is a null vector. From a numerical point of view, the described procedure works accurately when it is applied to matrices H such that the gap between the large singular values and the negligible ones is sufficiently large.

Table 1: Numerical results for Example 1 (backward errors b.e. of R_M and R_S, detected numerical rank n.r., flop counts).
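The rank-deficient situation described above (a column that depends linearly on the preceding ones, producing a zero pivot) can be observed with a plain dense QR factorization as well. A sketch with hypothetical toy data, in which the fifth column is an exact linear combination of the first four:

```python
import numpy as np

rng = np.random.default_rng(3)
H1 = rng.standard_normal((50, 4))
w = rng.standard_normal(4)
# column 5 depends linearly on columns 1-4; three more independent columns follow
H = np.column_stack([H1, H1 @ w, rng.standard_normal((50, 3))])

R = np.linalg.qr(H, mode="r")
# the (k+1)-st diagonal entry of R is numerically zero, flagging the dependence
assert abs(R[4, 4]) < 1e-10 * abs(R[0, 0])
```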
A loss of accuracy in the computed factor R occurs when the distribution of the small singular values of H shows a uniform and slow decrease. This behaviour of the algorithm is illustrated by the following examples.

Example 1. Consider the matrix H = [U | Y] with Y = U, where the first row and the last column of U are [ ... ] and [ ... ], respectively. The rank of the matrix H is 5.

Figure 1: Distribution of the singular values, in logarithmic scale, of the matrix considered in Example 1.

In Table 1 the results of the computation of the R factor of the matrix H by means of the standard QR factorization and of the generalized Schur algorithm are shown. We denote by R_M and R_S the R factor of H computed by the Matlab function triu(qr(H)) and by the generalized Schur algorithm, respectively; by b.e. R_* the backward error ‖H^T H − R_*^T R_*‖ / ‖H^T H‖; and by n.r. the rank of H detected by the generalized Schur algorithm. In this case, the R factor is accurately computed by the generalized Schur algorithm, because of the big difference between the significant singular values of H and the negligible ones (see Figure 1).

Example 2. This is the fourth application considered in the next section. In Figure 2 we can see that the distribution of the small singular values of the involved matrix H decreases only slowly. We point out that the correlation matrix H^T H computed by Matlab is not numerically
s.p.d., because of the near rank deficiency of H. So, in this case, the fast Cholesky factorization exploiting the block Hankel structure of H, described in [], can not be used. In Table 2 we can see that, although the generalized Schur algorithm is very fast with respect to the standard QR algorithm, the achieved accuracy is not satisfactory.

Table 2: Numerical results for Example 2 (backward errors b.e. of R_M and R_S, detected numerical rank n.r., flop counts).

Table 3: Summary description of the applications: number of inputs m, number of outputs l, number of block rows s, total number of data samples t, and number of rows N of H, for the data sets Glass tubes, Labo dryer, Glass oven, Mechanical flutter, Flexible robot arm, Evaporator, CD player arm, Ball and beam, and Wall temperature.

Figure 2: Distribution of the singular values, in logarithmic scale, of the matrix considered in Example 2.

NUMERICAL RESULTS

In this section the results of the computation of the R factor by means of the generalized Schur algorithm are summarized. The data sets considered are publicly available on the DAISY web site.

Table 4: Comparative results (flop counts) for the computation of the R factor by the standard QR factorization (R_M) and by the generalized Schur algorithm (R_S), for the applications of Table 3.

For all the considered applications, the generators have been computed using the second technique described in the appendix. At each iteration of the generalized Schur algorithm, two products of Givens rotations and a modified hyperbolic rotation are performed in order to reduce the generator matrix to proper form. All the numerical results have been obtained on a Sun Ultra 5 workstation using Matlab 5.
Table 3 gives a summary description of the applications considered in our comparison, indicating the number of inputs m, the number of outputs l, the number of block rows s, the total number of data samples used t, and the number of rows N of H. In Tables 4 and 5 some results for the computation of the R factor of the QR factorization of H are presented. Rel. residual denotes

    ‖ |R_M| − |R_S| ‖ / ‖ |R_M| ‖.

Table 5: Comparative results for the computation of the R factor: backward error of R_S and relative residual, for the applications of Table 3 (no result is reported for the Mechanical flutter application).
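The two error measures used in Tables 4 and 5 can be computed as in the sketch below (plain NumPy; here both factors come from standard dense factorizations, the Cholesky-based factor standing in for the fast algorithm, and the data matrix is hypothetical):

```python
import numpy as np

def backward_error(H, R):
    """b.e. = || H^T H - R^T R ||_2 / || H^T H ||_2."""
    W = H.T @ H
    return np.linalg.norm(W - R.T @ R, 2) / np.linalg.norm(W, 2)

def rel_residual(RM, RS):
    """Rel. residual = || |R_M| - |R_S| ||_2 / || |R_M| ||_2."""
    return np.linalg.norm(np.abs(RM) - np.abs(RS), 2) / np.linalg.norm(np.abs(RM), 2)

rng = np.random.default_rng(4)
H = rng.standard_normal((300, 8))
R_M = np.linalg.qr(H, mode="r")          # QR-based factor (cf. triu(qr(H)) in Matlab)
R_S = np.linalg.cholesky(H.T @ H).T      # Cholesky-based stand-in for the fast factor

assert backward_error(H, R_M) < 1e-12
assert backward_error(H, R_S) < 1e-12
assert rel_residual(R_M, R_S) < 1e-10
```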
The results in Tables 4 and 5 are comparable with those described in [], where the R factor is obtained from the Cholesky factorization of the correlation matrix H^T H, exploiting the block Hankel structure of H. The analysis of the fourth application is described in Example 2 of the previous section. We can not compare, in this case, the elements of the matrices computed by the two algorithms, since the generalized Schur algorithm detects a rank deficiency in the matrix H.

CONCLUSIONS

In this paper the generalized Schur algorithm to compute the R factor of the QR factorization of the block Hankel matrices arising in some subspace identification problems has been described. It is shown that the generalized Schur algorithm is significantly faster than the classical QR factorization. A rank revealing implementation of the generalized Schur algorithm for rank deficient matrices is also discussed. Algorithmic details and numerical results have been presented.

ACKNOWLEDGEMENTS

This paper presents research results of the Belgian Programme on Interuniversity Poles of Attraction (IUAP P-0 and P-), initiated by the Belgian State, Prime Minister's Office - Federal Office for Scientific, Technical and Cultural Affairs; of the European Community TMR Programme, Networks, project CHRX-C9-00; of the Brite Euram Programme, Thematic Network BRR-C9-500 Niconet; of the Concerted Research Action (GOA) projects of the Flemish Government MEFISTO (Mathematical Engineering for Information and Communication Systems Technology); of the IDO/99/0 project (K.U.Leuven) "Predictive computer models for medical classification problems using patient data and expert knowledge"; of the FWO "Krediet aan navorsers" G.0.98; and of the FWO project G.

References

[1] A.W. Bojanczyk, R.P. Brent, P. Van Dooren and F.R. De Hoog, A note on downdating the Cholesky factorization, SIAM J. Sci. Stat. Comput., 8 (1987).
[2] Y.M. Cho, G. Xu and T.
Kailath, Fast recursive identification of state space models via exploitation of displacement structure, Automatica, Vol. 30, No. 1 (1994).
[3] J. Chun, T. Kailath and H. Lev-Ari, Fast parallel algorithms for QR and triangular factorization, SIAM J. Sci. and Stat. Comp., Vol. 8 (1987).
[4] B. De Moor, P. Van Overschee and W. Favoreel, Numerical algorithms for subspace state-space system identification: an overview, in Applied and Computational Control, Signals and Circuits, Vol. 1, Birkhäuser, Boston, 1999.
[5] K.A. Gallivan, S. Thirumalai, P. Van Dooren and V. Vermaut, High performance algorithms for Toeplitz and block Toeplitz matrices, Linear Algebra Appl., 241-243 (1996), pp. 343-388.
[6] T. Kailath, S. Kung and M. Morf, Displacement ranks of matrices and linear equations, J. Math. Anal. Appl., 68 (1979).
[7] T. Kailath and A.H. Sayed, Displacement structure: theory and applications, SIAM Review, 37 (1995).
[8] T. Kailath, Displacement structure and array algorithms, in Fast Reliable Algorithms for Matrices with Structure, T. Kailath and A.H. Sayed, Eds., SIAM, Philadelphia, 1999.
[9] M. Moonen, B. De Moor, L. Vandenberghe and J. Vandewalle, On- and off-line identification of linear state space models, Int. J. Control, 49(1) (1989), pp. 219-232.
[10] C.M. Rader and A.O. Steinhardt, Hyperbolic Householder transforms, SIAM J. Matrix Anal. Appl., 9 (1988).
[11] V. Sima, Subspace based algorithms for multivariable system identification, Studies in Informatics and Control, Vol. 5.
[12] V. Sima, Cholesky or QR factorization for data compression in subspace based identification?, Proceedings Second NICONET Workshop, Paris-Versailles, Dec. 1999.
[13] M. Stewart and P. Van Dooren, Stability issues in the factorization of structured matrices, SIAM J. Matrix Anal. Appl., Vol. 18 (1997).
[14] P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems: Theory, Implementation, Applications, Kluwer Academic Publishers, Dordrecht, 1996.
[15] P. Van Overschee and B.
De Moor, N4SID: Two subspace algorithms for the identification of combined deterministic-stochastic systems, Automatica, 30(1) (1994).
[16] M. Verhaegen, Subspace model identification. Part 3: Analysis of the ordinary output error state space model identification algorithm, Int. J. Control, 58(3) (1993).
More informationPermutation transformations of tensors with an application
DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn
More informationStructured Matrices and Solving Multivariate Polynomial Equations
Structured Matrices and Solving Multivariate Polynomial Equations Philippe Dreesen Kim Batselier Bart De Moor KU Leuven ESAT/SCD, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium. Structured Matrix Days,
More informationOrthonormal Transformations
Orthonormal Transformations Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 25, 2010 Applications of transformation Q : R m R m, with Q T Q = I 1.
More informationOld and new algorithms for Toeplitz systems
Old and new algorithms for Toeplitz systems Richard P. Brent Computer Sciences Laboratory Australian National University Canberra, Australia ABSTRACT Toeplitz linear systems and Toeplitz least squares
More informationOn The Belonging Of A Perturbed Vector To A Subspace From A Numerical View Point
Applied Mathematics E-Notes, 7(007), 65-70 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View
More informationApplied Numerical Linear Algebra. Lecture 8
Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ
More informationMultirate MVC Design and Control Performance Assessment: a Data Driven Subspace Approach*
Multirate MVC Design and Control Performance Assessment: a Data Driven Subspace Approach* Xiaorui Wang Department of Electrical and Computer Engineering University of Alberta Edmonton, AB, Canada T6G 2V4
More informationCanonical lossless state-space systems: staircase forms and the Schur algorithm
Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit
More informationOrthonormal Transformations and Least Squares
Orthonormal Transformations and Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 30, 2009 Applications of Qx with Q T Q = I 1. solving
More informationThe antitriangular factorisation of saddle point matrices
The antitriangular factorisation of saddle point matrices J. Pestana and A. J. Wathen August 29, 2013 Abstract Mastronardi and Van Dooren [this journal, 34 (2013) pp. 173 196] recently introduced the block
More informationELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES
Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse
More informationAlgorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix
Proceedings of the 19th International Symposium on Mathematical heory of Networks and Systems MNS 1 5 9 July, 1 Budapest, Hungary Algorithm to Compute Minimal Nullspace Basis of a Polynomial Matrix S.
More informationKrylov Subspace Methods to Calculate PageRank
Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector
More informationThe QR decomposition and the singular value decomposition in the symmetrized max-plus algebra
K.U.Leuven Department of Electrical Engineering (ESAT) SISTA Technical report 96-70 The QR decomposition and the singular value decomposition in the symmetrized max-plus algebra B. De Schutter and B. De
More informationIncomplete Cholesky preconditioners that exploit the low-rank property
anapov@ulb.ac.be ; http://homepages.ulb.ac.be/ anapov/ 1 / 35 Incomplete Cholesky preconditioners that exploit the low-rank property (theory and practice) Artem Napov Service de Métrologie Nucléaire, Université
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationKrylov Subspace Methods that Are Based on the Minimization of the Residual
Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean
More informationUsing Hankel structured low-rank approximation for sparse signal recovery
Using Hankel structured low-rank approximation for sparse signal recovery Ivan Markovsky 1 and Pier Luigi Dragotti 2 Department ELEC Vrije Universiteit Brussel (VUB) Pleinlaan 2, Building K, B-1050 Brussels,
More informationThe Bock iteration for the ODE estimation problem
he Bock iteration for the ODE estimation problem M.R.Osborne Contents 1 Introduction 2 2 Introducing the Bock iteration 5 3 he ODE estimation problem 7 4 he Bock iteration for the smoothing problem 12
More informationLinear Least squares
Linear Least squares Method of least squares Measurement errors are inevitable in observational and experimental sciences Errors can be smoothed out by averaging over more measurements than necessary to
More informationThe Linear Dynamic Complementarity Problem is a special case of the Extended Linear Complementarity Problem B. De Schutter 1 B. De Moor ESAT-SISTA, K.
Katholieke Universiteit Leuven Departement Elektrotechniek ESAT-SISTA/TR 9-1 The Linear Dynamic Complementarity Problem is a special case of the Extended Linear Complementarity Problem 1 Bart De Schutter
More informationMain matrix factorizations
Main matrix factorizations A P L U P permutation matrix, L lower triangular, U upper triangular Key use: Solve square linear system Ax b. A Q R Q unitary, R upper triangular Key use: Solve square or overdetrmined
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationOn max-algebraic models for transportation networks
K.U.Leuven Department of Electrical Engineering (ESAT) SISTA Technical report 98-00 On max-algebraic models for transportation networks R. de Vries, B. De Schutter, and B. De Moor If you want to cite this
More informationOrthogonal Transformations
Orthogonal Transformations Tom Lyche University of Oslo Norway Orthogonal Transformations p. 1/3 Applications of Qx with Q T Q = I 1. solving least squares problems (today) 2. solving linear equations
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-92, 2004. Copyright 2004,. ISSN 1068-9613. ETNA STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Ý Abstract. For any symmetric
More informationDepartement Elektrotechniek ESAT-SISTA/TR About the choice of State Space Basis in Combined. Deterministic-Stochastic Subspace Identication 1
Katholieke Universiteit Leuven Departement Elektrotechniek ESAT-SISTA/TR 994-24 About the choice of State Space asis in Combined Deterministic-Stochastic Subspace Identication Peter Van Overschee and art
More informationModel reduction via tangential interpolation
Model reduction via tangential interpolation K. Gallivan, A. Vandendorpe and P. Van Dooren May 14, 2002 1 Introduction Although most of the theory presented in this paper holds for both continuous-time
More informationSome Notes on Least Squares, QR-factorization, SVD and Fitting
Department of Engineering Sciences and Mathematics January 3, 013 Ove Edlund C000M - Numerical Analysis Some Notes on Least Squares, QR-factorization, SVD and Fitting Contents 1 Introduction 1 The Least
More informationKey words. Polynomial matrices, Toeplitz matrices, numerical linear algebra, computer-aided control system design.
BLOCK TOEPLITZ ALGORITHMS FOR POLYNOMIAL MATRIX NULL-SPACE COMPUTATION JUAN CARLOS ZÚÑIGA AND DIDIER HENRION Abstract In this paper we present new algorithms to compute the minimal basis of the nullspace
More informationNumerically Stable Cointegration Analysis
Numerically Stable Cointegration Analysis Jurgen A. Doornik Nuffield College, University of Oxford, Oxford OX1 1NF R.J. O Brien Department of Economics University of Southampton November 3, 2001 Abstract
More informationA subspace fitting method based on finite elements for identification and localization of damages in mechanical systems
Author manuscript, published in "11th International Conference on Computational Structures Technology 212, Croatia (212)" A subspace fitting method based on finite elements for identification and localization
More informationLAPACK-Style Codes for Pivoted Cholesky and QR Updating
LAPACK-Style Codes for Pivoted Cholesky and QR Updating Sven Hammarling 1, Nicholas J. Higham 2, and Craig Lucas 3 1 NAG Ltd.,Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, England, sven@nag.co.uk,
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten
More informationThe total least squares problem in AX B. A new classification with the relationship to the classical works
Eidgenössische Technische Hochschule Zürich Ecole polytechnique fédérale de Zurich Politecnico federale di Zurigo Swiss Federal Institute of Technology Zurich The total least squares problem in AX B. A
More informationSubspace Identification
Chapter 10 Subspace Identification Given observations of m 1 input signals, and p 1 signals resulting from those when fed into a dynamical system under study, can we estimate the internal dynamics regulating
More informationKatholieke Universiteit Leuven Department of Computer Science
Fast computation of determinants of Bézout matrices and application to curve implicitization Steven Delvaux, Ana Marco José-Javier Martínez, Marc Van Barel Report TW 434, July 2005 Katholieke Universiteit
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationSparse BLAS-3 Reduction
Sparse BLAS-3 Reduction to Banded Upper Triangular (Spar3Bnd) Gary Howell, HPC/OIT NC State University gary howell@ncsu.edu Sparse BLAS-3 Reduction p.1/27 Acknowledgements James Demmel, Gene Golub, Franc
More informationSubspace Identification With Guaranteed Stability Using Constrained Optimization
IEEE TANSACTIONS ON AUTOMATIC CONTOL, VOL. 48, NO. 7, JULY 2003 259 Subspace Identification With Guaranteed Stability Using Constrained Optimization Seth L. Lacy and Dennis S. Bernstein Abstract In system
More informationLAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006.
LAPACK-Style Codes for Pivoted Cholesky and QR Updating Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig 2007 MIMS EPrint: 2006.385 Manchester Institute for Mathematical Sciences School of Mathematics
More informationEfficient and Accurate Rectangular Window Subspace Tracking
Efficient and Accurate Rectangular Window Subspace Tracking Timothy M. Toolan and Donald W. Tufts Dept. of Electrical Engineering, University of Rhode Island, Kingston, RI 88 USA toolan@ele.uri.edu, tufts@ele.uri.edu
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationMATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.
MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:
More informationExtending a two-variable mean to a multi-variable mean
Extending a two-variable mean to a multi-variable mean Estelle M. Massart, Julien M. Hendrickx, P.-A. Absil Universit e catholique de Louvain - ICTEAM Institute B-348 Louvain-la-Neuve - Belgium Abstract.
More informationx 104
Departement Elektrotechniek ESAT-SISTA/TR 98-3 Identication of the circulant modulated Poisson process: a time domain approach Katrien De Cock, Tony Van Gestel and Bart De Moor 2 April 998 Submitted for
More informationWe first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix
BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity
More informationLeast Squares. Tom Lyche. October 26, Centre of Mathematics for Applications, Department of Informatics, University of Oslo
Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 26, 2010 Linear system Linear system Ax = b, A C m,n, b C m, x C n. under-determined
More informationH 2 -optimal model reduction of MIMO systems
H 2 -optimal model reduction of MIMO systems P. Van Dooren K. A. Gallivan P.-A. Absil Abstract We consider the problem of approximating a p m rational transfer function Hs of high degree by another p m
More informationOn the Perturbation of the Q-factor of the QR Factorization
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. ; :1 6 [Version: /9/18 v1.] On the Perturbation of the Q-factor of the QR Factorization X.-W. Chang McGill University, School of Comptuer
More informationSINGLE DEGREE OF FREEDOM SYSTEM IDENTIFICATION USING LEAST SQUARES, SUBSPACE AND ERA-OKID IDENTIFICATION ALGORITHMS
3 th World Conference on Earthquake Engineering Vancouver, B.C., Canada August -6, 24 Paper No. 278 SINGLE DEGREE OF FREEDOM SYSTEM IDENTIFICATION USING LEAST SQUARES, SUBSPACE AND ERA-OKID IDENTIFICATION
More information