Structured weighted low rank approximation 1


Departement Elektrotechniek ESAT-SISTA/TR

Structured weighted low rank approximation 1

Mieke Schuermans, Philippe Lemmerling and Sabine Van Huffel 2

January 2003

Accepted for publication in Numerical Linear Algebra with Applications

1 This report is available by anonymous ftp from ftp.esat.kuleuven.ac.be in the directory pub/sista/mschuerm/reports/03-04.ps.gz
2 K.U.Leuven, Dept. of Electrical Engineering (ESAT), Research group SISTA, Kasteelpark Arenberg 10, 3001 Leuven-Heverlee, Belgium. E-mail: mieke.schuermans@esat.kuleuven.ac.be.

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS
Numer. Linear Algebra Appl. 2003; 10:1-10

Structured weighted low rank approximation

M. Schuermans 1, P. Lemmerling 1 and S. Van Huffel 1

1 K.U.Leuven, ESAT-SCD, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium

Received January 10, 2003; revised October 10, 2003. Copyright 2003 John Wiley & Sons, Ltd.

SUMMARY

This paper extends the Weighted Low Rank Approximation (WLRA) approach towards linearly structured matrices. In the case of Hankel matrices an equivalent unconstrained optimization problem is derived and an algorithm for solving it is proposed. The correctness of the latter algorithm is verified on a benchmark problem. Finally, the statistical accuracy and numerical efficiency of the proposed algorithm are compared with those of STLNB, a previously proposed algorithm for solving Hankel WLRA problems.

KEY WORDS: rank reduction; structured matrices; weighted norm

1. Introduction

Multivariate linear problems play an important role in many applications, including MIMO system identification and deconvolution problems with multiple outputs. These linear models can be described as $AX \approx B$, or equivalently as $[A\ B][X^T\ -I]^T \approx 0$, with $A \in \mathbb{R}^{n \times (m-d)}$ and $B \in \mathbb{R}^{n \times d}$ the observed variables and $X \in \mathbb{R}^{(m-d) \times d}$ the parameter matrix to be estimated. Since often all variables (i.e., those in $A$ and $B$) are perturbed by noise, the Total Least Squares (TLS) approach is used instead of the Least Squares (LS) approach. The TLS approach solves the following problem:

$$\min_{\Delta A,\,\Delta B,\,X} \|[\Delta A\ \ \Delta B]\|_F^2 \quad \text{such that} \quad (A + \Delta A)X = B + \Delta B, \qquad (1)$$

where $\|\cdot\|_F$ stands for the Frobenius norm, $S \equiv [A\ B]$ is the observed data matrix and $\Delta S \equiv [\Delta A\ \ \Delta B]$ is the correction applied to $S$. The standard way of solving the TLS problem is by means of the Singular Value Decomposition (SVD) [1, 2]. The latter approach yields maximum likelihood (ML) estimates of $X$ when the noise part of $S$ has rows that are independently identically distributed (i.i.d.) with common zero mean vector and common covariance matrix that is an (unknown) multiple of the identity matrix $I_m$. (The noise part corresponds to $S_n \in \mathbb{R}^{n \times m}$ in $S = S_o + S_n$, where $S_o \in \mathbb{R}^{n \times m}$ contains the unobserved noiseless data and $S \in \mathbb{R}^{n \times m}$ contains the observed noisy data.)
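For readers who want to experiment with the unstructured case, the following minimal NumPy sketch computes the SVD-based TLS estimate. It is our illustration, not code from the paper, and it assumes the generic case where the lower-right $d \times d$ block of $V$ is invertible.

```python
import numpy as np

def tls(A, B):
    """SVD-based Total Least Squares estimate of X in (A+dA)X = B+dB.

    A is n x (m-d), B is n x d. Minimal sketch of the classical
    construction [1, 2]; assumes the generic case where the lower-right
    d x d block of V is invertible.
    """
    p = A.shape[1]                       # p = m - d
    S = np.hstack([A, B])                # observed data matrix S = [A B]
    _, _, Vt = np.linalg.svd(S)
    V = Vt.T
    V12 = V[:p, p:]                      # top-right block of V
    V22 = V[p:, p:]                      # bottom-right d x d block
    return -V12 @ np.linalg.inv(V22)     # X_tls = -V12 V22^{-1}

# toy usage: X recovered from noisy data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
X_true = rng.standard_normal((3, 1))
B = A @ X_true + 0.01 * rng.standard_normal((20, 1))
print(tls(A, B))                         # close to X_true
```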

However, in many applications the observed data matrix $S$ is structured, and since the SVD does not preserve structure, $[A + \Delta A\ \ B + \Delta B]$ will typically be unstructured. In [3] it was shown that this loss of structure implies, under noise conditions that occur naturally in many problems, a loss of statistical accuracy of the estimated parameter matrix. These specific noise conditions can best be explained by representing the structured matrix $S$ by its minimal vector representation $s$ (e.g., a Hankel matrix $S \in \mathbb{R}^{n \times m}$ is converted into a minimal representation $s \in \mathbb{R}^{n+m-1}$). The noise part of $s$ has to obey a Gaussian i.i.d. distribution in order to generate the noise condition mentioned earlier. To obtain a maximum likelihood parameter matrix in these frequently occurring structured cases, one has to solve a so-called Structured Weighted Low Rank Approximation (SWLRA) problem:

$$\min_{R \in \Omega,\ \mathrm{rank}(R) \le r < k} \|S - R\|_V^2, \qquad (2)$$

with $R, S \in \mathbb{R}^{n \times m}$, $\Omega$ the set of all matrices having the same structure as $S$, and $\|M\|_V^2 \equiv \mathrm{vec}(M)^T V \mathrm{vec}(M)$, where $\mathrm{vec}(M)$ stands for the vectorized form of $M$, i.e., a vector constructed by stacking the consecutive columns of $M$ in one vector. So, for a given structured data matrix $S$ of a certain rank $k$, we are looking for the nearest (in weighted norm $\|\cdot\|_V$ sense) lower rank matrix $R$ with the same structure as $S$. We define $\bar r \equiv m - r$.

For specific choices of $V$, $r$ and $\Omega$, (2) corresponds to well-known problems:

- If $V$ is the identity matrix, denoted as $V = I$, and $\Omega = \mathbb{R}^{n \times m}$, problem (2) reduces to the well-studied TLS problem (1) with $R = [A + \Delta A\ \ B + \Delta B]$ and $d = \bar r$, i.e., we enforce $\bar r$ independent linear relations among the columns of $R$ in order to reduce $R$ to rank $r$.
- When $\Omega$ is a set of matrices sharing a particular linear structure (for example, Hankel matrices) and $V$ is a diagonal matrix, then (2) corresponds to the so-called Structured TLS (STLS) problem. STLS problems with $\bar r = 1$ have been studied extensively [3, 4, 5, 6]. For $\bar r > 1$ only a few algorithms are described [7, 8], most of which yield statistically suboptimal results, such as the algorithm that finds lower rank structured matrices by alternating iterations between lower rank matrices and structured matrices [7].
- The case in which $\Omega = \mathbb{R}^{n \times m}$ corresponds to the previously introduced Unstructured Weighted Low Rank Approximation (UWLRA) problem [9].

In this paper we consider (2) with $\Omega$ the set of a particular type of linearly structured matrices (for example, the set of Hankel matrices). From a statistical point of view it only makes sense to treat identical elements in an identical way; e.g., in a Hankel matrix all the elements on one antidiagonal should be treated the same way. Therefore (2) is reformulated as:

$$\min_{\mathrm{vec}_2(R),\ \mathrm{rank}(R) \le r < k} \|S - R\|_W^2, \qquad (3)$$

with $\|M\|_W^2 \equiv \mathrm{vec}_2(M)^T W \mathrm{vec}_2(M)$, where $\mathrm{vec}_2(M)$ is a minimal vector representation of the linearly structured matrix $M$. E.g., when $\Omega$ represents the set of Hankel matrices, $\mathrm{vec}_2(M)$ is a vector containing the different elements of the different antidiagonals. Note that due to the one-to-one relation between $M$ and $\mathrm{vec}_2(M)$, the condition $R \in \Omega$ no longer appears in (3). Problem (3) is the so-called Structured Weighted Low Rank Approximation (SWLRA) problem.
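The minimal vector representation is easy to make concrete for Hankel matrices. The sketch below is our illustration (the helper names are ours): it builds an $n \times m$ Hankel matrix from its $\mathrm{vec}_2$ representation and evaluates the weighted norm used in (3).

```python
import numpy as np

def hankel_from_vec2(s, n, m):
    """n x m Hankel matrix whose (i, j) entry is s[i + j], i.e. whose
    antidiagonals hold the minimal representation s (length n + m - 1)."""
    assert len(s) == n + m - 1
    return np.array([[s[i + j] for j in range(m)] for i in range(n)])

def vec2(H):
    """Minimal vector representation of a Hankel matrix:
    first column followed by the last row without its first entry."""
    return np.concatenate([H[:, 0], H[-1, 1:]])

def weighted_norm_sq(v, W):
    """|M|_W^2 = vec2(M)^T W vec2(M), as used in (3)."""
    return v @ W @ v

s = np.arange(1.0, 8.0)                  # q = n + m - 1 = 7 for n = m = 4
H = hankel_from_vec2(s, 4, 4)
W = np.eye(7)                            # unweighted case
print(weighted_norm_sq(vec2(H), W))      # = sum(s**2) = 140.0
```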

The major contribution of this paper is the extension of the concept and the algorithm presented in [9] to linearly structured matrices (i.e., the extension of UWLRA to SWLRA problems). Problem (3) can also be interpreted as an extension of the STLS problems described in [4, 3, 5, 6] by allowing $\bar r$ to be larger than 1.

The paper is structured as follows. In Section 2, an unconstrained optimization problem equivalent to problem (3) is derived by means of the method of Lagrange multipliers. Section 3 introduces an algorithm for solving this unconstrained optimization problem. Finally, Section 4 concludes with some numerical experiments illustrating the statistical optimality of the SWLRA algorithm, and the SWLRA algorithm is compared with the STLNB algorithm described in [8].

2. Derivation

The first step in the derivation is to reformulate (3) into an equivalent double-minimization problem:

$$\min_{N \in \mathbb{R}^{m \times \bar r},\ N^T N = I} \left( \min_{R \in \mathbb{R}^{n \times m},\ RN = 0} \|S - R\|_W^2 \right). \qquad (4)$$

The second step consists of finding a closed form expression $f(N)$ for the solution of the inner minimization $\min_{R \in \mathbb{R}^{n \times m},\, RN = 0} \|S - R\|_W^2$. The latter is obtained as follows. Applying the technique of Lagrange multipliers to the inner minimization of (4) yields the Lagrangian

$$\psi(L, R) = \mathrm{vec}_2(S - R)^T W \mathrm{vec}_2(S - R) - \mathrm{tr}(L^T(RN)), \qquad (5)$$

where $\mathrm{tr}(A)$ stands for the trace of matrix $A$ and $L$ is the matrix of Lagrange multipliers. Using the equalities

$$\mathrm{vec}(A)^T \mathrm{vec}(B) = \mathrm{tr}(A^T B), \qquad \mathrm{vec}(ABC) = (C^T \otimes A)\,\mathrm{vec}(B),$$

(5) becomes

$$\psi(L, R) = \mathrm{vec}_2(S - R)^T W \mathrm{vec}_2(S - R) - \mathrm{vec}(L)^T (N^T \otimes I)\,\mathrm{vec}(R). \qquad (6)$$

Note that when $R$ is a linearly structured matrix it is straightforward to write down a relation between $\mathrm{vec}(R)$ and its minimal vector representation $\mathrm{vec}_2(R)$:

$$\mathrm{vec}(R) = H \mathrm{vec}_2(R), \qquad (7)$$

where $H \in \mathbb{R}^{nm \times q}$ and $q$ is the number of different elements in $R$. E.g., in the case of a Hankel matrix, $q = n + m - 1$ and $\mathrm{vec}_2(R)$ can be constructed from the first column and last row of $R$. Substituting (7) in (6), the following expression is obtained for the Lagrangian:

$$\psi(L, R) = \mathrm{vec}_2(S - R)^T W \mathrm{vec}_2(S - R) - \mathrm{vec}(L)^T (N^T \otimes I_n) H \mathrm{vec}_2(R). \qquad (8)$$

Setting the derivatives of $\psi$ w.r.t. $\mathrm{vec}_2(R)$ and $L$ equal to 0 yields the following set of equations:

$$\begin{bmatrix} 2W & H^T(N \otimes I_n) \\ (N^T \otimes I_n)H & 0 \end{bmatrix} \begin{bmatrix} \mathrm{vec}_2(R) \\ \mathrm{vec}(L) \end{bmatrix} = \begin{bmatrix} 2W \mathrm{vec}_2(S) \\ 0 \end{bmatrix}. \qquad (9)$$
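As a numerical check of the derivation, one can form system (9) explicitly for a small Hankel matrix and solve it directly. The sketch below is our illustration: it builds the structure matrix $H$ of (7) for the Hankel case and solves (9) for $\mathrm{vec}_2(R)$, assuming the generic case in which the KKT matrix is nonsingular.

```python
import numpy as np

def hankel_duplication(n, m):
    """Structure matrix H of (7) for n x m Hankel matrices:
    vec(R) = H vec2(R), with column-stacking vec."""
    q = n + m - 1
    H = np.zeros((n * m, q))
    for j in range(m):
        for i in range(n):
            H[j * n + i, i + j] = 1.0    # entry (i, j) holds s_{i+j}
    return H

def inner_minimizer(S, N, W):
    """Solve system (9) directly for the Hankel matrix R closest to S
    in the |.|_W sense with RN = 0; returns vec2(R). Assumes the
    generic case where the KKT matrix is nonsingular."""
    n, m = S.shape
    q = n + m - 1
    H = hankel_duplication(n, m)
    B = H.T @ np.kron(N, np.eye(n))              # H^T (N kron I_n)
    s2 = np.concatenate([S[:, 0], S[-1, 1:]])    # vec2(S)
    k = B.shape[1]
    KKT = np.block([[2 * W, B], [B.T, np.zeros((k, k))]])
    rhs = np.concatenate([2 * W @ s2, np.zeros(k)])
    return np.linalg.solve(KKT, rhs)[:q]

# usage: the returned Hankel matrix satisfies the constraint R N = 0
rng = np.random.default_rng(0)
n, m = 4, 3
s = rng.standard_normal(n + m - 1)
S = np.array([[s[i + j] for j in range(m)] for i in range(n)])
N = rng.standard_normal((m, 1)); N /= np.linalg.norm(N)
r2 = inner_minimizer(S, N, np.eye(n + m - 1))
R = np.array([[r2[i + j] for j in range(m)] for i in range(n)])
print(np.linalg.norm(R @ N))                     # ~ 1e-15
```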

Using the fact that

$$\begin{bmatrix} A & B \\ B^T & 0 \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} - A^{-1}B(B^T A^{-1} B)^{-1} B^T A^{-1} & A^{-1}B(B^T A^{-1} B)^{-1} \\ (B^T A^{-1} B)^{-1} B^T A^{-1} & -(B^T A^{-1} B)^{-1} \end{bmatrix},$$

and setting $H_2 \equiv (N \otimes I_n)^T H$, it follows from (9) that

$$\begin{aligned} \mathrm{vec}_2(R) &= (I_q - W^{-1} H_2^T (H_2 W^{-1} H_2^T)^{-1} H_2)\,\mathrm{vec}_2(S), \\ \mathrm{vec}_2(S - R) &= W^{-1} H_2^T (H_2 W^{-1} H_2^T)^{-1} H_2\,\mathrm{vec}_2(S). \end{aligned} \qquad (10)$$

As a result, the double-minimization problem (4) can be written as the following optimization problem:

$$\min_{N \in \mathbb{R}^{m \times \bar r},\ N^T N = I} \mathrm{vec}_2(S)^T H_2^T (H_2 W^{-1} H_2^T)^{-1} H_2\,\mathrm{vec}_2(S). \qquad (11)$$

3. Algorithm

From this point on we will focus on a specific $\Omega$ since, as will become clear further on, it is not possible to derive a general algorithm that can deal with any type of linearly structured matrix. The Hankel structure will be investigated further on, since this is one of the most frequently occurring structures in signal processing applications. As a result, Toeplitz matrices are also dealt with, since they can be converted into Hankel matrices by a simple permutation of the rows, whereas the solution of the corresponding SWLRA problem does not change.

The straightforward approach for solving (4) would be to apply a nonlinear least squares (NLLS) solver to (11). For $\bar r = 1$ this works fine, but when $\bar r > 1$ this approach breaks down by yielding the trivial solution $R = 0$. This can easily be understood by considering (4) with $W$ equal to the identity matrix. As can be seen from (10), the inner minimization of (4) corresponds to an orthogonal projection of $\mathrm{vec}_2(S)$ on the orthogonal complement of the column space of $H_2^T \in \mathbb{R}^{(n+m-1) \times n\bar r}$. Therefore it is clear that for $\bar r = 1$ the orthogonal complement of the column space of $H_2^T$ will never be empty (the dimension of the column space of $H_2^T$ equals $\mathrm{rank}(H_2^T) = n$), but for $\bar r > 1$ the latter orthogonal complement will typically be empty since $n \ge m$. As a result the projection will yield $R = 0$. The problem lies in an overparameterization of the matrix $H_2^T = H^T(N \otimes I_n)$.

To find a solution to the latter problem, we first state the inner minimization in (4) in words: given a matrix $N$ whose column space is the nullspace of a non-trivial Hankel matrix, find the Hankel matrix $R$ closest to $S$ such that the nullspace of $R$ is spanned by the column space of $N$. A parameterization of the nullspace of a Hankel matrix can be found by means of the so-called Vandermonde decomposition of a Hankel matrix. Given a Hankel matrix $R \in \mathbb{R}^{n \times m}$ of rank $r$, it is straightforward to see that it can be parameterized as follows:

$$R = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_r \\ \vdots & \vdots & & \vdots \\ z_1^{n-1} & z_2^{n-1} & \cdots & z_r^{n-1} \end{bmatrix} \begin{bmatrix} c_1 & & \\ & \ddots & \\ & & c_r \end{bmatrix} \begin{bmatrix} 1 & z_1 & \cdots & z_1^{m-1} \\ 1 & z_2 & \cdots & z_2^{m-1} \\ \vdots & \vdots & & \vdots \\ 1 & z_r & \cdots & z_r^{m-1} \end{bmatrix}, \qquad (12)$$

with $z_i$, $i = 1,\dots,r$, the so-called complex signal poles and $c_i$, $i = 1,\dots,r$, the so-called complex amplitudes.
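The Vandermonde parameterization (12) is easy to realize numerically. The following sketch is ours: it constructs a rank-$r$ Hankel matrix from given poles $z_i$ and amplitudes $c_i$, using real poles for simplicity.

```python
import numpy as np

def hankel_from_poles(z, c, n, m):
    """Rank-r Hankel matrix via the Vandermonde decomposition (12):
    R[i, j] = sum_k c_k z_k**(i + j), so R depends on i + j only."""
    z, c = np.asarray(z, float), np.asarray(c, float)
    Vn = np.vander(z, N=n, increasing=True).T   # n x r, column k = powers of z_k
    Vm = np.vander(z, N=m, increasing=True)     # r x m, row k = powers of z_k
    return Vn @ np.diag(c) @ Vm

R = hankel_from_poles([0.4, 0.3], [1.0, 2.0], 4, 3)
print(np.linalg.matrix_rank(R))                 # 2: two poles, rank-2 Hankel
```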

By defining a vector $p = [p_1\ p_2\ \dots\ p_{r+1}]^T$ such that the following polynomial in $\lambda$

$$p_1 + p_2\lambda + \dots + p_{r+1}\lambda^r$$

has roots $z_i$, $i = 1,\dots,r$, it is easy to see that the nullspace of $R$ can be parameterized by $p_i$, $i = 1,\dots,r+1$, in the following matrix, whose rows span the nullspace of $R$:

$$\begin{bmatrix} p_1 & p_2 & \cdots & p_{r+1} & & \\ & p_1 & p_2 & \cdots & p_{r+1} & \\ & & \ddots & & & \ddots \\ & & & p_1 & p_2 & \cdots & p_{r+1} \end{bmatrix} \in \mathbb{R}^{(m-r) \times m}.$$

We could now use the previous matrix to parameterize $N$ in (11), but a different approach is taken here. First note that from (12) it follows that the vector $\mathrm{vec}_2(R)$ (with $R$ being a rank $r$ Hankel matrix) can be written as the following linear combination:

$$\mathrm{vec}_2(R) = c_1 b_1 + c_2 b_2 + \dots + c_r b_r,$$

with vectors $b_i \equiv [1\ z_i\ \dots\ z_i^{n+m-2}]^T$ and scalar coefficients $c_i$ for $i = 1,\dots,r$. As a result, it is clear that the rows of the following matrix

$$H_3 \equiv \begin{bmatrix} p_1 & p_2 & \cdots & p_{r+1} & & \\ & p_1 & p_2 & \cdots & p_{r+1} & \\ & & \ddots & & & \ddots \\ & & & p_1 & p_2 & \cdots & p_{r+1} \end{bmatrix} \in \mathbb{R}^{(n+m-1-r) \times (n+m-1)}$$

span the space perpendicular to the $b_i$, $i = 1,\dots,r$. Therefore, bearing in mind the goal of the inner minimization of (4), the rank deficient Hankel matrix $R$ closest to $S$ can be found by means of the following orthogonal projection:

$$\mathrm{vec}_2(R) = (I - W^{-1} H_3^T (H_3 W^{-1} H_3^T)^{-1} H_3)\,\mathrm{vec}_2(S), \qquad (13)$$

and (11) thus becomes

$$\min_{p} \mathrm{vec}_2(S)^T H_3^T (H_3 W^{-1} H_3^T)^{-1} H_3\,\mathrm{vec}_2(S), \qquad (14)$$

with $p \equiv [p_1\ p_2\ \dots\ p_{r+1}]^T$. Remark that the new expressions are similar to the previous ones, although we now work with a vector $p$ instead of a matrix $N$. The algorithm for solving the SWLRA problem for Hankel matrices can therefore be summarized as follows:

Algorithm SWLRA
Input: Hankel matrix $S \in \mathbb{R}^{n \times m}$ with first column $[s_1\ s_2\ \dots\ s_n]^T$ and last row $[s_n\ \dots\ s_{n+m-1}]$, rank $r$ and weighting matrix $W$.
Output: Hankel matrix $R$ of rank $r$, such that $R$ is as close as possible to $S$ in the $\|\cdot\|_W$ sense.
Begin
  Step 1: Construct the matrix $\tilde S$ by rearranging the elements of $S$ such that $\tilde S \in \mathbb{R}^{(n+m-r-1) \times (r+1)}$ is a Hankel matrix with first column $[s_1\ \dots\ s_{n+m-r-1}]^T$ and last row $[s_{n+m-r-1}\ \dots\ s_{n+m-1}]$.
  Step 2: Compute the SVD of $\tilde S$: $U\Sigma V^T$.
  Step 3: Take the starting value $p_0$ equal to the $(r+1)$-th right singular vector of $\tilde S$.
  Step 4: Minimize the cost function in (14).
  Step 5: Compute $R$ using (13).
End
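The following sketch is ours, written in Python rather than the authors' Matlab, with scipy.optimize.least_squares playing the role of lsqnonlin. It implements Algorithm SWLRA for a positive definite $W$: Steps 1-3 build the starting value from the SVD of the rearranged matrix, Step 4 minimizes (14), and Step 5 applies the projection (13).

```python
import numpy as np
from scipy.linalg import hankel, solve_triangular
from scipy.optimize import least_squares

def banded_poly_matrix(p, q):
    """H3 in R^{(q-r) x q}: row i is [0 .. 0, p_1, ..., p_{r+1}, 0 .. 0].
    Its rows annihilate [1, z, ..., z^{q-1}]^T whenever p(z) = 0."""
    r = len(p) - 1
    H3 = np.zeros((q - r, q))
    for i in range(q - r):
        H3[i, i:i + r + 1] = p
    return H3

def swlra_hankel(s, n, m, r, W=None):
    """Sketch of Algorithm SWLRA: s is vec2(S) of the n x m Hankel
    matrix S (length q = n + m - 1); returns vec2(R) with rank(R) <= r.
    W must be positive definite (identity if omitted)."""
    q = n + m - 1
    W = np.eye(q) if W is None else W
    Winv = np.linalg.inv(W)

    def residual(p):
        # f with f^T f equal to the cost (14); the factor of
        # H3 W^{-1} H3^T is taken from a Cholesky decomposition,
        # equivalent to the QR of W^{-1/2} H3^T used in the text
        H3 = banded_poly_matrix(p, q)
        L = np.linalg.cholesky(H3 @ Winv @ H3.T)
        return solve_triangular(L, H3 @ s, lower=True)

    # Steps 1-3: starting value from the SVD of the rearranged matrix
    S_r = hankel(s[:q - r], s[q - r - 1:])       # (q-r) x (r+1) Hankel
    _, _, Vt = np.linalg.svd(S_r)
    p0 = Vt[-1]                                  # (r+1)-th right singular vector

    # Step 4: NLLS minimization of (14); p is determined up to scale
    p_opt = least_squares(residual, p0).x

    # Step 5: orthogonal projection (13) onto {v : H3 v = 0}
    H3 = banded_poly_matrix(p_opt, q)
    M = H3 @ Winv @ H3.T
    return s - Winv @ H3.T @ np.linalg.solve(M, H3 @ s)

# usage: noisy rank-2 Hankel data (poles 0.9 and 0.5), 6 x 5 matrix
rng = np.random.default_rng(0)
k = np.arange(10.0)
s = 0.9 ** k + 2 * 0.5 ** k + 0.01 * rng.standard_normal(10)
s_hat = swlra_hankel(s, 6, 5, r=2)
R = np.array([[s_hat[i + j] for j in range(5)] for i in range(6)])
print(np.linalg.matrix_rank(R, tol=1e-8))        # 2
```

Because $H_3(p_{\mathrm{opt}})\,\mathrm{vec}_2(R) = 0$ holds exactly by construction, the recovered sequence satisfies an order-$r$ linear recurrence and the resulting Hankel matrix has rank at most $r$ up to machine precision.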

In Step 4 a standard NLLS solver (Matlab's lsqnonlin) is used. In order to use an NLLS routine, the cost function has to be cast in the form $f^T f$, and in order to do so the Cholesky factor of $H_3^T (H_3 W^{-1} H_3^T)^{-1} H_3$ has to be computed. This can be done by a QR factorization of $W^{-1/2} H_3^T$. The computationally most intensive step is Step 4.
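The casting of (14) into the form $f^T f$ can be checked in a few lines. The sketch below is ours: it computes $f$ from the thin QR factorization of $W^{-1/2} H_3^T$ and verifies $f^T f$ against the direct expression on random data.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def residual_via_qr(H3, W, s):
    """f with f^T f = s^T H3^T (H3 W^{-1} H3^T)^{-1} H3 s: from the thin
    QR of W^{-1/2} H3^T one gets H3 W^{-1} H3^T = Rf^T Rf, hence
    f = Rf^{-T} (H3 s). W is assumed symmetric positive definite."""
    d, U = np.linalg.eigh(W)
    W_mhalf = U @ np.diag(d ** -0.5) @ U.T       # W^{-1/2}
    _, Rf = qr(W_mhalf @ H3.T, mode='economic')
    return solve_triangular(Rf.T, H3 @ s, lower=True)

# check against the direct expression on random full-row-rank data
rng = np.random.default_rng(1)
H3 = rng.standard_normal((5, 9))
A = rng.standard_normal((9, 9))
W = A @ A.T + 9 * np.eye(9)                      # positive definite weight
s = rng.standard_normal(9)
f = residual_via_qr(H3, W, s)
direct = s @ H3.T @ np.linalg.solve(H3 @ np.linalg.inv(W) @ H3.T, H3 @ s)
print(np.allclose(f @ f, direct))                # True
```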

4. Numerical experiments

In this section we first consider small SWLRA problems of which the solution is known, in order to illustrate the correctness of the proposed algorithm. The last subsection compares the efficiency and statistical accuracy of the SWLRA algorithm and the STLNB algorithm proposed in [8].

4.1. Benchmarks

To illustrate the numerical correctness of algorithm SWLRA, we use two small SWLRA problems proposed in [8], of which the exact solution can be calculated analytically. In [8], modeling problems were discussed which approximate a given data sequence $z_k$, $k = 1,\dots,p$, by the impulse response $\hat z_k$ of a finite-dimensional linear time-invariant (LTI) system of a given order, yielding the following SWLRA problem:

$$\min_{\hat z_k} \sum_{k=1}^{p} [(z_k - \hat z_k) w_k]^2 \quad \text{s.t.} \quad \hat C \begin{bmatrix} X \\ -I \end{bmatrix} = 0, \qquad (15)$$

where $\hat C \in \mathbb{C}^{n \times m}$ is a Toeplitz matrix constructed from $\hat z_k$ and the $w_k$ are appropriate weights.

Example 1: Consider the following LTI system of order 4: $w_{k+1} = \mathrm{diag}(0.4, 0.3, 0.2, 0.1)\, w_k$; $z_k = [\ ]\, w_k$ with $w_0 = [\ ]^T$. By arranging the first eight samples $z_k$ of the impulse response in a Toeplitz matrix $C$ and computing the best rank 1 structure-preserving approximation $\hat C$ to $C$ using the weights $(w_1^2, w_2^2, \dots, w_8^2) = (1, 2, 3, 4, 4, 3, 2, 1)$, we obtain with algorithm SWLRA the same ratio $\hat z_k / \hat z_{k-1}$, $k = 2,\dots,8$, as the analytically determined solution in [8]. The result is only accurate up to 8 digits, since accuracy is lost due to the squaring effect of the data matrix $S$ in the cost function of (14).

Example 2: Consider the following LTI system of order 2: $w_{k+1} = [\ ]\, w_k$; $z_k = [1\ 0]\, w_k$ with $w_0 = [6\ 1]^T$. We arrange the data $z_k$ in a $5 \times 2$ Toeplitz matrix $C$ and determine the best rank 1 structure-preserving approximation $\hat C$ to $C$ by solving the corresponding SWLRA problem with weights $(w_1^2, w_2^2, \dots, w_6^2) = (1, 2, 2, 2, 2, 1)$. We obtain the optimal solution with the same ratio $\hat z_k / \hat z_{k-1}$, $k = 2,\dots,6$, as determined in [8]. Also here the accuracy is limited in the same way: we lose accuracy, in contrast to STLNB, which reaches an accuracy up to $10^{-16}$, because of the squaring effect of the cost function $f^T f$ that has to be minimized by algorithm SWLRA.

4.2. Statistical accuracy and numerical efficiency

To compare the statistical accuracy and the computational efficiency of the SWLRA algorithm and the STLNB algorithm on larger examples, we perform Monte-Carlo simulations. In Table I the results for a first Hankel matrix are presented, whereas Table II contains the results for a second Hankel matrix. The Monte-Carlo simulations are performed as follows. A Hankel matrix of the required rank $r = m - \bar r$ is constructed. The latter matrix is perturbed by a Hankel matrix that is constructed using a vector containing i.i.d. Gaussian noise of fixed standard deviation. Finally, the SWLRA and the STLNB algorithm are applied to this perturbed Hankel matrix. Each Monte-Carlo simulation consists of 100 runs, i.e., it considers 100 noisy realizations. The results presented are averaged over these runs and were obtained by coding both algorithms in Matlab (version 6.1) and running them on a PC i686 with 800 MHz and 256 MB memory.

From both tables we can conclude the following. The statistical accuracy of the SWLRA algorithm is worse than that of the STLNB algorithm for small $\bar r$, but for large $\bar r$ the SWLRA algorithm performs best. From the computational point of view, the SWLRA algorithm outperforms the STLNB algorithm. The behaviour of the computational time for both algorithms can be explained as follows. The number of flops for minimizing the cost function (Step 4) in algorithm SWLRA is dominated by a QR decomposition (NLLS solver) whose computational cost is of the order of $2m^2(n - m/3)$ for an $n \times m$ matrix, so in our case of the order of $2(r+1)^2(n + m - 1 - r - (r+1)/3)$.

As a result, the number of flops decreases for decreasing $r$, or equivalently for increasing $\bar r$. On the contrary, the cost of the STLNB algorithm is dominated by a QR factorization of an $(n + m - 1 + n\bar r) \times (n + m - 1 + m\bar r)$ matrix requiring $2(n + m - 1 + m\bar r)^2 (n + m - 1 + n\bar r - (n + m - 1 + m\bar r)/3)$ flops, which results in an increasing number of flops for increasing $\bar r$. However, in both algorithms the computational cost can be decreased significantly by exploiting the matrix structure in the computation of the QR decomposition, e.g., by using displacement rank theory [10, 11]. This is the subject of future work.

Table I. Numerical results for the first Hankel matrix. Columns: $\bar r$; $\|S - R\|_F$ and cputime (sec) for SWLRA; $\|S - R\|_F$ and cputime (sec) for STLNB.

Table II. Numerical results for the second Hankel matrix. Columns: $\bar r$; $\|S - R\|_F$ and cputime (sec) for SWLRA; $\|S - R\|_F$ and cputime (sec) for STLNB.
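For completeness, here is a sketch of the Monte-Carlo setup described above. It is our illustration: it reuses swlra_hankel from the algorithm sketch in Section 3, and the matrix size and noise level are illustrative choices, not the paper's values.

```python
import numpy as np
# assumes swlra_hankel from the algorithm sketch in Section 3

def monte_carlo(n, m, r, runs=100, sigma=0.05, seed=0):
    """Average |S - R|_F over noisy realizations, mimicking Section 4.2:
    an exact rank-r Hankel matrix is perturbed by a Hankel matrix built
    from i.i.d. Gaussian noise. sigma and the sizes are illustrative."""
    rng = np.random.default_rng(seed)
    q = n + m - 1
    # number of matrix entries on each antidiagonal, for |.|_F from vec2
    counts = np.minimum.reduce([np.arange(1, q + 1),
                                np.arange(q, 0, -1),
                                np.full(q, min(n, m))])
    errs = []
    for _ in range(runs):
        z = rng.uniform(0.2, 0.9, size=r)            # random poles
        c = rng.uniform(0.5, 1.5, size=r)            # random amplitudes
        s0 = (c[:, None] * z[:, None] ** np.arange(q)).sum(axis=0)
        s = s0 + sigma * rng.standard_normal(q)      # structured noise
        s_hat = swlra_hankel(s, n, m, r)
        errs.append(np.sqrt(counts @ (s - s_hat) ** 2))
    return np.mean(errs)

print(monte_carlo(10, 5, 2, runs=5))                 # small illustrative run
```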

5. Conclusions

In this paper we have developed an extension of the Weighted Low Rank Approximation introduced by Manton et al. [9] to linearly structured matrices. For a particular type of structure, namely the Hankel structure, an algorithm was developed. The numerical accuracy of the latter algorithm was tested on a benchmark problem. Furthermore, the statistical accuracy and the computational efficiency were compared for the STLNB [8] and the SWLRA algorithm. For both properties, the SWLRA algorithm performs better than the STLNB algorithm for large $\bar r$.

ACKNOWLEDGEMENTS

Prof. Sabine Van Huffel is a full professor, dr. Philippe Lemmerling is a postdoctoral researcher of the FWO (Fund for Scientific Research - Flanders) and Mieke Schuermans is a research assistant at the Katholieke Universiteit Leuven, Belgium. Our research is supported by the Research Council KUL: GOA-Mefisto 666, IDO/99/003 and IDO/02/009 (Predictive computer models for medical classification problems using patient data and expert knowledge), several PhD/postdoc & fellow grants; the Flemish Government: FWO: PhD/postdoc grants, projects G (damage detection in composites by optical fibers), G (structured matrices), G (support vector machines), G (magnetic resonance spectroscopic imaging), G (nonlinear Lp approximation), research communities (ICCoS, ANMMM); AWI: Bil. Int. Collaboration Hungary/Poland; IWT: PhD grants; the Belgian Federal Government: DWTC (IUAP IV-02 and IUAP V-22: Dynamical Systems and Control: Computation, Identification & Modelling); EU: NICONET, INTERPRET, PDT-COIL, MRS/MRI signal processing (TMR); Contract Research/agreements: Data4s, IPCOS.

REFERENCES

1. Van Huffel S, Vandewalle J. The Total Least Squares Problem: Computational Aspects and Analysis. Frontiers in Applied Mathematics, Vol. 9. SIAM: Philadelphia, 1991.
2. Golub GH, Van Loan CF. An analysis of the total least squares problem. SIAM Journal on Numerical Analysis 1980; 17:883-893.
3. Abatzoglou TJ, Mendel JM, Harada GA. The constrained total least squares technique and its applications to harmonic superresolution. IEEE Transactions on Signal Processing 1991; 39.
4. Abatzoglou TJ, Mendel JM. Constrained total least squares. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, 1987.
5. De Moor B. Total least squares for affinely structured matrices and the noisy realization problem. IEEE Transactions on Signal Processing 1994; 42.
6. Rosen JB, Park H, Glick J. Total least norm formulation and solution for structured problems. SIAM Journal on Matrix Analysis and Applications 1996; 17(1).
7. Chu MT, Funderlic RE, Plemmons RJ. Structured lower rank approximation. Linear Algebra and its Applications 2003; 366.
8. Van Huffel S, Park H, Rosen JB. Formulation and solution of structured total least norm problems for parameter estimation. IEEE Transactions on Signal Processing 1996; 44(10).
9. Manton JH, Mahony R, Hua Y. The geometry of weighted low rank approximations. IEEE Transactions on Signal Processing, in press.
10. Lemmerling P, Mastronardi N, Van Huffel S. Fast algorithm for solving the Hankel/Toeplitz structured total least squares problem. Numerical Algorithms 2000; 23.
11. Kailath T, Sayed AH (eds). Fast Reliable Algorithms for Matrices with Structure. SIAM: Philadelphia, 1999.
