7. Line Spectra


The signal is assumed to consist of sinusoids,

$$y(t) = \sum_{p=1}^{n} \alpha_p e^{j(\omega_p t + \varphi_p)} + e(t), \qquad (33a)$$

where $\alpha_p$ is the amplitude, $\omega_p \in [-\pi, \pi]$ the angular frequency, and $\varphi_p$ the initial phase of the $p$th sinusoid. Using $\beta_p = \alpha_p e^{j\varphi_p}$, a simpler model is obtained:

$$y(t) = \sum_{p=1}^{n} \beta_p e^{j\omega_p t} + e(t). \qquad (33b)$$

The spectrum estimation problem is now reduced to estimating the frequencies $\omega_p$ in the presence of the nuisance parameters $\beta_p$.
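For concreteness, model (33b) is easy to simulate in Matlab. The following is a minimal sketch; the sample size, frequencies, amplitudes and noise variance below are illustrative choices, not values from the lecture examples.

    % Simulate y(t) = sum_p beta_p e^{j w_p t} + e(t), t = 1..N
    N      = 128;                   % number of samples
    t      = (1:N).';
    omega  = [pi/3; pi/3 + 0.1];    % angular frequencies (illustrative)
    beta   = [1; 1];                % beta_p = alpha_p*exp(j*phi_p)
    sigma2 = 0.1;                   % noise variance
    e = sqrt(sigma2/2)*(randn(N,1) + 1j*randn(N,1));   % complex white noise
    y = exp(1j*t*omega.')*beta + e;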

It is assumed that $e(t)$ is white Gaussian noise with variance $\sigma^2$. It can be shown that the true autocorrelation and PSD are

$$r(k) = \sum_{p=1}^{n} \alpha_p^2 e^{j\omega_p k} + \sigma^2 \delta_{k,0} \qquad (34a)$$

$$\phi(\omega) = 2\pi \sum_{p=1}^{n} \alpha_p^2 \delta(\omega - \omega_p) + \sigma^2, \qquad (34b)$$

where $\delta$ is the Dirac impulse (delta function), which has the property $\int_{-\pi}^{\pi} F(\omega)\,\delta(\omega - \omega_p)\,d\omega = F(\omega_p)$ for any function $F$ that is continuous at $\omega_p$. The PSD forms a line spectrum, which is the reason for the name of these methods; the name discrete spectrum is also used. After the unknowns have been estimated, the spectrum can be drawn.

The frequency estimation methods introduced here are superresolution methods, i.e., they can separate sinusoids that are spaced more closely than $1/N$, the resolution limit of the classical periodogram. They are consistent and have low variance (near the Cramér-Rao bound) if the assumed model is correct. The assumptions on the noise can be relaxed, especially if nonlinear least squares (NLS) is used.

The other estimators to be discussed are the MUltiple SIgnal Classification (MUSIC) method, Min-Norm, and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT). These are so-called subspace methods, which use the eigendecomposition of the sample correlation matrix. An improved correlation estimate, known as the forward-backward approach, is also discussed; it has been shown empirically to improve frequency estimation accuracy.

NLS

Using the notations $\beta = [\beta_1 \ldots \beta_n]^T$, $y = [y(1) \ldots y(N)]^T$, $e = [e(1) \ldots e(N)]^T$, $\omega = [\omega_1 \ldots \omega_n]^T$ and

$$B(\omega) = B = \begin{bmatrix} e^{j\omega_1} & \cdots & e^{j\omega_n} \\ \vdots & & \vdots \\ e^{j\omega_1 N} & \cdots & e^{j\omega_n N} \end{bmatrix},$$

the signal model (33) can be expressed as

$$y = B\beta + e. \qquad (35)$$

The model is linear with respect to the amplitudes $\beta$ but nonlinear with respect to the frequencies $\omega$. NLS solves for $\beta$ and $\omega$ by minimizing

$$f(\beta, \omega) = \|y - B\beta\|^2. \qquad (36)$$

It was shown early on that $\hat{\beta} = (B^H B)^{-1} B^H y$. Substituting this into (36), we get

$$\hat{\omega} = \arg\max_{\omega}\; y^H B (B^H B)^{-1} B^H y. \qquad (37)$$

The noise power is estimated as the mean of (36) at the minimum, i.e., $\hat{\sigma}^2 = f(\hat{\beta}, \hat{\omega})/N$. Asymptotically, as $N$ grows large, the covariance matrix of the estimation errors $\hat{\omega} - \omega$ is given by

$$\mathrm{Cov}(\hat{\omega}) = \frac{6\sigma^2}{N^3} \begin{bmatrix} 1/\alpha_1^2 & & 0 \\ & \ddots & \\ 0 & & 1/\alpha_n^2 \end{bmatrix}. \qquad (38)$$

If the noise is Gaussian, this is also the Cramér-Rao lower bound, so in the Gaussian noise case NLS provides (asymptotically) the most accurate frequency estimates.

The solution of (36) requires a multidimensional search; e.g., Newton or Gauss-Newton methods or their variants can be used. Wrong initial values may cause these algorithms to converge to a local maximum instead of the global one, or even to diverge. Several methods exist for finding initial values: one is obtained by assuming $n = 1$ (discussed later), another is the alternating projection (AP) algorithm. The NLS solution computes the $n$-dimensional concentrated criterion (37) over a grid of frequencies and finds the maximum of the resulting surface.
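A crude but workable initialization, consistent with the grid-plus-refinement idea above, is to evaluate the concentrated criterion (37) on a coarse grid and start the local search from the best grid point. A hedged Matlab sketch for n = 2 follows; the grid size and variable names are my own choices, and y is assumed to come from the simulation sketch above.

    % Evaluate y^H B (B^H B)^{-1} B^H y on a 2-D frequency grid (n = 2)
    t  = (1:length(y)).';
    K  = 200;  wg = linspace(0, pi, K);
    C  = -inf(K, K);
    for k = 1:K
      for l = k+1:K                    % w1 < w2; w1 = w2 makes B rank deficient
        B = exp(1j*t*[wg(k) wg(l)]);   % N x 2 matrix B(omega)
        C(k,l) = real(y'*(B*((B'*B)\(B'*y))));
      end
    end
    [~, imax] = max(C(:));  [k, l] = ind2sub([K K], imax);
    w_init = [wg(k); wg(l)];           % starting point for Newton/Gauss-Newton steps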

As an example, a two-dimensional case is computed with $N = 32$, so that the resolution in angular frequency is $2\pi/32 \approx 0.2$. The frequencies are $\pi/3$ and $\pi/\ldots$, such that the separation is below the classical resolution. The amplitudes are 1 and $\ldots$

(Figure: the NLS criterion surface for the two-dimensional example.)

NLS is a superresolution method, and spectral leakage is not a problem. NLS works even if the signal amplitudes differ so much that in the periodogram one signal is buried under the sidelobes of the other. In terms of spread spectrum communications, NLS is a near-far robust method.

NLS when n = 1

If $n = 1$, we get from (37)

$$\hat{\omega} = \hat{\omega}_1 = \arg\max_{\omega} \frac{1}{N} |B^H y|^2, \qquad (39)$$

where $B \in \mathbb{C}^N$ and the $1/N$ factor comes from the fact that $B^H B = N$; it can be ignored. This is the periodogram without windowing: the periodogram is the NLS estimator when only one signal is present. From the properties of the periodogram it follows that (39) can be used in the multiple-signal case if the signal separation is larger than $2\pi/N$ and the signal powers are roughly equal, so that leakage does not cause problems.
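Because (39) is the unwindowed periodogram, it can be computed with a single zero-padded FFT. A minimal sketch (the padding factor 8 is an arbitrary choice):

    % Single-sinusoid NLS (39): peak of the unwindowed periodogram
    N = length(y);
    K = 8*N;                          % zero padding refines the frequency grid
    P = abs(fft(y, K)).^2 / N;        % periodogram at omega = 2*pi*(k-1)/K
    [~, kmax] = max(P);
    w_hat = 2*pi*(kmax-1)/K;          % peak bin -> angular frequency
    if w_hat > pi, w_hat = w_hat - 2*pi; end   % map to (-pi, pi]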

In that case the NLS estimates are the $n$ largest peaks of the periodogram, which are not necessarily guaranteed to have the covariance properties of NLS; the difference is of order $1/N$. The periodogram can be used as an initial-value solver for NLS (if leakage is not a problem).

Eigenanalysis of the Correlation Matrix

Note that (for $n = 1$)

$$\begin{aligned} y(t) &= \beta_k e^{j\omega_k t} \\ y(t-1) &= e^{-j\omega_k} \beta_k e^{j\omega_k t} \\ &\;\;\vdots \\ y(t-m+1) &= e^{-j(m-1)\omega_k} \beta_k e^{j\omega_k t}, \end{aligned}$$

which, in matrix form, is $\tilde{y}(t) = a(\omega_k)\, x_k(t) + \tilde{e}(t)$, where $a(\omega) = [1 \; e^{-j\omega} \ldots e^{-j(m-1)\omega}]^T \in \mathbb{C}^m$ and $x_k(t) = \beta_k e^{j\omega_k t}$. In the general case

$$\tilde{y}(t) = A(\omega)\,\tilde{x}(t) + \tilde{e}(t), \qquad (40)$$

where $A(\omega) = [a(\omega_1) \ldots a(\omega_n)] \in \mathbb{C}^{m \times n}$ and $\tilde{x}(t) = [x_1(t) \ldots x_n(t)]^T$.
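In practice the snapshots $\tilde{y}(t)$ are stacked into a matrix from which the sample correlation matrix used by the subspace methods below is formed. A short Matlab sketch (the subvector length m = 32 matches the later example; Ysnap is my own variable name):

    % Snapshots ytilde(t) = [y(t); y(t-1); ...; y(t-m+1)], t = m..N,
    % and the sample correlation matrix R_hat = (1/N) sum ytilde*ytilde^H
    m = 32;
    Ysnap = zeros(m, N-m+1);
    for tt = m:N
      Ysnap(:, tt-m+1) = y(tt:-1:tt-m+1);   % newest sample first
    end
    R_hat = (Ysnap*Ysnap')/N;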

Since the $\omega_k$ are distinct, the vectors $a(\omega_k)$ are linearly independent and $\mathrm{rank}\{A\} = n$. The snapshot $\tilde{y}(t)$ has the correlation matrix

$$R = E\{\tilde{y}(t)\tilde{y}^H(t)\} = A P A^H + \sigma^2 I, \qquad (41a)$$

where

$$P = \begin{bmatrix} \alpha_1^2 & & 0 \\ & \ddots & \\ 0 & & \alpha_n^2 \end{bmatrix} \qquad (41b)$$

and $I$ is an identity matrix of suitable size ($m \times m$ in this case). $P = \lim_{N\to\infty} \frac{1}{N} \sum_{t=1}^{N} \tilde{x}(t)\tilde{x}^H(t)$ is the covariance of the sinusoids. The off-diagonal elements consist of averages of terms like $e^{j\omega_k t} e^{-j\omega_l t} = e^{j(\omega_k - \omega_l)t}$, and such an average is zero (the average of a sinusoid).

Since $APA^H \in \mathbb{C}^{m \times m}$ has rank $n$ ($n < m$), it is singular, i.e., its determinant is zero: $|APA^H| = 0$. This is equal to $|R - \sigma^2 I| = 0$, which means that $\sigma^2$ is an eigenvalue of $R$: $R u_i = \sigma^2 u_i$, where $u_i$ is an eigenvector associated with $\sigma^2$. Since $R = APA^H + \sigma^2 I$,

$$APA^H u_i = 0. \qquad (42)$$

This means that the eigenvectors associated with $\sigma^2$ lie in the null space of $A^H$.

Since the dimension of the range space of $A$ equals $\mathrm{rank}\{APA^H\} = n$, the dimension of the null space is $m - n$. It follows that the eigenvalue $\sigma^2$ has multiplicity $m - n$. The remaining $n$ eigenvalues satisfy

$$APA^H u_i = (\lambda_i - \sigma^2) u_i. \qquad (43)$$

Since $APA^H$ is positive semidefinite, $\lambda_i - \sigma^2 \geq 0$; thus $\sigma^2$ is the minimum eigenvalue of $R$. The eigenvalues can be ordered as

$$\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n > \sigma^2 = \ldots = \sigma^2.$$

The eigenvectors of the $n$ largest eigenvalues span the range space of $A$ (if $P$ has full rank); this is also called the signal subspace. The eigenvectors of the $m - n$ smallest eigenvalues span the null space of $A^H$; this is also called the noise subspace. The analysis also holds for other linearly independent vectors $a(\omega)$ if $P$ has full rank; in the general case $P$ is not diagonal.

MUSIC

Equation (42) means that the sinusoid vectors $a(\omega)$ (at the true frequencies) are orthogonal to the noise subspace. This can be stated as $a(\omega)^H u_i = 0$ for $i = n+1, \ldots, m$. Let $G = [u_{n+1} \ldots u_m] \in \mathbb{C}^{m \times (m-n)}$ contain the eigenvectors that span the noise subspace. Then the true frequencies are the solutions of

$$a(\omega)^H G G^H a(\omega) = \|G^H a(\omega)\|^2 = 0. \qquad (44)$$

In practice, only the estimate $\hat{R} = \frac{1}{N}\sum_{t=m}^{N} \tilde{y}(t)\tilde{y}^H(t)$ of $R$ is available. It follows that only an estimate $\hat{G}$ of $G$ is available, and it is better to find the frequencies as the minimizers of

$$f_{\mathrm{music}}(\omega) = \|\hat{G}^H a(\omega)\|^2. \qquad (45)$$

Sometimes $1/f_{\mathrm{music}}(\omega)$ is called the MUSIC (pseudo)spectrum, since its highest peaks are associated with the sinusoids. In the MUSIC method, a) find the sample correlation matrix, b) compute its eigenvectors, c) form the noise subspace estimate $\hat{G}$ from the eigenvectors associated with the $m-n$ smallest eigenvalues, and d) take the frequency estimates as the minima of (45). Often a faster way to compute the eigenvectors is to compute the SVD of $Y = [\tilde{y}(m) \ldots \tilde{y}(N)]$. The decomposition is $\mathrm{SVD}\{Y\} = U\Sigma V^H$ with $U = [S \; G]$, where $S = [u_1 \ldots u_n]$ contains the remaining (signal subspace) eigenvectors of $R$. Since $SS^H + GG^H = I$, the frequencies can also be obtained as the maximizers of $\|\hat{S}^H a(\omega)\|^2$. This form is useful if $n < m - n$, since the computational complexity is reduced.
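A Matlab sketch of steps a)-d); the grid density and variable names are my own choices, R_hat and m come from the snapshot sketch above, and n = 2 is an assumed model order:

    % Spectral MUSIC: noise subspace from eig(R_hat), then scan 1/f_music
    n = 2;                                    % assumed number of sinusoids
    [U, D] = eig(R_hat);
    [~, order] = sort(real(diag(D)), 'descend');
    G = U(:, order(n+1:end));                 % m-n smallest-eigenvalue eigenvectors
    wg = linspace(-pi, pi, 4096);
    Pm = zeros(size(wg));
    for k = 1:numel(wg)
      a = exp(-1j*(0:m-1).'*wg(k));           % a(w) = [1 e^{-jw} ... e^{-j(m-1)w}]^T
      Pm(k) = 1/norm(G'*a)^2;                 % MUSIC pseudospectrum
    end
    % the n highest peaks of Pm give the frequency estimates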

The Pisarenko method uses only the eigenvector associated with the smallest eigenvalue of $\hat{R}$, i.e., it is a special case of MUSIC (no order determination problems, but decreased performance). Root MUSIC computes the frequencies as the angular positions of the $n$ roots of the equation

$$a^T(z^{-1})\,\hat{G}\hat{G}^H a(z) = 0 \qquad (46)$$

that are nearest to (and inside) the unit circle. Here $a(z)$ is obtained from $a(\omega)$ by replacing $e^{j\omega}$ by $z$, so that $a(z) = [1 \; z^{-1} \ldots z^{-(m-1)}]^T$.

The parameter $m$ should be as large as possible (for resolution), but not too close to $N$, so that $\hat{R}$ can be computed from sufficiently many snapshots (for variance). It is sometimes said that $\hat{R}$ requires $2m$ to $4m$ vectors; since the number of vectors $\tilde{y}(t)$ is $N - m + 1$, this suggests choosing $m$ between about $N/5$ and $N/3$. Matlab includes pmusic and rootmusic. Amplitudes can be estimated as in NLS. The noise power can be estimated as an average of the noise subspace eigenvalues (or some of them).

As an example, let $N = 128$ and $m = N/4 = 32$. Now the $1/N$ angular resolution limit is $2\pi/128 \approx 0.05$, and the angular frequencies are $\pi/20 \approx 0.15$, $\pi/3 \approx 1.05$, and $\pi/\ldots$ The amplitudes are 1, 10 and 10, respectively. The $1/m$ angular resolution limit is $2\pi/32 \approx 0.2$. The next pages present the eigenvalues and the MUSIC spectrum, whose highest peaks correspond to the frequency estimates.

(Figure: normalized eigenvalues versus eigenvalue number.)

(Figure: MUSIC spectrum versus angular frequency for the first example.)

Three large eigenvalues can be separated, but frequencies below the $1/N$ limit cannot be resolved. On the next pages the separations are 0.15 and $\ldots$

(Figure: MUSIC spectrum versus angular frequency, separation 0.15.)

(Figure: MUSIC spectrum versus angular frequency for the second separation.)

The two close frequencies are separable below the $1/m$ limit (if the signal-to-noise ratio is high enough, or $N$ is large enough that the SNR becomes high after averaging). The computation of root MUSIC is addressed next. Find $\hat{G}$ and compute $GG = \hat{G}\hat{G}^H$ ($m \times m$). Find the $2m-1$ coefficients of (46); each is obtained as the sum of the elements of one diagonal of $GG$. The first coefficient comes from the lowest diagonal (i.e., the element $(m,1)$), the second coefficient from the diagonal above it (i.e., the elements $(m-1,1)$ and $(m,2)$), etc. (sum(diag(GG,j)) for j = -(m-1):m-1 in Matlab). Find the $n$ roots $\xi_k$ that are nearest to the unit circle but inside it (the roots command in Matlab). The angular frequency estimates are $\hat{\omega}_k = \angle \xi_k$.
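The same steps as a Matlab sketch; G, m and n are assumed to come from the MUSIC sketch above, and the coefficient ordering follows the description just given (lowest diagonal first, i.e., descending powers of z):

    % Root MUSIC: coefficients of (46) from the diagonals of GG = G*G'
    GG = G*G';
    c  = zeros(2*m-1, 1);
    for k = -(m-1):(m-1)
      c(k+m) = sum(diag(GG, k));   % c(1) = GG(m,1), the lowest diagonal
    end
    r = roots(c);                  % 2m-2 roots in conjugate reciprocal pairs
    r = r(abs(r) < 1);             % keep the roots inside the unit circle
    [~, idx] = sort(1 - abs(r));   % nearest to the unit circle first
    w_hat = angle(r(idx(1:n)));    % angular frequency estimates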

For the previous example we obtained the following results:

(Table: correct versus estimated angular frequencies for the three separations.)

As observed, the method can separate very close frequencies (at least in this case) that spectral MUSIC was not able to separate.

On the next page the roots inside the unit circle are shown (polar(angle(r),abs(r),'*') in Matlab).

(Figure: the roots of the root-MUSIC polynomial inside the unit circle.)

Min-Norm

Is one noise eigenvector sufficient? This would give computational savings compared with MUSIC. It is not, if the Pisarenko method is used; Min-Norm tries to achieve this with statistical accuracy similar to that of MUSIC. Select the vector

$$q = \begin{bmatrix} 1 \\ \hat{g} \end{bmatrix}$$

from the noise subspace that has minimum Euclidean norm. The Min-Norm frequency estimates are the $n$ maxima of the pseudospectrum

$$f_{\mathrm{min\text{-}norm}}(\omega) = \frac{1}{|a^H(\omega)\, q|^2}. \qquad (47)$$

A root version also exists. It solves the frequencies as the angular positions of the $n$ roots of the polynomial $a^T(z^{-1})\, q$ that are located nearest to the unit circle (not necessarily inside it). To compute root Min-Norm, find the roots of $q$.

The course book gives two ways to determine $\hat{g}$; the first one is useful if $n < m - n$. Let $\hat{S}$ and $\hat{G}$ be partitioned as

$$\hat{S} = \begin{bmatrix} \alpha^H \\ \bar{S} \end{bmatrix}, \qquad \hat{G} = \begin{bmatrix} \beta^H \\ \bar{G} \end{bmatrix},$$

i.e., $\alpha^H$ and $\beta^H$ are the first rows of $\hat{S}$ and $\hat{G}$. The solutions are

$$\hat{g} = -\bar{S}\alpha / (1 - \|\alpha\|^2) \qquad (48a)$$

and

$$\hat{g} = \bar{G}\beta / \|\beta\|^2. \qquad (48b)$$

Note that the relation $\|\alpha\|^2 \neq 1$ must hold. It holds, at least, for large $N$.
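A Matlab sketch of (48a) followed by the root version; U, order, n and m are assumed to come from the MUSIC sketch, and the minus sign in (48a) is as reconstructed above:

    % Min-Norm vector q = [1; g] via (48a)
    S     = U(:, order(1:n));            % signal subspace eigenvectors
    alpha = S(1, :)';                    % first row of S, conjugated
    Sbar  = S(2:end, :);
    g     = -Sbar*alpha/(1 - norm(alpha)^2);
    q     = [1; g];
    % Root Min-Norm: roots of a^T(1/z) q; flipud gives descending powers
    r = roots(flipud(q));
    [~, idx] = sort(abs(abs(r) - 1));    % nearest the unit circle, either side
    w_hat = angle(r(idx(1:n)));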

As an example, we repeated the examples used to demonstrate MUSIC, with close frequency separations 0.02, 0.15 and $\ldots$

(Figure: Min-Norm spectrum versus angular frequency, separation 0.02.)

(Figure: Min-Norm spectrum versus angular frequency, separation 0.15.)

(Figure: Min-Norm spectrum versus angular frequency for the third separation.)

As can be seen, Min-Norm is not necessarily as reliable. It may lead to wrong conclusions (here that happened when the very close signals were more powerful than the separated one). As in the MUSIC case, the pseudospectrum does not give correct amplitudes. The peaks of Min-Norm seem to be narrower than those of MUSIC. Next, root Min-Norm is examined.

For the previous example we obtained the following results:

(Table: correct versus estimated angular frequencies for the three separations.)

As observed, the method can separate very close frequencies (at least in this case).

The roots are shown on the next page for the last case.

(Figure: the roots of the root-Min-Norm polynomial for the last case.)

Rationale behind root methods

This is not discussed in the course book, but is added here for educational purposes. Equation (42) means that the sinusoid vector $a(\omega)$ is orthogonal to the vectors that lie in the noise subspace. Replacing $e^{j\omega}$ by $z$ in $a(\omega)$, the orthogonality becomes $a^T(z^{-1})\,u = 0$ at $z = e^{j\omega_k}$, where $u$ lies in the noise subspace. Let the elements of $u$ be $u_i$. Then we have the polynomial $y(z) = \sum_{i=0}^{m-1} u_i z^i$, which has among its roots $z_k = e^{j\omega_k}$, $k = 1, \ldots, n$. These roots lie on the unit circle (remember: if a polynomial $x(z)$ has $n$ roots $z_i$, it can be represented as $x(z) = c \prod_{i=1}^{n} (z - z_i)$). In root Min-Norm, $q$ lies in the noise subspace, and we select the roots of the polynomial $y(z) = \sum_{i=0}^{m-1} q_i z^i$ that are closest to the unit circle.

In root MUSIC we use all the noise subspace eigenvectors. The $l$th element of the vector $G^H a(z)$ is the polynomial $y_l(z) = \sum_{i=0}^{m-1} G^*_{i+1,l}\, z^{-i}$, where $G_{l,i}$ denotes the $(l,i)$th element of the matrix $G$. The $k$th element of the vector $a^T(z^{-1})G$ is the polynomial $x_k(z) = \sum_{i=0}^{m-1} G_{i+1,k}\, z^{i}$. Therefore $a^T(z^{-1})GG^H a(z)$ is the polynomial

$$r(z) = \sum_{k=1}^{m-n} \sum_{j=0}^{m-1} \sum_{i=0}^{m-1} G_{j+1,k} G^*_{i+1,k}\, z^{j-i}.$$

Written as $r(z) = \sum_{i=-(m-1)}^{m-1} a_i z^i$, it has $n$ roots $z_k = e^{j\omega_k}$ on the unit circle. $r(z)$ can also be factored as $r(z) = p(z)\, p^*(1/z^*)$, which means that the roots occur in conjugate reciprocal pairs. In root MUSIC we select the $n$ roots of $r(z)$ that are inside the unit circle and closest to it.

How are the coefficients $a_i$ obtained? Consider first $a_{-(m-1)}$, which comes from $j = 0$ and $i = m-1$. In this case $a_{-(m-1)} = \sum_{k=1}^{m-n} G_{1,k} G^*_{m,k}$, which means that the first row of $G$ is multiplied by the last column of $G^H$; this is the $(1,m)$th element of $GG^H$, i.e., the sum of the elements of the topmost diagonal of $GG^H$. Consider then $a_{-(m-2)}$, which is obtained as the sum of the terms with $(j = 0, i = m-2)$ and $(j = 1, i = m-1)$. This gives $a_{-(m-2)} = \sum_{k=1}^{m-n} (G_{1,k} G^*_{m-1,k} + G_{2,k} G^*_{m,k})$, which is the sum of the elements of the next diagonal of $GG^H$. Continuing in this way leads to the algorithm given earlier.

ESPRIT

Note that the $(m-1) \times 1$ vectors

$$a_1(\omega) = \begin{bmatrix} 1 \\ e^{-j\omega} \\ \vdots \\ e^{-j(m-2)\omega} \end{bmatrix}, \qquad a_2(\omega) = \begin{bmatrix} e^{-j\omega} \\ e^{-j2\omega} \\ \vdots \\ e^{-j(m-1)\omega} \end{bmatrix}$$

are connected by $a_2(\omega) = a_1(\omega)\, e^{-j\omega}$. This generalizes to the $n$-signal case by the partitioning

$$A_1 = [I_{m-1} \;\; 0_{m-1}]\, A, \qquad A_2 = [0_{m-1} \;\; I_{m-1}]\, A, \qquad A_2 = A_1 D, \qquad (49a)$$

where

$$D = \begin{bmatrix} e^{-j\omega_1} & & 0 \\ & \ddots & \\ 0 & & e^{-j\omega_n} \end{bmatrix}. \qquad (49b)$$

Relation (49) is known as a rotation ($D$ is a unitary matrix). Similarly, define $S_1 = [I_{m-1} \;\; 0_{m-1}]\, S$ and $S_2 = [0_{m-1} \;\; I_{m-1}]\, S$. Multiply equation (43) from the left by $[I_{m-1} \;\; 0_{m-1}]$ and $[0_{m-1} \;\; I_{m-1}]$, and let $\Lambda$ be the diagonal matrix with elements $\lambda_i - \sigma^2$. Then the result is

$$A_1 P A^H S = S_1 \Lambda, \qquad A_2 P A^H S = S_2 \Lambda.$$

From the first equation we get $A_1 = S_1 C^{-1}$, where $C = P A^H S \Lambda^{-1}$. The second equation then gives

$$S_2 = A_2 C = A_1 D C = S_1 C^{-1} D C = S_1 \Phi, \qquad \Phi = C^{-1} D C.$$

$\Phi$ and $D$ have equal eigenvalues $\nu_k = e^{-j\omega_k}$. In practice, only an estimate of $S$ is available, and $\Phi$ is obtained as a minimum-norm solution. The ESPRIT method a) determines $\hat{\Phi}$, b) computes its eigenvalues $\hat{\nu}_k$, and c) solves the frequencies as $\hat{\omega}_k = -\angle \hat{\nu}_k$. Alternatives for $\hat{\Phi}$ are LS and total LS (TLS). The LS solution is

$$\hat{\Phi} = (\hat{S}_1^H \hat{S}_1)^{-1} \hat{S}_1^H \hat{S}_2. \qquad (50)$$
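A Matlab sketch of LS-ESPRIT; S is assumed to be the signal subspace estimate from the Min-Norm sketch, and the sign of the angle follows the convention $\nu_k = e^{-j\omega_k}$ used above:

    % LS-ESPRIT: solve S1*Phi = S2 in the least squares sense, cf. (50)
    S1  = S(1:end-1, :);
    S2  = S(2:end, :);
    Phi = (S1'*S1)\(S1'*S2);      % LS solution (50)
    w_hat = -angle(eig(Phi));     % eigenvalues nu_k = e^{-j w_k}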

For the previous example we obtained the following results:

(Table: correct versus estimated angular frequencies for the three separations.)

As observed, the method can separate very close frequencies (at least in this case).

In conclusion: the root versions of MUSIC and Min-Norm provide direct frequency estimates, while with the spectral versions a search for maxima (or minima) is required. Amplitudes can be computed as in NLS, after the frequencies have been estimated. The noise power can be estimated from the mean estimation error in NLS, or as an average of the noise subspace eigenvalues.

Forward-Backward Method

As a last subject, we consider an improved correlation matrix estimator that has been empirically shown to lead to more accurate frequency estimates. The regular correlation matrix estimate

$$\hat{R} = \frac{1}{N} \sum_{t=m}^{N} \begin{bmatrix} y(t) \\ \vdots \\ y(t-m+1) \end{bmatrix} [y^*(t) \; \ldots \; y^*(t-m+1)]$$

can be seen as the result of an LS problem where the newest value $y(t)$ is estimated from the $m$ previous values. This is forward prediction.

Alternatively, the oldest value $y(t-m+1)$ can be estimated from the $m$ newer samples. This is known as backward prediction, and it leads to the sample correlation matrix

$$\hat{R}_b = \frac{1}{N} \sum_{t=m}^{N} \begin{bmatrix} y^*(t-m+1) \\ \vdots \\ y^*(t) \end{bmatrix} [y(t-m+1) \; \ldots \; y(t)],$$

which can be expressed as

$$\hat{R}_b = J \hat{R}^T J, \qquad (51)$$

where

$$J = \begin{bmatrix} & & 1 \\ & \iddots & \\ 1 & & \end{bmatrix}$$

is the so-called exchange matrix.

The forward-backward (FB) correlation matrix estimate is

$$\tilde{R} = \frac{1}{2}\left(\hat{R} + J\hat{R}^T J\right). \qquad (52)$$

This can be used instead of $\hat{R}$ in the hope of more accurate results.
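In Matlab the FB estimate is a one-line modification of the forward estimate; R_hat and m are assumed to come from the snapshot sketch above:

    % Forward-backward correlation estimate (52)
    J    = fliplr(eye(m));            % exchange matrix (ones on the antidiagonal)
    R_fb = (R_hat + J*R_hat.'*J)/2;   % plain transpose .', as in (51)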
