Sampling considerations for modal analysis with damping


Jae Young Park,(a) Michael B. Wakin,(b) and Anna C. Gilbert(c)

(a) University of Michigan, 1301 Beal Ave, Ann Arbor, MI 48109 USA
(b) Colorado School of Mines, 1500 Illinois St, Golden, CO 80401 USA
(c) University of Michigan, 530 Church St, Ann Arbor, MI 48109 USA

ABSTRACT

Structural health monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Wireless sensor networks that sample vibration data over time are particularly appealing for SHM applications due to their flexibility and low cost. However, in order to extend the battery life of wireless sensor nodes, it is essential to minimize the amount of vibration data these sensors must collect and transmit. In recent work, we have studied the performance of the Singular Value Decomposition (SVD) applied to the collected data and provided new finite-sample analysis characterizing conditions under which this simple technique, also known as the Proper Orthogonal Decomposition (POD), can correctly estimate the mode shapes of the structure. Specifically, we provided theoretical guarantees on the number and duration of samples required in order to estimate a structure's mode shapes to a desired level of accuracy. In that previous work, however, we considered simplified Multiple-Degree-Of-Freedom (MDOF) systems with no damping. In this paper we consider MDOF systems with proportional damping and show that, with sufficiently light damping, the POD can continue to provide accurate estimates of a structure's mode shapes. We support our discussion with new analytical insight and experimental demonstrations. In particular, we study the tradeoffs between the level of damping, the sampling rate and duration, and the accuracy to which the structure's mode shapes can be estimated.

Keywords: Modal analysis, proportional damping, Proper Orthogonal Decomposition (POD), Singular Value Decomposition (SVD), structural health monitoring (SHM)

1 INTRODUCTION

The Proper Orthogonal Decomposition (POD) is a prototypical algorithm for modal analysis of a vibrating structure. Let {d(t)} denote an N x 1 vector of displacements recorded from N sensor locations on the structure. Suppose we record the displacements at M discrete times t_1, t_2, ..., t_M. We can then stack the M displacement vectors into an N x M matrix:

$$[D] = \big[\ \{d(t_1)\}\ \ \{d(t_2)\}\ \ \cdots\ \ \{d(t_M)\}\ \big]. \qquad (1)$$

An effective strategy for matrix-based data analysis is often to factor a data matrix into constituent parts that reveal certain structural properties. One candidate factorization is the Singular Value Decomposition (SVD). In fact, by taking the SVD of the data matrix [D], one obtains the POD. Specifically, let

$$[D] = [U][\Sigma][V]^* \qquad (2)$$

denote the truncated SVD of [D], where [U] is an N x r matrix with orthonormal columns (r is the rank of [D]), [Σ] is an r x r diagonal matrix with positive entries along the diagonal sorted from large to small, σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0, and [V] is an M x r matrix with orthonormal columns ([V]^* is the conjugate transpose of [V]).

jaeypark@umich.edu, mwakin@mines.edu, annacg@umich.edu. This work was partially supported by NSF grants CCF-6233 and CIF-9765 and an NSF CAREER grant.

Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2015, edited by Jerome P. Lynch, Kon-Well Wang, Hoon Sohn, Proc. of SPIE Vol. 9435, © 2015 SPIE
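As a concrete illustration, here is a minimal Python sketch (our own, assuming a NumPy environment; the array name `displacements` and the rank cutoff are illustrative choices, not part of the paper) of how the POMs are obtained from the SVD of [D]:

```python
import numpy as np

# Minimal sketch: obtaining the POMs from the SVD of the data matrix [D].
# `displacements` is the N x M matrix whose columns are the snapshots
# {d(t_m)} from equation (1).

def compute_poms(displacements):
    """Return the POMs (left singular vectors), singular values, and V^*."""
    U, sigma, Vh = np.linalg.svd(displacements, full_matrices=False)
    r = int(np.sum(sigma > 1e-10 * sigma[0]))   # numerical rank r of [D]
    return U[:, :r], sigma[:r], Vh[:r, :]

# Example with a random rank-2 data matrix: N = 4 sensors, M = 100 samples.
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 100))
poms, sigma, _ = compute_poms(D)
print(poms.shape)   # (4, 2): one length-N POM per retained singular value
```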

Then the columns of [U] are known as the principal orthogonal modes (POMs) of the data set. Each of these columns is a length-N vector, with entries corresponding to POM displacement at the N sensor locations.

One reason the POD is useful in structural health monitoring (SHM) is that it can provide information about a structure's true, normal mode shapes. It has previously been shown that, under certain assumptions, the POMs converge to the true mode shapes asymptotically as the number of observations M goes to infinity (Refs. 1, 2). In situations where displacement readings are collected by a (possibly battery-operated) wireless sensor network, however, it may be impractical to collect a large amount of data (large M). Motivated by research in the field of Compressive Sensing (CS), we have recently provided finite-sample guarantees on the accuracy to which the POMs approximate the true mode shapes (Refs. 3, 4). These results are described in Section 2. One of the conclusions from this work is that, under the right conditions, it can be possible to use the POD even in situations where M is very small.

Our analysis in Refs. 3, 4, however, was relevant only to systems in free vibration with no damping. In this paper we explore the performance of the POD in the case of proportional damping. Specifically, we discuss conditions on the damping coefficients, modal frequencies, sampling rate, and number of samples M under which the POMs will provide an accurate approximation of a system's true mode shapes. We support the new insight with analytical discussion and experimental demonstrations.

2 MATRIX FACTORIZATIONS IN THE UNDAMPED CASE

We begin with a review of our previous work dealing with undamped systems, and we provide some insight into why taking the SVD of a data matrix can reveal structural properties of the underlying system. The general solution to the equations of motion for an undamped N-degree-of-freedom system in free vibration is given by

$$\{d(t)\} = \sum_{n=1}^{N} \{\psi_n\} A_n \sin(\omega_n t + \theta_n), \qquad (3)$$

where

- ω_1, ω_2, ..., ω_N are the undamped natural modal frequencies of the system (determined by the system's mass and stiffness matrices),
- {ψ_1}, {ψ_2}, ..., {ψ_N} are N x 1 vectors corresponding to the system's true, normal mode shapes (also determined by the system's mass and stiffness matrices), and
- A_1, A_2, ..., A_N are amplitudes and θ_1, θ_2, ..., θ_N are phases of the sinusoidal loadings (determined by the system's initial conditions).

If certain modal frequencies have not been excited, it is possible that some of the amplitudes A_n may be zero. We let K denote the number of nonzero amplitudes. While in general K may be as large as N, in some cases K may be smaller than N, and it will not be possible to estimate the modal parameters corresponding to the zero amplitudes. Without loss of generality, we also assume the amplitudes are ordered from large to small, and thus we will have A_1 ≥ ... ≥ A_K > A_{K+1} = ... = A_N = 0. (We do not assume any particular ordering to the modal frequencies.)

When data is collected from such a system by sampling {d(t)} at M distinct points in time t_1, t_2, ..., t_M, and the data is assembled into an N x M matrix [D] as described in (1), it follows from (3) that [D] admits a factorization of the form

$$[D] = [\Psi][\Gamma][S], \qquad (4)$$

where

$$[\Psi] = [\{\psi_1\}, \{\psi_2\}, \ldots, \{\psi_K\}]$$
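The following minimal sketch (our own; all system values such as frequencies, amplitudes, and phases are made up for illustration) simulates the free response (3), forms the factorization (4), and checks that the POMs align with the true mode shapes:

```python
import numpy as np

# Simulate an undamped free response with K = 3 active modes and
# orthonormal mode shapes, then check that the POMs (left singular
# vectors of [D]) align with the true mode shapes.

rng = np.random.default_rng(0)
N, K, M = 6, 3, 500
Ts = 0.05                                   # sampling interval (illustrative)
t = Ts * np.arange(M)                       # uniform sample times

Psi, _ = np.linalg.qr(rng.standard_normal((N, K)))  # orthonormal mode shapes
omega = np.array([1.0, 3.0, 7.0])           # modal frequencies
A = np.array([3.0, 2.0, 1.0])               # amplitudes, sorted large to small
theta = rng.uniform(0, 2 * np.pi, K)        # random phases

S = np.sin(np.outer(omega, t) + theta[:, None])     # K x M sinusoid samples
D = Psi @ np.diag(A) @ S                    # the factorization (4)

U, _, _ = np.linalg.svd(D, full_matrices=False)
# Inner products between true mode shapes and POMs; values near 1
# indicate accurate recovery (sign is a trivial ambiguity).
print(np.abs(np.sum(Psi * U[:, :K], axis=0)))
```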

denotes an N x K matrix formed from the mode shape vectors corresponding to the nonzero amplitudes,

$$[\Gamma] = \begin{bmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_K \end{bmatrix}$$

denotes a K x K diagonal matrix containing the nonzero amplitudes along the diagonal, and

$$[S] = \begin{bmatrix} \sin(\omega_1 t_1 + \theta_1) & \sin(\omega_1 t_2 + \theta_1) & \cdots & \sin(\omega_1 t_M + \theta_1) \\ \sin(\omega_2 t_1 + \theta_2) & \sin(\omega_2 t_2 + \theta_2) & \cdots & \sin(\omega_2 t_M + \theta_2) \\ \vdots & \vdots & & \vdots \\ \sin(\omega_K t_1 + \theta_K) & \sin(\omega_K t_2 + \theta_K) & \cdots & \sin(\omega_K t_M + \theta_K) \end{bmatrix} \qquad (5)$$

denotes a K x M matrix containing samples of K sinusoids (with frequencies ω_1, ω_2, ..., ω_K and phases θ_1, θ_2, ..., θ_K) at the M time points t_1, t_2, ..., t_M.

Now, recall the POD technique described in Section 1. This technique involves taking the SVD of the data matrix [D]. By comparing (4) with (2), one can appreciate the reason why the POMs may approximately coincide with a system's true mode shapes. In particular, suppose the system's mass matrix is proportional to the identity (or that the data has been suitably renormalized as discussed in Ref. 3). In this case, the true mode shape vectors {ψ_1}, {ψ_2}, ..., {ψ_K} are orthonormal. So, the factorization appearing in (4) consists of [Ψ], which has orthonormal columns, times [Γ], which is diagonal, times [S]. If [S] happened to have orthogonal rows, the factorization appearing in (4) would coincide with the SVD factorization of the data matrix [D],* and so the POMs appearing in the columns of [U] would exactly match the true mode shapes in [Ψ] up to trivial ambiguities (such as multiplication by -1). More generally, if the rows of [S] were nearly, but not perfectly, orthogonal, then by taking an SVD of [D] one would obtain left singular vectors [U] that nearly, but not perfectly, match the true mode shapes in [Ψ].

Our analysis in Refs. 3, 4 formalizes this. Specifically, for n = 1, 2, ..., K, let q_n denote the ℓ_2 norm of the n-th row of [S], and define the K x K diagonal matrix

$$[Q] := \begin{bmatrix} q_1 & & & \\ & q_2 & & \\ & & \ddots & \\ & & & q_K \end{bmatrix}.$$

This allows us to write [S] = [Q][P], where for each n, the n-th row of [P] equals the normalized n-th row of [S]. Now, consider the matrix [P][P]^*, which contains inner products between all pairs of rows in [P]. Since [P] has normalized rows, the diagonal entries of [P][P]^* are all equal to 1. Let us write [P][P]^* = [I] + [Δ], where [I] is the K x K identity matrix and [Δ] is a K x K matrix containing the off-diagonal entries of [P][P]^*. Much of our discussion in this paper focuses on the quantity

$$\|[\Delta]\|_2 = \|[P][P]^* - [I]\|_2,$$

which measures how close the rows of [P] (and thus the rows of [S]) are to being orthogonal.

* Technically, in a valid SVD factorization the rows of the rightmost matrix must have unit norm. If the rows of [S] were orthogonal but not normalized, any scaling factors could be absorbed into the central diagonal matrix without affecting the left singular vectors.
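Numerically, this quantity is straightforward to evaluate; a minimal sketch (our own, assuming [S] is given as a NumPy array):

```python
import numpy as np

# Given the K x M sinusoid sample matrix [S] from (5), form [P] by
# normalizing the rows of [S] and compute the spectral norm
# ||Delta||_2 = ||P P^* - I||_2, which measures how far the rows of [S]
# are from orthogonal.

def delta_norm(S):
    P = S / np.linalg.norm(S, axis=1, keepdims=True)  # normalize each row
    Delta = P @ P.T.conj() - np.eye(S.shape[0])       # off-diagonal inner products
    return np.linalg.norm(Delta, 2)                   # spectral norm

# Tiny demo with two undamped sinusoids (illustrative values).
print(delta_norm(np.sin(np.outer([1.0, 3.0], 0.1 * np.arange(200)))))
```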

In Refs. 3, 4 we prescribe conditions on the sample times t_1, t_2, ..., t_M which ensure that ‖[Δ]‖_2 is small, and consequently that the left singular vectors [U] nearly match the true mode shapes [Ψ].

Let us examine the case where the sample times are chosen uniformly. That is, for some sampling interval T_s > 0, suppose

$$t_1 = 0,\quad t_2 = T_s,\quad \ldots,\quad t_M = (M-1)T_s. \qquad (6)$$

Let δ_min and δ_max denote lower and upper bounds on the minimum and maximum separation of the modal frequencies. That is, assume that δ_min ≤ min |ω_l − ω_n|, where the minimum is taken over all l, n ∈ {1, ..., K} with l ≠ n, and that δ_max ≥ max |ω_l − ω_n|, where the maximum is also taken over all l, n ∈ {1, ..., K}. Our analysis in Ref. 3 shows that if the following conditions are satisfied:

- the sampling interval T_s is proportional to 1/δ_max, essentially requiring that the samples be collected at the Nyquist rate,
- the number of samples M at each sensor exceeds the number of active modes K,
- the total sampling duration t_M = (M − 1)T_s is proportional to log K / (ε δ_min), where ε ∈ (0, 1) relates to the quality of the final estimate below, and
- one uses complex data samples to form the data matrix (see Ref. 3 for full details),

then for each mode shape n = 1, 2, ..., K, the n-th true mode shape {ψ_n} and the n-th POM {ψ̃_n} will obey

$$\langle \{\psi_n\}, \{\tilde{\psi}_n\} \rangle^2 \;\geq\; 1 - \frac{\epsilon^2(1+\epsilon)}{(1-\epsilon)\,(\mathrm{sep}_n(\epsilon))^2}. \qquad (7)$$

Since both {ψ_n} and {ψ̃_n} have unit norm, when ⟨{ψ_n}, {ψ̃_n}⟩^2 is close to 1, {ψ̃_n} provides a highly accurate estimate of {ψ_n}. The term sep_n(ε) appearing in (7) measures the relative separation between the n-th amplitude A_n and all other amplitudes. When A_n is well separated from all other amplitudes, the mode shape estimates will be more accurate. (This behavior is examined more closely in Ref. 4.) As mentioned above, ε controls the accuracy of the mode shape estimates: one can improve the quality of the estimates by setting ε as small as desired, as long as the total sampling duration t_M = (M − 1)T_s increases proportionally.

In short, this result holds because, once the sampling interval T_s is chosen according to the Nyquist rate (that is, on the order of 1/δ_max), increasing the number of samples M per sensor causes the rows of [S] to become more orthogonal, leading to a decrease in the quantity ‖[Δ]‖_2. As a simple demonstration of this fact, we consider a collection of modal frequencies ω_1 = 1, ω_2 = 3, ω_3 = 7, ω_4 = 11, randomly choose phases θ_1, θ_2, θ_3, θ_4 according to a uniform distribution on the interval [0, 2π], set T_s = π/(2δ_max) = 0.05π, and construct the [S] matrix according to (5). For various values of M, we compute the quantity ‖[Δ]‖_2 and plot this quantity as a function of M in Figure 1. We see that, in general, ‖[Δ]‖_2 continues to decrease as M grows larger.

Figure 1: In the undamped case, with a fixed sampling interval T_s, increasing the number of samples M will generally reduce the quantity ‖[Δ]‖_2, and consequently the POMs will provide better approximations to the true mode shapes. In this plot of ‖[Δ]‖_2 vs. M, the [S] matrix was populated via (5) with ω_1 = 1, ω_2 = 3, ω_3 = 7, ω_4 = 11, T_s = 0.05π, and phases θ_1, θ_2, θ_3, θ_4 chosen uniformly at random from the interval [0, 2π].
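A sketch of this demonstration (our own re-implementation, not the authors' code) reproduces the trend in Figure 1:

```python
import numpy as np

# With the modal frequencies fixed and T_s chosen at the Nyquist rate,
# ||Delta||_2 generally decreases as the number of samples M grows.

rng = np.random.default_rng(0)
omega = np.array([1.0, 3.0, 7.0, 11.0])      # modal frequencies
theta = rng.uniform(0, 2 * np.pi, omega.size)
delta_max = omega.max() - omega.min()        # = 10
Ts = np.pi / (2 * delta_max)                 # = 0.05 * pi

for M in [10, 50, 200, 1000]:
    t = Ts * np.arange(M)
    S = np.sin(np.outer(omega, t) + theta[:, None])
    P = S / np.linalg.norm(S, axis=1, keepdims=True)
    Delta = P @ P.T - np.eye(omega.size)
    print(M, np.linalg.norm(Delta, 2))       # decreases as M grows
```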

3 MATRIX FACTORIZATIONS IN THE DAMPED CASE

The general solution to the equations of motion for a proportionally damped N-degree-of-freedom system in free vibration is given by (Ref. 5)

$$\{d(t)\} = \sum_{n=1}^{N} \{\psi_n\} A_n e^{-\xi_n \omega_n t} \sin(\omega_{d,n} t + \theta_n), \qquad (8)$$

where

- ω_1, ω_2, ..., ω_N are, again, the undamped natural modal frequencies of the system (determined by the system's mass and stiffness matrices),
- {ψ_1}, {ψ_2}, ..., {ψ_N} are, again, the system's true, normal mode shapes (also determined by the system's mass and stiffness matrices),
- ξ_1, ξ_2, ..., ξ_N are the modal damping coefficients (determined by the proportional damping terms),
- ω_{d,1}, ω_{d,2}, ..., ω_{d,N} are the damped natural frequencies of the system, given by ω_{d,n} = ω_n √(1 − ξ_n²), and
- A_1, ..., A_N and θ_1, ..., θ_N are, again, amplitudes and phases of the sinusoidal loadings (determined by the system's initial conditions).

As in Section 2, we let K denote the number of nonzero amplitudes, and without loss of generality, we also assume that A_1 ≥ ... ≥ A_K > A_{K+1} = ... = A_N = 0. Finally, we again suppose the system's mass matrix is proportional to the identity (or that the data has been suitably renormalized as discussed in Ref. 3), so that the true mode shape vectors {ψ_1}, {ψ_2}, ..., {ψ_K} are orthonormal.

Now, when data is collected from such a system by sampling {d(t)} at M distinct points in time t_1, t_2, ..., t_M, and the data is assembled into an N x M matrix [D] as described in (1), it follows from (8) that [D] admits a factorization of the form (4), where [Ψ] and [Γ] are as described in Section 2, but [S] now takes the form

$$[S] = \begin{bmatrix} e^{-\xi_1\omega_1 t_1}\sin(\omega_{d,1} t_1 + \theta_1) & \cdots & e^{-\xi_1\omega_1 t_M}\sin(\omega_{d,1} t_M + \theta_1) \\ e^{-\xi_2\omega_2 t_1}\sin(\omega_{d,2} t_1 + \theta_2) & \cdots & e^{-\xi_2\omega_2 t_M}\sin(\omega_{d,2} t_M + \theta_2) \\ \vdots & & \vdots \\ e^{-\xi_K\omega_K t_1}\sin(\omega_{d,K} t_1 + \theta_K) & \cdots & e^{-\xi_K\omega_K t_M}\sin(\omega_{d,K} t_M + \theta_K) \end{bmatrix}.$$

Defining [Q] as a K x K diagonal matrix containing the norms of the rows of [S], writing [S] = [Q][P], and defining [Δ] = [P][P]^* − [I] as in Section 2, the success of the POD for estimating the mode shapes of this damped structure now rests on the question of whether ‖[Δ]‖_2 is small in this case. In the sections that follow, we discuss conditions on the damping coefficients ξ_n, modal frequencies ω_n, sampling interval T_s, and number of samples M under which uniform sampling produces a matrix [S] such that ‖[Δ]‖_2 is small, and we demonstrate that in such cases the POMs will provide an accurate approximation of a system's true mode shapes.

3.1 Experimental Setup

As a running example throughout our discussion, we make use of a simple boxcar system with N = 4 degrees of freedom, as illustrated in Figure 2. For simplicity we let all masses equal 1, i.e., m_1 = m_2 = m_3 = m_4 = 1. This simple system has a stiffness matrix of the form

$$[K] = \begin{bmatrix} k_1 + k_2 & -k_2 & 0 & 0 \\ -k_2 & k_2 + k_3 & -k_3 & 0 \\ 0 & -k_3 & k_3 + k_4 & -k_4 \\ 0 & 0 & -k_4 & k_4 + k_5 \end{bmatrix}. \qquad (9)$$

Figure 2: A simple N-degree-of-freedom boxcar system with N = 4.
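The modal parameters of this system follow directly from the eigendecomposition of [K]. The sketch below (our own, with illustrative stiffness and damping values) applies the proportional-damping relations stated in the next paragraph:

```python
import numpy as np

# Boxcar setup of Figure 2: with [M] = I and [C] = alpha*[M] + beta*[K],
# the mode shapes are the eigenvectors of [K]; each eigenvalue kbar_n
# gives omega_n = sqrt(kbar_n), xi_n = (alpha + beta*kbar_n)/(2*sqrt(kbar_n)),
# and omega_dn = omega_n*sqrt(1 - xi_n**2).

def boxcar_modal_params(k, alpha, beta):
    k1, k2, k3, k4, k5 = k
    K = np.array([[k1 + k2, -k2,     0.0,     0.0],
                  [-k2,     k2 + k3, -k3,     0.0],
                  [0.0,     -k3,     k3 + k4, -k4],
                  [0.0,     0.0,     -k4,     k4 + k5]])
    kbar, Psi = np.linalg.eigh(K)            # modal stiffnesses, mode shapes
    omega = np.sqrt(kbar)                    # undamped natural frequencies
    xi = (alpha + beta * kbar) / (2 * np.sqrt(kbar))  # damping coefficients
    omega_d = omega * np.sqrt(1 - xi ** 2)   # damped natural frequencies
    return Psi, omega, xi, omega_d

Psi, omega, xi, omega_d = boxcar_modal_params(
    k=[1.0, 2.0, 3.0, 4.0, 5.0], alpha=0.01, beta=0.005)
print(omega, xi)
```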

With the mass matrix [M] equaling the identity, and assuming that we only consider proportional damping of the form [C] = α[M] + β[K], where α, β ∈ ℝ, the mode shapes [Ψ] of this system are the eigenvectors of the stiffness matrix [K] (and are orthonormal), and the modal stiffness values k̄_n are equal to the eigenvalues of the stiffness matrix. The undamped natural modal frequencies are then

$$\omega_n = \sqrt{\frac{\bar{k}_n}{m_n}} = \sqrt{\bar{k}_n}.$$

The modal damping coefficients are

$$\xi_n = \frac{\alpha m_n + \beta \bar{k}_n}{2\sqrt{m_n \bar{k}_n}} = \frac{\alpha + \beta \bar{k}_n}{2\sqrt{\bar{k}_n}},$$

from which we can obtain the damped natural frequencies ω_{d,n} = ω_n √(1 − ξ_n²). In the following experiments, we vary [K], α, and β to generate different systems, and we simulate the system behavior using (8). In all experiments we sample at T_s = π/(2δ_max), where δ_max = max |ω_l − ω_n|.

3.2 The Effect of Damping on [Δ]

Let us start by considering the off-diagonal entries of [Δ], and suppose that we sample at uniform time intervals t_m = mT_s, where m = 0, 1, ..., M − 1 and T_s is the sampling interval. Then, the off-diagonal entries of [Δ] can be written as

$$[\Delta]_{j,l} = \frac{\sum_{m=0}^{M-1} e^{-(\xi_j\omega_j + \xi_l\omega_l) m T_s} \sin(\omega_{d,j} m T_s + \theta_j)\sin(\omega_{d,l} m T_s + \theta_l)}{\sqrt{\sum_{m=0}^{M-1} e^{-2\xi_j\omega_j m T_s}\sin^2(\omega_{d,j} m T_s + \theta_j)}\;\sqrt{\sum_{m=0}^{M-1} e^{-2\xi_l\omega_l m T_s}\sin^2(\omega_{d,l} m T_s + \theta_l)}}. \qquad (10)$$

Suppose we let c = MT_s be a fixed constant, and we decrease the sampling interval to zero, i.e., T_s → 0. This, in turn, requires M to increase such that M → ∞. Essentially, this means we are keeping the total sampling time span constant and sampling at infinitely fine sampling intervals. By doing so, the off-diagonal entry will converge to the normalized analog inner product between the damped sinusoids on the interval [0, c]:

$$[\Delta]_{j,l} \;\to\; \frac{\int_0^c e^{-(\xi_j\omega_j + \xi_l\omega_l)t}\sin(\omega_{d,j}t + \theta_j)\sin(\omega_{d,l}t + \theta_l)\,dt}{\sqrt{\int_0^c e^{-2\xi_j\omega_j t}\sin^2(\omega_{d,j}t + \theta_j)\,dt}\;\sqrt{\int_0^c e^{-2\xi_l\omega_l t}\sin^2(\omega_{d,l}t + \theta_l)\,dt}} \quad \text{as } T_s \to 0 \text{ with } c = MT_s \text{ fixed}.$$

An illustration of a pair of damped sinusoids and the corresponding [Δ]_{j,l} is shown in Figure 3(a). The top panel shows two damped sinusoids overlaid on top of each other. These sinusoids have been sampled at T_s = 0.05, which is faster than what the Nyquist rate prescribes. The blue sinusoid has a damping coefficient of 0.03 and the black sinusoid has a damping coefficient of 0.05. Thus, the black sinusoid decays to zero faster than the blue sinusoid.

In order for the normalized inner product of the two sinusoids to be small, it is important that we sample fast enough, and long enough. To sample fast enough, we set the sampling rate to be higher than the Nyquist rate. In our previous undamped analysis we have seen that the total sampling time span should be inversely proportional to δ_min = min_{j≠l} |ω_j − ω_l|. The presence of damping, in effect, shortens the maximum possible sampling time span (which is infinite in the undamped case), and can prevent us from observing the signal for the requisite amount of time. Once the sinusoid has fully decayed, adding more measurements will not help improve the inner product, which will saturate beyond this point. Sampling at a faster rate will also not overcome the saturation problem, as this does not delay when the sinusoids decay to zero.

The bottom panel of Figure 3(a) shows in green the normalized inner product between the two sinusoids from the top panel as a function of the number of measurements, M. Notice how this plot starts to decrease but then saturates, roughly at the same point where the black sinusoid has decayed close to zero. We also plot in red the normalized inner product of the same two sinusoids but with no damping. As we can see, the two curves follow each other very closely at low values of M. This is when the damped sinusoids still have not decayed significantly. Then the two lines separate; the red line continues to decay while the green one saturates. In summary, whenever we are dealing with damping, the off-diagonal entries of [Δ] (which correspond to normalized inner products between damped sinusoids) will eventually saturate, and no matter how many additional samples we take or how fast we sample, we cannot decrease [Δ]_{j,l} beyond this point.
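This saturation effect is easy to reproduce numerically; a minimal sketch (our own, with illustrative frequencies, phases, and decay rates, not the exact values of Figure 3):

```python
import numpy as np

# The normalized inner product (10) between two damped sinusoids
# saturates as M grows, while the undamped version keeps shrinking.

def normalized_inner_product(M, Ts, w_dj, w_dl, aj, al, tj=0.0, tl=1.0):
    """Off-diagonal entry (10); aj, al are the decay rates xi*omega."""
    t = Ts * np.arange(M)
    sj = np.exp(-aj * t) * np.sin(w_dj * t + tj)
    sl = np.exp(-al * t) * np.sin(w_dl * t + tl)
    return np.dot(sj, sl) / (np.linalg.norm(sj) * np.linalg.norm(sl))

for M in [100, 1000, 5000, 20000]:
    damped = normalized_inner_product(M, 0.05, 4.0, 22.4,
                                      0.03 * 4.0, 0.05 * 22.4)
    undamped = normalized_inner_product(M, 0.05, 4.0, 22.4, 0.0, 0.0)
    print(M, abs(damped), abs(undamped))  # damped saturates; undamped shrinks
```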

Figure 3: (a) The top panel shows two damped sinusoids, and the bottom panel shows their normalized inner product against the number of measurements, M. (b) A plot of ‖[Δ]‖_2 versus the number of measurements, M. The six plots vary in the maximum damping coefficient but have the same set of natural frequencies, with δ_max = 38 and δ_min = 27.

The saturation in [Δ]_{j,l} means that there is a limit to how small ‖[Δ]‖_2 can become as M grows. A crude lower bound on ‖[Δ]‖_2 can be written in terms of the maximum off-diagonal entry:

$$\|[\Delta]\|_2 \;\geq\; \max_{j \neq l} |[\Delta]_{j,l}|.$$

It is reasonable to argue that the largest off-diagonal entry will probably involve the inner product with one of the most heavily damped sinusoids, and that the greater the amount of damping, the greater ‖[Δ]‖_2 will be. To illustrate the effect of the maximum damping coefficient on ‖[Δ]‖_2, Figure 3(b) plots ‖[Δ]‖_2 versus the number of samples M. Each line corresponds to a different set of damping coefficients {ξ_1, ..., ξ_4}. As a representative value of the damping coefficients for each line, we indicate the maximum damping coefficient in the legend, i.e., ξ_max = max_i ξ_i. There are two points to notice here. The first is that for all plots with damping there is a point in time where ‖[Δ]‖_2 saturates, i.e., ‖[Δ]‖_2 stops decreasing. This is unlike the plot without damping, which exhibits a general trend of decrease proportional to 1/M. The second is that this saturation point increases with damping. With more damping, the time series decays faster to zero and thus saturates at a higher ‖[Δ]‖_2. Both of these points are as expected from the above discussion. We also note that even though the set of damping coefficients may mostly consist of small values, the quantity ‖[Δ]‖_2 will be dominated by the largest damping coefficient.

To see how the saturation value of ‖[Δ]‖_2 increases with respect to the level of damping, we plot in Figure 4(a) the saturation value of ‖[Δ]‖_2 versus the maximum damping coefficient for each experiment. From this figure we see that the two quantities are almost linearly related, and for this particular case the rate of increase is roughly 8. According to the overlaid linear regression plot, if we want to ensure ‖[Δ]‖_2 < 0.3 we can roughly handle a maximum damping coefficient of up to 0.05. Note that the aim of the linear regression plot was to find the approximate slope of the black line where it exhibits linear behavior, e.g., when ξ_max ≥ 0.02. The linear relationship, or the slope of 8, may only be relevant to this specific example, but we do expect a similar positive correlation between ‖[Δ]‖_2 and ξ_max in more general cases.
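A sketch of the saturation-versus-damping sweep behind Figures 3(b) and 4(a) follows (our own; the frequencies, phases, and per-mode damping allocation are made-up illustrative values):

```python
import numpy as np

# For each damping level, compute ||Delta||_2 with M large enough that
# the sinusoids have fully decayed, i.e., its saturation value.

omega = np.array([3.0, 17.0, 25.0, 41.0])    # undamped frequencies (made up)
theta = np.array([0.5, 2.1, 4.0, 1.3])
Ts = np.pi / (2 * (omega.max() - omega.min()))
M = 200000                                   # long enough to reach saturation

t = Ts * np.arange(M)
for xi_max in [0.0, 0.01, 0.02, 0.04, 0.06, 0.08]:
    xi = xi_max * np.array([0.25, 0.5, 0.75, 1.0])   # xi_4 = xi_max
    omega_d = omega * np.sqrt(1 - xi ** 2)
    S = (np.exp(-np.outer(xi * omega, t))
         * np.sin(np.outer(omega_d, t) + theta[:, None]))
    P = S / np.linalg.norm(S, axis=1, keepdims=True)
    print(xi_max, np.linalg.norm(P @ P.T - np.eye(4), 2))
```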

Figure 4: (a) The saturation value of ‖[Δ]‖_2 with respect to the maximum damping coefficient ξ_max. Simple regression shows that the trend is linear with a slope of 8. (b) POD performance as measured by the minimum squared inner product, IP_min, between the true and estimated mode shapes. All mode shapes can be estimated accurately when the damping is sufficiently low. As the amount of damping increases, the POD performance degrades roughly linearly as a function of ξ_max.

3.3 POD with Damping

We have seen that damping in the sinusoids causes the off-diagonal entries [Δ]_{j,l} to saturate when M grows large, while in the case of undamped sinusoids this quantity continues to decrease with more measurements. The saturation of [Δ]_{j,l}, however, does not mean that the POD will fail for all damped cases. Consider, for example, two sinusoids with very light damping. Before the sinusoids decay to zero, it may be possible to collect a sufficient number of measurements that ‖[Δ]‖_2 is sufficiently small. Consequently, as we have discussed in Section 2, when ‖[Δ]‖_2 is small we can expect the POD to accurately recover the mode shapes. Let us look at Figure 3(b) again. Although the presence of damping results in the saturation of ‖[Δ]‖_2, we can see that when the damping is relatively light, e.g., ξ_max = 0.01, the saturation point is fairly small as well, e.g., ‖[Δ]‖_2 ≈ 0.04.

To estimate how small ‖[Δ]‖_2 needs to be for a satisfactory mode shape estimate, we plot in Figure 4(b) an experimental relationship between the maximum damping coefficient, ξ_max, and the least accurate mode shape estimate as measured by the minimum squared inner product between the estimated and true mode shapes:

$$\mathrm{IP}_{\min} := \min_n \langle \{\psi_n\}, \{\tilde{\psi}_n\} \rangle^2.$$

A linear fit to this curve is plotted as IP_min = −2.2 ξ_max + 1.06. As with the previous regression plot, this linear fit also focused on estimating the linear slope for cases where ξ_max ≥ 0.04. When damping is fairly light, e.g., when ξ_max ≤ 0.04, the empirical curve is nearly flat and IP_min is very close to 1. Thus, it appears that we can tolerate a certain amount of damping before the performance diminishes linearly with ξ_max. From our plots we can obtain the following empirical approximations, which can be applied for any ξ_max:

$$\|[\Delta]\|_2 \approx 8\,\xi_{\max}, \qquad \mathrm{IP}_{\min} \approx -2.2\,\xi_{\max} + 1.$$

Combining these, we obtain the approximate relationship IP_min ≈ 1 − 0.27 ‖[Δ]‖_2. Based on this rule of thumb, to guarantee POD performance such that IP_min ≥ 0.9, we would require roughly that ‖[Δ]‖_2 < 0.37. Once again, this tells us that we can expect the POD to remain effective when the system is under light damping.
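IP_min is straightforward to compute once the POMs are paired with the true mode shapes; a minimal sketch (our own, assuming the columns of both matrices follow the amplitude ordering of Section 2):

```python
import numpy as np

# Psi holds the true mode shapes and U the POMs, with columns ordered so
# that the n-th POM estimates the n-th mode shape. Squaring the inner
# product removes the trivial sign ambiguity in the singular vectors.

def ip_min(Psi, U):
    K = Psi.shape[1]
    inner = np.sum(Psi * U[:, :K], axis=0)   # <{psi_n}, {psi_tilde_n}>
    return float(np.min(inner ** 2))
```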

So far we have established that when damping is light we can expect the POD to return accurate estimates of the true mode shapes. To be more precise, the amount of damping that the POD can tolerate without breaking down also depends on the quantity δ_min. Recall that δ_min reflects the minimum separation between any pair of undamped natural modal frequencies among ω_1, ω_2, ..., ω_K. When δ_min is small, our results described in Section 2 for the undamped case show that the total sampling duration t_M = (M − 1)T_s must increase.

To support our discussion of the damped case, let us focus on the damping term in equation (10), which is e^{−(ω_j ξ_j + ω_l ξ_l)t}. For the sake of argument, let us assume that ω_j ξ_j = ω_l ξ_l; we do not assume ω_j and ω_l are equal, but they may possibly be close to one another. Suppose we require the term e^{−(ω_j ξ_j + ω_l ξ_l)t} to be at least c at time t = t_M. That is, we require e^{−2ω_j ξ_j t_M} ≥ c, and here we take c to be some number between 0 and 1. Rearranging, the requirement can be expressed as

$$\omega_j \xi_j t_M \leq \frac{-\ln(c)}{2}.$$

If we also wish to ensure that t_M is proportional to 1/|ω_j − ω_l|, this places an upper bound on how much damping we can tolerate. In particular, ξ_j cannot be larger than a constant times

$$\frac{-\ln(c)}{2} \cdot \frac{|\omega_j - \omega_l|}{\omega_j}.$$

Consequently, when the spacing between the modal frequencies reduces (and, in particular, when δ_min decreases), we must sample for a longer time span before the signals decay significantly, and so we cannot tolerate as much damping. When the spacing between modal frequencies increases (i.e., when δ_min increases), we need only observe the signals over a shorter time span before they decay significantly, and so we can tolerate a higher level of damping.

To put this intuition to the test, we carry out the following experiment. We first generate stiffness values k_1, ..., k_5 uniformly at random in the interval [0, 50]. For each randomly generated system, we increase the damping of the system with the proportional damping model, and record at what point in damping the performance of the POD breaks down. We define the breakdown point to be when the minimum of the four squared inner product values (between the true and estimated mode shapes) dips below 0.9, i.e., when IP_min falls below 0.9. We plot in Figure 5(a) the average maximum damping tolerance versus the minimum separation. Though the x-axis is labeled as δ_min, since we are generating the boxcar system randomly, every plotted data point actually reflects the performance over systems in the range [δ_min − 0.05, δ_min + 0.05]. For each system configuration we record the maximum amount of damping for which IP_min remains above 0.9. In doing so, we choose M large enough so that the sinusoids can decay fully for maximum performance. Then, we plot the average of the maximum damping over 30 trials for each δ_min range. Overall, this plot is in agreement with our intuition. When δ_min is small we can only handle light damping, and when δ_min is large we are able to tolerate stronger damping.

In Figure 5(b) we vary the number of samples M and plot the average time t_M it takes for IP_min to reach above 0.9 as a function of the maximum damping coefficient. Results are averaged over 20 randomly generated systems. For each system, we generate the stiffness values uniformly at random in the interval [0, 30]. Nine lines are overlaid in this plot, each corresponding to a specific δ_min. (Again, every line actually reflects the performance over systems in the range [δ_min − 0.05, δ_min + 0.05].) Lines corresponding to the smallest values of δ_min are at the top of this plot. Therefore, consistent with our discussion, we must observe the system for a longer time span when δ_min is small than when δ_min is large. We also see that the slope of the lines is highest for the systems with the smaller values of δ_min. Thus, for a system with a large δ_min, we do not have to increase the sampling time span as dramatically as for a system with a small δ_min as the damping coefficient increases. We also note that as δ_min decreases, the lines get shorter; for example, the topmost line is the shortest of all, and it ends at a damping level of roughly 0.3. This is because the lines stop when IP_min cannot reach above 0.9. For certain combinations of δ_min and the damping coefficient, the minimum squared inner product saturates prior to reaching 0.9. The general trend of lines becoming longer towards the bottom means that we are able to tolerate more damping as we increase δ_min.

We also see from Figure 5(b) that when ξ_max is small the lines are fairly flat. This means that we do not have to sample much longer when ξ_max is small than when ξ_max = 0. In such cases we may not have to sample all the way until the sinusoids decay to zero; rather, we can stop collecting samples when we reach a similar time span as would be prescribed for ξ_max = 0. We illustrate this point with an example shown in Figure 6. We consider a system with fixed δ_min = 0.8 and vary ξ_max from 0 to 0.1 to 0.23. For each value of ξ_max, we continue to collect samples until we reach IP_min ≥ 0.95. In the figure we plot the data samples as a function of time and mark the end of each line with a circle. As expected, with light damping we see that the time series follow closely that of ξ_max = 0, and as such the POD only requires a few additional samples to produce accurate estimates of the mode shapes.
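A sketch of one trial of the breakdown experiment described above follows. The helper function, the damping schedule, and the amplitude choices are our own illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

# For one randomly generated boxcar system, increase proportional damping
# via beta (with alpha = 0) until IP_min dips below 0.9, and report the
# last tolerated maximum damping coefficient.

rng = np.random.default_rng(0)

def trial_ip_min(kbar, Psi, beta, M=50000):
    """Simulate (8) with amplitudes 4,3,2,1 and random phases; return IP_min."""
    omega = np.sqrt(kbar)
    xi = beta * kbar / (2 * np.sqrt(kbar))           # xi_n for [C] = beta*[K]
    omega_d = omega * np.sqrt(1 - xi ** 2)
    Ts = np.pi / (2 * (omega.max() - omega.min()))
    t = Ts * np.arange(M)
    theta = rng.uniform(0, 2 * np.pi, omega.size)
    S = (np.exp(-np.outer(xi * omega, t))
         * np.sin(np.outer(omega_d, t) + theta[:, None]))
    A = np.array([4.0, 3.0, 2.0, 1.0])               # well-separated amplitudes
    U, _, _ = np.linalg.svd(Psi @ np.diag(A) @ S, full_matrices=False)
    inner = np.sum(Psi * U[:, :4], axis=0)
    return np.min(inner ** 2)

k = rng.uniform(0, 50, 5)                            # random spring stiffnesses
K = np.diag(k[:4] + k[1:]) - np.diag(k[1:4], 1) - np.diag(k[1:4], -1)
kbar, Psi = np.linalg.eigh(K)                        # modal stiffnesses, shapes

tolerated = 0.0
for beta in np.linspace(0.0, 0.05, 26):              # increase damping gradually
    if trial_ip_min(kbar, Psi, beta) < 0.9:
        break                                        # POD has broken down
    tolerated = (beta * kbar / (2 * np.sqrt(kbar))).max()
print("maximum tolerated xi_max:", tolerated)
```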

Figure 5: (a) The amount of damping we can tolerate for a given minimum frequency separation. For each system we find the maximum amount of damping for which IP_min remains above 0.9. We plot the average over 30 repetitions for each δ_min. (b) The time it takes for IP_min to reach above 0.9 as a function of the maximum damping coefficient. Each line corresponds to a different minimum separation δ_min.

Figure 6: A plot of data samples d_1(t), d_2(t), d_3(t), d_4(t) against time. We overlay three plots, each corresponding to a different maximum damping coefficient. The red line corresponds to ξ_max = 0, the black line corresponds to ξ_max = 0.1, and the blue line corresponds to ξ_max = 0.23. All of these plots have the same undamped natural modal frequencies, with δ_min = 0.8. The end of each line is marked with a circle, and this point represents the time when the POD resulted in IP_min ≥ 0.95. This example illustrates the point that when damping is light we do not have to wait until the sinusoids decay to zero. With light damping, we may sample roughly as long as we would need to sample in the undamped scenario.

4 CONCLUSION

In this paper, we have examined the performance of the POD in the presence of proportional damping. Throughout the paper, we have assumed that the system's mass matrix is proportional to the identity (or that the data has been suitably renormalized as discussed in Ref. 3). Our previous analysis of undamped systems revealed that the POD requires ‖[Δ]‖_2 to be small in order to provide accurate estimates of the mode shapes. In the damped case, we have seen that the decay in the sinusoids causes ‖[Δ]‖_2 to saturate and that this saturation point increases with damping. This is essentially because taking more measurements after the sinusoids have decayed to zero does not help to improve the POD performance. Sampling at a higher rate will also not help to overcome this problem, as it will not delay the decay. The performance of the POD is largely dominated by the

maximum damping coefficient. When at least one damping coefficient is large, this will almost surely express itself in diminished performance of the POD technique. However, as long as the maximum damping coefficient is small, i.e., as long as the system is lightly damped, the POD will be able to accurately recover the mode shapes. The amount of damping the POD can tolerate depends mainly on the minimum separation of the modal frequencies; when the minimum separation is large we can tolerate higher levels of damping, but when the minimum separation is small we can only tolerate light damping. When damping is light, we can sample for roughly the same time span as is required for an undamped system.

REFERENCES

[1] B. F. Feeny and R. Kappagantu, "On the physical interpretation of proper orthogonal modes in vibrations," Journal of Sound and Vibration, vol. 211, no. 4, pp. 607-616, 1998.
[2] G. Kerschen and J.-C. Golinval, "Physical interpretation of the proper orthogonal modes using the singular value decomposition," Journal of Sound and Vibration, vol. 249, no. 5, 2002.
[3] J. Y. Park, M. Wakin, and A. Gilbert, "Modal analysis with compressive measurements," IEEE Transactions on Signal Processing, vol. 62, no. 7, April 2014.
[4] J. Y. Park, A. Gilbert, and M. Wakin, "Compressive measurement bounds for wireless sensor networks in structural health monitoring," World Conference on Structural Control and Monitoring (WCSCM), July 2014.
[5] C. R. Farrar and K. Worden, Structural Health Monitoring: A Machine Learning Perspective. John Wiley & Sons, 2012.


More information

Singular Value Decomposition

Singular Value Decomposition Singular Value Decomposition (Com S 477/577 Notes Yan-Bin Jia Sep, 7 Introduction Now comes a highlight of linear algebra. Any real m n matrix can be factored as A = UΣV T where U is an m m orthogonal

More information

Reduction in number of dofs

Reduction in number of dofs Reduction in number of dofs Reduction in the number of dof to represent a structure reduces the size of matrices and, hence, computational cost. Because a subset of the original dof represent the whole

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

Lecture 2: Linear Algebra

Lecture 2: Linear Algebra Lecture 2: Linear Algebra Rajat Mittal IIT Kanpur We will start with the basics of linear algebra that will be needed throughout this course That means, we will learn about vector spaces, linear independence,

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

More information

Principal Input and Output Directions and Hankel Singular Values

Principal Input and Output Directions and Hankel Singular Values Principal Input and Output Directions and Hankel Singular Values CEE 629 System Identification Duke University, Fall 2017 1 Continuous-time systems in the frequency domain In the frequency domain, the

More information

Outline. Structural Matrices. Giacomo Boffi. Introductory Remarks. Structural Matrices. Evaluation of Structural Matrices

Outline. Structural Matrices. Giacomo Boffi. Introductory Remarks. Structural Matrices. Evaluation of Structural Matrices Outline in MDOF Systems Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano May 8, 014 Additional Today we will study the properties of structural matrices, that is the operators that

More information

Singular Value Decompsition

Singular Value Decompsition Singular Value Decompsition Massoud Malek One of the most useful results from linear algebra, is a matrix decomposition known as the singular value decomposition It has many useful applications in almost

More information

Data Mining Lecture 4: Covariance, EVD, PCA & SVD

Data Mining Lecture 4: Covariance, EVD, PCA & SVD Data Mining Lecture 4: Covariance, EVD, PCA & SVD Jo Houghton ECS Southampton February 25, 2019 1 / 28 Variance and Covariance - Expectation A random variable takes on different values due to chance The

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 37 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Tuning TMDs to Fix Floors in MDOF Shear Buildings

Tuning TMDs to Fix Floors in MDOF Shear Buildings Tuning TMDs to Fix Floors in MDOF Shear Buildings This is a paper I wrote in my first year of graduate school at Duke University. It applied the TMD tuning methodology I developed in my undergraduate research

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Camera Models and Affine Multiple Views Geometry

Camera Models and Affine Multiple Views Geometry Camera Models and Affine Multiple Views Geometry Subhashis Banerjee Dept. Computer Science and Engineering IIT Delhi email: suban@cse.iitd.ac.in May 29, 2001 1 1 Camera Models A Camera transforms a 3D

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Image Compression Using Singular Value Decomposition

Image Compression Using Singular Value Decomposition Image Compression Using Singular Value Decomposition Ian Cooper and Craig Lorenc December 15, 2006 Abstract Singular value decomposition (SVD) is an effective tool for minimizing data storage and data

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Linear Algebra and Dirac Notation, Pt. 3

Linear Algebra and Dirac Notation, Pt. 3 Linear Algebra and Dirac Notation, Pt. 3 PHYS 500 - Southern Illinois University February 1, 2017 PHYS 500 - Southern Illinois University Linear Algebra and Dirac Notation, Pt. 3 February 1, 2017 1 / 16

More information

DS-GA 1002 Lecture notes 10 November 23, Linear models

DS-GA 1002 Lecture notes 10 November 23, Linear models DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.

More information

Computational math: Assignment 1

Computational math: Assignment 1 Computational math: Assignment 1 Thanks Ting Gao for her Latex file 11 Let B be a 4 4 matrix to which we apply the following operations: 1double column 1, halve row 3, 3add row 3 to row 1, 4interchange

More information