BATCH-FORM SOLUTIONS TO OPTIMAL INPUT SIGNAL RECOVERY IN THE PRESENCE OF NOISES
(Preprint) AAS

P. Lin, M. Q. Phan, and S. A. Ketcham

This paper studies the problem of optimally recovering the input signals to a linear time-invariant system when both input and measurement noises are present. We focus on batch-form solutions, which are suitable for applications that deal with short-duration events. The system, the input and measurement noise covariances, and the noise-corrupted output signals are assumed known, and we seek to recover the input signals that enter the system prior to being corrupted by input noise. The proposed solution works through a Kalman filter to characterize the input and measurement noise statistics. The input signal recovery is optimal in the sense that the filter residual is correctly recovered from the given information about the model and the output measurements. A weighted least-squares solution is found to be both simple and useful in acoustic signal recovery applications.

INTRODUCTION

In the area of short-duration, large-domain acoustic and seismic signal propagation, highly accurate reduced-order models represent an enabling technology, Refs. [1]-[4]. These techniques can also be applied to study the propagation of vibrations throughout a large flexible structure. Such models are derived from HPC simulation data and can be used for rapid prediction of the dynamic responses without resorting to the time-consuming HPC simulation. Significant savings in computational resources and time can be achieved by this strategy, reducing what normally takes hours on an HPC supercomputer to minutes on a laptop. Current research efforts are being made to extend the use of reduced-order models beyond output prediction. These efforts include the problems of source signal recovery and source localization. By taking advantage of the knowledge of the dynamics of the environment represented by these reduced-order models, it is possible to address the source signal recovery and localization
problems in a highly complex multi-path environment with non-line-of-sight sensors. This paper studies the problem of optimally recovering the input signals when both input and measurement noises are present. We particularly focus on batch forms of the solutions for applications that deal with short-duration events, where the signals to be recovered span only a short time record. The system, the input and measurement noise covariances, and the noise-corrupted output signals are assumed known, and we seek to recover the input signal that enters the system prior to being corrupted by noise.

(Author affiliations: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755; Signature Physics Branch, Cold Regions Research and Engineering Laboratory, Hanover, NH 03755.)

For the given system and the specific input and measurement noise sequences, there is a unique residual associated with
this specific noise-free input signal. For optimal input recovery, we seek an input signal that reproduces this unique residual. The challenge of the problem lies in the fact that this unique residual is not known a priori, because we do not assume knowledge of the specific input and measurement noise sequences, but only their covariances. Instead of working with the input and measurement noise covariances, our approach works through a Kalman filter, because given the model and the input and measurement noise statistics, the filter gain is uniquely determined. In fact, such a filter and system model can be identified from input-output data using modern system identification techniques, Refs. [5]-[11]. Two approaches are considered for the input identification problem. The first approach is based on an ARX model (Auto-Regressive model with eXogenous input). The coefficients of this model are related to the state-space model and the filter gain. This model is suitable because, when the order of the ARX model is sufficiently large, the single additive noise term in the model is the residual, which is known to be white and minimized. A white additive error is especially attractive for least-squares identification problems. The input signal is found by minimizing this residual. Next, we seek to determine how this solution is related to a simpler noise-free solution that is based on the unit pulse response model. This second approach turns out to be useful in that it leads to a weighted least-squares solution. The improvement of the weighted least-squares solution is due to the fact that it minimizes the correct filter residual, whereas the ordinary least-squares solution minimizes an incorrect colored residual. The filter gain is involved in a weighting matrix of the weighted least-squares solution that improves the recovery of the input signals. Furthermore, in the absence of noises, the weighted least-squares solution reduces to the regular least-squares solution. Intuitively appealing implications
can also be gained with this approach: it explains when explicit consideration of the noise covariances leads to improved identification results over the regular least-squares solution, and it also explains why, for many problems, the regular least-squares solution, when applied to noisy measurements, is found to be adequate. Numerical examples are supplied to demonstrate the effectiveness of the proposed approach.

PROBLEM STATEMENT

Consider a linear time-invariant, finite-dimensional system

x(k+1) = A x(k) + B u(k) + w(k)    (1)
y(k)   = C x(k) + D u(k) + v(k)    (2)

where u(k) ∈ R^m is the input signal that we would like to recover, and w(k) ∈ R^n and v(k) ∈ R^l represent process noise and measurement noise, respectively, both unknown but assumed to be white and Gaussian. Given the system matrices A, B, C, D and the covariance matrices of w(k) and v(k), we would like to recover u(k) from the output y(k) ∈ R^l, k = 0, 1, 2, ..., s. We assume that the number of independent outputs is at least equal to the number of independent inputs. Furthermore, because we are dealing with short time records in our applications, we are particularly interested in batch forms of the solution.
THE KALMAN FILTER

Knowing the system model and the covariance matrices of the process and measurement noises is equivalent to knowing the corresponding Kalman filter with gain K,

x̂(k+1) = A x̂(k) + B u(k) + K ε(k)    (3)
y(k)    = C x̂(k) + D u(k) + ε(k)     (4)

where ε(k) is the filter residual, which is minimized and white. Instead of dealing with the covariances of the process and measurement noises, we will work with the filter gain K. Not only is this approach mathematically simpler, but in practice the system state-space model and the filter gain can be identified directly from input-output data by several available techniques, Refs. [5]-[11]. Another form of the filter is found to be useful, especially in system identification applications. Substituting (4) into (3) and re-arranging terms produces

x̂(k+1) = A x̂(k) + B u(k) + K (y(k) − C x̂(k) − D u(k))    (5)
        = (A − KC) x̂(k) + (B − KD) u(k) + K y(k)          (6)

By making the simplifying definitions

Ā = A − KC,    B̄ = B − KD    (7)

we obtain another form of the filter in which the residual is not explicitly present in the state equation,

x̂(k+1) = Ā x̂(k) + B̄ u(k) + K y(k)    (8)
y(k)    = C x̂(k) + D u(k) + ε(k)      (9)

In the following we derive various input-output models from these two forms of the filter and use the resultant models for input identification.

INPUT IDENTIFICATION BY ARX MODEL

We derive an ARX model (Auto-Regressive model with eXogenous input) with the aid of the filter, and invert it for input identification. Propagating (8) and (9) forward in time yields the following ARX model, valid for k ≥ p such that Ā^p ≈ 0, regardless of the initial state x(0),

y(k) = α_p y(k−p) + ⋯ + α_1 y(k−1) + β_p u(k−p) + ⋯ + β_0 u(k) + ε(k)    (10)

The coefficients α_i and β_i of the ARX model are related to the original state-space system matrices by the following relationships, for i = 1, 2, ..., p,

α_i = C Ā^{i−1} K,    β_i = C Ā^{i−1} B̄,    β_0 = D    (11)

If the order p of the ARX model is sufficiently large such that Ā^p ≈ 0, the above ARX model has a single additive error term ε(k). This error term is the output
residual, which is white and minimized. We aim to find the input signal by minimizing this residual. Since we are looking for a batch form of the solution, we need to write the input-output equations for all available time steps and re-package the results before inversion. This step is done in the next section.
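The two filter forms and the ARX coefficients above can be sketched numerically. The matrices A, B, C, D and the filter gain K below are assumed examples for illustration (none of the values come from the paper):

```python
import numpy as np

# Assumed example matrices (not from the paper).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.eye(2)
D = np.zeros((2, 1))
K = np.array([[0.2, 0.0], [0.0, 0.3]])   # assumed filter gain

# Innovation-form filter matrices: Abar = A - K C, Bbar = B - K D.
Abar = A - K @ C
Bbar = B - K @ D

def arx_coeffs(p):
    """ARX coefficients: alpha_i = C Abar^(i-1) K, beta_i = C Abar^(i-1) Bbar,
    with beta_0 = D.  Returns alpha = [alpha_1..alpha_p], beta = [beta_0..beta_p]."""
    alpha, beta = [], [D]
    M = np.eye(Abar.shape[0])            # holds Abar^(i-1)
    for _ in range(p):
        alpha.append(C @ M @ K)
        beta.append(C @ M @ Bbar)
        M = M @ Abar
    return alpha, beta

p = 40
alpha, beta = arx_coeffs(p)
# For the ARX truncation to be valid, Abar^p must be negligible.
print(np.linalg.norm(np.linalg.matrix_power(Abar, p)))
```

For a stable filter, the norm of Ā^p decays geometrically, so a moderate ARX order already makes the truncated term negligible.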
Solution by ARX Model Inversion

To separate the known terms (model coefficients and output measurements) from the unknown terms (input values), re-write (10) as

y(k) = [α_p ⋯ α_1] [y(k−p)]  + [β_p ⋯ β_0] [u(k−p)]  + ε(k)    (12)
                   [  ⋮   ]                [  ⋮   ]
                   [y(k−1)]                [ u(k) ]

where k ≥ p. Define the common input and output history vectors,

y_s = [y(0); y(1); ...; y(s−1)],    u = [u(0); u(1); ...; u(s)]    (13)

Writing (12) for all available time steps starting from k = p and packaging the results in a single matrix equation produces

y_ps = H_α y_s + H_β u + ε_ps    (14)

where y_ps and ε_ps are defined as

y_ps = [y(p); y(p+1); ...; y(s)],    ε_ps = [ε(p); ε(p+1); ...; ε(s)]    (15)

and the ARX model coefficients are arranged in the block matrices H_α and H_β,

H_α = [α_p ⋯ α_1              ]    H_β = [β_p ⋯ β_0              ]
      [    α_p ⋯ α_1          ]          [    β_p ⋯ β_0          ]    (16)
      [          ⋱            ]          [          ⋱            ]
      [         α_p ⋯ α_1     ]          [         β_p ⋯ β_0     ]

For further simplicity, define

y_α = y_ps − H_α y_s    (17)

Equation (14) can then be written succinctly as

y_α = H_β u + ε_ps    (18)

Because ε_ps contains the residual, which is minimized, we make use of this fact to find the u that minimizes the norm of ε_ps,

u = H_β^+ y_α = (H_β^T H_β)^{−1} H_β^T y_α    (19)

where the superscript + denotes the pseudoinverse computed via the singular value decomposition. To correctly recover u from the above solution, the matrix H_β needs to have full column rank. If the number of independent outputs is equal to the number of inputs, then H_β is "wide" (having more columns than rows). In this case the input u cannot be correctly solved for. The requirement, therefore, is for the number of independent outputs to be larger than the number of inputs, in which case H_β becomes a "tall" matrix (having more rows than columns).
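A minimal numerical sketch of this batch inversion is given below, with assumed example matrices (one input, two outputs, so the stacked coefficient matrix is "tall"). For simplicity the gain is chosen deadbeat so that Ā = A − KC is exactly zero and the ARX truncation is exact with p = 1; a Kalman gain would require a larger p:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example system: 1 input, 2 outputs (not from the paper).
A = np.array([[0.8, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [0.4]])
C = np.array([[1.0, 0.0], [0.3, 1.0]])
D = np.array([[0.1], [0.0]])
K = A @ np.linalg.inv(C)             # deadbeat gain: Abar = 0 exactly
Abar, Bbar = A - K @ C, B - K @ D

alpha1 = C @ K                       # alpha_1 = C Abar^0 K
beta0, beta1 = D, C @ Bbar           # beta_0 = D, beta_1 = C Abar^0 Bbar

# Simulate noise-free outputs from zero initial state.
s = 40
u_true = rng.standard_normal(s + 1)
x = np.zeros(2)
y = np.zeros((s + 1, 2))
for k in range(s + 1):
    y[k] = C @ x + D.ravel() * u_true[k]
    x = A @ x + B.ravel() * u_true[k]

# Stack the ARX relation for k = 1..s:
#   y(k) - alpha_1 y(k-1) = beta_1 u(k-1) + beta_0 u(k)
H_beta = np.zeros((2 * s, s + 1))
y_alpha = np.zeros(2 * s)
for k in range(1, s + 1):
    y_alpha[2*(k-1):2*k] = y[k] - alpha1 @ y[k - 1]
    H_beta[2*(k-1):2*k, k - 1] = beta1.ravel()
    H_beta[2*(k-1):2*k, k] = beta0.ravel()

u_hat = np.linalg.lstsq(H_beta, y_alpha, rcond=None)[0]
print(np.max(np.abs(u_hat - u_true)))
```

With two outputs and one input, H_beta is tall and full column rank, so the least-squares inversion recovers the noise-free input history essentially exactly.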
Orthogonal Constraints

The above solution minimizes the residual. The residual is also orthogonal to the input and output measurements, and thus one might wish to build these constraints into the optimization problem to solve for u. We have taken this approach and discovered that it is not a viable one, for the following reasons. Because the residual ε_ps is a linear function of u, the orthogonality condition of ε_ps with respect to u becomes quadratic in the elements of u. Optimization problems with quadratic equality constraints are known to be very difficult. Moreover, in our applications we typically deal with short data records. Imposing orthogonality conditions of the residuals on short data records turns out to be overly restrictive, as these orthogonality conditions should only be applied to long data records. This would be the case in system identification, where long input-output records are assumed to be available and the orthogonality conditions are automatically satisfied by the least-squares solutions that find the ARX model coefficients, with or without residual whitening. Because ε_ps is also orthogonal to the output measurements, and the output measurements are known, the orthogonality conditions associated with the output measurements are linear. Minimizing the residual ε_ps subject to linear constraints is straightforward. For this reason, we might replace the quadratic constraints associated with u by additional linear constraints associated with additional output measurements. This strategy of replacing nonlinear constraints by linear constraints is theoretically attractive. However, it does nothing to avoid the overly restrictive nature of imposing the orthogonality conditions on short data records, as discussed above.

INPUT IDENTIFICATION FROM THE UNIT PULSE RESPONSE MODEL

Because the filter gain K is involved in the coefficients of the ARX model in (10), it is not immediately obvious what the solution proposed in (19) reduces to in the absence of noises. For this
reason, we will now take the approach of first finding a simple noise-free solution, then seeing how this solution would be biased in the presence of noises. We then determine how the solution might be improved in the presence of noises.

Solution by Ordinary Least-Squares of the Unit Pulse Response Model

Returning to (1), (2) and setting the noise terms to zero, we have the following deterministic model,

x(k+1) = A x(k) + B u(k)    (20)
y(k)   = C x(k) + D u(k)    (21)

For the simplest case where the initial condition is zero, x(0) = 0, the relationship between an input history and an output history is

y = P u    (22)
Define

y = [y(0); y(1); y(2); ...; y(s)],    u = [u(0); u(1); u(2); ...; u(s)]

P = [ D                            ]
    [ CB        D                  ]
    [ CAB       CB      D          ]    (23)
    [  ⋮                   ⋱       ]
    [ CA^{s−1}B  ⋯  CAB  CB  D     ]

The coefficients CA^i B in P are the Markov parameters of the system, which are also the unit pulse response samples. Hence the model embedded in (22) is the unit pulse response model. Given y and P, as long as P has full column rank, the input u can be recovered from

u = P^+ y = (P^T P)^{−1} P^T y    (24)

where the + again denotes the pseudoinverse computed via the singular value decomposition. The above solution is referred to as the ordinary least-squares (OLS) solution. In the presence of noise, this solution is biased because the additive noise term is colored instead of white. To see why this is the case, note that in the presence of noises the counterpart of (22) derived from (3), (4) is

y = P u + e    (25)

The additive noise term e is defined as

e = Q ε    (26)

where Q and ε are given as

Q = [ I                            ]       ε = [ ε(0) ]
    [ CK        I                  ]           [ ε(1) ]
    [ CAK       CK      I          ]    (27)   [ ε(2) ]
    [  ⋮                   ⋱       ]           [  ⋮   ]
    [ CA^{s−1}K  ⋯  CAK  CK  I     ]           [ ε(s) ]

Because ε(k) is the residual, which is white, Qε is colored. A better solution is one that is based on an equation whose additive noise term is white, as explained in the next section.

Solution by Weighted Least-Squares

To set up an equation where the additive noise term is white, we return to (8), (9) to obtain

Q̄ y = P̄ u + ε    (28)

where Q̄ and P̄ are defined as

Q̄ = [ I                                  ]
    [ −CK          I                      ]
    [ −CĀK        −CK       I            ]    (29)
    [  ⋮                        ⋱         ]
    [ −CĀ^{s−1}K  ⋯  −CĀK  −CK  I       ]
P̄ = [ D                              ]
    [ CB̄         D                   ]
    [ CĀB̄       CB̄      D           ]    (30)
    [  ⋮                     ⋱        ]
    [ CĀ^{s−1}B̄  ⋯  CĀB̄  CB̄  D    ]

A key feature of this model is that the additive error term ε is white. The ARX model considered in (10) is subsumed in the present model in (28), which is only valid for a zero initial condition. The least-squares solution associated with (28) is therefore unbiased and is given as

u_WLS = P̄^+ Q̄ y = (P̄^T P̄)^{−1} P̄^T Q̄ y    (31)

The above solution is referred to as a weighted least-squares (WLS) solution for the reason explained in the next section.

Relationship between the OLS and WLS Solutions

To reveal the relationship between the two solutions given in (24) and (31), we need to determine the relationship between the two models given in (25) and (28). Pre-multiplying (25) with Q̄ produces

Q̄ y = Q̄ P u + Q̄ Q ε    (32)

Comparing to (28), it can be shown that P̄ = Q̄P and Q̄Q = I. The main diagonal blocks of the product Q̄P are D's, and the off-diagonal blocks match those of P̄, for example,

(−CK) D + I (CB) = C (B − KD) = C B̄
(−CĀK) D + (−CK)(CB) + I (CAB) = −CĀKD + C (A − KC) B = C Ā B̄, ...    (33)

The main diagonal blocks of the product Q̄Q are identity matrices, and the off-diagonal blocks are zero, for example,

(−CK) I + I (CK) = 0
(−CĀK) I + (−CK)(CK) + I (CAK) = −CĀK + CĀK = 0, ...    (34)

Substituting P̄ = Q̄P into (31) produces

u_WLS = (P^T Q̄^T Q̄ P)^{−1} P^T Q̄^T Q̄ y = (P^T W P)^{−1} P^T W y    (35)

where W = Q̄^T Q̄ serves as a weighting matrix when the solution in (35) is compared to the OLS solution of (24). We have explained why (31), which is derived from (28), can be interpreted as a weighted least-squares solution. The WLS solution can also be derived from (25) as follows. Pre-multiplying (25) with Q̄ and comparing the result to (28) produces a relationship between ε and e,

ε = Q̄ e    (36)
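The structure of these equations can be checked numerically. The sketch below uses assumed example matrices (none of the values come from the paper): it builds P, Q, P̄, and Q̄ as block-Toeplitz matrices, verifies the identities P̄ = Q̄P and Q̄Q = I, and compares the OLS and WLS recoveries on data generated with a white stacked residual:

```python
import numpy as np

# Assumed example matrices (not from the paper).
A = np.array([[0.8, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [0.4]])
C = np.array([[1.0, 0.0], [0.3, 1.0]])
D = np.array([[0.1], [0.0]])
K = np.array([[0.2, 0.1], [0.0, 0.3]])   # assumed filter gain
Abar, Bbar = A - K @ C, B - K @ D

def block_toeplitz(blocks):
    """Lower block-triangular Toeplitz matrix from its first block column."""
    n, (br, bc) = len(blocks), blocks[0].shape
    M = np.zeros((n * br, n * bc))
    for r in range(n):
        for c in range(r + 1):
            M[r*br:(r+1)*br, c*bc:(c+1)*bc] = blocks[r - c]
    return M

s = 20
I2 = np.eye(2)
P    = block_toeplitz([D] + [C @ np.linalg.matrix_power(A, i) @ B for i in range(s)])
Pbar = block_toeplitz([D] + [C @ np.linalg.matrix_power(Abar, i) @ Bbar for i in range(s)])
Q    = block_toeplitz([I2] + [C @ np.linalg.matrix_power(A, i) @ K for i in range(s)])
Qbar = block_toeplitz([I2] + [-C @ np.linalg.matrix_power(Abar, i) @ K for i in range(s)])

print(np.allclose(Qbar @ P, Pbar), np.allclose(Qbar @ Q, np.eye(Q.shape[0])))

# OLS vs. WLS on data y = P u + Q eps, with eps a white residual.
W = Qbar.T @ Qbar                        # weighting matrix
rng = np.random.default_rng(2)
u_true = rng.standard_normal(s + 1)
eps = 0.05 * rng.standard_normal(2 * (s + 1))
y = P @ u_true + Q @ eps
u_wls = np.linalg.solve(P.T @ W @ P, P.T @ W @ y)
u_ols = np.linalg.pinv(P) @ y
print(np.linalg.norm(u_wls - u_true), np.linalg.norm(u_ols - u_true))
```

The two identities hold to machine precision, confirming that the whitened model is simply the colored-noise model pre-multiplied by Q̄, and that the WLS solution is the OLS solution with weight W = Q̄ᵀQ̄ inserted.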
The above relationship can also be established from (26) with Q̄ = Q^{−1}, because Q̄Q = I. The residual ε is white and minimized; hence the correct cost function to minimize is

J = ε^T ε = e^T Q̄^T Q̄ e = e^T W e    (37)

Substituting e = y − P u into the cost function (37) and minimizing it with respect to u produces u_WLS = (P^T W P)^{−1} P^T W y, which is identical to the expression given in (35). We should note here that an important implication of the weighted least-squares solution is that any improvement over the ordinary least-squares solution is due to the weighting matrix W, which can be computed from the model and the filter gain. In cases where W is close to an identity matrix, one should not expect any significant difference between the input signals recovered by the two methods. Finally, the inverse operation in the solutions can be replaced by the Moore-Penrose pseudoinverse if the matrix that needs to be inverted is ill-conditioned. The pseudoinverse should be computed via the singular value decomposition, where the nearly zero singular values are not inverted. In this case, the resultant solution is non-causal because the pseudoinverse contains elements that multiply future output measurements to produce the input signal at the current time step. Indeed, for many discrete-time models where the pole-zero excess of the original continuous-time system is three or more, it is possible for the discrete-time representation to contain at least one zero outside the unit circle in the complex plane if the sampling interval is sufficiently small, Refs. [12]-[14]. Causal inverses then become unstable, and finding a non-causal inverse is a way to handle such systems, Ref. [15]. Here, in batch form, incorporation of non-causality in the solution is automatic through the truncation of the small singular values that are associated with these "unstable" zeros of the forward discrete-time model.

NUMERICAL EXAMPLES

Two sets of numerical results are presented. The first set is based on a realistic reduced-order acoustic
propagation model of an office and laboratory complex derived from HPC (High Performance Computing) simulations. The second set is based on a fictitious model to clarify the results obtained with the HPC model.

HPC Model of an Office and Laboratory Complex

The original HPC model is derived by the following procedure. A 3-D finite-difference time-domain (FDTD) computation is used to simulate the propagation of a sound source placed at the center of the complex, Fig. 1. This simulation takes approximately 5 hours using 56 cores of a Cray XT. The FDTD model has just under 7 billion cells, out of which 758 thousand output locations are selected to represent an output field 6 m above the ground surface and building roofs. From this simulation data, the inverse FFT method is used to compute Markov parameters that describe the dynamics from the center source to the 758 thousand output locations. The unit pulse response model defined by these Markov parameters is then converted into a state-space model via the second form of the superstable representation, Refs. [15], [16]. From this original state-space model, model reduction is applied to produce a lower-order model for use in this numerical study. For this illustration, we arbitrarily selected two output locations, and model reduction is further applied to reduce the dimension of the state-space model. Thus the HPC model used in this paper is a reduced-order, one-input, two-output model.
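Both recovery methods in the experiments below compute pseudoinverses via the singular value decomposition with the smallest singular values truncated. A minimal sketch of such a truncated pseudoinverse, applied to a hypothetical ill-conditioned matrix in which the smallest singular values are forced to be nearly zero:

```python
import numpy as np

def truncated_pinv(M, r):
    """Pseudoinverse of M keeping only the r largest singular values;
    the remaining (nearly zero) singular values are truncated, not inverted."""
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    inv = np.zeros_like(sv)
    inv[:r] = 1.0 / sv[:r]
    return (Vt.T * inv) @ U.T

# Hypothetical ill-conditioned map: force the last singular values near zero.
rng = np.random.default_rng(3)
M = rng.standard_normal((40, 20))
U, sv, Vt = np.linalg.svd(M, full_matrices=False)
sv[-5:] = 1e-12
M = (U * sv) @ Vt

x = rng.standard_normal(20)
y = M @ x
x_hat = truncated_pinv(M, 15) @ y     # keep 15 of 20 singular values

# The recovery equals the projection of x onto the retained
# right-singular subspace; the truncated directions are lost.
proj = Vt[:15].T @ (Vt[:15] @ x)
print(np.linalg.norm(x_hat - proj))
```

This mirrors the behavior described in the paper: the recovered signal is accurate in the well-conditioned subspace, while components associated with the truncated singular values cannot be recovered.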
Input Recovery Based on the HPC Model

A random test input signal is applied at the center source. Prior to entering the model, this test input signal is corrupted by additive input noise. The outputs at the two sensor locations are recorded, and these outputs are further corrupted by additive measurement noise. The test input signal is shown in Figure 2, and the output signals are shown in Figure 3. The recovered input signals are shown in Figure 4 for both the ordinary least-squares (OLS) and weighted least-squares (WLS) methods, along with the original test input. To facilitate the comparison, zoomed-in segments are shown in Figure 5. Careful examination of the results reveals that the test input is recovered rather well up to time step 876, for the following reason. In both methods, matrix inverses or pseudoinverses are called for: in the case of the OLS solution, it is the pseudoinverse of P, and in the case of the WLS solution, the pseudoinverse of P̄. These matrices can be ill-conditioned, as seen in this example. The singular value decomposition was used to compute the pseudoinverses, where the smallest singular values were truncated. The singular values of P and P̄ are shown in Figure 6. In these examples we chose to keep 876 singular values in the computation of the pseudoinverses for both methods. This choice caused the recovered input signal to start decaying from the 876-th time step, and the recovered input is only valid up to that time step. It turned out that the recovered input signal is relatively insensitive to this choice of singular value cut-off, as long as a "sufficient" number of singular values is kept: the number of singular values retained could be more or less without significantly changing the recovered input signal. Next, we computed the output residuals produced by both methods and compared them to the optimal residuals. The optimal residuals are unique for the specific model and the specific noise-corrupted input and output time histories. This comparison is shown in Figure 7, with zoomed-in versions in Figure 8. Close examination of
these figures suggests that the OLS residuals are quite similar to the WLS residuals, and both somewhat resemble the optimal residuals. The close similarity between the OLS and WLS residuals is a surprise, whereas the weak correlation with the optimal residuals is caused by the ill-conditioning of the problem, as revealed by the small singular values in Figure 6. To test this theory, we performed a simulation where the test input signal was restricted to a certain input subspace so that the conditioning of the problem could be improved. We used 8 random vectors as basis vectors to build the test input signal, and recovered the input signal within this set of basis vectors. The recovered input signals by both methods were found to match the test signal very well, as shown in Figure 9, and their residuals strongly tracked the optimal residuals, as shown in Figure 10 and the zoomed-in versions in Figure 11. This test confirmed that the source of the residual mismatch was the ill-conditioning of the model. Despite this ill-conditioning, the input signal was recovered rather well. One issue remains, however: we need to determine the reason why the OLS and WLS solutions are so similar. In order to show that the two solutions could be different, a fictitious model was used. The results are shown in the next subsection.

Input Recovery Based on a Fictitious Model

Consider a low-order fictitious model defined by state-space matrices A, B, C, and D. In this example the input noise level was set to be relatively high, and the output noise levels were around 5%. To eliminate numerical ill-conditioning, we also restricted the input space for the test input.
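Restricting the test input to the span of a few basis vectors, as done in these experiments, can be sketched as follows. The system matrices and the 8 random basis vectors are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed example system (1 input, 2 outputs), not from the paper.
A = np.array([[0.8, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [0.4]])
C = np.array([[1.0, 0.0], [0.3, 1.0]])
D = np.array([[0.1], [0.0]])

# Block-Toeplitz map P from the Markov parameters.
s, nb = 50, 8
markov = [D] + [C @ np.linalg.matrix_power(A, i) @ B for i in range(s)]
P = np.zeros((2 * (s + 1), s + 1))
for r in range(s + 1):
    for c in range(r + 1):
        P[2*r:2*r+2, c] = markov[r - c].ravel()

# Test input constrained to the span of a few random basis vectors.
Phi = rng.standard_normal((s + 1, nb))
c_true = rng.standard_normal(nb)
u_true = Phi @ c_true
y = P @ u_true + 0.01 * rng.standard_normal(2 * (s + 1))

# Solving for the basis coordinates uses the much better-conditioned
# matrix P @ Phi instead of P itself.
c_hat = np.linalg.lstsq(P @ Phi, y, rcond=None)[0]
u_hat = Phi @ c_hat
print(np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```

Because the unknowns are reduced from the full input history to a handful of basis coordinates, the inverse problem becomes well-conditioned and the recovery is accurate even with measurement noise.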
Figure 12 shows that despite the relatively high noise levels, both methods recovered the input signal extremely well, and the WLS solution is indeed closer to the test input signal than the OLS solution. Furthermore, the WLS residuals matched the optimal residuals much better than the OLS residuals (an almost perfect match was observed for the second output). Having seen that the WLS solution is indeed more accurate than the OLS solution, we determine why the two solutions were so similar to each other in the HPC model. By examining the weighting matrices, it is possible to see the reason. Figure 14 graphically shows the upper portion of these weighting matrices. In both cases, the weighting matrices are diagonally dominant, but the weighting matrix for the HPC model is closer to an identity matrix than the weighting matrix for the fictitious model. This fact explains why the two solutions were similar to each other in the case of the HPC model, but the WLS solution is better than the OLS solution in the case of the fictitious model.

CONCLUSIONS

In this paper we have considered batch-form solutions to the problem of input signal recovery in the presence of input and measurement noises. The solutions are suitable for short-duration events in a signal propagation problem, such as the propagation of an acoustic signal in a large domain, or the propagation of a vibration throughout a large structure. Instead of working with the input and output noise covariances, we take the approach of working through a Kalman filter, which characterizes both the system model and the noise covariances. We have considered two working approaches: one that involves the inversion of an ARX model and another that involves the inversion of a unit pulse response model. The second approach turns out to be useful in that it not only subsumes the first approach, it also leads to a simple weighted least-squares (WLS) solution. Advantages of the WLS solution over an ordinary least-squares (OLS) solution also become apparent in that the WLS solution
minimizes a white residual, whereas the OLS solution minimizes a colored residual, which is known to lead to bias in the result. Furthermore, by examining the weighting matrix of the WLS solution, it is also possible to determine whether the WLS solution is expected to offer significant improvement over the OLS solution in a practical application. Another unexpected benefit of the batch forms of the solution lies in the fact that the regular inverse can be replaced by the pseudoinverse computed via the singular value decomposition when the inverse problem is ill-conditioned. This type of ill-conditioning can arise when the original continuous-time system that the discrete-time model represents has a pole-zero excess of three or more and the sampling interval is sufficiently small. In this case, a causal inverse is unstable. The batch form of the solution is advantageous because it sidesteps this unstable causal-inverse issue by making the inverse solution non-causal through the pseudoinverse operation. Non-causality is not an issue for a batch-type solution applied after the fact (i.e., the source recovery is carried out after measurement of the output signals is complete). For completeness, we also considered ways to explicitly impose the conditions that the residual is orthogonal to the input and output measurements. Because the residual depends on the input signals, and the input signals themselves are not known (they are to be solved for in this identification problem), orthogonality of the residual to the input signals leads to quadratic constraints in the optimization problem. Such quadratic constraints are known to be extremely difficult to solve. Various options to bypass this issue were considered, but they all lead to the conclusion that the orthogonality conditions themselves, when forced on short data records, are overly restrictive, although these orthogonality conditions are routinely achieved on long data
records in a typical system identification solution.

We have studied the applicability of the proposed solution on a realistic high-performance computing (HPC) model of an office and laboratory campus in Hanover, NH. We found that the source signals can be recovered well with the proposed solution techniques. Due to the nature of the propagation dynamics, the WLS solution offers little improvement over the OLS solution for this model. This result is rather surprising, but not general, because it is system-specific. To validate the method, we also tested it on a fictitious model, where a noticeable improvement of the WLS solution over the OLS solution was observed. Finally, in both the HPC model and the fictitious model illustrations, the optimal residuals were correctly recovered. This fact confirms the validity of the proposed solution technique.

REFERENCES

[1] Anderson, T.S., Moran, M.L., Ketcham, S.A., and Lacombe, J.: Tracked Vehicle Simulations and Seismic Wavefield Synthesis in Seismic Sensor Systems. Computing in Science and Engineering.
[2] Ketcham, S.A., Moran, M.L., Lacombe, J., Greenfield, R.J., and Anderson, T.S.: Seismic Source Model for Moving Vehicles. IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 2, pp. 248-256 (2005).
[3] Ketcham, S.A., Wilson, D.K., Cudney, H., and Parker, M.: Spatial Processing of Urban Acoustic Wave Fields from High-Performance Computations. DoD High Performance Computing Modernization Program Users Group Conference (2007).
[4] Ketcham, S.A., Phan, M.Q., and Cudney, H.H.: Reduced-Order Wave Propagation Modelling Using the Eigensystem Realization Algorithm. Modeling, Simulation, and Optimization of Complex Processes, Bock, H.G., Phu, H.X., Rannacher, R., and Schlöder, J.P. (editors), Springer-Verlag.
[5] Juang, J.-N., Phan, M.Q., Horta, L.G., and Longman, R.W.: Identification of Observer/Kalman Filter Markov Parameters: Theory and Experiments. Journal of Guidance, Control, and Dynamics, Vol. 16, No. 2, pp. 320-329 (1993).
[6] Phan, M.Q., Horta, L.G., Juang, J.-N., and Longman, R.W.:
Improvement of Observer/Kalman Filter Identification (OKID) by Residual Whitening. Journal of Vibration and Acoustics, Vol. 117 (1995).
[7] Phan, M.Q.: Interaction Matrices in System Identification and Control. Proceedings of the 15th Yale Workshop on Adaptive and Learning Systems, New Haven, CT.
[8] Lin, P., Phan, M.Q., and Ketcham, S.A.: State-Space Model and Filter Gain Identification by a Superspace Method. The 5th International Conference on High Performance Scientific Computing, Hanoi, Vietnam.
[9] Juang, J.-N.: Applied System Identification. Prentice-Hall, Upper Saddle River, NJ.
[10] Van Overschee, P. and De Moor, B.: Subspace Identification for Linear Systems. Kluwer Academic Publishers (1996).
[11] Verhaegen, M. and Dewilde, P.: Subspace Model Identification, Part 1: The Output-Error State-Space Model Identification Class of Algorithms. International Journal of Control, Vol. 56, No. 5, pp. 1187-1210 (1992).
[12] Panomruttanarug, B. and Longman, R.W.: Repetitive Controller Design Using Optimization in the Frequency Domain. Proceedings of the AIAA/AAS Astrodynamics Specialist Conference, Providence, RI (2004).
[13] Longman, R.W.: On the Theory and Design of Linear Repetitive Control Systems. European Journal of Control, Vol. 16, No. 5, pp. 447-496 (2010).
[14] Longman, R.W., Peng, Y.-T., Kwon, T., Lus, H., Betti, R., and Juang, J.-N.: Adaptive Inverse Iterative Learning Control. Advances in the Astronautical Sciences.
[15] Brown, H.M., Phan, M.Q., and Ketcham, S.A.: A Non-Causal Inverse Model for Source Signal Recovery in Large Domain Wave Propagation. The 5th International Conference on High Performance Scientific Computing, Hanoi, Vietnam.
[16] Phan, M.Q., Ketcham, S.A., Darling, R.S., and Cudney, H.H.: Superstable Models for Short-Duration Large Domain Wave Propagation. Modeling, Simulation, and Optimization of Complex Processes, Bock, H.G., Phu, H.X., Rannacher, R., and Schlöder, J.P. (editors), Springer-Verlag.

Figure 1. A complex with a center source and various sensor locations.
Figure 2. Test input signal with additive input noise.

Figure 3. Output signals with additive output noise at the two sensor locations (left and right).

Figure 4. Recovered input signals with 876 singular values kept: OLS (left) and WLS (right).
Figure 5. Zoomed-in portions of the recovered input signals of Figure 4: OLS (left) and WLS (right). The test input signal is shown in red.

Figure 6. Singular values of P for OLS (left) and of P̄ for WLS (right); 876 singular values are retained in each case.

Figure 7. Comparison of OLS residuals (blue) and WLS residuals (green) vs. optimal residuals (red) for Output 1 (left) and Output 2 (right).
Figure 8. Zoomed-in portions of the residuals of Figure 7: Output 1 (left) and Output 2 (right). Optimal residuals are shown in red.

Figure 9. Recovered input signals in the specified input space: OLS (left) and WLS (right). The test input signal is shown in red.

Figure 10. Comparison of OLS residuals (blue) and WLS residuals (green) vs. optimal residuals (red) for Output 1 (left) and Output 2 (right), for the test input in the specified input space.
16 5 x Residual Comparison (Output ) 5 x Residual Comparison (Output ) Figure Zoomed-in portions of Figure : Output (left) and Output (right) The and residuals closely resemble the optimal residuals in red Recovered Input Signal vs Test Input Signal test recovered () Recovered Input Signal vs Test Input Signal test recovered () Figure Recovered input signals for fictitious model: (left) and (right) showing the solution reproduces the test input signal better than the solution The test input signal is shown in red 8 6 Residual Comparison (Output ) 8 6 Residual Comparison (Output ) Figure Comparison of residuals (blue), residuals (green) vs optimal residuals (red) for Output (left) and Output (right) for fictions model zoomed-in portions The results show the residuals are better at matching the optimal residuals than the residuals 6
17 Weighting Matrix W Weighting Matrix W Figure weighting matrices for HPC model (left) and fictitious model (right) 7
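The figures above illustrate input recovery in which only the leading singular values of the input-to-output map P are retained before inverting. A minimal sketch of that idea, a truncated-SVD least-squares solve, is shown below. The matrix P, its dimensions and spectrum, the noise level, and the truncation index k are all illustrative assumptions here (the paper's P and its 876-value cutoff come from its HPC model), so this is a toy demonstration of the technique, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's method): recover an input u from
# noisy outputs y = P @ u + v using a truncated-SVD pseudoinverse of P.
# P, its spectrum, the noise level, and the cutoff k are assumptions.
rng = np.random.default_rng(0)
m, n = 200, 100
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))   # orthonormal columns
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s0 = np.logspace(0, -8, n)                          # ill-conditioned spectrum
P = U0 @ np.diag(s0) @ V0.T                         # stand-in input-to-output map

u_true = V0[:, :20] @ rng.standard_normal(20)       # input in the leading subspace
y = P @ u_true + 1e-6 * rng.standard_normal(m)      # noise-corrupted output

U, s, Vt = np.linalg.svd(P, full_matrices=False)

def recover(k):
    # Least-squares input estimate using only the k largest singular values.
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

err_trunc = np.linalg.norm(recover(40) - u_true) / np.linalg.norm(u_true)
err_full = np.linalg.norm(recover(n) - u_true) / np.linalg.norm(u_true)
print(err_trunc, err_full)  # truncation suppresses the amplified measurement noise
```

The full-rank solve divides the output noise by the smallest singular values and destroys the estimate, while the truncated solve remains accurate. A weighted least-squares variant, as used in the paper, would in this toy setting amount to scaling the rows of P and y by the inverse noise covariance before the solve.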