
AN ABSTRACT OF THE THESIS OF

FREDERICK WILLIAM MIRANDA for the MASTER OF SCIENCE in Electrical and Electronics Engineering.

Title: SYSTEM IDENTIFICATION BY SPECTRAL ANALYSIS USING CLOSED-LOOP PROCESS DATA

Abstract approved: Redacted for Privacy (Solon Stone)

Time series from the input and the output of a process are analyzed by spectral estimation methods to develop a system transfer function. Existing process data were used. All the published computation methods were examined. Three of these have been explained and illustrated. The three methods for computing autospectra and cross-spectra have been referred to as: periodogram smoothing, averaging periodograms of segmented series, and the Blackman-Tukey method. In the first two, the Fourier coefficients are calculated directly from the data and the resulting periodograms smoothed to obtain estimates of the spectra. The Blackman-Tukey approach is based on computing the covariances from the data and then Fourier transforming the smoothed time averages. Also described here is an adaptation of the Blackman-Tukey method, which takes advantage

of the fast Fourier transform. This thesis also lists the precautions necessary in planning and collecting the data so as to derive maximum benefit from spectral analysis. Mutual relationships between the various forms of the linear system equations and spectral estimates have been explored.

System Identification by Spectral Analysis Using Closed-loop Process Data

by

Frederick William Miranda

A THESIS submitted to Oregon State University in partial fulfillment of the requirements for the degree of Master of Science

June 1971

APPROVED:

Redacted for Privacy
Professor of Electrical and Electronics Engineering in charge of major

Redacted for Privacy
Head of Department of Electrical and Electronics Engineering

Redacted for Privacy
Dean of Graduate School

Date thesis is presented

Typed by Illa Atwood for Frederick William Miranda

ACKNOWLEDGMENT

This research was conducted under the guidance of Professor S. A. Stone of Oregon State University. To him and to Professors L. N. Stone, L. C. Jensen and L. J. Weber, the author extends his sincere appreciation for their assistance, interest, and encouragement. The author also wishes to express his appreciation to Mr. E. H. Nunn and to the various personnel that have assisted him in this project.

TABLE OF CONTENTS

Chapter                                                            Page
I.   INTRODUCTION                                                     1
         Introduction                                                 1
         Statement of the Problem                                     3
         Literature Review                                            5
II.  RELATIONS BETWEEN PROCESS INPUTS AND OUTPUTS                     7
III. DESIRABLE EXPERIMENTAL CONDITIONS                               13
IV.  SPECTRAL ANALYSIS AND ITS USE IN PROCESS IDENTIFICATION         16
V.   COMPUTATION OF SPECTRAL DENSITY FUNCTIONS                       22
         Spectral Density by Smoothing Periodograms                  26
             Fourier Series Transform                                27
             The Fourier Integral Transform                          28
             Discrete Fourier Transform                              28
             Data Preparation                                        33
             Calculation of Periodograms                             33
             Smoothing of Periodograms                               35
         Spectral Density by Averaging Periodograms of
         Segmented Data                                              36
             Data Preparation                                        37
             Calculation of Periodograms                             37
             Averaging the Periodograms                              38
         Spectral Density by Fourier Transformation of
         Covariance Functions                                        38
             Data Preparation                                        39
             Computation of Covariance Functions                     40
             Blackman-Tukey Method                                   40
             Covariance Functions by Fast Fourier Transformation     41
             Weighting the Correlation Functions                     43
             Transforming the Weighted Correlations                  44
VI.  RESULTS                                                         46
         Results and Discussion                                      46
         Recommendations                                             63
BIBLIOGRAPHY                                                         65
APPENDIX A                                                           71

LIST OF TABLES

Table                                                              Page
1.  Sample autospectra for the output and the input by three
    different methods
2.  Frequency response by three methods of estimating spectra        49

LIST OF FIGURES

Figure                                                             Page
4.1   Curve of power consumed by resistor as a function of cutoff
      angular frequency of low-pass filter
4.2   Relationship between power spectrum and the power curve of
      Figure 4.1
      Relationship between the various Fourier transforms
6.1   Autospectral density of output. Method: Blackman-Tukey,
      using 16 lags
6.2   Autospectral density of input. Method: Blackman-Tukey,
      using 16 lags
6.3   Autospectrum of output. Method: Smoothing periodograms
6.4   Autospectrum of input. Method: Smoothing periodograms
      Autospectrum of output. Method: Averaging periodograms of
      segmented series
      Autospectrum of input. Method: Averaging periodograms of
      segmented series
      Gain vs frequency. Method: Blackman-Tukey using 16 lags
      Gain vs frequency. Method: Smoothing periodograms
      Gain vs frequency. Method: Averaging periodograms of
      segmented series
      Histogram of output data
      Histogram of input data                                        62

NOMENCLATURE

The following notation is adhered to in this thesis. As indicated, some symbols have more than one meaning.

Symbol            Definition
a, b              Coefficients
A(n), B(n), C(n)  Fourier transforms
C_xy(τ)           Covariance between x and y with y lagging x by τ
D                 Distance between starting points of segments of time series
E                 Length of segments of time series
E'                Length of segments extended with zeroes so that E' is a power of 2
f                 Frequency variable, in hertz
h                 Impulse response; weighting function
H(s), H(jω)       Transfer function
H(z)              Pulse transfer function
I_xx              Periodogram of x(t)
k                 Discrete time variable
K                 Number of segments derived from each time series
L_xy(n)           Cospectrum of x and y at frequency ω_n
m                 A subscript; an index
M                 Number of lags used in Blackman-Tukey method

Symbol            Definition
n                 Corresponds to frequency ω_n
N                 Number of data points in a given time series
N'                Number of points in the time series extended with zeroes so that N' is a power of 2
p_xx              Estimate of autospectral density of x
p_xy              Estimate of cross-spectral density of x and y
P_av              Average power dissipated in unit resistance
Q_xy(n)           Quadrature spectrum at frequency ω_n
r                 A subscript; an index
R_xy(τ)           Correlation between x and y with y lagging x by τ
s                 Laplace transform variable; s = σ + jω
s_x               Standard deviation of x(t), the input series
s_y               Standard deviation of y(t), the output series
t                 Time
T                 Period
U                 A coefficient depending on the spectral window and the length of the series
W(k)              Spectral window function
W_xy(n)           Coherency between x and y as a function of radian frequency ω_n
x(t)              Input function of time
X(s)              Laplace transform of input
y(t)              Output function of time

Symbol            Definition
Y(s)              Laplace transform of output
z                 Shift operator
Δt                Sampling interval
τ                 Time lag (age) variable; time constant
ω                 Radian frequency variable

SYSTEM IDENTIFICATION BY SPECTRAL ANALYSIS USING CLOSED-LOOP PROCESS DATA

INTRODUCTION

Introduction

Spectral analysis is a very useful technique for defining the relationship between the outputs and inputs of a linear dynamic system. It can also be used for non-linear systems that can be linearized over a relatively small operating region. Basically, it consists of analyzing the signal intensity in various frequency ranges and then determining the correlation at the corresponding frequencies. The empirical model derived from the power density spectrum can be used to optimize the control system and the process. The model can be further refined by simulation. A method of approximating analytical relationships from process responses is described in the following pages.

A mathematical model of the given process is helpful and usually necessary for efficient closed-loop control design as well as for process optimization. In simple control systems, such parameters as gain, reset rate and derivative are computed from the observed process reaction to a step change of the set-point of the controller. The objective is to obtain a reasonably fast controller reaction without oscillations. The instruments are usually tuned

at one operating point though the optimum settings will vary for different operating levels. Adaptive control of these parameters requires a knowledge of the process model.

Modern control hardware is capable of computing and adjusting set-point values for maximizing plant profit or similar criteria of overall performance. Optimization techniques such as linear programming and hill-climbing can be utilized. In all cases the first step is the derivation of some mathematical relationship between the outputs and measurable inputs, both controllable and uncontrollable. Spectral analysis thus satisfies a real need. Derivation of a mathematical model from theoretical considerations is not always feasible. Simple methods such as regression analysis may suffice for static systems with independent observations, but are unsuitable for most continuous processes. Economists and business analysts as well as control engineers can benefit from spectral analysis.

A certain amount of caution is necessary in the application of spectral analysis. The computations can be neatly mechanized, but the results are no better than the validity of the data and the assumptions. It is not always easy to tell physical effects from the effects of data processing. This thesis outlines the computation methods and gives a relatively non-mathematical description of the concepts involved. Factors enhancing the validity of the results have also

been outlined. Though practically all these developments have taken place in the last few years, hardware is already available for their implementation. Statistical tests and nonstationary systems have not been discussed here though such information is available in the references cited. The linear relationships often seen under different guises in diverse fields have been summarized in one of the sections.

Statement of the Problem

It was not possible to conduct open-loop tests on the process whose dynamic characteristics were to be determined. The process was known to have a major time constant in the order of minutes. Hence one output variable and several possible input variables were logged at 2-second intervals. Some of the input variables themselves were on closed-loop control.

The process can be explained in terms of a paper machine on which it is desired to maintain the paper basis weight or density within given limits. The basis weight is usually regarded as a function of consistency, couch pressure, flow rates, and other input variables. Uncontrollable factors, some too complex to measure, will preclude the existence of a unique mathematical relationship, say, between the basis weight and consistency. But statistical communication theory can be used to derive working equations relating the basis weight to the principal input variables. This procedure gets

around the need for complex analytic derivations. It assumes that the data record used is a member of an infinite aggregate of waveforms of the same nature. Spectral densities and other computed quantities are regarded as statistical estimates of the true or theoretical values. It is possible to improve the estimates by averaging the results over several data records. Transfer functions derived from these calculations can be checked and improved by simulation methods. The resulting information would facilitate the use of available control techniques such as the IBM Control Optimization Program.

Since spectral methods are quite new and presented in rather mathematical language, it was decided to study their adaptability to practical situations. The published techniques were reviewed and the given data were analyzed by three selected methods. Only one input is considered in this thesis. A method of deriving transfer functions from spectral estimates has been illustrated. The value of the transport delay was taken from prior knowledge rather than from the phase information. No allowance has been made for nonlinearities present in the system. Though this thesis is limited to a single input, analogous transfer functions can be derived for other inputs and then combined to obtain a satisfactory input-output relationship.

Literature Review

Spectral analysis had its beginning in the attempts of meteorologists and geophysicists to detect hidden periodicities for the prediction of earthquakes, sunspots and such natural phenomena. It was soon discovered that the results were masked by the end effects due to the finite lengths of the samples. Random noise often indicated periodicities that did not exist in the signal. In the 1940's and 1950's the statisticians Bartlett in England and Tukey in the United States introduced smoothing techniques to extract meaningful information from spectral analysis. They showed that Schuster's periodogram method (53) of harmonic analysis gave erroneous results in applications other than the investigation of harmonics of a fixed identifiable frequency in a genuinely periodic function. They developed an indirect method in which lagged time averages (covariances) calculated from the given data were smoothed with a relatively broadband filter and then Fourier transformed into spectra.

Though the well known book by Blackman and Tukey (9) dealt with the identification of periodicity in a single series, some engineers were trying these frequency domain methods for studying relationships between two or more concurrent time series. The calculations were tedious but computers were coming into their own. The U.S. Department of Defense sponsored symposiums that

aroused the interest of several statisticians and engineers in spectral analysis. The May 1961 issue of Technometrics was devoted almost entirely to this new field. Spectral analysis was extensively dealt with in several books (7, 10, 33) published around that time. However, the emphasis was on the Blackman-Tukey method and the related window carpentry. Computation time is drastically reduced by the fast Fourier transform that has become popular since 1965 (15, 27, 28). The need for the detection of underground nuclear explosions has spurred the development of new algorithms (38, 50, 54) and the periodogram method has been revived.

There is considerable literature on the methods for system identification (22, 37, 66). Several books and papers have been published on the use of sine waves and pulses for the determination of process dynamics (14, 39). Spectral analysis can be considered to be a special case of pulse testing.

RELATIONS BETWEEN PROCESS INPUTS AND OUTPUTS

System identification usually starts with the application of linear system theory. It is desirable to obtain a tractable input-output relationship that is a function of the system characteristics and is independent of the form or magnitude of the inputs. The determination of such a relationship is complicated not only by the presence of nonlinearities but also by random noise. The objective is to characterize the system adequately with a minimum number of parameters determined from the least amount of experimental data.

When an input to a process is changed, its effect on the output will not be felt immediately but will build up gradually because of the inertia of the system. At any instant, the output is a function of the previous inputs and not always independent of the earlier values of the output variable itself. This dynamic relationship between the input and the output can be expressed graphically or by suitable equations. A linear system can be described by a linear differential equation of the form:

$$b_n \frac{d^n y}{dt^n} + \cdots + b_1 \frac{dy}{dt} + b_0 y = a_m \frac{d^m x}{dt^m} + \cdots + a_1 \frac{dx}{dt} + a_0 x \qquad (2-1)$$

or by a set of state variable equations.

The same concept can also be expressed in terms of the impulse response in the form of convolutions. Response at time t due

to a unit impulse excitation applied at time v is h(t-v). An arbitrary input x(t) can be regarded as a superposition of a sequence of impulses of strength x(v)dv, where v is the excitation time. The resulting response y(t) at time t will be:

$$y(t) = \int_{t-w}^{t} h(t-v)\, x(v)\, dv \qquad (2-2)$$

The lower limit is based on the assumption that the response will not be affected by impulses applied more than w time units prior to the response time. By changing the variable of integration, equation 2-2 can also be written as:

$$y(t) = \int_{0}^{w} h(\tau)\, x(t-\tau)\, d\tau \qquad (2-3)$$

where the variable τ = t - v is the age, at the response time t, of the input applied at excitation time v = t - τ. If ∫ h(τ) dτ = 1, the response is essentially a weighted or smoothed average of the excitation. These equations indicate that the response at a given time depends on what happened at previous instants of time.

When the initial conditions are zero, the input-output relationship can be conveniently expressed in terms of the transfer function H(s), which is the Laplace transform of h(t):

$$Y(s) = H(s)\, X(s) \qquad (2-4)$$

To determine the attenuation and phase shift of the system in response

to sinusoidal inputs, equation 2-4 is used by replacing s by jω. Spectral analysis will also yield suitable estimates of H(jω).

In digital analysis of continuous systems, equations 2-1 to 2-4 are replaced by their discrete analogs. The convolution relationships become:

$$y(n\Delta t) = \sum_{k=0}^{n} h(k\Delta t)\, x(n\Delta t - k\Delta t) \qquad (2-5)$$

$$y(n\Delta t) = \sum_{k=0}^{n} h(n\Delta t - k\Delta t)\, x(k\Delta t) \qquad (2-6)$$

where Δt is the sampling interval. x(nΔt - kΔt) is the value of x at time (n - k)Δt, i.e., kΔt time units earlier. Thus the response is a weighted sum over previous values of the input sequence. In the graphical calculation of equations 2-5 and 2-6, the impulse response is "folded", i.e., plotted backwards on the input (23). The products of ordinates are then computed and summed.

These equations indicate the possibility of evaluating the parameters h(kΔt) from observed values of the input and the output. This is not a practical procedure because of the noise invariably present in the observations. Some of the objections can be overcome by using statistical covariance functions in place of the data points themselves. The covariance functions have the same relationship as equations 2-5 and 2-6 (23, 47).

$$C_{xy}(n\Delta t) = \sum_{k=0}^{n} h(k\Delta t)\, C_{xx}(n\Delta t - k\Delta t) \qquad (2-7)$$

The cross-covariance will exclude the effects of noise that is not correlated with the system output (67). Before the widespread use of digital computers, these computations were carried out on specially built correlation analyzers (23, 47, 51, 57). The fast Fourier transform has speeded up the computation of convolutions (16, 17), but statisticians (12, 33) have noted that equations of the form 2-5, 2-6, and 2-7 involve a large number of values of h(kΔt). These equations do not have any provision for the effects of the feedback from the output to the input by a controller. They are not easily adaptable for multivariate systems. Besides, the neighboring values of the h's are usually correlated. Hence there is a great surge of interest in parameterizing the problem by fitting what is usually called the mixed autoregressive moving average model. This is merely a discrete version of the linear differential equation. Thus:

$$y(n\Delta t) + \sum_{k=1}^{r} b_k\, y(n\Delta t - k\Delta t) = \sum_{k=0}^{m} a_k\, x(n\Delta t - k\Delta t) \qquad (2-8)$$

This equation includes the effects of feedback, and can be easily modified (64) to handle non-stationary processes. A stagewise regression process can be used to determine the parameters (11, 12). The

number of required parameters is determined by introducing more terms on both sides of this equation and at each stage computing the variance and the autocorrelation function of the residuals. The model is adequate when there is no evidence of autocorrelation in the residuals (33). The coefficients of this equation can also be determined by other approaches (35, 36, 37, 59). Frequently, equation 2-8 simplifies to a pure autoregressive ($a_k = 0$, $k = 1, 2, \ldots, m$) or a moving average (all $b_k = 0$) model.

By defining the shift operator z as:

$$z^{-1}(x_k) = x_{k-1} \qquad (2-9)$$

equation 2-8 can be written in terms of a sampled-data or pulse transfer function. If the sampling interval Δt is unity,

$$H(z) = \frac{a_0 + a_1 z^{-1} + \cdots + a_m z^{-m}}{1 + b_1 z^{-1} + \cdots + b_r z^{-r}} \qquad (2-10)$$

For discrete linear systems the z transforms are related by:

$$Y(z) = H(z)\, X(z) \qquad (2-11)$$

The pulse transfer function can be derived from the transfer function of a sampled signal by substituting:

$$z = e^{s\Delta t} \qquad (2-12)$$

where Δt is the sampling interval. Thus z is related to the frequency response function of the

sampled signal by:

$$z = e^{j\omega\Delta t} \qquad (2-13)$$

Non-linearities as well as parameter optimization can be handled by simulation. But the methods of linear analysis are of immense help at least in the choice of the initial models.
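The discrete relationships of this chapter can be made concrete with a short sketch. The following Python code, a minimal illustration added to this discussion rather than code from the thesis (the function names and toy coefficients are hypothetical), implements the convolution sum of equation 2-5 and the difference-equation model of equation 2-8:

```python
import numpy as np

def convolution_response(h, x):
    """Equation 2-5: y(n) = sum_{k=0}^{n} h(k) x(n - k), with dt taken as 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(min(n + 1, len(h))):
            y[n] += h[k] * x[n - k]
    return y

def arma_response(a, b, x):
    """Equation 2-8: y(n) + sum_{k=1}^{r} b_k y(n-k) = sum_{k=0}^{m} a_k x(n-k).
    `a` holds a_0..a_m and `b` holds b_1..b_r."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(a[k] * x[n - k] for k in range(len(a)) if n >= k)
        y[n] -= sum(b[k - 1] * y[n - k] for k in range(1, len(b) + 1) if n >= k)
    return y

# First-order example: y(n) - 0.5 y(n-1) = 0.5 x(n). An impulse input
# decays geometrically (0.5, 0.25, 0.125, ...), illustrating how the
# output "remembers" earlier values of itself.
print(arma_response([0.5], [-0.5], np.array([1.0, 0.0, 0.0, 0.0])))
```

Note that the autoregressive coefficients feed previously computed outputs back into the sum, which is exactly the feedback structure that the convolution form of equation 2-5 cannot represent.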

DESIRABLE EXPERIMENTAL CONDITIONS

Time series data obtained from normal operating records can be used to determine spectral density and process characteristics. Sometimes that is the only data available, but it can also lead to incorrect conclusions. Proper experimental conditions will improve the efficiency and the dependability of the results. Some of the desirable conditions are stated here.

1. The time series used in the computations must be a sufficiently accurate representation of the input and output of the process. It is essential that the instruments including the transducers have adequate bandwidth. In some cases it may be desirable to filter the signal prior to sampling. Attention should be paid to instrument calibration and the resolution of the analog-to-digital converters used.

2. To determine the dynamic characteristics of a process, open-loop tests are desirable. Primarily one desires to know how the system reacts to a given input and how much it "remembers" the previous inputs and outputs. In a closed-loop system, feedback will introduce the effects of the output on the input and interpretation may become difficult.

3. The input must be deliberately varied while recording the inputs and the corresponding output values. The results will then indicate if the changes in the output were due to the suspected input

or due to some other variable that is merely correlated with the input.

4. The magnitude of the input signal should be such that the process operates as close to the linear region as possible. Too large an input may cause saturation. But the signal should be above the threshold level of the instruments and the system.

5. The nature of the input signal will affect the efficiency of any system identification method. A pseudo-random noise is an ideal input for computing the frequency response by statistical methods. Theoretically the averaging period for correlation can be a single period of the pseudo-random sequence. The estimates can be improved by repeating the experiment and averaging the results of the replications. Suitable noise generators are now available (5, 17, 48). The bandwidth of the input should be considerably greater than the bandwidth of the system. The autocorrelation of the input noise will then appear as an impulse and the resulting response will approximate the impulse response of the system (52).

6. It is not necessary to limit the measurements to the inputs and the outputs of the process. Recordings of values at intermediate points may simplify computations and yield useful information.

7. Operating conditions usually change over a period of time. Repetition of the experiment at different times can yield valuable information about the process and detect previously unknown factors that influence the output variable.

8. If the data is not from a stationary time series, the nonstationarity has to be removed at the beginning of the calculations. A correlation curve that fails to decrease rapidly and oscillate about the time axis, or a high value at low frequencies of a spectral density curve, is usually indicative of a trend in the data. In general, n-th order non-stationarity can be eliminated by differencing the data n times. Regression methods can also be used to remove nonstationarity.

9. One should have some knowledge of the spectral density curve before designing a sound experiment. A pilot analysis (9, 33) may yield a useful preview of the shape of the spectrum.

10. The sampling interval Δt must be sufficiently small so that the spectral density p(ω) is essentially zero for frequencies above the Nyquist frequency, 1/(2Δt). If this is not true, aliasing will occur, i.e., gain for frequencies between zero and the Nyquist frequency will be confounded with the gain for frequencies above the Nyquist limit.

11. The sample length must be adequate. It is not always possible to know if a peak in a spectral density curve is real or spurious. Statistical tests can help. The larger the number of degrees of freedom associated with the estimate, the more precise are these tests. These degrees of freedom as well as the bandwidth of the window determine the length of the record required.
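Condition 8, the removal of nonstationarity by differencing or by regression, can be sketched briefly in Python. The series below is synthetic and the coefficients are illustrative only, not data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)

# Synthetic nonstationary record: a linear trend plus stationary noise.
x = 0.05 * t + rng.normal(0.0, 1.0, t.size)

# First differencing removes a first-order trend; the trend survives
# only as the (constant) mean of the differenced series.
dx = np.diff(x)

# Regression detrending, the alternative mentioned in condition 8:
# subtract a least-squares straight line fitted to the record.
x_detrended = x - np.polyval(np.polyfit(t, x, 1), t)
```

The differenced series is one point shorter than the original, and the detrended series has essentially zero mean, so either form can be passed on to the spectral calculations without the low-frequency trend dominating the estimates.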

SPECTRAL ANALYSIS AND ITS USE IN PROCESS IDENTIFICATION

Statisticians describe spectral density as the distribution of variance over a range of frequencies. Electrical engineers visualize the same quantity as the distribution of power with frequency when a swept-frequency voltage or current wave is applied to unit resistance. In either case, spectral analysis is a method of describing and relating the frequency composition of signals. If an input signal of spectrum $p_{xx}(\omega)$ is applied to a linear system and the corresponding cross-power spectrum is $p_{xy}(\omega)$, the transfer function $H(j\omega)$ of the system is given by:

$$H(j\omega) = \frac{p_{xy}(\omega)}{p_{xx}(\omega)} \qquad (4-1)$$

From analogy with current flow in unit resistances, the average power contained in x(t) is defined by:

$$P_{av} = \frac{1}{2T} \int_{-T}^{T} x^2(t)\, dt \qquad (4-2)$$

If x(t) can be expanded in a Fourier series:

$$x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos \omega_n t + b_n \sin \omega_n t), \qquad (4-3)$$

the average power can also be expressed as:

$$P_{av} = \frac{a_0^2}{4} + \frac{1}{2} \sum_{n=1}^{\infty} (a_n^2 + b_n^2) \qquad (4-4)$$

The contribution to the sample variance or average power from the n-th harmonic is $\frac{1}{2}(a_n^2 + b_n^2)$. For an arbitrary stochastic function x(t), the power spectrum will be a continuous function of frequency. The power contributed by components of the voltage wave between zero and a given frequency $\omega_1$ can be expressed as

$$\int_0^{\omega_1} p(\omega)\, d\omega \qquad (4-5)$$

If this upper limit is varied from zero to a large value, the power consumed by the unit resistor will vary as a function of frequency $\omega_1$ as shown in Figure 4.1. A plot of the slope of this curve as a function of frequency (Figure 4.2) is called the power spectrum.

A non-deterministic signal is often regarded as a mixture of random and periodic components. White noise, whose autocorrelation function is a spike at lag 0, has a flat autospectrum. Periodicity in the autocorrelation and peaks in an autospectrum may indicate impulses due to periodic components in the signal. The cross-spectrum, a complex quantity, describes the frequency dependence between two signals. The absolute value of the cross-spectrum, also called the cross-amplitude spectrum, is the covariance between the input and the output components in the same frequency band. The

gain is given by

$$|H(j\omega)| = \frac{|p_{xy}(\omega)|}{p_{xx}(\omega)} \qquad (4-6)$$

where

$p_{xx}$ = autospectrum of x(t).
$p_{xy}$ = cross-spectrum of x(t) and y(t).

Thus the gain is essentially a regression coefficient defined at each frequency. The phase angle of the cross-spectrum, computed as the arc tangent of the ratio of the quadrature spectrum or imaginary part to the co-spectrum or real part, is a measure of the average phase differences between components in the same frequency range in the two records. Gain and phase values can be plotted against frequency to obtain frequency-response curves. The frequency-response function is then derived by graphical approximation methods.

Coherency is given by

$$W_{xy}^2(\omega) = \frac{|p_{xy}(\omega)|^2}{p_{xx}(\omega)\, p_{yy}(\omega)} \qquad (4-7)$$

This ratio is a measure of the linear correlation between input and output values at each frequency, and varies between 0 and 1 like the square of the ordinary correlation coefficient.

A distinction is sometimes made between power spectrum and power spectral density. Power spectrum is computed from covariance functions, whereas power spectral density is computed from

[Figure 4.1. Curve of power consumed by resistor as a function of cutoff angular frequency of low-pass filter.]

[Figure 4.2. Relationship between power spectrum (the slope of the curve of Figure 4.1) and the power curve of Figure 4.1.]

correlation functions. Thus the autospectrum is the product of the autospectral density and the variance of the signal. Gain is always computed from covariance-type functions. Correlations have values between -1 and 1 and are preferred for computations when numbers of different magnitudes are involved.

$$R_{xy}(\tau) = \frac{C_{xy}(\tau)}{s_x s_y}$$

where

$R_{xy}(\tau)$ is the correlation between x and y
$C_{xy}(\tau)$ is the covariance between x and y
$s_x$ is the standard deviation of x
$s_y$ is the standard deviation of y.

Spectral analysis methods are ideally suited for experimental determination of the frequency characteristics of noisy systems. There is considerable literature (25, 39, 60) describing the use of amplifier-testing techniques in the determination of process dynamics. These transient response and frequency response methods require specific input signals that can upset the process. The response to the test signals can get buried in the noise present in the process. The testing process itself may introduce non-linearities. Correlation techniques have been used (20, 47, 67) to overcome these problems. Various mechanical and electronic analog devices were designed (23, 47, 51) for computing correlation and spectral functions

of industrial processes. Recent advances in computing devices and algorithms have revived the interest in digital spectral analysis methods.

One of the advantages of the spectral approach is that the adjacent spectral estimates are not correlated as highly as are the auto- or cross-correlation estimates. If necessary, the time series can be filtered into frequency bands which may be analyzed separately. The spectral approach can be easily generalized to deal with multivariate systems. Though this method is derived for linear lumped-parameter systems, it can also be used to suggest possible models for simulation of non-linear systems. Hence spectral methods have been used in widely diverse fields such as geophysics (50, 53), detection of underground nuclear explosions (38, 54), correlation of electroencephalograms (18), economics (24), and for developing transfer functions of turboalternators (58).
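Equations 4-1, 4-6 and 4-7 can be illustrated by estimating the spectra through averaging periodograms of segments, one of the three methods examined in this thesis, and combining them into gain and coherency estimates. The sketch below is an added illustration using NumPy; the function name, segment length, and the noise-free gain of 0.5 are all hypothetical choices made for clarity:

```python
import numpy as np

def averaged_spectra(x, y, seg_len):
    """Estimate pxx, pyy and pxy by averaging raw periodograms of
    non-overlapping segments of the two records."""
    nseg = len(x) // seg_len
    pxx = np.zeros(seg_len // 2 + 1)
    pyy = np.zeros(seg_len // 2 + 1)
    pxy = np.zeros(seg_len // 2 + 1, dtype=complex)
    for i in range(nseg):
        X = np.fft.rfft(x[i * seg_len:(i + 1) * seg_len])
        Y = np.fft.rfft(y[i * seg_len:(i + 1) * seg_len])
        pxx += (X * np.conj(X)).real   # segment periodogram of x
        pyy += (Y * np.conj(Y)).real   # segment periodogram of y
        pxy += np.conj(X) * Y          # segment cross-periodogram
    return pxx / nseg, pyy / nseg, pxy / nseg

rng = np.random.default_rng(1)
x = rng.normal(size=4096)   # broadband "input" record
y = 0.5 * x                 # a pure gain of 0.5, no noise, for illustration

pxx, pyy, pxy = averaged_spectra(x, y, 256)
gain = np.abs(pxy) / pxx                    # equation 4-6
phase = np.angle(pxy)                       # phase of the cross-spectrum
coherency = np.abs(pxy) ** 2 / (pxx * pyy)  # equation 4-7
```

With this noise-free linear relation the coherency is 1 at every frequency; noise uncorrelated with the input would pull it below 1, which is precisely what makes coherency useful as a diagnostic. Note also that the averaging over segments is essential: the coherency computed from a single unaveraged periodogram is identically 1 regardless of noise.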

COMPUTATION OF SPECTRAL DENSITY FUNCTIONS

Basically there are two types of methods for computing power spectra. The direct methods involve the computation of the periodograms from the data itself or from the data modified by a window. In the indirect methods, the first step is the calculation of the correlation functions, which are then weighted and Fourier transformed. Two direct methods and one indirect one are described in the following sections. A new indirect method based on stepwise autoregression of autocorrelation functions has been recently proposed (43, 44). However, this method has not yet been sufficiently developed for the computation of cross-spectra.

Statistical estimates are chosen to have certain desirable properties such as minimum mean square error and consistency. An unbiased estimate is said to be consistent if its variance decreases (i.e., precision increases) as the sample length is increased. Determination of acceptable estimates of spectral density involves digital filtration at some stage of the computations. A sample record T seconds long is essentially a product of the infinitely long function x(t) and a square pulse T seconds long with unity amplitude. The Fourier transform of the finite realization or sample is thus the convolution of $A(f)$, the Fourier transform of the function x(t), with $\frac{\sin \pi T f}{\pi T f}$, the transform of the square pulse. The $\sin \pi T f / \pi T f$ function

has its maximum value of unity at f = 0 and has non-zero values at points other than 1/T, 2/T, etc. The amplitude of its side lobes decreases at 20 db per decade. If x(t) were periodic in the square pulse, its Fourier transform would have spectral lines exactly at f = 0, 1/T, 2/T, etc., and convolving A(f) with the $\sin \pi T f / \pi T f$ function would yield A(f). When x(t) is a non-periodic random function, the convolution will smear each spectral line of A(f) over the whole spectrum, its amplitude decreasing at 20 db per decade.

To reduce this leakage effect, the finite sample of x(t) is adjusted to be quasi-periodic in the time interval T by window-shaping methods. The given data points x(k) can be multiplied by the Hanning window:

$$W(k) = \frac{1}{2}\left(1 - \cos \frac{2\pi k}{N}\right), \quad k = 0, 1, \ldots, N-1, \qquad (5-1)$$

which approaches zero at both ends of the record. An equivalent operation in the frequency domain consists of taking local weighted averages of the spectra over frequencies surrounding the frequency for which the power spectrum is sought. Averaging with weights 1/4, 1/2, and 1/4 is considered to be similar to multiplying the covariance functions by the Hanning window:

$$W(k) = \frac{1}{2}\left(1 + \cos \frac{\pi k}{M}\right), \quad k = 0, 1, \ldots, M \qquad (5-2)$$

$$W(k) = 0, \quad k > M$$

where M is the maximum "lag" used (7). The value of M determines the bandwidth of the window. This latter form of the Hanning

window is used in the Blackman-Tukey method. A fairly elaborate discussion of the shapes and bandwidths of a variety of filters can be found in recent literature (9, 10, 24, 29, 33, 40, 43, 55). The selection is a compromise between two conflicting requirements:

a. A filter of wide bandwidth reduces the variance of the power density estimates, but the estimates are smudged. Because detail is masked, it is hard to detect differences between power at adjacent frequencies, and the bias of the estimates may be increased.

b. A narrow pass-band filter will focus on a particular frequency and show fine detail, but the variance of the estimate is fairly large. It may also cause spurious peaks in the spectral density curve.

Some advance knowledge of the structure of the spectrum is necessary for planning the number and the spacing of the samples. The bandwidth of the window and the degree of smoothing can then be varied to obtain satisfactory estimates of the spectral density. Spectral computation methods work most efficiently if the spectra are nearly uniform or white. If a side lobe of the spectral window coincides with a high peak in the true spectrum, a high ordinate will be mistakenly estimated at the frequency corresponding to the main lobe of the window. Also, high peaks of the autospectra will give unusually low values for the coherency estimates.

For optimum results, such known high peaks as well as low-frequency trends should be filtered out prior to the spectral calculations. If the frequency response of the pre-filter is known, it is relatively easy to adjust the final results.

Bode diagrams of the linear system can be developed from the computed values of the spectra. Let

p_xx(n) be the autospectrum of the input x(t),
p_yy(n) the autospectrum of the output y(t),
L_xy(n) the cospectrum, and
Q_xy(n) the quadrature spectrum.

Then, corresponding to the frequency ω_n,

Gain = (L²(n) + Q²(n))^(1/2) / p_xx(n)   (5-3)

Phase = arctan(-Q(n)/L(n))   (5-4)

Coherency = (L²(n) + Q²(n)) / (p_xx(n) p_yy(n))   (5-5)

These concepts are delineated in the next few sections. When it is known that the system has a transportation lag, the phase information can be utilized to compute the value of the time delay. For example, if the suspected transfer function is of the form

E exp(-sL) / (1 + Ts),

the gain will be E / (1 + ω²T²)^(1/2), and the corresponding phase will be given by

Φ(ω) = -ωL - arctan(ωT)   (5-6)

so that L can be determined.

Spectral Density by Smoothing Periodograms

In the periodogram method of computing spectral density, the Fourier coefficients are computed directly from the time series by using the fast Fourier transform (FFT). The values of the periodogram are then calculated and smoothed to obtain the power density spectrum.

The fast Fourier transform and the algorithms for its computation have been widely publicized; two entire issues of the IEEE Transactions on Audio and Electroacoustics (27, 28) have been devoted to the FFT and its applications. In the fast Fourier transform the given discrete function is expressed as a sum of cosine and sine terms of a Fourier series to approximate the Fourier integral of the non-periodic time series. In digital computation, the infinite integral has to be approximated by finite sums. The quantity computed by FFT methods is called the discrete Fourier transform and is regarded as a transform distinct from the Fourier integral or Fourier series transforms. The relationship between these transforms has been summarized here to explain why the sines and

cosines enter the computations of the spectra of non-periodic random phenomena.

If a time function is composed of overlapping sine waves of different frequencies, its Fourier transform will consist of a series of distinct impulses located at the various frequencies. The equation going from the time function to the frequency function is called the direct transform, or harmonic analysis. The other equation of the transform pair is called the harmonic inversion, or simply the inverse. The divisor for averaging the integral or the sum will be included with the direct transform, though other conventions also occur in the literature.

Fourier Series Transform

If x(t) is a periodic function of time having a period T, it can be represented in the form of the Fourier expansion:

x(t) = Σ[n=-∞..∞] C(n) exp(jω_n t)   (5-7)

where

C(n) = (1/T) ∫[-T/2, T/2] x(t) exp(-jω_n t) dt   (5-8)

The angular frequency ω_n = 2πn/T. Since x(t) and exp(-jω_n t) are periodic, the limits can be changed to 0, T, so that equation 5-8 can be rewritten as follows:

C(n) = (1/T) ∫[0, T] x(t) exp(-jω_n t) dt   (5-9)
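As a small numerical check of equations 5-8 and 5-9 (not part of the original thesis): for x(t) = cos(2πt/T) the coefficients are C(1) = C(-1) = 1/2 and zero elsewhere, and a Riemann-sum approximation of the integral over one period, which is just a DFT divided by N, reproduces this.

```python
import numpy as np

T, N = 2.0, 64
t = np.arange(N) * (T / N)          # N equispaced samples over one period
x = np.cos(2 * np.pi * t / T)

def C(n):
    # discrete approximation of eq. 5-9: (1/T) * integral of x(t) exp(-j w_n t)
    return np.sum(x * np.exp(-2j * np.pi * n * t / T)) * (T / N) / T

assert np.isclose(C(1), 0.5)        # the two spectral lines of the cosine
assert np.isclose(C(-1), 0.5)
assert abs(C(0)) < 1e-12            # all other coefficients vanish
assert abs(C(2)) < 1e-12
```

Because the cosine is band-limited and sampled over exactly one period, the discrete sum here is not merely an approximation but reproduces the series coefficients exactly.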

The Fourier Integral Transform

The Fourier integral has been used to describe the transient behaviour of linear systems. The transform pair is

A(f) = ∫[-∞, ∞] x(t) exp(-2πjft) dt   (5-10)

x(t) = ∫[-∞, ∞] A(f) exp(2πjft) df   (5-11)

In terms of the angular frequency ω,

A(ω) = (1/2π) ∫[-∞, ∞] x(t) exp(-jωt) dt   (5-12)

x(t) = ∫[-∞, ∞] A(ω) exp(jωt) dω   (5-13)

These relations can be derived from the Fourier series transforms by increasing the period T indefinitely. As T tends to infinity, the spacing between harmonics approaches zero, so that A(f) is a continuous function.

Discrete Fourier Transform

Time series data are usually not periodic and hence suggest the use of the integral transform. On the other hand, it is inappropriate to talk about the Fourier integral transform of a sequence of samples. A discrete sequence of samples x(kΔt) can be regarded as a set of equally spaced impulses associated with a time function:

x*(t) = Σ[k=-∞..∞] x(kΔt) δ(t - kΔt).   (5-14)

x*(t) has the Fourier transform

B*(ω) = Σ[k=-∞..∞] x(kΔt) exp(-jωkΔt).   (5-15)

It can be seen that the transform of equation 5-14 is periodic with period 2π/Δt.

The discrete Fourier transform of a finite sequence x(kΔt), k = 0, 1, ..., N-1, is defined as:

B(n) = (1/N) Σ[k=0..N-1] x(k) exp(-2πjkn/N), n = 0, 1, ..., N-1.   (5-16)

The inverse is:

x(k) = Σ[n=0..N-1] B(n) exp(2πjkn/N), k = 0, 1, ..., N-1.   (5-17)

These equations assume a sampling interval Δt equal to unity without sacrificing generality; the results can be modified for other values of Δt by using the relations stated in Appendix A. The discrete Fourier transform can be derived from either the Fourier integral transform or the Fourier series transform (16, 27). The derivations involve implicit assumptions that B(n) is periodic with period 1/Δt and that x(k) is periodic with period T = NΔt.

When the FFT is used for computing the Fourier series transform

of a periodic function x(t), x(t) is first sampled at intervals Δt, and the resulting FFT is actually the periodic function

Σ[l=-∞..∞] C(n + lN), n = 0, 1, ..., N-1   (5-18)

as shown in figure 5-1. However, only one period (l = 0) of this sum is viewed as B(n) and taken as an approximation to the true Fourier series transform C(n) of x(t). As shown in the figure, the negative half of C(n) is produced as the right half of B(n). The two tails of C(n) may differ from the corresponding part at the center of B(n); this difference depends on the aliasing error. Suitably small values of Δt will minimize this error in the frequency range of interest.

The following theorem explains the situation when the FFT is used to calculate the Fourier integral transform. If x(t) and A(f) are Fourier integral transforms of each other, then x_p(kΔt) and (1/Δt) a_p(nΔf) are discrete Fourier transforms of each other, where

Δf = 1/(NΔt),
x_p(kΔt) = the periodic sum Σ x(kΔt + lT),
a_p(nΔf) = the periodic sum Σ A(nΔf + lF),
k = 0, 1, ..., N-1 corresponds to the sampling points, and
F = 1/Δt, i.e., twice the Nyquist frequency.
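The divisor convention of equations 5-16 and 5-17, with 1/N attached to the direct transform, can be verified against a library FFT (a sketch, not from the thesis; NumPy attaches the divisor to the inverse instead):

```python
import numpy as np

def dft(x):
    """Eq. 5-16: direct transform with the 1/N divisor."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x / N

def idft(B):
    """Eq. 5-17: the inverse carries no divisor in this convention."""
    N = len(B)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return W @ B

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
B = dft(x)
assert np.allclose(idft(B), x)             # the pair really is a transform pair
assert np.allclose(B, np.fft.fft(x) / 16)  # NumPy's convention differs by 1/N
```

As the text notes, other conventions occur in the literature; the only requirement is that the product of the two divisors be 1/N.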

Figure 5.1. Relationship between the various Fourier transforms: C(n), the Fourier series coefficients; A(f), the Fourier integral transform; Σ C(n + lN), the sum of displaced Fourier series coefficients; and B(n), the discrete Fourier transform coefficients. [Plot not reproduced.]
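A small numerical illustration of the aliasing just described (not from the thesis): with Δt = 1, so that F = 1/Δt = 1, a cosine at f₀ and one at F - f₀ produce identical samples and therefore identical discrete transforms; the two frequencies fold onto one another.

```python
import numpy as np

N = 32
k = np.arange(N)
f0 = 3 / 32
x1 = np.cos(2 * np.pi * f0 * k)          # cosine at f0
x2 = np.cos(2 * np.pi * (1 - f0) * k)    # cosine at F - f0, F = 1/dt = 1

# cos(2*pi*k - theta) = cos(theta) for integer k, so the samples coincide
assert np.allclose(x1, x2)
assert np.allclose(np.fft.fft(x1), np.fft.fft(x2))
```

This is exactly why suitably small values of Δt are needed: any true spectral content above the Nyquist frequency reappears, indistinguishably, below it.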

Thus the FFT of the sampled version of the given signal is the periodic sum Σ[l=-∞..∞] C(n + lN), as shown. One period of this sum (l = 0) is chosen as the approximation to the desired Fourier integral transform A(f).

In the Fourier analysis of random functions, the term period refers to the length of the sample; the fundamental frequency is merely the reciprocal of the sample length. In the Blackman-Tukey method the period corresponds to the maximum lag used. The significance of the concept of frequency is thus not the same for random phenomena as it is for periodic phenomena.

Once Δt is chosen, the resolution of the frequency function can be increased by reducing Δf = 1/(NΔt), that is, by increasing the length N of the time series. A similar effect is obtained by extending the number of samples by adding zeroes. Addition of zeroes is necessary when the FFT is used for computing correlations and convolutions, to avoid the wrap-around error described later.

The FFT method is faster than the classical Blackman-Tukey procedure. Once the tedious computations of the periodograms are completed, various smoothing schemes can be tried on a small computer. The resulting spectra will contain weighted information from all lags of the correlation function and hence should show more detail

than the Blackman-Tukey method, which conventionally uses correlations computed for a limited number of lags. The procedure for computing spectra by smoothing periodograms is summarized below.

Data Preparation

The data should be adjusted to have a mean value of zero by subtracting the average of the series from each point. Any non-stationarity, especially trends, should be removed by differencing the data. Alternatively, a simple regression equation can be used to eliminate the non-zero mean and the trend in one operation. The data are then multiplied by a lag function so that they taper to zero at each end. The window used,

W(k) = 1 - |2k/N - 1|, k = 0, 1, ..., N-1,   (5-19)

has a shape similar to the Hanning window (16) but does not involve trigonometric functions. In this thesis, each series of 400 points was padded with zeroes at the end to obtain 512 points.

Calculation of Periodograms

The Fourier coefficients of the modified data can be computed for each series by using the Cooley-Tukey algorithm (15). Several subroutines (13, 38, 62) are available for computing the Fourier coefficients. The 512 real values of the data points yield 257 complex

points of the form (a_0, 0), (a_1, b_1), ..., (a_256, 0). For autospectra, the periodograms are computed as:

I_xx(n) = (N'² / 2πNU)(a_n² + b_n²), n = 0, 1, ..., N'/2   (5-20)

where

ω_n = 2πn/N',
U = Σ[k=0..N-1] W²(k),
N = number of points in the given time series,
N' = number of points in the extended time series (padded with zeroes),
W(k) = weights of the lag window, and
* denotes the complex conjugate.

The real and imaginary parts of the cross-spectrum are computed in two steps. If x(k) transforms to A_n = a_1n - jb_1n and y(k) transforms to B_n = a_2n - jb_2n, then the cross-periodogram is:

I_xy(n) = (N'² / 2πNU) A_n* B_n
        = (N'² / 2πNU)[(a_1n a_2n + b_1n b_2n) - j(b_1n a_2n - a_1n b_2n)], n = 0, 1, ..., N'/2   (5-21)
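A sketch of this computation in modern terms (function names, edge handling, and the sign convention for Q are assumptions, not the thesis program). The three-point (1/4, 1/2, 1/4) smoothing with decimation used in this chapter is included; with a library FFT the N'² factor cancels against the transform's own scaling, leaving 1/(2πNU).

```python
import numpy as np

def cross_periodogram(x, y, w, n_fft):
    """Eqs. 5-20/5-21: co- and quadrature periodograms of two tapered,
    zero-padded series.  For y = x this reduces to the autoperiodogram."""
    N = len(x)
    U = np.sum(w ** 2)
    X = np.fft.rfft(x * w, n_fft)      # rfft zero-pads to n_fft points
    Y = np.fft.rfft(y * w, n_fft)
    I = np.conj(X) * Y / (2 * np.pi * N * U)
    return I.real, -I.imag             # L(n), Q(n); Q sign conventions vary

def smooth(p, passes=1):
    """Three-point (1/4, 1/2, 1/4) smoothing, keeping every other point;
    the end values are held (an assumption -- the thesis does not say)."""
    for _ in range(passes):
        q = np.concatenate([p[:1], p, p[-1:]])
        p = (0.25 * q[:-2] + 0.5 * q[1:-1] + 0.25 * q[2:])[::2]
    return p

rng = np.random.default_rng(0)
x = rng.standard_normal(400)
w = 1.0 - np.abs(2 * np.arange(400) / 400 - 1)   # the window of eq. 5-19
L, Q = cross_periodogram(x, x, w, 512)
assert L.shape == (257,)       # 512 real points give 257 complex points
assert np.allclose(Q, 0)       # quadrature part of a series with itself
assert np.all(L >= -1e-12)     # autoperiodogram is non-negative
L2 = smooth(L, passes=2)
assert L2.shape == (65,)       # two smooth-and-halve passes: 257 -> 129 -> 65
```

Two smoothing passes, as used in the thesis, reduce the 257 raw ordinates to 65 estimates, each a weighted average of several neighbouring periodogram values.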

The cospectrum L and the quadrature spectrum Q are computed as:

L(n) = (N'² / 2πNU)[a_1n a_2n + b_1n b_2n]   (5-22)

Q(n) = (N'² / 2πNU)[b_1n a_2n - a_1n b_2n]   (5-23)

Smoothing of Periodograms

The periodograms obtained in the last section were smoothed by convolving them with the three-point operator (1/4, 1/2, 1/4) and then eliminating every other point. This procedure was repeated twice, and would have been repeated further if the time series were longer. Such drastic smoothing would not be needed on data resembling a signal rather than noise. Though convolution with the above three-point operator is equivalent to multiplication of the correlation function by the Hanning function (equation 5-2), the statistical effect of the decimation is not clear. Since each point of the final estimate is based on the average of several initial points, the method appears intuitively reasonable. When there are over 1000 frequencies between d-c and the Nyquist frequency, this procedure can reduce the results to a manageable size. Unfortunately, this smoothing of the spectral curve is accompanied by an increase in the correlation between adjacent spectral values.

Power spectra can also be smoothed by using symmetrical

triangular weighting of the form:

p̄(n) = (1/m²) Σ[k=-m+1..m-1] (m - |k|) p(n+k)   (5-24)

The index of the smoothed value is the center of the span 2m - 1. This smoothing is equivalent to multiplying the correlation estimate by

[sin(mπk/N) / (m sin(πk/N))]², k = 0, 1, ...,

where N is the length of the time series. The three-point operator used above corresponds to a triangular smoothing with span equal to three.

Spectral Density by Averaging Periodograms of Segmented Data

In this procedure, the given data are sectioned into several segments of equal length E. Periodograms are computed for each section by the FFT algorithm. The periodogram values at each frequency are averaged over all the segments to obtain spectral density values. No other windows need be applied to the periodogram. Non-stationarity will be indicated if the values from some segments are grossly different from the others. This method is particularly suitable for very long strings of data (16, 64).
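A minimal sketch of this segment-averaging estimate (the helper name and edge details are assumptions): segment length E = 100 with half overlap D = 50 and padding to E' = 128, the values used for the 400-point series of this thesis, with the taper of equation 5-19 applied to each demeaned segment.

```python
import numpy as np

def segment_average_spectrum(x, E=100, D=50, n_fft=128):
    """Segmented-periodogram estimate, eqs. 5-25 to 5-27: half-overlapping,
    demeaned, tapered, zero-padded segments, periodograms averaged."""
    K = (len(x) - E) // D + 1                  # number of segments, eq. 5-25
    k = np.arange(E)
    w = 1.0 - np.abs(2.0 * k / E - 1.0)        # the taper of eq. 5-19
    U = np.sum(w ** 2)
    acc = np.zeros(n_fft // 2 + 1)
    for i in range(K):
        seg = x[i * D:i * D + E]
        A = np.fft.rfft((seg - seg.mean()) * w, n_fft)
        acc += np.abs(A) ** 2 / (2 * np.pi * E * U)   # eq. 5-26
    return acc / K                             # eq. 5-27

rng = np.random.default_rng(3)
x = rng.standard_normal(400)
p = segment_average_spectrum(x)
assert p.shape == (65,)        # 128-point transforms give 65 ordinates
assert np.all(p >= 0)          # an average of periodograms is non-negative
```

Inspecting the K individual periodograms before averaging gives the non-stationarity check mentioned above: a grossly deviant segment stands out immediately.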

Data Preparation

The given time series record is divided into K segments, each of length E. The starting points of the segments are D units apart, so that the subseries are of the form:

X_1(k) = x(k), k = 0, 1, 2, ..., E-1
X_2(k) = x(k + D), k = 0, 1, 2, ..., E-1   (5-25)
...
X_K(k) = x(k + [K-1]D), k = 0, 1, 2, ..., E-1

If the length of the time series is fixed, the variance is known to be near its minimum value when the segments overlap by one half their length, i.e., when D = E/2. The data in each segment are adjusted to have a mean value of zero and then windowed using the W(k) of equation 5-19. There are then K sequences, each of the form x(k)W(k). Each segment of length E is extended to E' by adding zeroes; E' is a power of two, preferably equal to 2E. In this case, each subseries of 100 points was extended to 128 points by adding zeroes.

Calculation of Periodograms

Finite Fourier transforms of each extended series were computed by the fast Fourier transform. K sets of periodograms were calculated as

I_k(n) = (E'² / 2πEU) |A_k(n)|², k = 1, 2, ..., K   (5-26)

where

A_k(n) = complex Fourier coefficient of the k-th subseries,
ω_n = 2πn/E', n = 0, 1, ..., E'/2, and
U = Σ[j=0..E-1] W²(j).

In the estimation of cross-spectra, each of the two series x(k) and y(k) was sectioned, windowed and transformed as above, and K sets of cross-periodograms were computed for each pair of segments by formulas analogous to equation 5-21.

Averaging the Periodograms

Spectral estimates were computed by averaging the periodograms over the K values at each frequency:

p̂(n) = (1/K) Σ[k=1..K] I_k(n)   (5-27)

where ω_n = 2πn/E', n = 0, 1, 2, ....

Spectral Density by Fourier Transformation of Covariance Functions

In the Blackman-Tukey method, the autocovariance and cross-covariance functions are computed as statistical averages of lagged

products. These averages are smoothed and converted into spectral densities by Fourier transformation. In the classical Blackman-Tukey method the time averages were computed for about 20 percent of the possible lags. Recently there has been a trend toward computing the covariance functions by a two-step fast Fourier transformation. In either case, the bandwidth of the smoothing window is controlled by changing the number of lags involved; fewer lags correspond to a smoother filter and a lower variance of the estimate. The method is based on the Wiener-Khinchin theorem, which states that the covariance function and the corresponding power spectrum are Fourier transforms of each other. The computations are outlined below.

Data Preparation

The observed values of the discrete series, or samples of the continuous series, must be adjusted to have a mean value of zero. Any non-stationarity present in the series should be removed by appropriate filters (9, 33). A linear trend is a common non-stationarity that may become obvious in the initial correlation and spectral density curves; it may be reduced by differencing the data. Alternatively, the trends as well as the mean can be computed by regression analysis and subtracted from the observations. The residuals are then analyzed for spectral characteristics.

Computation of Covariance Functions

Blackman-Tukey Method. Cross-covariances are computed as follows:

C_xy(T) = (1/(N-T)) Σ[k=0..N-T-1] x_k y_(k+T), T = 0, 1, ..., M-1   (5-28)

C_yx(T) = (1/(N-T)) Σ[k=0..N-T-1] x_(k+T) y_k, T = 0, 1, ..., M-1   (5-29)

where

T = lag,
M = maximum number of lags used,
x_k = k-th point in the input,
y_k = k-th point in the output,
N = total number of observations in each time series, and
C_xy(T) = cross-covariance of x and y with y lagging x. Some authors (33) interpret this to mean just the opposite.

Sometimes the divisor N - T is replaced by N. The use of N is preferred by some mathematicians (7, 40) because the resulting estimate has a lower mean square error. Either of the above equations gives the autocovariance when both x and y stand for the same function.

Basically these functions are time averages of products of two values separated in time by a constant interval of T time units. If the two functions have identical shape except that y lags x by a fixed interval u, then the average C_xy(T) will have a relatively high value

at T = u. Thus the covariance curve indicates how the two time series vary with respect to each other; the cross-covariance function shows the lag at which the input has the maximum effect on the output.

Covariance functions are used for simulating processes with suitable models (17). This may not always be desirable, because adjacent values of the covariances of time series data are highly correlated. But covariances can be used to compute estimates of spectral density functions whose adjacent points are relatively independent.

Covariance Functions by Fast Fourier Transformation. If the discrete Fourier transforms of two series are multiplied, the inverse finite Fourier transform of the product will yield the convolution of the two original functions. This convolution is cyclic, i.e., when one sequence moves beyond the end of the other it does not encounter zeroes, but rather the periodic extension of the sequence. This "wrap-around" will also be present in covariances computed by fast Fourier transform methods. By adding zeroes to the data prior to the transformation, non-cyclic covariances can be obtained from the cyclic products (16). To calculate the sample autocovariance functions up to M lags, at least M zeroes must be added to the data that was adjusted earlier. For the FFT algorithm used here, N' can be the smallest power of 2 with N' ≥ N + M, where N is the

length of the given series. The extended sequence will have the form:

x_e(k) = x(k), k = 0, ..., N-1
       = 0,    k = N, ..., N'-1

If A(n) is the finite Fourier transform of x_e(t), then Z(T) can be obtained as the finite Fourier transform of the products A(n)·A(N'-n). It should be noted that A(N'-n) is the complex conjugate of A(n). The autocovariance is given by:

C_xx(T) = (N'/(N-T)) Z(T), T = 0, 1, ..., N'-N   (5-30)

The first (N'-N+1) values of Z(T) are unaffected by the wrap-around and will yield the sample autocovariance functions.

To calculate cross-covariances, each of the time series should be extended with zeroes to the length N' as before. For ease of computation both series should have the same length N. The finite Fourier transforms B(n) of y(t) and A(n) of x(t) are then computed. The inverse Z_xy(T) of the products A(N'-n)·B(n) will yield the cross-covariances:

C_xy(T) = (N'/(N-T)) Z_xy(T), T = 0, 1, ..., M   (5-31)

C_yx(T) = (N'/(N-T)) Z_xy(N'-T), T = 1, 2, ..., M   (5-32)

From the N' values of Z_xy(T) obtained for T = 0, ..., N'-1, the first M + 1 values yield the cross-covariances C_xy(T) for T = 0, ..., M with y lagging x. The last M values yield the

corresponding cross-covariances for T = -M to -1, with y leading x.

Weighting the Correlation Functions

To obtain statistically consistent estimates of the spectral densities, the correlation functions must be weighted prior to Fourier transformation. As shown later in equation 5-35, this is a fairly straightforward procedure in the computation of autospectral densities. The cross-covariances have to be handled as follows:

a. The cross-covariance function must be aligned so that its largest absolute value is lined up with the origin, i.e., at zero lag. Failure to do this can result in low values and biases in the estimates of the amplitude gain and coherency (1, 2, 3, 4).

b. From the aligned cross-covariances, two new functions are formed to correspond to the real and imaginary parts of the cross-spectrum:

E(T) = 0.5 [C_xy(T) + C_yx(T)], T = 0, 1, ..., M   (5-33)

O(T) = 0.5 [C_xy(T) - C_yx(T)], T = 0, 1, ..., M   (5-34)

The second function is obtained by differencing two quantities which may be very close to each other, especially if correlations are used instead of covariances. This can cause sign reversals which persist through the computation of the arctangent of the phase

angle. The resulting angle will shift from one quadrant to the other and may not be meaningful.

The Hanning cosine-arch window of equation 5-2 was used in this thesis. Its equivalent bandwidth is inversely proportional to the maximum lag M (33). A higher value of M will show finer details of the spectrum. In empirical spectral analysis, one would vary M from, say, 4 up to the maximum lag used in the computation of the covariances. The optimum value is the one beyond which a higher value of M gives no significant enhancement of the main details of the spectral density curve. Too high a value of M can introduce noise in the estimates. On the other hand, if significant features continue to appear in the curves as M is increased to its maximum value, the correlation functions should be computed to a larger number of lags. This in turn may indicate the need for longer data series.

Transforming the Weighted Correlations

Autospectral estimates can be obtained as

p_xx(n) = 2Δt [C_xx(0) + 2 Σ[T=1..M-1] C_xx(T) W(T) cos(πTn/U)]   (5-35)

The subscript n will vary from 0 to U. If U = M, roughly M/2 independent values of the spectral density may be obtained. U can be about 3 times as large as the number of lags used; the closer spacing of the resulting values will show any peaks that may otherwise be missed.
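Equation 5-35 can be sketched as follows (a hypothetical helper, not the thesis program). A convenient check: white noise has C(0) equal to the variance and C(T) = 0 otherwise, so the estimate should come out flat at 2Δt·C(0) for every n.

```python
import numpy as np

def bt_autospectrum(C, M, U, dt=1.0):
    """Eq. 5-35: cosine-transform the Hanning-weighted autocovariances
    C(0), ..., C(M-1), evaluated at n = 0, 1, ..., U."""
    T = np.arange(1, M)
    W = 0.5 * (1 + np.cos(np.pi * T / M))    # Hanning lag window, eq. 5-2
    cosines = np.cos(np.pi * np.outer(T, np.arange(U + 1)) / U)
    return 2 * dt * (C[0] + 2 * (C[1:M] * W) @ cosines)

# ideal white-noise covariances: unit variance, zero at every other lag
C = np.zeros(16)
C[0] = 1.0
p = bt_autospectrum(C, M=16, U=32)
assert p.shape == (33,)         # U = 2M gives the closer spacing noted above
assert np.allclose(p, 2.0)      # flat at 2*dt*C(0), as expected
```

Taking U = 2M here, twice the number of lags, illustrates the closer spacing of ordinates described in the text; the extra points are interpolates, not additional independent estimates.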

The corresponding formulas for the cospectrum L and the quadrature spectrum Q are as follows:

L(n) = 2Δt [E(0) + 2 Σ[T=1..M-1] E(T) W(T) cos(πTn/U)]   (5-36)

Q(n) = 4Δt Σ[T=1..M-1] O(T) W(T) sin(πTn/U)   (5-37)

An algorithm that computes only one trigonometric function per frequency is available (33). This is somewhat slower than the fast Fourier transform, but is easy to implement on small computers.
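Putting the pieces of this indirect route together (helper names are assumptions, not from the thesis): covariances by equations 5-28/5-29, even and odd parts by 5-33/5-34, Hanning weighting by 5-2, the transforms 5-36/5-37, and phase by equation 5-4. The alignment step of the previous subsection is deliberately skipped here so that a known delay stays visible in the phase.

```python
import numpy as np

def cross_cov(x, y, M):
    """Eqs. 5-28/5-29 with divisor N - T, lags T = 0, ..., M-1."""
    N = len(x)
    return np.array([np.dot(x[:N - T], y[T:]) / (N - T) for T in range(M)])

def cross_spectrum(x, y, M, U, dt=1.0):
    Cxy = cross_cov(x, y, M)
    Cyx = cross_cov(y, x, M)
    E = 0.5 * (Cxy + Cyx)                        # eq. 5-33
    O = 0.5 * (Cxy - Cyx)                        # eq. 5-34
    T = np.arange(1, M)
    W = 0.5 * (1 + np.cos(np.pi * T / M))        # Hanning lag window, eq. 5-2
    n = np.arange(U + 1)
    L = 2 * dt * (E[0] + 2 * (E[1:] * W) @ np.cos(np.pi * np.outer(T, n) / U))
    Q = 4 * dt * ((O[1:] * W) @ np.sin(np.pi * np.outer(T, n) / U))
    return L, Q                                  # eqs. 5-36 and 5-37

# an output lagging the input by three samples: phase should fall as -3*omega
rng = np.random.default_rng(7)
x = rng.standard_normal(5000)
y = np.concatenate([np.zeros(3), x[:-3]])
M = U = 40
L, Q = cross_spectrum(x, y, M, U)
phase = np.arctan2(-Q, L)                        # eq. 5-4
omega = np.pi * np.arange(U + 1) / U             # rad/sample, with dt = 1
assert np.allclose(phase[1:9], -3 * omega[1:9], atol=0.25)
```

The recovered phase slope is the transportation-lag estimate discussed around equation 5-6; the loose tolerance reflects the sampling variability of the covariance estimates, the very difficulty the Results chapter reports for real, short records.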

RESULTS

Results and Discussion

The sampling interval of two seconds limits the spectra to frequencies under 1.57 radians per second. Because of the instability at the higher frequencies, the following conclusions are confined to the lowest third of that range.

The autospectra and frequency response were computed by three different methods. The three techniques, described in section V, have been identified as: 1. smoothing of periodograms obtained by the fast Fourier transform (FFT), 2. averaging periodograms of segmented time series, and 3. the Blackman-Tukey method. Values were calculated at equispaced points, the spacing being a sixty-fourth of the total frequency range.

The Blackman-Tukey method gave negative values for the autospectra at frequencies above 0.56 radians per second. Though the spectra of physical systems are always positive, spectral estimates derived from finite-length samples can have negative values. Positive estimates may be assured by using special spectral windows, at the price of introducing higher correlation between the values at adjacent frequencies. The standard Hanning window (equation 5-2) was used in these computations. In this

investigation there is no interest in frequencies above 0.56 radians per second; hence, other spectral windows were not investigated.

Values of the autospectra are shown in table 1 and in figures 6.1 to 6.6. Almost identical autospectra were obtained by all three methods. The output variable shows high energy at low frequencies. The autocovariance of the output, however, does not indicate any significant trend. The input variable has a conspicuous peak at about 0.2 radians per second, which confirmed the periodicity of the correlation function of the input. The reason for the periodic behaviour could not be traced, since instrumentation changes had taken place. The input variable itself was on closed-loop control, and its cyclic behaviour could have been the result of incorrect values of the controller parameters. Instrument technicians approximate the values of the gain, reset rate and derivative constant from the amplitude of the oscillation observed as a result of a step input. But changes in operating levels can have profound effects, particularly on processes involving transport lags.

Table 2 and figures 6.7, 6.8 and 6.9 show the gain of the process computed by the three different methods. The three curves are not incompatible. A break frequency of 0.17 radians per second is indicated. By graphical approximation a transfer function of approximately

e^(-26s) / (1 + s/0.17)²

Table 1. Sample Autospectra for the Output and the Input by Three Different Methods. Columns: frequency ω in radians per second; output and input autospectra by smoothing periodograms, by averaging periodograms, and by the Blackman-Tukey method. [Tabulated values not reproduced.]


Figure. Autospectral density of input. Method: Blackman-Tukey, using 16 lags. [Plot not reproduced.]

Figure. Autospectrum of output. Method: Smoothing periodograms. [Plot not reproduced.]

Figure 6.4. Autospectrum of input. Method: Smoothing periodograms. [Plot not reproduced.]


Figure. Autospectrum of input. Method: Averaging periodograms of segmented series. [Plot not reproduced.]

Figure. Gain vs. frequency. Method: Blackman-Tukey, using 16 lags. [Plot not reproduced.]

Figure 6.8. Gain vs. frequency. Method: Smoothing periodograms. [Plot not reproduced.]

68

[Figure: caption and plot not legible in this copy.]

69

Table 2. Frequency Response by Three Methods of Estimating Spectra

                    By Smoothing            By Averaging            By Blackman-Tukey
                    Periodograms            Periodograms            Method
Frequency,          Gain,   Phase Angle,    Gain,   Phase Angle,    Gain,   Phase Angle,
radians/second      dB      radians         dB      radians         dB      radians

[Numerical entries not legible in this copy.]
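The quantities tabulated above, gain in decibels and phase angle, together with the coherency discussed in the following pages, can all be derived from smoothed spectral estimates. A minimal sketch, not the thesis's original program: the function and variable names are illustrative, and the smoothed estimates of the autospectra and cross-spectrum are assumed to be available from any of the three computation methods.

```python
import numpy as np

def frequency_response(sxx, syy, sxy):
    """Derive gain, phase, and coherency from smoothed spectral estimates.

    sxx, syy : real autospectra of the input and output series
    sxy      : complex cross-spectrum (co-spectrum + j * quadrature spectrum)
    """
    # Transfer function estimate H(w) = Sxy(w) / Sxx(w); gain is |H| in dB
    gain_db = 20.0 * np.log10(np.abs(sxy) / sxx)
    # Phase angle from the quadrature and co-spectrum components
    phase = np.arctan2(sxy.imag, sxy.real)
    # Squared coherency, 0 <= gamma^2 <= 1
    coherency = np.abs(sxy) ** 2 / (sxx * syy)
    return gain_db, phase, coherency
```

For a noise-free linear relationship the coherency returned is identically one; values well below one, as in this illustration, indicate noise, unconsidered inputs, or non-linearity.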

70

has been derived. The transportation lag of 13 time units is based on the cross-correlation results and on prior knowledge. Better estimates of this lag can generally be obtained from the phase information. However, the phase calculations depend heavily on the result of subtracting two nearly equal quantities. The phase angles obtained by the three methods are not mutually consistent and involve frequent changes in sign. Since the data were accurate to only four significant figures and the series were relatively short, great confidence cannot be placed in the phase angle estimates. When better data are available, it may be justifiable to use double-precision arithmetic for the phase angle computations. The value of K varies with the method used.

Coherency is an index of the efficiency of the derived linear input-output relationship. Values near unity indicate the highest degree of linear association between the two quantities. Most of the values in this illustration are under 0.4. Since only one of the several possible inputs has been considered, and the process is known to have non-linearities, such low values of coherency are to be expected. Coherency can be used as one of the criteria for selecting the most important causal variables in multivariable systems. Unlike the correlation coefficient of regression, however, consistent estimates of coherency have not been obtained in spectral analysis. Analysis of this type is usually based on the assumption that the input variables are known accurately, all the error being in the

71

output variable. This condition is not satisfied by the data used here.

At this stage, it would be premature to state that any one of the computation methods is preferable to the others. The familiar Blackman-Tukey method is quite flexible: including fewer lags widens the bandwidth and smooths the curve. A fast Fourier technique for smoothing by varying the number of lags has been described but not illustrated. The illustrations are based on the special fast Fourier algorithms that are applicable only when the number of points is a power of two. These algorithms are extremely fast but lack the flexibility to handle an arbitrary number of lags. The situation will change as newer FFT methods are developed.

The method of averaging periodograms of segments of the given series has some unique features. It is easily adapted to very long series, and since periodograms are computed for small segments of the data, non-stationarity is easy to detect. However, this method applies the least amount of local smoothing, which is reflected in rough curves. Parzen's method (43, 44) for computing spectra by autoregression of autocovariances has not been explored.

A frequency analysis is shown in figures 6.10 and . The range of the series was divided into 20 intervals to obtain the abscissa. Recurrence of some values in the original data indicates the lack of resolution in the measuring and data logging system.
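The segment-averaging scheme described above can be sketched as follows. This is a rough illustration under the power-of-two restriction mentioned in the text, not the thesis's program; the segment length of 64 and the use of non-overlapping segments are arbitrary choices here.

```python
import numpy as np

def averaged_periodogram(x, seg_len=64):
    """Estimate the autospectrum by averaging the periodograms of
    non-overlapping segments, each of power-of-two length so that
    the radix-2 FFT applies."""
    x = np.asarray(x, dtype=float)
    n_seg = len(x) // seg_len
    spectra = []
    for k in range(n_seg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        seg = seg - seg.mean()                 # remove the segment mean
        X = np.fft.rfft(seg)                   # Fourier coefficients of the segment
        spectra.append(np.abs(X) ** 2 / seg_len)
    # Averaging across segments supplies the smoothing; comparing the
    # individual segment periodograms exposes non-stationarity.
    return np.mean(spectra, axis=0)
```

Because each segment is short, the resulting spectral estimate has a wide bandwidth and little local smoothing, consistent with the rough curves noted above.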

72

[Figure 6.10. Histogram of output data. Maximum value = 477.9, minimum value not legible. Abscissa: class interval. Plot not legible in this copy.]
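The 20-interval frequency analysis underlying the histogram can be sketched as follows; this is a hypothetical helper, not taken from the thesis.

```python
import numpy as np

def frequency_analysis(series, n_intervals=20):
    """Divide the range of the series into equal class intervals and
    count the observations falling in each, giving the abscissa and
    ordinates of a histogram such as Figure 6.10."""
    lo, hi = min(series), max(series)
    counts, edges = np.histogram(series, bins=n_intervals, range=(lo, hi))
    return counts, edges
```

Repeated counts piling up in a few class intervals, with neighboring intervals empty, is the signature of the quantization (limited resolution) in the measuring and data-logging system noted above.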


More information

Module 4. Signal Representation and Baseband Processing. Version 2 ECE IIT, Kharagpur

Module 4. Signal Representation and Baseband Processing. Version 2 ECE IIT, Kharagpur Module Signal Representation and Baseband Processing Version ECE II, Kharagpur Lesson 8 Response of Linear System to Random Processes Version ECE II, Kharagpur After reading this lesson, you will learn

More information

INTRODUCTION TO DELTA-SIGMA ADCS

INTRODUCTION TO DELTA-SIGMA ADCS ECE37 Advanced Analog Circuits INTRODUCTION TO DELTA-SIGMA ADCS Richard Schreier richard.schreier@analog.com NLCOTD: Level Translator VDD > VDD2, e.g. 3-V logic? -V logic VDD < VDD2, e.g. -V logic? 3-V

More information

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL.

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL. Adaptive Filtering Fundamentals of Least Mean Squares with MATLABR Alexander D. Poularikas University of Alabama, Huntsville, AL CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is

More information

EA2.3 - Electronics 2 1

EA2.3 - Electronics 2 1 In the previous lecture, I talked about the idea of complex frequency s, where s = σ + jω. Using such concept of complex frequency allows us to analyse signals and systems with better generality. In this

More information

Automated Modal Parameter Estimation For Operational Modal Analysis of Large Systems

Automated Modal Parameter Estimation For Operational Modal Analysis of Large Systems Automated Modal Parameter Estimation For Operational Modal Analysis of Large Systems Palle Andersen Structural Vibration Solutions A/S Niels Jernes Vej 10, DK-9220 Aalborg East, Denmark, pa@svibs.com Rune

More information

Vibration Testing. an excitation source a device to measure the response a digital signal processor to analyze the system response

Vibration Testing. an excitation source a device to measure the response a digital signal processor to analyze the system response Vibration Testing For vibration testing, you need an excitation source a device to measure the response a digital signal processor to analyze the system response i) Excitation sources Typically either

More information

Multi-Level Fringe Rotation. JONATHAN D. ROMNEY National Radio Astronomy Observatory Charlottesville, Virginia

Multi-Level Fringe Rotation. JONATHAN D. ROMNEY National Radio Astronomy Observatory Charlottesville, Virginia I VLB A Correlator Memo No. S3 (8705) Multi-Level Fringe Rotation JONATHAN D. ROMNEY National Radio Astronomy Observatory Charlottesville, Virginia 1987 February 3 Station-based fringe rotation using a

More information

1.4 Fourier Analysis. Random Data 17

1.4 Fourier Analysis. Random Data 17 Random Data 17 1.4 Fourier Analysis In the physical world we sample (and acquire) signals in either the temporal domain or the spatial domain. However, we are frequently interested in thefrequencycontentof

More information

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking

More information

Spectral Analysis. Jesús Fernández-Villaverde University of Pennsylvania

Spectral Analysis. Jesús Fernández-Villaverde University of Pennsylvania Spectral Analysis Jesús Fernández-Villaverde University of Pennsylvania 1 Why Spectral Analysis? We want to develop a theory to obtain the business cycle properties of the data. Burns and Mitchell (1946).

More information

BCT Lecture 3. Lukas Vacha.

BCT Lecture 3. Lukas Vacha. BCT Lecture 3 Lukas Vacha vachal@utia.cas.cz Stationarity and Unit Root Testing Why do we need to test for Non-Stationarity? The stationarity or otherwise of a series can strongly influence its behaviour

More information

Chapter 3 Derivatives

Chapter 3 Derivatives Chapter Derivatives Section 1 Derivative of a Function What you ll learn about The meaning of differentiable Different ways of denoting the derivative of a function Graphing y = f (x) given the graph of

More information

Experiment 13 Poles and zeros in the z plane: IIR systems

Experiment 13 Poles and zeros in the z plane: IIR systems Experiment 13 Poles and zeros in the z plane: IIR systems Achievements in this experiment You will be able to interpret the poles and zeros of the transfer function of discrete-time filters to visualize

More information

ME 563 HOMEWORK # 7 SOLUTIONS Fall 2010

ME 563 HOMEWORK # 7 SOLUTIONS Fall 2010 ME 563 HOMEWORK # 7 SOLUTIONS Fall 2010 PROBLEM 1: Given the mass matrix and two undamped natural frequencies for a general two degree-of-freedom system with a symmetric stiffness matrix, find the stiffness

More information

Lecture 11: Spectral Analysis

Lecture 11: Spectral Analysis Lecture 11: Spectral Analysis Methods For Estimating The Spectrum Walid Sharabati Purdue University Latest Update October 27, 2016 Professor Sharabati (Purdue University) Time Series Analysis October 27,

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

Lecture 1: Introduction to System Modeling and Control. Introduction Basic Definitions Different Model Types System Identification

Lecture 1: Introduction to System Modeling and Control. Introduction Basic Definitions Different Model Types System Identification Lecture 1: Introduction to System Modeling and Control Introduction Basic Definitions Different Model Types System Identification What is Mathematical Model? A set of mathematical equations (e.g., differential

More information

Each of these functions represents a signal in terms of its spectral components in the frequency domain.

Each of these functions represents a signal in terms of its spectral components in the frequency domain. N INTRODUCTION TO SPECTRL FUNCTIONS Revision B By Tom Irvine Email: tomirvine@aol.com March 3, 000 INTRODUCTION This tutorial presents the Fourier transform. It also discusses the power spectral density

More information

Laplace Transform Analysis of Signals and Systems

Laplace Transform Analysis of Signals and Systems Laplace Transform Analysis of Signals and Systems Transfer Functions Transfer functions of CT systems can be found from analysis of Differential Equations Block Diagrams Circuit Diagrams 5/10/04 M. J.

More information

Radar Systems Engineering Lecture 3 Review of Signals, Systems and Digital Signal Processing

Radar Systems Engineering Lecture 3 Review of Signals, Systems and Digital Signal Processing Radar Systems Engineering Lecture Review of Signals, Systems and Digital Signal Processing Dr. Robert M. O Donnell Guest Lecturer Radar Systems Course Review Signals, Systems & DSP // Block Diagram of

More information

Continuous Fourier transform of a Gaussian Function

Continuous Fourier transform of a Gaussian Function Continuous Fourier transform of a Gaussian Function Gaussian function: e t2 /(2σ 2 ) The CFT of a Gaussian function is also a Gaussian function (i.e., time domain is Gaussian, then the frequency domain

More information