Atmospheric Flight Dynamics Example Exam 2 Solutions


1 Question

Given the autocovariance function

$$C_{\bar{x}\bar{x}}(\tau) = \tfrac{1}{2}\cos(\pi\tau) \tag{1.1}$$

of the stochastic variable $\bar{x}$, calculate the autospectrum $S_{\bar{x}\bar{x}}(\omega)$.

NOTE: Assume that

$$\cos(\pi\tau) = \frac{e^{j\pi\tau} + e^{-j\pi\tau}}{2} \tag{1.2}$$

and

$$\int_{-\infty}^{\infty} e^{-j\omega\tau}\, d\tau = 2\pi\delta(\omega). \tag{1.3}$$

1 Solution

To find the autospectrum $S_{\bar{x}\bar{x}}(\omega)$, we simply take the Fourier transform of the autocovariance function $C_{\bar{x}\bar{x}}(\tau)$. So,

$$S_{\bar{x}\bar{x}}(\omega) = \mathcal{F}\{C_{\bar{x}\bar{x}}(\tau)\} = \mathcal{F}\left\{\tfrac{1}{2}\cos(\pi\tau)\right\} = \frac{\pi}{2}\left(\delta(\omega - \pi) + \delta(\omega + \pi)\right). \tag{1.4}$$

You can do the last step directly if you know the Fourier transform of the cosine function by heart. If not, you can also derive it. We then have

$$\mathcal{F}\{\cos(\pi\tau)\} = \mathcal{F}\left\{\frac{e^{j\pi\tau} + e^{-j\pi\tau}}{2}\right\} = \int_{-\infty}^{\infty} \frac{e^{j\pi\tau} + e^{-j\pi\tau}}{2}\, e^{-j\omega\tau}\, d\tau. \tag{1.5}$$

Rewriting the above equation gives

$$\mathcal{F}\{\cos(\pi\tau)\} = \frac{1}{2}\left(\int_{-\infty}^{\infty} e^{-j(\omega+\pi)\tau}\, d\tau + \int_{-\infty}^{\infty} e^{-j(\omega-\pi)\tau}\, d\tau\right). \tag{1.6}$$

We can use the relation (1.3) for $\delta(\omega)$ to simplify the above equation. We then get

$$\mathcal{F}\{\cos(\pi\tau)\} = \frac{1}{2}\left(2\pi\delta(\omega - \pi) + 2\pi\delta(\omega + \pi)\right) = \pi\left(\delta(\omega - \pi) + \delta(\omega + \pi)\right). \tag{1.7}$$

2 Question

Prove that,

(a) the variance of the stochastic variable $\bar{y} = c$ equals $\sigma_{\bar{y}}^2 = 0$;
(b) $\mu_{\bar{x}}^2 = E\{\bar{x}^2\} - \sigma_{\bar{x}}^2$;
(c) if $\bar{y} = b\bar{x} + c$, then $\sigma_{\bar{y}}^2 = b^2\sigma_{\bar{x}}^2$.

In the above questions, $b$ and $c$ are constants.

2 Solution

(a) There are multiple ways to show this. One way is to use the expectation operator. We have

$$\sigma_{\bar{y}}^2 = E\left\{(\bar{y} - \mu_{\bar{y}})^2\right\} = E\left\{(c - c)^2\right\} = E\{0\} = 0. \tag{2.1}$$

Note that we have used $\mu_{\bar{y}} = c$. Another way to show it is to use the probability density function. For a stochastic variable $\bar{y} = c$, the probability density function equals $f_{\bar{y}}(y) = \delta(y - c)$. The mean of $\bar{y}$ is now given by

$$\mu_{\bar{y}} = \int_{-\infty}^{\infty} y f_{\bar{y}}(y)\, dy = \int_{-\infty}^{\infty} y\,\delta(y - c)\, dy = c. \tag{2.2}$$

The variance of $\bar{y}$ can now be found using

$$\sigma_{\bar{y}}^2 = \int_{-\infty}^{\infty} (y - \mu_{\bar{y}})^2 f_{\bar{y}}(y)\, dy = \int_{-\infty}^{\infty} (y - c)^2 \delta(y - c)\, dy = (c - c)^2 = 0. \tag{2.3}$$

(b) The variance of $\bar{x}$ can be found according to

$$\sigma_{\bar{x}}^2 = E\left\{(\bar{x} - \mu_{\bar{x}})^2\right\} = E\left\{\bar{x}^2 - 2\bar{x}\mu_{\bar{x}} + \mu_{\bar{x}}^2\right\} = E\{\bar{x}^2\} - 2\mu_{\bar{x}} E\{\bar{x}\} + \mu_{\bar{x}}^2. \tag{2.4}$$

If we use the fact that $\mu_{\bar{x}} = E\{\bar{x}\}$, we can rewrite the above to

$$\sigma_{\bar{x}}^2 = E\{\bar{x}^2\} - \mu_{\bar{x}}^2. \tag{2.5}$$

This is equivalent to the relation we wanted to prove.

(c) First, let's find the mean value of $\bar{y}$. It is equal to

$$\mu_{\bar{y}} = E\{\bar{y}\} = E\{b\bar{x} + c\} = b\mu_{\bar{x}} + c. \tag{2.6}$$

The variance of $\bar{y} = b\bar{x} + c$ can then be found using

$$\sigma_{\bar{y}}^2 = E\left\{(\bar{y} - \mu_{\bar{y}})^2\right\} = E\left\{(b\bar{x} + c - b\mu_{\bar{x}} - c)^2\right\} = b^2 E\left\{(\bar{x} - \mu_{\bar{x}})^2\right\} = b^2\sigma_{\bar{x}}^2. \tag{2.7}$$

3 Question

In figure 1 the product function $R_{\bar{u}\bar{u}}(\tau)$ of the stationary stochastic process $\bar{u}$ is given. What can be said about the properties of the stochastic variable $\bar{u}$?

(a) It is white noise.
(b) It is noise with a small bandwidth.
(c) It is white noise plus a sine.
(d) It is a sine.

3 Solution

Note that $R_{\bar{u}\bar{u}}(\tau)$ is a sinc function. The Fourier transform of a sinc function is a block (rectangular) function, so the PSD function $S_{\bar{u}\bar{u}}(\omega)$ is a block function. If this block function were infinitely wide, the PSD would be constant, and the signal would be white noise. However, the block function is not infinitely wide: the bandwidth is limited. We are therefore dealing with noise with a small bandwidth. The correct answer is (b).
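As a quick numerical sanity check (not part of the original exam), the three variance identities of question 2 can be verified with sampled data. The sketch below uses an arbitrary normal test distribution and arbitrary constants $b$ and $c$; any choices would do.

```python
import numpy as np

rng = np.random.default_rng(0)
b, c = 3.0, 5.0  # arbitrary constants

# (a) a constant "signal" has zero variance
const = np.full(1000, c)
var_const = np.var(const)

# (b) sigma_x^2 = E{x^2} - mu_x^2, i.e. mu_x^2 = E{x^2} - sigma_x^2
x = rng.normal(loc=2.0, scale=1.5, size=200_000)  # arbitrary test samples
mu_sq = np.mean(x) ** 2
mu_sq_from_moments = np.mean(x**2) - np.var(x)

# (c) y = b*x + c scales the variance by b^2; the shift c drops out
y = b * x + c
var_y = np.var(y)
var_y_predicted = b**2 * np.var(x)

print(var_const)                   # 0.0
print(mu_sq, mu_sq_from_moments)   # equal up to floating-point round-off
print(var_y, var_y_predicted)      # equal up to floating-point round-off
```

Note that these are exact algebraic identities of the sample moments (with `np.var` using the population convention, `ddof=0`), so they hold to machine precision, not merely approximately for large samples.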

Figure 1: Product function $R_{\bar{u}\bar{u}}(\tau)$

4 Question

Given the probability density function of the stochastic variable $\bar{x}$ with parameter $\lambda$ ($\lambda > 0$),

$$f_{\bar{x}}(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x \geq 0, \\ 0 & \text{if } x < 0. \end{cases} \tag{4.1}$$

Calculate the probability distribution function $F_{\bar{x}}(x)$, and prove that the mean value $\mu_{\bar{x}}$ and the variance $\sigma_{\bar{x}}^2$ are equal to

$$\mu_{\bar{x}} = \frac{1}{\lambda} \quad \text{and} \quad \sigma_{\bar{x}}^2 = \frac{1}{\lambda^2}. \tag{4.2}$$

4 Solution

The probability distribution function $F_{\bar{x}}(x)$ can be found by integrating the probability density function $f_{\bar{x}}(x)$. This gives

$$F_{\bar{x}}(x) = \int_{-\infty}^{x} f_{\bar{x}}(\tau)\, d\tau. \tag{4.3}$$

For $x < 0$, we thus simply have $F_{\bar{x}}(x) = 0$. If, however, $x \geq 0$, then

$$F_{\bar{x}}(x) = \int_{0}^{x} \lambda e^{-\lambda\tau}\, d\tau = \left[-e^{-\lambda\tau}\right]_{0}^{x} = 1 - e^{-\lambda x}. \tag{4.4}$$

Now we need to find the mean value $\mu_{\bar{x}}$. It is given by

$$\mu_{\bar{x}} = \int_{-\infty}^{\infty} x f_{\bar{x}}(x)\, dx = \int_{0}^{\infty} \lambda x e^{-\lambda x}\, dx. \tag{4.5}$$

Applying integration by parts now gives

$$\mu_{\bar{x}} = \left[-x e^{-\lambda x}\right]_{0}^{\infty} + \int_{0}^{\infty} e^{-\lambda x}\, dx = 0 + \left[-\frac{1}{\lambda} e^{-\lambda x}\right]_{0}^{\infty} = \frac{1}{\lambda}. \tag{4.6}$$
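The closed-form distribution function (4.4) and the mean (4.6) can be checked numerically by integrating the density on a fine grid (a sketch, not part of the original exam; $\lambda = 2$ is an arbitrary choice):

```python
import numpy as np

lam = 2.0  # arbitrary rate parameter, lambda > 0
x = np.linspace(0.0, 20.0, 200_001)  # grid; exp(-lam*20) is negligible
f = lam * np.exp(-lam * x)           # pdf (4.1) on x >= 0

dx = np.diff(x)
# distribution function by trapezoidal integration of the pdf
F_num = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
F_exact = 1.0 - np.exp(-lam * x)     # closed form (4.4)
err_F = np.max(np.abs(F_num - F_exact))

# mean = integral of x f(x) dx, which should equal 1/lambda
g = x * f
mean_num = np.sum(0.5 * (g[1:] + g[:-1]) * dx)
err_mean = abs(mean_num - 1.0 / lam)

print(err_F, err_mean)  # both tiny: only discretization/truncation error remains
```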

The variance $\sigma_{\bar{x}}^2$ can be found using

$$\sigma_{\bar{x}}^2 = \int_{-\infty}^{\infty} (x - \mu_{\bar{x}})^2 f_{\bar{x}}(x)\, dx = \int_{0}^{\infty} \lambda\left(x^2 - 2x/\lambda + 1/\lambda^2\right) e^{-\lambda x}\, dx. \tag{4.7}$$

Let's split the integral above up into three parts. Applying integration by parts to the part with $x^2$ gives

$$\int_{0}^{\infty} \lambda x^2 e^{-\lambda x}\, dx = \left[-x^2 e^{-\lambda x}\right]_{0}^{\infty} + \int_{0}^{\infty} 2x e^{-\lambda x}\, dx = 0 + \int_{0}^{\infty} 2x e^{-\lambda x}\, dx. \tag{4.8}$$

The part with $x^2$ thus cancels against the part with $-2x/\lambda$. That saves some work. The variance now equals

$$\sigma_{\bar{x}}^2 = \int_{0}^{\infty} \frac{1}{\lambda} e^{-\lambda x}\, dx = \left[-\frac{1}{\lambda^2} e^{-\lambda x}\right]_{0}^{\infty} = \frac{1}{\lambda^2}. \tag{4.9}$$

That finishes the proof.

5 Question

Figure 2: Probability density function $f_{\bar{x}}(x)$

The random variable $\bar{x}$ has a probability density function $f_{\bar{x}}(x)$ as depicted in figure 2. What is the probability $P(\bar{x} \geq -1)$?

(a) 0.125
(b) 0.275
(c) 0.75
(d) 0.725
(e) 0.875
(f) Not enough data available

5 Solution

We can note that the probability density function is symmetric about $x = 0$. Thus,

$$P(\bar{x} \geq 0) = P(\bar{x} \leq 0) = 0.5. \tag{5.1}$$

To find $P(-1 \leq \bar{x} \leq 0)$, we simply find the area under the function $f_{\bar{x}}(x)$ in this interval. Reading the areas from figure 2 gives us

$$P(-1 \leq \bar{x} \leq 0) = 0.25. \tag{5.2}$$

We thus have

$$P(\bar{x} \geq -1) = P(-1 \leq \bar{x} \leq 0) + P(\bar{x} \geq 0) = 0.75. \tag{5.3}$$

The correct answer is therefore (c).

6 Question

Prove that the Fourier transform of the signal $x(t - t_0)$ equals

$$\mathcal{F}\{x(t - t_0)\} = e^{-j\omega t_0}\, X(\omega). \tag{6.1}$$

6 Solution

We first apply the definition of the Fourier transform. This gives

$$\mathcal{F}\{x(t - t_0)\} = \int_{-\infty}^{\infty} x(t - t_0)\, e^{-j\omega t}\, dt. \tag{6.2}$$

Let's define $\tau = t - t_0$. We then have $t = \tau + t_0$ and $dt = d\tau$. So,

$$\mathcal{F}\{x(t - t_0)\} = \int_{-\infty}^{\infty} x(\tau)\, e^{-j\omega(\tau + t_0)}\, d\tau = e^{-j\omega t_0} \int_{-\infty}^{\infty} x(\tau)\, e^{-j\omega\tau}\, d\tau = e^{-j\omega t_0}\, X(\omega). \tag{6.3}$$

This concludes the proof.

7 Question

Which of the following statements are true?

(a) $R_{\bar{x}\bar{y}}(\tau) = R_{\bar{y}\bar{x}}(\tau)$
(b) $C_{\bar{x}\bar{y}}(\tau) = C_{\bar{y}\bar{x}}(\tau)$
(c) $K_{\bar{x}\bar{x}}(\tau) = K_{\bar{x}\bar{x}}(-\tau)$
(d) $K_{\bar{x}\bar{x}}(0) = 1$
(e) $S_{\bar{x}\bar{y}}(\omega) = S_{\bar{y}\bar{x}}(\omega)$
(f) $S_{\bar{x}\bar{x}}(\omega) = S_{\bar{x}\bar{x}}(-\omega)$

7 Solution

(a) Let's examine $R_{\bar{x}\bar{y}}(\tau)$. We have

$$R_{\bar{x}\bar{y}}(\tau) = E\{x(t)y(t + \tau)\} = E\{y(t + \tau)x(t)\} = E\{y(t)x(t - \tau)\} = R_{\bar{y}\bar{x}}(-\tau). \tag{7.1}$$

However, we don't generally have $R_{\bar{x}\bar{y}}(\tau) = R_{\bar{y}\bar{x}}(\tau)$. So, the first statement is false.

(b) We know that

$$C_{\bar{x}\bar{y}}(\tau) = R_{\bar{x}\bar{y}}(\tau) - \mu_{\bar{x}}\mu_{\bar{y}}. \tag{7.2}$$

So if the previous statement did not hold, then this one will not hold either. So, the second statement is false.

(c) For the function $K_{\bar{x}\bar{x}}(\tau)$, we have

$$K_{\bar{x}\bar{x}}(\tau) = \frac{E\{x(t)x(t + \tau)\}}{\sigma_{\bar{x}}^2} = \frac{E\{x(t + \tau)x(t)\}}{\sigma_{\bar{x}}^2} = \frac{E\{x(t)x(t - \tau)\}}{\sigma_{\bar{x}}^2} = K_{\bar{x}\bar{x}}(-\tau). \tag{7.3}$$

So this statement holds. The third statement is thus true.

(d) We have

$$K_{\bar{x}\bar{x}}(0) = \frac{E\{x(t)x(t)\}}{\sigma_{\bar{x}}^2} = \frac{\sigma_{\bar{x}}^2}{\sigma_{\bar{x}}^2} = 1. \tag{7.4}$$

This fourth statement is therefore true.

(e) In case of a zero-mean signal, the PSD function is the Fourier transform of the covariance function. So,

$$S_{\bar{x}\bar{y}}(\omega) = \int_{-\infty}^{\infty} C_{\bar{x}\bar{y}}(\tau)\, e^{-j\omega\tau}\, d\tau = \int_{-\infty}^{\infty} C_{\bar{y}\bar{x}}(-\tau)\, e^{-j\omega\tau}\, d\tau. \tag{7.5}$$

Let's substitute $t = -\tau$. Note that we have $dt = -d\tau$, and thus

$$S_{\bar{x}\bar{y}}(\omega) = -\int_{+\infty}^{-\infty} C_{\bar{y}\bar{x}}(t)\, e^{j\omega t}\, dt = \int_{-\infty}^{\infty} C_{\bar{y}\bar{x}}(t)\, e^{j\omega t}\, dt = S^*_{\bar{y}\bar{x}}(\omega). \tag{7.6}$$

Here, $S^*_{\bar{y}\bar{x}}(\omega)$ is the complex conjugate of $S_{\bar{y}\bar{x}}(\omega)$. This follows from the fact that $e^{j\omega t}$ is the complex conjugate of $e^{-j\omega t}$ and that $C_{\bar{y}\bar{x}}(t)$ is real. So we have $S_{\bar{x}\bar{y}}(\omega) = S^*_{\bar{y}\bar{x}}(\omega)$. But we don't in general have $S_{\bar{x}\bar{y}}(\omega) = S_{\bar{y}\bar{x}}(\omega)$. The fifth statement is thus false.

(f) Since $C_{\bar{x}\bar{x}}(\tau) = C_{\bar{x}\bar{x}}(-\tau)$, we have

$$S_{\bar{x}\bar{x}}(\omega) = \int_{-\infty}^{\infty} C_{\bar{x}\bar{x}}(\tau)\, e^{-j\omega\tau}\, d\tau = \int_{-\infty}^{\infty} C_{\bar{x}\bar{x}}(-\tau)\, e^{-j\omega\tau}\, d\tau. \tag{7.7}$$

If we again substitute $t = -\tau$, we get

$$S_{\bar{x}\bar{x}}(\omega) = \int_{-\infty}^{\infty} C_{\bar{x}\bar{x}}(t)\, e^{-j(-\omega)t}\, dt = S_{\bar{x}\bar{x}}(-\omega). \tag{7.8}$$

The sixth statement thus holds.

8 Question

(a) The random variable $\bar{u}$ has a probability density function $f_{\bar{u}}(u)$ as depicted in figure 3. Calculate the probability $P(\bar{u} = 4)$.

(b) The random variable $\bar{u}$ has a probability density function $f_{\bar{u}}(u)$ as depicted in figure 4. Calculate the probability $P(\bar{u} = 1)$.

Figure 3: Probability density function $f_{\bar{u}}(u)$

Figure 4: Probability density function $f_{\bar{u}}(u)$

8 Solution

(a) Normally, when a stochastic variable is continuous, the probability that $\bar{u}$ exactly equals a given value is simply 0. However, this time there is a delta-function-like peak at $u = 4$, so $P(\bar{u} = 4)$ equals the magnitude of this peak. The only problem is that we don't know the magnitude of the peak directly. However, we do know that the area under the graph of $f_{\bar{u}}(u)$ equals 1, and that the area under the rectangle equals $0.1 \cdot 8 = 0.8$. The magnitude of the peak thus equals

$$P(\bar{u} = 4) = 1 - P(\bar{u} \neq 4) = 1 - 0.8 = 0.2. \tag{8.1}$$

(b) When a stochastic variable is continuous, and there is no delta-function-like peak, the probability that $\bar{u}$ exactly equals a given value is simply 0. We see that this is the case here. So, $P(\bar{u} = 1) = 0$.

9 Question

Prove that the periodogram $I_{\bar{y}\bar{y}}[k]$ of the signal $y[n] = a\,x[n] + b$ equals

$$I_{\bar{y}\bar{y}}[k] = a^2 I_{\bar{x}\bar{x}}[k] + \left(2a\,\mathrm{Re}\{X[k]\} + b\right) b\,\delta[k], \tag{9.1}$$

with

$$I_{\bar{x}\bar{x}}[k] = X^*[k]\, X[k] \tag{9.2}$$

and $\mathrm{Re}\{X[k]\}$ the real part of the Fourier transform of $x[n]$.

Note: the Discrete Fourier Transform (FFT) of a constant $b$ equals

$$FFT\{b\} = \frac{1}{N} \sum_{n=0}^{N-1} b\, e^{-j\frac{2\pi k}{N} n} = b\,\delta[k], \tag{9.3}$$

with $\delta[k]$ the Kronecker delta function. Use the result $FFT\{b\} = b\,\delta[k]$ in your proof. Remember that $\delta[k]$ equals 0 for $k \neq 0$ and 1 for $k = 0$.

9 Solution

First, we'll find an expression for $Y[k]$. We have

$$Y[k] = FFT\{y[n]\} = FFT\{a\,x[n] + b\} = a\, FFT\{x[n]\} + FFT\{b\} = a\,X[k] + b\,\delta[k]. \tag{9.4}$$

Since $a$ and $b$ are real constants, we also have

$$Y^*[k] = a\,X^*[k] + b\,\delta[k]. \tag{9.5}$$

The periodogram of $y[n]$ is now given by

$$I_{\bar{y}\bar{y}}[k] = Y^*[k]\, Y[k] = \left(a\,X^*[k] + b\,\delta[k]\right)\left(a\,X[k] + b\,\delta[k]\right). \tag{9.6}$$

Working out the brackets gives

$$I_{\bar{y}\bar{y}}[k] = a^2 X^*[k] X[k] + ab\left(X^*[k] + X[k]\right)\delta[k] + b^2 \delta^2[k]. \tag{9.7}$$

In the discrete domain, we have $\delta^2[k] = \delta[k]$. Also, adding a complex number to its complex conjugate gives twice its real part; that is, $X^*[k] + X[k] = 2\,\mathrm{Re}\{X[k]\}$. And, if we also apply the definition of $I_{\bar{x}\bar{x}}[k]$, we find that

$$I_{\bar{y}\bar{y}}[k] = a^2 I_{\bar{x}\bar{x}}[k] + \left(2a\,\mathrm{Re}\{X[k]\} + b\right) b\,\delta[k]. \tag{9.8}$$

And this is exactly what we needed to show.
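The periodogram identity just proved can be checked numerically (a sketch, not part of the original exam). Note that `numpy.fft.fft` uses an unnormalized DFT, so we divide by $N$ to match the $1/N$ convention of (9.3); the signal and the constants $a$, $b$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
a, b = 2.0, 3.0                  # arbitrary real constants

x = rng.normal(size=N)           # arbitrary real signal
y = a * x + b

# DFT with the 1/N normalization of (9.3), so that FFT{b} = b*delta[k]
X = np.fft.fft(x) / N
Y = np.fft.fft(y) / N

delta = np.zeros(N)
delta[0] = 1.0                   # Kronecker delta

I_xx = (np.conj(X) * X).real     # periodogram of x, eq. (9.2)
I_yy = (np.conj(Y) * Y).real     # periodogram of y

# right-hand side of the identity (9.8)
predicted = a**2 * I_xx + (2 * a * X.real + b) * b * delta
err = np.max(np.abs(I_yy - predicted))
print(err)  # effectively zero (floating-point round-off only)
```

The constant $b$ only affects the $k = 0$ bin, which is why adding an offset to a signal leaves its periodogram unchanged at all nonzero frequencies.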