ECE531: Principles of Detection and Estimation Course Introduction


ECE531: Principles of Detection and Estimation Course Introduction. D. Richard Brown III, WPI, 22-January-2009.

Lecture 1 Major Topics: 1. Web page. 2. Syllabus and textbook. 3. Academic honesty policy. 4. Students with disabilities statement. 5. Course introduction. 6. Review of essential probability concepts.

Some Notation
A set with discrete elements: S = {−1, π, 6}. The cardinality of a set: |S| = 3.
The set of all integers: Z = {..., −1, 0, 1, ...}. The set of all real numbers: R = (−∞, ∞).
Intervals on the real line: [−3, 1], (0, 1], (−1, 1), [10, ∞).
Multidimensional sets: {a, b, c}^2 is shorthand for the set {aa, ab, ac, ba, bb, bc, ca, cb, cc}. R^2 is the two-dimensional real plane. R^3 is the three-dimensional real volume.
An element of a set: s ∈ S. A subset: W ⊆ S.
The probability of an event A: Prob[A] ∈ [0, 1]. The joint probability of events A and B: Prob[A, B] ∈ [0, 1]. The probability of event A conditioned on event B: Prob[A|B] ∈ [0, 1].

Typical Detection Problems
[Figure: a noisy waveform y(t) plotted over 0 to 2 seconds.]
Is this a sine wave plus noise, or just noise? Is the frequency of the sine wave 1 Hz or 2 Hz?
Detection is about making smart choices (and the consequences).

Typical Estimation Problems
[Figure: a noisy waveform y(t) plotted over 0 to 2 seconds.]
What are the frequency, phase, and/or amplitude of the sine wave? What are the mean and/or variance of the noise?
Estimation is about guessing values (and the consequences).

Joint Estimation and Detection
Suppose we have a binary communication system with an intersymbol interference channel. M symbols are sent through the channel and we observe
y_k = Σ_{l=0}^{L−1} h_l s_{k−l} + w_k   for k ∈ {0, ..., L+M−2}
where
Unknown binary symbols: [s_0, ..., s_{M−1}] ∈ {−1, +1}^M
Unknown discrete-time impulse response of the channel: [h_0, ..., h_{L−1}] ∈ R^L
Unknown noise: [w_0, ..., w_{L+M−2}] ∈ R^{L+M−1}
In some scenarios, we may want to know the bits that were sent and the channel coefficients. This is a joint estimation and detection problem. Why?
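To make the observation model concrete, here is a minimal numpy sketch of y_k above; the block sizes, channel taps, and noise level are illustrative choices only, not values from the lecture.

```python
import numpy as np

# Sketch of the ISI observation model y_k = sum_{l=0}^{L-1} h_l s_{k-l} + w_k,
# with hypothetical channel taps and noise level chosen only for illustration.
rng = np.random.default_rng(0)

M, L = 8, 3                               # number of symbols, channel length
s = rng.choice([-1.0, +1.0], size=M)      # unknown binary symbols (to be detected)
h = np.array([1.0, 0.5, 0.2])             # unknown channel impulse response (to be estimated)
w = 0.1 * rng.standard_normal(L + M - 1)  # unknown noise samples

# np.convolve implements sum_l h_l s_{k-l} for k = 0, ..., L+M-2
y = np.convolve(h, s) + w
print(y.shape)   # (L + M - 1,) observations
```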

Consequences
To develop optimal decision rules or estimators, we need to quantify the consequences of incorrect decisions or inaccurate estimates.
Simple Example: It is not known if a coin is fair (HT) or double headed (HH). We are given one observation of the coin flip. Based on this observation, how do you decide if the coin is HT or HH?
Observation | Rule 1 | Rule 2 | Rule 3 | Rule 4
H           | HH     | HT     | HH     | HT
T           | HH     | HT     | HT     | HH
Suppose you have to pay $100 if you are wrong. Which decision rule is optimum?

Rule 1: Always decide HH
Note that the observation is ignored here. If the coin is HT (fair), the decision was wrong and you must pay $100. If the coin is HH (double headed), the decision was right and you pay nothing.
The maximum cost (between HH and HT) for Rule 1 is $100. The average cost for Rule 1 is
C_1 = Prob[HT] × $100 + Prob[HH] × $0
where Prob[HT] and Prob[HH] are the prior probabilities (the probabilities before any observations) of the coin being fair or double headed, respectively. For purposes of illustration, let's assume Prob[HT] = Prob[HH] = 0.5 so that C_1 = $50.

Rule 2: Always decide HT
Again, the observation is being ignored. Same analysis as for Rule 1: if the coin is HT (fair), the decision was right and you pay nothing. If the coin is HH (double headed), the decision was wrong and you must pay $100.
The maximum cost for Rule 2 is $100. The average cost for Rule 2 is
C_2 = Prob[HT] × $0 + Prob[HH] × $100
If Prob[HT] = Prob[HH] = 0.5, then C_2 = $50.

Rule 3: Decide HH if H observed, HT if T observed
If the coin is HT (fair), there is a 50% chance the observation will be H and you will decide HH. This will cost you $100. There is also a 50% chance that the observation will be T and you will decide HT. In this case, you made the correct decision and pay nothing.
C_HT = Prob[H|HT] × $100 + Prob[T|HT] × $0 = $50
If the coin is HH (double headed), what is our cost? $0.
The maximum cost for Rule 3 is $50. The average cost for Rule 3 is
C_3 = Prob[HT] × $50 + Prob[HH] × $0
If Prob[HT] = Prob[HH] = 0.5, then C_3 = $25.

Rule 4: Decide HT if H observed, HH if T observed
Obviously, this is a bad rule. If the coin is HT (fair), there is a 50% chance the observation will be T and you will decide HH. This will cost you $100. There is also a 50% chance that the observation will be H and you will decide HT. In this case, you made the correct decision and pay nothing.
C_HT = Prob[T|HT] × $100 + Prob[H|HT] × $0 = $50
If the coin is HH (double headed), what is our cost? $100.
The maximum cost for Rule 4 is $100. The average cost for Rule 4 is
C_4 = Prob[HT] × $50 + Prob[HH] × $100
If Prob[HT] = Prob[HH] = 0.5, then C_4 = $75.
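The four rules can also be checked mechanically. The short script below is a sketch that reproduces the conditional, maximum, and average costs, assuming the equal priors Prob[HT] = Prob[HH] = 0.5 used on these slides and the $100 penalty for a wrong decision.

```python
# Verify the maximum and average costs of the four decision rules, assuming
# Prob[HT] = Prob[HH] = 0.5 and a $100 penalty for a wrong decision.
rules = {                       # decision as a function of the observation
    "Rule 1": {"H": "HH", "T": "HH"},
    "Rule 2": {"H": "HT", "T": "HT"},
    "Rule 3": {"H": "HH", "T": "HT"},
    "Rule 4": {"H": "HT", "T": "HH"},
}
prior = {"HT": 0.5, "HH": 0.5}
p_obs = {"HT": {"H": 0.5, "T": 0.5},          # fair coin
         "HH": {"H": 1.0, "T": 0.0}}          # double-headed coin

for name, rule in rules.items():
    # cost conditioned on each state of nature, averaged over the observation
    cond_cost = {state: sum(p_obs[state][obs] * (100 if rule[obs] != state else 0)
                            for obs in ("H", "T"))
                 for state in ("HT", "HH")}
    max_cost = max(cond_cost.values())
    avg_cost = sum(prior[state] * cond_cost[state] for state in ("HT", "HH"))
    print(name, cond_cost, "max:", max_cost, "avg:", avg_cost)
```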

Remarks
The notion of maximum cost is the maximum over the possible states of nature (HH and HT in our example), but averaged over the probabilities of the observation.
[Diagram: states of nature θ ∈ {HT, HH} map through p(y|θ) to observations y ∈ {H, T}.]
In our example, we could always lose $100, irrespective of the decision rule. But the maximum cost of Rule 3 was $50. Is Rule 3 optimal?

Probability Basics: Events
Let A be a possible (or impossible) outcome of a random experiment. We call A an event and Prob[A] ∈ [0, 1] is the probability that A happens.
Examples:
A = tomorrow will be sunny in Worcester, Prob[A] = 0.4.
A = a 9 is rolled with two fair 6-sided dice, Prob[A] = 4/36.
A = a 13 is rolled with two fair 6-sided dice, Prob[A] = 0.
A = an odd number is rolled with two fair 6-sided dice, Prob[A] = 2/36 + 4/36 + 6/36 + 4/36 + 2/36 = 1/2.
A = any number but 9 is rolled with two fair 6-sided dice, Prob[A] = 1 − 4/36 = 32/36.
The last result used the fact that Prob[A] + Prob[Ā] = 1, where Ā means "not event A" and Prob[Ā] is the probability that A doesn't happen.
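These dice probabilities are easy to verify by brute force; the following is a small enumeration sketch in plain Python using exact fractions.

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of two fair 6-sided dice and
# check the probabilities quoted above.
outcomes = list(product(range(1, 7), repeat=2))
prob = lambda event: Fraction(sum(event(a, b) for a, b in outcomes), len(outcomes))

print(prob(lambda a, b: a + b == 9))        # 4/36 = 1/9
print(prob(lambda a, b: a + b == 13))       # 0
print(prob(lambda a, b: (a + b) % 2 == 1))  # 18/36 = 1/2
print(prob(lambda a, b: a + b != 9))        # 32/36 = 8/9
```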

Probability Basics: Random Variables
Definition: A random variable is a mapping from random experiments to real numbers.
Example: Let X be the Dow Jones average at the close on Friday. We can easily relate events and random variables.
Example: What is the probability that X ≥ 8000? X is the random variable; it can be anything on the interval [0, ∞). The event is A = "X is no less than 8000".
To answer these types of questions, we need to know the probabilistic distribution of the random variable X. Every random variable has a cumulative distribution function (CDF) defined as
F_X(x) := Prob[X ≤ x]
for all x ∈ R.

Probability Basics: Properties of the CDF
F_X(x) := Prob[X ≤ x]
The following properties are true for any random variable X:
F_X(−∞) = 0.
F_X(∞) = 1.
If y > x then F_X(y) ≥ F_X(x).
Example: Let X be the Dow Jones average at the close on Friday.
[Figure: sketch of F_X(x) increasing from 0 toward 1, passing near x = 8000.]

Probability Basics: The PDF
The probability density function (PDF) of the random variable X is
p_X(x) := (d/dx) F_X(x)
The following properties are true for any random variable X:
p_X(x) ≥ 0 for all x.
∫_{−∞}^{∞} p_X(x) dx = 1.
Prob[a < X ≤ b] = ∫_a^b p_X(x) dx = F_X(b) − F_X(a).
Example: Let X be the Dow Jones average at the close on Friday.
[Figure: sketch of p_X(x) peaked near x = 8000.]

Probability Basics: Mean and Variance
Definition: The mean of the random variable X is defined as
E[X] = ∫_{−∞}^{∞} x p_X(x) dx.
The mean is also called the expectation.
Definition: The variance of the random variable X is defined as
var[X] = ∫_{−∞}^{∞} (x − E[X])² p_X(x) dx.
Remark: The standard deviation of X is equal to std[X] = √(var[X]).

Probability Basics: Properties of Mean and Variance
Assuming c is a known constant, it is not difficult to show the following properties of the mean:
1. E[cX] = cE[X] (by linearity)
2. E[X + c] = E[X] + c (by linearity)
3. E[c] = c
Assuming c is a known constant, it is not difficult to show the following properties of the variance:
1. var[cX] = c² var[X]
2. var[X + c] = var[X]
3. var[c] = 0
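A quick Monte Carlo sanity check of these properties follows; the choice of X ~ N(0, 1) and c = 3 is arbitrary and only for illustration.

```python
import numpy as np

# Monte Carlo sanity check of the mean/variance properties, using an
# arbitrary choice of X ~ N(0, 1) and c = 3 purely for illustration.
rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
c = 3.0

print(np.mean(c * x), c * np.mean(x))    # E[cX]    vs  c E[X]
print(np.mean(x + c), np.mean(x) + c)    # E[X+c]   vs  E[X] + c
print(np.var(c * x), c**2 * np.var(x))   # var[cX]  vs  c^2 var[X]
print(np.var(x + c), np.var(x))          # var[X+c] vs  var[X]
```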

Uniform Random Variables
Uniform distribution: X ~ U(a, b) for a < b.
p_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise.
[Figure: rectangular PDF of height 1/(b − a) on the interval [a, b].]
Suppose X ~ U(1, 5). Sketch the CDF. What is Prob[X = 3]? What is Prob[X < 2]? What is Prob[X > 1]? What is E[X]? What is var[X]?
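One way to check the answers to these questions numerically is with scipy.stats.uniform; note that scipy parameterizes U(a, b) by loc = a and scale = b − a.

```python
from scipy import stats

# Numerical check for X ~ U(1, 5); scipy parameterizes the uniform
# distribution by loc = a and scale = b - a.
X = stats.uniform(loc=1, scale=4)

# Prob[X = 3] is 0 for any continuous random variable (no point mass).
print(X.cdf(2))      # Prob[X < 2]  = (2 - 1)/(5 - 1) = 0.25
print(X.sf(1))       # Prob[X > 1]  = 1.0
print(X.mean())      # E[X]   = (a + b)/2 = 3
print(X.var())       # var[X] = (b - a)^2 / 12 = 4/3
```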

Discrete Uniform Random Variables
Uniform distribution: X ~ U(S) where S is a finite set of discrete points on the real line. Each element in the set is equally likely.
[Figure: PMF with six equal point masses at x = 1, 2, 3, 4, 5, 6.]
Given S = {s_1, ..., s_n}, then Prob[X = s_1] = ... = Prob[X = s_n] = 1/n and
p_X(x) = (1/n)(δ(x − s_1) + ... + δ(x − s_n))
Sketch the CDF. What is Prob[X = 3]? What is Prob[X < 2]? What is Prob[X ≤ 2]? What is E[X]? What is var[X]?

Gaussian Random Variables
Gaussian distribution: X ~ N(µ, σ²) for any µ and σ.
p_X(x) = (1/(√(2π) σ)) exp(−(x − µ)²/(2σ²))
Remarks:
1. E[X] = µ.
2. var[X] = σ².
3. Gaussian random variables are completely specified by their mean and variance.
4. Lots of things in the real world are Gaussian or approximately Gaussian distributed, e.g. exam scores, etc. The Central Limit Theorem explains why this is so.
5. Probability calculations for Gaussian random variables often require the use of the erf and/or erfc functions (or the Q-function).
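As a small example of remark 5, the tail probability Prob[X > x0] can be computed with scipy's survival function, which plays the role of the Q-function after standardization; the values of µ, σ, and x0 below are arbitrary.

```python
from scipy import stats

# Gaussian tail probability Prob[X > x0] for X ~ N(mu, sigma^2), computed
# with the survival function (sf), i.e. 1 - CDF; mu, sigma, x0 are
# illustrative values only.
mu, sigma, x0 = 2.0, 1.5, 4.0

p_tail = stats.norm.sf(x0, loc=mu, scale=sigma)   # = Q((x0 - mu)/sigma)
p_tail_std = stats.norm.sf((x0 - mu) / sigma)     # same value via a standard normal
print(p_tail, p_tail_std)
```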

Final Remarks on Scalar Random Variables
1. The PDF and CDF completely describe a random variable. If X and Y have the same PDF, then they have the same CDF, the same mean, and the same variance.
2. The mean and variance are only partial statistical descriptions of a random variable. If X and Y have the same mean and/or variance, they might have the same PDF/CDF, but not necessarily.
[Diagram: for one random variable, the CDF and PDF are equivalent complete descriptions, from which the mean and variance follow as partial descriptions.]

Joint Events
Suppose you have two events A and B. We can define a new event C = "both A and B occur" and we can write
Prob[C] = Prob[A ∩ B] = Prob[A, B]
[Venn diagram: sample space Ω containing events A and B with overlapping region A ∩ B.]

Conditional Probability of Events
Suppose you have two events A and B. We can condition on the event B to write the probability
Prob[A|B] = the probability of event A given that event B happened.
When Prob[B] > 0, this conditional probability is defined as
Prob[A|B] = Prob[A ∩ B] / Prob[B] = Prob[A, B] / Prob[B]
Three special cases:
[Venn diagrams illustrating three special cases of how A and B can sit inside Ω.]

Conditional Probabilities: Our Earlier Example
It is not known if a coin is fair (HT) or double headed (HH). We can write the conditional probabilities of a one-flip observation as
Prob[observe H | coin is HT] = 0.5
Prob[observe T | coin is HT] = 0.5
Prob[observe H | coin is HH] = 1
Prob[observe T | coin is HH] = 0
Can you compute Prob[coin is HH | observe H]? We can write
Prob[coin is HH | observe H] = Prob[coin is HH, observe H] / Prob[observe H]
= Prob[observe H | coin is HH] Prob[coin is HH] / Prob[observe H]
We are missing two things: Prob[coin is HH] and Prob[observe H]...

Conditional Probabilities: Our Earlier Example
The term Prob[coin is HH] is called the prior probability, i.e. it is our belief that the coin is unfair before we have any observations. This is assumed to be given in some of the problems that we will be considering, so let's say for now that Prob[coin is HH] = 0.5.
The term Prob[observe H] is the unconditional probability that we observe heads. Can we calculate this?
Theorem (Total Probability Theorem): If the events B_1, ..., B_n are mutually exclusive, i.e. Prob[B_i, B_j] = 0 for all i ≠ j, and exhaustive, i.e. Σ_{i=1}^{n} Prob[B_i] = 1, then
Prob[A] = Σ_{i=1}^{n} Prob[A|B_i] Prob[B_i].
So how can we use this result to compute Prob[observe H]?
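Putting the total probability theorem together with Bayes' rule for the coin example, and assuming the equal prior Prob[coin is HH] = 0.5 from the previous slide, a few lines of arithmetic answer both questions:

```python
# Bayes' rule for the coin example, assuming Prob[coin is HH] = 0.5.
p_HH = 0.5
p_HT = 1 - p_HH
p_H_given_HH = 1.0      # double-headed coin always shows heads
p_H_given_HT = 0.5      # fair coin

# Total probability theorem: Prob[observe H]
p_H = p_H_given_HH * p_HH + p_H_given_HT * p_HT   # = 0.75

# Bayes' rule: Prob[coin is HH | observe H]
p_HH_given_H = p_H_given_HH * p_HH / p_H          # = 2/3
print(p_H, p_HH_given_H)
```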

Independence of Events
Two events are independent if their joint probability is equal to the product of their individual probabilities, i.e.
Prob[A, B] = Prob[A] Prob[B]
Lots of events can be assumed to be independent. For example, suppose you flip a coin twice with A = "the first coin flip is heads", B = "the second coin flip is heads", and C = "both coin flips are heads". Are A and B independent? Are A and C independent?
Note that when events A and B are independent,
Prob[A|B] = Prob[A, B] / Prob[B] = Prob[A] Prob[B] / Prob[B] = Prob[A].
This should be intuitively satisfying since knowing B happened doesn't give you any useful information about A.
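A brute-force way to answer the two questions is to enumerate the four equally likely outcomes of two fair flips and compare joint probabilities with products of marginals, as in this sketch:

```python
from fractions import Fraction
from itertools import product

# Enumerate two fair coin flips and test independence of the events
# A = first flip is heads, B = second flip is heads, C = both are heads.
outcomes = list(product("HT", repeat=2))
P = lambda event: Fraction(sum(event(o) for o in outcomes), len(outcomes))

A = lambda o: o[0] == "H"
B = lambda o: o[1] == "H"
C = lambda o: o[0] == "H" and o[1] == "H"

print(P(lambda o: A(o) and B(o)) == P(A) * P(B))   # True:  A and B are independent
print(P(lambda o: A(o) and C(o)) == P(A) * P(C))   # False: A and C are not
```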

Conditional Probability Example [Leon-Garcia p. 50]
An urn contains two black balls and three white balls. Two balls are selected at random from the urn (without replacement).
1. What is the probability that the first ball you select from the urn is black?
2. Given that the first ball that you select from the urn is black, what is the probability that the second ball you select from the urn is also black?
3. What is the probability that both balls you select are black?
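A Monte Carlo sketch of this example is below; the exact answers follow from the counting argument in lecture, and the simulation estimates should land close to them.

```python
import random

# Monte Carlo check of the urn example: two black (B) and three white (W)
# balls, two draws without replacement.
random.seed(0)
n_trials = 200_000
first_black = both_black = 0

for _ in range(n_trials):
    draw = random.sample(["B", "B", "W", "W", "W"], 2)
    if draw[0] == "B":
        first_black += 1
        if draw[1] == "B":
            both_black += 1

print(first_black / n_trials)     # approx 2/5
print(both_black / first_black)   # approx 1/4 (second black given first black)
print(both_black / n_trials)      # approx 1/10
```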

Another Conditional Probability Example
An urn contains one ball known to be either black or white with equal probability. A white ball is added to the urn, the urn is shaken, and a ball is removed randomly from the urn.
1. If the ball removed from the urn is black, what is the probability that the ball remaining in the urn is white?
2. If the ball removed from the urn is white, what is the probability that the ball remaining in the urn is white?
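A similar simulation sketch works for this urn problem as well (the seed and trial count are arbitrary):

```python
import random

# Monte Carlo sketch of the second urn example: one ball that is black or
# white with equal probability, plus an added white ball; one ball is then
# drawn at random.
random.seed(1)
n_trials = 200_000
counts = {"B": [0, 0], "W": [0, 0]}   # removed color -> [trials, remaining is white]

for _ in range(n_trials):
    urn = [random.choice("BW"), "W"]  # original ball plus the added white ball
    random.shuffle(urn)
    removed, remaining = urn[0], urn[1]
    counts[removed][0] += 1
    counts[removed][1] += remaining == "W"

print(counts["B"][1] / counts["B"][0])   # Prob[remaining white | removed black]
print(counts["W"][1] / counts["W"][0])   # Prob[remaining white | removed white]
```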

Joint Random Variables
When we have two random variables, we require a joint distribution. The joint CDF is defined as
F_{X,Y}(x, y) = Prob[X ≤ x ∩ Y ≤ y] = Prob[X ≤ x, Y ≤ y]
and the joint PDF is defined as
p_{X,Y}(x, y) = ∂²/(∂x ∂y) F_{X,Y}(x, y)
If you know the joint distribution, you can get the marginal distributions:
F_X(x) = F_{X,Y}(x, ∞)
F_Y(y) = F_{X,Y}(∞, y)
p_X(x) = ∫_{−∞}^{∞} p_{X,Y}(x, y) dy
p_Y(y) = ∫_{−∞}^{∞} p_{X,Y}(x, y) dx
Marginals are not enough to specify the joint distribution, except in special cases.

Joint Statistics
Note that the means and variances are defined as usual for X and Y. When we have a joint distribution, we have two new statistical quantities:
Definition: The correlation of the random variables X and Y is defined as
E[XY] = ∫∫ x y p_{X,Y}(x, y) dx dy.
Definition: The covariance of the random variables X and Y is defined as
cov[X, Y] = ∫∫ (x − E[X])(y − E[Y]) p_{X,Y}(x, y) dx dy.
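Sample-based estimates of these two quantities are easy to form with numpy; the joint distribution below (Y equal to 2X plus noise) is an arbitrary example chosen only to produce correlated variables.

```python
import numpy as np

# Sample estimates of correlation E[XY] and covariance cov[X, Y] for two
# jointly distributed variables; Y = 2X + noise is an arbitrary example.
rng = np.random.default_rng(4)
x = rng.standard_normal(100_000)
y = 2.0 * x + 0.5 * rng.standard_normal(100_000)

corr_xy = np.mean(x * y)                             # estimate of E[XY]
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))    # estimate of cov[X, Y]
print(corr_xy, cov_xy, np.cov(x, y)[0, 1])           # np.cov agrees up to 1/(n-1) scaling
```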

Conditional Distributions
If X and Y are both discrete random variables (both are drawn from finite sets) with Prob[X = x] > 0, then
Prob[Y = y | X = x] = Prob[X = x, Y = y] / Prob[X = x]
If Y is a discrete random variable and X is a continuous random variable, then the conditional probability that Y = y given X = x is
Prob[Y = y | X = x] = p_{X,Y}(x, y) / p_X(x)
where p_{X,Y}(x, y) := lim_{h→0} Prob[x − h < X ≤ x, Y = y] / h is the joint PDF-probability of the random variable X and the event Y = y.
If X and Y are both continuous random variables, then the conditional PDF of Y given X = x is
p_Y(y | X = x) = p_Y(y|x) = p_{X,Y}(x, y) / p_X(x)
with p_{X,Y}(x, y) as the usual joint distribution of X and Y.

Conditional Statistics
Definition: The conditional mean of a random variable Y given X = x is defined as
E[Y|x] = ∫_{−∞}^{∞} y p_Y(y|x) dy.
The definition is identical to the regular mean except that we use the conditional PDF.
Definition: The conditional variance of a random variable Y given X = x is defined as
var[Y|x] = ∫_{−∞}^{∞} (y − E[Y|x])² p_Y(y|x) dy.

Independence of Random Variables
Two random variables are independent if their joint distribution is equal to the product of their marginal distributions, i.e.
p_{X,Y}(x, y) = p_X(x) p_Y(y).
If X and Y are independent, the conditional PDFs can be written as
p_Y(y|x) = p_{X,Y}(x, y) / p_X(x) = p_X(x) p_Y(y) / p_X(x) = p_Y(y)
and
p_X(x|y) = p_{X,Y}(x, y) / p_Y(y) = p_X(x) p_Y(y) / p_Y(y) = p_X(x).
These results should be intuitively satisfying since knowing X = x (or Y = y) doesn't tell you anything about Y (or X).

Jointly Gaussian Random Variables
Definition: The random variables X = [X_1, ..., X_k] are jointly Gaussian if their joint density is given as
p_X(x) = |2πP|^{−1/2} exp( −(1/2) (x − µ_X)ᵀ P^{−1} (x − µ_X) )
where µ_X = E[X] and P = E[(X − µ_X)(X − µ_X)ᵀ].
Remarks:
1. µ_X = [E[X_1], ..., E[X_k]] is a k-dimensional vector of means.
2. P is a k × k matrix of covariances, i.e. its (i, j) entry is P_ij = E[(X_i − µ_{X_i})(X_j − µ_{X_j})].
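As a small check that the density above matches a standard library implementation, the sketch below evaluates it against scipy.stats.multivariate_normal for an illustrative 2-D case; the mean vector and covariance matrix are arbitrary example values.

```python
import numpy as np
from scipy import stats

# Evaluate the jointly Gaussian density p_X(x) for an illustrative 2-D case;
# the mean vector and covariance matrix below are arbitrary examples.
mu = np.array([1.0, -2.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # must be symmetric positive definite

x = np.array([0.0, 0.0])
p_scipy = stats.multivariate_normal(mean=mu, cov=P).pdf(x)

# Direct evaluation of |2*pi*P|^(-1/2) * exp(-(x - mu)^T P^{-1} (x - mu) / 2)
d = x - mu
p_direct = np.exp(-0.5 * d @ np.linalg.solve(P, d)) / np.sqrt(np.linalg.det(2 * np.pi * P))
print(p_scipy, p_direct)              # the two values agree
```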

Where We Are Heading
We are going to begin our study of detection and estimation by learning the fundamental concepts of hypothesis testing. Hypothesis testing involves making inferences about unknown things (states of nature) from observations.
Examples:
Infer if the coin is fair or unfair after observing one or more flips. [Diagram: states of nature θ ∈ {HT, HH} map through p(y|θ) to observations y ∈ {H, T}.]
Infer if the airplane is friend or foe by observing a radar signature.
We will be working with lots of conditional probability expressions in our study of hypothesis testing. You should feel comfortable with this material.

Where We Will Be in April: Kalman Filtering
2008 Charles Stark Draper Prize: Dr. Rudolph Kalman. The Kalman Filter uses a mathematical technique that removes noise from series of data. From incomplete information, it can optimally estimate and control the state of a changing, complex system over time. Applications include target tracking by radar, global positioning systems, hydrological modeling, atmospheric observations, and time-series analyses in econometrics. (paraphrased from http://www.nae.edu/)