A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 2011
Lecture 24: Matched Filtering and Localization

Reading:
- Chapter 10: linear least squares with Gaussian errors
- Chapter 11: nonlinear fitting
- Chapter 12: Markov Chain Monte Carlo

Lectures 1-24 are on the web page. On the web page, also look for PS 4 and an article on genetic algorithms.
Notions of Best Basis

- Orthogonal basis functions: new terms can be added to the fitting function without altering the previous terms; the parameters (coefficients) have a diagonal covariance matrix.
- Compactification: in detection problems it is useful to transform the data so as to concentrate the signal variance into as few basis vectors as possible.
- Example: the DFT of a sinusoidal signal is a least-squares fit; ideally the variance of a single sinusoid is accounted for in the coefficient of a single basis vector (apart from binning and leakage effects).
- The underlying model (e.g. physics) may suggest a particular basis (e.g. monopole, dipole, quadrupole terms in a 3D angular distribution).
Notions of Best Basis II

Coifman & Wickerhauser, "Entropy-Based Algorithms for Best Basis Selection":
- The Karhunen-Loève basis is the minimum-entropy orthonormal basis for an ensemble of vectors.
- They develop a minimum-entropy technique useful for a single vector.
- Minimum entropy is consistent with compactification.
Matched Filtering (Template Fitting)

Premise:

    U(t) = a T(t − t_0) + n(t),

where a = scale factor, t_0 = arrival time, T(t) = template (assumed known), and n(t) = additive noise (arbitrary statistics). Template fitting yields estimates of a and t_0.

Optimal estimation: cross correlate U(t) with T(t),

    C_UT(τ) = U(t) ⋆ T(t + τ) = a C_TT(τ − t_0) + n(t) ⋆ T(t + τ),

where ⋆ denotes correlation. In the mean, C_UT(τ) is maximized at τ_max = t_0; the error in τ_max is due solely to n(t).

Practicality: it is easier to find τ_max in the Fourier domain (sampling issues). If the shape of U(t) differs from the shape of T(t), there are additional errors in the TOA estimate.
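A quick numerical sketch of the premise above (the Gaussian template, scale factor, shift, and noise level are all illustrative, not from the notes): cross-correlating noisy data U(t) with a known template T(t) and taking the peak lag recovers t_0, after which the scale factor follows from a linear fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ingredients: a Gaussian template T(t) and data U(t) = a T(t - t0) + n(t).
n = 512
t = np.arange(n)
T = np.exp(-0.5 * ((t - n // 2) / 8.0) ** 2)

a_true, t0_true = 3.0, 40                       # scale factor and shift (samples)
U = a_true * np.roll(T, t0_true) + 0.3 * rng.standard_normal(n)

# Cross-correlate U with T; in the mean, the CCF peaks at tau = t0.
ccf = np.array([np.dot(U, np.roll(T, lag)) for lag in range(n)])
t0_hat = int(np.argmax(ccf))

# With t0 in hand, the scale factor follows from a linear least-squares fit.
Tshift = np.roll(T, t0_hat)
a_hat = np.dot(U, Tshift) / np.dot(Tshift, Tshift)
```

The brute-force loop over lags is O(n²); the notes' point about working in the Fourier domain is that the same CCF can be computed with FFTs.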
Matched Filtering

Matched filtering is an optimal method for detecting a signal of known shape in the presence of additive noise. Consider the model where A is deterministic and known and n is additive WSS noise with arbitrary spectrum:

    x(t) = a_0 A(t) + n(t).

We want a filter whose output maximizes the signal-to-noise ratio. Note that matched filtering is different from Wiener filtering, which yields an estimate of a signal that has minimum least-squares error from the true signal.
Matched filter: We want the linear filter h(t) that maximizes the SNR of the output and thus maximizes the detection probability. Usually we would convolve the filter with the data, but it is convenient to define h so that we cross correlate it with the data instead. Also, assume that h(t) is aligned with the signal A(t), so that we need only consider the SNR of the output when h and A are aligned to give the maximum. Thus, while we would generally consider the full correlation function,

    y(τ) = h(t) ⋆ x(t) = ∫ dt x(t) h(t + τ),        (1)

(where ⋆ means correlation), we consider the special lag τ = 0, where by construction y is maximized. The mean and variance of y are

    ⟨y⟩ = a_0 ∫ dt A(t) h(t),
    σ_y² = ∫∫ dt dt′ ⟨n(t) n(t′)⟩ h(t) h(t′) = ∫∫ dt dt′ R_n(t − t′) h(t) h(t′).

Define the SNR as

    S = ⟨y⟩ / σ_y = (a_0 / σ_y) ∫ dt A(t) h(t).

We want the filter function that maximizes S (or S²):

    δS² / δh(t) = 0.

This yields (where ∂_h means the functional derivative with respect to the filter)

    2 σ_y² ⟨y⟩ ∂_h⟨y⟩ − ⟨y⟩² ∂_h σ_y² = 0.
Keeping only terms that are first order in δh(t):

    2 σ_y² a_0 ∫ dt A(t) δh(t) − 2 ⟨y⟩ ∫∫ dt dt′ R_n(t − t′) δh(t) h(t′) = 0,

or

    ∫ dt δh(t) [ σ_y² a_0 A(t) − ⟨y⟩ ∫ dt′ R_n(t − t′) h(t′) ] = 0.

If the solution h(t) yields a maximum SNR, then the integrand factor that multiplies δh(t) is zero for any δh(t), so

    ∫ dt′ R_n(t − t′) h(t′) = (a_0 σ_y² / ⟨y⟩) A(t).        (2)

White noise: This case, the simplest, has R_n(t − t′) = σ_n² δ(t − t′) and gives

    h(t) = A(t) ∫ dt h²(t) / ∫ dt A(t) h(t),

for which h(t) = A(t) is a solution, as is anything proportional to A(t). It is now obvious why this approach is called matched filtering.
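A small Monte Carlo check of the white-noise result (the pulse width, amplitude, and noise level below are assumed for illustration): with h(t) = A(t), the zero-lag output y = Σ A(t) x(t) should have SNR ⟨y⟩/σ_y = a_0 (Σ A²)^{1/2} / σ_n.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed pulse and noise parameters for the check.
n = 1024
t = np.arange(n) - n // 2
A = np.exp(-0.5 * (t / 10.0) ** 2)              # dimensionless, A(0) = 1
a0, sigma_n = 1.0, 1.0

# h = A for white noise; the zero-lag output is y = sum A(t) x(t).
trials = 4000
y = np.empty(trials)
for k in range(trials):
    x = a0 * A + sigma_n * rng.standard_normal(n)
    y[k] = np.dot(A, x)

snr_measured = y.mean() / y.std()
snr_predicted = a0 * np.sqrt(np.sum(A**2)) / sigma_n
```

The measured ratio of mean to standard deviation of y agrees with the prediction to within sampling error.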
Low-pass noise: Non-white noise with a short correlation time (relative to the width of the signal) can be treated in the same way. Let R_n(t − t′) = σ_n² ρ_n(t − t′), where ρ_n(0) = 1 and its width ≪ the width of A. Returning to our original expression for S, we have

    S = (a_0 ∫ dt A(t) h(t)) / { σ_n [ ∫∫ dt dt′ ρ_n(t − t′) h(t) h(t′) ]^{1/2} }
      ≈ (a_0 ∫ dt A(t) h(t)) / { σ_n [ ∫ dτ ρ_n(τ) ∫ dt h²(t) ]^{1/2} }
      = (a_0 ∫ dt A(t) h(t)) / { σ_n [ W_n ∫ dt h²(t) ]^{1/2} },

where a characteristic time scale W_n = ∫ dτ ρ_n(τ) has been defined. We can now apply the Cauchy-Schwarz inequality to the numerator,

    [ ∫ dt A(t) h(t) ]² ≤ ∫ dt A²(t) ∫ dt h²(t),

which means that

    S = (a_0/σ_n) ∫ dt A(t) h(t) / [ W_n ∫ dt h²(t) ]^{1/2}
      ≤ (a_0/σ_n) [ ∫ dt A²(t) ∫ dt h²(t) ]^{1/2} / [ W_n ∫ dt h²(t) ]^{1/2}
      = (a_0/σ_n) [ ∫ dt A²(t) / W_n ]^{1/2}.

We have equality when h(t) = A(t), i.e. when the filter matches the signal.
Simplification and Interpretation

For the matched-filter case (equality) we can consider A(t) to be dimensionless with A(0) = 1. Then the integral

    ∫ dt A²(t) ≡ W_A

defines the characteristic time scale W_A. We can also define the signal-to-noise ratio of the time series as

    SNR_t = a_0 / σ_n,

and then rewrite the SNR of the correlation function as

    S = SNR_t (W_A / W_n)^{1/2}.

As we have seen in other contexts, the ratio W_A / W_n represents the number of independent noise fluctuations that have been averaged to get the correlation amplitude. We then see that the correlation function enhances the SNR of the time series by a factor N_eff^{1/2}, where N_eff = W_A / W_n is the effective number of independent fluctuations.
Arbitrary WSS noise: In general the solution is obtained by Fourier transforming Equation 2. Let the noise spectrum be R_n(τ) ⟷ S_n(f) and define the Fourier transforms of A and h as Ã and h̃. Then we have

    FT{ ∫ dt′ R_n(t − t′) h(t′) } = S_n(f) h̃(f),
    σ_y² = ∫∫ dt dt′ R_n(t − t′) h(t) h(t′) = ∫ df S_n(f) |h̃(f)|²,
    ⟨y⟩ = a_0 ∫ dt A(t) h(t) = a_0 ∫ df Ã*(f) h̃(f),

so the expression for h̃ becomes

    h̃(f) = (a_0 σ_y² / ⟨y⟩) Ã(f) / S_n(f)
          = [ Ã(f) / S_n(f) ] ∫ df S_n(f) |h̃(f)|² / ∫ df Ã*(f) h̃(f),

and a solution is

    h̃(f) ∝ Ã(f) / S_n(f).

For white noise, S_n(f) = constant, and we recover our previous result. More generally, the matched filter favors frequencies where the ratio of signal to noise is larger. Note again that the amplitude of the signal a_0 does not appear in the solution; it can be considered part of the proportionality constant (normalization). Note also that the actual noise variance cancels in the solution for h̃(f). Thus the amplitudes of both the signal and the noise factor out of the solution.
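The frequency-domain solution h̃(f) ∝ Ã(f)/S_n(f) can be sketched as follows. The red-noise spectrum, pulse width, amplitude, and location are all assumed for illustration; the whitened cross-correlation nevertheless peaks at the pulse location even though the noise is far from white.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed quantities: a narrow Gaussian template and a red-noise spectrum S_n(f).
n = 2048
t = np.arange(n)
A = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)    # template, centered at n/2
f = np.fft.rfftfreq(n)                          # cycles per sample
S_n = 1.0 / (1e-2 + f**2)                       # noise power rises at low f

# Realize noise with spectrum S_n by shaping white Gaussian noise in frequency.
noise = np.fft.irfft(np.sqrt(S_n) * np.fft.rfft(rng.standard_normal(n)), n)

t0 = 300                                        # true pulse location (samples)
x = 40.0 * np.roll(A, t0 - n // 2) + noise

# Matched filter for non-white noise: h~(f) proportional to A~(f)/S_n(f).
# Cross-correlating x with h is a product in the Fourier domain.
y = np.fft.irfft(np.fft.rfft(x) * np.conj(np.fft.rfft(A)) / S_n, n)
t0_hat = (int(np.argmax(y)) + n // 2) % n       # undo the template centering
```

Dividing by S_n(f) down-weights the low frequencies where the red noise is strongest, which is exactly the "favor frequencies where signal-to-noise is larger" behavior described above.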
Practical Applications in Signal Detection

Example: we know the shape of the function A(t) and the autocorrelation function of the noise (or its spectrum). The optimal detection scheme is to construct the filter using this information and investigate the cross-correlation function, Eq. 1. Output amplitudes can be tested against a threshold y_T that is some multiple of σ_y. Over an ensemble one can then define the detection probability P_d and the false-alarm probability P_fa. By changing the threshold, both P_d and P_fa will change. Ideally one would like P_d = 1 with P_fa = 0, but in reality there is a tradeoff between the two.

Periodic signals: The matched filter for a periodic signal is simply a periodic train of the pulse shape. When correlated with the measurements, the output is the same as folding the data at the underlying period.
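A minimal simulation of the threshold test described above (pulse shape, amplitude, threshold multiple, and trial counts are all illustrative): matched-filter outputs with and without a signal are compared against y_T = 4 σ_y to estimate P_d and P_fa.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed pulse shape, amplitude, and threshold for illustration.
n = 256
t = np.arange(n) - n // 2
A = np.exp(-0.5 * (t / 5.0) ** 2)
sigma_n = 1.0
sigma_y = sigma_n * np.sqrt(np.sum(A**2))       # rms of y under noise alone
a0 = 6.0 * sigma_n / np.sqrt(np.sum(A**2))      # gives a 6 sigma_y mean output

trials = 5000
y_sig = np.array([np.dot(A, a0 * A + sigma_n * rng.standard_normal(n))
                  for _ in range(trials)])
y_noise = np.array([np.dot(A, sigma_n * rng.standard_normal(n))
                    for _ in range(trials)])

y_T = 4.0 * sigma_y                             # detection threshold
P_d = float(np.mean(y_sig > y_T))               # detection probability
P_fa = float(np.mean(y_noise > y_T))            # false-alarm probability
```

Raising y_T pushes P_fa toward 0 but also lowers P_d; this is the tradeoff the ROC curve (next slides) makes explicit.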
Fig. 1. Matched filtering of a Gaussian pulse with itself as a template. One realization of the template and pulse is shown, along with ten realizations of the CCF. The SNR of the pulse is 5 (peak to rms noise). The pulse (1/e) width is 25.3 samples.

Fig. 2. Matched filtering of a narrower Gaussian pulse. One realization of the template and pulse is shown, along with ten realizations of the CCF. The SNR of the pulse is 5 (peak to rms noise). The pulse (1/e) width is 10.3 samples.

Fig. 3. Matched filtering of a narrow Gaussian pulse with a broader template. One realization of the template and pulse is shown, along with ten realizations of the CCF. The SNR of the pulse is 5 (peak to rms noise). The pulse (1/e) width is 10.3 samples, while the template is 45.3 samples wide.

Fig. 4. Matched filtering of a Gaussian pulse with a narrower template. The SNR of the pulse is 5 (peak to rms noise). The pulse (1/e) width is 10.3 samples, while the template is 4.3 samples wide.
ROC Curves: A plot of P_d vs. P_fa is called a receiver operating characteristic curve, named after radar detection schemes.
ROC Curves

ROC = receiver operating characteristic. The ROC curve originated during World War II in the use of radar to detect objects on battlefields. It is now used in all fields where detection of a signal or classification of events and outcomes is done (biology, medicine, astronomy, etc.).
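A short sketch of how an ROC curve is traced by sweeping the detection threshold. For simplicity this assumes Gaussian matched-filter outputs with unit variance and an assumed mean deflection d under the signal hypothesis (d plays the role of the output SNR).

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed detection statistic: N(0,1) under noise only, N(d,1) with signal.
d = 2.0
trials = 20000
y0 = rng.standard_normal(trials)                # noise only
y1 = d + rng.standard_normal(trials)            # signal + noise

# Sweep the threshold y_T; each value gives one (P_fa, P_d) point on the ROC.
thresholds = np.linspace(-4.0, 8.0, 200)
P_fa = np.array([np.mean(y0 > yT) for yT in thresholds])
P_d = np.array([np.mean(y1 > yT) for yT in thresholds])
```

The curve runs from (P_fa, P_d) ≈ (1, 1) at very low thresholds to (0, 0) at very high thresholds, bowing above the chance diagonal P_d = P_fa whenever d > 0.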
[Image: ROC space, from http://upload.wikimedia.org/wikipedia/commons/3/36/ROC_space-2.png]
Fig. 5. ROC plot for a pulse with SNR 5 (peak to rms noise) and width of 10.3 samples. Matched filtering is used.
[Image: Evaluating three different HIV epitope predictors, from http://upload.wikimedia.org/wikipedia/commons/6/6b/Roccurves.png]
Localization Using Matched Filtering

This handout describes localization of an object in a parameter space. For simplicity we consider localization of a pulse in time. The same formalism applies to localization of a spectral feature in frequency or of a feature in a 2D image, and the results can be extrapolated to a space of arbitrary dimensionality.

I. First consider finding the amplitude of a pulse when the shape and location are known. Let the data be

    I(t) = a A(t) + n(t),

where a = the unknown amplitude and n(t) is zero-mean noise. The known pulse shape is A(t). Let the model be

    Î(t) = â A(t).

Define the cost function to be the integrated squared error,

    Q = ∫ dt [ I(t) − Î(t) ]².
Taking a derivative, we can solve for the estimate of the amplitude, â:

    ∂Q/∂â = −2 ∫ dt [ I(t) − Î(t) ] ∂Î(t)/∂â = 0
    ⟹ ∫ dt [ I(t) − Î(t) ] A(t) = 0
    ⟹ â ∫ dt A²(t) = ∫ dt I(t) A(t)
    ⟹ â = ∫ dt I(t) A(t) / ∫ dt A²(t).

Note that:
a. The model is linear in the sole parameter, â.
b. The numerator is the zero lag of the cross-correlation function (CCF) of I(t) and A(t).
c. The denominator is the zero lag of the autocorrelation function (ACF) of A(t).
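The estimator â = ∫dt I(t)A(t) / ∫dt A²(t) is one line of code; here is a sketch with an assumed Gaussian pulse shape, amplitude, and noise level.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed pulse shape, amplitude, and noise level.
n = 512
t = np.arange(n) - n // 2
A = np.exp(-0.5 * (t / 12.0) ** 2)
a_true = 2.5
I = a_true * A + 0.2 * rng.standard_normal(n)

# Least-squares amplitude when shape and location are known:
# a_hat = CCF(0) / ACF(0) = sum(I * A) / sum(A^2).
a_hat = np.dot(I, A) / np.dot(A, A)
```

Because the model is linear in â, no iteration is needed; the estimate is unbiased and its scatter shrinks as the pulse covers more samples.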
II. Now consider the case where we do not know the location of the pulse in time (the time of arrival, TOA) and it is the TOA we wish to estimate. We still know the pulse shape a priori. Let the data, model, and cost function be

    I(t) = a A(t − t_0) + n(t),
    Î(t) = â A(t − t̂_0),
    Q = ∫ dt [ I(t) − Î(t) ]².

Note that the model is linear in â but nonlinear in t̂_0. Minimizing Q with respect to â, we have

    ∂Q/∂â = −2 ∫ dt [ I(t) − Î(t) ] ∂Î(t)/∂â = 0
    ⟹ ∫ dt [ I(t) − Î(t) ] A(t − t̂_0) = 0
    ⟹ â ∫ dt A²(t − t̂_0) = ∫ dt I(t) A(t − t̂_0)
    ⟹ â = ∫ dt I(t) A(t − t̂_0) / ∫ dt A²(t − t̂_0).        (3)

This last equation has the same form as in I except that the estimate of the arrival time, t̂_0, is involved.
Now, minimizing Q with respect to t̂_0, we have

    ∂Q/∂t̂_0 = −2 ∫ dt [ I(t) − Î(t) ] ∂Î(t)/∂t̂_0 = 0
    ⟹ â ∫ dt A(t − t̂_0) A′(t − t̂_0) = ∫ dt I(t) A′(t − t̂_0).        (4)

Grid search: One approach to finding the arrival time is to search over a 2D grid of (â, t̂_0) to find the values that satisfy Equations 3 and 4. This approach is inefficient. Instead, one can search over a 1D space for the single nonlinear parameter, t̂_0, and then solve for â using either Equation 3 or 4.

Linearization + iteration: Another method for finding solutions for â and t̂_0 is to linearize the equations in t̂_0 − t_0 by using Taylor-series expansions for A(t − t̂_0) and A′(t − t̂_0). Let t̂_0 = t_0 + δt̂_0. Then, to first order in δt̂_0:

    A(t − t̂_0) ≈ A(t − t_0) − A′(t − t_0) δt̂_0,
    A′(t − t̂_0) ≈ A′(t − t_0) − A″(t − t_0) δt̂_0,
    A²(t − t̂_0) ≈ A²(t − t_0) − 2 A(t − t_0) A′(t − t_0) δt̂_0.
Now Equations (3) and (4) become

    â ∫ dt [ A²(t − t_0) − 2 δt̂_0 A(t − t_0) A′(t − t_0) ]
        = ∫ dt I(t) [ A(t − t_0) − δt̂_0 A′(t − t_0) ],
    â ∫ dt [ A(t − t_0) A′(t − t_0) − δt̂_0 A(t − t_0) A″(t − t_0) − δt̂_0 A′²(t − t_0) ]
        = ∫ dt I(t) [ A′(t − t_0) − δt̂_0 A″(t − t_0) ].

Consider the integral

    ∫ dt A(t − t_0) A′(t − t_0).

The integrand may be written as

    A(t − t_0) A′(t − t_0) = (1/2) d/dt A²(t − t_0),

and so the integral equals (1/2) A²(t − t_0) evaluated between t_1 and t_2, which → 0 in the limit of (e.g.) t_{1,2} = ∓T/2 with T ≫ pulse width. We then have

    â ∫ dt A²(t − t_0) = ∫ dt I(t) [ A(t − t_0) − δt̂_0 A′(t − t_0) ],
    −â δt̂_0 ∫ dt [ A(t − t_0) A″(t − t_0) + A′²(t − t_0) ] = ∫ dt I(t) [ A′(t − t_0) − δt̂_0 A″(t − t_0) ].
Solving for â in both cases, we have

    â = ∫ dt [ I(t) A(t − t_0) − δt̂_0 I(t) A′(t − t_0) ] / ∫ dt A²(t − t_0),
    â = −∫ dt I(t) [ A′(t − t_0) − δt̂_0 A″(t − t_0) ] / { δt̂_0 ∫ dt [ A(t − t_0) A″(t − t_0) + A′²(t − t_0) ] }.

Using the notation

    i_0 ≡ ∫ dt I(t) A(t − t_0),
    i_1 ≡ ∫ dt I(t) A′(t − t_0),
    i_2 ≡ ∫ dt A²(t − t_0),
    i_3 ≡ ∫ dt I(t) A″(t − t_0),
    i_4 ≡ ∫ dt [ A(t − t_0) A″(t − t_0) + A′²(t − t_0) ],

we have

    â = ( i_0 − δt̂_0 i_1 ) / i_2,        (5)
    â = ( δt̂_0 i_3 − i_1 ) / ( δt̂_0 i_4 ).        (6)

Solving for δt̂_0 (to first order) we have

    δt̂_0 = i_1 i_2 / ( i_2 i_3 − i_0 i_4 ).
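The linearize-and-iterate scheme can be sketched numerically. Rather than reproducing the i_k bookkeeping above, this uses the equivalent first-order (Gauss-Newton) update δt̂_0 = Σ r J / Σ J², with residual r = I − Î and Jacobian J = ∂Î/∂t̂_0; the pulse shape, amplitude, and noise level are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed Gaussian pulse; its time derivative is analytic.
w = 8.0
def A(t, t0):
    return np.exp(-0.5 * ((t - t0) / w) ** 2)

def dA_dt(t, t0):
    return -((t - t0) / w**2) * A(t, t0)

n = 512
t = np.arange(n, dtype=float)
t0_true, a_true = 250.7, 3.0
I = a_true * A(t, t0_true) + 0.05 * rng.standard_normal(n)

t0_hat = 245.0                                       # starting guess near the peak
for _ in range(20):
    tmpl = A(t, t0_hat)
    a_hat = np.dot(I, tmpl) / np.dot(tmpl, tmpl)     # Eq. (3): linear in a_hat
    J = -a_hat * dA_dt(t, t0_hat)                    # d(model)/d(t0_hat)
    resid = I - a_hat * tmpl
    dt0 = np.dot(resid, J) / np.dot(J, J)            # first-order update
    t0_hat += dt0
    if abs(dt0) < 1e-6:                              # delta t0 has converged to ~0
        break
```

As on the next slide, iteration stops when the update δt̂_0 is (numerically) zero, and â is read off from the final linear fit.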
Iterative Solution for t̂_0

The linearized equation can be solved iteratively for t̂_0:
0. Choose a starting value for t̂_0.
1. Calculate δt̂_0 using the linearized equations.
2. Is δt̂_0 = 0 (to within a tolerance)?
3a. If yes, stop.
3b. If no, update t̂_0 → t̂_0 + δt̂_0 and go back to step 1.
At the best-fit value of t̂_0 the change is zero, δt̂_0 = 0 (top of the hill), and â can be calculated using Equation 5 or 6.

Correlation Function Approach

The iterative solution for t̂_0 is similar to the following procedure, which uses cross-correlation more directly:
1. Cross correlate the template A(t) with I(t) to get a CCF.
2. Find the lag of peak correlation as an estimate of the arrival time, t̂_0 = τ_max.
3. Calculate â if needed.

Subtleties of the Cross-Correlation Method

The CCF is calculated using sampled data and is therefore itself a discrete quantity. Often one wants greater precision on the arrival time than is given by the sample interval; i.e., we want a floating-point number for t̂_0, not an integer index. Therefore we want to calculate the peak of the CCF by interpolating near its peak. The interpolation should be done properly, using the appropriate interpolation formula for sampled data (the sinc function); parabolic interpolation yields excessive errors in the arrival time. In practice, the proper interpolation is effectively done in the frequency domain by calculating the phase shift of the Fourier transform of the CCF, which is the product of the Fourier transform of the template and the Fourier transform of the data.
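Sinc interpolation of a discrete CCF can be carried out by zero-padding its Fourier transform and inverse transforming, which evaluates the band-limited interpolant on a finer grid. The pulse width, fractional shift, and upsampling factor below are assumed test values.

```python
import numpy as np

# Assumed test case: a Gaussian pulse shifted by a fractional number of samples.
n, up = 256, 16                     # series length and interpolation factor
t = np.arange(n)
A = np.exp(-0.5 * ((t - n // 2) / 6.0) ** 2)

shift = 5.37                        # true (fractional) lag, in samples
f = np.fft.fftfreq(n)
x = np.fft.ifft(np.fft.fft(A) * np.exp(-2j * np.pi * f * shift)).real

# CCF via the Fourier domain: FT(CCF) = FT(data) * conj(FT(template)).
C = np.fft.fft(x) * np.conj(np.fft.fft(A))

# Sinc interpolation = zero-pad the spectrum, then inverse transform.
Cpad = np.zeros(n * up, dtype=complex)
Cpad[: n // 2] = C[: n // 2]
Cpad[-n // 2 :] = C[-n // 2 :]
ccf_up = np.fft.ifft(Cpad).real

lag_hat = np.argmax(ccf_up) / up
if lag_hat > n / 2:                 # map circular lag to a signed lag
    lag_hat -= n
```

The interpolated peak recovers the fractional lag to a small fraction of a sample, whereas fitting a parabola to the three coarse samples around the peak is biased for non-parabolic CCF shapes.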
Timing Error from Radiometer Noise

- rms TOA error from template fitting with additive noise; Gaussian-shaped pulse; N_6 = N / 10^6.
- Low-DM pulsars: DISS (and RISS) will modulate the SNR.
- Interstellar pulse broadening, when large, increases Δt_{S/N} in two ways:
  - the SNR decreases by a factor W / (W² + τ_d²)^{1/2};
  - W increases to (W² + τ_d²)^{1/2}.
- Hence large errors for high-DM pulsars and low-frequency observations.

(23 June 2010, Jim Cordes, IPTA2010, Leiden)
Timing Error from Pulse-Phase Jitter

- f_φ = PDF of the phase variation
- a(φ) = individual pulse shape
- N_i = number of independent pulses summed
- m_I = intensity modulation index
- f_J = fractional jitter parameter = φ_rms / W
- Gaussian-shaped pulse; N_6 = N_i / 10^6.
[Figure: Matched filtering summary. Panels show the pulse shape, the template, and their correlation function for a template that is too wide, too narrow, and matched; the TOA is read from the peak of the correlation function.]
More informationStochastic Processes: I. consider bowl of worms model for oscilloscope experiment:
Stochastic Processes: I consider bowl of worms model for oscilloscope experiment: SAPAscope 2.0 / 0 1 RESET SAPA2e 22, 23 II 1 stochastic process is: Stochastic Processes: II informally: bowl + drawing
More informationDigital Image Processing Lectures 13 & 14
Lectures 13 & 14, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2013 Properties of KL Transform The KL transform has many desirable properties which makes
More informationState-Space Methods for Inferring Spike Trains from Calcium Imaging
State-Space Methods for Inferring Spike Trains from Calcium Imaging Joshua Vogelstein Johns Hopkins April 23, 2009 Joshua Vogelstein (Johns Hopkins) State-Space Calcium Imaging April 23, 2009 1 / 78 Outline
More informationParticle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
Particle Filters Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Motivation For continuous spaces: often no analytical formulas for Bayes filter updates
More informationProperties of the Autocorrelation Function
Properties of the Autocorrelation Function I The autocorrelation function of a (real-valued) random process satisfies the following properties: 1. R X (t, t) 0 2. R X (t, u) =R X (u, t) (symmetry) 3. R
More informationSystem Identification & Parameter Estimation
System Identification & Parameter Estimation Wb3: SIPE lecture Correlation functions in time & frequency domain Alfred C. Schouten, Dept. of Biomechanical Engineering (BMechE), Fac. 3mE // Delft University
More informationContinuous Wave Data Analysis: Fully Coherent Methods
Continuous Wave Data Analysis: Fully Coherent Methods John T. Whelan School of Gravitational Waves, Warsaw, 3 July 5 Contents Signal Model. GWs from rotating neutron star............ Exercise: JKS decomposition............
More informationTransdimensional Markov Chain Monte Carlo Methods. Jesse Kolb, Vedran Lekić (Univ. of MD) Supervisor: Kris Innanen
Transdimensional Markov Chain Monte Carlo Methods Jesse Kolb, Vedran Lekić (Univ. of MD) Supervisor: Kris Innanen Motivation for Different Inversion Technique Inversion techniques typically provide a single
More informationLecture 15. Theory of random processes Part III: Poisson random processes. Harrison H. Barrett University of Arizona
Lecture 15 Theory of random processes Part III: Poisson random processes Harrison H. Barrett University of Arizona 1 OUTLINE Poisson and independence Poisson and rarity; binomial selection Poisson point
More informationSequential Data 1D a new factorization of the discrete Fourier transform matrix :The Hartley Transform (1986); also chirplets
ASTR509-17 Sequential Data 1D Ronald N Bracewell (1921 2007) - PhD in ionospheric research, Cavendish Lab (Ratcliffe). - 1949-1954 Radiophysics Laboratory of CSIRO; a founding father of radio astronomy,
More information2A1H Time-Frequency Analysis II Bugs/queries to HT 2011 For hints and answers visit dwm/courses/2tf
Time-Frequency Analysis II (HT 20) 2AH 2AH Time-Frequency Analysis II Bugs/queries to david.murray@eng.ox.ac.uk HT 20 For hints and answers visit www.robots.ox.ac.uk/ dwm/courses/2tf David Murray. A periodic
More informationCommunications and Signal Processing Spring 2017 MSE Exam
Communications and Signal Processing Spring 2017 MSE Exam Please obtain your Test ID from the following table. You must write your Test ID and name on each of the pages of this exam. A page with missing
More informationGravitational-Wave Data Analysis: Lecture 2
Gravitational-Wave Data Analysis: Lecture 2 Peter S. Shawhan Gravitational Wave Astronomy Summer School May 29, 2012 Outline for Today Matched filtering in the time domain Matched filtering in the frequency
More informationA SuperDARN Tutorial. Kile Baker
A SuperDARN Tutorial Kile Baker Introduction n Multi-pulse Technique and ACFs Why ACFs? Bad and missing lags n Basic Theory of FITACF n Basic Fitting Technique Getting the Power and Width Getting the Velocity
More informationIntroduction to Bayesian Data Analysis
Introduction to Bayesian Data Analysis Phil Gregory University of British Columbia March 2010 Hardback (ISBN-10: 052184150X ISBN-13: 9780521841504) Resources and solutions This title has free Mathematica
More informationENSC327 Communications Systems 2: Fourier Representations. Jie Liang School of Engineering Science Simon Fraser University
ENSC327 Communications Systems 2: Fourier Representations Jie Liang School of Engineering Science Simon Fraser University 1 Outline Chap 2.1 2.5: Signal Classifications Fourier Transform Dirac Delta Function
More information13.42 READING 6: SPECTRUM OF A RANDOM PROCESS 1. STATIONARY AND ERGODIC RANDOM PROCESSES
13.42 READING 6: SPECTRUM OF A RANDOM PROCESS SPRING 24 c A. H. TECHET & M.S. TRIANTAFYLLOU 1. STATIONARY AND ERGODIC RANDOM PROCESSES Given the random process y(ζ, t) we assume that the expected value
More informationLecture 4 - Spectral Estimation
Lecture 4 - Spectral Estimation The Discrete Fourier Transform The Discrete Fourier Transform (DFT) is the equivalent of the continuous Fourier Transform for signals known only at N instants separated
More informationSampling. Alejandro Ribeiro. February 8, 2018
Sampling Alejandro Ribeiro February 8, 2018 Signals exist in continuous time but it is not unusual for us to process them in discrete time. When we work in discrete time we say that we are doing discrete
More informationChapter 2 Signal Processing at Receivers: Detection Theory
Chapter Signal Processing at Receivers: Detection Theory As an application of the statistical hypothesis testing, signal detection plays a key role in signal processing at receivers of wireless communication
More informationA6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 2011
A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 2011 Reading Chapter 5 of Gregory (Frequentist Statistical Inference) Lecture 7 Examples of FT applications Simulating
More informationMetropolis Algorithm
//7 A Modeling, Inference, and Mining Jim Cordes, Cornell University Lecture MCMC example Reading: Ch, and in Gregory (from before) Chapter 9 of Mackay (Monte Carlo Methods) hip://www.inference.phy.cam.ac.uk/itprnn/
More informationTracking of Spread Spectrum Signals
Chapter 7 Tracking of Spread Spectrum Signals 7. Introduction As discussed in the last chapter, there are two parts to the synchronization process. The first stage is often termed acquisition and typically
More informationNoncoherent Integration Gain, and its Approximation Mark A. Richards. June 9, Signal-to-Noise Ratio and Integration in Radar Signal Processing
oncoherent Integration Gain, and its Approximation Mark A. Richards June 9, 1 1 Signal-to-oise Ratio and Integration in Radar Signal Processing Signal-to-noise ratio (SR) is a fundamental determinant of
More informationChapter 2 Random Processes
Chapter 2 Random Processes 21 Introduction We saw in Section 111 on page 10 that many systems are best studied using the concept of random variables where the outcome of a random experiment was associated
More informationLecture Notes 7 Stationary Random Processes. Strict-Sense and Wide-Sense Stationarity. Autocorrelation Function of a Stationary Process
Lecture Notes 7 Stationary Random Processes Strict-Sense and Wide-Sense Stationarity Autocorrelation Function of a Stationary Process Power Spectral Density Continuity and Integration of Random Processes
More informationLecture 1: Pragmatic Introduction to Stochastic Differential Equations
Lecture 1: Pragmatic Introduction to Stochastic Differential Equations Simo Särkkä Aalto University, Finland (visiting at Oxford University, UK) November 13, 2013 Simo Särkkä (Aalto) Lecture 1: Pragmatic
More informationKLT for transient signal analysis
The Labyrinth of the Unepected: unforeseen treasures in impossible regions of phase space Nicolò Antonietti 4/1/17 Kerastari, Greece May 9 th June 3 rd Qualitatively definition of transient signals Signals
More informationLOPE3202: Communication Systems 10/18/2017 2
By Lecturer Ahmed Wael Academic Year 2017-2018 LOPE3202: Communication Systems 10/18/2017 We need tools to build any communication system. Mathematics is our premium tool to do work with signals and systems.
More informationECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering
ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu Anwesan Pal:
More informationENSC327 Communications Systems 19: Random Processes. Jie Liang School of Engineering Science Simon Fraser University
ENSC327 Communications Systems 19: Random Processes Jie Liang School of Engineering Science Simon Fraser University 1 Outline Random processes Stationary random processes Autocorrelation of random processes
More informationMarkov Chain Monte Carlo methods
Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As
More informationGaussian, Markov and stationary processes
Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November
More informationStochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS
Stochastic Processes Theory for Applications Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS Contents Preface page xv Swgg&sfzoMj ybr zmjfr%cforj owf fmdy xix Acknowledgements xxi 1 Introduction and review
More information
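The cross-correlation estimator above can be sketched numerically. This is a minimal illustration, not part of the lecture: the Gaussian pulse template, noise level, and parameter values (a = 2.5, t0 = 300 samples) are all assumed for the example. It cross-correlates U(t) with T(t) in the Fourier domain and reads off the arrival time from the peak lag, with the scale factor recovered by a least-squares fit at that lag.

```python
import numpy as np

# Matched-filter TOA estimation sketch (illustrative setup):
#   U(t) = a T(t - t0) + n(t)
# Estimate t0 from the peak of C_UT(tau), computed in the Fourier
# domain as IFFT( conj(FFT(T)) * FFT(U) ).

rng = np.random.default_rng(0)
N = 1024
t = np.arange(N)

# Assumed template: a Gaussian pulse (width 8 samples) near the start
T = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)

a_true, t0_true = 2.5, 300                 # illustrative scale and shift
U = a_true * np.roll(T, t0_true) + 0.1 * rng.standard_normal(N)

# Circular cross-correlation via FFT; adequate for a well-contained pulse
C = np.fft.ifft(np.conj(np.fft.fft(T)) * np.fft.fft(U)).real

tau_max = int(np.argmax(C))                # TOA estimate (samples)
a_hat = C[tau_max] / np.sum(T**2)          # amplitude at the best lag

print(tau_max, a_hat)
```

With additive noise only, tau_max recovers t0 and a_hat recovers a to within the noise-induced error discussed above; if the shape of U(t) differs from the template, the TOA estimate picks up the additional systematic error noted on the slide.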