SIGNAL PROCESSING B14 Option 4 lectures Stephen Roberts

Recommended texts

Lynn. An Introduction to the Analysis and Processing of Signals. Macmillan.
Oppenheim & Schafer. Digital Signal Processing. Prentice Hall.
Orfanidis. Introduction to Signal Processing. Prentice Hall.
Proakis & Manolakis. Digital Signal Processing: Principles, Algorithms and Applications.
Matlab Signal Processing Toolbox manual.

Lecture 1 - Introduction

1.1 Introduction

Signal processing is the treatment of signals (information-bearing waveforms or data) so as to extract the wanted information, removing unwanted signals and noise. There are compelling applications as diverse as the analysis of seismic waveforms or the recording of moving rotor blade pressures and heat transfer rates. In these lectures we will consider just the very basics, mainly using filtering as a case example for signal processing.

1.1.1 Analogue vs. Digital Signal Processing

Most of the signal processing techniques mentioned above could be used to process either the original analogue (continuous-time) signals or their digital version (the signals are sampled in order to convert them to sequences of numbers). For example, the earliest type of voice coder developed was the channel vocoder, which consists mainly of a bank of band-pass filters. The trend is, therefore, towards digital signal processing systems; even the well-established radio receiver has come under threat. The other great advantage of digital signal processing lies in the ease with which non-linear processing may be performed. Almost all recent developments in modern signal processing are in the digital domain. This lecture course will start by covering the basics of the design of simple (analogue) filters, as this is a pre-requisite to understanding much of the underlying theory of digital signal processing and filtering.

1.2 Summary/Revision of basic definitions

1.2.1 Linear Systems

A linear system may be defined as one which obeys the Principle of Superposition. If x1(t) and x2(t) are inputs to a linear system which give rise to outputs y1(t) and y2(t) respectively, then the combined input a x1(t) + b x2(t) will give rise to an output a y1(t) + b y2(t), where a and b are arbitrary constants.

Notes

If we represent an input signal by some support in a frequency domain, F_in (i.e. the set of frequencies present in the input), then no new frequency support will be required to model the output, i.e. F_out ⊆ F_in.

Linear systems can be broken down into simpler sub-systems which can be re-arranged in any order, i.e.

x → g1 → g2  ≡  x → g2 → g1  ≡  x → g1,2
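As a quick numerical illustration (a Python sketch added here, not part of the original notes; the moving-average and squaring systems are purely illustrative choices), the superposition property holds for a linear system and fails for a non-linear one:

```python
import numpy as np

# Check superposition for a linear moving-average system vs. a non-linear squarer.
def moving_average(x):
    return np.convolve(x, np.ones(3) / 3.0, mode="full")

def squarer(x):
    return x ** 2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(50), rng.standard_normal(50)
a, b = 2.0, -0.5

for system, name in [(moving_average, "moving average"), (squarer, "squarer")]:
    lhs = system(a * x1 + b * x2)            # response to the combined input
    rhs = a * system(x1) + b * system(x2)    # combination of individual responses
    print(name, "obeys superposition:", np.allclose(lhs, rhs))
```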

1.2.2 Time Invariance

A time-invariant system is one whose properties do not vary with time (i.e. the input signals are treated the same way regardless of their time of arrival); for example, with discrete systems, if an input sequence x(n) produces an output sequence y(n), then the input sequence x(n − n0) will produce the output sequence y(n − n0) for all n0.

1.2.3 Linear Time-Invariant (LTI) Systems

Most of the lecture course will focus on the design and analysis of systems which are both linear and time-invariant. The basic linear time-invariant operation is filtering in its widest sense.

1.2.4 Causality

In a causal (or realisable) system, the present output signal depends only upon present and previous values of the input. (Although all practical engineering systems are necessarily causal, there are several important systems which are non-causal (non-realisable), e.g. the ideal digital differentiator.)

1.2.5 Stability

A stable system (over a finite interval T) is one which produces a bounded output in response to a bounded input (over T).

1.3 Linear Processes

Some of the common signal processing functions are amplification (or attenuation), mixing (the addition of two or more signal waveforms) or un-mixing [1], and filtering. Each of these can be represented by a linear time-invariant block with an input-output characteristic which can be defined by:

The impulse response g(t) in the time domain.
The transfer function in a frequency domain.

We will see that the choice of frequency basis may be subtly different from time to time. As we will see, there is (for the systems we examine in this course) an invertible mapping between the time and frequency domain representations.

[1] This linear unmixing turns out to be one of the most interesting current topics in signal processing.

1.4 Time-Domain Analysis: convolution

Convolution allows the evaluation of the output signal from an LTI system, given its impulse response and input signal. The input signal can be considered as being composed of a succession of impulse functions, each of which generates a weighted version of the impulse response at the output, as shown in Fig. 1.1. The output at time t, y(t), is obtained simply by adding the effect of each separate impulse function; this gives rise to the convolution integral:

y(t) = ∫_0^∞ x(t − τ) g(τ) dτ

τ is a dummy variable which represents time measured back into the past from the instant t at which the output y(t) is to be calculated.

Figure 1.1: Convolution as a summation over shifted impulse responses (panels show the individual shifted components and their total).

1.4.1 Notes

Convolution is commutative. Thus y(t) is also:

y(t) = ∫_0^∞ x(τ) g(t − τ) dτ

For discrete systems convolution is a summation operation:

y[n] = Σ_{k=0}^∞ x[k] g[n − k] = Σ_{k=0}^∞ x[n − k] g[k]

Relationship between convolution and correlation

The general form of the convolution integral

f(t) = ∫ x(τ) g(t − τ) dτ

is very similar [2] to that of the cross-correlation function relating two variables x(t) and y(t):

R_xy(τ) = ∫ x(t) y(t − τ) dt

Convolution is hence an integral over lags at a fixed time, whereas correlation is the integral over time for a fixed lag.

[2] Note that the lower limit of the integral can be −∞ or 0. Why?
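For the discrete case, the summation form of convolution and the lag-based form of correlation can be tried directly in numpy (a sketch added for illustration; the first-order impulse response and sinusoidal input are arbitrary assumptions):

```python
import numpy as np

# Discrete convolution (system output) and cross-correlation as a function of lag.
n = np.arange(50)
g = 0.8 ** n                       # assumed impulse response g[n]
x = np.sin(2 * np.pi * 0.05 * n)   # assumed input x[n]

y = np.convolve(x, g)[:len(n)]     # y[n] = sum_k x[k] g[n-k]

# Cross-correlation R_xy over lags; numpy flips one argument relative to
# convolution, which is exactly the difference between the two operations.
r_xy = np.correlate(x, y, mode="full")
print(y[:5])
print(r_xy[len(n) - 1:len(n) + 4])  # values at lags 0..4
```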

Step response

The step function is the time integral of an impulse. As integration (and differentiation) are linear operations, the order of application in an LTI system does not matter:

δ(t) → ∫ dt → g(t) → step response
δ(t) → g(t) → ∫ dt → step response
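In discrete time the same commutation can be checked numerically (a small sketch, not from the notes; the impulse response is an assumed first-order decay, and the running sum plays the role of integration):

```python
import numpy as np

# The step response equals the running sum of the impulse response, and the
# order of the two linear operations (summation, system) does not matter.
g = 0.8 ** np.arange(30)                 # assumed impulse response
step = np.ones_like(g)                   # unit step input

resp_a = np.cumsum(g)                    # "integrate" the impulse response
resp_b = np.convolve(step, g)[:len(g)]   # pass a step through the system
print(np.allclose(resp_a, resp_b))       # True
```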

1.5 Frequency-Domain Analysis

LTI systems, by definition, may be represented (in the continuous case) by linear differential equations (in the discrete case by linear difference equations). Consider the application of the linear differential operator, D, to the function f(t) = e^{st}:

D f(t) = s f(t)

An equation of this form means that f(t) is the eigenfunction of D. Just like the eigen-analysis you know from matrix theory, this means that f(t) and any linear operation on f(t) may be represented using a set of functions of exponential form, and that this set of functions may be chosen to be orthogonal. This naturally gives rise to the use of the Laplace and Fourier representations.

The Laplace transform:

X(s) → [ G(s) ] → Y(s)

where

X(s) = ∫_0^∞ x(t) e^{−st} dt    (the Laplace transform of x(t))

Y(s) = G(s) X(s)

and where the transfer function G(s) can be expressed as a pole-zero representation of the form:

G(s) = A (s − z1)...(s − zm) / [ (s − p1)(s − p2)...(s − pn) ]

(NB: The inverse transformation, i.e. obtaining y(t) from Y(s), is not a straightforward mathematical operation.)
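A pole-zero description of this kind is easy to experiment with numerically. The sketch below (an addition, assuming Python with scipy available; the zero, pole and gain values are invented for illustration) builds G(s) from zeros, poles and gain and evaluates it along the jω axis:

```python
import numpy as np
from scipy import signal

# Illustrative pole-zero transfer function G(s) = A(s - z1)/((s - p1)(s - p2)).
zeros = [-2.0]
poles = [-1.0 + 1.0j, -1.0 - 1.0j]
A = 3.0

b, a = signal.zpk2tf(zeros, poles, A)      # numerator/denominator polynomials in s
w, G = signal.freqs(b, a, worN=np.logspace(-1, 2, 200))
print(b, a)                                 # polynomial coefficients of G(s)
print(np.abs(G[:3]))                        # |G(jw)| at the first few frequencies
```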

The Fourier transform:

X(jω) → [ G(jω) ] → Y(jω)

where

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt    (the Fourier transform of x(t))

and

Y(jω) = G(jω) X(jω)

with G(jω) the frequency response. The output time function can be obtained by taking the inverse Fourier transform:

y(t) = (1/2π) ∫_{−∞}^{∞} Y(jω) e^{jωt} dω

1.5.1 Relationship between time & frequency domains

Theorem
If g(t) is the impulse response of an LTI system, G(jω), the Fourier transform of g(t), is the frequency response of the system.

Proof
Consider an input x(t) = A cos ωt to an LTI system. Let g(t) be the impulse response, with a Fourier transform G(jω). Using convolution, the output y(t) is given by:

y(t) = ∫_0^∞ A cos ω(t − τ) g(τ) dτ
     = (A/2) ∫_0^∞ e^{jω(t−τ)} g(τ) dτ + (A/2) ∫_0^∞ e^{−jω(t−τ)} g(τ) dτ
     = (A/2) e^{jωt} ∫_{−∞}^{∞} g(τ) e^{−jωτ} dτ + (A/2) e^{−jωt} ∫_{−∞}^{∞} g(τ) e^{jωτ} dτ

(the lower limit of integration can be changed from 0 to −∞ since g(τ) = 0 for τ < 0)

     = (A/2) { e^{jωt} G(jω) + e^{−jωt} G(−jω) }

Let G(jω) = C e^{jφ}, i.e. C = |G(jω)|, φ = arg{G(jω)}. Then

y(t) = (AC/2) { e^{j(ωt+φ)} + e^{−j(ωt+φ)} } = CA cos(ωt + φ)

i.e. an input sinusoid has its amplitude scaled by |G(jω)| and its phase changed by arg{G(jω)}, where G(jω) is the Fourier transform of the impulse response g(t).
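This gain relationship can be checked by brute force (a sketch added here; the first-order system g(t) = e^{−t}u(t), with G(jω) = 1/(1 + jω), and the input amplitude and frequency are all assumed values, not from the notes):

```python
import numpy as np

# Pass x(t) = A cos(wt) through g(t) = e^{-t}u(t) by numerical convolution and
# compare the measured steady-state gain with the predicted |G(jw)|.
dt, A, w = 1e-2, 2.0, 5.0
t = np.arange(0, 40, dt)
g = np.exp(-t)                              # impulse response
x = A * np.cos(w * t)                       # input sinusoid
y = np.convolve(x, g)[:len(t)] * dt         # approximate convolution integral

G = 1.0 / (1.0 + 1j * w)                    # frequency response at w
steady = y[len(t) // 2:]                    # ignore the start-up transient
print("measured gain:", steady.max() / A)   # approximately |G(jw)|
print("predicted |G(jw)|:", np.abs(G), " predicted phase:", np.angle(G))
```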

Theorem
Convolution in the time domain is equivalent to multiplication in the frequency domain, i.e.

y(t) = g(t) ∗ x(t)   ⟺   Y(jω) = G(jω) X(jω)    (Fourier)

and

y(t) = g(t) ∗ x(t)   ⟺   Y(s) = G(s) X(s)    (Laplace)

Proof
Consider the general integral (Laplace) transform of a shifted function:

L{f(t − τ)} = ∫_t f(t − τ) e^{−st} dt = e^{−sτ} L{f(t)}

Now consider the Laplace transform of the convolution integral:

L{f(t) ∗ g(t)} = ∫_t ∫_τ f(t − τ) g(τ) dτ e^{−st} dt
             = ∫_τ g(τ) e^{−sτ} dτ · L{f(t)}
             = L{g(t)} L{f(t)}

By allowing s → jω we prove the result for the Fourier transform as well.
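The discrete counterpart of this theorem is easy to demonstrate with the DFT (a sketch added for illustration; the signals are arbitrary, and zero-padding to the full linear-convolution length is needed so that circular convolution matches linear convolution):

```python
import numpy as np

# Convolution in time equals multiplication in frequency.
x = np.random.default_rng(1).standard_normal(64)
g = 0.9 ** np.arange(32)

N = len(x) + len(g) - 1                     # full linear-convolution length
direct = np.convolve(x, g)
via_fft = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(g, N), N)
print(np.allclose(direct, via_fft))         # True
```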

1.6 Filters

Low-pass: to extract a short-term average or to eliminate high-frequency fluctuations (e.g. noise filtering, demodulation, etc.)

High-pass: to follow small-amplitude high-frequency perturbations in the presence of a much larger slowly-varying component (e.g. recording the electrocardiogram in the presence of a strong breathing signal)

Band-pass: to select a required modulated carrier frequency out of many (e.g. radio)

Band-stop: to eliminate single-frequency (e.g. mains) interference (also known as notch filtering)

Figure 1.2: Standard filters (low-pass, high-pass, band-pass and notch responses |G(ω)| against ω).

1.7 Design of Analogue Filters

We will start with an analysis of analogue low-pass filters, since a low-pass filter can be mathematically transformed into any other standard type. Design of a filter may start from consideration of:

The desired frequency response.
The desired phase response.

The majority of the time we will consider the first case. Consider some desired response, in the general form of the (squared) magnitude of the transfer function, i.e. |G(s)|². This response is given as

|G(s)|² = G(s) G*(s)

where * denotes complex conjugation. If G(s) represents a stable filter (its poles are in the left half of the s-plane) then G*(s) is unstable (as its poles will be in the right half). The design procedure then consists of:

Considering some desired response |G(s)|² as a polynomial in even powers of s.
Designing the filter with the stable part of G(s)G*(s).

This means that, for any given filter response in the positive frequency domain, a mirror image exists in the negative frequency domain.

1.7.1 Ideal low-pass filter

Figure 1.3: The ideal low-pass filter. Note the requirement of response in the negative frequency domain.

Any frequency-selective filter may be described either by its frequency response (more common) or by its impulse response. The narrower the band of frequencies transmitted by a filter, the more extended in time is its impulse response waveform. Indeed, if the support in the frequency domain is decreased by a factor of a (i.e. made narrower) then the required support in the time domain is increased by a factor of a (you should be able to prove this).

Consider an ideal low-pass filter with a brick-wall amplitude cut-off and no phase shift, as shown in Fig. 1.3. Calculate the impulse response as the inverse Fourier transform of the frequency response:

g(t) = (1/2π) ∫_{−∞}^{∞} G(jω) e^{jωt} dω = (1/2π) ∫_{−ωc}^{ωc} 1 · e^{jωt} dω = (1/(2πjt)) (e^{jωc t} − e^{−jωc t})

hence,

g(t) = (ωc/π) (sin ωc t / ωc t)

Figure 1.4 shows the impulse response for the filter (this is also referred to as the filter kernel). The output starts infinitely long before the impulse occurs, i.e. the filter is not realisable in real time.

Figure 1.4: Impulse response (filter kernel) for the ILPF. The zero crossings occur at integer multiples of π/ωc.

A delay of time T such that

g(t) = (ωc/π) (sin ωc(t − T) / ωc(t − T))

would ensure that most of the response occurred after the input (for large T). The use of such a delay, however, introduces a phase lag proportional to frequency, since arg{G(jω)} = −ωT. Even then, the filter is still not exactly realisable; instead the design of analogue filters involves the choice of the most suitable approximation to the ideal frequency response.
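The sinc-shaped kernel is simple to evaluate numerically (a sketch added here; the cut-off ωc = π rad/s and the time grid are assumed values chosen only to make the numbers tidy):

```python
import numpy as np

# Impulse response of the ideal low-pass filter, g(t) = (wc/pi) sin(wc t)/(wc t).
# Note the non-zero response for t < 0 (before the impulse), which is why the
# ideal filter is not realisable in real time.
wc = np.pi                                    # assumed cut-off frequency (rad/s)
t = np.linspace(-5, 5, 1001)
g = (wc / np.pi) * np.sinc(wc * t / np.pi)    # np.sinc(x) = sin(pi*x)/(pi*x)

dt = t[1] - t[0]
print("peak value (wc/pi):", g.max())                      # occurs at t = 0
print("kernel energy before t = 0:", np.sum(g[t < 0] ** 2) * dt)
```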

1.8 Practical Low-Pass Filters

Assume that the low-pass filter transfer function G(s) is a rational function in s. The type of filter to be considered in the next few pages is all-pole design, which means that G(s) will be of the form:

G(s) = 1 / (a_n s^n + a_{n−1} s^{n−1} + ... + a_1 s + a_0)

or

G(jω) = 1 / (a_n (jω)^n + a_{n−1} (jω)^{n−1} + ... + a_1 jω + a_0)

The magnitude-squared response is |G(jω)|² = G(jω) G(−jω). The denominator of |G(jω)|² is hence a polynomial in even powers of ω. Hence the task of approximating the ideal magnitude-squared characteristic is that of choosing a suitable denominator polynomial in ω², i.e. selecting the function H in the following expression:

|G(jω)|² = 1 / (1 + H{(ω/ωc)²})

where ωc = nominal cut-off frequency and H = rational function of (ω/ωc)².

The choice of H is determined by functions such that 1 + H{(ω/ωc)²} is close to unity for ω < ωc and rises rapidly after that.

1.9 Butterworth Filters

i.e.

H{(ω/ωc)²} = {(ω/ωc)²}^n = (ω/ωc)^{2n}

so that

|G(jω)|² = 1 / (1 + (ω/ωc)^{2n})

where n is the order of the filter. Figure 1.5 shows the response on linear (a) and log (b) scales for various orders n.

Figure 1.5: Butterworth filter response on (a) linear and (b) log scales, for n = 1, 2, 6.

On a log-log scale the response, for ω > ωc, falls off at approximately −20n dB/decade.
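The magnitude formula above is straightforward to tabulate for a few orders (a small numerical sketch added here; the normalised cut-off ωc = 1 and the chosen frequencies are arbitrary):

```python
import numpy as np

# Butterworth magnitude response |G(jw)| = 1/sqrt(1 + (w/wc)^(2n)) for several
# orders, reproducing the behaviour plotted in Figure 1.5.
wc = 1.0
w = np.logspace(-1, 1, 5)            # a handful of frequencies around cut-off
for n in (1, 2, 6):
    G = 1.0 / np.sqrt(1.0 + (w / wc) ** (2 * n))
    print(f"n={n}:", np.round(G, 4))
# At w = wc every order gives |G| = 1/sqrt(2), i.e. the -3 dB point.
```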

1.9.1 Butterworth filter notes

1. |G| = 1/√2 for ω = ωc (i.e. the magnitude response is 3 dB down at the cut-off frequency).

2. For large n: in the region ω < ωc, |G(jω)| ≈ 1; in the region ω > ωc, the steepness of |G| is a direct function of n.

3. The response is known as maximally flat, because

d^k |G| / dω^k = 0 at ω = 0, for k = 1, 2, ..., 2n − 1

Proof
Express |G(jω)| in a binomial expansion:

|G(jω)| = {1 + (ω/ωc)^{2n}}^{−1/2} = 1 − (1/2)(ω/ωc)^{2n} + (3/8)(ω/ωc)^{4n} − (5/16)(ω/ωc)^{6n} + ...

It is then easy to show that the first 2n − 1 derivatives are all zero at the origin.

1.9.2 Transfer function of Butterworth low-pass filter

|G(jω)|² = G(jω) G(−jω)

Since G(jω) is derived from G(s) using the substitution s → jω, the reverse operation can also be done, i.e. ω → −js. Thus the poles of

G(s)G(−s) = 1 / (1 + (−js/ωc)^{2n})   or   G(s)G(−s) = 1 / (1 + (js/ωc)^{2n})

belong either to G(s) or G(−s). The poles are given by:

(js/ωc)^{2n} = −1 = e^{j(2k+1)π},   k = 0, 1, 2, ..., 2n − 1

Thus

js/ωc = e^{j(2k+1)π/2n}

Since j = e^{jπ/2}, we then have the final result:

s = ωc exp( j[ π/2 + (2k+1)π/2n ] )

i.e. the poles have the same modulus and equi-spaced arguments. For example, for a fifth-order Butterworth low-pass filter (LPF), n = 5:

π/2n = 18°
π/2 + (2k+1)π/2n = 90° + (18°, 54°, 90°, 126°, etc...)

i.e. the poles are at:

108°, 144°, 180°, 216°, 252°   (in the L.H. s-plane, therefore stable)
288°, 324°, 360°, 396°, 432°   (in the R.H. s-plane, therefore unstable)

We want to design a stable filter. Since each unstable pole is (−1) times a stable pole, we can let the stable ones be in G(s), and the unstable ones in G(−s). Therefore the poles of G(s) are

ωc e^{j108°}, ωc e^{j144°}, ωc e^{j180°}, ωc e^{j216°}, ωc e^{j252°}

as shown in Figure 1.6.
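The pole formula can be checked numerically. The sketch below (an addition, assuming Python with numpy and scipy available) builds all 2n poles, keeps the left-half-plane ones, and compares them with scipy's normalised analogue Butterworth prototype:

```python
import numpy as np
from scipy import signal

# Stable poles of an n-th order Butterworth LPF from
# s_k = wc * exp(j*(pi/2 + (2k+1)*pi/(2n))), keeping only the L.H. plane poles.
n, wc = 5, 1.0
k = np.arange(2 * n)
s = wc * np.exp(1j * (np.pi / 2 + (2 * k + 1) * np.pi / (2 * n)))
stable = s[s.real < 0]                          # poles in the L.H. s-plane
print(np.sort_complex(np.round(stable, 4)))

z, p, gain = signal.buttap(n)                   # scipy's normalised prototype (wc = 1)
print(np.sort_complex(np.round(p, 4)))          # matches the poles above
```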

Figure 1.6: Stable poles of the 5th order Butterworth filter.

G(s) = Π_k 1/(1 − s/p_k)
     = 1 / [ (1 + s/ωc) {1 + 2 cos 72° (s/ωc) + (s/ωc)²} {1 + 2 cos 36° (s/ωc) + (s/ωc)²} ]

G(s) = 1 / (1 + 3.2361 (s/ωc) + 5.2361 (s/ωc)² + 5.2361 (s/ωc)³ + 3.2361 (s/ωc)⁴ + (s/ωc)⁵)

Note that the coefficients are palindromic (read the same in reverse order); this is true for all Butterworth filters. Poles are always on the same radii, at π/n angular spacing, with half-angles at each end. If n is odd, one pole is real.

n   a1      a2       a3       a4       a5       a6       a7      a8
1   1.0000
2   1.4142  1.0000
3   2.0000  2.0000   1.0000
4   2.6131  3.4142   2.6131   1.0000
5   3.2361  5.2361   5.2361   3.2361   1.0000
6   3.8637  7.4641   9.1416   7.4641   3.8637   1.0000
7   4.4940  10.0978  14.5918  14.5918  10.0978  4.4940   1.0000
8   5.1258  13.1371  21.8462  25.6884  21.8462  13.1371  5.1258  1.0000

Butterworth LPF coefficients a1 ... a8 (with a0 = 1).
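These tabulated coefficients are just the expanded polynomial whose roots are the stable poles (with ωc = 1); a short sketch, added for illustration, reproduces the n = 5 row:

```python
import numpy as np

# Expand (s - p1)(s - p2)...(s - p5) for the stable Butterworth poles (wc = 1).
n = 5
k = np.arange(n)
poles = np.exp(1j * (np.pi / 2 + (2 * k + 1) * np.pi / (2 * n)))   # stable poles
coeffs = np.real(np.poly(poles))        # polynomial coefficients, highest power first
print(np.round(coeffs, 4))              # 1.0, 3.2361, 5.2361, 5.2361, 3.2361, 1.0
# The coefficients are palindromic, so they match the table row read either way.
```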

1.9.3 Design of Butterworth LPFs

The only design variable for Butterworth LPFs is the order of the filter, n. If a filter is to have an attenuation A at a frequency ωa:

|G|²(ωa) = 1/A² = 1 / (1 + (ωa/ωc)^{2n})

i.e.

n = log(A² − 1) / (2 log(ωa/ωc))

or, since usually A ≫ 1,

n ≈ log A / log(ωa/ωc)

Butterworth design example

Design a Butterworth LPF with at least 40 dB attenuation at a frequency of 1 kHz and 3 dB attenuation at fc = 500 Hz.

Answer

40 dB ⇒ A = 100; ωa = 2000π and ωc = 1000π rad/s

Therefore

n ≈ log10(100) / log10(2) = 2 / 0.301 = 6.64

Hence n = 7 meets the specification.

Check: substitute s/ωc = 2j into the transfer function built from the above table for n = 7:

G(2j) = 1 / (87.38j − 93.54)

which gives A = |1/G(2j)| = 128 (greater than 100, so the 40 dB requirement is met).
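The arithmetic of this worked example can be reproduced directly (a short sketch; the coefficient list below is simply the n = 7 row of the table above, with a0 = 1 appended):

```python
import numpy as np

# Order needed for 40 dB attenuation at twice the cut-off frequency,
# then a check of the n = 7 polynomial at the normalised frequency s/wc = 2j.
A, ratio = 100.0, 2.0                         # 40 dB => A = 100; wa/wc = 2
n = np.log10(A) / np.log10(ratio)
print("required order:", n, "-> use n =", int(np.ceil(n)))   # 6.64 -> 7

a = [1.0, 4.4940, 10.0978, 14.5918, 14.5918, 10.0978, 4.4940, 1.0]   # a0..a7
denom = sum(ak * (2j) ** k for k, ak in enumerate(a))
print("|1/G(2j)| =", abs(denom))              # about 128, i.e. > 100, spec met
```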