Discrete sines, cosines, and complex exponentials


Alejandro Ribeiro

January 15, 2015

Sines, cosines, and complex exponentials play a very important role in signal and information processing. The purpose of this lab is to gain some experience and intuition on how these signals look and behave. The signals we consider here are discrete because they are indexed by a finite integer time index n = 0, 1, ..., N - 1. The constant N is referred to as the length of the signal. Start by considering a separate integer number k to define the discrete complex exponential $e_{kN}(n)$ of discrete frequency k and duration N as

$e_{kN}(n) = \frac{1}{\sqrt{N}}\, e^{j2\pi kn/N} = \frac{1}{\sqrt{N}} \exp(j2\pi kn/N).$   (1)

The (regular) complex exponential is defined as $e^{j2\pi kn/N} = \cos(2\pi kn/N) + j\sin(2\pi kn/N)$, so that if we compute the real and imaginary parts of $e_{kN}(n)$ we have that

$\mathrm{Re}\big(e_{kN}(n)\big) = \frac{1}{\sqrt{N}} \cos(2\pi kn/N), \qquad \mathrm{Im}\big(e_{kN}(n)\big) = \frac{1}{\sqrt{N}} \sin(2\pi kn/N).$   (2)

We say that the real part of the complex exponential is a discrete cosine of discrete frequency k and duration N, and that the imaginary part is a discrete sine of discrete frequency k and duration N.

The discrete frequency k in (2) determines the number of oscillations that we see in the N elements of the signal. A sine, cosine, or complex exponential of discrete frequency k has a total of k complete oscillations in the N samples. Mathematically speaking, the complex exponential, the sine, and the cosine are all different signals. Intuitively speaking, all of them are oscillations of the same frequency.

Since complex exponentials have imaginary parts, they don't exist in the real world. Nevertheless, we work with them instead of sines and cosines because they are easier to handle.

1 Signal generation

Let us begin by generating and displaying some complex exponentials, and by using the generated signals to explore some important properties that these signals have.

1.1 Generate complex exponentials. Write a Matlab function that takes as input the frequency k and the signal duration N and returns three vectors with N components containing the elements of the signal $e_{kN}(n)$ defined in (1), as well as its real and imaginary parts [cf. (2)]. Plot the real and imaginary components for N = 32 and different values of k. Observe that some of these signals don't look much like oscillations. In your report, show the plots for k = 0, k = 2, k = 9, and k = ....

1.2 Equivalent complex exponentials. Use the code in Part 1.1 to generate complex exponentials of the same duration and frequencies k and l that are N apart. E.g., make N = 32 and plot signals for frequencies k = 3, k = 3 + 32 = 35, and k = 3 - 32 = -29. You should observe that these signals are identical.

1.3 Conjugate complex exponentials. Use the code in Part 1.1 to generate complex exponentials of the same duration and opposite frequencies k and -k. E.g., make N = 32 and plot signals for frequencies k = 3 and k = -3. You should observe that these signals have the same real part and opposite imaginary parts. We say that the signals are conjugates of each other.

1.4 More conjugate complex exponentials. Consider now frequencies k and l in the interval [0, N - 1] such that their sum is k + l = N. To think about this relationship, order the frequencies from k = 0 to k = N and start walking up the chain from k = 0, to k = 1, to k = 2, and so on. Likewise, start walking down the chain from l = N, to l = N - 1, to l = N - 2, and so on. When you have taken the same number of steps in either direction you have that k + l = N. Given your observations in Parts 1.2 and 1.3, you should expect these signals to be conjugates of each other. Verify your expectation with, e.g., k = 3 and l = 32 - 3 = 29.
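The generator asked for in Part 1.1 can be as short as a few lines. The following is a minimal sketch, with illustrative function and variable names rather than required ones.

```matlab
% Sketch for Part 1.1: discrete complex exponential of (1) together with
% its real and imaginary parts, for n = 0, 1, ..., N-1.
function [e, ereal, eimag] = cexp(k, N)
    n = (0:N-1)';                        % time indices as a column vector
    e = exp(1j*2*pi*k*n/N) / sqrt(N);    % discrete complex exponential e_kN
    ereal = real(e);                     % discrete cosine of frequency k
    eimag = imag(e);                     % discrete sine of frequency k
end
```

A call such as [~, c, s] = cexp(2, 32) followed by stem(0:31, c) and stem(0:31, s) produces the plots requested in Parts 1.1 through 1.4.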

We consider now the energy of complex exponentials and the inner products between complex exponentials of different frequencies. Given two signals x and y of duration N, their inner product is defined as

$\langle x, y \rangle := \sum_{n=0}^{N-1} x(n)\, y^*(n).$   (3)

The energy of a signal is defined as the inner product of the signal with itself, $\|x\|^2 := \langle x, x \rangle$. If we write the signals x and y as vectors $x = [x(0), \ldots, x(N-1)]^T$ and $y = [y(0), \ldots, y(N-1)]^T$, the inner product is simply the matrix product $y^H x$, where $(\cdot)^H$ denotes conjugate transpose, and the energy is the product $x^H x$. We say that a signal is normal if it has unit energy, i.e., if $\|x\|^2 = 1$. We say that two signals are orthogonal if their inner product is null, i.e., if $\langle x, y \rangle = 0$. Orthogonality looks like an innocent property, but it is nothing like that. It is one of the most important properties that a group of signals can have.

1.5 Orthonormality. Write a function to compute the inner product $\langle e_{kN}, e_{lN} \rangle$ between all pairs of discrete complex exponentials of length N and frequencies k = 0, 1, ..., N - 1. Run and report your result for N = 16. You should observe that the complex exponentials have unit energy and are orthogonal to each other. When this happens, we say that the signals form an orthonormal set.

2 Analysis

The numerical experiments of Part 1 pointed to two properties of discrete complex exponentials that are very important for subsequent analyses. In this section we study these properties analytically. We first work on the observation that when we consider frequencies k and l that are N apart, the complex exponentials may have formulas that look different but are actually equivalent.

2.1 Equivalent complex exponentials. Consider two complex exponentials $e_{kN}(n)$ and $e_{lN}(n)$ as given by the definition in (1). Prove that if k - l = N the signals are equivalent, i.e., that $e_{kN}(n) = e_{lN}(n)$ for all times n.
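For Part 1.5, one compact way to compute all pairwise inner products at once is to stack the exponentials as columns and take a single matrix product; the sketch below assumes the cexp function from the previous sketch.

```matlab
% Sketch for Part 1.5: all inner products <e_kN, e_lN> for k, l = 0, ..., N-1.
N = 16;
E = zeros(N, N);                    % column k+1 holds the exponential e_kN
for k = 0:N-1
    E(:, k+1) = cexp(k, N);
end
G = E' * E;                         % ' is conjugate transpose, so G collects all inner products
disp(max(max(abs(G - eye(N)))))     % close to zero: the set is orthonormal
```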

2.2 More equivalent complex exponentials. Use the result in Part 2.1 to show that the same is true not only when k - l = N but whenever the difference k - l is a multiple of N.

The second fundamental property that we want to explore is that when we have two complex exponentials that are not equivalent, their inner product is null,

$\langle e_{kN}, e_{lN} \rangle := \sum_{n=0}^{N-1} e_{kN}(n)\, e_{lN}^*(n) = 0.$   (4)

We observed that this was true in Part 1.5 for some particular examples. We will now prove that it is true in general.

2.3 Orthogonality. Consider two complex exponentials $e_{kN}(n)$ and $e_{lN}(n)$ that are not equivalent, i.e., for which the difference k - l is not a multiple of N. Prove that the signals are orthogonal to each other.

2.4 Orthonormality. Prove that complex exponentials have unit norm, $\|e_{kN}\|^2 = \langle e_{kN}, e_{kN} \rangle = 1$. The combination of this fact with the orthogonality proven in Part 2.3 means that a set of N consecutive complex exponentials forms an orthonormal set. Explain this statement.

The statements that we derived above are for a specific sort of discrete complex exponential. We can write more generic versions if we do not restrict the frequency k to be an integer or if we shift the argument of the complex exponential. When performing these operations it is interesting to ask if the equivalence properties of Parts 2.1 and 2.2 and the orthogonality properties of Parts 2.3 and 2.4 hold true.

2.5 Phase shifts. Let $\phi \in \mathbb{R}$ be an arbitrary given number that we call a phase shift. We define a shifted complex exponential by subtracting the shift from the exponent in (1),

$e_{kN}(n - \phi) = \frac{1}{\sqrt{N}}\, e^{j(2\pi kn/N - \phi)} = \frac{1}{\sqrt{N}} \exp\big(j(2\pi kn/N - \phi)\big).$   (5)

The reason why subtracting $\phi$ from (1) is called a shift is that the frequency of the oscillation doesn't change; the oscillation just gets shifted to the right. In this problem we consider discrete frequencies k ≠ l and a common shift $\phi$. Is there a condition that makes the complex exponentials $e_{kN}(n - \phi)$ and $e_{lN}(n - \phi)$ of frequencies k and l equivalent? Is there a condition that guarantees that the complex exponentials $e_{kN}(n - \phi)$ and $e_{lN}(n - \phi)$ are orthogonal?
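A tool that helps with Parts 2.2 through 2.6 (one standard route, not the only one) is the finite geometric sum. Substituting the definition (1) into the inner product (4) gives

$\langle e_{kN}, e_{lN} \rangle = \frac{1}{N}\sum_{n=0}^{N-1} e^{j2\pi (k-l)n/N} = \begin{cases} 1, & \text{if } e^{j2\pi(k-l)/N} = 1,\\[4pt] \dfrac{1}{N}\,\dfrac{1 - e^{j2\pi(k-l)}}{1 - e^{j2\pi(k-l)/N}}, & \text{otherwise.} \end{cases}$

The first case occurs exactly when k - l is a multiple of N; in the second case the numerator vanishes whenever k - l is an integer, which is the content of Part 2.3. Asking when these two conditions hold for a common phase shift or for fractional frequencies is what Parts 2.5 and 2.6 require.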

2.6 Fractional frequencies. Lift the assumption that k in (1) is an integer and consider arbitrary frequencies $k, l \in \mathbb{R}$. Is there a condition that makes the complex exponentials $e_{kN}(n)$ and $e_{lN}(n)$ of frequencies k and l equivalent? Is there a condition that guarantees that the complex exponentials $e_{kN}(n)$ and $e_{lN}(n)$ are orthogonal?

3 Generating and playing musical tones

Up until now we have considered discrete signals as standalone entities. However, discrete signals are most often used as representations of a continuous signal that exists in the palpable, as opposed to virtual, world. To connect discrete signals to the physical world we define the sampling time $T_s$ as the time elapsed between times n and n + 1. Two ancillary definitions that follow from this one are the sampling frequency $f_s = 1/T_s$ and the signal duration $T = N T_s$.

To move from discrete to actual frequencies, say that we are given a discrete cosine of frequency k and duration N with an associated sampling time of $T_s$ seconds. We want to determine the frequency $f_0$ of that cosine. To do so, recall that a discrete cosine of frequency k has a total of k oscillations in the N samples, which is the same as saying that it has a total of k oscillations in $T = N T_s$ seconds. The period of the cosine is therefore N/k samples, which, as before, is the same as saying that it has a period of $T/k = N T_s/k$ seconds. The frequency of the cosine is the inverse of its period,

$f_0 = \frac{k}{T} = \frac{k}{N T_s} = \frac{k}{N} f_s.$   (6)

Conversely, if we are given a cosine of frequency $f_0$ Hertz that we want to observe with a sampling frequency $f_s$ for a total of $T = N T_s = N/f_s$ seconds, it follows that the corresponding discrete cosine has discrete frequency

$k = N \frac{f_0}{f_s}.$   (7)

In explicit terms, we can use the definition in (2) with the discrete frequency in (7) to write the discrete cosine as

$x(n) = \cos\big(2\pi kn/N\big) = \cos\big(2\pi [(f_0/f_s)N]\, n/N\big).$   (8)

Simplifying the signal duration N in (8) and recalling that $T_s = 1/f_s$, the cosine x(n) can be rewritten as

$x(n) = \cos\big(2\pi (f_0/f_s)\, n\big) = \cos\big(2\pi f_0 (n T_s)\big).$   (9)

The last expression in (9) is intuitive. It's saying that the continuous time cosine $x(t) = \cos(2\pi f_0 t)$ is being sampled every $T_s$ seconds during a time interval of length $T = N T_s$ seconds.

3.1 Discrete cosine generation. Write a function that takes as input the sampling frequency $f_s$, the time duration T, and the frequency $f_0$, and returns the associated discrete cosine x(n) as generated by (9). Your function also has to return the number of samples N. When T is not a multiple of $T_s = 1/f_s$ you can reduce T to the largest multiple of $T_s$ smaller than T.

3.2 Generate an A note. The musical A note corresponds to an oscillation at frequency $f_0$ = 440 Hertz. Use the code of Part 3.1 to generate an A note of duration T = 2 seconds sampled at a frequency $f_s$ = 44,100 Hertz. Play the note in your computer's speakers.

3.3 Generate musical notes. A piano has 88 keys that can generate 88 different musical notes. The frequencies of these 88 different musical notes can be generated according to the formula

$f_i = 2^{(i-49)/12} \times 440.$   (10)

Modify the code of Part 3.1 so that, instead of taking the frequency $f_i$ as an argument, it receives the piano key number i and generates the corresponding musical tone.

3.4 Play a song. To play a song, you just need to play different notes in order. Use the code in Part 3.3 to play a song that has at least as many notes as Happy Birthday.
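A possible shape for the generator of Parts 3.1 through 3.3 is sketched below; the function name and the use of floor to handle durations that are not multiples of $T_s$ are illustrative choices.

```matlab
% Sketch for Part 3.1: discrete cosine of frequency f0 sampled at fs for T seconds.
function [x, N] = discrete_cosine(fs, T, f0)
    N = floor(T * fs);              % largest whole number of samples that fits in T
    n = (0:N-1)';
    x = cos(2*pi*(f0/fs)*n);        % the cosine of (9)
end
```

For Part 3.2, calling [x, N] = discrete_cosine(44100, 2, 440) followed by sound(x, 44100) plays the A note; for Part 3.3 it suffices to compute f0 = 2^((i-49)/12)*440 from the key number i before the call.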

4 Time management

The formulation of the problems in Part 1 is lengthy, but their solutions are straightforward. The goal is to finish them during the Tuesday lab session. Try to get a head start in solving the problems. You may not succeed, but thinking about them will streamline the Tuesday session. This should require just 1 more hour besides the lab.

The problems in Part 2 will take more time to complete. You should wait until after class on Wednesday morning to solve them. We will do Parts 2.1, 2.2, 2.3, and 2.4 in class. I am asking that you report on them to make sure that you understood them. They are very important properties for understanding Fourier transforms. To solve Parts 2.5 and 2.6 you have to work on your own, but the solutions are simple generalizations of earlier parts. You should be able to wrap this up in 3 hours, about 30 minutes for each of the questions.

Part 3 is the one that will take more time because you have to put in place your creativity and problem solving skills. If you are familiar with tones and beats, and know how to read music, this should take about 6 hours to complete. If you don't, part of being an engineer is being able to do something you don't know how to do. It'll take you a couple more hours to learn how to read Happy Birthday.

Discrete Fourier transform (DFT)

Alejandro Ribeiro

March 4, 2015

Let $x : [0, N-1] \to \mathbb{C}$ be a discrete signal of duration N and having elements x(n) for $n \in [0, N-1]$. The discrete Fourier transform (DFT) of x is the signal $X : \mathbb{Z} \to \mathbb{C}$ where the elements X(k) for all $k \in \mathbb{Z}$ are defined as

$X(k) := \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi kn/N} = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x(n) \exp(-j2\pi kn/N).$   (1)

The argument k of the signal X(k) is called the frequency of the DFT and the value X(k) the frequency component of the given signal x. When X is the DFT of x we write $X = \mathcal{F}(x)$. The DFT $X = \mathcal{F}(x)$ is also referred to as the spectrum of x.

Recall that for a complex exponential, discrete frequency k is equivalent to (real) frequency $f_k = (k/N) f_s$, where N is the total number of samples and $f_s$ the sampling frequency. When interpreting DFTs, it is often easier to consider the real frequency values instead of the corresponding discrete frequencies.

An alternative form of the DFT follows from realizing that the sum in (1) defines the inner product between x and the complex exponential $e_{kN}$ with elements $e_{kN}(n) = (1/\sqrt{N})\, e^{j2\pi kn/N}$. We can then write

$X(k) := \langle x, e_{kN} \rangle.$   (2)

This latter expression emphasizes the fact that X(k) is a measure of how much the signal x resembles an oscillation of frequency k.

Because complex exponentials of frequencies k and k + N are equivalent, it follows that the DFT values X(k) and X(k + N) are equal, i.e.,

$X(k + N) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi (k+N)n/N} = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi kn/N} = X(k).$   (3)

The relationship in (3) means that the DFT is periodic with period N and that, while it is defined for all $k \in \mathbb{Z}$, only N values are different. For computational purposes we work with the canonical set of frequencies in the interval $k \in [0, N-1]$. For interpretation purposes we work with the canonical set of frequencies $k \in [-N/2, N/2]$. This latter canonical set contains N + 1 frequencies instead of N (frequencies -N/2 and N/2 are equivalent in that $X(N/2) = X(-N/2)$), but it is used to have a set that is symmetric around k = 0. Going from one canonical set to the other is straightforward. The frequencies in the interval [0, N/2] are present in both sets, and to recover the negative frequencies $k \in [-N/2, -1]$ from the positive frequencies in [N/2, N-1] we just use the fact that

$X(-k) = X(N-k), \qquad \text{for all } k \in [1, N/2].$   (4)

We say that the operation in (4) is a chop and shift. To recover the DFT values for the canonical set [-N/2, N/2] from the canonical set [0, N-1] we chop the frequencies in the interval [N/2, N-1] and shift them to the front of the set. For the purposes of this homework, when you are asked to report a DFT, you should report the DFT for the canonical set [-N/2, N/2].

1 Spectrum of pulses

In this first part of the lab we will consider pulses and waves of different shapes to understand what information can be gleaned from spectral analysis. A prerequisite for that is to have a function to compute DFTs.

1.1 Computation of the DFT. Write a function that takes as input a signal x of duration N and the associated sampling frequency $f_s$ and returns the values of the DFT $X = \mathcal{F}(x)$ for the canonical set $k \in [-N/2, N/2]$, as well as a vector with the real frequencies associated with each of the discrete frequencies k. Explain how to use the outcome of this function to recover the DFT values X(k) associated with frequencies in the canonical set $k \in [0, N-1]$.
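A minimal sketch of the DFT routine in Part 1.1 follows; the direct matrix implementation below mirrors definition (1) and is meant for clarity rather than speed (MATLAB's fft could be used instead, followed by a reordering of frequencies).

```matlab
% Sketch for Part 1.1: DFT over the canonical set [-N/2, N/2] and the
% associated real frequencies in Hertz.
function [X, f] = dft_centered(x, fs)
    x = x(:);
    N = length(x);
    n = (0:N-1)';
    k = (-floor(N/2):floor(N/2))';               % canonical set [-N/2, N/2]
    X = exp(-1j*2*pi*k*n'/N) * x / sqrt(N);      % X(k) = <x, e_kN> as in (1)-(2)
    f = k/N * fs;                                % real frequencies f_k = (k/N) fs
end
```

To recover the values on the canonical set [0, N-1], use the periodicity in (3): the entries with k < 0 are the values X(k + N).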

Figure 1. Unit energy square pulse of length $T_0 = M T_s$ and duration $T = N T_s$. The signal is constant for indexes n < M and null for other n. The height of the pulse is set to $1/\sqrt{M}$ to have unit total energy.

With N samples and a sampling frequency $f_s$, the total signal duration is $T = N T_s$. Given a length $T_0 = M T_s < T$, we define the unit energy square pulse of time length $T_0$, or, equivalently, discrete length M, as

$u_M(n) = \frac{1}{\sqrt{M}} \ \text{ if } 0 \le n < M, \qquad u_M(n) = 0 \ \text{ if } M \le n.$   (5)

Intuitively, pulses of shorter length are faster signals than pulses of longer length. We will see that this rate of change information is captured by the DFT.

1.2 DFTs of square pulses. Use the code in Part 1.1 to compute the DFT of square pulses of duration T = 32s sampled at a rate $f_s$ = 8Hz and different time lengths. You should observe that the DFT is more concentrated for wider pulses. Make this evaluation more quantitative by computing the DFT energy fraction corresponding to frequencies $f_k$ in the interval $[-1/T_0, 1/T_0]$. Report your results for pulses of length $T_0$ = 0.5s, $T_0$ = 1s, $T_0$ = 4s, and $T_0$ = 16s.

While it is true that wider pulses change more slowly, all square pulses have, at some point, a high rate of change when they jump from $x(M-1) = 1/\sqrt{M}$ to $x(M) = 0$.
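The energy-fraction computation asked for in Part 1.2 can be organized as below (a sketch that assumes the dft_centered routine from the previous sketch; parameter values are those given in the problem statement).

```matlab
% Sketch for Part 1.2: in-band energy fraction of square pulses of several lengths.
fs = 8; T = 32; N = T*fs;
for T0 = [0.5 1 4 16]
    M = round(T0*fs);
    u = [ones(M,1)/sqrt(M); zeros(N-M,1)];        % unit energy square pulse of (5)
    [U, f] = dft_centered(u, fs);
    frac = sum(abs(U(abs(f) <= 1/T0)).^2) / sum(abs(U).^2);
    fprintf('T0 = %4.1f s: energy fraction %.3f\n', T0, frac);
end
```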

Figure 2. Unit energy triangular pulse of length $T_0 = M T_s$ and duration $T = N T_s$. The signal is smoother, i.e., changes more slowly, than the square pulse of equivalent length.

We can construct a pulse that changes more slowly than the square pulse by smoothing out the transition. One possibility is to define a triangular pulse by raising and then decreasing its height linearly. Specifically, consider an even pulse length M and define the triangular pulse as

$\Lambda_M(n) = n \ \text{ if } 0 \le n < M/2, \qquad \Lambda_M(n) = (M-1) - n \ \text{ if } M/2 \le n < M, \qquad \Lambda_M(n) = 0 \ \text{ if } M \le n.$   (6)

Observe that, as defined in (6), the triangular pulse does not have unit energy. In your comparisons below, you may want to scale the pulse numerically to have unit energy. To do so, you just have to divide the pulse by its norm, i.e., use $\Lambda_M(n)/\|\Lambda_M\|$ instead of $\Lambda_M(n)$.

1.3 DFTs of triangular pulses. Consider the same parameters of Part 1.2 and observe that, as in the case of square pulses, the DFT is more concentrated for wider pulses. Make this observation quantitative by looking at the DFT energy fraction corresponding to frequencies $f_k$ in the interval $[-1/T_0, 1/T_0]$. Report your results for pulses of duration $T_0$ = 0.5s, $T_0$ = 1s, $T_0$ = 4s, and $T_0$ = 16s. Compare your results with the results of Part 1.2. Is your observation consistent with the intuitive appreciation that the triangular pulse changes more slowly than the square pulse? A qualitative explanation suffices for most people, but a good engineer would provide a quantitative answer.
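Only two lines change relative to the square pulse code; a sketch of the normalized triangular pulse of (6) is given below.

```matlab
% Sketch for Part 1.3: triangular pulse of (6), scaled to unit energy.
fs = 8; T = 32; N = T*fs; T0 = 4; M = round(T0*fs);
n = (0:M-1)';
tri = [min(n, (M-1)-n); zeros(N-M,1)];   % rises for n < M/2, decreases afterwards
tri = tri / norm(tri);                   % divide by the norm to get unit energy
[TriF, f] = dft_centered(tri, fs);       % then compute energy fractions as in Part 1.2
```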

1.4 Other pulses. We can define some other pulses with more concentrated spectra. These pulses are also called windows, and there is an extensive literature on windows with appealing spectral properties. Find out about Parzen, raised cosine, Gaussian, and Hamming windows. Compare the spectra of these windows to the spectra of square and triangular pulses.

2 Properties of the DFT

Our interest in the DFT is, mainly, as a computational tool for signal processing and analysis. For that reason, we will rarely be working on computing analytical expressions. There are, however, some DFT properties that are important to understand analytically. In this part of the assignment we will work on proving three of these properties: conjugate symmetry, energy conservation, and linearity.

2.1 Conjugate symmetry. Consider a real signal x, i.e., a signal with no imaginary part, and let its DFT be $X = \mathcal{F}(x)$. Prove that the DFT X is conjugate symmetric,

$X(-k) = X^*(k).$   (7)

2.2 Energy conservation (Parseval's Theorem). Let $X = \mathcal{F}(x)$ be the DFT of signal x and restrict the DFT X to a set of N consecutive frequencies. Prove that the energies of x and the restricted DFT are the same,

$\sum_{n=0}^{N-1} |x(n)|^2 = \|x\|^2 = \|X\|^2 = \sum_{k=N_0}^{N_0+N-1} |X(k)|^2.$   (8)

The constant $N_0$ in (8) is arbitrary.

2.3 Linearity. Prove that the DFT of a linear combination of signals is the linear combination of the respective DFTs of the individual signals,

$\mathcal{F}(ax + by) = a\,\mathcal{F}(x) + b\,\mathcal{F}(y).$   (9)

In (9), both signals are of the same duration N; otherwise, the sum wouldn't be properly defined.

The properties above are very important in the spectral analysis of signals.
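The properties in Parts 2.1 and 2.2 can also be checked numerically before (or after) proving them; the sketch below does so for conjugate symmetry and Parseval's theorem on a random real test signal, again assuming the dft_centered routine.

```matlab
% Numerical sanity check of (7) and (8) on a random real signal (not a proof).
N = 64; fs = 1;
x = randn(N,1);
[X, f] = dft_centered(x, fs);                        % canonical set [-N/2, N/2]
Xneg = X(f < 0); Xpos = X(f > 0);
disp(max(abs(Xneg - conj(flipud(Xpos)))))            % conjugate symmetry: ~0
disp(abs(sum(x.^2) - sum(abs(X(1:N)).^2)))           % Parseval over N consecutive k: ~0
```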

We present below a fourth property that is not as important but is nevertheless worth knowing.

2.4 Conservation of inner products (Plancherel's Theorem). Let $X = \mathcal{F}(x)$ be the DFT of signal x and $Y = \mathcal{F}(y)$ be the DFT of signal y. Restrict the DFTs X and Y to a set of N consecutive frequencies. Prove that the inner product $\langle x, y \rangle$ between the signals and the inner product $\langle X, Y \rangle$ between the restricted DFTs are the same,

$\sum_{n=0}^{N-1} x(n)\, y^*(n) = \langle x, y \rangle = \langle X, Y \rangle = \sum_{k=N_0}^{N_0+N-1} X(k)\, Y^*(k).$   (10)

The constant $N_0$ in (10) is arbitrary. Recover the result in Part 2.2 as a particular case of Plancherel's Theorem.

3 The spectra of musical tones

In the first lab assignment we studied how to generate pure musical tones. To do so we simply noted that for sampling time $T_s$ a cosine of frequency $f_0$ is generated according to the expression

$x(n) = \cos\big(2\pi (f_0/f_s)\, n\big) = \cos\big(2\pi f_0 (n T_s)\big),$   (11)

where the index n varies from 0 to N - 1, which is equivalent to observing the tone between times 0 and $T = N T_s$. As already noted, the last expression in (11) is intuitive. It's saying that the continuous time cosine $x(t) = \cos(2\pi f_0 t)$ is being sampled every $T_s$ seconds during a time interval of length $T = N T_s$ seconds.

Musical tones have specific frequencies. In particular, the A note corresponds to a frequency of 440Hz and to the 49th key of a piano. The 88 basic notes generated by a piano have frequencies that follow the formula

$f_i = 2^{(i-49)/12} \times 440.$   (12)

We have already used this knowledge to play a song using pure musical tones. In this lab assignment, we will compute the DFT of the song we played and interpret the result.

3.1 DFT of an A note. Generate an A note of duration T = 2 seconds sampled at a frequency $f_s$ = 44,100 Hertz. Compute the DFT of this signal and verify that: (a) the DFT is conjugate symmetric; (b) Parseval's Theorem holds. We know that the DFT of a discrete cosine is given by a pair of delta functions.

The DFT of this A note, however, is close to that but not exactly. Explain why, and find a frequency or frequency range that contains at least 90% of the DFT energy. What can you change to make the spectrum exactly equal to a pair of deltas?

3.2 DFT of a musical piece. Concatenate tones to play a musical piece with as many notes as Happy Birthday. Compute the DFT of this piece and identify the different musical tones in your piece.

3.3 Energy of different tones of a musical piece. For each of the tones identified in Part 3.2, compute the total energy that the musical piece contains at that tone. Cross check that this energy is, indeed, the energy that you know should be there because of the number of times you played the note.

The rich sound of actual musical instruments comes from the fact that they don't play pure tones, but multiple harmonics. A generic model for a musical instrument is to say that when a note is played it generates not only a tone at the corresponding frequency but a set of tones at frequencies that are multiples of the base tone. To construct a model, say that we are playing a note that corresponds to base frequency $f_0$. The instrument generates a signal that is given by a sum of multiple harmonics,

$x(n) = \sum_{h=1}^{H} a_h \cos\big(2\pi h f_0 (n T_s)\big).$   (13)

In (13), H is the total number of harmonics generated by the instrument and $a_h$ is the relative gain of the hth harmonic. The constants $a_h$ are specific to an instrument. E.g., we can get a sound reminiscent of an oboe with H = 8 harmonics and gains $a_h$ given by the components of the vector

$a = [1.386, 1.370, 0.360, 0.116, 0.106, 0.201, 0.037, 0.019]^T.$   (14)

Likewise, we can get something not totally unlike a flute with H = 5 harmonics and gains

$a = [0.260, 0.118, 0.085, 0.017, 0.014]^T.$   (15)

A very quacky trumpet can be simulated with H = 13 harmonics having gains

$a = [1.167, 1.178, 0.611, 0.591, 0.344, 0.139, 0.090, 0.057, 0.035, 0.029, 0.022, 0.020, 0.014]^T,$   (16)

and an even more quacky clarinet with H = 19 harmonics with gains

$a = [0.061, 0.628, 0.231, 1.161, 0.201, 0.328, 0.154, 0.072, 0.186, 0.133, 0.309, 0.071, 0.098, 0.114, 0.027, 0.057, 0.022, 0.042, 0.023]^T.$   (17)

We can use these harmonic decompositions to play songs with more realistic sounds.

3.4 DFT of an A note of different musical instruments. Repeat Part 3.1 for each of the 4 musical instruments described above.

3.5 DFT of a musical piece of different musical instruments. Repeat Parts 3.2 and 3.4 for one of the musical instruments described above. If you have no favorite, choose the flute.

4 Time management

The problems in Part 1 are not straightforward but not too difficult. The goal is to finish them during the Tuesday lab session. Try to get a head start in solving the problems. You may not succeed, but thinking about them will streamline the Tuesday session. This should require 2 more hours besides the lab.

The problems in Part 2 will take another couple of hours to complete. You should wait until after class on Wednesday morning to solve them. We will do Parts 2.1, 2.2, and 2.3 in class. I am asking that you report on them to make sure that you understood them. To solve Part 2.4 you have to work on your own, but the solution is a simple generalization of Part 2.3. You should be able to wrap this up in 2 hours, about 30 minutes for each of the questions.

Part 3 is the one that will take more time because you have to use your problem solving skills. It should take about 6 hours to complete. I would say something like 4 hours for the first three parts and 2 more hours to wrap up the pieces that simulate the wind instruments.
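As a closing aside to Parts 3.4 and 3.5, the harmonic model in (13) takes only a few lines to simulate. The sketch below uses the oboe-like gains of (14) and an A note as the base tone; both choices are illustrative.

```matlab
% Sketch of the harmonic instrument model (13) with the gains of (14).
fs = 44100; T = 2; n = (0:floor(T*fs)-1)';
f0 = 440;                                            % base tone (A note)
a  = [1.386 1.370 0.360 0.116 0.106 0.201 0.037 0.019];
x  = zeros(size(n));
for h = 1:length(a)
    x = x + a(h) * cos(2*pi*h*f0*n/fs);              % add the h-th harmonic
end
sound(x/max(abs(x)), fs);                            % normalize and play
```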

Inverse Discrete Fourier transform (iDFT)

Alejandro Ribeiro

January 30, 2015

Suppose that we are given the discrete Fourier transform (DFT) $X : \mathbb{Z} \to \mathbb{C}$ of an unknown signal. The inverse (i)DFT of X is defined as the signal $x : [0, N-1] \to \mathbb{C}$ with components x(n) given by the expression

$x(n) := \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X(k)\, e^{j2\pi kn/N} = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X(k) \exp(j2\pi kn/N).$   (1)

When x is obtained from X through the relationship in (1) we write $x = \mathcal{F}^{-1}(X)$. Recall that if X is the DFT of some signal, it must be periodic with period N. That means that in (1) we can replace the sum over the frequencies $k \in [0, N-1]$ by a sum over any other set of N consecutive frequencies. In particular, the iDFT of X can be alternatively written as

$x(n) = \frac{1}{\sqrt{N}} \sum_{k=-N/2+1}^{N/2} X(k)\, e^{j2\pi kn/N}.$   (2)

To see that (2) is correct, it suffices to note that X(k + N) = X(k) and that $e^{j2\pi(k+N)n/N} = e^{j2\pi kn/N}$ to conclude that each of the terms that appear in (1) is equivalent to one, and only one, of the terms that appear in (2).

It is not difficult to see that taking the iDFT of the DFT of a signal x recovers the original signal x. This means that the iDFT is, as its name indicates, the inverse operation to the DFT. This result is of sufficient importance to be highlighted in the form of a theorem that we state next.

Theorem 1 Given a discrete signal $x : [0, N-1] \to \mathbb{C}$, let $X = \mathcal{F}(x) : \mathbb{Z} \to \mathbb{C}$ stand for the DFT of x and $\tilde{x} = \mathcal{F}^{-1}(X) : [0, N-1] \to \mathbb{C}$ be the iDFT of X. We then have that $\tilde{x} \equiv x$, or, equivalently,

$\mathcal{F}^{-1}\big[\mathcal{F}(x)\big] = x.$   (3)
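Before writing the proof asked for below, Theorem 1 can be checked numerically. In matrix form, the DFT is multiplication by a unitary matrix and the iDFT of (1) is multiplication by its conjugate transpose; a minimal sketch:

```matlab
% Numerical illustration of Theorem 1 (a check, not a proof).
N = 32;
x = randn(N,1) + 1j*randn(N,1);          % arbitrary complex signal
n = (0:N-1)'; k = (0:N-1)';
W = exp(-1j*2*pi*k*n'/N) / sqrt(N);      % DFT: X = W*x
X = W * x;
xr = W' * X;                             % iDFT of (1): conjugate transpose of W
disp(max(abs(xr - x)))                   % of the order of machine precision
```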

Proof: Write down a proof of Theorem 1.

The result in Theorem 1 is important because it tells us that a signal x can be recovered from its DFT X by taking the inverse DFT. This implies that x and X are alternative representations of the same information, because we can move from one to the other using the DFT and iDFT operations. If we are given x we can compute X through the DFT, and if we are given X we can compute x through the iDFT. An important practical consequence of this equivalence is that if we are given one of the representations, say the signal x, and the other one is easier to interpret, say the DFT X, we can compute the respective transform and proceed with the analysis. This analysis will neither introduce spurious effects nor miss important features. Since both representations are equivalent, it is just a matter of which of the representations makes the identification of patterns easier. There is substantial empirical evidence that it is easier to analyze signals in the frequency domain, i.e., the DFT X, than it is to analyze signals in the time domain, i.e., the original signal x.

1 Signal reconstruction and compression

A more mathematical consequence of Theorem 1 is that any signal x can be written as a sum of complex exponentials. To see that this is true we just need to reinterpret the equations for the DFT and iDFT. In this reinterpretation, the components of the signal x can be written as [cf. (1) and (2)]

$x(n) = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X(k)\, e^{j2\pi kn/N} = \frac{1}{\sqrt{N}} \sum_{k=-N/2+1}^{N/2} X(k)\, e^{j2\pi kn/N},$   (4)

with coefficients X(k) that are given by the formula [cf. equation (1) in lab assignment 2]

$X(k) := \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi kn/N}.$   (5)

This is quite a remarkable fact. We may have a signal that doesn't look at all like an oscillation, but it is a consequence of Theorem 1 that such a signal can be written as a sum of oscillations.

It is instructive to rewrite (4) in an expanded form that makes the latter observation clearer.

To do so, consider the rightmost expression in (4), write the N summands explicitly, and reorder the terms so that the terms corresponding to a positive frequency k and its opposite frequency -k appear together. Doing so, and noting that frequencies k = 0 and k = N/2 have no corresponding opposites, it follows that (4) is equivalent to

$\sqrt{N}\, x(n) = X(0)\, e^{j2\pi 0 n/N}$
$\qquad\quad +\, X(1)\, e^{j2\pi 1 n/N} + X(-1)\, e^{-j2\pi 1 n/N}$
$\qquad\quad +\, X(2)\, e^{j2\pi 2 n/N} + X(-2)\, e^{-j2\pi 2 n/N}$
$\qquad\quad +\, \cdots$
$\qquad\quad +\, X\!\big(\tfrac{N}{2}-1\big)\, e^{j2\pi (N/2-1) n/N} + X\!\big(-\tfrac{N}{2}+1\big)\, e^{-j2\pi (N/2-1) n/N}$
$\qquad\quad +\, X\!\big(\tfrac{N}{2}\big)\, e^{j2\pi (N/2) n/N},$   (6)

where we have multiplied both sides of the equality by $\sqrt{N}$ to simplify the expression. Observe that the term that corresponds to frequency k = 0 is simply $X(0)\, e^{j2\pi 0 n/N} = X(0)$. We write the exponential part of this factor to avoid breaking the symmetry of the expression.

We can interpret (6) as a set of successive approximations of x(n) that introduce ever finer details in the form of faster signal variations. I.e., we can choose to approximate the signal x by the signal $x_K$, which we define by truncating the DFT sum to the first K terms in (6),

$x_K(n) := \frac{1}{\sqrt{N}} \sum_{k=0}^{K} \Big[ X(k)\, e^{j2\pi kn/N} + X(-k)\, e^{-j2\pi kn/N} \Big].$   (7)

The approximation that uses k = 0 only approximates the signal x with a constant. The approximation that uses k = 0 and k = ±1 approximates x with a constant and a single oscillation; the approximation that adds k = ±2 refines the signal by adding finer details in the form of a (more rapid) double oscillation. In general, when adding the kth frequency and its opposite -k, we add an oscillation of frequency k that makes the approximation closer to the actual signal.

If we have a signal that varies slowly, a representation with just a few coefficients is sufficient. For signals that vary faster, we need to add more coefficients to obtain a reasonable approximation.

Alternatively, if only gross details are important, we can eliminate the finer, irrelevant features by studying the approximated signal instead of the original signal. This observation is related to our digression on the empirical value of the DFT as a tool for pattern identification. The representation of x as a sum of complex exponentials facilitates the identification of relevant features, which tend to correspond to variations that are slower than irrelevant patterns. E.g., weather varies from day to day, but there is an underlying slower pattern that we call climate. Weather will manifest in the DFT coefficients associated with large frequencies and climate in the DFT coefficients associated with slower frequencies. We can study climate by reconstructing a weather signal x with a small number of DFT coefficients.

In this part of the lab we will study the quality of the reconstruction of x with approximating signals $x_K$ as we increase K.

1.1 Computation of the iDFT. Consider a DFT X corresponding to a real signal of even duration N, and assume that we are given the N/2 + 1 coefficients corresponding to frequencies k = 0, 1, ..., N/2. Write a function that takes these N/2 + 1 coefficients as input, as well as the associated sampling frequency $f_s$, and returns the iDFT $x = \mathcal{F}^{-1}(X)$ of the given X. Return also a vector of real times associated with the signal samples.

1.2 Signal reconstruction. Suppose now that we are given the first K + 1 coefficients X(0), ..., X(K) of the DFT of a signal of duration N. Write a function that returns the approximated signal $x_K$ with elements $x_K(n)$ as given in (7). The inputs to this function include the K + 1 coefficients, the signal duration N, and the sampling frequency $f_s$. Return also a vector of real times associated with the signal samples. Given that you already solved Part 1.1, it should take you less than a minute to solve this part.

1.3 Reconstruction of a square pulse. Generate a pulse of duration T = 32s sampled at a rate $f_s$ = 8Hz and length $T_0$ = 4s and compute its DFT. Use the function in Part 1.2 to create successive reconstructions of the pulse. Compute the energy of the difference between the signals x and $x_K$. This energy should decrease for increasing K. Report your results for K = 2, K = 4, K = 8, K = 16, and K = 32. Repeat for a pulse of length $T_0$ = 2s. Since this pulse varies faster, the reconstruction should be worse. Is that the case?
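A sketch of the reconstruction routine of Part 1.2 is given below. It assumes a real signal, so the coefficients at negative frequencies are obtained from conjugate symmetry, X(-k) = X*(k), and the k = 0 term is counted once; input and output names are illustrative.

```matlab
% Sketch for Part 1.2: reconstruction x_K of (7) from the coefficients
% X(0), ..., X(K) of a real signal of duration N sampled at frequency fs.
function [xK, t] = reconstruct_K(X, N, fs)
    K = length(X) - 1;
    n = (0:N-1)';
    xK = X(1) * ones(N,1);                               % k = 0 term, X(0) is real
    for k = 1:K
        xK = xK + X(k+1)*exp(1j*2*pi*k*n/N) ...
                + conj(X(k+1))*exp(-1j*2*pi*k*n/N);      % k and -k terms together
    end
    xK = real(xK) / sqrt(N);
    t = n / fs;                                          % real times of the samples
end
```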

1.4 Reconstruction of a triangular pulse. Generate a triangular pulse of duration T = 32s sampled at a rate $f_s$ = 8Hz and length $T_0$ = 4s and compute its DFT. Use the function in Part 1.2 to create successive reconstructions of the pulse. Compute the energy of the difference between the signals x and $x_K$. Report your results for K = 2, K = 4, K = 8, K = 16, and K = 32. This pulse should be easier to reconstruct than the square pulse. Is that true?

1.5 The energy of the difference signal. In Parts 1.3 and 1.4 you have computed the energy of the difference between the signals x and $x_K$. Just to be formal, define the error signal $r_K$ as the one with components $r_K(n) = x(n) - x_K(n)$. The energy you have been computing is therefore given by

$\|r_K\|^2 = \sum_{n=0}^{N-1} |r_K(n)|^2 = \sum_{n=0}^{N-1} |x(n) - x_K(n)|^2.$   (8)

Using Parseval's theorem, this energy can be computed from the values of the DFT coefficients that you are neglecting to include in the signal approximation. Explain how this can be done, and verify that your numerical results coincide.

A square wave can be visualized as a train of square pulses pasted next to each other. Mathematically, it is easier to generate a square wave by simply taking the sign of a discrete cosine. Consider then a given frequency $f_0$ and a given sampling frequency $f_s$ and define the square wave of frequency $f_0$ as the signal

$x(n) = \mathrm{sign}\big[\cos\big(2\pi (f_0/f_s)\, n\big)\big].$   (9)

This signal can be reconstructed with a few DFT coefficients, but not with the first K. To compress this signal well, we pick the K largest DFT coefficients, which are not necessarily the first K. When reconstructing the signal, we use a modified version of (7) in which we sum over the coefficients that were picked during the compression stage.

1.6 Signal compression. Write a function that receives as input a signal x of length N, the sampling frequency $f_s$, and a compression target K. The function outputs a vector with the K largest DFT coefficients and the corresponding set of frequencies at which these coefficients are observed. Notice that each of the coefficients that is kept requires storage of two numbers, the coefficient and the frequency.

This is disadvantageous with respect to keeping just the first K coefficients. This more sophisticated compression is justified only if keeping the largest coefficients reduces the total number of DFT coefficients needed by a factor larger than 2.

1.7 The why of signal compression. Why do we keep the largest DFT coefficients? This question has a very precise mathematical answer that follows from Parseval's Theorem. Provide that very precise answer. You may want to look at Part 1.5.

1.8 Signal reconstruction. Write a function that receives as input the output of the function in Part 1.6 and reconstructs the original signal x. Given that you already solved Parts 1.1 and 1.2, it should take you less than a minute to solve this part.

1.9 Compression and reconstruction of a square wave. Generate a square wave of duration T = 32s sampled at a rate $f_s$ = 8Hz and frequency 4Hz. Compress and reconstruct this wave using the functions in Parts 1.6 and 1.8. Try different compression targets and report the energy of the error signal for K = 2, K = 4, K = 8, and K = 16. This problem should teach you that a square wave can be approximated better than a square pulse if you keep the same number of coefficients. This should be the case because the square wave looks the same at all points, but the square pulse doesn't. Explain this statement.

2 Speech processing

The DFT, in conjunction with the iDFT, can be used to perform some basic speech analysis. In this part of the lab you will record your voice and perform a few interesting spectral transformations.

2.1 Record, graph, and play your voice. Record 5 seconds of your voice sampled at a frequency $f_s$ = 20kHz. Plot your voice. Compute the DFT of your voice and plot its magnitude. Play it back on the speakers.

2.2 Voice compression. The 5 second recording of your voice at sampling frequency $f_s$ = 20kHz is composed of 100,000 samples. Use the DFT and iDFT to compress your voice by a factor of 2, i.e., store K = 50,000 numbers instead of 100,000; a factor of 4 (store K = 25,000 numbers); a factor of 8 (store K = 12,500 numbers); and so on. Keep compressing until the sentence you spoke becomes unrecognizable. You can perform this compression by keeping the first K DFT coefficients or the largest K/2 DFT coefficients. Which one works better?
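The compression and reconstruction pair of Parts 1.6 and 1.8, which is also the tool behind the voice compression of Part 2.2, can be sketched compactly with MATLAB's fft and ifft; the 1/sqrt(N) convention of the lab differs from fft's only by a constant factor, which cancels in the round trip.

```matlab
% Sketch for Parts 1.6 and 1.8: keep the K largest DFT coefficients of x
% (together with their frequency indexes) and rebuild the signal from them.
function xr = compress_reconstruct(x, K)
    X = fft(x(:));
    [~, idx] = sort(abs(X), 'descend');   % locate the K largest coefficients
    Xc = zeros(size(X));
    Xc(idx(1:K)) = X(idx(1:K));           % store those K values and where they belong
    xr = real(ifft(Xc));                  % reconstruction; real() assumes a real input
end
```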

2.3 Voice masking. Say that you and your partner speak the same sentence. The DFTs of the respective recordings will be similar, because it's the same sentence, but also different, because your voices are different. You can use this fact to mask your voice by modifying its spectrum, i.e., by increasing the contribution of some frequencies and decreasing the contributions of others. Design a system to record your voice, make it unrecognizable but intelligible, and play it in the speakers.

As we saw in Part 1.9, it is easier to reconstruct a square wave than it is to reconstruct a square pulse. This happens because the square wave looks the same at all points, while the pulse looks different at different points. This suggests a problem with approximating the 5 second recording of your voice, namely, that you are trying to use the same complex exponentials to approximate different parts of your speech. You can overcome this limitation by dividing your signal in pieces and compressing each piece independently.

2.4 Better voice compression. Design a system that divides your speech in chunks of 100ms and compresses each of the chunks by a given factor $\gamma$. Design the inverse system that takes the compressed chunks, reconstructs the individual speech pieces, stitches them together, and plays them back in the speakers. You have just designed a rudimentary MP3 compressor and player. Try out different values of $\gamma$. Push $\gamma$ to the largest possible compression factor.

3 Uncover a secret message

Your teaching assistant will provide you on Tuesday with the Answer to the Ultimate Question of Life, the Universe, and Everything. ENIAC has been working on this answer since the early hours of the evening of Valentine's Day, 1946. Since this information is of a sensitive nature, it will be given in an audio message with a secret code that will make it sound like a fast paced Happy Birthday song. If you are able to decode the message, report it back Jeopardy style.

If you are a nerd, you will think that this is the coolest thing you have done in your life. In that case, you don't get points for this answer. If you are not a nerd, you will get 2 extra points on top of the four you are to get for the rest of the lab.(1)

(1) My lawyer just informed me that I am not allowed to ask your nerdal orientation, and that, in any event, I am not allowed to exhibit any nerder bias. Fine, you get 2 points even if you're a nerd.
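For the chunked compression of Part 2.4, one possible organization is sketched below; it reuses the compress_reconstruct sketch above and uses a random vector as a stand-in for the actual recording, with the chunk length and the factor gamma as placeholders.

```matlab
% Sketch for Part 2.4: compress a recording in 100 ms chunks by a factor gamma.
fs = 20000; gamma = 8;
x = randn(5*fs, 1);                          % stand-in for the 5 s voice recording
L = round(0.1*fs);                           % samples per 100 ms chunk
y = zeros(size(x));
for start = 1:L:length(x)-L+1
    seg = x(start:start+L-1);
    y(start:start+L-1) = compress_reconstruct(seg, round(L/gamma));
end
sound(y, fs);                                % play back the stitched reconstruction
```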

4 Time management

The effort for this particular lab is evenly divided between Parts 1 and 2. There is some overlap between the questions. If you do Part 1 properly, then Part 2 will be easier. Thus, the time split can be 4 and 6 hours, or 6 and 4 hours, depending on the sort of person you are. Do notice that some of the parts are conceptually simple but have finer points that may make them difficult to implement. The teaching assistants will provide substantial help with these fine points.

Part 3 is for the fun of it, although the extra points are for real. If you get how to solve it, it'll take you 5 minutes. If you don't, it will take you 5 years. In any event, don't waste much time. Try something. If you don't crack it, ask around. Some of you will figure it out.

Fourier transform

Alejandro Ribeiro

February 9, 2015

The discrete Fourier transform (DFT) is a computational tool to work with signals that are defined on a discrete time support and contain a finite number of elements. Time in the world is neither discrete nor finite, which motivates the consideration of continuous time signals $x : \mathbb{R} \to \mathbb{C}$. These signals map a continuous time index $t \in \mathbb{R}$ to a complex value $x(t) \in \mathbb{C}$. The signal values x(t) can be, and often are, real. Paralleling the development performed for discrete signals, we define the Fourier transform of the continuous time signal x as the signal $X : \mathbb{R} \to \mathbb{C}$ for which the signal values X(f) are given by the integral

$X(f) := \int_{-\infty}^{\infty} x(t)\, e^{-j2\pi f t}\, dt.$   (1)

The definition in (1) is different in form from the definition of the DFT, but it is conceptually analogous. Whatever intuition we have gained so far on dealing with the DFT of discrete signals extends more or less unchanged to the Fourier transform of continuous signals.

The statement above has a very deep meaning that will become clear once we develop the theory of sampling. For the time being we can observe that the DFT can be considered an approximation of the Fourier transform in which we start with N samples of x to obtain N samples of X. To see that this is true, consider N samples of x, separated by a sampling time $T_s$ and extending between times t = 0 and $t = N T_s$. The Riemann approximation of the integral in (1) is then given by

$X(f) = \int x(t)\, e^{-j2\pi f t}\, dt \approx T_s \sum_{n=0}^{N-1} x(n T_s)\, e^{-j2\pi f n T_s}.$   (2)

Figure 1. The discrete Fourier transform provides a numerical approximation to the Fourier transform. (The diagram relates x to X through the Fourier transform, and their sampled counterparts, obtained with sampling time $T_s$ and frequency spacing $f_s/N$, through the DFT.)

The approximation in (2) is valid for all frequencies, but if we just consider the frequencies $f = (k/N) f_s$ for $k \in [-N/2, N/2]$ we can rewrite (2) as

$X\!\left(\frac{k}{N} f_s\right) \approx T_s \sum_{n=0}^{N-1} x(n T_s)\, e^{-j2\pi (k/N) f_s n T_s} = T_s \sum_{n=0}^{N-1} x(n T_s)\, e^{-j2\pi kn/N}.$   (3)

Except for constants, the rightmost side of (3) is the definition of the DFT of the discrete signal $\tilde{x}$ with components $\tilde{x}(n) = x(n T_s)$. Indeed, the DFT $\tilde{X} = \mathcal{F}(\tilde{x})$ of the discrete signal $\tilde{x}$ has components

$\tilde{X}(k) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} \tilde{x}(n)\, e^{-j2\pi kn/N} = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x(n T_s)\, e^{-j2\pi kn/N}.$   (4)

Upon comparison of (3) and (4) we can conclude that the DFT $\tilde{X}$ of the sampled signal $\tilde{x}$ and the Fourier transform X of the continuous signal x are approximately related by the expression

$\tilde{X}(k) \approx \frac{1}{T_s \sqrt{N}}\, X\!\left(\frac{k}{N} f_s\right).$   (5)

The relationship in (5) allows us to approximate the Fourier transform of a signal with numerical operations, or, conversely, to conclude that a property derived for Fourier transforms is approximately valid for DFTs as well. The approximating relationship in (5) is represented schematically in Figure 1. In this lab we will use (5) to verify numerically some formulas that we will derive analytically.

1 Computation of Fourier transforms

We define a Gaussian pulse of standard deviation $\sigma$ and average value $\mu$ as the signal x with values x(t) given by the formula

$x(t) = e^{-(t-\mu)^2/(2\sigma^2)}.$   (6)

The standard deviation $\sigma$ controls the width of the pulse. Large $\sigma$ corresponds to wide pulses and small $\sigma$ corresponds to narrow pulses. The mean value $\mu$ controls the location of the pulse on the real line.

1.1 Fourier transform of a Gaussian pulse. Derive an expression for the Fourier transform of the Gaussian pulse when $\mu = 0$. You will have to make use of the fact that

$\int_{-\infty}^{\infty} e^{-t^2/(2\sigma^2)}\, dt = \sqrt{2\pi}\,\sigma.$   (7)

1.2 Numerical verification. Verify numerically that your derivation in Part 1.1 is correct. You will have to be careful with the selection of your sampling time and sampling interval. Try the comparison for different values of $\sigma$. Report for $\sigma$ = 1, $\sigma$ = 2, and $\sigma$ = ....

1.3 Fourier transform of a shifted Gaussian pulse. Derive an expression for the Fourier transform of the Gaussian pulse for generic $\mu$. Verify numerically. The solution to this part is very easy once you have solved Part 1.1.

2 Modulation and demodulation

An important property of Fourier transforms is that shifting a signal in the time domain is equivalent to multiplying by a complex exponential in the frequency domain. More specifically, consider a given signal x and shift $\tau$ and define the shifted signal $x_\tau$ as

$x_\tau(t) = x(t - \tau).$   (8)

The Fourier transform of x is denoted as $X = \mathcal{F}(x)$ and the Fourier transform of $x_\tau$ is denoted as $X_\tau = \mathcal{F}(x_\tau)$. We then have that the following theorem holds true.

Theorem 1 A time shift of $\tau$ units in the time domain is equivalent to multiplication by a complex exponential in the frequency domain,

$x_\tau(t) = x(t - \tau) \iff X_\tau(f) = e^{-j2\pi f \tau} X(f).$   (9)
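A sketch of the numerical verification asked for in Part 1.2 follows. It samples the Gaussian pulse, approximates the Fourier transform through relation (5) (equivalently, the Riemann sum (2)), and compares against a closed form; that closed form, $\sqrt{2\pi}\,\sigma\, e^{-2\pi^2\sigma^2 f^2}$, is what Part 1.1 asks you to derive, so treat it here as an assumption to be checked.

```matlab
% Sketch for Part 1.2: DFT-based approximation of the Fourier transform of a Gaussian.
sigma = 1; Ts = 0.01; fs = 1/Ts;
t = (-10:Ts:10-Ts)'; N = length(t);
x = exp(-t.^2/(2*sigma^2));                         % Gaussian pulse (6) with mu = 0
Xnum = Ts * fftshift(fft(ifftshift(x)));            % Riemann sum (2); ifftshift puts t = 0 first
f = (-N/2:N/2-1)'/N * fs;                           % frequencies matching the fftshift ordering
Xref = sqrt(2*pi)*sigma*exp(-2*pi^2*sigma^2*f.^2);  % candidate closed form from Part 1.1
plot(f, real(Xnum), f, Xref, '--'); xlim([-2 2]);   % the two curves should overlap
```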

Figure 2. Modulation of a bandlimited signal. The bandlimited spectrum X(f) of the signal x, supported on [-W/2, W/2], is re-centered at frequency $\gamma$ when the signal is multiplied by a complex exponential of frequency $\gamma$, yielding the spectrum $X_\gamma(f)$ supported on $[\gamma - W/2, \gamma + W/2]$.

This result has important applications, the most popular of which is its use in signal detection. This application utilizes the fact that the moduli of X and $X_\tau$ are the same, which allows the comparison of signals without worrying about the selection of the time origin.

A property that we can call the dual of the result in Theorem 1 is that multiplying a signal by a complex exponential results in a shift in the frequency domain. Specifically, for a given signal x and frequency $\gamma$, we define the modulated signal

$x_\gamma(t) = e^{j2\pi\gamma t}\, x(t).$   (10)

We write the Fourier transform of x as $X = \mathcal{F}(x)$ and the Fourier transform of $x_\gamma$ as $X_\gamma = \mathcal{F}(x_\gamma)$. We then have that the following theorem holds true.

Theorem 2 A multiplication by a complex exponential of frequency $\gamma$ in the time domain is equivalent to a shift of $\gamma$ units in the frequency domain,

$x_\gamma(t) = e^{j2\pi\gamma t}\, x(t) \iff X_\gamma(f) = X(f - \gamma).$   (11)

Proof: Write down a proof of Theorem 2. I.e., prove that if $x_\gamma(t) = e^{j2\pi\gamma t} x(t)$ we must have $X_\gamma(f) = X(f - \gamma)$.

Despite looking less interesting than the claim in Theorem 1, the result in Theorem 2 is at least of equal importance because of its application in the modulation and demodulation of bandlimited signals. To explain this statement better, we begin with the definition of a bandlimited signal, which we formally introduce next.

Definition 1 The signal x with Fourier transform $X = \mathcal{F}(x)$ is said to be bandlimited with bandwidth W if we have X(f) = 0 for all frequencies $f \notin [-W/2, W/2]$.

An illustration of the spectrum of a bandlimited signal is shown in Figure 2, where we also show the result of multiplying x by a complex exponential of frequency $\gamma$. When we do that, the spectrum is re-centered at the modulating frequency $\gamma$. Signals that are literally bandlimited are hard to find, but signals that are approximately bandlimited do exist. As an example, we consider voice recordings.

2.1 Voice as a bandlimited signal. Record 3 seconds of your voice at a sampling rate of 40kHz. Take the DFT of your voice and observe that coefficients with frequencies |f| > 4kHz are close to null. Set these coefficients to zero to create a bandlimited signal. Play your voice back and observe that the removed frequencies don't affect the quality of your voice.

2.2 Voice modulation. Take the bandlimited signal you created in Part 2.1 and modulate it with center frequency $\gamma_1$ = 5kHz.

2.3 Modulation with a cosine. The problem with modulating with a complex exponential as we did in Part 2.2 is that complex exponentials are signals with imaginary parts and, therefore, can't be generated in a real system. In a real system we have to modulate using a cosine, or a sine. Redefine then the modulated signal as

$x_\gamma(t) = \cos(2\pi\gamma t)\, x(t),$   (12)

and let $X_\gamma = \mathcal{F}(x_\gamma)$ be the respective Fourier transform. Write down an expression for $X_\gamma$ in terms of X. Take the bandlimited signal you created in Part 2.1 and modulate it with a cosine of frequency $\gamma_1$ = 5kHz. Verify that the expression you derived is correct.

2.4 The voice of your partner. Record the voice of your lab partner and repeat Part 2.1. Repeat Part 2.3 but use a cosine with frequency $\gamma_2$ = 15kHz. Sum up the respective modulated signals to create the mixed signal z.

2.5 Recover individual voices. Explain how to recover your voice and the voice of your partner from the mixed signal z. Implement the recovery and play back the individual voice pieces.
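The chain of Parts 2.1, 2.3, and 2.5 can be prototyped without a recording; the sketch below uses a random signal as a stand-in for the voice, bandlimits it by zeroing DFT coefficients, modulates with a cosine as in (12), and recovers the baseband signal with a second multiplication followed by low-pass filtering. All the specific choices (the stand-in signal, the demodulation route) are illustrative.

```matlab
% Sketch of Parts 2.1, 2.3, and 2.5 with a synthetic stand-in for the voice.
fs = 40000; T = 3; N = T*fs; t = (0:N-1)'/fs;
x = randn(N,1);
f = (0:N-1)'*fs/N; f(f >= fs/2) = f(f >= fs/2) - fs;   % frequency of each DFT bin
X = fft(x); X(abs(f) > 4000) = 0;                      % bandlimit to |f| <= 4 kHz (Part 2.1)
x = real(ifft(X));
g = 5000;                                              % modulating frequency gamma_1
xm = cos(2*pi*g*t) .* x;                               % cosine modulation (12)
Xd = fft(2*cos(2*pi*g*t) .* xm);                       % second multiplication brings a copy back to baseband
Xd(abs(f) > 4000) = 0;                                 % low-pass removes the copy near 2*gamma_1
xr = real(ifft(Xd));
disp(norm(xr - x)/norm(x))                             % small relative error
```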

3 Time management

This lab is designed to be a respite from the more intensive Lab 3. Part 1 should take 2 hours and Part 2 between 4 and 5. To solve Part 1 you need to make use of a technique called completing squares. If you have never done that, ask one of your teaching assistants right away. To solve Part 2, do remember to make use of our help. There's no reason to struggle when you can receive help.

Sampling

Alejandro Ribeiro

February 16, 2015

Signals exist in continuous time, but it is not unusual for us to process them in discrete time. When we work in discrete time we say that we are doing discrete signal processing, something that is convenient due to the relative ease and lower cost of using computers to manipulate signals. When we use discrete time representations of continuous time signals we need to implement processes to move back and forth between continuous and discrete time. The process of obtaining a discrete time signal from a continuous time signal is called sampling. The process of recovering a continuous time signal from its discrete time samples is called signal reconstruction.

Mathematically, the sampling process has an elementary description. Given a sampling time $T_s$ and a continuous time signal x with values x(t), the sampled signal $x_s$ is the one that takes values

$x_s(n) = x(n T_s).$   (1)

As per (1), the sampled signal retains the values of x(t) at regular intervals spaced by $T_s$ and discards the remaining values (see Figure 1). The process by which this is done in, say, a sound card, is a problem of circuit design. For our purposes, let us just say that (1) is a reasonable model for the transformation of a continuous time signal into a discrete time signal.

A relevant question, perhaps the most relevant question, is what information is lost when discarding all the values of x(t) except for those observed at times $n T_s$. To answer this question, we compare the spectral representations of $x_s$ and x. In fact, since $x_s$ is a discrete time signal and x is a continuous time signal, it is convenient to introduce a continuous time representation of the sampled signal, as we describe in the following section.

Figure 1. Sampling with sampling time $T_s$. Sampling the continuous time signal x to create the discrete time signal $x_s$ entails retaining the values $x_s(n) = x(n T_s)$. A relevant question is in what respect the sampled signal $x_s(n)$ differs from the original signal x(t).

Figure 2. A Dirac train with spacing $T_s$ (left). The Fourier transform of the Dirac train is another Dirac train, with spacing $f_s = 1/T_s$ (right).

1 Dirac train representation of sampled signals

A Dirac train, or Dirac comb, with spacing $T_s$ is a signal $x_c$ defined by a succession of delta functions located at positions $n T_s$ (see Figure 2),

$x_c(t) = T_s \sum_{n=-\infty}^{\infty} \delta(t - n T_s).$   (2)

A Dirac train is, in a sense, an artifice to write down a discrete time signal in continuous time. The train is formally defined to be a continuous time signal, but it becomes relevant only at the (discrete) set of times $n T_s$.

In our forthcoming discussions of sampling we use the Fourier transform of the Dirac comb. This transform can be seen to be another Dirac comb, but with spacing $f_s = 1/T_s$. I.e., if we denote the Fourier transform of $x_c$ as $X_c = \mathcal{F}(x_c)$, we have that

$X_c(f) = \sum_{k=-\infty}^{\infty} \delta(f - k f_s).$   (3)

That (3) does represent the values of the Fourier transform of $x_c$ is not difficult to show by identifying $x_c$ with the constant signal x(t) = 1 sampled with time $T_s$, but we don't show this derivation in these notes.

Figure 3. Representation of a sampled signal with a modulated Dirac train. The representation is equivalent to the one in Figure 1 but makes comparisons with the original signal x easier.

In the Dirac train representation of sampling we use the samples $x_s(n) = x(n T_s)$ to modulate the deltas of a Dirac train. Specifically, we define the signal $x_\delta$ as (see Figure 3)

$x_\delta(t) = T_s \sum_{n=-\infty}^{\infty} x(n T_s)\, \delta(t - n T_s).$   (4)

That (1) and (4) are equivalent representations of sampling follows from the simple observation that when given the value $x_s(n)$ we can determine $x_\delta(n T_s)$ and vice versa. The representation in (1) is simpler, but the representation in (4) permits comparisons with the original signal x. Indeed, the sampling representation in (4) allows us to realize that we can write $x_\delta(t)$ as the product between x(t) and the Dirac train in (2),

$x_\delta(t) = x(t) \left[ T_s \sum_{n=-\infty}^{\infty} \delta(t - n T_s) \right].$   (5)

That the expressions in (4) and (5) are equivalent follows from the simple observation that in the multiplication of the function x(t) with the shifted delta function $\delta(t - n T_s)$ only the value $x(n T_s)$ is relevant. It is therefore equivalent to simply multiply $\delta(t - n T_s)$ by $x(n T_s)$.

Straightforward though it is, rewriting (4) as (5) allows us to rapidly characterize the spectrum of the sampled signal $x_\delta(t)$. Since we know that multiplication in time is equivalent to convolution in frequency, we have that the Fourier transform $X_\delta = \mathcal{F}(x_\delta)$ can be written in terms of the Fourier transform $X = \mathcal{F}(x)$ of x and the Fourier transform of the Dirac train as

$X_\delta = X * \mathcal{F}\!\left[ T_s \sum_{n=-\infty}^{\infty} \delta(t - n T_s) \right].$   (6)

The Fourier transform of the Dirac train $T_s \sum_{n=-\infty}^{\infty} \delta(t - n T_s)$, we have seen, is given by the Dirac train in (3).

Figure 4. Spectrum $X_\delta = \mathcal{F}(x_\delta)$ of the sampled signal $x_\delta$. The spectrum of the original signal is copied and shifted to all the frequencies that are integer multiples of $f_s$. The spectrum $X_\delta$ is the sum of all these shifted copies.

Using this result in (6) and the linearity of the convolution operation, we further conclude that

$X_\delta = \sum_{k=-\infty}^{\infty} X * \delta(f - k f_s).$   (7)

A final simplification comes from observing that the convolution of X with the shifted delta function $\delta(f - k f_s)$ is just a shifting of the spectrum X so that it is re-centered at $f = k f_s$. We can therefore write

$X_\delta(f) = \sum_{k=-\infty}^{\infty} X(f - k f_s).$   (8)

The result in (8) is sufficiently important so as to deserve a summary in the form of a theorem that we formally state next.

Theorem 1 Consider a signal x with Fourier transform $X = \mathcal{F}(x)$, a sampling time $T_s$, and the corresponding sampled signal $x_\delta$ as defined in (4). The spectrum $X_\delta = \mathcal{F}(x_\delta)$ of the sampled signal $x_\delta$ is a sum of shifted versions of the original spectrum,

$X_\delta(f) = \sum_{k=-\infty}^{\infty} X(f - k f_s).$   (9)

The result in Theorem 1 is explained in terms of what we call spectrum periodization. We start from the spectrum X of the continuous time signal, which we replicate and shift to each of the frequencies that are multiples of the sampling frequency $f_s$. The spectrum $X_\delta$ of the sampled signal is given by the sum of all these shifted copies (see Figure 4).

The result of spectrum periodization provides a very clear answer to the question of what information is lost when we sample a signal at frequency $f_s$.

The answer is that whatever information is contained in frequency components X(f) outside of the set f \in [-f_s/2, f_s/2] is completely lost. Information contained at frequencies f close to the borders of this set is not completely lost but rather distorted by its mixing with the frequency components outside of the set f \in [-f_s/2, f_s/2]. We refer to this distortion phenomenon as aliasing. The periodization result of Theorem 1 points to a particularly interesting conclusion for the case of bandlimited signals, which you are asked to analyze.

1.1 Sampling of bandlimited signals. Suppose the signal x is bandlimited with bandwidth W, i.e., X(f) = 0 for all f \notin [-W/2, W/2]. In this case, sampling entails no loss of information in that it is possible to recover x(t) perfectly if only the samples x_s(n) are given. Explain why this is true and describe a method to recover the continuous time signal x from the modulated Dirac train x_d.

1.2 Avoiding aliasing. When we sample a signal that is not bandlimited, there is an unavoidable loss of the information contained in frequencies larger than f_s/2 and of the equivalent information contained in frequencies smaller than -f_s/2. However, it is possible to avoid aliasing through judicious use of a low pass filter. Explain how this is done.

1.3 Reconstruction with arbitrary pulse trains. While it is mathematically possible to reconstruct x(t) from x_d(t), it is physically implausible to generate a Dirac train because delta functions are not physical entities. We can, however, approximate \delta(t) by a narrow pulse p(t) and attempt to reconstruct x(t) from the modulated pulse train

    x_p(t) = T_s \sum_{n=-\infty}^{\infty} x(nT_s) p(t - nT_s).   (10)

As long as the pulse p(t - nT_s) is sufficiently tall and narrow, x_p(t) is not too far from x_d(t), and the reconstruction method described in 1.1 should yield acceptable results with x_p(t) used in lieu of x_d(t). Work in the frequency domain to explain what distortion is introduced by the use of x_p(t) in lieu of x_d(t). In the course of this analysis you will realize that there is a condition on p(t) that guarantees no distortion, i.e., perfect reconstruction of x(t) without using a Dirac train. Derive this condition and propose a particular pulse with this property. Do notice that the pulse you are proposing is not that narrow after all.
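For Part 1.1, the recovery of a bandlimited signal from its samples can be checked numerically. The sketch below uses explicit sinc interpolation, which is one way of realizing the ideal low pass filter; the test signal, the sampling rate, and the truncation of the sum are all illustrative choices rather than the answer the lab asks you to derive.

```matlab
% Hedged sketch: recover a bandlimited signal from its samples by sinc
% interpolation. Signal, rates and truncation below are illustrative choices.
si = @(u) (u==0) + (u~=0).*sin(pi*u)./(pi*u + (u==0));  % sinc without toolboxes
fs = 40; Ts = 1/fs;              % sampling frequency and sampling time
n  = (-200:200)';                % sample indices (truncated interpolation sum)
xs = si(2*5*n*Ts);               % samples of a signal with bandwidth W = 10 Hz < fs
t  = (-2:1e-3:2)';               % fine grid for the reconstruction
xr = zeros(size(t));
for k = 1:length(n)              % x(t) = sum_n x(nTs) sinc((t - nTs)/Ts)
    xr = xr + xs(k)*si((t - n(k)*Ts)/Ts);
end
plot(t, xr, t, si(2*5*t), '--'); legend('reconstructed', 'original');
```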

Figure 5. Subsampling. When subsampling a discrete time signal we retain a subset of the values of the given discrete time signal. In the figure, the sampling time of the given signal x is T_s and the sampling time of the subsampled signal x_s is \tau = 3T_s. We therefore keep one out of every three values of x to form x_s.

2 Subsampling

Most often, sampling is understood as a technique to generate a discrete time signal from a continuous time signal. However, we can also use sampling to generate a smaller number of samples from an already sampled signal. Consider then a discrete time signal x with sampling time T_s and values x(n). We want to generate a (sub)sampled signal x_s with sampling time \tau and values x_s(m) given by

    x_s(m) = x(m\tau/T_s).   (11)

For the expression in (11) to make sense we need the subsampling time \tau to be an integer multiple of T_s. Under that assumption, making x_s(m) = x(m\tau/T_s) means that we retain one value of x(n) out of every \tau/T_s values. E.g., if \tau/T_s = 2, we keep every other sample of x into x_s. If \tau/T_s = 3, we make x_s(0) = x(0), x_s(1) = x(3), and, in general, x_s(m) = x(3m), so that we keep all the values in x that correspond to time indexes that are multiples of 3 -- see Figure 5.

As in the case of sampling, we want to understand what information, if any, is lost when we subsample x into x_s. And, also as in the case of sampling, the difficulty in answering this question is that the supports of the signals x and x_s are different. In (1), the continuous time signal x is a function of the continuous time parameter t and the sampled signal x_s is a function of the discrete time parameter n. In (11), the original signal x is defined for times nT_s, whereas the subsampled signal is defined for times m\tau.

Figure 6. Delta train representation of subsampling. The difference with the subsampled signal in Figure 5 is that here we pad with zeros so that the support of this signal is the same as the support of the original signal.

We can overcome this problem by introducing the analogue of the modulated Dirac train in (4). To do so, consider a train of discrete time delta functions centered at the discrete time indexes m\tau/T_s and define the delta train representation of the subsampled signal as

    x_d(n) = \sum_{m=-\infty}^{\infty} x(m\tau/T_s) \delta(n - m\tau/T_s).   (12)

A schematic representation of (12) is available in Figure 6. The difference between x_d and x_s is that x_d is padded with zeros so that its support is the same as the support of the original signal x. Do notice that it is pointless to utilize x_d for signal processing when we can use the equivalent signal x_s. However, the delta train representation x_d is more convenient for analysis. In particular, it allows us to repeat the steps in (5)-(8) and conclude that a result equivalent to the periodization statement of Theorem 1 holds.

2.1 Subsampling theorem. Derive the equivalent of Theorem 1 relating the spectra of the discrete time signal x and its subsampled version x_d. To solve this part you need to compute the DTFT of the delta train \sum_{m=-\infty}^{\infty} \delta(n - m\tau/T_s). This DTFT is a Dirac train with spikes that are spaced by the subsampling frequency \nu = 1/\tau. If you have problems with this derivation, which you will most likely have, talk with one of your teaching assistants. If you don't want to talk with them, ponder the fact that the train \sum_{m=-\infty}^{\infty} \delta(n - m\tau/T_s) is akin to a constant function when we use the sampling time \tau.

2.2 Subsampling function. Create a function that takes as input a signal x, a sampling time T_s, and a subsampling time \tau and returns the subsampled signal x_s and its delta train representation x_d. The latter signal would not be returned in practice, but we will use it here to perform some analyses. Test your function with a Gaussian pulse of standard deviation \sigma = 100 ms and mean \mu = 1 s. Set the original sampling frequency to f_s = 40 kHz, the subsampling frequency to \nu = 4 kHz, and the total observation period to T = 2 s.

2.3 Spectrum periodization. Take the DFT of the signals x and x_d of Part 2.2 and check that the result of Part 2.1 holds. Keep all parameters unchanged and vary the standard deviation of the Gaussian pulse to observe cases with and without aliasing.

2.4 Prefiltering. The function you wrote in Part 2.2 results in aliasing when the spectrum of the signal x has a bandwidth W that exceeds \nu. We can avoid aliasing by implementing a low pass filter to eliminate frequencies above \nu/2 before subsampling. Modify the function of Part 2.2 to add this feature.

2.5 Spectrum periodization with prefiltering. Repeat Part 2.3 using the function in Part 2.4. For the cases without aliasing the result should be the same. Observe and comment on the differences for the cases in which you had observed aliasing.

2.6 Reconstruction function. Create a function that takes as input a subsampled signal x_s, a sampling time T_s, and a subsampling time \tau and returns the signal x. Depending on context this process may also be called interpolation, because we interpolate the values between subsequent samples of x_s, or upsampling, because we increase the sampling frequency from \nu to f_s. In implementing this function you can assume that the signal x_s is bandlimited and was generated without aliasing. Test your function for a Gaussian pulse. Choose parameters of Part 2.3 that did not result in aliasing. Then choose parameters for which you observed aliasing and check that, indeed, the reconstructed pulse is not a faithful representation of the original pulse. For this latter experiment utilize both the subsampling function in Part 2.2 and the subsampling function in Part 2.4.
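A minimal sketch of the subsampling routine of Part 2.2 could look as follows; the function name and signature are our own choices, not something prescribed by the lab.

```matlab
% Hedged sketch for Part 2.2: subsample x and also return the zero-padded
% delta train representation of (12). Name and signature are our own choices.
function [x_s, x_d] = subsample(x, Ts, tau)
    step = round(tau/Ts);      % tau is assumed to be an integer multiple of Ts
    x_s  = x(1:step:end);      % keep one value out of every tau/Ts, cf. (11)
    x_d  = zeros(size(x));     % delta train representation, cf. (12)
    x_d(1:step:end) = x_s;     % same values, padded with zeros
end
```

It could be exercised, e.g., with the Gaussian pulse of Part 2.2 generated on a grid with f_s = 40 kHz and T = 2 s, using \tau = 0.25 ms so that \tau/T_s = 10.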

3 Time management

This lab returns to the mean and is more involved than Lab 4. Part 1 includes results that we will derive in class, so it shouldn't be too onerous to finish. The teaching assistants will work on these problems during Tuesday's meeting. It should take an hour or so more to wrap it up. Part 2.1 is the odd man out, as it asks you to do a somewhat involved derivation. Work on it during Wednesday and, if you can't solve it before the end of the day, go talk with one of your teaching assistants. A 2 hour investment should do. We will work on the remaining parts in the Thursday session. Completing the rest should take about 5 hours, 1 hour for each of the parts.

Voice recognition

Alejandro Ribeiro

February 23, 2015

The goal of this lab is to use your accumulated knowledge of signal and information processing to design a system for the recognition of a spoken digit. Figure 1 shows four different realizations of the DFT of the signal recorded when I spoke the word "one". These four DFTs are different from each other because there are variations in the sounds that I produce, but they also have discernible patterns. E.g., in all four DFTs you can see two well defined frequency spikes close to frequencies 0.5 kHz and 0.7 kHz. That these patterns are specific to the word "one" can be verified by the four different realizations of the DFT of the signal recorded when I spoke the word "two" that are shown in Figure 2. The two characteristic spikes of the DFTs in Figure 1 are absent from this second set of DFTs, which, instead, seem to all have a high frequency component a little below 0.4 kHz. Another feature that arises upon comparison is that the spectra associated with the word "two" have their energy more evenly spread out than the energy of the word "one". Regardless of the specific features, the general conclusion is that the four DFTs in Figure 1 are more like each other than they are like the DFTs in Figure 2, which are also more like each other than they are like the DFTs in Figure 1.

We could spend days talking about the physical meaning of these differences. Large frequency components are generally associated with vowels, which produce high energy sounds at definite frequencies. Consonants generate sounds with less power because the vocal cords are not involved in their generation. Consonant sounds also tend to be more spread out in frequency because they are not associated with a well defined oscillating tone. The differences we see between the DFTs of the spoken words "one" and "two" arise because their vowel sounds are different -- thus, frequency spikes are observed at different locations -- and because the consonant sound in "two" is longer than the consonant sound in "one" and, therefore, the energy in "two" is more spread out.

Figure 1. Six different observations of the Fourier transform of the signal recorded when speaking the word "one". These transforms are different from each other but they are more like each other than they are like the ones in Figure 2. (Frequency axis in kHz, sampling frequency set to f_s = 8 kHz, signal duration T = 2 s.)

However, our interest today is not in analyzing these differences but in using them to detect a spoken digit. To that end, start by recording N waveforms y_i for the spoken word "one" and K waveforms z_i for the spoken word "two". The respective DFTs are denoted as Y_i = F(y_i) and Z_i = F(z_i). The sets of all DFTs, Y := {Y_i, i = 1, ..., N} and Z := {Z_i, i = 1, ..., K}, are called training sets. We assume that the signals in the training sets have been normalized to have unit energy.

1 Acquire and process training sets. Acquire and store N = 10 recordings for each of the two digits "one" and "two". Compute and normalize the respective DFTs.

Having acquired and processed the training sets Y and Z, we acquire a signal x that results from the utterance of either the word "one" or the word "two", which we want to (correctly) identify.

Figure 2. Six different observations of the Fourier transform of the signal recorded when speaking the word "two". These transforms are different from each other but they are more like each other than they are like the ones in Figure 1.

To do so we compare the DFT X = F(x) -- which we also assume has been normalized to have unit energy -- with the DFTs Y_i and Z_i that were stored in the training sets. There are different choices to make this comparison. We will try two of them in this lab.

2 Comparison with average spectrum. For each of the training sets define the average spectra

    \bar{Y} = (1/N) \sum_{i=1}^{N} Y_i,    \bar{Z} = (1/K) \sum_{i=1}^{K} Z_i.   (1)

Further define the inner product p(X, X') between the spectra X and X' as the square root of the inner product between their absolute values,

    p(X, X') = \Big[ |X|^T |X'| \Big]^{1/2} = \Big[ \sum_{n=1}^{N} |X(n)| \, |X'(n)| \Big]^{1/2}.   (2)
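A minimal Matlab rendering of (1) and (2) could be the following; the cell arrays Y{i} and Z{i} holding the normalized training DFTs are our own convention, and the square root follows our reading of (2) (it does not change which digit attains the larger score).

```matlab
% Hedged sketch of the average spectra in (1) and the comparison in (2). The
% cell arrays Y{i}, Z{i} holding the normalized training DFTs are assumptions.
p    = @(X, Xp) sqrt(abs(X(:))' * abs(Xp(:)));  % spectral inner product, cf. (2)
Ybar = mean(cat(2, Y{:}), 2);                   % average spectrum for "one", cf. (1)
Zbar = mean(cat(2, Z{:}), 2);                   % average spectrum for "two", cf. (1)
```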

Compare the inner product p(X, \bar{Y}) between the unknown spectrum X and the average spectrum \bar{Y} with the inner product p(X, \bar{Z}) between the unknown spectrum X and the average spectrum \bar{Z}. Assign the digit to the spectrum with the largest inner product. Estimate your classification accuracy. Explain why we are using the absolute values of the spectra.

3 Nearest neighbor comparison. Compute the inner product p(X, Y_i) between the unknown spectrum X and each of the spectra Y_i associated with the word "one". Do the same for the inner product p(X, Z_i) between the unknown spectrum X and each of the spectra Z_i associated with the word "two". Assign the digit to the spectrum with the largest inner product. Estimate your classification accuracy.

4 Larger number of digits. Try developing a system to identify all 10 digits. We will give you 2 extra points for the effort and 3 more extra points if you succeed.
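Both decision rules above reduce to a handful of lines once the scores are available; the sketch below reuses the names p, Ybar, Zbar, Y and Z from the earlier sketch and is only one possible organization of the code.

```matlab
% Hedged sketch of the two decision rules. X is the normalized DFT of the test
% recording; p, Ybar, Zbar, Y and Z follow the earlier sketch.
if p(X, Ybar) > p(X, Zbar), digit_avg = 1; else, digit_avg = 2; end   % average spectrum rule
sY = cellfun(@(Yi) p(X, Yi), Y);        % scores against every "one" example
sZ = cellfun(@(Zi) p(X, Zi), Z);        % scores against every "two" example
if max(sY) > max(sZ), digit_nn = 1; else, digit_nn = 2; end           % nearest neighbor rule
```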

1 Time management

This lab marks a shift with respect to previous labs. You have acquired, or at least I am assuming that you have acquired, all the fundamental concepts on the spectral analysis of one dimensional signals. This lab is just a test of the application of the concepts you have learnt. The pieces are not lengthy. You should be able to solve Part 1 in less than 1 hour and spend two or three hours on each of the other two parts. You can use the extra time to start preparing for the midterm.

Voice recognition with a linear time invariant system

Alejandro Ribeiro

March 2, 2015

To recognize spoken words we can compare the spectra of prerecorded signals associated with known words with the spectrum observed at classification time. More specifically, say that we want to discern between the spoken word "one" and the spoken word "two". We do so by recording N waveforms y_i for the spoken word "one" and N waveforms z_i for the spoken word "two". The respective signals are normalized to unit energy and the DFTs Y_i = F(y_i/||y_i||) and Z_i = F(z_i/||z_i||) are computed. The sets of all DFTs are stored to construct the training sets Y := {Y_i, i = 1, ..., N} and Z := {Z_i, i = 1, ..., N}.

With the training sets acquired we proceed to observe a signal x and compare the DFT X = F(x) with the DFTs in the training sets Y and Z. To that end we define the energy p(X, X') of the crossproduct between the spectra X and X' as

    p(X, X') = \Big[ \sum_{k=0}^{N-1} \big( |X(k)| \, |X'(k)| \big)^2 \Big]^{1/2}.   (1)

The cross product energy p(X, X') can be interpreted as the energy of the linear filtering of the signal x through a filter with frequency response X'. If we filter the signal x with that filter, the resulting signal y has a spectrum Y = F(y) given by

    Y = X X',   (2)

whose energy ||y||^2 is, indeed, the square of (1). This filtering representation can be used to propose a time domain implementation of voice recognition that does not involve the computation of DFTs.
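The equivalence between (1) and the energy of the filtered signal can be checked numerically. The sketch below evaluates the score both in the frequency domain and by circular convolution in time (cconv is in the Signal Processing Toolbox), with Xp standing for a stored training spectrum; all names are our own.

```matlab
% Hedged sketch of the filtering interpretation in (1)-(2). Xp is a stored
% training spectrum (computed here with Matlab's fft, i.e. without the 1/sqrt(N)
% of the notes, which only rescales the score by a constant). Names are ours.
xn = x/norm(x);                          % normalize the observed recording
N  = length(xn);
X  = fft(xn);                            % spectrum of the observation
pf = sqrt(sum((abs(X).*abs(Xp)).^2));    % frequency domain evaluation of (1)
h  = ifft(Xp);                           % impulse response of the filter with response Xp
y  = cconv(xn, h, N);                    % circular convolution, so that F(y) = X.*Xp, cf. (2)
pt = sqrt(N)*norm(y);                    % time domain evaluation; Parseval gives pt = pf
```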

1 Comparison with average spectrum. For each of the training sets define the average spectra

    \bar{Y} = (1/N) \sum_{i=1}^{N} Y_i,    \bar{Z} = (1/N) \sum_{i=1}^{N} Z_i.   (3)

Interpret \bar{Y} and \bar{Z} as the frequency responses of respective filters. Determine the individual impulse responses and use them to compute p(x, \bar{Y}) and p(x, \bar{Z}) without determining the respective DFTs. Assign the spoken waveform to the digit with the largest energy.

2 Online operation. An advantage of the implementation in Part 1, as opposed to the computation and comparison of the DFTs, is that it can be run online, i.e., as a system that runs continuously and detects digits as they are spoken. Explain how this can be done.

3 Online operation implementation. Your teaching assistants can explain the use of Matlab's real time toolbox to implement the online classifier of Part 2.

1 Time management

This lab is intended to be very short as you are supposed to be studying for your midterm. One or two hours should be sufficient.

Two-Dimensional Signal Processing and Image De-noising

Alec Koppel, Mark Eisen, Alejandro Ribeiro

March 20, 2015

Before, we considered (one-dimensional) discrete signals of the form x : [0, N-1] \to C, where N is the duration, having elements x(n) for n \in [0, N-1]. Extend this definition to two dimensions by considering the set of ordered pairs x : [0, M-1] \times [0, N-1] \to C,

    x = { x(m, n) : m \in [0, M-1], n \in [0, N-1] }.   (1)

We can think of x as the set of values along the integer lattice in the two-dimensional plane, hence as elements of a discrete spatial domain. We associate these signals with the space of complex matrices C^{M \times N}. The inner product of two signals x and y, both of duration M \times N, in two dimensions is a natural extension of the one-dimensional case, and is defined as

    <x, y> := \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x(m, n) y^*(m, n).   (2)

As before, we define the energy of a two-dimensional signal x as <x, x> = ||x||^2. We say two signals are orthogonal in two dimensions if <x, y> = 0 whenever x \neq y, and if in addition both vectors have unit energy, i.e., ||x|| = ||y|| = 1, they are said to be orthonormal. The discrete two-dimensional impulse \delta(m, n) is defined as

    \delta(m, n) = 1 if m = n = 0,    \delta(m, n) = 0 otherwise.   (3)

The discrete complex exponential e_{kl,MN}(m, n) in two dimensions at frequencies k and l is the signal

    e_{kl,MN}(m, n) = (1/\sqrt{MN}) e^{j2\pi km/M} e^{j2\pi ln/N} = (1/\sqrt{MN}) e^{j2\pi(km/M + ln/N)}.   (4)

It is a straightforward computation to check that e_{kl,MN} is orthonormal to e_{pq,MN} for (k, l) \neq (p, q). The two-dimensional discrete Fourier transform (DFT) of x is the signal X : Z^2 \to C whose elements X(k, l) for all k, l \in Z are defined as

    X(k, l) := (1/\sqrt{MN}) \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x(m, n) e^{-j2\pi km/M} e^{-j2\pi ln/N}.   (5)

The arguments k and l of the signal X(k, l) are called the vertical and horizontal frequencies of the DFT and the value X(k, l) the corresponding frequency component of the given signal x. As in the one-dimensional case, when X is the DFT of x we write X = F(x). Recall that for a complex exponential, discrete frequency k is equivalent to (real) frequency f_k = (k/N) f_s, where N is the total number of samples and f_s the sampling frequency.

An alternative form of the 2d DFT follows from realizing that the sum in (5) defines a two-dimensional inner product between x and the complex exponential e_{kl,MN}(m, n) with elements e_{kl,MN}(m, n) = (1/\sqrt{MN}) e^{j2\pi(km/M + ln/N)}. We can then write

    X(k, l) := (1/\sqrt{MN}) \sum_{m=0}^{M-1} \Big( \sum_{n=0}^{N-1} x(m, n) e^{-j2\pi ln/N} \Big) e^{-j2\pi km/M}
             = (1/\sqrt{M}) \sum_{m=0}^{M-1} <x(m, \cdot), e_{lN}> e^{-j2\pi km/M}
             = <<x(m, \cdot), e_{lN}>, e_{kM}>
             = <x, e_{kl,MN}>,   (6)

which allows us to view X(k, l) as a measure of how much the signal x resembles an oscillation of frequency k in the vertical direction and l in the horizontal direction. Note that the inner products in the two middle equalities above are one-dimensional (scalar) inner products, whereas the last one is the two-dimensional inner product of (2).

Because the complex exponential is (M, N)-periodic, the 2d DFT values X(k, l) and X(k + M, l + N) are equal, i.e.,

    X(k + M, l + N) = (1/\sqrt{MN}) \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x(m, n) e^{-j2\pi(k+M)m/M} e^{-j2\pi(l+N)n/N}
                    = (1/\sqrt{MN}) \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x(m, n) e^{-j2\pi km/M} e^{-j2\pi ln/N}
                    = X(k, l).   (7)

The relationship in (7) means that the DFT is periodic in both directions, with period M in the vertical frequency and period N in the horizontal frequency, and while it is defined for all (k, l) \in Z^2, only M \times N values are different. As in the 1d case, we work with the canonical set of frequencies (k, l) \in [0, M-1] \times [0, N-1] for computational purposes, and with the set of frequencies (k, l) \in [-M/2, M/2] \times [-N/2, N/2] for interpretation purposes. This latter canonical set contains (M+1)(N+1) frequencies instead of M \times N. As before, we may use periodicity to shift the values of the 2d DFT and convert from one canonical set to the other:

    X(-k, -l) = X(M - k, N - l)   (8)

for all (k, l) \in [-M/2, M/2] \times [-N/2, N/2]. The operation in (8) is a chop and shift, with which we may recover the DFT values on the canonical set [-M/2, M/2] \times [-N/2, N/2] from those on the canonical set [0, M-1] \times [0, N-1]. Simply chop the frequencies in the box [M/2, M-1] \times [N/2, N-1] and shift them to the front of the set, as in the one-dimensional case. For the purposes of this homework, when you are asked to report a DFT, you should report the DFT for the canonical set [-M/2, M/2] \times [-N/2, N/2].

Further define the 2d inverse Discrete Fourier Transform (2d iDFT) F^{-1}(X) as the two-dimensional signal

    x(m, n) := (1/\sqrt{MN}) \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} X(k, l) e^{j2\pi km/M} e^{j2\pi ln/N}
             = (1/\sqrt{MN}) \sum_{k=-M/2+1}^{M/2} \sum_{l=-N/2+1}^{N/2} X(k, l) e^{j2\pi km/M} e^{j2\pi ln/N}.   (9)

The expression in (9) means that any arbitrary signal x in two dimensions may be represented as a sum of oscillations, as in the one-dimensional case. Moreover, conjugacy means we can represent the sum in (9) with only half as many terms. This means that we can effectively represent signals using only particular DFT coefficients under some conditions.

This observation is the foundation of the image de-noising and compression methods which are the focus of this and next week's lab assignments. For the subsequent questions, you may assume that M = N so that signals are of dimension N^2.

1 Two Dimensional Signal Processing

1.1 Inner products and orthogonality. Write down a function that takes as input two-dimensional signals x and y and outputs their inner product. Each signal is now defined by N^2 complex numbers.

1.2 Discrete Complex Exponentials. Write down a function that takes as input the frequencies k, l and the signal duration N and returns three N x N matrices, the first of which contains the values of e_{kl,N} and the latter two of which contain its real and imaginary parts.

1.3 Unit Energy 2-D Square Pulse. The two dimensional square pulse is defined as

    u_L(m, n) = 1/L if 0 <= m, n < L,    u_L(m, n) = 0 otherwise.   (10)

Write down a function that takes as input the signal duration T, the pulse duration T_0, and the sampling frequency f_s and outputs the square pulse as an N x N array, along with the signal duration N. Plot the two dimensional square pulse for T = 32 s, f_s = 8 Hz, and T_0 = 0.5 s. (Use MATLAB's imshow function.)

1.4 Two-Dimensional Gaussian Signals. The Gaussian pulse centered at \mu, when t and s are uncorrelated, is defined as

    x(t, s) = e^{-[(t - \mu)^2 + (s - \mu)^2]/(2\sigma^2)}.   (11)

Write down a function which takes as input \mu, \sigma and T_s, and outputs the Gaussian pulse in two dimensions. Recall that, as in the one-dimensional case, we sample t \in [-3\sigma, 3\sigma] in increments of size T_s. Plot the two dimensional Gaussian for \mu = 2 and \sigma = 1. Note how these signals look when projected onto a single dimension.
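A minimal sketch of question 1.4, together with a preview of the 2d DFT of question 1.5 computed with Matlab's fft2, is shown below. Note that fft2 omits the 1/\sqrt{MN} normalization of (5) and orders frequencies on [0, M-1] x [0, N-1], so we rescale and use fftshift to move to the centered canonical set; the sampling step is an illustrative choice.

```matlab
% Hedged sketch: 2d Gaussian pulse (question 1.4) and its 2d DFT via fft2
% (question 1.5). The sampling step Tspl is an illustrative choice; fft2 is
% rescaled to match the 1/sqrt(MN) normalization of (5).
mu = 0; sigma = 2; Tspl = 0.05;
t  = -3*sigma:Tspl:3*sigma;
[T1, T2] = meshgrid(t, t);
x  = exp(-((T1 - mu).^2 + (T2 - mu).^2)/(2*sigma^2));   % cf. (11)
[M, N] = size(x);
X  = fftshift(fft2(x))/sqrt(M*N);        % 2d DFT on the centered canonical set
imagesc(abs(X)); colorbar;               % magnitude of the frequency components
```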

1.5 DFT in two dimensions. Modify your function for the one dimensional DFT from Lab 2 so that it now computes the DFT of a two-dimensional signal. This computation may be expressed in terms of the inner product (written for question 1.1) between the signal and the two-dimensional discrete complex exponential (question 1.2). Compute the DFT of a 2d Gaussian pulse with \mu = 0 and \sigma = 2. Plot your results in the two-dimensional plane.

1.6 iDFT in Two Dimensions. Write down a function which takes as input a two-dimensional signal of duration N^2 and computes its two-dimensional iDFT [cf. (9)]. Exploit conjugacy in your computation so that you only need to take in (N/2)^2 DFT coefficients. Compute the iDFT associated with the DFT of the Gaussian pulse you computed in the previous question. Plot this signal and compare it with the original Gaussian pulse.

2 Image Filtering and de-noising

We may de-noise corrupted images using spatial information about the signal. To do this, we consider the two-dimensional convolution of signals x and y, denoted by x * y and defined as

    [x * y](m, n) := \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} x(k, l) y(m - k, n - l)
                   = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} x(k, l) y(m - k, n - l).   (12)

The standard way to perform image de-noising is to convolve the image with a Gaussian pulse. To implement this technique, start by defining the Gaussian pulse associated with a 2-D signal x as

    G_\sigma(x) = (1/(4\pi\sigma^2)) e^{-||x||^2/(4\sigma^2)},   (13)

where ||x||^2 = <x, x>. With a two-dimensional signal (image) x, the spatial de-noising technique we consider is

    x_{de-noised} = G_\sigma * x.   (14)

2.1 Spatial De-noising. Implement the technique in (14) without using the 2-D DFT by convolving the image with the Gaussian pulse directly. Hint: MATLAB has a built-in function to perform the convolution in two dimensions. Try out your implementation in the spatial domain for \sigma = 1, 4, 16 on sample images A and B. Do you observe significant de-noising performance differences when varying \sigma?
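One possible realization of (14), with a truncated Gaussian kernel normalized to unit sum so that the overall image brightness is preserved, is sketched below; the kernel support and the variable name img are our own choices.

```matlab
% Hedged sketch for question 2.1: spatial de-noising by direct 2d convolution.
% Kernel truncation at 3*sigma and the unit-sum normalization are our choices.
sigma = 4;
r = ceil(3*sigma);
[U, V] = meshgrid(-r:r, -r:r);
G = exp(-(U.^2 + V.^2)/(4*sigma^2));          % Gaussian kernel, cf. (13)
G = G/sum(G(:));                              % normalize to preserve brightness
x_denoised = conv2(double(img), G, 'same');   % img is the noisy sample image
imshow(x_denoised, []);
```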

The Discrete Cosine Transform and JPEG

Alec Koppel, Mark Eisen, Alejandro Ribeiro

March 20, 2015

For image processing applications, it is useful to consider the two-dimensional Discrete Cosine Transform (2d DCT) instead of the 2d DFT due to its superior empirical performance in signal compression and reconstruction tasks. We first introduce the two-dimensional discrete cosine C_{kl,MN}(m, n) of frequencies k and l, defined as

    C_{kl,MN}(m, n) = cos[ k\pi (2m + 1) / (2M) ] cos[ l\pi (2n + 1) / (2N) ].   (1)

Then the two-dimensional DCT of a signal x is given by substituting C_{kl,MN} into the expression for the two-dimensional DFT, yielding

    X_C(k, l) := (1/(MN)) \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x(m, n) cos[ k\pi (2m + 1) / (2M) ] cos[ l\pi (2n + 1) / (2N) ] = <x, C_{kl,MN}>.   (2)

Note that again this may be computed as an inner product in two dimensions, just like the 2d DFT. Crucial to the theory of image reconstruction and compression is the 2d inverse Discrete Cosine Transform (2d iDCT), which is the signal x_C defined as

    x_C(m, n) := \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} c_k c_l X_C(k, l) cos[ k\pi (2m + 1) / (2M) ] cos[ l\pi (2n + 1) / (2N) ],   (3)

where c_k = 1 for k = 0 and c_k = 2 for k = 1, ..., N-1. Analogous to the 2d DFT, we note that the sum in (3) allows us to represent an arbitrary two-dimensional signal as a sum of cosines, and hence we may ask how many cosines are necessary to represent the signal well in terms of reconstruction error. We explore this question in the first part of this lab. Henceforth you may assume that M = N so that signals are of dimension N^2.
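A brute-force evaluation of (2), useful as a correctness check for question 1.1 below, is sketched here. It is O(N^4) and only meant for small patches; Matlab's dct2 (Image Processing Toolbox) computes a DCT with a different normalization, so results need not match it exactly.

```matlab
% Hedged sketch: the 2d DCT of (2) evaluated directly from its definition.
% Brute force (O(N^4)); meant only as a check on small patches.
function XC = dct2d(x)
    [M, N] = size(x);
    XC = zeros(M, N);
    [mm, nn] = ndgrid(0:M-1, 0:N-1);
    for k = 0:M-1
        for l = 0:N-1
            C = cos(k*pi*(2*mm + 1)/(2*M)).*cos(l*pi*(2*nn + 1)/(2*N));  % cf. (1)
            XC(k+1, l+1) = sum(x(:).*C(:))/(M*N);                        % cf. (2)
        end
    end
end
```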

1 Image Compression

1.1 DCT in Two Dimensions. Write down a function which takes as input a two-dimensional signal of duration N^2 and computes its two-dimensional DCT defined in (2).

1.2 Image Compression. When the signal dimension N^2 is very large it is difficult to represent the signal well across its entire domain using the same DFT or DCT coefficients. This is because the computation of the inverse in (3) uses the same DFT or DCT coefficients across the entire domain. In Lab 3 Part 2 we designed an audio compression scheme by partitioning the signal and computing the DFT of each piece, so that our DFT coefficients only needed to represent the signal locally over a small domain. We will implement a two-dimensional analogue here. Hence, write down a function that takes in a signal (image) of size N^2, partitions it into patches of size 8 x 8, and for each patch stores the K^2 largest DFT coefficients and their associated frequencies. Your partitioning scheme should resemble the depiction below. Write another function that executes this procedure for the two-dimensional DCT. Try both of these functions out on sample image A for K^2 = 4, 16, 32. Make sure to keep track of each patch's frequencies associated with the dominant DCT coefficients.

1.3 Quantization. A rudimentary version of the JPEG compression scheme for images includes partitioning the image into patches, performing the two-dimensional DCT on each patch, and then rounding (or quantizing) the associated DCT coefficients. We describe this procedure below in more detail:

1. Perform the DCT on each 8 x 8 block.

2. X_{ij}(k, l) = X_C(x_{ij}), where x_{ij} is the (i, j)-th block of the image.

3. We then quantize the DCT coefficients:

    \hat{X}_{ij}(k, l) = round[ X_{ij}(k, l) / Q(k, l) ] Q(k, l).   (4)

Q(k, l) is a quantization coefficient used to control how much the (k, l)-th frequency is quantized. Since human vision is not sensitive to these rounding errors, this is where the compression takes place. That is, a smaller set of pixel values requires fewer bits to represent in a computer. The standard JPEG quantization matrix that you should apply to each patch is based upon the way your eye observes luminance, and is given as

    Q_L = [ 16  11  10  16  24  40  51  61
            12  12  14  19  26  58  60  55
            14  13  16  24  40  57  69  56
            14  17  22  29  51  87  80  62
            18  22  37  56  68 109 103  77
            24  35  55  64  81 104 113  92
            49  64  78  87 103 121 120 101
            72  92  95  98 112 100 103  99 ].   (5)

Write a function that executes the above procedure, making use of your code from the previous questions.

1.4 Image Reconstruction. Write down a function that takes in the compression scheme of question 1.2, computes the iDCT of each patch, and then stitches these reconstructed patches together to form the global reconstructed signal. Write another function that executes this procedure for the quantized DCT coefficients from question 1.3. Run these functions on the results of questions 1.2 and 1.3 (associated with sample image A). Plot the reconstruction error r_K versus K for your code from question 1.3. What do you observe? Are you able to discern what is in the original image? Play around with the quantization matrix. Do higher or lower values of Q(k, l) yield better reconstruction performance? How much can you alter the entries Q(k, l) and still obtain a compression for which the original image is discernible to your eye? Try out all of these compression schemes on sample image B as well.
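A compact sketch of the patch-wise quantization of question 1.3 is given below; it reuses the dct2d sketch from earlier and the matrix Q_L of (5), and assumes the image side length is a multiple of 8.

```matlab
% Hedged sketch of the JPEG-style procedure of question 1.3. dct2d is the
% definition-based transform sketched earlier; QL holds the matrix in (5);
% the image side length is assumed to be a multiple of 8.
[M, N] = size(img);
Xq = zeros(M, N);
for i = 1:8:M
    for j = 1:8:N
        blk = double(img(i:i+7, j:j+7));       % the (i,j)-th 8 x 8 patch
        Xij = dct2d(blk);                      % 2d DCT of the patch, cf. (2)
        Xq(i:i+7, j:j+7) = round(Xij./QL).*QL; % quantize and de-quantize, cf. (4)
    end
end
```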

Principal Components Analysis

Santiago Paternain, Aryan Mokhtari and Alejandro Ribeiro

March 30, 2015

At this point we have already seen how the Discrete Fourier Transform and the Discrete Cosine Transform can be written in terms of a matrix product. The main idea is to multiply the signal by a specific Hermitian matrix. Let us recall the definition of a Hermitian matrix.

Definition 1 (Hermitian Matrix)  Let M \in C^{N x N} be a matrix with complex entries. Denote by M^* the conjugate of M, i.e., for each entry i, j we have that (M^*)_{ij} = M_{ij}^*. Then we say M is Hermitian if

    (M^*)^T M = I,   (1)

where I is the N x N identity matrix and (.)^T denotes the transpose of a matrix.

For the Discrete Fourier transform we can define the matrix

    F = (1/\sqrt{N}) [ e_{0N}^H ; e_{1N}^H ; ... ; e_{(N-1)N}^H ]
      = (1/\sqrt{N}) [ 1  1                    ...  1
                       1  e^{-j2\pi(1)(1)/N}   ...  e^{-j2\pi(1)(N-1)/N}
                       :  :                         :
                       1  e^{-j2\pi(N-1)(1)/N} ...  e^{-j2\pi(N-1)(N-1)/N} ].   (2)

Then if we consider a signal x(n) for n = 0, ..., N-1 as a vector

    x = [ x(0), x(1), ..., x(N-1) ]^T,   (3)

we can write the DFT as the product between F and the vector x, i.e.,

    Fx = (1/\sqrt{N}) [ 1  1                    ...  1
                        1  e^{-j2\pi(1)(1)/N}   ...  e^{-j2\pi(1)(N-1)/N}
                        :  :                         :
                        1  e^{-j2\pi(N-1)(1)/N} ...  e^{-j2\pi(N-1)(N-1)/N} ] [ x(0), x(1), ..., x(N-1) ]^T
       = [ (1/\sqrt{N}) \sum_{n=0}^{N-1} x(n) e^{-j 2\pi (0) n / N} ;
           (1/\sqrt{N}) \sum_{n=0}^{N-1} x(n) e^{-j 2\pi (1) n / N} ;
           ... ;
           (1/\sqrt{N}) \sum_{n=0}^{N-1} x(n) e^{-j 2\pi (N-1) n / N} ]
       = [ X(0), X(1), ..., X(N-1) ]^T.   (4)

In the PCA decomposition we define a new Hermitian matrix based on the eigenvectors of the covariance matrix of a dataset. Before defining the covariance matrix we need to define the mean signal.

Definition 2  Let x_1, x_2, ..., x_M be M different points in a dataset. Then we define the mean signal as

    \bar{x} = (1/M) \sum_{m=1}^{M} x_m.   (5)

We next define the empirical covariance matrix.

Definition 3 (Covariance matrix)  Let x_1, x_2, ..., x_M be M different points in a dataset. Then we define the covariance matrix as

    \Sigma = (1/M) \sum_{m=1}^{M} (x_m - \bar{x})(x_m - \bar{x})^T.   (6)

Let v_1, v_2, ..., v_N be the eigenvectors of the covariance matrix. Then, as in the Discrete Fourier transform, define the Hermitian matrix

    P = [ v_1, v_2, ..., v_N ].   (7)

Then the PCA decomposition can be written as a product between the matrix P and the difference between the signal and the mean signal, i.e.,

    X_{PCA} = P (x - \bar{x}).   (8)

Just as with the DFT and the DCT we can define the inverse operation to the PCA decomposition, which recovers the signal from X_{PCA}. For this we need to compute (P^*)^T. The inverse transformation is then given by

    x = (P^*)^T X_{PCA} + \bar{x}.   (9)
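A minimal numerical rendering of the eigenvector computation and of a projection in the spirit of (8)-(9) is sketched below. The covariance matrix S and the mean face xbar are the ones provided with the lab, x is one vectorized image, and the use of eig with a descending sort of the eigenvalues is our own choice; for the full-size covariance, eigs(S, K) would be a lighter alternative.

```matlab
% Hedged sketch for questions 1.1-1.2, in the spirit of (8)-(9): project a face
% onto the K leading eigenfaces of the provided covariance S and invert the
% projection. xbar is the provided mean face; x is a vectorized image.
[V, D] = eig(S);                          % eigenvectors (columns) and eigenvalues
[~, order] = sort(diag(D), 'descend');
V = V(:, order);                          % eigenfaces sorted by decreasing eigenvalue
K = 25;                                   % number of principal components kept
X_pca = V(:, 1:K)'*(x - xbar);            % projection onto the space of eigenfaces
x_rec = V(:, 1:K)*X_pca + xbar;           % reconstruction from K principal components
```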

In this lab we are going to be doing PCA decompositions of faces, and therefore we need to deal with two-dimensional signals. A way of doing this is to vectorize the matrices representing the images. Let x be an N x N matrix and let x_k be the k-th column of the matrix. Then we can represent the matrix x as a concatenation of column vectors

    x = [ x_1, x_2, ..., x_N ].   (10)

The one dimensional representation of the signal x can then be obtained by stacking the columns of x, that is,

    x = [ x_1 ; x_2 ; ... ; x_N ].   (11)

With this idea we can treat two dimensional signals as if they were one dimensional, and the explanation of PCA done before carries over. In the next parts we are going to use the PCA decomposition for image compression. We can compress an image by keeping the eigenvectors associated with the largest eigenvalues of the covariance matrix. Next week we will be using these ideas to build a face recognition system. In both labs we will be working with the face dataset of AT&T. You can download the dataset at the following link: ac.uk/research/dtg/attarchive:pub/data/att_faces.zip. The image set has 10 different images of 40 different persons (cf. Figure 1); each one of the images is a pixel image. Since we haven't yet covered the concept of covariance matrix, we are providing the covariance matrix for the set, which was computed using (6), and the mean face of the data set (cf. Figure 2b), which was computed using (5).

1 PCA: Decomposition

1.1 Decomposition in principal components. Build a function that, given the covariance matrix and the number K of desired principal components, returns the K eigenvectors of the covariance matrix with the largest eigenvalues. Check that the matrix containing the eigenvectors is Hermitian.

1.2 Decomposition of a face in the space of eigenfaces. Build a function that receives as inputs an image containing a face, the mean face, and the eigenfaces, and returns the projection onto the space of eigenfaces.

Figure 1: Dataset.

Figure 2: (a) An example of the dataset. (b) The mean face of the dataset.

2 Reconstruction

2.1 Reconstruction of a face using K principal components. Build a function that receives as inputs the mean face, K eigenfaces, and the K principal components, and outputs a reconstructed face. Test this function for any of the faces of the data set for 5, 25 and 25 principal components.

2.2 Reconstruction error. For any given number of principal components considered, we can compute the reconstruction error as the norm of the difference between the original face and the reconstructed face. Compute this error ranging the number of eigenfaces used from 0 to the size of the training set. How many principal components are needed to obtain a 60% accuracy in the reconstruction?
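A minimal sketch of the reconstruction-error curve of question 2.2 is given below; V, xbar and x follow the earlier sketch, and Kmax stands for the size of the training set.

```matlab
% Hedged sketch for question 2.2: reconstruction error as a function of the
% number of principal components. V (eigenfaces as columns, sorted by
% decreasing eigenvalue), xbar and x follow the earlier sketch.
err = zeros(Kmax + 1, 1);
for K = 0:Kmax
    Vk    = V(:, 1:K);                   % K leading eigenfaces
    x_rec = Vk*(Vk'*(x - xbar)) + xbar;  % reconstruction with K principal components
    err(K + 1) = norm(x - x_rec);        % reconstruction error
end
plot(0:Kmax, err); xlabel('number of principal components'); ylabel('reconstruction error');
```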


More information

Chapter 7: Trigonometric Equations and Identities

Chapter 7: Trigonometric Equations and Identities Chapter 7: Trigonometric Equations and Identities In the last two chapters we have used basic definitions and relationships to simplify trigonometric expressions and equations. In this chapter we will

More information

Midterm 1 Review. Distance = (x 1 x 0 ) 2 + (y 1 y 0 ) 2.

Midterm 1 Review. Distance = (x 1 x 0 ) 2 + (y 1 y 0 ) 2. Midterm 1 Review Comments about the midterm The midterm will consist of five questions and will test on material from the first seven lectures the material given below. No calculus either single variable

More information

MATH 308 COURSE SUMMARY

MATH 308 COURSE SUMMARY MATH 308 COURSE SUMMARY Approximately a third of the exam cover the material from the first two midterms, that is, chapter 6 and the first six sections of chapter 7. The rest of the exam will cover the

More information

Chapter Five Notes N P U2C5

Chapter Five Notes N P U2C5 Chapter Five Notes N P UC5 Name Period Section 5.: Linear and Quadratic Functions with Modeling In every math class you have had since algebra you have worked with equations. Most of those equations have

More information

The Inductive Proof Template

The Inductive Proof Template CS103 Handout 24 Winter 2016 February 5, 2016 Guide to Inductive Proofs Induction gives a new way to prove results about natural numbers and discrete structures like games, puzzles, and graphs. All of

More information

Predicting AGI: What can we say when we know so little?

Predicting AGI: What can we say when we know so little? Predicting AGI: What can we say when we know so little? Fallenstein, Benja Mennen, Alex December 2, 2013 (Working Paper) 1 Time to taxi Our situation now looks fairly similar to our situation 20 years

More information

Lecture for Week 2 (Secs. 1.3 and ) Functions and Limits

Lecture for Week 2 (Secs. 1.3 and ) Functions and Limits Lecture for Week 2 (Secs. 1.3 and 2.2 2.3) Functions and Limits 1 First let s review what a function is. (See Sec. 1 of Review and Preview.) The best way to think of a function is as an imaginary machine,

More information

Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore

Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore Lecture No. # 33 Probabilistic methods in earthquake engineering-2 So, we have

More information

MITOCW watch?v=rf5sefhttwo

MITOCW watch?v=rf5sefhttwo MITOCW watch?v=rf5sefhttwo The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To

More information

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

University of Connecticut Lecture Notes for ME5507 Fall 2014 Engineering Analysis I Part III: Fourier Analysis

University of Connecticut Lecture Notes for ME5507 Fall 2014 Engineering Analysis I Part III: Fourier Analysis University of Connecticut Lecture Notes for ME557 Fall 24 Engineering Analysis I Part III: Fourier Analysis Xu Chen Assistant Professor United Technologies Engineering Build, Rm. 382 Department of Mechanical

More information

Signals and Systems. Lecture 14 DR TANIA STATHAKI READER (ASSOCIATE PROFESSOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON

Signals and Systems. Lecture 14 DR TANIA STATHAKI READER (ASSOCIATE PROFESSOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON Signals and Systems Lecture 14 DR TAIA STATHAKI READER (ASSOCIATE PROFESSOR) I SIGAL PROCESSIG IMPERIAL COLLEGE LODO Introduction. Time sampling theorem resume. We wish to perform spectral analysis using

More information

MITOCW MITRES_6-007S11lec09_300k.mp4

MITOCW MITRES_6-007S11lec09_300k.mp4 MITOCW MITRES_6-007S11lec09_300k.mp4 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for

More information

Why Complexity is Different

Why Complexity is Different Why Complexity is Different Yaneer Bar-Yam (Dated: March 21, 2017) One of the hardest things to explain is why complex systems are actually different from simple systems. The problem is rooted in a set

More information

DCSP-2: Fourier Transform

DCSP-2: Fourier Transform DCSP-2: Fourier Transform Jianfeng Feng Department of Computer Science Warwick Univ., UK Jianfeng.feng@warwick.ac.uk http://www.dcs.warwick.ac.uk/~feng/dcsp.html Data transmission Channel characteristics,

More information

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Commons Attribution-NonCommercial-ShareAlike.0 license. 201,

More information

1 Introduction & Objective

1 Introduction & Objective Signal Processing First Lab 13: Numerical Evaluation of Fourier Series Pre-Lab and Warm-Up: You should read at least the Pre-Lab and Warm-up sections of this lab assignment and go over all exercises in

More information

General Recipe for Constant-Coefficient Equations

General Recipe for Constant-Coefficient Equations General Recipe for Constant-Coefficient Equations We want to look at problems like y (6) + 10y (5) + 39y (4) + 76y + 78y + 36y = (x + 2)e 3x + xe x cos x + 2x + 5e x. This example is actually more complicated

More information

Lectures on Spectra of Discrete-Time Signals

Lectures on Spectra of Discrete-Time Signals Lectures on Spectra of Discrete-Time Signals Principal questions to be addressed: What, in a general sense, is the "spectrum" of a discrete-time signal? How does one assess the spectrum of a discrete-time

More information

Function Approximation

Function Approximation 1 Function Approximation This is page i Printer: Opaque this 1.1 Introduction In this chapter we discuss approximating functional forms. Both in econometric and in numerical problems, the need for an approximating

More information

Fourier Analysis of Signals

Fourier Analysis of Signals Chapter 2 Fourier Analysis of Signals As we have seen in the last chapter, music signals are generally complex sound mixtures that consist of a multitude of different sound components. Because of this

More information

CS1800: Sequences & Sums. Professor Kevin Gold

CS1800: Sequences & Sums. Professor Kevin Gold CS1800: Sequences & Sums Professor Kevin Gold Moving Toward Analysis of Algorithms Today s tools help in the analysis of algorithms. We ll cover tools for deciding what equation best fits a sequence of

More information

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 3

Designing Information Devices and Systems I Spring 2019 Lecture Notes Note 3 EECS 6A Designing Information Devices and Systems I Spring 209 Lecture Notes Note 3 3. Linear Dependence Recall the simple tomography example from Note, in which we tried to determine the composition of

More information

Designing Information Devices and Systems I Fall 2018 Homework 3

Designing Information Devices and Systems I Fall 2018 Homework 3 Last Updated: 28-9-5 :8 EECS 6A Designing Information Devices and Systems I Fall 28 Homework 3 This homework is due September 4, 28, at 23:59. Self-grades are due September 8, 28, at 23:59. Submission

More information

Math 31 Lesson Plan. Day 2: Sets; Binary Operations. Elizabeth Gillaspy. September 23, 2011

Math 31 Lesson Plan. Day 2: Sets; Binary Operations. Elizabeth Gillaspy. September 23, 2011 Math 31 Lesson Plan Day 2: Sets; Binary Operations Elizabeth Gillaspy September 23, 2011 Supplies needed: 30 worksheets. Scratch paper? Sign in sheet Goals for myself: Tell them what you re going to tell

More information

BME 50500: Image and Signal Processing in Biomedicine. Lecture 2: Discrete Fourier Transform CCNY

BME 50500: Image and Signal Processing in Biomedicine. Lecture 2: Discrete Fourier Transform CCNY 1 Lucas Parra, CCNY BME 50500: Image and Signal Processing in Biomedicine Lecture 2: Discrete Fourier Transform Lucas C. Parra Biomedical Engineering Department CCNY http://bme.ccny.cuny.edu/faculty/parra/teaching/signal-and-image/

More information