Linear Systems Theory Handout

David Heeger, Stanford University
Matteo Carandini, ETH/University of Zurich

Characterizing the complete input-output properties of a system by exhaustive measurement is usually impossible. When a system qualifies as a linear system, however, it is possible to use the responses to a small set of inputs to predict the response to any possible input. This can save the scientist an enormous amount of work, and it makes it possible to characterize the system completely.

These notes explain the following ideas related to linear systems theory:

- The challenge of characterizing a complex system
- Simple linear systems
- Homogeneity
- Superposition
- Shift-invariance
- Decomposing a signal into a set of shifted and scaled impulses
- The impulse response function
- Use of sinusoids in analyzing shift-invariant linear systems
- Decomposing stimuli into sinusoids via the Fourier Series
- Characterizing a shift-invariant system using sinusoids

Linear systems theory can be applied to many systems. For example, it can be used to study the response of a hi-fi system to sounds, or the response of the eye to color images. Here we describe the theory in general and in simple terms. The examples we give concern the responses of the retina to images. To simplify matters, we imagine that the retina is a thin slit looking at a single horizontal line of a monitor. This way the stimuli are one-dimensional (they live on a line). In real life they are two-dimensional, and that is a bit harder to visualize.

Systems, Inputs, and Responses

Step one is to understand how to represent possible inputs to systems. In our case, imagine a

graph of the visual stimulus. Each point in space (abscissa) corresponds to a light intensity (ordinate). If a single pixel is on and the rest are off, the signal is called an impulse; it looks like a single upward blip on the graph. More complex images look like more complex graphs on this kind of plot. This sort of graph offers a general way to describe all of the possible stimuli.

One possible way to characterize the response of the eye to images would be to build a look-up table: a table that shows the exact neural response for every possible visual stimulus. Obviously, it would take an infinite amount of time to construct such a table, because the number of possible images is unlimited. Instead, we must find some way of making a finite number of measurements that allows us to infer how the system will respond to other stimuli that we have not yet measured. We can do this only for certain kinds of systems with certain properties. If we have a good theory about the kind of system we are studying, we can save a lot of time and energy by using the appropriate theory about the system's responsiveness. Linear systems theory is a good time-saving theory for linear systems, which obey certain rules. Not all systems are linear, but many important ones are.

Linear Systems

To see whether a system is linear, we need to test whether it obeys certain rules that all linear systems obey. The two basic tests of linearity are homogeneity and superposition. Systems that satisfy both homogeneity and superposition are linear.

Homogeneity

[Figure: Homogeneity. An input and its output, followed by the same input scaled and the correspondingly scaled output.]

Homogeneity: If we increase the strength of a simple input to a linear system, say we double it, then we predict that the output will also be doubled. For example, if the intensity of an image is doubled, the eye should respond twice as much if it is a linear system. This property is called homogeneity.
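The homogeneity test can be carried out numerically. The sketch below implements a toy linear system as a small blurring filter; the kernel is an arbitrary illustrative choice, not a model of the eye.

```python
import numpy as np

# Toy linear system: local averaging (a blur). The kernel values are an
# arbitrary illustrative choice, not a claim about the retina.
def system(stimulus):
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(stimulus, kernel, mode="same")

impulse = np.zeros(11)
impulse[5] = 1.0                 # a single pixel turned on

# Homogeneity: doubling the input doubles the output...
assert np.allclose(system(2.0 * impulse), 2.0 * system(impulse))
# ...and scaling by any factor scales the response by that same factor.
assert np.allclose(system(7.5 * impulse), 7.5 * system(impulse))
```

Any system built from convolution passes this test exactly; a system with, say, a squaring nonlinearity in it would fail.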

Superposition

[Figure: Superposition. Two inputs and their outputs; the sum of the inputs yields the sum of the outputs.]

Additivity: Suppose we present a complex stimulus S1, such as the face of a person, and we measure the electrical responses of the nerve fibers coming from the retina. Next, we present a second stimulus S2 that is a little different: a different person's face. The second stimulus also generates a set of responses, which we measure and write down. Then we present the sum of the two stimuli, S1 + S2: we present both faces together and see what happens. If the system is linear, then the measured response of each nerve fiber will be just the sum of its responses to each of the two stimuli presented separately.

Shift-invariance: Suppose that we stimulate the left part of your eye with an impulse (a pixel turned on) and we measure the electrical response. Then we stimulate it again with a similar impulse on the right side, and again we measure the response. If the retina is symmetrical (which is largely true, with the exception of the blind spot), then we should expect the response to the second impulse to be the same as the response to the first impulse. The only difference between them is that the second impulse occurred in a different location; that is, it stimulated different cells, shifted in retinal space. When the responses to an identical stimulus presented in different locations are the same, except for the corresponding shift in space, we have a special kind of linear system called a shift-invariant linear system. Just as not all systems are linear, not all linear systems are shift-invariant.

Why impulses are special: Homogeneity, superposition, and shift-invariance may at first sound a bit abstract, but they are very useful. They suggest that the system's response to an impulse can be the key measurement to make. The trick is to conceive of the complex stimuli we encounter (such as a person's face) as combinations of pixels.
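Both properties can be checked in a few lines. The sketch below assumes a hypothetical shift-invariant linear system, again implemented as convolution with a small kernel; the random "faces" are arbitrary stand-ins for real stimuli.

```python
import numpy as np

# Hypothetical shift-invariant linear system: convolution with a fixed
# kernel. An illustrative stand-in for the retina, not a model of it.
def system(stimulus):
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(stimulus, kernel, mode="same")

rng = np.random.default_rng(0)
s1 = rng.random(16)              # stand-in for the first face, S1
s2 = rng.random(16)              # stand-in for the second face, S2

# Superposition: the response to S1 + S2 equals the sum of the responses.
assert np.allclose(system(s1 + s2), system(s1) + system(s2))

# Shift-invariance: the same impulse presented 5 pixels to the right
# produces the same response, shifted 5 pixels to the right. (Impulses
# are kept away from the array edges so truncation plays no role.)
left = np.zeros(16)
left[4] = 1.0
right = np.roll(left, 5)
assert np.allclose(system(right), np.roll(system(left), 5))
```

Superposition holds for any convolution because convolution is linear; shift-invariance holds because the same kernel is applied at every position.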
We can approximate any complex stimulus as if it were simply the sum of a number of impulses that are scaled copies of one another and shifted in space. Indeed, we can show any image we want on a computer

monitor, which lights up different pixels. If we want a better approximation, we need a better computer, one that can show more pixels.

For shift-invariant linear systems, we can measure the system's response to an impulse, and we will then know how to predict the response to any stimulus (any combination of impulses) through the principle of superposition. To characterize a shift-invariant linear system, then, we need to measure only one thing: the way the system responds to an impulse of a particular intensity. This response is called the impulse response function of the system.

The problem of characterizing a complex system has now become simpler. For a shift-invariant linear system, there is only a single impulse response function to measure. Once we have measured this function, we can predict how the system will respond to any other possible stimulus.

[Figure: Approximating a signal with impulses. Panels show an impulse, the impulse response, a sinusoid, the response to the sinusoid, a higher-frequency sinusoid, and its response.]

The way we use the impulse response function is illustrated in the figure above. We conceive of the input stimulus, in this case a sinusoid, as if it were the sum of a set of impulses. We know the responses we would get if each impulse were presented separately (i.e., scaled and shifted copies of the impulse response). We simply add together all of the (scaled and shifted) impulse responses to predict how the system will respond to the complete stimulus.

Sinusoidal stimuli

Sinusoidal stimuli have a special relationship to shift-invariant linear systems. A sinusoid is a regular, repeating curve that oscillates around a mean level. The sinusoid has a value of zero at

time zero. The cosinusoid is a shifted version of the sinusoid; it has a value of one at time zero. The sine wave repeats itself regularly. The distance from one peak of the wave to the next peak is called the wavelength or period of the sinusoid, and it is generally indicated by the Greek letter lambda. The inverse of wavelength is frequency: for a sound, the number of peaks in the stimulus that arrive per second at the ear. The longer the wavelength, the lower the frequency. Apart from frequency, sinusoids also have various amplitudes, which represent how high they get at the peak of the wave and how low they get at the trough. Thus, we can describe a sine wave completely by its frequency and by its amplitude. When we write the mathematical expression of a sine wave, the two mathematical variables that correspond to the amplitude and the frequency are A and f:

A sin(2 pi f t)

The height of the peaks increases as the amplitude, A, increases. The spacing between the peaks becomes smaller as the frequency, f, increases.

The response of shift-invariant systems to sine waves: Just as we can express any stimulus as the sum of a series of shifted and scaled impulses, so too we can express any periodic stimulus (a stimulus that repeats itself over time) as the sum of a series of (shifted and scaled) sinusoids at different frequencies. This is called the Fourier Series expansion of the stimulus. The equation describing this expansion works as follows. Suppose that s(t) is a periodic stimulus with duration T. Then we can always express s(t) as a sum of sinusoids:

s(t) = A0 + A1 sin(2 pi f1 t + p1) + A2 sin(2 pi f2 t + p2) + A3 sin(2 pi f3 t + p3) + ...

(Do not memorize this equation!) The frequencies of the sinusoids are very simple: f1 = 1/T (one single cycle over the stimulus duration), f2 = 2/T (two cycles over the stimulus duration), f3 = 3/T, etc.
You can go either way: if you know the coefficients (the A's and p's), you can reconstruct the original stimulus s(t); if you know the stimulus, you can compute the coefficients by a method called the Fourier Transform (a way of decomposing a complex stimulus into its component sinusoids).
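In practice these coefficients are computed with the discrete Fourier transform. The sketch below uses NumPy's FFT on an arbitrary example stimulus (a mean level plus one sinusoid): it recovers the A's and p's and then rebuilds s(t) from them, going "both ways" as described above.

```python
import numpy as np

N = 64
t = np.arange(N) / N                              # one period of the stimulus
s = 1.0 + 2.0 * np.sin(2 * np.pi * 3 * t + 0.5)   # example periodic stimulus

# Stimulus -> coefficients (the Fourier Transform direction).
coeffs = np.fft.rfft(s) / N
A0 = coeffs[0].real                 # mean level
A = 2 * np.abs(coeffs[1:])          # amplitudes A1, A2, ...
p = np.angle(coeffs[1:]) + np.pi / 2  # convert cosine phases to sine phases

# Coefficients -> stimulus: s(t) = A0 + sum over n of An sin(2 pi n t + pn).
recon = A0 + sum(A[n] * np.sin(2 * np.pi * (n + 1) * t + p[n])
                 for n in range(len(A)))
assert np.allclose(recon, s)
```

For this example only one amplitude is appreciably nonzero (A3 = 2, with phase 0.5), which matches the sinusoid the stimulus was built from.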

This decomposition is important because if we know the response of the system to sinusoids at many different frequencies, then we can use the same kind of trick we used with impulses to predict the response to any periodic stimulus. First, we measure the system's response to sinusoids of all different frequencies. Next, we take our input stimulus (a complex sound) and use the Fourier Transform to compute the values of the coefficients in the Fourier Series expansion. At this point the stimulus has been broken down into the sum of its component sinusoids. Finally, we can predict the system's response to the (complex) stimulus simply by adding the responses to all the component sinusoids.

Why bother with sinusoids when we were doing just fine with impulses? The reason is that sinusoids have a very special relationship to shift-invariant linear systems. When we use a sinusoidal stimulus as input to a shift-invariant linear system, the system's response is always a (shifted and scaled) copy of the input, at the same frequency as the input. That is, when the input is sin(2 pi f t), the output is always of the form

A sin(2 pi f t + p)

Here, p determines the amount of shift and A determines the amount of scaling. Thus, measuring the response of a shift-invariant linear system to a sinusoid entails measuring only two numbers: the shift and the scale. This makes the job of measuring the responses to sinusoids at many different frequencies quite practical.
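This sinusoid-in, sinusoid-out property can be demonstrated directly. The sketch below pushes a sine wave through the same kind of toy shift-invariant system used earlier (here implemented as circular convolution, so the identity is exact with no edge effects) and checks that the output is just the input scaled by A and shifted by p.

```python
import numpy as np

N = 64
t = np.arange(N) / N
kernel = np.zeros(N)
kernel[:3] = [0.25, 0.5, 0.25]        # illustrative impulse response

def system(stimulus):
    # Circular convolution with the kernel, computed via the FFT.
    return np.fft.irfft(np.fft.rfft(stimulus) * np.fft.rfft(kernel), N)

f = 3                                 # input frequency, in cycles per period
out = system(np.sin(2 * np.pi * f * t))

# The scale A and shift p come from the system's response at frequency f.
H = np.fft.rfft(kernel)[f]
A, p = np.abs(H), np.angle(H)

# Sinusoid in, sinusoid out: same frequency, scaled by A, shifted by p.
assert np.allclose(out, A * np.sin(2 * np.pi * f * t + p))
```

The pair (A, p), tabulated over all frequencies, is exactly the "plots of shift and scale" described in the next paragraph.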

Often, then, when scientists characterize the response of a shift-invariant linear system, they will not tell you the impulse response. Rather, they will give you plots of the values of the shift and scale for each of the possible input frequencies. This representation of how the shift-invariant linear system behaves is equivalent to providing the impulse response function. We can use these numbers to compute the response to any stimulus. This is the main point of all this stuff: a simple, fast, economical way to measure the responsiveness of complex systems. If you know the coefficients of response for sine waves at all possible frequencies, then you can determine how the system will respond to any possible periodic stimulus.

Part of this material is copyright 1998, Department of Psychology, Stanford University. Modified 1999 by Matteo Carandini, ETH/University of Zurich.