Introduction to Linear Systems

© David J. Fleet, 1998
David Fleet

For an operator T, input I, and response R = T[I], T satisfies:

• homogeneity:   T[aI] = aT[I] for all a ∈ C
• additivity:    T[I1 + I2] = T[I1] + T[I2]
• superposition: T[aI1 + bI2] = aT[I1] + bT[I2] for all a, b ∈ C

T is linear iff it satisfies superposition.

We'll consider 1D signals for now, typically in one of three forms:

• continuous:  I(x) for x ∈ R
• discrete:    I[n] for n ∈ Z, and often 0 ≤ n ≤ N−1
• vector form: ~I = (I[0], ..., I[N−1])^T

We will, where convenient, work with discrete signals. If an operator T is linear and R = T[I], then there exists a matrix A such that

    ~R = A~I

That is, the response of a linear operator is given by matrix multiplication. The response at a particular sample, R[n], is the inner product of ~I with the n-th row of A. In the continuous domain, by comparison, the response is given by an integral equation of the form

    R(x) = ∫ A(x, ξ) I(ξ) dξ

In what follows, however, we are going to concentrate mainly on discrete signals and matrices as a consequence.
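As a small numerical check (not part of the original notes), the sketch below verifies superposition for an arbitrary matrix operator, and that the response at a sample n is the inner product of the input with the n-th row of A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # an arbitrary linear operator
I1 = rng.standard_normal(4)
I2 = rng.standard_normal(4)
a, b = 2.0, -3.0

# Superposition: T[a*I1 + b*I2] == a*T[I1] + b*T[I2]
lhs = A @ (a * I1 + b * I2)
rhs = a * (A @ I1) + b * (A @ I2)
assert np.allclose(lhs, rhs)

# The response at sample n is the inner product of the input
# with the n-th row of A.
n = 2
assert np.isclose((A @ I1)[n], A[n] @ I1)
```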

Shift-Invariance and Toeplitz Matrices

If T is a shift-invariant operator, then for all m ∈ Z,

    R[n] = T[I[n]]  iff  R[n−m] = T[I[n−m]]

In words, the operator performs the same operation at every position in the signal: if we shift the input signal, the response is shifted by the same amount. There is no preferred origin.

A linear, shift-invariant operator is equivalent to multiplication by a Toeplitz matrix A:

• Each row of A is equal to the previous row, but shifted right by one, so the elements of A are constant along each diagonal. You can also see from this that each column is a shifted version of every other column! You might find it useful to prove to yourself that shift-invariance implies a Toeplitz matrix with this structure.

• E.g., suppose we want to compute a weighted average of the input at each point using that point and its two neighbours, with weights 0.25, 0.5, and 0.25 respectively. Then A has the form

    A = (1/4) [ ... 1 2 1 0 0 ...
                ... 0 1 2 1 0 ...
                ... 0 0 1 2 1 ... ]

• Often we'll deal with local filters; i.e., the pixels that contribute to the response at a position are nearby. This produces a banded Toeplitz matrix, like the averaging filter above. The width of the band of nonzero entries in a row is called the support of the operator.
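A minimal NumPy sketch (not from the notes) of the banded Toeplitz matrix for the (1, 2, 1)/4 average, on a length-8 signal; the boundary rows are truncated, a point taken up on the next page:

```python
import numpy as np

# Banded Toeplitz matrix for the (1/4)(1, 2, 1) weighted average.
N = 8
w = np.array([1, 2, 1]) / 4
A = np.zeros((N, N))
for n in range(N):
    for k, offset in enumerate((-1, 0, 1)):
        if 0 <= n + offset < N:
            A[n, n + offset] = w[k]

# Each interior row is the previous row shifted right by one (Toeplitz).
assert np.allclose(A[2, 1:], A[1, :-1])

I = np.arange(N, dtype=float)
R = A @ I
# On a linear ramp, the (1,2,1)/4 average reproduces interior samples exactly.
assert np.allclose(R[1:-1], I[1:-1])
```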

Shift-Invariance and Toeplitz Matrices (cont.)

Boundary Issues (failure of shift-invariance?): With finite discrete signals we cannot apply the same operation everywhere, because there are no samples beyond the ends of the signal. Reasonable ways to handle this:

1. Assume the input is padded with zeros beyond the endpoints, out to infinity. Then we can talk about shift-invariance. But in practice, we only have to pad with zeros out to the support of the linear operator: if the operator support is M samples, the length of the output vector will be longer than the input by M−1.

2. If we pad the input as above, but then truncate the output vector so it has the same length as the input, as is often done, then the transform has the form

    (1/4) [ 2 1 0 ...
            1 2 1 0 ...
              ... 0 1 2 1
                ... 0 1 2 ]

This case definitely violates shift-invariance. But for signals that are much longer than the support of the operator, we're often not too worried about the results at the boundaries.

3. Assume the signal is periodic and that shifts are cyclic. For a local operator with a banded matrix, the assumption of cyclic shifts introduces nonzero entries in the upper-right and lower-left corners of the matrix. For example:

    (1/4) [ 2 1 0 ... 1
            1 2 1 0 ...
              ... 0 1 2 1
            1 ... 0 1 2 ]
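The three boundary treatments can be compared numerically. This sketch (not from the notes) uses NumPy's `np.convolve` for the zero-padded cases and an explicit modular-index sum for the cyclic case:

```python
import numpy as np

h = np.array([1, 2, 1]) / 4          # support M = 3, origin at the centre tap
I = np.array([4.0, 8.0, 6.0, 2.0])   # N = 4

# 1. Zero padding: the full convolution is longer than the input by M - 1.
full = np.convolve(I, h, mode="full")
assert len(full) == len(I) + len(h) - 1

# 2. Pad with zeros, then truncate the output to the input length.
same = np.convolve(I, h, mode="same")

# 3. Periodic signal, cyclic shifts: indices wrap around modulo N.
N = len(I)
h_cyc = np.zeros(N)
h_cyc[[0, 1, -1]] = h[1], h[2], h[0]          # centre tap at index 0
circ = np.array([sum(I[p] * h_cyc[(n - p) % N] for p in range(N))
                 for n in range(N)])

# The treatments agree away from the boundaries...
assert np.allclose(same[1:-1], circ[1:-1])
# ...but differ at the endpoints, where shift-invariance breaks down.
assert not np.allclose(same, circ)
```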

Convolution and Impulse Response

One can also characterize a linear operator by its impulse response.

• Kronecker delta function (discrete):

    δ[n] = 1 if n = 0, and 0 otherwise

• Dirac delta function (continuous):

    δ(x) = 0 for all x ≠ 0, and ∫ δ(x) f(x) dx = f(0) for sufficiently smooth f(x)

The impulse response of a linear shift-invariant system is its response to an impulse.

• In the discrete case, multiplying A by δ[n−m] amounts to extracting the m-th column from A:

    ~h = A~e_m,   ~e_m = (0, ..., 0, 1, 0, ..., 0)^T

with the 1 in position m. Here h[n−m] is referred to as the impulse response. Note that if we handle boundaries by padding with zeros and truncating the result, it would be wise to make the origin somewhere in the middle of the vector so that we don't get a truncated impulse response!

• If we apply A to a shifted version of the impulse, we get another column, which is just a shifted version of the impulse response.

• Therefore, given the impulse response, you can construct the entire matrix! (i.e., it contains all the relevant information to define the operator)
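The column-extraction argument is easy to verify numerically. A sketch (not from the notes), using the cyclic (1, 2, 1)/4 operator so shifts wrap cleanly:

```python
import numpy as np

# Circulant matrix for the cyclic (1,2,1)/4 averaging operator.
N = 6
w = {-1: 0.25, 0: 0.5, 1: 0.25}
A = np.zeros((N, N))
for n in range(N):
    for off, val in w.items():
        A[n, (n + off) % N] = val

# Applying A to the impulse e_m extracts the m-th column of A.
m = 3
e_m = np.zeros(N)
e_m[m] = 1.0
h_shifted = A @ e_m
assert np.allclose(h_shifted, A[:, m])

# A shifted impulse yields a shifted copy of the same impulse response.
e_shift = np.roll(e_m, 1)
assert np.allclose(A @ e_shift, np.roll(h_shifted, 1))
```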

Convolution (cont.)

Remarks:

1. Another way to develop the idea of the impulse response is to express the input signal as a weighted sum of shifted impulses. For example, assume that the signal value at position p is w_p. Then we can write I[n] as

    I[n] = Σ_p w_p δ[n−p]

where the sum runs over all p. Given T, along with superposition and shift-invariance, we find

    T[I[n]] = T[ Σ_p w_p δ[n−p] ]
            = Σ_p w_p T[δ[n−p]]
            = Σ_p w_p h[n−p]

Notice how the impulse response shows up again. This equation says that the operator's output is equal to a weighted sum of shifted impulse responses.

2. We can also rewrite this operation in a slightly different form. Because the value of I at p is equal to w_p, and because R[n] = T[I[n]], we can rewrite this last equation as

    R[n] = Σ_p I[p] h[n−p]

This is the most common formulation of a linear shift-invariant filter, and this operation is referred to as the convolution of I and h. It's sufficiently important that we have a special operator symbol for it, namely ∗:

    (I ∗ h)[n] = Σ_p I[p] h[n−p]

With continuous signals h(x) and I(x), convolution is written as

    (I ∗ h)(x) = ∫ I(ξ) h(x−ξ) dξ
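The convolution sum translates almost directly into code. A minimal sketch (not from the notes) of the zero-padded discrete case, checked against NumPy's built-in `np.convolve`:

```python
import numpy as np

def convolve_direct(I, h):
    """R[n] = sum_p I[p] h[n-p], with zero padding (full-length output)."""
    N, M = len(I), len(h)
    R = np.zeros(N + M - 1)
    for n in range(N + M - 1):
        for p in range(N):
            if 0 <= n - p < M:          # h is zero outside its support
                R[n] += I[p] * h[n - p]
    return R

I = np.array([1.0, 3.0, 2.0, 4.0])
h = np.array([0.25, 0.5, 0.25])
assert np.allclose(convolve_direct(I, h), np.convolve(I, h))
```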

3. It can be shown that the convolution operator is

• commutative: I ∗ h = h ∗ I (for periodic boundary treatment). Not all matrix operations commute, but this one does.

• associative: (h1 ∗ h2) ∗ h3 = h1 ∗ (h2 ∗ h3). This is true of all matrix multiplication.

• distributive over addition: (h1 + h2) ∗ h3 = h1 ∗ h3 + h2 ∗ h3. This is true of all matrix multiplication.

4. Often the impulse response is relatively limited in its support (its number of nonzero values). Let's say that h[m] is only nonzero for −M/2 ≤ m ≤ M/2. In that case, it is convenient to rewrite the convolution equation in yet another way:

    (I ∗ h)[n] = Σ_{p=−M/2}^{M/2} I[n+p] h[−p]

In words, this equation describes the following: for each position n, center the impulse response at that position, flip the impulse response, and then take its inner product with the signal. This describes a much more efficient way to implement the filter than by matrix multiplication!

5. There are a variety of ways of finding an inverse to a convolution operator. One uses the Fourier transform. The other, more expensive but intuitively obvious, method is to create the effective Toeplitz matrix and invert it. One can show that the inverse of a Toeplitz matrix (with periodic boundary treatment) is also Toeplitz, which shows that the inverse of a linear shift-invariant operator, if it exists, is also linear and shift-invariant.
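Both the algebraic properties and the inverse claim can be spot-checked numerically. A sketch (not from the notes; the odd length N = 5 is chosen so the (1, 2, 1)/4 kernel's circulant matrix is invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
h1, h2, h3 = (rng.standard_normal(4) for _ in range(3))

# Commutative, associative, distributive (zero-padded convolution):
assert np.allclose(np.convolve(h1, h2), np.convolve(h2, h1))
assert np.allclose(np.convolve(np.convolve(h1, h2), h3),
                   np.convolve(h1, np.convolve(h2, h3)))
assert np.allclose(np.convolve(h1 + h2, h3),
                   np.convolve(h1, h3) + np.convolve(h2, h3))

# Inverting a convolution via its circulant (periodic Toeplitz) matrix.
N = 5
h_cyc = np.zeros(N)
h_cyc[[0, 1, -1]] = 0.5, 0.25, 0.25
A = np.array([np.roll(h_cyc, n) for n in range(N)])   # circulant matrix

A_inv = np.linalg.inv(A)
# The inverse is also circulant: every row is a cyclic shift of row 0,
# so the inverse operator is itself linear and shift-invariant.
for n in range(N):
    assert np.allclose(A_inv[n], np.roll(A_inv[0], n))
```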

2D Convolution and Images

In two dimensions, a linear shift-invariant filter computes, at each position in the image, a linear combination of pixel values. The convolution equation is given by

    R[n,m] = Σ_p Σ_q I[p,q] h[n−p, m−q]

Computational Expense: Convolution in two dimensions will require, in the general case, O(N²M²) multiplications and additions, where N² is the number of pixels in the image and M² is the 2D support of the impulse response. If we can decompose the filter into a separable filter, then we can reduce this computational load to O(N²M).

Separability: If a 2D filter can be expressed as h(x,y) = h1(x) h2(y) for some h1(x) and some h2(y), then h is said to be separable. In the discrete case, an operator h[n,m] is separable if it can be expressed as an outer product:

    h[n,m] = h1[n] h2[m]

With separability, the convolution operation can be decomposed into a cascade of 1D convolutions, first along the rows and then along the columns. Because convolution is commutative, it doesn't matter which is done first. Each 1D pass requires only O(N²M) multiplications and additions. This yields a significant savings when the filter support is larger than 4 or 5 pixels.
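The separability claim can be checked directly: convolving with the outer-product kernel must equal a cascade of 1D convolutions along rows and then columns. A sketch (not from the notes), with a small direct 2D convolution written out for comparison:

```python
import numpy as np

def conv2d_full(I, h):
    """Direct zero-padded 2D convolution (full output),
    by scattering each input sample's weighted copy of h."""
    N1, N2 = I.shape
    M1, M2 = h.shape
    R = np.zeros((N1 + M1 - 1, N2 + M2 - 1))
    for p in range(N1):
        for q in range(N2):
            R[p:p + M1, q:q + M2] += I[p, q] * h
    return R

rng = np.random.default_rng(2)
I = rng.standard_normal((6, 6))
h1 = np.array([1, 4, 6, 4, 1]) / 16   # 1D binomial smoothing kernel
h = np.outer(h1, h1)                  # separable 2D kernel

# Cascade of 1D convolutions: rows first, then columns.
rows = np.apply_along_axis(np.convolve, 1, I, h1)
cascade = np.apply_along_axis(np.convolve, 0, rows, h1)
assert np.allclose(cascade, conv2d_full(I, h))
```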

Examples:

• The m×m constant matrix (a crude 2D averaging operator) can be expressed as the outer product of two 1D constant vectors.

• In continuous terms, the Gaussian is the only 2D isotropic function that can be decomposed into a separable product of two 1D Gaussians:

    (1/2πσ²) e^{−(x²+y²)/2σ²} = (1/√(2π)σ) e^{−x²/2σ²} · (1/√(2π)σ) e^{−y²/2σ²}

• Discrete approximations to Gaussians are given by binomial coefficients, e.g., (1, 4, 6, 4, 1)/16.
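The binomial approximation arises because repeatedly convolving the two-tap box (1, 1)/2 with itself produces rows of Pascal's triangle, which approach a Gaussian by the central limit theorem. A quick sketch (not from the notes):

```python
import numpy as np

# Four self-convolutions of (1, 1) give the binomial row (1, 4, 6, 4, 1).
b = np.array([1.0, 1.0])
row = np.array([1.0])
for _ in range(4):
    row = np.convolve(row, b)
assert np.allclose(row, [1, 4, 6, 4, 1])

# Normalizing gives the 5-tap smoothing kernel (1, 4, 6, 4, 1)/16.
kernel = row / row.sum()
assert np.isclose(kernel.sum(), 1.0)
```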

Figure 1: Blurring of Al: The original image of Al and a blurred version are shown. The blurring kernel was simply a separable kernel composed of the outer product of the 5-tap 1D impulse response (1, 4, 6, 4, 1)/16 with itself.

Figure 2: This shows Al and a high-pass filtered version of Al. This impulse response is defined by δ[n,m] − h[n,m], where h[n,m] is the separable blurring kernel used in the previous figure.

Figure 3: From left to right is the original Al, a band-pass filtered version of Al, and the zero-crossings of the filtered image. This impulse response is defined by the difference of two low-pass filters.

Figure 4: Derivative filters are common in image processing, as discussed in more detail below. (top) Crude approximations to a horizontal derivative and a vertical derivative, each of which is separable and composed of the outer product of a smoothing filter (i.e., (1, 2, 1)/4) in one direction and a first-order central difference (i.e., (−1, 0, 1)/2) in the other direction. (bottom) The sum of squared derivative responses gives a measure of the magnitude of the image gradient at each pixel. When clipped, this gives us a rough idea of where edges might be found.