Fundamentals of Digital Image Processing


Fundamentals of Digital Image Processing
ANIL K. JAIN
University of California, Davis
PRENTICE HALL, Englewood Cliffs, NJ 07632

5 Image Transforms

5.1 INTRODUCTION

The term image transforms usually refers to a class of unitary matrices used for representing images. Just as a one-dimensional signal can be represented by an orthogonal series of basis functions, an image can also be expanded in terms of a discrete set of basis arrays called basis images. These basis images can be generated by unitary matrices. Alternatively, a given N x N image can be viewed as an N² x 1 vector. An image transform provides a set of coordinates or basis vectors for the vector space.

For continuous functions, orthogonal series expansions provide series coefficients which can be used for any further processing or analysis of the functions. For a one-dimensional sequence {u(n), 0 ≤ n ≤ N − 1}, represented as a vector u of size N, a unitary transformation is written as v = Au, that is,

    v(k) = Σ_{n=0}^{N−1} a(k, n) u(n),   0 ≤ k ≤ N − 1    (5.1)

where A⁻¹ = A*ᵀ (unitary). This gives

    u(n) = Σ_{k=0}^{N−1} v(k) a*(k, n),   0 ≤ n ≤ N − 1    (5.2)

Equation (5.2) can be viewed as a series representation of the sequence u(n). The columns of A*ᵀ, that is, the vectors a*_k ≜ {a*(k, n), 0 ≤ n ≤ N − 1}ᵀ, are called the basis vectors of A. Figure 5.1 shows examples of basis vectors of several orthogonal transforms encountered in image processing. The series coefficients v(k) give a representation of the original sequence u(n) and are useful in filtering, data compression, feature extraction, and other analyses.
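The transform pair (5.1)-(5.2) can be sketched numerically. The following is a minimal NumPy sketch (NumPy is an assumption, not part of the text), using the unitary DFT matrix merely as one example of a unitary A:

```python
import numpy as np

# Sketch of (5.1)-(5.2): forward transform v = A u and the series
# reconstruction u(n) = sum_k v(k) a*(k, n) for a unitary matrix A.
# The unitary DFT matrix is used here purely as an example of such an A.

N = 8
n = np.arange(N)
A = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # a unitary A

# Unitarity check: A^{-1} = A*^T
assert np.allclose(A.conj().T @ A, np.eye(N))

u = np.arange(N, dtype=float)   # an arbitrary test sequence
v = A @ u                       # transform coefficients, eq. (5.1)
u_rec = A.conj().T @ v          # series representation, eq. (5.2)
assert np.allclose(u_rec, u)
```

Any other unitary matrix (cosine, sine, Hadamard, ...) could be substituted for A without changing the reconstruction property.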

[Figure 5.1 Basis vectors of the 8 x 8 transforms: Cosine, Sine, Hadamard, Haar, Slant, KLT. Plots of the eight basis vectors k = 0, ..., 7 of each transform; not reproducible in text.]

5.2 TWO-DIMENSIONAL ORTHOGONAL AND UNITARY TRANSFORMS

In the context of image processing a general orthogonal series expansion for an N x N image u(m, n) is a pair of transformations of the form

    v(k, l) = Σ_{m,n=0}^{N−1} u(m, n) a_{k,l}(m, n),   0 ≤ k, l ≤ N − 1    (5.3)
    u(m, n) = Σ_{k,l=0}^{N−1} v(k, l) a*_{k,l}(m, n),   0 ≤ m, n ≤ N − 1    (5.4)

where {a_{k,l}(m, n)}, called an image transform, is a set of complete orthonormal discrete basis functions satisfying the properties

Orthonormality:   Σ_{m,n=0}^{N−1} a_{k,l}(m, n) a*_{k',l'}(m, n) = δ(k − k', l − l')    (5.5)

Completeness:     Σ_{k,l=0}^{N−1} a_{k,l}(m, n) a*_{k,l}(m', n') = δ(m − m', n − n')    (5.6)

The elements v(k, l) are called the transform coefficients and V ≜ {v(k, l)} is called the transformed image. The orthonormality property assures that any truncated series expansion of the form

    u_{P,Q}(m, n) ≜ Σ_{k=0}^{P−1} Σ_{l=0}^{Q−1} v(k, l) a*_{k,l}(m, n),   P ≤ N, Q ≤ N    (5.7)

will minimize the sum of squares error

    σ_e² = Σ_{m,n=0}^{N−1} [u(m, n) − u_{P,Q}(m, n)]²    (5.8)

when the coefficients v(k, l) are given by (5.3). The completeness property assures that this error will be zero for P = Q = N (Problem 5.1).

Separable Unitary Transforms

The number of multiplications and additions required to compute the transform coefficients v(k, l) using (5.3) is O(N⁴), which is quite excessive for practical-size images. The dimensionality of the problem is reduced to O(N³) when the transform is restricted to be separable, that is,

    a_{k,l}(m, n) = a_k(m) b_l(n) ≜ a(k, m) b(l, n)    (5.9)

where {a_k(m), k = 0, ..., N − 1} and {b_l(n), l = 0, ..., N − 1} are one-dimensional complete orthonormal sets of basis vectors. Imposition of (5.5) and (5.6) shows that A ≜ {a(k, m)} and B ≜ {b(l, n)} should be unitary matrices themselves, for example,

    AA*ᵀ = AᵀA* = I    (5.10)

Often one chooses B to be the same as A, so that (5.3) and (5.4) reduce to

    v(k, l) = Σ_{m,n=0}^{N−1} a(k, m) u(m, n) a(l, n)  ⇔  V = AUAᵀ    (5.11)
    u(m, n) = Σ_{k,l=0}^{N−1} a*(k, m) v(k, l) a*(l, n)  ⇔  U = A*ᵀVA*    (5.12)

For an M x N rectangular image, the transform pair is

    V = A_M U A_Nᵀ    (5.13)
    U = A_M*ᵀ V A_N*    (5.14)

where A_M and A_N are M x M and N x N unitary matrices, respectively. These are called two-dimensional separable transformations. Unless otherwise stated, we will always imply the preceding separability when we mention two-dimensional unitary transformations. Note that (5.11) can be written as

    Vᵀ = A[AU]ᵀ    (5.15)

which means (5.11) can be performed by first transforming each column of U and then transforming each row of the result to obtain the rows of V.

Basis Images

Let a*_k denote the kth column of A*ᵀ. Define the matrices

    A*_{k,l} ≜ a*_k a*_lᵀ    (5.16)

and the matrix inner product of two N x N matrices F and G as

    ⟨F, G⟩ = Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} f(m, n) g*(m, n)    (5.17)

Then (5.4) and (5.3) give a series representation for the image as

    U = Σ_{k,l=0}^{N−1} v(k, l) A*_{k,l}    (5.18)
    v(k, l) = ⟨U, A*_{k,l}⟩    (5.19)

Equation (5.18) expresses any image U as a linear combination of the N² matrices A*_{k,l}, k, l = 0, ..., N − 1, which are called the basis images. Figure 5.2 shows 8 x 8 basis images for the same set of transforms in Fig. 5.1. The transform coefficient v(k, l) is simply the inner product of the (k, l)th basis image with the given image. It is also called the projection of the image on the (k, l)th basis image. Therefore, any N x N image can be expanded in a series using a complete set of N² basis images. If U and V are mapped into vectors u and v by row ordering, then (5.11), (5.12), and (5.16) yield (see Section 2.8, on Kronecker products)

    v = (A ⊗ A)u ≜ 𝒜u    (5.20)
    u = (A ⊗ A)*ᵀ v    (5.21)

where

    𝒜 ≜ A ⊗ A    (5.22)

[Figure 5.2 Basis images of the 8 x 8 transforms of Fig. 5.1. Halftone reproductions of the 64 basis images of each transform; not reproducible in text.]

is a unitary matrix. Thus, given any unitary transform A, a two-dimensional separable unitary transformation can be defined via (5.20) or (5.13).

Example 5.1

For the given orthogonal matrix A and image U

    A = (1/√2) [1  1; 1  −1],    U = [1  2; 3  4]

the transformed image, obtained according to (5.11), is

    V = AUAᵀ = (1/2) [1  1; 1  −1] [1  2; 3  4] [1  1; 1  −1] = [5  −1; −2  0]

To obtain the basis images, we find the outer products of the columns of A*ᵀ, which give

    A*_{0,0} = (1/2) [1; 1] [1  1] = (1/2) [1  1; 1  1]

and similarly

    A*_{0,1} = (1/2) [1  −1; 1  −1] = A*_{1,0}ᵀ,    A*_{1,1} = (1/2) [1  −1; −1  1]

The inverse transformation gives

    A*ᵀVA* = (1/2) [1  1; 1  −1] [5  −1; −2  0] [1  1; 1  −1] = [1  2; 3  4]

which is U, the original image.

Kronecker Products and Dimensionality

Dimensionality of image transforms can also be studied in terms of their Kronecker product separability. An arbitrary one-dimensional transformation

    y = 𝒜x    (5.23)

is called separable if

    𝒜 = A₁ ⊗ A₂    (5.24)

This is because (5.23) can then be reduced to the separable two-dimensional transformation

    Y = A₁XA₂ᵀ    (5.25)

where X and Y are matrices that map into the vectors x and y, respectively, by row ordering. If 𝒜 is N² x N² and A₁, A₂ are N x N, then the number of operations required for implementing (5.25) reduces from N⁴ to about 2N³. The number of operations can be reduced further if A₁ and A₂ are also separable. Image transforms such as the discrete Fourier, sine, cosine, Hadamard, Haar, and Slant transforms can be factored as Kronecker products of several smaller-sized matrices, which leads to fast algorithms for their implementation (see Problem 5.2). In the context of image processing such matrices are also called fast image transforms.
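Example 5.1 can be reproduced directly. A short NumPy sketch (NumPy assumed) of the forward transform, the four basis images, and the reconstruction:

```python
import numpy as np

# Sketch reproducing Example 5.1: V = A U A^T, the four basis images
# A*_{k,l} = a_k a_l^T, and recovery of U as sum_{k,l} v(k,l) A*_{k,l}.

A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = np.array([[1.0, 2.0], [3.0, 4.0]])

V = A @ U @ A.T                     # forward transform, eq. (5.11)

# Basis images: outer products of the columns of A*^T (A is real here).
basis = {(k, l): np.outer(A[k], A[l]) for k in range(2) for l in range(2)}

# Each coefficient is the projection of U on the corresponding basis image,
# and U is recovered by the series (5.18)-(5.19).
U_rec = sum(V[k, l] * basis[(k, l)] for k in range(2) for l in range(2))
print(V)   # [[ 5. -1.], [-2.  0.]]
```

The printed V matches the transformed image computed by hand in the example, and U_rec equals the original U.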

Dimensionality of Image Transforms

The 2N³ computations for V can also be reduced by restricting the choice of A to fast transforms, whose matrix structure allows a factorization of the type

    A = A⁽¹⁾ A⁽²⁾ ··· A⁽ᵖ⁾    (5.26)

where the A⁽ⁱ⁾, i = 1, ..., p (p << N), are matrices with just a few nonzero entries (say r, with r << N). Thus, a multiplication of the type y = Ax is accomplished in rpN operations. For the Fourier, sine, cosine, Hadamard, Slant, and several other transforms, p = log₂ N, and the operations reduce to the order of N log₂ N (or N² log₂ N for N x N images). Depending on the actual transform, one operation can be defined as one multiplication and one addition or subtraction, as in the Fourier transform, or one addition or subtraction, as in the Hadamard transform.

Transform Frequency

For a one-dimensional signal f(x), frequency is defined by the Fourier domain variable ξ. It is related to the number of zero crossings of the real or imaginary part of the basis function exp{j2πξx}. This concept can be generalized to arbitrary unitary transforms. Let the rows of a unitary matrix A be arranged so that the number of zero crossings increases with the row number. Then in the transformation y = Ax the elements y(k) are ordered according to increasing wave number or transform frequency. In the sequel any reference to frequency will imply the transform frequency, that is, discrete Fourier frequency, cosine frequency, and so on. The term spatial frequency generally refers to the continuous Fourier transform frequency and is not the same as the discrete Fourier frequency. In the case of the Hadamard transform, the term sequency is also used. It should be noted that this concept of frequency is useful only on a relative basis for a particular transform. A low-frequency term of one transform could contain the high-frequency harmonics of another transform.
The Optimum Transform

Another important consideration in selecting a transform is its performance in filtering and data compression of images based on the mean square criterion. The Karhunen-Loeve transform (KLT) is known to be optimum with respect to this criterion and is discussed in Section 5.11.

5.3 PROPERTIES OF UNITARY TRANSFORMS

Energy Conservation and Rotation

In the unitary transformation v = Au,

    ‖v‖² = ‖u‖²    (5.27)

This is easily proven by noting that

    ‖v‖² = Σ_{k=0}^{N−1} |v(k)|² = v*ᵀv = u*ᵀA*ᵀAu = u*ᵀu = Σ_{n=0}^{N−1} |u(n)|² = ‖u‖²

Thus a unitary transformation preserves the signal energy or, equivalently, the length of the vector u in the N-dimensional vector space. This means every unitary transformation is simply a rotation of the vector u in the N-dimensional vector space. Alternatively, a unitary transformation is a rotation of the basis coordinates, and the components of v are the projections of u on the new basis (see Problem 5.4). Similarly, for two-dimensional unitary transformations such as (5.3), (5.4), and (5.11) to (5.14), it can be proven that

    Σ_{m,n=0}^{N−1} |u(m, n)|² = Σ_{k,l=0}^{N−1} |v(k, l)|²    (5.28)

Energy Compaction and Variances of Transform Coefficients

Most unitary transforms have a tendency to pack a large fraction of the average energy of the image into a relatively few components of the transform coefficients. Since the total energy is preserved, this means many of the transform coefficients will contain very little energy. If μ_u and R_u denote the mean and covariance of a vector u, then the corresponding quantities for the transformed vector v are given by

    μ_v = E[v] = E[Au] = A E[u] = Aμ_u    (5.29)
    R_v = E[(v − μ_v)(v − μ_v)*ᵀ] = A (E[(u − μ_u)(u − μ_u)*ᵀ]) A*ᵀ = A R_u A*ᵀ    (5.30)

The transform coefficient variances are given by the diagonal elements of R_v, that is,

    σ_v²(k) = [R_v]_{k,k} = [A R_u A*ᵀ]_{k,k}    (5.31)

Since A is unitary, it follows that

    Σ_{k=0}^{N−1} |μ_v(k)|² = μ_v*ᵀ μ_v = μ_u*ᵀ A*ᵀ A μ_u = Σ_{n=0}^{N−1} |μ_u(n)|²    (5.32)
    Σ_{k=0}^{N−1} σ_v²(k) = Tr[A R_u A*ᵀ] = Tr[R_u] = Σ_{n=0}^{N−1} σ_u²(n)    (5.33)

which together give

    Σ_{k=0}^{N−1} E[|v(k)|²] = Σ_{n=0}^{N−1} E[|u(n)|²]    (5.34)

The average energy E[|v(k)|²] of the transform coefficients v(k) tends to be unevenly distributed, although it may be evenly distributed for the input sequence u(n). For a two-dimensional random field u(m, n) whose mean is μ_u(m, n) and covariance is r(m, n; m', n'), the transform coefficients v(k, l) satisfy the properties

    μ_v(k, l) = Σ_m Σ_n a(k, m) a(l, n) μ_u(m, n)    (5.35)

    σ_v²(k, l) = E[|v(k, l) − μ_v(k, l)|²]
               = Σ_m Σ_n Σ_{m'} Σ_{n'} a(k, m) a(l, n) r(m, n; m', n') a*(k, m') a*(l, n')    (5.36)

If the covariance of u(m, n) is separable, that is,

    r(m, n; m', n') = r₁(m, m') r₂(n, n')    (5.37)

then the variances of the transform coefficients can be written as a separable product

    σ_v²(k, l) = σ₁²(k) σ₂²(l) ≜ [A R₁ A*ᵀ]_{k,k} [A R₂ A*ᵀ]_{l,l}    (5.38)

where R₁ = {r₁(m, m')} and R₂ = {r₂(n, n')}.

Decorrelation

When the input vector elements are highly correlated, the transform coefficients tend to be uncorrelated. This means the off-diagonal terms of the covariance matrix R_v tend to become small compared to the diagonal elements. With respect to the preceding two properties, the KL transform is optimum, that is, it packs the maximum average energy in a given number of transform coefficients while completely decorrelating them. These properties are presented in greater detail in Section 5.11.

Other Properties

Unitary transforms have other interesting properties. For example, the determinant and the eigenvalues of a unitary matrix have unity magnitude. Also, the entropy of a random vector is preserved under a unitary transformation. Since entropy is a measure of average information, this means information is preserved under a unitary transformation.

Example 5.2 (Energy compaction and decorrelation)

A 2 x 1 zero mean vector u is unitarily transformed as v = Au, where

    A = (1/2) [√3  1; −1  √3],    R_u = [1  ρ; ρ  1],   0 < ρ < 1

The parameter ρ measures the correlation between u(0) and u(1). The covariance of v is obtained as

    R_v = A R_u Aᵀ = [1 + √3ρ/2    ρ/2;    ρ/2    1 − √3ρ/2]

From the expression for R_u, σ_u²(0) = σ_u²(1) = 1, that is, the total average energy of 2 is distributed equally between u(0) and u(1). However, σ_v²(0) = 1 + √3ρ/2 and σ_v²(1) = 1 − √3ρ/2. The total average energy is still 2, but the average energy in v(0) is greater than in v(1). If ρ = 0.95, then 91.1% of the total average energy has been packed in the first sample. The correlation between v(0) and v(1) is given by

    ρ_v(0, 1) = E[v(0)v(1)] / [σ_v(0) σ_v(1)] = (ρ/2) / [1 − 3ρ²/4]^{1/2}

which is less in absolute value than ρ for ρ < 1. For ρ = 0.95, we find ρ_v(0, 1) ≈ 0.84. Hence the correlation between the transform coefficients has been reduced. If the foregoing procedure is repeated for the 2 x 2 transform A of Example 5.1, then we find σ_v²(0) = 1 + ρ, σ_v²(1) = 1 − ρ, and ρ_v(0, 1) = 0. For ρ = 0.95, now 97.5% of the energy is packed in v(0). Moreover, v(0) and v(1) become uncorrelated.

5.4 THE ONE-DIMENSIONAL DISCRETE FOURIER TRANSFORM (DFT)

The discrete Fourier transform (DFT) of a sequence {u(n), n = 0, ..., N − 1} is defined as

    v(k) = Σ_{n=0}^{N−1} u(n) W_N^{kn},   k = 0, 1, ..., N − 1    (5.39)

where

    W_N ≜ exp(−j2π/N)    (5.40)

The inverse transform is given by

    u(n) = (1/N) Σ_{k=0}^{N−1} v(k) W_N^{−kn},   n = 0, 1, ..., N − 1    (5.41)

The pair of equations (5.39) and (5.41) are not scaled properly to be unitary transformations. In image processing it is more convenient to consider the unitary DFT, which is defined as

    v(k) = (1/√N) Σ_{n=0}^{N−1} u(n) W_N^{kn},   k = 0, ..., N − 1    (5.42)
    u(n) = (1/√N) Σ_{k=0}^{N−1} v(k) W_N^{−kn},   n = 0, ..., N − 1    (5.43)

The N x N unitary DFT matrix F is given by

    F = { (1/√N) W_N^{kn} },   0 ≤ k, n ≤ N − 1    (5.44)

Future references to DFT and unitary DFT will imply the definitions of (5.39) and (5.42), respectively. The DFT is one of the most important transforms in digital signal and image processing. It has several properties that make it attractive for image processing applications.

Properties of the DFT/Unitary DFT

Let u(n) be an arbitrary sequence defined for n = 0, 1, ..., N − 1. A circular shift of u(n) by l, denoted by u(n − l)_c, is defined as u[(n − l) modulo N]. See Fig. 5.3 for l = 2, N = 5.
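The unitary DFT matrix of (5.44) and the circular shift just defined can be sketched numerically; a minimal NumPy sketch (NumPy assumed; `np.roll` happens to implement the modulo-N shift):

```python
import numpy as np

# Sketch of the unitary DFT matrix of eq. (5.44) and of the circular
# shift u(n - l)_c = u[(n - l) modulo N].

N = 5
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # eq. (5.44)

assert np.allclose(F, F.T)                     # F is symmetric
assert np.allclose(F.conj().T @ F, np.eye(N))  # unitary: F^{-1} = F*

u = np.array([1.0, 3.0, 2.0, 0.0, 4.0])
u_shifted = np.roll(u, 2)          # u[(n - 2) modulo 5], as in Fig. 5.3
assert np.allclose(u_shifted, u[(n - 2) % N])
```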

[Figure 5.3 Circular shift of u(n) by 2: plots of u(n) and u[(n − 2) modulo 5], n = 0, ..., 4.]

The DFT and unitary DFT matrices are symmetric. By definition the matrix F is symmetric, Fᵀ = F. Therefore,

    F⁻¹ = F*    (5.45)

The extensions are periodic. The extensions of the DFT and unitary DFT of a sequence and their inverse transforms are periodic with period N. If, for example, in the definition of (5.42) we let k take all integer values, then the sequence v(k) turns out to be periodic, that is, v(k) = v(k + N) for every k.

The DFT is the sampled spectrum of the finite sequence u(n) extended by zeros outside the interval [0, N − 1]. If we define the zero-extended sequence

    ū(n) ≜ u(n), 0 ≤ n ≤ N − 1;   0, otherwise    (5.46)

then its Fourier transform is

    Ū(ω) = Σ_{n=−∞}^{∞} ū(n) exp(−jωn) = Σ_{n=0}^{N−1} u(n) exp(−jωn)    (5.47)

Comparing this with (5.39) we see that

    v(k) = Ū(2πk/N)    (5.48)

Note that the unitary DFT of (5.42) would be Ū(2πk/N)/√N.

The DFT and unitary DFT of dimension N can be implemented by a fast algorithm in O(N log₂ N) operations. There exists a class of algorithms, called the fast Fourier transform (FFT), which requires O(N log₂ N) operations for implementing the DFT or unitary DFT, where one operation is a real multiplication and a real addition. The exact operation count depends on N as well as the particular choice of the algorithm in that class. Most common FFT algorithms require N = 2^p, where p is a positive integer.

The DFT or unitary DFT of a real sequence {u(n), n = 0, ..., N − 1} is conjugate symmetric about N/2. From (5.42) we obtain

    v*(N − k) = (1/√N) Σ_{n=0}^{N−1} u*(n) W_N^{−(N−k)n} = (1/√N) Σ_{n=0}^{N−1} u(n) W_N^{kn} = v(k)

that is,

    v(N/2 − k) = v*(N/2 + k),   k = 0, ..., N/2 − 1    (5.49)
    |v(N/2 − k)| = |v(N/2 + k)|    (5.50)

Figure 5.4 shows a 256-sample scan line of an image. The magnitude of its DFT is shown in Fig. 5.5, which exhibits symmetry about the point 128. If we consider the periodic extension of v(k), we see that

    v(−k) = v(N − k)

[Figure 5.4 A 256-sample scan line u(n) of an image.]

[Figure 5.5 Magnitude of the unitary discrete Fourier transform v(k) of the scan line of Fig. 5.4.]

Hence the (unitary) DFT frequencies N/2 + k, k = 0, ..., N/2 − 1, are simply the negative frequencies at ω = (2π/N)(−N/2 + k) in the Fourier spectrum of the finite sequence {u(n), 0 ≤ n ≤ N − 1}. Also, from (5.39) and (5.49), we see that v(0) and v(N/2) are real, so that the N x 1 real sequence

    { v(0), Re{v(k)}, k = 1, ..., N/2 − 1, v(N/2), Im{v(k)}, k = 1, ..., N/2 − 1 }    (5.51)

completely defines the DFT of the real sequence u(n). Therefore, it can be said that the DFT or unitary DFT of an N x 1 real sequence has N degrees of freedom and requires the same storage capacity as the sequence itself.

The basis vectors of the unitary DFT are the orthonormal eigenvectors of any circulant matrix. Moreover, the eigenvalues of a circulant matrix are given by the DFT of its first column. Let H be an N x N circulant matrix. Its elements therefore satisfy

    [H]_{m,n} = h(m − n) ≜ h[(m − n) modulo N],   0 ≤ m, n ≤ N − 1    (5.52)

The basis vectors of the unitary DFT are the columns of F*ᵀ = F*, that is,

    φ_k ≜ (1/√N) { W_N^{−kn}, 0 ≤ n ≤ N − 1 }ᵀ,   k = 0, ..., N − 1    (5.53)

Consider the expression

    [Hφ_k]_m = (1/√N) Σ_{n=0}^{N−1} h(m − n) W_N^{−kn}    (5.54)

Writing m − n = l and rearranging terms, we can write

    [Hφ_k]_m = (1/√N) W_N^{−km} Σ_{l=m−N+1}^{m} h(l) W_N^{kl}    (5.55)

Using (5.52) and the fact that W_N^{k(l+N)} = W_N^{kl} (since W_N^{kN} = 1), the sum over l = m − N + 1, ..., m equals the sum over one full period l = 0, ..., N − 1, giving the desired eigenvalue equation

    [Hφ_k]_m = λ_k φ_k(m),   or   Hφ_k = λ_k φ_k    (5.56)

where the λ_k, the eigenvalues of H, are defined as

    λ_k = Σ_{l=0}^{N−1} h(l) W_N^{kl},   0 ≤ k ≤ N − 1    (5.57)

This is simply the DFT of the first column of H. Based on the preceding properties of the DFT, the following additional properties can be proven (Problem 5.9).

Circular convolution theorem. The DFT of the circular convolution of two sequences is equal to the product of their DFTs, that is, if

    x₂(n) = Σ_{k=0}^{N−1} h(n − k)_c x(k),   0 ≤ n ≤ N − 1    (5.58)

then

    DFT{x₂(n)}_N = DFT{h(n)}_N DFT{x(n)}_N    (5.59)

where DFT{x(n)}_N denotes the DFT of the sequence x(n) of size N. This means we can calculate the circular convolution by first calculating the DFT of x₂(n) via (5.59) and then taking its inverse DFT. Using the FFT this will take O(N log₂ N) operations, compared to the N² operations required for direct evaluation of (5.58).

A linear convolution of two sequences can also be obtained via the FFT by imbedding it into a circular convolution. In general, the linear convolution of two sequences {h(n), n = 0, ..., N' − 1} and {x(n), n = 0, ..., N − 1} is a sequence {x₂(n), 0 ≤ n ≤ N' + N − 2} and can be obtained by the following algorithm:

Step 1: Let M ≥ N' + N − 1 be an integer for which an FFT algorithm is available.
Step 2: Define h̃(n) and x̃(n), 0 ≤ n ≤ M − 1, as zero-extended sequences corresponding to h(n) and x(n), respectively.
Step 3: Let Y(k) = DFT{x̃(n)}_M and λ_k = DFT{h̃(n)}_M. Define y₂(k) = λ_k Y(k), k = 0, ..., M − 1.
Step 4: Take the inverse DFT of y₂(k) to obtain x̃₂(n). Then x₂(n) = x̃₂(n) for 0 ≤ n ≤ N + N' − 2.

Any circulant matrix can be diagonalized by the DFT/unitary DFT. That is,

    FHF* = Λ    (5.60)

where Λ ≜ Diag{λ_k, 0 ≤ k ≤ N − 1} and the λ_k are given by (5.57). It follows that if C, C₁, and C₂ are circulant matrices, then the following hold:

1. C₁C₂ = C₂C₁, that is, circulant matrices commute.
2. C⁻¹ is a circulant matrix and can be computed in O(N log N) operations.
3. Cᵀ, C₁ + C₂, and f(C) are all circulant matrices, where f(x) is an arbitrary function of x.

5.5 THE TWO-DIMENSIONAL DFT

The two-dimensional DFT of an N x N image {u(m, n)} is a separable transform defined as

    v(k, l) = Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} u(m, n) W_N^{km} W_N^{ln},   0 ≤ k, l ≤ N − 1    (5.61)

and the inverse transform is

    u(m, n) = (1/N²) Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} v(k, l) W_N^{−km} W_N^{−ln},   0 ≤ m, n ≤ N − 1    (5.62)

The two-dimensional unitary DFT pair is defined as

    v(k, l) = (1/N) Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} u(m, n) W_N^{km} W_N^{ln},   0 ≤ k, l ≤ N − 1    (5.63)
    u(m, n) = (1/N) Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} v(k, l) W_N^{−km} W_N^{−ln},   0 ≤ m, n ≤ N − 1    (5.64)

In matrix notation this becomes

    V = FUF    (5.65)

[Figure 5.6 Two-dimensional unitary DFT of a 256 x 256 image: (a) original image; (b) phase; (c) magnitude; (d) magnitude centered.]

[Figure 5.7 Unitary DFT of images: (a) resolution chart; (b) its DFT; (c) binary image; (d) its DFT. The two parallel lines are due to the '/' sign in the binary image.]

    U = F*VF*    (5.66)

If U and V are mapped into row-ordered vectors u and v, respectively, then

    v = ℱu,   u = ℱ*ᵀv    (5.67)

where

    ℱ ≜ F ⊗ F    (5.68)

The N² x N² matrix ℱ represents the N x N two-dimensional unitary DFT. Figure 5.6 shows an original image and the magnitude and phase components of its unitary DFT. Figure 5.7 shows magnitudes of the unitary DFTs of two other images.

Properties of the Two-Dimensional DFT

The properties of the two-dimensional unitary DFT are quite similar to the one-dimensional case and are summarized next.

Symmetric, unitary.

    ℱᵀ = ℱ,   ℱ⁻¹ = ℱ* = F* ⊗ F*    (5.69)

Periodic extensions.

    v(k + N, l + N) = v(k, l),   u(m + N, n + N) = u(m, n),   for all k, l and m, n    (5.70)

Sampled Fourier spectrum. If ū(m, n) = u(m, n) for 0 ≤ m, n ≤ N − 1, and ū(m, n) = 0 otherwise, then

    DFT{u(m, n)} = v(k, l) = Ū(2πk/N, 2πl/N)    (5.71)

where Ū(ω₁, ω₂) is the Fourier transform of ū(m, n).
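The matrix form (5.65)-(5.66) can be checked against a library 2-D FFT; a NumPy sketch (NumPy assumed; note `np.fft.fft2` computes the unscaled DFT of (5.61), so the unitary transform is obtained by dividing by N):

```python
import numpy as np

# Check of (5.63)-(5.66): the two-dimensional unitary DFT as the separable
# matrix product V = F U F, compared against a library 2-D FFT scaled by 1/N.

N = 8
k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # unitary DFT matrix

rng = np.random.default_rng(1)
U = rng.standard_normal((N, N))

V = F @ U @ F                    # eq. (5.65); F is symmetric, so F^T = F
assert np.allclose(V, np.fft.fft2(U) / N)

U_rec = F.conj() @ V @ F.conj()  # eq. (5.66): U = F* V F*
assert np.allclose(U_rec.real, U)
```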

Fast transform. Since the two-dimensional DFT is separable, the transformation of (5.65) is equivalent to 2N one-dimensional unitary DFTs, each of which can be performed in O(N log₂ N) operations via the FFT. Hence the total number of operations is O(N² log₂ N).

Conjugate symmetry. The DFT and unitary DFT of real images exhibit conjugate symmetry, that is,

    v(N/2 − k, N/2 − l) = v*(N/2 + k, N/2 + l),   0 ≤ k, l ≤ N/2 − 1    (5.72)

or

    v(k, l) = v*(N − k, N − l),   0 ≤ k, l ≤ N − 1    (5.73)

From this, it can be shown that v(k, l) has only N² independent real elements. For example, the samples in the shaded region of Fig. 5.8 determine the complete DFT or unitary DFT (see Problem 5.10).

Basis images. The basis images are given by definition [see (5.16) and (5.53)]:

    A*_{k,l} = φ_k φ_lᵀ = (1/N) { W_N^{−(km+ln)}, 0 ≤ m, n ≤ N − 1 },   0 ≤ k, l ≤ N − 1    (5.74)

Two-dimensional circular convolution theorem. The DFT of the two-dimensional circular convolution of two arrays is the product of their DFTs. The two-dimensional circular convolution of two N x N arrays h(m, n) and u(m, n) is defined as

    u₂(m, n) = Σ_{m'=0}^{N−1} Σ_{n'=0}^{N−1} h(m − m', n − n')_c u(m', n'),   0 ≤ m, n ≤ N − 1    (5.75)

where

    h(m, n)_c ≜ h(m modulo N, n modulo N)    (5.76)

[Figure 5.8 Discrete Fourier transform coefficients v(k, l) in the shaded area determine the remaining coefficients.]
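The two-dimensional circular convolution theorem can be verified directly; a NumPy sketch (NumPy assumed) comparing a brute-force evaluation of (5.75) with the product of 2-D DFTs:

```python
import numpy as np

# Check of the 2-D circular convolution theorem, eqs. (5.75)-(5.76):
# circular convolution in space equals a pointwise product of 2-D DFTs.

N = 8
rng = np.random.default_rng(0)
h = rng.standard_normal((N, N))
u = rng.standard_normal((N, N))

# Direct O(N^4) evaluation of eq. (5.75)
u2 = np.zeros((N, N))
for m in range(N):
    for n in range(N):
        for mp in range(N):
            for nq in range(N):
                u2[m, n] += h[(m - mp) % N, (n - nq) % N] * u[mp, nq]

# Same result via the product of 2-D DFTs
u2_fft = np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(u)).real
assert np.allclose(u2, u2_fft)
```

The FFT route costs O(N² log₂ N) operations instead of the O(N⁴) of the quadruple loop.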

[Figure 5.9 Two-dimensional circular convolution: (a) array h(m, n); (b) circular convolution of h(m, n) with u(m, n) over an N x N region.]

Figure 5.9 shows the meaning of circular convolution. It is the same as convolving a periodic extension of h(m, n) with u(m, n) over an N x N region. The two-dimensional DFT of h(m − m', n − n')_c for fixed m', n' is given by

    Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} h(m − m', n − n')_c W_N^{mk+nl} = W_N^{m'k+n'l} Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} h(m, n) W_N^{mk+nl} = W_N^{m'k+n'l} DFT{h(m, n)}_N    (5.77)

which follows by substituting i = m − m', j = n − n' and using the periodicity (5.76) of h(i, j)_c and of W_N. Taking the DFT of both sides of (5.75) and using the preceding result, we obtain†

    DFT{u₂(m, n)}_N = DFT{h(m, n)}_N DFT{u(m, n)}_N    (5.78)

From this and the fast transform property, it follows that an N x N circular convolution can be performed in O(N² log₂ N) operations. This property is also useful in calculating two-dimensional linear convolutions such as

    x₃(m, n) = Σ_{m'=0}^{M−1} Σ_{n'=0}^{M−1} x₂(m − m', n − n') x₁(m', n')    (5.79)

where x₁(m, n) and x₂(m, n) are assumed to be zero for m, n ∉ [0, M − 1]. The region of support for the result x₃(m, n) is {0 ≤ m, n ≤ 2M − 2}. Let N ≥ 2M − 1 and define the N x N arrays

    h̃(m, n) ≜ x₂(m, n), 0 ≤ m, n ≤ M − 1;   0, otherwise    (5.80)
    ũ(m, n) ≜ x₁(m, n), 0 ≤ m, n ≤ M − 1;   0, otherwise    (5.81)

† We denote by DFT{x(m, n)}_N the two-dimensional DFT of an N x N array x(m, n), 0 ≤ m, n ≤ N − 1.

Evaluating the circular convolution of h̃(m, n) and ũ(m, n) according to (5.75), it can be seen with the aid of Fig. 5.9 that

    x₃(m, n) = ũ₂(m, n),   0 ≤ m, n ≤ 2M − 2    (5.82)

This means the two-dimensional linear convolution of (5.79) can be performed in O(N² log₂ N) operations.

Block circulant operations. Dividing both sides of (5.77) by N and using the definition of the Kronecker product, we obtain

    (F ⊗ F)ℋ = 𝒟(F ⊗ F)    (5.83)

where ℋ is doubly block circulant and 𝒟 is diagonal, with elements given by

    [𝒟]_{kN+l, kN+l} = d_{k,l} ≜ DFT{h(m, n)}_N,   0 ≤ k, l ≤ N − 1    (5.84)

Equation (5.83) can be written as

    ℱℋℱ* = 𝒟    (5.85)

that is, a doubly block circulant matrix is diagonalized by the two-dimensional unitary DFT. From (5.84) and the fast transform property, we conclude that a doubly block circulant matrix can be diagonalized in O(N² log₂ N) operations. The eigenvalues of ℋ, given by the two-dimensional DFT of h(m, n), are the same as obtained by operating Nℱ on the first column of ℋ. This is because the elements of the first column of ℋ are the elements h(m, n) mapped by lexicographic ordering.

Block Toeplitz operations. Our discussion on linear convolution implies that any doubly block Toeplitz matrix operation can be imbedded into a doubly block circulant operation, which, in turn, can be implemented using the two-dimensional unitary DFT.

5.6 THE COSINE TRANSFORM

The N x N cosine transform matrix C = {c(k, n)}, also called the discrete cosine transform (DCT), is defined as

    c(k, n) = 1/√N,                             k = 0,  0 ≤ n ≤ N − 1
    c(k, n) = √(2/N) cos[π(2n + 1)k / 2N],      1 ≤ k ≤ N − 1,  0 ≤ n ≤ N − 1    (5.86)

The one-dimensional DCT of a sequence {u(n), 0 ≤ n ≤ N − 1} is defined as

    v(k) = α(k) Σ_{n=0}^{N−1} u(n) cos[π(2n + 1)k / 2N],   0 ≤ k ≤ N − 1    (5.87)

where

    α(0) ≜ √(1/N),   α(k) ≜ √(2/N) for 1 ≤ k ≤ N − 1    (5.88)
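The DCT matrix of (5.86) is easy to construct explicitly; a NumPy sketch (NumPy assumed) that builds C, checks its orthogonality, and applies the forward and inverse transforms:

```python
import numpy as np

# The DCT matrix of eq. (5.86), with a check that it is real and orthogonal
# and that C @ u implements the series form (5.87).

N = 8
n = np.arange(N)
# Row k, column n: sqrt(2/N) cos(pi (2n+1) k / 2N); k = 0 row is 1/sqrt(N).
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

assert np.allclose(C @ C.T, np.eye(N))  # orthogonal: C^{-1} = C^T

u = np.arange(N, dtype=float)
v = C @ u                                # one-dimensional DCT, eq. (5.87)
u_rec = C.T @ v                          # inverse transform
assert np.allclose(u_rec, u)
```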

[Figure 5.10 Cosine transform of the image scan line shown in Fig. 5.4.]

The inverse transformation is given by

    u(n) = Σ_{k=0}^{N−1} α(k) v(k) cos[π(2n + 1)k / 2N],   0 ≤ n ≤ N − 1    (5.89)

The basis vectors of the 8 x 8 DCT are shown in Fig. 5.1. Figure 5.10 shows the cosine transform of the image scan line shown in Fig. 5.4. Note that many transform coefficients are small, that is, most of the energy of the data is packed in a few transform coefficients. The two-dimensional cosine transform pair is obtained by substituting A = A* = C in (5.11) and (5.12). The basis images of the 8 x 8 two-dimensional cosine transform are shown in Fig. 5.2. Figure 5.11 shows examples of the cosine transform of different images.

Properties of the Cosine Transform

1. The cosine transform is real and orthogonal, that is,

    C = C*,   C⁻¹ = Cᵀ    (5.90)

2. The cosine transform is not the real part of the unitary DFT. This can be seen by inspection of C and the DFT matrix F. (Also see Problem 5.13.) However, the cosine transform of a sequence is related to the DFT of its symmetric extension (see Problem 5.16).
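Property 3 below derives an evaluation of (5.87) through a single N-point FFT of a reordered sequence. That identity can be checked numerically in advance; a NumPy sketch (NumPy assumed):

```python
import numpy as np

# Sketch of the reordering identity of eqs. (5.91)-(5.92): the N-point DCT
# computed from one N-point FFT of the even/odd reordered sequence.

def dct_via_fft(u):
    N = len(u)
    # eq. (5.91): u~(n) = u(2n), u~(N-1-n) = u(2n+1)
    ut = np.concatenate([u[0::2], u[1::2][::-1]])
    alpha = np.full(N, np.sqrt(2.0 / N)); alpha[0] = np.sqrt(1.0 / N)
    k = np.arange(N)
    # eq. (5.92): v(k) = Re[ alpha(k) e^{-j pi k / 2N} DFT{u~(n)}_N ]
    return (alpha * np.exp(-1j * np.pi * k / (2 * N)) * np.fft.fft(ut)).real

# Cross-check against the direct definition (5.87)
u = np.arange(8, dtype=float)
N = len(u)
n = np.arange(N)
Cmat = np.array([np.cos(np.pi * (2 * n + 1) * kk / (2 * N)) for kk in range(N)])
alpha = np.full(N, np.sqrt(2.0 / N)); alpha[0] = np.sqrt(1.0 / N)
v_direct = alpha * (Cmat @ u)
assert np.allclose(dct_via_fft(u), v_direct)
```

This is the standard real-input trick; using the FFT it brings the DCT cost down to O(N log₂ N).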

[Figure 5.11 (a) Cosine transform examples of monochrome images; (b) cosine transform examples of binary images.]

3. The cosine transform is a fast transform. The cosine transform of a vector of N elements can be calculated in O(N log₂ N) operations via an N-point FFT [9]. To show this we define a new sequence ũ(n) by reordering the even and odd elements of u(n) as

    ũ(n) = u(2n),   ũ(N − n − 1) = u(2n + 1),   0 ≤ n ≤ (N/2) − 1    (5.91)

Now, we split the summation term in (5.87) into even and odd terms and use (5.91) to obtain

    v(k) = α(k) { Σ_{n=0}^{(N/2)−1} u(2n) cos[π(4n + 1)k / 2N] + Σ_{n=0}^{(N/2)−1} u(2n + 1) cos[π(4n + 3)k / 2N] }
         = α(k) { Σ_{n=0}^{(N/2)−1} ũ(n) cos[π(4n + 1)k / 2N] + Σ_{n=0}^{(N/2)−1} ũ(N − n − 1) cos[π(4n + 3)k / 2N] }

Changing the index of summation in the second term to n' = N − n − 1 and combining terms, we obtain

    v(k) = α(k) Σ_{n=0}^{N−1} ũ(n) cos[π(4n + 1)k / 2N]    (5.92)

         = Re[ α(k) e^{−jπk/2N} Σ_{n=0}^{N−1} ũ(n) e^{−j2πkn/N} ] = Re[ α(k) e^{−jπk/2N} DFT{ũ(n)}_N ]

which proves the previously stated result. For the inverse cosine transform we write (5.89) for the even data points as

    u(2n) = Re[ Σ_{k=0}^{N−1} α(k) v(k) e^{jπk/2N} e^{j2πkn/N} ],   0 ≤ n ≤ (N/2) − 1    (5.93)

The odd data points are obtained by noting that

    u(2n + 1) = ũ(N − n − 1),   0 ≤ n ≤ (N/2) − 1    (5.94)

Therefore, if we calculate the N-point inverse FFT of the sequence α(k)v(k) exp(jπk/2N), we can also obtain the inverse DCT in O(N log N) operations. Direct algorithms that do not require the FFT as an intermediate step, so that complex arithmetic is avoided, are also possible [8]. The computational complexity of the direct as well as the FFT-based methods is about the same.

4. The cosine transform has excellent energy compaction for highly correlated data. This is due to the following properties.

5. The basis vectors of the cosine transform (that is, the rows of C) are the eigenvectors of the symmetric tridiagonal matrix Q_c, defined as

    Q_c = [ 1−α   −α    0   ···   0
             −α    1   −α   ···   0
              ⋮          ⋱         ⋮
              0   ···  −α    1   −α
              0   ···   0   −α   1−α ]    (5.95)

The proof is left as an exercise. The N x N cosine transform is very close to the KL transform of a first-order stationary Markov sequence of length N whose covariance matrix R is given by (2.68) when the correlation parameter ρ is close to 1. The reason is that R⁻¹ is a symmetric tridiagonal matrix, which for a scalar β² ≜ (1 − ρ²)/(1 + ρ²) and α ≜ ρ/(1 + ρ²) satisfies the relation

    β²R⁻¹ = [ 1−ρα   −α    0   ···   0
               −α     1   −α   ···   0
                ⋮          ⋱          ⋮
                0    ···  −α   1−ρα ]    (5.96)

This gives the approximation

    β²R⁻¹ ≈ Q_c   for ρ ≈ 1    (5.97)

Hence the eigenvectors of R and the eigenvectors of Q_c, that is, the cosine transform, will be quite close. These aspects are considered in greater depth in Section 5.12 on sinusoidal transforms. This property of the cosine transform, together with the fact that it is a fast transform, has made it a useful substitute for the KL transform of highly correlated first-order Markov sequences.

5.7 THE SINE TRANSFORM

The N x N sine transform matrix Ψ = {ψ(k, n)}, also called the discrete sine transform (DST), is defined as

    ψ(k, n) = √(2/(N + 1)) sin[ π(k + 1)(n + 1) / (N + 1) ],   0 ≤ k, n ≤ N − 1    (5.98)

The sine transform pair of one-dimensional sequences is defined as

    v(k) = √(2/(N + 1)) Σ_{n=0}^{N−1} u(n) sin[ π(k + 1)(n + 1) / (N + 1) ],   0 ≤ k ≤ N − 1    (5.99)
    u(n) = √(2/(N + 1)) Σ_{k=0}^{N−1} v(k) sin[ π(k + 1)(n + 1) / (N + 1) ],   0 ≤ n ≤ N − 1    (5.100)

The two-dimensional sine transform pair for N x N images is obtained by substituting A = A* = Aᵀ = Ψ in (5.11) and (5.12). The basis vectors and the basis images of the sine transform are shown in Figs. 5.1 and 5.2. Figure 5.12 shows the sine transform of an image. Once again it is seen that a large fraction of the total energy is concentrated in a few transform coefficients.

Properties of the Sine Transform

1. The sine transform is real, symmetric, and orthogonal, that is,

    Ψ = Ψ* = Ψᵀ = Ψ⁻¹    (5.101)

Thus, the forward and inverse sine transforms are identical.

[Figure 5.12 Sine transform of a 255 x 255 portion of the 256 x 256 image shown in Fig. 5.6a.]
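Property 1, that Ψ is its own inverse, follows directly from (5.98); a NumPy sketch (NumPy assumed) that builds Ψ and verifies it:

```python
import numpy as np

# The DST matrix of eq. (5.98), with a check of property (5.101):
# Psi is real, symmetric, and orthogonal, so it is its own inverse.

N = 7
k = np.arange(N)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(
    np.pi * np.outer(k + 1, k + 1) / (N + 1))

assert np.allclose(Psi, Psi.T)            # symmetric
assert np.allclose(Psi @ Psi, np.eye(N))  # orthogonal: forward = inverse

u = np.arange(N, dtype=float)
v = Psi @ u        # forward sine transform, eq. (5.99)
u_rec = Psi @ v    # applying Psi again recovers u, eq. (5.100)
assert np.allclose(u_rec, u)
```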

2. The sine transform is not the imaginary part of the unitary DFT. However, the sine transform of a sequence is related to the DFT of its antisymmetric extension (see Problem 5.16).

3. The sine transform is a fast transform. The sine transform (or its inverse) of a vector of N elements can be calculated in O(N log₂ N) operations via a 2(N + 1)-point FFT. Typically this requires N = 2^p − 1, that is, the fast sine transform is usually defined for N = 3, 7, 15, 31, 63, 255, .... Fast sine transform algorithms that do not require complex arithmetic (or the FFT) are also possible. In fact, these algorithms are somewhat faster than the FFT and the fast cosine transform algorithms [20].

4. The basis vectors of the sine transform are the eigenvectors of the symmetric tridiagonal Toeplitz matrix

    Q = [  1   −α    0   ···   0
          −α    1   −α   ···   0
           ⋮          ⋱          ⋮
           0   ···  −α    1 ]    (5.102)

5. The sine transform is close to the KL transform of first-order stationary Markov sequences, whose covariance matrix is given in (2.68), when the correlation parameter ρ lies in the interval (−0.5, 0.5). In general it has very good to excellent energy compaction properties for images.

6. The sine transform leads to a fast KL transform algorithm for Markov sequences whose boundary values are given. This makes it useful in many image processing problems. Details are considered in greater depth in Chapter 6 (Sections 6.5 and 6.9) and Chapter 11 (Section 11.5).

5.8 THE HADAMARD TRANSFORM

Unlike the previously discussed transforms, the elements of the basis vectors of the Hadamard transform take only the binary values ±1 and are, therefore, well suited for digital signal processing. The Hadamard transform matrices H_n are N x N matrices, where N = 2ⁿ, n = 1, 2, 3, .... These can be easily generated by the core matrix

    H₁ = (1/√2) [1  1; 1  −1]    (5.103)

and the Kronecker product recursion

    H_n = H₁ ⊗ H_{n−1} = H_{n−1} ⊗ H₁    (5.104)

As an example, for n = 3, the Hadamard matrix becomes

    H₃ = H₁ ⊗ H₂,   H₂ = H₁ ⊗ H₁    (5.105)-(5.106)

which gives

                 [ 1  1  1  1  1  1  1  1 ]   Sequency 0
                 [ 1 -1  1 -1  1 -1  1 -1 ]            7
                 [ 1  1 -1 -1  1  1 -1 -1 ]            3
    H₃ = (1/√8)  [ 1 -1 -1  1  1 -1 -1  1 ]            4    (5.107)
                 [ 1  1  1  1 -1 -1 -1 -1 ]            1
                 [ 1 -1  1 -1 -1  1 -1  1 ]            6
                 [ 1  1 -1 -1 -1 -1  1  1 ]            2
                 [ 1 -1 -1  1 -1  1  1 -1 ]            5

The basis vectors of the Hadamard transform can also be generated by sampling a class of functions called the Walsh functions. These functions also take only the binary values ±1 and form a complete orthonormal basis for square integrable functions. For this reason the Hadamard transform just defined is also called the Walsh-Hadamard transform. The number of zero crossings of a Walsh function, or the number of sign transitions in a basis vector of the Hadamard transform, is called its sequency. Recall that for sinusoidal signals, frequency can be defined in terms of the zero crossings. In the Hadamard matrix generated via (5.104), the row vectors are not sequency ordered. The existing order of these vectors is called the Hadamard order.

The Hadamard transform of an N × 1 vector u is written as

    v = Hu    (5.108)

and the inverse transform is given by

    u = Hv    (5.109)

where H = Hₙ, n = log₂ N. In series form the transform pair becomes

    v(k) = (1/√N) Σ_{m=0}^{N−1} u(m)(−1)^{b(k,m)},  0 ≤ k ≤ N−1    (5.110)

    u(m) = (1/√N) Σ_{k=0}^{N−1} v(k)(−1)^{b(k,m)},  0 ≤ m ≤ N−1    (5.111)

where

    b(k, m) = Σ_{i=0}^{n−1} kᵢmᵢ    (5.112)

and {kᵢ}, {mᵢ} are the binary representations of k and m, respectively, that is,

    k = k₀ + 2k₁ + ... + 2ⁿ⁻¹kₙ₋₁
    m = m₀ + 2m₁ + ... + 2ⁿ⁻¹mₙ₋₁    (5.113)

The two-dimensional Hadamard transform pair for N × N images is obtained by substituting A = A* = Aᵀ = H in (5.11) and (5.12). The basis vectors and the basis images of the Hadamard transform are shown in Figs. 5.1 and 5.2.
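The Kronecker recursion (5.103)-(5.104) can be sketched directly in code; this is an illustrative construction assuming NumPy, with `hadamard` as a hypothetical helper name.

```python
import numpy as np

def hadamard(n):
    # Kronecker product recursion (5.104): H_n = H_{n-1} (x) H_1,
    # starting from the 2 x 2 core matrix (5.103); N = 2**n.
    core = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H = core.copy()
    for _ in range(n - 1):
        H = np.kron(H, core)
    return H

H3 = hadamard(3)            # the 8 x 8 matrix of (5.107)
v = H3 @ np.ones(8)         # transform of a constant vector
```

All the energy of a constant vector lands in the sequency-0 coefficient, a first glimpse of the energy compaction discussed below.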

Figure 5.13  Examples of Hadamard transforms: (a) Hadamard transforms of monochrome images; (b) Hadamard transforms of binary images.

Examples of two-dimensional Hadamard transforms of images are shown in Fig. 5.13.

Properties of the Hadamard Transform

1. The Hadamard transform H is real, symmetric, and orthogonal, that is,

    H = H* = Hᵀ = H⁻¹    (5.114)

2. The Hadamard transform is a fast transform. The one-dimensional transformation of (5.108) can be implemented in O(N log₂ N) additions and subtractions. Since the Hadamard transform contains only ±1 values, no multiplications are required in the transform calculations. Moreover, the number of additions or subtractions required can be reduced from N² to about N log₂ N. This is due to the fact that Hₙ can be written as a product of n sparse matrices, that is,

    Hₙ = H̃ⁿ,  n = log₂ N    (5.115)

where

                 [ 1  1  0  0  ...       ]  }
                 [ 0  0  1  1  ...       ]  } N/2 rows
    H̃ = (1/√2)  [         ...           ]  }
                 [ 1 -1  0  0  ...       ]  }
                 [ 0  0  1 -1  ...       ]  } N/2 rows
                 [         ...           ]  }    (5.116)

Each row of H̃ has only two nonzero entries, which pair adjacent elements of the vector it operates on.

Since H̃ contains only two nonzero terms per row, the transformation

    v = Hₙu = H̃H̃ ... H̃u  (n terms),  n = log₂ N    (5.117)

can be accomplished by operating H̃ n times on u. Due to the structure of H̃, only N additions or subtractions are required each time H̃ operates on a vector, giving a total of Nn = N log₂ N additions or subtractions.

3. The natural order of the Hadamard transform coefficients turns out to be equal to the bit-reversed gray code representation of its sequency. If the sequency s has the binary representation bₙbₙ₋₁ ... b₁ and the corresponding gray code is gₙgₙ₋₁ ... g₁, then the bit-reversed representation g₁g₂ ... gₙ gives the natural order h. Table 5.1 shows the conversion of sequency s to natural order h, and vice versa, for N = 8. In general,

    gₖ = bₖ ⊕ bₖ₊₁,  k = 1, ..., n−1;  gₙ = bₙ;  hₖ = gₙ₋ₖ₊₁    (5.118)

    bₖ = gₖ ⊕ bₖ₊₁,  k = n−1, ..., 1;  bₙ = gₙ    (5.119)

give the forward and reverse conversion formulas for the sequency and natural ordering.

4. The Hadamard transform has good to very good energy compaction for highly correlated images.

TABLE 5.1  Natural Ordering versus Sequency Ordering of Hadamard Transform Coefficients for N = 8

    Natural order h | Binary h₃h₂h₁ | Gray code of s (reverse binary of h) | Sequency s | Binary b₃b₂b₁
          0         |     000       |               000                    |     0      |     000
          1         |     001       |               100                    |     7      |     111
          2         |     010       |               010                    |     3      |     011
          3         |     011       |               110                    |     4      |     100
          4         |     100       |               001                    |     1      |     001
          5         |     101       |               101                    |     6      |     110
          6         |     110       |               011                    |     2      |     010
          7         |     111       |               111                    |     5      |     101
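The conversion formulas (5.118)-(5.119) amount to a bit reversal followed by a gray-code decode. A minimal sketch, assuming NumPy; the two function names are illustrative. It cross-checks the formula against a direct count of sign changes in the Hadamard rows.

```python
import numpy as np

def natural_to_sequency(h, n):
    # Bit-reverse the natural order h (n bits) to obtain the gray code of s,
    # then decode the gray code to get the sequency s, per (5.118)-(5.119).
    g = int(format(h, f'0{n}b')[::-1], 2)   # bit reversal
    s = 0
    while g:                                # gray-to-binary decode
        s ^= g
        g >>= 1
    return s

def sequency_of_row(row):
    # Sequency = number of sign transitions along the basis vector.
    return int(np.sum(row[:-1] * row[1:] < 0))

H1 = np.array([[1.0, 1.0], [1.0, -1.0]])
H3 = np.kron(np.kron(H1, H1), H1)           # unnormalized 8 x 8 Hadamard matrix
```

Running both over all eight rows reproduces the sequency column of Table 5.1.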

Let {u(n), 0 ≤ n ≤ N−1} be a stationary random sequence with autocorrelation r(n), 0 ≤ n ≤ N−1. The fraction of the expected energy packed in the first N/2^j sequency-ordered Hadamard transform coefficients is given by [23]

    Ec(N/2^j) = [Σ_{k=0}^{N/2^j − 1} D′ₖ] / [Σ_{k=0}^{N−1} Dₖ]    (5.120)

    Dₖ = [HRH]ₖ,ₖ,  R = {r(m − n)}    (5.121)

where the D′ₖ are the first N/2^j sequency-ordered elements Dₖ. Note that the Dₖ are simply the mean square values of the transform coefficients [Hu]ₖ. The significance of this result is that Ec(N/2^j) depends on the first 2^j autocorrelations only. For j = 1, the fractional energy packed in the first N/2 sequency-ordered coefficients is (1 + r(1)/r(0))/2 and depends only upon the one-step correlation ρ = r(1)/r(0). Thus for ρ = 0.95, 97.5% of the total energy is concentrated in half of the transform coefficients. The result of (5.120) is useful in calculating the energy compaction efficiency of the Hadamard transform.

Example 5.13

Consider the covariance matrix R of (2.68) for N = 4. Using the definition of H₂ we obtain

    D = diagonal[H₂RH₂]
      = (1/4){4 + 6ρ + 4ρ² + 2ρ³,  4 − 6ρ + 4ρ² − 2ρ³,  4 + 2ρ − 4ρ² − 2ρ³,  4 − 2ρ − 4ρ² + 2ρ³}

with row sequencies 0, 3, 1, 2. This gives D′₀ = D₀, D′₁ = D₂, D′₂ = D₃, D′₃ = D₁, and

    Ec(2) = (D₀ + D₂) / Σₖ Dₖ = (1/16)(4 + 6ρ + 4ρ² + 2ρ³ + 4 + 2ρ − 4ρ² − 2ρ³) = ½(1 + ρ)

as expected according to (5.120).

5.9 THE HAAR TRANSFORM

The Haar functions hₖ(x) are defined on a continuous interval, x ∈ [0, 1], and for k = 0, ..., N−1, where N = 2ⁿ. The integer k can be uniquely decomposed as

    k = 2^p + q − 1    (5.122)

where 0 ≤ p ≤ n−1; q = 0 or 1 for p = 0; and 1 ≤ q ≤ 2^p for p ≠ 0. For example, when N = 4, we have

    k:  0  1  2  3
    p:  0  0  1  1
    q:  0  1  1  2

Representing k by (p, q), the Haar functions are defined, for x ∈ [0, 1], as

    h₀(x) = h₀,₀(x) = 1/√N    (5.123a)

                               {  2^{p/2},   (q−1)/2^p ≤ x < (q−½)/2^p
    hₖ(x) = h_{p,q}(x) = (1/√N){ −2^{p/2},   (q−½)/2^p ≤ x < q/2^p         (5.123b)
                               {  0,         otherwise

The Haar transform is obtained by letting x take the discrete values m/N, m = 0, 1, ..., N−1. For N = 8, the Haar transform is given by

                  [  1   1   1   1   1   1   1   1 ]
                  [  1   1   1   1  -1  -1  -1  -1 ]
                  [ √2  √2  -√2 -√2   0   0   0   0 ]
    Hr = (1/√8)   [  0   0   0   0  √2  √2  -√2 -√2 ]    (5.124)
                  [  2  -2   0   0   0   0   0   0 ]
                  [  0   0   2  -2   0   0   0   0 ]
                  [  0   0   0   0   2  -2   0   0 ]
                  [  0   0   0   0   0   0   2  -2 ]

The basis vectors and the basis images of the Haar transform are shown in Figs. 5.1 and 5.2. An example of the Haar transform of an image is shown in Fig. 5.14. From the structure of Hr [see (5.124)] we see that the Haar transform takes differences of the samples or differences of local averages of the samples of the input vector. Hence the two-dimensional Haar transform coefficients v(k, l), except for k = l = 0, are the differences along rows and columns of the local averages of pixels in the image. These are manifested as several "edge extractions" of the original image, as is evident from Fig. 5.14. Although some work has been done on using the Haar transform in image data compression problems, its full potential in feature extraction and image analysis problems has not been determined.
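The sampled construction of (5.122)-(5.124) can be sketched as follows, assuming NumPy; `haar_matrix` is an illustrative name, not from the text.

```python
import numpy as np

def haar_matrix(N):
    # Build the N x N Haar matrix (N = 2**n) by sampling the Haar
    # functions (5.123) at x = m/N, with the k = 2**p + q - 1 indexing.
    n = int(np.log2(N))
    x = np.arange(N) / N
    Hr = np.zeros((N, N))
    Hr[0, :] = 1.0                              # (5.123a), before 1/sqrt(N) scaling
    k = 1
    for p in range(n):
        for q in range(1, 2 ** p + 1):
            amp = 2.0 ** (p / 2.0)
            lo, mid, hi = (q - 1) / 2 ** p, (q - 0.5) / 2 ** p, q / 2 ** p
            Hr[k, (x >= lo) & (x < mid)] = amp   # positive half of the pulse
            Hr[k, (x >= mid) & (x < hi)] = -amp  # negative half
            k += 1
    return Hr / np.sqrt(N)

Hr8 = haar_matrix(8)    # reproduces the matrix of (5.124)
```

Each row beyond the first is a localized difference operator, which is why the 2-D Haar coefficients act as edge extractors.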

Figure 5.14  Haar transform of the 256 × 256 image shown in Fig. 5.6a.

Figure 5.15  Slant transform of the 256 × 256 image shown in Fig. 5.6a.

Properties of the Haar Transform

1. The Haar transform is real and orthogonal. Therefore,

    Hr = Hr*,  Hr⁻¹ = Hrᵀ    (5.125)

2. The Haar transform is a very fast transform. On an N × 1 vector it can be implemented in O(N) operations.

3. The basis vectors of the Haar matrix are sequency ordered.

4. The Haar transform has poor energy compaction for images.

5.10 THE SLANT TRANSFORM

The N × N Slant transform matrices are defined by the recursion

                  [  1    0    0ᵀ  |   1    0    0ᵀ  ]
                  [  aₙ   bₙ   0ᵀ  |  -aₙ   bₙ   0ᵀ  ]
                  [  0    0    I   |   0    0    I   ]  [ Sₙ₋₁    0   ]
    Sₙ = (1/√2)   [  0    1    0ᵀ  |   0   -1    0ᵀ  ]  [             ]    (5.126)
                  [ -bₙ   aₙ   0ᵀ  |   bₙ   aₙ   0ᵀ  ]  [   0    Sₙ₋₁ ]
                  [  0    0    I   |   0    0   -I   ]

where each I denotes the identity matrix I₍N/2₎₋₂.

where N = 2ⁿ, I_M denotes an M × M identity matrix, and

    S₁ = (1/√2) [ 1   1 ]
                [ 1  -1 ]    (5.127)

The parameters aₙ and bₙ are defined by the recursions

    b_{n+1} = (1 + 4aₙ²)^{−1/2},  a_{n+1} = 2bₙ₊₁aₙ,  a₁ = 1    (5.128)

which solve to give

    a_{n+1} = [3N²/(4N² − 1)]^{1/2},  b_{n+1} = [(N² − 1)/(4N² − 1)]^{1/2},  N = 2ⁿ    (5.129)

Using these formulas, the 4 × 4 Slant transformation matrix is obtained as

                [   1      1      1      1   ]   Sequency 0
    S₂ = (1/2)  [ 3/√5   1/√5  -1/√5  -3/√5  ]            1    (5.130)
                [   1     -1     -1      1   ]            2
                [ 1/√5  -3/√5   3/√5  -1/√5  ]            3

Figure 5.1 shows the basis vectors of the 8 × 8 Slant transform. Figure 5.2 shows the basis images of the 8 × 8 two-dimensional Slant transform. Figure 5.15 shows the Slant transform of a 256 × 256 image.

Properties of the Slant Transform

1. The Slant transform is real and orthogonal. Therefore,

    S = S*,  S⁻¹ = Sᵀ    (5.131)

2. The Slant transform is a fast transform, which can be implemented in O(N log₂ N) operations on an N × 1 vector.

3. It has very good to excellent energy compaction for images.

4. The basis vectors of the Slant transform matrix S are not sequency ordered for n ≥ 3. If Sₙ₋₁ is sequency ordered, the ith row sequency of Sₙ is given as follows: for i = 0 the sequency is 0; for i = 1 the sequency is 1; for 2 ≤ i ≤ N/2 − 1 the sequency is 2i for i even and 2i + 1 for i odd; for i = N/2 the sequency is 2; for i = N/2 + 1 the sequency is 3; and for N/2 + 2 ≤ i ≤ N − 1 the sequency is 2(i − N/2) + 1 for i even and 2(i − N/2) for i odd.
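One step of the recursion (5.126)-(5.129) can be written out explicitly for the 4 × 4 case; a minimal sketch assuming NumPy. For N = 4 the identity blocks in (5.126) are empty, so the butterfly matrix has only the four structural rows.

```python
import numpy as np

Np = 2                                           # N = 2**n at the previous level (n = 1)
a2 = np.sqrt(3 * Np**2 / (4.0 * Np**2 - 1))      # (5.129): a_2 = 2/sqrt(5)
b2 = np.sqrt((Np**2 - 1) / (4.0 * Np**2 - 1))    # (5.129): b_2 = 1/sqrt(5)
S1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # (5.127)

# The butterfly factor of (5.126) for N = 4 (identity blocks are empty):
M = np.array([[1.0,  0.0,  1.0,  0.0],
              [a2,   b2,  -a2,   b2 ],
              [0.0,  1.0,  0.0, -1.0],
              [-b2,  a2,   b2,   a2 ]]) / np.sqrt(2.0)

S2 = M @ np.kron(np.eye(2), S1)                  # reproduces (5.130)
```

The recursion check b₂ = (1 + 4a₁²)^{−1/2} = 1/√5 and a₂ = 2b₂a₁ = 2/√5 agrees with the closed form (5.129).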

5.11 THE KL TRANSFORM

The KL transform was originally introduced as a series expansion for continuous random processes by Karhunen [27] and Loeve [28]. For random sequences, Hotelling [26] first studied what was called a method of principal components, which is the discrete equivalent of the KL series expansion. Consequently, the KL transform is also called the Hotelling transform or the method of principal components.

For a real N × 1 random vector u, the basis vectors of the KL transform (see Section 2.9) are given by the orthonormalized eigenvectors of its autocorrelation matrix R, that is,

    Rφₖ = λₖφₖ,  0 ≤ k ≤ N−1    (5.132)

The KL transform of u is defined as

    v = Φ*ᵀu    (5.133)

and the inverse transform is

    u = Φv = Σ_{k=0}^{N−1} v(k)φₖ    (5.134)

where φₖ is the kth column of Φ. From (2.44) we know Φ*ᵀ reduces R to its diagonal form, that is,

    Φ*ᵀRΦ = Λ = diag{λₖ}    (5.135)

We often work with the covariance matrix rather than the autocorrelation matrix. With µ = E[u], then

    R₀ = cov[u] = E[(u − µ)(u − µ)ᵀ] = E[uuᵀ] − µµᵀ = R − µµᵀ    (5.136)

If the vector µ is known, then the eigenmatrix of R₀ determines the KL transform of the zero mean random process u − µ. In general, the KL transforms of u and u − µ need not be identical. Note that whereas the image transforms considered earlier were functionally independent of the data, the KL transform depends on the (second-order) statistics of the data.

Example 5.14 (KL Transform of Markov Sequences)

The covariance matrix of a zero mean Markov sequence of N elements is given by (2.68). Its eigenvalues λₖ and eigenvectors φₖ are given by

    λₖ = (1 − ρ²) / (1 − 2ρ cos ωₖ + ρ²)

    φₖ(m) = φ(m, k) = [2/(N + λₖ)]^{1/2} sin[ωₖ(m + 1 − (N+1)/2) + (k+1)π/2],  0 ≤ m, k ≤ N−1    (5.137)

where the {ωₖ} are the positive roots of the equation

    tan(Nω) = −(1 − ρ²) sin ω / [(1 + ρ²) cos ω − 2ρ],  N even    (5.138)

A similar result holds when N is odd. This is a transcendental equation that gives rise to nonharmonic sinusoids φₖ(m). Figure 5.1 shows the basis vectors of this 8 × 8 KL transform for ρ = 0.95. Note that the basis vectors of the KLT and the DCT are quite similar. Because the φₖ(m) are nonharmonic, a fast algorithm for this transform does not exist. Also note that the KL transform matrix is Φᵀ = {φ(k, m)}.

Example 5.15

Since the unitary DFT reduces any circulant matrix to a diagonal form, it is the KL transform of all random sequences with circulant autocorrelation matrices, that is, for all periodic random sequences. The DCT is the KL transform of a random sequence whose autocorrelation matrix R commutes with Qc of (5.95) (that is, if RQc = QcR). Similarly, the DST is the KL transform of all random sequences whose autocorrelation matrices commute with Q of (5.102).

KL Transform of Images

If an N × N image u(m, n) is represented by a random field whose autocorrelation function is given by

    E[u(m, n)u(m', n')] = r(m, n; m', n'),  0 ≤ m, m', n, n' ≤ N−1    (5.139)

then the basis images of the KL transform are the orthonormalized eigenfunctions ψₖ,ₗ(m, n) obtained by solving

    Σ_{m'=0}^{N−1} Σ_{n'=0}^{N−1} r(m, n; m', n') ψₖ,ₗ(m', n') = λₖ,ₗ ψₖ,ₗ(m, n),  0 ≤ k, l ≤ N−1,  0 ≤ m, n ≤ N−1    (5.140)

In matrix notation this can be written as

    Rψᵢ = λᵢψᵢ,  i = 0, ..., N² − 1    (5.141)

where ψᵢ is an N² × 1 vector representation of ψₖ,ₗ(m, n) and R is the N² × N² autocorrelation matrix of the image mapped into an N² × 1 vector u. If R is separable, then the N² × N² matrix Ψ whose columns are the {ψᵢ} becomes separable (see Table 2.7),

    Ψ = Φ₁ ⊗ Φ₂    (5.142)

For example, let

    r(m, n; m', n') = r₁(m, m') r₂(n, n')    (5.143)

Then

    ψₖ,ₗ(m, n) = φ₁(m, k) φ₂(n, l)    (5.144)

In matrix notation this means

    Ψ = Φ₁ ⊗ Φ₂    (5.145)

where

    RⱼΦⱼ = ΦⱼΛⱼ,  j = 1, 2    (5.146)

and the KL transform of u is

    v = Ψ*ᵀu = [Φ₁ ⊗ Φ₂]*ᵀu    (5.147)

For row-ordered vectors this is equivalent to

    V = Φ₁*ᵀ U Φ₂*    (5.148)

and the inverse KL transform is

    U = Φ₁ V Φ₂ᵀ    (5.149)

The advantage in modeling the image autocorrelation by a separable function is that instead of solving the N² × N² matrix eigenvalue problem of (5.141), only two N × N matrix eigenvalue problems of (5.146) need to be solved. Since an N × N matrix eigenvalue problem requires O(N³) computations, the reduction in dimensionality achieved by the separable model is O(N⁶)/O(N³) = O(N³), which is very significant. Also, the transformation calculations of (5.148) and (5.149) require 2N³ operations, compared to the N⁴ operations required for Ψ*ᵀu.

Example 5.16

Consider the separable covariance function for a zero mean random field

    r(m, n; m', n') = ρ^{|m − m'|} ρ^{|n − n'|}    (5.150)

This gives R = R₁ ⊗ R₁, where R₁ is given by (2.68). The eigenvectors of R₁ are given by the φₖ in Example 5.14. Hence Ψ = Φ ⊗ Φ, and the KL transform matrix is Φᵀ ⊗ Φᵀ. Figure 5.2 shows the basis images of this 8 × 8 two-dimensional KL transform for ρ = 0.95.

Properties of the KL Transform

The KL transform has many desirable properties, which make it optimal in many signal processing applications. Some of these properties are discussed here. For simplicity we assume u has zero mean and a positive definite covariance matrix R.

Decorrelation. The KL transform coefficients {v(k), k = 0, ..., N−1} are uncorrelated and have zero mean, that is,

    E[v(k)] = 0,  E[v(k)v*(l)] = λₖδ(k − l)    (5.151)

The proof follows directly from (5.133) and (5.135), since

    E[vv*ᵀ] = Φ*ᵀ E[uu*ᵀ] Φ = Φ*ᵀRΦ = Λ = diagonal    (5.152)

which implies the latter relation in (5.151).
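The separable machinery of (5.146)-(5.149) is easy to exercise numerically. A minimal sketch assuming NumPy: two small eigenproblems stand in for the N² × N² one, and the Kronecker identity behind (5.145) is verified directly.

```python
import numpy as np

N, rho = 8, 0.95
R1 = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # Markov covariance (2.68)
lam1, Phi1 = np.linalg.eigh(R1)          # one N x N eigenproblem, as in (5.146)

# 2-D KL transform of an image via (5.148)-(5.149); no N^2 x N^2
# eigenproblem is solved -- the basis images are outer products.
U = np.random.default_rng(2).standard_normal((N, N))
V = Phi1.T @ U @ Phi1                    # forward transform, (5.148)
U_rec = Phi1 @ V @ Phi1.T                # inverse transform, (5.149)
```

The transform pair costs 2N³ multiplications versus N⁴ for the stacked-vector form, exactly the saving claimed in the text.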

It should be noted that Φ is not a unique matrix with respect to this property. There could be many matrices (unitary and nonunitary) that would decorrelate the transformed sequence. For example, a lower triangular matrix could be found that satisfies (5.152).

Example 5.17

The covariance matrix R of (2.68) for N = 2 is diagonalized by the lower triangular matrix

    L = [ 1   0 ],   LᵀRL = [ 1 − ρ²   0 ] = D    (5.153)
        [ -ρ  1 ]           [   0      1 ]

Hence the transformation v = Lᵀu will cause the sequence v(k) to be uncorrelated. Comparing with Example 5.14, we see that L ≠ Φ. Moreover, L is not unitary, and the diagonal elements of D are not the eigenvalues of R.

Basis restriction mean square error. Consider the operations in Fig. 5.16. The vector u is first transformed to v. The elements of w are chosen to be the first m elements of v and zeros elsewhere. Finally, w is transformed to z. A and B are N × N matrices and Iₘ is a matrix with 1s along the first m diagonal terms and zeros elsewhere. Hence

    w(k) = { v(k),  0 ≤ k ≤ m−1
           { 0,     m ≤ k ≤ N−1    (5.154)

Therefore, whereas u and v are vectors in an N-dimensional vector space, w is a vector restricted to an m ≤ N-dimensional subspace. The average mean square error between the sequences u(n) and z(n) is defined as

    Jₘ = (1/N) E[ Σ_{n=0}^{N−1} |u(n) − z(n)|² ] = (1/N) Tr[E{(u − z)(u − z)*ᵀ}]    (5.155)

This quantity is called the basis restriction error. It is desired to find the matrices A and B such that Jₘ is minimized for each and every value of m ∈ [1, N]. This minimum is achieved by the KL transform of u.

Theorem 5.1. The error Jₘ in (5.155) is minimum when

    A = Φ*ᵀ,  B = Φ,  AB = I    (5.156)

where the columns of Φ are arranged according to the decreasing order of the eigenvalues of R.

Proof. From Fig. 5.16, we have

    v = Au,  w = Iₘv,  z = Bw    (5.157)

Figure 5.16  KL transform basis restriction (A and B are N × N).

Using these we can rewrite (5.155) as

    Jₘ = (1/N) Tr[(I − BIₘA) R (I − BIₘA)*ᵀ]    (5.158)

To minimize Jₘ we first differentiate it with respect to the elements of A and set the result to zero [see Problem 2.15 for the differentiation rules]. This gives

    IₘB*ᵀ(I − BIₘA)R = 0    (5.159)

which yields

    IₘB*ᵀR = IₘB*ᵀBIₘAR    (5.160)

At m = N, the minimum value of J_N must be zero, which requires BA = I, or

    B = A⁻¹    (5.161)

Using this in (5.160) and rearranging terms, we obtain

    B*ᵀBIₘ = IₘB*ᵀB,  1 ≤ m ≤ N    (5.162)

For (5.162) to be true for every m, it is necessary that B*ᵀB be diagonal. Since B = A⁻¹, it is easy to see that (5.160) remains invariant if B is replaced by DB or BD, where D is a diagonal matrix. Hence, without loss of generality, we can normalize B so that B*ᵀB = I, that is, B is a unitary matrix. Therefore, A is also unitary and B = A⁻¹ = A*ᵀ. This gives

    Jₘ = (1/N){Tr(R) − Tr(IₘARA*ᵀ)}    (5.163)

Since Tr(R) is fixed, Jₘ is minimized if the quantity

    J̃ₘ = Tr(IₘARA*ᵀ) = Σ_{k=0}^{m−1} aₖᵀRaₖ*    (5.164)

is maximized, where aₖᵀ is the kth row of A. Since A is unitary,

    aₖᵀaₖ* = 1    (5.165)

To maximize J̃ₘ subject to (5.165), we form the Lagrangian

    J̃ₘ = Σ_{k=0}^{m−1} aₖᵀRaₖ* + Σ_{k=0}^{m−1} λₖ(1 − aₖᵀaₖ*)    (5.166)

and differentiate it with respect to aⱼ. The result gives a necessary condition

    Raⱼ* = λⱼaⱼ*,  0 ≤ j ≤ m−1    (5.167)

that is, the aⱼ* are orthonormalized eigenvectors of R. This yields

    J̃ₘ = Σ_{k=0}^{m−1} λₖ    (5.168)

which is maximized if {aⱼ*, 0 ≤ j ≤ m−1} correspond to the largest m eigenvalues of R. Because J̃ₘ must be maximized for every m, it is necessary to arrange λ₀ ≥ λ₁ ≥ ... ≥ λ_{N−1}. Then the rows of A are the conjugate transposes of the eigenvectors of R, that is, A = Φ*ᵀ is the KL transform of u.

Distribution of variances. Among all the unitary transformations v = Au, the KL transform Φ*ᵀ packs the maximum average energy in m ≤ N samples of v. Define

    σₖ² = E[|v(k)|²]    (5.169)

    Sₘ(A) = Σ_{k=0}^{m−1} σₖ²    (5.170)

Then for any fixed m ∈ [1, N],

    Sₘ(Φ*ᵀ) ≥ Sₘ(A)

Proof. Note that

    Sₘ(A) = Σ_{k=0}^{m−1} (ARA*ᵀ)ₖ,ₖ = Tr(IₘARA*ᵀ) = J̃ₘ

which, we know from the last property [see (5.164)], is maximized when A is the KL transform. Since σₖ² = λₖ when A = Φ*ᵀ, from (5.168)

    Sₘ(Φ*ᵀ) = Σ_{k=0}^{m−1} λₖ ≥ Sₘ(A)    (5.171)

Threshold representation. The KL transform also minimizes E[m], the expected number of transform coefficients required so that their energy just exceeds a prescribed threshold (see Problems and [33]).

A fast KL transform. In applications of the KL transform to images, there are dimensionality difficulties. The KL transform depends on the statistics as well as the size of the image and, in general, the basis vectors are not known analytically. Even after the transform matrix has been computed, the operations for performing the transformation are quite large for images.

It has been shown that certain statistical image models yield a fast KL transform algorithm as an alternative to the conventional KL transform for images. It is based on a stochastic decomposition of an image as a sum of two random sequences. The first random sequence is such that its KL transform is a fast transform, and the second sequence, called the boundary response, depends only on information at the boundary points of the image. For details see Sections 6.5 and 6.9.

The rate-distortion function. Suppose a random vector u is unitary transformed to v and transmitted over a communication channel (Fig. 5.17). Let v′ and u′ be the reproduced values of v and u, respectively.

Figure 5.17  Unitary transform data transmission. Each element of v is coded independently.

Further, assume that u and v are Gaussian. The average distortion in u is

    D = (1/N) E[(u − u′)*ᵀ(u − u′)]    (5.172)

Since A is unitary and u = A*ᵀv, u′ = A*ᵀv′, we have

    D = (1/N) E[(v − v′)*ᵀAA*ᵀ(v − v′)] = (1/N) E[(v − v′)*ᵀ(v − v′)] = (1/N) E[δv*ᵀδv]    (5.173)

where δv = v − v′ represents the error in the reproduction of v. From the preceding, D is invariant under all unitary transformations. The rate-distortion function is now obtained, following Section 2.13, as

    R = (1/N) Σ_{k=0}^{N−1} max[0, ½ log₂(σₖ²/θ)]    (5.174)

    D = (1/N) Σ_{k=0}^{N−1} min(θ, σₖ²)    (5.175)

where

    σₖ² = E[|v(k)|²] = [ARA*ᵀ]ₖ,ₖ    (5.176)

depend on the transform A. Therefore, the rate

    R = R(A)    (5.177)

also depends on A. For each fixed D, the KL transform achieves the minimum rate among all unitary transforms, that is,

    R(Φ*ᵀ) ≤ R(A)    (5.178)

This property is discussed further in Chapter 11, on transform coding.

Example 5.18

Consider a 2 × 1 vector u whose covariance matrix is

    R = [ 1  ρ ],  |ρ| < 1
        [ ρ  1 ]

The KL transform is

    Φ*ᵀ = Φᵀ = Φ = (1/√2) [ 1   1 ]
                          [ 1  -1 ]

The transformation v = Φu gives

    E{[v(0)]²} = λ₀ = 1 + ρ,  E{[v(1)]²} = λ₁ = 1 − ρ

    R(Φ) = ½ [max(0, ½ log₂((1 + ρ)/θ)) + max(0, ½ log₂((1 − ρ)/θ))]

Compare this with the case when A = I (that is, u is transmitted directly), which gives σₖ² = 1 and

    R(I) = ½ log₂(1/θ),  0 < θ < 1

Suppose we let θ be small, say θ < 1 − |ρ|. Then it is easy to show that

    R(Φ) = ½ log₂(√(1 − ρ²)/θ) < R(I)

This means that for a fixed level of distortion, the number of bits required to transmit the KLT sequence would be less than those required for transmission of the original sequence.

Figure 5.18  Distribution of variances of the transform coefficients (in decreasing order, versus index k) of a stationary Markov sequence with N = 16, ρ = 0.95 (see Example 5.19); the cosine and KL curves are nearly coincident.
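The rate comparison of Example 5.18 can be checked directly. A minimal sketch assuming NumPy; the helper name `rate` is illustrative.

```python
import numpy as np

rho, theta = 0.95, 0.01                    # choose theta < 1 - |rho|
sig2_klt = np.array([1 + rho, 1 - rho])    # KLT coefficient variances (eigenvalues of R)
sig2_id = np.array([1.0, 1.0])             # variances when u is sent directly (A = I)

def rate(sig2, theta):
    # (5.174): R = (1/N) * sum_k max(0, 0.5*log2(sigma_k^2 / theta))
    return np.mean(np.maximum(0.0, 0.5 * np.log2(sig2 / theta)))

R_klt = rate(sig2_klt, theta)
R_id = rate(sig2_id, theta)
```

With θ below both variances, the KLT rate collapses to ½ log₂(√(1−ρ²)/θ), which is strictly smaller than R(I) whenever ρ ≠ 0.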

TABLE 5.2  Variances σₖ² of the transform coefficients of a stationary Markov sequence with ρ = 0.95 and N = 16, for the KL, cosine, sine, unitary DFT, Hadamard, Haar, and Slant transforms. See Example 5.19.

Example 5.19 (Comparison Among Unitary Transforms for a Markov Sequence)

Consider a first-order zero mean stationary Markov sequence of length N whose covariance matrix R is given by (2.68) with ρ = 0.95. Figure 5.18 shows the distribution of variances σₖ² of the transform coefficients (in decreasing order) for different transforms. Table 5.2 lists σₖ² for the various transforms. Define the normalized basis restriction error as

    Jₘ = [Σ_{k=m}^{N−1} σₖ²] / [Σ_{k=0}^{N−1} σₖ²],  m = 0, 1, ..., N    (5.179)

where the σₖ² have been arranged in decreasing order. Figure 5.19 shows Jₘ versus m for the various transforms. It is seen that the cosine transform performance is indistinguishable from that of the KL transform for ρ = 0.95. In general it seems possible to find a fast sinusoidal transform (that is, a transform consisting of sine or cosine functions) as a good substitute for the KL transform for different values of ρ as well as for higher-order stationary random sequences (see Section 5.12). The mean square performance of the various transforms also depends on the dimension N of the transform. Such comparisons are made in Section 5.12.

Example 5.20 (Performance of Transforms on Images)

The mean square error test of the last example can be extended to actual images. Consider an N × N image u(m, n) from which its mean is subtracted out to make it zero mean. The transform coefficient variances are estimated as

    σ̂²(k, l) = E[|v(k, l)|²] ≈ |v(k, l)|²
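The comparison of (5.179) for the cosine and KL transforms in Example 5.19 can be reproduced numerically. A minimal sketch assuming NumPy; the DCT matrix is built from the standard orthonormal cosine kernel, and the tolerance on the closeness claim is an assumption, not a value from the text.

```python
import numpy as np

N, rho, m = 16, 0.95, 4
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))   # (2.68)

def dct_matrix(N):
    # Orthonormal DCT: c(k, n) = alpha(k) cos(pi*(2n+1)*k/(2N))
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = 1.0 / np.sqrt(N)
    return C

def Jm(A, R, m):
    # Normalized basis restriction error (5.179) for a unitary transform A.
    var = np.sort(np.diag(A @ R @ A.T))[::-1]   # coefficient variances, decreasing
    return var[m:].sum() / var.sum()

lam = np.sort(np.linalg.eigvalsh(R))[::-1]
J_dct = Jm(dct_matrix(N), R, m)
J_klt = lam[m:].sum() / lam.sum()
```

By the variance distribution property, J_klt lower-bounds Jm over all unitary transforms; for ρ = 0.95 the DCT value is nearly the same, which is the point of the example.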

Figure 5.19  Performance of different unitary transforms with respect to basis restriction error Jₘ versus the number of basis vectors retained (m), for a stationary Markov sequence with N = 16, ρ = 0.95.

Figure 5.20  Zonal filters for 2:1, 4:1, 8:1, and 16:1 sample reduction. White areas are passbands, dark areas are stopbands.

The image transform is filtered by a zonal mask (Fig. 5.20) such that only a fraction of the transform coefficients are retained and the remaining ones are set to zero. Define the normalized mean square error

    Js = [Σ_{(k,l) ∈ stopband} |v(k, l)|²] / [Σ_{k,l=0}^{N−1} |v(k, l)|²] = (energy in stopband) / (total energy)

Figure 5.21 shows an original image and the images obtained after cosine transform zonal filtering to achieve various sample reduction ratios. Figure 5.22 shows the zonal filtered images for different transforms at a 4:1 sample reduction ratio. Figure 5.23 shows the mean square error versus sample reduction ratio for different transforms. Again we find the cosine transform to have the best performance.

Figure 5.21  Basis restriction zonal filtered images in cosine transform domain: (a) original; (b) 4:1 sample reduction; (c) 8:1 sample reduction; (d) 16:1 sample reduction.

Figure 5.22  Basis restriction zonal filtering using different transforms with 4:1 sample reduction: (a) cosine; (b) sine; (c) unitary DFT; (d) Hadamard; (e) Haar; (f) Slant.

Figure 5.23  Performance comparison of different transforms with respect to basis restriction zonal filtering for 256 × 256 images (normalized mean square error versus sample reduction ratio).

5.12 A SINUSOIDAL FAMILY OF UNITARY TRANSFORMS

This is a class of complete orthonormal sets of eigenvectors generated by the parametric family of matrices J(k₁, k₂, k₃), whose structure is similar to that of Q [see (5.96)]:

                     [ 1 − k₁α   -α                  -k₃α    ]
                     [   -α       1    -α                    ]
    J(k₁, k₂, k₃) =  [            .     .     .              ]    (5.180)
                     [                 -α     1       -α     ]
                     [  -k₃α                 -α     1 − k₂α  ]

In fact, for k₁ = k₂ = ρ, k₃ = 0, β² = (1 − ρ²)/(1 + ρ²), and α = ρ/(1 + ρ²), we have

    J(ρ, ρ, 0) = β²R⁻¹    (5.181)

Since R⁻¹ and R have an identical set of eigenvectors, the KL transform associated with R can be determined from the eigenvectors of J(ρ, ρ, 0). Similarly, it

can be shown that the basis vectors of the previously discussed cosine, sine, and discrete Fourier transforms are the eigenvectors of J(1, 1, 0), J(0, 0, 0), and J(1, 1, 1), respectively. Several other fast transforms whose basis vectors are sinusoids can be generated for different combinations of k₁, k₂, and k₃. For example, for 0 ≤ m, k ≤ N−1, we obtain the following transforms:

1. Odd sine (k₁ = k₃ = 0, k₂ = 1):

    ψₘ(k) = (2/√(2N+1)) sin[(k + ½)(2m + 1)π/(N + ½)]    (5.182)

2. Odd cosine (k₁ = 1, k₂ = k₃ = 0):

    φₘ(k) = (2/√(2N+1)) cos[(2k + 1)(2m + 1)π/(2(2N + 1))]    (5.183)

Other members of this family of transforms are given in [34].

Approximation to the KL Transform

The J matrices play a useful role in performance evaluation of the sinusoidal transforms. For example, two sinusoidal transforms can be compared with the KL transform by comparing the corresponding J-matrix distances

    Δ(k₁, k₂, k₃) = ||J(k₁, k₂, k₃) − J(ρ, ρ, 0)||²    (5.184)

This measure can also explain the close performance of the DCT and the KLT. Further, it can be shown that the DCT performs better than the sine transform for 0.5 ≤ ρ ≤ 1, and the sine transform performs better than the cosine for other values of ρ. The J matrices are also useful in finding a fast sinusoidal transform approximation to the KL transform of an arbitrary random sequence whose covariance matrix is A. If A commutes with a J matrix, that is, AJ = JA, then they will have an identical set of eigenvectors. The best fast sinusoidal transform may be chosen as the one whose corresponding J matrix minimizes the commuting distance ||AJ − JA||². Other uses of the J matrices are (1) finding fast algorithms for inversion of banded Toeplitz matrices, (2) efficient calculation of transform coefficient variances, which are needed in transform domain processing algorithms, and (3) establishing certain useful asymptotic properties of these transforms. For details see [34].
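The claim that the DST basis diagonalizes J(0, 0, 0) can be verified numerically. A minimal sketch assuming NumPy; `J_matrix` is an illustrative name, and the value of α is arbitrary since the eigenvectors do not depend on it.

```python
import numpy as np

def J_matrix(N, k1, k2, k3, alpha):
    # The parametric family (5.180): tridiagonal with parameterized
    # end-diagonal and corner entries.
    J = np.eye(N) - alpha * (np.eye(N, k=1) + np.eye(N, k=-1))
    J[0, 0] -= k1 * alpha
    J[-1, -1] -= k2 * alpha
    J[0, -1] -= k3 * alpha
    J[-1, 0] -= k3 * alpha
    return J

N, alpha = 7, 0.35
J000 = J_matrix(N, 0, 0, 0, alpha)

# DST basis of (5.98); its rows should be eigenvectors of J(0,0,0).
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * (k + 1) * (n + 1) / (N + 1))
Dg = Psi @ J000 @ Psi.T          # should be diagonal
```

The kth diagonal entry comes out as 1 − 2α cos(π(k+1)/(N+1)), the tridiagonal Toeplitz eigenvalue, confirming the DST entry J(0, 0, 0) in the family.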
5.13 OUTER PRODUCT EXPANSION AND SINGULAR VALUE DECOMPOSITION

In the foregoing transform theory, we considered an N × M image U to be a vector in an NM-dimensional vector space. However, it is possible to represent any such image in an r-dimensional subspace, where r is the rank of the matrix U.

Let the image be real and M ≤ N. The matrices UUᵀ and UᵀU are nonnegative, symmetric, and have identical eigenvalues {λₘ}. Since M ≤ N, there are at most r ≤ M nonzero eigenvalues. It is possible to find r orthogonal M × 1

eigenvectors {φₘ} of UᵀU and r orthogonal N × 1 eigenvectors {ψₘ} of UUᵀ, that is,

    UᵀUφₘ = λₘφₘ,  m = 1, ..., r    (5.185)

    UUᵀψₘ = λₘψₘ,  m = 1, ..., r    (5.186)

The matrix U has the representation

    U = ΨΛ^{1/2}Φᵀ    (5.187)

      = Σ_{m=1}^{r} √λₘ ψₘφₘᵀ    (5.188)

where Ψ and Φ are N × r and M × r matrices whose mth columns are the vectors ψₘ and φₘ, respectively, and Λ is an r × r diagonal matrix, defined as

    Λ^{1/2} = diag{√λ₁, ..., √λᵣ}    (5.189)

Equation (5.188) is called the spectral representation, the outer product expansion, or the singular value decomposition (SVD) of U. The nonzero eigenvalues (of UᵀU), λₘ, are also called the singular values of U. If r < M, then the image containing NM samples can be represented by (M + N)r samples of the vectors {λₘ^{1/4}ψₘ, λₘ^{1/4}φₘ; m = 1, ..., r}. Since Ψ and Φ have orthogonal columns, from (5.187) the SVD transform of the image U is defined as

    Λ^{1/2} = ΨᵀUΦ    (5.190)

which is a separable transform that diagonalizes the given image. The proof of (5.188) is outlined in Problem 5.13.

Properties of the SVD Transform

1. Once the φₘ, m = 1, ..., r are known, the eigenvectors ψₘ can be determined as

    ψₘ = (1/√λₘ) Uφₘ,  m = 1, ..., r    (5.191)

It can be shown that the ψₘ are orthonormal eigenvectors of UUᵀ if the φₘ are the orthonormal eigenvectors of UᵀU.

2. The SVD transform as defined by (5.190) is not a unitary transform, because Ψ and Φ are rectangular matrices. However, we can include in Φ and Ψ additional orthogonal eigenvectors φₘ and ψₘ that satisfy Uφₘ = 0, m = r+1, ..., M and Uᵀψₘ = 0, m = r+1, ..., N, such that these matrices are unitary and the unitary SVD transform is

    Λ^{1/2} = ΨᵀUΦ,  with Ψ and Φ unitary    (5.192)

3. The image Uₖ generated by the partial sum

    Uₖ = Σ_{m=1}^{k} √λₘ ψₘφₘᵀ,  k ≤ r    (5.193)

is the best least squares rank-k approximation of U if the λₘ are in decreasing order of magnitude. For any k ≤ r, the least squares error

    εₖ² = Σ_{m=0}^{N−1} Σ_{n=0}^{M−1} |u(m, n) − uₖ(m, n)|²,  k = 1, 2, ..., r    (5.194)

reduces to

    εₖ² = Σ_{m=k+1}^{r} λₘ    (5.195)

Let L = NM. Note that we can always write a two-dimensional unitary transform representation as an outer product expansion in an L-dimensional space, namely,

    U = Σ_{l=1}^{L} wₗ aₗbₗᵀ    (5.196)

where the wₗ are scalars and the aₗ and bₗ are sequences of orthogonal basis vectors of dimensions N × 1 and M × 1, respectively. The least squares error between U and any partial sum

    Ûₖ = Σ_{l=1}^{k} wₗ aₗbₗᵀ    (5.197)

is minimized for any k ∈ [1, L] when the above expansion coincides with (5.193), that is, when Ûₖ = Uₖ. This means the energy concentrated in the transform coefficients l = 1, ..., k is maximized by the SVD transform for the given image. Recall that the KL transform maximizes the average energy in a given number of transform coefficients, the average being taken over the ensemble for which the autocorrelation function is defined. Hence, on an image-to-image basis, the SVD transform will concentrate more energy in the same number of coefficients. But the SVD has to be calculated for each image. On the other hand, the KL transform needs to be calculated only once for the whole image ensemble. Therefore, while one may be able to find a reasonable fast transform approximation of the KL transform, no such fast transform substitute for the SVD is expected to exist.

Although applicable in image restoration and image data compression problems, the usefulness of the SVD in such image processing problems is severely limited because of the large computational effort required for calculating the eigenvalues and eigenvectors of large image matrices. However, the SVD is a fundamental result in matrix theory that is useful in finding the generalized inverse of singular matrices and in the analysis of several image processing problems.
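The rank-k property (5.193)-(5.195) can be sketched with a library SVD routine, assuming NumPy; the 2 × 2 matrix below is an illustrative value, not the example from the text.

```python
import numpy as np

U = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # an arbitrary real image matrix

W, s, Vt = np.linalg.svd(U)                # U = W diag(s) V^T, with s_m = sqrt(lambda_m)
U1 = s[0] * np.outer(W[:, 0], Vt[0, :])    # rank-1 partial sum of (5.193)
err = np.sum((U - U1) ** 2)                # least squares error (5.194)
```

Per (5.195), the error of the rank-1 approximation equals the discarded eigenvalue λ₂ = s₁², and retaining all r terms reconstructs U exactly.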
Example 5.21

Let U be a given 2 × 2 real image.

The eigenvalues of UᵀU are found to be λ₁ = 8.06 and λ₂ = 1.94, which give r = 2, and the SVD transform of U is Λ^{1/2} = ΨᵀUΦ = diag{√λ₁, √λ₂}. The corresponding eigenvectors φ₁ and φ₂ are obtained from (5.185).

TABLE 5.3  Summary of Image Transforms

DFT/unitary DFT: Fast transform, most useful in digital signal processing, convolution, digital filtering, analysis of circulant and Toeplitz systems. Requires complex arithmetic. Has very good energy compaction for images.

Cosine: Fast transform, requires real operations, near optimal substitute for the KL transform of highly correlated images. Useful in designing transform coders and Wiener filters for images. Has excellent energy compaction for images.

Sine: About twice as fast as the fast cosine transform, symmetric, requires real operations; yields fast KL transform algorithm, which yields recursive block processing algorithms for coding, filtering, and so on; useful in estimating performance bounds of many image processing problems. Energy compaction for images is very good.

Hadamard: Faster than sinusoidal transforms, since no multiplications are required; useful in digital hardware implementations of image processing algorithms. Easy to simulate but difficult to analyze. Applications in image data compression, filtering, and design of codes. Has good energy compaction for images.

Haar: Very fast transform. Useful in feature extraction, image coding, and image analysis problems. Energy compaction is fair.

Slant: Fast transform. Has "image-like basis"; useful in image coding. Has very good energy compaction for images.

Karhunen-Loeve: Is optimal in many ways; has no fast algorithm; useful in performance evaluation and for finding performance bounds. Useful for small size vectors, e.g., color, multispectral, or other feature vectors. Has the best energy compaction in the mean square sense over an ensemble.

Fast KL: Useful for designing fast, recursive-block processing techniques, including adaptive techniques.
ts performance is better than independent blockbyblock processing techniques. Many members have fast implementation, useful in finding practical substitutes for the KL transform, analysis of Toeplitz systems, mathematical modeling of signals. Energy compaction for the optimumfast transform is excellent. Best energypacking efficiency for any given image. Varies drastically from image to image; has no fast algorithm or a reasonable fast transform substitute; useful in design of separable FR filters, finding least squares and minimum norm solutions of linear equations, finding rank of large matrices, and so on. Potential image processing applications are in image restoration, power spectrum estimation and data compression. Sec. 5.3 Outer Product Expansion and Singular Value Decomposition 79
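Several of the qualitative claims in Table 5.3 can be spot-checked numerically. A minimal sketch, assuming NumPy and SciPy are available (the 8-point size and the first-order Markov covariance with ρ = 0.95 are illustrative choices, not taken from the table):

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard, toeplitz

N = 8
n = np.arange(N)

# three of the transforms listed in Table 5.3
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT
C = dct(np.eye(N), type=2, norm='ortho', axis=0)           # unitary cosine
H = hadamard(N) / np.sqrt(N)                               # Hadamard

R = toeplitz(0.95 ** n)                       # Markov covariance, rho = 0.95
eig = np.sort(np.linalg.eigvalsh(R))[::-1]    # KL-domain variances

for A in (F, C, H):
    assert np.allclose(A @ A.conj().T, np.eye(N))          # unitarity
    var = np.sort(np.real(np.diag(A @ R @ A.conj().T)))[::-1]
    assert np.isclose(var.sum(), np.trace(R))              # energy conservation
    # the KL transform packs at least as much energy into N/2 coefficients
    assert eig[:N // 2].sum() >= var[:N // 2].sum() - 1e-9
```

The last assertion reflects the ordering implicit in the table: no unitary transform packs more average energy into a fixed number of coefficients than the KL transform of the same covariance.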

From the above, U1 is obtained via the outer product expansion to yield

    U1 = √λ1 ψ1 φ1^T

as the best least squares rank-1 approximation of U. Let us compare this with the two-dimensional cosine transform of U, which is given by

    V = C U C^T

It is easy to see that Σk Σl v²(k, l) = λ1 + λ2. The energy concentrated in the first K samples of the SVD, Σ_{m=1}^{K} λm, K = 1, 2, is greater than the energy concentrated in any K samples of the cosine transform coefficients (show!).

5.14 SUMMARY

In this chapter we have studied the theory of unitary transforms and their properties. Several unitary transforms, DFT, cosine, sine, Hadamard, Haar, Slant, KL, the sinusoidal family, fast KL, and SVD, were discussed. Table 5.3 summarizes the various transforms and their applications.

PROBLEMS

5.1 For given P, Q show that the error σe² of (5.8) is minimized when the series coefficients v(k, l) are given by (5.3). Also show that the basis images must form a complete set for σe² to be zero for P = Q = N.

5.2 (Fast transforms and Kronecker separability) From (5.23) we see that the number of operations in implementing the matrix-vector product v = Au is reduced from O(N⁴) to O(N³) if A is a Kronecker product. Apply this idea inductively to show that if A is M × M and A = A1 ⊗ A2 ⊗ ... ⊗ Am, where Ak is nk × nk and M = Π_{k=1}^{m} nk, then the transformation of (5.23) can be implemented in O(M Σ_{k=1}^{m} nk) operations, which equals nM logn M if nk = n. Many fast algorithms for unitary matrices can be given this interpretation, which was suggested by Good [9]. Transforms possessing this property are sometimes called Good transforms.

5.3 For the 2 × 2 transform A2 and the image U,

    A2 = (1/2) [ √3   1 ]
               [ -1  √3 ]

calculate the transformed image V = A2 U A2^T and the basis images.
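Computations of the Problem 5.3 kind are easy to sketch. Assuming NumPy, and taking a 30-degree rotation for A and an arbitrary 2 × 2 array for U (both illustrative choices, not necessarily the matrices of the problem):

```python
import numpy as np

# a 2x2 orthogonal transform (30-degree rotation, chosen for illustration)
s3 = np.sqrt(3.0)
A = 0.5 * np.array([[s3, 1.0],
                    [-1.0, s3]])
U = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # an arbitrary 2x2 image

V = A @ U @ A.T                       # transformed image

# basis images: outer products of the basis vectors (rows of A)
basis = {(k, l): np.outer(A[k], A[l]) for k in range(2) for l in range(2)}

# the image is recovered as a weighted sum of the basis images
U_rec = sum(V[k, l] * basis[(k, l)] for k in range(2) for l in range(2))
assert np.allclose(U_rec, U)
assert np.isclose((V ** 2).sum(), (U ** 2).sum())   # energy conservation
```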

5.4 Consider the vector x = [x0, x1]^T and an orthogonal transform

    A = [  cos θ   sin θ ]
        [ -sin θ   cos θ ]

Let a0 and a1 denote the columns of A^T (that is, the basis vectors of A). The transformation y = Ax can be written as y0 = a0^T x, y1 = a1^T x. Represent the vector x in Cartesian coordinates on a plane. Show that the transform A is a rotation of the coordinates by θ and that y0 and y1 are the projections of x in the new coordinate system (see Fig. P5.4).

Figure P5.4

5.5 Prove that the magnitude of the determinant of a unitary transform is unity. Also show that all the eigenvalues of a unitary matrix have unity magnitude.

5.6 Show that the entropy of an N × 1 Gaussian random vector u with mean μ and covariance Ru, given by H(u) = (1/2) log[(2πe)^N |Ru|], is invariant under any unitary transformation.

5.7 Consider the zero mean random vector u with covariance Ru discussed in Example 5.2. For the class of unitary transforms

    Aθ = [  cos θ   sin θ ] ,    v = Aθ u
         [ -sin θ   cos θ ]

determine the value of θ for which (a) the average energy compacted in v0 is maximum and (b) the components of v are uncorrelated.

5.8 Prove the two-dimensional energy conservation relation of (5.28).

5.9 (DFT and circulant matrices)
a. Show that (5.60) follows directly from (5.56) if φk is chosen as the kth column of the unitary DFT F. Now write (5.58) as a circulant matrix operation x2 = Hx1. Take the unitary DFT of both sides and apply (5.60) to prove the circular convolution theorem, that is, (5.59).
b. Using (5.60), show that the inverse of an N × N circulant matrix can be obtained in O(N log N) operations via the FFT by calculating the elements of its first column as

    [H^-1]_{n,0} = (1/N) Σ_{k=0}^{N-1} (1/λk) W_N^{-nk} = inverse DFT of {1/λk}
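The circulant-matrix facts used in Problem 5.9 can be sketched with NumPy/SciPy (the particular first column below is an arbitrary, well-conditioned choice):

```python
import numpy as np
from scipy.linalg import circulant

N = 8
c = np.array([4.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # first column of H
H = circulant(c)

# eigenvalues of a circulant matrix are the DFT of its first column
lam = np.fft.fft(c)
assert np.allclose(np.sort(lam.real), np.linalg.eigvalsh(H))

# circular convolution theorem: H x1 computed via the FFT
x1 = np.arange(1.0, N + 1.0)
x2 = np.real(np.fft.ifft(lam * np.fft.fft(x1)))
assert np.allclose(x2, H @ x1)

# inverse circulant in O(N log N): first column is the inverse DFT of 1/lam
col = np.real(np.fft.ifft(1.0 / lam))
assert np.allclose(circulant(col), np.linalg.inv(H))
```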

c. Show that the N × 1 vector x2 = Tx1, where T is N × N Toeplitz but not circulant, can be evaluated in O(N log N) operations via the FFT.

5.10 Show that the N² complex elements v(k, l) of the unitary DFT of a real sequence {u(m, n), 0 ≤ m, n ≤ N − 1} can be determined from the knowledge of the partial sequence

    {v(k, 0), 0 ≤ k ≤ N/2}, {v(k, N/2), 0 ≤ k ≤ N/2}, {v(k, l), 0 ≤ k ≤ N − 1, 1 ≤ l ≤ N/2 − 1},  (N even)

which contains only N² nonzero real elements, in general.

5.11 a. Find the eigenvalues of the 2 × 2 doubly block circulant matrix H.
b. Given the arrays x1(m, n) and x2(m, n), write their circular convolution x3(m, n) = x2(m, n) ⊛ x1(m, n) as a doubly block circulant matrix operating on a vector of size 6 and calculate the result. Verify your result by performing the convolution directly.

5.12 Show that if an image {u(m, n), 0 ≤ m, n ≤ N − 1} is multiplied by the checkerboard pattern (−1)^{m+n}, then its unitary DFT is centered at (N/2, N/2). If the unitary DFT of u(m, n) has its region of support as shown in Fig. P5.12, what would be the region of support of the unitary DFT of (−1)^{m+n} u(m, n)? Figure 5.16 shows the magnitudes of the unitary DFTs of an image u(m, n) and of the image (−1)^{m+n} u(m, n). This method can be used for computing the unitary DFT whose origin is at the center of the image matrix. The frequency increases as one moves away from the origin.

Figure P5.12
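The checkerboard property of Problem 5.12 is one line with NumPy's FFT; for even N it coincides exactly with the re-centering performed by np.fft.fftshift (the 8 × 8 random image is illustrative):

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
u = rng.standard_normal((N, N))

m, n = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
checker = (-1.0) ** (m + n)                # checkerboard pattern

U1 = np.fft.fft2(u * checker)              # DFT of the modulated image
U2 = np.fft.fftshift(np.fft.fft2(u))       # DFT with origin moved to center
assert np.allclose(U1, U2)
```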

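The relation between the cosine transform and the DFT of a symmetrically extended sequence (Problem 5.16 below) can also be checked numerically. A sketch assuming NumPy/SciPy; scipy.fft.dct with norm='ortho' matches the unitary DCT, and the phase factor follows from writing out the 2N-point DFT of the extended sequence:

```python
import numpy as np
from scipy.fft import dct, fft

N = 8
u = np.arange(1.0, N + 1.0)          # an arbitrary test sequence

# symmetric extension {u(N-1), ..., u(0), u(0), ..., u(N-1)} of length 2N
s = np.concatenate([u[::-1], u])
S = fft(s)                           # 2N-point DFT

k = np.arange(N)
alpha = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
# unwinding the 2N-point DFT leaves a pure phase times the cosine sum
c_fft = 0.5 * alpha * (-1.0) ** k * np.exp(-1j * np.pi * k / (2 * N)) * S[:N]

assert np.allclose(c_fft.imag, 0.0, atol=1e-10)
assert np.allclose(c_fft.real, dct(u, type=2, norm='ortho'))
```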
5.13 Show that the real and imaginary parts of the unitary DFT matrix are not orthogonal matrices in general.

5.14 Show that the N × N cosine transform matrix C is orthogonal. Verify your proof for the case N = 4.

5.15 Show that the N × N sine transform is orthogonal and is the eigenmatrix of Q given by (5.102). Verify your proof for the case N = 4.

5.16 Show that the cosine and sine transforms of an N × 1 sequence {u(0), ..., u(N − 1)} can be calculated from the DFTs of the 2N × 1 symmetrically extended sequence {u(N − 1), u(N − 2), ..., u(1), u(0), u(0), u(1), ..., u(N − 1)} and of the (2N + 2) × 1 antisymmetrically extended sequence {0, −u(N − 1), ..., −u(1), −u(0), 0, u(0), u(1), ..., u(N − 1)}, respectively.

5.17 Suppose an N × N image U is mapped into a row-ordered N² × 1 vector p. Show that the N² × N² one-dimensional Hadamard transform of p gives the N × N two-dimensional Hadamard transform of U. Is this true for the other transforms discussed in the text? Give reasons.

5.18 Using the Kronecker product recursion (5.104), prove that a 2^n × 2^n Hadamard transform is orthogonal.

5.19 Calculate and plot the energy packed in the first 1, 2, 4, 8, and 16 sequency-ordered samples of the Hadamard transform of a 16 × 1 vector whose autocorrelations are r(k) = (0.95)^k.

5.20 Prove that an N × N Haar transform matrix is orthogonal and can be implemented in O(N) operations on an N × 1 vector.

5.21 Using the recursive formula for generating the slant transforms, prove that these matrices are orthogonal and fast.

5.22 If the KL transform of a zero mean N × 1 vector u is Φ, then show that the KL transform of the sequence u'(n) = u(n) + μ, where μ is a constant, remains the same only if the vector (1, 1, ..., 1)^T is an eigenvector of the covariance matrix of u. Which of the fast transforms discussed in the text satisfy this property?

5.23 If u1 and u2 are random vectors whose autocorrelation matrices commute, then show that they have a common KL transform.
Hence, show that the KL transforms for the autocorrelation matrices R, R^-1, and f(R), where f(·) is an arbitrary function, are identical. What are the corresponding eigenvalues?

5.24* The autocorrelation array of a 4 × 1 zero mean vector u is given by {0.95^{|m−n|}, 0 ≤ m, n ≤ 3}.
a. What is the KL transform of u?
b. Compare the basis vectors of the KL transform with the basis vectors of the 4 × 4 unitary DFT, DCT, DST, Hadamard, Haar, and Slant transforms.
c. Compare the performance of the various transforms by plotting the basis restriction error Jm versus m.

5.25* The autocorrelation function of a zero mean random field is given by (5.150), where ρ = 0.95. A 16 × 16 segment of this random field is unitarily transformed.
a. What is the maximum energy concentrated in 16, 32, 64, and 128 transform coefficients for each of the seven transforms: KL, cosine, sine, unitary DFT, Hadamard, Haar, and Slant?
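For Problem 5.24, the KL transform and the comparison of part (b) can be sketched with NumPy/SciPy (the DCT stands in for one of the competing transforms):

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import toeplitz

N = 4
R = toeplitz(0.95 ** np.arange(N))        # {0.95^|m-n|, 0 <= m, n <= 3}

lam, Phi = np.linalg.eigh(R)
lam, Phi = lam[::-1], Phi[:, ::-1]        # order KL variances descending
assert np.allclose(Phi.T @ R @ Phi, np.diag(lam))   # KL transform decorrelates

C = dct(np.eye(N), type=2, norm='ortho', axis=0)    # 4x4 unitary DCT
dct_var = np.real(np.diag(C @ R @ C.T))

# basis restriction: the first m KL coefficients retain at least as much
# energy as any m DCT coefficients
for m in range(1, N + 1):
    assert lam[:m].sum() >= np.sort(dct_var)[::-1][:m].sum() - 1e-12
```

For ρ = 0.95 the DCT basis vectors come out close to the KL eigenvectors, which is the "near-optimal substitute" behavior noted in Table 5.3.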

b. Compare the performance of these transforms for this random field by plotting the mean square error for sample reduction ratios of 2, 4, 8, and 16. (Hint: Use Table 5.2.)

5.26 (Threshold representation) Referring to Fig. 5.16, where u(n) is a Gaussian random sequence, the quantity

    εm² = (1/N) Σ_{n=0}^{N-1} [u(n) − z(n)]²,    z(n) = Σ_{j=0}^{m-1} b(n, j)v(j)

is a random variable with respect to m. Let m be the smallest integer for which εm² does not exceed a specified threshold. If A and B are restricted to be unitary transforms, then show that E[m] is minimized when A = Φ*^T, B = A^-1, where Φ*^T is the KL transform of u(n). For details see [33].

5.27 (Minimum entropy property of the KL transform) [30] Define an entropy in the A-transform domain as

    H[A] = −Σ_{k=0}^{N-1} σk² log σk²

where σk² are the variances of the transformed variables v(k). Show that among all the unitary transforms the KL transform minimizes this entropy, that is, H[Φ*^T] ≤ H[A].

5.28 a. Write the N × N covariance matrix R defined in (2.68) as

    β² R^-1 = J(k1, k2, k3) + ΔJ    (P5.28-1)

where ΔJ is a sparse N × N matrix with nonzero terms at the four corners. Show that the above relation yields

    R = β² J^-1 − β² J^-1 (ΔJ) J^-1 + J^-1 (ΔR) J^-1

where ΔR ≜ (ΔJ) R (ΔJ) is also a sparse matrix, which has at most four (corner) nonzero terms. If Φ diagonalizes J, that is, Φ*^T J Φ = Diag{λk}, then show that the variances of the transform coefficients are given by

    σk² = [Φ*^T R Φ]_{k,k} = β²/λk − (β²/λk²)[Φ*^T (ΔJ) Φ]_{k,k} + (1/λk²)[Φ*^T (ΔR) Φ]_{k,k}    (P5.28-2)

Now verify the formulas

    σk²(DST) = β²/λk + (2α/((N + 1)λk²))[1 + (−1)^k ρ^{N+1}] sin²((k + 1)π/(2(N + 1))),
        where λk = 1 − 2α cos((k + 1)π/(N + 1)), 0 ≤ k ≤ N − 1    (P5.28-3)

    σk²(DFT) = β²/λk − (2(1 − ρ^N)α/(N λk²))[cos(2πk/N) − 2α],
        where λk = 1 − 2α cos(2πk/N), 0 ≤ k ≤ N − 1    (P5.28-4)

b. Using the formulas (P5.28-2)-(P5.28-4) and (5.120), calculate the fraction of energy packed in N/2 transform coefficients arranged in decreasing order by the cosine, sine, unitary DFT, and Hadamard transforms for N = 4, 16, 64, 256, 1024, and 4096 for a stationary Markov sequence whose autocorrelation matrix is given by R = {ρ^{|m−n|}}, ρ = 0.95.

5.29 a. For an arbitrary real stationary sequence, its autocorrelation matrix R = {r(m − n)} is Toeplitz. Show that the A-transform coefficient variances, denoted by σk²(A), can be obtained in O(N log N) operations via the formulas

    σk²(F) = (1/N) Σ_{n=-(N-1)}^{N-1} (N − |n|) r(n) W_N^{nk},    F = unitary DFT

    σk²(DST) = r(0) + (1/(N + 1)) [2a(k) + b(k) cot((k + 1)π/(2(N + 1)))],    0 ≤ k ≤ N − 1

where a(k) + jb(k) ≜ Σ_{n=1}^{N-1} [r(n) + r(n + 1)] exp[−jπn(k + 1)/(N + 1)]. Find a similar expression for the DCT.

b. In two dimensions, for stationary random fields, (5.36) implies

    σ²_{k,l}(A) = Σ_m Σ_n Σ_{m'} Σ_{n'} a(k, m) a*(k, m') r(m − m', n − n') a(l, n) a*(l, n')

Show that σ²_{k,l}(A) can be evaluated in O(N² log N) operations when A is the FFT, DST, or DCT.

5.30 Compare the maximum energy packed in k SVD transform coefficients, for k = 1, 2, of the 2 × 4 image U with that packed by the cosine, unitary DFT, and Hadamard transforms.

5.31 (Proof of SVD representation) Define φm such that Uφm = 0 for m = r + 1, ..., M, so that the set {φm, 1 ≤ m ≤ M} is complete and orthonormal. Substituting for ψm in (5.188), obtain the following result:

    U = Σ_{m=1}^{M} U φm φm^T = Σ_{m=1}^{r} U φm φm^T = Σ_{m=1}^{r} √λm ψm φm^T

BIBLIOGRAPHY

Sections 5.1, 5.2

General references on image transforms:

1. H. C. Andrews. Computer Techniques in Image Processing. New York: Academic Press, 1970, Chapters 5, 6.
2. H. C. Andrews. "Two Dimensional Transforms," in Topics in Applied Physics: Picture Processing and Digital Filtering, vol. 6, T. S. Huang (ed.). New York: Springer-Verlag, 1975.
3. N. Ahmed and K. R. Rao. Orthogonal Transforms for Digital Signal Processing. New York: Springer-Verlag, 1975.
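As a numerical footnote to Problem 5.29a, the identity behind the O(N log N) claim for the unitary DFT, that σk²(F) equals the DFT of the triangle-windowed autocorrelation, can be verified directly (NumPy/SciPy assumed; a plain matrix product stands in for the FFT, purely for clarity):

```python
import numpy as np
from scipy.linalg import toeplitz

N, rho = 8, 0.95
r_pos = rho ** np.arange(N)                  # r(n), n = 0..N-1
R = toeplitz(r_pos)                          # Toeplitz autocorrelation matrix

# direct route: sigma_k^2 = [F R F*]_kk with F the unitary DFT
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
direct = np.real(np.diag(F @ R @ F.conj().T))

# fast route: DFT of the triangle-windowed autocorrelation (N - |n|) r(n) / N
d = np.arange(-(N - 1), N)
w = (N - np.abs(d)) * rho ** np.abs(d) / N
fast = np.real(np.exp(-2j * np.pi * np.outer(np.arange(N), d) / N) @ w)

assert np.allclose(direct, fast)
assert np.isclose(direct.sum(), np.trace(R))   # variances sum to trace of R
```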


More information

Lossless Image and Intra-frame Compression with Integer-to-Integer DST

Lossless Image and Intra-frame Compression with Integer-to-Integer DST 1 Lossless Image and Intra-frame Compression with Integer-to-Integer DST Fatih Kamisli, Member, IEEE arxiv:1708.07154v1 [cs.mm] 3 Aug 017 Abstract Video coding standards are primarily designed for efficient

More information

A NOVEL METHOD TO DERIVE EXPLICIT KLT KERNEL FOR AR(1) PROCESS. Mustafa U. Torun and Ali N. Akansu

A NOVEL METHOD TO DERIVE EXPLICIT KLT KERNEL FOR AR(1) PROCESS. Mustafa U. Torun and Ali N. Akansu A NOVEL METHOD TO DERIVE EXPLICIT KLT KERNEL FOR AR() PROCESS Mustafa U. Torun and Ali N. Akansu Department of Electrical and Computer Engineering New Jersey Institute of Technology University Heights,

More information

1. Calculation of the DFT

1. Calculation of the DFT ELE E4810: Digital Signal Processing Topic 10: The Fast Fourier Transform 1. Calculation of the DFT. The Fast Fourier Transform algorithm 3. Short-Time Fourier Transform 1 1. Calculation of the DFT! Filter

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

8 The Discrete Fourier Transform (DFT)

8 The Discrete Fourier Transform (DFT) 8 The Discrete Fourier Transform (DFT) ² Discrete-Time Fourier Transform and Z-transform are de ned over in niteduration sequence. Both transforms are functions of continuous variables (ω and z). For nite-duration

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Two-Dimensional Signal Processing and Image De-noising

Two-Dimensional Signal Processing and Image De-noising Two-Dimensional Signal Processing and Image De-noising Alec Koppel, Mark Eisen, Alejandro Ribeiro March 14, 2016 Before we considered (one-dimensional) discrete signals of the form x : [0, N 1] C where

More information

Beechwood Music Department Staff

Beechwood Music Department Staff Beechwood Music Department Staff MRS SARAH KERSHAW - HEAD OF MUSIC S a ra h K e rs h a w t r a i n e d a t t h e R oy a l We ls h C o l le g e of M u s i c a n d D ra m a w h e re s h e ob t a i n e d

More information

Use: Analysis of systems, simple convolution, shorthand for e jw, stability. Motivation easier to write. Or X(z) = Z {x(n)}

Use: Analysis of systems, simple convolution, shorthand for e jw, stability. Motivation easier to write. Or X(z) = Z {x(n)} 1 VI. Z Transform Ch 24 Use: Analysis of systems, simple convolution, shorthand for e jw, stability. A. Definition: X(z) = x(n) z z - transforms Motivation easier to write Or Note if X(z) = Z {x(n)} z

More information

Problem structure in semidefinite programs arising in control and signal processing

Problem structure in semidefinite programs arising in control and signal processing Problem structure in semidefinite programs arising in control and signal processing Lieven Vandenberghe Electrical Engineering Department, UCLA Joint work with: Mehrdad Nouralishahi, Tae Roh Semidefinite

More information

P a g e 3 6 of R e p o r t P B 4 / 0 9

P a g e 3 6 of R e p o r t P B 4 / 0 9 P a g e 3 6 of R e p o r t P B 4 / 0 9 p r o t e c t h um a n h e a l t h a n d p r o p e r t y fr om t h e d a n g e rs i n h e r e n t i n m i n i n g o p e r a t i o n s s u c h a s a q u a r r y. J

More information

Maximum variance formulation

Maximum variance formulation 12.1. Principal Component Analysis 561 Figure 12.2 Principal component analysis seeks a space of lower dimensionality, known as the principal subspace and denoted by the magenta line, such that the orthogonal

More information

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics Ulrich Meierfrankenfeld Department of Mathematics Michigan State University East Lansing MI 48824 meier@math.msu.edu

More information

Matrix Representation

Matrix Representation Matrix Representation Matrix Rep. Same basics as introduced already. Convenient method of working with vectors. Superposition Complete set of vectors can be used to express any other vector. Complete set

More information

4 4 N v b r t, 20 xpr n f th ll f th p p l t n p pr d. H ndr d nd th nd f t v L th n n f th pr v n f V ln, r dn nd l r thr n nt pr n, h r th ff r d nd

4 4 N v b r t, 20 xpr n f th ll f th p p l t n p pr d. H ndr d nd th nd f t v L th n n f th pr v n f V ln, r dn nd l r thr n nt pr n, h r th ff r d nd n r t d n 20 20 0 : 0 T P bl D n, l d t z d http:.h th tr t. r pd l 4 4 N v b r t, 20 xpr n f th ll f th p p l t n p pr d. H ndr d nd th nd f t v L th n n f th pr v n f V ln, r dn nd l r thr n nt pr n,

More information

Digital Signal Processing. Midterm 2 Solutions

Digital Signal Processing. Midterm 2 Solutions EE 123 University of California, Berkeley Anant Sahai arch 15, 2007 Digital Signal Processing Instructions idterm 2 Solutions Total time allowed for the exam is 80 minutes Please write your name and SID

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Last 4 Digits of USC ID:

Last 4 Digits of USC ID: Chemistry 05 B Practice Exam Dr. Jessica Parr First Letter of last Name PLEASE PRINT YOUR NAME IN BLOCK LETTERS Name: Last 4 Digits of USC ID: Lab TA s Name: Question Points Score Grader 8 2 4 3 9 4 0

More information

ECE Lecture Notes: Matrix Computations for Signal Processing

ECE Lecture Notes: Matrix Computations for Signal Processing ECE Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 2005 2 Lecture 2 This lecture discusses

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Discrete Time Fourier Transform (DTFT)

Discrete Time Fourier Transform (DTFT) Discrete Time Fourier Transform (DTFT) 1 Discrete Time Fourier Transform (DTFT) The DTFT is the Fourier transform of choice for analyzing infinite-length signals and systems Useful for conceptual, pencil-and-paper

More information

Objectives of Image Coding

Objectives of Image Coding Objectives of Image Coding Representation of an image with acceptable quality, using as small a number of bits as possible Applications: Reduction of channel bandwidth for image transmission Reduction

More information

Lecture4 INF-MAT : 5. Fast Direct Solution of Large Linear Systems

Lecture4 INF-MAT : 5. Fast Direct Solution of Large Linear Systems Lecture4 INF-MAT 4350 2010: 5. Fast Direct Solution of Large Linear Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 16, 2010 Test Matrix

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL.

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL. Adaptive Filtering Fundamentals of Least Mean Squares with MATLABR Alexander D. Poularikas University of Alabama, Huntsville, AL CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

0 t b r 6, 20 t l nf r nt f th l t th t v t f th th lv, ntr t n t th l l l nd d p rt nt th t f ttr t n th p nt t th r f l nd d tr b t n. R v n n th r

0 t b r 6, 20 t l nf r nt f th l t th t v t f th th lv, ntr t n t th l l l nd d p rt nt th t f ttr t n th p nt t th r f l nd d tr b t n. R v n n th r n r t d n 20 22 0: T P bl D n, l d t z d http:.h th tr t. r pd l 0 t b r 6, 20 t l nf r nt f th l t th t v t f th th lv, ntr t n t th l l l nd d p rt nt th t f ttr t n th p nt t th r f l nd d tr b t n.

More information

Letting be a field, e.g., of the real numbers, the complex numbers, the rational numbers, the rational functions W(s) of a complex variable s, etc.

Letting be a field, e.g., of the real numbers, the complex numbers, the rational numbers, the rational functions W(s) of a complex variable s, etc. 1 Polynomial Matrices 1.1 Polynomials Letting be a field, e.g., of the real numbers, the complex numbers, the rational numbers, the rational functions W(s) of a complex variable s, etc., n ws ( ) as a

More information

n r t d n :4 T P bl D n, l d t z d th tr t. r pd l

n r t d n :4 T P bl D n, l d t z d   th tr t. r pd l n r t d n 20 20 :4 T P bl D n, l d t z d http:.h th tr t. r pd l 2 0 x pt n f t v t, f f d, b th n nd th P r n h h, th r h v n t b n p d f r nt r. Th t v v d pr n, h v r, p n th pl v t r, d b p t r b R

More information

IMAGE COMPRESSION-II. Week IX. 03/6/2003 Image Compression-II 1

IMAGE COMPRESSION-II. Week IX. 03/6/2003 Image Compression-II 1 IMAGE COMPRESSION-II Week IX 3/6/23 Image Compression-II 1 IMAGE COMPRESSION Data redundancy Self-information and Entropy Error-free and lossy compression Huffman coding Predictive coding Transform coding

More information

Chapter 1. Matrix Calculus

Chapter 1. Matrix Calculus Chapter 1 Matrix Calculus 11 Definitions and Notation We assume that the reader is familiar with some basic terms in linear algebra such as vector spaces, linearly dependent vectors, matrix addition and

More information

[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2

[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2 MATH Key for sample nal exam, August 998 []. (a) Dene the term \reduced row-echelon matrix". A matrix is reduced row-echelon if the following conditions are satised. every zero row lies below every nonzero

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

Procedure for Measuring Swing Bearing Movement on 300 A Series Hydraulic Excavators {7063}

Procedure for Measuring Swing Bearing Movement on 300 A Series Hydraulic Excavators {7063} Page 1 of 7 Procedure for Measuring Swing Bearing Movement on 300 A Series Hydraulic Excavators {7063} Excavator: 307 (S/N: 2PM1-UP; 2WM1-UP) 311 (S/N: 9LJ1-UP; 5PK1-UP; 2KM1-UP) 312 (S/N: 6GK1-UP; 7DK1-UP;

More information