ADVANCED IMAGE PROCESSING CELLULAR NEURAL NETWORKS


Papers
International Journal of Bifurcation and Chaos, Vol. 17, No. 4 (2007)
© World Scientific Publishing Company

MAKOTO ITOH
Department of Information and Communication Engineering, Fukuoka Institute of Technology, Fukuoka, Japan

LEON O. CHUA
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA

Received January ; Revised November

Many useful and well-known image processing templates for cellular neural networks (CNNs) can be derived from neural field models, thereby providing a neural basis for the CNN paradigm. The potential for multitasking image processing is investigated by using these templates. Many visual illusions are simulated via CNN image processing. The ability of the CNN to mimic such high-level brain functions suggests possible applications of the CNN in cognitive engineering. Furthermore, two kinds of painting-like image processing, namely texture generation and illustration-style transformation, are investigated.

Keywords: CNN; cortical information processing; cognitive science; cognitive engineering; illusion image processing; painting-like image processing; multitasking image processing; illustration style transformation; texture generation.

1. Introduction

A Cellular Neural Network (CNN) is a dynamic nonlinear system defined by coupling identical simple dynamical systems, called cells, located within a prescribed sphere of influence, such as the nearest neighbors [Chua, 1998; Chua & Roska, 2002]. Because of its simplicity and suitability for chip (hardware) implementation, the CNN has found numerous applications in image and video signal processing, in robotic and biological vision, and in modeling higher brain functions. The dynamics of a CNN is governed by a set of nonlinear differential equations whose parameters must be derived analytically or numerically from given functions. Recently, a method for designing CNN parameters was proposed [Itoh & Chua, 2003].
In this design, one has to choose a candidate from a set of basic CNN templates and tune its parameters to achieve a desired task. Almost all templates in this set were derived from neural field equations, advection equations, and reaction-diffusion equations by discretizing spatial integrals and derivatives [Itoh & Chua, 2005]. However, some useful and well-known templates were not included in this set. CNN templates are usually designed to achieve only one task. However, the CNN has the potential to execute two or more tasks simultaneously, since the CNN has three kinds of templates: a control (feed-forward) template, a feedback template, and a threshold template. We may tune these templates to accomplish multitask image processing. A visual illusion is any fallacious perception of reality. Research on visual illusions can provide fundamental insights into the general brain mechanisms of perception and cognition. Many well-known visual illusions have been simulated by CNN image processing [Chua, 1998; Chua & Roska, 2002].

A painting-like rendering technique usually employs digital image processing. Several image processing techniques can transform an image into a painting or other artistic styles. Since the CNN has pattern formation and self-organizing properties, it has the potential to produce images with human expressive characteristics or important details found in real paintings. Studying painting-like image processing can also provide insights into general brain mechanisms.

In this paper, we first derive several well-known templates from neural field models without output connections. Then we explore the potential of the CNN to achieve multitasking image processing by using these templates. Next, we simulate many visual illusions via a CNN by using a set of basic templates. Finally, we study various painting-like CNN image processing techniques. That is, we investigate templates for generating textures in given images by exploiting CNN pattern formation properties. Furthermore, we investigate CNN templates for transforming images into various artistic illustration styles.

2. From Continuous to Discrete Array

The neural field [Amari, 1978] is a mathematical model of the cortex. The neural field equation represents highly dense cortical neurons as a spatially continuous field.
The dynamics of a one-dimensional neural field is defined by

∂u(x, t)/∂t = −u(x, t) + ∫_{x_min}^{x_max} a(x, x′) f(u(x′, t)) dx′ + ∫_{x_min}^{x_max} b(x, x′) s(x′) dx′ + z(x),   (1)

where the interval [x_min, x_max] denotes the domain of the neural field; u(x, t) is the average membrane potential of a neuron at position x at time t; a(x, x′) and b(x, x′) are the connectivity functions representing the average intensity of connections from neurons at location x′ to neurons at location x; f(u) is the output function, which determines the firing rate of the neuron as a function of its membrane potential; s(x′) is the input stimulus applied externally to neurons at position x′; and z(x) is the firing threshold of a neuron located at x [Amari, 1978; Kubota & Aihara, 2004].

The discretized version of Eq. (1) can be obtained by using the method in [Kubota & Aihara, 2004]. Divide the domain [x_min, x_max] into N intervals [x_{i−1}, x_i] (i = 1, 2, ..., N) with length Δx, where

x_i = x_min + i Δx   (i = 1, 2, ..., N),   (2)

Δx = (x_max − x_min)/N.   (3)

Let u_i be the average membrane potential of the neurons in the interval [x_{i−1}, x_i], a_{ij} and b_{ij} be the average intensity of connections from neurons in the interval [x_{j−1}, x_j] to neurons in the interval [x_{i−1}, x_i], s_j be the input stimulus applied externally to neurons in the interval [x_{j−1}, x_j], and z_i be the firing threshold in the interval [x_{i−1}, x_i]. If we substitute the approximations

u(x) ≈ u_i,  u(x′) ≈ u_j,  f(u_j) = p_j,  a(x, x′) ≈ a_{ij},  b(x, x′) ≈ b_{ij},  s(x′) ≈ s_j,  z(x) ≈ z_i   (4)

for each term in Eq. (1) and replace the integrals by summations, we obtain a one-dimensional fully-connected CNN equation [Chua, 1998]

du_i/dt = −u_i + Σ_{j=1}^{N} (a_{ij} p_j + b_{ij} s_j) Δx + z_i = −u_i + Σ_{j=1}^{N} (â_{ij} p_j + b̂_{ij} s_j) + z_i,   (5)

where â_{ij} = a_{ij} Δx and b̂_{ij} = b_{ij} Δx.
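Equation (5) can be integrated directly; the sketch below is a minimal forward-Euler implementation in pure Python. The step size, the placeholder kernels, and the choice of f (the saturation output introduced later in Eq. (9)) are our own assumptions for illustration, not values from the paper.

```python
def f(u):
    # Piecewise-linear saturation output: f(u) = (|u + 1| - |u - 1|) / 2
    return 0.5 * (abs(u + 1.0) - abs(u - 1.0))

def cnn_1d_step(u, a_hat, b_hat, s, z, dt=0.05):
    """One Euler step of du_i/dt = -u_i + sum_j (a_ij p_j + b_ij s_j) + z_i, Eq. (5)."""
    N = len(u)
    p = [f(uj) for uj in u]  # outputs p_j = f(u_j)
    return [u[i] + dt * (-u[i]
                         + sum(a_hat[i][j] * p[j] + b_hat[i][j] * s[j]
                               for j in range(N))
                         + z[i])
            for i in range(N)]
```

With zero feedback (â = 0) and an identity input kernel, each state relaxes to its input, which matches the steady states derived for the feed-forward filters in Sec. 3.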
If each cell C_i (the neurons in the interval [x_{i−1}, x_i]) is coupled locally only to the neighbor cells inside some neighborhood (sphere of influence) N_i, then Eq. (5) can be written as

du_i/dt = −u_i + Σ_{j ∈ N_i} (â_{ij} p_j + b̂_{ij} s_j) + z_i,   (6)

where N_i denotes the r-neighborhood of cell C_i.

Consider next the two-dimensional neural field equation

∂u(w, t)/∂t = −u(w, t) + ∫_{x_min}^{x_max} ∫_{y_min}^{y_max} a(w, w′) f(u(w′, t)) dx′ dy′ + ∫_{x_min}^{x_max} ∫_{y_min}^{y_max} b(w, w′) s(w′) dx′ dy′ + z(w),   (7)

where w = (x, y) ∈ R² and w′ = (x′, y′) ∈ R². In this case the domain [y_min, y_max] is also divided into M intervals [y_{j−1}, y_j] (j = 1, 2, ..., M) with length Δy. Thus the domain bounded by [x_min, x_max] and [y_min, y_max] is divided into MN subdomains. Let u_{ij} be the average of u(w, t) in the region bounded by [x_{i−1}, x_i] and [y_{j−1}, y_j]. Then the following standard CNN equation [Chua, 1998; Chua & Roska, 2002] can be obtained from Eq. (7):

du_{ij}/dt = −u_{ij} + Σ_{kl ∈ N_{ij}} (a_{kl} p_{kl} + b_{kl} s_{kl}) + z_{ij},   (i, j) ∈ {1, ..., M} × {1, ..., N},   (8)

where u_{ij} denotes the state of cell C_{ij}; p_{kl} and s_{kl} denote the output and input, respectively, of cell C_{kl}; N_{ij} denotes the r-neighborhood of cell C_{ij}; and a_{kl}, b_{kl}, and z_{ij} denote the feedback, control, and threshold template parameters, respectively. The matrices A = [a_{kl}] and B = [b_{kl}] are referred to as the feedback template and the feedforward (input) template, respectively. The output p_{ij} and the state u_{ij} of each cell are usually related via the piecewise-linear saturation function

p_{ij} = f(u_{ij}) = (1/2)(|u_{ij} + 1| − |u_{ij} − 1|).   (9)

If we restrict the neighborhood radius of every cell to r = 1 and assume that z_{ij} is the same for the whole network, the template {A, B, z} is fully specified by 19 parameters: the elements of the 3 × 3 matrices

A = [α_{−1,−1} α_{−1,0} α_{−1,1}; α_{0,−1} α_{0,0} α_{0,1}; α_{1,−1} α_{1,0} α_{1,1}],
B = [β_{−1,−1} β_{−1,0} β_{−1,1}; β_{0,−1} β_{0,0} β_{0,1}; β_{1,−1} β_{1,0} β_{1,1}],   (10)

and the real number z. Thus the CNN is characterized by the templates

{A, B, z}.   (11)

3. Image Processing Filter-Based CNN

In this section, we derive many well-known image processing filters from several neural field models without output connections.

3.1. Neural field models without output connections

Consider the neural field equations without output connections, namely a(w, w′) = 0. Then Eq.
(7) has the following form:

∂u(w, t)/∂t = −u(w, t) + ∫_{x_min}^{x_max} ∫_{y_min}^{y_max} b(w, w′) s(w′) dx′ dy′ + z(w).   (12)

The solution u(w, t) converges to

u(w, ∞) = ∫_{x_min}^{x_max} ∫_{y_min}^{y_max} b(w, w′) s(w′) dx′ dy′ + z(w)   (13)

as t → ∞.

Spatial Fourier Transform-Based CNN

Consider Eq. (12) over an infinite space, that is,

[x_min, x_max] × [y_min, y_max] = [−∞, ∞] × [−∞, ∞].   (14)

If the intensity function b(w, w′) is given by

b(w, w′) = b(w′) = e^{−i(ω₁x′ + ω₂y′)},  z(w) = 0,   (15)

then we obtain the following integro-differential equation:

∂u(w, t)/∂t = −u(w, t) + ∫∫ e^{−i(ω₁x′ + ω₂y′)} s(w′) dx′ dy′.   (16)

The solution u(w, t) converges to

U(ω₁, ω₂) = u(w, ∞) = ∫∫ e^{−i(ω₁x′ + ω₂y′)} s(w′) dx′ dy′   (17)

as t → ∞. Equation (17) is the spatial Fourier transform of s(w). The inverse spatial Fourier transform of U(ω₁, ω₂) is given by

s(w) = (1/4π²) ∫∫ e^{i(ω₁x + ω₂y)} U(ω₁, ω₂) dω₁ dω₂.   (18)

Applying the Riemann sum (see "Riemann sum", Wikipedia: The Free Encyclopedia) to Eq. (16) and replacing u(w, t) by u(ω₁, ω₂), we obtain

du(ω₁, ω₂)/dt = −u(ω₁, ω₂) + Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} s_{nm} e^{−i(ω₁ n Δx + ω₂ m Δy)} Δx Δy,   (19)

where

x′ = n Δx,  y′ = m Δy.   (20)

Setting Δx = Δy = 1, we obtain

du(ω₁, ω₂)/dt = −u(ω₁, ω₂) + Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} s_{nm} e^{−i(ω₁n + ω₂m)}.   (21)

The solution u(ω₁, ω₂) converges to

u(ω₁, ω₂) = Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} s_{nm} e^{−i(ω₁n + ω₂m)}   (22)

as t → ∞. The function u(ω₁, ω₂) is the discrete spatial Fourier transform of s_{nm}. Its inverse transform is given by

s_{nm} = (1/4π²) ∫_{−π}^{π} ∫_{−π}^{π} u(ω₁, ω₂) e^{i(ω₁n + ω₂m)} dω₁ dω₂.   (23)

In the case of a finite array, namely n ∈ {0, 1, ..., N−1} and m ∈ {0, 1, ..., M−1}, we usually use the following discrete spatial Fourier transform:

u(k, l) = Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} s_{nm} e^{−2πi(kn/N + lm/M)}.   (24)

Its inverse discrete spatial Fourier transform is given by

s_{nm} = (1/MN) Σ_{k=0}^{N−1} Σ_{l=0}^{M−1} u(k, l) e^{2πi(kn/N + lm/M)}.   (25)

Therefore, from Eqs. (21) and (24), we obtain a discrete spatial Fourier transform-based CNN equation [Crounse & Chua, 1995]

du_{kl}/dt = −u_{kl} + Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} b_{nm} s_{nm},   (k, l) ∈ {0, 1, ..., N−1} × {0, 1, ..., M−1},   (26)

where b_{nm} = e^{−2πi(kn/N + lm/M)}. The solution u_{kl}(t) converges to

u_{kl}(∞) = Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} b_{nm} s_{nm}   (27)

as t → ∞.

Spatial Cosine Transform-Based CNN

Define a discrete cosine transform-based CNN by modifying Eq. (26):

du_{kl}/dt = −u_{kl} + α_k α_l Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} b_{nm} s_{nm},   (28)

where

b_{nm} = cos(π(2n+1)k / 2N) cos(π(2m+1)l / 2M)   (29)

and

α_k = √(1/N) for k = 0,  √(2/N) for 1 ≤ k ≤ N−1;  α_l = √(1/M) for l = 0,  √(2/M) for 1 ≤ l ≤ M−1.   (30)

The solution u_{kl}(t) converges to

ũ_{kl} = u_{kl}(∞) = α_k α_l Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} b_{nm} s_{nm}   (31)

as t → ∞. This equation corresponds to the discrete cosine transform. Its inverse discrete cosine transform is given by

s_{nm} = Σ_{k=0}^{N−1} Σ_{l=0}^{M−1} α_k α_l b_{kl} ũ_{kl}.   (32)

Spatial Filter-Based CNN

Consider Eq. (12) over an infinite space, that is,

[x_min, x_max] × [y_min, y_max] = [−∞, ∞] × [−∞, ∞].
(33)

If b(w, w′) = b(w − w′), we obtain a spatial filter

∂u(w, t)/∂t = −u(w, t) + ∫∫ b(w − w′) s(w′) dx′ dy′ + z(w).   (34)

The solution u(w, t) converges to

u(w, ∞) = ∫∫ b(w − w′) s(w′) dx′ dy′ + z(w)   (35)

as t → ∞. This equation can be transformed into the form

U(ω₁, ω₂) = B(ω₁, ω₂) S(ω₁, ω₂) + Z(ω₁, ω₂),   (36)

where U(ω₁, ω₂), B(ω₁, ω₂), S(ω₁, ω₂), and Z(ω₁, ω₂) are the spatial Fourier transforms of u(w, ∞), b(w), s(w), and z(w), respectively; that is,

U(ω₁, ω₂) = ∫∫ e^{−iω₁x} e^{−iω₂y} u(w, ∞) dx dy,   (37)

and similarly for S, B, and Z. Their inverse spatial Fourier transforms are given by

u(w, ∞) = (1/4π²) ∫∫ e^{iω₁x} e^{iω₂y} U(ω₁, ω₂) dω₁ dω₂,   (38)

and similarly for s(w), b(w), and z(w). Their discrete versions are given by

U(k, l) = B(k, l) S(k, l) + Z(k, l),   (39)

where

U(k, l) = Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} u_{nm} e^{−2πi(kn/N + lm/M)},   (40)

with S(k, l), B(k, l), and Z(k, l) defined in the same way from s_{nm}, b_{nm}, and z_{nm}, and

u_{nm} = (1/NM) Σ_{k=0}^{N−1} Σ_{l=0}^{M−1} U(k, l) e^{2πi(kn/N + lm/M)},   (41)

with the corresponding inverse formulas for s_{nm}, b_{nm}, and z_{nm}. Thus the spatial filter for a finite array is realized by the equation

du_{ij}/dt = −u_{ij} + Σ_{k=0}^{N−1} Σ_{l=0}^{M−1} b_{kl} s_{kl} + z_{ij},   (i, j), (k, l) ∈ {0, 1, ..., N−1} × {0, 1, ..., M−1}.   (42)

If each cell is coupled locally only to the cells within a neighborhood of radius r, we obtain a spatial filter-based CNN

du_{ij}/dt = −u_{ij} + Σ_{(k,l) ∈ N_{ij}} b_{kl} s_{kl} + z_{ij},   (i, j) ∈ {0, 1, ..., N−1} × {0, 1, ..., M−1}.   (43)

The solution u_{ij}(t) converges to

u_{ij}(∞) = Σ_{(k,l) ∈ N_{ij}} b_{kl} s_{kl} + z_{ij}   (44)

as t → ∞.
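Because Eq. (43) has no feedback term, its steady state (44) is just one weighted sum per cell; no time integration is needed. The helper below evaluates that steady state for an r = 1 neighborhood. It is our own sketch; in particular, the replicate-border handling is an assumption, since the paper does not specify boundary conditions.

```python
def convolve3(S, B, z=0.0):
    """Steady state u_ij(inf) = sum_kl b_kl s_kl + z_ij of Eq. (44), r = 1."""
    N, M = len(S), len(S[0])

    def at(i, j):  # replicate-border lookup (our own boundary choice)
        return S[min(max(i, 0), N - 1)][min(max(j, 0), M - 1)]

    return [[z + sum(B[di + 1][dj + 1] * at(i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1))
             for j in range(M)]
            for i in range(N)]
```

The same helper serves all the feed-forward templates derived below (mean, Sobel, Prewitt, Laplacian, and so on): only the 3 × 3 matrix B and the threshold z change.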

Gaussian Filter-Based CNN

Consider Eq. (12) over an infinite space

[x_min, x_max] × [y_min, y_max] = [−∞, ∞] × [−∞, ∞].   (45)

If the intensity function b(w, w′) is given by

b(w, w′) = b(w − w′) = (1/2πσ²) e^{−‖w − w′‖²/2σ²} = (1/2πσ²) e^{−[(x − x′)² + (y − y′)²]/2σ²},   (46)

then we obtain the following integro-partial differential equation:

∂u(w, t)/∂t = −u(w, t) + (1/2πσ²) ∫∫ e^{−[(x − x′)² + (y − y′)²]/2σ²} s(w′) dx′ dy′ + z(w),   (47)

where σ is a constant. The solution u(w, t) converges to the Gaussian filter

u(w, ∞) = (1/2πσ²) ∫∫ e^{−[(x − x′)² + (y − y′)²]/2σ²} s(w′) dx′ dy′ + z(w)   (48)

as t → ∞. The Gaussian filter can be used as a low-pass filter for blurring or smoothing an image. If we set

Δx = Δy = 1,  x = i Δx = i,  y = j Δy = j,  x′ = n Δx = n,  y′ = m Δy = m,   (49)

and apply the Gauss integral formula to Eq. (48) (that is, replace the integral by a summation and replace u(w, t), s(w′), and z(w) by u_{ij}, s_{nm}, and z_{ij}, respectively), we obtain

du_{ij}/dt = −u_{ij} + Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} (1/2πσ²) e^{−[(i − n)² + (j − m)²]/2σ²} s_{nm} + z_{ij}.   (50)

If each cell is coupled locally only to the cells within a neighborhood of radius r, we obtain a Gaussian filter-based CNN equation

du_{ij}/dt = −u_{ij} + Σ_{(k,l) ∈ N_{ij}} b_{kl} s_{kl} + z_{ij},   (i, j) ∈ {1, ..., N} × {1, ..., M},   (51)

where

b_{kl} = (1/2πσ²) e^{−[(i − k)² + (j − l)²]/2σ²}.   (52)

The solution u_{ij}(t) converges to

u_{ij}(∞) = Σ_{(k,l) ∈ N_{ij}} b_{kl} s_{kl} + z_{ij}   (53)

as t → ∞. If the intensity of the threshold is uniform, namely z_{ij} = z, then Eq. (51) with σ = 1 and r = 2 (a 5 × 5 neighborhood of cell C_{ij}) can be approximated by the CNN template

B = b [1 4 7 4 1; 4 16 26 16 4; 7 26 41 26 7; 4 16 26 16 4; 1 4 7 4 1],  z = z,   (54)

where b = 1/273.

Mean Filter-Based CNN

Consider Eq. (12) in a finite space. If the intensity function b(w, w′) is spatially uniform, that is, b(w, w′) = b′, we obtain a mean filter described by

∂u(w, t)/∂t = −u(w, t) + b′ ∫_{x_min}^{x_max} ∫_{y_min}^{y_max} s(w′) dx′ dy′ + z(w).   (55)

The solution u(w, t) converges to

u(w, ∞) = b′ ∫_{x_min}^{x_max} ∫_{y_min}^{y_max} s(w′) dx′ dy′ + z(w)   (56)

7 as t. The mean filter is normally used to reduce noise in an image. Applying the Gauss integral formula to Eq. (55) we obtain the equation du ij = u ij + b N = u ij + b k=1 N k=1 M s kl x y + z ij l=1 M s kl + z ij (57) l=1 (i j) (k l) {1...M} {1...N} where b = b x y. If each cell is coupled locally only to all cells within a neighborhood of radius equal to r =1andz ij is spatially uniform that is z ij = z we would obtain a mean filter-based CNN equation du ( ij = u ij + b s i 1j 1 + s i 1j + s i 1j+1 + s ij 1 + s ij + s ij+1 + s i+1j 1 + s i+1j + s i+1j+1 ) + z. (58) The solution u ij (t) convergesto ( u ij ( ) =b s i 1j 1 + s i 1j + s i 1j+1 + s ij 1 + s ij + s ij+1 + s i+1j 1 + s i+1j + s i+1j+1 ) + z (59) as t. Thus Eq. (58) is characterized by the following template B = b z = z (60) where we usually set b =1/9.Ifthereisnoconnection between the diagonal cells and the center cell we would obtain the following template B = b z = z (61) where b =1/5. These templates can be used for noise removal. Nonuniform Mean Filter-Based CNN Consider the integral of the neural field equation: Advanced Image Processing Cellular Neural Networks 1115 where u(wt) = u(wt) t + b(w w)s(w ) dx dy + z(w). (62) b(w w) = b ϕ(x x)ϕ(y y) { ϕ(x) = 1 x d 0 x >d (63) and d is a constant. If we apply the iterated trapezoidal approximation [Itoh & Chua 2005] to this equation and each cell is coupled locally only to all cells within a neighborhood of radius equal to r =1( x = y = d) we would obtain du ij = u ij + b 4 ( s i 1j 1 + s i 1j + s i 1j+1 + s ij 1 + s ij+1 + s i+1j 1 + s i+1j + s i+1j+1 +8s ij ) x y + z ( = u ij + b s i 1j 1 + s i 1j + s i 1j+1 + s ij 1 + s ij+1 + s i+1j 1 + s i+1j + s i+1j+1 +8s ij ) + z (64) where b =(b /4) x y. Thesolutionu ij (t) converges to ( u ij ( ) =b s i 1j 1 + s i 1j + s i 1j+1 + s ij 1 + s ij+1 + s i+1j 1 + s i+1j + s i+1j+1 +8s ij ) + z (65) as t. 
Equation (64) is characterized by the CNN template

B = b [1 1 1; 1 8 1; 1 1 1],  z = z.   (66)

We can also obtain the following CNN template without diagonal elements:

B = b [0 1 0; 1 4 1; 0 1 0],  z = z,   (67)

where b = (b′/2) Δx Δy. These templates can also be used for noise removal.
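The effect of the uniform averaging template (60) with b = 1/9 and z = 0 is easy to check numerically: an isolated noise impulse is attenuated to its local mean, while flat regions pass through unchanged. The sketch below is our own, with replicate borders as an assumed boundary condition.

```python
def mean_filter(S):
    """Steady state (59) of the 3x3 mean filter, template (60) with b = 1/9, z = 0."""
    N, M = len(S), len(S[0])

    def at(i, j):  # replicate-border lookup
        return S[min(max(i, 0), N - 1)][min(max(j, 0), M - 1)]

    return [[sum(at(i + di, j + dj)
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
             for j in range(M)]
            for i in range(N)]
```

Swapping in the weights of (61), (66), or (67) (with their normalizing b) gives the other averaging variants.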

3.2. Neural field models with derivative inputs

Consider the neural field equation

∂u(w, t)/∂t = −u(w, t) + ∫∫ b(w) s(w) dx dy + z(w).   (68)

If we replace the integral over x by the partial derivative with respect to x, we obtain the following partial differential equation:

∂u(w, t)/∂t = −u(w, t) + ∫ b(w) ∂s(w)/∂x dy + z(w).   (69)

The solution u(w, t) converges to

u(w, ∞) = ∫ b(w) ∂s(w)/∂x dy + z(w)   (70)

as t → ∞.

Sobel Filter-Based CNN

Assume that the intensity function b(w) and the threshold z(w) satisfy

b(w) = b′ φ(y′ − y),  z(w) = z.   (71)

Furthermore, assume that each cell is coupled locally only to the cells within a neighborhood of radius r = 1 (Δy = d). If we apply the central difference formula and Simpson's rule [Itoh & Chua, 2005] to the spatial derivative and the spatial integral in Eq. (69), respectively, we obtain the Sobel filter-based CNN equation

du_{ij}/dt = −u_{ij} + b′ [(s_{i−1,j+1} − s_{i−1,j−1}) + 2(s_{i,j+1} − s_{i,j−1}) + (s_{i+1,j+1} − s_{i+1,j−1})] (Δy/Δx) + z
= −u_{ij} + b {(s_{i−1,j−1} − s_{i−1,j+1}) + 2(s_{i,j−1} − s_{i,j+1}) + (s_{i+1,j−1} − s_{i+1,j+1})} + z,   (72)

or, equivalently,

du_{ij}/dt = −u_{ij} + b {(s_{i−1,j−1} + 2s_{i,j−1} + s_{i+1,j−1}) − (s_{i−1,j+1} + 2s_{i,j+1} + s_{i+1,j+1})} + z,   (73)

where b = −b′(Δy/Δx). The solution u_{ij}(t) converges to

u_{ij}(∞) = b {(s_{i−1,j−1} + 2s_{i,j−1} + s_{i+1,j−1}) − (s_{i−1,j+1} + 2s_{i,j+1} + s_{i+1,j+1})} + z   (74)

as t → ∞. Equation (73) is characterized by

B = b [1 0 −1; 2 0 −2; 1 0 −1],  z = z.   (75)

The Sobel filter can be used to detect edges in an image (see "image processing", Wikipedia: The Free Encyclopedia).

Prewitt Filter-Based CNN

Assume that the intensity function b(w) and the threshold z(w) satisfy

b(w) = b′ φ(y′ − y),  z(w) = z,   (76)

respectively. Apply the central difference formula and the Gauss integral formula to the spatial derivative and the spatial integral in Eq. (69), respectively.
If each cell is coupled locally only to the cells within the neighborhood of radius r = 1 (Δx = Δy = d), we obtain the Prewitt filter-based CNN equation

du_{ij}/dt = −u_{ij} + b′ [(s_{i−1,j+1} − s_{i−1,j−1}) + (s_{i,j+1} − s_{i,j−1}) + (s_{i+1,j+1} − s_{i+1,j−1})] (Δy/Δx) + z
= −u_{ij} + b {(s_{i−1,j−1} − s_{i−1,j+1}) + (s_{i,j−1} − s_{i,j+1}) + (s_{i+1,j−1} − s_{i+1,j+1})} + z   (77)

or, equivalently,

du_{ij}/dt = −u_{ij} + b {(s_{i−1,j−1} + s_{i,j−1} + s_{i+1,j−1}) − (s_{i−1,j+1} + s_{i,j+1} + s_{i+1,j+1})} + z,   (78)

where b = −b′(Δy/Δx). The solution u_{ij}(t) converges to

u_{ij}(∞) = b {(s_{i−1,j−1} − s_{i−1,j+1}) + (s_{i,j−1} − s_{i,j+1}) + (s_{i+1,j−1} − s_{i+1,j+1})} + z   (79)

as t → ∞. Equation (78) is characterized by the CNN template

B = b [1 0 −1; 1 0 −1; 1 0 −1],  z = z.   (80)

The Prewitt filter can be used to emphasize edges in an image (see "image processing", Wikipedia: The Free Encyclopedia).

Consider next the neural field model without an integral term:

∂u(w, t)/∂t = −u(w, t) + b(w) ∂s(w)/∂x + z(w).   (81)

In this case, the solution u(w, t) converges to

u(w, ∞) = b(w) ∂s(w)/∂x + z(w)   (82)

as t → ∞.

Roberts Filter-Based CNN

Assume that the intensity function b(w) and the threshold z(w) are spatially uniform, that is, b(w) = b′ and z(w) = z. Applying the difference formula to the spatial derivative in Eq. (81), we obtain the following CNN equation:

du_{ij}/dt = −u_{ij} + b′ (s_{i,j+1} − s_{ij})(Δy/Δx) + z = −u_{ij} + b (s_{ij} − s_{i,j+1}) + z,   (83)

where b = −b′(Δy/Δx). The solution u_{ij}(t) converges to

u_{ij}(∞) = b (s_{ij} − s_{i,j+1}) + z   (84)

as t → ∞. Equation (83) is characterized by the CNN template

B = b [0 0 0; 0 1 −1; 0 0 0],  z = z.   (85)

If we replace the partial derivative with respect to x in Eq. (81) with the diagonal derivative, we obtain the Roberts filter-based CNN template

B = b [0 0 0; 0 1 0; 0 0 −1],  z = z.   (86)

The Roberts filter can be used as a high-pass filter to enhance edges in an image (see "image processing", Wikipedia: The Free Encyclopedia).

3.3. Neural field models with second derivative inputs

Consider the neural field model with a second derivative input

∂u(w, t)/∂t = −u(w, t) + ∫ b(w) ∂²s(w)/∂x² dy + z(w).   (87)

In this case, u(w, t) converges to

u(w, ∞) = ∫ b(w) ∂²s(w)/∂x² dy + z(w)   (88)

as t → ∞.

Edge Detection Filter-Based CNN

Assume that the intensity function b(w) and the threshold z(w) satisfy

b(w) = b′ φ(y′ − y),  z(w) = z.
(89)

If we apply the second derivative approximation [Itoh & Chua, 2005] and the Gauss integral formula to the spatial derivative and the spatial integral in Eq. (87), respectively, we obtain the

edge detection filter-based CNN equation

du_{ij}/dt = −u_{ij} + b′ {(s_{i−1,j−1} − 2s_{i−1,j} + s_{i−1,j+1})/Δx² + (s_{i,j−1} − 2s_{ij} + s_{i,j+1})/Δx² + (s_{i+1,j−1} − 2s_{i+1,j} + s_{i+1,j+1})/Δx²} Δy + z
= −u_{ij} + b {(−s_{i−1,j−1} + 2s_{i−1,j} − s_{i−1,j+1}) + (−s_{i,j−1} + 2s_{ij} − s_{i,j+1}) + (−s_{i+1,j−1} + 2s_{i+1,j} − s_{i+1,j+1})} + z,   (90)

where b = −b′(Δy/Δx²). Here we assumed that each cell is coupled locally only to the cells within a neighborhood of radius r = 1 (Δx = Δy = d). The solution u_{ij}(t) converges to

u_{ij}(∞) = b {(−s_{i−1,j−1} + 2s_{i−1,j} − s_{i−1,j+1}) + (−s_{i,j−1} + 2s_{ij} − s_{i,j+1}) + (−s_{i+1,j−1} + 2s_{i+1,j} − s_{i+1,j+1})} + z   (91)

as t → ∞. Equation (90) is characterized by the CNN template

B = b [−1 2 −1; −1 2 −1; −1 2 −1],  z = z.   (92)

Line Detection Filter-Based CNN

If the intensity function b(w) in Eq. (87) is not uniform, we can obtain the line detection filter-based CNN equation

du_{ij}/dt = −u_{ij} + b̂ {(−s_{i−1,j−1} + s_{i−1,j} − s_{i−1,j+1}) + (−s_{i,j−1} + s_{ij} − s_{i,j+1}) + (−s_{i+1,j−1} + s_{i+1,j} − s_{i+1,j+1})} + z,   (93)

which is characterized by

B = b̂ [−1 1 −1; −1 1 −1; −1 1 −1],  z = z,   (94)

where b̂ is a constant.

3.4. Neural field models with Laplacian inputs

Consider the neural field equation

∂u(w, t)/∂t = −u(w, t) + ∫∫ b(w) s(w) dx dy + z(w).   (95)

If we replace the integral over x and y by the two-dimensional Laplacian operator, we obtain the following partial differential equation:

∂u(w, t)/∂t = −u(w, t) + ∂²(b(w)s(w))/∂x² + ∂²(b(w)s(w))/∂y² + z(w).   (96)

The solution u(w, t) converges to

u(w, ∞) = ∂²(b(w)s(w))/∂x² + ∂²(b(w)s(w))/∂y² + z(w)   (97)

as t → ∞.

Laplacian Filter-Based CNN

Assume that the intensity function b(w) and the threshold z(w) are spatially uniform, that is, b(w) = b′ and z(w) = z. If we apply the second derivative approximation to Eq. (96) and assume Δx = Δy, we obtain the Laplacian filter-based CNN equation

du_{ij}/dt = −u_{ij} + b′ (s_{i−1,j} + s_{i,j−1} + s_{i,j+1} + s_{i+1,j} − 4s_{ij})/Δx² + z = −u_{ij} + b (s_{i−1,j} + s_{i,j−1} + s_{i,j+1} + s_{i+1,j} − 4s_{ij}) + z,   (98)

where b = b′/Δx². The solution u_{ij}(t) converges to

u_{ij}(∞) = b (s_{i−1,j} + s_{i,j−1} + s_{i,j+1} + s_{i+1,j} − 4s_{ij}) + z   (99)

as t → ∞. The Laplacian filter-based CNN equation can be characterized by the following CNN template:

B = b [0 1 0; 1 −4 1; 0 1 0],  z = z,   (100)

which can be used to detect the edges of binary images. We also obtain the following CNN template if we incorporate additional connections between the diagonal cells and the center cell:

B = b [1 1 1; 1 −8 1; 1 1 1],  z = z.   (101)

To illustrate some applications, we show two well-known CNN templates. One is the edge detection CNN template

B = [−1 −1 −1; −1 8 −1; −1 −1 −1],  z = −1,   (102)

which extracts the edges of objects in a binary image [Chua, 1998]. The other is the corner detection CNN template

B = [−1 −1 −1; −1 4 −1; −1 −1 −1],  z = −5,   (103)

which extracts the convex corners in a binary image [Chua, 1998].

Sharpening Filter-Based CNN

If the intensity function b(w) is not uniform, we can obtain the following CNN template from Eq. (96):

B = b [0 1 0; 1 −5 1; 0 1 0],  z = z.   (104)

If we change the sign of b, that is, setting b → −b, we obtain the sharpening filter-based CNN template

B = b [0 −1 0; −1 5 −1; 0 −1 0],  z = z,   (105)

which can be used to enhance the sharpness of an image. We also obtain the following CNN template if we incorporate additional connections between the diagonal cells and the center cell:

B = b [−1 −1 −1; −1 9 −1; −1 −1 −1],  z = z.   (106)

4. Multitask Image Processing

In this section, we explore the potential of the CNN to execute multitasking image processing.

4.1. Pre-processors and post-processors

The dynamics of the two-dimensional neural field equation

∂u(w, t)/∂t = −u(w, t) + ∫∫ a(w, w′) f(u(w′, t)) dx′ dy′ + ∫∫ b(w, w′) s(w′) dx′ dy′ + z(w)   (107)

is divided into two processes: the pre-process defined by

β(w) = ∫∫ b(w, w′) s(w′) dx′ dy′ + z(w),   (108)

and the post-process defined by

∂u(w, t)/∂t = −u(w, t) + ∫∫ a(w, w′) f(u(w′, t)) dx′ dy′ + β(w).   (109)

Viewed from another angle, the post-processor (109) can be considered to be a neural field equation with a nonuniform threshold z(w) = β(w) and zero input connections. On the other hand, if the pre-processor is defined by

γ(w) = ∫∫ b(w, w′) s(w′) dx′ dy′,   (110)

then the post-process is given by

∂u(w, t)/∂t = −u(w, t) + ∫∫ a(w, w′) f(u(w′, t)) dx′ dy′ + γ(w) + z(w),   (111)

which can be considered to be a neural field equation with both input stimulus γ(w) and threshold z(w).

Let ξ(w) be a stable equilibrium state of (107). Then ξ(w) satisfies

ξ(w) = ∫∫ a(w, w′) f(ξ(w′)) dx′ dy′ + ∫∫ b(w, w′) s(w′) dx′ dy′ + z(w) = ∫∫ a(w, w′) f(ξ(w′)) dx′ dy′ + γ(w) + z(w) = ∫∫ a(w, w′) f(ξ(w′)) dx′ dy′ + β(w).   (112)

Observe that the pre-processed data

β(w) = γ(w) + z(w) = ∫∫ b(w, w′) s(w′) dx′ dy′ + z(w)   (113)

is modified by the nonlinear term

∫∫ a(w, w′) f(ξ(w′)) dx′ dy′.   (114)

The dynamics of the CNN equation

du_{ij}/dt = −u_{ij} + Σ_{kl ∈ N_{ij}} a_{kl} p_{kl} + Σ_{kl ∈ N_{ij}} b_{kl} s_{kl} + z_{ij}   (115)

is likewise divided into two processes: the pre-process defined by

β_{ij} = Σ_{kl ∈ N_{ij}} b_{kl} s_{kl} + z_{ij},   (116)

and the post-process defined by

du_{ij}/dt = −u_{ij} + Σ_{kl ∈ N_{ij}} a_{kl} p_{kl} + β_{ij}.   (117)

The post-processor (117) can be considered as a CNN equation with a nonuniform threshold z_{ij} = β_{ij} and zero input connections. On the other hand, if the pre-processor is defined by the term

γ_{ij} = Σ_{kl ∈ N_{ij}} b_{kl} s_{kl},   (118)

then the post-processor is given by

du_{ij}/dt = −u_{ij} + Σ_{kl ∈ N_{ij}} a_{kl} p_{kl} + γ_{ij} + z_{ij},   (119)

which can be considered as a CNN equation with input γ_{ij} (without connections) and threshold z_{ij}.

Let ξ_{ij} be a stable equilibrium state of (115). Then ξ_{ij} satisfies

ξ_{ij} = Σ_{kl ∈ N_{ij}} a_{kl} p_{kl} + (γ_{ij} + z_{ij}) = Σ_{kl ∈ N_{ij}} a_{kl} f(ξ_{kl}) + β_{ij},   (120)

where β_{ij} = γ_{ij} + z_{ij}.
Observe that the pre-processed data

β_{ij} = γ_{ij} + z_{ij} = Σ_{kl ∈ N_{ij}} b_{kl} s_{kl} + z_{ij}   (121)

is modified by the nonlinear term

Σ_{kl ∈ N_{ij}} a_{kl} f(ξ_{kl}).   (122)

Thus, Eq. (115) has the potential to execute two tasks simultaneously.
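The split of Eq. (115) into the pre-process (116) and the post-process (117) can be verified numerically: because the input term is constant in time, precomputing β_ij once and then integrating the feedback-only equation reproduces the full trajectory exactly. The Euler scheme, step size, placeholder templates, and replicate borders in the sketch below are our own assumptions.

```python
def f(u):
    return 0.5 * (abs(u + 1.0) - abs(u - 1.0))  # saturation output, Eq. (9)

def conv_b(S, B, z):
    """Pre-process, Eq. (116): beta_ij = sum B*s + z."""
    N, M = len(S), len(S[0])
    def at(i, j):
        return S[min(max(i, 0), N - 1)][min(max(j, 0), M - 1)]
    return [[z + sum(B[di + 1][dj + 1] * at(i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1))
             for j in range(M)]
            for i in range(N)]

def step(U, A, beta, dt=0.1):
    """One Euler step of the post-process du_ij/dt = -u_ij + sum A*f(u) + beta_ij, Eq. (117)."""
    N, M = len(U), len(U[0])
    def at(i, j):
        return U[min(max(i, 0), N - 1)][min(max(j, 0), M - 1)]
    return [[U[i][j] + dt * (-U[i][j] + beta[i][j]
                             + sum(A[di + 1][dj + 1] * f(at(i + di, j + dj))
                                   for di in (-1, 0, 1) for dj in (-1, 0, 1)))
             for j in range(M)]
            for i in range(N)]
```

Feeding beta = conv_b(S, B, z) into step recovers the full Eq. (115); re-evaluating the (constant) input term at every step gives exactly the same states.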

4.2. Multipurpose post-processor

Consider the following post-processor equation:

∂u(w, t)/∂t = −u(w, t) + a′ ∫∫ f(u(w′, t)) dx′ dy′ + b s(w) + z.   (123)

If we apply the trapezoidal rule to Eq. (123), we obtain

du_{ij}/dt = −u_{ij} + (a′/2)(f(u_{i,j−1}) + f(u_{i,j+1}) + f(u_{i−1,j}) + f(u_{i+1,j}) + 4f(u_{ij})) Δx Δy + b s_{ij} + z
= −u_{ij} + a (f(u_{i,j−1}) + f(u_{i,j+1}) + f(u_{i−1,j}) + f(u_{i+1,j}) + 4f(u_{ij})) + b s_{ij} + z,   (124)

which is characterized by the multipurpose post-processor CNN template

A = a [0 1 0; 1 4 1; 0 1 0],  B = b [0 0 0; 0 1 0; 0 0 0],  z = z,   (125)

where a = (a′/2) Δx Δy. Many useful functions can be realized by using this CNN template (see [Chua, 1998; Roska et al., 1999; Hänggi et al., 1999; Itoh & Chua, 2005]):

1. Hole-Filling CNN. The template of the hole-filling CNN is given by

   B = 6,  z = 0.   (126)

   The hole-filling CNN fills the interior of all contours in a binary image.

2. Filled-Contour Extraction CNN. The template of the filled-contour extraction CNN is given by

   B = 2,  z = −4.   (127)

   The filled-contour extraction CNN extracts, from a binary image fed to the initial states, those regions which contain the boundary and completely fill the interior of all closed curves of an input image.

3. Selected Objects-Extraction CNN. The template of the selected objects-extraction CNN is given by

   B = 7,  z = −1.   (128)

   The selected objects-extraction CNN can be used to extract marked objects from a binary input image.

4. Face-Vase-Illusion CNN. The template of the face-vase-illusion CNN is given by

   B = −6,  z = 0.   (129)

   The face-vase-illusion CNN simulates the well-known visual illusion in which the input image is perceived either as two symmetric faces or as a vase, depending on the initial thought or attention.

5. Solid Black Framed Area-Extraction CNN. The template of the solid black framed area-extraction CNN is given by

   B = −0.4,  z = −2.   (130)

   The solid black framed area-extraction CNN extracts binary images representing objects of the initial state which fit and fill in the closed curves of a given binary image.

6.
Noise Removal CNN. The template of the noise removal CNN is given by

   B = 0,  z = 0.   (131)

   The noise removal CNN can be used to remove noise from an input image.

If we incorporate the connections between the diagonal cells and the center cell, we obtain from Eq. (123) a CNN template {A, B, z} whose feedback template A also couples the diagonal neighbors,   (132)

which can achieve the following tasks [Roska et al., 1999; Itoh & Chua, 2005]:

1. Figure-Reconstruction CNN. The template of the figure-reconstruction CNN is given by

   B = 8,  z = −6.5.   (133)

   The figure-reconstruction CNN reconstructs a given image from a part of the input image.

2. Concave-Filling CNN. The template of the concave-filling CNN uses

   z = 9.   (134)

   The concave-filling CNN can be used to fill the concave areas of a given binary image.

4.3. Mean filter-based image processing

The template of the nonuniform mean filter is given by

B = b [1 1 1; 1 8 1; 1 1 1],  z = 0,   (135)

or

B = b [0 1 0; 1 4 1; 0 1 0],  z = 0.   (136)

The image processing task of this filter is illustrated in Fig. 1. In all figures in Secs. 4 and 5, white pixels represent the value −1 and black pixels the value +1. A red cell implies that its state u_{ij} is greater than 1, whereas a green cell implies u_{ij} < −1. Values between −1 and 1 are coded in gray scale. Except for Fig. 22, all other figures are in gray scale, since the output p_{ij} of the CNN cell satisfies −1 ≤ p_{ij} ≤ 1 (see Eq. (9)).

We next show that many multitasking CNN templates can be obtained if we replace the template B of a multipurpose post-processor CNN with the template B of a nonuniform mean filter CNN.

Fig. 1. Nonuniform mean filter CNN.

[1] All numerical simulations in Secs. 4 and 5 are performed using the CNN Simulator (Signal and Information Processing Laboratory, ETH Zürich) [Hänggi et al., 1999].

1. Mean Filter-Based Figure Reconstruction CNN. The template of the figure reconstruction CNN is given in Eq. (137), with the same values as in Eq. (133). The image processing task of the figure reconstruction template is illustrated in Fig. 2. Replacing the template B of Eq. (137) with that of Eq. (135) and choosing b = 1/2, we obtain the following CNN template, which can achieve two tasks:

   B = (1/2) [1 1 1; 1 8 1; 1 1 1],  z = −6.5.   (138)

   In this CNN, noise removal is performed by the pre-processor and figure reconstruction by the post-processor. Noise ε_{ij} in the range |ε_{ij}| < 1 is removed by this template (Fig. 3). The following template can also achieve the same tasks:

   B = (1/2) [0 1 0; 1 4 1; 0 1 0],  z = −6.5.   (139)

2. Mean Filter-Based Concave-Filling CNN. Replacing the template B of Eq. (134) with that of Eq. (135) and choosing b = 1/16 and z = 8, we obtain the following CNN template:

   B = (1/16) [1 1 1; 1 8 1; 1 1 1],  z = 8,   (140)

Fig. 2. Figure reconstruction CNN.

Fig. 3. Mean filter-based figure reconstruction CNN.

which achieves both noise removal and concave-filling. Noise ε_{ij} in the range −1 < ε_{ij} ≤ 0 is removed by this template (Fig. 4).

3. Mean Filter-Based Hole-Filling CNN. Replacing the template B of Eq. (126) with that of Eq. (136) and choosing b = 5/8 and z = −1.5, we obtain the following CNN template:

   B = (5/8) [0 1 0; 1 4 1; 0 1 0],  z = −1.5,   (141)

   which achieves both noise removal and hole-filling. Noise ε_{ij} in the range |ε_{ij}| < 1 is removed by this template (Fig. 5).

4. Mean Filter-Based Solid Black Framed Area Extraction CNN. Replacing the template B of the solid black framed area-extraction CNN (Sec. 4.2) with that of Eq. (136) and choosing b = 1/20 and z = −2, we obtain the following CNN template:

   B = (1/20) [0 1 0; 1 4 1; 0 1 0],  z = −2,   (142)

   which achieves both noise removal and solid black framed area extraction. Noise ε_{ij} in the range |ε_{ij}| < 1 is removed by this template (Fig. 6).

5. Mean Filter-Based Filled Contour Extraction CNN. Replacing the template B of Eq. (127) with that of Eq. (136) and choosing b = 1/2 and z = −3, we obtain the following CNN template:

   B = (1/2) [0 1 0; 1 4 1; 0 1 0],  z = −3,   (143)

Fig. 4. Mean filter-based concave-filling CNN.

Fig. 5. Mean filter-based hole-filling CNN.

Fig. 6. Mean filter-based solid black framed area extraction CNN.

which achieves both noise removal and filled contour extraction. The noise ε_ij in the range |ε_ij| < 1 is removed by this template (Fig. 7).

Sharpening filter-based image processing

The template of the sharpening filter is given by Eq. (144), with parameter b. We next show that many multitasking CNN templates can be obtained if we replace the template B of the multipurpose post-processor CNN with the B template of the sharpening CNN.

1. Sharpening Filter-Based Horizontal Hole Detection CNN

The template of the horizontal hole detection CNN [Chua 1998] is given by Eq. (145). Replacing the template B of Eq. (145) with that of Eq. (144) and choosing b = 1/10 and z = 0, we obtain the CNN template (146), which has the potential ability to achieve both image sharpening and horizontal hole detection (Fig. 8).

Fig. 7. Mean filter-based filled contour extraction CNN.

2. Sharpening Filter-Based Horizontal and Vertical Line Detection CNN

The template of the horizontal and vertical line detection CNN [Itoh & Chua 2005] is given by Eq. (147). Replacing the template B of Eq. (147) with that of Eq. (144) and choosing b = 1/3 and z = 0, we obtain the CNN template (148), which has the potential ability to achieve both image sharpening and horizontal and vertical line detection (Fig. 9).

3. Sharpening Filter-Based Filled Contour Extraction CNN

Replacing the template B of Eq. (127) with that of Eq. (144) and choosing b = 1/2 and z = 4, we obtain the CNN template (149), which has the potential ability to achieve both image sharpening and filled contour extraction (Fig. 10).

4. Sharpening Filter-Based Hole-Filling CNN

Replacing the template B of Eq. (126) with that of Eq. (144) and choosing b = 5/9 and z = 4,

Fig. 8. Sharpening filter-based horizontal hole detection CNN.
Fig. 9. Sharpening filter-based horizontal and vertical line detection CNN.

Fig. 10. Sharpening filter-based filled contour extraction CNN.

we obtain the CNN template (150), which has the potential ability to achieve both image sharpening and hole-filling (Fig. 11).

5. Sharpening Filter-Based Solid Black Framed Area Extraction CNN

Replacing the template B of Eq. (131) with that of Eq. (144) and choosing b = 1/2 and z = 4.5, we obtain the CNN template (151), which has the potential ability to achieve both image sharpening and solid black framed area extraction (Fig. 12).

Laplacian filter-based image processing

1. Laplacian Filter-Based Horizontal Line Detection CNN

The template of the threshold CNN [Chua 1998] is given by Eq. (152).

Fig. 11. Sharpening filter-based hole-filling CNN.

Fig. 12. Sharpening filter-based solid black frame area extraction CNN.
Fig. 13. Laplacian filter-based Müller-Lyer illusion CNN.

Replacing the template B of Eq. (152) with that of the Laplacian-filter CNN, given by Eq. (153) with parameter b and z = 0, and choosing b = 1/10 and z = 2.1, we obtain the Müller-Lyer illusion CNN template (154), which simulates the well-known Müller-Lyer illusion and removes the noise. The noise ε_ij in the range |ε_ij| < 1 is removed by this template (Fig. 13).

5. Visual Illusion CNN

A visual illusion is the fallacious perception of an actually existing object. Research on visual illusions can provide fundamental insights into general brain mechanisms of perception and cognition. Some well-known visual illusions, such as the Müller-Lyer illusion and the Herring-grid illusion, have been simulated by CNN image processing [Chua 1998; Chua & Roska 2002]. Because of its rich image processing properties, the CNN can simulate many more visual illusions using simpler templates. For more details on the visual illusions simulated in this paper, see "Optical illusion", Wikipedia: The Free Encyclopedia.
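The Laplacian-filter B template that recurs in these illusion CNNs responds only where intensity changes and is exactly zero on flat regions. A small sketch (the scaled 8-neighbour Laplacian shown here is one common choice, consistent with the scaling b = 1/10 chosen above; the paper's exact entries are those of Eq. (153)):

```python
import numpy as np

def laplacian3(img, b=0.1):
    # Scaled 8-neighbour Laplacian (one common choice; cf. Eq. (153)):
    # zero on flat regions, nonzero only across intensity changes.
    k = b * np.array([[1.0,  1.0, 1.0],
                      [1.0, -8.0, 1.0],
                      [1.0,  1.0, 1.0]])
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

# Flat regions give exactly zero response; a vertical step from -1 to 1
# responds only in the two columns adjacent to the step.
flat = laplacian3(np.full((4, 4), 0.7))
step = np.hstack([np.full((4, 2), -1.0), np.full((4, 2), 1.0)])
edges = laplacian3(step)
```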

1. Müller-Lyer Illusion CNN

The threshold CNN template [Chua 1998] is given by Eq. (155), with parameter b and a constant threshold z. Replacing the template B of this CNN with that of the Laplacian-filter CNN (153) and choosing z = 2.1, we obtain the Müller-Lyer illusion CNN template (156) [Chua 1998], where b = 0.1. The Müller-Lyer illusion CNN simulates the visual illusion where two line segments of identical length, terminated respectively by converging and diverging arrowheads, are perceived as if one segment were shorter than the other (Fig. 14).

The horizontal line detection CNN template with a threshold parameter z is given in [Itoh & Chua 2005] as Eq. (157). Replacing the template B of this CNN with that of the Laplacian-filter CNN, we obtain another kind of Müller-Lyer illusion CNN template (158), with z = 2.1, which deletes the arrowheads (Fig. 15).

2. Ponzo Illusion CNN I

By choosing z = 1 in Eq. (157), we obtain the Ponzo illusion CNN template (159). The Ponzo illusion CNN simulates the visual illusion where two line segments of identical length inside two slanted lines, like an inverted V, are perceived as if one segment were shorter than the other (Fig. 16). The slanted lines can be further deleted if we use the template of the horizontal line detection CNN (160) with z = 0 (Fig. 17).

Fig. 14. Müller-Lyer illusion CNN.
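The threshold CNN of Eq. (155) reduces, cell by cell, to a single uncoupled cell with self-feedback 2 whose steady output thresholds the initial state. A one-cell sketch (a self-feedback of 2 is the standard choice; w stands for the combined B*u + z drive, and the values used below are illustrative):

```python
def threshold_cell(x0, w, dt=0.01, steps=3000):
    # Uncoupled CNN cell: dx/dt = -x + 2*y + w, y = 0.5*(|x+1| - |x-1|).
    # In the linear region dx/dt = x + w, so x is repelled from -w and
    # the steady output is sign(x0 + w): the cell thresholds x0 at -w.
    x = x0
    for _ in range(steps):
        y = 0.5 * (abs(x + 1.0) - abs(x - 1.0))
        x += dt * (-x + 2.0 * y + w)
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))
```

Choosing the bias z thus shifts the gray level at which the image is binarized, which is how the illusion templates above tune their thresholds.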

Fig. 15. Another kind of Müller-Lyer illusion CNN.
Fig. 16. Ponzo illusion CNN I.
Fig. 17. Another kind of Ponzo illusion CNN I.

3. Ponzo Illusion CNN II

By replacing the template B of the threshold CNN (152) with that of the Laplacian-filter CNN and choosing z = 10, we obtain yet another kind of Ponzo illusion CNN template (161). This Ponzo-type illusion CNN simulates the visual illusion where line segments of identical length inside two slanted lines are perceived as if one segment were shorter than the others (Fig. 18).

4. Horizontal-Vertical Illusion CNN

By replacing the template B of the threshold CNN with that of the nonuniform mean filter CNN and choosing z = 5, we obtain the horizontal-vertical illusion CNN template (162). The horizontal-vertical illusion CNN simulates the visual illusion where a vertical line segment is perceived to be longer than an equal-length horizontal line segment (Fig. 19). In order to show the visual illusion effectively, two pictures like an inverted T are given as initial input images. Observe that the illusion persists even if the vertical lines have different thicknesses.

5. Herring-Grid Illusion CNN

The template of the Laplacian post-processor CNN is given by Eq. (163), with parameter a and threshold z. By replacing the template B of this CNN with that of the Laplacian-filter CNN and choosing z = 1, we obtain the Herring-grid illusion CNN template (164) in [Chua 1998], where the boundary condition for the input is fixed to 0. The Herring-grid illusion CNN simulates the visual illusion where an input image of closely-spaced uniform black squares is perceived at a distance as if there were gray patches at the intersections of the horizontal and vertical white grid lines separating the squares (Fig. 20).

6. Mach Effect Illusion CNN

The template of the sharpening filter CNN can be used to obtain the following Mach effect

Fig. 18. Ponzo illusion CNN II.

Fig. 19. Horizontal-vertical illusion CNN.
Fig. 20. Herring-grid illusion CNN (output image at t = 1.2).

illusion CNN template:

    B = [ -q     -q    -q
          -q   1 + 8q  -q
          -q     -q    -q ],   z = 0,            (165)

where q = 0.2. The Mach effect illusion CNN simulates the visual perception of edge enhancement seen at boundaries between abrupt changes in luminance (Fig. 21).

7. Ebbinghaus Illusion CNN

The template of the multipurpose post-processor can be used to obtain the Ebbinghaus illusion CNN template (166), with z = 8. The Ebbinghaus illusion CNN simulates the visual illusion where two central squares of the same size appear to be of different sizes (Fig. 22).

8. Poggendorff Illusion CNN

The template of the multipurpose post-processor can be used to obtain the Poggendorff illusion CNN template (167). Note that some elements of the template A are zero. The Poggendorff illusion CNN simulates the visual illusion where the two ends of a straight line segment passing behind an obscuring rectangle are perceived to have an offset (Fig. 23).

9. Ambiguous Illusion CNN

Using the logic NOT operation template with

Fig. 21. Mach effect illusion CNN.
Fig. 22. Ebbinghaus illusion CNN.
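The coefficients of the kernel in Eq. (165) sum to 1 (a -q surround around a 1 + 8q centre), so flat regions pass through unchanged while luminance steps overshoot on both sides, which is exactly the Mach-band effect. A small numpy sketch with q = 0.2 (replicated borders are our own choice):

```python
import numpy as np

def sharpen3(img, q=0.2):
    # Kernel of Eq. (165): -q surround, 1 + 8q centre; entries sum to 1,
    # so flat areas are preserved and step edges overshoot (Mach bands).
    k = np.full((3, 3), -q)
    k[1, 1] = 1.0 + 8.0 * q
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

# A step from -1 to 1 undershoots just before the boundary and
# overshoots just after it, while far-from-edge pixels are unchanged.
step = np.hstack([np.full((3, 3), -1.0), np.full((3, 3), 1.0)])
out = sharpen3(step)
```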

Fig. 23. Poggendorff illusion CNN.

a threshold parameter [Chua 1998], we obtain the ambiguous illusion CNN templates (168), where c = 1 or c = −1. The ambiguous illusion CNN simulates the visual illusion where you either see the word LIFE or you see five white shapes; once you are able to read the letters, it is difficult not to see the word (Fig. 24). Observe that, unlike the other images shown so far, the images displayed in Figs. 24(c) and 24(f) are not gray-scale output images but color images representing coded values of the cell state variables. By using the cell state image, we can see the ambiguous illusion clearly.

10. Face-Vase Illusion CNN

As stated in the previous section, the face-vase illusion CNN template [Chua 1998; Itoh & Chua 2003] is given by Eq. (169). The face-vase illusion CNN simulates the visual illusion where the image is perceived either as two symmetric faces or as a vase, depending on the initial thought or attention (Fig. 25).

11. Stripe Illusion CNN

The template of the one-dimensional nonuniform mean filter is given by Eq. (170), with threshold z. Replacing the feedback template A of the stripe illusion CNN with that of the threshold CNN template and choosing z = 4, we obtain the stripe illusion CNN template (171). The stripe illusion CNN simulates the visual illusion where a square drawn with horizontal lines is perceived to be longer in the vertical direction (Fig. 26).

12. Baldwin Illusion CNN

Replacing the template B of the threshold CNN with that of the uniform mean filter CNN, we obtain the Baldwin illusion CNN template (172), with z = 2. The Baldwin illusion CNN simulates the visual illusion where the distance between the two

Fig. 24. Ambiguous illusion CNN. (a)-(f): c = 1; (g)-(l): c = −1. Figures (c) and (f) are the cell state images; figures (i) and (l) are the output images for c = −1 at t = 0.4 and t = 20, respectively.

small boxes on the right appears to be longer than the distance between the two large boxes on the left, even though the two distances are equal (Fig. 27).

13. Munsterberg Illusion CNN (Cafe-Wall Illusion CNN)

Replacing the template B of the multipurpose post-processor with that of the nonuniform mean filter, we obtain the following

Fig. 25. Face-vase illusion CNN (the outputs shown are cell state images).
Fig. 26. Stripe illusion CNN.

Fig. 27. Baldwin illusion CNN.

Munsterberg illusion CNN (Cafe-wall illusion CNN) template (173). The Munsterberg illusion CNN simulates the visual illusion where the row of black rectangles is perceived as sloping alternately upward and downward (Fig. 28).

14. Size Illusion CNN

The template of the multipurpose post-processor can be used to obtain the size illusion CNN template (174), with z = 8. The size illusion CNN simulates the visual illusion where an expanding rectangle appears to be larger than a shrinking rectangle (Fig. 29).

15. Enhanced Darkness Illusion CNN

The template of the nonuniform mean filter CNN can be used to obtain the enhanced darkness illusion CNN template (175), with z = 0. The enhanced darkness illusion CNN simulates the visual illusion where all gray squares have the same brightness, but the gray squares on a black background appear darker than those on a white background (Fig. 30).

16. Enhanced Brightness Illusion CNN

The template of the sharpening filter CNN can be used to obtain the enhanced brightness illusion CNN template (176). The enhanced brightness illusion CNN simulates the visual illusion where two gray objects have the same brightness, but the gray object on a black background appears brighter than that on a white background (Fig. 31).

17. Zöllner Illusion CNN

Replacing the template B of the horizontal-line detection CNN with that of the Laplacian-filter CNN and choosing z = 5, we obtain the Zöllner illusion CNN template (177). The Zöllner illusion CNN simulates the visual illusion where two parallel lines appear to be tilted at an angle, or to be expanded. In this computer simulation, the output image indicates the expanding direction (Fig. 32).
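The enhanced darkness/brightness pair comes down to centre-surround filtering: the same mid-gray patch inherits an opposite contribution from a black versus a white surround. A toy demonstration (the kernel values below are illustrative assumptions, not the entries of Eqs. (175) or (176)):

```python
import numpy as np

def centre_surround(img):
    # Illustrative centre-surround kernel: excitatory centre, weak
    # inhibitory surround (not the paper's exact template values).
    k = np.full((3, 3), -0.1)
    k[1, 1] = 1.8
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

def patch_response(bg):
    # Mid-gray pixel (0 on the [-1, 1] scale) on a uniform background.
    img = np.full((7, 7), bg)
    img[3, 3] = 0.0
    return centre_surround(img)[3, 3]

# The same gray pixel filters brighter on black (bg = -1) than on
# white (bg = +1), mirroring the enhanced brightness/darkness illusions.
on_black = patch_response(-1.0)
on_white = patch_response(+1.0)
```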

Fig. 28. Munsterberg illusion CNN (output images at t = 0.33 and t = 1.1).
Fig. 29. Size illusion CNN.

Fig. 30. Enhanced darkness illusion CNN (output image at t = 0.3).
Fig. 31. Enhanced brightness illusion CNN.
Fig. 32. Zöllner illusion CNN.

18. Bourdon Illusion CNN

Replacing the template B of the multipurpose post-processor CNN with that of the diagonal Laplacian-filter CNN and choosing z = 6, we obtain the Bourdon illusion CNN template (178), where the boundary condition for the input is fixed to 0. The Bourdon illusion CNN simulates the visual illusion where the upper right line of two triangles appears to tilt clockwise, and the left one counter-clockwise (Fig. 33).

19. Oppel-Kundt Illusion CNN

Replacing the template B of the threshold CNN with that of the one-dimensional nonuniform mean-filter CNN and choosing z = 4, we obtain the Oppel-Kundt illusion CNN template (179). The Oppel-Kundt illusion CNN simulates the visual illusion where an empty space looks narrower than a space divided by some objects (Fig. 34).

20. Delboeuf Illusion CNN

Replacing the template B of the threshold CNN with that of the mean-filter CNN and choosing

Fig. 33. Bourdon illusion CNN (output image at t = 2.4).
Fig. 34. Oppel-Kundt illusion CNN.

Fig. 35. Delboeuf illusion CNN.

z = 2, we obtain the Delboeuf illusion CNN template (180). The Delboeuf illusion CNN simulates the visual illusion where a square placed on a big square looks smaller than one placed on a small square (Fig. 35).

6. Painting-Like Image Processing CNN

There are several image processing techniques which can transform an image into a painting or other artistic styles. Since the CNN has pattern formation and self-organizing properties, it can produce images with the human expressive characteristics or important details found in real paintings. Research on painting-like image processing with the CNN may provide some insight into general brain mechanisms.

6.1. Texture generation image processing

In this section, we show that the CNN pattern formation property can generate textures in images.

1. Checkerboard Texture

The template of the checkerboard pattern formation CNN [Chua 1998] is given by Eq. (181), with b = 0. This template generates a checkerboard pattern under a random initial condition. If we set b = 2, then this template generates an interesting checkerboard texture in a given image, as shown in Fig. 36.

2. Striped Texture

The template of the striped pattern formation CNN [Chua 1998] is given by Eq. (182), with b = 0. This template generates a striped pattern under a random initial condition. If we set b = 2, then this template generates an interesting striped texture in a given image, as shown in Fig. 37.

All numerical simulations in this section are performed using the simulator CELL (Circuit Lab, Department of Electrical Engineering, University of Rome Tor Vergata). In this simulator, color image processing is possible by means of a three-layer (R-G-B) network; we applied the same template to each color component.
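These texture templates run the CNN as an autonomous pattern-forming system: with b = 0 there is no input drive, and the fastest-growing spatial mode of the feedback template A organizes a small random initial state into a periodic pattern. A minimal sketch (the anti-diffusion template values below are our own assumption, not the entries of Eq. (181)):

```python
import numpy as np

def cnn_out(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def correlate3_wrap(x, k):
    # 3x3 correlation with periodic (wrap-around) boundary conditions
    out = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += k[di + 1, dj + 1] * np.roll(np.roll(x, -di, 0), -dj, 1)
    return out

def texture_cnn(x0, A, dt=0.02, steps=2000):
    # Autonomous pattern-formation CNN: dx/dt = -x + A * y, no input term.
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + correlate3_wrap(cnn_out(x), A))
    return cnn_out(x)

# Illustrative anti-diffusion feedback: self-excitation plus nearest-
# neighbour inhibition makes the checkerboard mode the fastest-growing
# one, so a small random state organizes into a checkerboard-like pattern.
A = np.array([[ 0.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  0.0]])
rng = np.random.default_rng(0)
pattern = texture_cnn(rng.uniform(-0.1, 0.1, (16, 16)), A)
```

Adding a nonzero B term biases this competition with the input image, which is how the same template stamps a texture onto a picture rather than onto noise.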

Fig. 36. Checker texture (output at t = 20).
Fig. 37. Striped texture (output at t = 10).

3. Cortex Texture

The template of the cortex pattern formation CNN [Chua 1998] is given by Eq. (183), with b = 0 and z = 0. This template generates cortex patterns under random initial conditions. For b = 1.5, 2 and 3, this template generates interesting cortex textures in a given image, as shown in Figs. 38-40.

4. Squiggle Texture

The template of the squiggle pattern formation CNN [Chua 1998] is given by Eq. (184), with b = 0. This template generates a cortex pattern under a random initial condition. If we set b = 20 and z = 0, then this template generates an interesting squiggle texture in a given image, as shown in Fig. 41.

Fig. 38. Cortex texture (b = 1.5).
Fig. 39. Cortex texture (b = 2).
Fig. 40. Cortex texture (b = 3).

Fig. 41. Squiggle texture (b = 20, z = 0).
Fig. 42. Fingerprint texture (a = 1).

5. Fingerprint Texture

The template of the fingerprint enhancement CNN [Crounse & Chua 1995] is given by Eq. (185), with parameter a and z = 0. For a = 0, this template generates fingerprint patterns under random initial conditions. If a = 1 or a = 2, it generates a fingerprint texture in given input images, as shown in Figs. 42 and 43.

6. Maze Texture

The templates of the maze pattern formation CNNs are given by Eqs. (186) and (187), with z = 0. These templates generate maze (or striped) patterns under random initial conditions. They also

Fig. 43. Fingerprint texture (a = 2).

generate a maze (or striped) texture in given input images, as shown in Figs. 44 and 45.

7. Miscellaneous Textures

The templates given in Eq. (188), with z = 0, also generate interesting textures in given input images, as shown in Figs. 46-48.

6.2. Illustration-style image processing

The CNN can transform given input images into various interesting artistic illustration styles, because it is endowed with the pattern formation property. We next show that some well-known CNN templates produce illustration-like images from a mandrill image.

The multipurpose post-processor CNN template is given by Eq. (189), with z = 5 or z = 9; it produces the illustration-like images in Figs. 49 and 50. The patch pattern formation CNN template [Chua 1998] is given by Eq. (190), with z = 0 and b = 0 or b = 0.5; it produces the illustration-like images in Figs. 51 and 52. The cortex pattern formation CNN template of Eq. (191), with z = 1, produces the illustration-like image in Fig. 53.

Fig. 44. Maze texture I (output at t = 10).
Fig. 45. Maze texture II (output at t = 3.5).
Fig. 46. Miscellaneous texture I.

Fig. 47. Miscellaneous texture II.
Fig. 48. Miscellaneous texture III.
Fig. 49. Multipurpose post-processor CNN (z = 5, output at t = 1.7).

Fig. 50. Multipurpose post-processor CNN (z = 9, output at t = 0.06).
Fig. 51. Patch pattern formation CNN (b = 0).
Fig. 52. Patch pattern formation CNN (b = 0.5).


First-order overdetermined systems. for elliptic problems. John Strain Mathematics Department UC Berkeley July 2012 First-order overdetermined systems for elliptic problems John Strain Mathematics Department UC Berkeley July 2012 1 OVERVIEW Convert elliptic problems to first-order overdetermined form Control error via

More information

Medical Image Analysis

Medical Image Analysis Medical Image Analysis CS 593 / 791 Computer Science and Electrical Engineering Dept. West Virginia University 23rd January 2006 Outline 1 Recap 2 Edge Enhancement 3 Experimental Results 4 The rest of

More information

Introduction to Computer Vision. 2D Linear Systems

Introduction to Computer Vision. 2D Linear Systems Introduction to Computer Vision D Linear Systems Review: Linear Systems We define a system as a unit that converts an input function into an output function Independent variable System operator or Transfer

More information

Review Smoothing Spatial Filters Sharpening Spatial Filters. Spatial Filtering. Dr. Praveen Sankaran. Department of ECE NIT Calicut.

Review Smoothing Spatial Filters Sharpening Spatial Filters. Spatial Filtering. Dr. Praveen Sankaran. Department of ECE NIT Calicut. Spatial Filtering Dr. Praveen Sankaran Department of ECE NIT Calicut January 7, 203 Outline 2 Linear Nonlinear 3 Spatial Domain Refers to the image plane itself. Direct manipulation of image pixels. Figure:

More information

Lecture 8: Interest Point Detection. Saad J Bedros

Lecture 8: Interest Point Detection. Saad J Bedros #1 Lecture 8: Interest Point Detection Saad J Bedros sbedros@umn.edu Review of Edge Detectors #2 Today s Lecture Interest Points Detection What do we mean with Interest Point Detection in an Image Goal:

More information

CSCI 250: Intro to Robotics. Spring Term 2017 Prof. Levy. Computer Vision: A Brief Survey

CSCI 250: Intro to Robotics. Spring Term 2017 Prof. Levy. Computer Vision: A Brief Survey CSCI 25: Intro to Robotics Spring Term 27 Prof. Levy Computer Vision: A Brief Survey What Is Computer Vision? Higher-order tasks Face recognition Object recognition (Deep Learning) What Is Computer Vision?

More information

MTH301 Calculus II Glossary For Final Term Exam Preparation

MTH301 Calculus II Glossary For Final Term Exam Preparation MTH301 Calculus II Glossary For Final Term Exam Preparation Glossary Absolute maximum : The output value of the highest point on a graph over a given input interval or over all possible input values. An

More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 05 Image Processing Basics 13/02/04 http://www.ee.unlv.edu/~b1morris/ecg782/

More information

Errata for Robot Vision

Errata for Robot Vision Errata for Robot Vision This is a list of known nontrivial bugs in Robot Vision 1986) by B.K.P. Horn, MIT Press, Cambridge, MA ISBN 0-6-08159-8 and McGraw-Hill, New York, NY ISBN 0-07-030349-5. If you

More information

Learning and Memory in Neural Networks

Learning and Memory in Neural Networks Learning and Memory in Neural Networks Guy Billings, Neuroinformatics Doctoral Training Centre, The School of Informatics, The University of Edinburgh, UK. Neural networks consist of computational units

More information

PHYS 7411 Spring 2015 Computational Physics Homework 4

PHYS 7411 Spring 2015 Computational Physics Homework 4 PHYS 7411 Spring 215 Computational Physics Homework 4 Due by 3:pm in Nicholson 447 on 3 March 215 Any late assignments will be penalized in the amount of 25% per day late. Any copying of computer programs

More information

Determining Constant Optical Flow

Determining Constant Optical Flow Determining Constant Optical Flow Berthold K.P. Horn Copyright 003 The original optical flow algorithm [1] dealt with a flow field that could vary from place to place in the image, as would typically occur

More information

Used to extract image components that are useful in the representation and description of region shape, such as

Used to extract image components that are useful in the representation and description of region shape, such as Used to extract image components that are useful in the representation and description of region shape, such as boundaries extraction skeletons convex hull morphological filtering thinning pruning Sets

More information

Image Filtering. Slides, adapted from. Steve Seitz and Rick Szeliski, U.Washington

Image Filtering. Slides, adapted from. Steve Seitz and Rick Szeliski, U.Washington Image Filtering Slides, adapted from Steve Seitz and Rick Szeliski, U.Washington The power of blur All is Vanity by Charles Allen Gillbert (1873-1929) Harmon LD & JuleszB (1973) The recognition of faces.

More information

Self-correcting Symmetry Detection Network

Self-correcting Symmetry Detection Network Self-correcting Symmetry Detection Network Wonil hang 1,yunAhSong 2, Sang-oon Oh 3, and Soo-Young Lee 1,2 1 Department of Bio and Brain Engineering, 2 Department of Electrical Engineering, KAIST, Daejeon

More information

A NONLINEAR DYNAMICS PERSPECTIVE OF WOLFRAM S NEW KIND OF SCIENCE. PART I: THRESHOLD OF COMPLEXITY

A NONLINEAR DYNAMICS PERSPECTIVE OF WOLFRAM S NEW KIND OF SCIENCE. PART I: THRESHOLD OF COMPLEXITY Tutorials and Reviews International Journal of Bifurcation and Chaos, Vol. 12, No. 12 (2002) 2655 2766 c World Scientific Publishing Company A NONLINEAR DYNAMICS PERSPECTIVE OF WOLFRAM S NEW KIND OF SCIENCE.

More information

Additional Pointers. Introduction to Computer Vision. Convolution. Area operations: Linear filtering

Additional Pointers. Introduction to Computer Vision. Convolution. Area operations: Linear filtering Additional Pointers Introduction to Computer Vision CS / ECE 181B andout #4 : Available this afternoon Midterm: May 6, 2004 W #2 due tomorrow Ack: Prof. Matthew Turk for the lecture slides. See my ECE

More information

EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, Sasidharan Sreedharan

EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, Sasidharan Sreedharan EE04 804(B) Soft Computing Ver. 1.2 Class 2. Neural Networks - I Feb 23, 2012 Sasidharan Sreedharan www.sasidharan.webs.com 3/1/2012 1 Syllabus Artificial Intelligence Systems- Neural Networks, fuzzy logic,

More information

Edge Detection. CS 650: Computer Vision

Edge Detection. CS 650: Computer Vision CS 650: Computer Vision Edges and Gradients Edge: local indication of an object transition Edge detection: local operators that find edges (usually involves convolution) Local intensity transitions are

More information

Lifting Detail from Darkness

Lifting Detail from Darkness Lifting Detail from Darkness J.P.Lewis zilla@computer.org Disney The Secret Lab Lewis / Detail from Darkness p.1/38 Brightness-Detail Decomposition detail image image intensity Separate detail by Wiener

More information

Lecture 8: Interest Point Detection. Saad J Bedros

Lecture 8: Interest Point Detection. Saad J Bedros #1 Lecture 8: Interest Point Detection Saad J Bedros sbedros@umn.edu Last Lecture : Edge Detection Preprocessing of image is desired to eliminate or at least minimize noise effects There is always tradeoff

More information

Appendix C: Recapitulation of Numerical schemes

Appendix C: Recapitulation of Numerical schemes Appendix C: Recapitulation of Numerical schemes August 31, 2009) SUMMARY: Certain numerical schemes of general use are regrouped here in order to facilitate implementations of simple models C1 The tridiagonal

More information

Image Gradients and Gradient Filtering Computer Vision

Image Gradients and Gradient Filtering Computer Vision Image Gradients and Gradient Filtering 16-385 Computer Vision What is an image edge? Recall that an image is a 2D function f(x) edge edge How would you detect an edge? What kinds of filter would you use?

More information

Gradient Descent and Implementation Solving the Euler-Lagrange Equations in Practice

Gradient Descent and Implementation Solving the Euler-Lagrange Equations in Practice 1 Lecture Notes, HCI, 4.1.211 Chapter 2 Gradient Descent and Implementation Solving the Euler-Lagrange Equations in Practice Bastian Goldlücke Computer Vision Group Technical University of Munich 2 Bastian

More information

Competitive Learning for Deep Temporal Networks

Competitive Learning for Deep Temporal Networks Competitive Learning for Deep Temporal Networks Robert Gens Computer Science and Engineering University of Washington Seattle, WA 98195 rcg@cs.washington.edu Pedro Domingos Computer Science and Engineering

More information

Math 127C, Spring 2006 Final Exam Solutions. x 2 ), g(y 1, y 2 ) = ( y 1 y 2, y1 2 + y2) 2. (g f) (0) = g (f(0))f (0).

Math 127C, Spring 2006 Final Exam Solutions. x 2 ), g(y 1, y 2 ) = ( y 1 y 2, y1 2 + y2) 2. (g f) (0) = g (f(0))f (0). Math 27C, Spring 26 Final Exam Solutions. Define f : R 2 R 2 and g : R 2 R 2 by f(x, x 2 (sin x 2 x, e x x 2, g(y, y 2 ( y y 2, y 2 + y2 2. Use the chain rule to compute the matrix of (g f (,. By the chain

More information

Intensity Transformations and Spatial Filtering: WHICH ONE LOOKS BETTER? Intensity Transformations and Spatial Filtering: WHICH ONE LOOKS BETTER?

Intensity Transformations and Spatial Filtering: WHICH ONE LOOKS BETTER? Intensity Transformations and Spatial Filtering: WHICH ONE LOOKS BETTER? : WHICH ONE LOOKS BETTER? 3.1 : WHICH ONE LOOKS BETTER? 3.2 1 Goal: Image enhancement seeks to improve the visual appearance of an image, or convert it to a form suited for analysis by a human or a machine.

More information

Ângelo Cardoso 27 May, Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico

Ângelo Cardoso 27 May, Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico BIOLOGICALLY INSPIRED COMPUTER MODELS FOR VISUAL RECOGNITION Ângelo Cardoso 27 May, 2010 Symbolic and Sub-Symbolic Learning Course Instituto Superior Técnico Index Human Vision Retinal Ganglion Cells Simple

More information

Lecture 6: Edge Detection. CAP 5415: Computer Vision Fall 2008

Lecture 6: Edge Detection. CAP 5415: Computer Vision Fall 2008 Lecture 6: Edge Detection CAP 5415: Computer Vision Fall 2008 Announcements PS 2 is available Please read it by Thursday During Thursday lecture, I will be going over it in some detail Monday - Computer

More information

Spatial Vision: Primary Visual Cortex (Chapter 3, part 1)

Spatial Vision: Primary Visual Cortex (Chapter 3, part 1) Spatial Vision: Primary Visual Cortex (Chapter 3, part 1) Lecture 6 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Princeton University, Spring 2015 1 Chapter 2 remnants 2 Receptive field:

More information

Wavelets and Multiresolution Processing

Wavelets and Multiresolution Processing Wavelets and Multiresolution Processing Wavelets Fourier transform has it basis functions in sinusoids Wavelets based on small waves of varying frequency and limited duration In addition to frequency,

More information

Moving interface problems. for elliptic systems. John Strain Mathematics Department UC Berkeley June 2010

Moving interface problems. for elliptic systems. John Strain Mathematics Department UC Berkeley June 2010 Moving interface problems for elliptic systems John Strain Mathematics Department UC Berkeley June 2010 1 ALGORITHMS Implicit semi-lagrangian contouring moves interfaces with arbitrary topology subject

More information

Lecture Outline. Basics of Spatial Filtering Smoothing Spatial Filters. Sharpening Spatial Filters

Lecture Outline. Basics of Spatial Filtering Smoothing Spatial Filters. Sharpening Spatial Filters 1 Lecture Outline Basics o Spatial Filtering Smoothing Spatial Filters Averaging ilters Order-Statistics ilters Sharpening Spatial Filters Laplacian ilters High-boost ilters Gradient Masks Combining Spatial

More information

Lecture 3: Linear Filters

Lecture 3: Linear Filters Lecture 3: Linear Filters Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Images as functions Linear systems (filters) Convolution and correlation Discrete Fourier Transform (DFT)

More information

Motion Perception 1. PSY305 Lecture 12 JV Stone

Motion Perception 1. PSY305 Lecture 12 JV Stone Motion Perception 1 PSY305 Lecture 12 JV Stone 1 Structure Human visual system as a band-pass filter. Neuronal motion detection, the Reichardt detector. The aperture problem. 2 The visual system is a temporal

More information

Image Enhancement: Methods. Digital Image Processing. No Explicit definition. Spatial Domain: Frequency Domain:

Image Enhancement: Methods. Digital Image Processing. No Explicit definition. Spatial Domain: Frequency Domain: Image Enhancement: No Explicit definition Methods Spatial Domain: Linear Nonlinear Frequency Domain: Linear Nonlinear 1 Spatial Domain Process,, g x y T f x y 2 For 1 1 neighborhood: Contrast Enhancement/Stretching/Point

More information

The Visual Perception of Images

The Visual Perception of Images C. A. Bouman: Digital Image Processing - January 8, 2018 1 The Visual Perception of Images In order to understand images you must understand how humans perceive visual stimulus. Objectives: Understand

More information

NONLINEAR DIFFUSION PDES

NONLINEAR DIFFUSION PDES NONLINEAR DIFFUSION PDES Erkut Erdem Hacettepe University March 5 th, 0 CONTENTS Perona-Malik Type Nonlinear Diffusion Edge Enhancing Diffusion 5 References 7 PERONA-MALIK TYPE NONLINEAR DIFFUSION The

More information

STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES

STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES International Journal of Bifurcation and Chaos, Vol. 4, No. 5 (2004) 725 772 c World Scientific Publishing Company STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES MAKOTO ITOH Department

More information

Lecture 3: Linear Filters

Lecture 3: Linear Filters Lecture 3: Linear Filters Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Images as functions Linear systems (filters) Convolution and correlation Discrete Fourier Transform (DFT)

More information

ADI iterations for. general elliptic problems. John Strain Mathematics Department UC Berkeley July 2013

ADI iterations for. general elliptic problems. John Strain Mathematics Department UC Berkeley July 2013 ADI iterations for general elliptic problems John Strain Mathematics Department UC Berkeley July 2013 1 OVERVIEW Classical alternating direction implicit (ADI) iteration Essentially optimal in simple domains

More information

Lucas-Kanade Optical Flow. Computer Vision Carnegie Mellon University (Kris Kitani)

Lucas-Kanade Optical Flow. Computer Vision Carnegie Mellon University (Kris Kitani) Lucas-Kanade Optical Flow Computer Vision 16-385 Carnegie Mellon University (Kris Kitani) I x u + I y v + I t =0 I x = @I @x I y = @I u = dx v = dy I @y t = @I dt dt @t spatial derivative optical flow

More information

18/10/2017. Image Enhancement in the Spatial Domain: Gray-level transforms. Image Enhancement in the Spatial Domain: Gray-level transforms

18/10/2017. Image Enhancement in the Spatial Domain: Gray-level transforms. Image Enhancement in the Spatial Domain: Gray-level transforms Gray-level transforms Gray-level transforms Generic, possibly nonlinear, pointwise operator (intensity mapping, gray-level transformation): Basic gray-level transformations: Negative: s L 1 r Generic log:

More information

LoG Blob Finding and Scale. Scale Selection. Blobs (and scale selection) Achieving scale covariance. Blob detection in 2D. Blob detection in 2D

LoG Blob Finding and Scale. Scale Selection. Blobs (and scale selection) Achieving scale covariance. Blob detection in 2D. Blob detection in 2D Achieving scale covariance Blobs (and scale selection) Goal: independently detect corresponding regions in scaled versions of the same image Need scale selection mechanism for finding characteristic region

More information

Implementation of Nonlinear Template Runner Emulated Digital CNN-UM on FPGA

Implementation of Nonlinear Template Runner Emulated Digital CNN-UM on FPGA Implementation of Nonlinear Template Runner Emulated Digital CNN-UM on FPGA Z. Kincses * Z. Nagy P. Szolgay * Department of Image Processing and Neurocomputing, University of Pannonia, Hungary * e-mail:

More information

Morphological Operations. GV12/3072 Image Processing.

Morphological Operations. GV12/3072 Image Processing. Morphological Operations 1 Outline Basic concepts: Erode and dilate Open and close. Granulometry Hit and miss transform Thinning and thickening Skeletonization and the medial axis transform Introduction

More information

Linear Operators and Fourier Transform

Linear Operators and Fourier Transform Linear Operators and Fourier Transform DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 13, 2013

More information

Image Processing 1 (IP1) Bildverarbeitung 1

Image Processing 1 (IP1) Bildverarbeitung 1 MIN-Fakultät Fachbereich Informatik Arbeitsbereich SAV/BV (KOGS) Image Processing 1 (IP1) Bildverarbeitung 1 Lecture 14 Skeletonization and Matching Winter Semester 2015/16 Slides: Prof. Bernd Neumann

More information

3.8 Combining Spatial Enhancement Methods 137

3.8 Combining Spatial Enhancement Methods 137 3.8 Combining Spatial Enhancement Methods 137 a b FIGURE 3.45 Optical image of contact lens (note defects on the boundary at 4 and 5 o clock). (b) Sobel gradient. (Original image courtesy of Mr. Pete Sites,

More information

Computational Photography

Computational Photography Computational Photography Si Lu Spring 208 http://web.cecs.pdx.edu/~lusi/cs50/cs50_computati onal_photography.htm 04/0/208 Last Time o Digital Camera History of Camera Controlling Camera o Photography

More information

In practice one often meets a situation where the function of interest, f(x), is only represented by a discrete set of tabulated points,

In practice one often meets a situation where the function of interest, f(x), is only represented by a discrete set of tabulated points, 1 Interpolation 11 Introduction In practice one often meets a situation where the function of interest, f(x), is only represented by a discrete set of tabulated points, {x i, y i = f(x i ) i = 1 n, obtained,

More information

Screen-space processing Further Graphics

Screen-space processing Further Graphics Screen-space processing Rafał Mantiuk Computer Laboratory, University of Cambridge Cornell Box and tone-mapping Rendering Photograph 2 Real-world scenes are more challenging } The match could not be achieved

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

5. LIGHT MICROSCOPY Abbe s theory of imaging

5. LIGHT MICROSCOPY Abbe s theory of imaging 5. LIGHT MICROSCOPY. We use Fourier optics to describe coherent image formation, imaging obtained by illuminating the specimen with spatially coherent light. We define resolution, contrast, and phase-sensitive

More information

Solving the Generalized Poisson Equation Using the Finite-Difference Method (FDM)

Solving the Generalized Poisson Equation Using the Finite-Difference Method (FDM) Solving the Generalized Poisson Equation Using the Finite-Difference Method (FDM) James R. Nagel September 30, 2009 1 Introduction Numerical simulation is an extremely valuable tool for those who wish

More information

Interpretation of Geomorphology and Geology using Slope Gradation Map

Interpretation of Geomorphology and Geology using Slope Gradation Map 1 11 1 9-22 2000 Geoinformatics, vol.11, no.1, pp.11-24, 2000 Interpretation of Geomorphology and Geology using Slope Gradation Map Izumi KAMIYA, Takahito KUROKI and Kohei TANAKA Abstract: We produced

More information

Discrete Fourier Transform

Discrete Fourier Transform Discrete Fourier Transform DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 13, 2013 Mårten Björkman

More information

4. References. words, the pseudo Laplacian is given by max {! 2 x,! 2 y}.

4. References. words, the pseudo Laplacian is given by max {! 2 x,! 2 y}. words, the pseudo Laplacian is given by ma {! 2,! 2 y}. 4. References [Ar72] Arbib, M.A., The Metaphorical Brain, John Wiley, New York, 1972. [BB82] Ballard, D. H. and Brown, C. M., Computer Vision, Prentice-Hall,

More information

Chapter 2 Simplicity in the Universe of Cellular Automata

Chapter 2 Simplicity in the Universe of Cellular Automata Chapter 2 Simplicity in the Universe of Cellular Automata Because of their simplicity, rules of cellular automata can easily be understood. In a very simple version, we consider two-state one-dimensional

More information

This paper is about a new method for analyzing the distribution property of nonuniform stochastic samples.

This paper is about a new method for analyzing the distribution property of nonuniform stochastic samples. This paper is about a new method for analyzing the distribution property of nonuniform stochastic samples. Stochastic sampling is a fundamental component in many graphics applications. Examples include

More information

A SYSTEMATIC APPROACH TO GENERATING n-scroll ATTRACTORS

A SYSTEMATIC APPROACH TO GENERATING n-scroll ATTRACTORS International Journal of Bifurcation and Chaos, Vol. 12, No. 12 (22) 297 2915 c World Scientific Publishing Company A SYSTEMATIC APPROACH TO ENERATIN n-scroll ATTRACTORS UO-QUN ZHON, KIM-FUN MAN and UANRON

More information

Image processing and Computer Vision

Image processing and Computer Vision 1 / 1 Image processing and Computer Vision Continuous Optimization and applications to image processing Martin de La Gorce martin.de-la-gorce@enpc.fr February 2015 Optimization 2 / 1 We have a function

More information

DESYNCHRONIZATION TRANSITIONS IN RINGS OF COUPLED CHAOTIC OSCILLATORS

DESYNCHRONIZATION TRANSITIONS IN RINGS OF COUPLED CHAOTIC OSCILLATORS Letters International Journal of Bifurcation and Chaos, Vol. 8, No. 8 (1998) 1733 1738 c World Scientific Publishing Company DESYNCHRONIZATION TRANSITIONS IN RINGS OF COUPLED CHAOTIC OSCILLATORS I. P.

More information

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model Matthias O. Franz MPI für biologische Kybernetik Spemannstr. 38 D-72076 Tübingen, Germany mof@tuebingen.mpg.de

More information

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun

Boxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun AT&T Labs-Research 100 Schultz Drive, Red Bank, NJ 07701-7033

More information