ELEG 833. Nonlinear Signal Processing
Nonlinear Signal Processing, ELEG 833
Gonzalo R. Arce
Department of Electrical and Computer Engineering, University of Delaware
April 15, 2005
6 Weighted Median Filters

Weighted median smoothers admit only positive weights. This is a limitation, as WM smoothers are in essence limited to low-pass type filtering characteristics. Many engineering applications require band-pass or high-pass frequency filtering characteristics:

- Equalization
- Deconvolution
- Prediction
- Beamforming
6.1 Weighted Median Filters With Real-Valued Weights

To formulate the general weighted median filter structure, it is logical to ask how linear FIR filters arise within the location estimation problem. Consider N samples X_1, X_2, ..., X_N obeying a multivariate Gaussian distribution

f(x) = (2π)^{-N/2} [det(R)]^{-1/2} exp[ -(1/2)(X - eβ)^T R^{-1} (X - eβ) ]   (1)

where X = [X_1, X_2, ..., X_N]^T, e = [1, 1, ..., 1]^T, β is the location parameter, R is the covariance matrix, and det(R) is the determinant of R. The ML estimate of β is

β̂ = (e^T R^{-1} X) / (e^T R^{-1} e) = W^T X   (2)

where e^T R^{-1} e > 0 due to the positive definite nature of R; the elements of e^T R^{-1}, however, can take on positive as well as negative values.
The multivariate Laplacian distribution, and in general all non-Gaussian multivariate distributions, do not lead to simple ML location estimates. However, a simple approach was discovered which can overcome these limitations: the sample mean MEAN(X_1, X_2, ..., X_N) can be generalized to the class of linear FIR filters as

β̂ = MEAN( W_1 X_1, W_2 X_2, ..., W_N X_N )   (3)

where W_i ∈ R. In order to apply the analogy to the median filter structure, (3) must be written as

β̂ = MEAN( |W_1| · sgn(W_1) X_1, ..., |W_N| · sgn(W_N) X_N ).
Definition 6.1 (Weighted Median Filters) Given a set of N real-valued weights W_1, W_2, ..., W_N and the observation vector X = [X_1, X_2, ..., X_N]^T, the weighted median filter output is defined as

β̂ = MEDIAN( |W_1| ⋄ sgn(W_1) X_1, |W_2| ⋄ sgn(W_2) X_2, ..., |W_N| ⋄ sgn(W_N) X_N ),   (4)

with W_i ∈ R for i = 1, 2, ..., N, and where ⋄ is the replication operator. Note that the weight signs are uncoupled from the weight magnitude values and are merged with the observation samples.
Weighted Median Filter Computation

The computation is best illustrated by means of an example. Let W = ⟨1, 2, 3, -2, 1⟩ and X(n) = [2, 6, 9, 1, 12]; the weighted median filter output is

Y(n) = MEDIAN[ 1 ⋄ 2, 2 ⋄ 6, 3 ⋄ 9, |-2| ⋄ (-1), 1 ⋄ 12 ]
     = MEDIAN[ 2, 6, 6, 9, 9, 9, -1, -1, 12 ]
     = MEDIAN[ -1, -1, 2, 6, 6, 9, 9, 9, 12 ] = 6   (5)

where the output is the center (fifth) sample of the sorted list in (5). Note that the weight signs are attached to the samples, so the sorted set contains signed values (here -1) that need not appear among the inputs; in general the output is a signed sample whose value may differ from every input sample.
Next consider the case where the WM filter weight magnitudes add up to an even integer, with W = ⟨1, -2, 2, -2, 1⟩ and X(n) = [5, 5, 5, 5, 5]. The weighted median filter output is

Y(n) = MEDIAN[ 1 ⋄ 5, 2 ⋄ (-5), 2 ⋄ 5, 2 ⋄ (-5), 1 ⋄ 5 ]
     = MEDIAN[ 5, -5, -5, 5, 5, -5, -5, 5 ]
     = MEDIAN[ -5, -5, -5, -5, 5, 5, 5, 5 ] = 0   (6)

where the output is the average of the two center samples in (6). In order for the WM filter to have band- or high-pass frequency characteristics, where constant signals are annihilated, the weight magnitudes must add up to an even number.
The weighted median filter output for non-integer weights can be determined as follows:

(1) Calculate the threshold T_0 = (1/2) Σ_{i=1}^N |W_i|.
(2) Sort the signed observation samples sgn(W_i) X_i.
(3) Sum the magnitudes of the weights corresponding to the sorted signed samples, beginning with the maximum and continuing down in order.
(4) The output is the signed sample whose weight magnitude causes the sum to become ≥ T_0. For band- and high-pass characteristics, the output is the average between the signed sample whose weight magnitude causes the sum to become ≥ T_0 and the next smaller signed sample.
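The threshold procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name and the array-based style are my own, and the "even-sum" averaging case is detected by testing whether the running weight sum lands exactly on T_0.

```python
import numpy as np

def weighted_median_filter(x, w):
    """Weighted median filter with real-valued weights (Definition 6.1),
    computed via the threshold procedure: sort the signed samples,
    accumulate weight magnitudes from the top, stop at T0."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    signed = np.sign(w) * x            # sgn(W_i) X_i
    mags = np.abs(w)                   # |W_i|
    t0 = 0.5 * mags.sum()              # threshold T0

    order = np.argsort(-signed)        # signed samples, descending
    s_sorted = signed[order]
    m_sorted = mags[order]

    csum = np.cumsum(m_sorted)         # running weight-magnitude sum
    k = np.searchsorted(csum, t0)      # first index where csum >= T0
    if np.isclose(csum[k], t0):
        # Weight magnitudes sum to an "even" total: average with the
        # next smaller signed sample (band-/high-pass case).
        return 0.5 * (s_sorted[k] + s_sorted[k + 1])
    return s_sorted[k]
```

Running this on the two worked examples above reproduces their outputs (6 for ⟨1, 2, 3, -2, 1⟩ on [2, 6, 9, 1, 12], and 0 for ⟨1, -2, 2, -2, 1⟩ on the constant signal).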
Consider the real-valued weights ⟨W_1, W_2, W_3, W_4, W_5⟩ = ⟨0.1, 0.2, 0.3, -0.2, 0.1⟩ and [X_1, X_2, X_3, X_4, X_5] = [-2, 2, -1, 3, 6]. Summing the weight magnitudes gives the threshold T_0 = (1/2) Σ_{i=1}^5 |W_i| = 0.45.

observation samples:               -2,   2,  -1,   3,   6
corresponding weights:            0.1, 0.2, 0.3, -0.2, 0.1
sorted signed observation samples: -3,  -2,  -1,   2,   6
corresponding weight magnitudes:  0.2, 0.1, 0.3, 0.2, 0.1
partial weight sums (accumulated
from the largest sample down):    0.9, 0.7, 0.6, 0.3, 0.1

Thus the output is -1: starting from the largest signed sample, 0.6 is the first partial sum that meets or exceeds the threshold T_0 = 0.45.
Cost Function Interpretation

The effect that negative weights have on the weighted median operation is illustrated by the cost-function minimization

G_2(β) = Σ_{i=1}^N |W_i| ( sgn(W_i) X_i - β )^2,    G_1(β) = Σ_{i=1}^N |W_i| | sgn(W_i) X_i - β |.   (7)

While G_2(β) is a convex continuous function whose minimizer is the normalized linear combination of the signed samples, G_1(β) is convex but piecewise linear, and its minimizer is the weighted median filter output.
Figure 1: Effects of negative weighting on the cost functions G_2(β) and G_1(β). The input samples [-2, 2, -1, 3, 6] are filtered by the two sets of weights ⟨0.1, 0.2, 0.3, 0.2, 0.1⟩ and ⟨0.1, 0.2, 0.3, -0.2, 0.1⟩.
Example: Bandpass Filtering

Figure 2: Frequency-selective filter outputs: (a) chirp test signal, (b) linear FIR filter output, (c) weighted median smoother output, (d) weighted median filter output with real-valued weights.
Figure 3: Frequency-selective filter outputs in noise: (a) chirp test signal in stable noise, (b) FIR filter output, (c) WM smoother output, (d) WM filter output with real-valued weights.
Example: Image Sharpening with WM Filters

Image sharpening consists of adding to the original image a signal that is proportional to a high-pass filtered version of the original image:

Y(m, n) = X(m, n) + λ F(X(m, n))   (8)

where X(m, n) is the original pixel, F(·) is the high-pass filter, λ > 0 is a tuning parameter, and Y(m, n) is the sharpened pixel.

Figure 4: Image sharpening by high-frequency emphasis.
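The high-frequency-emphasis rule in (8) can be sketched directly. This is a generic illustration, assuming a 2-D array image; the `sharpen` and `laplacian` names are mine, and the crude 3x3 Laplacian below is only a stand-in for F(·), which in the text is a WM high-pass filter.

```python
import numpy as np

def sharpen(image, highpass, lam=0.8):
    """High-frequency emphasis, Y = X + lambda * F(X) (Eq. 8).
    `highpass` is any image -> image high-pass operator (linear or WM)."""
    image = np.asarray(image, dtype=float)
    return image + lam * highpass(image)

def laplacian(im):
    """Crude 3x3 Laplacian detail extractor used here as a placeholder
    for the high-pass filter F; border pixels are left at zero."""
    out = np.zeros_like(im)
    out[1:-1, 1:-1] = (4 * im[1:-1, 1:-1] - im[:-2, 1:-1] - im[2:, 1:-1]
                       - im[1:-1, :-2] - im[1:-1, 2:])
    return out
```

On a constant image the detail term vanishes and the image passes through unchanged, consistent with the remark below that the WM high-pass output is zero in constant-gray-level regions.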
Traditionally, linear filters have been used to implement the high-pass filter. Linear techniques, however, can lead to rapid degradation should the input image be corrupted with noise. A trade-off between noise attenuation and edge highlighting can be obtained if a WM filter is used. Consider the WM filter with the 3x3 weight mask W given in (9). Its output is proportional to the difference between the center pixel and the smallest pixel around the center pixel. Thus, the filter output takes relatively large values for prominent edges in an image, small values in regions that are fairly smooth, and is zero only in regions of constant gray level.
This filtering operation over negative-slope edges is different from that obtained for positive-slope edges.

Figure 5: Image sharpening based on the weighted median filter.

The solution is a pre-filtering step defined as

X'(m, n) = M - X(m, n)   (10)

with M equal to the maximum pixel value of the original image (flipping and shifting).
Figure 6: Original row of a test image (solid line) and sharpened row (dotted line) with (a) only positive-slope edges, (b) only negative-slope edges, and (c) both positive-slope and negative-slope edges.
Figure 7: (a) Original image sharpened with (b) the FIR sharpener and (c) the WM sharpener.
Figure 8: (a) Image with added Gaussian noise sharpened with (b) the FIR sharpener and (c) the WM sharpener.
Permutation Weighted Median Filters

Permutation WM filters closely resemble permutation WM smoothers.

Definition 6.2 (Permutation WM Filters) Let W_1(R_1), W_2(R_2), ..., W_N(R_N) be rank-order-dependent weights assigned to the input observation samples. The output of the permutation WM filter is found as

Y = MEDIAN[ |W_1(R_1)| ⋄ sgn(W_1(R_1)) X_1, ..., |W_N(R_N)| ⋄ sgn(W_N(R_N)) X_N ],   (11)

where W_i(R_i) is the weight assigned to X_i, selected according to the sample's rank R_i.
The weights used for the WM high-pass filter in (9) were proportional to the mask in (12). The weight mask for the permutation WM high-pass filter is

W = [ W_1(R_1)  W_2(R_2)  W_3(R_3)
      W_4(R_4)  W_c(R_c)  W_6(R_6)
      W_7(R_7)  W_8(R_8)  W_9(R_9) ],   (13)

where |W_i(R_i)| = 1 for i ≠ 5 (the center), with the following exceptions. The value of the center weight is given according to

W_c(R_c) = { 8  for R_c = 2, 3, ..., 8;  1  otherwise }.   (14)
That is, the value of the center weight is 8 if the center sample is neither the smallest nor the largest. If it happens to be the smallest or the largest, the center weight is set to 1, and the weight of 8 is given to the sample that is closest in rank to the center sample:

W_{l(8)} = { 8  if X_c = X_(9);  1  otherwise },   (15)
W_{l(2)} = { 8  if X_c = X_(1);  1  otherwise },   (16)

where l(i) refers to the location of the i-th smallest sample and W_{l(i)} refers to its weight.
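The rank-dependent weight-reassignment rule in (14)-(16) can be sketched as follows. Treat this as an interpretation, not the authors' code: the function name is mine, ranks are 0-based, and the surround weights are set to -1 on the assumption that the stripped signs in the mask of (13) were negative, as in a high-pass mask.

```python
import numpy as np

def permutation_wm_weights(window):
    """Rank-dependent weights for a 3x3 permutation WM high-pass filter,
    window flattened row-major so the center pixel is index 4.

    The weight of 8 stays on the center sample while its rank is
    neither smallest nor largest; otherwise the center gets the small
    weight and the 8 moves to the sample closest in rank to the center
    (the 2nd smallest or the 2nd largest), per Eqs. (15)-(16)."""
    window = np.asarray(window, dtype=float)
    w = -np.ones(9)                          # assumed surround weights
    ranks = np.argsort(np.argsort(window))   # rank 0..8, 0 = smallest
    rc = ranks[4]                            # rank of the center sample
    if 1 <= rc <= 7:                         # center not extreme
        w[4] = 8.0
    else:
        # Center is smallest (rank 0) or largest (rank 8): move the 8
        # to the sample closest in rank to the center.
        target_rank = 1 if rc == 0 else 7
        idx = int(np.where(ranks == target_rank)[0][0])
        w[idx] = 8.0
    return w
```

With distinct pixel values this reproduces the two extreme-rank cases of (15) and (16); ties would need a tie-breaking convention the slides do not specify.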
Figure 9: (a) Image with background noise sharpened with (b) the LUM sharpener.
Figure 10: Image with background noise sharpened with (a) the FIR sharpener, (b) the WM sharpener.
Figure 11: Image with background noise sharpened with (a) the permutation WM sharpener with L = 1, (b) the permutation WM sharpener with L = 2.
6.2 Spectral Design of Weighted Median Filters

This section defines the concept of frequency response for weighted median filters and develops a closed-form solution for their spectral design.

Median Smoothers and Sample Selection Probabilities

Spectral analysis of nonlinear smoothers has been carried out based on the theory developed by Mallows (1980). The spectrum of a nonlinear smoother is defined as the spectral response of the corresponding linear filter.
Theorem 6.1 (Mallows) Given a nonlinear smoothing function S operating on a random sequence X = Y + Z, where Y is a zero-mean Gaussian sequence and Z is independent of Y: if S is stationary, location invariant, centered (i.e., S(0) = 0), depends on a finite number of values of X, and Var(S(X)) < ∞, then there exists a unique linear function S_L minimizing the MSE

E{ ( S(X) - S_L(X) )^2 }.

The function S_L is the closest linear function to the nonlinear smoothing function S, called its linear part.
Median smoothers have all the characteristics required by this theorem and are also of selection type. There is an important corollary of the previous theorem that applies to selection-type smoothers:

Corollary 6.1 If S is a selection-type smoother, the coefficients of S_L are the sample selection probabilities of the smoother.
Definition 6.3 The Sample Selection Probabilities (SSPs) of a WM smoother W are the set of numbers p_j defined by

p_j = P( X_j = MEDIAN[ W_1 ⋄ X_1, W_2 ⋄ X_2, ..., W_N ⋄ X_N ] ).   (17)

Thus, p_j is the probability that the output of the weighted median smoother is equal to the j-th input sample.
SSPs for Weighted Median Smoothers

Suppose that the WM smoother described by the weight vector W = ⟨W_1, W_2, ..., W_N⟩ is applied to the set of independent and identically distributed samples X = (X_1, X_2, ..., X_N). The output is calculated through:

(1) Calculate the threshold T_0 = (1/2) Σ_{i=1}^N |W_i|.
(2) Sort the samples in the observation vector X.
(3) Sum the concomitant weights of the sorted samples, beginning with the maximum sample and continuing down in order.
(4) The output β̂ is the first sample whose weight causes the sum to become ≥ T_0.
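Before the closed-form expression is derived below, the SSPs of Definition 6.3 can be checked empirically: run the smoother on many i.i.d. draws and count how often each position is selected. This Monte Carlo sketch is mine (function names and the use of Gaussian draws are assumptions; any continuous i.i.d. distribution gives the same SSPs).

```python
import numpy as np

def wm_smoother_selected_index(x, w):
    """Index of the sample selected by a positive-weight WM smoother,
    via the threshold procedure: sort descending, accumulate the
    concomitant weights, stop at the first sum >= T0."""
    order = np.argsort(-x)                       # descending samples
    csum = np.cumsum(w[order])                   # concomitant weights
    k = np.searchsorted(csum, 0.5 * w.sum())     # first sum >= T0
    return order[k]

def estimate_ssp(w, trials=40_000, rng=None):
    """Monte Carlo estimate of p_j = P(X_j = WM output) for i.i.d. X."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w, dtype=float)
    counts = np.zeros(len(w))
    for _ in range(trials):
        x = rng.standard_normal(len(w))
        counts[wm_smoother_selected_index(x, w)] += 1
    return counts / trials
```

For the four-tap smoother W = ⟨1, 3, 4, 1⟩ worked out in Example 6.1 below, the estimate converges to the SSP vector [1/6, 1/6, 1/2, 1/6].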
The objective is to find a general closed-form expression for the value p_j = P(β̂ = X_j). The j-th sample in the input vector can be ranked in N different, equally likely positions in its order statistics: for all i,

P( X_(i) = X_j ) = 1/N.   (18)

Each sample has a different probability of being the median depending on where it lies in the set of ordered input samples. The final value of p_j is found as the sum of the probabilities of the sample X_j being the median for each one of the order statistics.
p_j = Σ_{i=1}^N P( X_(i) = X_j ) P( β̂ = X_(i) | X_(i) = X_j )
    = (1/N) Σ_{i=1}^N P( β̂ = X_(i) | X_(i) = X_j )
    = (1/N) Σ_{i=1}^N K_ij / C(N-1, i-1).   (19)

K_ij is found as the number of subsets of N - i elements of the vector W satisfying

Σ_{m=i+1}^{N} W_[m] < T_0,   (20)
Σ_{m=i}^{N} W_[m] ≥ T_0,    (21)
Conditions (20) and (21) can be rewritten in a more compact way as

T_0 - W_j ≤ Σ_{m=i+1}^{N} W_[m] < T_0.   (22)

In order to count the number of sets satisfying (22), a product of two step functions is used:

u( A - (T_0 - W_j) ) u( T_0 - A )   (23)

where A = Σ_{m=i+1}^{N} W_[m].
Adding the function in (23) over all possible subsets of s = N - i elements of W excluding W_j, the result is

K_ij = Σ_{m_1=1, m_1≠j}^{N}  Σ_{m_2=m_1+1, m_2≠j}^{N}  ...  Σ_{m_s=m_{s-1}+1, m_s≠j}^{N}  u(A - T_1) u(T_0 - A)   (24)

where T_1 = T_0 - W_j, A = W_{m_1} + W_{m_2} + ... + W_{m_s}, and s = N - i. The SSP vector is given by P(W) = [p_1, p_2, ..., p_N], where p_j is defined as

p_j = (1/N) Σ_{i=1}^N K_ij / C(N-1, i-1).   (25)
EXAMPLE 6.1 (SSPs FOR A FOUR-TAP WM) Given W = ⟨1, 3, 4, 1⟩, find the sample selection probability of the third sample, p_3. T_0 and T_1 are found as

T_0 = (1/2) Σ_{i=1}^4 W_i = 4.5,    T_1 = T_0 - W_3 = 0.5.   (26)
Equation (25) reduces to

p_3(W) = (1/4) Σ_{i=1}^4 K_i3 · (i-1)!(4-i)! / 3!.   (27)

For i = 1, W_[1] = 4, thus

A = Σ_{m=2}^{4} W_[m] = 1 + 3 + 1 = 5,

and then

u(A - T_1) u(T_0 - A) = u(5 - 0.5) u(4.5 - 5) = 0,   (28)

hence K_13 = 0.
For i = 2, W_[2] = 4; there are three possibilities for the remaining pair of weights (drawn from W_1, W_2, W_4) and, in consequence, three different values for A = Σ_{m=3}^{4} W_[m]:

A_1 = 1 + 1 = 2:  u(A_1 - T_1) u(T_0 - A_1) = u(2 - 0.5) u(4.5 - 2) = 1
A_2 = 1 + 3 = 4:  u(A_2 - T_1) u(T_0 - A_2) = u(4 - 0.5) u(4.5 - 4) = 1
A_3 = 3 + 1 = 4:  u(A_3 - T_1) u(T_0 - A_3) = u(4 - 0.5) u(4.5 - 4) = 1

K_23 = 3.   (29)
Following the same procedure, the values of the remaining K_i3 are found to be K_33 = 3 and K_43 = 0. Therefore, the sample selection probability results in

p_3(W) = (1/4) ( 0·(0!3!/3!) + 3·(1!2!/3!) + 3·(2!1!/3!) + 0·(3!0!/3!) ) = 1/2.   (30)

The full vector of SSPs is constructed as P(W) = [ 1/6, 1/6, 1/2, 1/6 ].
Synthesis of WM Smoothers

1. The final purpose of this section is to present a spectral design method for WM smoothers.
2. To attain this, the function obtained in (25) should be inverted; however, this nonlinear function is not invertible.
3. It has been demonstrated that weighted median smoothers of a given window size can be divided into a finite number of classes, such that each smoother in a class produces the same output when fed with the same set of input samples.
4. Each class contains at least one integer-valued weighted median smoother whose coefficients sum to an odd number: the representative of the class.
Table 1: Median weight vectors and their corresponding SSPs for window sizes 1 to 5.
Figure 12: (a) Simplex containing the weighted median vectors for window size three. (b) Correspondence between linear smoothers and SSP vectors of window size three.
The weighted median closest to a given linear smoother in the mean-square-error sense is found by minimizing the cost function

J(W) = || P(W) - h ||^2 = Σ_{j=1}^N ( p_j(W) - h_j )^2   (31)

where h is a normalized linear smoother. The procedure to transform a linear smoother into its associated weighted median reduces to finding the region in the linear space where it belongs, finding the corresponding SSP vector, and then finding a corresponding WM vector.
The problem could be solved by simple inspection, but the number of different weighted medians grows rapidly: from 2,470 for window size eight to 175,428 for window size nine, and there is no certainty about the number of vectors for window size ten and up. This option becomes unmanageable. In the following section, an optimization algorithm for the function J(W) is presented.
General Iterative Solution

The optimization of the cost function in (31) is carried out with a gradient-based algorithm:

W_l(n+1) = W_l(n) + μ ( -∇_l J(W) ) = W_l(n) - μ ∂J(W)/∂W_l.   (32)

The first step is to find the gradient of (31):

∇J(W) = [ ∂J(W)/∂W_1, ∂J(W)/∂W_2, ..., ∂J(W)/∂W_N ]^T.   (33)
where each of the terms in (33) is given by

∇_l J(W) = ∂/∂W_l || P(W) - h ||^2
         = ∂/∂W_l Σ_{j=1}^N ( p_j(W) - h_j )^2
         = Σ_{j=1}^N 2 ( p_j(W) - h_j ) ∂p_j(W)/∂W_l.   (34)
The derivative of p_j(W) is

∂p_j(W)/∂W_l = ∂/∂W_l (1/N) Σ_{i=1}^N K_ij / C(N-1, i-1)
             = (1/N) Σ_{i=1}^N ( ∂K_ij/∂W_l ) / C(N-1, i-1).   (35)
The term K_ij in (24) is not differentiable. To overcome this, u(x) is approximated by the smooth function u(x) ≈ (1/2)(tanh(x) + 1). The derivative on the right-hand side of (35) can then be computed as

∂K_ij/∂W_l = (1/4) Σ_{m_1=1, m_1≠j}^{N} Σ_{m_2=m_1+1, m_2≠j}^{N} ... Σ_{m_s=m_{s-1}+1, m_s≠j}^{N} ∂B/∂W_l,   (36)

where B = (tanh(A - T_1) + 1)(tanh(T_0 - A) + 1) and

∂B/∂W_l = C_1(W_l) sech^2(A - T_1) ( tanh(T_0 - A) + 1 )
        + C_2(W_l) ( tanh(A - T_1) + 1 ) sech^2(T_0 - A).   (37)
The coefficients C_1(W_l) and C_2(W_l) above collect the partial derivatives of (A - T_1) and (T_0 - A) with respect to W_l, where T_0 = (1/2) Σ_i W_i, T_1 = T_0 - W_j, and ∂A/∂W_l = 1 exactly when l belongs to the subset {m_1, ..., m_s}:

C_1(W_l) = {  1/2  if l = j;
              1/2  if there exists i s.t. m_i = l;
             -1/2  otherwise },

C_2(W_l) = { -1/2  if there exists i s.t. m_i = l;
              1/2  otherwise }.   (38)
Figure 13: (a) Cost function with respect to one weight: original, and smoothed using the tanh approximation; (b) contours with respect to two weights for the original cost function; (c) contours with respect to the same weights for the approximated cost function.
Spectral Design of Weighted Median Filters Admitting Real-Valued Weights

Real-valued-weight medians do not satisfy the location-invariance property. However, Mallows' result can be extended to cover medians like (4) in the case of an independent, zero-mean Gaussian input sequence.

Theorem 6.2 If the input series is Gaussian, independent, and zero centered, the coefficients of the linear part of the weighted median defined in (4) are given by h_i = sgn(W_i) p_i, where the p_i are the SSPs of the WM smoother ⟨|W_i|⟩.
Define Y_i = sgn(W_i) X_i; Y_i has the same distribution as X_i. Then

E{ ( MEDIAN( |W_i| ⋄ sgn(W_i) X_i ) - Σ_i h_i X_i )^2 } = E{ ( MEDIAN( |W_i| ⋄ Y_i ) - Σ_i q_i Y_i )^2 }   (39)

where q_i = h_i / sgn(W_i). From Theorem 6.1, (39) is minimized when the q_i equal the SSPs of the smoother ⟨|W_i|⟩, say p_i. In consequence,

q_i = h_i / sgn(W_i) = p_i   ⇒   h_i = sgn(W_i) p_i.   (40)
(1) Given the desired spectral response, design the linear FIR filter h = (h_1, h_2, ..., h_N) using one of the traditional design tools for linear filters.
(2) Decouple the signs of the coefficients to form the vectors |h| = (|h_1|, |h_2|, ..., |h_N|) and sgn(h) = (sgn(h_1), sgn(h_2), ..., sgn(h_N)).
(3) After normalizing the vector |h|, use the algorithm of the preceding subsection to find the closest WM smoother to it, say ⟨W_1, W_2, ..., W_N⟩.
(4) The WM filter weights are given by W_i = sgn(h_i) W_i, for i = 1, ..., N.
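The four design steps can be sketched as a scaffold. This is my own framing: the SSP-matching optimization of step (3) is far too long for a short example, so it is passed in as a callable (`fit_wm_smoother`, a hypothetical name) that maps a normalized positive weight vector to the closest WM smoother, e.g. the gradient algorithm minimizing J(W) in (31).

```python
import numpy as np

def wm_filter_from_fir(h, fit_wm_smoother):
    """Steps (1)-(4) of the spectral design procedure.
    h             : linear FIR prototype (step 1, designed elsewhere)
    fit_wm_smoother: normalized |h| -> closest positive WM smoother."""
    h = np.asarray(h, dtype=float)
    signs = np.sign(h)                  # step 2: decouple the signs
    mags = np.abs(h)
    mags = mags / mags.sum()            # step 3: normalize |h| ...
    w_smoother = fit_wm_smoother(mags)  # ... and fit the WM smoother
    return signs * np.abs(w_smoother)   # step 4: reattach the signs
```

With the identity in place of the fitting step, the scaffold simply reproduces the normalized FIR design with its signs restored, which makes the sign bookkeeping easy to verify.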
EXAMPLE 6.2 Design 11-tap (a) low-pass, (b) band-pass, (c) high-pass, and (d) band-stop WM filters with the cutoff frequencies shown in Table 2.

Table 2: Characteristics of the WM filters to be designed
  Filter      Cutoff frequencies (normalized)
  Low-pass    0.25
  Band-pass
  High-pass   0.75
  Band-stop
Table 3: Weights of the median filters designed using the algorithm of the preceding subsection, and the linear filters used as reference (Linear and Median weight columns for each of the low-pass, band-pass, high-pass, and band-stop designs).
Figure 14: Frequency responses (magnitude in dB versus normalized frequency) of the designed WM filters and the reference linear filters: (a) low-pass, (b) high-pass, (c) band-pass, (d) band-stop.
6.3 The Optimal Weighted Median Filtering Problem

Threshold Decomposition for Real-Valued Signals

Consider the set of real-valued samples X_1, X_2, ..., X_N. Decompose each sample X_i as

x_i^q = sgn( X_i - q )   (41)

where -∞ < q < ∞, and

sgn( X_i - q ) = { 1 if X_i ≥ q;  -1 if X_i < q }.   (42)

Each sample X_i is decomposed into an infinite set of binary points taking values in {-1, 1}.
Figure 15: Decomposition of X_i into the binary signal x_i^q.

Threshold decomposition is reversible. To show this, let X̂_i = lim_{T→∞} X_i^{<T>}, where

X_i^{<T>} = (1/2) [ ∫_{-T}^{-|X_i|} x_i^q dq + ∫_{-|X_i|}^{|X_i|} x_i^q dq + ∫_{|X_i|}^{T} x_i^q dq ].   (43)
Since the first and last integrals in (43) cancel each other, and since

∫_{-|X_i|}^{|X_i|} x_i^q dq = 2 X_i,   (44)

it follows that X̂_i = X_i^{<T>} = X_i. Thus

X_i = (1/2) ∫_{-∞}^{∞} x_i^q dq = (1/2) ∫_{-∞}^{∞} sgn( X_i - q ) dq.   (45)

X_i has a unique threshold-signal representation, and vice versa: X_i ↔ {x_i^q}.
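The inversion formula (43)/(45) is easy to check numerically: for any |x| < T, one half the integral of sgn(x - q) over [-T, T] returns x exactly, since the tails cancel. The sketch below (my own, with a plain Riemann sum standing in for the integral) demonstrates this.

```python
import numpy as np

def reconstruct_from_threshold(x, T=10.0, n=200_001):
    """Numerically invert the threshold decomposition (Eqs. 43/45):
    x is recovered as (1/2) * integral of sgn(x - q) dq over [-T, T],
    valid whenever |x| < T.  A Riemann sum approximates the integral,
    so the result is accurate to about the grid spacing dq."""
    q = np.linspace(-T, T, n)          # threshold grid over [-T, T]
    dq = q[1] - q[0]
    return 0.5 * np.sum(np.sign(x - q)) * dq
```

The discretization error is bounded by dq (here 10^-4), coming only from the grid cell containing the sign change at q = x.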
Since q can take any real value, the infinite set of binary samples {x_i^q} seems redundant in representing X_i. Note, however, that there are at most L + 1 distinct binary vectors x^q for each X:

x^q = [1, 1, ..., 1]^T                                  for -∞ < q ≤ X_(1),
x^q = [x_1^{X⁺_(i)}, x_2^{X⁺_(i)}, ..., x_L^{X⁺_(i)}]^T  for X_(i) < q ≤ X_(i+1), 1 ≤ i ≤ L-1,
x^q = [-1, -1, ..., -1]^T                               for X_(L) < q < ∞,   (46)

where X⁺_(i) denotes a value on the real line approaching X_(i) from the right.
Consider three samples X_1, X_2, X_3 and their threshold-decomposition representations x_1^q, x_2^q, x_3^q. Assume that X_3 = X_(3), X_2 = X_(1), and X_1 = X_(2). Next, for each value of q, the median of the decomposed signals is defined as

y^q = MEDIAN( x_1^q, x_2^q, x_3^q ).   (47)
Note that for q ≤ X_(2) two of the three x_i^q samples have values equal to 1, and for q > X_(2) two of them have values equal to -1. Thus

y^q = { 1 for q ≤ X_(2);  -1 for q > X_(2) }.   (48)

Reversing the decomposition using y^q in (45), it follows that

Y = (1/2) ∫ y^q dq = (1/2) ∫ sgn( X_(2) - q ) dq = X_(2).
In the general case:

y^q = MEDIAN( x_1^q, x_2^q, ..., x_N^q ) = { 1 for q ≤ X_((N+1)/2);  -1 for q > X_((N+1)/2) }.   (49)

Reversing the threshold decomposition, Y is obtained as

Y = (1/2) ∫ y^q dq = (1/2) ∫ sgn( X_((N+1)/2) - q ) dq = X_((N+1)/2).   (50)
With this threshold decomposition, the weighted median filter operation can be implemented as

β̂ = MEDIAN( |W_i| ⋄ sgn(W_i) X_i |_{i=1}^N )
  = MEDIAN( |W_i| ⋄ (1/2) ∫ sgn[ sgn(W_i) X_i - q ] dq |_{i=1}^N ).

The order of the integral and the median operator can be interchanged without affecting the result, leading to

β̂ = (1/2) ∫ MEDIAN( |W_i| ⋄ sgn[ sgn(W_i) X_i - q ] |_{i=1}^N ) dq.   (51)

In this representation, the signed samples play a fundamental role.
Define the signed observation vector S as

S = [ sgn(W_1) X_1, sgn(W_2) X_2, ..., sgn(W_N) X_N ]^T = [ S_1, S_2, ..., S_N ]^T.   (52)

The threshold-decomposed signed samples, in turn, form the vector

s^q = [ sgn(S_1 - q), sgn(S_2 - q), ..., sgn(S_N - q) ]^T = [ s_1^q, s_2^q, ..., s_N^q ]^T.   (53)

Letting W_a be the vector whose elements are the weight magnitudes, W_a = ⟨|W_1|, |W_2|, ..., |W_N|⟩^T, then

β̂ = (1/2) ∫ sgn( W_a^T s^q ) dq.   (54)
The Least Mean Absolute (LMA) Algorithm

Consider the N samples in the window at time n,

X(n) = [ X(n - N_1), ..., X(n), ..., X(n + N_2) ]^T = [ X_1(n), X_2(n), ..., X_N(n) ]^T,   (55)

with N = N_1 + N_2 + 1. The WM filter estimates the desired signal as

D̂(n) = MEDIAN[ |W_i| ⋄ sgn(W_i) X_i(n) |_{i=1}^N ],

where both the weights W_i and the samples X_i(n) take on real values.
The goal is to determine the weight values in W = ⟨W_1, W_2, ..., W_N⟩^T which will minimize the mean absolute error (MAE):

J(W) = E{ | D(n) - D̂(n) | }   (56)
     = E{ (1/2) | ∫ [ sgn(D - q) - sgn( W_a^T s^q ) ] dq | }.   (57)

The absolute value and integral operators in (57) can be interchanged, since the integral acts on a function that is either nonnegative for all q or nonpositive for all q.
This results in

J(W) = (1/2) ∫ E{ | sgn(D - q) - sgn( W_a^T s^q ) | } dq.   (58)

Since the argument inside the absolute value operator can only take on values in the set {-2, 0, 2},

J(W) = (1/4) ∫ E{ ( sgn(D - q) - sgn( W_a^T s^q ) )^2 } dq.   (59)

Taking the gradient of the above results in

∇_W J(W) = -(1/2) ∫ E{ e^q(n) ∇_W sgn( W_a^T s^q ) } dq   (60)

where e^q(n) = sgn(D - q) - sgn( W_a^T s^q ).
Since the sign function is discontinuous at the origin, its derivative will introduce Dirac impulse terms. To overcome this difficulty, the sign function in (60) is approximated by

sgn(x) ≈ tanh(x) = (e^x - e^{-x}) / (e^x + e^{-x}).   (61)

Since d tanh(x)/dx = sech^2(x), with sech(x) = 2/(e^x + e^{-x}), it follows that

∇_W sgn( W_a^T s^q ) ≈ sech^2( W_a^T s^q ) ∇_W ( W_a^T s^q ).   (62)

Evaluating the derivative in (62), and after some simplifications, leads to

∇_W sgn( W_a^T s^q ) ≈ sech^2( W_a^T s^q ) [ sgn(W_1) s_1^q, sgn(W_2) s_2^q, ..., sgn(W_N) s_N^q ]^T.   (63)
Thus

∂J(W)/∂W_j = -(1/2) ∫ E{ e^q(n) sech^2( W_a^T s^q ) sgn(W_j) s_j^q } dq,   (64)

and the optimal coefficients can be found through the steepest-descent recursive update

W_j(n+1) = W_j(n) + 2μ [ -∂J(W)/∂W_j ]
         = W_j(n) + μ ∫ E{ e^q(n) sech^2( W_a^T(n) s^q(n) ) sgn(W_j(n)) s_j^q(n) } dq.   (65)
Using the instantaneous estimate for the gradient, we can derive an adaptive optimization algorithm. Since s^q(n) is piecewise constant in q, the integral splits into the interval below S_(1)(n), the N - 1 intervals [S_(i)(n), S_(i+1)(n)], and the interval above S_(N)(n):

W_j(n+1) = W_j(n)
  + μ ∫_{-∞}^{S_(1)} e^q(n) sech^2( W_a^T(n) s^q(n) ) sgn(W_j(n)) s_j^q(n) dq
  + μ Σ_{i=1}^{N-1} ( S_(i+1) - S_(i) ) s_j^{S⁺_(i)}(n) e^{S⁺_(i)}(n) sgn(W_j(n)) sech^2( W_a^T(n) s^{S⁺_(i)}(n) )
  + μ ∫_{S_(N)}^{∞} e^q(n) sech^2( W_a^T(n) s^q(n) ) sgn(W_j(n)) s_j^q(n) dq.
The error term e^q(n) in the first and last integrals can be shown to be zero; thus, the adaptive algorithm reduces to

W_j(n+1) = W_j(n) + μ Σ_{i=1}^{N-1} [ ( S_(i+1) - S_(i) ) s_j^{S⁺_(i)}(n) e^{S⁺_(i)}(n) sgn(W_j(n)) sech^2( W_a^T(n) s^{S⁺_(i)}(n) ) ],   (66)

for j = 1, 2, ..., N. This recursion is referred to as the Least Mean Absolute (LMA) weighted median adaptive algorithm.
The algorithm in (66) can be simplified, leading to the following recursion, referred to as the fast LMA WM adaptive algorithm:

W_j(n+1) = W_j(n) + μ ( D(n) - D̂(n) ) sgn(W_j(n)) sgn( sgn(W_j(n)) X_j(n) - D̂(n) ),   (67)

for j = 1, 2, ..., N.
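The fast recursion (67) is simple enough to sketch end to end. The function name is mine, and the WM filter output D̂(n) is recomputed inline with the same threshold procedure used earlier (the even-sum averaging case is omitted for brevity).

```python
import numpy as np

def fast_lma_update(w, x, d, mu=0.01):
    """One iteration of the fast LMA WM adaptive algorithm (Eq. 67).
    w : current weight vector,  x : observation window,
    d : desired sample.  Returns the updated weight vector."""
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)

    # Compute the WM filter output y = D_hat(n) via the threshold rule.
    signed = np.sign(w) * x
    order = np.argsort(-signed)
    csum = np.cumsum(np.abs(w)[order])
    y = signed[order][np.searchsorted(csum, 0.5 * np.abs(w).sum())]

    # Eq. (67): W_j += mu (D - D_hat) sgn(W_j) sgn(sgn(W_j) X_j - D_hat)
    return w + mu * (d - y) * np.sign(w) * np.sign(signed - y)
```

When the output already equals the desired sample, the error factor (D - D̂) is zero and the weights are left unchanged, as expected of a stochastic-gradient rule at an interpolating point.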
Example: Design of Optimal High-Pass WM Filter

Figure 16: (a) Two-tone input signal, and output from (b) linear FIR high-pass filter, (c) optimal WM filter, (d) WM filter using the linear FIR weight values, (e) optimal WM smoother with non-negative weights.
Table 4: Mean absolute filtering errors (columns: noise free, with stable noise) for the linear FIR filter, the optimal WM smoother, the WM filter using the FIR weights, the optimal WM filter trained with the fast algorithm, and the optimal WM filter.
Figure 17: (a) Two-tone signal in stable noise (α = 1.4), (b) linear FIR filter output, (c) WM filter output, (d) WM smoother output with positive weights.
6.4 Recursive Weighted Median Filters

The general structure of linear IIR filters is defined by the difference equation

Y(n) = Σ_{l=1}^{N} A_l Y(n - l) + Σ_{k=-M_1}^{M_2} B_k X(n - k),   (68)

where the output is formed not only from the input, but also from previously computed outputs. The filter weights consist of two sets: the feedback coefficients {A_l} and the feed-forward coefficients {B_k}. In all, N + M_1 + M_2 + 1 coefficients are needed to define the recursive difference equation in (68).
For WM filters, the summation operation is replaced with the median operation, and the multiplicative weighting is replaced by signed replication:

Y(n) = MEDIAN( |A_l| ⋄ sgn(A_l) Y(n - l) |_{l=1}^N,  |B_k| ⋄ sgn(B_k) X(n - k) |_{k=-M_1}^{M_2} ).   (69)

Definition 6.4 (Recursive Weighted Median Filters) Given a set of N real-valued feedback coefficients ⟨A_i⟩_{i=1}^N and a set of M + 1 real-valued feed-forward coefficients ⟨B_i⟩_{i=0}^M, the (M + N + 1)-tap recursive WM filter output is defined as

Y(n) = MEDIAN( |A_N| ⋄ sgn(A_N) Y(n - N), ..., |A_1| ⋄ sgn(A_1) Y(n - 1),
               |B_0| ⋄ sgn(B_0) X(n), ..., |B_M| ⋄ sgn(B_M) X(n + M) ).   (70)
Recursive WM filters are denoted as ⟨A_N, ..., A_1, B_0, B_1, ..., B_M⟩. The recursive WM filter output for non-integer weights can be determined as follows:

(1) Calculate the threshold T_0 = (1/2) ( \sum_{l=1}^{N} |A_l| + \sum_{k=0}^{M} |B_k| ).
(2) Jointly sort the signed past output samples sgn(A_l)Y(n-l) and the signed input observations sgn(B_k)X(n+k).
(3) Sum the magnitudes of the weights corresponding to the sorted signed samples, beginning with the maximum and continuing down in order.
(4) If the running sum reaches T_0 exactly, the output is the average of the signed sample whose weight magnitude causes the sum to reach T_0 and the next smaller signed sample; otherwise, the output is the signed sample whose weight magnitude causes the sum to exceed T_0.
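Steps (1)–(4) apply to any window of samples and real-valued weights (for the recursive filter, the window holds the signed past outputs and signed inputs jointly). A minimal NumPy sketch of the counting procedure (function name and tie handling are my own):

```python
import numpy as np

def weighted_median(samples, weights):
    """WM filter output for real-valued weights: couple the weight signs
    onto the samples, then take a positive-weight median (steps (1)-(4))."""
    w = np.asarray(weights, dtype=float)
    s = np.sign(w) * np.asarray(samples, dtype=float)   # signed samples
    a = np.abs(w)                                       # weight magnitudes
    T0 = 0.5 * a.sum()                                  # threshold, step (1)
    order = np.argsort(s)[::-1]                         # sort descending, step (2)
    csum = np.cumsum(a[order])                          # running weight sum, step (3)
    i = np.searchsorted(csum, T0, side='left')          # first index with csum >= T0
    if np.isclose(csum[i], T0):                         # sum hits T0 exactly: average
        return 0.5 * (s[order[i]] + s[order[i + 1]])    # (assumes i+1 exists)
    return s[order[i]]
```

With weights (2, 1, 1) and samples (10, 1, 2), replication gives the multiset {10, 10, 1, 2}, whose median is (2 + 10)/2 = 6, which the threshold procedure reproduces.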
The signed samples in the window of the recursive WM filter at time n are denoted by the vector S(n) = [S_Y^T(n), S_X^T(n)]^T, where

    S_Y(n) = [sgn(A_1)Y(n-1), sgn(A_2)Y(n-2), ..., sgn(A_N)Y(n-N)]^T

is the vector containing the signed past output samples, and

    S_X(n) = [sgn(B_0)X(n), sgn(B_1)X(n+1), ..., sgn(B_M)X(n+M)]^T

denotes the vector containing the signed input samples. The i-th order statistic of S(n) is denoted S_{(i)}(n), i = 1, ..., L, where S_{(1)}(n) ≤ S_{(2)}(n) ≤ ... ≤ S_{(L)}(n), with L = N + M + 1 the window size.
Stability of Recursive WM Filters

One of the main problems in the design of linear IIR filters is stability: to guarantee the BIBO stability of a linear IIR filter, the poles of its transfer function must lie within the unit circle in the complex plane. Unlike linear IIR filters, recursive WM filters are guaranteed to be stable.

Property 6.1 Recursive weighted median filters, as defined in (70), are stable under the bounded-input bounded-output criterion, regardless of the values taken by the feedback coefficients {A_l}, l = 1, 2, ..., N.
Threshold Decomposition Representation of Recursive WM Filters

Using the threshold signal decomposition, the recursive WM operation in (69) can be expressed as

    Y(n) = MEDIAN( |A_l| ⋄ (1/2) ∫ sgn[sgn(A_l)Y(n-l) - q] dq |_{l=1}^{N},
                   |B_k| ⋄ (1/2) ∫ sgn[sgn(B_k)X(n+k) - q] dq |_{k=0}^{M} )
         = (1/2) ∫ MEDIAN( |A_l| ⋄ sgn[sgn(A_l)Y(n-l) - q] |_{l=1}^{N},
                           |B_k| ⋄ sgn[sgn(B_k)X(n+k) - q] |_{k=0}^{M} ) dq.   (71)
Let {s_Y^q} and {s_X^q} denote the threshold decompositions of the signed past output samples and the signed input samples, respectively, i.e.,

    S_Y(n) —T.D.→ s_Y^q(n) = [sgn[sgn(A_1)Y(n-1) - q], ..., sgn[sgn(A_N)Y(n-N) - q]]^T
    S_X(n) —T.D.→ s_X^q(n) = [sgn[sgn(B_0)X(n) - q], ..., sgn[sgn(B_M)X(n+M) - q]]^T   (72)

where q ∈ (-∞, +∞).
Furthermore, we let s^q(n) = [[s_Y^q(n)]^T, [s_X^q(n)]^T]^T be the threshold decomposition representation of the vector S(n) = [S_Y^T(n), S_X^T(n)]^T containing the signed samples. It can be shown that (71) reduces to

    Y(n) = (1/2) ∫ sgn( A_a^T s_Y^q(n) + B_a^T s_X^q(n) ) dq,   (73)

where A_a = [|A_1|, |A_2|, ..., |A_N|]^T and B_a = [|B_0|, |B_1|, ..., |B_M|]^T.
Optimal Recursive Weighted Median Filtering

Under the MAE criterion, the goal is to determine the weights {A_l}_{l=1}^{N} and {B_k}_{k=0}^{M} so as to minimize

    J(A_1, ..., A_N, B_0, ..., B_M) = E{ |D(n) - Y(n)| }.

The steepest descent algorithm is used, in which the filter coefficients are updated according to

    A_l(n+1) = A_l(n) + 2μ [ -∂J(A_1, ..., A_N, B_0, ..., B_M)/∂A_l ]
    B_k(n+1) = B_k(n) + 2μ [ -∂J(A_1, ..., A_N, B_0, ..., B_M)/∂B_k ]   (74)

for l = 1, ..., N and k = 0, ..., M. The gradient ∇J must be computed before the filter weights can be updated. Due to the feedback operation inherent in the recursive WM filter, however, the computation of ∇J becomes intractable.
To overcome this problem, the optimization framework referred to as the equation-error formulation is used. It exploits the fact that, ideally, the filter's output is close to the desired response. The lagged values of Y(n) in (70) can thus be replaced with the corresponding lagged values of D(n). Replacing the previous outputs {Y(n-l)}_{l=1}^{N} with {D(n-l)}_{l=1}^{N} yields a two-input, single-output filter that depends on the input samples {X(n+k)}_{k=0}^{M} and on delayed samples of the desired response {D(n-l)}_{l=1}^{N}, namely

    Ŷ(n) = MEDIAN( |A_N| ⋄ sgn(A_N) D(n-N), ..., |A_1| ⋄ sgn(A_1) D(n-1),
                   |B_0| ⋄ sgn(B_0) X(n), ..., |B_M| ⋄ sgn(B_M) X(n+M) ).   (75)
The derivation of the adaptive algorithm follows steps similar to those used in the derivation of the adaptive algorithm for non-recursive WM filters. This leads to the following fast LMA adaptive algorithm for recursive WM filters:

    A_l(n+1) = A_l(n) + μ (D(n) - Ŷ(n)) sgn(A_l(n)) sgn(S_{D_l} - Ŷ(n))
    B_k(n+1) = B_k(n) + μ (D(n) - Ŷ(n)) sgn(B_k(n)) sgn(S_{X_k} - Ŷ(n)),   (76)

for l = 1, 2, ..., N and k = 0, 1, ..., M, where S_{D_l} = sgn(A_l)D(n-l) and S_{X_k} = sgn(B_k)X(n+k) are the signed samples in the window of (75).
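One step of the update (76) can be sketched as follows. This is a hypothetical helper of my own; it assumes Ŷ(n) has already been computed from the equation-error window (75):

```python
import numpy as np

def rwm_update(A, B, d, d_lags, x_win, y_hat, mu):
    """One equation-error LMA update, eq. (76).

    A, B   : current feedback / feed-forward weights.
    d      : desired response D(n).
    d_lags : D(n-1), ..., D(n-N);  x_win : X(n), ..., X(n+M).
    y_hat  : filter output from (75)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    e = d - y_hat                                        # error D(n) - Yhat(n)
    sd = np.sign(np.sign(A) * np.asarray(d_lags) - y_hat)  # sgn(S_Dl - Yhat)
    sx = np.sign(np.sign(B) * np.asarray(x_win) - y_hat)   # sgn(S_Xk - Yhat)
    return A + mu * e * np.sign(A) * sd, B + mu * e * np.sign(B) * sx
```

Because the desired lags D(n-l) replace the fed-back outputs, each update depends only on known data, avoiding the intractable gradient through the recursion.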
Example: Image Denoising

Figure 18: Image denoising using 3×3 recursive and non-recursive WM filters: (a) original, (b) image with salt-and-pepper noise, (c) non-recursive center WM filter.

Figure 19: Image denoising using 3×3 recursive and non-recursive WM filters: (d) recursive center WM filter, (e) optimal non-recursive WM filter, (f) optimal RWM filter.

Figure 20: Image denoising using 3×3 recursive and non-recursive WM filters: (a) original, (b) image with stable noise, (c) non-recursive center WM filter.

Figure 21: Image denoising using 3×3 recursive and non-recursive WM filters: (d) recursive center WM filter, (e) optimal non-recursive WM filter, (f) optimal RWM filter.
Example: Design of a Band-Pass RWM Filter

Figure 22: Band-pass filter design: (a) input test signal, (b) desired signal, (c) linear FIR filter output, (d) non-recursive WM filter output, (e) linear IIR filter output, (f) RWM filter output.
Figure 23: Performance of the band-pass filter in noise: (a) chirp test signal in stable noise, (b) linear FIR filter output, (c) non-recursive WM filter output, (d) linear IIR filter output, (e) RWM filter output.
Figure 24: Frequency response (a) to a noiseless sinusoidal signal, (b) to a noisy sinusoidal signal, for the RWM filter, the non-recursive WM filter, the linear FIR filter, and the linear IIR filter.
6.6 Complex-Valued Weighted Median Filters

Sorting and ordering of a set of complex-valued samples is not uniquely defined.

Figure 25: Two sets of complex-valued samples.

The complex-valued median is nonetheless well defined within a statistical estimation framework.
If X_i, i = 1, ..., N, are i.i.d. complex Gaussian distributed samples with constant but unknown complex mean β, the ML estimate of location is

    β̂ = arg max_β { (1/(πσ²))^N exp( - \sum_{i=1}^{N} |X_i - β|² / σ² ) }.

This is equivalent to minimizing the sum of squares:

    β̂ = arg min_β \sum_{i=1}^{N} |X_i - β|² = MEAN(X_1, X_2, ..., X_N).   (77)
Representing X_i as X_i = X_{R_i} + j X_{I_i} (the subindices R and I denote the real and imaginary parts), the minimization in (77) can be carried out marginally as

    β̂ = β̂_R + j β̂_I,   (78)

where

    β̂_R = arg min_{β_R} \sum_{i=1}^{N} (X_{R_i} - β_R)² = MEAN(X_{R_1}, X_{R_2}, ..., X_{R_N}),   (79)
    β̂_I = arg min_{β_I} \sum_{i=1}^{N} (X_{I_i} - β_I)² = MEAN(X_{I_1}, X_{I_2}, ..., X_{I_N}).   (80)
The maximum likelihood estimate of location for the Laplacian distribution is

    β̂ = arg min_β \sum_{i=1}^{N} |X_i - β|.   (81)

This minimization cannot be computed marginally and does not have a closed-form solution, requiring a two-dimensional search over the complex plane for the parameter β̂.
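The two-dimensional search can be carried out numerically. One standard iterative scheme for this cost, not covered in the slides, is Weiszfeld's algorithm for the geometric (spatial) median; a sketch under that assumption:

```python
import numpy as np

def complex_spatial_median(X, tol=1e-9, max_iter=200):
    """Minimize sum_i |X_i - beta| over the complex plane (eq. (81))
    by Weiszfeld's fixed-point iteration."""
    X = np.asarray(X, dtype=complex)
    beta = X.mean()                       # initialize at the sample mean
    for _ in range(max_iter):
        d = np.abs(X - beta)
        if np.any(d < 1e-12):             # iterate landed on a sample point
            return beta
        w = 1.0 / d                       # inverse-distance weights
        new = np.sum(w * X) / np.sum(w)   # weighted-mean update
        if abs(new - beta) < tol:
            return new
        beta = new
    return beta
```

Each iteration is a weighted mean with inverse-distance weights, so the iterate is pulled toward tightly clustered samples; convergence handling at sample points is a simplification on my part.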
Two suboptimal approaches have been introduced by Astola:

- Vector median: β̂ is constrained to be one of the input samples X_i. The output is the sample that minimizes the sum of Euclidean distances to all the other samples.
- Marginal complex median: β̂ ≈ β̃ = β̃_R + j β̃_I, where β̃_R = MEDIAN(X_{R_1}, X_{R_2}, ..., X_{R_N}) and β̃_I = MEDIAN(X_{I_1}, X_{I_2}, ..., X_{I_N}).
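Both of Astola's approximations are direct to implement; a small sketch (function names are my own):

```python
import numpy as np

def vector_median(X):
    """Vector median: the input sample minimizing the sum of
    Euclidean distances to all the other samples."""
    X = np.asarray(X, dtype=complex)
    D = np.abs(X[:, None] - X[None, :]).sum(axis=1)  # pairwise distance sums
    return X[np.argmin(D)]

def marginal_complex_median(X):
    """Marginal complex median: componentwise real/imaginary medians."""
    X = np.asarray(X, dtype=complex)
    return np.median(X.real) + 1j * np.median(X.imag)
```

Note the trade-off: the vector median output is always one of the inputs (useful when spurious values must be rejected), while the marginal median can produce a point not in the sample set.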
When the samples are independent but not identically distributed, the ML estimate of location can be generalized. For the Gaussian distribution,

    β̂ = arg min_β \sum_{i=1}^{N} W_i |X_i - β|² = ( \sum_{i=1}^{N} W_i X_i ) / ( \sum_{i=1}^{N} W_i ),   (82)

with W_i = 1/σ_i², a positive real-valued number. For the Laplacian distribution:

    β̂ = arg min_β \sum_{i=1}^{N} W_i |X_i - β|.   (83)

- No closed-form solution
- Weights restricted to be positive
Marginal Complex WM

The weights {Re(W_i)}_{i=1}^{N} act on the real parts of the samples {X_{R_i}}_{i=1}^{N}, and {Im(W_i)}_{i=1}^{N} act on the imaginary parts {X_{I_i}}_{i=1}^{N}, leading to the marginal complex WM filter:

    β̂_marginal = MEDIAN( |W_{R_i}| ⋄ sgn(W_{R_i}) X_{R_i} |_{i=1}^{N} )
                + j MEDIAN( |W_{I_i}| ⋄ sgn(W_{I_i}) X_{I_i} |_{i=1}^{N} ).   (84)

The definition in (84) assumes that the real and imaginary components of the input samples are independent.
Consider the weighted mean operation with complex-valued weights,

    β̂ = MEAN( |W_1| ⋄ e^{jθ_1} X_1, |W_2| ⋄ e^{jθ_2} X_2, ..., |W_N| ⋄ e^{jθ_N} X_N )
       = (1/N) \sum_{i=1}^{N} |W_i| e^{jθ_i} X_i.   (85)

The weights have two roles: first their phases are coupled into the samples, changing them into a new group of phased samples, and then the magnitudes of the weights are applied.
Phase-Coupled Complex WM Filter

Given X_1, X_2, ..., X_N and W_i = |W_i| e^{jθ_i}, i = 1, ..., N, the output of the phase-coupled complex WM is defined as

    β̂ = arg min_β \sum_{i=1}^{N} |W_i| |e^{jθ_i} X_i - β|.   (86)

To solve (86) the cost function must be searched for its minimum, but any one of the previously mentioned suboptimal approximations is applicable.
Marginal Phase-Coupled Complex WM Filter

This reduces the output in (86) to the following two real-valued weighted medians,

    β̂_R = arg min_{β_R} \sum_{i=1}^{N} |W_i| |Re{e^{jθ_i} X_i} - β_R|
         = MEDIAN( |W_i| ⋄ Re{e^{jθ_i} X_i} |_{i=1}^{N} ),   (87)

    β̂_I = arg min_{β_I} \sum_{i=1}^{N} |W_i| |Im{e^{jθ_i} X_i} - β_I|
         = MEDIAN( |W_i| ⋄ Im{e^{jθ_i} X_i} |_{i=1}^{N} ),   (88)

where ⋄ is the replication operator, Re{·} and Im{·} denote the real and imaginary parts respectively, and the filter output is β̂ = β̂_R + j β̂_I.
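A sketch of (87)–(88), assuming a basic positive-weight median helper (`wmed`, a name of my own):

```python
import numpy as np

def wmed(samples, weights):
    """Positive-weight median: the sample at which the sorted cumulative
    weight first reaches half the total weight."""
    samples = np.asarray(samples, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(samples)
    csum = np.cumsum(weights[order])
    return samples[order][np.searchsorted(csum, 0.5 * csum[-1])]

def marginal_phase_coupled_wm(X, W):
    """Eqs. (87)-(88): couple each weight's phase into its sample, then
    take real-valued weighted medians of the real and imaginary parts."""
    X = np.asarray(X, dtype=complex)
    W = np.asarray(W, dtype=complex)
    P = np.exp(1j * np.angle(W)) * X   # phase-coupled samples P_i = e^{j theta_i} X_i
    a = np.abs(W)                      # weight magnitudes
    return wmed(P.real, a) + 1j * wmed(P.imag, a)
```

With all weights equal to j, every sample is rotated by 90° before the marginal medians are taken, illustrating how the phase coupling precedes the magnitude weighting.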
Figure 26: Marginal phase-coupled CWM illustration: original samples X_1, X_2, X_3 with weight phases θ_1, θ_2, θ_3; phase-coupled samples P_1, P_2, P_3; marginal median output; marginal phase-coupled median output.
Complex Threshold Decomposition

It was stated before that for any real-valued signal X, its real threshold decomposition (RTD) representation is

    X = (1/2) ∫ X^q dq,   (89)

where -∞ < q < ∞, and

    X^q = sgn(X - q) = { 1 if X ≥ q;  -1 if X < q }.   (90)
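The RTD identity (89)–(90) can be checked numerically over a finite range (-Q, Q) with |X| < Q, since the tails of sgn(X - q) cancel there; the discretization below is an illustration of my own:

```python
import numpy as np

def rtd_reconstruct(x, Q=10.0, n=200001):
    """Numerically evaluate X = (1/2) * integral of sgn(X - q) dq
    over (-Q, Q), which recovers X whenever |X| < Q."""
    q = np.linspace(-Q, Q, n)          # threshold grid
    dq = q[1] - q[0]
    return 0.5 * np.sum(np.sign(x - q)) * dq   # Riemann sum of (89)
```

For X = 3 and Q = 10 the integral is (3 - (-10)) - (10 - 3) = 6, and half of that is 3, recovering the signal value.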
Given {X_i}_{i=1}^{N} and {W_i}_{i=1}^{N}, the weighted median filter can be expressed as

    Y = (1/2) ∫ MED( |W_i| ⋄ S_i^q |_{i=1}^{N} ) dq,   (91)

where S_i = sgn(W_i) X_i, S = [S_1, S_2, ..., S_N]^T, S_i^q = sgn(S_i - q), and S^q = [S_1^q, S_2^q, ..., S_N^q]^T. Since the samples of the median filter in (91) are either 1 or -1, this median operation can be efficiently calculated as sgn(W_a^T S^q), where the elements of the new vector W_a are given by W_{a_i} = |W_i|, i = 1, ..., N. Equation (91) can thus be written as

    Y = (1/2) ∫ sgn(W_a^T S^q) dq.   (92)
Therefore, the extension of the threshold decomposition representation to the complex field can be carried out naturally as

    X = (1/2) ∫ sgn(Re{X} - q) dq + j (1/2) ∫ sgn(Im{X} - p) dp,   (93)

where the RTD is applied to the real and imaginary parts of the complex signal X separately.
Optimal Marginal Phase-Coupled Complex WM

Given the complex-valued samples {X_i}_{i=1}^{N} and the complex-valued weights {|W_i| e^{jθ_i}}_{i=1}^{N}, define {P_i = e^{jθ_i} X_i}_{i=1}^{N} as the phase-coupled input samples, with real and imaginary parts P_{R_i} = Re{P_i} and P_{I_i} = Im{P_i}. Additionally define:

    P_{R_i}^q = sgn(P_{R_i} - q),  P_R^q = [P_{R_1}^q, P_{R_2}^q, ..., P_{R_N}^q]^T
    P_{I_i}^p = sgn(P_{I_i} - p),  P_I^p = [P_{I_1}^p, P_{I_2}^p, ..., P_{I_N}^p]^T.
The marginal phase-coupled complex WM can be implemented as

    Y = MED( |W_i| ⋄ P_{R_i} |_{i=1}^{N} ) + j MED( |W_i| ⋄ P_{I_i} |_{i=1}^{N} )
      = MED( |W_i| ⋄ (1/2) ∫ P_{R_i}^q dq |_{i=1}^{N} ) + j MED( |W_i| ⋄ (1/2) ∫ P_{I_i}^p dp |_{i=1}^{N} )
      = (1/2) ∫ MED( |W_i| ⋄ P_{R_i}^q |_{i=1}^{N} ) dq + j (1/2) ∫ MED( |W_i| ⋄ P_{I_i}^p |_{i=1}^{N} ) dp
      = (1/2) { ∫ sgn(W_a^T P_R^q) dq + j ∫ sgn(W_a^T P_I^p) dp }.   (94)
Under the mean square error (MSE) criterion, the cost function to minimize is

    J(n) = E{ |β(n) - β̂(n)|² }
         = E{ | (1/2) ∫ (sgn(β_R - q) - sgn(W_a^T P_R^q)) dq
             + j (1/2) ∫ (sgn(β_I - p) - sgn(W_a^T P_I^p)) dp |² }
         = (1/4) E{ ( ∫ e_R^q dq )² + ( ∫ e_I^p dp )² },   (95)

where β_R = Re{β(n)}, β_I = Im{β(n)}, e_R = Re{β(n) - β̂(n)}, e_I = Im{β(n) - β̂(n)}, and e_R^q, e_I^p denote the per-threshold errors in the integrands above.
Utilizing the relationship between the complex gradient vector ∇J and the conjugate derivative ∂J/∂W*, we have

    ∇J(n) = 2 ∂J(W)/∂W*
          = E{ ( ∫ e_R^q dq ) ( -∂/∂W* ∫ sgn(W_a^T P_R^q) dq )
             + ( ∫ e_I^p dp ) ( -∂/∂W* ∫ sgn(W_a^T P_I^p) dp ) }
          = -2 E{ e_R ∂/∂W* ∫ sgn(W_a^T P_R^q) dq
                + e_I ∂/∂W* ∫ sgn(W_a^T P_I^p) dp }.   (96)
To take the derivatives, the sign function is approximated by the hyperbolic tangent, sgn(x) ≈ tanh(x) = (e^x - e^{-x})/(e^x + e^{-x}), whose derivative is d tanh(x)/dx = sech²(x) = 4/(e^x + e^{-x})². Thus,

    ∂/∂W* sgn(W_a^T P_R^q) ≈ sech²(W_a^T P_R^q) ∂/∂W* (W_a^T P_R^q).

Furthermore, the derivative with respect to a single weight is

    ∂/∂W_i* sgn(W_a^T P_R^q) ≈ sech²(W_a^T P_R^q) ∂/∂W_i* ( |W_i| P_{R_i}^q )
        = sech²(W_a^T P_R^q) ( (∂|W_i|/∂W_i*) P_{R_i}^q + |W_i| ∂P_{R_i}^q/∂W_i* ).   (97)

After some simplifications, we obtain

    ∂/∂W_i* sgn(W_a^T P_R^q) ≈ (1/2) sech²(W_a^T P_R^q) ( e^{jθ_i} P_{R_i}^q - sech²(P_{R_i} - q) X_i ).   (98)
Integrating both sides of the above equation,

    ∂/∂W_i* ∫ sgn(W_a^T P_R^q) dq ≈ (1/2) e^{jθ_i} ∫ sech²(W_a^T P_R^q) P_{R_i}^q dq
        - (1/2) X_i ∫ sech²(W_a^T P_R^q) sech²(P_{R_i} - q) dq.   (99)
The second integral can be expanded by splitting the range of integration at the order statistics P_{R(1)} ≤ ... ≤ P_{R(N)} of the phase-coupled real parts; W_a^T P_R^s is constant on each resulting interval:

    ∫_{-∞}^{+∞} sech²(W_a^T P_R^s) sech²(P_{R_i} - s) ds
      = sech²(W_a^T P_R^s)|_{s < P_{R(1)}} ∫_{-∞}^{P_{R(1)}} sech²(P_{R_i} - s) ds
      + \sum_{k=1}^{N-1} sech²(W_a^T P_R^s)|_{P_{R(k)} < s < P_{R(k+1)}} ∫_{P_{R(k)}}^{P_{R(k+1)}} sech²(P_{R_i} - s) ds
      + sech²(W_a^T P_R^s)|_{s > P_{R(N)}} ∫_{P_{R(N)}}^{+∞} sech²(P_{R_i} - s) ds.   (100)
Recalling that ∫ sech²(x) dx = tanh(x) + C, so that ∫ sech²(P_{R_i} - s) ds = -tanh(P_{R_i} - s), we obtain

    ∫_{-∞}^{+∞} sech²(W_a^T P_R^s) sech²(P_{R_i} - s) ds
      = - sech²(W_a^T P_R^s)|_{s < P_{R(1)}} [tanh(P_{R_i} - s)]_{-∞}^{P_{R(1)}}
      - \sum_{k=1}^{N-1} sech²(W_a^T P_R^s)|_{P_{R(k)} < s < P_{R(k+1)}} [tanh(P_{R_i} - s)]_{P_{R(k)}}^{P_{R(k+1)}}
      - sech²(W_a^T P_R^s)|_{s > P_{R(N)}} [tanh(P_{R_i} - s)]_{P_{R(N)}}^{+∞}.   (101)
Now replace tanh(x) with sgn(x). All terms involving the difference of sgn(P_{R_i} - s) across an interval vanish, except when the interval boundary satisfies P_{R_i} = P_{R(k)}, in which case the jump of sgn(P_{R_i} - s) across the interval contributes a factor of 2. When P_{R_i} = β̂_R, we have W_a^T P_R^s ≈ 0 and hence sech²(W_a^T P_R^s) ≈ 1; this is the largest contributor to the sum in (101), and all the other terms can be omitted, resulting in

    ∂/∂W_i* ∫ sgn(W_a^T P_R^s) ds ≈ (1/2) e^{jθ_i} ( sgn(P_{R_i} - β̂_R) + 2j P_{I_i} δ(P_{R_i} - β̂_R) )
    ∂/∂W_i* ∫ sgn(W_a^T P_I^r) dr ≈ (1/2) e^{jθ_i} ( sgn(P_{I_i} - β̂_I) + 2j P_{R_i} δ(P_{I_i} - β̂_I) ),   (102)
leading to the following weight update equation:

    W_i(n+1) = W_i(n) + μ { -∇J(n) }_i
             ≈ W_i(n) + μ e^{jθ_i} { e_R(n) sgn(P_{R_i}(n) - β̂_R(n))
               + e_I(n) sgn(P_{I_i}(n) - β̂_I(n))
               + 2j e_R(n) P_{I_i}(n) δ(P_{R_i}(n) - β̂_R(n))
               + 2j e_I(n) P_{R_i}(n) δ(P_{I_i}(n) - β̂_I(n)) }.   (103)
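A sketch of the update (103); as a simplifying assumption of my own, the Dirac delta terms are dropped, since with continuous-valued data they fire only on exact ties:

```python
import numpy as np

def cwm_weight_update(W, P, beta_hat, beta, mu):
    """One adaptive update of the marginal phase-coupled complex WM weights,
    eq. (103) with the delta terms omitted.

    W        : complex weights;  P : phase-coupled samples e^{j theta_i} X_i.
    beta_hat : current filter output;  beta : desired response."""
    W = np.asarray(W, dtype=complex)
    P = np.asarray(P, dtype=complex)
    eR = (beta - beta_hat).real            # real error e_R(n)
    eI = (beta - beta_hat).imag            # imaginary error e_I(n)
    theta = np.angle(W)                    # weight phases
    upd = eR * np.sign(P.real - beta_hat.real) \
        + eI * np.sign(P.imag - beta_hat.imag)
    return W + mu * np.exp(1j * theta) * upd
```

Note that each weight is nudged along its own phase direction e^{jθ_i}, so the phase coupling of (86) is preserved through adaptation.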
Example: Line Enhancement

Figure 27: Block diagram of a line enhancer implemented with a complex WM filter: the input u(n) feeds a delay line z^{-1} with weights W_1(n), ..., W_N(n) into the complex WM block, producing the output y(n) and the error e(n).
Figure 28: Learning curves of the LMS algorithm for a linear filter, the marginal complex WM, and the marginal phase-coupled complex WM (μ = 0.001), for line enhancement in α-stable noise with dispersion γ = 0.2: (a) α = 1.3, (b) α =
Table 5: Average MSE using the LMS for line enhancement after convergence of the algorithm (μ = 0.001, γ = 0.2).

    Filter                                α = 1.3   α = 1.5   α = 1.7   α = 2
    Noisy signal
    Linear filter
    Marginal complex WM
    Marginal phase-coupled complex WM
Figure 29: Learning curves of the LMS algorithm of the marginal phase-coupled complex WM with μ = 0.1 and μ = 0.001 for line enhancement in α-stable noise (γ = 0.2).
Figure 30: Real part of the output of the filters for α = 1.7, γ = 0.2, and μ =
Figure 31: Phase of the output of the filters for α = 1.7, γ = 0.2: (a) original signal, (b) noisy signal, (c) linear filter, (d) marginal complex WM, (f) marginal phase-coupled complex WM.
Example: Adaptive Modeling

Figure 32: Block diagram of the adaptive modeling experiment: a Gaussian noise generator produces u(n), which drives both a complex linear filter yielding the desired signal d(n) and the complex WM filter yielding d̂(n); the error is e(n) = d(n) - d̂(n).
Figure 33: Learning curves of the LMS for the marginal phase-coupled complex WM, the marginal complex WM, and a linear filter with μ =
Figure 34: Approximated frequency response of the complex WM filters (marginal complex WM, marginal phase-coupled complex WM) and the linear filter.
Table 6: Average MSE of the output of the complex WM filters and the linear filter in the presence of α-stable noise.

    Filter                                α = 1   α = 1.3   α = 1.7   α = 2
    Linear filter
    Marginal complex WM
    Marginal phase-coupled complex WM
Figure 35: Real part of the output of the complex WM filters for the adaptive modeling problem with α = 1 (the real part of the ideal output is shown in dash-dot).
More informationIntroduction to Sparsity in Signal Processing
1 Introduction to Sparsity in Signal Processing Ivan Selesnick Polytechnic Institute of New York University Brooklyn, New York selesi@poly.edu 212 2 Under-determined linear equations Consider a system
More informationChapter 8: Least squares (beginning of chapter)
Chapter 8: Least squares (beginning of chapter) Least Squares So far, we have been trying to determine an estimator which was unbiased and had minimum variance. Next we ll consider a class of estimators
More informationFourier Transform Chapter 10 Sampling and Series
Fourier Transform Chapter 0 Sampling and Series Sampling Theorem Sampling Theorem states that, under a certain condition, it is in fact possible to recover with full accuracy the values intervening between
More informationOn the Frequency-Domain Properties of Savitzky-Golay Filters
On the Frequency-Domain Properties of Savitzky-Golay Filters Ronald W Schafer HP Laboratories HPL-2-9 Keyword(s): Savitzky-Golay filter, least-squares polynomial approximation, smoothing Abstract: This
More informationAdaptiveFilters. GJRE-F Classification : FOR Code:
Global Journal of Researches in Engineering: F Electrical and Electronics Engineering Volume 14 Issue 7 Version 1.0 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals
More informationa 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real
More informationAn Application of the Data Adaptive Linear Decomposition Transform in Transient Detection
Naresuan University Journal 2003; 11(3): 1-7 1 An Application of the Data Adaptive Linear Decomposition Transform in Transient Detection Suchart Yammen Department of Electrical and Computer Engineering,
More informationComputational Intelligence Lecture 6: Associative Memory
Computational Intelligence Lecture 6: Associative Memory Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2011 Farzaneh Abdollahi Computational Intelligence
More informationOn the positivity of linear weights in WENO approximations. Abstract
On the positivity of linear weights in WENO approximations Yuanyuan Liu, Chi-Wang Shu and Mengping Zhang 3 Abstract High order accurate weighted essentially non-oscillatory (WENO) schemes have been used
More informationFourier Series Representation of
Fourier Series Representation of Periodic Signals Rui Wang, Assistant professor Dept. of Information and Communication Tongji University it Email: ruiwang@tongji.edu.cn Outline The response of LIT system
More informationFilter structures ELEC-E5410
Filter structures ELEC-E5410 Contents FIR filter basics Ideal impulse responses Polyphase decomposition Fractional delay by polyphase structure Nyquist filters Half-band filters Gibbs phenomenon Discrete-time
More informationE : Lecture 1 Introduction
E85.2607: Lecture 1 Introduction 1 Administrivia 2 DSP review 3 Fun with Matlab E85.2607: Lecture 1 Introduction 2010-01-21 1 / 24 Course overview Advanced Digital Signal Theory Design, analysis, and implementation
More informationLinear Convolution Using FFT
Linear Convolution Using FFT Another useful property is that we can perform circular convolution and see how many points remain the same as those of linear convolution. When P < L and an L-point circular
More informationStatistical signal processing
Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable
More informationOptimization methods
Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57
More informationImage as a signal. Luc Brun. January 25, 2018
Image as a signal Luc Brun January 25, 2018 Introduction Smoothing Edge detection Fourier Transform 2 / 36 Different way to see an image A stochastic process, A random vector (I [0, 0], I [0, 1],..., I
More informationChannel Estimation with Low-Precision Analog-to-Digital Conversion
Channel Estimation with Low-Precision Analog-to-Digital Conversion Onkar Dabeer School of Technology and Computer Science Tata Institute of Fundamental Research Mumbai India Email: onkar@tcs.tifr.res.in
More informationEE482: Digital Signal Processing Applications
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/
More informationTRINICON: A Versatile Framework for Multichannel Blind Signal Processing
TRINICON: A Versatile Framework for Multichannel Blind Signal Processing Herbert Buchner, Robert Aichner, Walter Kellermann {buchner,aichner,wk}@lnt.de Telecommunications Laboratory University of Erlangen-Nuremberg
More informationSignal Denoising with Wavelets
Signal Denoising with Wavelets Selin Aviyente Department of Electrical and Computer Engineering Michigan State University March 30, 2010 Introduction Assume an additive noise model: x[n] = f [n] + w[n]
More informationFilter Design Problem
Filter Design Problem Design of frequency-selective filters usually starts with a specification of their frequency response function. Practical filters have passband and stopband ripples, while exhibiting
More informationToday s lecture. Local neighbourhood processing. The convolution. Removing uncorrelated noise from an image The Fourier transform
Cris Luengo TD396 fall 4 cris@cbuuse Today s lecture Local neighbourhood processing smoothing an image sharpening an image The convolution What is it? What is it useful for? How can I compute it? Removing
More informationChapter 4. Matrices and Matrix Rings
Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,
More informationat Some sort of quantization is necessary to represent continuous signals in digital form
Quantization at Some sort of quantization is necessary to represent continuous signals in digital form x(n 1,n ) x(t 1,tt ) D Sampler Quantizer x q (n 1,nn ) Digitizer (A/D) Quantization is also used for
More informationData-driven methods in application to flood defence systems monitoring and analysis Pyayt, A.
UvA-DARE (Digital Academic Repository) Data-driven methods in application to flood defence systems monitoring and analysis Pyayt, A. Link to publication Citation for published version (APA): Pyayt, A.
More informationELEN E4810: Digital Signal Processing Topic 2: Time domain
ELEN E4810: Digital Signal Processing Topic 2: Time domain 1. Discrete-time systems 2. Convolution 3. Linear Constant-Coefficient Difference Equations (LCCDEs) 4. Correlation 1 1. Discrete-time systems
More informationDATA MINING AND MACHINE LEARNING
DATA MINING AND MACHINE LEARNING Lecture 5: Regularization and loss functions Lecturer: Simone Scardapane Academic Year 2016/2017 Table of contents Loss functions Loss functions for regression problems
More informationSpring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization
Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table
More informationVU Signal and Image Processing
052600 VU Signal and Image Processing Torsten Möller + Hrvoje Bogunović + Raphael Sahann torsten.moeller@univie.ac.at hrvoje.bogunovic@meduniwien.ac.at raphael.sahann@univie.ac.at vda.cs.univie.ac.at/teaching/sip/18s/
More informationPart 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where
More informationwhere u is the decision-maker s payoff function over her actions and S is the set of her feasible actions.
Seminars on Mathematics for Economics and Finance Topic 3: Optimization - interior optima 1 Session: 11-12 Aug 2015 (Thu/Fri) 10:00am 1:00pm I. Optimization: introduction Decision-makers (e.g. consumers,
More informationIntroduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be
Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show
More informationLecture 04 Image Filtering
Institute of Informatics Institute of Neuroinformatics Lecture 04 Image Filtering Davide Scaramuzza 1 Lab Exercise 2 - Today afternoon Room ETH HG E 1.1 from 13:15 to 15:00 Work description: your first
More informationData Detection for Controlled ISI. h(nt) = 1 for n=0,1 and zero otherwise.
Data Detection for Controlled ISI *Symbol by symbol suboptimum detection For the duobinary signal pulse h(nt) = 1 for n=0,1 and zero otherwise. The samples at the output of the receiving filter(demodulator)
More informationMMSE System Identification, Gradient Descent, and the Least Mean Squares Algorithm
MMSE System Identification, Gradient Descent, and the Least Mean Squares Algorithm D.R. Brown III WPI WPI D.R. Brown III 1 / 19 Problem Statement and Assumptions known input x[n] unknown system (assumed
More informationLessons in Estimation Theory for Signal Processing, Communications, and Control
Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL
More informationCity, University of London Institutional Repository
City Research Online City, University of London Institutional Repository Citation: Zhao, S., Shmaliy, Y. S., Khan, S. & Liu, F. (2015. Improving state estimates over finite data using optimal FIR filtering
More informationNotes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed
18.466 Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 1. MLEs in exponential families Let f(x,θ) for x X and θ Θ be a likelihood function, that is, for present purposes,
More informationOn a class of stochastic differential equations in a financial network model
1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University
More informationElec4621 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis
Elec461 Advanced Digital Signal Processing Chapter 11: Time-Frequency Analysis Dr. D. S. Taubman May 3, 011 In this last chapter of your notes, we are interested in the problem of nding the instantaneous
More information(Refer Slide Time: )
Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi FIR Lattice Synthesis Lecture - 32 This is the 32nd lecture and our topic for
More informationA time series is called strictly stationary if the joint distribution of every collection (Y t
5 Time series A time series is a set of observations recorded over time. You can think for example at the GDP of a country over the years (or quarters) or the hourly measurements of temperature over a
More informationCEPSTRAL ANALYSIS SYNTHESIS ON THE MEL FREQUENCY SCALE, AND AN ADAPTATIVE ALGORITHM FOR IT.
CEPSTRAL ANALYSIS SYNTHESIS ON THE EL FREQUENCY SCALE, AND AN ADAPTATIVE ALGORITH FOR IT. Summarized overview of the IEEE-publicated papers Cepstral analysis synthesis on the mel frequency scale by Satochi
More informationAnalysis of Discrete-Time Systems
TU Berlin Discrete-Time Control Systems 1 Analysis of Discrete-Time Systems Overview Stability Sensitivity and Robustness Controllability, Reachability, Observability, and Detectabiliy TU Berlin Discrete-Time
More informationOptimum Ordering and Pole-Zero Pairing of the Cascade Form IIR. Digital Filter
Optimum Ordering and Pole-Zero Pairing of the Cascade Form IIR Digital Filter There are many possible cascade realiations of a higher order IIR transfer function obtained by different pole-ero pairings
More information5 Kalman filters. 5.1 Scalar Kalman filter. Unit delay Signal model. System model
5 Kalman filters 5.1 Scalar Kalman filter 5.1.1 Signal model System model {Y (n)} is an unobservable sequence which is described by the following state or system equation: Y (n) = h(n)y (n 1) + Z(n), n
More informationV. Adaptive filtering Widrow-Hopf Learning Rule LMS and Adaline
V. Adaptive filtering Widrow-Hopf Learning Rule LMS and Adaline Goals Introduce Wiener-Hopf (WH) equations Introduce application of the steepest descent method to the WH problem Approximation to the Least
More informationHaar wavelets. Set. 1 0 t < 1 0 otherwise. It is clear that {φ 0 (t n), n Z} is an orthobasis for V 0.
Haar wavelets The Haar wavelet basis for L (R) breaks down a signal by looking at the difference between piecewise constant approximations at different scales. It is the simplest example of a wavelet transform,
More information