LINEAR-PHASE FIR FILTER DESIGN BY LINEAR PROGRAMMING


Often it is desirable that an FIR filter be designed to minimize the Chebyshev error subject to linear constraints that the Parks-McClellan algorithm does not allow. An example described by Rabiner includes time-domain constraints. Another example comes from a communication application [1]: given h1(n), design h2(n) so that h(n) = h1(n) * h2(n) is a Nyquist-M filter (i.e., h(Mn) = 0 for all n != 0). Such constraints are linear in h2(n). (In the special case that h1(n) = delta(n), h2(n) is itself a Nyquist-M filter, and is often used for interpolation.) Linear programming formulations of approximation problems (and of optimization problems in general) are very attractive because well-developed algorithms exist for solving them (namely the simplex algorithm and, more recently, interior point methods). Although linear programming requires significantly more computation than the Remez algorithm, for many problems it is a very viable technique. Furthermore, this approach is very flexible: it allows arbitrary linear equality and inequality constraints. I. Selesnick, EL 713 Lecture Notes

WHAT IS A LINEAR PROGRAM? A linear program is not a type of computer program; the term comes from optimization theory. A linear program is an optimization problem having a certain form. Specifically, the cost function and the constraints are both linear functions of the independent variables. A linear program has the following form:

   Minimize c1 x1 + c2 x2 + ... + cn xn over xk, 1 <= k <= n, such that A x <= b.   (1)

The cost function is f(x) = c1 x1 + c2 x2 + ... + cn xn. The variables are x1, x2, ..., xn. The constraints are A x <= b, where A is a matrix of size m x n,

   x = [x1, x2, ..., xn]^t   and   b = [b1, b2, ..., bm]^t.

A x <= b means that each component of the vector A x is less than or equal to the corresponding component of the vector b. Here m is the number of constraints.

Note that f(x) = c1 x1 + c2 x2 + ... + cn xn can be written in vector form as f(x) = c^t x, where c^t = [c1 c2 ... cn] and x = [x1, x2, ..., xn]^t. With this vector notation, the linear program can be written as:

   Minimize c^t x over xk, 1 <= k <= n, such that A x <= b.   (2)

THE MATLAB lp COMMAND The Matlab command lp in the Optimization Toolbox can be used to solve a linear program. (In recent versions of the Optimization Toolbox, lp has been replaced by linprog, which has a similar syntax.) Here is the command description:

>> help lp

 LP     Linear programming.
        X=LP(f,A,b) solves the linear programming problem:

            min f'x    subject to:  Ax <= b
             x

        X=LP(f,A,b,VLB,VUB) defines a set of lower and upper bounds on
        the design variables, X, so that the solution is always in the
        range VLB <= X <= VUB.

        X=LP(f,A,b,VLB,VUB,X0) sets the initial starting point to X0.

        X=LP(f,A,b,VLB,VUB,X0,N) indicates that the first N constraints
        defined by A and b are equality constraints.

        X=LP(f,A,b,VLB,VUB,X0,N,DISPLAY) controls the level of warning
        messages displayed. Warning messages can be turned off with
        DISPLAY = -1.

        [X,LAMBDA]=LP(f,A,b) returns the set of Lagrangian multipliers,
        LAMBDA, at the solution.

        [X,LAMBDA,HOW]=LP(f,A,b) also returns a string HOW that
        indicates error conditions at the final iteration.

        LP produces warning messages when the solution is either
        unbounded or infeasible.

EXAMPLE For example, suppose we have four variables x1, x2, x3, x4 and we want to minimize

   3 x1 + 2 x2 - x3 + 5 x4

subject to the constraints

   x1 + x2 + x3 + x4 <= 5
   x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0.

This can be written as

   Minimize c^t x over xk, 1 <= k <= 4, such that A x <= b   (3)

where

   c^t = [3 2 -1 5],

   A = [  1  1  1  1
         -1  0  0  0
          0 -1  0  0
          0  0 -1  0
          0  0  0 -1 ]

and b = [5, 0, 0, 0, 0]^t.

We can find the values of xk solving this constrained optimization problem using the Matlab lp command as follows.

>> c = [3 2 -1 5]';
>> A = [1 1 1 1; -1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 -1];
>> b = [5; 0; 0; 0; 0];
>> x = lp(c,A,b)

x =
     0
     0
     5
     0

The optimal solution is therefore x1 = 0, x2 = 0, x3 = 5, x4 = 0.
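The same small LP can be checked with a modern solver. The following is a sketch using SciPy's linprog (not part of the original notes; NumPy and SciPy are assumed to be available). Note that linprog expresses the sign constraints x >= 0 through variable bounds rather than extra rows of A:

```python
import numpy as np
from scipy.optimize import linprog

# minimize 3*x1 + 2*x2 - x3 + 5*x4
c = [3, 2, -1, 5]
# single inequality constraint: x1 + x2 + x3 + x4 <= 5
A = [[1, 1, 1, 1]]
b = [5]
# x >= 0 via bounds; this replaces the -I rows used in the Matlab example
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 4, method='highs')
print(res.x)    # optimal solution
print(res.fun)  # optimal cost
```

This recovers x = (0, 0, 5, 0) with cost -5, matching the result of the lp command above.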

EXAMPLE 2 Here is an example with an equality constraint. Suppose we have four variables x1, x2, x3, x4 and we want to minimize

   3 x1 + 2 x2 - x3 + 5 x4

subject to the constraints

   2 x1 - x3 = 3
   x1 + x2 + x3 + x4 <= 5
   x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0.

We can also find the values of xk solving this constrained optimization using the Matlab command lp. The syntax

   x = lp(c,A,b,[],[],[],N);

indicates that the first N constraints defined by A and b are equality constraints. The Matlab code for solving this problem is shown on the following page.

>> c = [3 2 -1 5]';
>> A = [2 0 -1 0; 1 1 1 1; -1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 -1];
>> b = [3; 5; 0; 0; 0; 0];
>> x = lp(c,A,b,[],[],[],1)

x =
    1.5000
         0
         0
         0

The optimal solution is therefore x1 = 1.5, x2 = 0, x3 = 0, x4 = 0.
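Again, a SciPy sketch of the same problem (an assumption-free translation, except that SciPy passes the equality constraint separately through A_eq/b_eq instead of through a leading block of A):

```python
import numpy as np
from scipy.optimize import linprog

# minimize 3*x1 + 2*x2 - x3 + 5*x4
c = [3, 2, -1, 5]
A_eq = [[2, 0, -1, 0]]  # equality constraint: 2*x1 - x3 = 3
b_eq = [3]
A_ub = [[1, 1, 1, 1]]   # inequality constraint: x1 + x2 + x3 + x4 <= 5
b_ub = [5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method='highs')
print(res.x)
```

This recovers x = (1.5, 0, 0, 0), matching the lp result: the equality constraint forces x1 >= 1.5, and the cost is smallest when x1 is as small as possible with x2 = x3 = x4 = 0.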

FILTER DESIGN AS A LINEAR PROGRAM To understand how the problem of minimizing the weighted error

   E(w) = W(w) (A(w) - D(w))   (4)

where (assuming a type I FIR filter)

   A(w) = sum_{n=0}^{M} a(n) cos(n w)   (5)

can be formulated as a linear program, we give a chain of equivalent problems, beginning with the following one:

   Find a(n) such that |E(w)| <= delta for 0 <= w <= pi   (6)

for the smallest possible value of delta. Equivalently, we have the following statement of the problem, which treats delta as a variable. We have the M + 1 cosine coefficients a(n) and the variable delta, so the total number of variables is M + 2.

   Minimize delta over a(n), delta such that |E(w)| <= delta for 0 <= w <= pi   (7)

DERIVATION We can remove the absolute value:

   Minimize delta over a(n), delta
   such that E(w) <= delta for 0 <= w <= pi
   and -delta <= E(w) for 0 <= w <= pi   (8)

Expanding E(w):

   Minimize delta over a(n), delta
   such that W(w) (A(w) - D(w)) <= delta for 0 <= w <= pi
   and -delta <= W(w) (A(w) - D(w)) for 0 <= w <= pi   (9)

Moving the variables to the left-hand side:

   Minimize delta over a(n), delta
   such that A(w) - delta/W(w) <= D(w) for w in B
   and -A(w) - delta/W(w) <= -D(w) for w in B   (10)

where B = {w in [0, pi] : W(w) > 0}. The variables are a(0), ..., a(M) and delta. The cost function and the constraints are linear functions of the variables, hence the formulation is that of a linear program. To be precise, this is a semi-infinite linear program, because there is a continuum of constraints: one constraint for each w in [0, pi] where W(w) > 0. To obtain a linear program with a finite number of constraints, a suitably dense grid of points distributed in [0, pi] is commonly used.

DISCRETIZATION Discretize: use a discretization of the frequency variable, {wk : 1 <= k <= L}, such that W(wk) > 0 for all 1 <= k <= L. This discretization need not be uniform, and in general it will not be uniform, because we need W(wk) > 0.

   Minimize delta over a(n), delta
   such that A_k - delta/W_k <= D_k for 1 <= k <= L
   and -A_k - delta/W_k <= -D_k for 1 <= k <= L   (11)

where

   A_k = A(wk), D_k = D(wk), W_k = W(wk).   (12)

The inequalities

   A_k - delta/W_k <= D_k for 1 <= k <= L   (13)

can be written in matrix form as

   [A_1, ..., A_k, ..., A_L]^t - delta [1/W_1, ..., 1/W_k, ..., 1/W_L]^t <= [D_1, ..., D_k, ..., D_L]^t   (14)

The vector of values A_k can be written as

   [ A_1 ]   [ 1  cos(w_1)  ...  cos(M w_1) ] [ a(0) ]
   [ ... ]   [         ...                  ] [ a(1) ]
   [ A_k ] = [ 1  cos(w_k)  ...  cos(M w_k) ] [ ...  ]   (15)
   [ ... ]   [         ...                  ] [ a(M) ]
   [ A_L ]   [ 1  cos(w_L)  ...  cos(M w_L) ]

or as

   A = C a.   (16)

Therefore, the inequalities can be written as

   C a - V delta <= d   (17)

where

   V = [1/W_1, 1/W_2, ..., 1/W_L]^t,   d = [D_1, D_2, ..., D_L]^t.   (18)

Similarly, the constraints

   -A_k - delta/W_k <= -D_k for 1 <= k <= L   (19)

can be written as

   -C a - V delta <= -d.   (20)

Combining the inequality constraints into one matrix:

   [  C  -V ] [ a     ]    [  d ]
   [ -C  -V ] [ delta ] <= [ -d ]   (21)

The new constrained minimization problem is:

   Minimize delta over a(n), delta
   such that
   [  C  -V ] [ a     ]    [  d ]
   [ -C  -V ] [ delta ] <= [ -d ]   (22)

This fits the standard linear program:

   Minimize c^t x over x such that Q x <= b   (23)

where we use

   Q = [ C  -V ; -C  -V ],   b = [ d ; -d ],   c = [0, 0, ..., 0, 1]^t.   (24)

Then the variables a(n) and delta can be directly obtained from the vector x:

   x = [a(0), a(1), ..., a(M), delta]^t.   (25)

EXAMPLE To show how to use the linear programming approach, we repeat the design of a length-31 FIR low-pass filter with Kp = 1, Ks = 4 and wp = 0.26 pi, ws = 0.34 pi. The filter minimizing the weighted Chebyshev error is illustrated in the figure; it is the same as the filter obtained with the Remez algorithm. The figure also shows the step response of the filter. In the next example, constraints will be imposed upon the step response.

[Figure: amplitude response A(w), pass-band detail, A(w) in dB, impulse response h(n), and step response.]

N = 31;
Kp = 1; Ks = 4;
wp = 0.26*pi; ws = 0.34*pi; wo = 0.3*pi;
L = 500;
w = [0:L]*pi/L;
W = Kp*(w<=wp) + Ks*(w>=ws);
D = (w<=wo);
% remove points where W == 0
SN = 1e-8;
k = (W>SN);
w = w(k); W = W(k); D = D(k);
% construct matrices
M = (N-1)/2;
C = cos(w'*[0:M]);
V = 1./W';
Q = [C, -V; -C, -V];
b = [D'; -D'];
c = [zeros(M+1,1); 1];
% solve linear program
x = lp(c,Q,b);
a = x(1:M+1);
del = x(M+2);
h = [a(M+1:-1:2); 2*a(1); a(2:M+1)]/2;
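For reference, the same design can be reproduced with SciPy's linprog in place of lp. This is a sketch, not code from the notes; it mirrors the Matlab variables above and assumes NumPy/SciPy are available. Unlike lp, linprog bounds all variables at zero by default, so the free cosine coefficients a(n) need explicit (None, None) bounds:

```python
import numpy as np
from scipy.optimize import linprog

N = 31
Kp, Ks = 1, 4
wp, ws, wo = 0.26*np.pi, 0.34*np.pi, 0.30*np.pi
L = 500
w = np.arange(L + 1) * np.pi / L
W = Kp*(w <= wp) + Ks*(w >= ws)      # weight function
D = (w <= wo).astype(float)          # desired amplitude
keep = W > 1e-8                      # remove points where W == 0
w, W, D = w[keep], W[keep], D[keep]

M = (N - 1) // 2
C = np.cos(np.outer(w, np.arange(M + 1)))   # A(w_k) = C @ a
V = (1.0 / W)[:, None]
Q = np.block([[C, -V], [-C, -V]])           # constraint matrix of (24)
b = np.concatenate([D, -D])
c = np.concatenate([np.zeros(M + 1), [1.0]])  # cost is delta alone

# a(n) is free; delta >= 0
bounds = [(None, None)] * (M + 1) + [(0, None)]
res = linprog(c, A_ub=Q, b_ub=b, bounds=bounds, method='highs')
a, delta = res.x[:M + 1], res.x[M + 1]
h = np.concatenate([a[M:0:-1], [2*a[0]], a[1:M + 1]]) / 2
```

The solution satisfies the weighted error bound W(w)|A(w) - D(w)| <= delta at every grid point, and the impulse response h is symmetric (linear phase) by construction.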

REMARKS This approach is slower than the Remez algorithm because solving a linear program is in general a more difficult problem. In this example, we used a grid density of 500 instead of 1000. Because the required computation depends on both the number of variables and the number of constraints, for long filters (which would require more grid points) the discretization of the constraints may require more computation than is acceptable. This problem can be alleviated by a procedure that invokes a sequence of linear programs: by using a coarse grid density initially, and by appending to the existing set of constraints new constraints where the constraint violation is greatest, an accurate solution is obtained without the need to use an overly dense grid.
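The multi-grid procedure described above can be sketched as follows. This is only an illustration of the idea, not the notes' own implementation: the LP of (22) is solved on a coarse grid, the error is checked on a dense grid, and the frequencies where the constraint violation occurs are appended before re-solving. The grid sizes (81 coarse, 2001 dense) are arbitrary choices for this sketch:

```python
import numpy as np
from scipy.optimize import linprog

M, Kp, Ks = 15, 1, 4
wp, ws, wo = 0.26*np.pi, 0.34*np.pi, 0.30*np.pi

def bands(w):
    """Weight and desired response; transition-band points are removed."""
    W = Kp*(w <= wp) + Ks*(w >= ws)
    D = (w <= wo).astype(float)
    keep = W > 1e-8
    return w[keep], W[keep], D[keep]

def solve_on_grid(w, W, D):
    """Solve the LP (22) on a fixed frequency grid; returns (a, delta)."""
    C = np.cos(np.outer(w, np.arange(M + 1)))
    V = (1.0 / W)[:, None]
    Q = np.block([[C, -V], [-C, -V]])
    b = np.concatenate([D, -D])
    c = np.concatenate([np.zeros(M + 1), [1.0]])
    bounds = [(None, None)] * (M + 1) + [(0, None)]
    res = linprog(c, A_ub=Q, b_ub=b, bounds=bounds, method='highs')
    return res.x[:M + 1], res.x[M + 1]

w, W, D = bands(np.linspace(0, np.pi, 81))        # coarse initial grid
wf, Wf, Df = bands(np.linspace(0, np.pi, 2001))   # dense checking grid

for it in range(30):
    a, delta = solve_on_grid(w, W, D)
    # weighted error on the dense grid
    E = Wf * np.abs(np.cos(np.outer(wf, np.arange(M + 1))) @ a - Df)
    viol = E > delta + 1e-8
    if not np.any(viol):
        break
    # append the violated constraints and re-solve
    w = np.concatenate([w, wf[viol]])
    W = np.concatenate([W, Wf[viol]])
    D = np.concatenate([D, Df[viol]])
```

Because the working grid is always a subset of the dense grid's constraints, the delta it produces can never exceed the delta obtained by solving on the dense grid directly, while the LPs solved are much smaller.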

TIME-DOMAIN CONSTRAINTS The usefulness of linear programming for filter design is well illustrated by an example described by Rabiner [2]. Using a linear program, he illustrates that simultaneous restrictions on both the time and frequency response of a filter can be readily imposed. In his example, the oscillatory behavior of the step response of a low-pass filter is included in the design formulation. EXAMPLE In the following design problem, the first set of constraints corresponds to the frequency response, and the second set corresponds to the time domain. s(n) is the step response,

   s(n) = sum_{k=0}^{n} h(k),   (26)

and N1 is the region in which the step response oscillates around 0.

   Minimize delta over a(n), delta   (27)
   such that |E(w)| <= delta for 0 <= w <= pi
   and |s(n)| <= delta_t for n in N1

After discretization, this can be written as the following linear program:

   Minimize delta over a(n), delta
   such that
   [  C  -V ]               [  d ]
   [ -C  -V ] [ a     ]     [ -d ]
   [  T   0 ] [ delta ] <=  [  v ]
   [ -T   0 ]               [  v ]   (28)

where the matrix T and the vector v have the form

   T = [ 0 0 ... 0 1          [ 1
         0 0 ... 1 1            1
         . .     . .    v = 2 delta_t  .
         0 1 ... 1 1 ]          1 ]   (29)

(Can you work out why?) When the step response is not included in the problem formulation, the length-31 solution is shown in the previous figure. For that filter, the peak error in the pass-band (delta) is 0.0844, but the oscillation of the step response around 0 is 0.1315. When delta_t is set equal to 0.05 and N1 = {0, 1, ..., 12}, the solution is illustrated in the following figure.

[Figure: amplitude response A(w), pass-band detail, A(w) in dB, impulse response h(n), and step response.]

As is evident from the figures, the step response is indeed less oscillatory; however, the frequency response is not quite as good. For this filter, the oscillation of the step response around 0 is decreased to 0.05, but the peak error in the pass-band (delta) has increased to 0.126. It is clear that the trade-off between time- and frequency-response specifications can be well managed with a linear program formulation. This filter was designed using the following Matlab code.

N = 31;
Kp = 1; Ks = 4;
wp = 0.26*pi; ws = 0.34*pi; wo = (wp+ws)/2;
L = 500;
w = [0:L]*pi/L;
W = Kp*(w<=wp) + Ks*(w>=ws);
D = (w<=wo);
% remove points where W == 0
SN = 1e-8;
k = (W>SN);
w = w(k); W = W(k); D = D(k);
% construct matrices
M = (N-1)/2;
C = cos(w'*[0:M]);
V = 1./W';
Q = [C, -V; -C, -V];
b = [D'; -D'];
c = [zeros(M+1,1); 1];
% time-domain constraints: T*a = 2*s(n) for n = 0,...,12
T = zeros(13,M+1);
for n = 0:12
   T(n+1,M+1-n:M+1) = 1;
end
v = 2*0.05*ones(13,1);
% append time-domain constraints to existing constraints
% (the appended zero column corresponds to the variable delta)
Q = [Q; -T, zeros(13,1); T, zeros(13,1)];
b = [b; v; v];
% solve linear program

x = lp(c,Q,b);
a = x(1:M+1);
del = x(M+2);
h = [a(M+1:-1:2); 2*a(1); a(2:M+1)]/2;
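A SciPy sketch of the same constrained design follows (again, an illustration rather than code from the notes, assuming the reconstructed values delta_t = 0.05 and L = 500). The rows of T pick out the partial sums of a(n) that equal twice the step response, as in (29):

```python
import numpy as np
from scipy.optimize import linprog

N, Kp, Ks = 31, 1, 4
wp, ws = 0.26*np.pi, 0.34*np.pi
wo = (wp + ws) / 2
L, dt = 500, 0.05                        # dt is the step-response bound delta_t
w = np.arange(L + 1) * np.pi / L
W = Kp*(w <= wp) + Ks*(w >= ws)
D = (w <= wo).astype(float)
keep = W > 1e-8
w, W, D = w[keep], W[keep], D[keep]

M = (N - 1) // 2
C = np.cos(np.outer(w, np.arange(M + 1)))
V = (1.0 / W)[:, None]
Q = np.block([[C, -V], [-C, -V]])
b = np.concatenate([D, -D])
c = np.concatenate([np.zeros(M + 1), [1.0]])

# time-domain constraints: T @ a = 2*s(n) for n = 0..12, so |T @ a| <= 2*dt
T = np.zeros((13, M + 1))
for n in range(13):
    T[n, M - n:] = 1.0                   # 2*s(n) = a(M-n) + ... + a(M)
Tz = np.hstack([T, np.zeros((13, 1))])   # zero column: delta is not involved
Q = np.vstack([Q, Tz, -Tz])
b = np.concatenate([b, 2*dt*np.ones(13), 2*dt*np.ones(13)])

bounds = [(None, None)] * (M + 1) + [(0, None)]
res = linprog(c, A_ub=Q, b_ub=b, bounds=bounds, method='highs')
a, delta = res.x[:M + 1], res.x[M + 1]
h = np.concatenate([a[M:0:-1], [2*a[0]], a[1:M + 1]]) / 2
s = np.cumsum(h)                         # step response
```

The designed filter's step response satisfies |s(n)| <= delta_t for n = 0, ..., 12, at the price of a larger pass-band delta, exactly the trade-off discussed above.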

PROS AND CONS

- Optimal with respect to chosen criteria.
- Easy to include arbitrary linear constraints, including inequality constraints.
- Criteria limited to linear programming formulation.
- Computational cost higher than Remez algorithm.

References

[1] F. M. de Saint-Martin and P. Siohan. Design of optimal linear-phase transmitter and receiver filters for digital systems. In Proc. IEEE Int. Symp. Circuits and Systems (ISCAS), volume 2, pages 885-888, April 30-May 3, 1995.

[2] L. R. Rabiner and B. Gold. Theory and Application of Digital Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1975.