A convex optimization method to solve a filter design problem

Thanh Hieu Le and Marc Van Barel
Report TW 599, August 2011

Department of Computer Science, K.U.Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium
{thanhhieu.le, marc.vanbarel}@cs.kuleuven.be

Abstract

In this paper, we formulate the feasibility problem corresponding to a filter design problem as a convex optimization problem. Combined with a bisection rule, this leads to an algorithm that minimizes the design parameter of the filter design problem. A safety margin is introduced to overcome the numerical difficulties that arise when this type of problem is solved numerically. Numerical experiments illustrate the validity of the approach for larger filter degrees than similar previous algorithms could handle.

Keywords: filter design, convex optimization, nonnegative polynomial.
MSC: Primary: 90C25; Secondary: 90C22, 65D10.

The research was partially supported by the Research Council K.U.Leuven, project OT/10/038 (Multi-parameter model order reduction and its applications), CoE EF/05/006 Optimization in Engineering (OPTEC), and by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office, Belgian Network DYSCO (Dynamical Systems, Control, and Optimization). The scientific responsibility rests with its author(s).

1 Motivation

In this paper, a low-pass filter design problem is considered where the parameters of the problem are the passband ripple, the stopband attenuation and the degree of the filter transfer function. In [3], Genin et al. show that such a filter design problem can be solved by combining a bisection rule on one of these design parameters with an optimization scheme solving a feasibility problem. The feasibility problem can be reformulated as a convex optimization problem over the cone of nonnegative trigonometric polynomials.

In [5, 6, 12, 4], it is shown that these polynomials can be parameterized using Hankel and/or Toeplitz matrices. In [3], the authors focus on minimizing one of the three parameters, more precisely the stopband attenuation. Their optimization method is implemented using the LMI Toolbox of MATLAB, but it breaks down as soon as the degree of the filter is greater than 7. The aim of this paper is to design an algorithm that can be used for much higher filter degrees. We first reformulate the feasibility problem of the filter design problem as a convex optimization problem, more precisely as a conic programming problem [1, 7]. In our numerical experiments, this is solved using the cvx Toolbox [9, 7, 8]. This approach is then used to develop an algorithm that solves the filter design problem in which passband ripple and stopband attenuation are minimized simultaneously. In fact, this algorithm is just a modification of the one in [3], but it turns out to be valid for much larger values of the filter degree. Finally, we discuss the behavior of the minimal value of the whole problem when the filter degree, the passband and the difference between stopband and passband change.

2 Nonnegative trigonometric polynomials

In this section, we introduce some results on complex-valued functional systems and on the cones of nonnegative trigonometric polynomials on the unit circle and its symmetric arcs. One can find more details in [12] or [11, sections 2.1, 2.2, 2.3].

2.1 Complex-valued functional systems in the complex plane

Let \Gamma be an arbitrary set in the complex plane. Given a system S = \{\psi_0(z), \dots, \psi_{r-1}(z)\} of linearly independent complex-valued functions \psi_i(z) on \Gamma and a real-valued weight function \varphi(z) that is nonnegative on \Gamma, define the finite-dimensional cone

    K = \left\{ P(z) : P(z) = \varphi(z) \sum_{i=0}^{N-1} Q_i(z) \overline{Q_i(z)}, \ Q_i(z) \in \mathcal{F}(S), \ i = 0, 1, \dots, N-1 \right\},    (1)

where

    \mathcal{F}(S) = \left\{ Q(z) : Q(z) = \sum_{k=0}^{r-1} Q_k \psi_k(z), \ Q_k \in \mathbb{C}, \ k = 0, 1, \dots, r-1 \right\}.    (2)
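To make the definitions concrete, here is a small instance (an illustrative example added in this revision, not taken from the original report): take \Gamma = \mathbb{T} (the unit circle), \varphi \equiv 1, S = \{1, z\} and a single term N = 1 with Q_0(z) = 1 + z. Then

    P(z) = Q_0(z)\,\overline{Q_0(z)} = |1+z|^2 = 2 + z + \bar{z} = 2 + 2\cos\omega_z \ \geq 0, \qquad z = e^{j\omega_z} \in \mathbb{T},

so P belongs to the cone K of (1).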

We take N \geq r. Define the squared functional system

    S^2 = \{ v_{ij}(z) = \varphi(z) \psi_i(z) \overline{\psi_j(z)}, \ i, j = 0, 1, \dots, r-1 \}

and two vector functions, \psi(z) = [\psi_0(z), \dots, \psi_{r-1}(z)]^T and v(z) = [v_0(z), \dots, v_{s-1}(z)]^T, whose components span a finite-dimensional space that covers S^2. Continuing, define two spaces

    E = \{ P = (P_0, \dots, P_{s-1}) : P_k \in \mathbb{C}, \ k = 0, 1, \dots, s-1 \},
    F = \{ W = (W_{ij}) : W_{ij} = \overline{W_{ji}} \in \mathbb{C}, \ i, j = 0, 1, \dots, r-1 \},

with corresponding complex-valued inner products, respectively,

    \langle \cdot, \cdot \rangle_E : E \times E \to \mathbb{C}, \quad (P, Q) \mapsto \langle P, Q \rangle_E = \frac{1}{2} \sum_{i=0}^{s-1} \left( \overline{Q_i} P_i + Q_i \overline{P_i} \right),

    \langle \cdot, \cdot \rangle_F : F \times F \to \mathbb{C}, \quad (X, Y) \mapsto \langle X, Y \rangle_F = \sum_{i,j=0}^{r-1} \overline{Y_{ij}} X_{ij}.

We can define a linear operator \Lambda : E \to F as

    \Lambda(P) = \frac{1}{2} \sum_{k=0}^{s-1} \left( P_k \Lambda_k + \overline{P_k}\, \overline{\Lambda_k} \right),

where \{\Lambda_k\}_k \subset F is given such that \Lambda(v(z)) = \varphi(z) \psi(z) \psi(z)^*, z \in \Gamma.

2.2 Nonnegative trigonometric polynomials

We summarize the results given in [11, section 2.3]. A trigonometric polynomial of degree d is of the form

    p(e^{j\theta}) = \sum_{k=0}^{d} \left( a_k \cos(k\theta) + b_k \sin(k\theta) \right), \quad \theta \in (-\pi, \pi],    (3)

with a_k, b_k \in \mathbb{R}, k = 0, 1, \dots, d. Without loss of generality, we can assume b_0 = 0.

If we set z = e^{j\theta}, where j = \sqrt{-1}, then z belongs to the unit circle \mathbb{T} and it is easy to see that

    p(z) = \frac{1}{2} \left( \sum_{k=0}^{d} (a_k - j b_k) z^{k} + \sum_{k=0}^{d} (a_k + j b_k) z^{-k} \right).

If b_k = 0, k = 0, 1, \dots, d, we call p a cosine polynomial. Let \omega_z denote the argument of the complex number z, with \omega_z \in (-\pi, \pi]. For each pair (u, v) \in \mathbb{T}^2 with 0 \leq \omega_v - \omega_u < 2\pi, we denote the arc from u to v on \mathbb{T} by

    \mathbb{T}_{uv} = \{ z \in \mathbb{T} : \omega_u \leq \omega_z \leq \omega_v \}.

In the case 0 \leq \omega_u < \pi, we write \mathbb{T}_u for the arc \mathbb{T}_{\bar{u}u} = \{ z \in \mathbb{T} : |\omega_z| \leq \omega_u \} and \overline{\mathbb{T}}_u for the complementary arc \mathbb{T}_{u\bar{u}} = \{ z \in \mathbb{T} : \omega_u \leq |\omega_z| \leq \pi \}.

As a particular case of the previous subsection, we take S = \{1, z, \dots, z^d\}, \varphi(z) = 1 and \Gamma one of the subsets \mathbb{T} or \mathbb{T}_{uv}. Then \psi(z) can be chosen as \psi(z) = [1, z, \dots, z^d]^T, and a minimal basis for S^2 corresponds to the vector function v(z) = [1, z, \dots, z^{2d}]^T. This implies r = d + 1 and s = 2d + 1. Additionally, E = \mathbb{C}^{d+1} and F = \mathbb{H}^{d+1}, the set of all Hermitian matrices of order d + 1. The cones of nonnegative trigonometric polynomials on \mathbb{T} and \mathbb{T}_{uv}, respectively, are defined as

    K_{\mathbb{T}} = \{ p(z) : p(z) \geq 0, \ \forall z \in \mathbb{T} \}, \qquad K_{\mathbb{T}}^{uv} = \{ p(z) : p(z) \geq 0, \ \forall z \in \mathbb{T}_{uv} \}.

The linear operator corresponding to \Gamma = \mathbb{T} is

    T : \mathbb{C}^{d+1} \to \mathbb{H}^{d+1}, \quad T(S)_{ij} = S_{i-j}, \ 0 \leq i, j \leq d, \ S_{-k} = \overline{S_k}.

For the case \mathbb{T}_{uv}, the linear operator is the sum of two linear operators T_1 : \mathbb{C}^{d+1} \to \mathbb{H}^{d+1} and T_2 : \mathbb{C}^{d+1} \to \mathbb{H}^{d}, defined as follows [11, section 6.2.2]:

    T_1(S)_{ij} = S_{i-j}, \quad 0 \leq i, j \leq d, \quad S_{-k} = \overline{S_k},
    T_2(S)_{ij} = e^{j\theta_{uv}} S_{i-j+1} + e^{-j\theta_{uv}} S_{i-j-1} - 2 S_{i-j} \cos\omega_{uv}, \quad 0 \leq i, j \leq d-1, \quad S_{-k} = \overline{S_k},

where \theta_{uv} = \frac{\omega_v + \omega_u}{2} and \omega_{uv} = \frac{\omega_v - \omega_u}{2}. Notice that \omega_{\bar{u}u} = \omega_u. If 0 \leq \omega_u < \pi, we write K_{\mathbb{T}}^{u} and K_{\mathbb{T}}^{\bar{u}} instead of K_{\mathbb{T}}^{\bar{u}u} and K_{\mathbb{T}}^{u\bar{u}}, respectively.

2.3 Nonnegative trigonometric polynomials and Hermitian matrices

We start this subsection by introducing some notation.

Let T_k^{(n)} denote the Toeplitz matrix of order n whose first column is two times the k-th unit vector, except for k = 1 where just the first unit vector is taken without the factor two, and whose first row is otherwise zero (so that T_1^{(n)} is the identity matrix and, for k \geq 2, T_k^{(n)} has twos on its (k-1)-st subdiagonal and zeros elsewhere). Let S^n, S^n_+ and H^n_+ be the sets of all real symmetric, real symmetric positive semidefinite and Hermitian positive semidefinite matrices of order n, respectively. The next three propositions give the relations between the set of all nonnegative trigonometric polynomials on either the unit circle or some of its arcs (symmetric about the horizontal axis) and the set of all Hermitian positive semidefinite matrices.

Proposition 1. The trigonometric polynomial (3) of degree d is nonnegative on the unit circle if and only if there exists X \in H^{d+1}_+ such that

    a_k + j b_k = \mathrm{Trace}(T_{k+1}^{(d+1)} X), \quad k = 0, 1, \dots, d.    (4)

Proof. In [11, (2.29)] it is proven that the polynomial (3) is nonnegative on the unit circle if and only if there exists X \in H^{d+1}_+ such that

    a_k + j b_k = \begin{cases} \sum_{i-j=0} X_{ij} & \text{if } k = 0, \\ 2 \sum_{i-j=k} X_{ij} & \text{if } k = 1, \dots, d. \end{cases}

It is easy to check that this is equivalent to (4).

Proposition 2. ([11, Theorem 6.12]) Given two numbers u, v on the unit circle such that 0 \leq \omega_v - \omega_u < 2\pi, the polynomial (3) of degree d belongs to the cone K_{\mathbb{T}}^{uv} if and only if there exist two trigonometric polynomials p_1(z), p_2(z) of degrees d and d-1, respectively, nonnegative on the unit circle, such that

    p(z) = p_1(z) + \left( e^{-j\theta_{uv}} z + e^{j\theta_{uv}} z^{-1} - 2\cos\omega_{uv} \right) p_2(z).

Applying Proposition 2 to \mathbb{T}_u (0 \leq \omega_u < \pi) and its complement \overline{\mathbb{T}}_u, respectively, we have

    p(z) \in K_{\mathbb{T}}^{u} \iff \exists\, p_1(z), p_2(z) \in K_{\mathbb{T}}, \ \deg(p_1) = d, \ \deg(p_2) = d-1 : \ p(z) = p_1(z) + (z + z^{-1} - 2\cos\omega_u)\, p_2(z),    (5)

    p(z) \in K_{\mathbb{T}}^{\bar{u}} \iff \exists\, p_1(z), p_2(z) \in K_{\mathbb{T}}, \ \deg(p_1) = d, \ \deg(p_2) = d-1 : \ p(z) = p_1(z) - (z + z^{-1} - 2\cos\omega_u)\, p_2(z).    (6)

If the matrix X \in H^{d+1} and the polynomial p(z) satisfy the equalities in (4), we call them corresponding or associated. Similarly, if the polynomial p(z) has the sum form of the right-hand side of either (5) or (6), where p_1(z), p_2(z) are not necessarily in K_{\mathbb{T}}, we call the matrices X_1, X_2 corresponding to p_1(z), p_2(z), respectively, the corresponding or associated matrices of p(z).
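As a quick numerical illustration of Proposition 1 (a sketch added in this revision, not part of the original report; the degree and the random matrix are arbitrary choices), the following MATLAB fragment builds the matrices T_{k+1}^{(d+1)}, extracts the coefficients (4) from a random real symmetric positive semidefinite X, and checks that the resulting cosine polynomial is nonnegative on the unit circle:

    % Illustration of Proposition 1 (sketch): a random real PSD matrix X
    % yields, via (4), a cosine polynomial that is nonnegative on T.
    d = 5; n = d + 1;
    B = randn(n); X = B*B';                 % random real symmetric PSD matrix
    a = zeros(d+1,1);
    for k = 0:d
        if k == 0
            Tk = eye(n);                    % T_1^{(n)} is the identity
        else
            c = zeros(n,1); c(k+1) = 2;     % first column = 2*e_{k+1}
            Tk = toeplitz(c, zeros(1,n));   % first row = 0
        end
        a(k+1) = trace(Tk*X);               % a_k from (4); b_k = 0 since X is real
    end
    theta = linspace(-pi, pi, 2001);
    p = zeros(size(theta));
    for k = 0:d
        p = p + a(k+1)*cos(k*theta);        % the cosine polynomial (3)
    end
    min(p)                                  % nonnegative (up to rounding)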

The corresponding matrices of a nonnegative trigonometric (resp. cosine) polynomial are Hermitian (resp. real symmetric) [2]. The next proposition is a consequence of the two former ones.

Proposition 3. If the trigonometric polynomial p(z) of degree d is nonnegative on \mathbb{T}_u (or \overline{\mathbb{T}}_u), then there exist two Hermitian positive semidefinite matrices X_1 \in H^{d+1}_+, X_2 \in H^{d}_+ such that the coefficients of p(z) depend linearly on the entries of X_1 and X_2.

Proof. Let p = [p_0^0, p_1^0, \dots, p_d^0]^T, p_1 = [p_0^1, \dots, p_d^1]^T and p_2 = [p_0^2, \dots, p_{d-1}^2]^T denote the column vectors of coefficients of the trigonometric polynomials p, p_1 and p_2, respectively. Then (5) is equivalent to

    \begin{bmatrix} p_0^0 \\ p_1^0 \\ p_2^0 \\ \vdots \\ p_{d-2}^0 \\ p_{d-1}^0 \\ p_d^0 \end{bmatrix}
    =
    \underbrace{\begin{bmatrix}
    p_0^1 & p_0^2 & 0 & p_1^2 \\
    p_1^1 & p_1^2 & 2p_0^2 & p_2^2 \\
    p_2^1 & p_2^2 & p_1^2 & p_3^2 \\
    \vdots & \vdots & \vdots & \vdots \\
    p_{d-2}^1 & p_{d-2}^2 & p_{d-3}^2 & p_{d-1}^2 \\
    p_{d-1}^1 & p_{d-1}^2 & p_{d-2}^2 & 0 \\
    p_d^1 & 0 & p_{d-1}^2 & 0
    \end{bmatrix}}_{=: M}
    \begin{bmatrix} 1 \\ -2\cos\omega_u \\ 1 \\ 1 \end{bmatrix}.    (7)

A similar formulation for (6) is

    p = M \, [\, 1 \quad 2\cos\omega_u \quad -1 \quad -1 \,]^T.    (8)

Choose two matrices X_1, X_2 corresponding to p_1(z), p_2(z), respectively, as in Proposition 1; substituting them into the above equations, we are done.
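The relation (7) can be checked numerically; the following MATLAB fragment (a sketch with arbitrary random data, added in this revision) builds M from the coefficient vectors and compares M [1, -2 cos ω_u, 1, 1]^T with a direct evaluation of p_1(z) + (z + z^{-1} - 2 cos ω_u) p_2(z) on a grid of the unit circle:

    % Numerical check of (7) (sketch; random cosine-polynomial coefficients).
    d  = 6; wu = 0.4;
    p1 = randn(d+1,1); p2 = randn(d,1);
    M  = [p1, [p2; 0], [0; 2*p2(1); p2(2:d)], [p2(2:d); 0; 0]];
    p0 = M*[1; -2*cos(wu); 1; 1];            % coefficient vector of p
    theta = linspace(-pi, pi, 1001)';
    lhs = cos(theta*(0:d))*p1 + (2*cos(theta) - 2*cos(wu)).*(cos(theta*(0:d-1))*p2);
    rhs = cos(theta*(0:d))*p0;
    max(abs(lhs - rhs))                      % of the order of machine precision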

In the next section, we will show that in our approach it is sufficient to deal with cosine polynomials only, so that their associated matrices are real symmetric. As a corollary of Proposition 3, we have the following proposition, in which the polynomials are cosine polynomials and the matrices are real symmetric. It is used to prove Propositions 5 and 6 in the next section.

Proposition 4. Let p_1(z), p_2(z) and p_3(z) be cosine polynomials of degree d on \mathbb{T}, \mathbb{T}_u and \overline{\mathbb{T}}_u, respectively, and let X_1, (X_{21}, X_{22}) and (X_{31}, X_{32}) be their corresponding matrices. For each s \in \mathbb{R}, let \tilde{p}_1(z), \tilde{p}_2(z) and \tilde{p}_3(z) denote the polynomials corresponding to X_1 + sI, (X_{21} + sI, X_{22} + sI) and (X_{31} + sI, X_{32} + sI), respectively. Then

    \tilde{p}_1(z) = p_1(z) + (d+1)s, \quad z \in \mathbb{T},
    \tilde{p}_2(z) = p_2(z) + (d+1)s + 2ds(\cos\omega_z - \cos\omega_u), \quad z = e^{j\omega_z} \in \mathbb{T}_u,
    \tilde{p}_3(z) = p_3(z) + (d+1)s + 2ds(\cos\omega_u - \cos\omega_z), \quad z = e^{j\omega_z} \in \overline{\mathbb{T}}_u.

3 Filter design problem

A lowpass filter design problem can be stated as follows. Consider a class of discrete-time systems whose input signal y and output signal w satisfy a linear constant-coefficient difference equation of the form

    \sum_{k=0}^{d} a_k w[t-k] = \sum_{k=0}^{d} b_k y[t-k],

with real coefficients \{a_k\}_{k=0}^{d}, \{b_k\}_{k=0}^{d}, and where y[i], w[i] denote the input and output signal, respectively, at discrete time i. The corresponding frequency response is the function

    H(e^{j\theta}) = \frac{\sum_{k=0}^{d} a_k e^{-jk\theta}}{\sum_{k=0}^{d} b_k e^{-jk\theta}}.

In practice, linear time-invariant discrete-time systems are often used to implement frequency-selective filters. We focus on discrete-time infinite impulse response (IIR) lowpass filters, for which the passband is centered around zero. A similar reasoning can be followed for other locations of the passband. Figure 1 illustrates the corresponding specifications; the mathematical formulation is as follows:

    |H(e^{j\theta})| \geq l_1, \quad \theta \in [0, \omega_a],
    |H(e^{j\theta})| \leq u_1, \quad \theta \in [0, \omega_b],
    |H(e^{j\theta})| \leq \delta, \quad \theta \in [\omega_b, \pi],

where a, b \in \mathbb{T}, 0 < \omega_a < \omega_b < \pi, 0 < l_1 < 1, u_1 > 1 and \delta > 0. Here, because |H(e^{j\theta})| is 2\pi-periodic and |H(e^{-j\theta})| = |H(e^{j\theta})|, it is sufficient to study the design problem on the interval [0, \pi] instead of (-\pi, \pi].

Figure 1: Specification for discrete-time IIR lowpass filter

On the other hand, the above inequalities are best written using the squared magnitude of the filter frequency response, R(\theta) = |H(e^{j\theta})|^2 [13]. We thus have the new inequalities

    R(\theta) \geq l_1^2, \quad \theta \in [0, \omega_a],
    R(\theta) \leq u_1^2, \quad \theta \in [0, \omega_b],
    R(\theta) \leq \delta^2, \quad \theta \in [\omega_b, \pi],
    R(\theta) \geq 0, \quad \theta \in [0, \pi].

To complete the formulation of the optimization problem, an objective function must be added to the above constraints. It depends on the specific problem one wants to deal with. Three standard choices are:

1. minimize the passband ripple, given a stopband attenuation: minimize (u_1 - l_1), with d and \delta fixed;
2. minimize the stopband attenuation: minimize \delta, with d, l_1 and u_1 fixed;
3. minimize the degree d of the filter: minimize d, with \delta, l_1 and u_1 fixed.

A straightforward approximation of the semi-infinite inequality constraints can be obtained by sampling them at N \approx 15d [13, 3] sampling frequencies 0 \leq \omega_1 \leq \dots \leq \omega_N \leq \pi. Instead, we keep the original inequality constraints.
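For illustration only (the algorithm below does not use sampling), this is how the squared-magnitude specifications would be checked on such a grid; a sketch added in this revision, with arbitrary example coefficients that are not data from the report:

    % Evaluating R(theta) = |H(e^{j*theta})|^2 on a sampling grid (sketch).
    a = [1 0.5 0.25]; b = [1 -0.3 0.1];      % example numerator/denominator coefficients
    d = numel(a) - 1;
    theta = linspace(0, pi, 512);
    E = exp(-1j*(0:d)'*theta);               % row k holds e^{-j*k*theta}
    R = abs(a*E).^2 ./ abs(b*E).^2;          % squared magnitude response
    % the specifications then read R >= l1^2 on [0,omega_a],
    % R <= u1^2 on [0,omega_b] and R <= delta^2 on [omega_b,pi].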

In our work, we deal with the problem of minimizing the passband ripple and the stopband attenuation simultaneously. Note that other combinations are also possible. To do this, we couple the values l_1 and u_1 to the value \delta as follows:

    l_1^2 = (1 - \delta)^2, \qquad u_1^2 = (1 + \delta)^2.

Notice that the passband ripple and the stopband attenuation are then 2\delta and \delta, respectively. We then consider the problem of minimizing \delta subject to

    R(\theta) \geq (1 - \delta)^2, \quad \theta \in [0, \omega_a],
    R(\theta) \leq (1 + \delta)^2, \quad \theta \in [0, \omega_b],
    R(\theta) \leq \delta^2, \quad \theta \in [\omega_b, \pi],
    R(\theta) \geq 0, \quad \theta \in [0, \pi].    (9)

We see that R(\theta) = \frac{|q_1(e^{j\theta})|^2}{|q_2(e^{j\theta})|^2}, where q_1(z), q_2(z) are two complex polynomials in the variable z \in \mathbb{C}. So the feasibility problem is to find two polynomials q_1(z), q_2(z) satisfying (9). Additionally, it follows from the Fejér-Riesz theorem [11, Theorem 2.15] that this is equivalent to the problem of finding trigonometric polynomials p_1(z) and p_2(z) which are nonnegative on the unit circle \mathbb{T} and satisfy p_i(z) = |q_i(z)|^2, i = 1, 2. The feasibility problem corresponding to our rational lowpass filter design problem can now be reformulated as follows. Find trigonometric polynomials p_1(z), p_2(z) with real function values such that the following design constraints are satisfied, with z = e^{j\omega_z}, \omega_z \in (-\pi, \pi]:

    \frac{p_1(z)}{p_2(z)} \geq (1 - \delta)^2, \quad \omega_z \in [0, \omega_a],
    \frac{p_1(z)}{p_2(z)} \leq (1 + \delta)^2, \quad \omega_z \in [0, \omega_b],
    \frac{p_1(z)}{p_2(z)} \leq \delta^2, \quad \omega_z \in [\omega_b, \pi],
    \frac{p_1(z)}{p_2(z)} \geq 0, \quad \omega_z \in [0, \pi].    (10)

The above inequalities are transformed into the following ones, which are more appropriate for our purpose later:

    q_1(z) = p_1(z) \geq 0, \quad z \in \mathbb{T},
    q_2(z) = p_1(z) - l_1^2 p_2(z) \geq 0, \quad z \in \mathbb{T}_a,
    q_3(z) = u_1^2 p_2(z) - p_1(z) \geq 0, \quad z \in \mathbb{T}_b,
    q_4(z) = u_2^2 p_2(z) - p_1(z) \geq 0, \quad z \in \overline{\mathbb{T}}_b,    (11)

where l_1^2 = (1 - \delta)^2, u_1^2 = (1 + \delta)^2 and u_2^2 = \delta^2. It is clear that if two trigonometric polynomials p_1(z), p_2(z) satisfy (11), then (10) also holds.

Our optimization problem thus becomes

    minimize \delta subject to condition (11).    (12)

Additionally, as was already shown in [3], the relation among the polynomials in (11) can be represented in terms of their coefficient vectors as

    q := \begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix}
       = \begin{bmatrix} I & 0 \\ I & -l_1^2 I \\ -I & u_1^2 I \\ -I & u_2^2 I \end{bmatrix}
         \begin{bmatrix} p_1 \\ p_2 \end{bmatrix}.    (13)

Then, (11) is equivalent to

    q \in K := K_{\mathbb{T}} \times K_{\mathbb{T}}^{a} \times K_{\mathbb{T}}^{b} \times K_{\mathbb{T}}^{\bar{b}}.    (14)

In [3], problem (12) is solved using a bisection rule on \delta. For each \delta > 0, the feasibility problem becomes: find two polynomials p_1, p_2 satisfying (13) and (14). Even though we have formulated several results in a general form, i.e., using Hermitian matrices and corresponding trigonometric polynomials, the experiments discussed in the last subsections only need a formulation in terms of real symmetric matrices and corresponding cosine polynomials. In the next subsections, we reformulate the feasibility problem of the filter design problem (12) as a convex optimization problem and then propose an algorithm to find the minimal value of \delta, i.e., to solve (12).

3.1 The feasibility problem is a convex optimization problem

We now return to the feasibility problem of finding two trigonometric polynomials p_1(z), p_2(z) of degree d satisfying the conditions (13) and (14). Notice that this problem corresponds to a given value \delta \in (0, 1). Let Y_1 \in H^{d+1} be the matrix corresponding to q_1 as in Proposition 1, and let Y_{i1} \in H^{d+1}, Y_{i2} \in H^{d}, i = 2, 3, 4, be corresponding to q_i as in Proposition 3. As we have discussed above, it is sufficient to deal with real symmetric positive semidefinite matrices.
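In code, the linear map (13) is just a block matrix acting on the stacked coefficient vectors; the following MATLAB fragment (a sketch added in this revision; it assumes d, delta, p1 and p2 are already defined as workspace data) builds it explicitly:

    % The coefficient-vector relation (13) (sketch).
    l1 = 1 - delta; u1 = 1 + delta; u2 = delta;
    I  = eye(d+1); Z = zeros(d+1);
    A  = [ I,  Z;
           I, -l1^2*I;
          -I,  u1^2*I;
          -I,  u2^2*I ];
    q  = A*[p1; p2];     % stacked coefficient vectors of q_1, q_2, q_3, q_4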

Hence, this problem can be stated as follows: find

    \hat{x} = (Y_1, Y_{21}, Y_{22}, Y_{31}, Y_{32}, Y_{41}, Y_{42}, p_1, p_2) \in S^{d+1} \times \underbrace{(S^{d+1} \times S^{d}) \times \cdots \times (S^{d+1} \times S^{d})}_{3} \times \mathbb{R}^{d+1} \times \mathbb{R}^{d+1}

such that the cosine polynomials corresponding to the above matrices satisfy (13) and (14). Notice that the polynomials p_1, p_2 can be derived from \hat{x}. One can see in [10, section 7.1] that the Hilbert space of all symmetric matrices of order n endowed with the trace inner product is in bijection with \mathbb{R}^{n(n+1)/2} equipped with the inner product defined as follows. Suppose that \tilde{x} \in \mathbb{R}^{n(n+1)/2} is indexed as \tilde{x} = [x_{11}, \dots, x_{1n}, x_{22}, \dots, x_{2n}, \dots, x_{nn}]^T. Then, for any \tilde{x}, \tilde{y} \in \mathbb{R}^{n(n+1)/2}, define

    \langle \tilde{x}, \tilde{y} \rangle = \tilde{x}^T D \tilde{y},

where D = \mathrm{diag}(d_{11}, \dots, d_{1n}, \dots, d_{nn}) is the diagonal matrix with d_{ii} = 1 and d_{ij} = 2 for 1 \leq i < j \leq n. The bijection is X = (x_{ij}) \mapsto \tilde{x} = [x_{11}, \dots, x_{1n}, \dots, x_{nn}]^T. Hence, the list \hat{x} can also be interpreted as a variable in \mathbb{R}^{\mu},

    \mu = 4\,\frac{(d+1)(d+2)}{2} + 3\,\frac{d(d+1)}{2} + 2(d+1),

with the appropriate inner product. Now, keeping the notation of the matrices Y_1, Y_{i1}, Y_{i2} and their images under the bijection mentioned above, let S denote the set of all vectors of the form

    x = [s, \tilde{y}_1, \tilde{y}_{21}, \tilde{y}_{22}, \tilde{y}_{31}, \tilde{y}_{32}, \tilde{y}_{41}, \tilde{y}_{42}, p_1, p_2]^T \in \mathbb{R}^{1+\mu}    (15)

satisfying Y_1 + sI \in S^{d+1}_+, Y_{i1} + sI \in S^{d+1}_+ and Y_{i2} + sI \in S^{d}_+, i = 2, 3, 4. One can see that the set S is a convex cone. Moreover, it follows from Proposition 3 that (13) can be represented as a homogeneous system of linear equations in the variable x \in \mathbb{R}^{1+\mu}. Because of these facts, the following problem is a conic programming problem. To solve the feasibility problem for a given \delta \in (0, 1), one considers the convex optimization problem

    minimize s subject to condition (13) and x \in S.    (16)

We will see from the following proposition that if the above problem is feasible, its minimal value must be nonpositive.
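To give an idea of how (16) looks in CVX, the following fragment (a sketch added in this revision; it assumes CVX is installed and keeps only the block coupling q_1 = p_1 with Y_1 through (4), the remaining blocks Y_21, ..., Y_42 and the arc couplings of Proposition 3 being analogous) minimizes the shift s for given cosine coefficients:

    % Reduced instance of problem (16) in CVX (sketch): only the circle block.
    d = 5;
    a = [3; zeros(d-1,1); 1];                % example coefficients of p_1 (here p_1 >= 2 on T)
    cvx_begin sdp quiet
        variable s
        variable Y1(d+1,d+1) symmetric
        minimize( s )
        Y1 + s*eye(d+1) >= 0;                % shifted positive semidefiniteness, cf. (15)
        for k = 0:d
            if k == 0, Tk = eye(d+1);
            else, c = zeros(d+1,1); c(k+1) = 2; Tk = toeplitz(c, zeros(1,d+1));
            end
            trace(Tk*Y1) == a(k+1);          % coupling (4) between Y_1 and p_1
        end
    cvx_end
    % an optimal value s <= 0 certifies that p_1 is nonnegative on the unit circle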

Proposition 5. If problem (16) is feasible, then there always exists a feasible point such that s \leq 0.

Proof. Suppose x = [r, \tilde{y}_1, \tilde{y}_{21}, \tilde{y}_{22}, \tilde{y}_{31}, \tilde{y}_{32}, \tilde{y}_{41}, \tilde{y}_{42}, p_1, p_2]^T, with r > 0, is a feasible point of problem (16) corresponding to the matrices Y_1, (Y_{i1}, Y_{i2}), i = 2, 3, 4, and to the cosine polynomials q_1 = p_1 and q_i, i = 2, 3, 4. Note that the matrices Y_1, Y_{i1}, Y_{i2}, i = 2, 3, 4, are not necessarily positive semidefinite, but the matrices Y_1 + rI, (Y_{i1} + rI, Y_{i2} + rI), i = 2, 3, 4, are. Let \tilde{q}_1 = \tilde{p}_1 and \tilde{q}_i, i = 2, 3, 4, be the corresponding polynomials of the latter matrices, respectively. Since r > 0, it follows from Proposition 4 that \tilde{q}_1(z) \geq q_1(z) for all z \in \mathbb{T}. A similar reasoning is valid for the cases (Y_{i1} + rI, Y_{i2} + rI), i = 2, 3, 4. This means that the polynomials p_1, p_2 and \tilde{q}_i, i = 1, \dots, 4, satisfy (14). Finally, we determine a number s \leq 0 such that [s, \dots, p_1, p_2]^T is also a feasible point of problem (16). This number can be chosen as the negative of the minimum of all eigenvalues of the matrices Y_1, (Y_{i1} + rI, Y_{i2} + rI), i = 2, 3, 4.

The remaining problem is to compute the minimal value of \delta \in (0, 1).

3.2 Minimizing the parameter \delta

The algorithm in this subsection is a modification of the one proposed in [3]. In [3], the algorithm is implemented using the LMI Control Toolbox of MATLAB, and it breaks down as soon as d > 7. In our work, it is implemented with the cvx Toolbox [9, 7, 8], and the problem has been solved for filter degrees up to 70 and more. The following algorithm uses a bisection rule to minimize the parameter \delta \in (0, 1). Let s(\delta) denote the optimal value of problem (16) corresponding to \delta.

Algorithm 1.
Input: d, 0 < \omega_a < \omega_b < \pi, \delta_0 \in (0, 1) such that s(\delta_0) < 0, and a precision \epsilon > 0.
Output: a minimal value \delta_{\min} \in (0, \delta_0] and two cosine polynomials p_1, p_2 which are nonnegative on the unit circle and satisfy (12).
Initialization: set \delta^0_{low} = 0 and \delta^0_{up} = \delta_0.
While (\delta^k_{up} - \delta^k_{low}) / \delta^k_{up} > \epsilon do (at iteration k \geq 0):
1. Set \delta^{k+1} = (\delta^k_{up} + \delta^k_{low}) / 2 and solve the convex optimization problem (16) for d, \omega_a, \omega_b, \delta^{k+1}; obtain the optimal value s^{k+1} = s(\delta^{k+1}).
2. If s^{k+1} > 0, set \delta^{k+1}_{low} = \delta^{k+1} and \delta^{k+1}_{up} = \delta^k_{up}; else set \delta^{k+1}_{up} = \delta^{k+1} and \delta^{k+1}_{low} = \delta^k_{low}.
3. Go to iteration k + 1.
The value \delta_{\min} is the first \delta^k for which (\delta^k_{up} - \delta^k_{low}) / \delta^k_{up} \leq \epsilon and s^k \leq 0.

The algorithm was implemented using the cvx Toolbox [9].
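A minimal MATLAB sketch of this bisection loop is given below (added in this revision; s_of_delta is a hypothetical helper that solves problem (16) for the given d, omega_a, omega_b and returns its optimal value — it is not a function provided by the report or by CVX):

    % Bisection on delta as in Algorithm 1 (sketch).
    delta_low = 0; delta_up = delta0;        % delta0 chosen such that s(delta0) < 0
    epsilon = 1e-3;
    while (delta_up - delta_low)/delta_up > epsilon
        delta = (delta_up + delta_low)/2;
        s = s_of_delta(delta);               % step 1: solve problem (16)
        if s > 0                             % step 2: infeasible, raise the lower bound
            delta_low = delta;
        else                                 % feasible, lower the upper bound
            delta_up = delta;
        end
    end
    delta_min = delta_up;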

Figure 2: Amplitude corresponding to d = 26, \omega_a = 2.225, \omega_b = 2.275

Example 1. We have used the above algorithm for filter degrees up to 70. In this example, however, we present the filter design problem with filter degree d = 26, \omega_a = 0.225, \omega_b = 0.275 and \delta_0 = 0.25. We choose the precision \epsilon = 10^{-3}. The algorithm stops at iteration k = 13 and we get \delta_{\min} = \delta^{13} \approx 52 \cdot 10^{-4}, s^{13} \approx -96 \cdot 10^{-13}. Figure 2 illustrates the amplitude of H(e^{j\theta}) defined by the two cosine polynomials p_1(z) and p_2(z) corresponding to \delta_{\min}.

Algorithm 1 is based on a bisection rule on the parameter \delta. It turned out that in several experiments the numerically obtained minimal value \delta_{\min} is much smaller than the theoretical minimal value \delta of problem (12). To get insight into this numerical problem, we first state the following proposition.

Proposition 6. Given two numbers 0 < \delta_2 < \delta_1 < 1, suppose that solving the convex optimization problem (16) gives the two corresponding optimal values s_i = s(\delta_i), i = 1, 2. Then s_1 \leq s_2.

Proof. Suppose that the conclusion is false, i.e., s_1 > s_2. We will show that there exists a vector [s_2, \dots, p_1, p_2]^T that is a feasible point of problem (16) for \delta_1, which contradicts the fact that s_1 is minimal. We first note that if s_i, i = 1, 2, exists, then it is nonpositive because of Proposition 5. Now, write \delta_2 = \delta_1 - \varepsilon for some \varepsilon > 0. Let P_1 = Q_1, P_2 and Q_i, i = 2, 3, 4, be the cosine polynomials satisfying (11) such that the corresponding vector [s_2, \dots, P_1, P_2]^T solves the feasibility problem (16) for \delta_2. Then let

    q_1(z) := P_1(z),
    q_2(z) := Q_2(z) + [2\varepsilon(1 - \delta_1) + \varepsilon^2] P_2(z) = P_1(z) - (1 - \delta_1)^2 P_2(z),
    q_3(z) := Q_3(z) + [2\varepsilon(1 + \delta_1) - \varepsilon^2] P_2(z) = (1 + \delta_1)^2 P_2(z) - P_1(z),
    q_4(z) := Q_4(z) + (2\varepsilon\delta_1 - \varepsilon^2) P_2(z) = \delta_1^2 P_2(z) - P_1(z).

Since P_1 = Q_1, P_2 and Q_i, i = 2, 3, 4, satisfy (11) for \delta_2, the polynomials q_1 = p_1 := P_1, p_2 := P_2 and q_i, i = 2, 3, 4, satisfy (11) for \delta_1. Let Y_1, (Y_{i1}, Y_{i2}), i = 2, 3, 4, be the real positive semidefinite matrices corresponding to p_1 = q_1 and q_i, i = 2, 3, 4, as in Propositions 1 and 3. Let \tilde{q}_1(z) = \tilde{p}_1(z) be the trigonometric polynomial associated with the matrix Y_1 - s_2 I, and let \tilde{q}_i, i = 2, 3, 4, be the polynomials corresponding to Y_{i1} - s_2 I, Y_{i2} - s_2 I. Note that the latter matrices are also positive semidefinite because s_2 \leq 0. As in the proof of Proposition 5, the polynomials p_1, p_2 and \tilde{q}_i, i = 1, \dots, 4, satisfy (14). This means that the vector [s_2, \dots, p_1, p_2]^T is a feasible point of problem (16) corresponding to \delta_1, and the proof is complete.

Proposition 6 shows that the optimal value of problem (16) can be viewed as a nonincreasing function s = s(\delta) of the variable \delta \in (0, 1). Moreover, it is nonpositive for \delta large, i.e., for \delta close to 1. For example, numerically it turns out that s(0.95) < 0 for d = 2, \dots, 70 with \omega_a = 0.225, \omega_b = 0.275. The bisection rule needs the sign of s(\delta). However, we have seen in our numerical experiments that when the parameter \delta becomes smaller, the sign of s as computed by solving the feasibility problem (16) is no longer determined accurately. An illustration is presented in Figure 3: the green line denotes the graph of the function s = s(\delta) in theory, and the red points denote the points (\delta, s(\delta)) determined by our numerical experiment. To get around this problem in Algorithm 1, we determine a margin for the parameter s. This is a value slightly smaller than the minimum of all values of s corresponding to values of \delta in the unreliable interval. This margin depends on the filter degree, the passband and the stopband. For all cases with filter degree less than 70 and \omega_a = 0.225, \omega_b = 0.275, one can determine this margin numerically as s_{margin} = -10^{-9}. Using this margin, we modify Algorithm 1 by replacing the condition s^{k+1} > 0 with s^{k+1} > s_{margin}. We now resume the problem considered in Example 1.
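In the bisection sketch given earlier, this modification only changes the sign test (again a sketch; the margin value is the one reported above):

    % Safety margin in the bisection test (sketch).
    s_margin = -1e-9;        % value determined numerically in the report for degrees up to 70
    if s > s_margin, delta_low = delta; else, delta_up = delta; end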

Figure 3: Unreliable interval of \delta and the margin for the parameter s

Example 2. The modified algorithm stops at iteration k = 16, but now \delta^{16} \approx 75 \cdot 10^{-4}, s^{16} \approx -105 \cdot 10^{-11}. The value \delta^{16} in this case is more reliable.

4 Dependence of \delta on the parameters of the filter design problem

In this section, we discuss the dependence of the minimal value \delta of the filter design problem on the filter degree d, the difference aux = \omega_b - \omega_a and the passband edge \omega_a. First, we define the delta function in three variables as follows. Let \Delta : \mathbb{N} \times (0, \pi) \times (0, \pi) \to (0, 1) be the function defined by our filter design problem; namely, for each (d, aux, \omega_a), solving problem (12) gives (a numerical approximation of) the minimal value \delta, and we set \Delta(d, aux, \omega_a) = \delta. We will see in the following examples that the function \Delta is monotonically nonincreasing in each of the variables d and aux. In the remaining variable, it is neither increasing nor decreasing. In the following experiments, we take the precision \epsilon = 10^{-3}.

For the first experiment, take \omega_a = 0.225, \omega_b = 0.275 and let d = 2, \dots, 70. Solving problem (12) with the modified algorithm, we get a minimal value \delta(d) := \Delta(d, 0.05, \omega_a). Figure 4 shows the nonincreasing behavior of this function; the top part of the figure shows the amplitude for filter degree 70.

Figure 4: The delta function in the variable d
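The first experiment can be summarized by the following loop (a sketch added in this revision; delta_min_for is a hypothetical helper wrapping the modified Algorithm 1, not a function provided by the report):

    % Sweep of the filter degree d as in the first experiment (sketch).
    omega_a = 0.225; omega_b = 0.275;
    dvals = 2:70;
    dmin  = zeros(size(dvals));
    for i = 1:numel(dvals)
        dmin(i) = delta_min_for(dvals(i), omega_a, omega_b);
    end
    plot(dvals, dmin)                        % nonincreasing, as in Figure 4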

For the second experiment, d = 26 is fixed and we take aux_1 = 0.005, aux_{i+1} = aux_i + 0.002. For each i, we choose \omega_a = 0.225 and \omega_b = \omega_a + aux_i. The process stops at the first i such that \omega_b > 0.275, that is, for aux_i \geq 0.05. Figure 5 presents the result; the first part of the figure presents the homotopy of the amplitude corresponding to each step i.

For the last experiment, we fix d = 26 and aux = 0.05. For each i = 0, 1, \dots, the experiment proceeds with \omega_a = 0.125 + 0.005i and \omega_b = \omega_a + aux. The results for 0.125 \leq \omega_a \leq 3.14 are shown in Figure 6, in which the first part of the figure indicates the homotopy of the amplitude corresponding to each step i.

5 Conclusion

We have shown that the feasibility problem of a filter design problem can be reformulated as a conic programming problem. We have also proposed an algorithm to solve the filter design problem minimizing the passband ripple and the stopband attenuation simultaneously. This is a modification of the algorithm in [3]. Our method is a combination of a bisection rule on the parameter that couples the passband ripple and the stopband attenuation, and a convex optimization scheme solving the feasibility problem. A numerical problem in the first version of the algorithm was identified and solved by introducing a safety margin in the bisection rule. Our algorithm is implemented using the cvx Toolbox, and its validity is shown by numerical experiments for values of the filter degree up to 70.

Figure 5: The delta function in the variable aux = \omega_b - \omega_a
Figure 6: The delta function in the variable \omega_a

References

[1] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[2] T. N. Davidson, Z.-Q. Luo, and J. F. Sturm. Linear matrix inequality formulation of spectral mask constraints with applications to FIR filter design. IEEE Transactions on Signal Processing, 50:2702-2715, 2002.

[3] Y. V. Genin, Y. Hachez, Yu. Nesterov, and P. Van Dooren. Convex optimization over positive polynomials and filter design. In Proceedings UKACC Int. Conf. Control 2000, 2000. Paper SS41.

[4] Y. V. Genin, Y. Hachez, Yu. Nesterov, and P. Van Dooren. Optimization problems over positive pseudo-polynomial matrices. SIAM Journal on Matrix Analysis and Applications, 25(1):57-79, 2003.

[5] Y. V. Genin, Yu. Nesterov, and P. Van Dooren. Positive transfer functions and convex optimization. In Proceedings European Control Conference 99, 1999. Paper F143.

[6] Y. V. Genin, Yu. Nesterov, and P. Van Dooren. Optimization over positive polynomial matrices. In Proceedings of the 14th Int. Symp. Math. Th. Netw. Syst., 2000. Paper IS03.

[7] M. Grant. Disciplined Convex Programming. PhD thesis, Stanford University, 2004.

[8] M. Grant and S. P. Boyd. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control, pages 95-110. Springer-Verlag Limited, 2008.

[9] M. Grant and S. P. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, April 2011.

[10] O. Güler. Barrier functions in interior point methods. Mathematics of Operations Research, 21(4):860-885, 1996.

[11] Y. Hachez. Convex optimization over non-negative polynomials: structured algorithms and applications. PhD thesis, Université Catholique de Louvain, May 2003.

[12] Yu. Nesterov. Squared functional systems and optimization problems. In High Performance Optimization, chapter 17, pages 405-440. Kluwer Academic Publishers, 2000.

[13] S.-P. Wu, S. P. Boyd, and L. Vandenberghe. FIR filter design via spectral factorization and convex optimization. In Applied and Computational Control, Signals and Circuits, pages 215-245. Birkhäuser, 1998.