
Robust Least Squares and Applications

Laurent El Ghaoui and Hervé Lebret
Ecole Nationale Supérieure de Techniques Avancées
32, Bd. Victor, 75739 Paris, France
(elghaoui, lebret)@ensta.fr

Abstract

We consider least-squares problems where the coefficient matrices $A, b$ are unknown-but-bounded. We minimize the worst-case residual error using (convex) second-order cone programming (SOCP), yielding an algorithm with complexity similar to one singular value decomposition of $A$. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when $A, b$ are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation.

Notation

For a matrix $X$, $\|X\|$ denotes the largest singular value and $\|X\|_F$ the Frobenius norm; $I$ denotes the identity matrix, with size inferred from context.

1 Introduction

In this paper, we consider the problem of finding a solution $x$ to an overdetermined set of equations $Ax \simeq b$, where the data matrices $A, b$ are not known exactly.

First, we assume that the given model is not a single pair $(A, b)$, with $A \in \mathbf{R}^{n \times m}$, $b \in \mathbf{R}^n$, but a family of matrices $(A + \Delta A, \; b + \Delta b)$, where $\Delta = [\Delta A \;\; \Delta b]$ is an unknown-but-bounded matrix, say $\|\Delta\|_F \le \rho$, where $\rho$ is given. For fixed $x$, we define the worst-case residual as

$$ r(A, b, \rho, x) = \max_{\|[\Delta A \;\; \Delta b]\|_F \le \rho} \|(A + \Delta A)x - (b + \Delta b)\|. \quad (1) $$

We say that $x$ is a Robust Least Squares (RLS) solution if $x$ minimizes the worst-case residual $r(A, b, \rho, x)$.

In many applications, the perturbation matrices $\Delta A$, $\Delta b$ have a known (e.g., Toeplitz) structure. In this case, the worst-case residual (1) might be a very conservative estimate, and we are led to consider the following Structured RLS (SRLS) problem. Given $A_0, \ldots, A_p \in \mathbf{R}^{n \times m}$ and $b_0, \ldots, b_p \in \mathbf{R}^n$, we define for every $\delta \in \mathbf{R}^p$

$$ A(\delta) = A_0 + \sum_{i=1}^p \delta_i A_i, \qquad b(\delta) = b_0 + \sum_{i=1}^p \delta_i b_i. \quad (2) $$

For $\rho \ge 0$ and $x \in \mathbf{R}^m$, we define the structured worst-case residual as

$$ r_S(A, b, \rho, x) = \max_{\|\delta\| \le \rho} \|A(\delta)x - b(\delta)\|. \quad (3) $$

We say that $x$ is a Structured Robust Least Squares (SRLS) solution if $x$ minimizes the worst-case residual $r_S(A, b, \rho, x)$.

Our main contribution is to show that we can compute the exact value of the optimal worst-case residuals using convex second-order cone programming (SOCP) or semidefinite programming (SDP). The consequence is that the RLS and SRLS problems can be solved in polynomial time, and with great practical efficiency, using e.g. recent interior-point methods [1, 2]. (Our exact results are to be contrasted with those of Doyle et al. [3], who also use SDP to compute upper bounds on the worst-case residual for identification problems.) We also show that the RLS solution is continuous in the data matrices $A, b$, which yields a (Tikhonov) regularization technique for ill-conditioned LS problems. Similar regularity results hold for the SRLS problem.

We also consider a generalization of the SRLS problem, referred to as the linear-fractional SRLS problem in the sequel, in which the matrix functions $A(\delta)$, $b(\delta)$ in (2) depend rationally on the parameter vector $\delta$. (We describe later a robust interpolation problem that falls in this class.) The problem is NP-complete in this case, but we may compute, and optimize, upper bounds on the worst-case residual using SDP.
As in the RLS case, we interpret our solution as that of a weighted LS problem for an augmented system, the weights being computed via SDP. A full version of this paper (with proofs and references) is to appear [4].
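To make definition (1) concrete: for the Frobenius-norm ball, the maximum in (1) admits the closed form $\|Ax - b\| + \rho\sqrt{\|x\|^2 + 1}$, which is the identity that Theorem 3.1 below exploits. The following sketch checks this numerically by sampling; the data, sizes and names are illustrative choices of ours (NumPy assumed), not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 6, 3, 0.5                      # illustrative sizes and perturbation level
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)
x = np.linalg.lstsq(A, b, rcond=None)[0]   # any fixed x; here the plain LS solution

# Closed form of the worst-case residual (1).
r_closed = np.linalg.norm(A @ x - b) + rho * np.sqrt(x @ x + 1.0)

# Monte-Carlo lower bound: sample perturbations on the boundary ||[dA db]||_F = rho.
r_mc = 0.0
for _ in range(20000):
    D = rng.standard_normal((n, m + 1))
    D *= rho / np.linalg.norm(D)
    r_mc = max(r_mc, np.linalg.norm((A + D[:, :m]) @ x - (b + D[:, m])))

print(r_closed, r_mc)                      # r_mc approaches r_closed from below
```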

2 Preliminaries

The minimization problem

$$ \text{minimize } c^T x \quad \text{subject to } \; F_0 + \sum_{i=1}^m x_i F_i \succeq 0, \quad (4) $$

where $c \in \mathbf{R}^m$ and the symmetric matrices $F_i = F_i^T \in \mathbf{R}^{N \times N}$, $i = 0, \ldots, m$, are given, is called a semidefinite program (SDP). SDPs are convex optimization problems and can be solved in polynomial time with e.g. primal-dual interior-point methods [1, 6].

A special case of SDP is a second-order cone programming problem (SOCP), which is one of the form

$$ \text{minimize } c^T x \quad \text{subject to } \; \|C_i x + d_i\| \le e_i^T x + f_i, \quad i = 1, \ldots, L, \quad (5) $$

where $C_i \in \mathbf{R}^{n_i \times m}$, $d_i \in \mathbf{R}^{n_i}$, $e_i \in \mathbf{R}^m$, $f_i \in \mathbf{R}$, $i = 1, \ldots, L$. SOCPs can be expressed as SDPs, and can therefore be solved in polynomial time using interior-point methods for SDPs. However, the SDP formulation is not the most efficient numerically, as special interior-point methods can be devised for SOCPs [1, 7, 8].

3 Unstructured Robust Least-Squares

In this section, we consider the RLS problem, which is to minimize the worst-case residual (1). We assume $\rho = 1$ with no loss of generality.

3.1 Solution via SOCP

Theorem 3.1 The RLS problem is equivalent to the (convex) second-order cone program (SOCP)

$$ \min_{\lambda, \tau, x} \; \lambda \quad \text{subject to } \; \|Ax - b\| \le \lambda - \tau, \quad \left\| \begin{bmatrix} x \\ 1 \end{bmatrix} \right\| \le \tau. $$

The solution $x_{\mathrm{RLS}}$ is unique, and given by

$$ x_{\mathrm{RLS}} = \begin{cases} (\mu I + A^T A)^{-1} A^T b & \text{if } \mu = (\lambda - \tau)/\tau > 0, \\ A^\dagger b & \text{else,} \end{cases} $$

where $A^\dagger$ denotes the pseudo-inverse of $A$.

Very efficient interior-point methods can be used for solving SOCPs [1]. When applied directly to the above problem, these methods yield an algorithm with (worst-case) complexity similar to that of one SVD of $A$ (see [4] for details).

It is possible to prove that the RLS solution is continuous in $(A, b)$. The consequence is that the RLS approach can be used to regularize an ill-posed LS problem. It belongs to the class of Tikhonov regularizations [12], the regularization parameter being optimal for robustness. Note that many regularization schemes have been proposed in the past (see [13, 14]), usually based on an expression for $x$ similar to that given in Theorem 3.1. As mentioned in [12], the choice of an appropriate regularization parameter is problem-dependent and in many cases not obvious. The RLS method is a rigorous way to make this choice, provided bounds on $A, b$ are known.
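As an illustration of Theorem 3.1, here is a minimal sketch of the SOCP in CVXPY (a modeling tool of our choosing, not the software used in the paper), on random data, cross-checked against the closed-form solution. The tolerance in the final check is arbitrary.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)

x = cp.Variable(m)
lam, tau = cp.Variable(), cp.Variable()
prob = cp.Problem(cp.Minimize(lam),
                  [cp.norm(A @ x - b) <= lam - tau,
                   cp.norm(cp.hstack([x, np.ones(1)])) <= tau])
prob.solve()

# Cross-check against the closed form of Theorem 3.1.
mu = (lam.value - tau.value) / tau.value
x_closed = (np.linalg.solve(mu * np.eye(m) + A.T @ A, A.T @ b) if mu > 0
            else np.linalg.pinv(A) @ b)
print(np.allclose(x.value, x_closed, atol=1e-5))
```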
3.2 Link with Total Least Squares

The RLS framework assumes that the data matrices $(A, b)$ are the "nominal" values of the model, which are subject to unstructured perturbation, bounded in norm by $\rho$. Now, if we think of $(A, b)$ as "measured" data, the assumption that $(A, b)$ correspond to a nominal model may not be judicious. Also, in some applications, the norm bound on the perturbation may be hard to estimate. Total Least Squares (TLS) [16] can be used in conjunction with RLS to address this issue.

Let $\Delta A_{\mathrm{TLS}}$, $\Delta b_{\mathrm{TLS}}$, $x_{\mathrm{TLS}}$ be minimizers of the TLS problem

$$ \text{minimize } \|[\Delta A \;\; \Delta b]\|_F \quad \text{subject to } \; (A + \Delta A)x = b + \Delta b, $$

and let $\rho_{\mathrm{TLS}} = \|[\Delta A_{\mathrm{TLS}} \;\; \Delta b_{\mathrm{TLS}}]\|_F$, $A_{\mathrm{TLS}} = A + \Delta A_{\mathrm{TLS}}$, $b_{\mathrm{TLS}} = b + \Delta b_{\mathrm{TLS}}$. TLS finds a consistent linear system that is closest (in the Frobenius-norm sense) to the observed data $(A, b)$. The underlying assumption is that the observed data $(A, b)$ are the result of a consistent linear system which, under the measurement process, has been subjected to unstructured perturbations, unknown but bounded in norm by $\rho_{\mathrm{TLS}}$. With this assumption, any point of the ball

$$ \{ (A', b') \mid \|[A' - A_{\mathrm{TLS}} \;\; b' - b_{\mathrm{TLS}}]\|_F \le \rho_{\mathrm{TLS}} \} $$

can be observed, just as well as $(A, b)$. Thus, TLS computes an "uncertain linear system" representation of the observed phenomenon: $(A_{\mathrm{TLS}}, b_{\mathrm{TLS}})$ is the nominal model, and $\rho_{\mathrm{TLS}}$ is the perturbation level. Once this uncertain system representation $(A_{\mathrm{TLS}}, b_{\mathrm{TLS}}, \rho_{\mathrm{TLS}})$ is computed, choosing $x_{\mathrm{TLS}}$ as a "solution" to $Ax \simeq b$ amounts to finding the exact solution to the nominal system. Doing so, we compute a very accurate solution (with zero residual), which does not take into account the perturbation level $\rho_{\mathrm{TLS}}$. A more robust solution is given by the solution to the following RLS problem:

$$ \min_x \; \max_{\|[\Delta A \;\; \Delta b]\|_F \le \rho_{\mathrm{TLS}}} \|(A_{\mathrm{TLS}} + \Delta A)x - (b_{\mathrm{TLS}} + \Delta b)\|. $$
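The TLS step of this recipe is a standard SVD computation [16]. A minimal sketch, with our own naming and assuming the generic case where the last component of the singular vector is nonzero:

```python
import numpy as np

def tls(A, b):
    """Total Least Squares via the SVD of [A b]; see [16]."""
    n, m = A.shape
    C = np.hstack([A, b[:, None]])
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    v = Vt[-1]                            # right singular vector of the smallest singular value
    x_tls = -v[:m] / v[m]                 # assumes v[m] != 0 (generic case)
    dC = -s[-1] * np.outer(U[:, -1], v)   # rank-one correction making the system consistent
    A_tls, b_tls = A + dC[:, :m], b + dC[:, m]
    rho_tls = s[-1]                       # = ||[dA_tls  db_tls]||_F
    return x_tls, A_tls, b_tls, rho_tls
```

The triple $(A_{\mathrm{TLS}}, b_{\mathrm{TLS}}, \rho_{\mathrm{TLS}})$ returned here can then be passed to the RLS SOCP of Theorem 3.1, with $\rho = \rho_{\mathrm{TLS}}$, to obtain the more robust solution just described.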

4 Structured Robust Least-Squares

In this section, we consider the SRLS problem, which is to minimize the structured worst-case residual (3). Again we assume $\rho = 1$ with no loss of generality.

Theorem 4.1 The Euclidean-norm SRLS problem can be solved by computing an optimal solution $(\lambda, \tau, x)$ of the SDP

$$ \min \; \lambda \quad \text{subject to } \; \begin{bmatrix} \lambda - \tau & (A_0 x - b_0)^T & 0 \\ A_0 x - b_0 & \lambda I & M(x) \\ 0 & M(x)^T & \tau I \end{bmatrix} \succeq 0, \qquad \text{where } M(x) = [A_1 x - b_1 \;\; \cdots \;\; A_p x - b_p]. $$

The above theorem shows that a general-purpose SDP solver (such as SP [6]) can be used to solve the SRLS problem. We note, however, that special interior-point methods can be devised that take the structure of the above SDP into account for greater efficiency. This subject is left for future research.

As before, we may ask whether the solution to the SRLS problem of Section 4 is continuous in the data matrices $A_i, b_i$, as was the case for unstructured RLS problems. We only discuss continuity of the optimal worst-case residual with respect to $(A_0, b_0)$ (in many problems, the coefficient matrices $A_i, b_i$ for $i = 1, \ldots, p$ are fixed). It turns out that a sufficient condition for continuity of the optimal worst-case residual, as a function of $(A_0, b_0)$, is that $[A_1^T \; \cdots \; A_p^T]^T$ is full rank.

5 Linear-Fractional SRLS

In this section, we examine a generalization of the SRLS problem. Our framework encompasses the case when the functions $A(\delta)$, $b(\delta)$ are rational.

Let $\mathcal{D}$ be a subspace of $\mathbf{R}^{N \times N}$, $A \in \mathbf{R}^{n \times m}$, $b \in \mathbf{R}^n$, $L \in \mathbf{R}^{n \times N}$, $R_A \in \mathbf{R}^{N \times m}$, $R_b \in \mathbf{R}^N$, $D \in \mathbf{R}^{N \times N}$. For every $\Delta \in \mathcal{D}$ such that $\det(I - \Delta D) \ne 0$, we define the matrix functions

$$ [A(\Delta) \;\; b(\Delta)] = [A \;\; b] + L (I - \Delta D)^{-1} \Delta \, [R_A \;\; R_b]. $$

For a given $x \in \mathbf{R}^m$, we define the worst-case residual by

$$ r_{\mathcal{D}}(A, b, \rho, x) = \begin{cases} \infty & \text{if } \det(I - \Delta D) = 0 \text{ for some } \Delta \in \mathcal{D}, \; \|\Delta\| \le \rho, \\ \displaystyle\max_{\Delta \in \mathcal{D}, \; \|\Delta\| \le \rho} \|A(\Delta)x - b(\Delta)\| & \text{else.} \end{cases} $$

We say that $x$ is a Structured Robust Least Squares (SRLS) solution if $x$ minimizes the worst-case residual above. As before, we assume $\rho = 1$ with no loss of generality. Introduce the following linear subspaces:

$$ \mathcal{B} = \{ B \in \mathbf{R}^{N \times N} \mid B\Delta = \Delta B \text{ for every } \Delta \in \mathcal{D} \}, \quad \mathcal{S} = \{ S \in \mathcal{B} \mid S = S^T \}, \quad \mathcal{G} = \{ G \in \mathcal{B} \mid G = -G^T \}. $$

Computing the worst-case residual is NP-hard [4]. We may compute, and optimize, an upper bound on this residual, as follows.

Theorem 5.1 An upper bound on the optimal worst-case residual can be obtained by solving the SDP

$$ \min_{\lambda, S, G, x} \; \lambda \quad \text{subject to } \; S \in \mathcal{S}, \; G \in \mathcal{G}, \; \begin{bmatrix} \Theta(S, G) & M(x) \\ M(x)^T & \lambda \end{bmatrix} \succeq 0, \quad (6) $$

where $M(x) = [(Ax - b)^T \;\; (R_A x - R_b)^T]^T$ and

$$ \Theta(S, G) = \begin{bmatrix} I - L S L^T & -L S D^T + L G \\ -D S L^T + G^T L^T & S + D G - G D^T - D S D^T \end{bmatrix}. $$

The upper bound is always exact when $\mathcal{D} = \mathbf{R}^{N \times N}$. If $\Theta \succ 0$ at the optimum, the upper bound is also exact. The optimal $x$ is then unique, and when $R_A$ is full rank, it is given by the weighted least-squares formula

$$ x = (\tilde{A}^T \Theta^{-1} \tilde{A})^{-1} \tilde{A}^T \Theta^{-1} \tilde{b}, \qquad \text{where } \tilde{A} = [A^T \;\; R_A^T]^T, \; \tilde{b} = [b^T \;\; R_b^T]^T. $$

Precise conditions for continuity of the optimal upper bound on the worst-case residual in the linear-fractional case are not known. We may, however, regularize this quantity using a method described in [17] for a related problem. For a given $\epsilon > 0$, define the bounded set

$$ \mathcal{S}_\epsilon = \{ S \in \mathcal{S} \mid \epsilon I \preceq S \preceq \epsilon^{-1} I \}, $$

where $\mathcal{S}$ is defined above. If we replace $\mathcal{S}$ by $\mathcal{S}_\epsilon$ in the SDP (6), we obtain an optimal value $\phi(\epsilon)$ which is a continuous function of $A, b$. As $\epsilon \to 0$, $\phi(\epsilon)$ has a limit, equal to the optimal value of SDP (6). The linear-fractional SRLS can be interpreted as a weighted LS, and so can the above regularization method. Thus, the above method belongs to the class of Tikhonov (or weighted LS) regularization methods, the weighting matrix being optimal for robustness.
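Returning to Theorem 4.1, its arrow-structured LMI is short to prototype. A sketch in CVXPY with random structure matrices; the construction, solver choice, and sampling check are ours, not the paper's code.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 5, 3, 2
A = [rng.standard_normal((n, m)) for _ in range(p + 1)]   # A[0] nominal, A[1..p] structure
b = [rng.standard_normal(n) for _ in range(p + 1)]

x = cp.Variable(m)
lam, tau = cp.Variable(), cp.Variable()
r0 = A[0] @ x - b[0]
M = cp.hstack([cp.reshape(A[i] @ x - b[i], (n, 1)) for i in range(1, p + 1)])
LMI = cp.bmat([                                  # symmetric by construction
    [cp.reshape(lam - tau, (1, 1)), cp.reshape(r0, (1, n)), np.zeros((1, p))],
    [cp.reshape(r0, (n, 1)),        lam * np.eye(n),        M],
    [np.zeros((p, 1)),              M.T,                    tau * np.eye(p)],
])
cp.Problem(cp.Minimize(lam), [LMI >> 0]).solve(solver=cp.SCS)

# Sanity check: sampled structured residuals (3) should not exceed lam (up to tolerance).
deltas = rng.standard_normal((2000, p))
deltas /= np.maximum(1.0, np.linalg.norm(deltas, axis=1, keepdims=True))
worst = max(np.linalg.norm((A[0] + sum(d[i] * A[i + 1] for i in range(p))) @ x.value
                           - (b[0] + sum(d[i] * b[i + 1] for i in range(p))))
            for d in deltas)
print(lam.value, worst)
```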

6 Numerical results

6.1 Complexity estimates of RLS

We first performed "large-scale" experiments for the RLS problem of Section 3, solving the SOCP of Theorem 3.1 for uniformly generated random matrices $A$ and vectors $b$ of various sizes $n, m$. Figures 1 and 2 show the average, minimum and maximum number of iterations for various RLS problems using the SOCP formulation. In Fig. 1, we show these numbers for increasing values of $n$ with $m$ fixed; for each value of $n$, the vertical bar indicates the minimum and maximum values obtained over repeated trials of $A, b$. In Fig. 2, we show these numbers for increasing values of $m$ with $n$ fixed. For both plots, the plain curve is the mean value. The experiments confirm that the number of iterations is almost independent of problem size for the RLS problem.

[Figure 1: Complexity estimates for RLS, with m fixed. Vertical bars indicate the deviation across trials.]

[Figure 2: Complexity estimates for RLS, with n fixed. Vertical bars indicate the deviation across trials.]

6.2 RLS and regularization

As mentioned before, we may use RLS to regularize an ill-conditioned LS problem. In Fig. 3, we show the optimal worst-case residual for an RLS problem involving a $4 \times 3$ matrix $A$ which depends linearly on a scalar parameter $\alpha$ and is singular when $\alpha = 0$. We observe the regularizing effect of the RLS solution. When $\rho = 0$, we obtain the LS solution; the latter is not a continuous function of $\alpha$, and exhibits a spike at $\alpha = 0$ (where $A$ becomes singular). For $\rho > 0$, the RLS solution smoothes the spike away. The spike is flattened more and more as $\rho$ grows, which can be confirmed theoretically. For $\rho = 1$, the optimal worst-case residual becomes flat (independent of $\alpha$) and equal to $\|b\| + 1$, with $x_{\mathrm{RLS}} = 0$.

[Figure 3: Optimal worst-case residual vs. α, for various values of the perturbation level ρ.]
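The regularizing effect is easy to reproduce qualitatively. Below, a hypothetical $4 \times 3$ family $A(\alpha)$, singular at $\alpha = 0$, stands in for the matrix used in the paper (whose exact entries are not given here); we minimize the general-$\rho$ worst-case residual $\|Ax - b\| + \rho \, \|[x^T \; 1]^T\|$ from Theorem 3.1 with CVXPY:

```python
import cvxpy as cp
import numpy as np

def rls(A, b, rho):
    """Minimize the worst-case residual ||Ax - b|| + rho * ||(x, 1)||."""
    x = cp.Variable(A.shape[1])
    obj = cp.norm(A @ x - b) + rho * cp.norm(cp.hstack([x, np.ones(1)]))
    cp.Problem(cp.Minimize(obj)).solve()
    return x.value, obj.value

b = np.ones(4)
for alpha in [1e-6, 1e-2, 1e-1]:
    A = np.vstack([np.eye(3), np.zeros((1, 3))])
    A[0, 0] = alpha                              # A(alpha) is nearly singular as alpha -> 0
    x_ls = np.linalg.lstsq(A, b, rcond=None)[0]  # LS solution blows up like 1/alpha
    x_rls, wc = rls(A, b, rho=0.3)               # RLS stays bounded, varies smoothly
    print(alpha, np.linalg.norm(x_ls), np.linalg.norm(x_rls), wc)
```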

6.3 Robust identification

Consider the following system identification problem: we seek to estimate the impulse response $h$ of a discrete-time system from its input $u$ and output $y$. Assuming that the system is single-input single-output, linear, and of order $m$, and that $u$ is zero for negative time indices, $y$, $u$ and $h$ are related by the convolution equations $Uh = y$, where $U$ is a lower-triangular Toeplitz matrix whose first column is $u$. If $y, U$ are known exactly, this yields a linear equation in $h$, which can be solved with standard LS. In practice, however, both $y$ and $u$ are subject to errors. We may assume for instance that the actual value of $y$ is $y + \delta y$, and that of $u$ is $u + \delta u$, where $\delta u, \delta y$ are unknown-but-bounded perturbations. The perturbed data $U, y$ can be written

$$ U(\delta) = U + \sum_{i=1}^m \delta u_i \, U_i, \qquad y(\delta) = y + \sum_{i=1}^m \delta y_i \, e_i, $$

where $e_i$, $i = 1, \ldots, m$, is the $i$-th column of the $m \times m$ identity matrix, and the $U_i$ are lower-triangular Toeplitz matrices with first column equal to $e_i$.

We first assume that the sum of the input and output energies is bounded, that is $\|\delta\| \le \rho$, where $\delta = [\delta u^T \;\; \delta y^T]^T \in \mathbf{R}^{2m}$ and $\rho$ is given. We address the following SRLS problem:

$$ \min_{h \in \mathbf{R}^m} \; \max_{\|\delta\| \le \rho} \|U(\delta)h - y(\delta)\|. \quad (7) $$

As an example, we consider the nominal values

$$ u = [1 \;\; 2 \;\; 3]^T, \qquad y = [4 \;\; 5 \;\; 6]^T. $$

In Fig. 4, we show the optimal worst-case residual and that of the LS solution, obtained by solving problem (7) with $h$ free and with $h$ fixed to the LS solution, respectively. Since the LS solution has zero residual (the matrix $U$ is invertible), we can prove (and check on the figure) that its worst-case residual grows linearly with $\rho$. In contrast, the optimal worst-case residual has a finite limit as $\rho \to \infty$.

[Figure 4: Worst-case residuals of the LS and Euclidean-norm SRLS solutions for various values of the perturbation level ρ.]

We now assume that the perturbation bounds on $y, u$ are not correlated. For instance, we consider problem (7) with the bound $\|\delta\| \le \rho$ replaced by $\|\delta y\| \le \rho$, $\|\delta u\|_\infty \le 1$. Physically, these bounds mean that the output energy and the peak input are bounded. This problem can be formulated as a linear-fractional SRLS problem; the data matrices $[A \;\; b]$, $L$, $R$ are constructed from the nominal data $u, y$, and the perturbation has the structure

$$ \Delta = \mathrm{diag}\left( \delta u_1 I_3, \;\; \delta u_2 I_2, \;\; \delta u_3, \;\; [\delta y_1 \;\; \delta y_2 \;\; \delta y_3 \;\; *] \right), $$

where the symbols $*$ denote dummy elements of $\Delta$ that were added in order to work with a square perturbation matrix.

In Fig. 5, we show the bounds on the worst-case residual vs. $\rho$, the uncertainty size. We compare the upper bounds corresponding to the RLS and LS solutions, with lower bounds obtained by computing the largest residual $\|U(\delta_{\mathrm{trial}})x - y(\delta_{\mathrm{trial}})\|$ among randomly drawn trial points $\delta_{\mathrm{trial}}$, for $x = x_{\mathrm{LS}}$ and $x = x_{\mathrm{RLS}}$. This plot shows that, for the LS solution, our estimate of the worst-case residual is not exact, and the discrepancy grows with the uncertainty size. In contrast, for the RLS solution the estimate appears to be exact for every value of $\rho$.

[Figure 5: Upper and lower bounds on worst-case residuals for the LS and RLS solutions (predicted and simulated).]

6.4 Robust interpolation

For given integers $n \ge 1$, $k$, we seek a polynomial of degree $n - 1$,

$$ p(t) = x_1 + x_2 t + \cdots + x_n t^{n-1}, $$

that interpolates given points $(a_i, b_i)$, $i = 1, \ldots, k$, that is,

$$ p(a_i) = b_i, \quad i = 1, \ldots, k. $$

We assume here that the $b_i$'s are known, while the $a_i$'s are unknown-but-bounded: $a_i(\delta) = a_i + \delta_i$, $i = 1, \ldots, k$, where $\|\delta\| \le \rho$. We seek a robust interpolant, that is, a minimizer of $\max_{\|\delta\| \le \rho} \|A(\delta)x - b\|$, where $A(\delta)$ has a Vandermonde structure. This can be formulated as a linear-fractional SRLS problem.

In Fig. 6, we show the result for an instance with $n = 3$. The LS solution is very accurate (zero nominal residual: every point is interpolated exactly), but has a (predicted) worst-case residual of 17977. The RLS solution trades off this accuracy (only one point interpolated, and nominal residual of 833) for robustness (with a worst-case residual less than 1173). As $\rho \to \infty$, the RLS interpolation polynomial becomes more and more horizontal. (This is consistent with the fact that we allow perturbations on the vector $a$ only.)

[Figure 6: Interpolation polynomials: LS and RLS solutions for two values of ρ.]
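The qualitative behavior of this example can be checked by sampling. The sketch below uses hypothetical data (the paper's exact values are not reproduced here) and a hand-picked "horizontal" competitor rather than the true RLS solution, which would require the linear-fractional SDP of Theorem 5.1:

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([0.0, 1.0, 2.0])     # hypothetical abscissae
b = np.array([1.0, 3.0, 2.0])     # hypothetical ordinates
n = 3                             # polynomial of degree n - 1

def V(a):                         # Vandermonde matrix: row i is (1, a_i, a_i^2, ...)
    return np.vander(a, n, increasing=True)

def worst_residual(x, rho, trials=5000):
    """Monte-Carlo lower bound on max_{||delta|| <= rho} ||V(a + delta) x - b||."""
    best = 0.0
    for _ in range(trials):
        d = rng.standard_normal(a.size)
        d *= rho / np.linalg.norm(d)
        best = max(best, np.linalg.norm(V(a + d) @ x - b))
    return best

x_ls = np.linalg.solve(V(a), b)             # interpolates exactly: zero nominal residual
x_flat = np.array([b.mean(), 0.0, 0.0])     # deliberately horizontal polynomial
print(worst_residual(x_ls, 0.5), worst_residual(x_flat, 0.5))
```

The flat polynomial sacrifices nominal accuracy but is far less sensitive to perturbations of $a$, which is the trade-off the RLS interpolant makes optimally.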

7 Conclusions

This paper shows that several robust least-squares (RLS) problems with unknown-but-bounded data matrices are amenable to (convex) second-order cone or semidefinite programming (SOCP or SDP). The implication is that these RLS problems can be solved in polynomial time, and efficiently in practice.

In our examples, we have demonstrated the use of an SOCP code [18] and a general-purpose semidefinite programming code, SP [6]. Future work could be devoted to writing special code that exploits the structure of these problems, in order to further increase the efficiency of the method. For instance, it seems that, in many problems, the perturbation matrices are sparse and/or have special (e.g., Toeplitz) structure.

The method can be used for several related problems.

Constrained RLS. We may consider problems where additional (convex) constraints are added on the vector $x$. (Such constraints arise naturally in e.g. image processing.) For instance, we may consider problem (1) with an additional linear (resp. convex quadratic) constraint $(Cx)_i \ge 0$, $i = 1, \ldots, q$ (resp. $x^T Q x \le 1$), where $C$ (resp. $Q \succeq 0$) is given. To solve such a problem, it suffices to add the related constraint to the corresponding SOCP or SDP formulation; a sketch follows this list.

RLS problems with other norms. We may consider RLS problems in which the worst-case residual error is measured in other norms, such as the maximum norm.

Matrix RLS. We may, of course, derive similar results when the constant term $b$ is a matrix. The worst-case error can be evaluated in a variety of norms.

Error-in-Variables RLS. We may consider problems where the solution $x$ is also subject to uncertainty (due to implementation and/or quantization errors). That is, we may consider a worst-case residual of the form

$$ \max_{\|\delta x\| \le \rho_1} \; \max_{\|[\Delta A \;\; \Delta b]\|_F \le \rho_2} \|(A + \Delta A)(x + \delta x) - (b + \Delta b)\|, $$

where $\rho_i$, $i = 1, 2$, are given. We may compute (and optimize) upper bounds on the above quantity using SDP.
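As announced for the Constrained RLS extension, adding such constraints is a one-line change to the SOCP of Theorem 3.1. A sketch (CVXPY, random data, $\rho = 1$; all names are ours):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n, m, q = 8, 4, 2
A, b = rng.standard_normal((n, m)), rng.standard_normal(n)
C = rng.standard_normal((q, m))

x = cp.Variable(m)
worst_case = cp.norm(A @ x - b) + cp.norm(cp.hstack([x, np.ones(1)]))  # rho = 1
cp.Problem(cp.Minimize(worst_case),
           [C @ x >= 0]).solve()    # extra linear constraints (Cx)_i >= 0
# For the quadratic variant, use [cp.quad_form(x, Q) <= 1] with Q PSD instead.
print(x.value)
```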
References

[1] Yu. Nesterov and A. Nemirovsky. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. SIAM, 1994.
[2] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49-95, March 1996.
[3] J. Doyle, M. Newlin, F. Paganini, and J. Tierno. Unifying robustness analysis and system ID. In Proc. IEEE Conf. on Decision and Control, pages 3667-3672, December 1994.
[4] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data matrices. SIAM J. on Matrix Analysis and Applications, 1996. To appear.
[5] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA, June 1994.
[6] L. Vandenberghe and S. Boyd. SP: Software for semidefinite programming, user's guide, December 1994. Available via anonymous ftp to isl.stanford.edu under /pub/boyd/semidef_prog.
[7] H. Lebret. Antenna pattern synthesis through convex optimization. In Franklin T. Luk, editor, Advanced Signal Processing Algorithms, Proc. SPIE 2563, pages 182-192, 1995.
[8] K. D. Andersen. An efficient Newton barrier method for minimizing a sum of Euclidean norms. SIAM J. on Optimization, 6(1):74-95, February 1996.
[9] R. J. Stern and H. Wolkowicz. Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM J. on Optimization, 5(2):286-313, May 1995.
[10] M. K. H. Fan, A. L. Tits, and J. C. Doyle. Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics. IEEE Trans. Aut. Control, 36(1):25-38, January 1991.
[11] T. Iwasaki and R. E. Skelton. All controllers for the general H-infinity control problem: LMI existence conditions and state space formulas. Automatica, 30(8):1307-1317, August 1994.
[12] M. Hanke and P. C. Hansen. Regularization methods for large-scale problems. Surveys on Mathematics for Industry, 3:253-315, 1993.
[13] A. Tikhonov and V. Arsenin. Solutions of Ill-Posed Problems. Wiley, New York, 1977.
[14] G. Demoment. Image reconstruction and restoration: overview of common estimation problems. IEEE Trans. on Acoustics, Speech and Signal Processing, 37(12):2024-2036, December 1989.
[15] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins Univ. Press, Baltimore, second edition, 1989.
[16] S. Van Huffel and J. Vandewalle. The Total Least Squares Problem: Computational Aspects and Analysis, volume 9 of Frontiers in Applied Mathematics. SIAM, 1991.
[17] L. Lee and A. Tits. On continuity/discontinuity in robustness indicators. IEEE Trans. Aut. Control, 38(10):1551-1553, October 1993.
[18] H. Lebret. Synthèse de diagrammes de réseaux d'antennes par optimisation convexe. PhD thesis, Université de Rennes I, November 1994.