Robust Least Squares and Applications

Laurent El Ghaoui and Hervé Lebret
Ecole Nationale Supérieure de Techniques Avancées
32, Bd. Victor, 75739 Paris, France
(elghaoui, lebret)@ensta.fr

Abstract

We consider least-squares problems where the coefficient matrices A, b are unknown-but-bounded. We minimize the worst-case residual error using (convex) second-order cone programming (SOCP), yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation.

Notation. For a matrix X, ‖X‖ denotes the largest singular value, and ‖X‖_F the Frobenius norm; I denotes the identity matrix, with size inferred from context.

1 Introduction

In this paper, we consider the problem of finding a solution x to an overdetermined set of equations Ax ≈ b, where the data matrices A, b are not known exactly. First, we assume that the given model is not a single pair (A, b), with A ∈ R^{n×m}, b ∈ R^n, but a family of matrices (A + ΔA, b + Δb), where Δ = [ΔA Δb] is an unknown-but-bounded matrix, say ‖Δ‖_F ≤ ρ, where ρ ≥ 0 is given. For fixed x, we define the worst-case residual as

  r(A, b, ρ, x) = max_{‖[ΔA Δb]‖_F ≤ ρ} ‖(A + ΔA)x − (b + Δb)‖.   (1)

We say that x is a Robust Least Squares (RLS) solution if x minimizes the worst-case residual r(A, b, ρ, x). In many applications, the perturbation matrices ΔA, Δb have a known (e.g., Toeplitz) structure. In this case, the worst-case residual (1) might be a very conservative estimate.
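For fixed x, the maximum in (1) can be evaluated in closed form: a worst-case perturbation aligns [ΔA Δb] with the residual direction, giving r(A, b, ρ, x) = ‖Ax − b‖ + ρ√(‖x‖² + 1), which is the identity behind the SOCP characterization of Section 3. A minimal NumPy sketch on illustrative random data, checking both the achieving perturbation and that sampled perturbations never exceed the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 6, 3, 0.5
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)
x = np.linalg.lstsq(A, b, rcond=None)[0]  # any fixed x; here the LS solution

# Closed-form worst-case residual: ||Ax - b|| + rho * sqrt(||x||^2 + 1)
r_closed = np.linalg.norm(A @ x - b) + rho * np.hypot(np.linalg.norm(x), 1.0)

# A perturbation achieving the maximum: [dA db] = rho * u [x^T -1] / sqrt(||x||^2 + 1),
# where u is the unit vector along the nominal residual Ax - b.
res = A @ x - b
u = res / np.linalg.norm(res)
v = np.concatenate([x, [-1.0]]) / np.hypot(np.linalg.norm(x), 1.0)
Delta = rho * np.outer(u, v)              # Frobenius norm exactly rho
dA, db = Delta[:, :m], Delta[:, m]
r_achieved = np.linalg.norm((A + dA) @ x - (b + db))

# Monte Carlo: perturbations with ||[dA db]||_F <= rho never exceed the closed form
worst = 0.0
for _ in range(2000):
    D = rng.standard_normal((n, m + 1))
    D *= rho / np.linalg.norm(D)          # on the boundary of the uncertainty set
    worst = max(worst, np.linalg.norm((A + D[:, :m]) @ x - (b + D[:, m])))

print(r_closed, r_achieved, worst)
```

The sampled maximum is always a lower bound, while the constructed rank-one perturbation attains the closed form exactly.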
We are led to consider the following Structured RLS (SRLS) problem. Given A_0, …, A_p ∈ R^{n×m} and b_0, …, b_p ∈ R^n, we define, for every δ ∈ R^p,

  A(δ) = A_0 + Σ_{i=1}^p δ_i A_i,  b(δ) = b_0 + Σ_{i=1}^p δ_i b_i.   (2)

For ρ ≥ 0 and x ∈ R^m, we define the structured worst-case residual as

  r_S(A, b, ρ, x) = max_{‖δ‖ ≤ ρ} ‖A(δ)x − b(δ)‖.   (3)

We say that x is a Structured Robust Least Squares (SRLS) solution if x minimizes the worst-case residual r_S(A, b, ρ, x). Our main contribution is to show that we can compute the exact value of the optimal worst-case residuals using convex second-order cone programming (SOCP) or semidefinite programming (SDP). The consequence is that the RLS and SRLS problems can be solved in polynomial time, and with great practical efficiency, using e.g. recent interior-point methods [1, 2]. (Our exact results are to be contrasted with those of Doyle et al. [3], who also use SDP to compute upper bounds on the worst-case residual for identification problems.) We also show that the RLS solution is continuous in the data matrices A, b, which yields a (Tikhonov) regularization technique for ill-conditioned LS problems. Similar regularity results hold for the SRLS problem. We also consider a generalization of the SRLS problem, referred to as the linear-fractional SRLS problem in the sequel, in which the matrix functions A(δ), b(δ) in (2) depend rationally on the parameter vector δ. (We describe later a robust interpolation problem that falls in this class.) The problem is NP-complete in this case, but we may compute, and optimize, upper bounds on the worst-case residual using SDP. In parallel with RLS, we interpret our solution as that of a weighted LS problem for an augmented system, the weights being computed via SDP. A full version of this paper (with proofs and references) is to appear [4].
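For fixed x, the structured residual (3) maximizes ‖(A_0 x − b_0) + Σ_i δ_i (A_i x − b_i)‖ over the ball ‖δ‖ ≤ ρ. Writing M(x) for the matrix whose columns are A_i x − b_i, sampling δ gives a lower bound, while ‖A_0 x − b_0‖ + ρ‖M(x)‖ is a simple upper bound; the exact value is what the SDP of Section 4 computes. A sketch on illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, rho = 5, 3, 4, 1.0
A0, b0 = rng.standard_normal((n, m)), rng.standard_normal(n)
As = [rng.standard_normal((n, m)) for _ in range(p)]
bs = [rng.standard_normal(n) for _ in range(p)]
x = rng.standard_normal(m)

c = A0 @ x - b0                                               # nominal residual
M = np.column_stack([Ai @ x - bi for Ai, bi in zip(As, bs)])  # M(x)

# Upper bound by the triangle inequality and the operator norm of M(x)
upper = np.linalg.norm(c) + rho * np.linalg.norm(M, 2)

# Lower bound: sample +/- delta on the sphere ||delta|| = rho
lower = 0.0
for _ in range(3000):
    d = rng.standard_normal(p)
    d *= rho / np.linalg.norm(d)
    lower = max(lower, np.linalg.norm(c + M @ d), np.linalg.norm(c - M @ d))

print(lower, upper)
```

Evaluating both ±δ guarantees the sampled lower bound is at least the nominal residual ‖c‖, since ‖c + Mδ‖ + ‖c − Mδ‖ ≥ 2‖c‖.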
2 The minimization problems: preliminaries

A problem of the form

  minimize c^T x subject to F(x) = F_0 + Σ_{i=1}^m x_i F_i ≥ 0,   (4)

where c ∈ R^m and the symmetric matrices F_i = F_i^T ∈ R^{N×N}, i = 0, …, m, are given, is called a semidefinite program (SDP). SDPs are convex optimization problems and can be solved in polynomial time with e.g. primal-dual interior-point methods [1, 6]. A special case of SDP is the second-order cone programming problem (SOCP), which is one of the form

  minimize c^T x subject to ‖C_i x + d_i‖ ≤ e_i^T x + f_i, i = 1, …, L,   (5)

where C_i ∈ R^{n_i×m}, d_i ∈ R^{n_i}, e_i ∈ R^m, f_i ∈ R, i = 1, …, L. SOCPs can be expressed as SDPs, therefore they can be solved in polynomial time using interior-point methods for SDPs. However, the SDP formulation is not the most efficient numerically, as special interior-point methods can be devised for SOCPs [1, 7, 8].

3 Unstructured Robust Least Squares

In this section, we consider the RLS problem, which is to minimize the worst-case residual (1). We assume ρ = 1 with no loss of generality.

3.1 Solution via SOCP

Theorem 3.1 The RLS problem is equivalent to the (convex) second-order cone program (SOCP)

  minimize_{x, λ, τ} λ subject to ‖Ax − b‖ ≤ λ − τ, ‖[x^T 1]^T‖ ≤ τ.

The solution x_RLS is unique, and given by

  x_RLS = (μI + A^T A)^{−1} A^T b if μ = (λ − τ)/τ > 0;  x_RLS = A†b otherwise,

where (λ, τ) are optimal above and A† denotes the pseudo-inverse.

Very efficient interior-point methods can be used for solving SOCPs [1]. When applied directly to the above problem, these methods yield an algorithm with (worst-case) complexity similar to that of one SVD of A (see [4] for details). It is possible to prove that the RLS solution is continuous in (A, b). The consequence is that the RLS approach can be used to regularize an ill-posed LS problem. It belongs to the class of Tikhonov regularizations [13], the regularization parameter being optimal for robustness. Note that many regularization schemes have been proposed in the past (see [13, 14]), usually based on an expression for x similar to that given in Theorem 3.1. As mentioned in [12], the choice of an appropriate regularization parameter is problem-dependent and, in many cases, not obvious.
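The regularization interpretation can be made concrete numerically: each μ > 0 yields a Tikhonov-type solution x(μ) = (μI + AᵀA)⁻¹Aᵀb, and among these the robust choice minimizes the worst-case residual ‖Ax − b‖ + ρ√(‖x‖² + 1). A crude grid search over μ (a sketch only, on illustrative nearly rank-deficient data; the SOCP finds the optimal μ directly):

```python
import numpy as np

def worst_case_residual(A, b, x, rho=1.0):
    # Closed-form worst-case residual for unstructured perturbations
    return np.linalg.norm(A @ x - b) + rho * np.hypot(np.linalg.norm(x), 1.0)

rng = np.random.default_rng(2)
n, m = 8, 4
A = rng.standard_normal((n, m))
A[:, -1] = A[:, 0] + 1e-6 * rng.standard_normal(n)  # nearly rank-deficient column
b = rng.standard_normal(n)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

best_x, best_r = x_ls, worst_case_residual(A, b, x_ls)
for mu in np.logspace(-6, 3, 200):
    x_mu = np.linalg.solve(mu * np.eye(m) + A.T @ A, A.T @ b)
    r_mu = worst_case_residual(A, b, x_mu)
    if r_mu < best_r:
        best_x, best_r = x_mu, r_mu

print(best_r, worst_case_residual(A, b, x_ls))
```

On ill-conditioned data the regularized solution has a much smaller norm than the LS solution, and a smaller worst-case residual, at the price of a slightly larger nominal residual.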
The RLS method is a rigorous way to make this choice, provided bounds on ΔA, Δb are known.

3.2 Link with Total Least Squares

The RLS framework assumes that the data matrices (A, b) are the "nominal" values of the model, which are subject to unstructured perturbations bounded in norm by ρ. Now, if we think of (A, b) as "measured" data, the assumption that (A, b) corresponds to a nominal model may not be judicious. Also, in some applications, the norm bound ρ on the perturbation may be hard to estimate. Total Least Squares (TLS) [16] can be used in conjunction with RLS to address this issue. Let ΔA_TLS, Δb_TLS, x_TLS be minimizers of the TLS problem

  minimize ‖[ΔA Δb]‖_F subject to (A + ΔA)x = b + Δb,

and let ρ_TLS = ‖[ΔA_TLS Δb_TLS]‖_F, A_TLS = A + ΔA_TLS, b_TLS = b + Δb_TLS. TLS finds a consistent linear system that is closest (in the Frobenius-norm sense) to the observed data (A, b). The underlying assumption is that the observed data (A, b) are the result of a consistent linear system which, during the measurement process, has been subjected to unstructured perturbations, unknown but bounded in norm by ρ_TLS. With this assumption, any point of the ball

  {(A′, b′) | ‖[A′ − A_TLS  b′ − b_TLS]‖_F ≤ ρ_TLS}

can be observed just as well as (A, b). Thus, TLS computes an "uncertain linear system" representation of the observed phenomenon: (A_TLS, b_TLS) is the nominal model, and ρ_TLS is the perturbation level. Once this uncertain-system representation (A_TLS, b_TLS, ρ_TLS) is computed, choosing x_TLS as a "solution" to Ax ≈ b amounts to finding the exact solution of the nominal system. Doing so, we compute a very accurate solution (with zero residual), which does not take into account the perturbation level ρ_TLS. A more robust solution is given by the solution to the following RLS problem:

  min_x max_{‖[ΔA Δb]‖_F ≤ ρ_TLS} ‖(A_TLS + ΔA)x − (b_TLS + Δb)‖.

4 Structured Robust Least Squares

In this section, we consider the SRLS problem, which is to minimize the structured worst-case residual (3). Again, we assume ρ = 1 with no loss of generality.
Theorem 4.1 The Euclidean-norm SRLS problem can be solved by computing an optimal solution (λ, τ, x) of the SDP

  minimize λ subject to
  [ λ − τ         (A_0 x − b_0)^T   0
    A_0 x − b_0   λ I               M(x)
    0             M(x)^T            τ I ] ≥ 0,

where M(x) = [A_1 x − b_1 … A_p x − b_p].

The above theorem shows that a general-purpose SDP solver (such as SP [6]) can be used to solve the SRLS problem. We note, however, that special interior-point methods can be devised that take the structure of the above SDP into account, for greater efficiency. This subject is left for future research. As before, we may ask whether the solution to the SRLS problem of section 4 is continuous in the data matrices A_i, b_i, as was the case for unstructured RLS problems. We only discuss continuity of the optimal worst-case residual with respect to (A_0, b_0) (in many problems, the coefficient matrices A_i, b_i for i = 1, …, p are fixed). It turns out that a sufficient condition for continuity of the optimal worst-case residual (as a function of (A_0, b_0)) is that [A_1^T … A_p^T]^T is full rank.

5 Linear-Fractional SRLS

In this section, we examine a generalization of the SRLS problem. Our framework encompasses the case when the functions A(δ), b(δ) are rational. Let D be a subspace of R^{N×N}, A ∈ R^{n×m}, b ∈ R^n, L ∈ R^{n×N}, R_A ∈ R^{N×m}, R_b ∈ R^N, D ∈ R^{N×N}. For every Δ ∈ D such that det(I − DΔ) ≠ 0, we define the matrix functions

  [A(Δ) b(Δ)] = [A b] + L Δ (I − DΔ)^{−1} [R_A R_b].

For a given x ∈ R^m, we define the worst-case residual by

  r_D(A, b, ρ, x) = ∞ if det(I − DΔ) = 0 for some Δ ∈ D, ‖Δ‖ ≤ ρ;
  r_D(A, b, ρ, x) = max_{Δ ∈ D, ‖Δ‖ ≤ ρ} ‖A(Δ)x − b(Δ)‖ otherwise.

We say that x is a Structured Robust Least Squares (SRLS) solution if x minimizes the worst-case residual above. As before, we assume ρ = 1 with no loss of generality. Introduce the following linear subspaces:

  B = {B ∈ R^{N×N} | BΔ = ΔB for every Δ ∈ D},
  S = {S ∈ B | S = S^T},  G = {G ∈ B | G = −G^T}.

Computing the worst-case residual is NP-hard [4]. We may compute, and optimize, an upper bound on this residual, as follows.
Theorem.1 An upper bound on the optimal worstcase residual can be obtained by solving the SDP S;G;;x subject to S S; G G; (S; G) M(x) M(x) T > where M(x) = [(Ax? b) T (R A x? R b ) T ] T and = (6) I? LSL T?LSD T + LG?DSL T + G T L T S + DG? GD T? DSD T The upper bound is always exact when D = R NN. If > at the optimum, the upper bound is also exact. The optimal x is then unique, and when R A is full rank, it is given by the weighted least-squares formula x = (A T?1 A)?1 A T b?1 ; R b where A = [A T R T A ]T. Precise conditions for continuity of the optimal upper bound on worst-case residual in the linear-fractional case are not known. We may however regularize this quantity using a method described in [17] for a related problem. For a given >, dene the bounded set S = S S ; I S 1 I where S is dened above. If we replace S by S in the SDP (6), we obtain an optimal value () which is a continuous function of A; b. As!, () has a limit, equal to the optimal value of SDP (6). The linear-fractional SRLS can be interpreted as a weighted LS, and so can the above regularization method. Thus, the above method belongs to the class of Tikhonov (or weighted LS) regularization methods, the weighting matrix being optimal for robustness. 6 Numerical results 6.1 Complexity estimates of RLS We rst did \large-scale" experiments for the RLS problem of x3. We have solved problem (3.1) for uniformly generated random matrices A and vectors b with various sizes of n; m. Figures 1 and show the average, imum and maximum number of iterations for various RLS problems using the SOCP formulation. In Fig. 1, we show these numbers for values of n ranging from 1 to 1. For each value of n, the vertical bar
indicates the minimum and maximum values obtained over trials of A, b, with m fixed. In Fig. 2, we show these numbers for a range of values of m. For each value of m, the vertical bar indicates the minimum and maximum values obtained over trials of A, b, with n fixed. For both plots, the plain curve is the mean value. The experiments confirm the fact that the number of iterations is almost independent of problem size for the RLS problem.

[Figure 1: Complexity estimates for RLS, with m fixed. Vertical bars indicate the deviation over trials.]
[Figure 2: Complexity estimates for RLS, with n fixed. Vertical bars indicate the deviation over trials.]
[Figure 3: Optimal worst-case residual vs. α for various values of the perturbation level ρ.]

6.2 RLS and regularization

As mentioned before, we may use RLS to regularize an ill-conditioned LS problem. In Fig. 3, we show the optimal worst-case residual for an RLS problem involving a 4×3 matrix A, which depends linearly on a parameter α; A is singular when α = 0. We observe the regularizing effect of the RLS solution. When ρ = 0, we obtain the LS solution. The latter is not a continuous function of α, and exhibits a spike at α = 0 (when A becomes singular). For ρ > 0, the RLS solution smoothes the spike away. The spike is more and more flattened as ρ grows, which can be confirmed theoretically. For ρ = 1, the optimal worst-case residual becomes flat (independent of α), and equal to ‖b‖ + 1, with x_RLS = 0.

6.3 Robust identification

Consider the following system identification problem. We seek to estimate the impulse response h of a discrete-time system from its input u and output y. Assuming that the system is single-input single-output, linear, and of order m, and that u is zero for negative time indices, y, u and h are related by the convolution equations Uh = y, where U is a lower triangular Toeplitz matrix whose first column is u.
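The convolution system Uh = y can be checked directly: with U the lower triangular Toeplitz matrix built from u, the product Uh reproduces the first m samples of the full convolution u * h. A minimal NumPy sketch on illustrative data:

```python
import numpy as np

def toeplitz_lower(u):
    # Lower triangular Toeplitz matrix with first column u
    m = len(u)
    U = np.zeros((m, m))
    for i in range(m):
        U[i:, i] = u[: m - i]
    return U

u = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0, 0.25])

U = toeplitz_lower(u)
y = U @ h

# Same as the first m samples of the full convolution u * h
y_conv = np.convolve(u, h)[: len(u)]
print(y, y_conv)

# With exact data, estimating h is a triangular solve (standard LS)
h_est = np.linalg.solve(U, y)
```

Since U is lower triangular with u[0] on the diagonal, it is invertible whenever u[0] ≠ 0, which is why the nominal LS problem here has zero residual.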
Assuming y, U are known exactly leads to a linear equation in h, which can be solved with standard LS. In practice, however, both y and u are subject to errors. We may assume for instance that the actual value of y is y + δy, and that of u is u + δu, where δu, δy are unknown-but-bounded perturbations. The perturbed matrices U, y write

  U(δ) = U + Σ_{i=1}^m δ_{u,i} U_i,  y(δ) = y + Σ_{i=1}^m δ_{y,i} e_i,

where e_i, i = 1, …, m, is the i-th column of the m×m identity matrix, and U_i are lower triangular Toeplitz matrices with first column equal to e_i. We first assume that the sum of the input and output energies is bounded, that is ‖δ‖ ≤ ρ, where δ = [δu^T δy^T]^T ∈ R^{2m}, and ρ ≥ 0 is given. We address the following SRLS problem:

  min_{h ∈ R^m} max_{‖δ‖ ≤ ρ} ‖U(δ)h − y(δ)‖.   (7)

As an example, we consider the following nominal values for y, u:

  u = [1 2 3]^T,  y = [4 5 6]^T.

In Fig. 4, we show the optimal worst-case residual and that corresponding to the LS solution, as given by solving problem (7) with h free, and problem (7) with h fixed to the LS solution, respectively. Since the LS solution has zero residual (the matrix U is invertible), we can prove (and check on the figure) that its worst-case residual grows linearly with ρ. In contrast, the RLS optimal worst-case residual has a finite limit as ρ → ∞.

[Figure 4: Worst-case residuals of LS and Euclidean-norm SRLS solutions for various values of the perturbation level ρ.]
[Figure 5: Upper and lower bounds on worst-case residuals for LS and RLS solutions (predicted and simulated).]

We now assume that the perturbation bounds on y, u are not correlated. For instance, we consider problem (7), with the bound ‖δ‖ ≤ ρ replaced with

  ‖δy‖ ≤ ρ,  ‖δu‖_∞ ≤ 1.

Physically, the above bounds mean that the output energy and the peak input are bounded. This problem can be formulated as a linear-fractional SRLS problem, with appropriate matrices [A b], L, R, and a perturbation of the structure

  Δ = diag(δ_{u,1} I_3, δ_{u,2} I_2, δ_{u,3}, [δ_{y,1} δ_{y,2} δ_{y,3}]^T ⋆),

where the symbols ⋆ denote dummy elements of Δ that were added in order to work with a square perturbation matrix. In Fig. 5, we show the bounds on the worst-case residual vs. ρ, the uncertainty size. We compare the upper bounds corresponding to the RLS and LS solutions, with the lower bound obtained by computing the largest residual ‖U(δ_trial)x − y(δ_trial)‖ among trial points δ_trial, with x = x_LS and x = x_RLS. This plot shows that, for the LS solution, our estimate of the worst-case residual is not exact, and the discrepancy grows with the uncertainty size. In contrast, for the RLS solution the estimate appears to be exact for every value of ρ.

6.4 Robust interpolation

For given integers n ≥ 1, k, we seek a polynomial of degree n −
1,

  p(t) = x_1 + x_2 t + ⋯ + x_n t^{n−1},

that interpolates given points (a_i, b_i), i = 1, …, k, that is,

  p(a_i) = b_i, i = 1, …, k.

We assume here that the b_i's are known, while the a_i's are unknown-but-bounded: a_i(δ) = a_i + δ_i, i = 1, …, k, where ‖δ‖ ≤ ρ. We seek a robust interpolant, that is, a minimizer of max_{‖δ‖ ≤ ρ} ‖A(δ)x − b‖, where A(δ) has a Vandermonde structure. This can be formulated as a linear-fractional SRLS problem. In Fig. 6, we show the result for n = 3. The LS solution is very accurate (zero nominal residual: every point is interpolated exactly), but has a large (predicted) worst-case residual. The RLS solution trades off this accuracy (only one point interpolated, with a nonzero nominal residual) for robustness (a much smaller worst-case residual). As ρ → ∞, the RLS interpolation polynomial becomes more and more horizontal. (This is consistent with the fact that we allow perturbations on the vector a only.)

7 Conclusions

This paper shows that several robust least-squares (RLS) problems with unknown-but-bounded data matrices are amenable to (convex) second-order cone or semidefinite programming (SOCP or SDP). The implication is that these RLS problems can be solved in polynomial time, and efficiently in practice.
[Figure 6: Interpolation polynomials: LS solution and RLS solutions for two values of ρ.]

In our examples, we have demonstrated the use of an SOCP code [18] and a general-purpose semidefinite programming code, SP [6]. Future work could be devoted to writing special code that exploits the structure of these problems, in order to further increase the efficiency of the method. For instance, it seems that, in many problems, the perturbation matrices are sparse, and/or have special (e.g., Toeplitz) structure. The method can be used for several related problems.

Constrained RLS. We may consider problems where additional (convex) constraints are added on the vector x. (Such constraints arise naturally in e.g. image processing.) For instance, we may consider problem (1) with an additional linear (resp. convex quadratic) constraint (Cx)_i ≥ 0, i = 1, …, q (resp. x^T Q x ≤ 1), where C (resp. Q ≥ 0) is given. To solve such a problem, it suffices to add the related constraint to the corresponding SOCP or SDP formulation.

RLS problems with other norms. We may consider RLS problems in which the worst-case residual error is measured in other norms, such as the maximum norm.

Matrix RLS. We may, of course, derive similar results when the constant term b is a matrix. The worst-case error can be evaluated in a variety of norms.

Error-in-Variables RLS. We may consider problems where the solution x is also subject to uncertainty (due to implementation and/or quantization errors). That is, we may consider a worst-case residual of the form

  max_{‖δx‖ ≤ ρ_1} max_{‖[ΔA Δb]‖_F ≤ ρ_2} ‖(A + ΔA)(x + δx) − (b + Δb)‖,

where ρ_i, i = 1, 2, are given. We may compute (and optimize) upper bounds on the above quantity using SDP.

References

[1] Yu. Nesterov and A. Nemirovsky. Interior-Point Polynomial Methods in Convex Programming: Theory and Applications. SIAM, Philadelphia, PA, 1994.
[2] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49-95, March 1996.
[3] J. Doyle, M. Newlin, F. Paganini, and J. Tierno. Unifying robustness analysis and system ID. In Proc.
IEEE Conf. on Decision and Control, pages 3667-3672, December 1994.
[4] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data matrices. SIAM J. on Matrix Analysis and Applications, 18(4):1035-1064, October 1997.
[5] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA, June 1994.
[6] L. Vandenberghe and S. Boyd. SP: Software for semidefinite programming, user's guide. December 1994. Available via anonymous ftp to isl.stanford.edu under /pub/boyd/semidef_prog.
[7] H. Lebret. Antenna pattern synthesis through convex optimization. In Franklin T. Luk, editor, Advanced Signal Processing Algorithms, volume 2563 of Proc. SPIE, pages 182-192, 1995.
[8] K. D. Andersen. An efficient Newton barrier method for minimizing a sum of Euclidean norms. SIAM J. on Optimization, 6(1):74-95, February 1996.
[9] R. J. Stern and H. Wolkowicz. Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM J. on Optimization, 5(2):286-313, May 1995.
[10] M. K. H. Fan, A. L. Tits, and J. C. Doyle. Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics. IEEE Trans. Aut. Control, 36(1):25-38, January 1991.
[11] T. Iwasaki and R. E. Skelton. All controllers for the general H∞ control problem: LMI existence conditions and state space formulas. Automatica, 30(8):1307-1317, August 1994.
[12] M. Hanke and P. C. Hansen. Regularization methods for large-scale problems. Surveys on Mathematics for Industry, 3:253-315, 1993.
[13] A. Tikhonov and V. Arsenin. Solutions of Ill-Posed Problems. Wiley, New York, 1977.
[14] G. Demoment. Image reconstruction and restoration: overview of common estimation problems. IEEE Trans. on Acoust., Speech and Signal Processing, 37(12):2024-2036, December 1989.
[15] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins Univ. Press, Baltimore, second edition, 1989.
[16] S. Van Huffel and J. Vandewalle.
The Total Least Squares Problem: Computational Aspects and Analysis, volume 9 of Frontiers in Applied Mathematics. SIAM, Philadelphia, PA, 1991.
[17] L. Lee and A. Tits. On continuity/discontinuity in robustness indicators. IEEE Trans. Aut. Control, 38(10):1551-1553, October 1993.
[18] H. Lebret. Synthèse de diagrammes de réseaux d'antennes par optimisation convexe. PhD thesis, Université de Rennes I, November 1994.
More informationON THE ARITHMETIC-GEOMETRIC MEAN INEQUALITY AND ITS RELATIONSHIP TO LINEAR PROGRAMMING, BAHMAN KALANTARI
ON THE ARITHMETIC-GEOMETRIC MEAN INEQUALITY AND ITS RELATIONSHIP TO LINEAR PROGRAMMING, MATRIX SCALING, AND GORDAN'S THEOREM BAHMAN KALANTARI Abstract. It is a classical inequality that the minimum of
More informationA PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE
Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ
More informationA CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING
A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research
More informationAbstract The following linear inverse problem is considered: given a full column rank m n data matrix A and a length m observation vector b, nd the be
ON THE OPTIMALLITY OF THE BACKWARD GREEDY ALGORITHM FOR THE SUBSET SELECTION PROBLEM Christophe Couvreur y Yoram Bresler y General Physics Department and TCTS Laboratory, Faculte Polytechnique de Mons,
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationLeast Squares Optimization
Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the
More information2nd Symposium on System, Structure and Control, Oaxaca, 2004
263 2nd Symposium on System, Structure and Control, Oaxaca, 2004 A PROJECTIVE ALGORITHM FOR STATIC OUTPUT FEEDBACK STABILIZATION Kaiyang Yang, Robert Orsi and John B. Moore Department of Systems Engineering,
More informationIntroduction. Chapter One
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationUMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract
UMIACS-TR-9-86 July 199 CS-TR 2494 Revised January 1991 An Updating Algorithm for Subspace Tracking G. W. Stewart abstract In certain signal processing applications it is required to compute the null space
More informationSystem Identification by Nuclear Norm Minimization
Dept. of Information Engineering University of Pisa (Italy) System Identification by Nuclear Norm Minimization eng. Sergio Grammatico grammatico.sergio@gmail.com Class of Identification of Uncertain Systems
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationlinearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice
3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is
More informationM E M O R A N D U M. Faculty Senate approved November 1, 2018
M E M O R A N D U M Faculty Senate approved November 1, 2018 TO: FROM: Deans and Chairs Becky Bitter, Sr. Assistant Registrar DATE: October 23, 2018 SUBJECT: Minor Change Bulletin No. 5 The courses listed
More informationIdentifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin
Identifying Redundant Linear Constraints in Systems of Linear Matrix Inequality Constraints Shafiu Jibrin (shafiu.jibrin@nau.edu) Department of Mathematics and Statistics Northern Arizona University, Flagstaff
More informationCourse Outline. FRTN10 Multivariable Control, Lecture 13. General idea for Lectures Lecture 13 Outline. Example 1 (Doyle Stein, 1979)
Course Outline FRTN Multivariable Control, Lecture Automatic Control LTH, 6 L-L Specifications, models and loop-shaping by hand L6-L8 Limitations on achievable performance L9-L Controller optimization:
More informationCSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization
CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization April 6, 2018 1 / 34 This material is covered in the textbook, Chapters 9 and 10. Some of the materials are taken from it. Some of
More informationLecture 7: Weak Duality
EE 227A: Conve Optimization and Applications February 7, 2012 Lecture 7: Weak Duality Lecturer: Laurent El Ghaoui 7.1 Lagrange Dual problem 7.1.1 Primal problem In this section, we consider a possibly
More informationDept. of Aeronautics and Astronautics. because the structure of the closed-loop system has not been
LMI Synthesis of Parametric Robust H 1 Controllers 1 David Banjerdpongchai Durand Bldg., Room 110 Dept. of Electrical Engineering Email: banjerd@isl.stanford.edu Jonathan P. How Durand Bldg., Room Dept.
More informationOperations Research Letters
Operations Research Letters 37 (2009) 1 6 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Duality in robust optimization: Primal worst
More informationFilter Design for Linear Time Delay Systems
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 11, NOVEMBER 2001 2839 ANewH Filter Design for Linear Time Delay Systems E. Fridman Uri Shaked, Fellow, IEEE Abstract A new delay-dependent filtering
More informationDerivative-Free Trust-Region methods
Derivative-Free Trust-Region methods MTH6418 S. Le Digabel, École Polytechnique de Montréal Fall 2015 (v4) MTH6418: DFTR 1/32 Plan Quadratic models Model Quality Derivative-Free Trust-Region Framework
More information12. Interior-point methods
12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationRobust conic quadratic programming with ellipsoidal uncertainties
Robust conic quadratic programming with ellipsoidal uncertainties Roland Hildebrand (LJK Grenoble 1 / CNRS, Grenoble) KTH, Stockholm; November 13, 2008 1 Uncertain conic programs min x c, x : Ax + b K
More informationELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications
ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March
More informationA STRENGTHENED SDP RELAXATION. via a. SECOND LIFTING for the MAX-CUT PROBLEM. July 15, University of Waterloo. Abstract
A STRENGTHENED SDP RELAXATION via a SECOND LIFTING for the MAX-CUT PROBLEM Miguel Anjos Henry Wolkowicz y July 15, 1999 University of Waterloo Department of Combinatorics and Optimization Waterloo, Ontario
More informationROBUST BLIND CALIBRATION VIA TOTAL LEAST SQUARES
ROBUST BLIND CALIBRATION VIA TOTAL LEAST SQUARES John Lipor Laura Balzano University of Michigan, Ann Arbor Department of Electrical and Computer Engineering {lipor,girasole}@umich.edu ABSTRACT This paper
More informationResearch Article An Equivalent LMI Representation of Bounded Real Lemma for Continuous-Time Systems
Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 28, Article ID 67295, 8 pages doi:1.1155/28/67295 Research Article An Equivalent LMI Representation of Bounded Real Lemma
More informationDidier HENRION henrion
POLYNOMIAL METHODS FOR ROBUST CONTROL Didier HENRION www.laas.fr/ henrion henrion@laas.fr Laboratoire d Analyse et d Architecture des Systèmes Centre National de la Recherche Scientifique Université de
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationA Cone Complementarity Linearization Algorithm for Static Output-Feedback and Related Problems
EEE TRANSACTONS ON AUTOMATC CONTROL, VOL. 42, NO. 8, AUGUST 1997 1171 A Cone Complementarity Linearization Algorithm for Static Output-Feedback and Related Problems Laurent El Ghaoui, Francois Oustry,
More informationTotal least squares. Gérard MEURANT. October, 2008
Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares
More informationSparse Optimization Lecture: Basic Sparse Optimization Models
Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm
More informationStability of neutral delay-diœerential systems with nonlinear perturbations
International Journal of Systems Science, 000, volume 1, number 8, pages 961± 96 Stability of neutral delay-diœerential systems with nonlinear perturbations JU H. PARK{ SANGCHUL WON{ In this paper, the
More informationLeast Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e.
Least Squares with Eamples in Signal Processing Ivan Selesnick March 7, 3 NYU-Poly These notes address (approimate) solutions to linear equations by least squares We deal with the easy case wherein the
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationAPPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS
APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS P. Date paresh.date@brunel.ac.uk Center for Analysis of Risk and Optimisation Modelling Applications, Department of Mathematical
More informationAn interior-point trust-region polynomial algorithm for convex programming
An interior-point trust-region polynomial algorithm for convex programming Ye LU and Ya-xiang YUAN Abstract. An interior-point trust-region algorithm is proposed for minimization of a convex quadratic
More informationSparse PCA with applications in finance
Sparse PCA with applications in finance A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon 1 Introduction
More informationConnections Between Duality in Control Theory and
Connections Between Duality in Control heory an Convex Optimization V. Balakrishnan 1 an L. Vanenberghe 2 Abstract Several important problems in control theory can be reformulate as convex optimization
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization
Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012
More informationBarrier Method. Javier Peña Convex Optimization /36-725
Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly
More informationSparse Covariance Selection using Semidefinite Programming
Sparse Covariance Selection using Semidefinite Programming A. d Aspremont ORFE, Princeton University Joint work with O. Banerjee, L. El Ghaoui & G. Natsoulis, U.C. Berkeley & Iconix Pharmaceuticals Support
More informationStability of linear time-varying systems through quadratically parameter-dependent Lyapunov functions
Stability of linear time-varying systems through quadratically parameter-dependent Lyapunov functions Vinícius F. Montagner Department of Telematics Pedro L. D. Peres School of Electrical and Computer
More informationA direct formulation for sparse PCA using semidefinite programming
A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon
More informationLearning the Kernel Matrix with Semidefinite Programming
Journal of Machine Learning Research 5 (2004) 27-72 Submitted 10/02; Revised 8/03; Published 1/04 Learning the Kernel Matrix with Semidefinite Programg Gert R.G. Lanckriet Department of Electrical Engineering
More informationApplications of Robust Optimization in Signal Processing: Beamforming and Power Control Fall 2012
Applications of Robust Optimization in Signal Processing: Beamforg and Power Control Fall 2012 Instructor: Farid Alizadeh Scribe: Shunqiao Sun 12/09/2012 1 Overview In this presentation, we study the applications
More informationFast linear iterations for distributed averaging
Available online at www.sciencedirect.com Systems & Control Letters 53 (2004) 65 78 www.elsevier.com/locate/sysconle Fast linear iterations for distributed averaging Lin Xiao, Stephen Boyd Information
More information2 EBERHARD BECKER ET AL. has a real root. Thus our problem can be reduced to the problem of deciding whether or not a polynomial in one more variable
Deciding positivity of real polynomials Eberhard Becker, Victoria Powers, and Thorsten Wormann Abstract. We describe an algorithm for deciding whether or not a real polynomial is positive semidenite. The
More informationINDEFINITE TRUST REGION SUBPROBLEMS AND NONSYMMETRIC EIGENVALUE PERTURBATIONS. Ronald J. Stern. Concordia University
INDEFINITE TRUST REGION SUBPROBLEMS AND NONSYMMETRIC EIGENVALUE PERTURBATIONS Ronald J. Stern Concordia University Department of Mathematics and Statistics Montreal, Quebec H4B 1R6, Canada and Henry Wolkowicz
More informationMarcus Pantoja da Silva 1 and Celso Pascoli Bottura 2. Abstract: Nonlinear systems with time-varying uncertainties
A NEW PROPOSAL FOR H NORM CHARACTERIZATION AND THE OPTIMAL H CONTROL OF NONLINEAR SSTEMS WITH TIME-VARING UNCERTAINTIES WITH KNOWN NORM BOUND AND EXOGENOUS DISTURBANCES Marcus Pantoja da Silva 1 and Celso
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationLinear Matrix Inequalities in Robust Control. Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002
Linear Matrix Inequalities in Robust Control Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002 Objective A brief introduction to LMI techniques for Robust Control Emphasis on
More informationSTABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin
On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting
More informationu - P (s) y (s) Figure 1: Standard framework for robustness analysis function matrix and let (s) be a structured perturbation constrained to lie in th
A fast algorithm for the computation of an upper bound on the -norm Craig T. Lawrence, y Andre L. Tits y Department of Electrical Engineering and Institute for Systems Research, University of Maryland,
More informationIntroduction to Linear Algebra. Tyrone L. Vincent
Introduction to Linear Algebra Tyrone L. Vincent Engineering Division, Colorado School of Mines, Golden, CO E-mail address: tvincent@mines.edu URL: http://egweb.mines.edu/~tvincent Contents Chapter. Revew
More informationContents Acknowledgements 1 Introduction 2 1 Conic programming Introduction Convex programming....
Pattern separation via ellipsoids and conic programming Fr. Glineur Faculte Polytechnique de Mons, Belgium Memoire presente dans le cadre du D.E.A. interuniversitaire en mathematiques 31 ao^ut 1998 Contents
More informationLinear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions
Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given
More information