
NUMERICAL ALGORITHMS FOR INVERSE EIGENVALUE PROBLEMS ARISING IN CONTROL AND NONNEGATIVE MATRICES

By Kaiyang Yang

SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY AT THE AUSTRALIAN NATIONAL UNIVERSITY, CANBERRA, AUSTRALIA, OCTOBER 2006

© Copyright by Kaiyang Yang, October 2006

THE AUSTRALIAN NATIONAL UNIVERSITY
DEPARTMENT OF INFORMATION ENGINEERING, RSISE

The undersigned hereby certify that they have read and recommend to the Research School of Information Sciences and Engineering for acceptance a thesis entitled "Numerical Algorithms for Inverse Eigenvalue Problems Arising in Control and Nonnegative Matrices" by Kaiyang Yang in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Dated: October 2006

Research Supervisors: Prof. John B. Moore, Dr. Robert Orsi

Examining Committee: Prof. Iven Mareels, Prof. Andrew Lim

THE AUSTRALIAN NATIONAL UNIVERSITY

Date: October 2006
Author: Kaiyang Yang
Title: Numerical Algorithms for Inverse Eigenvalue Problems Arising in Control and Nonnegative Matrices
Department: Information Engineering, RSISE
Degree: Ph.D.

Permission is herewith granted to The Australian National University to circulate and to have copied for non-commercial purposes, at its discretion, the above title upon the request of individuals or institutions.

Signature of Author

THE AUTHOR RESERVES OTHER PUBLICATION RIGHTS, AND NEITHER THE THESIS NOR EXTENSIVE EXTRACTS FROM IT MAY BE PRINTED OR OTHERWISE REPRODUCED WITHOUT THE AUTHOR'S WRITTEN PERMISSION. THE AUTHOR ATTESTS THAT PERMISSION HAS BEEN OBTAINED FOR THE USE OF ANY COPYRIGHTED MATERIAL APPEARING IN THIS THESIS (OTHER THAN BRIEF EXCERPTS REQUIRING ONLY PROPER ACKNOWLEDGEMENT IN SCHOLARLY WRITING) AND THAT ALL SUCH USE IS CLEARLY ACKNOWLEDGED.

To Bojiu Yang and Ping Sun, my parents, and Hongsen Zhang, my husband.

Table of Contents

List of Figures
List of Symbols
Statement of Originality
Acknowledgements
Abstract

Part I: Introduction

1 Introduction
    Inverse Eigenvalue Problems
        Background
        Problems Arising in Control
        Problems Arising in Nonnegative Matrices
    Research Motivations and Contributions
        Motivations
        Contributions
    Outline of the Thesis
    Summary

Part II: Background

2 Projections
    Introduction
    Projections
    Alternating Projections
    Summary

3 Computational Complexity
    Introduction
    What is NP-Hard?
    Computational Complexity in Control
    Summary

Part III: Problems Arising in Control

4 A Projective Methodology for Generalized Pole Placement
    Introduction
    Methodology
        The Symmetric Problem
        The General Nonsymmetric Problem
    Computational Results
        Classical Pole Placement: Random Problems
        Classical Pole Placement: Particular Problem
        Continuous Time Stabilization: Random Problems
        Continuous Time Stabilization: Particular Problem
        Discrete Time Stabilization: Random Problems
        Discrete Time Stabilization: Particular Problem
        A Hybrid Problem
    Summary

5 A Projective Methodology for Simultaneous Stabilization and Decentralized Control
    Introduction
    Methodology
        Simultaneous Stabilization
        Decentralized Control
    Computational Results
        Simultaneous Stabilization: Random Problems
        Simultaneous Stabilization: Particular Problems
        Decentralized Control: Random Problems
    Summary

6 Trust Region Methods for Classical Pole Placement
    Introduction
    Trust Region Methods
        Basic Methodology
        Convergence Results
    Derivative Calculations
    Additional Comments
    Computational Results
        Random Problems
        Particular Problems
        Repeated Eigenvalues
    Summary

7 A Gauss-Newton Method for Classical Pole Placement
    Introduction
    The Gauss-Newton Method
    Computational Results
    Summary

Part IV: Problems Arising in Nonnegative Matrices

8 A Projective Methodology for Nonnegative Inverse Eigenvalue Problems
    Introduction
    The Symmetric Problem
    The General Problem
    Computational Results
        SNIEP
        NIEP
    Summary

9 Newton Type Methods for Nonnegative Inverse Eigenvalue Problems
    Introduction
    Newton Type Methods
    Derivative Calculations
    Computational Results
        SNIEP
        NIEP
    Summary

Part V: Conclusion and Future Work

10 Conclusion and Future Work
    Conclusion
    Future Work

A Results for Classical Pole Placement
B More Computational Results for Stabilization

Bibliography

List of Figures

1.1 Flow chart of the thesis.
2.1 Projection point onto convex set is unique.
2.2 Projection points onto nonconvex set may be multiple.
2.3 Alternating projections between two intersecting convex sets.
2.4 Alternating projections between a convex set and a nonconvex set intersecting each other.
3.1 Illustration of simultaneous stabilization.
3.2 Illustration of decentralized control.
4.1 Examples of generalized static output feedback pole placement problem.
4.2 Alternating projections for generalized static output feedback pole placement problems.
Performance for classical pole placement using up to 10 initial conditions.
Performance for discrete time stabilization using up to 10 initial conditions.
The closed loop poles corresponding to a solution for the considered hybrid problem.
Quadratic convergence near solution of the Levenberg-Marquardt algorithm.
Quadratic convergence near solution of the Gauss-Newton algorithm.
8.1 Illustration of the problem formulation for SNIEP.
8.2 Illustration of the problem formulation for NIEP.
Linear convergence of the SNIEP algorithm.
Quadratic convergence near solution of the SNIEP algorithm.
Quadratic convergence near solution of the NIEP algorithm.

List of Symbols

R: the set of real numbers.
C: the set of complex numbers.
R^{m×p}: the set of real m × p matrices.
C^{m×p}: the set of complex m × p matrices.
O^n: the set of orthogonal n × n matrices.
S^n: the set of real symmetric n × n matrices.
S^n_+: the set of real positive semidefinite n × n matrices.
A^T: the transpose of matrix A.
A^*: the complex conjugate transpose of matrix A.
tr(A): the sum of the diagonal elements of a square matrix A.
λ(A): the set of eigenvalues of matrix A ∈ R^{n×n}.
ρ(A): the maximum of the real parts of the eigenvalues of A ∈ R^{n×n}.
diag(v): for v ∈ C^n, the n × n diagonal matrix whose i-th diagonal entry is v_i.
Re(a): the real part of a ∈ C.
Im(a): the imaginary part of a ∈ C.
vec(A): for A ∈ C^{m×p}, the vector in C^{mp} consisting of the columns of A stacked below each other.
A ⊗ B: the Kronecker product of A and B.
‖·‖_2: the vector 2-norm.
‖·‖_F: the Frobenius norm of a matrix. (Where no confusion arises, the subscript F is omitted.)

Statement of Originality

I hereby declare that this submission is my own work, carried out in collaboration with others while enrolled as a PhD candidate at the Department of Information Engineering, Research School of Information Sciences and Engineering, The Australian National University. To the best of my knowledge and belief, it contains no material previously published or written by another person, nor material which to a substantial extent has been accepted for the award of any other degree or diploma of the university or another institute of higher learning, except where due acknowledgement has been made in the text.

Most of the technical discussions in this thesis are based on the following publications:

K. Yang, J. B. Moore, and R. Orsi. Gauss-Newton Method for Solving Static Output Feedback Pole Placement Problem. In preparation.

K. Yang, R. Orsi, and J. B. Moore. Newton Type Methods for Solving Inverse Eigenvalue Problems for Nonnegative Matrices. In preparation.

K. Yang and R. Orsi. Simultaneous Stabilization and Decentralized Control: a Projective Methodology. In preparation.

K. Yang and R. Orsi. Static Output Feedback Pole Placement via a Trust Region Approach. Submitted to IEEE Transactions on Automatic Control.

K. Yang and R. Orsi. Generalized Pole Placement via Static Output Feedback: a Methodology Based on Projections. Automatica, 42(12), 2006.

R. Orsi and K. Yang. Numerical Methods for Solving Inverse Eigenvalue Problems for Nonnegative Matrices. In Proceedings of the 17th International Symposium on Mathematical Theory of Networks and Systems (MTNS), Kyoto, Japan, 2006.

K. Yang and R. Orsi. Pole Placement via Output Feedback: a Methodology Based on Projections. In Proceedings of the 16th IFAC World Congress, 6 pages, Prague, Czech Republic, 2005.

K. Yang, R. Orsi, and J. B. Moore. A Projective Algorithm for Static Output Feedback Stabilization. In Proceedings of the 2nd IFAC Symposium on System, Structure and Control (SSSC), Oaxaca, Mexico, 2004.

Acknowledgements

I express my deepest gratitude to Professor John B. Moore and Dr. Robert Orsi for being my supervisors and offering me so much guidance and help throughout this research. John's invaluable help, guidance and insights greatly influenced me. His optimistic attitude towards both research and life impressed me deeply and will always encourage me in later life. Dr. Robert Orsi has been my supervisor and the person I worked with most closely. His meticulous attitude to scientific research, broad knowledge, invaluable day-to-day supervision and great patience are highly appreciated. His encouragement will support me in going further in future research.

My special thanks go to Dr. Robert Mahony for being my advisor. He advised me on some key issues in my research and offered me the chance to do tutoring, which was an important and cherished component of my PhD training. Professor Uwe Helmke of the University of Würzburg, Germany, brought the nonnegative inverse eigenvalue problems to our attention, which led to a successful topic in this research; I would like to thank him for his generous guidance. Professor Iven Mareels of the University of Melbourne, Australia, advised us on various parts of the literature for our research; only with that help could we gain a deep understanding of the previous research. I would like to thank him for his important help. I am grateful to Dr. Mei Kobayashi, Mr. Hiroyuki Okano and Mr. Toshinari Itoko for their collaboration and help during my visit to IBM Tokyo Research Laboratory, which made my three-month stay very fruitful and enjoyable.

The Department of Information Engineering at the Australian National University and the SEACS group in National ICT Australia (NICTA) offered me warm environments throughout my PhD. I especially thank Dr. Knut Hüper, our program leader in NICTA, for his help and guidance. I also especially thank Dr. Jochen Trumpf and Dr. Alexander Lanzon for their generous help and many discussions. My special thanks go to all our departmental staff and friends for their invaluable friendship.

Finally, and most importantly, I thank my family for their endless love and support. My parents dedicated all their love and effort to my twenty years of education. They have always supported my decisions to pursue my dreams, even when that meant being 10,000 kilometers away. They keep my heart warm and my belief firm. My husband, Hongsen Zhang, is the most amazing person in my life. He gave me all his love, support, encouragement, understanding and patience during my hard work. Only with his unfailing love could I have the courage to go further in my life and work.

Abstract

An Inverse Eigenvalue Problem (IEP) is to construct a matrix which possesses both prescribed eigenvalues and a desired structure. Inverse eigenvalue problems arise in broad application areas such as control design, system identification, principal component analysis and structural analysis. There are many different types of inverse eigenvalue problems, and despite a great deal of research effort being put into this topic, many of them are still open and hard to solve. In this dissertation, we propose optimization algorithms for solving two types of inverse eigenvalue problems, namely, static output feedback problems and nonnegative inverse eigenvalue problems. Consequently, this dissertation is essentially composed of two parts.

In the first part, three novel methodologies for solving various static output feedback pole placement problems are presented. The static output feedback pole placement framework encompasses various pole placement problems. Some of them are NP-hard, for example, classical pole placement [31]. That is, an efficient (i.e. polynomial time) algorithm that is able to correctly solve all instances of the problem cannot be expected. In this dissertation, a projective methodology, two trust region methods and a Gauss-Newton method are proposed to solve various instances of the pole placement problems.

In the second part, two novel methodologies for solving nonnegative/stochastic inverse eigenvalue problems are presented. Nonnegative matrices arise in many application areas and attract a great deal of research in the matrix analysis community. The stochastic inverse eigenvalue problem has potential applications in Markov chains and probability theory. In the small dimensional cases, i.e., when the dimension of the resulting matrix is less than or equal to 5, there exist necessary and sufficient conditions that fully characterize the problem. However, when the dimension grows larger, the problem becomes much harder to solve: the existing necessary conditions are too general, the sufficient conditions are too specific, and in general the proofs of the sufficient conditions are nonconstructive. In this dissertation, a projective methodology and two Newton type methods are proposed which are widely applicable to various nonnegative inverse eigenvalue problems.

All of the problems considered are important and challenging in their areas. The optimization methodologies are clearly stated and the algorithms are intensively tested. Beyond the problems solved in this thesis, the algorithms appear to be quite useful for many related problems, e.g., inverse eigenvalue problems subject to different structural constraints.

Part I: Introduction

Chapter 1: Introduction

1.1 Inverse Eigenvalue Problems

1.1.1 Background

The spectral properties of a physical system govern its dynamical performance. Hence the computation of eigenvalues enables a basic understanding of the underlying physical system. In contrast, an inverse eigenvalue problem is to reconstruct a physical system from a desired dynamical behavior, namely, its eigenvalues. In this research, we concentrate our attention on problems whose systems can be expressed by matrices.

It is clear that an inverse eigenvalue problem is trivially solved if there is no restriction on the structure: we simply construct a diagonal matrix with the desired eigenvalues as the diagonal entries. However, in practice we usually require that the matrix resulting from a specific inverse eigenvalue problem is physically realizable, and thus additional structural constraints are imposed. For example, in static output feedback problems, the closed loop systems form an affine subspace of the form A + BKC. In general the solution to an inverse eigenvalue problem must satisfy two constraints: the spectral constraint, referring to the prescribed spectral data, and the structural constraint, referring to the desired structure.

There are various types of inverse eigenvalue problems, and they attract a great deal of research. For more details on different problems, existing theoretical results, numerical algorithms, applications and open problems, refer to the excellent book [20]. Associated with any inverse eigenvalue problem are two fundamental questions: solvability and computability. Solvability concerns determining a necessary and/or a sufficient condition under which an inverse eigenvalue problem is solvable. Computability concerns developing efficient and reliable algorithms to construct the matrices with prescribed eigenvalues and desired structure, where the problem is feasible. Both questions are difficult and challenging.

Inverse eigenvalue problems arise in a remarkable variety of applications, for example, control design, system identification, seismic tomography, principal component analysis, exploration and remote sensing, antenna array processing, geophysics, molecular spectroscopy, particle physics, structural analysis, circuit theory and mechanical system simulation [20]. In this dissertation, we propose optimization algorithms for the inverse eigenvalue problems arising in control and nonnegative matrices. We use both classical methods, i.e. Newton type methods, and novel methods, i.e. alternating projections and trust region methods. What follows describes the problems being considered in greater detail.

1.1.2 Problems Arising in Control

One of the most basic control tasks is static output feedback pole placement. That is, given system matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and a list of desired eigenvalues λ_D ∈ C^n, find a static output feedback controller K ∈ R^{m×p} such that λ(A + BKC) = λ_D. Static output feedback control problems are a special type of inverse eigenvalue problem, as the structural constraint is that the closed loop system has the form A + BKC. These problems have simple expressions and wide applications, and they have been intensively researched over the past half century. Despite this, many of them are still open and some have been proved to be NP-hard, for example, classical pole placement and simultaneous stabilization. The development of novel, efficient and reliable algorithms for these hard problems is certainly of great interest.

1.1.3 Problems Arising in Nonnegative Matrices

Nonnegative matrices are those whose entries are nonnegative. Stochastic matrices, a special type of nonnegative matrix, are those in which each row sums to 1. Nonnegative/stochastic matrices are widely used in game theory, Markov chains, probability theory, probabilistic algorithms, discrete distributions, categorical data, group theory, matrix scaling and economics [20]. The nonnegative inverse eigenvalue problem is to construct a square nonnegative matrix with desired eigenvalues. When the dimension of the desired matrix is greater than 5, no necessary and sufficient condition is available: the existing necessary conditions are too general, the sufficient conditions are too specific, and in general the proofs of the sufficient conditions are nonconstructive. To the best of our knowledge, there are only a few algorithms in the literature

and they are not applicable to high dimensional problems. The development of novel algorithms which are efficient and can be used to solve large scale problems is of great interest.

1.2 Research Motivations and Contributions

1.2.1 Motivations

Having identified the open inverse eigenvalue problems arising in control and nonnegative matrices, and their hardness, our main interest here is to develop new optimization algorithms. In the following, we stress several key points on why the considered problems are interesting.

1. Pole placement in the generality considered here (see Chapter 4), which allows flexibility in choosing the pole placement regions, has not previously been considered. Taking advantage of the fact that the pole placement regions need not be convex or even connected, there is a broad choice of pole placement regions for various control tasks. Though many algorithms for different instances of generalized pole placement have been proposed, none of them unifies the solution methods for the various problems in one framework.

2. Simultaneous stabilization (see Chapter 5) is an important problem in robust control with broad applications. It has been proved to be NP-hard [6]. Decentralized control (see Chapter 5) arises naturally from the need to control large scale systems, where a centralized controller cannot be applied. With a bound on the norm of the controller, the decentralized control problem is NP-hard

[6]. Despite the great deal of work that has been done on these problems, they are not thoroughly solved and keep attracting a lot of research effort.

3. Classical pole placement (see Chapters 6 and 7) is one of the most important open problems in control design. It has been shown to be NP-hard [31]. Though simply expressed, the problem is hard to solve.

4. Inverse eigenvalue problems for nonnegative/stochastic matrices have attracted much research over the past fifty years. However, there is only one algorithm for symmetric nonnegative inverse eigenvalue problems and another for general nonsymmetric problems. Due to the nature of the existing algorithms, they can hardly solve high dimensional problems. Nonnegative/stochastic inverse eigenvalue problems have broad potential applications.

1.2.2 Contributions

In regard to algorithm development, we mainly apply three methodologies, i.e. the projective methodology, Newton type methods and trust region methods. Since each of them has its own advantages, we highlight three key points on the contributions of this work.

In projective methodologies (see Chapters 4, 5 and 8), we formulate the problems as feasibility problems involving two closed sets. In the case where one of the sets is nonconvex, how to project onto the nonconvex set is an open hard problem. We propose a substitute map which gives a reasonable estimate of the true projection and provides good performance in computational experiments.

The idea of handling nonconvexity in such problem settings can be used in many other related problems, especially those involving nonsymmetric matrices.

In trust region methods and Newton type methods (see Chapters 6, 7 and 9), we formulate the problems as nonlinear least squares problems. The idea of applying these methods to the specific problem settings is a natural marriage of optimization techniques and practical problems. The proposed algorithms are extensively tested and appear to be efficient and reliable. Such ideas should be useful in many more problems.

Various instances of control tasks via static output feedback (see Chapter 4) are unified into a single framework, namely, generalized static output feedback pole placement. The flexibility in choosing pole placement regions enables a large scope of control tasks to be encompassed, especially less standard pole placement problems. Hence the algorithms for solving these problems can be applied to any problem which can be put in this framework.

1.3 Outline of the Thesis

This thesis consists of 5 parts comprising 10 chapters. Part I is the introduction and Part V is the conclusion and future work. The main content is presented in 3 parts. Figure 1.1 is a flow chart of the thesis, which clearly indicates how the thesis is organized.

[Figure 1.1: Flow chart of the thesis.]

A more detailed introduction to the main content is as follows:

Part II: Background

In this work, optimization algorithms are developed to tackle inverse eigenvalue problems arising in control and nonnegative matrices.

All of them are hard to solve. Different methodologies are employed in the algorithm development, i.e., alternating projections, Newton type methods and trust region methods. Overviews of trust region methods and Newton type methods are given in the chapters in which they are employed. Projections and alternating projections are used in three chapters (Chapters 4, 5 and 8) and are introduced in this background part. This part also includes a brief introduction to computational complexity and to computational complexity in control.

Part III: Problems Arising in Control

Control is one of the two main application areas considered in this work. Optimization algorithms for various static output feedback control problems are

presented in Part III. First, the generalized pole placement via static output feedback problem is considered. It is formulated as a feasibility problem and solutions are sought via a projective methodology. Following the same idea, simultaneous stabilization via static output feedback and stabilization via decentralized static output feedback are considered, and the alternating projection idea is applied. Then trust region methods are developed for solving classical pole placement via static output feedback; although this problem is NP-hard, the trust region methods appear to perform extremely well. Lastly, a novel Gauss-Newton method is considered for the classical pole placement problem. The problem is formulated as a constrained nonlinear optimization problem and minimization is achieved via a Gauss-Newton method.

Part IV: Problems Arising in Nonnegative Matrices

Problems arising in nonnegative and stochastic matrices are the other main application area in this work. Two different methodologies are presented for solving nonnegative/stochastic inverse eigenvalue problems, namely, a projective methodology and Newton type methods. The algorithms appear to be very useful for various problems, and are also extendable to many other inverse eigenvalue problems, especially those involving nonsymmetric matrices.

1.4 Summary

This chapter forms the base of this thesis. We first introduced the background of inverse eigenvalue problems. Having identified their broad applications and the existing difficulties in various inverse eigenvalue problems, we are motivated to focus our attention

on the problems arising in control and nonnegative/stochastic matrices. Based on this, we are ready to proceed to the following chapters and the research details.

Part II: Background

This part consists of two chapters. They cover the key background information on projections and computational complexity and are indispensable for understanding the main parts of this research.

Chapter 2 contains general properties of projections and alternating projections, and recalls how alternating projections can be used to find a point in the intersection of a finite number of closed (convex) sets. It plays a key part in the algorithms of Chapters 4, 5 and 8. Basic knowledge is introduced and important properties are highlighted.

In Chapter 3, a brief introduction to computational complexity is presented. It is essential to understand how hard NP-hard problems are, since several important NP-hard problems are tackled in this work.

Chapter 2: Projections

2.1 Introduction

This chapter contains general properties of projections and alternating projections, and introduces the method of alternating projections. Since the alternating projection idea, especially its variation applied to problems involving nonconvex sets, is crucial to the algorithms in Chapters 4, 5 and 8, we gather this material in a separate chapter. We first introduce the basic concepts and then emphasize the key properties.

2.2 Projections

Projections play a key part in the algorithms. This section contains general properties of projections.

Let x be an element of a Hilbert space H and let D be a closed (possibly nonconvex) subset of H. Any d_0 ∈ D such that

    ‖x − d_0‖ ≤ ‖x − d‖ for all d ∈ D

will be called a projection of x onto D.

In the cases of interest here, namely where H is a finite dimensional Hilbert space, there is always at least one such point for each x.

A function P_D : H → H will be called a projection operator (for D) if for each x ∈ H, P_D(x) ∈ D and

    ‖x − P_D(x)‖ ≤ ‖x − d‖ for all d ∈ D.

Where convenient, we will use y = P_D(x) to denote that y is a projection of x onto D. We emphasize that y = P_D(x) only says y is a projection of x onto D and does not make any statement regarding uniqueness.

If D is convex as well as closed, then each x has exactly one projection point P_D(x) [51]. In Figure 2.1, D is a closed convex set and x is an arbitrary starting point; the projection P_D(x) of x onto D is unique.

[Figure 2.1: Projection point onto convex set is unique.]

However, if D is nonconvex but closed, then an x may have multiple projection points. In Figure 2.2, D is a closed nonconvex set and x is an arbitrary starting point. Due to the nonconvexity of D, the projection of x onto D may not be unique: the points P_D(x)_1 and P_D(x)_2 both have the smallest distance to x among all the points in D and hence both are projections.

[Figure 2.2: Projection points onto nonconvex set may be multiple.]
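As a concrete illustration (a minimal sketch of ours, not code from the thesis; the disk and circle are placeholder sets chosen for simplicity), the following Python snippet computes projections onto a closed convex disk, where the projection is unique, and onto a nonconvex circle, where a point such as the center has infinitely many projections and an arbitrary one must be returned:

```python
import numpy as np

def project_onto_disk(x, center, radius):
    """Projection onto the closed disk {d : ||d - center|| <= radius} (convex, unique)."""
    v = x - center
    dist = np.linalg.norm(v)
    if dist <= radius:
        return x.copy()                          # x is already in the set
    return center + radius * v / dist

def project_onto_circle(x, center, radius):
    """Projection onto the circle {d : ||d - center|| = radius} (nonconvex).
    For x == center every point of the circle is a projection; return one of them."""
    v = x - center
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return center + np.array([radius, 0.0])  # arbitrary choice among many
    return center + radius * v / dist

x = np.array([3.0, 4.0])
print(project_onto_disk(x, np.zeros(2), 1.0))               # [0.6, 0.8]
print(project_onto_circle(np.zeros(2), np.zeros(2), 1.0))   # one of infinitely many
```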

2.3 Alternating Projections

In Chapters 4, 5 and 8, all problems of interest are feasibility problems of the following abstract form.

Problem. Given closed sets D_1, ..., D_N in a finite dimensional Hilbert space H, find a point in the intersection ⋂_{i=1}^N D_i (assuming the intersection is nonempty). (In fact, we will solely be interested in the case N = 2.)

If all the D_i's are convex, a classical method of solving this feasibility problem is to alternately project onto the D_i's. This method is often referred to as the Method of Alternating Projections (MAP). If the D_i's have nonempty intersection, the successive projections are guaranteed to asymptotically converge to an intersection point [9].

Theorem (MAP). Let D_1, ..., D_N be closed convex sets in a finite dimensional Hilbert space H. Suppose ⋂_{i=1}^N D_i is nonempty. Then starting from an arbitrary initial value x_0, the sequence

    x_{i+1} = P_{D_{φ(i)}}(x_i), where φ(i) = (i mod N) + 1,

converges to an element of ⋂_{i=1}^N D_i.

We remark that the usefulness of MAP for finding a point in the intersection of a number of sets depends on being able to compute projections onto each of the D_i's. The significance of the MAP theorem is that it defines a systematic numerical algorithm for finding a point in the intersection of closed convex sets [67].

Theorem. Let D_1, ..., D_N be closed convex sets in a finite dimensional Hilbert space H. Given constants t_1, ..., t_N in the interval (0, 2), for i = 1, ..., N, define the operators

    R_i = (1 − t_i) Id + t_i P_{D_i}.

If ⋂_{i=1}^N D_i is nonempty, then starting from an arbitrary initial value x_0, the following sequence converges to an element of ⋂_{i=1}^N D_i:

    x_{i+1} = R_{(i mod N)+1}(x_i).

Proof. See [36]. An alternate proof can also be found in [78].

Here Id denotes the identity operator on H; Id(x) = x for all x ∈ H. Each R_i of the theorem is a projection operator onto D_i if and only if t_i = 1. If t_i ≠ 1 then R_i is referred to as a relaxed projection. This theorem is a generalization of the MAP theorem, which corresponds to the case t_1 = ... = t_N = 1. We will see later how the freedom to choose t_i's not equal to 1 can be useful.
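The iteration of the two theorems is short to state in code. The following sketch (ours, not from the thesis; the half-plane and disk are arbitrary demonstration sets) implements the cyclic relaxed projection x_{i+1} = R_{(i mod N)+1}(x_i); with all t_i = 1 it is exactly MAP:

```python
import numpy as np

def proj_halfplane(x, a, b):
    """Projection onto the half-plane {z : <a, z> <= b} (convex)."""
    viol = np.dot(a, x) - b
    if viol <= 0:
        return x.copy()
    return x - viol * a / np.dot(a, a)

def proj_disk(x, c, r):
    """Projection onto the closed disk of center c, radius r (convex)."""
    v = x - c
    d = np.linalg.norm(v)
    return x.copy() if d <= r else c + r * v / d

def relaxed_map(x0, projections, ts, iters=200):
    """Cyclic relaxed projections: R_i = (1 - t_i) Id + t_i P_{D_i}."""
    x = np.asarray(x0, dtype=float)
    N = len(projections)
    for i in range(iters):
        P = projections[i % N]
        t = ts[i % N]
        x = (1 - t) * x + t * P(x)
    return x

projs = [lambda x: proj_halfplane(x, np.array([1.0, 0.0]), 0.0),  # {z : z_1 <= 0}
         lambda x: proj_disk(x, np.array([0.0, 0.0]), 1.0)]       # unit disk
print(relaxed_map(np.array([5.0, 3.0]), projs, ts=[1.0, 1.0]))    # a point in both sets
```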

Figure 2.3 illustrates alternating projections between two intersecting closed convex sets. The pink region and the green region represent two closed convex sets. The blue region is the feasible set, i.e. the intersection of these two sets. Starting from an arbitrary point x, a point in the feasible set can be found via alternating projections.

[Figure 2.3: Alternating projections between two intersecting convex sets.]

When t_i ∈ (0, 1), R_i is an under projection: we step only the fraction t_i of the way from the starting point towards the target set. Hence R_i does not actually map into the target set, so we do not usually apply this technique in algorithms. On the contrary, when t_i ∈ (1, 2), R_i is an over projection: we step past the projection point, further into the target set. Over projection is sometimes used to find a point in the intersection in a finite number of steps [8].

When one or more of the D_i's are nonconvex, the MAP theorem no longer applies, and starting the relaxed projection algorithm from certain initial values may result in a sequence of points that does not converge to a solution of the problem [23]. Figure 2.4 illustrates alternating projections between two intersecting sets where one is nonconvex. The pink region represents a closed convex set; the green region, however, is a closed nonconvex set. The blue region is the feasible set, i.e., the intersection of these two sets. If we choose one arbitrary starting point x, the alternating projection scheme does not converge to a solution in the feasible set; if we choose another arbitrary starting point y, the alternating projection scheme works and a point in the feasible set can be found.

[Figure 2.4: Alternating projections between a convex set and a nonconvex set intersecting each other.]

If, however, there are only two sets, then the following distance reduction property always holds.

Theorem. Let D_1 and D_2 be closed (nonempty) sets in a finite dimensional Hilbert space H. For any initial value y_0 ∈ D_2, if

    x_1 = P_{D_1}(y_0), y_1 = P_{D_2}(x_1), x_2 = P_{D_1}(y_1),

then

    ‖x_2 − y_1‖ ≤ ‖x_1 − y_1‖ ≤ ‖x_1 − y_0‖.

Proof. The second inequality holds as y_1 is a projection of x_1 onto D_2 and hence its distance to x_1 is less than or equal to the distance of x_1 to any other point in D_2, such as y_0. The first inequality holds by similar reasoning.

Corollary. If for i = 0, 1, ...,

    x_{i+1} = P_{D_1}(y_i), y_{i+1} = P_{D_2}(x_{i+1}),

that is, the x_i's and y_i's are successive projections between two closed sets, then ‖x_i − y_i‖ is a nonincreasing function of i.

Suppose one is interested in solving the feasibility problem in the case of two sets, D_1 and D_2, when one or both sets are nonconvex. If projections onto these sets are computable, a solution method is to alternately project onto D_1 and D_2. The corollary above ensures that the distance ‖x_i − y_i‖ is nonincreasing with i. While this is promising, there is, however, no guarantee that this distance goes to zero and hence that a solution to the problem will be found.

Most of the literature on alternating projection methods deals with the case of convex subsets of a (possibly infinite dimensional) Hilbert space; a survey of these results is contained in [2]. The text [27] is also recommended. There is much less available for the case of one or more nonconvex sets; see in particular [23].

2.4 Summary

In this chapter, an overview of projections was given. We introduced the general properties of projections and alternating projections, and recalled how alternating projections can be used to find a common point in the intersection of a finite number of closed (convex) sets. Alternating projections involving a nonconvex set were also analyzed. Please refer to Chapters 4, 5 and 8 for the algorithms using projective ideas.

Keywords: projections, alternating projections.

Chapter 3: Computational Complexity

3.1 Introduction

In this research, we tackle a few NP-hard problems which are long open problems in control, for example, classical pole placement and simultaneous stabilization via static output feedback. NP-hardness is directly related to computability and complexity. In this chapter, we present a brief introduction to computational complexity, to give a basic idea of what NP-hard problems are. As a large volume of literature on computational complexity exists in its own right, including much on the computational complexity analysis of control related problems, we mention just a few sources for more details (see [6], [31] and the recent survey [5]).

3.2 What is NP-Hard?

A decision problem is one where the desired output is binary, and can be interpreted as yes or no.

A decidable problem is one for which there exists an algorithm that always halts with the right answer. There also exist undecidable problems, for which no algorithm always halts with the right answer.

If the running time T(s) of an algorithm is defined, as a function of size, to be the worst case running time over all instances of size s, then we say that the algorithm runs in polynomial time if there exists some integer k such that T(s) = O(s^k). We define P as the class of all (decision) problems that admit polynomial time algorithms. For both practical and theoretical reasons, P is generally viewed as the class of problems that are efficiently solvable.

There are many decidable problems of practical interest for which no polynomial time algorithm is known. Many of these problems belong to a class known as NP (nondeterministic polynomial time), which includes all of P. A decision problem is said to belong to NP if every yes instance has a certificate of being a yes instance whose validity can be verified with a polynomial amount of computation.

If there exists a problem within the class NP such that every problem in NP can be reduced to it in polynomial time, then this problem is said to be NP-complete. If a problem is at least as hard as some NP-complete problem, then it is said to be NP-hard. NP-hardness of a problem means that it is at least as difficult as an NP-complete problem.

3.3 Computational Complexity in Control

Computational complexity analysis of various application areas is an active research topic. It helps people understand how valuable an algorithm is, or how much a solution can be optimized. In control, many problems have been proved to be NP-hard or NP-complete. That is, a polynomial time algorithm that can correctly solve all instances of such problems cannot be expected. For example, classical pole placement via static output feedback and simultaneous stabilization via static output feedback, both considered in this work, are NP-hard. Beyond these, there are many NP-hard problems in robust stability analysis, nonlinear control, optimal control, Markov decision theory, etc. For more details on these open hard control problems, please refer to the recent survey [5].

3.4 Summary

In this chapter, a brief introduction to computational complexity was given. The main purpose was to introduce NP-hardness and hence convey how hard NP-hard problems are. Note that computational complexity analysis is an independent and abundant research area in its own right.

Keywords: computational complexity, NP-hard, undecidable.

Part III: Problems Arising in Control

This part consists of four chapters. They present optimization algorithms developed for solving inverse eigenvalue problems arising in control.

In Chapter 4, we present a projective methodology for solving static output feedback pole placement problems of the following rather general form: given n subsets of the complex plane, find a static output feedback that places in each of these subsets a pole of the closed loop system. The algorithm presented is iterative in nature and is based on alternating projection ideas. Each iteration of the algorithm involves a Schur matrix decomposition, a standard least squares problem and a combinatorial least squares problem. While the algorithm is not guaranteed to always find a solution, computational results are presented demonstrating the effectiveness of the algorithm.

In Chapter 5, we extend the projective methodology presented in Chapter 4 to tackle some different control problems: simultaneous stabilization via static output feedback and stabilization via decentralized static output feedback. Unlike static output feedback pole placement in the generality considered in Chapter 4, which handles one system at a time, simultaneous stabilization aims to stabilize multiple systems simultaneously using one static output feedback controller. In Figure 3.1, P_1, ..., P_n are n independent systems, and we are required to find a controller K such that all the closed loop systems are stable.

[Figure 3.1: Illustration of simultaneous stabilization.]

By contrast, decentralized control splits the control task into multiple channels; in each channel, a controller is employed to stabilize the subsystem. Decentralized control generally arises in the control of large scale systems, where a centralized controller cannot be applied. In Figure 3.2, P is a plant which is divided into n subsystems P_1, ..., P_n. For each subsystem P_i (i = 1, ..., n), a controller

K_i is employed to stabilize it.

[Figure 3.2: Illustration of decentralized control.]

Again, the algorithms are not guaranteed to always find a solution. The effectiveness of the algorithms is demonstrated in computational results.

In Chapter 6, we present two closely related algorithms for classical pole placement via static output feedback. The pole placement problem is formulated as an unconstrained nonlinear least squares optimization problem involving the desired poles and the closed loop system poles. Minimization is achieved via two different trust

region approaches that utilize the derivatives of the closed loop poles. Extensive numerical experiments show that the algorithms are very effective in practice, though convergence to a solution is not guaranteed for either algorithm. Near solutions, both algorithms typically converge quadratically. While the algorithms require the desired poles to be distinct, effective strategies for dealing with repeated poles are also presented.

In Chapter 7, we present a Gauss-Newton method for the problem of classical pole placement via static output feedback. The pole placement problem is formulated as a constrained nonlinear least squares optimization problem involving the Schur form of the closed loop system. Minimization is achieved via a Gauss-Newton method. Near solutions, the algorithm typically converges quadratically. As the algorithm does not require the desired poles to be distinct, there is no formal limitation on the proposed algorithm. Further convergence analysis and more extensive experiments will be carried out in future work.

Chapter 4: A Projective Methodology for Generalized Pole Placement

4.1 Introduction

There has been a great deal of research done on the problems of pole placement and stabilization via static output feedback. An overview of theoretical results, existing algorithms and historical developments can be found in [10], [26], [29], [60], [61] and [69]. (For the convenience of the reader, Appendix A lists some of the main theoretical pole placement results that have appeared in the literature to date.) Our main interest here is in algorithms, and in this regard, for pole placement, the survey paper [61] states that existing sufficiency conditions are mainly theoretical in nature and that there are no good numerical algorithms available in many cases when a problem is known to be solvable. Despite the great deal of work that has been done in this area, new algorithms for these important problems are still of great interest.

In this chapter we will actually consider the following generalized static output feedback pole placement problem.

Problem 4.1.1. Given A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and closed subsets C_1, ..., C_n ⊆ C, find K ∈ R^{m×p} such that

    λ_i(A + BKC) ∈ C_i for i = 1, ..., n.

Here λ_i(A + BKC) denotes the i-th eigenvalue of A + BKC.

Problem 4.1.1 encompasses many types of pole placement problems. Indeed, by varying the choice of C_i's, Problem 4.1.1 can for example be specialized to the following problems:

1. Classical pole placement: C_i = {c_i}, c_i ∈ C. Here each region C_i is an individual point in the complex plane; see Figure 4.1(a).

2. Relaxed classical pole placement: C_i = {z ∈ C : |z − c_i| ≤ r_i}. Here each region C_i is a disk centered at c_i ∈ C with radius r_i ≥ 0; see Figure 4.1(b).

3. Stabilization type problems for continuous time and discrete time systems:

       C_1 = ... = C_n = {z ∈ C : Re z ≤ −α}, α > 0,        (4.1.1)

   and

       C_1 = ... = C_n = {z ∈ C : |z| ≤ α}, 0 < α < 1,

   respectively; see Figure 4.1(c) and (d).

4. Hybrid problems: for example, problems of the type shown in Figure 4.1(e). Here c ∈ C, and the aim is to place a pair of poles at c and its conjugate c̄, and to place the remaining poles in the truncated cone C:

       C_1 = {c}, C_2 = {c̄}, C_3 = ... = C_n = C.

As far as we are aware, pole placement in the generality presented in Problem 4.1.1 has not previously been considered. The most closely related results from the existing literature are as follows. Early work in [37] considers pole placement in a single region specified by polynomials. While a Lyapunov type necessary and sufficient condition is given for a matrix to have its eigenvalues in such a region, this condition is polynomial in the matrix in question and hence not readily amenable to controller design. In [16], LMI conditions are presented that are sufficient though not necessary for pole placement in various convex regions. These results cover state feedback and full-order dynamic output feedback but not static output feedback. Another LMI based approach, again for a single, though this time possibly disconnected, region is considered in [7]. A method for placing poles in distinct convex regions (each region is specified using linear programming constraints or second order cone constraints) is given in [38]; however, the method is based on eigenvalue perturbation results and hence appears largely limited to cases where the open loop poles are already quite close to the desired poles. In [63], pole placement in distinct convex regions (each region is a disk or a half plane) is achieved via a rank constrained LMI approach, though the results are only for state feedback.

[Figure 4.1: Examples of the generalized static output feedback pole placement problem. (a) Classical pole placement. (b) Relaxed classical pole placement. (c) Continuous time stabilization. (d) Discrete time stabilization. (e) A hybrid problem.]

This chapter presents an algorithm for Problem 4.1.1. The approach employed here is quite different from each of the approaches mentioned above. Problem 4.1.1 is shown to be equivalent to finding a point in the intersection of two particular sets,

one of which is a simple convex set, the other a rather complicated nonconvex set. The algorithm is iterative in nature and is based on an alternating projection like scheme between these two sets. Each iteration of the algorithm involves a Schur matrix decomposition and a standard least squares problem. If the C_i's are not all equal, each iteration also requires a combinatorial least squares matching step. Alternating projection type ideas have been employed previously for output feedback stabilization; see in particular [33], [35] and [57]. (Such ideas were first used in control design to solve certain convex problems; see [34].) A distinguishing feature of our algorithm is that, unlike these methods, it does not involve LMIs. (A further technical difference is that rather than solving feasibility problems that involve symmetric matrices, the problem solved by the algorithm is a feasibility problem involving nonsymmetric matrices.)

The algorithm can be applied to problems with rather general choices for the C_i regions. In fact the only formal requirement is the following, which we state in the form of an assumption.

Assumption 1. It is possible to calculate projections onto each of the C_i's: given z ∈ C, it is possible to find z_i ∈ C_i such that |z − z_i| ≤ |z − c| for all c ∈ C_i.

In particular, the C_i's must be closed sets, though they need not be convex or even connected.
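To make Assumption 1 concrete, here is a small sketch of ours (not thesis code) giving projection operators for three kinds of regions met so far: a single point (classical pole placement), a disk (relaxed pole placement), and the half-plane of continuous time stabilization; the sign convention Re w ≤ −α follows (4.1.1) as reconstructed above, and the parameter values in the usage lines are placeholders:

```python
def proj_point(z, c):
    """Projection onto C_i = {c}: always the point itself."""
    return c

def proj_disk(z, c, r):
    """Projection onto the disk {w : |w - c| <= r} (relaxed pole placement)."""
    d = abs(z - c)
    return z if d <= r else c + r * (z - c) / d

def proj_halfplane(z, alpha):
    """Projection onto {w : Re w <= -alpha} (continuous time stabilization, (4.1.1))."""
    return z if z.real <= -alpha else complex(-alpha, z.imag)

z = 0.5 + 2.0j
print(proj_disk(z, -1.0 + 0j, 0.5))   # nearest point of the disk around -1
print(proj_halfplane(z, 0.1))         # (-0.1+2j): real part clipped
```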

For given A, B, C and C_i's, Problem 4.1.1 may or may not have a solution. Indeed, one would expect that determining whether a particular instance of Problem 4.1.1 is solvable is in general difficult. For example, the problem of determining whether the classical pole placement problem is solvable for particular A, B, C and desired poles has recently been shown to be NP-hard [31]. Given the difficulty of Problem 4.1.1, an efficient (i.e., polynomial time) algorithm that is able to correctly solve all instances of the problem cannot be expected, and while the algorithm presented here is often quite effective in practice, it is not guaranteed to find a solution even if a solution exists.

This chapter is structured as follows. (Please refer to Chapter 2 for background on projections.) Section 4.2 presents the solution methodology. To motivate the solution methodology we first restrict our attention to systems that have a symmetric state space representation (A = A^T, C = B^T) and present an algorithm for this easier class of problems. The general problem is then considered. Section 4.3 contains computational results of applying the algorithm to various instances of Problem 4.1.1.

4.2 Methodology

4.2.1 The Symmetric Problem

Systems with a symmetric state space realization, that is, systems with state space matrices satisfying A = A^T, C = B^T, occur in various contexts, for example RC networks. In order to motivate our solution methodology, we first consider the following special case of Problem 4.1.1 for symmetric systems. (See [52] for more regarding the classical pole placement problem for symmetric systems and a rather surprising result regarding arbitrary pole placement for such systems.)

Problem. Given A ∈ S^n, B ∈ R^{n×m}, and closed subsets C_1, ..., C_n ⊆ R, find K ∈ S^m such that

    λ_i(A + BKB^T) ∈ C_i for i = 1, ..., n.

Note that in this problem the static output feedback matrix K is required to be symmetric, and the C_i's are assumed to be subsets of the real numbers. The latter assumption is natural as, given any symmetric K, A + BKB^T is also symmetric and hence has real eigenvalues.

We assume A, B and C_1, ..., C_n are given and fixed. Let

    L = {Z ∈ S^n : Z = A + BKB^T for some K ∈ S^m}

and let M denote the set of symmetric matrices with eigenvalues in the specified regions C_1, ..., C_n,

    M = {Z ∈ S^n : λ_i(Z) ∈ C_i, i = 1, ..., n}.

The symmetric problem can be stated as follows: find X ∈ L ∩ M.

We now show that projections onto both sets L and M can be calculated and hence that alternating projections can be employed as a solution method. (As the system is symmetric and we restrict K to be symmetric, the symmetric static output feedback continuous time stabilization problem is equivalent to an LMI problem. Hence, if that problem is solvable, a numerical solution can be readily found using existing LMI algorithms; see for example [71]. The sets L and M for the continuous time stabilization problem are both convex and projections onto them are readily calculated.)

From now on, S^n will be regarded as a Hilbert space with inner product

    ⟨Y, Z⟩ = tr(YZ) = Σ_{i,j} y_{ij} z_{ij},

and associated norm ‖Z‖ = ⟨Z, Z⟩^{1/2} (the Frobenius norm). For i = 1, ..., n, P_{C_i} will denote a projection operator for C_i.

The set L is an affine subspace and hence is convex. Projection of X ∈ S^n onto L involves solving a standard least squares problem.

Lemma. The projection of X ∈ S^n onto L is given by P_L(X) = A + BKB^T, where K is a solution of the least squares problem

    arg min_{K ∈ S^m} ‖(B ⊗ B) vec(K) − vec(X − A)‖_2.

A proof of a more general version of this result is given in the next subsection.

While the set M is not convex, it is still possible to calculate projections onto M. How to do this is shown in the theorem below. The result is based on the following result of Hoffman and Wielandt.

Lemma. Suppose Y, Z ∈ S^n have eigenvalue-eigenvector decompositions

    Y = VDV^T, D = diag(λ^Y_1, ..., λ^Y_n),
    Z = WEW^T, E = diag(λ^Z_1, ..., λ^Z_n),

where V, W ∈ R^{n×n} are orthogonal and λ^Y_1 ≤ ... ≤ λ^Y_n and λ^Z_1 ≤ ... ≤ λ^Z_n. Then

    ‖D − E‖ ≤ ‖Y − Z‖,        (4.2.1)

where ‖·‖ denotes the Frobenius norm.

Proof. See for example [40, Corollary 6.3.8].

Theorem. Given Y ∈ S^n, let Y = VDV^T be an eigenvalue-eigenvector decomposition of Y with D = diag(λ_1, ..., λ_n). Let σ be a permutation of {1, ..., n} such that, amongst all possible permutations, it minimizes

    Σ_{k=1}^n |λ_k − P_{C_σ(k)}(λ_k)|^2.

Define

    P_M(V, D) = V D̂ V^T

where D̂ = diag(P_{C_σ(1)}(λ_1), ..., P_{C_σ(n)}(λ_n)). Then P_M(V, D) is a best approximant in M to Y in the Frobenius norm.

Proof. Let Y be as in the theorem statement. As P_M(V, D) ∈ M, it remains to show

    ‖Y − P_M(V, D)‖ ≤ ‖Y − Z‖ for all Z ∈ M.        (4.2.2)

Without loss of generality, suppose the eigenvalues of Y are ordered, i.e., λ_1 ≤ ... ≤ λ_n. Similarly, for Z ∈ M, let Z = WEW^T be an eigenvalue-eigenvector decomposition with E = diag(λ^Z_1, ..., λ^Z_n) and λ^Z_1 ≤ ... ≤ λ^Z_n. As the Frobenius norm is orthogonally invariant,

    ‖Y − P_M(V, D)‖ = ‖D − D̂‖.        (4.2.3)

Let π be a permutation of {1, ..., n} such that λ^Z_k ∈ C_π(k), k = 1, ..., n. (Such a permutation exists as Z ∈ M and hence must have an eigenvalue in each of the C_i's.) Define Ê = diag(P_{C_π(1)}(λ_1), ..., P_{C_π(n)}(λ_n)). It follows from the definition of D̂ that

    ‖D − D̂‖ ≤ ‖D − Ê‖.        (4.2.4)

As for each k, |λ_k − P_{C_π(k)}(λ_k)| ≤ |λ_k − λ^Z_k|, it also follows that

    ‖D − Ê‖ ≤ ‖D − E‖.        (4.2.5)

Combining (4.2.3), (4.2.4), (4.2.5) and inequality (4.2.1) from the Hoffman-Wielandt lemma gives the inequality in (4.2.2), and the proof is complete.

Note that to calculate P_M(V, D) we keep the original orthogonal matrix V and simply modify the diagonal matrix D to D̂. The fact that V remains unchanged motivates our solution method for the general nonsymmetric case.
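In code, the symmetric projection P_M is an eigendecomposition, an assignment problem and a reassembly. The sketch below is ours, not from the thesis; it uses scipy's linear sum assignment routine as one way to realize the optimal matching step, and the interval regions in the usage example are placeholders:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def project_onto_M_symmetric(Y, region_projs):
    """Best approximant in M = {Z symmetric : lambda_i(Z) in C_i} to symmetric Y.
    region_projs[l] is the scalar projection operator P_{C_l}."""
    lam, V = np.linalg.eigh(Y)                  # Y = V diag(lam) V^T
    # Cost matrix D_kl = |lam_k - P_{C_l}(lam_k)|^2, then an optimal assignment.
    P = np.array([[pj(lk) for pj in region_projs] for lk in lam])
    D = (lam[:, None] - P) ** 2
    rows, cols = linear_sum_assignment(D)       # Hungarian-type O(n^3) matching
    lam_hat = P[rows, cols]                     # projected, matched eigenvalues
    return V @ np.diag(lam_hat) @ V.T

# Example: place one eigenvalue in [-2, -1] and one in [0, 1].
clip = lambda lo, hi: (lambda x: min(max(x, lo), hi))
Y = np.array([[0.0, 2.0], [2.0, 0.0]])          # eigenvalues -2 and 2
print(np.linalg.eigvalsh(project_onto_M_symmetric(Y, [clip(-2, -1), clip(0, 1)])))
```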

4.2.2 The General Nonsymmetric Problem

Consider again Problem 4.1.1. Throughout this subsection it is assumed that A, B, C and C_1, ..., C_n are given and fixed.

From now on, C^{n×n} will be regarded as a Hilbert space with inner product

    ⟨Y, Z⟩ = tr(YZ^*) = Σ_{i,j} y_{ij} z̄_{ij}.

The associated norm is the Frobenius norm ‖Z‖ = ⟨Z, Z⟩^{1/2}.

In this subsection we redefine L to be the set of all possible closed loop system matrices,

    L = {Z ∈ R^{n×n} : Z = A + BKC for some K ∈ R^{m×p}},

and redefine M to be the set of complex matrices with eigenvalues in the specified regions C_1, ..., C_n,

    M = {Z ∈ C^{n×n} : λ_i(Z) ∈ C_i, i = 1, ..., n}.

Problem 4.1.1 can now be stated as: find X ∈ L ∩ M.

A solution strategy for Problem 4.1.1 would be to employ an alternating projection scheme, alternately projecting between L and M. A difficulty occurs in trying to do this. While L is convex and M is nonconvex just as for the symmetric problem, M in this case is a substantially more complicated set, and how to calculate projections onto it is a difficult unsolved problem. That is, given a point Z, it is not clear how to find a point in M of minimal distance to Z.

As will be verified by the experiments, an alternating projection like scheme can still be quite successful if a suitable substitute mapping is used for M in place of an actual projection operator.

Figure 4.2 illustrates the problem formulation for Problem 4.1.1 and the alternating projection scheme. The pink line represents the set L, which is an affine subspace. The green region represents the set M, a rather complicated closed nonconvex set. The dotted parts are the feasible set, i.e. the intersection of the two sets. Starting from an arbitrary point x, we know exactly how to project onto L. However, instead of the actual projection onto M (the pink point with the dotted arrow), our proposed substitute mapping gives a reasonable estimate, the black point in M.

[Figure 4.2: Alternating projections for generalized static output feedback pole placement problems.]

Before proceeding, recall Schur's result [40, Th 2.3.1].

Theorem. Given Z ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n in any prescribed order, there is a unitary matrix V ∈ C^{n×n} and an upper triangular matrix T ∈ C^{n×n} such that

    Z = VTV^*, and T_{kk} = λ_k, k = 1, ..., n.

The following map is proposed as a substitute for a projection map onto M. Though it is not a true projection map, the notation P_M will still be used. Our choice of P_M is motivated by the projection operator onto M for the symmetric problem. We stress that as P_M is not a true projection map, unlike for the symmetric problem, the distance reduction property established in Section 2.3 may not hold. While P_M has various desirable properties (see below), the theoretical convergence properties of the algorithm are currently unclear, providing interesting questions for future research. In what follows, Assumption 1 is assumed to hold.

Definition 4.2.6. Suppose V ∈ C^{n×n} is unitary and T ∈ C^{n×n} is upper triangular. Let σ be a permutation of {1, ..., n} such that, amongst all possible permutations, it minimizes

    Σ_{k=1}^n |T_{kk} − P_{C_σ(k)}(T_{kk})|^2.        (4.2.6)

Define

    P_M(V, T) = V T̂ V^*

where T̂ is upper triangular and given by

    T̂_{kl} = P_{C_σ(k)}(T_{kk}), if k = l,
    T̂_{kl} = T_{kl}, otherwise.

Note that P_M maps into the set M. Note also that, just as for the symmetric problem, finding σ involves solving a combinatorial least squares problem. (This will be discussed further later in the section.)
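A direct transcription of Definition 4.2.6 (our illustrative sketch, not thesis code; scipy's complex Schur routine is used as one way to obtain Z = VTV^*) is given below. It brute-forces the minimizing permutation σ, which is fine for small n; the linear assignment formulation discussed later in the section is the scalable route:

```python
import numpy as np
from itertools import permutations
from scipy.linalg import schur

def P_M_substitute(Z, region_projs):
    """Substitute 'projection' onto M: Schur decompose Z = V T V*, replace the
    diagonal of T by optimally matched projections (Definition 4.2.6), keep V."""
    T, V = schur(Z, output='complex')           # Z = V @ T @ V.conj().T
    n = Z.shape[0]
    diag = np.diag(T)
    best_sigma, best_cost = None, np.inf
    for sigma in permutations(range(n)):        # minimize (4.2.6) by enumeration
        cost = sum(abs(diag[k] - region_projs[sigma[k]](diag[k]))**2 for k in range(n))
        if cost < best_cost:
            best_cost, best_sigma = cost, sigma
    T_hat = T.copy()
    for k in range(n):
        T_hat[k, k] = region_projs[best_sigma[k]](diag[k])
    return V @ T_hat @ V.conj().T

# Example: two point regions {-1} and {-2} (classical pole placement targets).
projs = [lambda z: -1.0 + 0j, lambda z: -2.0 + 0j]
Z = np.array([[0.0, 1.0], [-2.0, -3.0]], dtype=complex)
print(np.linalg.eigvals(P_M_substitute(Z, projs)))   # approximately [-1, -2]
```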

In order to apply P_M to Z ∈ C^{n×n}, a Schur decomposition of Z must first be found. A given Z may have a nonunique Schur decomposition, and Z = V_1 T_1 V_1^* = V_2 T_2 V_2^* does not necessarily imply P_M(V_1, T_1) = P_M(V_2, T_2). Hence, P_M may give different points for different Schur decompositions of the same matrix. (Though it was not discussed at the time, similar comments apply to the P_M mapping for the symmetric case.) This is not so important, as different Schur decompositions lead to points in M of equal distance to the original matrix, as is now shown.

Theorem. Suppose Z = V_1 T_1 V_1^* = V_2 T_2 V_2^*, where V_1, V_2 ∈ C^{n×n} are unitary and T_1, T_2 ∈ C^{n×n} are upper triangular. Then

    ‖P_M(V_1, T_1) − Z‖ = ‖P_M(V_2, T_2) − Z‖.

Proof. Suppose Z = VTV^* where V is unitary and T is upper triangular. If T̂ is the matrix given in Definition 4.2.6, then by the unitary invariance of the Frobenius norm, ‖P_M(V, T) − Z‖ = ‖T̂ − T‖. As ‖T̂ − T‖^2 equals the quantity in (4.2.6), ‖P_M(V, T) − Z‖ depends only on T_{11}, ..., T_{nn} (and the sets C_1, ..., C_n). The T_{kk}'s are the eigenvalues of Z and hence, aside from ordering, are not decomposition dependent. The result now follows by noting that (4.2.6) does not depend on the ordering of the T_{kk}'s.

P_M(V, T) keeps V fixed and modifies T. The theorem below shows that of all the points in M of the form V T̃ V^*, with T̃ ∈ M upper triangular, i.e., of all the points in M that have a Schur decomposition with the same V matrix, P_M(V, T) is closest (or at least equal closest) to the original point Z = VTV^*.

Theorem. Suppose Z = V T V* ∈ C^{n×n} with V unitary and T upper triangular. Then P_M(V, T) satisfies

  ‖P_M(V, T) − Z‖ ≤ ‖V T̃ V* − Z‖

for all upper triangular T̃ ∈ M.

Proof. Let T̃ be an upper triangular matrix in M. The unitary invariance of the Frobenius norm implies the result will be established if it can be shown that ‖T̂ − T‖ ≤ ‖T̃ − T‖, where T̂ is the matrix given in Definition 4.2.6. As both T̃ and T are upper triangular and T̃ ∈ M, it follows that

  ‖T̃ − T‖² = Σ_{k=1}^n |T̃_kk − T_kk|² + Σ_{k<l} |T̃_kl − T_kl|²   (4.2.7)

and that T̃_kk ∈ C_σ̃(k), k = 1, ..., n, for some permutation σ̃. The result now follows by noting that ‖T̂ − T‖² equals the quantity in (4.2.6) and that this value must be less than or equal to the first summation on the right hand side of the equality in (4.2.7).

As for the symmetric problem, projection of X ∈ C^{n×n} onto L involves solving a standard least squares problem.

Lemma. The projection of X ∈ C^{n×n} onto L is given by P_L(X) = A + BKC, where K is a solution of the least squares problem

  arg min_{K ∈ R^{m×p}} ‖(C^T ⊗ B) vec(K) − vec[Re(X) − A]‖².

Proof. We would like to find K ∈ R^{m×p} that minimizes

  ‖X − (A + BKC)‖².   (4.2.8)

As A, B and C are real matrices, it follows that (4.2.8) equals

  ‖Re(X) − (A + BKC)‖² + ‖Im(X)‖²,   (4.2.9)

and hence that the problem is equivalent to minimizing the first term in (4.2.9). The result now follows by noting that for any Z ∈ C^{n×n}, ‖Z‖ = ‖vec(Z)‖₂, and that for any (appropriately sized) matrices P, Q and R, vec(PQR) = (R^T ⊗ P) vec(Q) [41].

Here is our algorithm for the generalized pole placement problem.

Algorithm
Data. A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and C_1, ..., C_n ⊆ C.
Initialization. Choose a randomly generated matrix Y ∈ R^{n×n}.
repeat
1. X := P_L(Y).
2. Calculate a Schur decomposition of X: X = V T V*.
3. Y := P_M(V, T).
until ‖X − Y‖ < ε.

Note that as Y = P_M(V, T) = V T̂ V* (see Definition 4.2.6),

  ‖X − Y‖ = ‖V T V* − V T̂ V*‖ = ‖T − T̂‖ = (Σ_k |T_kk − T̂_kk|²)^{1/2}.

As the T_kk's are the eigenvalues of X ∈ L and the T̂_kk's are the eigenvalues of Y ∈ M, the algorithm stops when X (which equals A + BKC for some K) has eigenvalues sufficiently close to those of a matrix that satisfies the pole placement constraints. In particular, each eigenvalue of such an X cannot violate the pole placement constraints by more than ε.
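As an illustration, one pass of this loop can be sketched in a few lines of Matlab. This is a sketch of the scheme rather than the thesis implementation: PM denotes a function implementing the substitute mapping sketched earlier, eps_tol and maxit are assumed tolerances, and the kron based least squares step follows the lemma directly without regard for efficiency.

Y = randn(n);                               % random initial point
for iter = 1:maxit
    % P_L: least squares for K (see the lemma), then X = A + B*K*C
    kvec = kron(C', B) \ reshape(real(Y) - A, [], 1);
    K = reshape(kvec, m, p);
    X = A + B*K*C;
    [V, T] = schur(X, 'complex');           % Schur decomposition of X
    Y = PM(V, T);                           % substitute projection onto M
    if norm(X - Y, 'fro') < eps_tol, break; end
end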

As mentioned previously, P_M involves finding a permutation σ that minimizes (4.2.6). The first step in solving this combinatorial least squares problem is to calculate the squared distance of each T_kk to each subset C_l and place these values in an n×n cost matrix D:

  D_kl := |T_kk − P_{C_l}(T_kk)|².   (4.2.10)

The problem is now equivalent to finding a permutation σ such that Σ_k D_{kσ(k)} is minimal. Finding a minimizing σ given a cost matrix D is a linear assignment problem, which can be solved in O(n³) time using the so called Hungarian method; see [53] for details. Note that if the C_i's are not all distinct, which occurs for example in stabilization problems and in what have here been called hybrid problems, the complexity of the matching problem is reduced. In fact, for stabilization problems all the C_i's are the same and no matching step is required. For the hybrid problem shown in Figure 4.1(e), it is only necessary to check n(n−1) possibilities, corresponding to which two T_kk's are matched to c and c̄. Hence for this hybrid problem, the direct approach is faster than using the Hungarian method.

Also note that, given a cost matrix D, an alternative to the Hungarian method is the following faster suboptimal matching strategy. Find the (or a) smallest entry in D, say D_{k*l*}. Match T_{k*k*} with C_{l*} and cross out row k* and column l* of D. Now consider only the entries of D not crossed out and repeat, until all n matches have been made. This method does not always find the optimal matching, though it can often be a quite effective substitute for the Hungarian method. It will be termed suboptimal matching.
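A minimal Matlab sketch of this greedy strategy, assuming the cost matrix D of (4.2.10) has already been formed, is the following; sigma(k) records which set C_l the entry T_kk is matched to:

n = size(D, 1);
sigma = zeros(n, 1);
Dwork = D;
for step = 1:n
    [~, idx] = min(Dwork(:));        % smallest remaining entry
    [k, l] = ind2sub([n n], idx);
    sigma(k) = l;                    % match T_kk with C_l
    Dwork(k, :) = inf;               % cross out row k
    Dwork(:, l) = inf;               % cross out column l
end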

Surprisingly, as will be shown in the next section, using suboptimal matching in the algorithm made it possible to solve a particular problem that was not solvable using the Hungarian method.

4.3 Computational Results

This section contains computational results of applying the algorithm to various instances of the generalized pole placement problem. We include results for classical pole placement, continuous time and discrete time stabilization, and a hybrid problem. The algorithms were implemented in Matlab 6.5 and all results were obtained using a 3.06 GHz Pentium 4 machine. Throughout this section a randomly generated matrix will be a matrix whose entries are drawn from a normal distribution of zero mean and variance 1.

4.3.1 Classical Pole Placement: Random Problems

This subsection contains results for some randomly generated classical pole placement problems. 1000 problems with n = 6, m = 4 and p = 3 were created. Each problem was created as follows. A, B, C and K̄ were generated randomly. A scalar multiple of the identity was added to A to ensure the stability degree of A + B K̄ C was equal to α = 0.1. The desired poles were taken to be the poles of A + B K̄ C. Initial conditions were chosen randomly. An attempt was made to solve each problem using up to 10 different initial conditions and a maximum of 1000 iterations per initial condition. With the termination parameter ε set to ε = 10⁻³, the overall success rate was 91%. (The success rate increases to 95% if up to 20 initial conditions are used.)

For the problems that were solved, the average time taken was 1.5 CPU seconds. Figure 4.3 shows the performance of the algorithm on these classical pole placement problems when up to 10 initial conditions are used per problem.

Figure 4.3: Performance for classical pole placement using up to 10 initial conditions (success rate versus number of initial conditions).

Note that for classical pole placement problems, in (4.2.10), if C_l = {c_l} then P_{C_l}(T_kk) = c_l.

4.3.2 Classical Pole Placement: Particular Problem

The following problem is taken from [66] and is of interest as the set of desired poles overlaps with the set of open loop poles.

The system matrices A ∈ R^{4×4}, B and C are as given in [66]. In this problem the set of open loop poles is {1, 2, 3, 4} and the aim is to place the closed loop poles at {1, 2, 3, 5}. While initial attempts to solve this problem failed, the problem was solved by using suboptimal matching and replacing Step 3 of the algorithm with the relaxed projection

  Y := (1 − t) X + t P_M(V, T),  t ∈ (0, 2) constant.

Strictly speaking this is not a relaxed projection in the true sense, as P_M(V, T) may not be a projection of X = V T V*; however, the idea is clearly the same. Solutions were successfully found by taking t close to 0; t = 0.1, 0.2 and 0.3 can all be used to successfully find a solution. The likelihood of success increases, and the speed of convergence decreases, the closer t is to 0. With t = 0.3 and ε = 10⁻³, solutions can typically be found in about 4.7 CPU seconds. (With ε greatly reduced, a solution was found in about 10⁶ iterations and 275 CPU seconds.)

Note: when employing relaxed projections, the loop termination criterion of the algorithm is different. It should be replaced by: until ‖X − P_M(V, T)‖ < ε, as now

  ‖X − Y‖ = ‖X − [(1 − t)X + t P_M(V, T)]‖ = t ‖X − P_M(V, T)‖.
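In code, the relaxed step and the corresponding termination test amount to a small change to the earlier loop sketch (t and eps_tol assumed given; PM as before):

Yproj = PM(V, T);                 % substitute projection, as in Step 3
Y = (1 - t)*X + t*Yproj;          % relaxed update, t in (0, 2)
if norm(X - Yproj, 'fro') < eps_tol
    % stop: the constraints are met to within eps_tol
end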

4.3.3 Continuous Time Stabilization: Random Problems

In the comparison paper [26], a number of methods for continuous time stabilization via static output feedback are compared. In this subsection we repeat the numerical experiments of [26] using our algorithm, to see how it compares. The experiments involve randomly generated problems. 1000 problems were generated for each of a number of different choices of system dimensions n, m and p. In each case, A, B, C and K̄ were generated randomly. A scalar multiple of the identity was added to A to ensure it had a stability degree of one. A was then replaced by A − B K̄ C. All stable A's were discarded. In applying our algorithm, the C_i's were chosen as in (4.1.1) with α = 0.1. As we only require the poles of the closed loop system to be stable, we terminated the algorithm as soon as all poles had real part less than zero. An attempt was made to solve each problem using up to 10 different initial conditions and a maximum of 100 iterations per initial condition. Results are given in Table 4.1. As can be seen, all problems were solved. Most problems were solved in 10 iterations or less and average solution times were very small. These results are as good as the results for the best two algorithms tested in [26]. Other values of α, such as 0.2, 0.5, 1, and 2, were also tried and produced equally good results, indicating a certain robustness of the algorithm with respect to this parameter.

The number of iterations required for the stabilization problems of this subsection is much smaller than the number required for the classical pole placement problems of the prior subsections. One would expect the stabilization problems to be easier to solve, so this result is not unexpected. The other possibility is that the algorithm is somehow better suited to solving stabilization problems rather than pole placement problems.

  (n, m, p)         (3,1,1)  (6,1,1)  (9,1,1)  (3,2,1)  (6,4,3)  (9,5,4)
  1 ≤ i < 10        …        …        …        …        …        …
  10 ≤ i < 100      …        …        …        …        …        …
  100 ≤ i ≤ 1000    …        …        …        …        …        …
  NC                …        …        …        …        …        …
  T                 …        …        …        …        …        …

Table 4.1: A comparison of performance for different n, m and p. i denotes the number of iterations, and listed are the numbers of problems that converged within the different iteration ranges. NC denotes the number of problems that did not converge within 1000 iterations. T denotes the average convergence time in CPU seconds for the problems that converged within 1000 iterations.

Finally, we note that being able to solve the continuous time stabilization problem enables one to solve other related problems. For example, the reduced order dynamic output feedback stabilization problem can also be solved via a system augmentation technique; see for example [69].

4.3.4 Continuous Time Stabilization: Particular Problem

The following problem, taken from [44], appears frequently in the literature. The system considered is the nominal linearized model of a helicopter, with matrices A, B and C as given in [44]. In this problem we wish to place the closed loop eigenvalues in the set {z ∈ C : Re(z) ≤ −α} with α = 0.1. To achieve this, we apply the algorithm with A replaced by A + αI. Before proceeding, we define P^γ_M(V, T) as follows; it is a modified version of the map of Definition 4.2.6, specialized to continuous time stabilization problems.

Definition 4.3.1. Let γ ∈ R be nonpositive. For any V ∈ C^{n×n} unitary and any T ∈ C^{n×n} upper triangular, define P^γ_M(V, T) = V T̄ V*, where

  T̄_kl = min{γ, Re(T_kk)} + i Im(T_kk),  if k = l and Re(T_kk) ≥ 0,
  T̄_kl = T_kl,  otherwise.

Numerical experiments show that the performance of the algorithm can actually be improved by using P^γ_M, which depends on the parameter γ ∈ R. P^γ_M shifts the real parts of the unstable eigenvalues to γ (≤ 0) rather than to 0. (If γ = 0 then P⁰_M is just P_M.) As we will see in this subsection, choosing certain values of γ increases the likelihood of finding solutions. An intuitive justification of why γ < 0 can improve convergence is the following. During the iteration process, applying P_L will tend to shift the eigenvalues back towards the right side of the complex plane. If we replace the real parts of the unstable eigenvalues with γ < 0, then even though P_L may shift the eigenvalues a little to the right, they may still end up in the left half plane as desired.
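A minimal Matlab sketch of P^γ_M, mirroring Definition 4.3.1, is the following (V and T from a complex Schur decomposition, gamma assumed nonpositive):

Tbar = T;
for k = 1:size(T, 1)
    if real(T(k, k)) >= 0                      % unstable eigenvalue
        Tbar(k, k) = min(gamma, real(T(k, k))) + 1i*imag(T(k, k));
    end
end
Y = V*Tbar*V';                                 % the point P_M^gamma(V, T)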

Regarding the choice of the parameter γ, a number of different values were tried for this problem. When γ ≥ −17, the algorithm was not always convergent. However, when γ ≤ −18, for example −18, −19, −20 and so on, the algorithm appeared to be always convergent. Typically the algorithm converged within 1000 iterations, with computational time under 0.7 CPU seconds. One particular solution K yields A + BKC with two real eigenvalues and a complex conjugate pair with imaginary parts ±0.7909, all satisfying the constraint.

4.3.5 Discrete Time Stabilization: Random Problems

This subsection contains results for some randomly generated discrete time stabilization problems. For each problem, the aim is to place all poles in the set C = {z : |z| ≤ α}, α = 0.9. 1000 (A, B, C) triples with n = 6, m = 4 and p = 3 were randomly generated. Triples with A stable were discarded and replaced. As in Section 4.3.1, an attempt was made to solve each problem using up to 10 different initial conditions and a maximum of 1000 iterations per initial condition. With ε = 10⁻³, the success rate for 1 initial condition was 61% and the overall success rate was 80%. A plot of success rate versus number of initial conditions is shown in Figure 4.4. For the problems that were solved, the average time taken was 0.37 CPU seconds.

Note: in (4.2.10), P_{C_l}(T_kk) equals α T_kk / |T_kk| if |T_kk| > α, and T_kk otherwise.
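This scalar projection onto the disk of radius alpha can be written, for instance, as the following small Matlab function (a sketch, e.g. saved as projDisk.m; no special case is needed for z = 0 since |0| ≤ alpha):

function zp = projDisk(z, alpha)
% Projection of a scalar z onto the disk {w : |w| <= alpha}.
if abs(z) > alpha
    zp = alpha*z/abs(z);     % radial projection onto the circle
else
    zp = z;                  % already inside the disk
end
end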

Figure 4.4: Performance for discrete time stabilization using up to 10 initial conditions (success rate versus number of initial conditions).

4.3.6 Discrete Time Stabilization: Particular Problem

The following example is taken from [32]. We wish to stabilize a system whose matrices A, B and C are as given in [32]; note that A is not stable. We take C as in the previous subsection and ε = 10⁻³. This problem was easily solved for all initial conditions tried; a stabilizing K could be found on average in 3 iterations or less.

4.3.7 A Hybrid Problem

So far we have presented results for three traditional classes of problems. In this subsection we demonstrate the generality of the algorithm by considering a less standard problem, namely a hybrid problem of the type shown in Figure 4.1(e). The problem parameters are c = −0.5 + i3, C = {z ∈ C : Re z ≤ −2 and |Im z| ≤ |Re z|}, n = 13, m = 3 and p = 5. To ensure solvability, B, C and K̄ were randomly generated and A set to A = Ṽ T̃ Ṽ^T − B K̄ C, with Ṽ a random orthogonal matrix and T̃ a real block upper triangular matrix with a spectrum satisfying the constraints. (T̃ was assigned the spectrum {−0.5 ± i3, −2, −2 ± i, −2.3, −2.5, −3 ± i3, −3.5 ± i3.1, −4 ± i4} and was created by choosing appropriate 1×1 and 2×2 blocks for its diagonal. The remaining upper triangular entries of T̃ were chosen randomly.)

With ε = 10⁻³, 64% of initial conditions tested resulted in a solution of this problem within 5000 iterations. For the initial conditions that led to convergence, the average time taken was 3.1 CPU seconds. The closed loop poles corresponding to a particular solution are shown in Figure 4.5. Note: the least squares matching steps were done directly rather than using the Hungarian method.

Figure 4.5: The closed loop poles corresponding to a solution for the considered hybrid problem.

4.4 Summary

In this chapter a new methodology for solving a broad class of output feedback pole placement problems was presented. While the methodology is not guaranteed to find a solution, the numerical experiments presented demonstrate that it can be quite effective in practice. A particular strength of the algorithm is that, in addition to being able to solve classical pole placement and stabilization problems, it can also be used to solve less standard pole placement problems.

Keywords: static output feedback, pole placement, stabilization, alternating projections.

Chapter 5
A Projective Methodology for Simultaneous Stabilization and Decentralized Control

5.1 Introduction

In this chapter, a projective methodology is presented for solving static output feedback simultaneous stabilization problems and stabilization by decentralized static output feedback problems. The chapter contains an overview of the algorithms and some numerical results; an analysis of convergence properties and extensive computational experiments are left for future work. We consider the following two problems.

Problem 5.1.1: Static Output Feedback Simultaneous Stabilization. Given A_i ∈ R^{n_i×n_i}, B_i ∈ R^{n_i×m}, and C_i ∈ R^{p×n_i} for each i = 1, ..., k, find K ∈ R^{m×p} such that all the closed loop systems A_i + B_i K C_i, i = 1, ..., k, are stable.

The simultaneous stabilization problem was first introduced in [62] and [73]. It arises frequently in practice, due to plant uncertainty, plant variation, failure modes, plants with several modes of operation, or nonlinear plants linearized at several different equilibria [76]. The static output feedback simultaneous stabilization problem has been shown to be NP-hard [6]; that is, an efficient (i.e., polynomial time) algorithm able to correctly solve all instances of the problem cannot be expected. Even for three single input single output systems, no tractable simultaneous stabilization design procedure has been proposed [15]. An effective approach to tackling a variety of simultaneous stabilization problems is through numerical means. Note that if the C_i's are all the identity, this problem reduces to the state feedback simultaneous stabilization problem; if k = 1, it reduces to the standard static output feedback stabilization problem. Various numerical algorithms have been proposed for solving Problem 5.1.1, for example [11] and [12]. Unlike these methods, our algorithm does not involve LMIs. Note that Problem 5.1.1 is different from what is usually referred to as the simultaneous stabilization problem in [4] and [72]: Problem 5.1.1 is to find a static output feedback controller simultaneously stabilizing k > 1 systems, whereas that problem seeks a dynamic compensator which stabilizes the given system and is itself stable; the latter is also referred to as the strong stabilization problem.

The other problem considered in this chapter is stabilization by decentralized static output feedback.

Problem 5.1.2: Stabilization by Decentralized Static Output Feedback. Given A ∈ R^{n×n}, B_i ∈ R^{n×m_i}, and C_i ∈ R^{p_i×n} for each i = 1, ..., k, find K_i ∈ R^{m_i×p_i}, i = 1, ..., k, such that

  A + Σ_{i=1}^k B_i K_i C_i

is stable.

For many large systems, such as electric power systems, transportation systems and whole economic systems, it is desirable to decentralize the control task. On the one hand, using decentralized control greatly reduces the design complexity; on the other hand, decentralized control is especially preferable if the measurements are taken on local channels and the controls can be applied on local channels only. Hence decentralized control has drawn a great deal of research into large scale control problems. Problem 5.1.2 has been extensively researched and many numerical algorithms have been proposed, for example [13] and [47]. Stabilization by decentralized static output feedback is NP-hard if one imposes a bound on the norm of the controller or if the blocks are constrained to be identical, that is, all K_i's are identical [6]. Note that Problem 5.1.2 can be regarded as a generalization of the centralized problem to include a block diagonal structure constraint on K.

This chapter presents a projective methodology for both Problem 5.1.1 and Problem 5.1.2. The problems are shown to be equivalent to finding a point in the intersection of two particular sets, and alternating projection ideas are employed to find such a point. One of the sets is an affine subspace and hence convex; we know exactly how to project onto it. The other set, however, is closed but nonconvex and is moreover rather complicated: to the best of our knowledge, how to project onto it is a difficult open problem. We use a substitute mapping that maps onto the set and gives a reasonable estimate of the actual projection operator. The effectiveness of the algorithms is demonstrated by computational results.

The structure of the chapter is as follows. (Refer to Chapter 2 for background on projections.) The methodology for both problems is presented in Section 5.2. Section 5.3 contains computational results demonstrating the effectiveness of the algorithms.

5.2 Methodology

This section presents the solution algorithms for Problems 5.1.1 and 5.1.2. Throughout this section C^{n×n} will be regarded as a Hilbert space with inner product ⟨Y, Z⟩ = tr(Y Z*) = Σ_{i,j} y_ij z̄_ij, and the associated norm is the Frobenius norm ‖Z‖ = ⟨Z, Z⟩^{1/2}.

5.2.1 Simultaneous Stabilization

Throughout this subsection it is assumed that the (A_i, B_i, C_i)'s, i = 1, ..., k, are given and fixed. Define L_S to be the set of all possible closed loop system matrices,

  L_S = {(Z_1, ..., Z_k) : Z_i ∈ R^{n_i×n_i}, Z_i = A_i + B_i K C_i, i = 1, ..., k, for some K ∈ R^{m×p}},

and define M_S to be the set of tuples of matrices with eigenvalues in the left half complex plane,

  M_S = {(Z_1, ..., Z_k) : Z_i ∈ C^{n_i×n_i}, ρ(Z_i) ≤ 0, i = 1, ..., k},

where ρ(·) denotes the largest real part of the eigenvalues of its argument.

Problem 5.1.1 can now be stated as: find (X_1, ..., X_k) ∈ L_S ∩ M_S.

A solution strategy for Problem 5.1.1 is to employ an alternating projection scheme, alternately projecting between L_S and M_S. While L_S is an affine subspace and hence convex, M_S is in general a rather complicated nonconvex set. Alternating projections between L_S and M_S are not guaranteed to converge. More importantly, how to calculate projections onto M_S is a hard open problem. As will be shown in the experiments, an alternating projection like scheme can still be quite successful if, instead of a true projection map for M_S, a suitable substitute is employed. Schur's result recalled in Chapter 4 shows that a complex square matrix is unitarily equivalent to an upper triangular complex matrix. The following mapping is used as a substitute for the true projection map onto M_S. Though it is not a true projection map, the notation P_{M_S} will still be used. The choice of P_{M_S} is motivated by the solution of the symmetric static output feedback stabilization problem, where a true projection can be found.

Definition. For any V_i ∈ C^{n_i×n_i} unitary and any T_i ∈ C^{n_i×n_i} upper triangular, define P_{M_S}(V_i, T_i) = V_i T̄_i V_i*, where

  (T̄_i)_kl = min{0, Re((T_i)_kk)} + i Im((T_i)_kk),  if k = l,
  (T̄_i)_kl = (T_i)_kl,  otherwise.

Given starting points Z_1 ∈ R^{n_1×n_1}, ..., Z_k ∈ R^{n_k×n_k}, we apply P_{M_S}(V_i, T_i) to each Z_i, i = 1, ..., k. Note that P_{M_S} maps into M_S. In order to apply P_{M_S} to Z_i, a Schur decomposition of Z_i must first be found. A given Z_i may have nonunique Schur decompositions, and these may give different mapped points. This is not so important, as different Schur decompositions lead to points in M_S of equal distance from the original matrix; the proof of this fact is similar to the proof of the corresponding theorem in Section 4.2 and is hence omitted here. P_{M_S}(V_i, T_i) keeps V_i fixed and modifies T_i; as in Section 4.2, of all the points in M_S that have a Schur decomposition with the same V_i matrix, P_{M_S}(V_i, T_i) is the closest (or at least equal closest) to the original point Z_i = V_i T_i V_i*.

The projection of (X_1, ..., X_k), X_i ∈ R^{n_i×n_i}, onto L_S involves solving a standard least squares problem.

Lemma. The projection of (X_1, ..., X_k) onto L_S is given by P_{L_S}(X_i) = A_i + B_i K C_i, i = 1, ..., k, where K is a solution of the least squares problem

  arg min_{K ∈ R^{m×p}} ‖ [C_1^T ⊗ B_1; ...; C_k^T ⊗ B_k] vec(K) − [vec(Re(X_1) − A_1); ...; vec(Re(X_k) − A_k)] ‖²₂,

where the bracketed quantities denote vertical stacking over i = 1, ..., k.

Proof. The proof is similar to that of the lemma of Section 4.2 and is hence omitted here.
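A minimal Matlab sketch of this least squares step follows; the system matrices are assumed stored in cell arrays A{i}, B{i}, C{i}, the current iterates in Y{i}, and k, m and p are assumed given:

M1 = []; rhs = [];
for i = 1:k
    M1  = [M1;  kron(C{i}', B{i})];                   % stack C_i^T kron B_i
    rhs = [rhs; reshape(real(Y{i}) - A{i}, [], 1)];   % stack vec(Re(X_i) - A_i)
end
kvec = M1 \ rhs;                     % least squares for the common gain
K = reshape(kvec, m, p);
for i = 1:k
    X{i} = A{i} + B{i}*K*C{i};       % components of P_{L_S}
end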

Here is our algorithm for Problem 5.1.1.

Algorithm: Problem 5.1.1
Data. A_i ∈ R^{n_i×n_i}, B_i ∈ R^{n_i×m}, C_i ∈ R^{p×n_i} for each i = 1, ..., k.
Initialization. Choose randomly generated matrices Y_i ∈ R^{n_i×n_i} for each i = 1, ..., k.
repeat
1. X_i := P_{L_S}(Y_i).
2. Calculate a Schur decomposition of each X_i: X_i = V_i T_i V_i*.
3. Y_i := P_{M_S}(V_i, T_i).
until ρ(X_i) ≤ 0 for all i = 1, ..., k.

As we only require the eigenvalues of all the closed loop systems to be stable, we terminate the algorithm as soon as all the eigenvalues have real parts less than or equal to zero.

5.2.2 Decentralized Control

Throughout this subsection it is assumed that A, B_i and C_i, i = 1, ..., k, are given and fixed. Define L_D to be the set of all possible closed loop system matrices,

  L_D = {Z ∈ R^{n×n} : Z = A + Σ_{i=1}^k B_i K_i C_i for some K_1 ∈ R^{m_1×p_1}, ..., K_k ∈ R^{m_k×p_k}},

and define M_D to be the set of matrices with eigenvalues in the left half complex plane,

  M_D = {Z ∈ C^{n×n} : ρ(Z) ≤ 0}.

Problem 5.1.2 can now be stated as: find X ∈ L_D ∩ M_D. Again an alternating projection scheme is employed. As projecting onto M_D is an unsolved problem, the following P_{M_D}(V, T) is employed as a substitute for the actual projection operator. Computational experiments show that this scheme is quite successful in practice.

Definition. For any V ∈ C^{n×n} unitary and any T ∈ C^{n×n} upper triangular, define P_{M_D}(V, T) = V T̄ V*, where

  T̄_kl = min{0, Re(T_kk)} + i Im(T_kk),  if k = l,
  T̄_kl = T_kl,  otherwise.

The projection of X ∈ R^{n×n} onto L_D involves solving a standard least squares problem.

Lemma. The projection of X ∈ R^{n×n} onto L_D is given by P_{L_D}(X) = A + Σ_{i=1}^k B_i K_i C_i, where vec(K) := [vec(K_1); ...; vec(K_k)] stacks the vec(K_i)'s in order and is a solution of the least squares problem

  arg min ‖ [C_1^T ⊗ B_1, ..., C_k^T ⊗ B_k] vec(K) − vec(Re(X) − A) ‖²₂,

the minimum being taken over K_i ∈ R^{m_i×p_i}, i = 1, ..., k.

Proof. The proof is similar to that of the lemma of Section 4.2 and is hence omitted here.

Here is our algorithm for Problem 5.1.2.

Algorithm: Problem 5.1.2
Data. A ∈ R^{n×n}, B_i ∈ R^{n×m_i}, C_i ∈ R^{p_i×n} for each i = 1, ..., k.
Initialization. Choose a randomly generated matrix Y ∈ R^{n×n}.
repeat
1. X := P_{L_D}(Y).
2. Calculate a Schur decomposition of X: X = V T V*.
3. Y := P_{M_D}(V, T).
until ρ(X) ≤ 0.

5.3 Computational Results

This section contains computational results of applying the algorithms to both Problem 5.1.1 and Problem 5.1.2. We present results for both randomly generated problems and particular problems from the literature. As mentioned in the introduction, more extensive numerical experiments are part of the future work of this thesis. The algorithms were implemented in Matlab 6.5 and all results were obtained using a 3.19 GHz Pentium 4 machine. Throughout this section a randomly generated matrix will be a matrix whose entries are drawn from a normal distribution of zero mean and variance 1.

5.3.1 Simultaneous Stabilization: Random Problems

This subsection contains results for some randomly generated static output feedback simultaneous stabilization problems. 1000 problems were created for each of a number of different choices of the system dimensions (n, m, p). Each problem was created as follows. System matrices A_i, B_i, C_i, i = 1, ..., k, and K̄ were generated randomly. A scalar multiple of the identity was added to each A_i to ensure it had a stability degree of 0.1. A_i was then replaced by A_i − B_i K̄ C_i. All stable A_i's were discarded. Numerical experiments show that the performance of the algorithm can actually be improved by using a slightly modified version of P_{M_S} which depends on a parameter γ ∈ R; a formal definition of a similar map can be found in Definition 4.3.1, and P^γ_{M_S} is obtained in the same way. P^γ_{M_S} shifts the real parts of the unstable eigenvalues to γ (≤ 0) rather than to 0. In these experiments we chose γ = −1. An attempt was made to solve each problem using up to 10 different initial conditions and a maximum of 1000 iterations per initial condition.

  (n, m, p)   (3, 1, 1)      (6, 4, 3)      (9, 5, 4)
  k           …      …       …      …       …      …
  S.R.        97%    98%     100%   100%    100%   99%
  i           …      …       …      …       …      …
  T           …      …       …      …       …      …

Table 5.1: A comparison of performance for different n, m, p and the number of systems k. S.R. denotes the success rate, T the average convergence time in CPU seconds, and i the average number of iterations. T and i are based only on those problems that were successfully solved.

As can be seen from Table 5.1, the results are quite good: the algorithm solved most of the problems and the average solution times were very small. Note that the different choices of dimensions present their own difficulties. Kimura's stabilization condition is m + p > n (see Theorem A.0.3). Only (n, m, p) = (6, 4, 3) meets this condition; for (9, 5, 4), m + p = n; and the (3, 1, 1) problems have m + p < n and are quite hard problems.

5.3.2 Simultaneous Stabilization: Particular Problems

This subsection contains results for some particular simultaneous stabilization problems from the literature. In order to present a greater number of results, rather than presenting the details of each problem, only references are given. For each problem, 100 random initial conditions were tested, with a fixed maximum number of iterations per initial condition; a run was terminated as soon as all the systems were stable.

  No.  references                              (n, m, p)  k  S.R. (%)  T  i
  1    [11, Ex. 1]                             (2, 1, 1)  3  98        …  …
  2    [11, Ex. 2], [12, Ex. 2], [75, Ex. 1]   (3, 1, 2)  4  96        …  …
  3    [11, Ex. 3]                             (3, 1, 3)  3  96        …  …
  4    [11, Ex. 4], [12, Ex. 1]                (2, 1, 1)  3  95        …  …

Table 5.2: Results for particular examples from the literature. T and i are based only on those problems that were successfully solved.

As can be seen from Table 5.2, performance was very good: solutions to each problem could be found, the success rates are all at least 95%, and both the average number of iterations and the average time taken for successfully solved problems are very small.

5.3.3 Decentralized Control: Random Problems

This subsection contains results for some randomly generated problems of stabilization by decentralized static output feedback. 1000 problems were created for each of a number of different choices of the system dimension n and controller dimensions (m_i, p_i), i = 1, ..., k. Each problem was created as follows.

System matrices A, B_i, C_i and K̄_i, i = 1, ..., k, were generated randomly. A scalar multiple of the identity was added to A to ensure it had a stability degree of 0.1. A was then replaced by A − Σ_{i=1}^k B_i K̄_i C_i. All stable A's were discarded. Numerical experiments show that the performance of the algorithm can again be improved by using P^γ_{M_D}, constructed along the lines of Definition 4.3.1; here we chose γ = −1.

  n           …      …      …      …      …      …
  k           2      3      2      3      2      3
  (m_1, p_1)  (2,2)  (2,2)  (3,4)  (3,4)  (5,7)  (5,7)
  (m_2, p_2)  (2,2)  (1,3)  (4,3)  (4,3)  (7,5)  (7,5)
  (m_3, p_3)  -      (3,1)  -      (4,4)  -      (6,6)
  S.R.        100%   100%   100%   100%   100%   100%
  i           …      …      …      …      …      …
  T           …      …      …      …      …      …

Table 5.3: A comparison of performance for randomly generated problems with different n, m_i, p_i and number of subsystems k. T and i are based only on those problems that were successfully solved.

As can be seen from Table 5.3, the algorithm performs very well, solving every problem. Not surprisingly, for problems of the same dimension n, the more channels the control task is divided into, the faster a solution is found and the fewer iterations are needed.

5.4 Summary

In this chapter we presented a novel methodology for solving static output feedback simultaneous stabilization and stabilization by decentralized static output feedback problems.

Numerical experiments show that the algorithms are very effective in practice, though the methodology is not guaranteed to find a solution.

Keywords: static output feedback, simultaneous stabilization, decentralized control, alternating projections.

Chapter 6
Trust Region Methods for Classical Pole Placement

6.1 Introduction

Pole placement via static output feedback is a classical problem in systems and control theory, and there exists a great deal of research on this topic. Unfortunately, determining the solvability of static output feedback pole placement problems has recently been shown to be NP-hard [31]. Though sufficient conditions for solvability exist, the survey paper [61] states that these conditions are mainly theoretical in nature and that there are no good numerical algorithms available in many cases when a problem is known to be solvable. New algorithms for this important problem are certainly of interest. In this chapter we present two related numerical algorithms for solving static output feedback pole placement problems.

Problem. Given system matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and desired eigenvalues λ_D ∈ C^n, find K ∈ R^{m×p} such that λ(A + BKC) = λ_D.

Two trust region approaches are considered for solving the unconstrained nonlinear least squares problem

  min_{K ∈ R^{m×p}} f(K) := (1/2) ‖λ(A + BKC) − λ_D‖²₂.   (6.1.1)

Here λ(A + BKC) denotes the vector of eigenvalues of A + BKC, with entries sorted to give the minimum norm. Trust region methods, which are well known in the optimization community, are a type of iterative method for minimizing nonconvex functions. The specific trust region methods we use are the trust region Newton method and the Levenberg-Marquardt method. In order to employ the trust region Newton method, the first and second derivatives of the eigenvalues of A + BKC must be calculated at each iteration. The Levenberg-Marquardt based algorithm has the advantage of requiring only the first derivatives of the eigenvalues. Both resulting algorithms have the desirable property that, near solutions, they typically converge quadratically.

A technicality that arises in the proposed approaches is that the eigenvalues of A + BKC may not be differentiable everywhere. They will, however, be differentiable at all points at which A + BKC has distinct eigenvalues. A consequence of this is that the algorithms are only appropriate for problems whose desired poles are distinct. It turns out that the algorithms can still be used to solve problems whose desired poles are distinct but whose separation is quite small, and hence this does not appear to be a serious limitation.

The idea of solving pole placement type problems by utilizing eigenvalue derivatives is not completely new. Related ideas have been used to solve a noncontrol related inverse eigenvalue problem involving symmetric matrices [30].

However, the methods presented in [30] are all local in nature and hence require one to start sufficiently close to a solution for them to converge. An important distinguishing feature of our algorithms is that, through the use of a trust region methodology, this is not the case. First derivatives of eigenvalues (though not second derivatives) have also been used to solve various control problems. For example, in [38] they are used to try to achieve pole placement in certain convex regions. The methodology used there is quite different from the one used here and is based on convex programming techniques. As it requires that the open loop poles are already quite close to the desired poles, that method also works only locally. Along the same lines as [38], first derivatives of eigenvalues have also been used for robustness analysis and stabilization; see [59].

This chapter is structured as follows. Section 6.2 contains an overview of trust region methods. In order to use trust region methods to solve the pole placement problem, the first and second derivatives of f are required. Details of these calculations, including how to calculate the derivatives of the eigenvalues, are given in Section 6.3. Trust region methods require the function being minimized to be differentiable. While f will typically only be differentiable on an open dense set, this turns out to be sufficient; such issues are addressed in Section 6.4. Section 6.5 contains computational results, including results for a number of problems from the literature.

6.2 Trust Region Methods

This section gives an overview of trust region methods. It is assumed that the function f : R^N → R to be minimized is (sufficiently) smooth. The actual f we wish to minimize is given in (6.1.1) and may not satisfy this assumption; issues related to this fact are addressed in Section 6.4.

Additional information on trust region methods can be found in [24] and [56].

6.2.1 Basic Methodology

Trust region methods are iterative in nature and can be used to minimize smooth nonconvex functions. Given a current iterate x_k, they construct a possibly nonconvex quadratic approximation of the objective function about x_k. This model is only assumed to be a good approximation in a certain ball centered at x_k, the so-called trust region. It turns out that, numerically, it is possible to readily minimize a quadratic function over a ball. Doing so gives a candidate step p_k. The step p_k is only accepted if the difference in the objective function, f(x_k) − f(x_k + p_k), is sufficiently close to the difference predicted by the model. If p_k is not acceptable, the trust region radius is decreased and the process repeated. On the other hand, if the model gives a good prediction, the radius of the trust region may be increased to allow a larger step in the next iteration. What follows describes the trust region method in greater detail.

At each iteration, the quadratic approximation is assumed to be of the form

  m_k(p) = f(x_k) + ∇f(x_k)^T p + (1/2) p^T B_k p.

Here B_k is typically either the Hessian of f at x_k or some approximation of this Hessian. If B_k is the Hessian, then m_k is simply the second order Taylor approximation of f at x_k.

As will be discussed below, it may also be useful to consider other choices for B_k. Each constrained minimization problem is of the form

  min_{p ∈ R^N} m_k(p)  s.t.  ‖p‖₂ ≤ Δ_k,   (6.2.1)

where Δ_k > 0 is the current trust region radius. The solution p_k of (6.2.1) gives a potential step. Whether or not it is a suitable step is assessed by considering the ratio of the actual reduction of the objective to the predicted reduction:

  ρ_k = (f(x_k) − f(x_k + p_k)) / (m_k(0) − m_k(p_k)).   (6.2.2)

The overall trust region method is as follows.

Trust Region Method, Generic Algorithm ([56])
Given Δ̂ > 0, Δ_0 ∈ (0, Δ̂), and η ∈ [0, 1/4):
for k = 0, 1, 2, ...
  Obtain p_k by (approximately) solving (6.2.1);
  Evaluate ρ_k from (6.2.2);
  if ρ_k < 1/4
    Δ_{k+1} = (1/4) Δ_k
  else if ρ_k > 3/4 and ‖p_k‖₂ = Δ_k
    Δ_{k+1} = min{2 Δ_k, Δ̂}
  else
    Δ_{k+1} = Δ_k;
  if ρ_k > η
    x_{k+1} = x_k + p_k

  else
    x_{k+1} = x_k;
end (for).

Approximate solutions of the constrained quadratic minimization problem (6.2.1) can be obtained in a number of ways. One way is the nearly exact solution method described in [56, Section 4.2]: it can be shown that problem (6.2.1) is equivalent to finding a p and a scalar γ ≥ 0 such that the following conditions hold:

  ‖p‖₂ ≤ Δ_k,
  (B_k + γI) p = −∇f(x_k),
  γ (Δ_k − ‖p‖₂) = 0,
  (B_k + γI) is positive semidefinite.

Without going into the details, we mention that finding a p and γ that satisfy these conditions is equivalent to solving a one dimensional root finding problem in γ, which can be solved using a Newton method.
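As an illustration, the following Matlab sketch computes a nearly exact step using safeguarded bisection on γ rather than the Newton based root finder just mentioned (slower but simple to state). Here g and B denote the current gradient and (symmetric) model Hessian, Delta the trust region radius, and the degenerate "hard case" is ignored, in which case the sketch merely returns a shorter, still valid, step:

N = size(B, 1);
if min(eig(B)) > 0 && norm(B \ (-g)) <= Delta
    p = B \ (-g);                          % unconstrained minimizer is inside
else
    lo = max(0, -min(eig(B))) + 1e-12;     % make B + gamma*I positive definite
    hi = lo + 1;
    while norm((B + hi*eye(N)) \ (-g)) > Delta
        hi = 2*hi;                         % grow until the step is short enough
    end
    for it = 1:60                          % bisect on ||p(gamma)|| = Delta
        mid = (lo + hi)/2;
        if norm((B + mid*eye(N)) \ (-g)) > Delta
            lo = mid;
        else
            hi = mid;
        end
    end
    p = (B + hi*eye(N)) \ (-g);            % approximately optimal step
end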

Regarding the choice of the B_k's, the Hessian of f at x_k is a natural choice. In this case, the method is called the trust region Newton method. When the objective function f is a least squares cost, say f(x) = (1/2) Σ_{i=1}^M r_i²(x) for some functions r_i : R^N → R, there is another suitable choice. In this case, if we define r(x) = (r_1(x), ..., r_M(x))^T and let J(x) denote the Jacobian of r(x), then

  ∇f(x) = J(x)^T r(x)  and  ∇²f(x) = J(x)^T J(x) + Σ_{i=1}^M r_i(x) ∇²r_i(x),

and a good choice for B_k is J(x_k)^T J(x_k). The advantages of this choice for B_k include the fact that it does not require the calculation of the second derivatives of the r_i's and that it gives a good approximation of ∇²f(x_k) when f(x_k) is small, that is, when each r_i(x_k) is small. For this choice of B_k's, the method is called the Levenberg-Marquardt method.

6.2.2 Convergence Results

This section contains some general convergence results. These results have been specialized for our purposes; the given references usually refer to more general results. The results in this section all assume the nearly exact solution method is used for the subproblems (6.2.1) and that the algorithm parameter η is nonzero, that is, η ∈ (0, 1/4). A simple but important property of trust region methods is that the cost is nonincreasing from one iteration to the next: for all k ≥ 0, f(x_{k+1}) ≤ f(x_k). Here are some less simple properties. The following result concerns global convergence to stationary points.

Theorem ([56, Th 4.8]). Suppose that on the sublevel set

  {x | f(x) ≤ f(x_0)},   (6.2.3)

f is twice continuously differentiable and bounded below, and that ‖B_k‖ ≤ β for some constant β. Then

  lim_{k→∞} ∇f(x_k) = 0.

This theorem holds for both the trust region Newton method and the Levenberg-Marquardt method. For the former method, the following result also holds.

Theorem ([56, Th 4.9]). Suppose the set (6.2.3) is compact, that f is twice continuously differentiable on this set, and that B_k = ∇²f(x_k).

Then the x_k's have a limit point x* that satisfies the following first and second order necessary conditions for a local minimum:

  ∇f(x*) = 0,  ∇²f(x*) is positive semidefinite.

The following local convergence result for the trust region Newton method implies that near a (strict) local minimum the method reduces to a pure Newton method.

Theorem ([56, Th 6.4]). Suppose that B_k = ∇²f(x_k). Further, suppose the sequence of x_k's converges to a point x* that satisfies the following first and second order sufficient conditions for a (strict) local minimum,

  ∇f(x*) = 0,  ∇²f(x*) is positive definite,

and that f is three times continuously differentiable in a neighborhood of x*. Then the trust region bound Δ_k becomes inactive for all k sufficiently large.

A consequence of this theorem is that local convergence for the trust region Newton method is usually quadratic, just as for the pure Newton method. The final result in this section shows that the Levenberg-Marquardt method is often locally quadratically convergent to global minima. (Note that this may not be the case for local minima that are not global minima.)

Theorem ([56, Section 10.2]). Suppose the r_i's that determine f are three times continuously differentiable in a neighborhood of a global minimum x*. Suppose further that J(x*)^T J(x*) is positive definite. Then the Levenberg-Marquardt method is locally quadratically convergent to x*.

6.3 Derivative Calculations

In order to apply the trust region methods, we need to calculate the appropriate first and second derivatives. As already mentioned, the eigenvalues of A + BKC may not be differentiable everywhere. For example, for the 2×2 matrix family of (6.3.1), which is affine in two parameters k and l, the eigenvalues are 2 ± (4 + 4k)^{1/2} when l = k, and hence they are not differentiable at (k, l) = (−1, −1). The next result, which follows from a result in [43, Section 2.5.7], shows that lack of differentiability cannot occur at points at which the eigenvalues are distinct.

Theorem. Consider a matrix valued function A : R^N → R^{n×n}. Suppose A(x) is k-times continuously differentiable in x in an open neighborhood Ω. Furthermore suppose that at each point in Ω, A(x) has distinct eigenvalues. Then the eigenvalues of A(x) are k-times continuously differentiable in Ω.

Suppose the conditions of this theorem are satisfied with k ≥ 2. Then we can write down explicit expressions for the first and second derivatives of the eigenvalues. (Part of the following results appear in [48]; the rest we have proved using similar techniques to those appearing in that paper.) If λ_i denotes the ith eigenvalue of A(x), let D = diag(λ_1, ..., λ_n) and let X ∈ C^{n×n} and Y ∈ C^{n×n} be such that A(x)X = XD and YX = I. Then

  ∂λ_i/∂x_k = ( Y (∂A(x)/∂x_k) X )_ii,   (6.3.2)

and if we define

  P = Y (∂A(x)/∂x_k) X  and  Q = Y (∂A(x)/∂x_l) X,   (6.3.3)

then

  ∂²λ_i/(∂x_k ∂x_l) = ( Y (∂²A(x)/(∂x_k ∂x_l)) X )_ii + Σ_{j=1, j≠i}^n (P_ij Q_ji + P_ji Q_ij) / (λ_i − λ_j).   (6.3.4)

These results can be used to calculate the derivatives of our objective function f(K) = (1/2) Σ_{i=1}^n (λ_i − λ_i^D)* (λ_i − λ_i^D), where * denotes complex conjugation. Differentiating, we have

  ∂f(K)/∂K_kl = Re{ Σ_{i=1}^n (λ_i − λ_i^D)* ∂λ_i/∂K_kl }   (6.3.5)

and

  ∂²f(K)/(∂K_kl ∂K_pq) = Re{ Σ_{i=1}^n (∂λ_i/∂K_kl)* (∂λ_i/∂K_pq) + Σ_{i=1}^n (λ_i − λ_i^D)* ∂²λ_i/(∂K_kl ∂K_pq) }.   (6.3.6)

Note that ∂A(x)/∂x_k is here given by

  ∂(A + BKC)/∂K_kl = B_k C_l,   (6.3.7)

where B_k is the kth column of B and C_l is the lth row of C. Identity (6.3.7) implies that the first term appearing in (6.3.4) is always zero. Combining (6.3.2)-(6.3.7) we now have a complete characterization of the first and second derivatives of our cost (at points where A + BKC has distinct eigenvalues). Note that when applying the Levenberg-Marquardt method, the approximate second derivatives are given by the first term in (6.3.6),

  ∂²f(K)/(∂K_kl ∂K_pq) ≈ Re{ Σ_{i=1}^n (∂λ_i/∂K_kl)* (∂λ_i/∂K_pq) }.   (6.3.8)
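As an illustration, via (6.3.7) the first derivatives (6.3.2) specialize to ∂λ_i/∂K_kl = (Y B_k C_l X)_ii, which the following Matlab sketch computes for all i, k, l. It is valid only when A + BKC has distinct eigenvalues (so that X is invertible), and n, m, p, A, B, C, K are assumed in scope:

Acl = A + B*K*C;
[X, Dg] = eig(Acl);              % columns of X: right eigenvectors
Yl = inv(X);                     % rows of Yl: left eigenvectors, Yl*X = I
dlam = zeros(n, m, p);           % dlam(i,kk,ll) = d lambda_i / d K_(kk,ll)
for i = 1:n
    for kk = 1:m
        for ll = 1:p
            % (Yl * B(:,kk)*C(ll,:) * X)_(ii), computed as two scalars
            dlam(i, kk, ll) = (Yl(i, :)*B(:, kk)) * (C(ll, :)*X(:, i));
        end
    end
end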

6.4 Additional Comments

When evaluating the cost f(K), the eigenvalues of A + BKC must be matched with the desired eigenvalues in a least squares sense. Suppose a given problem has distinct desired eigenvalues and that it is solvable. Then, sufficiently near a solution of the problem, the eigenvalues of A + BKC will be distinct, which eigenvalues of A + BKC match to which desired eigenvalues will not change, and the eigenvalues of A + BKC will depend smoothly on K. As a result, for problems that are solvable and have distinct desired eigenvalues, our objective function f will be smooth in a neighborhood of solutions. An important consequence of this is that the results from Section 6.2.2 regarding local convergence to solutions still apply. In particular, near solutions of problems with distinct desired eigenvalues, both our algorithms will often converge quadratically.

The comments above address the behavior of the algorithms in a neighborhood of a solution. What about behavior far away from a solution? Are the algorithms even defined in such regions? Considering the steps involved, all that is required for the algorithms to be well defined is that, at each iterate, A + BKC have distinct eigenvalues. If the desired eigenvalues are distinct and a generic initial condition is used, it is unlikely that A + BKC has repeated eigenvalues at any iterate of either algorithm. Hence, under these mild assumptions, the algorithms should be well defined, and this is indeed what is observed in practice. If the desired eigenvalues are not distinct, the cost may not be differentiable at a solution; this indicates that the requirement of distinct desired eigenvalues is, in a sense, also necessary. This does not limit the usefulness of the algorithms too much, however, as desired eigenvalues can always be perturbed slightly so that they are distinct. While having distinct but close eigenvalues does lead to a degree of ill-conditioning in our algorithms, the algorithms can still be effectively utilized in such cases, as will be shown in the numerical results section.

We also mention that, if the desired eigenvalues are distinct, it is our belief that, modulo small changes, the global results of Section 6.2.2 should still hold, at least generically.

6.5 Computational Results

This section contains computational results of applying the algorithms to various problems. The algorithms were implemented in Matlab 6.5 and all results were obtained using a 3.19 GHz Pentium 4 machine.

6.5.1 Random Problems

1000 random problems were created for each of a number of different choices for the system dimensions (n, m, p). Each problem was created as follows. System matrices A, B, and C were generated randomly; their entries were drawn from a normal distribution of zero mean and variance 1. λ_D was taken to be the spectrum of a randomly generated matrix and a scalar was added to λ_D to ensure max_i Re λ_i^D = −0.1. Each triple (n, m, p) was chosen to satisfy

  mp > n.   (6.5.1)

As the problems are randomly generated and satisfy condition (6.5.1), Wang's sufficient condition ensures each problem is solvable [74]. An attempt was made to solve each problem using up to 5 different initial conditions and a maximum of 2000 iterations per initial condition. Initial conditions were chosen randomly. The convergence condition used was ‖λ(A + BKC) − λ_D‖₂ < ε. Results for both algorithms are given in Table 6.1.

  (n, m, p)                    (3,2,2)  (6,4,3)  (9,5,5)
  Trust region Newton   S.R.  100%     100%     91%
                        T     …        …        …
                        i     …        …        …
  Levenberg-Marquardt   S.R.  100%     100%     99%
                        T     …        …        …
                        i     …        …        …

Table 6.1: A comparison of performance for different n, m and p. S.R. denotes the success rate, T the average convergence time in CPU seconds, and i the average number of iterations. T and i are based only on those problems that were successfully solved.

As can be seen, the results for both algorithms are quite good. Not surprisingly, given the reduced computation required for its implementation, the Levenberg-Marquardt based algorithm is faster than the trust region Newton based algorithm; notice in particular the large difference in T for the (9, 5, 5) problems. What is perhaps surprising is that the Levenberg-Marquardt based algorithm is more likely to find a solution (at least within the number of iterations that were allowed). This suggests that the Levenberg-Marquardt based algorithm is superior to the trust region Newton based algorithm. Note, however, that we have observed instances where, using the same initial condition, the former algorithm converges to a local minimum while the latter converges to a solution.

The problems in Table 6.1 are easy in the sense that they actually satisfy Kimura's condition m + p > n (see Theorem A.0.3), and in most cases the number of variables mp is significantly larger than n. Results for some harder problems are presented in Table 6.2. For these problems, mp − n = 1. An attempt was made to solve each problem using up to 5 different initial conditions and a maximum of 5000 iterations per initial condition. Initial conditions were chosen randomly.

  (n, m, p)                    (5,3,2)  (7,2,4)  (9,2,5)
  Trust region Newton   S.R.  100%     89%      65%
                        T     …        …        …
                        i     …        …        …
  Levenberg-Marquardt   S.R.  99%      96%      81%
                        T     …        …        …
                        i     …        …        …

Table 6.2: Some harder random problems. T and i are based only on those problems that were successfully solved.

As can be seen, for these problems the success rates are lower, even though a greater number of iterations per initial condition was allowed. Overall, though, the results for these difficult problems are still quite good. Figure 6.1 shows a typical plot of ‖λ(A + BKC) − λ_D‖₂ versus the iteration number i; it has the desired property that, near a solution, the convergence is quadratic.

6.5.2 Particular Problems

This subsection contains results for particular problems from the literature. In order to present a greater number of results, rather than presenting the details of each problem, only references are given. Results are presented only for the Levenberg-Marquardt based algorithm. For each problem, 100 random initial conditions were tested and the maximum number of iterations per initial condition was set to 1000 (except for Problem 6, for which it was set to 1500). The termination parameter ε was reduced for these experiments. Results are given in Table 6.3.

Figure 6.1: Quadratic convergence near a solution of the Levenberg-Marquardt algorithm (‖λ(A + BKC) − λ_D‖₂ versus iteration number i).

As can be seen, performance was again very good. Solutions to each problem could be found. Aside from Problem 6, for those initial conditions that led to a solution, average convergence times were less than 0.5 CPU seconds, and solutions could be found from many different initial conditions. Problem 6 was the most sensitive to initial conditions. In fact, the results for Problem 6 in the table are based on choosing the entries of the initial K's from a normal distribution of zero mean and variance 100. (Choosing initial conditions for this problem in the same manner as for all the other problems led to a rather low success rate of 5%.)

  No.  references           (n, m, p)  T  i  S.R. (%)
  1    [1, ex 1, case 1]    (4, 2, 2)  …  …  …
  2    [1, ex 1, case 2]    (4, 2, 2)  …  …  …
  3    [1, ex 2], [70]      (5, 2, 4)  …  …  …
  4    [64]                 (4, 3, 2)  …  …  …
  5    [49, ex 1]           (4, 2, 2)  …  …  …
  6    [49, ex 2]           (6, 3, 2)  …  …  …
  7    [49, ex 3]           (5, 3, 2)  …  …  …
  8    [66, ex 1]           (4, 3, 2)  …  …  …
  9    [66, ex 2]           (3, 1, 2)  …  …  …
  10   [66, ex 3]           (4, 2, 2)  …  …  …
  11   [46]                 (8, 4, 3)  …  …  …
  12   [25]                 (3, 2, 2)  …  …  …

Table 6.3: Particular problems. T and i are based only on those problems that were successfully solved.

6.5.3 Repeated Eigenvalues

Each of the problems considered in the prior subsection (as well as all the random problems) had distinct desired eigenvalues. In this subsection we consider what can be achieved if the desired eigenvalues are not distinct. Consider the following problem from [14], with system matrices A ∈ R^{6×6}, B and C as given there, and desired eigenvalues

  λ_D = (−3, −3, −2, −2, −1, −1)^T.

Notice that this problem has three pairs of repeated eigenvalues. The algorithms do not provide a way to exactly solve this problem.

However, a fairly good approximate solution can be found by considering a slightly perturbed desired spectrum with distinct eigenvalues. For example, suppose λ_D is replaced with

  λ_D^δ = (−3 − δ, −3, −2 − δ, −2, −1 − δ, −1)^T

for a small δ > 0. Then this perturbed problem can often be solved. An alternative strategy is to solve a series of perturbed problems with decreasing δ's: first solve a perturbed problem with δ = 10⁻¹; then, setting δ = 10⁻² and using the solution of the prior problem as an initial condition, solve this new perturbed problem; continue this process with δ = 10⁻³ and δ = 10⁻⁴. Using the Levenberg-Marquardt based algorithm, the first strategy led to a solution for 55% of initial conditions tried, with an average convergence time of 0.72 CPU seconds. The second strategy was successful in 59% of cases, with an average convergence time of 0.35 CPU seconds.

We note that the main problem we encountered in solving these problems was not convergence to local minima, though this can occur, but rather that near solutions the Hessian of the cost can have very large eigenvalues. This leads to numerical issues when trying to solve the constrained quadratic subproblems (6.2.1). The code we have implemented for these subproblems works very well in the vast majority of cases, though we expect it could still be improved further and hence that even better results may be achievable.
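In code, the continuation strategy is a short loop; in the Matlab sketch below, solvepp is a hypothetical wrapper around either algorithm, returning a gain from a warm start, and pert marks which copies of the repeated eigenvalues are shifted:

lamD = [-3; -3; -2; -2; -1; -1];
pert = [-1;  0; -1;  0; -1;  0];     % shift one eigenvalue of each pair
K0 = randn(m, p);
for delta = [1e-1 1e-2 1e-3 1e-4]
    K0 = solvepp(A, B, C, lamD + delta*pert, K0);  % warm started solve
end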

6.6 Summary

In this chapter two related numerical methods for the static output feedback pole placement problem have been presented. Both algorithms are well behaved globally and have the property that local convergence to solutions often occurs quadratically. Extensive computational results indicate that the algorithms can be highly effective in practice. While it is required that the desired poles be distinct, the algorithms can still be successfully utilized for problems with repeated poles if small perturbations to the desired poles are allowed.

Keywords: pole placement, static output feedback, trust region method, Newton's method, Levenberg-Marquardt method, eigenvalue derivatives.

Chapter 7
A Gauss-Newton Method for Classical Pole Placement

7.1 Introduction

In this chapter we present a Gauss-Newton algorithm for solving static output feedback pole placement problems (see the problem formulation in Chapter 6). The problem is formulated as a constrained nonlinear least squares problem and the minimization is achieved via a Gauss-Newton algorithm. The chapter contains an overview of the algorithm and some numerical results; a convergence analysis and further numerical experiments are left as future work.

Consider the pole placement problem again. Define

  f(Q, G, K) = ‖Q(D + G)Q^T − (A + BKC)‖².   (7.1.1)

Here A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} are given system matrices. D is a block diagonal matrix with 1×1 and 2×2 blocks placed appropriately on the diagonal.

D is constructed in the following way so as to possess the desired eigenvalues λ_D ∈ C^n. The 1×1 blocks contain the real eigenvalues in λ_D. Each 2×2 block contains the real parts of a pair of complex conjugate eigenvalues on the diagonal and the values of the imaginary parts on the subdiagonal. Note that the eigenvalues are sorted to give the minimal norm, which is a combinatorial least squares problem (this problem and its solution methods are explained in Section 4.2.2). A Gauss-Newton method is considered for solving the following constrained nonlinear least squares problem.

Problem 7.1.1.

  min f(Q, G, K)  s.t.  Q ∈ O_n;  G block super triangular;  K ∈ R^{m×p}.

For clarity on the form of G, consider the following example. Suppose λ_D = {−1, −2 ± i, −3}; then

  D = [ −1   0   0   0
         0  −2   1   0
         0  −1  −2   0
         0   0   0  −3 ]

and

  G = [ 0  g_1  g_2  g_3
        0   0    0   g_4
        0   0    0   g_5
        0   0    0    0 ].   (7.1.2)
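A minimal Matlab sketch constructing such a D from a self-conjugate vector lamD of desired eigenvalues follows; one sign convention is chosen for the 2×2 blocks (eigenvalues are a ± ib either way):

lamD = cplxpair(lamD);             % group conjugate pairs, reals last
blocks = {}; j = 1;
while j <= numel(lamD)
    if abs(imag(lamD(j))) > 0
        a = real(lamD(j)); b = abs(imag(lamD(j)));
        blocks{end+1} = [a b; -b a];   % 2x2 block for the pair a +/- ib
        j = j + 2;
    else
        blocks{end+1} = real(lamD(j)); % 1x1 block for a real eigenvalue
        j = j + 1;
    end
end
D = blkdiag(blocks{:});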

In Section 4.2.2 we recalled Schur's result that any matrix Z ∈ C^{n×n} is unitarily equivalent to an upper triangular matrix [40, Th 2.3.1]. Here we recall another of Schur's results: for any matrix Z ∈ R^{n×n}, there is a real orthogonal matrix V ∈ O_n such that Z = V T V^T, where T is a block upper triangular matrix [40, Th 2.3.4].

Theorem. Given Z ∈ R^{n×n} with eigenvalues λ_1, ..., λ_n in any prescribed order, there is an orthogonal matrix V ∈ O_n and a block upper triangular matrix T ∈ R^{n×n} such that

  Z = V T V^T   (7.1.3)

and T_ii = Re(λ_i), i = 1, ..., n. Complex conjugate eigenvalues form 2×2 blocks along the diagonal.

Proof. See, for example, [40, Th 2.3.4].

Note that the formulation of Problem 7.1.1 is motivated by the Schur form of real matrices in this theorem. The Gauss-Newton method is a variation of the standard Newton method and is iterative in nature; it has the advantage of requiring only the first derivative of the cost function. In order to employ the Gauss-Newton method to solve Problem 7.1.1, we need to calculate the first derivative at each iteration. As the cost f(Q, G, K) has three variables with dependence relationships among them, we first use an elimination technique to simplify the cost function and then take a first order Taylor approximation to derive the first derivatives.

This chapter is organized as follows. (For more information on the Gauss-Newton method, refer to Section 9.2.) The main algorithm is presented in Section 7.2; since the calculation is intensive and hard to present in full, Section 7.2 contains an overview of the algorithm. Section 7.3 presents results of applying the algorithm to some randomly generated problems. Further analysis and experiments will be done in future work.

7.2 The Gauss-Newton Method

This section presents the solution algorithm. The cost function f(Q, G, K) has three variables. If K and Q are given and fixed, the calculation of G reduces to a standard least squares problem, and the solution for G is a formula in K and Q. Substituting this solution back into (7.1.1) gives a new cost function in K and Q. If now Q is regarded as given and fixed, the calculation of K in the new cost function reduces to a standard least squares problem in K, and the solution for K is a formula in Q. Substituting this solution back into the cost function, we obtain, after these two steps, a cost function in the variable Q only, to which we can apply a first order Taylor approximation and hence the Gauss-Newton method. What follows describes this procedure in greater detail. Throughout this section we assume A, B and C are given and fixed. Capital letters are used to represent matrices and small bold letters represent vectors.

Stage I: eliminate G. Writing g for the vector of free entries of G and Q_g for the constant selection matrix with vec(G) = Q_g g, and using the orthogonal invariance of the norm,

f(Q, G, K) = ‖Q(D + G)Q^T − (A + BKC)‖^2
           = ‖vec(G) − vec[Q^T(A + BKC)Q − D]‖_2^2
           = ‖Q_g g − vec(V)‖_2^2,

where V := Q^T(A + BKC)Q − D. Setting the derivative with respect to g to zero gives

g_opt = (Q_g^T Q_g)^{−1} Q_g^T v, (7.2.1)

where v := vec(V). So the new cost function is

f(Q, K) = ‖[Q_g (Q_g^T Q_g)^{−1} Q_g^T − I] vec(Q^T(A + BKC)Q − D)‖_2^2.
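In Matlab, Stage I for the example (7.1.2) might look as follows (our sketch; Q, K, A, B, C, D are assumed to hold the current values, and the vec indices below are for a 4×4 G with column-major stacking):

    n = 4; pos = [5 9 13 14 15];          % vec-indices of g1,...,g5 in vec(G)
    Qg = zeros(n^2, numel(pos));
    for j = 1:numel(pos), Qg(pos(j), j) = 1; end   % selection matrix Qg

    V = Q'*(A + B*K*C)*Q - D;             % Q, K fixed at current values
    v = V(:);                             % v = vec(V)
    g_opt = (Qg'*Qg) \ (Qg'*v);           % (7.2.1); here Qg'*Qg = I, so g_opt
                                          % just picks the entries of v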

Consider example (7.1.2) again. Note that g = [g_1 g_2 g_3 g_4 g_5]^T and that Q_g is the 16×5 selection matrix whose columns are the standard basis vectors of R^16 picking out the positions of g_1, ..., g_5 in vec(G). For a given problem, Q_g is a constant matrix.

Stage II: eliminate K. Define Q̄ := Q_g (Q_g^T Q_g)^{−1} Q_g^T − I. We have

f(Q, K) = ‖Q̄ vec(Q^T(A + BKC)Q − D)‖_2^2
        = ‖Q̄ vec(Q^T A Q − D) + Q̄ (Q^T C^T ⊗ Q^T B) k‖_2^2,

where k := vec(K). Setting the derivative with respect to k to zero gives

k_opt = −(Y^T Y)^{−1} Y^T w, (7.2.2)

where Y := Q̄ (Q^T C^T ⊗ Q^T B) and w := Q̄ vec(Q^T A Q − D). Defining Q̃ := I − Y (Y^T Y)^{−1} Y^T, we have the new cost function

f(Q) = ‖Q̃ w‖_2^2.

Note that Q̄ is a constant matrix, since Q_g is fixed once the problem is given. Written out in full, the cost function to which we apply the Gauss-Newton method is

f(Q) = ‖Q̃ w‖_2^2
     = ‖[I − Q̄ (Q^T C^T ⊗ Q^T B) [(CQ ⊗ B^T Q) Q̄^T Q̄ (Q^T C^T ⊗ Q^T B)]^{−1} (CQ ⊗ B^T Q) Q̄^T] Q̄ vec(Q^T A Q − D)‖_2^2.
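Stage II continues in the same vein, using the identity vec(AXB) = (B^T ⊗ A) vec(X); again this is our illustrative code, with K ∈ R^{m×p} and Qg, Q from the previous sketch:

    Qbar  = Qg*((Qg'*Qg)\Qg') - eye(n^2);         % constant, given Qg
    Y     = Qbar * kron(Q'*C', Q'*B);             % coefficient of k = vec(K)
    w     = Qbar * reshape(Q'*A*Q - D, [], 1);
    k_opt = -(Y'*Y) \ (Y'*w);                     % (7.2.2)
    K     = reshape(k_opt, m, p);                 % recover K from vec(K)
    r     = w + Y*k_opt;                          % residual: f(Q) = norm(r)^2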

So far the cost function f(Q, G, K) has been reduced to a cost function f(Q) in Q alone. The first order Taylor approximation of f(Q) is based on the fact that an orthogonal matrix near Q_0 can be written as Q = Q_0 (I + Ω + O(Ω^2)), where Ω is a skew-symmetric matrix. The calculation of the Jacobian of f(Q) is quite lengthy and hence we omit it here. Our algorithm for Problem 7.1.1 is as follows.

Algorithm: Problem 7.1.1

Data. A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and λ_D ∈ C^n.

Initialization. Choose a randomly generated orthogonal matrix Q ∈ O_n, a randomly generated block super triangular matrix G (whose form is determined by λ_D) and a randomly generated matrix K ∈ R^{m×p}.

repeat
1. Calculate g_opt using (7.2.1).
2. Calculate k_opt using (7.2.2).
3. Form the first order Taylor approximation of f(Q).
4. Solve for the step Ω and update Q.
until f(Q, G, K) < ε.
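Since the analytic Jacobian is omitted here, the following Matlab sketch only illustrates the structure of the iteration, approximating the Jacobian by finite differences over the skew-symmetric parameters; it is our reconstruction under that substitution, not the thesis' exact implementation (build_D and Qg are from the earlier sketches; the helper gn_residual sits at the end of the script file):

    D = build_D(lamD); n = size(A, 1);
    [Q, ~] = qr(randn(n));                    % random orthogonal initial Q
    nOm = n*(n-1)/2; h = 1e-6; tol = 1e-9;
    E = cell(nOm, 1);                         % basis of skew-symmetric matrices
    c = 0;
    for col = 2:n
        for row = 1:col-1
            c = c + 1; Om = zeros(n);
            Om(row, col) = 1; Om(col, row) = -1; E{c} = Om;
        end
    end
    for iter = 1:100
        r0 = gn_residual(Q, A, B, C, D, Qg);
        if norm(r0)^2 < tol, break; end
        J = zeros(numel(r0), nOm);            % finite-difference Jacobian
        for c = 1:nOm
            J(:, c) = (gn_residual(Q*expm(h*E{c}), A, B, C, D, Qg) - r0)/h;
        end
        om = -(J'*J) \ (J'*r0);               % Gauss-Newton step
        Om = zeros(n);
        for c = 1:nOm, Om = Om + om(c)*E{c}; end
        Q = Q*expm(Om);                       % retract back onto O_n
    end

    function r = gn_residual(Q, A, B, C, D, Qg)
        % Residual r with f(Q) = ||r||^2, after eliminating G and K.
        n = size(A, 1);
        Qbar = Qg*((Qg'*Qg)\Qg') - eye(n^2);
        Y = Qbar * kron(Q'*C', Q'*B);
        w = Qbar * reshape(Q'*A*Q - D, [], 1);
        r = w - Y*((Y'*Y) \ (Y'*w));          % = Qtilde * w
    end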

7.3 Computational Results

This section contains numerical results from applying the algorithm to randomly generated problems. The algorithm was implemented in Matlab 6.5 and all results were obtained using a 3.19 GHz Pentium 4 machine.

1000 random problems were created for each of a number of different choices of the system dimensions (n, m, p). Each problem was created as follows. A, B and C were randomly generated, with entries drawn from a normal distribution of zero mean and variance 1. To ensure each problem is feasible, the desired eigenvalues λ_D were taken from a random matrix, and a scalar was added to λ_D to ensure max_i Re λ_D,i = −0.1. Initial conditions were chosen randomly. Each triple (n, m, p) was chosen to satisfy m + p > n; as the problems are randomly generated and satisfy m + p > n, Kimura's condition (see Theorem A.0.3) ensures each problem is solvable.

    (n, m, p)               (6, 4, 3)   (9, 5, 5)
    Gauss-Newton   S.R.     98%         99%
    Method         T
                   i

Table 7.1: A comparison of performance for different n, m and p. S.R. denotes the success rate, T denotes the average convergence time in CPU seconds, and i the average number of iterations. T and i are based only on those problems that were successfully solved. ε =

As can be seen from Table 7.1, the results for the algorithm are very good: it solves almost all of the randomly generated problems. Figure 7.1 shows a typical plot of f(Q, G, K) versus the iteration number i; it exhibits the desired property that, near a solution, convergence is quadratic. As mentioned in the introduction, a more comprehensive analysis and further numerical experiments will be carried out in the near future.
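A sketch of the random problem generation just described (our reading of the setup; the shift value is the one quoted above):

    n = 6; m = 4; p = 3;                    % m + p > n, so Kimura's condition holds
    A = randn(n); B = randn(n, m); C = randn(p, n);
    lamD = cplxpair(eig(randn(n)));         % spectrum of a random real matrix,
                                            % conjugate pairs grouped together
    lamD = lamD - (max(real(lamD)) + 0.1);  % shift so max_i Re(lamD_i) = -0.1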

108 f(q, G, K) Iteration No. i Figure 7.1: Quadratic convergence near solution of the Gauss-Newton algorithm. 7.4 Summary In this chapter, a Gauss-Newton algorithm is proposed for solving static output feedback pole placement problem. The problem is formulated as a constrained nonlinear least squares problem and minimization is achieved via a Gauss-Newton method. As this chapter gives an overview of the algorithm and contains some of the numerical results, convergence analysis and numerical experiments will be done in near future, which we believe will be performing well. Keywords: pole placement, static output feedback, Gauss-Newton method.

Part IV

Problems Arising in Nonnegative Matrices

In this part, numerical algorithms are presented for solving two inverse eigenvalue problems, namely, the inverse eigenvalue problem for nonnegative matrices (NIEP), or stochastic matrices (StIEP), and the inverse eigenvalue problem for symmetric nonnegative matrices (SNIEP).

In Chapter 8, we present two related numerical methods, one for the NIEP/StIEP and another for the SNIEP. The methods are iterative in nature and utilize alternating projection ideas. For the symmetric problem, the main computational component of each iteration is an eigenvalue-eigenvector decomposition, while for the other problem it is a Schur matrix decomposition. Numerical results are presented demonstrating that the algorithms are very effective in solving various problems, including high dimensional ones.

In Chapter 9, two related numerical algorithms are presented for the NIEP/StIEP and the SNIEP. Both algorithms are iterative in nature; one is based on Newton's method and the other on the Gauss-Newton method. The main computational components of each iteration are the first and second order derivatives of the eigenvalues. Extensive numerical experiments show that the algorithms are very effective in practice, though convergence to a solution is not guaranteed for either algorithm.

Chapter 8

A Projective Methodology for Nonnegative Inverse Eigenvalue Problems

8.1 Introduction

A real n × n matrix is said to be nonnegative if each of its entries is nonnegative. The Nonnegative Inverse Eigenvalue Problem (NIEP) is the following: given a list of n complex numbers λ = {λ_1, ..., λ_n}, find a nonnegative n × n matrix with eigenvalues λ (if such a matrix exists). A related problem is the Symmetric Nonnegative Inverse Eigenvalue Problem (SNIEP): given a list of n real numbers λ = {λ_1, ..., λ_n}, find a symmetric nonnegative n × n matrix with eigenvalues λ (if such a matrix exists).

The NIEP and SNIEP are different problems even if λ is restricted to contain only

real entries; there exist lists of n real numbers λ for which the NIEP is solvable but the SNIEP is not [42]. Finding necessary and sufficient conditions for a list λ to be realizable as the eigenvalues of a nonnegative matrix has been a challenging area of research for over fifty years and the problem is still unsolved; see the recent survey paper [28]. As noted in [19, Section 6], while various necessary or sufficient conditions exist, the necessary conditions are usually too general while the sufficient conditions are too specific. Under a few special sufficient conditions, a nonnegative matrix with the desired spectrum can be constructed; in general, however, proofs of sufficient conditions are nonconstructive. Two sufficient conditions that are constructive and not restricted to small n are given in [65], for the SNIEP, and [68], for the NIEP with real λ. (See also [58] for an extension of the results of the latter paper.) A good overview of known results relating to necessary or sufficient conditions can be found in the recent survey paper [28], and general background material on nonnegative matrices, including inverse eigenvalue problems and applications, can be found in the texts [3] and [54]. We also mention the recent paper [22], which can be used to help determine whether a given list λ may be realizable as the eigenvalues of a nonnegative matrix.

In this chapter we are interested in generally applicable numerical methods for solving NIEPs and SNIEPs. To the best of our knowledge, the only algorithms that have appeared in the literature up to now are those of [18], for the SNIEP, and [21], for the NIEP. In [18], the following constrained optimization problem is considered:

min_{Q^T Q = I, R = R^T}  (1/2) ‖Q^T Λ Q − R ∘ R‖^2. (8.1.1)

Here Λ is a constant diagonal matrix with the desired spectrum and ∘ stands for the Hadamard product, i.e., the componentwise product. Note that the symmetric matrices with the desired spectrum are exactly the elements of {Q^T Λ Q : Q ∈ O_n} and that the symmetric nonnegative matrices are exactly the elements of {R ∘ R : R ∈ S_n}. In [18], a gradient flow based on (8.1.1) is constructed; a solution to the SNIEP is found if the gradient flow converges to a Q and an R that zero the objective function. The approach taken in [21] for the NIEP is similar but is complicated by the fact that the set of all matrices, both symmetric and nonsymmetric, with a particular desired spectrum is not nicely parameterizable. In particular, these matrices can no longer be parameterized by the orthogonal matrices.

In this chapter we present a numerical algorithm for the NIEP and another for the SNIEP. In both cases, the problems are posed as problems of finding a point in the intersection of two particular sets. Unlike the approaches in [18] and [21], which are based on gradient flows, our algorithms are iterative in nature. For the SNIEP, the solution methodology is based on an alternating projection scheme between the two sets in question. The solution methodology for the NIEP is also based on an alternating projection like scheme but is more involved, as we will shortly explain. While alternating projections can often be a very effective means of finding a point in the intersection of two or more convex sets, for both the SNIEP and NIEP formulations one of the sets is nonconvex. Nonconvexity of one of the sets means that alternating projections may not converge to a solution; this is in contrast to the case where all sets are convex and convergence to a solution is guaranteed.

As mentioned above, for each problem, one set in the problem formulation is nonconvex. For the NIEP, this set is particularly complicated; it consists of all matrices
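The two parameterizations above, and the objective (8.1.1), are easy to evaluate; a small Matlab sketch (our code, not from [18], with an illustrative spectrum):

    lam = [5 -1 -2 -2];                 % an illustrative spectrum
    Lam = diag(lam); n = length(lam);
    [Q, ~] = qr(randn(n));              % Q'*Lam*Q is symmetric with spectrum lam
    R = randn(n); R = (R + R')/2;       % symmetric R, so R.*R is symmetric nonnegative
    f = 0.5 * norm(Q'*Lam*Q - R.*R, 'fro')^2;   % objective (8.1.1)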

with the desired spectrum. At least some of the members of this set will be nonsymmetric matrices and it is this that causes complications. In particular, though the set is closed, and hence projections onto it are well defined theoretically, how to calculate projections onto such sets is a difficult unsolved problem. An alternate method for mapping onto this set is used, motivated by the control counterpart in Part III. Though the resulting points are not necessarily projected points, they are members of the set and share a number of other desirable properties. As will be shown, this alternate projection is very effective in our context. Furthermore, we believe that it may also be quite effective for other inverse eigenvalue problems involving nonsymmetric matrices. For more on other inverse eigenvalue problems, see the survey papers [17] and [19], and the recent text [20].

Before concluding this introductory section we would like to point out how the NIEP is related to another problem involving stochastic matrices. An n × n matrix is said to be stochastic if it is nonnegative and the sum of the entries in each row equals one. A variation of the NIEP is the STochastic Inverse Eigenvalue Problem (StIEP): given a list of n complex numbers λ = {λ_1, ..., λ_n}, find a stochastic n × n matrix with eigenvalues λ (if such a matrix exists). It turns out that the NIEP and the StIEP are almost exactly the same problem, as we now show. (See also [21].)

The vector of all 1's is always an eigenvector of a stochastic matrix, implying each stochastic matrix must have 1 as an eigenvalue. Also, the maximum row sum matrix norm of a stochastic matrix equals 1, hence the spectral radius cannot be greater than 1 and, as a result, must actually equal 1. Suppose λ satisfies the above mentioned necessary conditions to be the spectrum of a stochastic matrix and that a nonnegative matrix A with this spectrum can be found. Then, if an eigenvector x of A corresponding to the eigenvalue 1 can be chosen to have positive entries (by the Perron-Frobenius theorem this is certainly possible if A is irreducible), and we define D = diag(x), it is straightforward to verify that D^{−1} A D is a stochastic matrix with the desired spectrum. (In fact it can be shown that if λ satisfies the above mentioned necessary conditions, then it is the spectrum of a stochastic matrix if and only if it is the spectrum of a nonnegative matrix [77, Lemma 5.3.2].)
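A quick sketch of this nonnegative-to-stochastic conversion (our code; it assumes A has spectral radius 1 and a positive Perron eigenvector, e.g. A irreducible):

    [X, E] = eig(A);
    [~, j] = max(real(diag(E)));        % locate the eigenvalue 1
    x = abs(real(X(:, j)));             % Perron eigenvector, made positive
    D = diag(x);
    S = D \ (A * D);                    % D^{-1}*A*D is stochastic
    max(abs(S*ones(size(S,1),1) - 1))   % row sums deviate from 1 by ~0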

This chapter is structured as follows 1. The SNIEP algorithm is presented first, in Section 8.2, and then insights from this algorithm are used to address the more difficult NIEP in Section 8.3. Numerical results for both algorithms are presented in Section 8.4.

8.2 The Symmetric Problem

Our algorithm for solving the SNIEP consists of alternately projecting onto two particular sets. The details are given in this section.

Given a list of real eigenvalues λ = {λ_1, ..., λ_n}, renumbering if necessary, suppose λ_1 ≥ ... ≥ λ_n. Let

Λ = diag(λ_1, ..., λ_n), (8.2.1)

and let M denote the set of all real symmetric matrices with eigenvalues λ,

M = {A ∈ S_n : A = V Λ V^T for some orthogonal V}. (8.2.2)

1 Please refer to Chapter 2 for background information on projections.

Let N denote the set of symmetric nonnegative matrices,

N = {A ∈ S_n : A_ij ≥ 0 for all i, j}. (8.2.3)

The SNIEP can now be stated as the following particular case of Problem 2.3.1:

Find X ∈ M ∩ N. (8.2.4)

Our solution approach is to alternately project between M and N, and we next show that it is indeed possible to calculate projections onto these sets.

Figure 8.1 illustrates the problem formulation and the alternating projections for the SNIEP. For visualization convenience, the figure is drawn in R^3; it should be clear, however, that the actual problem can be of any dimension. The pink ball represents the set M (actually nonconvex) and the box region represents N; the shaded pink region is the intersection of the two sets. We know exactly how to calculate projections onto both N and M, and hence both projections are readily computed.

Figure 8.1: Illustration of the problem formulation for the SNIEP, showing a starting point, the alternating projections and the feasible set M ∩ N.
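A minimal Matlab sketch of the resulting iteration (our code, anticipating the projection computations described in this section; lam_desired holds the list λ): it maps onto M by keeping the eigenvectors of the current iterate and replacing its sorted eigenvalues by λ, and projects onto N by clamping negative entries to zero.

    lam = sort(lam_desired(:), 'descend');   % lambda_1 >= ... >= lambda_n
    Lam = diag(lam); n = length(lam);
    X = rand(n); X = (X + X')/2;             % symmetric starting point
    for iter = 1:5000
        [V, E] = eig(X);                     % map onto M: keep eigenvectors,
        [~, idx] = sort(diag(E), 'descend'); % replace sorted eigenvalues by lam
        X = V(:, idx) * Lam * V(:, idx)';
        Xn = max(X, 0);                      % project onto N: clamp negatives
        if norm(Xn - X, 'fro') < 1e-12, break; end
        X = Xn;
    end
    % On success, X is (numerically) symmetric nonnegative with spectrum lam.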
