Optimal Input Elimination
SICE Journal of Control, Measurement, and System Integration, Vol. 11, No. 2, March 2018

Kazuhiro SATO

Abstract: This paper studies an optimal input elimination problem in large-scale network systems. We first solve an H2 optimization problem for the difference between the transfer functions of the original system and the system after the input elimination. It is shown that the problem can be rigorously solved by calculating the gradient and Hessian of the objective function. The solution means that, once the input variables to be eliminated are fixed, the H2 optimal input elimination is achieved by simply eliminating those input variables without changing the driver nodes, which are the state variables directly affected by an input signal. We next solve a finite combinatorial optimization problem to decide which input variables to eliminate. Its objective function is defined by using the solution to the H2 optimization problem. It is shown that a greedy algorithm gives the global optimal solution to the finite combinatorial problem within a practical time. The algorithm can be understood as eliminating input variables in ascending order of the average controllability centralities, which assign a relative importance to each node within a network. Finally, we demonstrate how to use the results in this paper by a simple example.

Key Words: input elimination, H2 optimization, large-scale network systems.

© 2017 SICE

1. Introduction

Large-scale network systems such as smart grids and social networks have received increased attention in recent years. For such complex networks, it is not clear how we should select the driver nodes, that is, the state variables directly affected by an input signal. Although the selection problem can be formulated as a finite combinatorial optimization problem, a brute-force approach for solving it quickly becomes intractable.
For this reason, there are many previous works on input selection problems [1]-[14]. Reference [4] has shown that sparse inhomogeneous networks are difficult to control, while dense homogeneous networks can be controlled by using a few driver nodes. Furthermore, it has been found that the minimum number of driver nodes needed to make a system controllable is mainly determined by the degree distribution of the network. However, reference [5] has proved that finding such driver nodes is non-deterministic polynomial time (NP) hard. To address this computationally hard problem, references [6],[11]-[14] have introduced energy performance indices for the choice of the optimal input set and have solved the resulting problems by efficient approximation algorithms. Moreover, references [2],[7]-[10] have characterized all possible solutions to problems related to that in [5] by using graph theory and have given efficient numerical algorithms for solving them.

In this paper, we study an optimal input elimination problem related to the input selection problems. This problem is based on a different viewpoint from the previous works [1]-[14]. In fact, all the previous studies add new input variables to enhance the controllability of a network, while we eliminate redundant input variables, which hardly affect the output. Our problem is practically important: when some input variable hardly affects any output in a network, i.e., when its elimination hardly reduces the system performance, we had better eliminate the variable to reduce cost.

(The author is with the School of Regional Innovation and Social Design Engineering, Kitami Institute of Technology, Hokkaido, Japan. E-mail: ksato@mail.kitami-it.ac.jp. Received July 26, 2017; revised October 2, 2017.)

To formulate the problem, we consider the linear system

  ẋ = Ax + Bu,  (1)
  y = Cx,

where x ∈ R^n, u ∈ R^m, and y ∈ R^p are the state, input, and output variables, respectively, and A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{p×n} are constant matrices.
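As a small illustration (with our own toy numbers, not taken from the paper), the model (1) can be simulated directly, for instance with SciPy's `solve_ivp`:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy instance of system (1): n = 2 states, m = 2 inputs, p = 1 output.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.eye(2)
C = np.array([[1.0, 1.0]])

def u(t):
    # An arbitrary input signal chosen for illustration.
    return np.array([np.sin(t), 1.0])

# Integrate x' = Ax + Bu(t) from x(0) = 0.
sol = solve_ivp(lambda t, x: A @ x + B @ u(t),
                t_span=(0.0, 10.0), y0=np.zeros(2), max_step=0.01)
y = C @ sol.y  # output trajectory y = Cx along the solution
```

Since A here is stable and the second input is constant, the second state settles near 0.5, which is one way to sanity-check the integration.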
The system (1) can express a network system. In fact, in the terminology of graph theory, the matrix A induces a graph G = (V, E) of the network in which the nodes correspond to the states, i.e., V = {1, 2, ..., n}, and the edges correspond to the nonzero entries of A, i.e., (i, j) ∈ E whenever a_{ji} ≠ 0. The nonzero entries of the matrix B describe how each actuator affects the nodes in the network; i.e., we can consider that the matrix B determines the driver nodes in the network system (1). Furthermore, we assume that the system after the input elimination is expressed as

  ẋ̄ = Ax̄ + B̄u_J,  (2)
  ȳ = Cx̄,

where B̄ ∈ R^{n×(m−r)}, and u_J ∈ R^{m−r} is the input constructed by eliminating input variables from the original input u. The elimination is specified by a given set J which contains r elements. Then, we first consider an H2 optimization problem for the difference between the transfer functions of the original system (1) and a system which is equivalent to the reduced system (2), subject to the fixed set J. Next, to decide the optimal set J, we solve a finite combinatorial optimization problem whose objective function is defined by using the solution of the H2 optimization problem.

The contributions of this paper are as follows.

1) We prove that the optimal solution to the H2 optimization
problem is essentially given by B̄ = B_J, where B_J is defined as the matrix reconstructed by eliminating from B the column vectors specified by the set J. To this end, we calculate the gradient and Hessian of the objective function. The optimal solution means that the H2 optimal elimination is achieved by simply eliminating input variables, without changing the driver nodes, once the input variables to be eliminated are determined. Furthermore, we prove that, if the system (1) is observable, then the solution is the unique global optimal solution.

2) We prove that the optimal set J can be obtained by a greedy algorithm in polynomial time. This is non-trivial because finite combinatorial problems are NP-hard in general. The proof is achieved by using the results of [13]. The algorithm can be understood as eliminating input variables in ascending order of the average controllability centrality, which assigns a relative importance to each node within a network.

This paper is organized as follows. In Section 2, we formulate the H2 optimization problem and the finite combinatorial optimization problem to be solved in this paper. In Section 3, we rigorously solve the H2 optimization problem by calculating the gradient and Hessian of the objective function. In Section 4, we show that a greedy algorithm solves the finite combinatorial optimization problem. In Section 5, we demonstrate how to use the results in this paper by a simple example. The conclusion is presented in Section 6.

Notation: The sets of real and complex numbers are denoted by R and C, respectively. The identity matrix of size n is denoted by I_n. The symbol 0_n ∈ R^n is the vector with only zero entries. Given a vector v ∈ C^n, ‖v‖ denotes the Euclidean norm. The Hilbert space L2(R^n) is defined by

  L2(R^n) := { f : [0, ∞) → R^n | ∫_0^∞ ‖f(t)‖² dt < ∞ }.
Given a measurable function f : [0, ∞) → R^n, ‖f‖_{L2} and ‖f‖_{L∞} denote the L2 and L∞ norms of f, respectively, i.e.,

  ‖f‖_{L2} := √( ∫_0^∞ ‖f(t)‖² dt ),
  ‖f‖_{L∞} := sup_{t ≥ 0} ‖f(t)‖.

Given a matrix A ∈ C^{m×n}, ‖A‖ and ‖A‖_F denote the induced norm and the Frobenius norm, respectively, i.e.,

  ‖A‖ := max_{v ∈ C^n \ {0}} ‖Av‖ / ‖v‖,
  ‖A‖_F := √( tr(A*A) ),

where the superscript * denotes the Hermitian conjugate and tr(A*A) is the trace of A*A, i.e., the sum of the diagonal elements of A*A. For a matrix function G(s) ∈ C^{m×n}, ‖G‖_{H2} and ‖G‖_{H∞} denote the H2 and H∞ norms of G, respectively, i.e.,

  ‖G‖_{H2} := √( (1/2π) ∫_{−∞}^{∞} ‖G(iω)‖_F² dω ),
  ‖G‖_{H∞} := sup_{ω ∈ R} σ̄(G(iω)),

where i denotes the imaginary unit and σ̄(G(iω)) denotes the maximum singular value of G(iω).

2. Problem Setup

This section formulates the two problems to be solved. Let us consider the linear system (1) as the original network system. The transfer function of the system (1) is defined as G(s) := C(sI_n − A)^{−1}B for s ∈ C. To rigorously formulate our problem, we also consider

  ẋ̃ = Ax̃ + B̃u,  (3)
  ỹ = Cx̃,

where B̃ ∈ S_J ⊂ R^{n×m}, and the transfer function is given by G̃(s) := C(sI_n − A)^{−1}B̃. Here, J is composed of the numbers which specify which columns are replaced by 0_n, and S_J is defined as the set of real matrices whose entries are free except for the column vectors specified by J, which are fixed to 0_n; i.e., S_J is a subspace of R^{n×m}. Note that the system (3) is equivalent to the form (2). That is, the system (3) is a system after the input elimination. For example, if the number of inputs is m = 6 and J = {1, 2, 5}, then

  S_J = { (0_n  0_n  B_1  0_n  B_2) | B_1 ∈ R^{n×2}, B_2 ∈ R^n },

and (3) is expressed as

  ẋ̃ = Ax̃ + (B_1  B_2)(u_3  u_4  u_6)^T,  (4)
  ỹ = Cx̃.

This means that the input variables u_1, u_2, and u_5 were eliminated from the original system. Thus, the system (4) has the form (2). In this paper, we want to find the system (3) which best approximates the original system (1) in the sense of the difference between the outputs of the systems (1) and (3).
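To make these objects concrete, the following sketch (our own illustration, not code from the paper) builds an element of S_J by zeroing the specified columns and checks the frequency-domain definition of the H2 norm against the equivalent controllability-Gramian formula ‖G‖²_{H2} = tr(CW_cC^T), a standard identity used later in the paper; it assumes NumPy and SciPy's `solve_continuous_lyapunov`:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A small stable example system (our own numbers, for illustration).
A = np.array([[-1.0, 0.3], [0.0, -2.0]])
B = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 1.0]])  # m = 3 inputs
C = np.array([[1.0, 1.0]])

# An element of S_J for J = {2} (0-based column index 1): that column is 0_n.
B_tilde = B.copy()
B_tilde[:, 1] = 0.0

def h2_norm_freq(A, Bd, C, wmax=2000.0, num=40001):
    """H2 norm from its frequency-domain definition (trapezoidal quadrature)."""
    n = A.shape[0]
    ws = np.linspace(-wmax, wmax, num)
    vals = np.array([np.linalg.norm(C @ np.linalg.solve(1j * w * np.eye(n) - A, Bd)) ** 2
                     for w in ws])            # ||G(iw)||_F^2 on the grid
    dw = ws[1] - ws[0]
    integral = np.sum((vals[1:] + vals[:-1]) / 2.0) * dw
    return np.sqrt(integral / (2.0 * np.pi))

def h2_norm_gramian(A, Bd, C):
    """Equivalent Gramian formula: ||G||_H2^2 = tr(C Wc C^T)."""
    Wc = solve_continuous_lyapunov(A, -Bd @ Bd.T)
    return np.sqrt(np.trace(C @ Wc @ C.T))

# Since (1) and (3) share A and C, G - G~ is the transfer function with
# input matrix B - B_tilde, so ||G - G~||_H2 can be computed both ways.
d1 = h2_norm_freq(A, B - B_tilde, C)
d2 = h2_norm_gramian(A, B - B_tilde, C)
```

The two values agree up to the truncation of the frequency integral, which is the kind of consistency check worth running before using the Gramian formula at scale.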
To this end, we assume that the matrix A is stable; i.e., the real parts of all eigenvalues of the matrix A are negative. This is because, if the matrix A is not stable, ỹ is usually quite different from y for any B̃ ∈ S_J. We thus consider the following optimal input elimination problem.

Problem 1:
  minimize  g(J)
  subject to  J ⊂ {1, 2, ..., m}, |J| = r.

Here, g(J) := f(B̃*), and |J| denotes the number of elements in J, where

  f(B̃) := ‖G − G̃‖²_{H2},  (5)

and B̃* ∈ S_J is a solution to the following optimization problem.
Problem 2:
  minimize  f(B̃)
  subject to  B̃ ∈ S_J.

Problems 1 and 2 mean that we determine the driver nodes of the system after the input elimination such that G̃ best approximates G in the sense of the H2 norm. The difference between our problem and the problem in [13] is explained in Remark 1. By solving Problems 1 and 2, we can eliminate redundant input variables which hardly affect the output. This serves to reduce cost when the input elimination hardly reduces the system performance, i.e., when ‖G − G̃‖_{H2} is sufficiently small.

The aim of Problem 2 is to eliminate input variables such that ‖y − ỹ‖_{L∞} becomes as small as possible for a given set J. In fact, since the matrix A is stable,

  ‖y − ỹ‖_{L∞} ≤ ‖G − G̃‖_{H2} ‖u‖_{L2}

for any u ∈ L2(R^m). The proof is similar to that in Appendix A of [15]. Hence, if f(B̃) is sufficiently small, we can expect ‖y − ỹ‖_{L∞} to be small. As a result, Problem 1 means that we select the set J such that ‖y − ỹ‖_{L∞} is as small as possible. Furthermore, note that we have

  ‖y − ỹ‖_{L2} ≤ ‖G − G̃‖_{H∞} ‖u‖_{L2}

for any u ∈ L2(R^m). However, the modified problem in which f(B̃) is replaced with ‖G − G̃‖_{H∞} is difficult to solve rigorously, as explained in Remark 3.

Since Problem 1 is a finite combinatorial optimization problem, we could solve it by brute force, enumerating all possible subsets of size r, evaluating g for all of these subsets, and picking the best subset, provided we can obtain the solution to Problem 2. However, when we consider a large-scale network system, the number of possible subsets increases factorially as |J| becomes larger. As a result, the brute-force approach quickly becomes intractable. To solve Problem 1, we need to obtain the solution to Problem 2. Hence, we first give a global optimal solution to Problem 2 in the next section. An efficient method for solving Problem 1 is proposed in Section 4.

Remark 1  Although Problem 1 is similar to the input selection problem in [13], Problem 2 is novel. In fact, [13] has considered the following problem.

Problem 3:
  maximize  h(S)
  subject to  S ⊂ {b′_1, b′_2, ..., b′_N}, |S| = k.

Here, the function h(S) means a magnitude of the controllability of the system

  ẋ = Ax + B′u′,
  y = Cx,

where B′ = (B  b′_{i_1}  b′_{i_2}  ···  b′_{i_k}) ∈ R^{n×(m+k)} and u′ = (u^T  u′_1  u′_2  ···  u′_k)^T ∈ R^{m+k}. That is, the matrix B′ is an augmented matrix of B, and the vector u′ is an augmented vector of u. In Problem 3, b′_1, b′_2, ..., b′_N are arbitrary, and h(S) depends on them. In Problem 1, g(J) depends on B̃* because g(J) := f(B̃*). However, B̃* is not arbitrary, because B̃* is a solution to Problem 2. Thus, the main novelty of this paper is to solve Problem 2.

Remark 2  Our key idea for formulating Problem 2 is to use the transfer function G̃ of the system (3) instead of the transfer function Ḡ = C(sI_n − A)^{−1}B̄ of the system (2). This is based on the observation that, since the size of Ḡ is different from that of G, we cannot calculate the difference of the two transfer functions; i.e., G − Ḡ cannot be defined in the usual sense. In contrast, we can calculate G − G̃ because the size of G̃ is the same as that of G.

Remark 3  The objective function f(B̃) is differentiable, as discussed in Section 3. However, the function ‖G − G̃‖_{H∞} is not differentiable; i.e., we cannot calculate the gradient of ‖G − G̃‖_{H∞}. Thus, if we replace the objective function f(B̃) of Problem 2 with ‖G − G̃‖_{H∞}, we cannot expect to obtain the solution to the new problem. For this reason, we adopt f(B̃) as the objective function of Problem 2. Furthermore, note that it is possible to bound the H∞ norm from above by a constant multiple of the H2 norm if the pole structure of the transfer function is known [16],[17]. Hence, we can expect the solution to Problem 2 to be a near-optimal solution to the modified problem in some cases.

3. A Global Optimal Solution to Problem 2

This section proves the following theorem.
Theorem 1  A global optimal solution B̃* ∈ S_J to Problem 2 is given by

  B̃* = (B)_{J-elimination},  (6)

where (B)_{J-elimination} denotes the matrix which is the same as B except that the column vectors specified by the set J are replaced by 0_n. Moreover, if the system (1) is observable, then (6) is the unique global optimal solution to Problem 2.

Theorem 1 means that, once the set J is determined, we can achieve the optimal elimination by simply eliminating the input variables specified by J. That is, the system after the H2 optimal input elimination specified by J is given by

  ẋ̃ = Ax̃ + B_J u_J,  (7)
  ỹ = Cx̃,

where B_J ∈ R^{n×(m−r)} is defined as the matrix reconstructed by eliminating from B the column vectors specified by J, and u_J ∈ R^{m−r} denotes the input constructed by eliminating from u the input variables specified by J.

To prove Theorem 1, we note that f(B̃) can be written as

  f(B̃) = tr(CW_c C^T) = tr((B − B̃)^T W_o (B − B̃)),  (8)

because the matrix A is stable, where W_c and W_o are the controllability and observability Gramians, respectively, which are the solutions to the Lyapunov equations
  AW_c + W_c A^T + (B − B̃)(B − B̃)^T = 0,  (9)
  A^T W_o + W_o A + C^T C = 0.  (10)

The proof of (8) is similar to that in Appendix B of [15].

From now on, we derive the gradient and Hessian of the objective function f. Let f̄ denote the extension of the objective function f to the ambient Euclidean space R^{n×m}. The directional derivative of f̄ at B̃ in the direction B̃′ can be calculated as

  Df̄(B̃)[B̃′] = tr(B̃′^T(−2W_o(B − B̃))).  (11)

Since the Euclidean gradient ∇f̄(B̃) satisfies Df̄(B̃)[B̃′] = tr(B̃′^T ∇f̄(B̃)), (11) implies that ∇f̄(B̃) = −2W_o(B − B̃). Since (∇f̄(B̃))_{J-elimination} is the projection of ∇f̄(B̃) onto the subspace S_J, the gradient grad f(B̃) for B̃ ∈ S_J is given by

  grad f(B̃) = (∇f̄(B̃))_{J-elimination}
            = −2W_o((B − B̃)_{J-elimination})
            = −2W_o((B)_{J-elimination} − B̃).  (12)

The Hessian Hess f(B̃) at any B̃ ∈ S_J is given by

  Hess f(B̃)[B̃′] = (D grad f(B̃)[B̃′])_{J-elimination} = 2W_o B̃′,  (13)

where B̃′ ∈ T_{B̃}S_J, and where T_{B̃}S_J is the tangent space of S_J at the point B̃. Here, note that T_{B̃}S_J can be identified with S_J, because S_J is a vector space. For a detailed explanation of the concept of the Hessian, see [18]. Thus,

  ⟨B̃′, Hess f(B̃)[B̃′]⟩ := tr(B̃′^T Hess f(B̃)[B̃′]) = 2 tr(B̃′^T W_o B̃′).  (14)

Since the observability Gramian W_o is symmetric positive semidefinite, (14) implies that ⟨B̃′, Hess f(B̃)[B̃′]⟩ ≥ 0 for any 0 ≠ B̃′ ∈ T_{B̃}S_J and any B̃ ∈ S_J. Hence, the objective function f is convex on S_J [19]. If (6) holds, (12) yields grad f(B̃) = 0; i.e., (6) is at least a local optimal solution. In fact, (6) is a global optimal solution to Problem 2, because the function f is convex on S_J. Moreover, if the system (1) is observable, then W_o is a symmetric positive definite matrix. Hence, (14) leads us to ⟨B̃′, Hess f(B̃)[B̃′]⟩ > 0 for any 0 ≠ B̃′ ∈ T_{B̃}S_J and any B̃ ∈ S_J; i.e., the objective function f is strictly convex on S_J [19].
This completes the proof of Theorem 1.

4. An Efficient Method for Solving Problem 1

This section shows that the global optimal solution to Problem 1 is given by Algorithm 1. As explained below, this algorithm resolves the computational difficulty of the brute-force approach. To see this, we note that

  g(J) = tr(CW_c(J)C^T),  (15)

where W_c(J) satisfies the Lyapunov equation

  AW_c(J) + W_c(J)A^T + B̂_J B̂_J^T = 0  (16)

for B̂_J := B − (B)_{J-elimination}. This is because g(J) = f((B)_{J-elimination}) from Theorem 1. By a direct calculation, (15) can be rewritten as

  g(J) = Σ_{j∈J} g({j}).  (17)

Note that this has been proved in Theorem 4 of [13]. In a discrete-time setting, [6] has proved a similar result. It follows from (15) that

  g({j}) = tr(CW_c({j})C^T),  (18)

where, from (16), W_c({j}) satisfies the Lyapunov equation

  AW_c({j}) + W_c({j})A^T + b_j b_j^T = 0.  (19)

Here, b_j denotes the j-th column vector of the matrix B.

Algorithm 1  Greedy algorithm for solving Problem 1.
  1: J ← ∅
  2: for k = 1, 2, ..., r do
  3:   J ← J ∪ arg min_{j ∈ {1,2,...,m}\J} g({j})
  4: end for

Using the relations (17) and (18), Problem 1 can be exactly solved by Algorithm 1; i.e., we can obtain the global optimal solution to Problem 1. Before we perform Algorithm 1, we can calculate g({j}) for all j ∈ {1, 2, ..., m} in advance. Hence, the Lyapunov equation (19) needs to be solved only m times. To solve (19) efficiently, we can use an effective method such as the Bartels-Stewart algorithm [20] even if the matrix A is a dense matrix. For sparse cases, we can use more effective methods; for example, see [21]. Thus, we can perform the optimal input elimination within a practical time.

The function g({j}) in (18) can be related to a dynamic network centrality measure which assigns a relative importance to each node within a network.
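Algorithm 1 and the relations (15)-(18) are easy to check numerically. Below is a Python sketch of the greedy elimination (our own illustration, not code from the paper; it assumes SciPy's `solve_continuous_lyapunov`, whose convention is AX + XA^H = Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def g_single(A, C, b_j):
    """g({j}) = tr(C Wc({j}) C^T), with A Wc + Wc A^T + b_j b_j^T = 0, eq. (19)."""
    Wc_j = solve_continuous_lyapunov(A, -np.outer(b_j, b_j))
    return np.trace(C @ Wc_j @ C.T)

def greedy_elimination(A, B, C, r):
    """Algorithm 1: greedily pick the r inputs with the smallest g({j}) (0-based)."""
    m = B.shape[1]
    scores = [g_single(A, C, B[:, j]) for j in range(m)]  # precomputed once
    J = []
    for _ in range(r):
        J.append(min((j for j in range(m) if j not in J), key=lambda j: scores[j]))
    return sorted(J), scores

# Random stable test system.
rng = np.random.default_rng(1)
n, m, p, r = 6, 4, 2, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # force stability
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

J, scores = greedy_elimination(A, B, C, r)

# Additivity (17): g(J) computed from eq. (16) equals the sum of the g({j}).
B_hat = np.zeros_like(B)
B_hat[:, J] = B[:, J]                     # B - (B)_{J-elimination}
Wc_J = solve_continuous_lyapunov(A, -B_hat @ B_hat.T)
assert np.isclose(np.trace(C @ Wc_J @ C.T), sum(scores[j] for j in J))

# Cross-check with the observability-Gramian form of eq. (8).
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
assert np.isclose(np.trace(B_hat.T @ Wo @ B_hat), np.trace(C @ Wc_J @ C.T))

# By (17), the greedy set is exactly the r smallest scores, i.e., globally optimal.
assert set(J) == set(np.argsort(scores)[:r])
```

In the special case C = I_n and b_j = e_j discussed next, g({j}) reduces to the trace of a per-node Gramian, i.e., a node centrality.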
In fact, if C = I_n and b_j = e_j, where e_j has 1 in the j-th entry and zeros elsewhere, (18) is rewritten as

  g({j}) = tr(W_c({j})),  (20)

where W_c({j}) satisfies the Lyapunov equation

  AW_c({j}) + W_c({j})A^T + e_j e_j^T = 0.

The value tr(W_c({j})) in (20) is called the average controllability centrality of the node j [13]. Hence, if C = I_n and b_j = e_j, Algorithm 1 eliminates r input variables in ascending order of the average controllability centralities, while the algorithm proposed in [13] adds input variables in descending order of those values.

5. Numerical Example

This section demonstrates how to use the results in this paper by a simple example. Note that we can also apply the results to large-scale network systems, as explained in Section 1, in the same way. The system matrices of the plant (1) are given by
A ∈ R^{5×5}, B ∈ R^{5×4}, and C, and the eigenvalues of the matrix A are −1, −1, −1, −3, and −4; i.e., A is stable. From Theorem 1 and the discussion of Section 4, the optimal solution to Problem 1 can be examined by calculating g({1}), g({2}), g({3}), and g({4}), where g({j}) (j = 1, 2, 3, 4) is defined by (18). By a direct calculation, the computed values satisfy

  g({3}) < g({1}) < g({2}) < g({4}).

Hence, if r = 1 in Problem 1, then the optimal solution to Problem 1 is J = {3}. Furthermore, if r = 2, then the optimal solution is J = {1, 3}. Moreover, if r = 3, then the optimal solution is J = {1, 2, 3}.

6. Conclusion

We have studied an optimal input elimination problem in large-scale network systems. We first rigorously solved the H2 optimization problem for the difference between the original and the reduced transfer functions. The solution means that the H2 optimal elimination is achieved by simply eliminating input variables without changing the driver nodes, once the input variables to be eliminated are fixed. The selection of the input variables has been formulated as a finite combinatorial optimization problem. It has been shown that the proposed algorithm gives the global optimal solution to the finite combinatorial problem in polynomial time. In future work, we will develop a hybrid method based on our results and the results in [13] to enhance controllability in large-scale network systems.

Acknowledgments

The author thanks Mr. Yusuke Fujimoto for information on this subject.

References

[1] N. Bof, G. Baggio, and S. Zampieri: On the role of network centrality in the controllability of complex networks, IEEE Transactions on Control of Network Systems, Vol. 4, No. 3.
[2] J.F. Carvalho, S. Pequito, A.P. Aguiar, S. Kar, and K.H. Johansson: Composability and controllability of structural linear time-invariant systems: Distributed verification, Automatica, Vol. 78.
[3] C. Commault and J.-M.
Dion: Input addition and leader selection for the controllability of graph-based systems, Automatica, Vol. 49, No. 11.
[4] Y.Y. Liu, J.J. Slotine, and A. Barabási: Controllability of complex networks, Nature, Vol. 473, No. 7346.
[5] A. Olshevsky: Minimal controllability problems, IEEE Transactions on Control of Network Systems, Vol. 1, No. 3.
[6] F. Pasqualetti, S. Zampieri, and F. Bullo: Controllability metrics, limitations and algorithms for complex networks, IEEE Transactions on Control of Network Systems, Vol. 1, No. 1.
[7] S. Pequito, S. Kar, and A.P. Aguiar: A framework for structural input/output and control configuration selection in large-scale systems, IEEE Transactions on Automatic Control, Vol. 61, No. 2.
[8] S. Pequito, S. Kar, and A.P. Aguiar: Minimum cost input/output design for large-scale linear structural systems, Automatica, Vol. 68.
[9] S. Pequito and G.J. Pappas: Structural minimum controllability problem for switched linear continuous-time systems, Automatica, Vol. 78.
[10] S. Pequito, V.M. Preciado, A.-L. Barabási, and G.J. Pappas: Trade-offs between driving nodes and time-to-control in complex networks, Scientific Reports, Vol. 7, 39978.
[11] T. Summers: Actuator placement in networks using optimal control performance metrics, 55th IEEE Conference on Decision and Control (CDC).
[12] T. Summers and I. Shames: Convex relaxations and Gramian rank constraints for sensor and actuator selection in networks, IEEE International Symposium on Intelligent Control (ISIC), pp. 1-6.
[13] T.H. Summers, F.L. Cortesi, and J. Lygeros: On submodularity and controllability in complex dynamical networks, IEEE Transactions on Control of Network Systems, Vol. 3, No. 1.
[14] V. Tzoumas, M.A. Rahimian, G.J. Pappas, and A. Jadbabaie: Minimal actuator placement with bounds on control effort, IEEE Transactions on Control of Network Systems, Vol. 3, No. 1.
[15] K.
Sato: Riemannian optimal control and model matching of linear port-Hamiltonian systems, IEEE Transactions on Automatic Control, Vol. 62, No. 12.
[16] S. Hara, B.D.O. Anderson, and H. Fujioka: Relating H2- and H∞-norm bounds for sampled-data systems, IEEE Transactions on Automatic Control, Vol. 42, No. 6.
[17] T. Ivanov, B.D.O. Anderson, P.-A. Absil, and M. Gevers: New relations between norms of system transfer functions, Systems & Control Letters, Vol. 60, No. 3.
[18] P.-A. Absil, R. Mahony, and R. Sepulchre: Optimization Algorithms on Matrix Manifolds, Princeton University Press.
[19] S. Boyd and L. Vandenberghe: Convex Optimization, Cambridge University Press.
[20] R.H. Bartels and G.W. Stewart: Solution of the matrix equation AX + XB = C, Communications of the ACM, Vol. 15, No. 9.
[21] T. Penzl: A cyclic low-rank Smith method for large sparse Lyapunov equations, SIAM Journal on Scientific Computing, Vol. 21, No. 4.

Kazuhiro SATO (Member)

He received his B.S., M.S., and Ph.D. degrees from Kyoto University, Japan, in 2009, 2011, and 2014, respectively. He was a Post-Doctoral Fellow at Kyoto University and is currently an Assistant Professor at Kitami Institute of Technology. He likes applied mathematics. He is a member of IEEE.
More informationMinimal Input and Output Selection for Stability of Systems with Uncertainties
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. XX, NO. XX, FEBRUARY 218 1 Minimal Input and Output Selection for Stability of Systems with Uncertainties Zhipeng Liu, Student Member, IEEE, Yao Long, Student
More informationAgreement algorithms for synchronization of clocks in nodes of stochastic networks
UDC 519.248: 62 192 Agreement algorithms for synchronization of clocks in nodes of stochastic networks L. Manita, A. Manita National Research University Higher School of Economics, Moscow Institute of
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationCDS Solutions to the Midterm Exam
CDS 22 - Solutions to the Midterm Exam Instructor: Danielle C. Tarraf November 6, 27 Problem (a) Recall that the H norm of a transfer function is time-delay invariant. Hence: ( ) Ĝ(s) = s + a = sup /2
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More informationA Characterization of Sampling Patterns for Union of Low-Rank Subspaces Retrieval Problem
A Characterization of Sampling Patterns for Union of Low-Rank Subspaces Retrieval Problem Morteza Ashraphijuo Columbia University ashraphijuo@ee.columbia.edu Xiaodong Wang Columbia University wangx@ee.columbia.edu
More informationContents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping
Contents lecture 5 Automatic Control III Lecture 5 H 2 and H loop shaping Thomas Schön Division of Systems and Control Department of Information Technology Uppsala University. Email: thomas.schon@it.uu.se,
More informationarxiv: v1 [q-bio.qm] 9 Sep 2016
Optimal Disease Outbreak Detection in a Community Using Network Observability Atiye Alaeddini 1 and Kristi A. Morgansen 2 arxiv:1609.02654v1 [q-bio.qm] 9 Sep 2016 Abstract Given a network, we would like
More informationOn node controllability and observability in complex dynamical networks.
arxiv:1901.05757v1 [math.ds] 17 Jan 2019 On node controllability and observability in complex dynamical networks. Francesco Lo Iudice 1, Francesco Sorrentino 2, and Franco Garofalo 3 1 Department of Electrical
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationMultivariable Calculus
2 Multivariable Calculus 2.1 Limits and Continuity Problem 2.1.1 (Fa94) Let the function f : R n R n satisfy the following two conditions: (i) f (K ) is compact whenever K is a compact subset of R n. (ii)
More informationOn the Complexity of the Constrained Input Selection Problem for Structural Linear Systems
On the Complexity of the Constrained Input Selection Problem for Structural Linear Systems Sérgio Pequito a,b Soummya Kar a A. Pedro Aguiar a,c a Institute for System and Robotics, Instituto Superior Técnico,
More informationON DISCRETE HESSIAN MATRIX AND CONVEX EXTENSIBILITY
Journal of the Operations Research Society of Japan Vol. 55, No. 1, March 2012, pp. 48 62 c The Operations Research Society of Japan ON DISCRETE HESSIAN MATRIX AND CONVEX EXTENSIBILITY Satoko Moriguchi
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationZero controllability in discrete-time structured systems
1 Zero controllability in discrete-time structured systems Jacob van der Woude arxiv:173.8394v1 [math.oc] 24 Mar 217 Abstract In this paper we consider complex dynamical networks modeled by means of state
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationRobust Principal Component Pursuit via Alternating Minimization Scheme on Matrix Manifolds
Robust Principal Component Pursuit via Alternating Minimization Scheme on Matrix Manifolds Tao Wu Institute for Mathematics and Scientific Computing Karl-Franzens-University of Graz joint work with Prof.
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationHomework set 4 - Solutions
Homework set 4 - Solutions Math 407 Renato Feres 1. Exercise 4.1, page 49 of notes. Let W := T0 m V and denote by GLW the general linear group of W, defined as the group of all linear isomorphisms of W
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationLinear Matrix Inequality (LMI)
Linear Matrix Inequality (LMI) A linear matrix inequality is an expression of the form where F (x) F 0 + x 1 F 1 + + x m F m > 0 (1) x = (x 1,, x m ) R m, F 0,, F m are real symmetric matrices, and the
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -
More informationG1110 & 852G1 Numerical Linear Algebra
The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the
More informationSpacecraft Attitude Control with RWs via LPV Control Theory: Comparison of Two Different Methods in One Framework
Trans. JSASS Aerospace Tech. Japan Vol. 4, No. ists3, pp. Pd_5-Pd_, 6 Spacecraft Attitude Control with RWs via LPV Control Theory: Comparison of Two Different Methods in One Framework y Takahiro SASAKI,),
More informationRank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about
Rank-one LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the well-known Lyapunov's matrix inequality about the location of the eigenvalues of a matrix
More informationNetwork Clustering for SISO Linear Dynamical Networks via Reaction-Diffusion Transformation
Milano (Italy) August 28 - September 2, 211 Network Clustering for SISO Linear Dynamical Networks via Reaction-Diffusion Transformation Takayuki Ishizaki Kenji Kashima Jun-ichi Imura Kazuyuki Aihara Graduate
More informationMatrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein
Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,
More informationAn Optimization-based Approach to Decentralized Assignability
2016 American Control Conference (ACC) Boston Marriott Copley Place July 6-8, 2016 Boston, MA, USA An Optimization-based Approach to Decentralized Assignability Alborz Alavian and Michael Rotkowitz Abstract
More information6.241 Dynamic Systems and Control
6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May
More informationLinear Matrix Inequalities for Normalizing Matrices
Linear Matrix Inequalities for Normalizing Matrices Christian Ebenbauer Abstract A real square matrix is normal if it can be diagonalized by an unitary matrix. In this paper novel convex conditions are
More informationCHAPTER 11. A Revision. 1. The Computers and Numbers therein
CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of
More informationLecture 5 : Projections
Lecture 5 : Projections EE227C. Lecturer: Professor Martin Wainwright. Scribe: Alvin Wan Up until now, we have seen convergence rates of unconstrained gradient descent. Now, we consider a constrained minimization
More informationNorm invariant discretization for sampled-data fault detection
Automatica 41 (25 1633 1637 www.elsevier.com/locate/automatica Technical communique Norm invariant discretization for sampled-data fault detection Iman Izadi, Tongwen Chen, Qing Zhao Department of Electrical
More informationMapping MIMO control system specifications into parameter space
Mapping MIMO control system specifications into parameter space Michael Muhler 1 Abstract This paper considers the mapping of design objectives for parametric multi-input multi-output systems into parameter
More informationApproximation Algorithms
Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs
More informationAN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION
AN ALTERNATING MINIMIZATION ALGORITHM FOR NON-NEGATIVE MATRIX APPROXIMATION JOEL A. TROPP Abstract. Matrix approximation problems with non-negativity constraints arise during the analysis of high-dimensional
More informationSTA141C: Big Data & High Performance Statistical Computing
STA141C: Big Data & High Performance Statistical Computing Numerical Linear Algebra Background Cho-Jui Hsieh UC Davis May 15, 2018 Linear Algebra Background Vectors A vector has a direction and a magnitude
More informationarxiv: v1 [math.oc] 1 Apr 2014
On the NP-completeness of the Constrained Minimal Structural Controllability/Observability Problem Sérgio Pequito, Soummya Kar A. Pedro Aguiar, 1 arxiv:1404.0072v1 [math.oc] 1 Apr 2014 Abstract This paper
More informationGeneralized Shifted Inverse Iterations on Grassmann Manifolds 1
Proceedings of the Sixteenth International Symposium on Mathematical Networks and Systems (MTNS 2004), Leuven, Belgium Generalized Shifted Inverse Iterations on Grassmann Manifolds 1 J. Jordan α, P.-A.
More informationProblem set 5 solutions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 24: MODEL REDUCTION Problem set 5 solutions Problem 5. For each of the stetements below, state
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More informationFast Linear Iterations for Distributed Averaging 1
Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider
More informationZeros and zero dynamics
CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationWeighted balanced realization and model reduction for nonlinear systems
Weighted balanced realization and model reduction for nonlinear systems Daisuke Tsubakino and Kenji Fujimoto Abstract In this paper a weighted balanced realization and model reduction for nonlinear systems
More informationPositive semidefinite matrix approximation with a trace constraint
Positive semidefinite matrix approximation with a trace constraint Kouhei Harada August 8, 208 We propose an efficient algorithm to solve positive a semidefinite matrix approximation problem with a trace
More informationUniform Boundedness of a Preconditioned Normal Matrix Used in Interior Point Methods
Uniform Boundedness of a Preconditioned Normal Matrix Used in Interior Point Methods Renato D. C. Monteiro Jerome W. O Neal Takashi Tsuchiya March 31, 2003 (Revised: December 3, 2003) Abstract Solving
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline
More informationMULTI-AGENT TRACKING OF A HIGH-DIMENSIONAL ACTIVE LEADER WITH SWITCHING TOPOLOGY
Jrl Syst Sci & Complexity (2009) 22: 722 731 MULTI-AGENT TRACKING OF A HIGH-DIMENSIONAL ACTIVE LEADER WITH SWITCHING TOPOLOGY Yiguang HONG Xiaoli WANG Received: 11 May 2009 / Revised: 16 June 2009 c 2009
More informationSample ECE275A Midterm Exam Questions
Sample ECE275A Midterm Exam Questions The questions given below are actual problems taken from exams given in in the past few years. Solutions to these problems will NOT be provided. These problems and
More informationEL 625 Lecture 10. Pole Placement and Observer Design. ẋ = Ax (1)
EL 625 Lecture 0 EL 625 Lecture 0 Pole Placement and Observer Design Pole Placement Consider the system ẋ Ax () The solution to this system is x(t) e At x(0) (2) If the eigenvalues of A all lie in the
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science : Dynamic Systems Spring 2011
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.4: Dynamic Systems Spring Homework Solutions Exercise 3. a) We are given the single input LTI system: [
More informationComplex Laplacians and Applications in Multi-Agent Systems
1 Complex Laplacians and Applications in Multi-Agent Systems Jiu-Gang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complex-valued Laplacians have been shown to be powerful
More informationStatic Output Feedback Stabilisation with H Performance for a Class of Plants
Static Output Feedback Stabilisation with H Performance for a Class of Plants E. Prempain and I. Postlethwaite Control and Instrumentation Research, Department of Engineering, University of Leicester,
More informationOn the Number of Strongly Structurally Controllable Networks
07 American Control Conference Sheraton Seattle Hotel May 6, 07, Seattle, USA On the Number of Strongly Structurally Controllable Networks Tommaso Menara, Gianluca Bianchin, Mario Innocenti, and Fabio
More informationBindel, Fall 2009 Matrix Computations (CS 6210) Week 8: Friday, Oct 17
Logistics Week 8: Friday, Oct 17 1. HW 3 errata: in Problem 1, I meant to say p i < i, not that p i is strictly ascending my apologies. You would want p i > i if you were simply forming the matrices and
More information9.1 Preconditioned Krylov Subspace Methods
Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete
More informationQuadratic and Copositive Lyapunov Functions and the Stability of Positive Switched Linear Systems
Proceedings of the 2007 American Control Conference Marriott Marquis Hotel at Times Square New York City, USA, July 11-13, 2007 WeA20.1 Quadratic and Copositive Lyapunov Functions and the Stability of
More informationSufficiency of Signed Principal Minors for Semidefiniteness: A Relatively Easy Proof
Sufficiency of Signed Principal Minors for Semidefiniteness: A Relatively Easy Proof David M. Mandy Department of Economics University of Missouri 118 Professional Building Columbia, MO 65203 USA mandyd@missouri.edu
More informationIterative Rational Krylov Algorithm for Unstable Dynamical Systems and Generalized Coprime Factorizations
Iterative Rational Krylov Algorithm for Unstable Dynamical Systems and Generalized Coprime Factorizations Klajdi Sinani Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University
More informationQuadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric
More informationBlack Box Linear Algebra
Black Box Linear Algebra An Introduction to Wiedemann s Approach William J. Turner Department of Mathematics & Computer Science Wabash College Symbolic Computation Sometimes called Computer Algebra Symbols
More informationH 2 -optimal model reduction of MIMO systems
H 2 -optimal model reduction of MIMO systems P. Van Dooren K. A. Gallivan P.-A. Absil Abstract We consider the problem of approximating a p m rational transfer function Hs of high degree by another p m
More information