Vulnerability Analysis of Closed-Loop Systems

by Nathan Woodbury

Submitted to Brigham Young University in partial fulfillment of graduation requirements for University Honors

Department of Computer Science
Brigham Young University
December 2013

Advisor: Sean Warnick
Honors Representative: Joseph Parry

ABSTRACT

Vulnerability Analysis of Closed-Loop Systems
Nathan Woodbury
Department of Computer Science
Bachelor of Science

We consider destabilizing attacks: attacks on a single link within a system that could potentially destabilize the entire system. We then define the vulnerability of a link to be the inverse of the effort required on that link to destabilize the system. Since links are a property of the system's structure, we evaluate three structural representations, defining the transfer function to be the design of the system and the dynamical structure function to be its implementation. With these definitions, we formulate two problems in which we wish to minimize vulnerability. The first, the open-loop problem, arises when we are given a fixed system design and have complete control over its implementation. We show that for such a system we can always create a completely secure implementation, one with zero vulnerability, by eliminating all internal feedback from the system's structure. The second, the closed-loop problem, arises when there are restrictions on the system's implementation. In particular, we consider the case where two subsystems of given design are connected in feedback. We show that the vulnerability of one subsystem depends only on the design, and not on the implementation, of the other. We then show that removing internal feedback from a subsystem does not necessarily minimize vulnerability; it is possible to "fight fire with fire" and use internal feedback to combat the vulnerability introduced by connecting systems in feedback.

ACKNOWLEDGMENTS

First, I am indebted to my advisor, Dr. Sean Warnick, for his support and guidance throughout this project. I would also like to thank my friends and colleagues of the IDeA Labs, particularly Anurag Rai, Phil Paré, and Vasu Chetty, for their aid and insights into this thesis. Further, I would like to acknowledge the professors and other faculty at Brigham Young University for creating the wonderful opportunity that my undergraduate experience has been. Finally, I would like to thank my family for the support they have given me throughout the past years.

Contents

Title and signature page
Abstract
Acknowledgments
Table of Contents

1 Introduction
  1.1 Attack Models
    1.1.1 Denial of Service Attacks
    1.1.2 Deception Attacks
    1.1.3 Destabilizing Attacks
  1.2 Vulnerability
  1.3 Conclusions

2 Preliminaries: Systems and Structure
  2.1 State Representations
  2.2 Transfer Function Representations
  2.3 Dynamical Structure Functions
    2.3.1 Motivation for Dynamical Structure Functions in Vulnerability
    2.3.2 Derivation of Dynamical Structure Functions
  2.4 Conclusions

3 Vulnerability in Open-Loop Systems
  3.1 Problem Formulation
  3.2 Problem 1: Defining Vulnerable Links
    3.2.1 Conditions for Vulnerability
    3.2.2 The Measure of Vulnerability
  3.3 Problem 2: Minimizing Vulnerability
  3.4 Numerical Example
  3.5 Conclusions

4 Vulnerability in Closed-Loop Systems
  4.1 Problem Formulation
  4.2 Problem 1: Defining Vulnerable Links
    4.2.1 The Closed-Loop System as a DSF
    4.2.2 Decomposition of Analysis
    4.2.3 Summary
  4.3 Problem 2: Minimizing Vulnerability
    4.3.1 Fighting Fire with Fire
    4.3.2 Numerical Examples
    4.3.3 The High-Gain Heuristic
  4.4 Conclusions

5 Conclusions and Future Work

A Symbols and Notations

Bibliography

Chapter 1

Introduction

Imagine a system. A system takes inputs, processes those inputs, and produces outputs. This system is an abstraction of a variety of real processes that perform a wide range of different tasks. It may land an aircraft, it may transmit a message across the internet, it may regulate the temperature of an organism, or it may determine the prices and interest rates of US treasury bonds.

We are concerned principally with linear time-invariant (LTI) systems. It has been shown, however, that if a non-linear system is linearized around an equilibrium, the resulting linear system will behave like the non-linear system near that equilibrium [1]. Therefore, our focus on linear systems is not overly restrictive.

LTI systems can be represented by a structure, or a graph of nodes and directed edges. Figure 1.1 gives one example of a system structure. In this example, the nodes are split into three disjoint subsets. The first subset is the set of nodes that can be externally influenced, which we call the system's inputs. The second subset is the set of nodes that can be externally measured, which we call the system's outputs. The third subset is the set of nodes that can neither be influenced nor measured, which we call the system's hidden states. A link is a directed edge connecting any one node (input, output, or hidden) to any other node (likewise input, output, or hidden) indicating the ability of the first node to affect the behavior of the second.

Figure 1.1 A structural representation of a system containing nodes and edges defining inputs, outputs, hidden states, and links.

In many systems, it may be infeasible for an attacker to attack the entire system at once; therefore, the attacker must resort to attacking a limited number of links within that system. For example, the power grid and the internet are distributed across large geographic areas, so an attacker is unlikely to have access to all parts of the system. Therefore, in this thesis, we narrow our scope to consider only the cases where attackers have access to a single link within the system, as shown in Figure 1.1. However, as we will eventually show, our results can be easily generalized to the cases where attackers have access to more than one link simultaneously.

1.1 Attack Models

Typically, attacks on control systems have been classified into denial of service attacks and deception attacks [2]. Following the example of [3], we also consider a more generalized

attack model, which we call a destabilizing attack.

1.1.1 Denial of Service Attacks

A denial of service (DoS) attack is perhaps the easiest and most common attack on a control system. The purpose of a DoS attack is to prevent signals from reaching their intended destination in order to degrade system performance. This is modeled as the removal of a link in the system [2, 3]. For example, the result of a DoS attack on the system in Figure 1.1 would be to remove the highlighted link from the system.

1.1.2 Deception Attacks

The purpose of a deception attack is to change the state estimates computed by a model-based controller by hijacking a subset of sensors and sending altered readings. This is modeled as a stable additive perturbation on a link in the system. Many systems protect against such attacks by equipping a Bad Data Detector (BDD), which raises an alarm whenever the state estimates of the plant deviate from the expected values. However, it has been shown that, even with a BDD, an attacker can still change state estimates without raising an alarm [3-5]. For example, the result of a deception attack on the system in Figure 1.1 would be to change the signal passing along the highlighted link.

1.1.3 Destabilizing Attacks

The underlying perspective behind DoS and deception attacks is that both are the result of an enemy changing the value of a single link within a system in order to degrade the

system's performance. We consider a more generalized attack, which we call a destabilizing attack, in which an enemy attacks a single link within a system in order to destabilize the entire system. In this sense, both DoS and deception attacks are destabilizing attacks if their result is a possible destabilization of the system. Note that the enemy may have malicious intent to destabilize the system, or the attack may simply model a failure on that link that could result in system destabilization.

1.2 Vulnerability

When considering destabilizing attacks, it is natural to consider the notion of vulnerability. We define two types of vulnerability: the vulnerability of a link and the vulnerability of a system.

Definition 1. The vulnerability of a link is the sensitivity of the system's stability to perturbations on that link in a destabilizing attack. In particular, let e be the minimum effort that an attacker needs to exert on a particular link l ∈ S in order to destabilize the system S. The vulnerability of link l is defined as

v(l) = 1/e.

Thus, as the amount of effort on l required to destabilize the system increases, the vulnerability of l decreases.

Definition 2. The vulnerability of a system is defined as the maximum vulnerability over all links in that system. In other words,

V(S) = max_{l ∈ S} v(l).
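Definitions 1 and 2 translate directly into a small computation. The sketch below uses hypothetical per-link effort values purely for illustration; the function and link names are not from the text:

```python
# Definitions 1 and 2 as code. Effort values are hypothetical, purely to
# illustrate v(l) = 1/e and V(S) = max over links l of v(l).
def link_vulnerability(effort):
    """v(l) = 1/e; a link that no finite effort can destabilize has v = 0."""
    return 0.0 if effort == float("inf") else 1.0 / effort

def system_vulnerability(efforts):
    """V(S) = max_{l in S} v(l), over a dict of per-link efforts."""
    return max(link_vulnerability(e) for e in efforts.values())

efforts = {"q12": 4.0, "q21": 2.0, "p11": float("inf")}  # hypothetical
print(system_vulnerability(efforts))  # 0.5: the most vulnerable link is q21
```

A completely secure system in the sense of the next section is then one where every effort is infinite, so `system_vulnerability` returns 0.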

The purpose of this thesis is to show how systems may be designed in order to minimize vulnerability. Ideally, a system should be designed such that its vulnerability is zero. We define such a system as a completely secure system.

Proposition 1. A system is completely secure if and only if every link in that system has zero vulnerability.

Proof. Assume that a system is completely secure, i.e., assume that V(S) = 0. Since V(S) = max_{l ∈ S} v(l), we have that for all l ∈ S, v(l) ≤ 0. The minimum effort e to destabilize the system is never negative, so we also have that for all l ∈ S, v(l) ≥ 0. Therefore, v(l) = 0 for all l ∈ S. Now assume that v(l) = 0 for all l ∈ S. Then V(S) = max_{l ∈ S} v(l) = 0.

1.3 Conclusions

In conclusion, LTI systems can be represented as a structure, or a graph of links and nodes. Vulnerability is a property of the links within a system's structure. A link's vulnerability is the sensitivity of the system's stability to attacks on that link, and a system's vulnerability is the vulnerability of the most vulnerable link within that system.

Chapter 2

Preliminaries: Systems and Structure

As described in the previous chapter, vulnerability is a property of links within the structure of an LTI system. However, any given system can be represented by many different structures, each containing a different amount of information about the system itself (see Figure 2.1). Therefore, the definition of the vulnerability of a system depends on the structure chosen to represent that system. We consider three notions of structure here: the state representation, the transfer function, and the dynamical structure function (DSF).

Figure 2.1 The relative amounts of structural information provided by the three notions of structure considered.

2.1 State Representations

State representations, also called state-space representations, are infinite-state machines that show the internal wiring of the system. Of the three notions of structure we consider,

the state representation contains the most structural information (see Figure 2.1), since it describes exactly the internal functionality of an LTI system. A linear state representation is often described by a tuple (A, B, C), where, given p inputs u, q outputs y, and n internal states x,

ẋ = Ax + Bu,
y = Cx,

where A ∈ R^{n×n}, B ∈ R^{n×p}, and C ∈ R^{q×n}. Matrix A is called the dynamics matrix and describes how the internal states x affect one another. Matrix B is called the control matrix and describes how the internal states x are affected by the inputs u. Matrix C is called the sensor matrix and describes how the internal states x are measured to produce the outputs y [6].

Example 1. Let

A = [ -1  0 -1 ]      B = [ 1 0 0 ]      C = [ 1 0 0 ]
    [  2 -3  0 ],         [ 0 1 0 ],         [ 0 1 0 ].     (2.1)
    [  0  2 -3 ]          [ 0 0 1 ]          [ 0 0 1 ]

The system described by this state representation is given in Figure 2.2. The non-zero entries a_ij ∈ A correspond to the links connecting internal state x_j to internal state x_i. For example, a_21 = 2 ≠ 0; therefore there exists a link from x_1 to x_2. Similarly, the non-zero entries b_ij ∈ B describe links from u_j to x_i, and the non-zero entries c_ij ∈ C describe links from x_j to y_i.
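The dynamics matrix also determines the system's poles through det(sI - A). As a sanity check on the signs in (2.1), which the transcription renders ambiguously, the following pure-Python sketch computes the characteristic polynomial of A with the Faddeev-LeVerrier recursion; it reproduces the denominator that appears later in (2.4):

```python
# Sanity check of the dynamics matrix A in (2.1): compute det(sI - A)
# with the Faddeev-LeVerrier recursion (pure Python, no dependencies).
# Signs of A are as reconstructed; they match the denominator in (2.4).
A = [[-1.0,  0.0, -1.0],
     [ 2.0, -3.0,  0.0],
     [ 0.0,  2.0, -3.0]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    """Coefficients [1, c1, ..., cn] of det(sI - A)."""
    n = len(A)
    coeffs = [1.0]
    N = [[0.0] * n for _ in range(n)]          # N holds A * M_k
    for k in range(1, n + 1):
        for i in range(n):                     # M_k = A*M_{k-1} + c_{k-1}*I
            N[i][i] += coeffs[-1]
        N = matmul(A, N)                       # N = A * M_k
        coeffs.append(-sum(N[i][i] for i in range(n)) / k)  # c_k = -tr(N)/k
    return coeffs

print(char_poly(A))  # [1.0, 7.0, 15.0, 13.0], i.e. s^3 + 7s^2 + 15s + 13
```

All three roots of this polynomial have negative real part, consistent with the stability of the example system.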

Figure 2.2 The links of the system described by the state representation in (2.1). Purple links are non-zero entries in A (links along the diagonal of A, connecting an internal state to itself, are not included), red links are non-zero entries in B, and blue links are non-zero entries in C.

2.2 Transfer Function Representations

The transfer function representation, sometimes abbreviated as simply the transfer function, describes the black-box behavior of a system. In other words, the transfer function representation describes how the inputs of a system affect its outputs. Since the transfer function representation only shows the input-output behavior of a system, whereas the state representation shows the interactions between the internal states as well, the transfer function representation contains less structural information than the state representation, as shown in Figure 2.1. In fact, we say that the transfer function representation contains no structural information; it describes the design of the system rather than the implementation.

Definition 3. For the purposes of this thesis, the design of a system refers to its transfer function representation.

Since state representations contain more structural information than transfer function representations, every state representation can be converted into exactly one transfer function representation; however, an infinite number of state representations can yield the same transfer function.

Where the state representation is expressed in the time domain, the transfer function representation is expressed in the frequency domain. It is represented by G(s), which is a matrix of transfer functions (rational polynomials in terms of the Laplace variable s, not to be confused with the transfer function representation itself). Thus

Y = G(s)U,     (2.2)

where U is the set of inputs u converted to the frequency domain and, similarly, Y is the set of outputs y converted to the frequency domain. Each entry G_ij(s) ∈ G(s) is a transfer function that describes how input U_j affects output Y_i.

Stability of the transfer function representation can be checked using the poles of each transfer function n_i(s)/d_i(s) ∈ G(s). The poles of each transfer function are defined as the roots of the polynomial d_i(s). The system is stable if every pole of each transfer function has negative real part.

The state representation can be converted to the transfer function representation by the equation

G(s) = C(sI - A)^{-1} B.     (2.3)

Example 2. Applying (2.3) to (2.1) in Example 1, we get the transfer function representation

G(s) = 1/(s^3 + 7s^2 + 15s + 13) [ (s+3)^2   -2           -(s+3)     ]
                                 [ 2(s+3)    (s+1)(s+3)   -2         ].     (2.4)
                                 [ 4         2(s+1)       (s+1)(s+3) ]

This results in a structure as shown in Figure 2.3, where every output is, in some way, affected by every input.

Figure 2.3 can also be derived from Figure 2.2 in Example 1 by tracing the paths in the structure. For example, it can be seen that u_1 connects to y_1 by tracing the route u_1 → x_1 → y_1. It can also be seen that u_1 connects to y_3 by tracing the route u_1 → x_1 → x_2 → x_3 → y_3. In this way, the transfer function structure only cares that a route from u_1 to y_3 exists and disregards the information on how that route is followed. Hence the transfer function representation contains less structural information than the state representation.

Figure 2.3 The links of the system described by the transfer function representation in (2.4). Each non-zero entry in G(s) corresponds to a link that connects an input to an output.

2.3 Dynamical Structure Functions

The dynamical structure function (DSF) was initially developed to solve the problem of network reconstruction. In attempting to determine the structure of a system from output data, the state representation often requires more data than is feasible. The transfer function representation, on the other hand, can be more easily reconstructed; however, it does not actually contain structural information about the system and is therefore not very useful for this purpose.

Hence the DSF was conceived as a middle ground between state representations and transfer function representations (see Figure 2.1). In this way, it does not require as much

information to reconstruct as the state representation, and yet it is still capable of representing the structure, or implementation, of a system.

Definition 4. For the purposes of this thesis, the implementation of a system refers to its dynamical structure function.

A DSF is represented by the pair (P(s), Q(s)), where P(s) is a matrix of transfer functions that describes how each input directly affects each output, and where Q(s) describes how each output directly affects each other output. Hence, a link in the DSF is either a relationship between an input and a measured state or between a measured state and another measured state. The interactions among non-measured internal states, called hidden states, are abstracted into these links.

It should be noted that the DSF actually contains variable levels of structural information. If all internal states in the system are measured to produce outputs, then the DSF contains as much structural information as the state representation. However, as fewer internal states are measured, the level of structural information decreases towards that of the transfer function representation.

The DSF is actually a factorization of the transfer function representation, where

G(s) = (I - Q(s))^{-1} P(s).     (2.5)

The derivation of the DSF from a state representation will be given in Section 2.3.2. The stability of the DSF can be checked by converting it into the transfer function representation and then checking the stability of that representation.

Example 3. Consider again the system introduced in Example 1 and continued in Example 2. Its DSF is

given by

P(s) = [ 1/(s+1)   0         0       ]        Q(s) = [ 0         0         -1/(s+1) ]
       [ 0         1/(s+3)   0       ],              [ 2/(s+3)   0          0       ].     (2.6)
       [ 0         0         1/(s+3) ]               [ 0         2/(s+3)    0       ]

It can be verified that (I - Q(s))^{-1} P(s) = G(s) from (2.4). The resulting structure can be seen in Figure 2.4.

Figure 2.4 The links of the system described by the DSF in (2.6). Each non-zero entry in P(s) corresponds to a red link and each non-zero entry in Q(s) corresponds to a blue link.

2.3.1 Motivation for Dynamical Structure Functions in Vulnerability

Vulnerability is a type of robustness analysis, and the use of transfer functions is often convenient and useful in robustness analyses. The transfer function representation does not contain enough information, however, to use in vulnerability analysis. The state representation, on the other hand, is unwieldy to work with. Therefore, we use the DSF, since it has more information than the transfer function representation and is more convenient than the state representation.
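The factorization claim in Example 3 is easy to verify numerically at a sample point. A minimal pure-Python sketch, with the sign of the (1,3) entry of Q(s) taken so that the factorization reproduces the denominator of (2.4) (the transcription does not preserve signs):

```python
# Check the DSF factorization (2.5) numerically: at a test point s, the
# first column of (I - Q(s))^{-1} P(s), with P and Q from (2.6), should
# match the first column of G(s) in (2.4). Pure Python, no dependencies.
def solve3(M, b):
    """Solve the 3x3 system M x = b by Gauss-Jordan elimination."""
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

s = 2.0
P = [[1/(s+1), 0, 0], [0, 1/(s+3), 0], [0, 0, 1/(s+3)]]
Q = [[0, 0, -1/(s+1)], [2/(s+3), 0, 0], [0, 2/(s+3), 0]]
IQ = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(3)] for i in range(3)]

# First column of (I-Q)^{-1} P is the solution of (I-Q) x = P e_1.
col1 = solve3(IQ, [P[i][0] for i in range(3)])
d = s**3 + 7*s**2 + 15*s + 13
expected = [(s+3)**2 / d, 2*(s+3) / d, 4 / d]   # first column of (2.4)
print(max(abs(a - b) for a, b in zip(col1, expected)) < 1e-9)  # True
```

Repeating this for the other two columns of P completes the check that (2.6) is indeed a factorization of (2.4).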

2.3.2 Derivation of Dynamical Structure Functions

Since DSFs comprise a significant portion of the analysis of vulnerability, we will include here a brief tutorial on how DSFs are derived from the state representation. Consider a state-space LTI system

[ż_1]   [ Ā_11  Ā_12 ] [z_1]   [ B_1 ]
[ż_2] = [ Ā_21  Ā_22 ] [z_2] + [ B_2 ] u,

y = [ C_1  C_2 ] [z_1]
                 [z_2],     (2.7)

where [C_1 C_2] has full row rank. This system can be transformed into

[ẏ]   [ A_11  A_12 ] [y]   [ B_1 ]
[ẋ] = [ A_21  A_22 ] [x] + [ B_2 ] u,

y = [ I  0 ] [y]
             [x],     (2.8)

where y are measured states and x are hidden states. A Laplace transform of the system represented in Equation (2.8) results in

[sY]   [ A_11  A_12 ] [Y]   [ B_1 ]
[sX] = [ A_21  A_22 ] [X] + [ B_2 ] U.     (2.9)

Solving for X results in

X = (sI - A_22)^{-1} A_21 Y + (sI - A_22)^{-1} B_2 U,     (2.10)

which, when substituted back into Equation (2.9), yields

sY = W(s)Y + V(s)U,

where

W(s) = A_11 + A_12 (sI - A_22)^{-1} A_21,
V(s) = A_12 (sI - A_22)^{-1} B_2 + B_1.     (2.11)

Let D(s) be a diagonal matrix containing the diagonal entries of W(s). We have that

(sI - D(s))Y = (W(s) - D(s))Y + V(s)U,     (2.12)

hence

Y = Q(s)Y + P(s)U,     (2.13)

where

Q(s) = (sI - D(s))^{-1} (W(s) - D(s)),     (2.14)
P(s) = (sI - D(s))^{-1} V(s).     (2.15)

Definition 5. A link in a system given by Equation (2.7) and characterized by a DSF (P(s), Q(s)) is any non-zero entry in either P(s) or Q(s).

Definition 6. A vulnerable link is a link in (P(s), Q(s)) on which there exists a stable perturbation that makes the system unstable.

Definition 7. A system is said to be vulnerable if it contains a vulnerable link.

2.4 Conclusions

In conclusion, we have shown that a system's structure has many different representations, each with a different level of information about the structure. For this thesis, we are principally concerned with two structural representations: the transfer function representation as the system's design and the dynamical structure function as its implementation.
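The steps (2.11)-(2.15) can be walked through numerically at a fixed s. The sketch below uses a hypothetical variant of the system in (2.1) in which only x_1 and x_2 are measured, so that x_3 becomes a hidden state and the block A_22 is a scalar; the partition values are taken from the reconstructed A, and the final check compares against the corresponding entry of (2.4):

```python
# Numerical walk-through of (2.11)-(2.15) at a fixed s, for a hypothetical
# variant of the system in (2.1) where only x1 and x2 are measured and x3
# is a hidden state. All blocks are small enough to handle without a
# linear-algebra library.
s = 2.0
A11 = [[-1.0, 0.0], [2.0, -3.0]]   # measured-to-measured block
A12 = [-1.0, 0.0]                  # column: influence of x3 on (x1, x2)
A21 = [0.0, 2.0]                   # row: influence of (x1, x2) on x3
A22 = -3.0                         # scalar, since there is one hidden state
B1 = [[1, 0, 0], [0, 1, 0]]
B2 = [0, 0, 1]

inv22 = 1.0 / (s - A22)            # (sI - A22)^{-1}, scalar here
# (2.11): W = A11 + A12 (sI-A22)^{-1} A21,  V = A12 (sI-A22)^{-1} B2 + B1
W = [[A11[i][j] + A12[i] * inv22 * A21[j] for j in range(2)] for i in range(2)]
V = [[B1[i][j] + A12[i] * inv22 * B2[j] for j in range(3)] for i in range(2)]
# (2.14)-(2.15): D = diag(W); Q = (sI-D)^{-1}(W-D), P = (sI-D)^{-1} V
Q = [[(W[i][j] if i != j else 0.0) / (s - W[i][i]) for j in range(2)]
     for i in range(2)]
P = [[V[i][j] / (s - W[i][i]) for j in range(3)] for i in range(2)]

# Sanity check: G = (I-Q)^{-1} P should match row 1 of (2.4) at s = 2.
det = 1.0 - Q[0][1] * Q[1][0]                  # det(I - Q) for 2x2 Q
G11 = (P[0][0] + Q[0][1] * P[1][0]) / det
d = s**3 + 7*s**2 + 15*s + 13
print(abs(G11 - (s + 3)**2 / d) < 1e-9)        # True
```

Note how the hidden state x_3 disappears from (P, Q): its dynamics are folded into the rational entries of W and V, exactly as the derivation prescribes.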

Chapter 3

Vulnerability in Open-Loop Systems

We now follow the work in [3] to mathematically define vulnerability and to show how vulnerability can be minimized in open-loop systems.

3.1 Problem Formulation

There are two problems that we seek to solve in studying vulnerability in open-loop systems:

1. Defining Vulnerable Links: Derive a definition and computation of vulnerability in open-loop systems.

2. Minimizing Vulnerability: Given a system design G(s), find the implementation (P(s), Q(s)) that minimizes the vulnerability of the system.

But first, we will define an open-loop system.

Definition 8. An open-loop system is a system where there is no feedback in its design. In other words, an open-loop system is a system given by a transfer function representation

G(s) that maps inputs U to outputs Y and where the closed-loop transfer function from Y to U is zero (see Figure 3.1). It is important to note that an open-loop system may have feedback in its implementation given by the DSF. We call this feedback the system's internal feedback.

Figure 3.1 An open-loop system is a system that maps inputs U to outputs Y where the closed-loop transfer function from Y to U is zero.

3.2 Problem 1: Defining Vulnerable Links

The first problem we seek to solve is the derivation and computation of vulnerability in open-loop systems. Recall that a link is vulnerable if there can exist a stable additive perturbation on that link that destabilizes the entire system.

Example 4. To illustrate a vulnerable link, consider a system with the following DSF:

P(s) = [ 1/(s+2)   0       ]        Q(s) = [ 0         1/(s+2) ]
       [ 0         1/(s+2) ],              [ 1/(s+2)   0       ].

By checking the transfer function representation

G(s) = (I - Q(s))^{-1} P(s) = 1/((s+1)(s+3)) [ s+2   1   ]
                                             [ 1     s+2 ],

we see that the poles of each transfer function are -1 and -3. Since all poles have negative real part, this system is stable.

Now add a stable additive perturbation Δ(s) = 8/(s+2) on link Q_12(s) (see Figure 3.2), resulting in

Q̄(s) = [ 0         9/(s+2) ]
       [ 1/(s+2)   0       ]

and

Ḡ(s) = (I - Q̄(s))^{-1} P(s) = 1/((s-1)(s+5)) [ s+2   9   ]
                                             [ 1     s+2 ].

The poles of this resulting system are 1 and -5. Since not all poles have negative real part, the system is unstable. And since there exists an additive perturbation Δ(s) on Q_12(s) that can destabilize the system, we consider Q_12(s) to be vulnerable. We can similarly show that Q_21(s) is vulnerable; however, P_11(s) and P_22(s) are not vulnerable.

Figure 3.2 The system in Example 4 with perturbation Δ(s) on link Q_12(s). Links in red are vulnerable while links in black are not.
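Example 4 can be replicated numerically. For this 2x2 DSF, det(I - Q(s)) = 0 reduces to (s+2)^2 = n_12 n_21 when Q_12 = n_12/(s+2) and Q_21 = n_21/(s+2), so the closed-loop poles are available in closed form:

```python
import cmath

# Example 4, checked numerically. With Q12 = n12/(s+2) and Q21 = n21/(s+2),
# the closed-loop poles solve det(I - Q(s)) = 0, i.e. (s+2)^2 = n12 * n21.
def poles(n12, n21=1.0):
    r = cmath.sqrt(n12 * n21)
    return (-2 + r, -2 - r)

print(poles(1.0))   # nominal Q12 = 1/(s+2): poles -1 and -3, stable
print(poles(9.0))   # Q12 + Delta = 9/(s+2): poles 1 and -5, unstable
print(all(p.real < 0 for p in poles(9.0)))  # False
```

The perturbation Δ(s) = 8/(s+2) moves one pole from the left half-plane to s = 1, confirming the instability claimed in the example.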

3.2 Problem 1: Defining Vulnerabile Links 18 Proof. A system with a stable additive transformation on the link from node i to node j can be represented as the linear fractional transformation shown in Figure 3.3, where T is the associated closed-loop transfer function and w i,w j represent the signals at nodes i and j respectively. Figure 3.3 System with a stable perturbation on the link from i to j. Let T i j be the closed-loop transfer function from node j to node i. Then the system in Figure 3.3 stable if and only if the system in Figure 3.4 is stable [7]. Figure 3.4 Necessary and sufficient conditions for the system in Figure 3.3, this system must also be stable. Assume that T i j (s) = 0. Since, in this situation, the system in Figure 3.4 is only comprised of the feed-forward term (s), it is stable for all stable perturbations (s). Hence the

link from i to j is not vulnerable. Now assume that T_{ij}(s) \neq 0. Then the system in Figure 3.4 is unstable if any of the closed-loop transfer functions from the disturbances d_i, d_j to the signals w_i, w_j is unstable. We have

w_j = \frac{1}{1 - T_{ij}(s)\Delta(s)} \begin{bmatrix} T_{ij}(s)\Delta(s) & \Delta(s) \end{bmatrix} \begin{bmatrix} d_i \\ d_j \end{bmatrix}.

Let T_{ij}(s) = \frac{t_n(s)}{t_d(s)} and \Delta(s) = \frac{\delta_n(s)}{\delta_d(s)}, where t_n(s), t_d(s), \delta_n(s), and \delta_d(s) are all polynomials in s. Then

w_j = \frac{t_n(s)\delta_n(s)}{t_d(s)\delta_d(s) - t_n(s)\delta_n(s)} d_i + \frac{t_d(s)\delta_n(s)}{t_d(s)\delta_d(s) - t_n(s)\delta_n(s)} d_j,

so every closed-loop transfer function in Figure 3.4 shares the denominator polynomial

R(s) = t_d(s)\delta_d(s) - t_n(s)\delta_n(s).

According to the Routh-Hurwitz stability criterion, a polynomial can be stable only if all of its coefficients are nonzero and of the same sign [8]. A properly designed \Delta(s) can zero out at least one of the coefficients of R(s); hence the Routh-Hurwitz criterion fails and R(s) has a root with non-negative real part. Therefore, there exists a stable \Delta(s) on the link from i to j that destabilizes the system, and the link is vulnerable.

Theorem 1 can be interpreted as saying that any link in a cycle is vulnerable. Note, however, that the DSF is defined so that the links in P(s) are never in a cycle, which leads to the following corollary.

Corollary 1. None of the links in P(s) are vulnerable.

Proof. In a DSF, the transfer functions from states to inputs and from inputs to inputs are always zero. Therefore, according to Theorem 1, every link in P(s), which runs from an input to a state, is invulnerable.
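The coefficient-zeroing step in the proof can be made concrete. In the sketch below, the transfer function T_{ij}(s) = 1/(s+1) and the stable perturbation \Delta(s) = (2s+2)/(s+2) are illustrative choices of our own, not taken from the text; the perturbation is designed so that the constant coefficient of R(s) vanishes:

```python
import numpy as np

def hurwitz_necessary(coeffs):
    """Necessary condition from the Routh-Hurwitz criterion:
    every coefficient is nonzero and all share the same sign."""
    return all(c != 0 for c in coeffs) and (
        all(c > 0 for c in coeffs) or all(c < 0 for c in coeffs))

t_n, t_d = [1], [1, 1]      # T_ij(s) = 1/(s+1)
d_n, d_d = [2, 2], [1, 2]   # Delta(s) = (2s+2)/(s+2), a stable perturbation

# R(s) = t_d(s)*delta_d(s) - t_n(s)*delta_n(s)
#      = (s+1)(s+2) - (2s+2) = s^2 + s
R = np.polysub(np.polymul(t_d, d_d), np.polymul(t_n, d_n))
print(R)  # coefficients of s^2 + s: the constant term has been zeroed out

assert not hurwitz_necessary(R)                 # necessary condition fails
assert np.isclose(min(abs(np.roots(R))), 0.0)   # root on the imaginary axis
```

Here the destabilizing perturbation places a closed-loop pole at the origin; a slightly larger gain would push it into the open right half-plane.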

From Corollary 1, we see that the vulnerability analysis can be limited to an analysis of the links in Q(s). Note that where H(s) = (I - Q(s))^{-1}, the entry H_{ij}(s) is the closed-loop transfer function seen by link Q_{ji}(s).

3.2.2 The Measure of Vulnerability

Recall from Corollary 1 that, given a DSF (P(s), Q(s)), we are only concerned with the vulnerabilities of links in Q(s), and that the closed-loop transfer functions seen by links in Q(s) are given by H(s) = (I - Q(s))^{-1}. Hence the vulnerability of the link from i to j, which is the inverse of the magnitude of the smallest stable perturbation on the link from i to j that can destabilize the system, is given by

v_{ji} = \|H_{ij}(s)\|_\infty. (3.1)

The vulnerability of the system is defined as the vulnerability of the most vulnerable link in the system; hence

V = \max_{(i,j) \in Q(s)} v_{ji} = \max_{(i,j) \in Q(s)} \|H_{ij}(s)\|_\infty = \|H(s)\|_{1\infty}. (3.2)

3.3 Problem 2: Minimizing Vulnerability

Our second problem is: given a system design G(s), find the implementation (P(s), Q(s)) that minimizes the vulnerability of the system. More formally, using Equation (3.2), given a fixed G(s) and letting P(s) = (I - Q(s))G(s), choose Q^*(s) such that

Q^*(s) = \arg\min_{Q(s)} \|H(s)\|_{1\infty} = \arg\min_{Q(s)} \|(I - Q(s))^{-1}\|_{1\infty}. (3.3)

Minimizing the one-infinity norm in (3.3) is a non-convex problem and therefore generally computationally difficult; however, due to the nature of DSFs and Theorem 1, this problem has a well-defined solution.

Definition 9. A completely secure architecture is a choice of Q(s) (where P(s) = (I - Q(s))G(s)) such that the vulnerability V of the system given by G(s) is zero.

Recall from Theorem 1 that a link is vulnerable if and only if it is in a cycle. Therefore, a completely secure architecture can be found by removing all internal feedback from the system.

Lemma 1. Given an open-loop system with a fixed design G(s), any implementation (P(s), Q(s)) where Q(s) = 0 and P(s) = G(s) is completely secure.

Proof. In this situation, all links are in P(s). However, from Corollary 1, no links in P(s) are vulnerable. Therefore the vulnerability of each link is zero and the vulnerability of the system, which is the vulnerability of the most vulnerable link, is likewise zero.

Theorem 2. Given an open-loop system with a fixed design G(s) and assuming complete control over the implementation (P(s), Q(s)), it is always possible to implement the system with a completely secure architecture.

Proof. It is sufficient to let Q(s) = 0. Then from Lemma 1, the implementation (P(s), Q(s)) is completely secure.

Note that although Q(s) = 0 is one sufficient way to implement a system with a completely secure architecture, any other acyclic Q(s) is also sufficient.
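The claim that any acyclic Q(s) is secure can be sanity-checked numerically. In the sketch below (a construction of our own), a strictly lower-triangular matrix stands in for an acyclic Q(s) evaluated at a fixed frequency: signals flow only from lower- to higher-indexed nodes, so H = (I - Q)^{-1} stays lower-triangular, and the entry H_{ij} seen by every existing link Q_{ji} is zero, so by Theorem 1 no link is vulnerable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Strictly lower-triangular Q has no cycles: node i only feeds node j > i.
Q = np.tril(rng.normal(size=(n, n)), k=-1)
H = np.linalg.inv(np.eye(n) - Q)

# Links Q_ji exist only for j > i, and the vulnerability of link Q_ji is
# governed by H_ij, which lies in the strict upper triangle of H.
assert np.allclose(np.triu(H, k=1), 0.0)
print("every closed-loop transfer function seen by a link is zero")
```

The same triangular structure holds for any permutation of the nodes that orders them along the acyclic flow.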

3.4 Numerical Example

To illustrate the solutions to Problems 1 and 2, consider a system with a design given by

G(s) = \frac{1}{d(s)} \begin{bmatrix} (s+1)(s+2)(s+3) & s+2 & (s+2)(s+3) \\ (s+1)(s+3) & (s+1)(s+2)(s+3) & s+3 \\ s+1 & (s+1)(s+2) & (s+1)(s+2)(s+3) \end{bmatrix},
\quad d(s) = (s+1)(s^3 + 6s^2 + 11s + 5). (3.4)

Let this system be implemented as in Figure 3.5a by

Q(s) = \begin{bmatrix} 0 & 0 & \frac{1}{s+1} \\ \frac{1}{s+2} & 0 & 0 \\ 0 & \frac{1}{s+3} & 0 \end{bmatrix}, \quad P(s) = \frac{1}{s+1} I. (3.5)

It can be checked that G(s) = (I - Q(s))^{-1} P(s). Since the links in P(s) are not vulnerable, we only consider the vulnerabilities of links Q_{13}(s), Q_{21}(s), and Q_{32}(s). Let H(s) = (I - Q(s))^{-1}. Then

v(Q_{13}(s)) = \|H_{31}\|_\infty = 0.20,
v(Q_{21}(s)) = \|H_{12}\|_\infty = 0.40,
v(Q_{32}(s)) = \|H_{23}\|_\infty = 0.60.

Hence

V = \max_{x \in \{13, 21, 32\}} v(Q_x(s)) = v(Q_{32}(s)) = 0.60. (3.6)

This implementation of G(s) is vulnerable since there are links in Q(s) that are in a cycle. However, a completely secure architecture can be implemented as in Figure 3.5b by setting Q(s) = 0 and P(s) = G(s).

It is important to note that we have created two implementations of the same system G(s), yet they both do the same thing, meaning both have the same input-output behavior. If

(a) A vulnerable implementation. (b) A completely secure implementation.

Figure 3.5 A vulnerable and a completely secure implementation of the same system. Black links are secure and red links are vulnerable.

the first can land a plane, the second will also. The difference is that the first is vulnerable, while the second is completely secure.

3.5 Conclusions

In conclusion, we have shown that, given an open-loop system design G(s) and its implementation (P(s), Q(s)), only links in Q(s) are vulnerable. The vulnerability of any link Q_{ij}(s) \in Q(s) is given by \|H_{ji}(s)\|_\infty, where H(s) = (I - Q(s))^{-1}, and the vulnerability of the entire system is the vulnerability of its most vulnerable link. We have also shown that an open-loop system G(s) may be implemented in many ways, some of which are vulnerable and some of which are completely secure. For G(s) to be implemented securely, Q(s) must not contain any cycles. To be completely secure, it is sufficient to implement G(s) such that Q(s) = 0 and P(s) = G(s).
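As a closing numerical check, the link vulnerabilities reported in Section 3.4 can be reproduced by gridding the frequency axis. The sketch below estimates the entry-wise H-infinity norms of H = (I - Q)^{-1} for the implementation in (3.5); the grid bounds and resolution are choices of ours, and for this system the maxima occur as the frequency approaches zero:

```python
import numpy as np

def Q_of(s):
    """Q(s) from the implementation (3.5), evaluated at a complex s."""
    return np.array([[0, 0, 1 / (s + 1)],
                     [1 / (s + 2), 0, 0],
                     [0, 1 / (s + 3), 0]])

# Sample |H_ij(jw)| on a logarithmic frequency grid and keep the maximum.
Hmax = np.zeros((3, 3))
for w in np.logspace(-4, 3, 4000):
    H = np.linalg.inv(np.eye(3) - Q_of(1j * w))
    Hmax = np.maximum(Hmax, np.abs(H))

# Vulnerability of link Q_ji is the H-infinity norm of H_ij.
v_Q13, v_Q21, v_Q32 = Hmax[2, 0], Hmax[0, 1], Hmax[1, 2]
print(v_Q13, v_Q21, v_Q32)  # approximately 0.20, 0.40, 0.60
assert np.allclose([v_Q13, v_Q21, v_Q32], [0.20, 0.40, 0.60], atol=1e-3)
```

A dedicated control library would compute these norms exactly; the grid search is enough to confirm the values in (3.6).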

Chapter 4 Vulnerability in Closed-Loop Systems

In the previous chapter, we showed how to compute and apply the notion of vulnerability in open-loop systems. However, a critical assumption we made was that we had complete control over the implementation of the system. In this chapter, we consider the situation where we do not have complete control over the implementation of the system. In particular, we consider the case where feedback is a necessary design component of the system.

4.1 Problem Formulation

As before, there are two problems that we seek to solve in studying closed-loop systems:

1. Defining Vulnerable Links: Derive a definition and a computation of vulnerability in closed-loop systems.

2. Minimizing Vulnerability: Given a fixed closed-loop system design G(s) and K(s), find the implementation (P(s), Q(s)) of K(s) that minimizes the vulnerability of the system.

But first, we define a closed-loop system.

Definition 10. A closed-loop system is a system with feedback in its design. In other words, given inputs U and outputs Y, there exists a non-zero transfer function G(s), called the plant, that maps U to Y, and there exists a non-zero transfer function K(s), called the controller, that maps Y to U. Both the plant and the controller may also have internal feedback in their implementations.

Figure 4.1 A closed-loop system is a pair of systems: one maps inputs U to outputs Y and the other maps Y back to U.

4.2 Problem 1: Defining Vulnerable Links

As in the open-loop problem, the first problem that we seek to solve is the derivation and computation of vulnerability in closed-loop systems. Recall that a closed-loop system is the composition of a plant and a controller in feedback.

4.2.1 The Closed-Loop System as a DSF

Let the design of the plant be given by G(s) and its implementation by (\bar{P}(s), \bar{Q}(s)). Let the design of the controller be given by K(s) and its implementation by (P(s), Q(s)). The input to the plant is U and the output is Y; the input to the controller is Y and the output is U. Hence, from Equation 2.13, we get

Y = \bar{Q}(s)Y + \bar{P}(s)U,
U = Q(s)U + P(s)Y,

which is equivalent to

\begin{bmatrix} Y \\ U \end{bmatrix} = \begin{bmatrix} \bar{Q}(s) & \bar{P}(s) \\ P(s) & Q(s) \end{bmatrix} \begin{bmatrix} Y \\ U \end{bmatrix}. (4.1)

Note that (4.1) is a DSF with

\mathcal{Q}(s) = \begin{bmatrix} \bar{Q}(s) & \bar{P}(s) \\ P(s) & Q(s) \end{bmatrix}, \quad \mathcal{P}(s) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}. (4.2)

Since the closed-loop system can be expressed as a DSF, the results for the open-loop system likewise hold in the closed-loop system. Note that all links in the closed-loop system, including the links in \bar{P}(s) and P(s), are in \mathcal{Q}(s). Therefore, all links in the system are potentially vulnerable.

4.2.2 Decomposition of Analysis

We now show that the implementation of one system cannot affect the vulnerability of any other system with which it is connected in feedback.

Lemma 2. Let a closed-loop system be the composition of a fixed plant G(s) and a fixed controller K(s) in feedback. Let the plant be implemented by (\bar{P}(s), \bar{Q}(s)) and the controller

be implemented by (P(s), Q(s)). Also, let \mathcal{Q}(s) be defined as in Equation (4.2). Then

H = (I - \mathcal{Q}(s))^{-1} = \begin{bmatrix} (I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1} & G(s)(I - K(s)G(s))^{-1}(I - Q(s))^{-1} \\ K(s)(I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1} & (I - K(s)G(s))^{-1}(I - Q(s))^{-1} \end{bmatrix}. (4.3)

Proof. Let

(I - \mathcal{Q}(s))^{-1} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}.

Then

\begin{bmatrix} I - \bar{Q}(s) & -\bar{P}(s) \\ -P(s) & I - Q(s) \end{bmatrix} \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}.

Recall that P(s) = (I - Q(s))K(s) and that \bar{P}(s) = (I - \bar{Q}(s))G(s). First, we have that (I - \bar{Q}(s))H_{12} - \bar{P}(s)H_{22} = 0. Hence

H_{12} = (I - \bar{Q}(s))^{-1}\bar{P}(s)H_{22} = G(s)H_{22}.

Similarly, we have that (I - Q(s))H_{21} - P(s)H_{11} = 0. Hence

H_{21} = (I - Q(s))^{-1}P(s)H_{11} = K(s)H_{11}.

Next, we have that (I - \bar{Q}(s))H_{11} - \bar{P}(s)H_{21} = (I - \bar{Q}(s))H_{11} - \bar{P}(s)K(s)H_{11} = I. Hence

H_{11} = ((I - \bar{Q}(s)) - \bar{P}(s)K(s))^{-1}
       = ((I - \bar{Q}(s)) - (I - \bar{Q}(s))G(s)K(s))^{-1}
       = ((I - \bar{Q}(s))(I - G(s)K(s)))^{-1}
       = (I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1}.

Finally, we have that (I - Q(s))H_{22} - P(s)H_{12} = (I - Q(s))H_{22} - P(s)G(s)H_{22} = I. Hence

H_{22} = ((I - Q(s)) - P(s)G(s))^{-1}
       = ((I - Q(s)) - (I - Q(s))K(s)G(s))^{-1}
       = ((I - Q(s))(I - K(s)G(s)))^{-1}
       = (I - K(s)G(s))^{-1}(I - Q(s))^{-1}.

Therefore

H = (I - \mathcal{Q}(s))^{-1} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} = \begin{bmatrix} (I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1} & G(s)(I - K(s)G(s))^{-1}(I - Q(s))^{-1} \\ K(s)(I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1} & (I - K(s)G(s))^{-1}(I - Q(s))^{-1} \end{bmatrix}.

Theorem 3. Let a closed-loop system be the composition of a fixed plant G(s) and a fixed controller K(s) in feedback. Let the controller be implemented by (P(s), Q(s)). Then the vulnerabilities of the links in P(s) and Q(s) are independent of the implementation of G(s).

Proof. Consider two arbitrary implementations of G(s), (\bar{P}_1(s), \bar{Q}_1(s)) and (\bar{P}_2(s), \bar{Q}_2(s)). We show that the vulnerability of the links in (P(s), Q(s)) is the same for each implementation of G(s). Let the DSF of the combined system be given as in (4.2). Recall that all links in the system are in \mathcal{Q}(s). From (3.1), we get that the vulnerability of any link (i, j) \in \mathcal{Q}(s) is v(\mathcal{Q}_{ij}(s)) = \|H_{ji}(s)\|_\infty. From Lemma 2, we get that

H = \begin{bmatrix} (I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1} & G(s)(I - K(s)G(s))^{-1}(I - Q(s))^{-1} \\ K(s)(I - G(s)K(s))^{-1}(I - \bar{Q}(s))^{-1} & (I - K(s)G(s))^{-1}(I - Q(s))^{-1} \end{bmatrix}.

The links in P(s) are found in the entries of \mathcal{Q}_{21}(s); hence the vulnerabilities of the links in P(s) are given by the infinity norms of the entries of

H_{12}(s) = G(s)(I - K(s)G(s))^{-1}(I - Q(s))^{-1}.

Since G(s) = (I - \bar{Q}_1(s))^{-1}\bar{P}_1(s) = (I - \bar{Q}_2(s))^{-1}\bar{P}_2(s), the vulnerabilities of the links in P(s) cannot be changed by changing the implementation of G(s).

Similarly, the links in Q(s) are found in the entries of \mathcal{Q}_{22}(s); hence the vulnerabilities of the links in Q(s) are given by the infinity norms of the entries of

H_{22}(s) = (I - K(s)G(s))^{-1}(I - Q(s))^{-1}.

Again, since G(s) is invariant with respect to its implementation, the vulnerabilities of the links in Q(s) cannot be changed by changing the implementation of G(s).

Corollary 2. Let a closed-loop system be the composition of a fixed plant G(s) and a fixed controller K(s) in feedback. Let the plant be implemented by (\bar{P}(s), \bar{Q}(s)). Then the vulnerabilities of the links in \bar{P}(s) and \bar{Q}(s) are independent of the implementation of K(s).

Proof. This follows by switching the roles of the plant and controller in the proof of Theorem 3.

Theorem 3 and Corollary 2 have both an advantage and a disadvantage. The advantage is that the implementation of one system in feedback cannot change the vulnerability of the other system; therefore it is convenient to implement each separately. In addition, any chosen implementation of one system cannot inadvertently make the other system more vulnerable. The disadvantage, however, is that if the implementation of one system is vulnerable and cannot be changed, then nothing can be done on the other system to improve the situation.
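Lemma 2 is easy to spot-check numerically. In the sketch below, random constant matrices stand in for the transfer matrices evaluated at some fixed s, scaled down so that every inverse exists:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
I = np.eye(p)
Qbar, Q, G, K = (0.2 * rng.normal(size=(p, p)) for _ in range(4))
Pbar = (I - Qbar) @ G   # plant implementation:      Pbar = (I - Qbar) G
P = (I - Q) @ K         # controller implementation: P    = (I - Q) K

# Combined DSF (4.2) and its closed loop H = (I - Qcal)^(-1).
Qcal = np.block([[Qbar, Pbar], [P, Q]])
H = np.linalg.inv(np.eye(2 * p) - Qcal)

# The four blocks predicted by (4.3).
inv = np.linalg.inv
H11 = inv(I - G @ K) @ inv(I - Qbar)
H12 = G @ inv(I - K @ G) @ inv(I - Q)
H21 = K @ inv(I - G @ K) @ inv(I - Qbar)
H22 = inv(I - K @ G) @ inv(I - Q)

assert np.allclose(H, np.block([[H11, H12], [H21, H22]]))
print("block formula (4.3) verified on a random instance")
```

Note how the controller blocks H_{12} and H_{22} involve \bar{Q}(s) only through G(s), which is exactly the independence exploited by Theorem 3.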

4.2.3 Summary

In summary, the closed-loop system can be represented by a DSF; therefore the computation of vulnerability is the same as in the open-loop case. Any link continues to be vulnerable if and only if it is in a cycle. Vulnerability is still given by (3.1) and (3.2), where Q(s) in these equations is the combined \mathcal{Q}(s) given by (4.2). Since links in a cycle are vulnerable and since the two systems are in feedback, there are likely to be cycles in the combined system; hence it may not be possible to remove feedback from the system. However, we have shown that an engineer implementing a system need not worry about introducing vulnerabilities into any other system with which it is connected in feedback. This simplifies the design and analysis of the security of systems.

4.3 Problem 2: Minimizing Vulnerability

We seek to minimize the vulnerability of a system composed of two sub-systems in feedback. To do this, we utilize (3.2) to choose the Q^*(s) that minimizes vulnerability through the following minimization problem:

Q^*(s) = \arg\min_{\mathcal{Q}(s)} \|H(s)\|_{1\infty} = \arg\min_{\mathcal{Q}(s)} \|(I - \mathcal{Q}(s))^{-1}\|_{1\infty}, (4.4)

where \mathcal{Q}(s) is defined as in (4.2).

However, recall from Theorem 3 and Corollary 2 that the vulnerabilities of the plant and the controller are each independent of the implementation of the other. Thus, the problem of minimizing the vulnerability of the plant is separate and independent from the problem of minimizing the vulnerability of the controller. Without loss of generality, we only consider the problem of reducing the vulnerability of the controller; therefore, (4.4) reduces to

choosing Q^*(s) in the implementation of the controller such that

Q^*(s) = \arg\min_{Q(s)} \left\| \begin{bmatrix} G(s)(I - K(s)G(s))^{-1}(I - Q(s))^{-1} \\ (I - K(s)G(s))^{-1}(I - Q(s))^{-1} \end{bmatrix} \right\|_{1\infty}. (4.5)

Again, this optimization problem is non-convex, and therefore computationally difficult. However, the nature of DSFs provides insight into simplifying the problem.

4.3.1 Fighting Fire with Fire

When we began studying this problem, we believed that the solution of the closed-loop problem in (4.5) would be the same as in the open-loop problem: choose Q(s) such that Q(s) has no cycles, such as Q(s) = 0. However, this is not the case. In fact, evidence shows that we can fight fire with fire, using internal feedback in Q(s) to counter the vulnerability introduced by connecting two systems in feedback.

4.3.2 Numerical Examples

Consider an unstable plant given by the following state representation:

\tilde{A} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \quad \tilde{C} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

The corresponding transfer function representation of this system is

G(s) = \begin{bmatrix} \frac{2}{s-1} & \frac{1}{s-1} \\ \frac{1}{s+1} & \frac{2}{s+1} \end{bmatrix}.

Finally, the corresponding DSF from the state representation is \bar{Q}(s) = 0, \bar{P}(s) = G(s).

To stabilize this plant, we connect a controller with the following state representation:

A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 2 \\ -2 & -3 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

The corresponding transfer function representation of this system is

K(s) = \frac{1}{(s+1)(s+3)} \begin{bmatrix} 3s+4 & 2s+1 \\ -(2s+1) & -(3s+4) \end{bmatrix}.

Though the state representation gives an implementation, we will change this implementation in the following examples.
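Both transfer matrices follow from T(s) = C(sI - A)^{-1}B and can be spot-checked at any point s that is not a pole. A minimal sketch, using the sign conventions assumed above:

```python
import numpy as np

def tf_from_ss(A, B, C, s):
    """Evaluate C (sI - A)^(-1) B at a complex frequency s."""
    return C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B

s0 = 2.0 + 1.0j  # arbitrary test point away from the poles at 1, -1, -3

# Plant: eigenvalues of A_tilde are +1 (unstable) and -1.
A_p = np.array([[1.0, 0.0], [0.0, -1.0]])
B_p = np.array([[2.0, 1.0], [1.0, 2.0]])
G0 = np.array([[2 / (s0 - 1), 1 / (s0 - 1)],
               [1 / (s0 + 1), 2 / (s0 + 1)]])
assert np.allclose(tf_from_ss(A_p, B_p, np.eye(2), s0), G0)

# Controller: eigenvalues of A are -1 and -3, the poles of K(s).
A_c = np.array([[-2.0, 1.0], [1.0, -2.0]])
B_c = np.array([[3.0, 2.0], [-2.0, -3.0]])
d0 = (s0 + 1) * (s0 + 3)
K0 = np.array([[3 * s0 + 4, 2 * s0 + 1],
               [-(2 * s0 + 1), -(3 * s0 + 4)]]) / d0
assert np.allclose(tf_from_ss(A_c, B_c, np.eye(2), s0), K0)
print("state-space and transfer function representations agree at s0")
```

Evaluating at a handful of points like this is a cheap way to catch transcription errors before running the vulnerability computations that follow.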

Example 5. In this example, we do not change the structure of the controller. It is therefore implemented by

Q(s) = \frac{1}{s+2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad P(s) = \frac{1}{s+2} \begin{bmatrix} 3 & 2 \\ -2 & -3 \end{bmatrix}.

Note that this implementation has full internal feedback within the controller. The resulting vulnerabilities of all links in the combined system are given in Figure 4.2, with a maximum vulnerability of 2.70 and an average vulnerability across all links of 1.41.

Figure 4.2 The vulnerability of links in Example 5.

Since the most vulnerable link in this example is in the controller, there is a possibility that another controller implementation can reduce the vulnerability of the entire system.

Example 6. Now we change the controller so that there is no internal feedback. In particular, let Q(s) = 0. Then P(s) = K(s). The resulting vulnerabilities of all links in the combined system are given in Figure 4.3, with a maximum vulnerability of 2.27 and an average vulnerability across all links of 1.10.

Figure 4.3 The vulnerability of links in Example 6.

Hence, as we originally expected, removing internal feedback from the controller reduced the vulnerability of the system. However, note that the most vulnerable link is still in the controller. Therefore, it may be possible to implement a better controller that reduces this vulnerability further.

Example 7. Let

Q(s) = \frac{1}{s+1} \begin{bmatrix} 0 & -32 \\ -32 & 0 \end{bmatrix}.

Then

P(s) = \frac{1}{d(s)} \begin{bmatrix} 3s^2 - 57s - 28 & 2s^2 - 93s - 127 \\ -(2s^2 - 93s - 127) & -(3s^2 - 57s - 28) \end{bmatrix}, \quad d(s) = (s+1)^2(s+3).

The resulting vulnerabilities of all links in the combined system are given in Figure 4.4, with a maximum vulnerability of 1.85 and an average vulnerability across all links of 0.69.

Figure 4.4 The vulnerability of links in Example 7.

Note that the most vulnerable link is now in the plant. Therefore, no other controller can reduce the vulnerability of the system more than this one. Thus an implementation with full internal feedback may result in a lower vulnerability than one with no internal feedback. Cycles cause vulnerability, yet if feedback is necessary, we can use internal feedback to reduce the vulnerability caused by feedback.
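One can verify that this implementation still realizes the controller design, i.e. that (I - Q(s))^{-1}P(s) = K(s). A sketch evaluating all three at one test point, under the sign conventions we assume for K(s), Q(s), and P(s):

```python
import numpy as np

s = 1.0 + 0.5j  # arbitrary test point away from poles and from det(I-Q)=0
d = (s + 1) * (s + 3)
K = np.array([[3*s + 4, 2*s + 1],
              [-(2*s + 1), -(3*s + 4)]]) / d

# Implementation from Example 7.
Q = np.array([[0, -32], [-32, 0]]) / (s + 1)
f = (s + 1) ** 2 * (s + 3)
g = 3*s**2 - 57*s - 28
h = 2*s**2 - 93*s - 127
P = np.array([[g, h], [-h, -g]]) / f

# The implementation (P, Q) realizes the same design: K = (I - Q)^(-1) P.
assert np.allclose(np.linalg.inv(np.eye(2) - Q) @ P, K)
print("Example 7 implementation realizes K(s)")
```

The same check applied to Examples 5 and 6 confirms that all three implementations have identical input-output behavior; only their internal structure, and hence their vulnerability, differs.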

4.3.3 The High-Gain Heuristic

Example 7 gave some powerful insight into a heuristic that leads to good implementations, which we call the High-Gain Heuristic. For any system, let

Q(s) = \begin{bmatrix} 0 & \frac{n}{d(s)} & \cdots & \frac{n}{d(s)} \\ \frac{n}{d(s)} & 0 & \cdots & \frac{n}{d(s)} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{n}{d(s)} & \frac{n}{d(s)} & \cdots & 0 \end{bmatrix}, \quad P(s) = (I - Q(s))K(s),

where d(s) is a polynomial in s and n \in \mathbb{R} is large in magnitude.

This heuristic has consistently led to good results in all of our simulations: as n grows large, the vulnerabilities on all links in P(s) approach 0, while the vulnerabilities on all links in Q(s) approach 1/p, where Q(s) is a p \times p matrix. As the number of rows in Q(s) increases, or in other words, as the number of measured states in the system increases, the more secure the controller designed using the high-gain heuristic becomes, until it converges to the point where the vulnerabilities of all links in both P(s) and Q(s) approach zero.

Example 8. In this example, we change the implementations of both the plant and the controller to use the high-gain heuristic. Let

Q(s) = \frac{1}{s+1} \begin{bmatrix} 0 & -10000 \\ -10000 & 0 \end{bmatrix}.

Then

P(s) = \frac{1}{f(s)} \begin{bmatrix} g(s) & h(s) \\ -h(s) & -g(s) \end{bmatrix},
f(s) = (s+1)^2(s+3),
g(s) = 3s^2 - 19993s - 9996,
h(s) = 2s^2 - 29997s - 39999.

Also let

\bar{Q}(s) = \frac{1}{s-1} \begin{bmatrix} 0 & 10000 \\ 10000 & 0 \end{bmatrix}.

Then

\bar{P}(s) = \frac{1}{\bar{f}(s)} \begin{bmatrix} \bar{g}(s) & \cdot \\ \bar{h}(s) & \cdot \end{bmatrix},
\bar{f}(s) = (s-1)^2(s+1),
\bar{g}(s) = 2s^2 - 10000s + 9998,
\bar{h}(s) = s^2 - 20002s - 19999.

The resulting vulnerabilities of all links in the combined system are given in Figure 4.5, with a maximum vulnerability of 0.50 and an average vulnerability across all links of 0.17.

Figure 4.5 The vulnerability of links in Example 8.

4.4 Conclusions

In conclusion, we have shown that the vulnerability of one system depends only on the design of the system with which it is connected in feedback, and not on its implementation. We have also shown that removing the internal feedback of the systems is not capable of minimizing vulnerability; in fact, it is often possible to fight fire with fire, using full internal feedback through the High-Gain Heuristic to combat the vulnerability introduced by connecting the systems in feedback.

Chapter 5 Conclusions and Future Work

In conclusion, we have defined vulnerability due to destabilizing attacks as a property of a system's structure. Further, we defined the Transfer Function to be the system's design and the Dynamical Structure Function to be the system's implementation. We wish to minimize vulnerability, and we formulated the problem of how to minimize vulnerability in open-loop systems and in closed-loop systems.

In the case of the open-loop system, we have shown that vulnerability can be reduced to zero by implementing the system with no internal feedback. In the case of the closed-loop system, we showed that the vulnerability of one system depends only on the design of the system with which it is connected in feedback. Though we still do not know how to minimize vulnerability in the closed-loop case, we have provided the High-Gain Heuristic, which has proven effective at minimizing the vulnerability in several systems. This heuristic shows that it is possible to fight fire with fire, to use the internal feedback within systems to combat the vulnerability introduced by connecting the systems in feedback.

Further work must be performed to prove that the High-Gain Heuristic is effective at reducing vulnerability in all systems without introducing unstable hidden modes. Additionally, work must be performed to show that either the High-Gain Heuristic or an alternate implementation minimizes the vulnerability of a closed-loop system.

Appendix A Symbols and Notations

\mathbb{R}: The set of real numbers.

x \in \mathbb{R}^n: A column vector of n real numbers. Unless otherwise specified, all variables denoted with lower-case letters are column vectors.

x_i \in x: The ith entry of x.

\dot{x}: The derivative of x with respect to time.

A \in \mathbb{R}^{m \times n}: An m \times n matrix of real numbers. Unless otherwise specified, all variables denoted with upper-case letters are matrices.

a_{ij} \in A: The entry in the ith row and jth column of A.

s: The Laplace variable.

p(s): A polynomial in terms of the Laplace variable s.

T(s): A transfer function, which for the purposes of this paper is a rational polynomial in terms of the Laplace variable s, e.g. T(s) = n(s)/d(s) for some polynomials n(s) and d(s).

T(s): A matrix of transfer functions. Note that T(s) can denote either a transfer function or a matrix of transfer functions; its meaning depends on the context.

G_{ij}(s): The transfer function in the ith row and jth column of the transfer function matrix G(s).

Bibliography

[1] J. P. Hespanha, Linear Systems Theory (Princeton University Press, Princeton, 2009).

[2] S. Amin, A. Cardenas, and S. S. Sastry, "Safe and Secure Networked Control Systems Under Denial-of-Service Attacks," in Hybrid Systems: Computation and Control, pp. 31-45 (2009).

[3] A. Rai, D. Ward, S. Roy, and S. Warnick, American Control Conference (Montréal, Canada, 2012).

[4] Y. Liu, P. Ning, and M. K. Reiter, "False Data Injection Attacks against State Estimation in Electric Power Grids," ACM Transactions on Information and System Security 14 (2011).

[5] E. Garone, A. Casavola, and B. Sinopoli, IEEE Conference on Decision and Control (Atlanta, GA, 2010), pp. 5967-5972.

[6] K. J. Åström and R. M. Murray, Feedback Systems (Princeton University Press, 2008).

[7] G. E. Dullerud and F. Paganini, A Course in Robust Control Theory: A Convex Approach (Springer, 2000).

[8] G. Meinsma, "Elementary proof of the Routh-Hurwitz test," Systems and Control Letters 25, 237-242 (1995).