© 2005 by Shreyas Sundaram. All rights reserved.


OBSERVERS FOR LINEAR SYSTEMS WITH UNKNOWN INPUTS

BY

SHREYAS SUNDARAM

B.A.Sc., University of Waterloo, 2003

THESIS

Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering in the Graduate College of the University of Illinois at Urbana-Champaign, 2005

Urbana, Illinois

ABSTRACT

In practice, it is often the case that systems can be modeled as having unknown inputs. For such systems, it may be necessary to estimate the states and inputs in order to achieve certain control objectives. In this thesis, we study linear systems with unknown inputs, and design observers to estimate the desired quantities. In particular, we consider the use of delayed observers, which enlarges the class of systems for which state and input estimation is possible. We start by designing reduced-order observers that reconstruct the entire state. Our design procedure is quite general in that it encompasses the design of full-order observers via appropriate choices of design matrices. We provide necessary and sufficient conditions for the existence of such observers, as well as a characterization of the minimum delay required to reconstruct the entire state. Once the state observer is constructed, we show that it is straightforward to obtain an estimate of the unknown inputs. We then present an investigation of partial state and input observers for linear systems with unknown inputs. Our approach characterizes the set of all linear functions of the states and inputs that can be reconstructed through a linear observer with a given delay, and directly produces the corresponding observer parameters. In comparison to previous work on partial observers, the main contributions of our work are (i) an algebraic procedure to design delayed linear observers with dimension no greater than that of the system, and (ii) a unified treatment of both state and input observers. Finally, we consider a stochastic setup and present a method for constructing linear minimum-variance unbiased state estimators for discrete-time linear stochastic systems with unknown inputs. Our design provides a characterization of estimators with delay, which eases the established necessary conditions for existence of unbiased estimators with zero delay. A consequence of using delayed estimators is that the noise affecting the system becomes correlated with the estimation error. We handle this correlation by increasing the dimension of the estimator appropriately.

To my family

ACKNOWLEDGMENTS

I am indebted to my adviser, Christoforos N. Hadjicostis, for his guidance. This thesis would not have been possible without his insight and expertise. I am also grateful to my family and friends for their support. I would like to thank Paul Leventis, Catherine Gebotys, Farid Golnaraghi, Daniel Miller, Hans Eberle, Nils Gura, and Sheueling Chang-Shantz for their assistance in getting me to where I am today. I am also thankful to the National Science Foundation, which supported my work under an NSF Career Award and an NSF EPNES Award, and the Air Force Office of Scientific Research, which supported my work under a URI Award. The opinions, findings, conclusions, and recommendations expressed in this thesis do not necessarily reflect the views of the NSF or the AFOSR.

TABLE OF CONTENTS

CHAPTER 1  INTRODUCTION
  1.1  Background and Motivation
  1.2  Preliminaries and Notation
  1.3  Previous Results
    1.3.1  Inversion
    1.3.2  State observation

CHAPTER 2  FULL STATE AND INPUT OBSERVERS
  2.1  Introduction
  2.2  State Observer
  2.3  Input Observer
  2.4  Design Procedure
  2.5  Example
  2.6  Summary

CHAPTER 3  PARTIAL STATE AND INPUT OBSERVERS
  3.1  Introduction
  3.2  Partial State Observer
  3.3  Partial Input Observer
  3.4  Example
  3.5  Summary

CHAPTER 4  OPTIMAL STATE ESTIMATORS
  4.1  Introduction
  4.2  Preliminaries
  4.3  Unbiased Estimation
  4.4  Optimal Estimator
  4.5  Design Procedure
  4.6  Examples
    4.6.1  Example 1
    4.6.2  Example 2
  4.7  Summary

CHAPTER 5  SUMMARY AND CONCLUSIONS

REFERENCES

CHAPTER 1

INTRODUCTION

1.1 Background and Motivation

When designing control systems, there is frequently some degree of uncertainty surrounding the plant. For example, some of the plant parameters may not be exactly known [1, 2], or the plant may be subject to unmeasurable disturbances [3]. Similarly, in decentralized control systems, it may not be possible to have knowledge of the control signals generated by different controllers [4]. The system may also be affected by faults, the magnitude and characteristics of which cannot be predicted a priori [5]. These uncertainties can often be incorporated into the system model by treating them as unknown inputs. Traditional techniques for control system design must then be generalized to handle these uncertainties. For example, researchers have studied ways to reconstruct the unknown inputs by using only the output of the system and possibly the initial system state [6]-[11]. These investigations have revealed that it will generally be necessary to use delayed (or differentiated) outputs in order to reconstruct the inputs. A system is said to be invertible if it is possible to reconstruct the inputs in the above manner. Researchers have also extended the standard Luenberger state observers to treat the problem of state estimation in linear systems with unknown inputs. It has been shown that system invertibility is a necessary condition for the existence of unknown input observers [26], and thus delayed observers may be required in order to estimate the entire system state.

In this thesis, we study state and input observers for linear systems with unknown inputs. We start by examining the problem of observing the entire state and input in such systems, and develop a design procedure to obtain the observer parameters. We then investigate partial observers, which reconstruct a maximal subset of the states and inputs with a given delay. Finally, we study linear stochastic systems with unknown inputs, and develop an optimal state estimator that minimizes the mean square estimation error. Specifically, the main contributions of this thesis are:

1. The development of a unified design procedure for both reduced and full-order state

observers that incorporate delays.

2. The characterization of all possible linear functions of the state vector and input that can be observed with a given delay.

3. A design procedure for optimal state estimators with delays for linear systems operating in the presence of noise.

The fact that we consider delayed observers makes our investigation more general than many of the other works currently present in the literature.

1.2 Preliminaries and Notation

In this thesis, we will be considering discrete-time linear systems of the form

$$x_{k+1} = Ax_k + Bu_k,$$
$$y_k = Cx_k + Du_k, \tag{1.1}$$

with state vector $x \in \mathbb{R}^n$, unknown input $u \in \mathbb{R}^m$, output $y \in \mathbb{R}^p$, and system matrices $(A, B, C, D)$ of appropriate dimensions. Note that we omit known inputs in the above equations for clarity of development. We also assume without loss of generality that the matrix $\begin{bmatrix} B \\ D \end{bmatrix}$ has full column rank. This assumption can always be enforced by an appropriate transformation and renaming of the unknown inputs. Finally, while we consider discrete-time systems in our development, our approach applies equally well to continuous-time systems by replacing advances with differentiators (e.g., $y_{k+\alpha}$ should be replaced by the $\alpha$-th derivative of $y(t)$).

The response of system (1.1) over $\alpha + 1$ time units is given by

$$\underbrace{\begin{bmatrix} y_k \\ y_{k+1} \\ \vdots \\ y_{k+\alpha} \end{bmatrix}}_{Y_{k:k+\alpha}} = \underbrace{\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{\alpha} \end{bmatrix}}_{\Theta_\alpha} x_k + \underbrace{\begin{bmatrix} D & & & \\ CB & D & & \\ \vdots & \ddots & \ddots & \\ CA^{\alpha-1}B & \cdots & CB & D \end{bmatrix}}_{M_\alpha} \underbrace{\begin{bmatrix} u_k \\ u_{k+1} \\ \vdots \\ u_{k+\alpha} \end{bmatrix}}_{U_{k:k+\alpha}} \tag{1.2}$$

The matrices $\Theta_\alpha$ and $M_\alpha$ in the above equation can be expressed in a variety of ways.
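As a concrete illustration of the stacked response above, the matrices $\Theta_\alpha$ and $M_\alpha$ can be assembled numerically. The sketch below is not from the thesis: it uses NumPy and a small hypothetical system purely to check the structure of the stacked maps.

```python
import numpy as np

def stacked_maps(A, B, C, D, alpha):
    """Build Theta_alpha (the stack C, CA, ..., CA^alpha) and M_alpha
    (the block lower-triangular Toeplitz input-to-output matrix)."""
    p, m = D.shape
    # Theta_alpha = [C; CA; ...; CA^alpha]
    Theta = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(alpha + 1)])
    # Block (i, j) of M_alpha is D if i == j, C A^{i-j-1} B if i > j, 0 otherwise
    M = np.zeros(((alpha + 1) * p, (alpha + 1) * m))
    for i in range(alpha + 1):
        for j in range(i + 1):
            blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
            M[i*p:(i+1)*p, j*m:(j+1)*m] = blk
    return Theta, M

# Hypothetical system (not an example from the thesis): n = 3, m = 1, p = 2
A = np.array([[0.5, 1.0, 0.0], [0.0, 0.3, 1.0], [0.0, 0.0, 0.2]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
D = np.zeros((2, 1))

Theta, M = stacked_maps(A, B, C, D, alpha=2)
print(Theta.shape, M.shape)  # (6, 3) (6, 3)
```

Simulating the system for $\alpha + 1$ steps and stacking the outputs reproduces the response identity $Y_{k:k+\alpha} = \Theta_\alpha x_k + M_\alpha U_{k:k+\alpha}$ exactly.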

We will be using the following identities in our derivations:

$$\Theta_\alpha = \begin{bmatrix} C \\ \Theta_{\alpha-1}A \end{bmatrix} = \begin{bmatrix} \Theta_{\alpha-1} \\ CA^{\alpha} \end{bmatrix}, \tag{1.3}$$

$$M_\alpha = \begin{bmatrix} D & 0 \\ \Theta_{\alpha-1}B & M_{\alpha-1} \end{bmatrix} = \begin{bmatrix} M_{\alpha-1} & 0 \\ C\zeta_{\alpha-1} & D \end{bmatrix}, \tag{1.4}$$

where $\zeta_{\alpha-1} \equiv \begin{bmatrix} A^{\alpha-1}B & A^{\alpha-2}B & \cdots & B \end{bmatrix}$.

1.3 Previous Results

We now summarize some important results on system inversion and state observation. These results will be useful to our development later on.

1.3.1 Inversion

Consider the system response given by (1.2), and suppose that the value of the state $x_k$ is known at time-step $k$. If we wish to determine $u_k$ from the output, we require that the first $m$ columns of $M_\alpha$ be linearly independent of each other and of the remaining $\alpha m$ columns of $M_\alpha$. Otherwise, there exists a nonzero sequence of inputs $U_{k:k+\alpha}$ such that $M_\alpha U_{k:k+\alpha} = 0$, and this is indistinguishable from the input sequence $U_{k:k+\alpha} = 0$. Using (1.4), we note that the last $\alpha m$ columns of $M_\alpha$ have the same rank as $M_{\alpha-1}$. This reasoning was used by Sain and Massey to produce the following theorem for system invertibility [7].

Theorem 1.1. For any nonnegative integer $\alpha$, $\operatorname{rank} M_\alpha - \operatorname{rank} M_{\alpha-1} \le m$, with equality if and only if the system in (1.1) is invertible with delay $\alpha$.

If the condition in the above theorem holds, then there exists a matrix $S$ such that $u_k = SY_{k:k+\alpha} - S\Theta_\alpha x_k$, and this can be substituted into (1.1) to obtain $x_{k+1}$. Note that the expression on the left side of the inequality in the above theorem is a monotonically nondecreasing function of $\alpha$, which indicates that more information can be obtained about the input by allowing a

larger delay. An upper bound on the delay was obtained by Willsky, who studied system invertibility in [27].

Theorem 1.2. Let $q$ be the dimension of the nullspace of $D$. The system in (1.1) is invertible if and only if $\operatorname{rank} M_{n-q+1} - \operatorname{rank} M_{n-q} = m$; i.e., if (1.1) is invertible, its inherent delay $\alpha$ cannot exceed $n - q$.

1.3.2 State observation

As mentioned earlier in this chapter, the problem of state estimation for the system in (1.1) has also received considerable attention in the literature. The vast majority of these investigations focus on zero-delay observers (i.e., observers that only make use of the current measurement $y_k$ to estimate $x_k$). The following theorem provides the well-established conditions for the existence of a zero-delay state observer for (1.1) [19, 25, 28].

Theorem 1.3. The system (1.1) has an asymptotically stable state observer if and only if

1. $\operatorname{rank}\begin{bmatrix} D & 0 \\ CB & D \end{bmatrix} = m + \operatorname{rank} D$,

2. $\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \end{bmatrix} = n + m, \quad \forall z \in \mathbb{C}, |z| \ge 1$.

The values of $z$ for which the second condition fails are called the transmission zeros of the system [29]. Thus, the second condition in the above theorem means that all transmission zeros of the system must be stable in order to estimate the entire system state. Comparing condition 1 to Theorem 1.1, we notice that the system must be invertible with a delay of one time-step in order to construct a zero-delay observer. One would then expect that this necessary condition could be met more easily through the use of a delayed observer. This fact was verified in [26] by defining a new output equation for the system (1.1), with $\Theta_\alpha$ and $M_\alpha$ taking the place of the $C$ and $D$ matrices, respectively. These matrices were then substituted into the necessary conditions for zero-delay observers, and reduced to produce the following result.

Theorem 1.4. The system (1.1) has an asymptotically stable state observer with delay $\alpha$ if and only if

11 1 rankm α+1 rankm α = m, zi A B 2 rank = n + m, z C, z 1 C D Note that the second condition in the above theorem is unchanged from the zero-delay case, which indicates that delays do not affect the transmission zeros of the system We are now ready to pursue our investigation of observers for linear systems with unknown inputs The remainder of this thesis is organized as follows In Chapter 2, we develop a design procedure for both full and reduced-order delayed observers to estimate the entire system state and input In Chapter 3, we characterize the set of all linear functions of the state and input that can be observed with a given delay In Chapter 4, we consider the case where the system is operating in the presence of noise, and develop a state estimator that is optimal in the sense of minimizing the mean square estimation error We summarize and conclude the thesis in Chapter 5 5

CHAPTER 2

FULL STATE AND INPUT OBSERVERS

2.1 Introduction

In this chapter, we study the problem of observing the entire system state and input in linear systems with unknown inputs. State observers for such systems have received considerable attention over the past few decades [16, 19, 22, 23, 25], and various methods of realizing both full and reduced-order zero-delay state observers have been presented. As discussed in Chapter 1, it may be necessary to utilize delayed observers in order to estimate the system state. While [26] established necessary and sufficient conditions for the existence of state observers with delays, no design procedure was provided. In [3], the authors handled delayed observers by constructing a higher-dimensional system that incorporated the delayed states into the new state vector. An observer was then constructed for this augmented system, and geometric conditions were given for the existence of such observers.

In this chapter, we provide a unified design procedure for both reduced and full-order observers with delays, and present conditions for the existence of such observers. In contrast to the work in [3], the dimension of our observer is no greater than the dimension of the original system, and we present algebraic existence conditions. Our approach generalizes recently published work on full-order zero-delay state observers [25], and allows us to treat the full-order observer as a special case of a reduced-order observer where the dynamic portion reconstructs the entire state vector. Once we have constructed a state observer, we show that it is straightforward to obtain an estimate of the inputs.

2.2 State Observer

We first consider the problem of constructing an observer to estimate the system state. We start by determining the set of states which can be directly obtained from the output of the system over $\alpha + 1$ time-steps. The following theorem provides an answer to this problem.

Theorem 2.1. For system (1.1) with response over $\alpha + 1$ time-steps given by (1.2), let $t = \operatorname{rank}\begin{bmatrix} \Theta_\alpha & M_\alpha \end{bmatrix} - \operatorname{rank} M_\alpha$. Then it is possible to perform a similarity transformation on the system $S$ to obtain a new system $\bar S$ such that exactly $t$ of the states in $\bar S$ are directly obtainable from the output of the system.

Proof: Assume $\operatorname{rank}\begin{bmatrix} \Theta_\alpha & M_\alpha \end{bmatrix} - \operatorname{rank} M_\alpha = t$. This implies that there are $t$ linearly independent vectors in the matrix $\Theta_\alpha$ that cannot be written as a linear combination of vectors in $M_\alpha$. Thus, there exists a matrix $P$ of dimension $t \times (\alpha+1)p$ such that $P\Theta_\alpha$ has full row-rank, and $PM_\alpha = 0$. Define the similarity transformation matrix

$$T \equiv \begin{bmatrix} P\Theta_\alpha \\ H \end{bmatrix}, \tag{2.1}$$

where the matrix $H$ is chosen so that $T$ has full rank. Note that $P$ and $H$ can be chosen so that $T$ is orthogonal, if so desired. Consider the system $\bar S$ with state-vector $\bar x_k = Tx_k = \begin{bmatrix} \bar x_{1,k} \\ \bar x_{2,k} \end{bmatrix}$. The system matrices in $\bar S$ are given by

$$\bar A \equiv TAT^{-1} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad \bar B \equiv TB = \begin{bmatrix} P\Theta_\alpha B \\ HB \end{bmatrix}, \quad \bar C \equiv CT^{-1}, \quad \bar D \equiv D. \tag{2.2}$$

Now it is readily seen from (1.2) that $PY_{k:k+\alpha} = P\Theta_\alpha T^{-1}\bar x_k = \begin{bmatrix} I_t & 0 \end{bmatrix}\bar x_k$, and thus the first $t$ states of $\bar x_k$ are immediately obtained.

Remark 2.1. The problem of determining a particular set of states from the (delayed) output was studied in [31], and the special case of perfect observability (i.e., $t = n$) was studied in [4, 32]. The result in Theorem 2.1 appears to be new in that it deals with reconstructing a maximal subset of states from the output.
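The quantity $t$ and a valid matrix $P$ from the theorem above can be computed with standard numerical linear algebra: $P$ must annihilate $M_\alpha$ while $P\Theta_\alpha$ keeps full row rank. A minimal sketch (hypothetical system, not an example from the thesis), using the SVD to obtain a left-nullspace basis:

```python
import numpy as np

def left_nullspace(M, tol=1e-10):
    """Rows form a basis for the left nullspace of M (rows r with r @ M = 0)."""
    U, s, _ = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return U[:, rank:].T

def state_extraction_matrix(Theta, M, tol=1e-10):
    """Return t = rank[Theta M] - rank M and a matrix P with P @ M = 0
    and P @ Theta of full row rank t."""
    Nbar = left_nullspace(M)
    U, s, _ = np.linalg.svd(Nbar @ Theta)
    t = int(np.sum(s > tol))
    # P Theta equals the first t singular directions of Nbar Theta,
    # hence has full row rank t; P M = 0 since every row of P lies in
    # the left nullspace of M.
    P = U[:, :t].T @ Nbar
    return t, P

# Hypothetical system (not the thesis example), delay alpha = 1
A = np.array([[0.5, 1.0, 0.0], [0.0, 0.3, 1.0], [0.0, 0.0, 0.2]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
D = np.zeros((2, 1))

Theta = np.vstack([C, C @ A])                      # Theta_1
M = np.block([[D, np.zeros((2, 1))], [C @ B, D]])  # M_1
t, P = state_extraction_matrix(Theta, M)
print(t)  # 3
```

In this hypothetical case $t = n = 3$, so the whole state is directly obtainable from the delayed output (the perfect-observability special case mentioned in the remark).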

To estimate the remaining $(n - t)$ states of $\bar x_k$ (i.e., $\bar x_{2,k}$), we construct a reduced-order observer of the form

$$z_{k+1} = Ez_k + FY_{k:k+\alpha},$$
$$\psi_k = z_k + GY_{k:k+\alpha}, \tag{2.3}$$

where the matrices $E$, $F$, and $G$ are chosen such that $\psi_k \to \bar x_{2,k}$ as $k \to \infty$. Using (1.2), the observer error is given by

$$e_{k+1} \equiv \psi_{k+1} - \bar x_{2,k+1} = Ez_k + FY_{k:k+\alpha} + GY_{k+1:k+\alpha+1} - \begin{bmatrix} A_{21} & A_{22} \end{bmatrix}\bar x_k - HBu_k$$
$$= Ee_k + (F - EG)\Theta_\alpha x_k + G\Theta_\alpha Ax_k + EHx_k - \begin{bmatrix} A_{21} & A_{22} \end{bmatrix}Tx_k + (F - EG)M_\alpha U_{k:k+\alpha} + G\Theta_\alpha Bu_k + GM_\alpha U_{k+1:k+\alpha+1} - HBu_k.$$

Using the identities (1.3) and (1.4), the expression for the error can be written as

$$e_{k+1} = Ee_k + \begin{bmatrix} F - EG & 0 \end{bmatrix}\Theta_{\alpha+1}x_k + \begin{bmatrix} 0 & G \end{bmatrix}\Theta_{\alpha+1}x_k + EHx_k - \begin{bmatrix} A_{21} & A_{22} \end{bmatrix}Tx_k + \begin{bmatrix} F - EG & 0 \end{bmatrix}M_{\alpha+1}U_{k:k+\alpha+1} + \begin{bmatrix} 0 & G \end{bmatrix}M_{\alpha+1}U_{k:k+\alpha+1} - HBu_k.$$

Partition the matrices $F$ and $G$ as $F = \begin{bmatrix} F_0 & F_1 & \cdots & F_\alpha \end{bmatrix}$, $G = \begin{bmatrix} G_0 & G_1 & \cdots & G_\alpha \end{bmatrix}$, where each $F_i$ and $G_i$ is of dimension $(n-t) \times p$, and define

$$K \equiv \begin{bmatrix} F_0 - EG_0 & F_1 - EG_1 + G_0 & \cdots & F_\alpha - EG_\alpha + G_{\alpha-1} & G_\alpha \end{bmatrix}. \tag{2.4}$$

Note that since we are free to choose $F$ and $G$, the matrix $K$ can be chosen to have any value we require. The error can then be expressed as

$$e_{k+1} = Ee_k + \left(EH - \begin{bmatrix} A_{21} & A_{22} \end{bmatrix}T + K\Theta_{\alpha+1}\right)x_k + KM_{\alpha+1}U_{k:k+\alpha+1} - HBu_k.$$

In order to force the error to go to zero, regardless of the values of $x_k$ and the inputs, the following two conditions must hold:

1. $E$ must be a stable matrix,

2. The matrix $K$ must satisfy

$$KM_{\alpha+1} = \begin{bmatrix} HB & 0 \end{bmatrix}, \tag{2.5}$$

$$EH = \begin{bmatrix} A_{21} & A_{22} \end{bmatrix}T - K\Theta_{\alpha+1}. \tag{2.6}$$

The solvability of condition (2.5) is given by the following theorem.

Theorem 2.2. There exists a matrix $K$ such that $KM_{\alpha+1} = \begin{bmatrix} HB & 0 \end{bmatrix}$ if and only if

$$\operatorname{rank} M_{\alpha+1} - \operatorname{rank} M_\alpha = m. \tag{2.7}$$

Proof: There exists a $K$ satisfying (2.5) if and only if the matrix $R \equiv \begin{bmatrix} HB & 0 \end{bmatrix}$ is in the space spanned by the rows of $M_{\alpha+1}$. This is equivalent to the condition $\operatorname{rank}\begin{bmatrix} M_{\alpha+1} \\ R \end{bmatrix} = \operatorname{rank} M_{\alpha+1}$. Using (1.4), we get

$$\operatorname{rank}\begin{bmatrix} M_{\alpha+1} \\ R \end{bmatrix} = \operatorname{rank}\begin{bmatrix} D & 0 \\ \Theta_\alpha B & M_\alpha \\ HB & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} D & 0 \\ P\Theta_\alpha B & 0 \\ \Theta_\alpha B & M_\alpha \\ HB & 0 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} D & 0 \\ TB & 0 \\ \Theta_\alpha B & M_\alpha \end{bmatrix},$$

where the second equality holds because the added rows $\begin{bmatrix} P\Theta_\alpha B & PM_\alpha \end{bmatrix} = \begin{bmatrix} P\Theta_\alpha B & 0 \end{bmatrix}$ are linear combinations of the rows $\begin{bmatrix} \Theta_\alpha B & M_\alpha \end{bmatrix}$, and the third follows by stacking $P\Theta_\alpha B$ and $HB$ into $TB$.

By our assumption that the matrix $\begin{bmatrix} B \\ D \end{bmatrix}$ has full column rank, we get

$$\operatorname{rank}\begin{bmatrix} M_{\alpha+1} \\ R \end{bmatrix} = m + \operatorname{rank} M_\alpha,$$

thereby completing the proof.

Note that (2.7) is the condition for inversion of the inputs with known initial state, as given in [7]. If we set $\alpha = 0$, condition (2.7) becomes

$$\operatorname{rank}\begin{bmatrix} D & 0 \\ CB & D \end{bmatrix} = m + \operatorname{rank} D,$$

which is the well-known necessary condition for unknown-input observers with zero delay [19]. This is a fairly strict condition, and demonstrates the utility of a delayed observer. When designing such an observer, one can start with $\alpha = 0$ and increase $\alpha$ until a value is found that satisfies (2.7). Using Theorem 1.2, we see that an upper bound on the observer delay is $n - q$ time-steps, where $q$ is the nullity of $D$. If (2.7) is not satisfied even for $\alpha = n - q$, it is not possible to estimate all the states in the system.

Remark 2.2. As mentioned in Chapter 1, condition (2.7) was obtained in [26] through a different method. The approach in that paper was to define a new output equation for the system, with $\Theta_\alpha$ and $M_\alpha$ taking the place of the $C$ and $D$ matrices, respectively. These matrices were then substituted into the necessary conditions for zero-delay observers, and reduced to produce Equation (2.7). While this approach is quite intuitive, it may result in unnecessarily large and redundant matrices when designing the observer parameters. Furthermore, the upper bound on the observer delay provided in [26] is $\alpha = n - 1$, which can be tightened by applying the result of Theorem 1.2.

We now turn our attention to condition (2.6). Right-multiplying by $T^{-1}$, we get the equivalent condition

$$E\begin{bmatrix} 0 & I_{n-t} \end{bmatrix} = \begin{bmatrix} A_{21} & A_{22} \end{bmatrix} - K\Theta_{\alpha+1}T^{-1}. \tag{2.8}$$

From the above equation it is apparent that there is an additional constraint on $K$; namely, $K$ times the first $t$ columns of $\Theta_{\alpha+1}T^{-1}$ must produce $A_{21}$. To satisfy this constraint, we define

$$T_y \equiv \begin{bmatrix} P \\ \Phi \end{bmatrix}, \quad J \equiv \begin{bmatrix} T_y & 0 \\ 0 & I_p \end{bmatrix}, \tag{2.9}$$

where the matrix $\Phi$ is chosen so that $T_y$ is square and invertible. Using (1.3) and (1.4), we get

$$\bar M \equiv JM_{\alpha+1} = \begin{bmatrix} 0 & 0 \\ \Phi M_\alpha & 0 \\ C\zeta_\alpha & D \end{bmatrix}, \quad \bar\Theta \equiv J\Theta_{\alpha+1}T^{-1} = \begin{bmatrix} I_t & 0 \\ L_1 & L_2 \\ L_3 & L_4 \end{bmatrix}, \tag{2.10}$$

where

$$\begin{bmatrix} L_1 & L_2 \\ L_3 & L_4 \end{bmatrix} \equiv \begin{bmatrix} \Phi\Theta_\alpha T^{-1} \\ CA^{\alpha+1}T^{-1} \end{bmatrix}.$$

Since $J$ is invertible, we can define a matrix $\bar K$ such that $K = \bar KJ$. Partitioning $\bar K$ as $\bar K = \begin{bmatrix} \bar K_1 & \bar K_2 \end{bmatrix}$, where $\bar K_1$ has $t$ columns, Equations (2.5) and (2.8) become

$$\bar K_2\begin{bmatrix} \Phi M_\alpha & 0 \\ C\zeta_\alpha & D \end{bmatrix} = \begin{bmatrix} HB & 0 \end{bmatrix}, \tag{2.11}$$

$$\begin{bmatrix} 0 & E \end{bmatrix} = \begin{bmatrix} A_{21} & A_{22} \end{bmatrix} - \begin{bmatrix} \bar K_1 & \bar K_2 \end{bmatrix}\begin{bmatrix} I_t & 0 \\ L_1 & L_2 \\ L_3 & L_4 \end{bmatrix}. \tag{2.12}$$

It is clear from the above equations that $\bar K_1$ must be chosen such that $\bar K_1 = A_{21} - \bar K_2\begin{bmatrix} L_1 \\ L_3 \end{bmatrix}$, and so the problem is reduced to finding the matrix $\bar K_2$ satisfying Equations (2.11) and (2.12). Recall that the first $m$ columns of $M_{\alpha+1}$ must be linearly independent of each other and of the remaining $(\alpha+1)m$ columns (by Theorem 2.2), and so the rank of

$$\begin{bmatrix} \Phi M_\alpha & 0 \\ C\zeta_\alpha & D \end{bmatrix} \tag{2.13}$$

is $m + \operatorname{rank} M_\alpha$. Let $N$ be a matrix whose rows form a basis for the left nullspace of the last $(\alpha+1)m$ columns of (2.13). In particular, we can assume without loss of generality that

$N$ satisfies

$$N\begin{bmatrix} \Phi M_\alpha & 0 \\ C\zeta_\alpha & D \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ I_m & 0 \end{bmatrix}. \tag{2.14}$$

From (2.11), we see that $\bar K_2$ must be of the form $\bar K_2 = \tilde KN$ for some $\tilde K = \begin{bmatrix} \tilde K_1 & \tilde K_2 \end{bmatrix}$, where $\tilde K_2$ has $m$ columns. Equation (2.11) then becomes

$$\begin{bmatrix} \tilde K_1 & \tilde K_2 \end{bmatrix}\begin{bmatrix} 0 \\ I_m \end{bmatrix} = HB, \tag{2.15}$$

from which it is obvious that $\tilde K_2 = HB$ and $\tilde K_1$ is a free matrix. Returning to Equation (2.12), we have

$$E = A_{22} - \bar K_2\begin{bmatrix} L_2 \\ L_4 \end{bmatrix} = A_{22} - \begin{bmatrix} \tilde K_1 & HB \end{bmatrix}N\begin{bmatrix} L_2 \\ L_4 \end{bmatrix}.$$

Defining

$$\begin{bmatrix} \nu_1 \\ \nu_2 \end{bmatrix} \equiv N\begin{bmatrix} L_2 \\ L_4 \end{bmatrix}, \tag{2.16}$$

where $\nu_2$ has $m$ rows, we come to the final equation

$$E = (A_{22} - HB\nu_2) - \tilde K_1\nu_1. \tag{2.17}$$

Recall that we require $E$ to be a stable matrix, and this is only possible if the pair $(A_{22} - HB\nu_2, \nu_1)$ is detectable. This detectability condition can be stated in terms of the original system matrices as follows.

Theorem 2.3. The rank condition

$$\operatorname{rank}\begin{bmatrix} zI - A_{22} + HB\nu_2 \\ \nu_1 \end{bmatrix} = n - t, \quad \forall z \in \mathbb{C}, |z| \ge 1,$$

is satisfied if and only if

$$\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \end{bmatrix} = n + m, \quad \forall z \in \mathbb{C}, |z| \ge 1.$$

To prove the theorem, we make use of the following lemma, which is obtained by a simple modification of a theorem from [26].

Lemma 2.1. The rank condition

$$\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \end{bmatrix} = n + m, \quad \forall z \in \mathbb{C}, |z| \ge 1,$$

is satisfied if and only if

$$\operatorname{rank}\begin{bmatrix} zI - A & -B & 0 \\ C & D & 0 \\ \Theta_\alpha A & \Theta_\alpha B & M_\alpha \end{bmatrix} = n + m + \operatorname{rank} M_\alpha, \quad \forall z \in \mathbb{C}, |z| \ge 1.$$

We are now in place to prove Theorem 2.3.

Proof: We start by noting from (2.14) that

$$\begin{bmatrix} I_t & 0 \\ 0 & N \end{bmatrix}J\begin{bmatrix} D & 0 \\ \Theta_\alpha B & M_\alpha \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ I_m & 0 \end{bmatrix}.$$

Let $\bar N$ be a matrix whose rows form a basis for the left nullspace of $M_\alpha$. We can then write

$$\begin{bmatrix} I_t & 0 \\ 0 & N \end{bmatrix}J = W\begin{bmatrix} I_p & 0 \\ 0 & \bar N \end{bmatrix} \tag{2.18}$$

for some invertible matrix $W$. Through a series of nonsingular transformations, we obtain

$$\operatorname{rank}\begin{bmatrix} zI - A & -B & 0 \\ C & D & 0 \\ \Theta_\alpha A & \Theta_\alpha B & M_\alpha \end{bmatrix} = \operatorname{rank}\begin{bmatrix} zI - A & -B & 0 \\ C & D & 0 \\ \bar N\Theta_\alpha A & \bar N\Theta_\alpha B & 0 \\ \Theta_\alpha A & \Theta_\alpha B & M_\alpha \end{bmatrix}$$

20 zi A B C D = rank N Θ α A N Θ α B zc D zθ α 1 A Θ α 1 B M α 1 zi A B C D = rank N Θ α A N Θ α B zd D zθ α 1 A Θ α 1 B M α 1 zi A B C D = rank N Θ α A N Θα B D zθ α 1 A zθ α 1 B Θ α 1 B M α 1 Continuing in the above manner, we get zi A B zi A B rank C D = rank C D N Θ α A N Θ α B, Θ α A Θ α B M α M α and the rank of the top left submatrix in the above expression is given by zi A B zi T AT 1 T B rank C D = rank CT 1 D N Θ α A N Θ α B N Θ α AT 1 N Θα B zi A 11 A 12 PΘ α B = rank A 21 zi A 22 HB CT 1 (:, 1) CT 1 (:, 2) D, N Θ α AT 1 (:, 1) N Θα AT 1 (:, 2) N Θα B where T 1 (:, 1) represents the first t columns of T 1, and T 1 (:, 2) represents the last n t columns By the definition of P, there exists a matrix V such that P = V N Using the fact 14

that $V\bar N\Theta_\alpha AT^{-1} = \begin{bmatrix} A_{11} & A_{12} \end{bmatrix}$, we get

$$\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \\ \bar N\Theta_\alpha A & \bar N\Theta_\alpha B \end{bmatrix} = \operatorname{rank}\begin{bmatrix} zI_t & 0 & 0 \\ -A_{21} & zI - A_{22} & -HB \\ CT^{-1}(:,1) & CT^{-1}(:,2) & D \\ \bar N\Theta_\alpha AT^{-1}(:,1) & \bar N\Theta_\alpha AT^{-1}(:,2) & \bar N\Theta_\alpha B \end{bmatrix}.$$

Using (2.10), (2.16), and (2.18), we left-multiply the last two block rows in the above matrix by $W$ to obtain

$$\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \\ \bar N\Theta_\alpha A & \bar N\Theta_\alpha B \end{bmatrix} = \operatorname{rank}\begin{bmatrix} zI_t & 0 & 0 \\ -A_{21} & zI - A_{22} & -HB \\ I_t & 0 & 0 \\ * & \nu_1 & 0 \\ * & \nu_2 & I_m \end{bmatrix} = t + \operatorname{rank}\begin{bmatrix} zI - A_{22} & -HB \\ \nu_1 & 0 \\ \nu_2 & I_m \end{bmatrix} = t + m + \operatorname{rank}\begin{bmatrix} zI - A_{22} + HB\nu_2 \\ \nu_1 \end{bmatrix},$$

where $*$ represents unimportant matrices. This gives

$$\operatorname{rank}\begin{bmatrix} zI - A & -B & 0 \\ C & D & 0 \\ \Theta_\alpha A & \Theta_\alpha B & M_\alpha \end{bmatrix} = t + m + \operatorname{rank} M_\alpha + \operatorname{rank}\begin{bmatrix} zI - A_{22} + HB\nu_2 \\ \nu_1 \end{bmatrix}.$$

Using Lemma 2.1, we get the desired result.

We can now state the following theorem, whose proof is immediately given by the discussion so far.

Theorem 2.4. The system $S$ in (1.1) has an observer with delay $\alpha$ if and only if

1. $\operatorname{rank} M_{\alpha+1} - \operatorname{rank} M_\alpha = m$,

2. $\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \end{bmatrix} = n + m, \quad \forall z \in \mathbb{C}, |z| \ge 1$.

Recall that the first condition in Theorem 2.4 means that the system is invertible with delay $\alpha + 1$. In fact, it has been shown in [11] that condition 2 is sufficient for the existence of a stable inverse for system $S$. This fact leads to the following theorem.

Theorem 2.5. The system $S$ in (1.1) has an observer (possibly with delay) if and only if

$$\operatorname{rank}\begin{bmatrix} zI - A & -B \\ C & D \end{bmatrix} = n + m, \quad \forall z \in \mathbb{C}, |z| \ge 1.$$

The result in the above theorem has also been noted in [33], which studied the problem of reconstructing the unknown inputs. The difference between Theorem 2.4 and Theorem 2.5 is that the latter does not provide a characterization of the delay in the observer. Note that the conditions in Theorem 2.4 (and the equivalent condition in Theorem 2.3) are a generalization of those given in [19, 25, 34] for the existence of zero-delay observers, and verify the conditions in [26].

Once the matrix $\tilde K_1$ is chosen to make $E$ stable, we can obtain the matrix $K$ in (2.5) and (2.6) by calculating

$$\bar K_2 = \begin{bmatrix} \tilde K_1 & HB \end{bmatrix}N, \quad \bar K_1 = A_{21} - \bar K_2\begin{bmatrix} L_1 \\ L_3 \end{bmatrix}, \quad K = \begin{bmatrix} \bar K_1 & \bar K_2 \end{bmatrix}\begin{bmatrix} T_y & 0 \\ 0 & I_p \end{bmatrix}. \tag{2.19}$$

We can then use (2.4) to map this $K$ matrix to the observer gains $F$ and $G$. Note that this mapping is not unique. In particular, one can choose $G_0 = G_1 = \cdots = G_{\alpha-1} = 0$, thereby getting

$$K = \begin{bmatrix} F_0 & F_1 & \cdots & F_\alpha - EG_\alpha & G_\alpha \end{bmatrix}.$$

This choice corresponds to using only the most delayed measurement in the output of the observer. Similarly, one can choose $F_1 = F_2 = \cdots = F_\alpha = 0$, which corresponds to using only the earliest measurement in the dynamic portion of the observer. Other combinations are also possible. Note that this freedom does not exist when designing a zero-delay observer. The final observer is given by Equation (2.3), and the estimate of the original system states is obtained as

$$\hat x_k = T^{-1}\begin{bmatrix} PY_{k:k+\alpha} \\ \psi_k \end{bmatrix}. \tag{2.20}$$
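The freedom in mapping $K$ to the gains $F$ and $G$ can be made concrete in code. The sketch below (hypothetical dimensions, not tied to the thesis example) implements the particular choice $G_0 = \cdots = G_{\alpha-1} = 0$ described above, and verifies it by applying the forward block mapping:

```python
import numpy as np

def gains_from_K(K, E, p, alpha):
    """Recover observer gains F, G from K using the choice
    G_0 = ... = G_{alpha-1} = 0, so only the most delayed measurement
    enters the output map of the observer."""
    blocks = [K[:, i*p:(i+1)*p] for i in range(alpha + 2)]
    G_alpha = blocks[-1]                     # last block of K is G_alpha
    F_blocks = blocks[:alpha] + [blocks[alpha] + E @ G_alpha]
    F = np.hstack(F_blocks)
    G = np.hstack([np.zeros_like(G_alpha)] * alpha + [G_alpha])
    return F, G

def K_from_gains(F, G, E, p, alpha):
    """Forward block mapping from (F, G) to K, for checking the inversion."""
    Fb = [F[:, i*p:(i+1)*p] for i in range(alpha + 1)]
    Gb = [G[:, i*p:(i+1)*p] for i in range(alpha + 1)]
    blocks = [Fb[0] - E @ Gb[0]]
    for i in range(1, alpha + 1):
        blocks.append(Fb[i] - E @ Gb[i] + Gb[i - 1])
    blocks.append(Gb[alpha])
    return np.hstack(blocks)

# Hypothetical sizes: observer dimension 2, p = 2 outputs, delay alpha = 1
rng = np.random.default_rng(0)
E = np.array([[0.1, 0.0], [0.2, 0.3]])
K = rng.standard_normal((2, (1 + 2) * 2))   # (alpha + 2) p = 6 columns
F, G = gains_from_K(K, E, p=2, alpha=1)
print(np.allclose(K_from_gains(F, G, E, p=2, alpha=1), K))  # True
```

Since the forward mapping is affine in the blocks of $F$ and $G$, fixing the first $\alpha$ blocks of $G$ to zero makes the inversion unique; the other choices mentioned in the text can be coded the same way.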

Remark 2.3. It is of interest to note that while we have pursued the development of a reduced-order observer, the above approach and conditions immediately apply to full-order observers as well. This is because a full-order observer can be viewed as a special case of a reduced-order observer, where the dynamic portion reconstructs the entire state. This can be accomplished by setting $P$ to be an empty matrix (i.e., by choosing $t = 0$, $T = H = I_n$, $T_y = I_{(\alpha+1)p}$).

2.3 Input Observer

We now turn our attention to reconstructing the unknown inputs. Recall from the previous section that the system had to be invertible in order for a state observer to exist. This means that once we have constructed a state observer, it is straightforward to obtain an estimate of the inputs. The procedure is a generalization of a technique presented in [26] to the case where $D \ne 0$. We start by rewriting (1.1) as

$$\begin{bmatrix} x_{k+1} - Ax_k \\ y_k - Cx_k \end{bmatrix} = \begin{bmatrix} B \\ D \end{bmatrix}u_k. \tag{2.21}$$

Since $\begin{bmatrix} B \\ D \end{bmatrix}$ is assumed to have full column rank, there exists a matrix $S$ such that

$$S\begin{bmatrix} B \\ D \end{bmatrix} = I_m. \tag{2.22}$$

Left-multiplying (2.21) by $S$ and replacing $x_k$ with the estimated value $\hat x_k$ from (2.20), we get the estimate of the input to be

$$\hat u_k = S\begin{bmatrix} \hat x_{k+1} - A\hat x_k \\ y_k - C\hat x_k \end{bmatrix}. \tag{2.23}$$

Since $\hat x_k \to x_k$ as $k \to \infty$, the above estimate will asymptotically approach the true value of the input. Since the estimate $\hat x_{k+1}$ requires access to the output $y_{k+\alpha+1}$ (from (2.3)), the above estimate of the input will actually be delayed by $\alpha + 1$ time-steps. This agrees with (2.7), which indicates that the minimum delay for inversion of the inputs is $\alpha + 1$ time-steps.
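A left inverse $S$ of the stacked matrix always exists under the full-column-rank assumption; one convenient choice is the Moore-Penrose pseudoinverse. The sketch below (hypothetical matrices, not the thesis example) reconstructs the input from consecutive state estimates and the current output, as described above:

```python
import numpy as np

# Hypothetical system matrices (not from the thesis); the stacked matrix
# [B; D] must have full column rank m for a left inverse S to exist.
A = np.array([[0.9, 1.0], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

BD = np.vstack([B, D])
assert np.linalg.matrix_rank(BD) == BD.shape[1]
S = np.linalg.pinv(BD)          # pseudoinverse gives S @ [B; D] = I_m

def input_estimate(S, A, C, xhat_k, xhat_k1, y_k):
    """Reconstruct u_k from consecutive state estimates and the output."""
    stacked = np.concatenate([xhat_k1 - A @ xhat_k, y_k - C @ xhat_k])
    return S @ stacked

# With exact states the reconstruction is exact (up to floating point):
x_k = np.array([1.0, -0.5])
u_k = np.array([0.7])
x_k1 = A @ x_k + B @ u_k
y_k = C @ x_k + D @ u_k
print(input_estimate(S, A, C, x_k, x_k1, y_k))  # recovers u_k
```

When the state estimates come from the delayed observer, the same computation yields an input estimate that is delayed by $\alpha + 1$ steps and converges to the true input as the state error decays.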

2.4 Design Procedure

We now summarize the design steps that can be used in designing a delayed observer for the system given in (1.1).

1. Find the smallest $\alpha$ such that $\operatorname{rank} M_{\alpha+1} - \operatorname{rank} M_\alpha = m$. If the condition is not satisfied for $\alpha = n - \operatorname{nullity}(D)$, it is not possible to reconstruct the entire system state.

2. Find the matrix $P$ such that $P\Theta_\alpha$ is full row rank and $PM_\alpha$ is zero. Use Theorem 2.1 for a reduced-order observer, or set $P$ to be the empty matrix for a full-order observer. Choose $H$ and form the matrix $T$ in (2.1) to obtain the transformed system given by (2.2). Also choose $\Phi$ and form the matrix $T_y$ given in (2.9).

3. Find the matrix $N$ satisfying (2.14).

4. Form the matrices $\bar\Theta$ and $\begin{bmatrix} \nu_1 \\ \nu_2 \end{bmatrix}$ from (2.10) and (2.16).

5. If the detectability condition in Theorem 2.3 is satisfied, choose the matrix $\tilde K_1$ such that the eigenvalues of $E = (A_{22} - HB\nu_2) - \tilde K_1\nu_1$ are stable.

6. Calculate $K$ by using Equation (2.19).

7. Use (2.4) to map this $K$ matrix to $F$ and $G$.

8. Find the matrix $S$ satisfying (2.22).

9. The final observer is given by Equation (2.3). An estimate of the original system states is obtained from (2.20), and an estimate of the unknown inputs is given by (2.23).

2.5 Example

Consider the system given by the matrices

1 1 A = 1 1, B = 1 1, C = 1 1 1, D = 1

It is found that condition (2.7) holds for $\alpha = 1$, so our observer must have a minimum delay of one time-step. Using Theorem 2.1, we find $t = 2$ and choose

1 1 P =, 1 1 H = 1 1, 1 Φ = 1

Performing the similarity transformation, we get

A 21 A 22 =

The matrices $\bar M$ and $\bar\Theta$ from (2.10) are found to be

1 M =, Θ =

In this example, the last $(\alpha + 1)m = 4$ columns of $\bar M$ have a rank of two, and thus the matrix $N$ in (2.14) will only have two rows:

1 1 N = 1

Equation (2.15) becomes $\tilde K I_2 = HB$, and since $\tilde K_2$ has $m = 2$ columns, $\tilde K_1$ is the empty matrix. This implies that we will have no freedom in choosing the eigenvalues of our observer. Next, we use Equation (2.16) to obtain

ν 1 ν 2 = 1

Again, since $\nu_2$ has $m = 2$ rows, $\nu_1$ is the empty matrix. We now check the detectability of our system by computing $E = A_{22} - HB\nu_2 = 0$, which implies that we are able to design a stable observer. Using (2.19), we get

K 2 = 1 2 2, K 1 = 1, K =

Finally, we obtain the $F$ and $G$ matrices by choosing $G_0 = 0$. Since $E = 0$, we have

F = F F 1 = 1 1 1, G = G G 1 = 2 2

The final observer is given by

$$z_{k+1} = FY_{k:k+1}, \quad \psi_k = z_k + GY_{k:k+1},$$

and an estimate of the original system states can be obtained via (2.20). To test this observer, the system is simulated with an initial non-zero state, and driven by a sinusoidal input. The observer is initialized with an initial state of zero, and as seen from the plots in Figure 2.1, catches up with the system state to produce a perfect estimate that is delayed by one time-step. Note that the observer starts operation at $k = 1$, to account for the one-step delay. To estimate the inputs, we find the matrix $S$ satisfying (2.22) to be

1 S = 1

Using (2.23), we get the results shown in Figure 2.2. Note that the estimate is delayed by $\alpha + 1 = 2$ time-steps, to account for the fact that we need $y_{k+2}$ in order to reconstruct $u_k$.

Figure 2.1: Simulation of state observer (system states and estimated one-step-delayed states versus time-step).

2.6 Summary

We have provided a characterization of unknown input observers with delays, and have developed a streamlined design procedure to obtain the observer parameters. Our approach is quite general in that it treats both reduced and full-order observers by selecting the design matrices appropriately. Once the state observer is constructed, it is straightforward to obtain an estimate of the unknown inputs.

Figure 2.2: Simulation of input observer (actual input and estimated two-step-delayed input versus time-step).

CHAPTER 3

PARTIAL STATE AND INPUT OBSERVERS

3.1 Introduction

We have already seen from the discussions in Chapters 1 and 2 that some strict conditions must be satisfied in order to estimate the entire system state. In particular, the requirement that the system be invertible (which might require the use of delayed measurements) means it will not be possible to build a full state observer for a large class of systems. Even if the system is invertible, the delay required to do so may be intolerably high. In this chapter, we present a method to determine the set of all linear functions of the state and input that can be reconstructed through a linear observer with a given delay.

The problem of reconstructing a particular function of the inputs and states was studied in [3] through a geometric approach. Delayed observers were handled in that paper by constructing a higher-dimensional system which incorporated the delayed states and inputs into the new state vector. However, this approach might cause the dimension of the observer to be much larger than the dimension of the system. In [34] and [24], the authors examined partial state observers for continuous-time systems, but did not make use of delays (differentiators in continuous-time), and did not characterize the set of observable inputs. The problem of partial state observation was also studied in [35] for the specific case where the measurements are free of unknown inputs. That paper also did not allow the use of delays, and these two facts simplified the analysis considerably. The problem of partial input reconstruction was studied in [36] under the assumption that the initial system state is known. Consequently, the input observers constructed in that paper may be unstable.

In comparison to the works considered above, the main contributions of this chapter are (i) an algebraic procedure to design delayed linear observers with dimension no greater than that of the system, and (ii) the characterization of all possible linear functions of the state and input that can be reconstructed with a given delay. Our approach directly produces the observer parameters in addition to the set of observable states and inputs. Our design does not assume knowledge of the initial system state, but can easily incorporate that information to increase the number of functionals that are reproduced.

3.2 Partial State Observer

We start by constructing a partial state observer for (1.1). This state observer will then be used in the next section to construct an input observer. We wish to determine the set of linear functions of the state vector x_k that can be reproduced through a linear observer of the form

    z_{k+1} = E z_k + F Y_{k:k+α},
    ψ_k = z_k + G Y_{k:k+α},                                                  (3.1)

where the nonnegative integer α is the observer delay, and the matrices E, F and G are chosen such that ψ_k → T x_k as k → ∞ for some matrix T. The delay α is assumed to be a design parameter. The objective will be to find the matrix T of largest rank for which a stable observer can be constructed. We will show later in this section that the rows of T will then form a basis for all possible linear functions of the state.

Using (1.2), the observer error is given by

    e_{k+1} ≜ ψ_{k+1} − T x_{k+1}
            = E z_k + F Y_{k:k+α} + G Y_{k+1:k+1+α} − T A x_k − T B u_k
            = E e_k + (F − EG) Θ_α x_k + G Θ_α A x_k + (ET − TA) x_k
              + (F − EG) M_α U_{k:k+α} + G Θ_α B u_k + G M_α U_{k+1:k+α+1} − T B u_k.

As in Chapter 2, we partition the matrices F and G as F = [F_0  F_1  ⋯  F_α], G = [G_0  G_1  ⋯  G_α], where F_i and G_i have p columns, and define

    K ≜ [F_0 − EG_0   F_1 − EG_1 + G_0   ⋯   F_α − EG_α + G_{α−1}   G_α].      (3.2)

Using identities (1.3) and (1.4), the error can then be expressed as

    e_{k+1} = E e_k + (ET − TA + K Θ_{α+1}) x_k + (K M_{α+1} − [T B   0]) U_{k:k+α+1}.      (3.3)
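For concreteness, the stacked quantities appearing above can be checked numerically. The sketch below assumes the standard definitions from Chapters 1 and 2, namely Θ_α = [C; CA; …; CA^α] and the block-Toeplitz matrix M_α with D on the block diagonal and C A^{i−j−1} B below it, and uses an arbitrary random system (not any model from this thesis) to verify the identity Y_{k:k+α} = Θ_α x_k + M_α U_{k:k+α} used throughout the derivation:

```python
import numpy as np

def theta(A, C, alpha):
    """Stacked matrix [C; CA; ...; C A^alpha]."""
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(alpha + 1)])

def invertibility_matrix(A, B, C, D, alpha):
    """Block-Toeplitz matrix M_alpha mapping U_{k:k+alpha} to Y_{k:k+alpha}."""
    p, m = D.shape
    M = np.zeros(((alpha + 1) * p, (alpha + 1) * m))
    for i in range(alpha + 1):          # block row
        for j in range(i + 1):          # block column (lower block triangular)
            blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
            M[i*p:(i+1)*p, j*m:(j+1)*m] = blk
    return M

rng = np.random.default_rng(0)
n, m, p, alpha = 4, 2, 3, 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((p, n)), rng.standard_normal((p, m))

# Simulate the system and stack the outputs y_k, ..., y_{k+alpha}.
x = rng.standard_normal(n)
U = rng.standard_normal((alpha + 1, m))
Y, xk = [], x.copy()
for k in range(alpha + 1):
    Y.append(C @ xk + D @ U[k])
    xk = A @ xk + B @ U[k]
Y = np.concatenate(Y)

# Check Y_{k:k+alpha} = Theta_alpha x_k + M_alpha U_{k:k+alpha}.
lhs = theta(A, C, alpha) @ x + invertibility_matrix(A, B, C, D, alpha) @ U.ravel()
assert np.allclose(Y, lhs)
```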

In order to force the error to go to zero regardless of the values of x_k and the inputs, the following conditions must hold:

1. E must be a stable matrix,
2. The matrix K must satisfy

    K M_{α+1} = [T B   0],                                                    (3.4)
    ET − TA + K Θ_{α+1} = 0.                                                  (3.5)

We start with condition (3.4). Using (1.4), we can write this condition as

    K [D  0; Θ_α B  M_α] = [T B   0].                                         (3.6)

Let N be a basis for the left nullspace of M_α (i.e., N M_α = 0). We can see from the above expression that the matrix K must be of the form

    K = K̄ [I_p  0; 0  N]                                                     (3.7)

for some K̄. Equation (3.6) then becomes

    K̄ [D; N Θ_α B] = T B,                                                    (3.8)

where the rank of [D; N Θ_α B] is given by

    r = rank M_{α+1} − rank M_α.

We note that r is a monotonically nondecreasing function of α [7], and this shows that a larger value of α may allow greater freedom in choosing the matrix T. Returning to (3.8), there exists a pair of nonsingular matrices U and V such that

    U [D; N Θ_α B] V = [0  0; I_r  0].                                        (3.9)

Define

    K̄ = K̂ U                                                                  (3.10)
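All of the nullspace and rank computations in this construction can be carried out with the singular value decomposition. The following sketch (the matrix here is a random placeholder, not one of the matrices above) computes a left-nullspace basis N with N M = 0; the same SVD-based rank computation, applied to M_{α+1} and M_α via np.linalg.matrix_rank, yields the integer r above:

```python
import numpy as np

def left_nullspace(M, tol=1e-10):
    """Rows form a basis for the left nullspace of M (so that N @ M = 0)."""
    U, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    # Left singular vectors associated with zero singular values span the
    # left nullspace of M.
    return U[:, rank:].T

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))  # 6 x 5, rank 3
N = left_nullspace(M)
assert N.shape == (3, 6)          # 6 rows minus rank 3 leaves a 3-dimensional basis
assert np.allclose(N @ M, 0)
```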

for some K̂ = [K̂_1   K̂_2], where K̂_2 has r columns. Right-multiplying Equation (3.8) by V, we get

    [K̂_1   K̂_2] [0  0; I_r  0] = T B V ≜ T [Γ_{1,1}   Γ_{1,2}],              (3.11)

where Γ_{1,1} is the first r columns of BV, and Γ_{1,2} is the last m − r columns. From the above expression, we see that T must satisfy T Γ_{1,2} = 0. Let N_1 be a basis for the left nullspace of Γ_{1,2}. This implies that T must be of the form

    T = T_1 N_1                                                               (3.12)

for some T_1, and thus we have to maximize the rank of T_1 in order to maximize the rank of T. If the rank of N_1 is zero, it is not possible to reconstruct any of the states. Equation (3.11) also shows that K̂_2 = T Γ_{1,1} = T_1 N_1 Γ_{1,1}, and K̂_1 is a free matrix.

Returning to condition (3.5), we use (3.7), (3.10), and (3.12) to obtain

    E T_1 N_1 − T_1 N_1 A + [K̂_1   T_1 N_1 Γ_{1,1}] U [I_p  0; 0  N] Θ_{α+1} = 0.      (3.13)

Defining

    [ν_1; ν_2] ≜ U [I_p  0; 0  N] Θ_{α+1},

where ν_2 has r rows, Equation (3.13) becomes

    E T_1 N_1 + [K̂_1   T_1] [ν_1; −N_1 (A − Γ_{1,1} ν_2)] = 0.               (3.14)

Since N_1 has full row rank, there exists a nonsingular matrix V_1 such that

    N_1 V_1 = [I_{r_1}   0],                                                  (3.15)

where r_1 is the rank of N_1. Right-multiplying (3.14) by V_1, we get

    [E T_1   0] + [K̂_1   T_1] [Γ_{2,1}   Γ_{2,2}] = 0,                       (3.16)

where

    [Γ_{2,1}   Γ_{2,2}] ≜ [ν_1; −N_1 (A − Γ_{1,1} ν_2)] V_1

and Γ_{2,1} has r_1 columns. Let N_2 = [N_{2,1}   N_{2,2}] denote a basis for the left nullspace of Γ_{2,2}, where N_{2,2} has r_1 columns. From (3.16), we see that

    [K̂_1   T_1] = T_2 N_2                                                    (3.17)

for some matrix T_2, and thus we have

    T_1 = T_2 N_{2,2}.                                                        (3.18)

Substituting the above expressions into (3.16), we get

    E T_2 N_{2,2} + T_2 N_2 Γ_{2,1} = 0.

We can now use the following algorithm to find values for T_2, T_1, and T. Note that this algorithm starts with i = 2, in order to maintain consistency with the notation used so far.

1. The equation at the ith iteration is

       E T_i N_{i,2} + T_i N_i Γ_{i,1} = 0.                                   (3.19)

2. Terminal conditions:
   (a) If the rank of N_{i,2} is zero, it is not possible to reconstruct any linear functions of the states. Stop iterating.
   (b) If N_{i,2} is full column rank, stop iterating.

3. If none of the terminal conditions are satisfied, let r_i denote the rank of N_{i,2}. Find nonsingular matrices U_i and V_i such that

       U_i N_{i,2} V_i = [0; I_{r_i}].

4. Write T_i = T̄_i U_i for some T̄_i, and right-multiply (3.19) by V_i to obtain

       E T̄_i [0; I_{r_i}] + T̄_i U_i N_i Γ_{i,1} V_i = 0.

5. Define

       [Γ_{i+1,1}   Γ_{i+1,2}] ≜ U_i N_i Γ_{i,1} V_i,

   where Γ_{i+1,1} has r_i columns.

6. Let N_{i+1} be a basis for the left nullspace of Γ_{i+1,2}, and let N_{i+1,2} be the last r_i columns of N_{i+1}.

7. Set T̄_i = T_{i+1} N_{i+1} and proceed to iteration i + 1.

At this point, either the rank of N_{i,2} is zero, or N_{i,2} is full column rank. Note that one of these two conditions is guaranteed to occur after a sufficient number of iterations, since the rank of N_i will either decrease at each iteration or stay the same. The latter will occur only if N_i is a square nonsingular matrix, in which case N_{i,2} will have full column rank. To see why no state observer exists if the rank of N_{i,2} is r_i = 0, assume that the above iteration is performed one more time. In step 6 of the procedure, we see that N_{i+1,2} will be the empty matrix. Since this has rank zero, we can take it to have full column rank. The following analysis will then reveal that the matrix T will also have zero rank, implying that no state functionals can be obtained.

If N_{i,2} is full column rank, there exists a matrix U_i such that

    U_i N_{i,2} = [0; I_{r_i}].                                               (3.20)

Define T̄_i via T_i = T̄_i U_i, and partition the matrix T̄_i as

    T̄_i = [L   H],                                                           (3.21)

where H has r_i columns. The matrix T_1 from (3.18) is then given by

    T_1 = T_2 N_{2,2} = T̄_2 U_2 N_{2,2} = T_3 N_3 U_2 N_{2,2} = ⋯
        = T̄_i U_i N_i U_{i−1} N_{i−1} ⋯ U_3 N_3 U_2 N_{2,2}.                 (3.22)

Recall that we have to maximize the rank of T_1 in order to maximize the rank of T. Since the matrices V_i are nonsingular, we can repeatedly multiply the above expression on the
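Step 3 of the algorithm (and, analogously, Equation (3.9)) requires a nonsingular pair that brings a matrix into a rank-normal form with an identity block in its last rows. One way to construct such a pair, sketched here via the SVD on a random placeholder matrix (the construction is not unique), is:

```python
import numpy as np

def rank_normal_form(Mmat, tol=1e-10):
    """Return nonsingular U, V and r with U @ Mmat @ V = [[0, 0], [I_r, 0]],
    i.e., the identity block occupies the last r rows and first r columns."""
    W, s, Zt = np.linalg.svd(Mmat)
    r = int(np.sum(s > tol))
    d = np.ones(Mmat.shape[0])
    d[:r] = 1.0 / s[:r]
    U_top = np.diag(d) @ W.T                 # U_top @ Mmat @ Zt.T = [[I_r, 0], [0, 0]]
    U = np.vstack([U_top[r:], U_top[:r]])    # permute identity rows to the bottom
    return U, Zt.T, r

rng = np.random.default_rng(2)
N2 = rng.standard_normal((5, 4)) @ rng.standard_normal((4, 3))  # 5 x 3, rank 3
U, V, r = rank_normal_form(N2)
canonical = np.zeros((5, 3))
canonical[-r:, :r] = np.eye(r)
assert np.allclose(U @ N2 @ V, canonical)
```

Since U is a row permutation of a scaled orthogonal matrix and V is orthogonal, both are nonsingular, as the algorithm requires.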

right by V_2, V_3, …, V_{i−1} to get

    rank T_1 = rank (T_1 V_2) = rank (T̄_i U_i N_i U_{i−1} N_{i−1} ⋯ U_3 N_{3,2} V_3)
             = rank (T̄_i U_i N_i U_{i−1} N_{i−1} ⋯ U_4 N_{4,2} V_4) = ⋯
             = rank (T̄_i U_i N_{i,2}) = rank ([L   H] [0; I_{r_i}]) = rank H.

Since N_1 is full row rank in (3.12), the rank of T will be the same as the rank of H, which is upper bounded by r_i. This shows that if r_i = 0, no state functionals can be produced.

We now continue with the construction of the observer. Substituting (3.21) into (3.19), we get

    E H + [L   H] U_i N_i Γ_{i,1} = 0.                                        (3.23)

Denoting

    [𝒞; 𝒜] ≜ −U_i N_i Γ_{i,1},                                               (3.24)

where 𝒜 has r_i rows, Equation (3.23) becomes

    E H − H 𝒜 − L 𝒞 = 0.                                                     (3.25)

The above equation is in the form of the Sylvester observer equation, which is frequently encountered in functional observer design [37], [38]. To solve this equation, we first note that there is a nonsingular matrix P which transforms the pair (𝒜, 𝒞) into the form

    Ā ≜ P 𝒜 P^{−1} = [A_o  0  0; A_21  A_{ō,s}  0; A_31  A_32  A_{ō,s̄}],
    C̄ ≜ 𝒞 P^{−1} = [C_o  0  0],                                              (3.26)

where the pair (A_o, C_o) is observable, all modes of A_{ō,s} are unobservable and stable, and all
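As a numerical illustration of the Sylvester structure of (3.25): once E and L are fixed and E shares no eigenvalues with 𝒜, the equation E H − H 𝒜 = L 𝒞 has a unique solution H (this is the eigenvalue-disjointness condition of Lemma 3.1 below). The sketch uses arbitrary placeholder matrices and SciPy's solve_sylvester, which solves a X + X b = q; note that the design in this section instead fixes H and solves for E via the canonical decomposition, so this is only an illustration of the equation class:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
q, n, p = 3, 4, 2
E = np.diag([-1.0, -2.0, -3.0])          # a stable E with known eigenvalues
Acal = rng.standard_normal((n, n))        # placeholder for the script-A matrix
Ccal = rng.standard_normal((p, n))        # placeholder for the script-C matrix
L = rng.standard_normal((q, p))

# Solve E H - H Acal = L Ccal for H; solve_sylvester(a, b, q) solves a X + X b = q,
# so pass b = -Acal.  A random Acal almost surely shares no eigenvalues with E,
# so the solution is unique.
H = solve_sylvester(E, -Acal, L @ Ccal)
assert np.allclose(E @ H - H @ Acal - L @ Ccal, 0)
```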

modes of A_{ō,s̄} are unobservable and unstable. The dimensions of A_o, A_{ō,s}, and A_{ō,s̄} are taken to be n_o × n_o, n_{ō,s} × n_{ō,s}, and n_{ō,s̄} × n_{ō,s̄}, respectively. Note that the above form is simply a slightly modified version of the Kalman observable canonical form [29]. Right-multiplying (3.25) by P^{−1} and setting H̄ = H P^{−1}, we get

    E H̄ − H̄ Ā − L C̄ = 0.                                                    (3.27)

Let L = H̄ L_1 + L_2 for some L_1 and L_2. Partitioning H̄ and L_1 as

    H̄ = [H̄_1   H̄_2],      L_1 = [L_11; L_12; L_13],

Equation (3.27) becomes

    [E H̄_1   E H̄_2] − [H̄_1   H̄_2] [A_o + L_11 C_o  0  0; A_21 + L_12 C_o  A_{ō,s}  0; A_31 + L_13 C_o  A_32  A_{ō,s̄}] − L_2 [C_o  0  0] = 0.      (3.28)

Since (A_o, C_o) is observable, the matrix L_11 can be chosen to place the eigenvalues of A_o + L_11 C_o at arbitrary (stable) locations. Denoting

    A_s ≜ [A_o + L_11 C_o  0; A_21 + L_12 C_o  A_{ō,s}],
    Λ ≜ [A_31 + L_13 C_o   A_32],                                             (3.29)

where the dimension of A_s is n_s × n_s, we get

    [E H̄_1   E H̄_2] − [H̄_1 A_s + H̄_2 Λ   H̄_2 A_{ō,s̄}] − [L_2 [C_o  0]   0] = 0.      (3.30)

To analyze the above equation, we will make use of the following lemma [39].

Lemma 3.1 Suppose A and B are square matrices. The matrix equation AX − XB = 0 has the unique solution X = 0 if and only if A and B have no common eigenvalues.
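The gain L_11 that places the eigenvalues of A_o + L_11 C_o can be computed by duality with state-feedback pole placement: the eigenvalues of A_o + L_11 C_o equal those of A_o' + C_o' L_11'. A minimal sketch with a placeholder observable pair (not tied to any system in this chapter) follows:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative observable pair (A_o, C_o); the values are placeholders.
A_o = np.array([[0.0, 1.0],
                [-2.0, 3.0]])
C_o = np.array([[1.0, 0.0]])

desired = [-1.0, -11.0]
# place_poles(A, B, poles) returns K with eig(A - B K) = poles.  Applying it to
# the dual pair (A_o', C_o') and transposing gives L11 with
# eig(A_o + L11 C_o) = poles.
K = place_poles(A_o.T, C_o.T, desired).gain_matrix
L11 = -K.T
assert np.allclose(sorted(np.linalg.eigvals(A_o + L11 @ C_o).real), sorted(desired))
```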

Examining (3.30), we see that E H̄_2 − H̄_2 A_{ō,s̄} = 0. For E to be stable, it must not share any eigenvalues with the matrix A_{ō,s̄}. By Lemma 3.1, we see that H̄_2 must be the zero matrix. Substituting this fact into (3.30), we get

    E H̄_1 − H̄_1 A_s − L_2 [C_o   0] = 0.

To maximize the rank of H̄, we choose H̄_1 to be I_{n_s} and L_2 = 0, which gives us E = A_s. From (3.21), we have

    T̄_i = [L   H] = [H̄ L_1   H̄ P] = [I_{n_s}   0] [L_1   P],

and we can use (3.22), (3.17), and (3.12) to obtain

    T = T̄_i U_i N_i U_{i−1} N_{i−1} ⋯ U_3 N_3 U_2 N_{2,2} N_1,               (3.31)
    K̂_1 = T̄_i U_i N_i U_{i−1} N_{i−1} ⋯ U_3 N_3 U_2 N_{2,1}.                 (3.32)

From (3.7) and (3.10), we have

    K = [K̂_1   T Γ_{1,1}] U [I_p  0; 0  N],

and we can use (3.2) to map this K matrix to F and G. As discussed in Chapter 2, this mapping is not unique. The final state observer is given by (3.1).

Remark 3.1 Many of the early papers on input reconstruction (invertibility) assumed the initial state of the system to be known [7], [8], [10], [36]. This assumption implies that the initial state observer error in (3.3) will be zero, and consequently, it may not be necessary for the matrix E to be stable. In this case, the matrix H in (3.25) can simply be taken to be the identity matrix, and this might increase the number of state and input functionals that can be observed.

We now revisit our claim that the rows of T will form a basis for all possible linear functionals of the state vector. We first note that our algorithm produces a matrix T of maximum rank satisfying Equations (3.4)–(3.5), with corresponding matrices E and K.

Suppose that there exists another set of matrices T̃, Ẽ, and K̃ satisfying the same conditions, but such that the row space of T̃ is not completely contained within the row space of T. This would imply that the matrices

    T' = [T; T̃],      E' = [E  0; 0  Ẽ],      K' = [K; K̃]

also satisfy Equations (3.4)–(3.5), and the rank of T' would be greater than that of T. This is not possible, since the rank of T is maximal. Thus, the row space of T contains all possible linear functionals of the state that can be obtained through a linear observer.

3.3 Partial Input Observer

We now turn our attention to determining the largest subset of the unknown inputs that can be reconstructed through a linear observer. Specifically, we seek an observer of the form

    η_k = R ψ_k + S Y_{k:k+α},                                                (3.33)

where η_k → T_u u_k as k → ∞ for some matrix T_u, and ψ_k is the output of the state observer in (3.1). We use the output of the state observer in the above equation because it represents the largest amount of information that is available about the state of the system. The rank of T_u should be maximized in order to obtain the largest amount of information on the input u_k. As in the previous section, the rows of T_u will form a basis for all possible linear functions of the input.

To find the observer parameters and the matrix T_u, we left-multiply (1.2) by S to obtain

    S Y_{k:k+α} = S Θ_α x_k + S [D  0; Θ_{α−1} B  M_{α−1}] U_{k:k+α}.

In order to obtain T_u u_k from the above equation, the matrix S must satisfy

    S [D  0; Θ_{α−1} B  M_{α−1}] = [T_u   0].                                 (3.34)

Following the same procedure as in the partial state observer design, we note that S must be of the form

    S = S̄ [I_p  0; 0  N̄]                                                     (3.35)

for some S̄, where N̄ is a basis for the left nullspace of M_{α−1}. In the equation

    S̄ [D; N̄ Θ_{α−1} B] = T_u,                                                (3.36)

let r̄ be the rank of [D; N̄ Θ_{α−1} B]. Then there exists a pair of nonsingular matrices Ū and V̄ such that

    Ū [D; N̄ Θ_{α−1} B] V̄ = [0  0; I_{r̄}  0].                                (3.37)

Define

    S̄ = Ŝ Ū                                                                  (3.38)

for some Ŝ = [Ŝ_1   Ŝ_2], where Ŝ_2 has r̄ columns. Right-multiplying Equation (3.36) by V̄, we get

    [Ŝ_1   Ŝ_2] [0  0; I_{r̄}  0] = T_u V̄ ≜ T_u [V̄_1   V̄_2],                (3.39)

where V̄_1 is the first r̄ columns of V̄, and V̄_2 is the last m − r̄ columns. From the above expression, we see that T_u must satisfy T_u V̄_2 = 0. Let N_v be a basis for the left nullspace of V̄_2. This implies that T_u must be of the form

    T_u = T̄_u N_v                                                            (3.40)

for some T̄_u, and thus the problem is reduced to maximizing the rank of T̄_u. If the rank of N_v is zero, it is not possible to reconstruct any of the inputs. Equation (3.39) also shows that Ŝ_2 = T_u V̄_1 = T̄_u N_v V̄_1, and Ŝ_1 is a free matrix. Defining

    [Ŷ_1; Ŷ_2] ≜ Ū [I_p  0; 0  N̄] Y_{k:k+α},
    [ν̂_1; ν̂_2] ≜ Ū [I_p  0; 0  N̄] Θ_α,

where Ŷ_2 and ν̂_2 have r̄ rows, Equation (3.34) becomes

    Ŝ_1 Ŷ_1 + T̄_u N_v V̄_1 Ŷ_2 = (Ŝ_1 ν̂_1 + T̄_u N_v V̄_1 ν̂_2) x_k + T_u u_k.      (3.41)

The above equation shows that we need access to the state functional

    (Ŝ_1 ν̂_1 + T̄_u N_v V̄_1 ν̂_2) x_k                                         (3.42)

in order to reconstruct T_u u_k. From the previous section, we recall that the output of the observer in (3.1) will be T x_k, and that the rows of T form a basis for all possible linear functions of the state. Thus we require that the functional in (3.42) be a linear combination of the rows in T. Mathematically, this means that there should exist a matrix Q such that

    Ŝ_1 ν̂_1 + T̄_u N_v V̄_1 ν̂_2 = Q T.

Rearranging, we have

    [Ŝ_1   T̄_u   Q] [ν̂_1; N_v V̄_1 ν̂_2; −T] = 0,                            (3.43)

where we denote the stacked matrix [ν̂_1; N_v V̄_1 ν̂_2; −T] by Π. This implies that

    [Ŝ_1   T̄_u   Q] = T_Π N_Π,

where N_Π is a basis for the left nullspace of the matrix Π defined in (3.43), and T_Π is chosen to maximize the rank of T̄_u. In particular, we can assume without loss of generality that N_Π is of the form

    N_Π = [Ŝ_11  0  Q_1; Ŝ_12  T̄_{u,1}  0; Ŝ_13  T̄_{u,2}  Q_2],             (3.44)

where the matrices [T̄_{u,1}; T̄_{u,2}] and Q_2 are full row rank. This form can always be obtained by left-multiplying N_Π by an appropriate nonsingular matrix. From the above expressions, we see that T_Π should be of the form T_Π = [0   I], selecting the last two block rows of N_Π, in order to maximize the rank of T̄_u. This results in

    T̄_u = [T̄_{u,1}; T̄_{u,2}],

and allows us to use (3.40) to obtain

    T_u = [T̄_{u,1}; T̄_{u,2}] N_v.                                            (3.45)

Substituting this into (3.41), we get

    [Ŝ_12   T̄_{u,1} N_v V̄_1; Ŝ_13   T̄_{u,2} N_v V̄_1] [Ŷ_1; Ŷ_2] = [0; Q_2] T x_k + [T̄_{u,1}; T̄_{u,2}] N_v u_k.      (3.46)

The above expression shows that the functional T̄_{u,1} N_v u_k can be obtained directly from the output without relying on the value of the state, and the functional T̄_{u,2} N_v u_k depends on the state functional Q_2 T x_k. As an aside, note that (3.44) also characterizes the set of state functionals which are directly available from the output. Specifically, the state functional Q_1 T x_k can be obtained by setting Ŝ_1 = Ŝ_11 in (3.41).

Comparing (3.46) to (3.33), we see that R should be chosen as

    R = −[0; Q_2],                                                            (3.47)

and S can be obtained from (3.35) and (3.38) as

    S = [Ŝ_12   T̄_{u,1} N_v V̄_1; Ŝ_13   T̄_{u,2} N_v V̄_1] Ū [I_p  0; 0  N̄].      (3.48)

The combined state and input observer can be obtained from (3.1) and (3.33) as

    z_{k+1} = E z_k + F Y_{k:k+α},
    ψ_k = z_k + G Y_{k:k+α},
    η_k = R ψ_k + S Y_{k:k+α},                                                (3.49)

where ψ_k → T x_k and η_k → T_u u_k as k → ∞.

3.4 Example

Consider a linearized model of the dynamics of a VTOL aircraft from [2], with dynamics of the form

    ẋ(t) = A x(t) + B u(t),
    y(t) = C x(t) + D u(t),                                                   (3.50)

where the unknown input vector is u(t) = [a_1(t)  s_1(t)  s_2(t)  p_1(t)]', a_1(t) represents an actuator fault, s_1(t) and s_2(t) represent sensor faults, and p_1(t) represents a disturbance due to parameter uncertainties. We will assume that no prior information is available about the characteristics of these signals, and they will be modeled as unknown inputs. For this example, we will assume that the first derivative of the outputs is available. We wish to determine the largest subset of the states that can be obtained through an observer of the form

    ż(t) = E z(t) + F [y(t); ẏ(t)],
    ψ(t) = z(t) + G [y(t); ẏ(t)].

This is equivalent to α = 1 for the discrete-time observer in (3.1). From the discussion in Section 3.2, we see that we will require the following matrices to derive the observer parameters:

    M_1 = [D  0; CB  D],      Θ_1 = [C; CA],
    M_2 = [D  0; Θ_1 B  M_1],      Θ_2 = [C; Θ_1 A].

Computing a basis N for the left nullspace of M_1, we find that the matrix [D; N Θ_1 B] has a rank of r = 4. The matrices U and V from (3.9) are then found; in this case, V turns out to be the identity. From (3.11), we obtain the matrix Γ_{1,1}, while Γ_{1,2} is the empty matrix of dimension 4 × 0. A basis for the left nullspace of Γ_{1,2} is taken to be N_1 = I_4. This means that the matrix V_1 = I_4 in (3.15), and Γ_{2,1} follows from (3.16). Once again, Γ_{2,2} is the empty matrix of dimension 6 × 0. A basis for the left nullspace of Γ_{2,2} is taken to be N_2 = I_6, which means that the matrix N_{2,2} in (3.18) is N_{2,2} = [0; I_4], the last four columns of I_6. Since this is full column rank, one of the terminal conditions of the iteration is satisfied. The matrix

U_2 in (3.20) can be taken to be the identity matrix. From (3.24), we obtain the pair (𝒜, 𝒞), and the transformation matrix P in (3.26) then yields the decomposed pair (Ā, C̄). This system has one unobservable, unstable mode at 0.4193 and one unobservable, stable mode at −0.1833. We choose to place the eigenvalues of the observable subsystem at {−1, −11}, and this determines the required gain L_11. The matrices L_12 and L_13 will be taken to be the zero matrices. Following the discussion in Section 3.2, we choose H̄_1 = I_3, and this gives

the matrix E = A_s. From (3.31) and (3.32), we get

    T = [I_3   0] [L_1   P] U_2 N_{2,2} N_1,      K̂_1 = [I_3   0] [L_1   P] U_2 N_{2,1},

which produces the corresponding matrix

    K = [K̂_1   T Γ_{1,1}] U [I_p  0; 0  N].

We map this K to the F and G matrices by using (3.2) and choosing F_1 = 0. The entries of the resulting F and G corresponding to ẏ(t) are zero, and thus the partial state observer is given by

    ż(t) = E z(t) + F_0 y(t),
    ψ(t) = z(t) + G_0 y(t).

We now turn our attention to determining the set of input functionals that can be observed. The matrix N̄ in (3.35) is a basis for the left nullspace of M_0, and the matrices Ū and V̄ in (3.37) are found to be the same as U and V from the design of the state observer. The basis for the left nullspace of the empty matrix V̄_2 in (3.39) is taken to be N_v = I_4. Using (3.43), we form the matrix Π and compute its left nullspace N_Π, where the partitions correspond to Ŝ_1, T̄_u, and Q from (3.43). From (3.45), we then obtain the input functional matrix T_u.                                      (3.51)

More information

Linear Equation: a 1 x 1 + a 2 x a n x n = b. x 1, x 2,..., x n : variables or unknowns

Linear Equation: a 1 x 1 + a 2 x a n x n = b. x 1, x 2,..., x n : variables or unknowns Linear Equation: a x + a 2 x 2 +... + a n x n = b. x, x 2,..., x n : variables or unknowns a, a 2,..., a n : coefficients b: constant term Examples: x + 4 2 y + (2 5)z = is linear. x 2 + y + yz = 2 is

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Online Supplement: Managing Hospital Inpatient Bed Capacity through Partitioning Care into Focused Wings

Online Supplement: Managing Hospital Inpatient Bed Capacity through Partitioning Care into Focused Wings Online Supplement: Managing Hospital Inpatient Bed Capacity through Partitioning Care into Focused Wings Thomas J. Best, Burhaneddin Sandıkçı, Donald D. Eisenstein University of Chicago Booth School of

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

6.241 Dynamic Systems and Control

6.241 Dynamic Systems and Control 6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May

More information

Network Reconstruction from Intrinsic Noise: Non-Minimum-Phase Systems

Network Reconstruction from Intrinsic Noise: Non-Minimum-Phase Systems Preprints of the 19th World Congress he International Federation of Automatic Control Network Reconstruction from Intrinsic Noise: Non-Minimum-Phase Systems David Hayden, Ye Yuan Jorge Goncalves Department

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

A Simple Computational Approach to the Fundamental Theorem of Asset Pricing

A Simple Computational Approach to the Fundamental Theorem of Asset Pricing Applied Mathematical Sciences, Vol. 6, 2012, no. 72, 3555-3562 A Simple Computational Approach to the Fundamental Theorem of Asset Pricing Cherng-tiao Perng Department of Mathematics Norfolk State University

More information

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

Dynamic Attack Detection in Cyber-Physical. Systems with Side Initial State Information

Dynamic Attack Detection in Cyber-Physical. Systems with Side Initial State Information Dynamic Attack Detection in Cyber-Physical 1 Systems with Side Initial State Information Yuan Chen, Soummya Kar, and José M. F. Moura arxiv:1503.07125v1 math.oc] 24 Mar 2015 Abstract This paper studies

More information

September 18, Notation: for linear transformations it is customary to denote simply T x rather than T (x). k=1. k=1

September 18, Notation: for linear transformations it is customary to denote simply T x rather than T (x). k=1. k=1 September 18, 2017 Contents 3. Linear Transformations 1 3.1. Definition and examples 1 3.2. The matrix of a linear transformation 2 3.3. Operations with linear transformations and with their associated

More information

Lecture 5 Least-squares

Lecture 5 Least-squares EE263 Autumn 2008-09 Stephen Boyd Lecture 5 Least-squares least-squares (approximate) solution of overdetermined equations projection and orthogonality principle least-squares estimation BLUE property

More information

DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES

DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES by HEONJONG YOO A thesis submitted to the Graduate School-New Brunswick Rutgers, The State University of New Jersey In partial fulfillment of the

More information

TRACKING AND DISTURBANCE REJECTION

TRACKING AND DISTURBANCE REJECTION TRACKING AND DISTURBANCE REJECTION Sadegh Bolouki Lecture slides for ECE 515 University of Illinois, Urbana-Champaign Fall 2016 S. Bolouki (UIUC) 1 / 13 General objective: The output to track a reference

More information

3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions

3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence

More information

1 Kalman Filter Introduction

1 Kalman Filter Introduction 1 Kalman Filter Introduction You should first read Chapter 1 of Stochastic models, estimation, and control: Volume 1 by Peter S. Maybec (available here). 1.1 Explanation of Equations (1-3) and (1-4) Equation

More information

CDS Solutions to Final Exam

CDS Solutions to Final Exam CDS 22 - Solutions to Final Exam Instructor: Danielle C Tarraf Fall 27 Problem (a) We will compute the H 2 norm of G using state-space methods (see Section 26 in DFT) We begin by finding a minimal state-space

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

Subspace Identification

Subspace Identification Chapter 10 Subspace Identification Given observations of m 1 input signals, and p 1 signals resulting from those when fed into a dynamical system under study, can we estimate the internal dynamics regulating

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Robust Network Codes for Unicast Connections: A Case Study

Robust Network Codes for Unicast Connections: A Case Study Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,

More information

Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems. Matrix Factorization Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

More information

Synthesis via State Space Methods

Synthesis via State Space Methods Chapter 18 Synthesis via State Space Methods Here, we will give a state space interpretation to many of the results described earlier. In a sense, this will duplicate the earlier work. Our reason for doing

More information

Static Output Feedback Stabilisation with H Performance for a Class of Plants

Static Output Feedback Stabilisation with H Performance for a Class of Plants Static Output Feedback Stabilisation with H Performance for a Class of Plants E. Prempain and I. Postlethwaite Control and Instrumentation Research, Department of Engineering, University of Leicester,

More information

Fiedler s Theorems on Nodal Domains

Fiedler s Theorems on Nodal Domains Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A. Spielman September 19, 2018 7.1 Overview In today s lecture we will justify some of the behavior we observed when using eigenvectors

More information

Linear Algebra (Math-324) Lecture Notes

Linear Algebra (Math-324) Lecture Notes Linear Algebra (Math-324) Lecture Notes Dr. Ali Koam and Dr. Azeem Haider September 24, 2017 c 2017,, Jazan All Rights Reserved 1 Contents 1 Real Vector Spaces 6 2 Subspaces 11 3 Linear Combination and

More information

CBE507 LECTURE III Controller Design Using State-space Methods. Professor Dae Ryook Yang

CBE507 LECTURE III Controller Design Using State-space Methods. Professor Dae Ryook Yang CBE507 LECTURE III Controller Design Using State-space Methods Professor Dae Ryook Yang Fall 2013 Dept. of Chemical and Biological Engineering Korea University Korea University III -1 Overview States What

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

Determining a span. λ + µ + ν = x 2λ + 2µ 10ν = y λ + 3µ 9ν = z.

Determining a span. λ + µ + ν = x 2λ + 2µ 10ν = y λ + 3µ 9ν = z. Determining a span Set V = R 3 and v 1 = (1, 2, 1), v 2 := (1, 2, 3), v 3 := (1 10, 9). We want to determine the span of these vectors. In other words, given (x, y, z) R 3, when is (x, y, z) span(v 1,

More information

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu

More information

LMIs for Observability and Observer Design

LMIs for Observability and Observer Design LMIs for Observability and Observer Design Matthew M. Peet Arizona State University Lecture 06: LMIs for Observability and Observer Design Observability Consider a system with no input: ẋ(t) = Ax(t), x(0)

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

MA 527 first midterm review problems Hopefully final version as of October 2nd

MA 527 first midterm review problems Hopefully final version as of October 2nd MA 57 first midterm review problems Hopefully final version as of October nd The first midterm will be on Wednesday, October 4th, from 8 to 9 pm, in MTHW 0. It will cover all the material from the classes

More information

Structural Controllability and Observability of Linear Systems Over Finite Fields with Applications to Multi-Agent Systems

Structural Controllability and Observability of Linear Systems Over Finite Fields with Applications to Multi-Agent Systems Structural Controllability and Observability of Linear Systems Over Finite Fields with Applications to Multi-Agent Systems Shreyas Sundaram, Member, IEEE, and Christoforos N. Hadjicostis, Senior Member,

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

Topic # Feedback Control. State-Space Systems Closed-loop control using estimators and regulators. Dynamics output feedback

Topic # Feedback Control. State-Space Systems Closed-loop control using estimators and regulators. Dynamics output feedback Topic #17 16.31 Feedback Control State-Space Systems Closed-loop control using estimators and regulators. Dynamics output feedback Back to reality Copyright 21 by Jonathan How. All Rights reserved 1 Fall

More information

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu

More information

Notes taken by Graham Taylor. January 22, 2005

Notes taken by Graham Taylor. January 22, 2005 CSC4 - Linear Programming and Combinatorial Optimization Lecture : Different forms of LP. The algebraic objects behind LP. Basic Feasible Solutions Notes taken by Graham Taylor January, 5 Summary: We first

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 2012 MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Solutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015

Solutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015 Solutions to Final Practice Problems Written by Victoria Kala vtkala@math.ucsb.edu Last updated /5/05 Answers This page contains answers only. See the following pages for detailed solutions. (. (a x. See

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

Least squares problems Linear Algebra with Computer Science Application

Least squares problems Linear Algebra with Computer Science Application Linear Algebra with Computer Science Application April 8, 018 1 Least Squares Problems 11 Least Squares Problems What do you do when Ax = b has no solution? Inconsistent systems arise often in applications

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information

Lecture: Quadratic optimization

Lecture: Quadratic optimization Lecture: Quadratic optimization 1. Positive definite och semidefinite matrices 2. LDL T factorization 3. Quadratic optimization without constraints 4. Quadratic optimization with constraints 5. Least-squares

More information