University of Twente. Faculty of Mathematical Sciences. On stability robustness with respect to LTV uncertainties
Faculty of Mathematical Sciences
University of Twente
University for Technical and Social Sciences
P.O. Box 217, 7500 AE Enschede, The Netherlands

Memorandum No.

On stability robustness with respect to LTV uncertainties

G. Meinsma, T. Iwasaki¹ and M. Fu²

November 1998

ISSN

¹ Dept. of Control Systems Engineering, Tokyo Institute of Technology, 2-12-1 Oookayama, Meguro, Tokyo 152, Japan
² Dept. of Electrical and Computer Engineering, University of Newcastle, Callaghan, NSW 2308, Australia
On stability robustness with respect to LTV uncertainties

Gjerrit Meinsma
Department of Systems, Signals and Control
Faculty of Mathematical Sciences
University of Twente
P.O. Box 217, 7500 AE Enschede
The Netherlands

Tetsuya Iwasaki
Department of Control Systems Engineering
Tokyo Institute of Technology
2-12-1 Oookayama, Meguro, Tokyo 152
Japan

Minyue Fu
Department of Electrical and Computer Engineering
University of Newcastle
Callaghan, NSW 2308
Australia

Abstract: It is shown that the well-known (D,G)-scaling upper bound of the structured singular value is a nonconservative test for robust stability with respect to certain linear time-varying uncertainties.

Keywords: Mixed structured singular values, Duality, Linear matrix inequalities, Time-varying systems, Robustness, IQC

Mathematics Subject Classification: 93B36, 93C05, 93D09, 93D25
1  Introduction

[Figure 1: The closed loop: the plant M and the uncertainty Δ connected in feedback, with external inputs v_1 and v_2 and internal signals u and y.]

Is the above closed loop stable for all Δ's in a given set of stable operators B? That, roughly, is the fundamental robust stability problem. There is an intriguing result by Megretski and Treil [4] and Shamma [8] which says, loosely speaking, that if M is a stable LTI operator and the set of Δ's is the set of contractive linear time-varying operators of some fixed block diagonal structure

    Δ = diag(Δ_1, Δ_2, ..., Δ_mF),    (1)

then the closed loop is robustly stable (that is, stable for all such Δ's) if and only if the H∞-norm of DMD^{-1} is less than one for some constant diagonal matrix D that commutes with the Δ's. The problem can be decided in polynomial time, and it is a problem that has since long been associated with an upper bound of the structured singular value. The intriguing part is that the result holds for any number of LTV blocks Δ_i, which is in stark contrast with the case that the Δ_i's are assumed time-invariant. Paganini [6] extended this result by allowing for the more general block diagonal structure

    Δ = diag(δ_1 I_{n_1}, ..., δ_mc I_{n_mc}, Δ_1, ..., Δ_mF).    (2)

A precise definition is given in Section 2. Paganini's result is an exact generalization and leads, again, to a convex optimization problem over the constant matrices D that commute with Δ.

In view of the connection of these results with the upper bounds of the structured singular value it is natural to ask if the well-known (D,G)-scaling upper bound of the mixed structured singular value also has a similar interpretation. In this note we show that that is indeed the case. The (D,G)-scaling upper bound of the structured singular value was originally defined as a means to provide an easy-to-verify condition that guarantees robust stability with respect to the contractive linear time-invariant operators Δ of the form

    Δ = diag(δ̃_1 I_{ñ_1}, ..., δ̃_mr I_{ñ_mr}, δ_1 I_{n_1}, ..., δ_mc I_{n_mc}, Δ_1, ..., Δ_mF),    (3)

with the δ̃_i denoting real-valued constants [1]. It is known that for general LTI plants M this sufficient condition is necessary as well if and only if

    2(m_r + m_c) + m_F ≤ 3.
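As a toy numerical illustration of the scaled small-gain test quoted above (all matrices below are made-up examples, not taken from the paper): for a constant plant M and the full-block structure Δ = diag(δ_1, δ_2), the admissible D-scales are the positive diagonal matrices, and the test asks whether sup_ω ‖D M(e^{jω}) D^{-1}‖ < 1 for some such D.

```python
import numpy as np

# A toy illustration of the scaled small-gain test (all numbers below are
# made-up examples, not taken from the paper). For the structure
# Delta = diag(delta_1, delta_2) the admissible D-scales are the positive
# diagonal matrices, and the test is sup_w ||D M(e^{jw}) D^{-1}|| < 1.

def sup_norm_on_grid(M_of_z, n_grid=256):
    """Largest spectral norm of M(e^{jw}) over a frequency grid."""
    ws = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    return max(np.linalg.norm(M_of_z(np.exp(1j * w)), 2) for w in ws)

M = lambda z: np.array([[0.0, 1.5], [0.1, 0.0]])  # constant, hence stable and LTI
D = np.diag([1.0, 3.0])                           # commutes with diag(delta_1, delta_2)
scaled = lambda z: D @ M(z) @ np.linalg.inv(D)

print(sup_norm_on_grid(M))       # 1.5 -> the unscaled small-gain test is inconclusive
print(sup_norm_on_grid(scaled))  # 0.5 -> the loop is robustly stable
```

Here the scaling D = diag(1, 3) shrinks the large off-diagonal gain; by the Megretski–Treil/Shamma result, finding any such D settles robust stability against all contractive LTV Δ's of this structure.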
(See [5].) In this note we show that the (D,G)-scaling condition is in fact both necessary and sufficient for robust stability with respect to the contractive LTV operators Δ of the form (3), with now the δ̃_i denoting linear time-varying self-adjoint operators on ℓ_2. A precise definition follows. Paganini [7] has gone through considerable trouble to show that for his structure (2) one may assume causality of Δ without changing the condition. In the extended structure (3) with self-adjoint δ̃_i this is no longer possible.

2  Notation and preliminaries

ℓ_2 := {x : Z → R : Σ_{k∈Z} |x(k)|² < ∞}. The norm ‖v‖ of v ∈ ℓ_2 is the usual norm on ℓ_2, and for vector-valued signals v ∈ ℓ_2^n the norm ‖v‖ is defined as (‖v_1‖² + ... + ‖v_n‖²)^{1/2}. Induced norms are denoted by ‖·‖ as well. So, for F : ℓ_2^n → ℓ_2^n it is defined as ‖F‖ := sup_{u∈ℓ_2^n} ‖Fu‖/‖u‖. For matrices F ∈ C^{n×m} the induced norm will be the spectral norm, and for vectors this reduces to the Euclidean norm. F^H is the complex conjugate transpose of F, and He F is the Hermitian part of F, defined as He F = (1/2)(F + F^H). An operator Δ : ℓ_2^n → ℓ_2^n is said to be contractive if ‖Δv‖ ≤ ‖v‖ for every v ∈ ℓ_2^n. Lower case δ's always denote operators from ℓ_2 to ℓ_2. Then for u, y ∈ ℓ_2^n the expression y = δI_n u is defined to mean that the entries y_k of y satisfy y_k = δu_k. An operator δ : ℓ_2 → ℓ_2 is self-adjoint if ⟨u, δv⟩ = ⟨δu, v⟩ for all u, v ∈ ℓ_2. M and Δ throughout denote bounded operators from ℓ_2^n to ℓ_2^n, and M is assumed linear time-invariant (LTI). Bounded operators on ℓ_2^n are also called stable. Hats denote Z-transforms, so if y ∈ ℓ_2 then ŷ(z) is defined as ŷ(z) = Σ_{k∈Z} y(k) z^{-k}. To avoid clutter we shall use for functions f̂ of frequency the notation f̂_ω := f̂(e^{iω}).

2.1  Stability

The closed loop depicted in Fig. 1 is considered internally stable if the map from [v_1; v_2] to [u; y] is bounded as a map from ℓ_2^{2n} to ℓ_2^{2n}. Because of the stability of M and Δ, the closed loop is internally stable iff (I − ΔM)^{-1} is bounded.
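To make the matrix-level notions above concrete (the example matrix F is mine): He F = (F + F^H)/2 is Hermitian for any F, and a matrix is contractive exactly when its spectral norm is at most one.

```python
import numpy as np

# Matrix-level versions of the notions just defined (the example matrix F is
# mine): He F = (F + F^H)/2, and a matrix is contractive iff its spectral
# norm is at most one.

def He(F):
    """Hermitian part of F."""
    return 0.5 * (F + F.conj().T)

F = np.array([[0.3 + 0.2j, 0.5],
              [0.0,       -0.4j]])

H = He(F)
print(np.allclose(H, H.conj().T))   # True: He F is always Hermitian
print(np.linalg.norm(F, 2) <= 1.0)  # True: this particular F is contractive
```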
The closed loop will be called uniformly robustly stable with respect to some set B of stable LTV operators if there is a γ > 0 such that

    ‖[u; y]‖ ≤ γ ‖[v_1; v_2]‖    ∀Δ ∈ B, ∀[v_1; v_2] ∈ ℓ_2^{2n}.    (4)

We only consider Δ's with norm at most one and stable M. In that case (4) holds if and only if there is an ε > 0 such that

    ‖(I − ΔM)u‖ ≥ ε‖u‖    ∀Δ ∈ B, ∀u ∈ ℓ_2^n.

2.2  The Δ's and the (D,G)-scaling matrices

Throughout we assume that Δ : ℓ_2^n → ℓ_2^n and that Δ is of the form

    Δ = diag(δ̃_1 I_{ñ_1}, ..., δ̃_mr I_{ñ_mr}, δ_1 I_{n_1}, ..., δ_mc I_{n_mc}, Δ_1, ..., Δ_mF)    (5)
with

    δ̃_i : ℓ_2 → ℓ_2 LTV, self-adjoint and ‖δ̃_i‖ ≤ 1;
    δ_i : ℓ_2 → ℓ_2 LTV and ‖δ_i‖ ≤ 1;    (6)
    Δ_i : ℓ_2^{q_i} → ℓ_2^{q_i} LTV and ‖Δ_i‖ ≤ 1.

The dimensions and numbers ñ_i, n_i, q_i, m_r, m_c, m_F of the various identity matrices and Δ_i blocks are fixed, but otherwise Δ may vary over all possible n×n LTV operators of the form (5), (6). Given that, the sets D and G of D- and G-scales are defined accordingly as

    D = {D = diag(D̃_1, ..., D̃_mr, D_1, ..., D_mc, d_1 I_{q_1}, ..., d_mF I_{q_mF}) : 0 < D = D^T ∈ R^{n×n}},
    G = {G = diag(G̃_1, ..., G̃_mr, 0, ..., 0, 0, ..., 0) : G = G^H ∈ jR^{n×n}}.

Note that the D-scales are assumed real-valued and that the G-scales are taken to be purely imaginary. As it turns out there is no need to consider a wider class of D- and G-scales.

3  The discrete-time result

Theorem 3.1. The discrete-time closed loop in Fig. 1 with stable LTI plant with transfer matrix M is uniformly robustly stable with respect to Δ's of the form (5), (6) if and only if there is a constant matrix D ∈ D and a constant matrix G ∈ G such that

    M_ω^H D M_ω + j(G M_ω − M_ω^H G) − D < 0    ∀ω ∈ [0, 2π].    (7)

The existence of such D and G can be tested in polynomial time.

The remainder of this paper is devoted to a proof of this result. Megretski [3] showed this for the full-blocks case (1), and Paganini [6] derived this result for the case that the Δ's are of the form (2). The proof of the general case (5) follows the same lines as those of [6] and [5]. A key idea is to replace the condition of the contractive Δ-blocks with an integral quadratic condition independent of Δ:

Lemma 3.2. Let u, y ∈ ℓ_2^q and consider the quadratic integral

    Σ(u, y) := ∫_0^{2π} (ŷ_ω − û_ω)(ŷ_ω + û_ω)^H dω ∈ R^{q×q}.    (8)

The following holds.

1. There is a contractive self-adjoint LTV δ : ℓ_2 → ℓ_2 such that u = δI_q y if and only if Σ(u, y) is Hermitian and nonnegative definite.
2. There is a contractive LTV δ : ℓ_2 → ℓ_2 such that u = δI_q y if and only if the Hermitian part of Σ(u, y) is nonnegative definite.
3. There is a contractive LTV Δ : ℓ_2^q → ℓ_2^q such that u = Δy if and only if the trace of Σ(u, y) is nonnegative.
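Item 1 is the delicate one; the appendix proves it by constructing, from coefficient matrices V_1 = V_1^T and V_2 with V_1^T V_1 + V_2^T V_2 ≤ I, an explicit symmetric contractive Δ (equation (17) in the appendix). That construction can be sanity-checked numerically; a sketch with random data (fixed seed, all variable names mine):

```python
import numpy as np

# Sanity check of the construction in the appendix proof of item 1
# (equation (17)): if V1 = V1^T and V1^T V1 + V2^T V2 <= I (strictly, so that
# I - V1^2 is invertible), then
#     Delta = [[V1, V2^T], [V2, -V2 V1 (I - V1^2)^{-1} V2^T]]
# should come out symmetric and contractive. Random data, fixed seed.

rng = np.random.default_rng(1)
p, q = 3, 2
A = rng.standard_normal((p, p))
V1 = 0.5 * (A + A.T)                 # symmetric
V2 = rng.standard_normal((q, p))

# scale so that V1^T V1 + V2^T V2 < I (strict)
s = np.sqrt(np.linalg.eigvalsh(V1.T @ V1 + V2.T @ V2).max()) * 1.01
V1, V2 = V1 / s, V2 / s

I = np.eye(p)
D22 = -V2 @ V1 @ np.linalg.inv(I - V1 @ V1) @ V2.T
Delta = np.block([[V1, V2.T], [V2, D22]])

print(np.allclose(Delta, Delta.T))           # symmetric
print(np.linalg.norm(Delta, 2) <= 1 + 1e-9)  # contractive
```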
Proof. See appendix.

A consequence of this result is the following.

Lemma 3.3. Let u be a nonzero element of ℓ_2^n. Then (I − ΔM)u = 0 for some Δ of the form (5), (6) if and only if

    Σ(u, Mu) := ∫_0^{2π} (M_ω − I) û_ω û_ω^H (M_ω + I)^H dω    (9)

is of the form

    Σ(u, Mu) = [a block matrix, partitioned compatibly with Δ in (5), whose diagonal blocks are Z̃_1, ..., Z̃_mr, Z_1, ..., Z_mc, Z̄_1, ..., Z̄_mF, and whose off-diagonal blocks (denoted ?) are irrelevant] ∈ R^{n×n},    (10)

with Z̃_i = Z̃_i^T ≥ 0, He Z_i ≥ 0 and Tr Z̄_i ≥ 0.

Proof (sketch). The equation (I − ΔM)u = 0 is the same as

    u = ΔMu.

With appropriate partitionings, the expression u = ΔMu can be written row-block by row-block as

    u_1 = δ̃_1 M_1 u
    u_2 = δ̃_2 M_2 u
      ...
    u_K = Δ_mF M_K u.

By Lemma 3.2 there exist contractive δ̃_i, δ_i and Δ_i of the form (6) for which the above equalities hold iff certain quadratic integrals Σ_i have certain properties. It is not too difficult to figure out that these quadratic integrals Σ_i are exactly the blocks on the diagonal of Σ(u, Mu), and that the conditions on these blocks are that they satisfy Σ_i = Σ_i^T ≥ 0, He Σ_i ≥ 0, or Tr Σ_i ≥ 0, corresponding to the three types of uncertainties.

Proof of Theorem 3.1 (sufficiency). Suppose such D ∈ D and G ∈ G exist. Then a standard argument shows that there is an ε > 0 such that ‖(I − ΔM)u‖ ≥ ε‖u‖ for all u and all contractive Δ of the form (5). This is the definition of uniform robust stability.
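Before turning to the converse, note that for a given candidate pair (D, G) the frequency-domain condition (7) is straightforward to check on a grid. A sketch; the plant and scales below are made-up examples with one repeated real block δ̃I_2:

```python
import numpy as np

# Gridded feasibility check of condition (7) for given constant D and G.
# The plant and scales are made-up examples: M(z) = [[0, 2], [-2, 0]] with
# one repeated real block delta*I_2. With G = 0 the test fails, while a
# nonzero (purely imaginary, Hermitian) G certifies robust stability.

def condition7_holds(M_of_z, D, G, n_grid=128):
    """True if M_w^H D M_w + j(G M_w - M_w^H G) - D < 0 on the grid."""
    for w in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
        Mw = M_of_z(np.exp(1j * w))
        X = Mw.conj().T @ D @ Mw + 1j * (G @ Mw - Mw.conj().T @ G) - D
        if np.linalg.eigvalsh(X).max() >= 0.0:  # X is Hermitian by construction
            return False
    return True

M = lambda z: np.array([[0.0, 2.0], [-2.0, 0.0]])
D = np.eye(2)
G0 = np.zeros((2, 2))
G1 = np.array([[0.0, -2.0j], [2.0j, 0.0]])  # G1 = G1^H, purely imaginary

print(condition7_holds(M, D, G0))  # False: this D with G = 0 does not work
print(condition7_holds(M, D, G1))  # True:  (D, G)-scaling certifies stability
```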
Conversely, suppose the closed loop is uniformly robustly stable. For some ε > 0, then, ‖(I − ΔM)u‖ ≥ ε for every u of unit norm. Define

    W := {Σ(u, Mu) : ‖u‖ = 1} ⊂ R^{n×n}.    (11)

By application of Lemma 3.3, the set W does not intersect the convex cone Z defined as

    Z := {Z : Z is of the form (10) with Z̃_i = Z̃_i^T ≥ 0, He Z_i ≥ 0, Tr Z̄_i ≥ 0}.

In the appendix we show that in fact W is bounded away from Z. Remarkably, the closure W̄ of W is convex. This observation is from Megretski & Treil [4], and for completeness a proof is listed in the appendix, Lemma 5.1. Because W is bounded away from Z, the closure W̄ is bounded away from Z as well, so there is a ρ > 0 such that W̄ also does not intersect

    Z_ρ := Z + {Z ∈ R^{n×n} : ‖Z‖ ≤ ρ}.

Both W̄ and Z_ρ are convex and have empty intersection, and therefore a hyperplane exists that separates the two sets [2, p. 133]. In other words, there is a nonzero matrix E ∈ R^{n×n} (say of unit norm) such that¹

    ⟨E, W̄⟩ ≤ ⟨E, Z_ρ⟩.    (12)

As inner product take ⟨X, Y⟩ = Tr X^T Y. In particular, (12) says that ⟨E, Z_ρ⟩ is bounded from below. By Lemma 5.3 that is the case if and only if E is of the form

    E = diag(Ẽ_1, ..., Ẽ_mr, E_1, ..., E_mc, e_1 I, ..., e_mF I)

with Ẽ_i + Ẽ_i^T ≥ 0, E_i = E_i^T ≥ 0 and e_i ≥ 0, that is, if and only if E lies in the closure of D + jG. In that case inf ⟨E, Z⟩ = 0, and so

    a := inf ⟨E, Z_ρ⟩ < 0.

From (12) we thus see that ⟨E, W̄⟩ ≤ a < 0. If ‖u‖ = 1, then

    ∫_0^{2π} û_ω^H ( He (M_ω + I)^H E (M_ω − I) ) û_ω dω
        = Re Tr ∫_0^{2π} E (M_ω − I) û_ω û_ω^H (M_ω + I)^H dω
        = ⟨E, Σ(u, Mu)⟩ ≤ sup ⟨E, W⟩ ≤ a < 0.    (13)

This being at most a < 0 for every u ∈ ℓ_2^n with ‖u‖ = 1 implies that

    He (M_ω + I)^H (E + εI)(M_ω − I) < 0    ∀ω ∈ [0, 2π],    (14)

for some small enough ε > 0. Express E + εI as E + εI = D + jG for some D ∈ D and G ∈ G. Then Equation (14) becomes (7).

¹ In (12) the expression ⟨E, W̄⟩ denotes the set {x : x = ⟨E, Y⟩, Y ∈ W̄}, and the inequality in (12) is defined to mean that every element of the set on the left-hand side, ⟨E, W̄⟩, is less than or equal to every element of the set on the right-hand side, ⟨E, Z_ρ⟩.
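The first equality in (13) is, for each frequency, a finite-dimensional matrix identity: Re Tr(E A û û^H B^H) = û^H He(B^H E A) û with A = M_ω − I and B = M_ω + I. A quick numerical check with random data (fixed seed; all values are mine):

```python
import numpy as np

# Numerical check of the identity behind (13):
#   Re Tr( E (M - I) u u^H (M + I)^H ) = u^H He( (M + I)^H E (M - I) ) u,
# where He(B) = (B + B^H)/2. Random data, fixed seed.

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = rng.standard_normal((n, n))          # E is real, as in the text
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
I = np.eye(n)

lhs = np.real(np.trace(E @ (M - I) @ np.outer(u, u.conj()) @ (M + I).conj().T))
B = (M + I).conj().T @ E @ (M - I)
rhs = np.real(u.conj() @ (0.5 * (B + B.conj().T)) @ u)

print(np.isclose(lhs, rhs))  # True
```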
4  The continuous-time result

Analogous to the discrete-time case, we say that a continuous-time system is uniformly robustly stable if there is a γ > 0 such that (4) holds for all v_1, v_2 ∈ L_2. Completely analogously to the discrete-time case it can be shown that:

Theorem 4.1. The continuous-time closed loop in Fig. 1 with stable LTI plant with transfer matrix M is uniformly robustly stable with respect to Δ's of the form (5) with

    δ̃_i : L_2 → L_2 LTV, self-adjoint and ‖δ̃_i‖ ≤ 1;
    δ_i : L_2 → L_2 LTV and ‖δ_i‖ ≤ 1;
    Δ_i : L_2^{q_i} → L_2^{q_i} LTV and ‖Δ_i‖ ≤ 1,

if and only if there is a constant matrix D ∈ D and a constant matrix G ∈ G such that

    M(jω)^H D M(jω) + j(G M(jω) − M(jω)^H G) − D < 0    for all ω ∈ R.

5  Appendix

Proof of Lemma 3.2. Items 2 and 3 are proved in [6] (note that the Hermitian part of (8) is ∫_0^{2π} (ŷ_ω ŷ_ω^H − û_ω û_ω^H) dω, and that its trace equals 2π(‖y‖² − ‖u‖²)). If u := δI_q y with δ self-adjoint and contractive, then (8) is easily seen to be Hermitian and ≥ 0. Conversely, suppose (8) is Hermitian and nonnegative. Let {f_j}, j = 1, 2, ..., be an orthonormal basis of ℓ_2, and expand y ∈ ℓ_2^q in this basis:

    y = Σ_{j=1,2,...} α(j) f_j,    α(j) ∈ R^q.

We may associate with this expansion the (infinite) matrix Y of coefficients

    Y = [ α_1(1)  α_2(1)  ...  α_q(1)
          α_1(2)  α_2(2)  ...  α_q(2)
          α_1(3)  α_2(3)  ...  α_q(3)
            ...     ...          ...  ].

The matrix U is likewise defined from u. In this matrix notation the expression u = δI_q y becomes U = ΔY, with Δ the matrix of δ with respect to the basis {f_j}, and the quadratic integral (8) becomes (up to an irrelevant positive factor)

    Σ(u, y) = (Y^T − U^T)(Y + U).

By assumption the above is Hermitian and nonnegative definite, that is,

    Y^T U = U^T Y  and  U^T U ≤ Y^T Y.    (15)

We may assume without loss of generality that the orthonormal basis {f_j} was chosen such that the first, say p, elements {f_1, ..., f_p} span the space spanned by the entries {y_1, ..., y_q} of y. Then Y is of the form

    Y = [ C
          0 ]    for some full row rank C ∈ R^{p×q}.
Then the second inequality of (15) says that U^T U ≤ C^T C. This implies that U is of the form U = VC for some V. Partition V as [V_1; V_2] with V_1 ∈ R^{p×p}. The two formulas of (15) then become

    C^T V_1 C = C^T V_1^T C  and  C^T (V_1^T V_1 + V_2^T V_2) C ≤ C^T I_p C.    (16)

As C has full row rank, (16) is equivalent to

    V_1 = V_1^T  and  V_1^T V_1 + V_2^T V_2 ≤ I_p.

It is now immediate that U equals U = ΔY for Δ defined as

    Δ := [ V_1    V_2^T
           V_2   −V_2 V_1 (I − V_1²)^{-1} V_2^T ].    (17)

It is easy to verify that Δ is contractive. Furthermore, Δ is symmetric, and so the corresponding operator δ is self-adjoint. (It may happen that I − V_1² is singular. In that case the inverse in (17) may be replaced with the Moore-Penrose inverse.)

Lemma 5.1. The closure of (11) is convex.

Proof. The proof hinges on the fact that lim_{N→∞} ⟨u, σ_N v⟩ = 0 for every pair u, v ∈ ℓ_2^n, with σ_N denoting the N-step delay. Let u, v ∈ ℓ_2^n both have unit norm, i.e., Σ(u, Mu), Σ(v, Mv) ∈ W. Given N ∈ N and λ ∈ [0, 1], define x as

    x := √λ u + √(1 − λ) σ_N v.

Since Σ is linear in each of its two arguments, we have that

    Σ(x, Mx) = λ Σ(u, Mu) + √λ √(1 − λ) Σ(u, M σ_N v) + √λ √(1 − λ) Σ(σ_N v, Mu) + (1 − λ) Σ(v, Mv)

(here we used the time-invariance of M, which gives Σ(σ_N v, M σ_N v) = Σ(v, Mv)). As N → ∞ the contributions of Σ(u, M σ_N v) and Σ(σ_N v, Mu) tend to zero, so

    lim_{N→∞} Σ(x, Mx) = λ Σ(u, Mu) + (1 − λ) Σ(v, Mv).

That this is an element of the closure of (11) follows from the fact that lim_{N→∞} ‖x‖² = λ‖u‖² + (1 − λ)‖v‖² = 1.

Lemma 5.2. Uniform robust stability implies that W is bounded away from Z.

Proof. Suppose to the contrary that

    inf_{u ∈ ℓ_2^n, ‖u‖=1, Z ∈ Z} ‖Σ(u, Mu) − Z‖ = 0.

This means that there is a sequence {u_k, Q_k}_{k∈N} ⊂ ℓ_2^n × R^{n×n} such that

    Σ(u_k, Mu_k) + Q_k ∈ Z,    ‖u_k‖ = 1,    lim_{k→∞} Q_k = 0.
For each k define y_k := Mu_k ∈ ℓ_2^n, and take z_k to be any element of ℓ_2^n whose entries are mutually orthogonal and have unit norm, ⟨z_{k,i}, z_{k,j}⟩ = δ_{ij}, and whose entries are also orthogonal to all entries of u_k and y_k. With it define

    ū_k := u_k + (1/2)(√‖Q_k‖ I_n − Q_k/√‖Q_k‖) z_k,
    ȳ_k := y_k + (1/2)(√‖Q_k‖ I_n + Q_k/√‖Q_k‖) z_k.

The reason for this definition is that now

    Σ(ū_k, ȳ_k) = ∫_0^{2π} (ŷ_{k,ω} − û_{k,ω} + (1/√‖Q_k‖) Q_k ẑ_{k,ω})(ŷ_{k,ω} + û_{k,ω} + √‖Q_k‖ ẑ_{k,ω})^H dω = Σ(u_k, y_k) + Q_k ∈ Z.

So we see that Σ(ū_k, ȳ_k) is an element of Z and, hence, ū_k = Δ_k ȳ_k for some contractive Δ_k of the form (5), (6). Finally, consider

    (I − Δ_k M) ū_k = ū_k − Δ_k M (u_k + (ū_k − u_k))
                    = ū_k − Δ_k (y_k + M(ū_k − u_k))
                    = ū_k − Δ_k (ȳ_k + (y_k − ȳ_k)) − Δ_k M (ū_k − u_k)
                    = Δ_k (ȳ_k − y_k) − Δ_k M (ū_k − u_k).    (18)

Using the facts that ‖ū_k − u_k‖ = O(√‖Q_k‖), ‖ȳ_k − y_k‖ = O(√‖Q_k‖) and lim_{k→∞} Q_k = 0, we obtain from (18) that

    lim_{k→∞} (I − Δ_k M) ū_k = 0,    lim_{k→∞} ‖ū_k‖ = 1.

This contradicts uniform robust stability.

Lemma 5.3. inf_{Z∈Z} Tr E^T Z is bounded from below for some E ∈ R^{n×n} if and only if E is of the form

    E = diag(Ẽ_1, ..., Ẽ_mr, E_1, ..., E_mc, e_1 I, ..., e_mF I)

with Ẽ_i + Ẽ_i^T ≥ 0, E_i = E_i^T ≥ 0 and e_i ≥ 0.

Proof. Suppose that inf_{Z∈Z} Tr E^T Z is bounded from below. The off-diagonal blocks of E are then zero, for the following reason. Let F be equal to E but with its blocks on the diagonal set to zero. The off-diagonal blocks of Z ∈ Z are not restricted in any way, so Z := λF is an element of Z for every λ ∈ R. If F is nonzero then Tr E^T Z = Tr E^T (λF) = λ Tr F^T F, and this is unbounded from below as a function of λ. Therefore F must be zero, i.e., E is block-diagonal. The general form of a block-diagonal E is

    E = diag(Ẽ_1, ..., Ẽ_mr, E_1, ..., E_mc, Ē_1, ..., Ē_mF).

Express Z as in (10). Then

    Tr E^T Z = Σ_i Tr Ẽ_i^T Z̃_i + Σ_i Tr E_i^T Z_i + Σ_i Tr Ē_i^T Z̄_i.
Each block of Z ∈ Z can vary independently of all other blocks of Z, so the only way that the above is bounded from below is that all of

    inf_{Z̃_i = Z̃_i^T ≥ 0} Tr Ẽ_i^T Z̃_i,    inf_{He Z_i ≥ 0} Tr E_i^T Z_i    and    inf_{Tr Z̄_i ≥ 0} Tr Ē_i^T Z̄_i

are bounded from below. It is fairly easy to show that

    inf_{Z̃_i = Z̃_i^T ≥ 0} Tr Ẽ_i^T Z̃_i > −∞  ⟺  He Ẽ_i ≥ 0;
    inf_{He Z_i ≥ 0} Tr E_i^T Z_i > −∞  ⟺  E_i = E_i^T ≥ 0;
    inf_{Tr Z̄_i ≥ 0} Tr Ē_i^T Z̄_i > −∞  ⟺  Ē_i = e_i I, 0 ≤ e_i ∈ R.

(This is considered in more detail in [5].)

Acknowledgement. The authors would like to thank Fernando Paganini for his comments.

References

[1] M. Fan, A. Tits, and J. Doyle. Robustness in the presence of joint parametric uncertainty and unmodeled dynamics. IEEE Trans. on Aut. Control, 36(1):25-38, 1991.

[2] D. G. Luenberger. Optimization by Vector Space Methods. John Wiley, New York, 1969.

[3] A. Megretski. Necessary and sufficient conditions of stability: A multiloop generalization of the circle criterion. IEEE Trans. on Aut. Control, 38(5), 1993.

[4] A. Megretski and S. Treil. Power distribution inequalities in optimization and robustness of uncertain systems. J. Math. Syst. Estimation and Control, 3(3):301-319, 1993.

[5] G. Meinsma, Y. Shrivastava, and M. Fu. A dual formulation of mixed μ and on the losslessness of (D,G)-scaling. IEEE Trans. Aut. Control, 42(7):1032-1036, 1997.

[6] F. Paganini. Analysis of implicitly defined systems. In Proceedings of the 33rd CDC, 1994.

[7] F. Paganini. Sets and Constraints in the Analysis of Uncertain Systems. PhD thesis, Caltech, USA, 1996.

[8] J. Shamma. Robust stability with time-varying structured uncertainty. IEEE Transactions on Automatic Control, 39(4):714-724, 1994.
More informationSome inequalities for sum and product of positive semide nite matrices
Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More information12 CHAPTER 1. PRELIMINARIES Lemma 1.3 (Cauchy-Schwarz inequality) Let (; ) be an inner product in < n. Then for all x; y 2 < n we have j(x; y)j (x; x)
1.4. INNER PRODUCTS,VECTOR NORMS, AND MATRIX NORMS 11 The estimate ^ is unbiased, but E(^ 2 ) = n?1 n 2 and is thus biased. An unbiased estimate is ^ 2 = 1 (x i? ^) 2 : n? 1 In x?? we show that the linear
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More informationPart 1a: Inner product, Orthogonality, Vector/Matrix norm
Part 1a: Inner product, Orthogonality, Vector/Matrix norm September 19, 2018 Numerical Linear Algebra Part 1a September 19, 2018 1 / 16 1. Inner product on a linear space V over the number field F A map,
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationPreliminary Linear Algebra 1. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 100
Preliminary Linear Algebra 1 Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 100 Notation for all there exists such that therefore because end of proof (QED) Copyright c 2012
More informationarzelier
COURSE ON LMI OPTIMIZATION WITH APPLICATIONS IN CONTROL PART II.1 LMIs IN SYSTEMS CONTROL STATE-SPACE METHODS STABILITY ANALYSIS Didier HENRION www.laas.fr/ henrion henrion@laas.fr Denis ARZELIER www.laas.fr/
More informationAdvanced Digital Signal Processing -Introduction
Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More information(v, w) = arccos( < v, w >
MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:
More informationLecture 7: Semidefinite programming
CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational
More information08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms
(February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops
More informationOn the simultaneous diagonal stability of a pair of positive linear systems
On the simultaneous diagonal stability of a pair of positive linear systems Oliver Mason Hamilton Institute NUI Maynooth Ireland Robert Shorten Hamilton Institute NUI Maynooth Ireland Abstract In this
More information3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions
A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence
More informationHere is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J
Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.
More informationWe use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write
1 MATH FACTS 11 Vectors 111 Definition We use the overhead arrow to denote a column vector, ie, a number with a direction For example, in three-space, we write The elements of a vector have a graphical
More informationInformation Structures Preserved Under Nonlinear Time-Varying Feedback
Information Structures Preserved Under Nonlinear Time-Varying Feedback Michael Rotkowitz Electrical Engineering Royal Institute of Technology (KTH) SE-100 44 Stockholm, Sweden Email: michael.rotkowitz@ee.kth.se
More informationComplex Laplacians and Applications in Multi-Agent Systems
1 Complex Laplacians and Applications in Multi-Agent Systems Jiu-Gang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complex-valued Laplacians have been shown to be powerful
More informationIn particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with
Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient
More informationCharacterization of half-radial matrices
Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the
More informationApplications of Controlled Invariance to the l 1 Optimal Control Problem
Applications of Controlled Invariance to the l 1 Optimal Control Problem Carlos E.T. Dórea and Jean-Claude Hennet LAAS-CNRS 7, Ave. du Colonel Roche, 31077 Toulouse Cédex 4, FRANCE Phone : (+33) 61 33
More informationLinear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013
Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Abstract As in optimal control theory, linear quadratic (LQ) differential games (DG) can be solved, even in high dimension,
More informationBlock companion matrices, discrete-time block diagonal stability and polynomial matrices
Block companion matrices, discrete-time block diagonal stability and polynomial matrices Harald K. Wimmer Mathematisches Institut Universität Würzburg D-97074 Würzburg Germany October 25, 2008 Abstract
More informationTypical Problem: Compute.
Math 2040 Chapter 6 Orhtogonality and Least Squares 6.1 and some of 6.7: Inner Product, Length and Orthogonality. Definition: If x, y R n, then x y = x 1 y 1 +... + x n y n is the dot product of x and
More informationFinite Frames and Graph Theoretical Uncertainty Principles
Finite Frames and Graph Theoretical Uncertainty Principles (pkoprows@math.umd.edu) University of Maryland - College Park April 13, 2015 Outline 1 Motivation 2 Definitions 3 Results Outline 1 Motivation
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationChapter 4 Euclid Space
Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,
More informationTHE GAP BETWEEN COMPLEX STRUCTURED SINGULAR VALUE µ AND ITS UPPER BOUND IS INFINITE
THE GAP BETWEEN COMPLEX STRUCTURED SINGULAR VALUE µ AND ITS UPPER BOUND IS INFINITE S. TREIL 0. Introduction The (complex) structured singular value µ = µ(a) of a square matrix A was introduced by J. Doyle
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity
More informationComputational math: Assignment 1
Computational math: Assignment 1 Thanks Ting Gao for her Latex file 11 Let B be a 4 4 matrix to which we apply the following operations: 1double column 1, halve row 3, 3add row 3 to row 1, 4interchange
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationPart V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory
Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite
More informationDiscrete-Time H Gaussian Filter
Proceedings of the 17th World Congress The International Federation of Automatic Control Discrete-Time H Gaussian Filter Ali Tahmasebi and Xiang Chen Department of Electrical and Computer Engineering,
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More informationDefinition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition
6 Vector Spaces with Inned Product Basis and Dimension Section Objective(s): Vector Spaces and Subspaces Linear (In)dependence Basis and Dimension Inner Product 6 Vector Spaces and Subspaces Definition
More information1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),
Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer
More informationOptimum Sampling Vectors for Wiener Filter Noise Reduction
58 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 50, NO. 1, JANUARY 2002 Optimum Sampling Vectors for Wiener Filter Noise Reduction Yukihiko Yamashita, Member, IEEE Absact Sampling is a very important and
More informationThe Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation
The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse
More informationQuadratic and Copositive Lyapunov Functions and the Stability of Positive Switched Linear Systems
Proceedings of the 2007 American Control Conference Marriott Marquis Hotel at Times Square New York City, USA, July 11-13, 2007 WeA20.1 Quadratic and Copositive Lyapunov Functions and the Stability of
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationTOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C :
TOEPLITZ OPERATORS EFTON PARK 1. Introduction to Toeplitz Operators Otto Toeplitz lived from 1881-1940 in Goettingen, and it was pretty rough there, so he eventually went to Palestine and eventually contracted
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More information