Solutions to the generalized Sylvester matrix equations by a singular value decomposition


Journal of Control Theory and Applications

Bin ZHOU, Guangren DUAN
(Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin Heilongjiang, China)

Received 16 June 2006; revised 22 November 2006. This work was supported by the Chinese Outstanding Youth Foundation (No. ) and the Program for Changjiang Scholars and Innovative Research Team in University.

Abstract: In this paper, solutions to the generalized Sylvester matrix equations AX - XF = BY and MXN - X = TY, with A, M ∈ R^{n×n}, B, T ∈ R^{n×r}, F, N ∈ R^{p×p} and the matrices N and F in companion form, are established by a singular value decomposition of a matrix with dimensions n×(n+pr). The algorithm proposed in this paper for the equation AX - XF = BY requires neither the controllability of the matrix pair (A, B) nor the restriction that A and F have no common eigenvalues. Since a singular value decomposition is adopted, the algorithm is numerically stable, may provide great convenience for the computation of the solutions to these equations, and can perform important functions in many design problems in control systems theory.

Keywords: Generalized Sylvester matrix equations; General solutions; Companion matrix; Singular value decomposition

1 Symbols and notations

In this paper we use B^T and rank(B) to denote the transpose and the rank of the matrix B, respectively, and b_ij is the element in the i-th row and j-th column of B. I_p is the p×p identity matrix, and 0 is used as an r×s zero matrix when the dimensions are evident from the context. The symbol ⊗ denotes the Kronecker product. We write A ∈ F^{r×p} to mean that A is an r×p matrix over the field F. We use col[A_i]_{i=p}^{q} to denote the matrix obtained by stacking A_p, A_{p+1}, ..., A_q on top of one another, and row[A_i]_{i=p}^{q} = (col[A_i^T]_{i=p}^{q})^T = [A_p, A_{p+1}, ..., A_q]. Further, for A ∈ R^{n×n} and B ∈ R^{n×r}, we define the Krylov (controllability) matrix of the pair (A, B) as

    Q_c(A, B, k) = row[A^i B]_{i=0}^{k-1} = [B, AB, ..., A^{k-1}B].

2 Introduction

The general solution to the generalized Sylvester matrix equation

    AX - XF = BY,                                                     (1)

where A ∈ R^{n×n}, B ∈ R^{n×r} and F ∈ R^{p×p} are known, is closely related to many problems in linear control systems theory, such as eigenvalue assignment [1, 2], observer design [3], eigenstructure assignment design [4, 5], constrained control [6], etc., and has been studied by many authors (see the references therein). When the matrix F is in Jordan form, an attractive analytical and restriction-free solution with explicit freedom is presented in [3]. To obtain this solution, one needs to carry out an orthonormal transformation, compute a matrix inverse, and solve a series of linear equation groups. Reference [5] proposes two solutions to the matrix equation, also for the case where the matrix F is in Jordan form. The first is in an iterative form, while the second is in an explicit parametric form. To obtain the explicit solution proposed in [5], one needs to carry out a right coprime factorization of (sI - A)^{-1}B (when the eigenvalues of the Jordan matrix F are undetermined) or a series of singular value decompositions (when the eigenvalues of F are known). Generalization of this explicit solution to a more general case is considered in [4, 7]. When the matrix F is in companion form, a very neat, general and complete parametric solution to the generalized Sylvester matrix equation (1) is proposed in [8]. That solution is expressed in terms of the controllability matrix of the matrix pair (A, B), a symmetric operator matrix and a parametric matrix in Hankel form, and it can provide great convenience for many analysis and design problems associated with the matrix equation (1). However, it has the disadvantage that when A and F have common eigenvalues the given solution may not be complete, which is inconvenient in some problems such as pole assignment and eigenstructure assignment.
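The col and row stacking operators and the Krylov matrix Q_c(A, B, k) from Section 1 are used repeatedly in what follows. As a reading aid, a minimal NumPy rendering of this notation is sketched below; the helper names are our own illustrative choices, not part of the paper.

```python
import numpy as np

def col(blocks):
    """col[A_i]: stack the given blocks on top of one another."""
    return np.vstack(blocks)

def row(blocks):
    """row[A_i] = (col[A_i^T])^T: place the given blocks side by side."""
    return np.hstack(blocks)

def krylov(A, B, k):
    """Krylov (controllability) matrix Q_c(A, B, k) = [B, AB, ..., A^(k-1) B]."""
    return row([np.linalg.matrix_power(A, i) @ B for i in range(k)])
```

Later sketches reuse krylov to form Q_c(A, B, p).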

In this note, we also consider the generalized Sylvester matrix equation (1) with F being a companion matrix. Different from the method proposed in [8], when the matrix F is in companion form, the generalized Sylvester matrix equation (1) is first converted into a matrix equation in vector form,

    Ã x̃ = 0,                                                         (2)

where Ã ∈ R^{n×(n+pr)}. Such a linear equation can be solved efficiently by a singular value decomposition of the matrix Ã. With some relations between the original matrix variables X, Y and the new vector variable x̃, solutions to the original generalized Sylvester matrix equation are then obtained immediately. It is worth pointing out that the generalized Sylvester matrix equation (1) can always be converted into the vector form (2) with

    Ã = [I_p ⊗ A - F^T ⊗ I_n,  -(I_p ⊗ B)],   x̃ = col[ col[x_i]_{i=1}^{p}, col[y_i]_{i=1}^{p} ],

where x_i and y_i are the columns of X and Y. However, in this case Ã ∈ R^{pn×(pn+pr)}, and the dimension pn×(pn+pr) is obviously higher than n×(n+pr), which may cause numerical problems.

The generalized Sylvester matrix equation

    AX - EXF = BY                                                     (3)

is closely related to many synthesis problems in descriptor linear systems theory. In solving the generalized Sylvester matrix equation (3), it is extremely important to find the complete parametric solutions, that is, parametric solutions consisting of the maximum number of free parameters, since many problems, such as robustness in control system design, require full use of the design freedom. For (3) with F in Jordan form, [4] has proposed a complete parametric solution; however, this solution is not in a direct, explicit form but in a recursive form. Also for F in Jordan form, and with the matrix triple (E, A, B) assumed to be R-controllable, [7] has given a complete explicit solution which uses the right coprime factorization of the input-state transfer function (sE - A)^{-1}B. When F is an arbitrary matrix, [9] gives a complete explicit solution, also involving the right coprime factorization of (sE - A)^{-1}B, and such results are extended to a more general case in [10]. In this paper we consider this problem in another way. We first give the following lemma.

Lemma 1  Consider the generalized Sylvester matrix equation (3), where A, E ∈ R^{n×n}, B ∈ R^{n×r} and F ∈ R^{p×p} are known and the matrix pair (E, A) is regular. Then (3) is equivalent to the generalized Sylvester matrix equation

    MXN - X = TY                                                      (4)

with

    M = (γE - A)^{-1}E,   N = γI - F,   T = (γE - A)^{-1}B,           (5)

where γ is an arbitrary scalar such that (γE - A) is nonsingular.

Proof  Since the matrix pair (E, A) is regular, there exists a scalar γ such that (γE - A) is nonsingular. Premultiplying equation (3) by (γE - A)^{-1} produces

    (γE - A)^{-1}AX - (γE - A)^{-1}EXF = (γE - A)^{-1}BY.             (6)

Let M = (γE - A)^{-1}E and note that

    γM - (γE - A)^{-1}A = γ(γE - A)^{-1}E - (γE - A)^{-1}A = (γE - A)^{-1}(γE - A) = I,

so (γE - A)^{-1}A = γM - I. Hence equation (6) is equivalent to (γM - I)X - MXF = TY, that is, MX(γI - F) - X = TY. Letting N = γI - F reduces this equation to (4).

Lemma 1 shows that the solutions to the generalized Sylvester matrix equation (3) are obtained immediately once the solutions to the generalized Sylvester matrix equation (4) are found. Similar to the generalized Sylvester matrix equation (1), the generalized Sylvester matrix equation (4) can be solved by the same process described above.
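As a quick numerical illustration of Lemma 1, the transformation (5) is easy to implement. The sketch below uses illustrative sizes and names, assumes a user-supplied γ with γE - A nonsingular, and checks that the residuals of (3) and (4) vanish together.

```python
import numpy as np

def descriptor_to_standard(E, A, B, F, gamma):
    """Lemma 1: rewrite A X - E X F = B Y as M X N - X = T Y via (5).

    gamma is a user-chosen scalar with (gamma*E - A) nonsingular.
    """
    W = gamma * E - A
    M = np.linalg.solve(W, E)            # M = (gamma E - A)^{-1} E
    T = np.linalg.solve(W, B)            # T = (gamma E - A)^{-1} B
    N = gamma * np.eye(F.shape[0]) - F   # N = gamma I - F
    return M, N, T

# The residual of (4) equals (gamma E - A)^{-1} times the residual of (3),
# so the two equations hold for exactly the same pairs (X, Y).
rng = np.random.default_rng(0)
n, r, p, gamma = 4, 2, 3, 1.7
E, A = rng.standard_normal((n, n)), rng.standard_normal((n, n))
B, F = rng.standard_normal((n, r)), rng.standard_normal((p, p))
X, Y = rng.standard_normal((n, p)), rng.standard_normal((r, p))
M, N, T = descriptor_to_standard(E, A, B, F, gamma)
res3 = A @ X - E @ X @ F - B @ Y
res4 = M @ X @ N - X - T @ Y
assert np.allclose(res4, np.linalg.solve(gamma * E - A, res3))
```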

As shown in [8], the assumption that the matrix F is in companion form does not lose generality. The companion matrix may, however, take several forms, such as

    C_1(β) = [ 0   0   ...  0   -β_0
               1   0   ...  0   -β_1
               0   1   ...  0   -β_2
               ...................
               0   0   ...  1   -β_{p-1} ],

    C_3(β) = [ -β_{p-1}  1   0   ...  0
               -β_{p-2}  0   1   ...  0
               ......................
               -β_1      0   0   ...  1
               -β_0      0   0   ...  0 ],

    C_2(β) = C_1^T(β),   C_4(β) = C_3^T(β).

The following lemma shows that they are similar to each other under some special transformation matrices.

Lemma 2  The four types of companion matrices are similar to each other, i.e.,

    C_3(β) = E C_1(β) E,   C_2(β) = S_1^{-1}(β) C_1(β) S_1(β),   C_4(β) = (S_1(β)E)^{-1} C_1(β) S_1(β)E,

where E = [e_p, e_{p-1}, ..., e_1], with e_i the i-th column of the identity matrix I_p, and

    S_r(β) = [ β_1 I_r      β_2 I_r   ...  β_{p-1} I_r   I_r
               β_2 I_r      β_3 I_r   ...  I_r           0
               ..........................................
               β_{p-1} I_r  I_r       ...  0             0
               I_r          0         ...  0             0 ].         (7)

This lemma shows that we need only consider the case

    F = C_1(β).                                                       (8)

3 The main results

3.1 The generalized Sylvester matrix equation AX - XF = BY

In this section we consider the generalized Sylvester matrix equation (1). Throughout, x_i and y_i denote the i-th columns of X and Y, and β(s) = s^p + β_{p-1}s^{p-1} + ... + β_1 s + β_0 is the characteristic polynomial of F = C_1(β).

Theorem 1  The generalized Sylvester matrix equation (1) with F = C_1(β) is equivalent to

    col[x_i]_{i=2}^{p} = col[A^i]_{i=1}^{p-1} x_1 - Π(A, B) col[y_i]_{i=1}^{p-1},
    β(A) x_1 = Q_c(A, B, p) S_r(β) col[y_i]_{i=1}^{p},                (9)

where S_r(β) is defined in (7) and

    Π(A, B) = [ B          0          ...  0
                AB         B          ...  0
                ..................................
                A^{p-2}B   A^{p-3}B   ...  B ].                       (10)

Proof  Comparing the columns of both sides, the generalized Sylvester matrix equation (1) is equivalent to the following series of equations:

    A x_1 - x_2 = B y_1,
    A x_2 - x_3 = B y_2,
    ...
    A x_{p-1} - x_p = B y_{p-1},
    A x_p + Σ_{i=1}^{p} β_{i-1} x_i = B y_p,                          (11)

which are also equivalent to

    x_2 = A x_1 - B y_1,
    x_3 = A^2 x_1 - A B y_1 - B y_2,
    ...
    x_p = A^{p-1} x_1 - Σ_{i=0}^{p-2} A^i B y_{p-1-i},
    A x_p + Σ_{i=1}^{p} β_{i-1} x_i = B y_p.                          (12)

Substituting the first p-1 equations of (12) into the last equation, and collecting the powers of A acting on x_1 on the left and the terms in y_1, ..., y_p on the right, yields

    β(A) x_1 = Q_c(A, B, p) S_r(β) col[y_i]_{i=1}^{p},

which is the second equation in (9). The first equation in (9) is simply the first p-1 equations of (12) written in stacked form. Since all of the above steps are invertible, the proof is completed.
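The objects appearing in Theorem 1 are straightforward to construct numerically. The sketch below stores β as the coefficient list [β_0, ..., β_{p-1}], reuses krylov from the notation sketch above, and builds C_1(β), S_r(β) and β(A); the helper names are our own.

```python
import numpy as np

def companion_C1(beta):
    """C_1(beta) for beta(s) = s^p + beta[p-1] s^(p-1) + ... + beta[1] s + beta[0]."""
    p = len(beta)
    C = np.zeros((p, p))
    C[1:, :-1] = np.eye(p - 1)        # ones on the subdiagonal
    C[:, -1] = -np.asarray(beta)      # last column carries -beta_0, ..., -beta_{p-1}
    return C

def S_r(beta, r):
    """Block Hankel matrix S_r(beta) of (7): block (i, j) equals c_{i+j+1} I_r,
    where c_k = beta_k for k < p, c_p = 1 and c_k = 0 for k > p (0-based i, j)."""
    p = len(beta)
    c = list(beta) + [1.0] + [0.0] * (p - 1)
    S = np.zeros((p * r, p * r))
    for i in range(p):
        for j in range(p):
            S[i * r:(i + 1) * r, j * r:(j + 1) * r] = c[i + j + 1] * np.eye(r)
    return S

def beta_of(A, beta):
    """beta(A) = A^p + beta[p-1] A^(p-1) + ... + beta[1] A + beta[0] I."""
    p = len(beta)
    return np.linalg.matrix_power(A, p) + sum(
        b * np.linalg.matrix_power(A, i) for i, b in enumerate(beta))
```

With these helpers, the matrix appearing in Lemma 3 below is simply np.hstack([beta_of(A, beta), krylov(A, B, p)]).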

Lemma 3  The degree of freedom in the solution to the generalized Sylvester matrix equation (1) is

    ϖ = pr + n - ω,                                                   (13)

where ω = rank[β(A), Q_c(A, B, p)].

Proof  It follows from the proof of Theorem 1 that the generalized Sylvester matrix equation (1) is equivalent to the first p-1 equations in (11) together with the second equation in (9), i.e., to

    A x_1 - x_2 = B y_1,  ...,  A x_{p-1} - x_p = B y_{p-1},
    β(A) x_1 = Q_c(A, B, p) S_r(β) col[y_i]_{i=1}^{p},

which can be rewritten as a single homogeneous linear system

    [Ξ_1, Ξ_2] col[ col[x_i]_{i=1}^{p}, col[y_i]_{i=1}^{p} ] = 0,

where Ξ_1 collects the blocks built from β(A), A and I acting on the x_i, and Ξ_2 collects the blocks built from B and Q_c(A, B, p)S_r(β) acting on the y_i. Through some elementary transformations, and using the fact that S_r(β) is nonsingular,

    rank[Ξ_1, Ξ_2] = rank [ β(A)   Q_c(A, B, p)S_r(β)   0
                            0      0                    I_{(p-1)n} ] = (p-1)n + ω.

So, according to linear equation theory, the degree of freedom in the solution is ϖ = np + pr - (p-1)n - ω = pr + n - ω.

The following corollary is obtained immediately from the above lemma.

Corollary 1  The degree of freedom in the solution to the generalized Sylvester matrix equation (1) is rp if one of the following statements holds:
1) the matrices A and F have no common eigenvalues;
2) the matrix pair (A, B) is controllable and p - 1 ≥ n - r.

Note that the degree of freedom in the second equation in (9) alone is

    ϖ = pr + n - rank[β(A), Q_c(A, B, p)].                            (14)

Comparing Lemma 3 with (14), the degree of freedom in the solution to (1) equals the degree of freedom in the solution to the second equation in (9). It follows from this fact that we need only solve the second equation in (9). Note that this equation is in the form of (2). By linear equation theory, the following theorem is deduced.

Theorem 2  All the solutions to the generalized Sylvester matrix equation (1) are characterized by the second equation in (9), where x_1 and y_i, i = 1, 2, ..., p, are given by

    x_1 = V_12 f,   col[y_i]_{i=1}^{p} = S_r^{-1}(β) V_22 f,          (15)

and the remaining columns of X are determined by the first equation in (9). Here f ∈ R^ϖ is an arbitrary vector,

    V = [ V_11   V_12
          V_21   V_22 ],   V_12 ∈ R^{n×ϖ},  V_22 ∈ R^{pr×ϖ},         (16)

and V satisfies the singular value decomposition

    U^T [β(A), -Q_c(A, B, p)] V = [ Σ_{ω×ω}   0
                                    0         0 ],

with Σ_{ω×ω} an invertible matrix.

Proof  We first show that the expression (15) satisfies the second equation in (9). Substituting (15) into that equation gives

    β(A) x_1 - Q_c(A, B, p) S_r(β) col[y_i]_{i=1}^{p}
        = β(A) V_12 f - Q_c(A, B, p) V_22 f
        = [β(A), -Q_c(A, B, p)] [ V_12
                                  V_22 ] f = 0,

since, by the singular value decomposition above, the columns of [V_12; V_22] span the null space of [β(A), -Q_c(A, B, p)]. This shows that (15) solves the second equation in (9). Secondly, we show that the solutions given by (15) are complete. Note that f ∈ R^ϖ is an arbitrary vector whose number of elements equals the degree of freedom in the solutions, so we need only show that

    rank [ V_12
           S_r^{-1}(β) V_22 ] = ϖ.

This is obvious, since V_12 and V_22 are two parts of ϖ columns of the unitary matrix V and S_r^{-1}(β) is nonsingular.
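Putting Theorem 1 and Theorem 2 together gives a complete numerical procedure. The following sketch reuses krylov, S_r and beta_of from the earlier snippets; the rank tolerance and the function name are our own choices, not prescribed by the paper. It returns one solution pair (X, Y) for a chosen free vector f.

```python
import numpy as np

def solve_AX_XF_BY(A, B, beta, f=None, tol=1e-10):
    """Solve A X - X C_1(beta) = B Y along the lines of Theorem 2 (a sketch).

    x_1 and col[y_1,...,y_p] are read off the right singular vectors of
    [beta(A), -Q_c(A,B,p)] associated with its zero singular values; the
    remaining columns of X follow from x_{k+1} = A x_k - B y_k.
    """
    n, r = B.shape
    p = len(beta)
    Atilde = np.hstack([beta_of(A, beta), -krylov(A, B, p)])   # n x (n + p r)

    _, s, Vh = np.linalg.svd(Atilde)
    omega = int(np.sum(s > tol * s[0]))        # numerical rank
    V_null = Vh[omega:].T                      # columns span the null space
    V12, V22 = V_null[:n, :], V_null[n:, :]    # partition as in (16)

    varpi = V_null.shape[1]                    # degree of freedom (Lemma 3)
    f = np.ones(varpi) if f is None else np.asarray(f)
    x1 = V12 @ f
    ystack = np.linalg.solve(S_r(beta, r), V22 @ f)   # col[y_1; ...; y_p]
    Y = ystack.reshape(p, r).T

    X = np.zeros((n, p))
    X[:, 0] = x1
    for k in range(p - 1):                     # first p-1 equations of (11)
        X[:, k + 1] = A @ X[:, k] - B @ Y[:, k]
    return X, Y
```

A quick consistency check is np.allclose(A @ X - X @ companion_C1(beta), B @ Y) for random data satisfying either condition of Corollary 1.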

The formula given in Theorem 2 involves the inverse of the matrix S_r(β). This inverse can, however, be written down explicitly, as in the following lemma; the proof is omitted here.

Lemma 4  Let S_r(β) be defined as in (7) and F = C_1(β). Then

    S_r^{-1}(β) = col[ b^T F^i ⊗ I_r ]_{i=0}^{p-1},   b = [0, 0, ..., 0, 1]^T ∈ R^p.

By applying Theorem 1, we can also obtain the following result regarding the solution to the normal Sylvester matrix equation

    AX - XF = BG.                                                     (17)

Corollary 2  Let A and F have no common eigenvalues. Then the unique solution to the normal Sylvester matrix equation (17) is given by

    x_1 = β(A)^{-1} Q_c(A, B, p) S_r(β) col[g_i]_{i=1}^{p},
    col[x_i]_{i=2}^{p} = col[A^i]_{i=1}^{p-1} x_1 - Π(A, B) col[g_i]_{i=1}^{p-1},

where g_i denotes the i-th column of G.

3.2 The generalized Sylvester matrix equation MXN - X = TY

The results proposed in this subsection are quite similar to those of the preceding subsection. We first give the following theorem, parallel to Theorem 1.

Theorem 3  The generalized Sylvester matrix equation (4) with N = F = C_1(β) is equivalent to

    col[x_i]_{i=p-1}^{1} = col[M^i]_{i=1}^{p-1} x_p - Π(M, T) col[y_i]_{i=p-1}^{1},
    β̃(M) x_p = [T, MT, ..., M^{p-1}T] S̃_r(β) col[y_i]_{i=p}^{1},     (18)

where Π(M, T) is defined as in (10), β̃(s) = Σ_{i=0}^{p} β_i s^{p-i} with β_p = 1, i.e., β̃(s) = β_0 s^p + β_1 s^{p-1} + ... + β_{p-1} s + 1, and

    S̃_r(β) = [ -I_r   0             0             ...  0         0
                0      β_{p-2} I_r   β_{p-3} I_r   ...  β_1 I_r   β_0 I_r
                0      β_{p-3} I_r   β_{p-4} I_r   ...  β_0 I_r   0
                .....................................................
                0      β_1 I_r       β_0 I_r       ...  0         0
                0      β_0 I_r       0             ...  0         0 ].   (19)

Proof  Comparing columns, the generalized Sylvester matrix equation (4) is equivalent to

    M x_2 - x_1 = T y_1,
    M x_3 - x_2 = T y_2,
    ...
    M x_p - x_{p-1} = T y_{p-1},
    -M Σ_{i=1}^{p} β_{i-1} x_i - x_p = T y_p,

which can also be rewritten as

    x_{p-1} = M x_p - T y_{p-1},
    x_{p-2} = M^2 x_p - M T y_{p-1} - T y_{p-2},
    ...
    x_1 = M^{p-1} x_p - Σ_{i=0}^{p-2} M^i T y_{1+i},
    -M Σ_{i=1}^{p} β_{i-1} x_i - x_p = T y_p.

Substituting x_i, i = 1, 2, ..., p-1, into the last equation and collecting terms, the last equation becomes the second equation in (18), while the remaining equations are the first equation in (18) in stacked form. The equivalence between (4) and (18) then follows as in the proof of Theorem 1.

Similar to Lemma 3, we have the following corollary.

Corollary 3  The degree of freedom in the solution to the generalized Sylvester matrix equation (4) is

    ϖ = pr + n - ω,   ω = rank[ β̃(M), [T, MT, ..., M^{p-1}T] S̃_r(β) ].

The parallel solutions to the generalized Sylvester matrix equation (4) are given in the following theorem, with its proof omitted.

Theorem 4  The complete parametric solutions to the generalized Sylvester matrix equation (4) are characterized by the second equation in (18), where x_p and y_i, i = 1, 2, ..., p, are given by

    x_p = V_12 f,   col[y_i]_{i=p}^{1} = V_22 f,                      (20)

and the remaining columns of X are determined by the first equation in (18). Here f ∈ R^ϖ is an arbitrary vector,

    V = [ V_11   V_12
          V_21   V_22 ],   V_12 ∈ R^{n×ϖ},  V_22 ∈ R^{pr×ϖ},

and V satisfies the singular value decomposition

    U^T [ β̃(M), -[T, MT, ..., M^{p-1}T] S̃_r(β) ] V = [ Σ_{ω×ω}   0
                                                         0         0 ],

with Σ_{ω×ω} an invertible matrix.

Remark 1  Similar to Corollary 2, we can also obtain from Theorem 3 the unique solution to the normal Sylvester matrix equation MXN - X = TG.
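A numerical rendering of Theorem 4, analogous to the sketch given after Theorem 2, is outlined below. It assumes the reversed block ordering used in (18) and (20), builds S̃_r(β) exactly as displayed in (19), and, as before, the helper names and the rank tolerance are our own choices.

```python
import numpy as np

def solve_MXN_X_TY(M, T, beta, f=None, tol=1e-10):
    """Solve M X C_1(beta) - X = T Y along the lines of Theorem 4 (a sketch).

    x_p and the reversed stack [y_p; ...; y_1] come from the null space of
    [beta_tilde(M), -[T, MT, ..., M^{p-1}T] S_tilde]; the remaining columns
    of X follow backwards from x_k = M x_{k+1} - T y_k.
    """
    n, r = T.shape
    p = len(beta)
    # beta_tilde(M) = beta_0 M^p + beta_1 M^(p-1) + ... + beta_{p-1} M + I
    beta_tilde = np.eye(n) + sum(beta[i] * np.linalg.matrix_power(M, p - i)
                                 for i in range(p))
    QT = np.hstack([np.linalg.matrix_power(M, i) @ T for i in range(p)])

    # S_tilde_r(beta) of (19): leading block -I_r, then a Hankel pattern in beta_0..beta_{p-2}.
    S = np.zeros((p * r, p * r))
    S[:r, :r] = -np.eye(r)
    for i in range(1, p):
        for j in range(1, p):
            k = p - i - j                      # coefficient index of block (i, j)
            if 0 <= k <= p - 2:
                S[i*r:(i+1)*r, j*r:(j+1)*r] = beta[k] * np.eye(r)

    _, s, Vh = np.linalg.svd(np.hstack([beta_tilde, -QT @ S]))
    omega = int(np.sum(s > tol * s[0]))
    V_null = Vh[omega:].T
    V12, V22 = V_null[:n, :], V_null[n:, :]
    f = np.ones(V_null.shape[1]) if f is None else np.asarray(f)

    xp = V12 @ f
    y_rev = (V22 @ f).reshape(p, r).T          # columns: y_p, y_{p-1}, ..., y_1
    Y = y_rev[:, ::-1]                         # reorder to y_1, ..., y_p
    X = np.zeros((n, p))
    X[:, p - 1] = xp
    for k in range(p - 2, -1, -1):             # x_k = M x_{k+1} - T y_k
        X[:, k] = M @ X[:, k + 1] - T @ Y[:, k]
    return X, Y
```

The result can be checked directly via np.allclose(M @ X @ companion_C1(beta) - X, T @ Y).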

4 Examples

To illustrate the proposed method, we give the following two examples.

Example 1  We first consider a generalized Sylvester matrix equation in the form of (1), with given numerical matrices A, B and F = C_1(β) and with p = 2. Applying a singular value decomposition to the matrix [β(A), -[B, AB]] yields the matrix V. Partitioning V as in (16) and, for simplicity, selecting a particular free vector f, a solution pair (X, Y) is obtained from (15) and the first equation in (9).

Example 2  We further consider a generalized Sylvester matrix equation in the form of (4), with given numerical matrices M, T and N = C_1(β). Applying a singular value decomposition to the matrix [β̃(M), -[T, MT, ..., M^{p-1}T] S̃_r(β)] yields the matrix V. Choosing a particular f, a solution pair (X, Y) is obtained from (20) and the first equation in (18).

References
[1] S. P. Bhattacharyya, E. de Souza. Pole assignment via Sylvester's equation[J]. Systems & Control Letters.
[2] L. H. Keel, J. A. Fleming, S. P. Bhattacharyya. Minimum norm pole assignment via Sylvester's equation[C]//Linear Algebra and Its Role in Systems Theory. AMS Contemporary Mathematics.
[3] C. C. Tsui. A complete analytical solution to the equation TA - FT = LC and its applications[J]. IEEE Transactions on Automatic Control.
[4] G. Duan. Solution to matrix equation AV + BW = EVF and eigenstructure assignment for descriptor systems[J]. Automatica.
[5] G. Duan. Solutions to matrix equation AV + BW = VF and their application to eigenstructure assignment in linear systems[J]. IEEE Transactions on Automatic Control.
[6] A. Saberi, A. A. Stoorvogel, P. Sannuti. Control of Linear Systems with Regulation and Input Constraints[M]//Series of Communications and Control Engineering. New York: Springer-Verlag.
[7] G. Duan. On the solution to the Sylvester matrix equation AV + BW = EVF[J]. IEEE Transactions on Automatic Control.
[8] B. Zhou, G. Duan. An explicit solution to the matrix equation AX - XF = BY[J]. Linear Algebra and Its Applications.
[9] B. Zhou, G. Duan. A new solution to the generalized Sylvester matrix equation AV - EVF = BW[J]. Systems & Control Letters.
[10] G. Duan, B. Zhou. Solution to the second-order Sylvester matrix equation MVF^2 + DVF + KV = BW[J]. IEEE Transactions on Automatic Control.

Bin ZHOU was born in Hubei Province, China, in 1981. He received the Bachelor's degree from the Department of Control Science and Engineering at Harbin Institute of Technology, Harbin, China, in 2004. He is now a graduate student in the Center for Control Systems and Guidance Technology at Harbin Institute of Technology. His current research interests include linear systems theory and constrained control systems. E-mail: zhoubinhit@163.com.

Guang-Ren DUAN was born in Heilongjiang Province in 1962. He received his B.Sc. degree in Applied Mathematics and both his M.S. and Ph.D. degrees in Control Systems Theory. He is currently the Director of the Center for Control Systems and Guidance Technology at Harbin Institute of Technology. His main research interests include robust control, eigenstructure assignment and descriptor systems. E-mail: grduan@hit.edu.cn.
