Chandrasekhar Type Algorithms for the Riccati Equation of Lainiotis Filter

Contemporary Engineering Sciences, Vol. 3, 2010, no. 4, 191-200

Chandrasekhar Type Algorithms for the Riccati Equation of Lainiotis Filter

Nicholas Assimakis
Department of Electronics
Technological Educational Institute of Lamia, Greece
assimakis@teilam.gr

Abstract

Chandrasekhar type algorithms for solving the discrete time Riccati equation and Lyapunov equation emanating from the Lainiotis filter are presented. The Chandrasekhar type algorithms are compared to the classical per step algorithm consisting of the direct implementation of the recursion of the Riccati equation or the Lyapunov equation. It is shown that the Chandrasekhar type algorithms may be faster than the classical ones.

Keywords: Riccati equation, Lainiotis filter, Chandrasekhar algorithm

1 Introduction

The discrete time Riccati equation arises in linear estimation and is associated with time invariant systems described by the following state space equations:

x(k+1) = F x(k) + w(k)    (1)
z(k) = H x(k) + v(k)      (2)

for k >= 0, where x(k) is the n-dimensional state vector at time k, z(k) is the m-dimensional measurement vector, F is the system transition matrix, H is the output matrix, {w(k)} and {v(k)} are independent Gaussian zero-mean white and uncorrelated random processes, Q and R are the plant and measurement noise covariance matrices respectively, and x(0) is a Gaussian random process with mean x_0 and covariance P_0. The filtering/estimation problem is to produce an estimate at time L of the state vector using measurements up to time L, i.e. the aim is to use the measurement set {z(1), ..., z(L)} in order to calculate an estimate x(L/L) of the state vector x(L).

The discrete time Lainiotis filter [4] is a well known algorithm that solves the filtering problem by computing the estimate x(k/k) as well as the estimation error covariance matrix P(k/k) for every k. The Lainiotis filter equations provide a recursion for the n x n estimation error covariance matrix P(k/k), which is assumed to be non-negative definite, P(k/k) >= 0; this is the Riccati equation emanating from the Lainiotis filter:

P(k+1/k+1) = P_n + F_n [I + P(k/k) O_n]^{-1} P(k/k) F_n^T    (3)

with initial condition P(0/0) = P_0, where

P_n = Q - Q H^T A H Q    (4)
F_n = F - Q H^T A H F    (5)
O_n = F^T H^T A H F      (6)

and

A = [H Q H^T + R]^{-1}    (7)

For time invariant systems, it is well known [1] that if the signal process model is asymptotically stable (i.e. all eigenvalues of F lie inside the unit circle), then there exists a steady state value P of the estimation error covariance matrix.

It is known that the Lyapunov equation is derived from the Riccati equation when R → ∞. In this case, A = 0 and P_n = Q, F_n = F, O_n = 0, and the Riccati equation (3) becomes the Lyapunov equation:

P(k+1/k+1) = P_n + F_n P(k/k) F_n^T    (8)

The discrete time Riccati equation emanating from the Lainiotis filter equations has attracted enormous attention. In view of the importance of the Riccati equation, there exists considerable literature on its recursive solutions [3], [5], concerning per step or doubling algorithms. In this paper Chandrasekhar type algorithms for solving the discrete time Riccati equation emanating from the Lainiotis filter are presented and compared to the classical per step algorithm, i.e. to the direct implementation of the recursion of the Riccati equation. The paper is organized as follows: In section 2 the classical recursive per step algorithm is presented. In section 3 recursive Chandrasekhar type algorithms are presented. In section 4 the computational requirements of all algorithms are established and comparisons are carried out. It is pointed out that the Chandrasekhar type algorithms may be faster than the classical per step algorithm. In addition, a rule is established in order to decide if the Chandrasekhar type algorithms are faster than the classical one.
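
To make the notation concrete, the short sketch below (a minimal NumPy illustration; the model matrices are hypothetical and not taken from the paper) computes the off-line quantities A, P_n, F_n and O_n of equations (4)-(7) for a small time invariant model.

```python
import numpy as np

# Hypothetical stable time invariant model with n = 2 states and m = 2 measurements.
F = np.array([[0.8, 0.1],
              [0.0, 0.5]])      # system transition matrix (eigenvalues inside the unit circle)
H = np.array([[1.0, 0.0],
              [0.2, 1.0]])      # output matrix (full rank, so O_n below is nonsingular)
Q = np.diag([0.1, 0.2])         # plant noise covariance
R = np.diag([0.5, 0.3])         # measurement noise covariance (R > 0)

# Off-line quantities of the Lainiotis filter Riccati equation, equations (4)-(7).
A  = np.linalg.inv(H @ Q @ H.T + R)    # (7)
Pn = Q - Q @ H.T @ A @ H @ Q           # (4)
Fn = F - Q @ H.T @ A @ H @ F           # (5)
On = F.T @ H.T @ A @ H @ F             # (6)
```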

2 Per Step Algorithm

The classical per step algorithm consists of the direct implementation of the recursion of the Riccati equation emanating from the Lainiotis filter equations. Also, the corresponding per step algorithm for the solution of the Lyapunov equation is presented.

Per Step Algorithm - Riccati equation (PSARE)

The steady state solution P is calculated by recursively implementing the Riccati equation (3) for k = 0, 1, ..., with initial condition P(0/0) = P_0, until the following convergence criterion is satisfied: ||P(k+1/k+1) - P(k/k)|| <= ε, where ||.|| denotes the matrix norm and ε is a small positive real number pre-specified to give the steady state solution to the accuracy desired. The steady state or limiting solution P = lim P(k/k) as k → ∞ of the Riccati equation is independent of the initial condition [1]. In the sequel we assume zero initial condition P(0/0) = 0, i.e. P_0 = 0. Then we are able to use P(1/1) = P_n as initial condition.

Note that the existence of [I + P(k/k) O_n]^{-1} is guaranteed due to the presence of the identity matrix I. Also, the existence of A = [H Q H^T + R]^{-1} is guaranteed if R is positive definite (R > 0), which means that no measurement is exact. This is reasonable in physical problems. Thus, the nonsingular measurement noise covariance matrix case is assumed in the sequel.

Per Step Algorithm - Lyapunov equation (PSALE)

The Lyapunov equation is derived from the Riccati equation when R → ∞. Then, the per step algorithm for the Lyapunov equation consists of the recursive implementation of the Lyapunov equation (8) with initial condition P(1/1) = P_n.

Table 1 summarizes the classical per step algorithms for solving the Riccati and the Lyapunov equations emanating from the Lainiotis filter.

Table 1. Per Step Algorithms
  Riccati equation    PSARE: P(k+1/k+1) = P_n + F_n [I + P(k/k) O_n]^{-1} P(k/k) F_n^T
  Lyapunov equation   PSALE: P(k+1/k+1) = P_n + F_n P(k/k) F_n^T
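
A minimal sketch of the per step recursions of Table 1 (assuming NumPy and the quantities Pn, Fn, On computed as in the introduction; the function names are illustrative, not from the paper):

```python
import numpy as np

def psare(Pn, Fn, On, eps=1e-9, max_iter=100000):
    """Per step algorithm for the Riccati equation (3), zero initial condition P(0/0) = 0."""
    n = Pn.shape[0]
    P = np.zeros((n, n))
    for _ in range(max_iter):
        P_next = Pn + Fn @ np.linalg.inv(np.eye(n) + P @ On) @ P @ Fn.T
        if np.linalg.norm(P_next - P) <= eps:    # convergence criterion
            return P_next
        P = P_next
    return P

def psale(Pn, Fn, eps=1e-9, max_iter=100000):
    """Per step algorithm for the Lyapunov equation (8)."""
    P = np.zeros_like(Pn)
    for _ in range(max_iter):
        P_next = Pn + Fn @ P @ Fn.T
        if np.linalg.norm(P_next - P) <= eps:
            return P_next
        P = P_next
    return P
```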

3 Chandrasekhar Type Algorithms

The Chandrasekhar type algorithms use the idea of defining the difference:

δP(k) = P(k+1/k+1) - P(k/k)    (9)

and its factorization:

δP(k) = Y(k) S(k) Y^T(k)    (10)

with Y(k) of dimension n x r and S(k) of dimension r x r, where

0 <= r = rank(δP(0)) = rank(P_n) = rank(Q)    (11)

Then, using the quantity

O(k) = P(k/k) + O_n^{-1}    (12)

the following recursion is obvious:

O(k+1) = O(k) + Y(k) S(k) Y^T(k)    (13)

Note that the nonsingularity of O_n is guaranteed if R is positive definite and if F is nonsingular. The Chandrasekhar type algorithms consist of the recursion:

P(k+1/k+1) = P(k/k) + Y(k) S(k) Y^T(k)    (14)

using recursions for the quantities Y(k) and S(k). Two versions of the Chandrasekhar type algorithms for the solution of the Riccati equation are presented. Also, the corresponding Chandrasekhar type algorithm for the solution of the Lyapunov equation is presented.

Chandrasekhar Type Algorithm - Riccati equation version 1 (CARE1)

Setting

Y(k+1) = [F_n O_n^{-1}] O^{-1}(k) Y(k)    (15)

after some algebra the following recursion is derived [5]:

S(k+1) = S(k) - S(k) Y^T(k) O^{-1}(k+1) Y(k) S(k)    (16)

Note that the nonsingularity of O(k) is guaranteed if O_n is nonsingular, which means that R is positive definite (R > 0). Assuming zero initial condition P(0/0) = 0, we use the following initial conditions:

O(0) = O_n^{-1}
Y(0) S(0) Y^T(0) = P_n

Remarks.
1. If Q has full rank (r = n), then we are able to use the initial conditions Y(0) = I and S(0) = P_n.
2. If Q = 0 (r = 0), then A = R^{-1} and P_n = 0, F_n = F, O_n = F^T H^T R^{-1} H F. So the estimation error covariance is P(k/k) = P_0 = 0 and the limiting value P of the estimation error covariance is P = 0.

Chandrasekhar Type Algorithm - Riccati equation version 2 (CARE2)

Setting

Y(k+1) = [F_n O_n^{-1}] O^{-1}(k+1) Y(k)    (17)

after some algebra, working as in [5], the following recursion is derived:

S(k+1) = S(k) + S(k) Y^T(k) O^{-1}(k) Y(k) S(k)    (18)

with the same initial conditions used in CARE1.

Chandrasekhar Type Algorithm - Lyapunov equation (CALE)

The Lyapunov equation is derived from the Riccati equation when R → ∞. Then, the Chandrasekhar type algorithm for the Lyapunov equation becomes:

Y(k+1) = F Y(k)    (19)
P(k+1/k+1) = P(k/k) + Y(k) Y^T(k)    (20)

with initial conditions

P(0/0) = 0
Y(0) Y^T(0) = P_n

Table 2 summarizes the Chandrasekhar type algorithms for solving the Riccati and the Lyapunov equations emanating from the Lainiotis filter.

Table 2. Chandrasekhar Type Algorithms
  Riccati equation
    CARE1: O(k+1) = O(k) + Y(k) S(k) Y^T(k)
           Y(k+1) = [F_n O_n^{-1}] O^{-1}(k) Y(k)
           S(k+1) = S(k) - S(k) Y^T(k) O^{-1}(k+1) Y(k) S(k)
           P(k+1/k+1) = P(k/k) + Y(k) S(k) Y^T(k)
    CARE2: O(k+1) = O(k) + Y(k) S(k) Y^T(k)
           Y(k+1) = [F_n O_n^{-1}] O^{-1}(k+1) Y(k)
           S(k+1) = S(k) + S(k) Y^T(k) O^{-1}(k) Y(k) S(k)
           P(k+1/k+1) = P(k/k) + Y(k) S(k) Y^T(k)
  Lyapunov equation
    CALE:  Y(k+1) = F Y(k)
           P(k+1/k+1) = P(k/k) + Y(k) Y^T(k)
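
As an illustration of Table 2, the sketch below is a minimal NumPy version of the CARE1 recursion (the function name is hypothetical; it assumes Q has full rank so that Y(0) = I and S(0) = P_n can be used, as in Remark 1, and that O_n is nonsingular). CARE2 differs only in which inverse, O^{-1}(k) or O^{-1}(k+1), enters the Y and S updates, and CALE drops S altogether.

```python
import numpy as np

def care1(Pn, Fn, On, eps=1e-9, max_iter=100000):
    """Chandrasekhar type algorithm CARE1 of Table 2, with P(0/0) = 0, Y(0) = I, S(0) = P_n."""
    n = Pn.shape[0]
    On_inv = np.linalg.inv(On)        # O_n^{-1} (O_n assumed nonsingular)
    FnOn_inv = Fn @ On_inv            # constant factor [F_n O_n^{-1}]
    P = np.zeros((n, n))              # P(0/0) = 0
    O = On_inv.copy()                 # O(0) = O_n^{-1}
    Y = np.eye(n)                     # Y(0) = I   (full rank Q, Remark 1)
    S = Pn.copy()                     # S(0) = P_n
    for _ in range(max_iter):
        dP = Y @ S @ Y.T              # Y(k) S(k) Y^T(k)
        O_next = O + dP               # (13)
        P_next = P + dP               # (14)
        Y_next = FnOn_inv @ np.linalg.inv(O) @ Y                # (15)
        S_next = S - S @ Y.T @ np.linalg.inv(O_next) @ Y @ S    # (16)
        if np.linalg.norm(P_next - P) <= eps:
            return P_next
        P, O, Y, S = P_next, O_next, Y_next, S_next
    return P
```

For the hypothetical model sketched in the introduction, care1(Pn, Fn, On) and psare(Pn, Fn, On) converge to the same steady state matrix P.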

4 Computational Comparison of Algorithms

Both the per step and the Chandrasekhar type algorithms are recursive ones. Thus, the total computational time required for the implementation of each algorithm is:

t(alg) = CB(alg) S(alg) t_op    (21)

where CB(alg) is the per recursion calculation burden required for the on-line calculations of each algorithm, S(alg) is the number of recursions (steps) that each algorithm executes and t_op is the time required to perform a scalar operation.

The per step and the Chandrasekhar type algorithms presented above are equivalent with respect to their behavior: they calculate theoretically the same steady state estimation error covariance. Then, it is reasonable to assume that both algorithms compute the limiting solution of the Riccati equation (or the corresponding Lyapunov equation) executing the same number of recursions, depending on the desired accuracy. Thus, in order to compare the algorithms with respect to their computational time, we have to compare their per recursion calculation burdens required for the on-line calculations; the calculation burden of the off-line calculations (initialization process) is not taken into account.

The computational analysis is based on the analysis in [2]: scalar operations are involved in the matrix manipulation operations which are needed for the implementation of the filtering algorithms. Table 3 summarizes the calculation burden of the needed matrix operations.

Table 3. Calculation Burden of Matrix Operations
  Matrix Operation                                  Calculation Burden
  A(n x m) + B(n x m) = C(n x m)                    n m
  A(n x n) + B(n x n) = S(n x n), S: symmetric      (n^2 + n)/2
  I(n x n) + A(n x n) = B(n x n), I: identity       n
  A(n x m) B(m x k) = C(n x k)                      2 n m k - n k
  A(n x m) B(m x n) = S(n x n), S: symmetric        m n^2 + m n - (n^2 + n)/2
  [A(n x n)]^{-1} = B(n x n)                        (16 n^3 - 3 n^2 - n)/6

The recursive computational requirements of all per step and Chandrasekhar type algorithms for solving the Riccati equation and the Lyapunov equation are summarized in Table 4. The details are given in the Appendix.

Table 4. Per Recursion Calculation Burden of Algorithms
  Riccati equation
    PSARE            (52 n^3 - 6 n^2 + 2 n)/6
    CARE1, CARE2     (32 n^3 - 3 n^2 + n)/6 + 3 n r^2 - 2 n r + 7 n^2 r
  Lyapunov equation
    PSALE            3 n^3
    CALE             3 n^2 r

From Table 4, we derive the following conclusions:
1. The per recursion calculation burdens of the classical per step algorithms depend only on the state dimension n. The per recursion calculation burdens of the Chandrasekhar type algorithms depend on the state dimension n and on the dimension r.
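
The counting convention of Table 3 can be written down directly; the following sketch (illustrative helper names, assuming the operation counts as tabulated above) rebuilds the PSALE entry of Table 4 from the three matrix operations listed for it in the Appendix.

```python
# Scalar-operation counts from Table 3 (multiplications plus additions).
def add_general(n, m):      return n * m
def add_symmetric(n):       return (n * n + n) // 2
def add_identity(n):        return n
def mult_general(n, m, k):  return 2 * n * m * k - n * k
def mult_symmetric(n, m):   return m * n * n + m * n - (n * n + n) // 2
def inverse(n):             return (16 * n**3 - 3 * n**2 - n) // 6

# PSALE per recursion: F_n P(k/k), then [F_n P(k/k)] F_n^T (symmetric result),
# then the symmetric addition P_n + F_n P(k/k) F_n^T (see the Appendix).
def cb_psale(n):
    return mult_general(n, n, n) + mult_symmetric(n, n) + add_symmetric(n)

assert cb_psale(10) == 3 * 10**3   # matches the 3 n^3 entry of Table 4
```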

2. The two versions of the Chandrasekhar type algorithms are equivalent with respect to their computational burdens.
3. Concerning the Riccati equation solution algorithms:
- if r = n, then the classical per step algorithm is faster than the Chandrasekhar type algorithms
- if r < n, then the Chandrasekhar type algorithms may be faster than the classical per step algorithm; in fact the Chandrasekhar type algorithms are faster than the classical per step algorithm if the following relation holds:

CB(PSARE) - CB(CARE) = (20 n^3 - 3 n^2 + n)/6 - (3 n r^2 - 2 n r + 7 n^2 r) > 0    (22)

Figure 1 depicts the relation between the dimensions n and r that decides which algorithm is faster.

[Figure 1: plot of dimension r (vertical axis, 0 to 45) against dimension n (horizontal axis, 0 to 100); the per step algorithm (PSA) is faster in the region above the boundary curve, the Chandrasekhar type algorithm (CA) below it.]
Figure 1. Chandrasekhar type algorithm may be faster than per step algorithm

Then, we are able to establish the following rule of thumb: the Chandrasekhar type algorithms are faster than the classical per step algorithm if the following relation holds:

r < 0.4 n    (23)

4. Concerning the Lyapunov equation solution algorithms:
- if r = n, then the classical per step algorithm is as fast as the Chandrasekhar type algorithm
- if r < n, then the Chandrasekhar type algorithm is faster than the classical per step algorithm

Thus, the Chandrasekhar type algorithms possess the advantage that there is a reduction in computational burden in comparison to the classical per step algorithm, especially when r is sufficiently smaller than n.
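
A quick numerical check of relation (22) and the rule of thumb (23), using the per recursion burdens of Table 4 (function names are illustrative):

```python
def cb_psare(n):
    return (52 * n**3 - 6 * n**2 + 2 * n) / 6

def cb_care(n, r):
    return (32 * n**3 - 3 * n**2 + n) / 6 + 3 * n * r**2 - 2 * n * r + 7 * n**2 * r

for n in (10, 50, 100):
    # largest r for which the Chandrasekhar type algorithm is still faster, relation (22)
    r_max = max(r for r in range(1, n + 1) if cb_psare(n) - cb_care(n, r) > 0)
    print(n, r_max, r_max / n)   # the ratio stays close to the rule of thumb r < 0.4 n
```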

References

[1] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall Inc., 1979.

[2] N. Assimakis and M. Adam, Discrete time Kalman and Lainiotis filters comparison, Int. Journal of Mathematical Analysis (IJMA), (2007), 35-59.

[3] N. D. Assimakis, D. G. Lainiotis, S. K. Katsikas and F. L. Sanida, A survey of recursive algorithms for the solution of the discrete time Riccati equation, Nonlinear Analysis, Theory, Methods & Applications, 30 (1997), 2409-2420.

[4] D. G. Lainiotis, Partitioned linear estimation algorithms: Discrete case, IEEE Trans. on Automatic Control, AC-20 (1975), 255-257.

[5] D. G. Lainiotis, N. D. Assimakis and S. K. Katsikas, A new computationally effective algorithm for solving the discrete Riccati equation, Journal of Mathematical Analysis and Applications, 186 (1994), no. 3, 868-895.

Appendix. Calculation burdens of algorithms

A. Per Step Algorithms

Per Step Algorithm - Riccati equation (PSARE)
  Matrix Operation                                   Matrix Dimensions               Calculation Burden
  P(k/k) O_n                                         (n x n)(n x n)                  2 n^3 - n^2
  I + P(k/k) O_n, I: identity                        (n x n) + (n x n)               n
  [I + P(k/k) O_n]^{-1}                              (n x n)                         (16 n^3 - 3 n^2 - n)/6
  [I + P(k/k) O_n]^{-1} P(k/k)                       (n x n)(n x n), symmetric       n^3 + (n^2 - n)/2
  F_n [I + P(k/k) O_n]^{-1} P(k/k)                   (n x n)(n x n)                  2 n^3 - n^2
  F_n [I + P(k/k) O_n]^{-1} P(k/k) F_n^T             (n x n)(n x n), symmetric       n^3 + (n^2 - n)/2
  P(k+1/k+1) = P_n + F_n [...] P(k/k) F_n^T          (n x n) + (n x n), symmetric    (n^2 + n)/2
  PSARE total                                                                        (52 n^3 - 6 n^2 + 2 n)/6

Per Step Algorithm - Lyapunov equation (PSALE)
  Matrix Operation                                   Matrix Dimensions               Calculation Burden
  F_n P(k/k)                                         (n x n)(n x n)                  2 n^3 - n^2
  F_n P(k/k) F_n^T                                   (n x n)(n x n), symmetric       n^3 + (n^2 - n)/2
  P(k+1/k+1) = P_n + F_n P(k/k) F_n^T                (n x n) + (n x n), symmetric    (n^2 + n)/2
  PSALE total                                                                        3 n^3

B. Chandrasekhar Type Algorithms

Chandrasekhar Type Algorithm - Riccati equation version 1/2 (CARE1/2)
  Matrix Operation (CARE1) | Matrix Operation (CARE2) | Matrix Dimensions | Calculation Burden
  Y(k) S(k) | Y(k) S(k) | (n x r)(r x r) | 2 n r^2 - n r
  Y(k) S(k) Y^T(k) | Y(k) S(k) Y^T(k) | (n x r)(r x n), symmetric | r n^2 + r n - (n^2 + n)/2
  O(k+1) = O(k) + Y(k) S(k) Y^T(k) | O(k+1) = O(k) + Y(k) S(k) Y^T(k) | (n x n) + (n x n), symmetric | (n^2 + n)/2
  O^{-1}(k) | O^{-1}(k+1) | (n x n) | (16 n^3 - 3 n^2 - n)/6
  O^{-1}(k) Y(k) | O^{-1}(k+1) Y(k) | (n x n)(n x r) | 2 n^2 r - n r
  Y(k+1) = [F_n O_n^{-1}] O^{-1}(k) Y(k) | Y(k+1) = [F_n O_n^{-1}] O^{-1}(k+1) Y(k) | (n x n)(n x r) | 2 n^2 r - n r
  O^{-1}(k+1) | O^{-1}(k) | (n x n) | (16 n^3 - 3 n^2 - n)/6
  O^{-1}(k+1) Y(k) S(k) | O^{-1}(k) Y(k) S(k) | (n x n)(n x r) | 2 n^2 r - n r
  S(k) Y^T(k) O^{-1}(k+1) Y(k) S(k) | S(k) Y^T(k) O^{-1}(k) Y(k) S(k) | (r x n)(n x r), symmetric | n r^2 + n r - (r^2 + r)/2
  S(k+1) = S(k) - S(k) Y^T(k) O^{-1}(k+1) Y(k) S(k) | S(k+1) = S(k) + S(k) Y^T(k) O^{-1}(k) Y(k) S(k) | (r x r) + (r x r), symmetric | (r^2 + r)/2
  P(k+1/k+1) = P(k/k) + Y(k) S(k) Y^T(k) | P(k+1/k+1) = P(k/k) + Y(k) S(k) Y^T(k) | (n x n) + (n x n), symmetric | (n^2 + n)/2
  CARE1/CARE2 total: (32 n^3 - 3 n^2 + n)/6 + 3 n r^2 - 2 n r + 7 n^2 r

Chandrasekhar Type Algorithm - Lyapunov equation (CALE)
  Matrix Operation                                   Matrix Dimensions               Calculation Burden
  Y(k+1) = F Y(k)                                    (n x n)(n x r)                  2 n^2 r - n r
  Y(k) Y^T(k)                                        (n x r)(r x n), symmetric       r n^2 + r n - (n^2 + n)/2
  P(k+1/k+1) = P(k/k) + Y(k) Y^T(k)                  (n x n) + (n x n), symmetric    (n^2 + n)/2
  CALE total                                                                         3 n^2 r

Received: April, 2010