Combining Linear Equation Models via Dempster's Rule
Liping Liu

Abstract. This paper proposes a concept of imaginary extreme numbers, which are like traditional complex numbers a + bi but with i = √−1 replaced by e = 1/0, and defines the usual operations such as addition, subtraction, and division on these numbers. It applies the concept to representing linear equations in knowledge-based systems. It proves that the combination of linear equations via Dempster's rule is equivalent to solving a system of simultaneous equations, or to finding a least-squares estimate when they are overdetermined.

1 Introduction

The concept of linear belief functions unifies the representation of a diverse range of linear models in expert systems [Liu et al., 2006]. These linear models include linear equations, which characterize linear deterministic relationships among continuous or discrete variables, and stochastic models such as linear regressions, linear time series, and Kalman filters, in which some variables are deterministic while others are stochastic. They also include normal distributions that describe probabilistic knowledge on a set of variables, a lack of knowledge such as ignorance and partial ignorance, and direct observations or observations with missing values. Despite this variety, the concept of linear belief functions unifies them as manifestations of a single concept, represents them as matrices with the same semantics, and combines them by a single mechanism, the matrix addition rule, which is consistent with Dempster's rule of combination [Shafer, 1976]. What makes the unification possible is the sweeping operator. Nevertheless, when the operator is applied to knowledge representation, a division-by-zero enigma often arises. For example, when two linear models are combined, their matrix representations must be fully swept via the old matrix addition rule [Dempster, 2001] or partially swept via the new matrix addition rule [Liu, 2011b].
Liping Liu, The University of Akron. In: T. Denœux & M.-H. Masson (Eds.): Belief Functions: Theory & Applications, AISC 164. © Springer-Verlag Berlin Heidelberg 2012.

This poses no issue
to linear models with a positive definite covariance matrix [Liu, 2011a]. However, for deterministic linear models such as linear equations, sweeping points are often zero, and a sweeping, if it needs to be done, will have to divide regular numerical values by zero, a mathematical operation that is not defined. The division-by-zero issue has been a challenge that hinders the development of intelligent systems that implement linear belief functions.

In this paper, I propose a notion of imaginary extreme numbers to deal with the division-by-zero problem. An imaginary extreme number is a complex number like 3 + 4e with extreme number e = 1/0. On these imaginary numbers, the usual operations can be defined. The notion of imaginary extreme numbers makes it possible to represent linear equations as knowledge in intelligent systems. As we will illustrate, a linear equation is transformed into an equivalent one by a sweeping from a zero variance and a reverse sweeping from an extreme inverse variance. The notion also makes it possible to combine linear equations as independent pieces of knowledge via Dempster's rule of combination. We will show that the combination of linear equations corresponds to solving the equations, or to finding the least-squares estimate when the equations are overdetermined.

2 Matrix Sweepings

Sweeping is a matrix transformation that starts from a sweeping point, a square submatrix, and iteratively spreads the change across the entire matrix:

Definition 1. Assume real matrix A is made of submatrices as A = (A_ij) and assume A_ij is a square submatrix. Then a forward (reverse) sweeping of A from A_ij replaces the submatrix A_ij by its negative inverse −(A_ij)⁻¹; any other submatrix A_ik in row i and any submatrix A_kj in column j are respectively replaced by (±)(A_ij)⁻¹A_ik and (±)A_kj(A_ij)⁻¹, with the sign depending on the direction of the sweeping; and each remaining submatrix A_kl not in the same row or column as A_ij, i.e., k ≠ i and l ≠ j, is replaced by A_kl − A_kj(A_ij)⁻¹A_il.
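Definition 1 is easy to make concrete for a scalar sweeping point. The sketch below is mine, not the paper's; the exact sign convention on the pivot row and column is an assumption, chosen so that the forward and reverse operations cancel each other's effects.

```python
import copy

def sweep(A, k, reverse=False):
    """Forward (or reverse) sweeping of a square matrix A from the scalar
    sweeping point A[k][k] -- a sketch of Definition 1 (sign convention on
    the pivot row/column is an assumption)."""
    n = len(A)
    d = A[k][k]                      # pivot; must be nonzero for a real sweeping
    s = -1.0 if reverse else 1.0     # forward and reverse differ only in this sign
    B = copy.deepcopy(A)
    B[k][k] = -1.0 / d               # pivot -> its negative inverse
    for i in range(n):
        if i != k:
            B[i][k] = s * A[i][k] / d        # pivot column
            B[k][i] = s * A[k][i] / d        # pivot row
            for j in range(n):
                if j != k:                   # remaining entries
                    B[i][j] = A[i][j] - A[i][k] * A[k][j] / d
    return B

# Sweeping a 2x2 covariance matrix from the variance of the first variable:
# the off-diagonal becomes the regression coefficient of the second variable
# on the first, and the corner becomes the residual (conditional) variance.
S = [[4.0, 2.0], [2.0, 3.0]]
F = sweep(S, 0)
print(F)                           # [[-0.25, 0.5], [0.5, 2.0]]
print(sweep(F, 0, reverse=True))   # recovers S: the two sweepings cancel
```

The round trip in the last line illustrates why the modifiers "forward" and "reverse" are justified.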
Note that the forward and reverse sweepings defined above differ operationally only in the sign of the elements in the same column or row as the sweeping point. Yet the difference is significant in that forward and reverse sweepings cancel each other's effects, and thus the modifiers "forward" and "reverse" are justified. Both forward and reverse sweeping operations may also be defined to sweep from a square submatrix as a sweeping point. If a sweeping point is positive definite, such as a covariance matrix, then a sweeping from the submatrix is equivalent to a series of successive sweepings from each of the leading diagonal elements of the submatrix [Liu, 2011a]. When applied to a moment matrix that consists of a mean vector and a covariance matrix, sweeping operations can transform a normal distribution into its various forms,
each with interesting semantics. Assume X has mean vector μ and covariance matrix Σ. Then in general the moment matrix is

M(X) = [ μ ; Σ ]

and its fully swept form

M(→X) = [ μΣ⁻¹ ; −Σ⁻¹ ]

represents the density function of X. (Here we mark a swept variable with an arrow: M(→X) symbolizes that M(X) has been swept from the covariance matrix of X, or, to be brief, that M(X) has been swept from X.) It is interesting to imagine that, if the variances of X are so huge that the inverse covariance matrix Σ⁻¹ → 0, then M(→X) = 0. Thus, a zero fully swept matrix is the representation of ignorance; intuitively, we are ignorant about X if its variances are infinite.

A partial sweeping has more interesting semantics. For example, consider the normal distribution of X, Y, and Z with moment matrix

M(X, Y, Z) =
[ 3  4  2 ]
[ …       ]
[ 0  2  6 ]

Its sweeping from the variance terms of X and Y is a partially swept matrix M(→X, →Y, Z). This contains two distinct pieces of information about the variables [Liu, 2011a]. First, the submatrix corresponding to variables X and Y, M(→X, →Y), represents the density function of X and Y. Second, the remaining partial matrix represents a regression model Z = X + 0.5Y + ε with ε ∼ N(0, 5). Since this regression model alone casts no information on the independent variables X and Y, the missing elements in the above partial matrix shall be zero. Furthermore,
when the conditional variance of Z vanishes, the conditional distribution reduces to a regular linear equation model Z = X + 0.5Y, as represented by the matrix M(→X, →Y, Z). Here M(→X, →Y, Z) represents a generic moment matrix of X, Y, and Z with X and Y being swept. Note that it has long been realized that a linear model such as a regression model or a linear equation is a special case of a multivariate normal distribution [Khatri, 1968]. What is new, however, is that with sweeping operations it can be uniformly represented as a moment matrix or its partially swept form.

3 Imaginary Numbers

In this section I propose a new type of imaginary numbers, called extreme numbers, and use it to resolve the division-by-zero issue. Just as a usual imaginary number uses i for the non-existent √−1, we use e for 1/0, which also does not exist. Also, as a usual imaginary number consists of two parts, a real part and an imaginary part, an imaginary extreme number consists of the same two parts. For example, 3 + 2e is an extreme number with 3 as its real part and 2 as its imaginary part. When the imaginary part vanishes, an extreme number reduces to a real one. When its imaginary part is nonzero, we call an extreme number a true extreme number. When its real part is zero, we call the extreme number pure extreme. When both real and imaginary parts are zero, the extreme number is zero, i.e., a + be = 0 if and only if a = 0 and b = 0. Thus, the system of extreme numbers includes the real numbers as a subset.

Extreme numbers may be added, subtracted, or scaled like usual imaginary numbers. For any extreme number a + be and real number c, their multiplication, or the scaling of a + be by c, is defined as c(a + be) = (a + be)c = ac + bce. For any two extreme numbers a₁ + b₁e and a₂ + b₂e, their addition is defined as (a₁ + b₁e) + (a₂ + b₂e) = (a₁ + a₂) + (b₁ + b₂)e. Clearly, the system of extreme numbers is closed under the operations of scaling, addition, and subtraction.
Unlike usual imaginary numbers, the multiplication of two extreme numbers is not defined, because the operation is not closed. However, division can be defined: for any two extreme numbers a₁ + b₁e and a₂ + b₂e, their division is defined as

(a₁ + b₁e) / (a₂ + b₂e) = b₁ / b₂   if b₂ ≠ 0.

If the denominator is a nonzero real number, then division reduces to scaling. If the denominator is zero and the numerator is one, i.e., b₁ = 0 and a₁ = 1,
the division is e = 1/0 by definition. Also, 0/0 is defined to be 0, to be consistent with scaling, i.e., 0(0 + 1e) = 0 + 0e = 0. Because division generally cancels out imaginary parts, the operation of multiplication followed by division, called crossing, can be defined. For any three extreme numbers a₁ + b₁e, a₂ + b₂e, and a₃ + b₃e, their crossing is defined as

(a₁ + b₁e)(a₂ + b₂e) / (a₃ + b₃e) = (a₁b₂ + a₂b₁)/b₃ + (b₁b₂/b₃)e   if b₃ ≠ 0.

Crossing reduces to division if one of the multiplicands a₁ + b₁e and a₂ + b₂e is real, i.e., b₁b₂ = 0. If at the same time the denominator is a nonzero real number, i.e., b₃ = 0 and a₃ ≠ 0, it reduces to scaling. It is consistent with the definition of extreme numbers if the divisor a₃ + b₃e ≠ 0 and b₁ = 0, b₂ = 0.

Extreme numbers may be extended to extreme matrices, with the inverse of the zero matrix being defined as 0⁻¹ = Ie, where I is an identity matrix. In general, an extreme matrix is A + Be with real part A and imaginary part B, where A and B are of the same dimensions. Operations on extreme matrices can be adopted from those for extreme numbers with slight modifications to division and crossing. For any two extreme matrices A₁ + B₁e and A₂ + B₂e, if B₂ is nonsingular, then

(A₁ + B₁e)(A₂ + B₂e)⁻¹ = B₁(B₂)⁻¹,   (A₂ + B₂e)⁻¹(A₁ + B₁e) = (B₂)⁻¹B₁.

For any three extreme matrices A₁ + B₁e, A₂ + B₂e, and A₃ + B₃e, if B₃ is nonsingular, then their crossing is defined as

A₁(B₃)⁻¹B₂ + B₁(B₃)⁻¹A₂ + B₁(B₃)⁻¹B₂ e.

4 Equation Combination

Intuitively, a linear equation carries partial knowledge about the values of some variables through a linear relationship with other variables. If each such equation is considered an independent piece of knowledge, its combination with other similar knowledge will render the values more certain.
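Before turning to the combination itself, the extreme-number arithmetic of Section 3 can be sketched in code. This is a minimal illustration of mine, not the paper's; the class and function names are made up, and division by a pure-zero denominator is extended here so that a/0 = ae, which agrees with the defining cases e = 1/0 and 0/0 = 0.

```python
class Extreme:
    """An imaginary extreme number a + b*e with e = 1/0 (sketch of Section 3)."""
    def __init__(self, a=0.0, b=0.0):
        self.a, self.b = a, b            # real part and imaginary (extreme) part

    def __add__(self, other):
        return Extreme(self.a + other.a, self.b + other.b)

    def scale(self, c):                  # multiplication by a real number c
        return Extreme(c * self.a, c * self.b)

    def __truediv__(self, other):
        if other.b != 0:                 # extreme denominator cancels e-parts
            return Extreme(self.b / other.b, 0.0)
        if other.a != 0:                 # real denominator: ordinary scaling
            return self.scale(1.0 / other.a)
        return Extreme(0.0, self.a)      # a/0 = a*e, so 1/0 = e and 0/0 = 0

def crossing(x, y, z):
    """(x*y)/z for three extreme numbers, defined when z has nonzero e-part."""
    return Extreme((x.a * y.b + y.a * x.b) / z.b, x.b * y.b / z.b)

q = Extreme(3, 4) / Extreme(0, 2)        # (3 + 4e)/(0 + 2e) = 4/2
print(q.a, q.b)  # 2.0 0.0
```

Note how division discards the real parts entirely, which is exactly the property the combination rule below exploits when unsweeping from a pivot like −2e.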
When there exists a sufficient number of linear equations, their combination may jointly determine a specific value of the variables with complete certainty. Therefore, the combination of linear equations should correspond to solving a system of simultaneous equations. In this section, we prove this statement. In general, a linear equation may be expressed explicitly as

Xₙ = b + a₁X₁ + a₂X₂ + ⋯ + a_{n−1}X_{n−1}   (1)

or implicitly as

a₁X₁ + a₂X₂ + ⋯ + a_{n−1}X_{n−1} + aₙXₙ = b.   (2)
The matrix representation for the explicit expression is straightforward:

M(→X₁, …, →X_{n−1}, Xₙ) =
[ 0   ⋯  0        b       ]
[ 0   ⋯  0        a₁      ]
[ ⋮       ⋮       ⋮       ]
[ 0   ⋯  0        a_{n−1} ]
[ a₁  ⋯  a_{n−1}  0       ]

This partially swept matrix indicates that we have ignorance about the values of X₁, X₂, …, X_{n−1}; thus they correspond to a zero submatrix in the fully swept form. Given X₁, X₂, …, X_{n−1}, however, the value of Xₙ is known for sure; thus its conditional intercept and variance are respectively b and zero. Of course, in algebra, a variable on the right-hand side can be moved to the left-hand side through a linear transformation. For example, if a₁ ≠ 0, Equation 1 can be equivalently turned into

X₁ = −b/a₁ − (a₂/a₁)X₂ − ⋯ − (a_{n−1}/a₁)X_{n−1} + (1/a₁)Xₙ.

This transformation can also be done through sweepings of the matrix representations, first by a forward sweeping from Xₙ and then a reverse sweeping from X₁.

An implicit expression like Equation 2 may be represented as two separate linear equations in explicit form: a₁X₁ + a₂X₂ + ⋯ + a_{n−1}X_{n−1} + aₙXₙ = U and U = b. Their matrices are respectively

M₁(→X₁, …, →Xₙ, U) =
[ 0   ⋯  0   0  ]
[ 0   ⋯  0   a₁ ]
[ ⋮       ⋮  ⋮  ]
[ 0   ⋯  0   aₙ ]
[ a₁  ⋯  aₙ  0  ]

and

M₂(U) = [ b ; 0 ].

To combine them via Dempster's rule, we sweep both matrices from U, respectively into

M₁(→X₁, …, →Xₙ, →U) =
[ 0        ⋯  0        0   ]
[ −(a₁)²e  ⋯  −a₁aₙe   a₁e ]
[ ⋮            ⋮       ⋮   ]
[ −aₙa₁e   ⋯  −(aₙ)²e  aₙe ]
[ a₁e      ⋯  aₙe      −e  ]

and

M₂(→U) = [ be ; −e ],
and then add the results position-wise into

M(→X₁, …, →Xₙ, →U) =
[ 0        ⋯  0        be  ]
[ −(a₁)²e  ⋯  −a₁aₙe   a₁e ]
[ ⋮            ⋮       ⋮   ]
[ −aₙa₁e   ⋯  −(aₙ)²e  aₙe ]
[ a₁e      ⋯  aₙe      −2e ]

To remove the auxiliary variable U, we unsweep M(→X₁, …, →Xₙ, →U) from U into M(→X₁, …, →Xₙ, U) and then remove U by projecting the result onto the variables X₁, X₂, …, Xₙ. We obtain a fully swept matrix representation M(→X₁, …, →Xₙ) for the implicit linear equation 2 as

[ ½b(a₁, …, aₙ)e ; −½(a₁, …, aₙ)ᵀ(a₁, …, aₙ)e ].   (3)

Assuming coefficient aₙ ≠ 0, we can then unsweep it from Xₙ and obtain M(→X₁, …, →X_{n−1}, Xₙ) as

[ 0       ⋯  0           b/aₙ       ]
[ 0       ⋯  0           −a₁/aₙ     ]
[ ⋮           ⋮          ⋮          ]
[ 0       ⋯  0           −a_{n−1}/aₙ ]
[ −a₁/aₙ  ⋯  −a_{n−1}/aₙ  0          ]

which is the matrix representation of an explicit form of Equation 2:

Xₙ = b/aₙ − (a₁/aₙ)X₁ − ⋯ − (a_{n−1}/aₙ)X_{n−1}.

Now let us study the representation and combination of multiple linear equations. For explicit expressions, without loss of generality, assume two linear equations Y = b₁ + XA₁ and Y = b₂ + XA₂, where Y is a single variable, X is an n-dimensional row vector, b₁ and b₂ are constants, and A₁ and A₂ are n-dimensional column vectors. Their matrix representations are

M₁(→X, Y) =
[ 0    b₁ ]
[ 0    A₁ ]
[ A₁ᵀ  0  ]

M₂(→X, Y) =
[ 0    b₂ ]
[ 0    A₂ ]
[ A₂ᵀ  0  ]

To combine them, we need to sweep both matrices from Y and then add them position-wise into M(→X, →Y) as
[ −b₁A₁ᵀe − b₂A₂ᵀe   (b₁ + b₂)e ]
[ −A₁A₁ᵀe − A₂A₂ᵀe   (A₁ + A₂)e ]
[ (A₁ + A₂)ᵀe        −2e        ]

Now, unsweeping M(→X, →Y) from Y, we obtain M(→X, Y) as

[ ½(b₂ − b₁)(A₁ − A₂)ᵀe    (b₁ + b₂)/2 ]
[ −½(A₁ − A₂)(A₁ − A₂)ᵀe   (A₁ + A₂)/2 ]
[ (A₁ + A₂)ᵀ/2             0           ]

Compared with Equation 3, the above matrix represents the implicit linear equation X(A₁ − A₂) = b₂ − b₁ for X, along with the conditional knowledge of Y given X. It is trivial to note that the combination is equivalent to solving the linear equations Y = b₁ + XA₁ and Y = b₂ + XA₂ by substitution: b₁ + XA₁ = b₂ + XA₂.

When linear equations are expressed implicitly, their combination is equivalent to forming a larger system of linear equations. Assume XA = U and XB = V are two systems of linear equations on a vector of variables X, where U and V are distinct vectors of auxiliary variables, and A and B are coefficient matrices of appropriate dimensions. Their matrix representations are

M(→X, U) =
[ 0   0 ]
[ 0   A ]
[ Aᵀ  0 ]

M(→X, V) =
[ 0   0 ]
[ 0   B ]
[ Bᵀ  0 ]

Since both matrices have been swept from the common variables X, they can be directly summed according to the new generalized rule of combination [Liu, 2011b]:

M(→X, U, V) =
[ 0   0  0 ]
[ 0   A  B ]
[ Aᵀ  0  0 ]
[ Bᵀ  0  0 ]

which corresponds to X[A B] = [U V]. In words, the combination of XA = U and XB = V is identical to the system of linear equations joining both XA = U and XB = V.
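The substitution claim above can be checked numerically. Below is a small sketch with made-up scalar coefficients (n = 1, so A₁ and A₂ are scalars): the implicit equation X(A₁ − A₂) = b₂ − b₁ read off from the combined matrix yields the same point as solving the pair directly.

```python
# Two explicit equations Y = b1 + a1*X and Y = b2 + a2*X (hypothetical numbers).
b1, a1 = 1.0, 2.0
b2, a2 = 7.0, -1.0

# The combined matrix encodes the implicit equation X*(a1 - a2) = b2 - b1 ...
x = (b2 - b1) / (a1 - a2)
# ... together with the conditional knowledge of Y given X, whose mean is
y = (b1 + b2) / 2 + x * (a1 + a2) / 2

# Substitution b1 + a1*X = b2 + a2*X gives the same solution point.
assert abs((b1 + a1 * x) - (b2 + a2 * x)) < 1e-12
assert abs(y - (b1 + a1 * x)) < 1e-12
print(x, y)  # 2.0 5.0
```

The second assertion confirms that the conditional mean (b₁ + b₂)/2 + X(A₁ + A₂)/2 from the unswept matrix lands on the same Y as either original equation.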
To understand what it really means to combine linear equations, let us perform sweepings on the matrix representation of a system of m equations XA = U, where X is a vector of n variables, U is a vector of m variables, and A is an n × m coefficient matrix.

First, assume n ≥ m and all linear equations are independent, i.e., none is a linear combination of the others, so that some subvector of X can be solved in terms of the other variables. Without loss of generality, assume X = (X₁, X₂) with X₁ being a subvector of m variables that can be solved, and split A vertically into two submatrices A₁ and A₂ with A₁ being a nonsingular m × m matrix. Then we have X₁A₁ + X₂A₂ = U, which is represented as

M(→X₁, →X₂, U) =
[ 0    0    0  ]
[ 0    0    A₁ ]
[ 0    0    A₂ ]
[ A₁ᵀ  A₂ᵀ  0  ]

Apply a forward sweeping to M(→X₁, →X₂, U) from U:

M(→X₁, →X₂, →U) =
[ 0        0        0   ]
[ −A₁A₁ᵀe  −A₁A₂ᵀe  A₁e ]
[ −A₂A₁ᵀe  −A₂A₂ᵀe  A₂e ]
[ A₁ᵀe     A₂ᵀe     −Ie ]

and unsweep M(→X₁, →X₂, →U) from X₁. Noting that A₁ is nonsingular and (A₁A₁ᵀ)⁻¹ = (A₁ᵀ)⁻¹(A₁)⁻¹, we can easily verify that M(X₁, →X₂, →U) is

[ 0          0             0       ]
[ 0          −(A₁ᵀ)⁻¹A₂ᵀ   (A₁ᵀ)⁻¹ ]
[ −A₂(A₁)⁻¹  0             0       ]
[ (A₁)⁻¹     0             0       ]

which is the matrix representation of X₁ = −X₂A₂(A₁)⁻¹ + U(A₁)⁻¹. Therefore, sweeping from U and unsweeping from X₁ is the same as solving for X₁ in terms of U.

Second, assume the system XA = C contains m equations and n variables with n ≤ m, where C is an m-dimensional vector and A has rank n. Using an auxiliary variable U, the system is equivalent to the combination of
M(→X, U) =
[ 0   0 ]
[ 0   A ]
[ Aᵀ  0 ]

with

M(U) = [ C ; 0 ]

or, via extreme numbers,

M(→X, →U) =
[ 0      Ce   ]
[ −AAᵀe  Ae   ]
[ Aᵀe    −2Ie ]

Unsweeping M(→X, →U) from the inverse covariance matrix of U, we obtain

M(→X, U) =
[ ½CAᵀe   C/2 ]
[ −½AAᵀe  A/2 ]
[ Aᵀ/2    0   ]

Since A has rank n, AAᵀ is positive definite. Thus, we can unsweep M(→X, U) from the inverse covariance matrix of X and obtain M(X, U) as

[ CAᵀ(AAᵀ)⁻¹  ½C[I + Aᵀ(AAᵀ)⁻¹A] ]
[ 0           0                  ]
[ 0           0                  ]

implying that, after combination, the variable X takes on the value X = CAᵀ(AAᵀ)⁻¹ with certainty. Note that this solution is the least-squares estimate of X from the regression model XA = C, with A being the observation matrix for the independent variables and C being the observations of the dependent variable. In addition, the auxiliary variable U takes on the value

U = ½C[I + Aᵀ(AAᵀ)⁻¹A]   (4)

with certainty. This seems to conflict with the initial component model U = C. However, one shall realize that, when m > n, there exist only n independent linear equations. Thus, only n variables of U can take independent observations, and the remaining m − n variables take on the values derived from those observations. Otherwise, U = C will have conflicting observations on some or all variables. Equation 4 represents the values that are closest to the observations if there is any conflict. In fact, in the special case when m = n, we have (AAᵀ)⁻¹ = (Aᵀ)⁻¹A⁻¹.
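The claim that X = CAᵀ(AAᵀ)⁻¹ is the least-squares estimate can be illustrated with a tiny made-up example (data values are mine, not the paper's). With a single unknown (n = 1, m = 3), the formula reduces to Σcⱼaⱼ / Σaⱼ².

```python
# Overdetermined system x*a_j = c_j for j = 1..3 (hypothetical data; n = 1, m = 3).
a = [1.0, 2.0, 3.0]
c = [1.1, 1.9, 3.2]

# X = C A^T (A A^T)^{-1}; with a single unknown this is sum(c_j a_j)/sum(a_j^2).
x = sum(cj * aj for cj, aj in zip(c, a)) / sum(aj * aj for aj in a)

def sse(t):
    """Sum of squared residuals of a candidate solution t."""
    return sum((t * aj - cj) ** 2 for aj, cj in zip(a, c))

# No nearby point does better, as expected of the least-squares estimate.
assert sse(x) <= sse(x + 1e-3) and sse(x) <= sse(x - 1e-3)
print(round(x, 4))  # 1.0357
```

Because the three observations are slightly inconsistent, no exact solution exists; the combination instead returns the value minimizing the squared conflict, matching the discussion of Equation 4.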
Thus

M(X, U) =
[ CA⁻¹  C ]
[ 0     0 ]
[ 0     0 ]

implying that X = CA⁻¹ and U = C with certainty. This is simply the solution to XA = C.

5 Conclusion

In knowledge-based systems, extreme numbers arise whenever a deterministic linear model, such as a linear equation, exists in the knowledge base. A linear model is represented as a marginal or conditional normal distribution. For a linear equation, the conditional variance is zero, and a matrix sweeping from such a zero variance turns the matrix into an extreme one. This paper studied the application of extreme numbers to representing and transforming linear equations and to combining them as belief functions via Dempster's rule. When a number of linear equations are under-determined, their combination corresponds to solving the equations for some variables in terms of others. When they are just determined, their combination corresponds to solving the equations for all the variables. When they are overdetermined, their combination corresponds to finding the least-squares estimate of all the variables. The meaning of the combination in such a case should be studied by future research.

References

[Dempster, 2001] Dempster, A.P.: Normal belief functions and the Kalman filter. In: Saleh, A.K.M.E. (ed.) Data Analysis from Statistical Foundations. Nova Science Publishers, Hauppauge (2001)
[Khatri, 1968] Khatri, C.G.: Some results for the singular normal multivariate regression models. Sankhya A 30 (1968)
[Liu, 2011a] Liu, L.: Dempster's rule for combining linear models. Technical report, Department of Management, The University of Akron, Akron, Ohio (2011a)
[Liu, 2011b] Liu, L.: A new rule for combining linear belief functions. Technical report, Department of Management, The University of Akron, Akron, Ohio (2011b)
[Liu et al., 2006] Liu, L., Shenoy, C., Shenoy, P.P.: Knowledge representation and integration for portfolio evaluation using linear belief functions.
IEEE Transactions on Systems, Man, and Cybernetics, Series A 36(4) (2006)
[Shafer, 1976] Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976)
More informationChapter 2. Matrix Arithmetic. Chapter 2
Matrix Arithmetic Matrix Addition and Subtraction Addition and subtraction act element-wise on matrices. In order for the addition/subtraction (A B) to be possible, the two matrices A and B must have the
More informationBasic Probability Reference Sheet
February 27, 2001 Basic Probability Reference Sheet 17.846, 2001 This is intended to be used in addition to, not as a substitute for, a textbook. X is a random variable. This means that X is a variable
More informationOn Conditional Independence in Evidence Theory
6th International Symposium on Imprecise Probability: Theories and Applications, Durham, United Kingdom, 2009 On Conditional Independence in Evidence Theory Jiřina Vejnarová Institute of Information Theory
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More informationLinear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.
POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems
More informationSteven J. Leon University of Massachusetts, Dartmouth
INSTRUCTOR S SOLUTIONS MANUAL LINEAR ALGEBRA WITH APPLICATIONS NINTH EDITION Steven J. Leon University of Massachusetts, Dartmouth Boston Columbus Indianapolis New York San Francisco Amsterdam Cape Town
More informationPractical Linear Algebra: A Geometry Toolbox
Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla
More informationChapter 5 Matrix Approach to Simple Linear Regression
STAT 525 SPRING 2018 Chapter 5 Matrix Approach to Simple Linear Regression Professor Min Zhang Matrix Collection of elements arranged in rows and columns Elements will be numbers or symbols For example:
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationSection 3.3. Matrix Rank and the Inverse of a Full Rank Matrix
3.3. Matrix Rank and the Inverse of a Full Rank Matrix 1 Section 3.3. Matrix Rank and the Inverse of a Full Rank Matrix Note. The lengthy section (21 pages in the text) gives a thorough study of the rank
More informationReasoning with Uncertainty
Reasoning with Uncertainty Representing Uncertainty Manfred Huber 2005 1 Reasoning with Uncertainty The goal of reasoning is usually to: Determine the state of the world Determine what actions to take
More informationLinear Algebra and Eigenproblems
Appendix A A Linear Algebra and Eigenproblems A working knowledge of linear algebra is key to understanding many of the issues raised in this work. In particular, many of the discussions of the details
More informationVectors and Matrices Notes.
Vectors and Matrices Notes Jonathan Coulthard JonathanCoulthard@physicsoxacuk 1 Index Notation Index notation may seem quite intimidating at first, but once you get used to it, it will allow us to prove
More informationLinear Algebra and Matrix Inversion
Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much
More informationConsistent Bivariate Distribution
A Characterization of the Normal Conditional Distributions MATSUNO 79 Therefore, the function ( ) = G( : a/(1 b2)) = N(0, a/(1 b2)) is a solu- tion for the integral equation (10). The constant times of
More informationMatrix Representation
Matrix Representation Matrix Rep. Same basics as introduced already. Convenient method of working with vectors. Superposition Complete set of vectors can be used to express any other vector. Complete set
More informationTwo remarks on normality preserving Borel automorphisms of R n
Proc. Indian Acad. Sci. (Math. Sci.) Vol. 3, No., February 3, pp. 75 84. c Indian Academy of Sciences Two remarks on normality preserving Borel automorphisms of R n K R PARTHASARATHY Theoretical Statistics
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationInverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1
Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is
More informationExponential Properties 0.1 Topic: Exponential Properties
Ns Exponential Properties 0.1 Topic: Exponential Properties Date: Objectives: SWBAT (Simplify and Evaluate Expressions using the Exponential LAWS) Main Ideas: Assignment: LAW Algebraic Meaning Example
More information4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns
L. Vandenberghe ECE133A (Winter 2018) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows
More informationA Review of Matrix Analysis
Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value
More informationACO Comprehensive Exam October 14 and 15, 2013
1. Computability, Complexity and Algorithms (a) Let G be the complete graph on n vertices, and let c : V (G) V (G) [0, ) be a symmetric cost function. Consider the following closest point heuristic for
More informationGetting Started with Communications Engineering
1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we
More information~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level.
Existence of Generalized Inverse: Ten Proofs and Some Remarks R B Bapat Introduction The theory of g-inverses has seen a substantial growth over the past few decades. It is an area of great theoretical
More informationA FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic
A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Matrix Arithmetic Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)
More informationThe dual simplex method with bounds
The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the
More informationA Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models
A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models Jeff A. Bilmes (bilmes@cs.berkeley.edu) International Computer Science Institute
More informationCS227-Scientific Computing. Lecture 4: A Crash Course in Linear Algebra
CS227-Scientific Computing Lecture 4: A Crash Course in Linear Algebra Linear Transformation of Variables A common phenomenon: Two sets of quantities linearly related: y = 3x + x 2 4x 3 y 2 = 2.7x 2 x
More informationASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 42, NO 6, JUNE 1997 771 Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach Xiangbo Feng, Kenneth A Loparo, Senior Member, IEEE,
More informationBasic Linear Algebra in MATLAB
Basic Linear Algebra in MATLAB 9.29 Optional Lecture 2 In the last optional lecture we learned the the basic type in MATLAB is a matrix of double precision floating point numbers. You learned a number
More informationA Randomized Algorithm for the Approximation of Matrices
A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive
More informationDirect Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le
Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization
More information1 Singular Value Decomposition and Principal Component
Singular Value Decomposition and Principal Component Analysis In these lectures we discuss the SVD and the PCA, two of the most widely used tools in machine learning. Principal Component Analysis (PCA)
More information1300 Linear Algebra and Vector Geometry
1300 Linear Algebra and Vector Geometry R. Craigen Office: MH 523 Email: craigenr@umanitoba.ca May-June 2017 Introduction: linear equations Read 1.1 (in the text that is!) Go to course, class webpages.
More informationLinear Algebra. Chapter Linear Equations
Chapter 3 Linear Algebra Dixit algorizmi. Or, So said al-khwarizmi, being the opening words of a 12 th century Latin translation of a work on arithmetic by al-khwarizmi (ca. 78 84). 3.1 Linear Equations
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationFinite Fields: An introduction through exercises Jonathan Buss Spring 2014
Finite Fields: An introduction through exercises Jonathan Buss Spring 2014 A typical course in abstract algebra starts with groups, and then moves on to rings, vector spaces, fields, etc. This sequence
More informationIntroduction to Determinants
Introduction to Determinants For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix The matrix A is invertible if and only if.
More informationSection 9.2: Matrices. Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns.
Section 9.2: Matrices Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns. That is, a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn A
More informationJim Lambers MAT 419/519 Summer Session Lecture 11 Notes
Jim Lambers MAT 49/59 Summer Session 20-2 Lecture Notes These notes correspond to Section 34 in the text Broyden s Method One of the drawbacks of using Newton s Method to solve a system of nonlinear equations
More informationLecture 21: Spectral Learning for Graphical Models
10-708: Probabilistic Graphical Models 10-708, Spring 2016 Lecture 21: Spectral Learning for Graphical Models Lecturer: Eric P. Xing Scribes: Maruan Al-Shedivat, Wei-Cheng Chang, Frederick Liu 1 Motivation
More informationFE670 Algorithmic Trading Strategies. Stevens Institute of Technology
FE670 Algorithmic Trading Strategies Lecture 3. Factor Models and Their Estimation Steve Yang Stevens Institute of Technology 09/12/2012 Outline 1 The Notion of Factors 2 Factor Analysis via Maximum Likelihood
More informationThe Multivariate Gaussian Distribution [DRAFT]
The Multivariate Gaussian Distribution DRAFT David S. Rosenberg Abstract This is a collection of a few key and standard results about multivariate Gaussian distributions. I have not included many proofs,
More informationTRANSPORTATION PROBLEMS
Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations
More informationMatrices and systems of linear equations
Matrices and systems of linear equations Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T.
More informationBoolean Inner-Product Spaces and Boolean Matrices
Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver
More informationComparative Analysis of Mesh Generators and MIC(0) Preconditioning of FEM Elasticity Systems
Comparative Analysis of Mesh Generators and MIC(0) Preconditioning of FEM Elasticity Systems Nikola Kosturski and Svetozar Margenov Institute for Parallel Processing, Bulgarian Academy of Sciences Abstract.
More informationELE/MCE 503 Linear Algebra Facts Fall 2018
ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More information