Multistat2. The course is focused on the parts in bold; a short description will be presented for the other parts


Transcription:



What I cannot teach you: how to look for relevant literature, or how to use statistical software.


I am not a bush doctor, and statistics is not magic: you need it for your work.


You can use statistics as a blue pill (hide problems in your data, experimental setup, etc.) and live happily, or use it as a red pill: it may be sobering at times, but it helps a lot in discovering reality.


Examples of types of variables. Categorical, unordered: eye color, sex, marital status. Categorical, ordered: level of education, weight class. Quantitative, continuous: weight, temperature (can we really measure on a continuous scale? What is the number of significant digits, given the precision and accuracy of your measurements?). Quantitative, discrete: microbial counts.

Ordinal variables are frequent in sensory science and psychometrics: people may not use a linear scale, and the distance between points on a scale (e.g. a hedonic scale, 0-9) might not be constant. The difference between interval and ratio variables is important: although they both use scales with constant intervals, the fact that the 0 in interval scales is arbitrary has implications when comparing values. For example, temperature in C is an interval scale with an arbitrary 0, while temperature in K is a ratio scale. A body at 40 C is not twice as hot as a body at 20 C, as can easily be seen by converting to K.
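The Celsius/kelvin point above can be checked with a few lines of arithmetic (a minimal illustration in Python; the helper function name is invented here):

```python
# Interval vs ratio scales: 40 C is not "twice as hot" as 20 C.
# Converting to kelvin (a ratio scale with a true zero) makes this clear.
def c_to_k(t_celsius):
    """Convert a temperature from Celsius (interval scale) to kelvin (ratio scale)."""
    return t_celsius + 273.15

ratio = c_to_k(40) / c_to_k(20)
print(round(ratio, 3))  # about 1.068, nowhere near 2
```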

From http://www.graphpad.com

Link Systat or Excel data files

This is a file on the peptidase activity of a number of strains of lactic acid bacteria; the variables Doxxx, mmxxxx and mkatxxx indicate the absorbance in the assay, the mmol of substrate hydrolyzed during the assay, and the activity in kat.

This file shows results from the phenotypic and genotypic characterization of LAB (lactic acid bacteria) isolated from sourdough breads.

A data matrix may often include both categorical and quantitative variables, although most often only the latter are used. The trace of a matrix is the sum of its diagonal elements. Addressing of the elements in the matrix is relevant, since a matrix is an ordered table of elements. Usually the index i is used for rows and j for columns. In some cases further dimensions are added: in a repeated-measures experiment, k may index the number of the measurement taken. Usually an address variable is used for this and, for example, matrices of different experiments are simply stacked. A symmetric matrix is a square matrix which is equal to its transpose: distance and similarity matrices are usually symmetric.
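The trace and the symmetry check described above can be illustrated in a few lines (NumPy is used here purely as an illustration; the course material itself refers to Systat/Excel):

```python
import numpy as np

# A small symmetric matrix, as a distance or similarity matrix would be.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])

trace = np.trace(A)                    # sum of the diagonal: 2 + 3 + 5 = 10
is_symmetric = np.array_equal(A, A.T)  # equal to its transpose?

print(trace, is_symmetric)  # 10.0 True
```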

A row vector specifies coordinates in a Euclidean space with p dimensions, i.e. it specifies the value of all variables for a single observation. x' is a transposed vector (usually this notation is used to indicate row vectors anyway). A matrix is actually made of one or more row vectors (or column vectors). Therefore transposition of vectors or matrices is simply exchanging rows for columns (the first row becomes the first column, and so on, and an nxp matrix becomes a pxn matrix). The rank of a matrix is the number of its row vectors which are linearly independent. A matrix is full rank if all its row vectors are linearly independent.
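Rank and transposition can be checked directly (a NumPy sketch, not part of the original slides):

```python
import numpy as np

# The second row is 2x the first, so the row vectors are linearly dependent.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

print(np.linalg.matrix_rank(X))  # 1: not full rank
print(X.T.shape)                 # (3, 2): an nxp matrix becomes pxn
```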


Special matrices: I (identity), all elements 0 except the diagonal (1); it plays the same role as the number 1 in arithmetic multiplication. 0 (null), all elements are 0. E, all elements are 1. A square matrix which is equal to its transpose is symmetric. A diagonal matrix has all elements 0 except the diagonal. The inverse of a matrix is the matrix which, when multiplied by the original matrix, gives the matrix I. Multiplication by the inverse of a matrix is the equivalent of division.
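The roles of the identity and the inverse can be verified numerically (illustrative NumPy, with an arbitrarily chosen invertible matrix):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])     # determinant 10, so A is invertible
I = np.eye(2)                  # identity: the role of 1 in multiplication
A_inv = np.linalg.inv(A)       # multiplying by A_inv is the "division" by A

print(np.allclose(A @ I, A))      # True: AI = A
print(np.allclose(A @ A_inv, I))  # True: A times its inverse gives I
```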

Sum and difference of matrices can be carried out only if the two matrices have the same dimensions, and the result is the element-wise sum (difference). The following properties hold: Commutative: A+B=B+A. Associative: A+(B+C)=(A+B)+C; A-(B-C)=(A-B)+C.

Scalar multiplication: all elements are multiplied by a scalar. Matrix multiplication is possible only if the number of columns n of the first matrix is equal to the number of rows of the second matrix. The commutative property does not apply to matrix multiplication, while the associative (ABC=(AB)C) and distributive (A(B+C)=AB+AC) properties apply. If two matrices have dimensions (m,n) and (n,p), the result of their multiplication has dimensions (m,p). The product of any matrix by the identity matrix (I) is the matrix itself. The inner product of a vector with itself, x'x, is a scalar: the sum of squares of the elements of x, which is also the square of the distance of the point represented by the vector from the origin. While (AB)'=B'A' holds, in general AB=BA does not hold. The image is from: http://en.wikipedia.org/wiki/Matrix_(mathematics)
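Non-commutativity, the transpose-of-a-product rule, and the inner product x'x can all be checked with small matrices (a NumPy illustration with arbitrarily chosen values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.array_equal(A @ B, B @ A))       # False: AB != BA in general
print(np.allclose((A @ B).T, B.T @ A.T))  # True: (AB)' = B'A'

x = np.array([1.0, 2.0, 2.0])
print(x @ x)  # 9.0: sum of squares = squared distance from the origin
```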

Premultiplication of a matrix by a diagonal matrix gives a matrix in which each row is multiplied by the corresponding diagonal element: if you premultiply by a diagonal matrix, the rows are multiplied by the elements of the diagonal; if you postmultiply, the columns are multiplied. The inverse B of a square matrix A is a matrix such that AB=BA=I. A matrix is singular or degenerate if it cannot be inverted; this happens if and only if its determinant is 0. A matrix is invertible only if it is full rank.
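The row-versus-column scaling by a diagonal matrix, and the determinant of a singular matrix, look like this in a short NumPy sketch (values chosen only for illustration):

```python
import numpy as np

D = np.diag([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

print(D @ A)  # premultiply: rows scaled -> [[2, 2], [3, 3]]
print(A @ D)  # postmultiply: columns scaled -> [[2, 3], [2, 3]]

# A singular matrix (rows linearly dependent, not full rank) has determinant 0
# and cannot be inverted.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(S))  # 0 (up to rounding)
```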

The scale of the vector changes by multiplication by the scalar c. Scalar multiplication of a matrix works in exactly the same way. From a Euclidean point of view, multiplication of a vector by a scalar changes its length by the factor c.

The matrix obtained by this product is the SSCP (sum of squares and cross products) matrix, a symmetrical matrix.
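The SSCP matrix is X'X, computed from a data matrix X of observations by variables (a NumPy illustration with made-up data):

```python
import numpy as np

# Data matrix: 4 observations (rows) x 2 variables (columns).
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 3.0],
              [4.0, 2.0]])

sscp = X.T @ X  # sums of squares on the diagonal, cross products off-diagonal

print(sscp)
print(np.array_equal(sscp, sscp.T))  # True: the SSCP matrix is symmetric
```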

It is the sum of squares of the vector. By convention vectors are written here as row vectors; column vectors are obtained by transposing row vectors.

Image from http://en.wikipedia.org/wiki/Matrix_(mathematics). Multiplication of an mxn matrix by an nxn matrix can be used to obtain linear transformations of the first matrix. Examples are provided for Euclidean space in two dimensions. Check the article on linear transformations on Wikipedia for further details.
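A classic two-dimensional example of such a transformation is a rotation matrix (this specific example is not from the slides; it is a standard illustration, sketched in NumPy):

```python
import numpy as np

# Rotation by 90 degrees counter-clockwise in the Euclidean plane.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])    # a point on the x-axis
print(np.round(R @ v, 10))  # [0. 1.]: rotated onto the y-axis
```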

This slide shows the definition of Euclidean distance between two points with coordinates x and y (x and y are vectors). θ is the angle between the two vectors; in cos θ the vectors are divided by their lengths, which is equivalent to normalizing them to unit length. If the two vectors are coincident, then cos θ = 1; if they are at a right angle, cos θ = 0. When y = 0, d is also called the L2-norm (norm, for short) of the vector and is its length.
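Distance, angle and norm can be computed directly (a NumPy sketch with arbitrary vectors):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([4.0, 3.0])

d = np.linalg.norm(x - y)  # Euclidean distance between the two points
cos_theta = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))  # cosine of the angle

print(d)                    # sqrt(2): the distance
print(round(cos_theta, 2))  # 0.96: nearly coincident vectors
print(np.linalg.norm(x))    # 5.0: the L2-norm (length) of x
```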

Postmultiplication of an mxn matrix by a vector of dimension n linearly transforms the group of variables represented by the vector into a new space.

The determinant of the equation is a polynomial of degree p; if the determinant is non-zero, the inverse of the matrix exists. From: http://en.wikipedia.org/wiki/Matrix_(mathematics). A number λ and a non-zero vector v satisfying Av = λv are called an eigenvalue and an eigenvector of A, respectively. The number λ is an eigenvalue of A if and only if A − λI is not invertible, which is equivalent to det(A − λI) = 0. The function pA(t) = det(A − tI) is called the characteristic polynomial of A; the degree of this polynomial is n. Therefore pA(t) has at most n (possibly complex) different roots, i.e. eigenvalues of the matrix. The trace of a square matrix is the sum of its diagonal entries; it equals the sum of its n eigenvalues.
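The eigenvalue definition Av = λv and the trace/eigenvalue identity can both be checked numerically (a NumPy illustration with a small symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals))                        # [1. 3.]
print(np.isclose(np.trace(A), eigvals.sum()))  # True: trace = sum of eigenvalues

# Each eigenpair satisfies Av = lambda * v.
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))      # True
```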


From the Stat Trek web site


If i=j the covariance is the variance; with p variables a symmetrical pxp matrix is obtained, and its diagonal elements are the variances (p variances), while there are p(p-1)/2 covariances. The matrix can also be called the variance-covariance matrix. Covariance is highly sensitive to scale and therefore difficult to interpret.
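The structure of the variance-covariance matrix (variances on the diagonal, symmetry off it) can be seen with a small data matrix (a NumPy sketch with made-up data):

```python
import numpy as np

# 5 observations (rows) of 2 variables (columns).
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 5.0],
              [4.0, 4.0],
              [5.0, 6.0]])

S = np.cov(X, rowvar=False)    # pxp variance-covariance matrix

print(S.shape)                 # (2, 2)
print(np.diag(S))              # the variances sit on the diagonal
print(np.array_equal(S, S.T))  # True: the matrix is symmetric
```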

Correlation is covariance standardized by the standard deviations of the variables. It varies between -1 and 1 and is a measure of the linear relationship between the variables. With p variables there are p(p-1)/2 correlations arranged in a pxp matrix, with diagonal elements = 1. Usually only the lower triangular part of the matrix is used.
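A correlation matrix for two variables (illustrative NumPy, using the same kind of made-up data as above):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 6.0])

R = np.corrcoef(x, y)     # 2x2 correlation matrix with 1s on the diagonal

print(np.diag(R))         # [1. 1.]
print(round(R[0, 1], 3))  # the correlation between x and y, in [-1, 1]
```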

If a limited number of missing values is present, it is acceptable to perform the analysis or to carry out imputation; if the number of missing values is high, and especially if they are not missing at random, the analysis should be used only to obtain preliminary information.
