BMI/STAT 768: Lecture 04 Correlations in Metric Spaces
BMI/STAT 768: Lecture 04 Correlations in Metric Spaces

Moo K. Chung
February 1, 2018

An elementary statistical treatment of correlations can be found in [4]: mchung/papers/chung.2007.sage.pdf

Correlations are the most widely used similarity measure in imaging; however, they are not a trivial concept. Here we formulate correlation as a metric. The mathematical setting and notation will follow [5, 7].

1 Pearson correlations

Definition 1 Consider two data vectors x = (x_1, x_2, \dots, x_n) and y = (y_1, y_2, \dots, y_n). The Pearson correlation coefficient \rho between the two vectors x and y is defined as

\rho(x, y) = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^n (y_i - \bar{y})^2}},  (1)

where \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i and \bar{y} = \frac{1}{n}\sum_{i=1}^n y_i are the sample means. Algebraic manipulation shows that \rho(x, y) = \rho(ax + b, cy + d) for any a, c > 0 and any b, d \in \mathbb{R}; in general \rho(ax + b, cy + d) = \mathrm{sign}(ac)\,\rho(x, y). Thus, up to sign, the correlation is scale and translation invariant.

The correlation (1) can be factored as a vector product:

\rho(x, y) = (x')^\top y',

where x' = \alpha x + \beta and y' = \gamma y + \delta with

\alpha = \frac{1}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2}}, \quad \beta = \frac{-\bar{x}}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2}}, \quad \gamma = \frac{1}{\sqrt{\sum_{i=1}^n (y_i - \bar{y})^2}}, \quad \delta = \frac{-\bar{y}}{\sqrt{\sum_{i=1}^n (y_i - \bar{y})^2}}.

Then trivially, correlations under any nontrivial linear transformation, i.e., a, c \neq 0, can be represented as a vector product.
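As a quick numerical check (a Python/NumPy sketch, not part of the original MATLAB-oriented notes), the definitional formula (1), the scale/translation invariance, and the factorization into a vector product can all be verified directly:

```python
import numpy as np

def pearson(x, y):
    # Definitional formula (1).
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)

# Scale and translation invariance for positive scalings a, c.
assert np.isclose(pearson(x, y), pearson(3.0 * x + 5.0, 0.5 * y - 2.0))

# Factorization as a vector product: rho(x, y) = x'^T y' after
# centering and scaling each vector to unit norm.
xp = (x - x.mean()) / np.linalg.norm(x - x.mean())
yp = (y - y.mean()) / np.linalg.norm(y - y.mean())
assert np.isclose(pearson(x, y), xp @ yp)

# Agrees with NumPy's built-in correlation.
assert np.isclose(pearson(x, y), np.corrcoef(x, y)[0, 1])
```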
Among all linear transformations, the following one is most often used in relation to sparse models [5, 6]. Consider the transformation that centers and scales x and y such that

\sum_{i=1}^n x_i = \sum_{i=1}^n y_i = 0, \quad \|x\|^2 = x^\top x = \|y\|^2 = y^\top y = 1.  (2)

This projects the n-dimensional vectors x, y \in \mathbb{R}^n onto the unit sphere S^{n-1}. Thus the centering and scaling operations can be viewed as an embedding of the data from the Euclidean space \mathbb{R}^n onto the unit sphere S^{n-1}. The correlation between x and y is then simply the cosine similarity often used in the engineering and computer science literature:

\cos \theta = x^\top y,

where \theta is the angle between the vectors x and y.

2 Relation to linear models

Assuming x and y are centered and scaled, consider the following linear model
y = \beta x + e,

where e is a mean-zero error vector and \beta is an unknown scalar parameter to be estimated. The least squares estimation (LSE) of \beta minimizes the sum of squared residuals

e^\top e = (y - \beta x)^\top (y - \beta x).

Trivially, we can show that the LSE of \beta is \hat{\beta} = x^\top y, the Pearson correlation. Even if x and y are not centered and scaled, there is a relationship between correlations and the regression coefficients. This is left as an exercise.

3 Computing large correlation matrices

For large correlation matrices of size p \times p, where p > 10000, the matrix computation takes a long time and requires significant memory. The built-in functions in MATLAB and R packages are often not written in a computationally efficient fashion. The pairwise computation of
correlations is not efficient. A more efficient way is to use the scaling and normalization (2) and compute the correlation matrix as a matrix product [5, 6]. Suppose we have n \times p data matrices X = (x_{ij}) and Y = (y_{ij}), where n is the number of images and p is the number of voxels at which we are interested in computing correlations. We scale and normalize along the columns such that

\sum_{i=1}^n x_{ij} = 0, \quad \sum_{i=1}^n y_{ij} = 0, \quad \sum_{i=1}^n x_{ij}^2 = 1, \quad \sum_{i=1}^n y_{ij}^2 = 1.

Then the correlation matrices of X and Y are defined as

corr(X) = X^\top X, \quad corr(Y) = Y^\top Y.

Since the data are normalized, the covariance matrix reduces to the correlation matrix. If n > p, the correlation matrices are generically full rank and invertible. If not, we no longer have well-defined correlation matrices and regularization is possibly needed. The cross-correlation matrix of X and Y is defined as

corr(X, Y) = X^\top Y,
which is not symmetric. To make it symmetric, we can perform the symmetrization (X^\top Y + Y^\top X)/2. The above procedure can be implemented in MATLAB as

    function Z = corr2norm(X)
    % Center and scale the columns of X so that X'*X is the correlation matrix.
    [n p] = size(X);
    meanX = mean(X, 1);
    meanX = repmat(meanX, n, 1);
    X1 = X - meanX;
    diagX1 = sqrt(diag(X1'*X1));
    X1 = X1./repmat(diagX1', n, 1);
    Z = X1;

Once we normalize the data matrices, the cross-correlation matrix C is simply given as the product of the matrices:

    X = corr2norm(X);
    Y = corr2norm(Y);
    C = X'*Y;

This is perhaps the fastest way to compute large-scale correlation matrices. Due to the scaling, the diagonal entries of X'*X and Y'*Y are all 1.
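For readers working in Python rather than MATLAB, the same normalize-then-multiply trick can be sketched with NumPy (the function name here simply mirrors corr2norm; it is my own translation, not the lecture's code):

```python
import numpy as np

def corr2norm(X):
    # Center and scale each column so that X.T @ X is the correlation matrix.
    Xc = X - X.mean(axis=0)
    return Xc / np.sqrt((Xc ** 2).sum(axis=0))

rng = np.random.default_rng(1)
n, p = 50, 300
X = corr2norm(rng.normal(size=(n, p)))
Y = corr2norm(rng.normal(size=(n, p)))

C = X.T @ Y                      # p x p cross-correlation in one matrix product
C_sym = (X.T @ Y + Y.T @ X) / 2  # symmetrized version

# The diagonal of X.T @ X is all ones, and a pairwise spot check agrees.
assert np.allclose(np.diag(X.T @ X), 1.0)
i, j = 7, 123
assert np.isclose(C[i, j], np.corrcoef(X[:, i], Y[:, j])[0, 1])
```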
Figure 1: Schematic of computing the cross-correlation matrix in monozygotic (MZ) and same-sex dizygotic (DZ) twin images [6].

Given paired images in twins, the cross-correlation matrix will not be symmetric. Thus, it requires the symmetrization. Often, the data matrix X is likely to have missing values, which are denoted as NaN in MATLAB. The above algorithm assumes there are no missing values. It may be necessary to perform a missing-data treatment for any entry with NaN.
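The lecture does not prescribe a specific missing-data treatment. One common and simple option (an assumption on my part, not the lecture's method) is column-mean imputation before normalization, sketched below; pairwise-complete deletion is an alternative, but it breaks the single-matrix-product trick:

```python
import numpy as np

def impute_column_means(X):
    # Replace each NaN entry by the mean of the observed values in its column.
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, np.nan],
              [2.0, 1.0]])
Xf = impute_column_means(X)

assert not np.isnan(Xf).any()
assert np.isclose(Xf[1, 0], np.nanmean(X[:, 0]))  # imputed with the column mean
```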
Figure 2: fMRI cross-correlation network of monozygotic (MZ) and same-sex dizygotic (DZ) twins with nodes [6]. Only positive correlations are shown.
4 Correlation metric

Consider a node set V = {1, \dots, p} and edge weights \rho = (\rho_{ij}), where \rho_{ij} is the weight between nodes i and j. The edge weights measure similarity or dissimilarity between nodes. The edge weights in most brain networks are given by some similarity measure between nodes [7, 8, 9, 10, 12]. A weighted network X = (V, \rho) is formed by the pair of the node set V and the edge weights \rho. If \rho is a metric, the network interpretation is straightforward. We will show how to construct a metric using correlations.

Consider the n \times 1 measurement vector x_j = (x_{1j}, \dots, x_{nj})^\top on node j. Suppose we center and rescale the measurement x_j such that

\|x_j\|^2 = x_j^\top x_j = \sum_{i=1}^n x_{ij}^2 = 1 \quad \text{and} \quad \sum_{i=1}^n x_{ij} = 0.

Naturally, we are interested in using correlations, or simple functions of them, as edge weights, i.e., \rho_{ij} = x_i^\top x_j or \rho_{ij} = 1 - x_i^\top x_j.
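To make the setup concrete, a small NumPy sketch (my own illustration) builds both candidate p × p edge-weight matrices from node-wise measurement vectors after centering and rescaling:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 5
X = rng.normal(size=(n, p))          # column j holds the measurement x_j on node j
X = X - X.mean(axis=0)               # center: sum_i x_ij = 0
X = X / np.linalg.norm(X, axis=0)    # rescale: ||x_j|| = 1

S = X.T @ X          # rho_ij = x_i' x_j  (correlation weights)
D = 1.0 - S          # rho_ij = 1 - x_i' x_j  (dissimilarity weights)

assert np.allclose(S, S.T)           # both candidates are symmetric
assert np.allclose(np.diag(D), 0.0)  # zero self-dissimilarity
```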
However, not every function of correlations is a metric.

Example 1 (Zhiwei Ma of University of Chicago) \rho_{ij} = 1 - x_i^\top x_j is not a metric. Consider the following 3-node counterexample:

x_i = \left(0, \tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\right), \quad x_j = \left(\tfrac{1}{\sqrt{2}}, 0, -\tfrac{1}{\sqrt{2}}\right), \quad x_k = \left(\tfrac{1}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}, -\tfrac{2}{\sqrt{6}}\right).

Then we have \rho_{ij} > \rho_{ik} + \rho_{jk}, violating the triangle inequality.

An interesting methodological question is then to identify the minimum conditions that make a function of correlations a metric.

Theorem 1 For centered and scaled data x_1, \dots, x_p, let \rho_{ij} = \cos^{-1}(x_i^\top x_j). Then \rho_{ij} is a metric.

Proof. On the unit sphere S^{n-1}, the correlation between x_i and x_j is the cosine of the angle \theta_{ij} between the two vectors, i.e., x_i^\top x_j = \cos \theta_{ij}.
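One concrete 3-node counterexample (consistent with Example 1) can be verified numerically; the three vectors below are centered and unit-norm, so dot products are correlations:

```python
import numpy as np

xi = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)
xj = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
xk = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)

# Each vector sums to zero and has unit norm.
for v in (xi, xj, xk):
    assert np.isclose(v.sum(), 0.0) and np.isclose(v @ v, 1.0)

rho = lambda a, b: 1.0 - a @ b
# Triangle inequality fails: rho_ij = 0.5 but rho_ik + rho_jk is about 0.27.
assert rho(xi, xj) > rho(xi, xk) + rho(xj, xk)
```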
The geodesic distance \rho between nodes x_i and x_j on the unit sphere is given by the angle \theta_{ij}:

\rho(x_i, x_j) = \cos^{-1}(x_i^\top x_j).

For nodes x_i, x_j \in S^{n-1}, there are two possible angles, \theta_{ij} and 2\pi - \theta_{ij}, depending on whether we measure the angle along the shorter or the longer arc. We take the convention of using the smaller angle in defining \theta_{ij}. With this convention, \rho(x_i, x_j) \leq \pi. Given three nodes x_i, x_j and x_k, which form a spherical triangle, we then have the spherical triangle inequality

\rho(x_i, x_j) \leq \rho(x_i, x_k) + \rho(x_k, x_j).  (3)

The proof of (3) is given in [11]. Thus we have proved that \rho is a metric.

Theorem 2 For any metric \rho_{ij}, f(\rho_{ij}) is also a metric if f(0) = 0 and f(x) is increasing and concave for x > 0.

The proof is given in [13].

Example 2 Any power [\cos^{-1}(x_i^\top x_j)]^{1/m} for m \geq 1 is a metric. When m = 1, we have the simplest possible metric \rho(x_i, x_j) = \cos^{-1}(x_i^\top x_j), which attains its minimum 0 when x_i^\top x_j = 1 and its maximum \pi when x_i^\top x_j = -1.
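Theorem 1 and Example 2 can be illustrated numerically (a Python sketch of mine using random centered, unit-norm vectors; this demonstrates, but of course does not prove, the inequalities):

```python
import numpy as np

def center_scale(x):
    # Embed x onto the unit sphere: zero mean, unit norm.
    xc = x - x.mean()
    return xc / np.linalg.norm(xc)

def angular(a, b):
    # rho = arccos(a' b): geodesic distance on the unit sphere, in [0, pi].
    return np.arccos(np.clip(a @ b, -1.0, 1.0))

rng = np.random.default_rng(3)
xi, xj, xk = (center_scale(rng.normal(size=20)) for _ in range(3))

# Spherical triangle inequality (Theorem 1).
assert angular(xi, xj) <= angular(xi, xk) + angular(xk, xj) + 1e-12

# Example 2: the 1/m-th power (increasing, concave, f(0) = 0) is still a metric.
m = 3
lhs = angular(xi, xj) ** (1 / m)
rhs = angular(xi, xk) ** (1 / m) + angular(xk, xj) ** (1 / m)
assert lhs <= rhs + 1e-12
```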
Theorem 3 For any x_1, \dots, x_p \in \mathbb{R}^n, \rho_{ij} = [1 - corr(x_i, x_j)]^{1/2} is a metric, where corr(x_i, x_j) is the Pearson correlation.

This elementary proof is left as an exercise.

References

[1] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Fast and simple calculus on tensors in the log-Euclidean framework. Lecture Notes in Computer Science, 3749.

[2] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magnetic Resonance in Medicine, 56.

[3] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM Journal on Matrix Analysis and Applications, 29, 2007.
[4] M.K. Chung. Correlation coefficient. In The Encyclopedia of Measurement and Statistics (N.J. Salkind, editor), Sage Publications, pages 73-74.

[5] M.K. Chung, J.L. Hanson, J. Ye, R.J. Davidson, and S.D. Pollak. Persistent homology in sparse regression and its application to brain morphometry. IEEE Transactions on Medical Imaging, 34.

[6] M.K. Chung, V. Vilalta-Gil, H. Lee, P.J. Rathouz, B.B. Lahey, and D.H. Zald. Exact topological inference for paired brain networks via persistent homology. In Information Processing in Medical Imaging (IPMI), Lecture Notes in Computer Science, volume 10265.

[7] H. Lee, M.K. Chung, H. Kang, B.-N. Kim, and D.S. Lee. Computing the shape of brain networks using graph filtration and Gromov-Hausdorff metric. MICCAI, Lecture Notes in Computer Science, 6892.

[8] Y. Li, Y. Liu, J. Li, W. Qin, K. Li, C. Yu, and T. Jiang. Brain anatomical network and intelligence. PLoS Computational Biology, 5(5), 2009.
[9] A.R. McIntosh and F. Gonzalez-Lima. Structural equation modeling and its application to network analysis in functional brain imaging. Human Brain Mapping, 2:2-22.

[10] M.E.J. Newman and D.J. Watts. Scaling and percolation in the small-world network model. Physical Review E, 60.

[11] M. Reid and B. Szendrői. Geometry and Topology. Cambridge University Press.

[12] C. Song, S. Havlin, and H.A. Makse. Self-similarity of complex networks. Nature, 433.

[13] S. Van Dongen and A.J. Enright. Metric distances derived from cosine similarity and Pearson and Spearman correlations. arXiv preprint, 2012.
Math 20F Linear Algebra Lecture 16 1 Components and change of basis Slide 1 Review: Isomorphism Review: Components in a basis Unique representation in a basis Change of basis Review: Isomorphism Definition
More informationIMA Preprint Series # 2298
RELATING FIBER CROSSING IN HARDI TO INTELLECTUAL FUNCTION By Iman Aganj, Neda Jahanshad, Christophe Lenglet, Arthur W. Toga, Katie L. McMahon, Greig I. de Zubicaray, Margaret J. Wright, Nicholas G. Martin,
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 1 Material Dr. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University January 9, 2018 Linear Algebra (MTH 464)
More informationMATHEMATICS. Units Topics Marks I Relations and Functions 10
MATHEMATICS Course Structure Units Topics Marks I Relations and Functions 10 II Algebra 13 III Calculus 44 IV Vectors and 3-D Geometry 17 V Linear Programming 6 VI Probability 10 Total 100 Course Syllabus
More informationElements of Convex Optimization Theory
Elements of Convex Optimization Theory Costis Skiadas August 2015 This is a revised and extended version of Appendix A of Skiadas (2009), providing a self-contained overview of elements of convex optimization
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationExercise Solutions to Functional Analysis
Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More informationMATH10212 Linear Algebra B Homework Week 5
MATH Linear Algebra B Homework Week 5 Students are strongly advised to acquire a copy of the Textbook: D C Lay Linear Algebra its Applications Pearson 6 (or other editions) Normally homework assignments
More informationTensors, and differential forms - Lecture 2
Tensors, and differential forms - Lecture 2 1 Introduction The concept of a tensor is derived from considering the properties of a function under a transformation of the coordinate system. A description
More informationAlgorithms for Picture Analysis. Lecture 07: Metrics. Axioms of a Metric
Axioms of a Metric Picture analysis always assumes that pictures are defined in coordinates, and we apply the Euclidean metric as the golden standard for distance (or derived, such as area) measurements.
More informationDecomposition Methods for Representations of Quivers appearing in Topological Data Analysis
Decomposition Methods for Representations of Quivers appearing in Topological Data Analysis Erik Lindell elindel@kth.se SA114X Degree Project in Engineering Physics, First Level Supervisor: Wojtek Chacholski
More informationThe Geometrization Theorem
The Geometrization Theorem Matthew D. Brown Wednesday, December 19, 2012 In this paper, we discuss the Geometrization Theorem, formerly Thurston s Geometrization Conjecture, which is essentially the statement
More information