Matrix Recipes. Javier R. Movellan. December 28, 2006. Copyright © 2004 Javier R. Movellan.
1 Definitions

Let x, y be matrices of order m × n and o × p respectively, i.e.,

    x = \begin{pmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & & \vdots \\ x_{m1} & \cdots & x_{mn} \end{pmatrix},
    y = \begin{pmatrix} y_{11} & \cdots & y_{1p} \\ \vdots & & \vdots \\ y_{o1} & \cdots & y_{op} \end{pmatrix}    (1)

We refer to a matrix of order 1 × 1 as a scalar, and to a matrix of order m × 1 with m > 1 as a vector. We let f be a generic function of the form y = f(x). If x is a vector and y is a scalar we say that f is a scalar function of a vector. If x is a matrix and y a vector, we say that f is a vector function of a matrix, and so on.

General Definition: Derivative of a matrix function of a matrix

    dy/dx = \begin{pmatrix} dy/dx_{11} & \cdots & dy/dx_{1n} \\ \vdots & & \vdots \\ dy/dx_{m1} & \cdots & dy/dx_{mn} \end{pmatrix}    (2)

where

    dy/dx_{ij} = \begin{pmatrix} dy_{11}/dx_{ij} & \cdots & dy_{1q}/dx_{ij} \\ \vdots & & \vdots \\ dy_{p1}/dx_{ij} & \cdots & dy_{pq}/dx_{ij} \end{pmatrix}    (3)

Thus the derivative of a p × q matrix y with respect to an m × n matrix x is an (mp) × (nq) matrix of the following form:

    dy/dx = \begin{pmatrix} dy_{11}/dx_{11} & \cdots & dy_{1q}/dx_{11} & \cdots & dy_{11}/dx_{1n} & \cdots & dy_{1q}/dx_{1n} \\ \vdots & & & & & & \vdots \\ dy_{p1}/dx_{m1} & \cdots & dy_{pq}/dx_{m1} & \cdots & dy_{p1}/dx_{mn} & \cdots & dy_{pq}/dx_{mn} \end{pmatrix}    (4)

From this definition we can derive a variety of important special cases.

Derivative of a scalar function of a vector: We think of a vector as a matrix with a single column and a scalar as a matrix with a single cell.
Applying the general definition we get

    dy/dx = \begin{pmatrix} dy/dx_1 \\ \vdots \\ dy/dx_m \end{pmatrix}    (5)

Gradient of a scalar function of a vector: We represent gradients using the ∇ sign. For gradients of scalar functions of a vector, we define the gradient to equal the derivative (note this will not be the case for vector functions of vectors):

    ∇_x y = dy/dx    (6)

Derivative of a vector function of a vector: If we think of y and x as p × 1 and m × 1 matrices, then the derivative is a vector with mp dimensions:

    dy/dx = ( dy_1/dx_1, ..., dy_p/dx_1, ..., dy_1/dx_m, ..., dy_p/dx_m )^T    (7)

Jacobian of a vector function of a vector: A more useful representation can be obtained by working with the derivative of y with respect to the transpose of x. This results in a p × m matrix known as the Jacobian of y with respect to x:

    J_x y = dy/dx^T = \begin{pmatrix} dy_1/dx_1 & \cdots & dy_1/dx_m \\ \vdots & & \vdots \\ dy_p/dx_1 & \cdots & dy_p/dx_m \end{pmatrix}    (8)

The Jacobian is very useful for linearizing vector functions of vectors:

    f(x) ≈ f(x_0) + (J_x f)(x - x_0)    (9)

Gradient of a vector function of a vector: Note that if y is a scalar then J_x y = dy/dx^T = (∇_x y)^T, i.e., the Jacobian of a scalar function is the transpose of the gradient. With this in mind we define the gradient of a vector function of a vector as the transpose of the Jacobian:

    ∇_x y = (J_x y)^T = (dy/dx^T)^T = dy^T/dx    (10)

Hessian of a scalar function of a vector: Let y = f(x), y ∈ R, x ∈ R^n. The matrix

    H_x y = ∇^2_x y = ∇_x ∇_x y = \begin{pmatrix} ∂^2 y/∂x_1 ∂x_1 & \cdots & ∂^2 y/∂x_1 ∂x_n \\ \vdots & & \vdots \\ ∂^2 y/∂x_n ∂x_1 & \cdots & ∂^2 y/∂x_n ∂x_n \end{pmatrix}    (11)

is called the Hessian matrix of y with respect to x.
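The Jacobian and the linearization in eq. (9) are easy to check numerically. Below is a minimal sketch, assuming numpy is available; the function f and the point x_0 are made-up examples, and the Jacobian is approximated by forward finite differences:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian J_x f: row i is output i, column j is input j (eq. 8)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - fx) / eps
    return J

# Example: f(x) = (x1*x2, sin(x1)) has Jacobian [[x2, x1], [cos(x1), 0]]
f = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
x0 = np.array([0.5, 2.0])
J = jacobian(f, x0)

# Linearization (eq. 9): f(x) ≈ f(x0) + J (x - x0) for x near x0
x = x0 + 1e-3 * np.array([1.0, -1.0])
approx = f(x0) + J @ (x - x0)
```

For points close to x_0 the linearization agrees with the exact function up to second-order terms.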
2 Summary of Definitions

For x, y vectors:

    Gradient:  ∇_x y = dy^T/dx    (12)
    Jacobian:  J_x y = (∇_x y)^T = dy/dx^T    (13)
    Hessian:   H_x y = ∇^2_x y = ∇_x ∇_x y    (14)

3 Chain Rules

1. Vector functions of vectors: If z = h(y) is a vector function of a vector, and y = f(x) is a vector function of a vector, then

    J_x z = (J_y z)(J_x y)    (15)

or equivalently

    ∇_x z = (∇_x y)(∇_y z)    (16)

Example 1: Consider the quadratic function

    z = y^T a y    (17)

where y is an n-dimensional vector and a is an n × n matrix. It is easy to show that

    d(x^T a x)/dx_k = Σ_j x_j (a_{kj} + a_{jk})    (18)

thus

    J_x x^T a x = x^T (a + a^T)    (19)
    ∇_x x^T a x = (a + a^T) x    (20)

Moreover, if

    y = b x    (21)

where x is an m-dimensional vector and b is an n × m matrix, then

    J_x y = b    (22)
    ∇_x y = b^T    (23)
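Eq. (20) can be spot-checked against finite differences; a minimal sketch assuming numpy is available, with an arbitrary random matrix a standing in for the quadratic form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Closed form from eq. (20): gradient of x'ax is (a + a')x
grad_closed = (a + a.T) @ x

# Forward finite-difference gradient for comparison
eps = 1e-6
grad_fd = np.array([
    ((x + eps * e) @ a @ (x + eps * e) - x @ a @ x) / eps
    for e in np.eye(n)
])
```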
Thus

    J_x (bx - c)^T a (bx - c) = ( J_{bx-c} (bx - c)^T a (bx - c) ) ( J_x (bx - c) )    (24)
        = (bx - c)^T (a + a^T) b    (25)

and

    ∇_x (bx - c)^T a (bx - c) = b^T (a + a^T)(bx - c)    (26)

Example 2: Let x, y be vectors. By the chain rule,

    ∇_x ( e^{x^T y} y ) = ( ∇_x e^{x^T y} ) ( ∇_{e^{x^T y}} e^{x^T y} y )    (27)

where

    ∇_x e^{x^T y} = ( ∇_x x^T y ) ( d e^{x^T y} / d(x^T y) ) = y e^{x^T y}    (28)

    ∇_{e^{x^T y}} e^{x^T y} y = y^T    (29)

Thus

    ∇_x ( e^{x^T y} y ) = y e^{x^T y} y^T    (30)

2. Scalar functions of matrices: If z = h(y) is a scalar function of a matrix and y = f(x) is a matrix function of a scalar, then

    dz/dx = trace[ ( dz/dy )^T ( dy/dx ) ]    (31)
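The trace chain rule (31) can be illustrated with a simple case: take y(x) = x m for a fixed matrix m, and z = trace(y^T y), so that z = x^2 trace(m^T m) can also be differentiated directly. A sketch assuming numpy is available; the matrix m and the point x are made-up examples:

```python
import numpy as np

# y(x) = x * m is a matrix function of the scalar x;
# z = trace(y'y) is a scalar function of the matrix y.
m = np.array([[1.0, 2.0], [3.0, 4.0]])
x = 0.7

y = x * m
dydx = m          # derivative of y with respect to the scalar x
dzdy = 2 * y      # gradient of z = trace(y'y) with respect to y

# Chain rule (31): dz/dx = trace[(dz/dy)' (dy/dx)]
dzdx = np.trace(dzdy.T @ dydx)

# Direct computation: z = x^2 trace(m'm), so dz/dx = 2 x trace(m'm)
dzdx_direct = 2 * x * np.trace(m.T @ m)
```

Here trace(m^T m) = 30, so both routes give dz/dx = 2 · 0.7 · 30 = 42.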
4 Useful Formulae

(I need to check whether notation is consistent with other parts of the document.) Let a, b, c, d be matrices and x, y, z vectors.

4.1 Linear and Quadratic Functions

    ∇_x x^T c = c    (32)
    ∇_x c x = d(cx)^T/dx = c^T    (33)
    J_x c x = (∇_x c x)^T = c    (34)
    ∇_x (ax + b)^T c (dx + e) = a^T c (dx + e) + d^T c^T (ax + b)    (35)
    ∇_x (ax + b)^T c (ax + b) = a^T (c + c^T)(ax + b)    (36)
    ∇_a (ax + b)^T c (ax + b) = (c + c^T)(ax + b) x^T    (37)
    ∇_a x^T a^T b a y = b^T a x y^T + b a y x^T    (38)
    ∇_a x^T a y = x y^T    (39)
    ∇_a x^T e^{ta} y = t e^{t a^T} x y^T    (40)
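Formulae (35) and (37) can be verified against finite differences; a sketch assuming numpy is available, using arbitrary random square matrices and vectors for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
a, c, d = (rng.standard_normal((n, n)) for _ in range(3))
b, e, x = (rng.standard_normal(n) for _ in range(3))
eps = 1e-6

# Eq. (35): gradient with respect to x of (ax+b)'c(dx+e)
f = lambda x_: (a @ x_ + b) @ c @ (d @ x_ + e)
g35 = a.T @ c @ (d @ x + e) + d.T @ c.T @ (a @ x + b)
g35_fd = np.array([(f(x + eps * ei) - f(x)) / eps for ei in np.eye(n)])

# Eq. (37): gradient with respect to the matrix a of (ax+b)'c(ax+b)
h = lambda a_: (a_ @ x + b) @ c @ (a_ @ x + b)
g37 = np.outer((c + c.T) @ (a @ x + b), x)
g37_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        da = np.zeros((n, n))
        da[i, j] = eps
        g37_fd[i, j] = (h(a + da) - h(a)) / eps
```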
4.2 Traces

    ∇_a trace[a] = I    (41)
    ∇_b trace[abc] = ∇_b trace[c^T b^T a^T] = a^T c^T    (42)
    ∇_b trace[a b^n] = Σ_{i=0}^{n-1} ( b^i a b^{n-i-1} )^T    (43)
    ∇_e trace[a e b e^T c] = a^T c^T e b^T + c a e b    (44)
    ∇_a trace[a^T b a] = (b + b^T) a    (45)
    ∇_a trace[a^{-1} b] = -( a^{-1} b a^{-1} )^T    (46)

4.3 Determinants

    ∇_a det[a] = det[a] a^{-T}    (47)

4.4 Kronecker and Vecs

    d vec[abc] / d vec[b] = c^T ⊗ a    (48)

5 Matrix Differential Equations

    d(a_t + b_t)/dt = da_t/dt + db_t/dt    (49)
    d(a_t b_t)/dt = (da_t/dt) b_t + a_t (db_t/dt)    (50)
    d(a_t^2)/dt = (da_t/dt) a_t + a_t (da_t/dt)    (51)
    d(a_t a_t^{-1})/dt = 0 = (da_t/dt) a_t^{-1} + a_t (da_t^{-1}/dt)    (52)

hence

    da_t^{-1}/dt = -a_t^{-1} (da_t/dt) a_t^{-1}    (53)
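The inverse-derivative rule (53) and the trace, determinant, and vec formulae of Sections 4.2-4.4 can be spot-checked numerically. A sketch assuming numpy is available; the matrices are arbitrary random examples, and the column-stacking vec of eq. (67) corresponds to flattening in Fortran order:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))
c = rng.standard_normal((n, n))
eps = 1e-6

# Eq. (42): gradient of trace(abc) with respect to b is a'c'
g42 = a.T @ c.T
g42_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        db = np.zeros((n, n))
        db[i, j] = eps
        g42_fd[i, j] = (np.trace(a @ (b + db) @ c) - np.trace(a @ b @ c)) / eps

# Eq. (47): gradient of det(a) is det(a) * inv(a)'
g47 = np.linalg.det(a) * np.linalg.inv(a).T
g47_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        da = np.zeros((n, n))
        da[i, j] = eps
        g47_fd[i, j] = (np.linalg.det(a + da) - np.linalg.det(a)) / eps

# Eq. (48): vec(abc) = (c' kron a) vec(b), with column-stacking vec
vec = lambda m: m.flatten(order='F')
lhs = vec(a @ b @ c)
rhs = np.kron(c.T, a) @ vec(b)

# Eq. (53): d(a_t^{-1})/dt = -a_t^{-1} (da_t/dt) a_t^{-1}, with a_t = a + t*b
t, dt = 0.3, 1e-6
at = a + t * b
dinv_closed = -np.linalg.inv(at) @ b @ np.linalg.inv(at)
dinv_fd = (np.linalg.inv(a + (t + dt) * b) - np.linalg.inv(a + (t - dt) * b)) / (2 * dt)
```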
It is only possible to expect d e^{a_t}/dt = (da_t/dt) e^{a_t} under conditions of full commutativity.

Types of linear matrix differential equations:

    da_t/dt = b a_t c    (56)

with special cases when b or c is the identity matrix. The following tricks allow moving the coefficients to the left or to the right. If

    da_t/dt = a_t c    (58)

then

    da_t^T/dt = c^T a_t^T    (59)

and if

    da_t/dt = b a_t    (61)

then

    da_t^{-1}/dt = -a_t^{-1} (da_t/dt) a_t^{-1} = -a_t^{-1} b a_t a_t^{-1}    (62)
        = -a_t^{-1} b    (63)

It can be shown that if a_0 is non-singular, then the solution a_t is non-singular for all t.
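For the case da_t/dt = b a_t of eq. (61), the solution with initial condition a_0 is a_t = e^{tb} a_0, which can be checked numerically. A sketch assuming numpy is available, with the matrix exponential computed from its power series (adequate when the norm of tb is small) and arbitrary random b and a_0:

```python
import numpy as np

def expm(m, terms=30):
    """Matrix exponential by truncated power series (fine for small norms)."""
    out = np.eye(m.shape[0])
    term = np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

rng = np.random.default_rng(5)
n = 3
b = 0.5 * rng.standard_normal((n, n))
a0 = rng.standard_normal((n, n))

# Candidate solution of da_t/dt = b a_t with initial condition a_0
t = 0.7
at = expm(t * b) @ a0

# Check the differential equation with a centered difference in t
h = 1e-5
dat = (expm((t + h) * b) @ a0 - expm((t - h) * b) @ a0) / (2 * h)
```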
6 Determinants

    det[I + x y^T] = 1 + y^T x,  x, y ∈ R^n
    det[e^a] = e^{trace(a)}

7 Trace

    trace(a + b) = trace(a) + trace(b)
    trace(a) = trace(a^T)
    trace(abc) = trace(cab) = trace(bca)
    trace(x y^T a) = y^T a x,  x ∈ R^m, y ∈ R^n
    trace(a^T r b^T) = a^T r b  (confirm r does not need rotation)

If a is m × n and b is n × m then trace(ab) = trace(ba) = trace(a^T b^T).

8 Matrix Exponentials

    det[e^a] = e^{trace(a)}
    if a = p Λ p^{-1} then e^a = p e^Λ p^{-1}
    d e^{ta}/dt = a e^{ta}
    ∇_a x^T e^{ta} y = t e^{t a^T} x y^T

8.1 Proof:

    [ ∇_a x^T e^{ta} y ]_{ij} = d/dδ [ x^T e^{t(a + δ 1_i 1_j^T)} y ]_{δ=0}
        = x^T e^{ta} [ d/dδ e^{tδ 1_i 1_j^T} y ]_{δ=0}
        = x^T e^{ta} t 1_i 1_j^T [ e^{tδ 1_i 1_j^T} ]_{δ=0} y
        = t ( e^{t a^T} x )_i y_j    (64)

Thus

    ∇_a x^T e^{ta} y = t e^{t a^T} x y^T    (65)

(The factorization in the first step assumes the commutativity conditions noted in Section 5.)

9 Matrix Logarithms

Let a be a real or complex square matrix of order n with positive eigenvalues. Then there is a unique matrix b such that (1) a = e^b and (2) the imaginary part of the eigenvalues of b is in [-π, π]. We call b the principal logarithm of a. If a = p Λ p^{-1} then log(a) = p log(Λ) p^{-1}.
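The determinant identities of Section 6, the eigendecomposition form of the matrix exponential (Section 8), and the principal logarithm (Section 9) can be illustrated numerically. A sketch assuming numpy is available, using an arbitrary random symmetric matrix so that np.linalg.eigh applies:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# det(I + x y') = 1 + y'x
lhs = np.linalg.det(np.eye(n) + np.outer(x, y))
rhs = 1.0 + y @ x

# For symmetric a = p Λ p', e^a = p e^Λ p', and det(e^a) = e^{trace(a)}
s = rng.standard_normal((n, n))
a = (s + s.T) / 2
lam, p = np.linalg.eigh(a)
expa = (p * np.exp(lam)) @ p.T          # p diag(e^λ) p'
det_expa = np.linalg.det(expa)

# Principal logarithm: log via the eigendecomposition recovers a from e^a
w, q = np.linalg.eigh(expa)             # expa is symmetric positive definite
loga = (q * np.log(w)) @ q.T
```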
10 Kronecker and Vec

These provide a way to deal with derivatives of matrix functions without having to use cubix or quartix objects (i.e., arrays with 3 or 4 dimensions). Instead of working with matrix functions we work with vectorized versions of matrix functions. This gives rise to the Kronecker product, or tensor product.

Definition: Kronecker product

    a ⊗ b = \begin{pmatrix} a_{11} b & \cdots & a_{1n} b \\ \vdots & & \vdots \\ a_{m1} b & \cdots & a_{mn} b \end{pmatrix}    (66)

Definition: Vec operator

    vec[a] = ( a_{11}, ..., a_{m1}, a_{12}, ..., a_{m2}, ..., a_{1n}, ..., a_{mn} )^T    (67)

10.1 Properties

    a ⊗ b ⊗ c = (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c)    (68)

provided the dimensions of the matrices allow all the expressions to exist.

    (a + b) ⊗ (c + d) = a ⊗ c + a ⊗ d + b ⊗ c + b ⊗ d    (69)
    (a ⊗ b)(c ⊗ d) = (ac) ⊗ (bd)    (70)
    (a ⊗ b)^T = a^T ⊗ b^T    (72)
    (a ⊗ b)^{-1} = a^{-1} ⊗ b^{-1}    (73)
    vec[a b^T] = b ⊗ a    (74)
    vec[abc] = (c^T ⊗ a) vec[b]    (75)

If {λ_i, u_i} are eigenvalue/eigenvector pairs of a and {δ_j, v_j} are eigenvalue/eigenvector pairs of b, then {λ_i δ_j, u_i ⊗ v_j} are eigenvalue/eigenvector pairs of a ⊗ b.

    det[a ⊗ b] = det[a]^m det[b]^n    (76)

where a, b are of order n × n and m × m respectively.

    trace(a ⊗ b) = trace[a] trace[b]    (77)
    trace(a x b x^T) = vec[x]^T (b^T ⊗ a) vec[x],  so  ∇_{vec x} trace(a x b x^T) = (b ⊗ a^T + b^T ⊗ a) vec[x]    (78)

where a, b, x are matrices.

11 Optimization of Quadratic Functions

This is arguably the most useful optimization problem in applied mathematics. Its solution is behind a large variety of useful algorithms, including multivariate linear regression, the Kalman filter, linear quadratic controllers, etc. Let

    ρ(x) = E[ (bx - C)^T a (bx - C) ] + x^T d x    (79)

where a and d are symmetric positive definite matrices, b is a matrix, x a vector, and C a random vector with mean c = E[C]. Taking the Jacobian with respect to x and applying the chain rule we have

    J_x ρ = ( J_{bx-c} (bx - c)^T a (bx - c) ) ( J_x (bx - c) ) + J_x x^T d x    (80)
        = 2 (bx - c)^T a b + 2 x^T d    (81)
    ∇_x ρ = (J_x ρ)^T = 2 b^T a (bx - c) + 2 d x    (82)

Setting the gradient to zero we get

    (b^T a b + d) x = b^T a c    (83)

This is commonly known as the Normal Equation. Thus the value x̂ that minimizes ρ is

    x̂ = h c    (84)
where

    h = (b^T a b + d)^{-1} b^T a    (85)

Moreover,

    ρ(x̂) = (b h c - c)^T a (b h c - c) + c^T h^T d h c    (86)
        = c^T h^T b^T a b h c - 2 c^T h^T b^T a c + c^T a c + c^T h^T d h c    (87)

Now note

    c^T h^T b^T a b h c + c^T h^T d h c = c^T h^T (b^T a b + d) h c    (88)
        = c^T a^T b (b^T a b + d)^{-1} (b^T a b + d)(b^T a b + d)^{-1} b^T a c    (89)
        = c^T a^T b (b^T a b + d)^{-1} b^T a c    (90)
        = c^T h^T b^T a c    (91)

Thus

    ρ(x̂) = c^T a c - c^T h^T b^T a c = c^T k c    (92)

where

    k = a - h^T b^T a = a - a^T b (b^T a b + d)^{-1} b^T a    (93)

This is known as the Riccati Equation, which is found in a variety of stochastic filtering and control problems.

Example Application: Ridge Regression. Let y ∈ R^n represent a set of observations on a variable we want to predict, let b be an n × p matrix where each row is a set of observations on p variables used to predict y, and let x ∈ R^p be the weights given to each variable to predict y. A useful measure of error is

    ρ(x) = (bx - y)^T (bx - y) + λ x^T x    (94)

where λ ≥ 0 is a constant that penalizes the use of large values of x. This is the special case of the previous problem with a = I_n, d = λ I_p, and c = y, so the solution is

    x̂ = (b^T b + λ I_p)^{-1} b^T y    (95)

where I_p is the p × p identity matrix.
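The normal equation (83), the Riccati form (92)-(93), and the ridge regression solution (95) can be checked numerically. A sketch assuming numpy is available, with arbitrary random positive definite matrices a and d:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 5, 3
b = rng.standard_normal((n, m))
c = rng.standard_normal(n)
qa = rng.standard_normal((n, n))
a = qa @ qa.T + n * np.eye(n)        # symmetric positive definite
qd = rng.standard_normal((m, m))
d = qd @ qd.T + m * np.eye(m)        # symmetric positive definite

rho = lambda x: (b @ x - c) @ a @ (b @ x - c) + x @ d @ x

# Normal equation (83) and minimizer (84)-(85)
xhat = np.linalg.solve(b.T @ a @ b + d, b.T @ a @ c)
grad = 2 * b.T @ a @ (b @ xhat - c) + 2 * d @ xhat    # eq. (82), zero at xhat

# Riccati form (92)-(93): rho(xhat) = c' k c
h = np.linalg.solve(b.T @ a @ b + d, b.T @ a)         # eq. (85)
k = a - h.T @ b.T @ a

# Ridge regression (94)-(95): the special case a = I, d = λI, c = y
lam = 0.5
y = rng.standard_normal(n)
xridge = np.linalg.solve(b.T @ b + lam * np.eye(m), b.T @ y)
grad_ridge = 2 * b.T @ (b @ xridge - y) + 2 * lam * xridge
```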
12 Optimization Methods

12.1 Newton-Raphson Method

Let y = f(x), for y ∈ R, x ∈ R^n. The Newton-Raphson algorithm is an iterative method for optimizing y. We start the process at an arbitrary point x_0 ∈ R^n. Let x_t ∈ R^n represent the state of the algorithm at iteration t. We approximate the function f using the linear and quadratic terms of the Taylor expansion of f around x_t:

    f̂_t(x) = f(x_t) + ∇_x f(x_t)^T (x - x_t) + (1/2)(x - x_t)^T ( ∇_x ∇_x f(x_t) ) (x - x_t)    (96)

and then we find the extremum of f̂_t with respect to x and move directly to that extremum. To do so, note that

    ∇_x f̂_t(x) = ∇_x f(x_t) + ( ∇_x ∇_x f(x_t) ) (x - x_t)    (97)

We let x_{t+1} be the value of x for which ∇_x f̂_t(x) = 0:

    x_{t+1} = x_t - ( ∇_x ∇_x f(x_t) )^{-1} ∇_x f(x_t)    (98)

It is useful to compare the Newton-Raphson method with the standard method of gradient descent. The gradient descent iteration is defined as follows:

    x_{t+1} = x_t - ε I_n ∇_x f(x_t)    (99)

where ε is a small positive constant. Thus gradient descent can be seen as a Newton-Raphson method in which the Hessian matrix is approximated by (1/ε) I_n.

12.2 Gauss-Newton Method

Let f(x) = Σ_{i=1}^n r_i(x)^2, with r_i : R^n → R. We start the process with an arbitrary point x_0 ∈ R^n. Let x_t ∈ R^n represent the state of the algorithm at iteration t. We approximate the functions r_i using the linear term of their Taylor expansion around x_t:

    r̂_i(x) = r_i(x_t) + ( ∇_x r_i(x_t) )^T (x - x_t)    (100)

    f̂_t(x) = Σ_{i=1}^n r̂_i(x)^2 = Σ_{i=1}^n ( r_i(x_t) - ( ∇_x r_i(x_t) )^T x_t + ( ∇_x r_i(x_t) )^T x )^2    (101)

Minimizing f̂_t(x) is a linear least squares problem with a well known solution. If we let y_i = ( ∇_x r_i(x_t) )^T x_t - r_i(x_t) and u_i = ∇_x r_i(x_t), then

    x_{t+1} = ( Σ_{i=1}^n u_i u_i^T )^{-1} ( Σ_{i=1}^n u_i y_i )    (103)

Note this is equivalent to Newton-Raphson with the Hessian approximated by Σ_{i=1}^n u_i u_i^T.
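A minimal sketch of both methods, assuming numpy is available. The Newton-Raphson example uses a made-up function with a known minimum at the origin; the Gauss-Newton example uses linear residuals, for which the update (103) reduces to ordinary least squares and reaches the minimizer in a single step:

```python
import numpy as np

# Newton-Raphson (eq. 98) on f(x) = x1^4 + 2 x2^2, whose minimum is the origin
grad = lambda x: np.array([4 * x[0]**3, 4 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 4.0]])

x = np.array([1.0, -2.0])
for _ in range(60):
    x = x - np.linalg.solve(hess(x), grad(x))
# The quadratic coordinate converges in one step; the quartic one geometrically.

# Gauss-Newton (eq. 103) with linear residuals r_i(x) = u_i'x - z_i
rng = np.random.default_rng(8)
U = rng.standard_normal((10, 2))     # row i is u_i'
z = rng.standard_normal(10)
x0 = np.zeros(2)
yvec = U @ x0 - (U @ x0 - z)         # y_i = u_i' x_t - r_i(x_t)
xg = np.linalg.solve(U.T @ U, U.T @ yvec)
```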
History

1. The first version of this document was written by Javier R. Movellan in May 2004, based on an appendix from the Multivariate Logistic Regression Primer at the Kolmogorov Project. The original document was 7 pages long.
More informationYear 11 Matrices Semester 2. Yuk
Year 11 Matrices Semester 2 Chapter 5A input/output Yuk 1 Chapter 5B Gaussian Elimination an Systems of Linear Equations This is an extension of solving simultaneous equations. What oes a System of Linear
More informationExam 2 Review Solutions
Exam Review Solutions 1. True or False, an explain: (a) There exists a function f with continuous secon partial erivatives such that f x (x, y) = x + y f y = x y False. If the function has continuous secon
More informationOPG S. LIST OF FORMULAE [ For Class XII ] OP GUPTA. Electronics & Communications Engineering. Indira Award Winner
OPG S MAHEMAICS LIS OF FORMULAE [ For Class XII ] Covering all the topics of NCER Mathematics et Book Part I For the session 0-4 By OP GUPA Electronics & Communications Engineering Inira Awar Winner Visit
More informationA Note on Exact Solutions to Linear Differential Equations by the Matrix Exponential
Avances in Applie Mathematics an Mechanics Av. Appl. Math. Mech. Vol. 1 No. 4 pp. 573-580 DOI: 10.4208/aamm.09-m0946 August 2009 A Note on Exact Solutions to Linear Differential Equations by the Matrix
More informationAgmon Kolmogorov Inequalities on l 2 (Z d )
Journal of Mathematics Research; Vol. 6, No. ; 04 ISSN 96-9795 E-ISSN 96-9809 Publishe by Canaian Center of Science an Eucation Agmon Kolmogorov Inequalities on l (Z ) Arman Sahovic Mathematics Department,
More informationAPPPHYS 217 Thursday 8 April 2010
APPPHYS 7 Thursay 8 April A&M example 6: The ouble integrator Consier the motion of a point particle in D with the applie force as a control input This is simply Newton s equation F ma with F u : t q q
More informationCalculus of Variations
16.323 Lecture 5 Calculus of Variations Calculus of Variations Most books cover this material well, but Kirk Chapter 4 oes a particularly nice job. x(t) x* x*+ αδx (1) x*- αδx (1) αδx (1) αδx (1) t f t
More informationMA 2232 Lecture 08 - Review of Log and Exponential Functions and Exponential Growth
MA 2232 Lecture 08 - Review of Log an Exponential Functions an Exponential Growth Friay, February 2, 2018. Objectives: Review log an exponential functions, their erivative an integration formulas. Exponential
More informationJointly continuous distributions and the multivariate Normal
Jointly continuous istributions an the multivariate Normal Márton alázs an álint Tóth October 3, 04 This little write-up is part of important founations of probability that were left out of the unit Probability
More informationLagrangian and Hamiltonian Mechanics
Lagrangian an Hamiltonian Mechanics.G. Simpson, Ph.. epartment of Physical Sciences an Engineering Prince George s Community College ecember 5, 007 Introuction In this course we have been stuying classical
More informationRobust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k
A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine
More informationLecture Introduction. 2 Examples of Measure Concentration. 3 The Johnson-Lindenstrauss Lemma. CS-621 Theory Gems November 28, 2012
CS-6 Theory Gems November 8, 0 Lecture Lecturer: Alesaner Mąry Scribes: Alhussein Fawzi, Dorina Thanou Introuction Toay, we will briefly iscuss an important technique in probability theory measure concentration
More information2.20 Marine Hydrodynamics Lecture 3
2.20 Marine Hyroynamics, Fall 2018 Lecture 3 Copyright c 2018 MIT - Department of Mechanical Engineering, All rights reserve. 1.7 Stress Tensor 2.20 Marine Hyroynamics Lecture 3 1.7.1 Stress Tensor τ ij
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationSummary: Differentiation
Techniques of Differentiation. Inverse Trigonometric functions The basic formulas (available in MF5 are: Summary: Differentiation ( sin ( cos The basic formula can be generalize as follows: Note: ( sin
More informationElectricity and Magnetism Computer Lab #1: Vector algebra and calculus
Electricity an Magnetism Computer Lab #: Vector algebra an calculus We are going to learn how to use MathCa. First we use MathCa as a calculator. Type: 54.4+.-(/5+4/5)^= To get the power we type hat. 54.4.
More informationImplicit Differentiation. Lecture 16.
Implicit Differentiation. Lecture 16. We are use to working only with functions that are efine explicitly. That is, ones like f(x) = 5x 3 + 7x x 2 + 1 or s(t) = e t5 3, in which the function is escribe
More informationOrdinary Differential Equations
Orinary Differential Equations Example: Harmonic Oscillator For a perfect Hooke s-law spring,force as a function of isplacement is F = kx Combine with Newton s Secon Law: F = ma with v = a = v = 2 x 2
More informationCharacterizing Real-Valued Multivariate Complex Polynomials and Their Symmetric Tensor Representations
Characterizing Real-Value Multivariate Complex Polynomials an Their Symmetric Tensor Representations Bo JIANG Zhening LI Shuzhong ZHANG December 31, 2014 Abstract In this paper we stuy multivariate polynomial
More informationEuler Equations: derivation, basic invariants and formulae
Euler Equations: erivation, basic invariants an formulae Mat 529, Lesson 1. 1 Derivation The incompressible Euler equations are couple with t u + u u + p = 0, (1) u = 0. (2) The unknown variable is the
More information