PROBABILITY 2018 HANDOUT 2: GAUSSIAN DISTRIBUTION
GIOVANNI PISTONE

Date: Version March 22, 2018.

Contents
1. Standard Gaussian Distribution
2. Positive Definite Matrices
3. General Gaussian Distribution
4. Independence of Jointly Gaussian Random Variables
References

This handout covers multivariate Gaussian distributions and the relevant matrix theory. Two classical references are [3] and [1] (many reprints available). A modern advanced reference for positive definite matrices is [2].

1. Standard Gaussian Distribution

1 (Change of variable formula in $\mathbb{R}^d$). Let $A, B \subset \mathbb{R}^d$ be open and let $\phi$ be a diffeomorphism from $A$ onto $B$. Let $J\phi \colon A \to \mathrm{Mat}(d \times d)$ be the Jacobian mapping of $\phi$ and $J\phi^{-1} \colon B \to \mathrm{Mat}(d \times d)$ the Jacobian mapping of $\phi^{-1}$, so that $J\phi^{-1} = (J\phi \circ \phi^{-1})^{-1}$. For each nonnegative $f \colon B \to \mathbb{R}$,
\[
\int_B f(y)\,dy = \int_A f \circ \phi(x)\,\lvert\det J\phi(x)\rvert\,dx. \tag{1}
\]

Exercise 1. $A = {]0,2\pi[} \times {]0,+\infty[}$, $B = \mathbb{R}^2 \setminus \{(x,y) \in \mathbb{R}^2 \mid x \ge 0,\ y = 0\}$, $\phi(\theta,\rho) = (\rho\cos\theta, \rho\sin\theta)$.
\[
J\phi(\theta,\rho) = \begin{bmatrix} -\rho\sin\theta & \cos\theta \\ \rho\cos\theta & \sin\theta \end{bmatrix}, \qquad \lvert\det J\phi(\theta,\rho)\rvert = \rho,
\]
\[
\iint_{\mathbb{R}^2} e^{-(x^2+y^2)/2}\,dx\,dy = \iint_{]0,2\pi[\times]0,+\infty[} e^{-(\rho^2\cos^2\theta+\rho^2\sin^2\theta)/2}\,\rho\,d\theta\,d\rho = \iint_{]0,2\pi[\times]0,+\infty[} e^{-\rho^2/2}\,\rho\,d\theta\,d\rho = 2\pi.
\]

2 (Image of an absolutely continuous measure). Let $(S,\mathcal{F},\mu)$ be a measure space, $p \colon S \to \mathbb{R}_{>0}$ a probability density, $(X,\mathcal{G})$ a measurable space, and $\phi \colon S \to X$ a measurable function. If $\phi$ has a measurable inverse, then the image measure is characterised by
\[
\int f\,d\phi_{\#}(p\cdot\mu) = \int (f\circ\phi)\,p\,d\mu = \int (f\circ\phi)\,(p\circ\phi^{-1}\circ\phi)\,d\mu = \int f\,(p\circ\phi^{-1})\,d\phi_{\#}\mu,
\]
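The polar-coordinate computation of Exercise 1 can be checked numerically. The following sketch (not part of the original handout) integrates the radial factor $\rho\,e^{-\rho^2/2}$ by the trapezoidal rule on a truncated domain; the angular factor contributes $2\pi$.

```python
import numpy as np

# Numerical check of Exercise 1: in polar coordinates the Gaussian integral
# over R^2 factors as (length of ]0,2pi[) times the radial integral of
# rho * exp(-rho^2/2).  We truncate ]0,+inf[ at rho = 10, where the
# remaining tail exp(-50) is negligible.
rho = np.linspace(0.0, 10.0, 100_001)
f = rho * np.exp(-rho**2 / 2)                        # radial integrand
inner = np.sum((f[:-1] + f[1:]) / 2 * np.diff(rho))  # trapezoidal rule
val = 2 * np.pi * inner                              # close to 2*pi
```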
hence $\phi_{\#}(p\cdot\mu) = (p\circ\phi^{-1})\cdot\phi_{\#}\mu$. Eq. (1) applied to $f\circ\phi$ and the diffeomorphism $\phi^{-1}$ gives
\[
\int f\,d(\phi_{\#}\ell) = \int f\circ\phi(x)\,dx = \int f\circ\phi\circ\phi^{-1}(y)\,\lvert\det J\phi^{-1}(y)\rvert\,dy = \int f(y)\,\lvert\det J\phi^{-1}(y)\rvert\,dy = \int f(y)\,\lvert\det J\phi(\phi^{-1}(y))\rvert^{-1}\,dy.
\]
This shows that the image of the Lebesgue measure $\ell$ under a diffeomorphism is
\[
\phi_{\#}\ell = \lvert\det J\phi^{-1}\rvert\cdot\ell = \lvert\det(J\phi\circ\phi^{-1})\rvert^{-1}\cdot\ell.
\]

Exercise 2. $A = {]0,1[}\times{]0,1[}$, $B \subset \mathbb{R}^2$, $\phi(u,v) = \left(\sqrt{-2\log u}\,\cos(2\pi v),\ \sqrt{-2\log u}\,\sin(2\pi v)\right)$,
\[
J\phi(u,v) = \begin{bmatrix} -\dfrac{1}{u}(-2\log u)^{-1/2}\cos(2\pi v) & -2\pi\sqrt{-2\log u}\,\sin(2\pi v) \\[1ex] -\dfrac{1}{u}(-2\log u)^{-1/2}\sin(2\pi v) & 2\pi\sqrt{-2\log u}\,\cos(2\pi v) \end{bmatrix},
\]
\[
\lvert\det J\phi(u,v)\rvert = \frac{2\pi}{u}, \qquad \lvert\det J\phi\circ\phi^{-1}(x,y)\rvert = 2\pi\,e^{(x^2+y^2)/2}.
\]
The image of the uniform probability measure on $]0,1[^2$ under $\phi$ is $(2\pi)^{-1}e^{-(x^2+y^2)/2}\,dx\,dy$.

3 (Marginalization). The previous argument does not apply when $\Phi$ is not 1-to-1. We will show in the chapter on conditioning that in such a case $\Phi_{\#}(p\cdot\mu) = \hat p\cdot\Phi_{\#}(\mu)$, where $\hat p$ is the conditional expectation of $p$ with respect to $\Phi$. However, there are two common and simple cases, namely the finite state space case and marginalisation. Assume $\mu = \mu_1\otimes\mu_2$ on $S = S_1\times S_2$ and consider the marginal projection $\Phi\colon (x_1,x_2)\mapsto x_1$. Then $\Phi^{-1}(A_1) = A_1\times S_2$ and $\mu(\Phi^{-1}(A_1)) = \mu(A_1\times S_2) = \mu_1(A_1)$, hence $\Phi_{\#}(\mu) = \mu_1$. Let $p$ be a density on $S$ with respect to $\mu$. For each positive $f\colon S_1\to\mathbb{R}$ we have
\[
\int f\,d\Phi_{\#}(p\cdot\mu) = \int f\circ\Phi\,d(p\cdot\mu) = \iint f(x_1)\,p(x_1,x_2)\,\mu(dx_1,dx_2) = \int f(x_1)\left(\int p(x_1,x_2)\,\mu_2(dx_2)\right)\mu_1(dx_1),
\]
so that
\[
\Phi_{\#}(p\cdot\mu) = p_1\cdot\mu_1, \qquad p_1(x_1) = \int p(x_1,x_2)\,\mu_2(dx_2).
\]
For example, if $p(x_1,x_2) = (2\pi)^{-1}e^{-(x_1^2+x_2^2)/2}$, then
\[
\int p(x_1,x_2)\,dx_2 = (2\pi)^{-1/2}e^{-x_1^2/2}\int (2\pi)^{-1/2}e^{-x_2^2/2}\,dx_2 = c\,(2\pi)^{-1/2}e^{-x_1^2/2},
\]
with $c = \int(2\pi)^{-1/2}e^{-x_2^2/2}\,dx_2 = 1$, as the further integration with respect to $dx_1$ shows. Notice that the argument applies to all $p(x_1,x_2) = c\,f(x_1)f(x_2)$.

4. The real random variable $Z$ is standard Gaussian, $Z\sim\mathrm{N}_1(0,1)$, if its distribution $\nu$ has density
\[
\mathbb{R}\ni z\mapsto\gamma(z) = (2\pi)^{-\frac12}\exp\left(-\frac12 z^2\right)
\]
with respect to the Lebesgue measure. It is in fact a density; see above the computation of its two-fold product.
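The map of Exercise 2 is the classical Box–Muller transform, and it can be used directly to sample standard Gaussian variates from uniforms. A short sketch (illustrative, not from the original handout):

```python
import numpy as np

def box_muller(n, seed=None):
    """Sample 2n standard Gaussian variates from uniforms on ]0,1[^2
    via the map phi of Exercise 2 (Box-Muller transform)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    v = rng.uniform(size=n)
    r = np.sqrt(-2.0 * np.log(u))      # radial part, distributed as sqrt(chi^2_2)
    x = r * np.cos(2.0 * np.pi * v)
    y = r * np.sin(2.0 * np.pi * v)
    return np.concatenate([x, y])

z = box_muller(200_000, seed=0)        # empirical mean ~ 0, variance ~ 1
```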
Exercise 3. All moments $\mu(n) = \int z^n\gamma(z)\,dz$ exist. As $z\gamma(z) = -\gamma'(z)$, integration by parts produces a recurrence relation for the moments. [Hint: write $\int z^n\gamma(z)\,dz = \int z^{n-1}\,z\gamma(z)\,dz = \int z^{n-1}(-\gamma'(z))\,dz$ and perform an integration by parts.]

Exercise 4. If $f\colon\mathbb{R}\to\mathbb{R}$ is absolutely continuous, i.e., $f(z) = f(0)+\int_0^z f'(u)\,du$, with $\int\lvert f'(u)\rvert\,\gamma(u)\,du < +\infty$, then $\int\lvert zf(z)\rvert\,\gamma(z)\,dz < +\infty$. In fact,
\[
\int\lvert zf(z)\rvert\,\gamma(z)\,dz = \int\left\lvert z\left(f(0)+\int_0^z f'(u)\,du\right)\right\rvert\gamma(z)\,dz \le \lvert f(0)\rvert\int\lvert z\rvert\,\gamma(z)\,dz + \int\left\lvert z\int_0^z f'(u)\,du\right\rvert\gamma(z)\,dz.
\]
The first term in the RHS equals $\sqrt{2/\pi}\,\lvert f(0)\rvert$, while in the second term we have, for $z\ge 0$,
\[
\left\lvert\int_0^z f'(u)\,du\right\rvert \le \int (0\le u\le z)\,\lvert f'(u)\rvert\,du.
\]
We have
\[
\int_0^{\infty} z\left\lvert\int_0^z f'(u)\,du\right\rvert\gamma(z)\,dz \le \int_0^{\infty}\left(\int (0\le u\le z)\,\lvert f'(u)\rvert\,du\right) z\gamma(z)\,dz = \int_0^{\infty}\lvert f'(u)\rvert\int_u^{\infty} z\gamma(z)\,dz\,du = \int_0^{\infty}\lvert f'(u)\rvert\int_u^{\infty}(-\gamma'(z))\,dz\,du = \int_0^{\infty}\lvert f'(u)\rvert\,\gamma(u)\,du < \infty.
\]
A similar argument applies to the case $z\le 0$. This implies
\[
\int zf(z)\,\gamma(z)\,dz = \int f(z)(-\gamma'(z))\,dz = \int f'(z)\,\gamma(z)\,dz.
\]

Exercise 5. The Stein operator is $\delta f(z) = zf(z)-f'(z)$. We have
\[
\int f(z)g'(z)\,\gamma(z)\,dz = \int \delta f(z)\,g(z)\,\gamma(z)\,dz.
\]
We define the Hermite polynomials to be $H_n(z) = \delta^n 1$. For example, $H_1(z) = z$, $H_2(z) = z^2-1$, $H_3(z) = z^3-3z$. Hermite polynomials are orthogonal with respect to $\gamma$: $\int H_n(z)H_m(z)\,\gamma(z)\,dz = 0$ if $n>m$.

5. Let $Z\sim\mathrm{N}_1(0,1)$ and $X = b+aZ$, $a,b\in\mathbb{R}$. Then $\mathrm{E}(X) = b$, $\mathrm{E}(X^2) = a^2+b^2$, $\mathrm{Var}(X) = a^2$. If $a\ne 0$, then $\phi(z) = b+az$ is a diffeomorphism with inverse $\phi^{-1}(x) = a^{-1}(x-b)$, hence the density of $X$ is
\[
\frac{1}{\lvert a\rvert}\,\gamma\!\left(a^{-1}(x-b)\right) = (2\pi a^2)^{-1/2}\exp\left(-\frac{(x-b)^2}{2a^2}\right).
\]
If $a = 0$, then the distribution of $X = b$ is the Dirac measure at $b$. We say that $X$ is Gaussian with mean $b$ and variance $a^2$, $X\sim\mathrm{N}_1(b,a^2)$. Vice versa, if $X\sim\mathrm{N}_1(\mu,\sigma^2)$ and $\sigma^2\ne 0$, then $Z = \sigma^{-1}(X-\mu)\sim\mathrm{N}_1(0,1)$.

6. The characteristic function of a probability measure $\mu$ is
\[
\hat\mu(t) = \int e^{itx}\,\mu(dx) = \int\cos(tx)\,\mu(dx) + i\int\sin(tx)\,\mu(dx), \qquad i = \sqrt{-1}.
\]
If two probability measures have the same characteristic function, then they are equal.
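The recursion $H_{n+1} = \delta H_n$ of Exercise 5 is easy to carry out on coefficient vectors. A small sketch (illustrative; the function name `stein` is ours), representing a polynomial by its coefficients in ascending powers of $z$:

```python
import numpy as np

def stein(coef):
    """Apply the Stein operator (delta f)(z) = z f(z) - f'(z) to a
    polynomial with coefficients `coef` in ascending powers of z."""
    coef = np.asarray(coef, dtype=float)
    zf = np.concatenate([[0.0], coef])          # coefficients of z * f(z)
    d = coef[1:] * np.arange(1, len(coef))      # coefficients of f'(z)
    return zf - np.concatenate([d, np.zeros(len(zf) - len(d))])

# H_n = delta^n 1, as in Exercise 5
H = [np.array([1.0])]                           # H_0 = 1
for _ in range(3):
    H.append(stein(H[-1]))
```

The loop reproduces $H_1(z) = z$, $H_2(z) = z^2-1$, $H_3(z) = z^3-3z$ as the coefficient vectors $[0,1]$, $[-1,0,1]$, $[0,-3,0,1]$.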
Exercise 6. For the standard Gaussian probability measure we have
\[
\hat\gamma(t) = \int\cos(tz)\,\gamma(z)\,dz = e^{-\frac{t^2}{2}}.
\]
In fact, by differentiation under the integral,
\[
\frac{d}{dt}\hat\gamma(t) = -\int z\sin(tz)\,\gamma(z)\,dz = \int\sin(tz)\,\gamma'(z)\,dz = -t\,\hat\gamma(t),
\]
and $\hat\gamma(0) = 1$. The characteristic function of $X\sim\mathrm{N}_1(\mu,\sigma^2)$ is
\[
\mathrm{E}\left(e^{itX}\right) = \mathrm{E}\left(e^{it(\mu+\sigma Z)}\right) = e^{it\mu}\,\mathrm{E}\left(e^{i(\sigma t)Z}\right) = e^{it\mu-\frac12\sigma^2t^2}.
\]

Exercise 7. The characteristic function $\hat\mu$ of the probability measure $\mu$ on $\mathbb{R}$ is nonnegative definite. Take $t_1,\dots,t_n$ in $\mathbb{R}$ with $n = 1,2,\dots$. The matrix
\[
T = \left[\hat\mu(t_i-t_j)\right]_{i,j=1}^n = \left[\int e^{i(t_i-t_j)x}\,\mu(dx)\right]_{i,j=1}^n
\]
is Hermitian, that is, the transposed matrix is equal to the conjugate matrix; equivalently, $T$ is equal to its adjoint $T^*$. A Hermitian matrix $T$ is nonnegative definite if for all complex vectors $\zeta\in\mathbb{C}^n$ it holds that $\zeta^*T\zeta\ge 0$. In our case,
\[
\zeta^*T\zeta = \sum_{i,j=1}^n \bar\zeta_i\zeta_j\int e^{i(t_i-t_j)x}\,\mu(dx) = \int\left\lvert\sum_{i=1}^n \bar\zeta_i e^{it_ix}\right\rvert^2\mu(dx) \ge 0.
\]

Exercise 8. Let $X\sim\mathrm{N}_1(b,\sigma^2)$ and $f\colon\mathbb{R}\to\mathbb{R}$ continuous and bounded. Show that $\lim_{\sigma\to 0}\mathrm{E}(f(X)) = f(b)$.

Exercise 9. Let $X$ be a real random variable with density $p$ with respect to the Lebesgue measure, and let $Z\sim\mathrm{N}_1(0,1)$. Assume $X$ and $Z$ are independent, i.e., the joint random variable $(X,Z)$ has density $p\otimes\gamma$ with respect to the Lebesgue measure of $\mathbb{R}^2$. Compute the density of $X+Z$. [Hint: make the change of variable $(x,z)\mapsto(x+z,z)$, then marginalize.]

7. The product of absolutely continuous probability measures is
\[
(p_1\cdot\mu_1)\otimes(p_2\cdot\mu_2) = (p_1\otimes p_2)\cdot(\mu_1\otimes\mu_2).
\]
The $\mathbb{R}^d$-valued random variable $Z = (Z_1,\dots,Z_d)$ is multivariate standard Gaussian, $Z\sim\mathrm{N}_d(0_d,I_d)$, if its components are IID $\mathrm{N}_1(0,1)$. We write $\nu_d = \nu^{\otimes d}$ to denote the $d$-fold product measure. The distribution $\nu_d = \gamma^{\otimes d}$ of $Z\sim\mathrm{N}_d(0,I)$ has the product density
\[
\mathbb{R}^d\ni z\mapsto\gamma_d(z) = \prod_{j=1}^d\gamma(z_j) = (2\pi)^{-\frac d2}\exp\left(-\frac12\lVert z\rVert^2\right).
\]

Exercise 10. The moment generating function $t\mapsto M_Z(t) = \mathrm{E}\left(\exp(t\cdot Z)\right)\in\mathbb{R}_{>0}$ is
\[
\mathbb{R}^d\ni t\mapsto M_Z(t) = \prod_{j=1}^d\exp\left(\frac12 t_j^2\right) = \exp\left(\frac12\lVert t\rVert^2\right).
\]
$M_Z$ is everywhere strictly convex and analytic.
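The closed form of Exercise 6 can be confirmed numerically: a trapezoidal sum of $\cos(tz)\gamma(z)$ over a truncated domain matches $e^{-t^2/2}$ to high accuracy. A sketch (illustrative, not from the handout):

```python
import numpy as np

# Numerical check of Exercise 6: the characteristic function of N(0,1)
# is gamma_hat(t) = exp(-t^2/2).  The tail of gamma beyond |z| = 10 is
# negligible, so the truncation error is tiny.
z = np.linspace(-10.0, 10.0, 200_001)
gamma = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

def gamma_hat(t):
    g = np.cos(t * z) * gamma
    return np.sum((g[:-1] + g[1:]) / 2) * (z[1] - z[0])   # trapezoidal rule
```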
Exercise 11. The characteristic function $\zeta\mapsto\hat\gamma_d(\zeta) = \mathrm{E}\left(\exp(\sqrt{-1}\,\zeta\cdot Z)\right)$ is
\[
\mathbb{R}^d\ni\zeta\mapsto\hat\gamma_d(\zeta) = \prod_{j=1}^d\exp\left(-\frac12\zeta_j^2\right) = \exp\left(-\frac12\lVert\zeta\rVert^2\right).
\]
$\hat\gamma_d$ is nonnegative definite.

2. Positive Definite Matrices

8. We collect here a few useful properties of matrices. $A^T$ denotes the transpose of $A$.
(1) Denote by $\mathrm{Mat}(m\times n)$ the vector space of $m\times n$ real matrices. We have $\mathrm{Mat}(m\times 1)\leftrightarrow\mathbb{R}^m$. Let $\mathrm{Mat}(n\times n)$ be the vector space of $n\times n$ real matrices, $\mathrm{GL}(n)$ the group of invertible matrices, $\mathrm{Sym}(n)$ the vector space of real symmetric matrices.
(2) Given $A\in\mathrm{Mat}(n\times n)$, a real eigenvalue of $A$ is a real number $\lambda$ such that $A-\lambda I$ is singular, i.e., $\det(A-\lambda I) = 0$. If $\lambda$ is an eigenvalue of $A$, then $u\ne 0$ is an eigenvector of $A$ associated to $\lambda$ if $Au = \lambda u$.
(3) By identifying each matrix $A\in\mathrm{Mat}(m\times n)$ with its vectorized form $\mathrm{vec}(A)\in\mathbb{R}^{mn}$, the vector space $\mathrm{Mat}(m\times n)$ is a Euclidean space for the scalar product $\langle A,B\rangle = \mathrm{vec}(A)^T\mathrm{vec}(B) = \mathrm{Tr}(A^TB)$. The general linear group $\mathrm{GL}(n)$ is an open subset of $\mathrm{Mat}(n\times n)$.
(4) A square matrix whose columns form an orthonormal system, $S = [s_1\ \cdots\ s_n]$, $s_i^Ts_j = (i = j)$, has determinant $\pm 1$. The property is characterised by $S^T = S^{-1}$. The set of such matrices is the orthogonal group $\mathrm{O}(n)$.
(5) Each symmetric matrix $A\in\mathrm{Sym}(n)$ has $n$ real eigenvalues $\lambda_i$, $i = 1,\dots,n$, and correspondingly an orthonormal basis of eigenvectors $u_i$, $i = 1,\dots,n$.
(6) Let $A\in\mathrm{Mat}(m\times n)$ and let $r>0$ be its rank, i.e., the dimension of the space generated by its columns, equivalently by its rows. There exist matrices $S\in\mathrm{Mat}(m\times r)$, $T\in\mathrm{Mat}(n\times r)$, and a positive diagonal $r\times r$ matrix $\Lambda$, such that $S^TS = T^TT = I_r$ and $A = S\Lambda^{1/2}T^T$. The matrix $SS^T$ is the orthogonal projection onto $\mathrm{image}(A)$. In fact $\mathrm{image}(SS^T) = \mathrm{image}(A)$, $SS^TA = A$, and $SS^T$ is a projection. Similarly, $TT^T$ is the orthogonal projection onto $\mathrm{image}(A^T)$.
(7) A symmetric matrix $A\in\mathrm{Sym}(n)$ is positive definite, $A\in\mathrm{Sym}^+(n)$, respectively strictly positive definite, $A\in\mathrm{Sym}^{++}(n)$, if $x\in\mathbb{R}^n$ implies $x^TAx\ge 0$, respectively $x^TAx>0$ for $x\ne 0$. $\mathrm{Sym}^+(n)$ is a closed pointed cone of $\mathrm{Sym}(n)$, whose interior is $\mathrm{Sym}^{++}(n)$. A positive definite matrix is strictly positive definite if, and only if, it is invertible.
(8) A symmetric matrix $A$ is positive definite, respectively strictly positive definite, if, and only if, all eigenvalues are nonnegative, respectively positive.
(9) A symmetric matrix $A$ is positive definite if, and only if, $A = BB^T$ for some $B\in\mathrm{Mat}(n\times n)$. Moreover, $A\in\mathrm{GL}(n)$ if, and only if, $B\in\mathrm{GL}(n)$.
(10) A symmetric matrix $A$ is positive definite if, and only if, $A = B^2$ with $B$ symmetric and positive definite. We write $B = A^{1/2}$ and call it the positive square root of $A$.

Exercise 12. If you are not familiar with the previous items, try the following exercise. Consider the matrices
\[
R(\theta) = \begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}, \qquad \theta\in\mathbb{R}.
\]
Check that $R(\theta)^TR(\theta) = I$, $\det R(\theta) = 1$, and $R(\theta_1)R(\theta_2) = R(\theta_1+\theta_2)$. Compute the matrix
\[
\Sigma(\theta) = R(\theta)\begin{bmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}R(\theta)^T, \qquad \lambda_1,\lambda_2\ge 0.
\]
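The checks of Exercise 12 can all be done numerically in a few lines. A sketch (illustrative values for $\theta$, $\lambda_1$, $\lambda_2$):

```python
import numpy as np

def R(theta):
    """The 2x2 rotation matrix of Exercise 12."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = 0.7                         # an arbitrary angle for the checks
lam = np.array([2.0, 0.5])          # eigenvalues lambda_1, lambda_2 >= 0
Sigma = R(theta) @ np.diag(lam) @ R(theta).T
A = R(theta) @ np.diag(np.sqrt(lam)) @ R(theta).T   # candidate square root
```

One verifies numerically that $R(\theta)^TR(\theta) = I$, $\det R(\theta) = 1$, $R(\theta_1)R(\theta_2) = R(\theta_1+\theta_2)$, that the eigenvalues of $\Sigma(\theta)$ are $\lambda_1,\lambda_2$, and that $A(\theta)A(\theta)^T = A(\theta)A(\theta) = \Sigma(\theta)$.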
Check that $\det\Sigma(\theta) = \lambda_1\lambda_2$, $\Sigma(\theta)^T = \Sigma(\theta)$, the eigenvalues of $\Sigma(\theta)$ are $\lambda_1,\lambda_2$, and $\Sigma(\theta)R(\theta) = R(\theta)\,\mathrm{diag}(\lambda_1,\lambda_2)$. Compute
\[
A(\theta) = R(\theta)\begin{bmatrix}\lambda_1^{1/2} & 0\\ 0 & \lambda_2^{1/2}\end{bmatrix}R(\theta)^T, \qquad \lambda_1,\lambda_2\ge 0.
\]
Check that $A(\theta)A(\theta)^T = A(\theta)A(\theta) = \Sigma(\theta)$.

Exercise 13. Let $A\in\mathrm{O}(n)$ and $Z\sim\mathrm{N}_n(0,I)$. Check that $AZ\sim\mathrm{N}_n(0,I)$. Let $B\in\mathrm{Mat}(n\times r)$, $r<n$, and assume that its columns are orthonormal. Check that $B^TZ\sim\mathrm{N}_r(0,I)$. [Hint: complete $B$ to an orthogonal matrix by adding columns, $[B\ C]\in\mathrm{O}(n)$, and use marginalization.]

Exercise 14. Let $Z\sim\mathrm{N}_1(0,1)$, $A = \begin{bmatrix}1\\1\end{bmatrix}\in\mathrm{Mat}(2\times 1)$. Check that $AZ$ has no density with respect to the Lebesgue measure.

Exercise 15. Let $Z\sim\mathrm{N}_2(0,I)$, $A = \begin{bmatrix}1 & 1\end{bmatrix}\in\mathrm{Mat}(1\times 2)$. Compute the density of $AZ$.

3. General Gaussian Distribution

Proposition 1.
(1) Definition. Let $Z\sim\mathrm{N}_n(0,I)$, $A\in\mathrm{Mat}(m\times n)$, $b\in\mathbb{R}^m$, $\Sigma = AA^T$. Then $Y = b+AZ$ has a distribution that depends on $\Sigma$ and $b$ only. The distribution of $Y$ is called Gaussian with mean $b$ and variance $\Sigma$, $\mathrm{N}_m(b,\Sigma)$.
(2) Stability. If $Y\sim\mathrm{N}_m(b,\Sigma)$, $B\in\mathrm{Mat}(r\times m)$, $c\in\mathbb{R}^r$, then $c+BY\sim\mathrm{N}_r(c+Bb,\ B\Sigma B^T)$.
(3) Existence. Given any nonnegative definite $\Sigma\in\mathrm{Sym}^+(n)$ and any vector $b\in\mathbb{R}^n$, the Gaussian distribution $\mathrm{N}_n(b,\Sigma)$ exists.
(4) Density. If $\Sigma\in\mathrm{Sym}^{++}(m)$, e.g., $\Sigma\in\mathrm{Sym}^+(m)$ and moreover $\det(\Sigma)\ne 0$, then the Gaussian distribution $\mathrm{N}_m(b,\Sigma)$ has a density with respect to the Lebesgue measure on $\mathbb{R}^m$ given by
\[
p_Y(y) = (2\pi)^{-\frac m2}\det(\Sigma)^{-\frac12}\exp\left(-\frac12(y-b)^T\Sigma^{-1}(y-b)\right).
\]
(5) No density. If the rank of $\Sigma$ is $r<m$, then the distribution $\mathrm{N}_m(b,\Sigma)$ is supported by $b$ plus the image of $\Sigma$. In particular it has no density with respect to the Lebesgue measure on $\mathbb{R}^m$.
(6) Characteristic function. $Y\sim\mathrm{N}_m(b,\Sigma)$ if, and only if, its characteristic function is
\[
\mathbb{R}^m\ni t\mapsto\exp\left(-\frac12 t^T\Sigma t + i\,b^Tt\right).
\]

Proof. (1) Assume $b_1,b_2\in\mathbb{R}^m$, $A_i\in\mathrm{Mat}(m\times n_i)$, $Y_i = b_i+A_iZ_i$, $Z_i\sim\mathrm{N}_{n_i}(0,I)$, $i = 1,2$. If $b_1\ne b_2$, then the expected values of $Y_1$ and $Y_2$ are different, hence the distributions are different. Assume $b_1 = b_2 = b$ and consider the distributions of $Y_i-b = A_iZ_i$, $i = 1,2$. We can write $A_i = S_i\Lambda_i^{1/2}T_i^T$, which in turn implies $\Sigma = S_i\Lambda_iS_i^T$; but $\Sigma = S\Lambda S^T$, hence $S_1 = S_2 = S$ and $\Lambda_1 = \Lambda_2 = \Lambda$ (apart from the order). We are reduced to the case $Y_i = b+S\Lambda^{1/2}T_i^TZ_i$, $T_i\in\mathrm{Mat}(n_i\times r)$, both with orthonormal columns. The conclusion follows from $T_1^TZ_1\sim T_2^TZ_2$.
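The existence part of Proposition 1 is also a recipe for sampling: take $Y = b+\Sigma^{1/2}Z$. A sketch using the symmetric square root from the spectral decomposition (the function name and test values are illustrative):

```python
import numpy as np

def sample_gaussian(b, Sigma, size, rng):
    """Sample Y = b + Sigma^{1/2} Z with Z ~ N_n(0, I), following the
    existence part of Proposition 1 (symmetric square root via eigh)."""
    lam, U = np.linalg.eigh(Sigma)
    A = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T   # Sigma^{1/2}
    Z = rng.standard_normal((size, len(b)))
    return b + Z @ A.T

rng = np.random.default_rng(0)
b = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
Y = sample_gaussian(b, Sigma, 400_000, rng)
# empirical mean ~ b and empirical covariance ~ Sigma
```

Note that any factor $A$ with $AA^T = \Sigma$ would do (e.g. a Cholesky factor when $\Sigma$ is strictly positive definite); by part (1) the resulting distribution is the same.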
(2) $Y\sim\mathrm{N}_m(b,\Sigma)$ means $Y = b+AZ$ with $Z\sim\mathrm{N}_n(0,I)$ and $AA^T = \Sigma$. It follows that $c+BY = c+B(b+AZ) = (c+Bb)+(BA)Z$, with $(BA)(BA)^T = B\Sigma B^T$.
(3) Take $Y = b+\Sigma^{1/2}Z$, $Z\sim\mathrm{N}_n(0,I)$.
(4) Use the change of variable formula in $Y = b+AZ$ with $A = \Sigma^{1/2}$ to get $p_Y(y) = \lvert\det(A^{-1})\rvert\,p_Z(A^{-1}(y-b))$. Then express each term with $\Sigma$.
(5) Use the decomposition $\Sigma = S\Lambda S^T$ and note that some elements on the diagonal of $\Lambda$ are zero.
(6) The "if" part is a computation; the "only if" part requires the injectivity property of the characteristic function.

Exercise 16 (Linear interpolation of the Brownian motion). Let $Z_n$, $n = 1,2,\dots$ be IID $\mathrm{N}_1(0,1)$. Given $0<\sigma\ll 1$, define recursively the times $t_0 = 0$ and $t_{n+1} = t_n+\sigma^2$. Let $T = \{t_n\mid n = 0,1,\dots\}$. Define recursively $B(0) = 0$, $B(t_{n+1}) = B(t_n)+\sigma Z_n$. As $B(t_n) = \sum_{i=1}^n\sigma Z_i = \sigma\sum_{i=1}^nZ_i$, then $\mathrm{Var}(B(t_n)) = \sigma^2\,\mathrm{Var}\left(\sum_{i=1}^nZ_i\right) = n\sigma^2 = t_n$. For each $t\in\mathbb{R}_{>0}\setminus T$, define $B(t)$ by linear interpolation, i.e.,
\[
B(t) = \frac{t_{n+1}-t}{t_{n+1}-t_n}B(t_n) + \frac{t-t_n}{t_{n+1}-t_n}B(t_{n+1}), \qquad t\in[t_n,t_{n+1}].
\]
Compute the variance of $B(t)$ and the density of $B(t)$.

4. Independence of Jointly Gaussian Random Variables

Proposition 2. Consider a partitioned Gaussian vector
\[
Y = \begin{bmatrix}Y_1\\ Y_2\end{bmatrix}\sim\mathrm{N}_{n_1+n_2}\left(\begin{bmatrix}b_1\\ b_2\end{bmatrix},\ \begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{bmatrix}\right).
\]
Let $r_i = \mathrm{Rank}(\Sigma_{ii})$, $\Sigma_{ii} = S_i\Lambda_iS_i^T$ with $S_i\in\mathrm{Mat}(n_i\times r_i)$, $S_i^TS_i = I_{r_i}$, and $\Lambda_i\in\mathrm{diag}^{++}(r_i)$, $i = 1,2$.
(1) The blocks $Y_1,Y_2$ are independent, $Y_1\perp Y_2$, if, and only if, $\Sigma_{12} = 0$, hence $\Sigma_{21} = \Sigma_{12}^T = 0$. More precisely, if, and only if, there exist two independent standard Gaussians $Z_i\sim\mathrm{N}_{r_i}(0,I)$ and matrices $A_i\in\mathrm{Mat}(n_i\times r_i)$, $i = 1,2$, such that
\[
\begin{bmatrix}Y_1\\ Y_2\end{bmatrix} = \begin{bmatrix}b_1\\ b_2\end{bmatrix} + \begin{bmatrix}A_1 & 0\\ 0 & A_2\end{bmatrix}\begin{bmatrix}Z_1\\ Z_2\end{bmatrix}.
\]
(2) (The following property is sometimes called the Schur complement lemma.) Write $\Sigma_{22}^+ = S_2\Lambda_2^{-1}S_2^T$. Then
\[
\begin{bmatrix}I & -\Sigma_{12}\Sigma_{22}^+\\ 0 & I\end{bmatrix}\begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{bmatrix}\begin{bmatrix}I & 0\\ -\Sigma_{22}^+\Sigma_{21} & I\end{bmatrix} = \begin{bmatrix}\Sigma_{11}-\Sigma_{12}\Sigma_{22}^+\Sigma_{21} & 0\\ 0 & \Sigma_{22}\end{bmatrix},
\]
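The block identity in part (2) can be verified numerically. The following sketch (our example, not from the handout) takes an invertible $\Sigma$, where the pseudo-inverse $\Sigma_{22}^+$ reduces to the plain inverse $\Sigma_{22}^{-1}$:

```python
import numpy as np

# Numerical check of the Schur-complement identity of Proposition 2 (2),
# on a random strictly positive definite 5x5 covariance, partitioned 2+3.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 5))
Sigma = G @ G.T                             # positive definite a.s.
S11, S12 = Sigma[:2, :2], Sigma[:2, 2:]
S21, S22 = Sigma[2:, :2], Sigma[2:, 2:]
S22p = np.linalg.inv(S22)                   # here Sigma_22^+ = Sigma_22^{-1}

L = np.block([[np.eye(2), -S12 @ S22p], [np.zeros((3, 2)), np.eye(3)]])
Rm = np.block([[np.eye(2), np.zeros((2, 3))], [-S22p @ S21, np.eye(3)]])
D = L @ Sigma @ Rm                          # should be block-diagonal
Schur = S11 - S12 @ S22p @ S21              # Sigma_{1|2}

K = np.linalg.inv(Sigma)                    # partitioned concentration
```

The off-diagonal blocks of `D` vanish, the diagonal blocks are $\Sigma_{1|2}$ and $\Sigma_{22}$, and $K_{11}^{-1} = \Sigma_{1|2}$, as in part (3) below.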
hence the last matrix is nonnegative definite. The Schur complement of the partitioned covariance matrix $\Sigma$ is
\[
\Sigma_{1|2} = \Sigma_{11}-\Sigma_{12}\Sigma_{22}^+\Sigma_{21}\in\mathrm{Sym}^+(n_1).
\]
(3) Assume $\det(\Sigma)\ne 0$. Then both $\det(\Sigma_{1|2})\ne 0$ and $\det(\Sigma_{22})\ne 0$. If we define the partitioned concentration to be
\[
K = \Sigma^{-1} = \begin{bmatrix}K_{11} & K_{12}\\ K_{21} & K_{22}\end{bmatrix},
\]
then $K_{11}^{-1} = \Sigma_{1|2}$ and $K_{11}^{-1}K_{12} = -\Sigma_{12}\Sigma_{22}^{-1}$.

Exercise 17. Let $\Sigma\in\mathrm{Sym}^+(n)$ and let $r = \mathrm{Rank}(\Sigma)$. We know that $\Sigma = S\Lambda S^T$ with $S\in\mathrm{Mat}(n\times r)$, $S^TS = I_r$, $\Lambda\in\mathrm{diag}^{++}(r)$. Let us define $\Sigma^+ = S\Lambda^{-1}S^T$. Then it follows by simple computation that $\Sigma^+\Sigma = \Sigma\Sigma^+ = SS^T$. Also, $\Sigma\Sigma^+\Sigma = \Sigma$ and $\Sigma^+\Sigma\Sigma^+ = \Sigma^+$. If $Y\sim\mathrm{N}_n(0,\Sigma)$, then $Y = SS^TY$. In fact, $Y-SS^TY = (I-SS^T)Y$ is a Gaussian random variable with variance $(I-SS^T)S\Lambda S^T(I-SS^T) = 0$, because $(I-SS^T)S = S-SS^TS = S-S = 0$.

Proof. (1) If the blocks are independent, they are uncorrelated. Conversely, if $\Sigma_{12} = 0$ and $\Sigma_{ii} = S_i\Lambda_iS_i^T$, $i = 1,2$, define $A_i = S_i\Lambda_i^{1/2}$ to get
\[
\Sigma = \begin{bmatrix}A_1 & 0\\ 0 & A_2\end{bmatrix}\begin{bmatrix}A_1 & 0\\ 0 & A_2\end{bmatrix}^T.
\]
(2) Computations using Ex. 17.
(3) From the computation above we see that the Schur complement is positive definite and that
\[
\det\begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{bmatrix} = \det\left(\Sigma_{1|2}\right)\det(\Sigma_{22}).
\]
It follows that $\det(\Sigma)\ne 0$ implies both $\det(\Sigma_{1|2})\ne 0$ and $\det(\Sigma_{22})\ne 0$. The condition
\[
\begin{bmatrix}K_{11} & K_{12}\\ K_{21} & K_{22}\end{bmatrix}\begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{bmatrix} = \begin{bmatrix}I & 0\\ 0 & I\end{bmatrix}
\]
is equivalent to $I = K_{11}\Sigma_{11}+K_{12}\Sigma_{21}$ and $0 = K_{11}\Sigma_{12}+K_{12}\Sigma_{22}$. Right-multiply the second equation by $\Sigma_{22}^{-1}$ and substitute in the first one to get $K_{11}\Sigma_{1|2} = I$, hence $K_{11}^{-1} = \Sigma_{1|2}$. The other equality follows by left-multiplying the second equation by $K_{11}^{-1}$.

Exercise 18 (Whitening). Let $Y\sim\mathrm{N}_n(b,\Sigma)$. Assume $\Sigma$ has rank $r$ and decomposition $\Sigma = S\Lambda S^T$, $S^TS = I_r$, $\Lambda\in\mathrm{diag}^{++}(r)$. Then $Z = \Lambda^{-1/2}S^T(Y-b)$ is a white noise, $Z\sim\mathrm{N}_r(0,I)$. Moreover, $b+S\Lambda^{1/2}Z = Y$. In fact,
\[
Y-\left(b+S\Lambda^{1/2}Z\right) = (Y-b)-S\Lambda^{1/2}\Lambda^{-1/2}S^T(Y-b) = (I-SS^T)(Y-b) = 0.
\]
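The whitening of Exercise 18 can be implemented directly from the spectral decomposition, dropping the null eigenvalues to recover the rank-$r$ factorization. A sketch (the function name and the rank-2 example are ours):

```python
import numpy as np

def whiten(Y, b, Sigma, tol=1e-10):
    """Exercise 18: Z = Lambda^{-1/2} S^T (Y - b), built from the rank-r
    spectral factorization Sigma = S Lambda S^T (null eigenvalues dropped)."""
    lam, U = np.linalg.eigh(Sigma)
    keep = lam > tol                 # keep the r strictly positive eigenvalues
    S, Lam = U[:, keep], lam[keep]
    return (Y - b) @ S / np.sqrt(Lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
Sigma = A @ A.T                      # rank 2: no density in R^3
b = np.array([1.0, 2.0, 3.0])
Y = b + rng.standard_normal((300_000, 2)) @ A.T   # Y ~ N_3(b, Sigma)
Z = whiten(Y, b, Sigma)              # white noise in R^2
```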
References

[1] T. W. Anderson, An introduction to multivariate statistical analysis, third ed., Wiley Series in Probability and Statistics, Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, 2003.
[2] Rajendra Bhatia, Positive definite matrices, Princeton Series in Applied Mathematics, Princeton University Press, Princeton, NJ, 2007.
[3] Paul R. Halmos, Finite-dimensional vector spaces, 2nd ed., The University Series in Undergraduate Mathematics, D. Van Nostrand Co., Inc., Princeton-Toronto-New York-London, 1958.

Collegio Carlo Alberto
E-mail address:
More informationTraces of rationality of Darmon points
Traces of rationality of Darmon points [BD09] Bertolini Darmon, The rationality of Stark Heegner points over genus fields of real quadratic fields Seminari de Teoria de Nombres de Barcelona Barcelona,
More informationDraft. Lecture 01 Introduction & Matrix-Vector Multiplication. MATH 562 Numerical Analysis II. Songting Luo
Lecture 01 Introduction & Matrix-Vector Multiplication Songting Luo Department of Mathematics Iowa State University MATH 562 Numerical Analysis II Songting Luo ( Department of Mathematics Iowa State University[0.5in]
More informationEntropy and Ergodic Theory Notes 21: The entropy rate of a stationary process
Entropy and Ergodic Theory Notes 21: The entropy rate of a stationary process 1 Sources with memory In information theory, a stationary stochastic processes pξ n q npz taking values in some finite alphabet
More information5. Random Vectors. probabilities. characteristic function. cross correlation, cross covariance. Gaussian random vectors. functions of random vectors
EE401 (Semester 1) 5. Random Vectors Jitkomut Songsiri probabilities characteristic function cross correlation, cross covariance Gaussian random vectors functions of random vectors 5-1 Random vectors we
More informationSOLUTIONS Math B4900 Homework 9 4/18/2018
SOLUTIONS Math B4900 Homework 9 4/18/2018 1. Show that if G is a finite group and F is a field, then any simple F G-modules is finitedimensional. [This is not a consequence of Maschke s theorem; it s just
More informationMatrices A brief introduction
Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 41 Definitions Definition A matrix is a set of N real or complex
More informationCOMPLEX NUMBERS LAURENT DEMONET
COMPLEX NUMBERS LAURENT DEMONET Contents Introduction: numbers 1 1. Construction of complex numbers. Conjugation 4 3. Geometric representation of complex numbers 5 3.1. The unit circle 6 3.. Modulus of
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationGoal: to construct some general-purpose algorithms for solving systems of linear Equations
Chapter IV Solving Systems of Linear Equations Goal: to construct some general-purpose algorithms for solving systems of linear Equations 4.6 Solution of Equations by Iterative Methods 4.6 Solution of
More informationa 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12
24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2
More informationLecture 11. Multivariate Normal theory
10. Lecture 11. Multivariate Normal theory Lecture 11. Multivariate Normal theory 1 (1 1) 11. Multivariate Normal theory 11.1. Properties of means and covariances of vectors Properties of means and covariances
More informationq n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis from any basis.
Exam Review Material covered by the exam [ Orthogonal matrices Q = q 1... ] q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More informationMultivariate Statistical Analysis
Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions
More informationACM 104. Homework Set 4 Solutions February 14, 2001
ACM 04 Homework Set 4 Solutions February 4, 00 Franklin Chapter, Problem 4, page 55 Suppose that we feel that some observations are more important or reliable than others Redefine the function to be minimized
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationSingular Value Decomposition (SVD)
School of Computing National University of Singapore CS CS524 Theoretical Foundations of Multimedia More Linear Algebra Singular Value Decomposition (SVD) The highpoint of linear algebra Gilbert Strang
More informationLecture 11: Clifford algebras
Lecture 11: Clifford algebras In this lecture we introduce Clifford algebras, which will play an important role in the rest of the class. The link with K-theory is the Atiyah-Bott-Shapiro construction
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationFurther Mathematical Methods (Linear Algebra) 2002
Further Mathematical Methods (Linear Algebra) 00 Solutions For Problem Sheet 0 In this Problem Sheet we calculated some left and right inverses and verified the theorems about them given in the lectures.
More informationNumerical Linear Algebra
University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices
More informationSupplementary Notes on Linear Algebra
Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can
More informationTAMS39 Lecture 2 Multivariate normal distribution
TAMS39 Lecture 2 Multivariate normal distribution Martin Singull Department of Mathematics Mathematical Statistics Linköping University, Sweden Content Lecture Random vectors Multivariate normal distribution
More informationVery quick introduction to the conformal group and cft
CHAPTER 1 Very quick introduction to the conformal group and cft The world of Conformal field theory is big and, like many theories in physics, it can be studied in many ways which may seem very confusing
More informationLocal behaviour of Galois representations
Local behaviour of Galois representations Devika Sharma Weizmann Institute of Science, Israel 23rd June, 2017 Devika Sharma (Weizmann) 23rd June, 2017 1 / 14 The question Let p be a prime. Let f ř 8 ně1
More informationMATH 260 Class notes/questions January 10, 2013
MATH 26 Class notes/questions January, 2 Linear transformations Last semester, you studied vector spaces (linear spaces) their bases, dimension, the ideas of linear dependence and linear independence Now
More informationMatrices and Linear Algebra
Contents Quantitative methods for Economics and Business University of Ferrara Academic year 2017-2018 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2
More informationMidterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015
Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic
More informationElectronic Companion Dynamic Pricing of Perishable Assets under Competition
1 Electronic Companion Dynamic Pricing of Perishable Assets under Competition Guillermo Gallego IEOR Department, Columbia University, New York, NY 10027 gmg2@columbia.edu Ming Hu Rotman School of Management,
More informationThe Multivariate Normal Distribution 1
The Multivariate Normal Distribution 1 STA 302 Fall 2017 1 See last slide for copyright information. 1 / 40 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2
More information4 Relativistic kinematics
4 Relativistic kinematics In astrophysics, we are often dealing with relativistic particles that are being accelerated by electric or magnetic forces. This produces radiation, typically in the form of
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationDS-GA 1002 Lecture notes 10 November 23, Linear models
DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.
More information7.3 The Chi-square, F and t-distributions
7.3 The Chi-square, F and t-distributions Ulrich Hoensch Monday, March 25, 2013 The Chi-square Distribution Recall that a random variable X has a gamma probability distribution (X Gamma(r, λ)) with parameters
More informationELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications
ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March
More information2. Matrix Algebra and Random Vectors
2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns
More informationMath Homework 8 (selected problems)
Math 102 - Homework 8 (selected problems) David Lipshutz Problem 1. (Strang, 5.5: #14) In the list below, which classes of matrices contain A and which contain B? 1 1 1 1 A 0 0 1 0 0 0 0 1 and B 1 1 1
More information