Probability on a Riemannian Manifold
Jennifer Pajda-De La O

December 2

1 Introduction

We discuss how probability theory can be constructed on a Riemannian manifold, and we compare this construction to the way probability is usually set up on $\mathbb{R}^n$. The basic definitions for probability on $\mathbb{R}^n$ come from Resnick [2005]. Information about probability theory on a Riemannian manifold is taken from Pennec [1999]. Fisher information on $\mathbb{R}^n$ and the Delta Theorem on $\mathbb{R}^n$ are taken from Martin [2015].

The paper is organized as follows. Each section covers a different topic in probability/statistics. We summarize the importance of each concept and then present the concept side by side: in $\mathbb{R}^n$ on the left and on a Riemannian manifold on the right. It is hoped that this makes the similarities and differences between probability on $\mathbb{R}^n$ and on a Riemannian manifold clear.

2 Definitions

We define the important terms and abbreviations, and recall the definitions required for probability both on $\mathbb{R}^n$ and on a Riemannian manifold.

$\mathcal{B}$: a $\sigma$-field. A $\sigma$-field has the following properties:

1. $\Omega \in \mathcal{B}$;
2. if $B \in \mathcal{B}$, then $B^{C} \in \mathcal{B}$, where $C$ denotes the complement;
3. if $B_i \in \mathcal{B}$, $i \geq 1$, then $\bigcup_{i=1}^{\infty} B_i \in \mathcal{B}$.

$\mathcal{B}(\mathbb{R})$: the Borel $\sigma$-field, i.e. the $\sigma$-field generated by the open subsets of $\mathbb{R}$:

$\mathcal{B}(\mathbb{R}) = \sigma(\{(a,b] : -\infty \leq a \leq b < \infty\}) = \sigma(\{[a,b) : -\infty < a \leq b \leq \infty\}) = \sigma(\{[a,b] : -\infty < a \leq b < \infty\}) = \sigma(\{(a,b) : -\infty \leq a \leq b \leq \infty\}) = \sigma(\{\text{open sets of } \mathbb{R}\})$.
$\mathcal{A}$: a Borel tribe / $\sigma$-field.

$(\Omega, \mathcal{B}, \mathbb{P})$ is a probability space. In particular, $\mathbb{P}$ is a probability measure on the measurable space $(\Omega, \mathcal{B})$. A probability measure has the following properties:

1. $\mathbb{P}(\emptyset) = 0$ and $\mathbb{P}(\Omega) = 1$;
2. $0 \leq \mathbb{P}(B) \leq 1$ for all $B \in \mathcal{B}$;
3. if $\{B_n, n \geq 1\}$ are disjoint events in $\mathcal{B}$, then $\mathbb{P}\left(\bigcup_{n=1}^{\infty} B_n\right) = \sum_{n=1}^{\infty} \mathbb{P}(B_n)$.

$E$: expected value
$\mathbb{E}$: mean value (Fréchet expectation)
$V$: variance
$COV$: covariance
$\mathbf{1}_A$: indicator function of a set $A$
$M$: Riemannian manifold
$\top$: matrix transpose
iid: independent and identically distributed
$\xrightarrow{D}$: convergence in distribution

We can define convergence in distribution as follows. Suppose $\{F, F_n, n \geq 1\}$ are probability distributions. Then $X_n \xrightarrow{D} X$ means that $F_n \xrightarrow{D} F$, i.e. $X_n$ converges in distribution to $X$, or $F_n$ converges weakly to $F$. The four types of convergence stated below are equivalent precisely when $F$ is proper, i.e. $F(-\infty) = 0$ and $F(\infty) = 1$ (Resnick [2005]).

Let $\{F_n, n \geq 1\}$ be probability distribution functions and let $F$ be a distribution function which is not necessarily proper.

1. Vague convergence: the sequence $\{F_n\}$ converges vaguely to $F$, written $F_n \xrightarrow{v} F$, if for every finite interval of continuity $I$ of $F$ we have $F_n(I) \to F(I)$.
2. Proper convergence: the sequence $\{F_n\}$ converges properly to $F$ if $F_n \xrightarrow{v} F$ and $F$ is a proper distribution function, that is, $F(\mathbb{R}) = 1$.
3. Weak convergence: the sequence $\{F_n\}$ converges weakly to $F$, written $F_n \xrightarrow{w} F$, if $F_n(x) \to F(x)$ for all $x \in C(F)$, where $C(F) = \{x \in \mathbb{R} : F \text{ is continuous at } x\}$.
4. Complete convergence: the sequence $\{F_n\}$ converges completely to $F$, written $F_n \xrightarrow{c} F$, if $F_n \xrightarrow{w} F$ and $F$ is proper.
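Weak convergence can be illustrated numerically (this sketch is ours, not part of the original development): the distribution functions of $\mathrm{Uniform}(0, 1 + 1/n)$ converge pointwise to the $\mathrm{Uniform}(0,1)$ distribution function at every continuity point. The function names below are illustrative.

```python
def F(x):
    """Distribution function of Uniform(0, 1)."""
    return min(max(x, 0.0), 1.0)

def F_n(x, n):
    """Distribution function of Uniform(0, 1 + 1/n)."""
    return min(max(x / (1.0 + 1.0 / n), 0.0), 1.0)

# F_n -> F weakly: check pointwise convergence at several continuity points.
points = (-0.5, 0.25, 0.5, 0.75, 2.0)
max_gap = max(abs(F_n(x, 10**6) - F(x)) for x in points)
```

For large $n$ the gap at each continuity point is tiny, as the definition of weak convergence requires; here every real $x$ is a continuity point of $F$ except the endpoints' limits, so the check is representative.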
3 Setup

In this section, we give the basic setup for each scenario.

3.1 In $\mathbb{R}^n$

Let $(\Omega, \mathcal{B}, \mathbb{P})$ be a probability space. We can define a random variable as a map

$(\Omega, \mathcal{B}, \mathbb{P}) \xrightarrow{X} (\mathbb{R}, \mathcal{B}(\mathbb{R}))$,

so $X$ is a random variable with the property that $X^{-1}((-\infty, \lambda]) = [X \leq \lambda] \in \mathcal{B}$ for all $\lambda \in \mathbb{R}$. There is an induced measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, namely $\mathbb{P} \circ X^{-1}$. We can also define the distribution function of $X$ as

$F(A) = \mathbb{P} \circ X^{-1}(A) = \mathbb{P}[X \in A]$.

For a point, $F(x) = F((-\infty, x]) = \mathbb{P}[X \leq x]$. The density function is written $f(x)$, or $f_\theta(x)$ when we need to remind ourselves what the parameter of the distribution is.

3.2 Riemannian Manifold

Let $M$ be a connected Riemannian manifold. Take a continuous collection of dot products on the tangent spaces $T_X M$; in a chart, the dot product of the basis vectors at $X$ is $\langle \partial_i \mid \partial_j \rangle_X = Q_{ij}(X)$, where $Q(X)$ is a positive definite matrix. The distance between two points on the manifold is defined as the minimum length among smooth curves joining the points; the minimizing curves are called geodesics. There is a unique geodesic starting at a given point $X$ with a given tangent vector. We consider the exponential map, which sends each tangent vector to the corresponding point on the manifold; in particular, $y = \exp_X(\overrightarrow{XY})$. The measure we consider on the manifold is the volume measure induced by the metric $Q(X)$; specifically, $dM(x) = \sqrt{|Q(x)|}\, dx$.

On $M$, we can define a probability space $(\Omega, \mathcal{B}, \mathbb{P})$. Let $X : \Omega \to M$ be a random primitive. The induced measure is $\mathbb{P} \circ X^{-1}$. In particular, take

$(\Omega, \mathcal{B}, \mathbb{P}) \xrightarrow{X} (M, \mathcal{A})$.

Let $p_X(y)$ be the pdf of $X$.
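To make the exponential map concrete, here is a small illustrative sketch (ours, not from the paper) for the unit sphere $S^2 \subset \mathbb{R}^3$, where geodesics are great circles: $\exp_X(v)$ follows the great circle from $X$ in the direction of the tangent vector $v$, and the geodesic distance from $X$ to $\exp_X(v)$ equals $\|v\|$ (up to the cut locus). The function name is our own.

```python
import math

def exp_map_sphere(x, v):
    """Exponential map on the unit sphere S^2: follow the great circle
    starting at x with tangent vector v (assumed orthogonal to x)."""
    norm_v = math.sqrt(sum(c * c for c in v))
    if norm_v == 0.0:
        return list(x)
    return [math.cos(norm_v) * xi + math.sin(norm_v) * vi / norm_v
            for xi, vi in zip(x, v)]

# Tangent vector of length pi/4 at the north pole.
x = [0.0, 0.0, 1.0]
v = [math.pi / 4, 0.0, 0.0]
y = exp_map_sphere(x, v)

# Geodesic distance on the sphere is the angle between the points.
dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y))))
dist = math.acos(dot)   # should equal |v| = pi/4
```

The result stays on the manifold ($\|y\| = 1$) and travels exactly $\|v\|$ along the geodesic, mirroring the text's statement that there is a unique geodesic for each starting point and tangent vector.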
4 Probability of a Set or Event Occurring

We often want to know the probability of a certain event, so it is important to have a definition for this. The probability of an event occurring depends on the distribution of the random variable, which can be given by its distribution function, by its probability mass function (in the discrete case), or by its probability density function (in the continuous case). We denote the probability of an event $A$ occurring by $\mathbb{P}[X \in A]$, regardless of whether the random variable $X$ is continuous or discrete. In the discrete case, we can change the integral to a sum.

4.1 In $\mathbb{R}^n$

$\mathbb{P} \circ X^{-1}(A) = \mathbb{P}[X \in A] = \int_\Omega \mathbf{1}_A(X)\, d\mathbb{P} = \int_\Omega \mathbf{1}_A(X(\omega))\, \mathbb{P}(d\omega) = \int_{\mathbb{R}} \mathbf{1}_A(x)\, dF(x) = \int_A dF(x) = \int_A f(x)\, dx$.

Note that the last three equalities are only true when $X$ is a random variable.

Example: a (continuous) uniform pdf on the interval $[a, b]$:

$f(x) = \begin{cases} \frac{1}{b-a} & \text{for } x \in [a,b] \\ 0 & \text{otherwise} \end{cases} = \frac{\mathbf{1}_{[a,b]}(x)}{b-a}$.

4.2 Riemannian Manifold

$\mathbb{P}[X \in M] = \int_M p_X(y)\, dM(y) = 1, \qquad \mathbb{P}[X \in A] = \int_A p_X(y)\, dM(y)$.

Example: a uniform pdf in a bounded set $A$:

$p_X(y) = \frac{\mathbf{1}_A(y)}{V(A)}$,

where $V(A)$ is the volume of the set $A$ with respect to the measure $dM$.
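For the continuous uniform example, $\mathbb{P}[X \in [c,d]] = \int_c^d f(x)\,dx = (d-c)/(b-a)$ for $[c,d] \subseteq [a,b]$. The sketch below (function names are ours, not the paper's) checks this by integrating the density with a midpoint Riemann sum.

```python
def uniform_pdf(x, a, b):
    """Density of Uniform(a, b): the indicator of [a, b] divided by b - a."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def prob_interval(c, d, a, b, n=100_000):
    """P[X in [c, d]] via a midpoint Riemann sum of the density."""
    h = (d - c) / n
    return sum(uniform_pdf(c + (k + 0.5) * h, a, b) for k in range(n)) * h

# For X ~ Uniform(0, 10), P[X in [2, 5]] = 3/10.
p = prob_interval(2.0, 5.0, 0.0, 10.0)
```

Since the density is constant on the interval, the midpoint rule is exact here up to floating-point rounding, so the numerical answer matches the closed form $(5-2)/(10-0) = 0.3$.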
5 Expected Value of a Function

Expected value measures a long-run average of a random variable/primitive, or of a function of a random variable/primitive. For example, suppose we want to know how much we will win or lose gambling at dice. Depending on the roll, we either win a certain amount of money, lose a certain amount, or break even. Assuming the dice are fair, we know the probability of each outcome, so we can calculate how much, on average, we would gain or lose from playing the game, provided we play long enough.

We first find the expected value of a function, $g(X)$ or $\phi(X)$. If we take the identity map, i.e. $g(X) = X$ or $\phi(X) = X$, we obtain the long-run average of the random variable/primitive $X$ itself. The expected value is also the first moment of a distribution; moments are used to distinguish one distribution from another.

5.1 In $\mathbb{R}^n$

Take $g(X)$ to be a measurable function of $X$. Then

$E(g(X)) = \int_\Omega g(X)\, d\mathbb{P} = \int_\Omega g(X(\omega))\, \mathbb{P}(d\omega) = \int_{\mathbb{R}} g(x)\, dF(x) = \int_{\mathbb{R}} g(x) f(x)\, dx$.

Note that the last two equalities are only true when $X$ is a random variable.

5.2 Riemannian Manifold

Let $\phi$ be a Borel real-valued function defined on $M$; then $\phi(X)$ is a real random primitive. Its expected value is

$E[\phi(X)] = \int_M \phi(y)\, p_X(y)\, dM(y)$.
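The dice-gambling example can be computed exactly: for a discrete random variable, $E[g(X)] = \sum_x g(x)\, \mathbb{P}[X = x]$. A minimal sketch, with a payoff scheme chosen purely for illustration (win 4 on a six, lose 1 otherwise):

```python
from fractions import Fraction

# g(X): payoff from one roll of a fair die.
payoff = {face: (4 if face == 6 else -1) for face in range(1, 7)}

# E[g(X)] = sum over faces of g(face) * P[X = face], with P = 1/6 each.
expected = sum(Fraction(1, 6) * g for g in payoff.values())
```

The long-run average is $-1/6$ per game: the big win on a six does not offset the five losing faces, matching the interpretation of expectation as a long-run average.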
6 Variance

Variance, the square of the standard deviation, is very important in statistics. It is also the second moment of a distribution. Variance measures the spread of the data: if the variance is large, the data points are more spread out from the mean, while a small variance indicates that the data are more concentrated around the mean. We can also calculate the variance of a function of the statistic we are interested in; this is the Delta Theorem, stated below.

6.1 In $\mathbb{R}^n$

If $X^2 \in L_1$, then $X \in L_2$. Define the variance as

$V(X) = E(X - E(X))^2 = E(X^2) - [E(X)]^2$.

Theorem 1. (Delta Theorem) For random variables $T_n$, assume that $n^{1/2}(T_n - \theta) \xrightarrow{D} N(0, v(\theta))$, where $v(\theta)$ is the asymptotic variance. Let $g(\cdot)$ be a function differentiable at $\theta$ with $g'(\theta) \neq 0$. Then

$n^{1/2}\{g(T_n) - g(\theta)\} \xrightarrow{D} N(0, v_g(\theta))$, where $v_g(\theta) = [g'(\theta)]^2 v(\theta)$.

6.2 Riemannian Manifold

Suppose $\mathrm{dist}(X, Y)^2 = \|\overrightarrow{XY}\|^2$. Then

$\sigma_X^2(y) = E\left[\mathrm{dist}(y, X)^2\right] = \int_M \mathrm{dist}(y, z)^2\, p_X(z)\, dM(z)$.
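Theorem 1 can be checked by simulation (this sketch is ours; the sample sizes are arbitrary choices). Take $T_n$ to be the mean of $n$ iid $\mathrm{Uniform}(0,1)$ draws, so $\theta = 1/2$ and $v(\theta) = 1/12$, and take $g(t) = t^2$. The delta method predicts $v_g(\theta) = [g'(1/2)]^2 \cdot (1/12) = 1/12$.

```python
import math
import random

random.seed(0)

# Delta method check: T_n = mean of n Uniform(0,1) draws, theta = 1/2,
# v(theta) = 1/12, g(t) = t^2, so v_g(theta) = (2*theta)^2 * 1/12 = 1/12.
n, reps = 200, 20_000
vals = []
for _ in range(reps):
    t_n = sum(random.random() for _ in range(n)) / n
    vals.append(math.sqrt(n) * (t_n ** 2 - 0.25))   # sqrt(n){g(T_n) - g(theta)}

mean = sum(vals) / reps
var_hat = sum((v - mean) ** 2 for v in vals) / (reps - 1)   # should be near 1/12
```

The empirical variance of $\sqrt{n}\{g(T_n) - g(\theta)\}$ lands close to $1/12 \approx 0.0833$, as the theorem predicts.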
7 Covariance

Covariance is important when we have multiple random variables with different distributions. Random variables may be related to one another in some way, and covariance is one metric that captures this relationship (another is correlation, which is a scaled covariance). A positive covariance between two random variables means they tend to move together (a positive correlation); a negative covariance means they tend to move in opposite directions (a negative correlation). When the covariance is 0, the variables are uncorrelated. If two random variables are independent, then their covariance is always zero; the converse is not necessarily true.

7.1 In $\mathbb{R}^n$

If $X, Y \in L_2$, then

$COV(X, Y) = E[(X - E(X))(Y - E(Y))] = E(XY) - E(X)E(Y)$.

If $\mathbf{X}$ and $\mathbf{Y}$ are random vectors, then

$COV(\mathbf{X}, \mathbf{Y}) = E\left[(\mathbf{X} - E\mathbf{X})(\mathbf{Y} - E\mathbf{Y})^\top\right] = E(\mathbf{X}\mathbf{Y}^\top) - E(\mathbf{X})E(\mathbf{Y})^\top$.

7.2 Riemannian Manifold

Covariance calculations depend on how we view our chart. If we consider the chart as a matrix, then the covariance depends on the choice of basis; if instead we view it as a bilinear form over the tangent plane, the covariance does not depend on the basis.

Earlier we defined $\mathbb{E}$ as the mean value, or Fréchet expectation. In particular, $\mathbb{E}[X]$ is the set of mean primitives, and we can define it as

$\mathbb{E}[X] = \arg\min_{y \in M} E\left[\mathrm{dist}(y, X)^2\right]$.

Suppose $\bar{X} \in \mathbb{E}[X]$, and take $\bar{X}$ to be the unique mean value of $X$. Then

$\Sigma_{XX} = COV_{\bar{X}}(X) = E\left[\overrightarrow{\bar{X}X}\, \overrightarrow{\bar{X}X}^\top\right]$.
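The remark that zero covariance does not imply independence can be verified exactly with a classic discrete example (ours, not the paper's): $X$ uniform on $\{-1, 0, 1\}$ and $Y = X^2$. Then $COV(X, Y) = E[X^3] - E[X]E[X^2] = 0$, yet $Y$ is a deterministic function of $X$.

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = X^2: covariance is zero, yet Y depends on X.
support = [-1, 0, 1]
p = Fraction(1, 3)

E_X = sum(p * x for x in support)              # 0 by symmetry
E_Y = sum(p * x * x for x in support)          # 2/3
E_XY = sum(p * x * (x * x) for x in support)   # E[X^3] = 0 by symmetry

cov = E_XY - E_X * E_Y                         # exactly 0

# Dependence: P[Y = 0 | X = 0] = 1, but unconditionally P[Y = 0] = 1/3.
p_Y_zero = sum(p for x in support if x * x == 0)
```

The exact arithmetic (via `Fraction`) makes the point cleanly: uncorrelated, but clearly not independent.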
8 Fisher Information

One interpretation of Fisher information is that the variance is small when the Fisher information is large. This can be seen from the Cramér-Rao Lower Bound, displayed below.

8.1 In $\mathbb{R}^n$

Suppose $\theta$ is $n$-dimensional and $f_\theta(x)$ is the density of $X$ with respect to $\mathbb{P}$. The FI regularity conditions are:

1. $\partial f_\theta(x)/\partial \theta_i$ exists $\mathbb{P}$-a.e. for each $i$;
2. $\int f_\theta(x)\, d\mathbb{P}(x)$ can be differentiated under the integral sign;
3. the support of $f_\theta$ is the same for all $\theta$.

Fisher information may be defined as

$I_X(\theta)_{ij} = COV_\theta\Bigl(\underbrace{\frac{\partial}{\partial \theta_i} \log f_\theta(X),\ \frac{\partial}{\partial \theta_j} \log f_\theta(X)}_{\text{score vector}}\Bigr) = -E_\theta\left\{\frac{\partial^2}{\partial \theta_i\, \partial \theta_j} \log f_\theta(X)\right\}$,

where the last equality holds provided that we can differentiate twice under the integral sign.

Theorem 2. (Cramér-Rao Lower Bound) For simplicity, take $\theta$ to be a scalar, and assume that $f_\theta$ satisfies the FI regularity conditions. Let $X_1, \ldots, X_n \overset{iid}{\sim} f_\theta$ and let $T = T(X_1, \ldots, X_n)$ be a real-valued statistic with $E_\theta(T) = g(\theta)$. Then

$V_\theta(T) \geq \{g'(\theta)\}^2 \{n I(\theta)\}^{-1}$.

8.2 Riemannian Manifold

The information is given by $I(X)$ and the entropy by $H(X)$:

$I(X) = -H(X) = E[\log p_X(X)]$.
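For a concrete instance of Theorem 2 (this simulation sketch is ours; the parameters are arbitrary): if $X \sim N(\theta, s^2)$, the Fisher information is $I(\theta) = 1/s^2$, and the sample mean attains the bound, $V_\theta(\bar{X}) = s^2/n = \{n I(\theta)\}^{-1}$.

```python
import random

random.seed(1)

# For X ~ N(theta, s^2), I(theta) = 1/s^2; the sample mean attains the
# Cramér-Rao bound: Var(mean) = s^2 / n = 1 / (n * I(theta)).
theta, s, n, reps = 2.0, 3.0, 25, 40_000

means = [sum(random.gauss(theta, s) for _ in range(n)) / n for _ in range(reps)]
m = sum(means) / reps
var_hat = sum((x - m) ** 2 for x in means) / (reps - 1)

crlb = 1.0 / (n * (1.0 / s ** 2))   # = s^2 / n = 9/25 = 0.36
```

The simulated variance of the sample mean sits right at the Cramér-Rao bound, illustrating the "large information, small variance" interpretation.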
9 Multivariate Normal Distribution

The normal distribution is heavily used in statistics. In many cases, as long as the sample size is large enough, we can approximate the distribution of our data by a normal distribution because of the Central Limit Theorem (CLT) for iid random variables. (There is also a CLT for random variables that are independent but not necessarily identically distributed: the Lindeberg-Feller CLT.) We give the CLT for the $\mathbb{R}^n$ case below.

Theorem 3. (CLT) Let $\{X_n, n \geq 1\}$ be iid random variables with $E(X_n) = \mu$ and $V(X_n) = \sigma^2$. Suppose $N$ is a random variable with $N(0,1)$ distribution. If $S_n = X_1 + \cdots + X_n$, then

$\frac{S_n - n\mu}{\sigma\sqrt{n}} \xrightarrow{D} N$.

Moreover, the normal distribution has many nice properties, so various calculations become easier to complete.

9.1 In $\mathbb{R}^n$

Let $X = [X_1, \ldots, X_k]^\top$ be a vector such that $X \sim N(\mu, \Sigma)$. Then the pdf of $X$ is given by

$f(x) = (2\pi)^{-k/2} |\Sigma|^{-1/2} \exp\left\{-\frac{1}{2}(X - \mu)^\top \Sigma^{-1} (X - \mu)\right\}. \qquad (1)$

This pdf exists only when $\Sigma$ is positive definite. The entropy (or $I_X(\theta)$) is given by

$\frac{k}{2}\left(1 + \ln(2\pi)\right) + \frac{1}{2}\ln|\Sigma|$.

Suppose $X \sim N(\mu, \sigma^2)$. Then the pdf of $X$ is given by

$f(x) = (2\pi\sigma^2)^{-1/2} \exp\left\{-\frac{(x - \mu)^2}{2\sigma^2}\right\}$.

9.2 Riemannian Manifold

The pdf of a normal distribution on a manifold minimizes the information, subject to a fixed mean value and covariance. Suppose we have a cut locus $C(\bar{X})$ with no continuity or differentiability constraint, a symmetric domain $D(\bar{X})$, and a concentration matrix $\Gamma$. Suppose $k$ is a normalization constant. Then the normal distribution pdf is given by

$N_{(\bar{X}, \Gamma)}(y) = k \exp\left[-\frac{\overrightarrow{\bar{X}y}^\top\, \Gamma\, \overrightarrow{\bar{X}y}}{2}\right]$,

with

$k^{-1} = \int_M \exp\left[-\frac{\overrightarrow{\bar{X}y}^\top\, \Gamma\, \overrightarrow{\bar{X}y}}{2}\right] dM(y)$

and

$\Sigma = k \int_M \overrightarrow{\bar{X}y}\, \overrightarrow{\bar{X}y}^\top \exp\left[-\frac{\overrightarrow{\bar{X}y}^\top\, \Gamma\, \overrightarrow{\bar{X}y}}{2}\right] dM(y)$.

Note that a high concentration matrix $\Gamma$ occurs if and only if the covariance matrix $\Sigma$ is small. The equations in Section 9.2 reduce to the Gaussian pdf in Equation (1) when working in a vector space.
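Theorem 3 can be illustrated by simulation (a sketch of ours; sample sizes are arbitrary): standardized sums of iid $\mathrm{Uniform}(0,1)$ variables, with $\mu = 1/2$ and $\sigma^2 = 1/12$, should behave like $N(0,1)$, so roughly 68.3% of the standardized sums fall within one standard deviation of zero.

```python
import math
import random

random.seed(2)

# CLT check: standardize S_n = X_1 + ... + X_n for X_i ~ Uniform(0, 1),
# which has mu = 1/2 and sigma = sqrt(1/12).
n, reps = 50, 20_000
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)

z = [(sum(random.random() for _ in range(n)) - n * mu) / (sigma * math.sqrt(n))
     for _ in range(reps)]

# For N(0, 1), P[|Z| <= 1] is about 0.683.
within_one_sd = sum(1 for v in z if abs(v) <= 1.0) / reps
```

Already at $n = 50$ the standardized sums are close to standard normal, which is why the normal approximation is so widely applicable.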
References

Ryan Martin. Lecture notes on advanced statistical theory. Supplement to the lectures for Stat 511 at UIC given by the author, January 2015.

Xavier Pennec. Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements. In Proc. of Nonlinear Signal and Image Processing (NSIP'99), 1999.

Sidney Resnick. A Probability Path. Birkhäuser, 2005.
More information1 Basic continuous random variable problems
Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and
More informationStochastic Simulation Introduction Bo Friis Nielsen
Stochastic Simulation Introduction Bo Friis Nielsen Applied Mathematics and Computer Science Technical University of Denmark 2800 Kgs. Lyngby Denmark Email: bfn@imm.dtu.dk Practicalities Notes will handed
More informationMULTIVARIATE PROBABILITY DISTRIBUTIONS
MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined
More informationThe Multivariate Gaussian Distribution
The Multivariate Gaussian Distribution Chuong B. Do October, 8 A vector-valued random variable X = T X X n is said to have a multivariate normal or Gaussian) distribution with mean µ R n and covariance
More informationSTAT 430/510: Lecture 16
STAT 430/510: Lecture 16 James Piette June 24, 2010 Updates HW4 is up on my website. It is due next Mon. (June 28th). Starting today back at section 6.7 and will begin Ch. 7. Joint Distribution of Functions
More informationChapter 5 continued. Chapter 5 sections
Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationMasters Comprehensive Examination Department of Statistics, University of Florida
Masters Comprehensive Examination Department of Statistics, University of Florida May 6, 003, 8:00 am - :00 noon Instructions: You have four hours to answer questions in this examination You must show
More informationENGG2430A-Homework 2
ENGG3A-Homework Due on Feb 9th,. Independence vs correlation a For each of the following cases, compute the marginal pmfs from the joint pmfs. Explain whether the random variables X and Y are independent,
More informationChapter 2. Continuous random variables
Chapter 2 Continuous random variables Outline Review of probability: events and probability Random variable Probability and Cumulative distribution function Review of discrete random variable Introduction
More informationLecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable
Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed
More informationRandom vectors X 1 X 2. Recall that a random vector X = is made up of, say, k. X k. random variables.
Random vectors Recall that a random vector X = X X 2 is made up of, say, k random variables X k A random vector has a joint distribution, eg a density f(x), that gives probabilities P(X A) = f(x)dx Just
More informationJoint Probability Distributions, Correlations
Joint Probability Distributions, Correlations What we learned so far Events: Working with events as sets: union, intersection, etc. Some events are simple: Head vs Tails, Cancer vs Healthy Some are more
More information2 (Statistics) Random variables
2 (Statistics) Random variables References: DeGroot and Schervish, chapters 3, 4 and 5; Stirzaker, chapters 4, 5 and 6 We will now study the main tools use for modeling experiments with unknown outcomes
More informationRandom Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R
In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample
More informationSystem Identification, Lecture 4
System Identification, Lecture 4 Kristiaan Pelckmans (IT/UU, 2338) Course code: 1RT880, Report code: 61800 - Spring 2012 F, FRI Uppsala University, Information Technology 30 Januari 2012 SI-2012 K. Pelckmans
More informationSTOR Lecture 16. Properties of Expectation - I
STOR 435.001 Lecture 16 Properties of Expectation - I Jan Hannig UNC Chapel Hill 1 / 22 Motivation Recall we found joint distributions to be pretty complicated objects. Need various tools from combinatorics
More informationChapter 3. Point Estimation. 3.1 Introduction
Chapter 3 Point Estimation Let (Ω, A, P θ ), P θ P = {P θ θ Θ}be probability space, X 1, X 2,..., X n : (Ω, A) (IR k, B k ) random variables (X, B X ) sample space γ : Θ IR k measurable function, i.e.
More informationIntroduction to Probability and Stocastic Processes - Part I
Introduction to Probability and Stocastic Processes - Part I Lecture 2 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark
More information01 Probability Theory and Statistics Review
NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement
More informationSUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)
SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems
More information2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1).
Name M362K Final Exam Instructions: Show all of your work. You do not have to simplify your answers. No calculators allowed. There is a table of formulae on the last page. 1. Suppose X 1,..., X 1 are independent
More informationProbability: Handout
Probability: Handout Klaus Pötzelberger Vienna University of Economics and Business Institute for Statistics and Mathematics E-mail: Klaus.Poetzelberger@wu.ac.at Contents 1 Axioms of Probability 3 1.1
More information4th IIA-Penn State Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur
4th IIA-Penn State Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur Laws of Probability, Bayes theorem, and the Central Limit Theorem Rahul Roy Indian Statistical Institute, Delhi. Adapted
More informationELEMENTS OF PROBABILITY THEORY
ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable
More information