Session3.nb — transcription

- Douglas Dawson
- 5 years ago
Ex. 3 — Mathematica knows about all standard probability distributions

a)
In[336]:= p = 0.4; n = 6; r = 5;
          PDF[BinomialDistribution[n, p], r]
          BarChart[Table[PDF[BinomialDistribution[n, p], k], {k, 0, n}]]
Out[339]= 0.036864
Out[340]= [bar chart of the binomial pmf]

b)
In[341]:= ν = 2.4;
          PDF[PoissonDistribution[ν], r]
          BarChart[Table[PDF[PoissonDistribution[ν], k], {k, 0, n}]]
Out[342]= 0.060196
Out[343]= [bar chart of the Poisson pmf]

c) The probability is just the pdf integrated over the interval.
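As an aside, the pmf values from parts a) and b) can be cross-checked outside Mathematica with only the Python standard library (the function names here are ours, not from the notebook):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, nu):
    # P(X = k) for X ~ Poisson(nu)
    return exp(-nu) * nu**k / factorial(k)

print(binomial_pmf(5, 6, 0.4))  # matches PDF[BinomialDistribution[6, 0.4], 5]
print(poisson_pmf(5, 2.4))      # matches PDF[PoissonDistribution[2.4], 5]
```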
In[344]:= μ = 2.4; p = 0.4; n = 6; σ = Sqrt[n p (1 - p)]
          distr = NormalDistribution[μ, σ];
          CDF[distr, 5.5] - CDF[distr, 4.5]
          Plot[PDF[distr, x], {x, -1.2, 6}, AxesOrigin -> {0, 0}];
          Show[%, Plot[PDF[distr, x], {x, 4.5, 5.5}, AxesOrigin -> {0, 0}, Filling -> Axis]]
Out[344]= 1.2
Out[346]= 0.0352
Out[348]= [normal pdf with the interval 4.5 < x < 5.5 shaded]

d)
In[349]:= μ = 2.4; σ = Sqrt[ν]
          distr = NormalDistribution[μ, σ];
          CDF[distr, 5.5] - CDF[distr, 4.5]
          Plot[PDF[distr, x], {x, -1.2, 6}, AxesOrigin -> {0, 0}];
          Show[%, Plot[PDF[distr, x], {x, 4.5, 5.5}, AxesOrigin -> {0, 0}, Filling -> Axis]]
Out[349]= 1.54919
Out[351]= 0.0649
Out[353]= [normal pdf with the interval 4.5 < x < 5.5 shaded]

Ex 4
In[354]:= Needs["MultivariateStatistics`"]
          Clear[μ, σ]
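Before moving on to Ex. 4: the continuity-corrected normal approximation of Ex. 3 c) can be cross-checked with Python's `math.erf` (variable names are ours):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # CDF of N(mu, sigma) expressed via the error function
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, n, p = 2.4, 6, 0.4
sigma = sqrt(n * p * (1 - p))  # = 1.2, the binomial standard deviation
# P(4.5 < X < 5.5): continuity-corrected approximation to P(X = 5)
prob = normal_cdf(5.5, mu, sigma) - normal_cdf(4.5, mu, sigma)
print(sigma, prob)
```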
The covariance matrix in 2 dimensions can be written as a function of ρ, σx, σy:

In[356]:= meanvector = {μx, μy};
          covmatrix = {{σx^2, ρ σx σy}, {ρ σx σy, σy^2}}; MatrixForm[covmatrix]
          distr = MultinormalDistribution[meanvector, covmatrix];
Out[358]//MatrixForm= [the symmetric 2x2 matrix above]

Mean and covariance are as expected:
In[360]:= Mean[distr]
          Covariance[distr]
Out[360]= {μx, μy}
Out[361]= {{1, 0.5}, {0.5, 1}}

a) For simplicity, center around the origin. The matrix is symmetric, and so are the contours:
In[362]:= meanvector = {0, 0}; ρ = 0; σx = 1; σy = 1;
          covmatrix = {{σx^2, ρ σx σy}, {ρ σx σy, σy^2}}; MatrixForm[covmatrix]
          distr = MultinormalDistribution[meanvector, covmatrix];
Out[365]//MatrixForm= {{1, 0}, {0, 1}}

Quick check: the contours are circles:
In[367]:= ContourPlot[PDF[distr, {x, y}], {x, -2, 2}, {y, -2, 2}, ColorFunction -> "Rainbow"]
Out[367]= [circular contours]

Analytical solution: the pdf decreases monotonically in the radial direction, so we need to find the radius R such that 68% (95%) of the probability is contained inside the circle.
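As an independent sanity check of the (ρ, σx, σy) parametrization above: the matrix is symmetric by construction, and its determinant σx^2 σy^2 (1 - ρ^2) is positive for |ρ| < 1. A minimal Python sketch (the helper name is ours):

```python
def cov_matrix(rho, sx, sy):
    # 2-D covariance matrix parametrized by correlation and marginal sigmas
    return [[sx * sx, rho * sx * sy],
            [rho * sx * sy, sy * sy]]

m = cov_matrix(0.5, 1.0, 1.0)
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]  # = sx^2 sy^2 (1 - rho^2)
print(m, det)
```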
1) Transform to polar coordinates:
In[368]:= Clear[r, φ];
          var = {r, φ};
          trafo = {r Cos[φ], r Sin[φ]};
          jacobianmatrix = Table[D[trafo[[i]], var[[j]]], {i, 2}, {j, 2}]; MatrixForm[jacobianmatrix]
          jacobian = Det[jacobianmatrix] // FullSimplify
          distrtransformed = FullSimplify[(PDF[distr, {x, y}] jacobian) /. {x -> trafo[[1]], y -> trafo[[2]]}]
Out[371]//MatrixForm= {{Cos[φ], -r Sin[φ]}, {Sin[φ], r Cos[φ]}}
Out[372]= r
Out[373]= (E^(-r^2/2) r)/(2 π)

2) Integrate the pdf up to a radius R. Choose a starting value of 1 to find the solution numerically:
In[374]:= R68 = R /. FindRoot[Integrate[distrtransformed, {r, 0, R}, {φ, 0, 2 Pi}] == 0.68, {R, 1}]
          R95 = R /. FindRoot[Integrate[distrtransformed, {r, 0, R}, {φ, 0, 2 Pi}] == 0.95, {R, 1}]
Out[374]= 1.50959
Out[375]= 2.44775

3) Make sure we didn't commit a simple error: the pdf should be normalized:
In[376]:= Integrate[distrtransformed, {r, 0, Infinity}, {φ, 0, 2 Pi}]
Out[376]= 1

4) Now plot the contours. The red circle is the 68% region, the blue circle the 95% region.
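Incidentally, the radial integral has a closed form, Integrate[r Exp[-r^2/2], {r, 0, R}] = 1 - Exp[-R^2/2], so the FindRoot values can be checked analytically; a short Python sketch (the function name is ours):

```python
from math import exp, log, sqrt

def containment_radius(q):
    # Solve 1 - exp(-R^2/2) == q for the isotropic unit 2-D Gaussian
    return sqrt(-2 * log(1 - q))

R68 = containment_radius(0.68)
R95 = containment_radius(0.95)
print(R68, R95)  # ≈ 1.5096 and 2.4477, matching the FindRoot solution
```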
In[377]:= ContourPlot[distrtransformed, {r, 0, R95 + 1}, {φ, 0, 2 Pi}, ColorFunction -> "Rainbow"];
          ContourPlot[PDF[distr, {x, y}], {x, -3, 3}, {y, -3, 3}, ColorFunction -> "Rainbow",
            Contours -> {distrtransformed /. r -> R68, distrtransformed /. r -> R95}]
Out[378]= [contour plot with the 68% and 95% circles]

b) Positive correlation: the ellipsoids are tilted. We omit the exact results from now on and just show the form of the contours.
In[379]:= meanvector = {0, 0}; ρ = 0.5; σx = 1; σy = 1;
          covmatrix = {{σx^2, ρ σx σy}, {ρ σx σy, σy^2}}; MatrixForm[covmatrix]
          distr = MultinormalDistribution[meanvector, covmatrix];
          ContourPlot[PDF[distr, {x, y}], {x, -2, 2}, {y, -2, 2}, ColorFunction -> "Rainbow", Contours -> 7]
Out[382]//MatrixForm= {{1, 0.5}, {0.5, 1}}
Out[384]= [tilted elliptical contours]

c) Negative correlation: the ellipsoids point towards negative values. The variance is larger in the y direction, so the ellipsoids are stretched.
In[385]:= meanvector = {0, 0}; ρ = -0.5; σx = 1; σy = 3;
          covmatrix = {{σx^2, ρ σx σy}, {ρ σx σy, σy^2}}; MatrixForm[covmatrix]
          distr = MultinormalDistribution[meanvector, covmatrix];
          ContourPlot[PDF[distr, {x, y}], {x, -2, 2}, {y, -2, 2}, ColorFunction -> "Rainbow", Contours -> 7]
Out[388]//MatrixForm= {{1, -1.5}, {-1.5, 9}}
Out[390]= [stretched elliptical contours]

Ex 1

Fix the random number generator to get reproducible results:
In[391]:= SeedRandom[1232];

The histogram range and bin width:
In[392]:= xmin = 0; xmax = 1; width = 0.03;

The numbers from the instructions:
In[393]:= Nsamples = 10; Nexperiments = {100, 1000, 10000}; (* digits partly lost in the transcription *)

100: not quite Gaussian yet
In[395]:= Table[Mean[RandomReal[{0, 1}, Nsamples]], {k, 1, Nexperiments[[1]]}];
          Histogram[%, {xmin, xmax, width}]
Out[396]= [ragged, roughly bell-shaped histogram]

1000: pretty Gaussian
In[397]:= Table[Mean[RandomReal[{0, 1}, Nsamples]], {k, 1, Nexperiments[[2]]}];
          Histogram[%, {xmin, xmax, width}]
Out[398]= [smoother bell-shaped histogram]

10000: now it looks good
In[399]:= Table[Mean[RandomReal[{0, 1}, Nsamples]], {k, 1, Nexperiments[[3]]}];
          Histogram[%, {xmin, xmax, width}]
Out[400]= [clearly Gaussian-looking histogram]

100000: not much better than with 10000
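Quantitatively, the CLT says these averages approach a normal distribution with mean 0.5 and σ = Sqrt[1/(12 Nsamples)] ≈ 0.091. A Python sketch of the same experiment (our own seed; Nsamples = 10 is our reading of the garbled notebook value):

```python
import random
from statistics import mean, stdev

random.seed(1232)   # any fixed seed for reproducibility
Nsamples = 10       # assumed sample size per experiment
Nexperiments = 10000

averages = [mean(random.random() for _ in range(Nsamples))
            for _ in range(Nexperiments)]
# CLT prediction: mean 0.5, sigma = sqrt(1/12 / Nsamples) ≈ 0.0913
print(mean(averages), stdev(averages))
```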
In[401]:= Table[Mean[RandomReal[{0, 1}, Nsamples]], {k, 1, 100000}];
          Histogram[%, {xmin, xmax, width}]
Out[402]= [histogram, essentially unchanged]

Ex 2

a) Create all random numbers as a matrix where each column is one experiment with 10 samples:
In[403]:= Nexperiments = 1000;
In[404]:= randomnumbers = Table[RandomReal[{i - 1, i}], {i, 1, Nsamples}, {k, 1, Nexperiments}];

...and then compute the averages over each column. Mathematica indexes matrices by rows, so we need to transpose:
In[405]:= averages = Table[Mean[Transpose[randomnumbers][[k]]], {k, 1, Nexperiments}];

The distribution is sharply peaked at 5:
In[406]:= Histogram[averages, {0, 10, 0.1}]
Out[406]= [narrow peak at 5]

b) Same as before, just rescaled differently:
In[407]:= randomnumbers = Table[RandomReal[{0, 2^(i - 1)}], {i, 1, Nsamples}, {k, 1, Nexperiments}];
          averages = Table[Mean[Transpose[randomnumbers][[k]]], {k, 1, Nexperiments}];
In[409]:= Histogram[averages]
Out[409]= [broad histogram]

Now what happens if only one number is large and all the others are between 0 and 1? The distribution is now very much uniform: the big number, with its factor of 2^9, dominates the others. The Central Limit Theorem evidently doesn't apply.

In[411]:= randomnumbers = Table[RandomReal[{0, 1}], {i, 1, Nsamples}, {k, 1, Nexperiments}];
          randomnumbers[[Nsamples]] = 2^9 randomnumbers[[Nsamples]]; (* this line is garbled in the source; per the text, the last row is scaled by 2^9 *)
          averages = Table[Mean[Transpose[randomnumbers][[k]]], {k, 1, Nexperiments}];
In[414]:= Histogram[averages]
Out[414]= [roughly uniform histogram]

Ex 5

We generate random numbers uniformly in a square of side length 2 enclosing the unit circle, so the ratio of the two areas is the probability of a success: p = π r^2/(2 r)^2 = π/4. We do not take too many events to start with; more events will simply make the posterior sharper and the analysis more precise.

In[415]:= nevents = 200;
          SeedRandom[234];
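The hit-and-miss step of Ex. 5 can be sketched equivalently in Python; the seed and the resulting count here are our own, not the notebook's random stream:

```python
import random

random.seed(42)  # our own seed, chosen for reproducibility
nevents = 200

# uniform points in the square [-1, 1]^2; a success lies inside the unit circle
events = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(nevents)]
r = sum(1 for x, y in events if x * x + y * y <= 1)
print(r, r / nevents, "vs. pi/4 ≈ 0.7854")
```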
Generate all events:
In[417]:= events = Table[{RandomReal[{-1, 1}], RandomReal[{-1, 1}]}, {i, 1, nevents}];

Filter out the successes and failures; x.x is the inner product of the 2-dim. vectors:
In[418]:= successes = DeleteCases[events, x_ /; x.x > 1];
          failures = Cases[events, x_ /; x.x > 1];
          ListPlot[{successes, failures}, AspectRatio -> 1]
Out[420]= [scatter plot of the points inside and outside the unit circle]

In[421]:= Clear[f]

Estimate p = π/4 from the number of successes, r, using Bayes' theorem and a flat prior. The posterior, with p limited to [0, 1], is:
In[422]:= f[p_, r_, n_] := (n + 1) Binomial[n, r] p^r (1 - p)^(n - r) /; 0 <= p && p <= 1
          f[r/nevents, r, nevents] // N
Out[423]= [the posterior evaluated at the mode]

The maximum of the posterior is at p = r/n. So our posterior knowledge of the value of π, given the r successes observed, is completely contained in the following plot. The peak becomes narrower as more events are observed.
In[424]:= r = Length[successes]
          posterior = Plot[f[Π/4, r, nevents], {Π, 2.5, 3.7},
            AxesLabel -> {Π, "Posterior(Π|r,n)"}, PlotRange -> Full];
          prior = Plot[1/4, {x, 1, 4}, PlotStyle -> Red];
          Show[posterior, prior]
Out[424]= 148
Out[427]= [posterior peaked near Π = 2.96, with the flat prior at 1/4 in red]

For the central 68% probability interval around the mode, first find the value of the cumulative distribution function (CDF) at the mode. It is close to 0.5, since the distribution is almost symmetric around the mode. The CDF is a beta function, as discussed in the lecture:

In[428]:= Clear[p];
          cdf[p_] := If[0 <= p && p <= 1, Beta[p, 1 + r, 1. + nevents - r]/Beta[1 + r, 1. + nevents - r], 0]
          CDFatMode = cdf[r/nevents]
          Plot[cdf[p], {p, 0, 1}]
Out[430]= [a value close to 0.5]
Out[431]= [the CDF rising from 0 to 1]

Now numerically invert the CDF for the lower and upper bounds of the interval.
In[432]:= level = 0.68;
          plower1 = p /. FindRoot[CDFatMode - cdf[p] == level/2, {p, 0.78}]
          pupper1 = p /. FindRoot[cdf[p] - CDFatMode == level/2, {p, 0.78}]
          cdf[pupper1] - cdf[plower1] (* check the result *)
Out[433]= [plower1 ≈ 0.71]
Out[434]= [pupper1 ≈ 0.77]
Out[435]= 0.68

In[436]:= level = 0.95;
          plower2 = p /. FindRoot[CDFatMode - cdf[p] == level/2, {p, 0.78}]
          pupper2 = p /. FindRoot[cdf[p] - CDFatMode == level/2, {p, 0.78}]
          cdf[pupper2] - cdf[plower2] (* check the result *)
Out[437]= [plower2 ≈ 0.68]
Out[438]= 0.7952
Out[439]= 0.95

Add the limits to the posterior plot. Note the asymmetry of the 95% interval: the mode is not exactly equal to the median.

In[440]:= onesigmaband = Plot[f[Π/4, r, nevents], {Π, 4 plower1, 4 pupper1}, Filling -> Axis,
            PlotRange -> {0, f[r/nevents, r, nevents]}, FillingStyle -> Directive[Opacity[0.6], Blue]];
          twosigmaband = Plot[f[Π/4, r, nevents], {Π, 4 plower2, 4 pupper2}, Filling -> Axis,
            PlotRange -> {0, f[r/nevents, r, nevents]}, FillingStyle -> Directive[Opacity[0.4], Orange]];
          Show[posterior, onesigmaband, twosigmaband]
Out[442]= [posterior with the shaded 68% and 95% bands]

In[443]:= f[r/nevents - 0.1, r, nevents] // N
Out[443]= [a value far below the peak]

Calculate the mean, mode, and median for π. They are very close to one another.

1) Mode: we know it should be at r/n; let's confirm:
In[451]:= FindMaximum[f[p, r, nevents], {p, N[r/nevents]}]
          N[r/nevents]
Out[451]= {[peak value], {p -> 0.74}}
Out[452]= 0.74

so for Π:
In[453]:= 4 N[r/nevents]
Out[453]= 2.96

2) Mean:
In[448]:= 4 Integrate[p f[p, r, nevents], {p, 0, 1}] // N
Out[448]= 2.9505

3) Median:
In[449]:= sol = FindRoot[cdf[p] == 1/2, {p, 0.8}];
          4 p /. sol
Out[450]= [≈ 2.954]
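Since the posterior is the conjugate Beta(r+1, n-r+1) density, the mean, mode, and median (and hence the π estimates) can be cross-checked numerically outside Mathematica. A standard-library Python sketch, assuming the counts r = 148 and n = 200 read off the notebook:

```python
from math import comb
import bisect

r, n = 148, 200  # assumed counts restored from the notebook (r = Length[successes])

def posterior(p):
    # flat-prior posterior for the success probability: Beta(r+1, n-r+1) density
    return (n + 1) * comb(n, r) * p**r * (1 - p)**(n - r)

# trapezoidal CDF on a fine grid, then inverse lookup for quantiles
N = 20000
xs = [i / N for i in range(N + 1)]
cdf = [0.0]
for i in range(1, N + 1):
    cdf.append(cdf[-1] + (posterior(xs[i - 1]) + posterior(xs[i])) / (2 * N))

def quantile(q):
    # smallest grid point whose cumulative probability reaches q
    return xs[bisect.bisect_left(cdf, q)]

mode = r / n              # maximum of the posterior
post_mean = (r + 1) / (n + 2)  # exact Beta mean
median = quantile(0.5)
print(4 * mode, 4 * post_mean, 4 * median)  # three close estimates of pi
```

The grid inversion stands in for Mathematica's FindRoot on the Beta CDF; the mean agrees with the exact value 4 (r+1)/(n+2).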
More informationLecture 9: March 26, 2014
COMS 6998-3: Sub-Linear Algorithms in Learning and Testing Lecturer: Rocco Servedio Lecture 9: March 26, 204 Spring 204 Scriber: Keith Nichols Overview. Last Time Finished analysis of O ( n ɛ ) -query
More informationCommunication Theory II
Communication Theory II Lecture 5: Review on Probability Theory Ahmed Elnakib, PhD Assistant Professor, Mansoura University, Egypt Febraury 22 th, 2015 1 Lecture Outlines o Review on probability theory
More informationChapter Learning Objectives. Probability Distributions and Probability Density Functions. Continuous Random Variables
Chapter 4: Continuous Random Variables and Probability s 4-1 Continuous Random Variables 4-2 Probability s and Probability Density Functions 4-3 Cumulative Functions 4-4 Mean and Variance of a Continuous
More informationClass 11 Maths Chapter 15. Statistics
1 P a g e Class 11 Maths Chapter 15. Statistics Statistics is the Science of collection, organization, presentation, analysis and interpretation of the numerical data. Useful Terms 1. Limit of the Class
More informationSimulation. Where real stuff starts
Simulation Where real stuff starts March 2019 1 ToC 1. What is a simulation? 2. Accuracy of output 3. Random Number Generators 4. How to sample 5. Monte Carlo 6. Bootstrap 2 1. What is a simulation? 3
More informationChain of interacting spins-1/2
Chain of interacting spins-1/2 Below we give codes for finding: (1) the Hamiltonian in the site-basis, as well as its eigenvalues and eigenstates; (2) the density of states; (3) the number of principal
More informationStatistics: Learning models from data
DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial
More information1 x 2 and 1 y 2. 0 otherwise. c) Estimate by eye the value of x for which F(x) = x + y 0 x 1 and 0 y 1. 0 otherwise
Eample 5 EX: Which of the following joint density functions have (correlation) ρ XY = 0? (Remember that ρ XY = 0 is possible with symmetry even if X and Y are not independent.) a) b) f (, y) = π ( ) 2
More informationLecture 03 Positive Semidefinite (PSD) and Positive Definite (PD) Matrices and their Properties
Applied Optimization for Wireless, Machine Learning, Big Data Prof. Aditya K. Jagannatham Department of Electrical Engineering Indian Institute of Technology, Kanpur Lecture 03 Positive Semidefinite (PSD)
More informationAlgorithms for Uncertainty Quantification
Algorithms for Uncertainty Quantification Tobias Neckel, Ionuț-Gabriel Farcaș Lehrstuhl Informatik V Summer Semester 2017 Lecture 2: Repetition of probability theory and statistics Example: coin flip Example
More informationLearning Objectives for Stat 225
Learning Objectives for Stat 225 08/20/12 Introduction to Probability: Get some general ideas about probability, and learn how to use sample space to compute the probability of a specific event. Set Theory:
More informationVariations. ECE 6540, Lecture 10 Maximum Likelihood Estimation
Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter
More information