

Probability

One of the most useful and intriguing aspects of quantum mechanics is the Heisenberg Uncertainty Principle. Before I get to it, however, we need some initial comments on probability.

Let's first look at probability with discrete variables, for instance the probability of a coin toss giving heads. Everyone knows that this is 50%. In order to determine this experimentally, however, I need to toss the coin many times, or alternatively, I can toss many identical coins once. Let us begin with this example to build a pattern that is based on intuition. Let's assign a value to each toss (heads +1, tails -1) and histogram what happens after many (200) tosses. (For simplicity of my arguments, I happen to get equal numbers of heads and tails.)

[Figure: histogram of the 200 tosses, with 100 counts at -1 (tails) and 100 counts at +1 (heads).]

Now, let's ask some questions about this distribution of events:

1) What is the probability of getting tails (-1)?

A) We can calculate the probability by

    P(-1) = N(-1) / [N(-1) + N(+1)] = 100 / (100 + 100) = 1/2.

2) What is the probability of getting heads (+1)?

A) OK, I won't write down the equation: P(+1) = 1/2. Notice that the sum of the probabilities is

    P(-1) + P(+1) = [N(-1) + N(+1)] / [N(-1) + N(+1)] = (100 + 100) / (100 + 100) = 1;

the probability of getting either heads or tails must be 1 (excluding the remote possibility of getting the coin to land on its edge).

3) What is the most likely value?

A) Each is equally likely.

4) What is the median value?

A) The median value is defined to be the number where the probability of getting a larger value is the same as the probability of getting a smaller value. In our case, the median is zero.

5) What is the average value?
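The counting argument above is easy to check with a short simulation. This is just a sketch: the +1/-1 value convention matches the notes, but the variable names and the number of tosses are my own choices.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Simulate many tosses of a fair coin, recording heads as +1 and tails as -1
# (the value convention used in the notes above).
n_tosses = 200_000
tosses = [random.choice((-1, +1)) for _ in range(n_tosses)]

n_heads = tosses.count(+1)
n_tails = tosses.count(-1)

# Empirical probabilities: P(x) = N(x) / [N(-1) + N(+1)]
p_heads = n_heads / n_tosses
p_tails = n_tails / n_tosses

# Each probability is near 1/2, and they sum to 1 (the normalization condition).
print(p_heads, p_tails, p_heads + p_tails)
```

With 200,000 tosses the empirical probabilities land within about a percent of 1/2, while their sum is 1 by construction, mirroring the hand calculation above.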

A) I sometimes get median and average confused. Often, distributions have the same median and average values, but this is not always the case. In our present case, we calculate the average by

    average = [(-1)(100) + (+1)(100)] / 200 = 0,

and it is the same as the median. It is important to note here that we will never actually measure the average value: every individual toss gives -1 or +1, never 0.

As a more instructive example, I want to take some text straight out of Griffiths, since I don't think that I can summarize it as well as he has done in his excellent text:

[Excerpt from Griffiths' text on discrete probability distributions, not reproduced here.]
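The average-versus-median point can be made concrete in a few lines, computing <x> = sum over x of x * P(x) for the 200-toss example above (the dictionary layout is mine, for illustration only):

```python
# The 200-toss example from the notes: 100 tails (-1) and 100 heads (+1).
counts = {-1: 100, +1: 100}
total = sum(counts.values())  # 200 tosses in all

# Average via <x> = sum over x of x * P(x), where P(x) = N(x) / total
average = sum(x * n / total for x, n in counts.items())

print(average)  # 0.0 -- equal to the median, though no single toss ever yields 0
```

The result 0 is never an actual outcome of any toss, which is exactly the caution in the paragraph above.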

OK, so what should really be taken away from this? First, equation 1.9 is very important to be comfortable with: the average value (or "expectation value," as it is called in QM, but be careful with that word!) of any function f is just the sum of f(j) times the probability of j, i.e., <f(j)> = sum over j of f(j) P(j). A good understanding of the variance and the standard deviation is also very important, and will soon lead us to a discussion of the uncertainty principle.

Next, let's build on what we have learned with discrete variables and move to continuous variables. What happens when we ask for the probability that someone has the exact same birthday as me, down to the microsecond? It would be very, very small, in fact effectively zero. For continuous variables, it only makes sense to talk about the probability of something happening in a particular interval, say the probability that someone has a birthday within one day of mine. What we need is a probability per unit time, or a probability density, rho(t). The probability of finding someone with a birthday in the interval from t to t + dt is then given by rho(t) dt. For a finite interval, say from July 1 to July 31, one sums up the probabilities for each (infinitesimal) time interval, i.e., integrates from the beginning of July 1 to the end of July 31:

    P(birthday is in July) = integral from July 1 to July 31 of rho(t) dt
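The July integral can be sketched numerically. I assume here, purely for illustration, a uniform birthday density over a 365-day year; that assumption, the day numbering, and the function names are mine, not part of the notes.

```python
# Probability of an event in [t1, t2] given a density rho(t): integrate rho
# over the interval. Here rho is a uniform birthday density over a 365-day
# (non-leap) year, an illustrative assumption.
def rho(t):
    # t measured in days, with t = 0 at the start of Jan 1
    return 1.0 / 365.0

def prob(t1, t2, n=10_000):
    # simple midpoint-rule approximation of the integral of rho over [t1, t2]
    dt = (t2 - t1) / n
    return sum(rho(t1 + (i + 0.5) * dt) for i in range(n)) * dt

# July occupies t in [181, 212): days 181 through 211 counted from t = 0 at Jan 1
p_july = prob(181, 212)
print(p_july)  # ~ 31/365, about 0.0849
```

For a uniform density the answer is just the fraction of the year that July covers, 31/365; the point of the sketch is the machinery, P = integral of rho(t) dt, which carries over unchanged to non-uniform densities.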

where ρ(t) is the probability density for someone to have a birthday in a particular time interval. You may catch me saying that ρ(t) is the probability that something will happen. Just remember what I mean, and not what I say.

With this definition, we can now delineate several rules that apply to probabilities (written here for a density ρ(x), in Griffiths' notation):

The first, ∫ ρ(x) dx = 1 over the complete interval, is just a statement that the total probability must be one. In our birthday example, the probability that someone has a birthday between Jan. 1 and Dec. 31 is one; the probability that a coin toss will result in a heads or a tails is one; etc. We call this normalization of the probability density.

The second, <x> = ∫ x ρ(x) dx, is just the definition of the average value over the complete interval.

The third, <f(x)> = ∫ f(x) ρ(x) dx, is a generalization of the second: the average value of any function of the variable is given in the same way as the average value, i.e., the function times the probability density, integrated over the complete interval.

The fourth equation, σ² = <x²> − <x>², is just a restatement of the definition of the variance.

You will use these over and over again throughout this semester, so become familiar and comfortable with them.

NOTE: Many texts use Δx for σ_x! In fact, I was taught that way, and Liboff uses that notation. The formalism of Griffiths above is, I believe, more standard mathematical notation. I will try to use σ throughout this semester, but be aware of the possibility of confusion.

Homework: Consider a Gaussian distribution (quite common):

    ρ(x) = A e^(−λ(x − a)²)

where A, a, and λ are positive real constants.

a) Find the constant A by using the normalization condition.
b) Find <x>, <x²>, and σ.
c) Sketch the graph of ρ(x).
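As a numerical sanity check on the homework, the four rules above can be applied to the Gaussian for concrete values of the constants. The choices a = 2.0 and λ = 3.0, the integration scheme, and all names below are mine; the symbolic results are left to the exercise.

```python
import math

# Homework Gaussian: rho(x) = A * exp(-lam * (x - a)**2), with illustrative
# constants (my picks, not part of the assignment statement).
a, lam = 2.0, 3.0

def integrate(f, lo, hi, n=200_000):
    # midpoint-rule approximation of the integral of f over [lo, hi]
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx

# Rule 1 (normalization) fixes A: A = 1 / integral of exp(-lam*(x-a)^2).
# The interval [a-20, a+20] is effectively (-inf, inf) for this lam.
lo, hi = a - 20.0, a + 20.0
A = 1.0 / integrate(lambda x: math.exp(-lam * (x - a) ** 2), lo, hi)

def rho(x):
    return A * math.exp(-lam * (x - a) ** 2)

mean = integrate(lambda x: x * rho(x), lo, hi)        # rule 2: <x>
mean2 = integrate(lambda x: x * x * rho(x), lo, hi)   # rule 3 with f(x) = x^2
sigma = math.sqrt(mean2 - mean ** 2)                  # rule 4: variance

print(A, mean, sigma)
```

Running this shows <x> landing at the center a of the distribution, and σ shrinking as λ grows (a narrower Gaussian), which is a useful check against whatever closed forms you derive in parts (a) and (b).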