Notes on Mathematics Groups
- Frank Joseph
EPGY Singapore Quantum Mechanics: 2007

Notes on Mathematics

Groups

A group G is defined as a set of elements together with a binary operation ∘ on G. One of the elements of G has particularly special properties and is known as the identity element, I, of the group. For all elements a, b, c of G we have:

(a ∘ b) ∘ c = a ∘ (b ∘ c)   [∘ is associative]
a ∘ I = a = I ∘ a           [The identity maps an element into itself]

Lastly, for any element a of G there is a unique element a⁻¹ of G such that

a ∘ a⁻¹ = I = a⁻¹ ∘ a       [Each element has an inverse].

The group is closed, meaning that the operation between any two elements returns an element of the group, i.e. no operation will return an element not in the group. This is the general definition of a mathematical group. If the binary operation is commutative, i.e. a ∘ b = b ∘ a, then the group is said to be Abelian (or simply commutative). The elements themselves have not been specified, but as long as the above properties hold, the set forms a mathematical group. Also, the number of elements may be finite or infinite; the latter are called continuous groups.

[Example 1]: A simple example is the four elements {1, i, −1, −i} under the binary operation of multiplication (∘ = ×). The identity element is clearly I = 1. Verifying the properties above with some examples,

(a ∘ b) ∘ c = a ∘ (b ∘ c):   (1 × i) × i = −1 = 1 × (i × i)
a ∘ I = a:                   i × 1 = i.

The inverses are [1⁻¹ = 1, i⁻¹ = −i, (−1)⁻¹ = −1, (−i)⁻¹ = i], which can be easily verified.

[Example 2]: Consider the set of elements consisting of rotations in the two-dimensional plane, R². Limiting ourselves to clockwise rotations by 0°, 90°, 180°, 270°, these are represented as the operators {R_0, R_90, R_180, R_270}. Note that these operators rotate the plane and change the arrangement of vectors; however, the group consists of the rotation operators and not the vectors themselves. It should be clear that the identity element I is R_0, since not rotating and then rotating by some angle is the same as just rotating by that angle.
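Before moving on, the group axioms of Example 1 can be checked by brute force. A minimal sketch in Python (all names are our own; Python's built-in complex type supplies the arithmetic):

```python
# Check that {1, i, -1, -i} forms a group under multiplication.
G = [1, 1j, -1, -1j]
I = 1  # identity element

# Closure: every product of two elements is again in G.
closed = all(a * b in G for a in G for b in G)

# Associativity and the identity property.
associative = all((a * b) * c == a * (b * c) for a in G for b in G for c in G)
identity_ok = all(a * I == a == I * a for a in G)

# Each element has an inverse in G.
inverses = {a: next(b for b in G if a * b == I) for a in G}

print(closed, associative, identity_ok)   # True True True
print(inverses[1j] == -1j)                # True: i's inverse is -i
```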
The binary operation is multiplication of operators. The first property of groups is one you may need to convince yourself of. Consider the following set of rotations, R_90, R_270, R_180:

(a ∘ b) ∘ c = a ∘ (b ∘ c)
(R_90 ∘ R_270) ∘ R_180 = R_90 ∘ (R_270 ∘ R_180)

The left side says to rotate by 180° and then by the combined rotation of 90° + 270° = 360°, which clearly equals a rotation by 180°. The right side of the equality says to rotate through the combination of 270° + 180° = 450° = 90° and then by 90°, which also equals a rotation by 180°. The inverses of each operator are:

(R_0)⁻¹ = R_0;  (R_90)⁻¹ = R_270;  (R_180)⁻¹ = R_180;  (R_270)⁻¹ = R_90.

The other properties clearly show that these elements form a group.

[Example 3]: Consider a group which consists of rotations by 90° in three-dimensional space. To simplify the discussion we'll restrict our attention to the rotation of a book. The operators involved will consist of rotations about axes centered on the three faces of the book. We will call these axes A, B, and C; they remain fixed to the book. The elements are A₊, A₋, B₊, B₋, C₊, C₋, and R_0, where + and − indicate rotations clockwise and counterclockwise by 90°. You should verify that this set of elements does indeed generate a group. Notice, however, that this group is non-Abelian. This can be checked with the following pair of rotations:

A₊ ∘ B₊  and  B₊ ∘ A₊

Take a book, rotate it about one axis and then about a different one. Compare the orientation of the book to when the two rotations are applied in the reverse order. You should find that the book is not in the same orientation. This indicates that the two compositions are not the same, and this group of rotations is non-Abelian. The more general result is that the group of rotations in three-dimensional space (the technical term for this group is SO(3)) is non-Abelian.
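The book experiment can also be done numerically. A minimal sketch (integer matrices for 90° rotations about two of the axes; the identification of Rx and Ry with the book's A₊ and B₊ is our own):

```python
# 90-degree rotations about two different axes in 3D do not commute.
# Integer rotation matrices avoid any floating-point issues.
def matmul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# 90-degree rotations about the x and y axes (playing the roles of A+ and B+).
Rx = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]
Ry = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]

print(matmul(Rx, Ry) == matmul(Ry, Rx))   # False: SO(3) is non-Abelian
```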
Isomorphism

Let's return to Examples 1 and 2 and note that these two groups are identical in every way if we make the identification 1 = R_0, i = R_90, −1 = R_180, and −i = R_270. The multiplication of elements is identical; thus these two groups are the same. The mathematical term for this equality is isomorphism. If you can exactly map one group onto another, with no missing elements or operations, then the two groups are isomorphic, even though the elements may seem very different in nature.

Fields

When dealing with mathematical objects the basic entities can come in different types. We will assume you are familiar with real numbers and complex numbers. These are two examples of fields; there are others. The mathematical structure of a field is different from that of a group and includes the following operations and designated elements: (F, +, ×, 0, 1). The elements a of F undergo the operations of addition and multiplication, and each has an additive inverse, −a, as well as a multiplicative inverse, a⁻¹ = 1/a (for a ≠ 0). In every field there are two special elements: 0 (the additive identity, or null) and 1 (the multiplicative identity). Again, the two primary fields that we will be concerned with are the real numbers, R, and the complex numbers, C. There are other fields that will not enter our realm but that others have explored. These include the quaternions, Q, (explored by Hamilton) and the octonions, O, among others.

Vectors and Vector Spaces

Vectors are mathematical objects that have particular properties. First, we will stick to the physical interpretation of vectors for simplicity. Later we will define vectors and vector spaces from a more formal mathematical point of view. Given a coordinate system in three-dimensional space, the displacement of a location in space from the origin can be represented by a vector. The vector can be viewed as an arrow in space which has a direction and a length (magnitude). To represent the vector quantitatively, 3 numbers giving the displacement in each perpendicular direction can be used.
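Returning for a moment to the isomorphism of Examples 1 and 2: the identification can be checked mechanically by mapping each complex unit to a 2 × 2 rotation matrix and verifying that multiplication is preserved. A minimal sketch (the matrix realization of the R operators is our own choice):

```python
# Verify the isomorphism between {1, i, -1, -i} and the 2D rotations
# {R_0, R_90, R_180, R_270}: phi(a * b) == phi(a) phi(b) for all pairs.
def mat2(A, B):
    """Multiply two 2x2 matrices given as tuples of tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

R0   = ((1, 0), (0, 1))
R90  = ((0, 1), (-1, 0))    # clockwise 90-degree rotation
R180 = ((-1, 0), (0, -1))
R270 = ((0, -1), (1, 0))

phi = {1: R0, 1j: R90, -1: R180, -1j: R270}   # the identification

iso = all(phi[a * b] == mat2(phi[a], phi[b]) for a in phi for b in phi)
print(iso)   # True: the two groups are isomorphic
```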
Thus, for an object at position x = 1 m, y = 3 m, z = 4 m, the vector can be expressed as a triplet of numbers, r = (1 m, 3 m, 4 m). As you should already be familiar with, a vector is indicated by placing an arrow over the variable representing it (often boldface letters are used instead, i.e. r). The displacement between two points in three-dimensional space is simply the difference in each of the three directions. One way to represent a vector is by listing the three components in a row or column, as such: (A_x, A_y, A_z). (Whether it is a row or a column won't concern us too much yet.) Another way to represent a vector is by using unit vectors. A unit vector is simply a vector of unit length pointing along one of the defined axes. In three-dimensional space there is a convention as to how they are labeled: the unit vector pointing along the x direction is labeled î, along y as ĵ, and along z as k̂. The hat is used to distinguish them from regular vectors. With these unit vectors it is simple to represent any vector: multiply each component of the vector by the corresponding unit vector, which simply scales that unit vector to the appropriate length. Thus we can represent a vector as follows,

A = A_x î + A_y ĵ + A_z k̂

This form will make understanding dot and cross products easier. It is important to realize that the components of a vector depend upon the coordinate system in use. If you change coordinate systems, the three components of the vector will change. However, the magnitude and the direction relative to other vectors remain constant. Hence, one vector can take on many different values for its entries in different coordinate systems. To refresh your memory: to add two vectors together you simply add each of the components separately. Likewise, to subtract two vectors you subtract each separate component. In this appendix and the next, we will show two different ways to multiply vectors together. Note, there is no defined way of dividing vectors.
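The component-wise rules just described can be sketched in a few lines (names are our own):

```python
# Component-wise vector arithmetic: a vector is a triple (Ax, Ay, Az).
def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def sub(v, w):
    return tuple(a - b for a, b in zip(v, w))

r = (1.0, 3.0, 4.0)            # x = 1 m, y = 3 m, z = 4 m
s = (2.0, -1.0, 0.5)

print(add(r, s))               # (3.0, 2.0, 4.5)
print(sub(r, s))               # (-1.0, 4.0, 3.5)

# Rebuilding r from unit vectors: r = 1*i_hat + 3*j_hat + 4*k_hat.
i_hat, j_hat, k_hat = (1, 0, 0), (0, 1, 0), (0, 0, 1)
rebuilt = add(add(tuple(1 * c for c in i_hat),
                  tuple(3 * c for c in j_hat)),
              tuple(4 * c for c in k_hat))
print(rebuilt == (1, 3, 4))    # True
```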
The dot product is an operation that maps two vectors into a scalar (a number). The cross product is an operation that maps two vectors into another vector.

Vector Spaces

As was mentioned above, representing a vector as an arrow has limited quantitative benefit, and when we come to discuss more sophisticated spaces it will not be easy to picture vectors as arrows (e.g. consider a vector in a 4-dimensional space). The vector viewed as an arrow is the more general, basis-independent, representation. In order to get quantitative information, a basis must be introduced. We will now go through a more formal definition of vectors and vector spaces. Some definitions and notation:

Fields: Real numbers, R,
Complex numbers, C (and the quaternions, Q, which we will not consider).

Scalars, k, are 1 × 1 arrays (simply numbers, which can be real or complex).

Vectors, a ket vector |v⟩ and a bra vector ⟨v|, are N-component lists of numbers (real or complex); we will explore the difference between kets and bras later (kets as N × 1 or bras as 1 × N arrays).

Matrices, M, are arrays of real or complex numbers, labeled as N × M (N rows and M columns).

Tensors: the previous three are all types of tensors, which can be considered as many-dimensional arrays, N × M × K × L × ... Tensors are important in general relativity. They will not be considered in the quantum course in anything but a superficial manner.

Now we introduce a linear vector space, labeled as V_n(F), as a set of vectors which may be added, subtracted, multiplied (to be defined), or multiplied by scalars in such a way that,

a) these operations yield another element of V_n(F) (formally, the vectors form a group under addition);

b) the vectors are n-dimensional and their components are from the field F;

c) they satisfy the following properties (introducing arbitrary vectors v, w, u from V_n(F)):

Addition:
Commutativity: v + w = w + v
Associativity: v + (w + u) = (v + w) + u
Identity: there exists a vector 0 such that v + 0 = v.
Inverse: for every v there exists a unique inverse, −v, such that v + (−v) = 0.

Multiplication by scalars (given scalars a and b):
Distributivity: a(v + w) = a v + a w and (a + b) v = a v + b v.
Associativity: a(b v) = (ab) v.
Also: 0 v = 0, a 0 = 0, and (−1) v = −v.

Multiplication by vectors: There are three types of products of vectors, only one of which will be of concern to us. They are,

Inner product, v · w. Returns a scalar. (You may be familiar with this by its three-dimensional name, the dot product; inner product is the general term applicable to any type of space.) It is commutative, v · w = w · v, and a(v · w) = (a v) · w = v · (a w).

Cross product, v × w. Returns a vector.

Outer product, v ⊗ w. Returns a tensor.
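Of the three products, the inner product is the one worth experimenting with. A minimal sketch over the reals and over the complex numbers (the complex case anticipates the conjugation rule defined in the next section; the example vectors are our own):

```python
# The inner product, over the reals and over the complex numbers.
# In the complex case the first vector's coefficients are conjugated,
# so that <v|v> is always real and positive.
def dot(v, w):
    """Real inner product: sum of products of like components."""
    return sum(a * b for a, b in zip(v, w))

def inner(v, w):
    """Complex inner product: conjugate the first vector's coefficients."""
    return sum(complex(a).conjugate() * b for a, b in zip(v, w))

v3 = (3.0, 4.0, 0.0)
print(dot(v3, v3))          # 25.0: |v| = 5, the 3-4-5 right triangle

u = [1 + 2j, 3 - 1j]
w = [2 - 1j, 1j]
print(inner(u, w) == inner(w, u).conjugate())   # True: <v|w> = <w|v>*
print(inner(u, u))                              # (15+0j): real and positive
```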
Details of vector operations

Adding and subtracting vectors: The graphical method for V_3(R) is to place the tail of one vector at the head of the other; the sum runs from the tail of the first to the head of the last. The better, more general method is simply to add the like components. Consider two vectors from a 4-dimensional real (or complex) linear vector space,

v = a_1 x̂_1 + a_2 x̂_2 + a_3 x̂_3 + a_4 x̂_4,   w = b_1 x̂_1 + b_2 x̂_2 + b_3 x̂_3 + b_4 x̂_4

v + w = (a_1 + b_1) x̂_1 + (a_2 + b_2) x̂_2 + (a_3 + b_3) x̂_3 + (a_4 + b_4) x̂_4

To subtract two vectors, simply change the vector being subtracted to its additive inverse and add the vectors,

v − w = v + (−w)
v − w = (a_1 − b_1) x̂_1 + (a_2 − b_2) x̂_2 + (a_3 − b_3) x̂_3 + (a_4 − b_4) x̂_4

Multiplication of vectors by scalars: Simply multiply each component by the scalar,

k v = (k a_1) x̂_1 + (k a_2) x̂_2 + (k a_3) x̂_3 + (k a_4) x̂_4

Multiplication of vectors. Inner product over the reals: Using the vectors v and w defined above, where the coefficients a_i and b_i are real (keeping three components for brevity), the inner product is defined as,

v · w = (a_1 b_1)(x̂_1 · x̂_1) + (a_2 b_2)(x̂_2 · x̂_2) + (a_3 b_3)(x̂_3 · x̂_3) = a_1 b_1 + a_2 b_2 + a_3 b_3

The inner product is defined by the inner product of the unit vectors,

x̂_i · x̂_i = 1
x̂_i · x̂_j = 0 if i ≠ j

One of the important properties of the inner product is its relation to the measurement of distance in a space. In fact, the inner product defines how distances are measured. In ordinary three-dimensional space the inner product (the dot product in this case) of a vector with itself is the square of the length of the vector:

v · v = a_1² + a_2² + a_3² = |v|²

The Pythagorean theorem is seen to be just an example of the inner product (in two dimensions). This property of the inner product remains no matter the dimension or field of the vector space: the inner product of a vector with itself is always a real, positive number (scalar).

Inner product over the complex field: Of prime importance for quantum mechanics is the inner product within complex vector spaces. To make the process easier, P. A.
M. Dirac introduced a new notation. But before we introduce that, let's see how the inner product is defined for vectors with complex coefficients (the unit vectors always being real). Once again, consider our two vectors v and w, now with complex coefficients a_i and b_i. The inner product is defined as,

v · w = (a_1* b_1) + (a_2* b_2) + (a_3* b_3)    (1)

We see that it is not symmetric (i.e. v · w = (w · v)*). The reason for this structure is that the inner product of a complex vector with itself must be a real positive number (the length squared). Since a complex number times itself is in general complex, it cannot define the length. The modulus squared of a complex number, z* z ≡ |z|², is a real positive number. Now it is time to introduce the notation of Dirac, the bra-ket vector formalism. Since the two vectors in the inner product are treated differently, we denote them in a different manner. Ordinary vectors (the ones not complex conjugated
in the inner product) are called ket vectors and are denoted as |v⟩. For every ket vector there is a unique counterpart called a bra vector, denoted as ⟨v|. The relation between a ket vector and its bra vector is simple:

Ket vector: |v⟩ = a_1 |1⟩ + a_2 |2⟩ + a_3 |3⟩
Bra vector: ⟨v| = a_1* ⟨1| + a_2* ⟨2| + a_3* ⟨3|

In the expressions above the |1⟩, |2⟩, |3⟩ vectors are unit vectors (real and of length 1); only the coefficients are complex. [Technically, the ket vectors belong to a vector space V_n(C) and the bra vectors belong to a dual space, Ṽ_n(C). The mapping between the two spaces is one to one and onto.] The convenience of this notation is that the inner product of two vectors is now written in a particular order,

inner product: ⟨v|w⟩ = a_1* b_1 + a_2* b_2 + a_3* b_3

And again, we see that ⟨v|w⟩ = (⟨w|v⟩)*, as well as ⟨v|v⟩ = a_1* a_1 + a_2* a_2 + a_3* a_3, the square of the length of the vector.

Cross product: We will not be concerned with cross products, but they can be easily understood from the cross products of the unit vectors in three dimensions. (Again, we will not be concerned with cross products in dimensions other than three, where the cross product is not always defined.)

x̂_i × x̂_i = 0
x̂_1 × x̂_2 = x̂_3
x̂_1 × x̂_3 = −x̂_2
x̂_2 × x̂_3 = x̂_1
x̂_i × x̂_j = −x̂_j × x̂_i

Probability

In this section we will give a brief review of probability theory. This is not meant to supplant a proper treatment of the subject but to provide a background for what will be discussed later. The ideas will be developed primarily via examples that you are, no doubt, familiar with.

Finite Outcome Probability

The first fact about probabilities will hopefully be obvious and will be stated without proof: the probability for any particular event to occur is always less than or equal to 100% (which is equivalent to saying that the probability is less than or equal to 1). It does not make sense to say that something has a 110% chance of occurring; 100% is the highest, meaning that something will definitely occur. Also, probabilities are always greater than or equal to zero.
Again, does it make sense for the probability of an event to occur to be −10%? Of course not. A probability of zero says that the event will never occur; you can't get any lower than that. Now let's go on to see how to find probabilities for some simple scenarios. The simplest example using probability is the tossing of a coin. The outcome (heads (H) or tails (T)) is taken as completely random, with either occurrence being equally likely. We say there is a 50% chance of heads arising and a 50% chance of tails arising. Put another way, the probability is 1/2 that the result is H and 1/2 that it is T. The first important point to note is that the probabilities for all possible outcomes must add up to 1. Here, we have 1/2 + 1/2 = 1, assuming that the only two outcomes are H or T. Consider one die: if you toss it, only one of six numbers will come up. Since we assume that each number is equally likely to come up, the probability for a particular number to show is 1/6. (Of course the probability for H and T to come up is not really 50/50. Due to the printing on each side of a US penny the probabilities differ slightly (Lincoln's head is more massive than the monument on the back).) The rule for assigning probabilities to a finite number of events occurring with equal probability is simply P(a) = 1/N, where N is the total number of possible outcomes. The total probability for any outcome to occur is 1, since P = 1/N + 1/N + ... + 1/N = N/N = 1. If the number of possible ways for one outcome to occur is greater than one, then we divide the number of possible ways for the outcome to occur by the total number of possible outcomes. For example, consider tossing a pair of dice. The probability for the sum to be 3 is not the same as for it to be 7. Tabulating the possible outcomes, we can find the probability for any sum to appear. From this table we see that 7 is the sum which is most likely to occur. The total number of possible outcomes is 36 (found by summing the number of outcomes).
Thus the rule to find the probability for an event to occur when there is a finite number of possible outcomes is,

P_i = (number of states with outcome i) / (total number of possible outcomes)    (2)
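The counting rule (2), and the dice table that follows, can be checked by enumerating all 36 equally likely outcomes. A minimal sketch (exact fractions avoid any rounding):

```python
# Enumerate all 36 outcomes of two fair dice and tabulate the sums.
from fractions import Fraction
from collections import Counter
from itertools import product

rolls = list(product(range(1, 7), repeat=2))
counts = Counter(d1 + d2 for d1, d2 in rolls)
P = {s: Fraction(n, 36) for s, n in counts.items()}

print(P[7])                       # 1/6: six of the 36 outcomes sum to 7
print(sum(P.values()) == 1)       # True: the probabilities sum to one

# Preview of the "or" rule below: P(4 or 5) = P(4) + P(5).
print(P[4] + P[5])                # 7/36
```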
Sum   Possible outcomes (die 1 - die 2)    # of outcomes   Probability
1     none                                 0               0
2     1-1                                  1               1/36
3     1-2, 2-1                             2               2/36
4     1-3, 3-1, 2-2                        3               3/36
5     1-4, 4-1, 2-3, 3-2                   4               4/36
6     1-5, 5-1, 2-4, 4-2, 3-3              5               5/36
7     1-6, 6-1, 2-5, 5-2, 3-4, 4-3         6               6/36
8     2-6, 6-2, 3-5, 5-3, 4-4              5               5/36
9     3-6, 6-3, 4-5, 5-4                   4               4/36
10    4-6, 6-4, 5-5                        3               3/36
11    5-6, 6-5                             2               2/36
12    6-6                                  1               1/36
13    none                                 0               0

Table 1: Probabilities for rolling two dice.

And we also have the rule that the sum of the probabilities for all outcomes must equal one (something does occur!):

Σ_{i=1}^{N} P_i = 1    (3)

Note that this is for the case when each outcome is equally likely (every number on each die is equally likely to arise). A more formal way to include the possibility that each outcome has varying probability is to begin with the probabilities themselves. The probability for one die to show one particular number is 1/6. The probability for two dice to show one particular pair is P_1(i) P_2(j) = (1/6)(1/6) = 1/36. (This can easily be verified by examining the table above.) Then the probability for one particular outcome, say the probability to roll a 5, is given by summing the probabilities of the ways of obtaining a 5:

Prob: 1/36 + 1/36 + 1/36 + 1/36 = 4/36 = 1/9
[die 1, die 2]: [2,3], [3,2], [4,1], [1,4].

Two rules for combining events:

When a statement says "or", the probabilities are to be added. What is the probability to roll a 4 or a 5 with two dice? The answer is P(4) + P(5) = 1/12 + 1/9 = 7/36.

When a statement says "and", the probabilities are to be multiplied. What is the probability to roll a 4 with a single die and at the same time flip a coin and get heads? The answer is the product of the two individual probabilities. (You can verify this by creating a table like we did for the two dice.)

P(4) × P(H) = (1/6)(1/2) = 1/12

The addition rule applies when the two events are mutually exclusive; the multiplication rule applies when the two events are statistically independent, meaning the outcome of the die does not depend upon the outcome of the coin. We will deal shortly with scenarios where the events are neither mutually exclusive nor statistically independent.

EXAMPLE: One more example to demonstrate this.
Say a shifty gambler loads a pair of dice by shaving parts of them to skew their weight distribution. In so doing, the probability of a 1 or a 6 is increased to (1.2)/6 on each die. What is the probability that the pair will come up 7 with these dice? First we must find the probability for the other faces to come up (assumed to be equal). Since the total probability for a roll is 1 we have,

(1.2)/6 + p/6 + p/6 + p/6 + p/6 + (1.2)/6 = 1
  [1]     [2]   [3]   [4]   [5]    [6]

where p is the probability factor for a 2 through 5 to come up. Simplifying, we have 2.4 + 4p = 6. Solving, we have p = 0.9. So in order to get a 7 we have,

1.44/36 + 1.44/36 + 0.81/36 + 0.81/36 + 0.81/36 + 0.81/36 = 6.12/36
 [1,6]  +  [6,1]  +  [2,5]  +  [5,2]  +  [3,4]  +  [4,3]

which is now 17.0% as opposed to 16.7% with fair dice.
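Recomputing the loaded-dice example directly is a good check. A minimal sketch (note that with p = 0.9 the pairs summing to 7 are [1,6], [6,1], [2,5], [5,2], [3,4], [4,3], and the exact answer works out to 6.12/36 = 17%, slightly above the fair-dice 6/36 ≈ 16.7%):

```python
# Loaded dice: faces 1 and 6 have probability 1.2/6, faces 2-5 have 0.9/6.
from fractions import Fraction

face = {f: Fraction(12, 10) / 6 if f in (1, 6) else Fraction(9, 10) / 6
        for f in range(1, 7)}
assert sum(face.values()) == 1          # the face probabilities normalize

p7 = sum(face[d1] * face[d2]
         for d1 in range(1, 7) for d2 in range(1, 7) if d1 + d2 == 7)
print(p7)                                # 17/100, i.e. 6.12/36
print(float(p7))                         # 0.17
```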
Axioms of Probability and some theorems and definitions

A probability system (or probability space) consists of three items, Ω = (S, E, P):

S: A basic space S of elementary outcomes (elements) ξ.
E: A class E of events. (Each event is a subset of S.)
P: A probability measure P(·) defined for each event A in the class E and having the following properties. For events A and B in E we have:

(P1) P(S) = 1. (The probability that some outcome of the whole set occurs is 1.)
(P2) P(A) ≥ 0. (The probability of any one event is nonnegative.)
(P3) If A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).
(P4) If A_1, A_2, ... is a sequence of pairwise incompatible events in E, i.e. A_i ∩ A_j = ∅ for i ≠ j, then

P(∪_{i=1}^{n} A_i) = Σ_{i=1}^{n} P(A_i)

(The probability of the union of a set of mutually exclusive events is the sum of the probabilities of the individual events.)

Some other theorems:

i. P(∅) = 0.
ii. For every event A, P(A) + P(Ā) = 1.
iii. P(A ∪ B) = P(A) + P(B) − P(A ∩ B). (If A and B are not mutually exclusive.)

A definition: if E is an event with positive probability, the conditional probability of the event A, given E, written P(A|E), is defined by the relation

P(A|E) = P(A ∩ E) / P(E)

Note that if events A and B are statistically independent we have P(B|A) = P(B), and then P(A ∩ B) = P(A)P(B).

EXAMPLE: Let's go back to the case of a fair pair of dice and see if we can tell whether they are statistically dependent. Let's find the probability that die one comes up as a 1 and die two comes up as a 5; that is, we want to find P(X_1 = 1 ∩ X_2 = 5), where X_i is the random variable representing the value of die i. Employing our formula we have,

P(X_1 = 1 | X_2 = 5) = P(X_1 = 1 ∩ X_2 = 5) / P(X_2 = 5) = (1/36) / (1/6) = 1/6 = P(X_1 = 1),

so that

P(X_1 = 1 ∩ X_2 = 5) = P(X_1 = 1) P(X_2 = 5) = 1/36.    (4)

The dice are statistically independent, as we expect.

Joint probability

To make the above rules a little more formal, we define a random variable X which can take on values within a set σ_X.
This random variable can be seen as the limit of the small, finite collections of events described before (the students in the school example) as the number of events grows. Define another random variable Y which takes on values in a set σ_Y. We define the joint probability distribution P(x, y) = P(X = x & Y = y) = P(X = x ∩ Y = y) as the probability for X to equal x and Y to equal y. The marginal probability distribution for X, P_1(x), is defined as the probability for X = x regardless of what Y is,

P_1(x) = Σ_y P(x, y)

Likewise, P_2(y) = Σ_x P(x, y).
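The conditional-probability formula, the independence check of equation (4), and the marginal sums can all be verified on the two-dice example. A minimal sketch (the event functions and names are our own):

```python
# Conditional probability, independence, and marginals for two fair dice.
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))

def P(event):
    """Probability of `event` over the 36 equally likely rolls."""
    return Fraction(sum(1 for r in rolls if event(r)), len(rolls))

A = lambda r: r[0] == 1                  # X1 = 1
E = lambda r: r[1] == 5                  # X2 = 5

# P(A|E) = P(A and E) / P(E) equals P(A): the dice are independent.
P_A_given_E = P(lambda r: A(r) and E(r)) / P(E)
print(P_A_given_E == P(A))               # True
print(P(lambda r: A(r) and E(r)))        # 1/36 = P(A) P(E)

# Marginal of the joint distribution P(x, y): sum over y.
joint = {(x, y): P(lambda r, x=x, y=y: r == (x, y)) for x, y in rolls}
marginal_x1 = sum(p for (x, y), p in joint.items() if x == 1)
print(marginal_x1)                       # 1/6
```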
Probability Density Distribution

The above discussion was for situations with a finite number of possible outcomes. What if we deal with a situation in which the total number of possible outcomes is infinite? For example, say you throw a point particle down onto a line; what is the probability that the point lands at position x? If you follow the same procedure as above, you would find that the probability for it to land at any one point is zero. How can that be? Summing up all of the possibilities would give zero, not one, and yet we know it ends up at some particular point. Consider a 12-sided die. The probability for any one side to come up is now 1/12, as opposed to 1/6 for a six-sided die. We know that the probability for any one side to come up is 1/N, where N is the number of sides. What if we consider a 1000-sided die? One with even more sides? The probability for any one side to come up decreases. A sphere can be considered as a die with infinitely many sides. Then we have that the probability for the point x on the sphere to be at the top (to "come up") is,

P(x) = lim_{N→∞} 1/N = 0

To make sense of this we introduce a new quantity, the probability density distribution. Since the probability for any one point on the sphere to come up is zero, we ask instead what the probability is for a range of results near x to come up. To simplify the discussion, let's return to the case of throwing a point onto a line. The probability for the point to land in a region near x, (x to x + Δx), is now finite; it is simply the length of the region, Δx, divided by the total possible length L,

P(x, x + Δx) = Δx / L.

We define the probability density distribution to be the quantity which, when multiplied by the length of the region, gives the probability for those events to occur. We see that if we take our region to zero (find the probability to land at a point) we get zero, just as before. In the case of the spherical die, the probability density is simply one over the area of the sphere.
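The limiting argument can be made concrete. A minimal sketch (the line length L and step Δx are arbitrary choices of ours):

```python
# For an N-sided die the probability of any one face is 1/N, which
# shrinks toward zero as N grows.
for N in (6, 12, 1000, 10**6):
    print(N, 1 / N)

# Throwing a point onto a line of length L: the probability of landing
# in [x, x + dx] is dx / L, and summing over the whole line gives 1.
L = 10.0
dx = 0.01
total = sum(dx / L for _ in range(int(L / dx)))
print(abs(total - 1.0) < 1e-9)   # True: the density 1/L normalizes
```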
In both of these cases the probability distribution is uniform throughout the domain (each point is equally likely to come up): p(x) = constant. (We will label probabilities with a capital P and probability densities with a lowercase p.) In general, however, the probability density distribution can vary from point to point. The rule governing the proper definition of the probability density distribution is that summing over all possible outcomes should give 1:

P = 1 = Σ_{x over all domain} p(x) Δx

For the case of the sphere we simply have,

P = Σ_{ΔA over sphere} (1/(4πR²)) ΔA = (1/(4πR²)) Σ_{ΔA over sphere} ΔA = 4πR²/(4πR²) = 1

[Aside: Of course, to do this properly we need to use calculus. Then the above rule would become,

∫_{over entire region} p(x) dx = 1

and the rule to find the probability within some region would be,

P(x, x + L) = ∫_x^{x+L} p(x′) dx′

Note that in general, probability density distributions do not need to be less than 1, though they are nonnegative.]

Expectation value

Another useful quantity to define is the expectation value of a random variable X. This is the mean value of the variable; for a symmetric distribution like the dice sum below, it is also the outcome most likely to arise. We indicate two notations often used; the first is usually employed by physicists, the second by mathematicians:

⟨X⟩ = E(X) = Σ_{all x} x P(X = x)

For the case of a pair of dice we can find the expectation value of the sum of two fair dice:

2(1/36) + 3(1/18) + 4(1/12) + 5(1/9) + 6(5/36) + 7(1/6) + 8(5/36) + 9(1/9) + 10(1/12) + 11(1/18) + 12(1/36) = 252/36 = 7.

Thus 7 is the expected (and, here, also the most likely) outcome. For continuous variable systems the expectation value is defined as,

⟨x⟩ = E(x) = ∫_{domain D} x p(x) dx
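Both the discrete and the continuous expectation values can be checked numerically. A minimal sketch (the uniform density on [0, L] is our own choice of continuous example):

```python
# Discrete case: E(X) = sum over x of x P(X = x) for the sum of two dice.
from fractions import Fraction
from collections import Counter

counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
E = sum(s * Fraction(n, 36) for s, n in counts.items())
print(E)          # 7

# Continuous case: for the uniform density p(x) = 1/L on [0, L],
# a Riemann sum approximates E(x) = integral of x p(x) dx = L/2.
L, N = 4.0, 100000
dx = L / N
Ex = sum((i * dx) * (1.0 / L) * dx for i in range(N))
print(abs(Ex - L / 2) < 1e-3)   # True
```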
Variance, Covariance, and Correlation

To complete the discussion we introduce three more quantities. The first is the variance,

var(X) = E((X − E(X))²) = E(X²) − E²(X).

(The square root of the variance is the standard deviation.) The variance is a measure of how far about the mean the distribution is spread. If the variance is zero, then the distribution sits entirely at the mean; in terms of statistics, the data all lie at the mean value. The larger the variance, the more spread out the distribution is about the mean value.

The covariance of a pair of random variables is defined as the expectation of the product of the deviations from each mean,

cov(X, Y) = E[(X − E(X))(Y − E(Y))].

The covariance is a measure of how two random variables are related. If the covariance is zero then the two random variables are not (linearly) related. The larger the covariance, the more related the two random variables are. In addition, the covariance can be negative, which indicates that the two random variables are inversely related. An easier way to understand the covariance is to examine the normalized covariance, which limits the range from −1 to +1. This is the correlation between the two random variables. The correlation coefficient, ρ(X, Y), is defined as,

ρ(X, Y) = cov(X, Y) / √(var(X) var(Y))

The correlation's value is limited to the range between −1 and +1. If one random variable is completely determined by another then the correlation is +1. If one random variable is determined to be exactly the opposite of another (such that if one variable is large the other is small) then they are completely anticorrelated and ρ = −1. And lastly, if the two random variables are independent then the correlation is zero.
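These quantities are easy to compute for the dice. A minimal sketch (taking X to be die one and S the sum of both dice, so that the two are positively correlated; the names are our own):

```python
# Variance, covariance, and correlation for the two-dice example.
import math
from itertools import product

rolls = list(product(range(1, 7), repeat=2))

def E(f):
    """Expectation of f over the 36 equally likely rolls."""
    return sum(f(r) for r in rolls) / len(rolls)

X = lambda r: r[0]             # value of die one
S = lambda r: r[0] + r[1]      # sum of both dice

var_X = E(lambda r: X(r) ** 2) - E(X) ** 2
var_S = E(lambda r: S(r) ** 2) - E(S) ** 2
cov_XS = E(lambda r: (X(r) - E(X)) * (S(r) - E(S)))
rho = cov_XS / math.sqrt(var_X * var_S)

print(round(var_X, 4))   # 2.9167  (= 35/12)
print(round(rho, 4))     # 0.7071  (= 1/sqrt(2))
```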
More informationProbability Theory Review
Cogsci 118A: Natural Computation I Lecture 2 (01/07/10) Lecturer: Angela Yu Probability Theory Review Scribe: Joseph Schilz Lecture Summary 1. Set theory: terms and operators In this section, we provide
More informationStatistical Theory 1
Statistical Theory 1 Set Theory and Probability Paolo Bautista September 12, 2017 Set Theory We start by defining terms in Set Theory which will be used in the following sections. Definition 1 A set is
More informationIf the objects are replaced there are n choices each time yielding n r ways. n C r and in the textbook by g(n, r).
Caveat: Not proof read. Corrections appreciated. Combinatorics In the following, n, n 1, r, etc. will denote non-negative integers. Rule 1 The number of ways of ordering n distinguishable objects (also
More informationSUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)
SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems
More informationDiscrete Probability Refresher
ECE 1502 Information Theory Discrete Probability Refresher F. R. Kschischang Dept. of Electrical and Computer Engineering University of Toronto January 13, 1999 revised January 11, 2006 Probability theory
More informationSociology 6Z03 Topic 10: Probability (Part I)
Sociology 6Z03 Topic 10: Probability (Part I) John Fox McMaster University Fall 2014 John Fox (McMaster University) Soc 6Z03: Probability I Fall 2014 1 / 29 Outline: Probability (Part I) Introduction Probability
More informationChapter 14. From Randomness to Probability. Copyright 2012, 2008, 2005 Pearson Education, Inc.
Chapter 14 From Randomness to Probability Copyright 2012, 2008, 2005 Pearson Education, Inc. Dealing with Random Phenomena A random phenomenon is a situation in which we know what outcomes could happen,
More informationProbability theory basics
Probability theory basics Michael Franke Basics of probability theory: axiomatic definition, interpretation, joint distributions, marginalization, conditional probability & Bayes rule. Random variables:
More informationVectors a vector is a quantity that has both a magnitude (size) and a direction
Vectors In physics, a vector is a quantity that has both a magnitude (size) and a direction. Familiar examples of vectors include velocity, force, and electric field. For any applications beyond one dimension,
More informationAppendix A : Introduction to Probability and stochastic processes
A-1 Mathematical methods in communication July 5th, 2009 Appendix A : Introduction to Probability and stochastic processes Lecturer: Haim Permuter Scribe: Shai Shapira and Uri Livnat The probability of
More informationIn this initial chapter, you will be introduced to, or more than likely be reminded of, a
1 Sets In this initial chapter, you will be introduced to, or more than likely be reminded of, a fundamental idea that occurs throughout mathematics: sets. Indeed, a set is an object from which every mathematical
More informationP [(E and F )] P [F ]
CONDITIONAL PROBABILITY AND INDEPENDENCE WORKSHEET MTH 1210 This worksheet supplements our textbook material on the concepts of conditional probability and independence. The exercises at the end of each
More informationTopic 3 Random variables, expectation, and variance, II
CSE 103: Probability and statistics Fall 2010 Topic 3 Random variables, expectation, and variance, II 3.1 Linearity of expectation If you double each value of X, then you also double its average; that
More informationProbability COMP 245 STATISTICS. Dr N A Heard. 1 Sample Spaces and Events Sample Spaces Events Combinations of Events...
Probability COMP 245 STATISTICS Dr N A Heard Contents Sample Spaces and Events. Sample Spaces........................................2 Events........................................... 2.3 Combinations
More informationEcon 325: Introduction to Empirical Economics
Econ 325: Introduction to Empirical Economics Lecture 2 Probability Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall Ch. 3-1 3.1 Definition Random Experiment a process leading to an uncertain
More information(arrows denote positive direction)
12 Chapter 12 12.1 3-dimensional Coordinate System The 3-dimensional coordinate system we use are coordinates on R 3. The coordinate is presented as a triple of numbers: (a,b,c). In the Cartesian coordinate
More informationPage 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03
Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner
More informationFundamentals of Probability CE 311S
Fundamentals of Probability CE 311S OUTLINE Review Elementary set theory Probability fundamentals: outcomes, sample spaces, events Outline ELEMENTARY SET THEORY Basic probability concepts can be cast in
More informationAxioms of Probability
Sample Space (denoted by S) The set of all possible outcomes of a random experiment is called the Sample Space of the experiment, and is denoted by S. Example 1.10 If the experiment consists of tossing
More informationLecture 1. ABC of Probability
Math 408 - Mathematical Statistics Lecture 1. ABC of Probability January 16, 2013 Konstantin Zuev (USC) Math 408, Lecture 1 January 16, 2013 1 / 9 Agenda Sample Spaces Realizations, Events Axioms of Probability
More informationSTA Module 4 Probability Concepts. Rev.F08 1
STA 2023 Module 4 Probability Concepts Rev.F08 1 Learning Objectives Upon completing this module, you should be able to: 1. Compute probabilities for experiments having equally likely outcomes. 2. Interpret
More informationCourse: ESO-209 Home Work: 1 Instructor: Debasis Kundu
Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear
More informationProbability and random variables
Probability and random variables Events A simple event is the outcome of an experiment. For example, the experiment of tossing a coin twice has four possible outcomes: HH, HT, TH, TT. A compound event
More information1 Dirac Notation for Vector Spaces
Theoretical Physics Notes 2: Dirac Notation This installment of the notes covers Dirac notation, which proves to be very useful in many ways. For example, it gives a convenient way of expressing amplitudes
More informationLecture 2: Review of Probability
Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................
More informationStochastic Histories. Chapter Introduction
Chapter 8 Stochastic Histories 8.1 Introduction Despite the fact that classical mechanics employs deterministic dynamical laws, random dynamical processes often arise in classical physics, as well as in
More informationLecture 4: Probability and Discrete Random Variables
Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 4: Probability and Discrete Random Variables Wednesday, January 21, 2009 Lecturer: Atri Rudra Scribe: Anonymous 1
More informationChapter 2. Linear Algebra. rather simple and learning them will eventually allow us to explain the strange results of
Chapter 2 Linear Algebra In this chapter, we study the formal structure that provides the background for quantum mechanics. The basic ideas of the mathematical machinery, linear algebra, are rather simple
More informationLecture 10: Probability distributions TUESDAY, FEBRUARY 19, 2019
Lecture 10: Probability distributions DANIEL WELLER TUESDAY, FEBRUARY 19, 2019 Agenda What is probability? (again) Describing probabilities (distributions) Understanding probabilities (expectation) Partial
More informationRandom variables (discrete)
Random variables (discrete) Saad Mneimneh 1 Introducing random variables A random variable is a mapping from the sample space to the real line. We usually denote the random variable by X, and a value that
More informationP (A) = P (B) = P (C) = P (D) =
STAT 145 CHAPTER 12 - PROBABILITY - STUDENT VERSION The probability of a random event, is the proportion of times the event will occur in a large number of repititions. For example, when flipping a coin,
More informationIntermediate Math Circles November 8, 2017 Probability II
Intersection of Events and Independence Consider two groups of pairs of events Intermediate Math Circles November 8, 017 Probability II Group 1 (Dependent Events) A = {a sales associate has training} B
More informationProbability Theory and Applications
Probability Theory and Applications Videos of the topics covered in this manual are available at the following links: Lesson 4 Probability I http://faculty.citadel.edu/silver/ba205/online course/lesson
More informationMath-Stat-491-Fall2014-Notes-I
Math-Stat-491-Fall2014-Notes-I Hariharan Narayanan October 2, 2014 1 Introduction This writeup is intended to supplement material in the prescribed texts: Introduction to Probability Models, 10th Edition,
More informationMultivariate probability distributions and linear regression
Multivariate probability distributions and linear regression Patrik Hoyer 1 Contents: Random variable, probability distribution Joint distribution Marginal distribution Conditional distribution Independence,
More informationProbability Dr. Manjula Gunarathna 1
Probability Dr. Manjula Gunarathna Probability Dr. Manjula Gunarathna 1 Introduction Probability theory was originated from gambling theory Probability Dr. Manjula Gunarathna 2 History of Probability Galileo
More information1 Presessional Probability
1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional
More informationNotes 1 Autumn Sample space, events. S is the number of elements in the set S.)
MAS 108 Probability I Notes 1 Autumn 2005 Sample space, events The general setting is: We perform an experiment which can have a number of different outcomes. The sample space is the set of all possible
More informationExecutive Assessment. Executive Assessment Math Review. Section 1.0, Arithmetic, includes the following topics:
Executive Assessment Math Review Although the following provides a review of some of the mathematical concepts of arithmetic and algebra, it is not intended to be a textbook. You should use this chapter
More informationIntroduction to Probability. Ariel Yadin. Lecture 1. We begin with an example [this is known as Bertrand s paradox]. *** Nov.
Introduction to Probability Ariel Yadin Lecture 1 1. Example: Bertrand s Paradox We begin with an example [this is known as Bertrand s paradox]. *** Nov. 1 *** Question 1.1. Consider a circle of radius
More informationMA 575 Linear Models: Cedric E. Ginestet, Boston University Revision: Probability and Linear Algebra Week 1, Lecture 2
MA 575 Linear Models: Cedric E Ginestet, Boston University Revision: Probability and Linear Algebra Week 1, Lecture 2 1 Revision: Probability Theory 11 Random Variables A real-valued random variable is
More informationBivariate distributions
Bivariate distributions 3 th October 017 lecture based on Hogg Tanis Zimmerman: Probability and Statistical Inference (9th ed.) Bivariate Distributions of the Discrete Type The Correlation Coefficient
More informationn N CHAPTER 1 Atoms Thermodynamics Molecules Statistical Thermodynamics (S.T.)
CHAPTER 1 Atoms Thermodynamics Molecules Statistical Thermodynamics (S.T.) S.T. is the key to understanding driving forces. e.g., determines if a process proceeds spontaneously. Let s start with entropy
More informationReview of probability. Nuno Vasconcelos UCSD
Review of probability Nuno Vasconcelos UCSD robability probability is the language to deal with processes that are non-deterministic examples: if I flip a coin 00 times how many can I expect to see heads?
More informationLecture 16 : Independence, Covariance and Correlation of Discrete Random Variables
Lecture 6 : Independence, Covariance and Correlation of Discrete Random Variables 0/ 3 Definition Two discrete random variables X and Y defined on the same sample space are said to be independent if for
More informationDiscrete Mathematics and Probability Theory Fall 2014 Anant Sahai Note 15. Random Variables: Distributions, Independence, and Expectations
EECS 70 Discrete Mathematics and Probability Theory Fall 204 Anant Sahai Note 5 Random Variables: Distributions, Independence, and Expectations In the last note, we saw how useful it is to have a way of
More informationChapter 2. Some Basic Probability Concepts. 2.1 Experiments, Outcomes and Random Variables
Chapter 2 Some Basic Probability Concepts 2.1 Experiments, Outcomes and Random Variables A random variable is a variable whose value is unknown until it is observed. The value of a random variable results
More informationStochastic Quantum Dynamics I. Born Rule
Stochastic Quantum Dynamics I. Born Rule Robert B. Griffiths Version of 25 January 2010 Contents 1 Introduction 1 2 Born Rule 1 2.1 Statement of the Born Rule................................ 1 2.2 Incompatible
More informationthe time it takes until a radioactive substance undergoes a decay
1 Probabilities 1.1 Experiments with randomness Wewillusethetermexperimentinaverygeneralwaytorefertosomeprocess that produces a random outcome. Examples: (Ask class for some first) Here are some discrete
More informationVectors Part 1: Two Dimensions
Vectors Part 1: Two Dimensions Last modified: 20/02/2018 Links Scalars Vectors Definition Notation Polar Form Compass Directions Basic Vector Maths Multiply a Vector by a Scalar Unit Vectors Example Vectors
More informationStatistics for Managers Using Microsoft Excel/SPSS Chapter 4 Basic Probability And Discrete Probability Distributions
Statistics for Managers Using Microsoft Excel/SPSS Chapter 4 Basic Probability And Discrete Probability Distributions 1999 Prentice-Hall, Inc. Chap. 4-1 Chapter Topics Basic Probability Concepts: Sample
More informationProbability & Random Variables
& Random Variables Probability Probability theory is the branch of math that deals with random events, processes, and variables What does randomness mean to you? How would you define probability in your
More information(But, they are entirely separate branches of mathematics.)
2 You ve heard of statistics to deal with problems of uncertainty and differential equations to describe the rates of change of physical systems. In this section, you will learn about two more: vector
More informationEXPECTED VALUE of a RV. corresponds to the average value one would get for the RV when repeating the experiment, =0.
EXPECTED VALUE of a RV corresponds to the average value one would get for the RV when repeating the experiment, independently, infinitely many times. Sample (RIS) of n values of X (e.g. More accurately,
More informationDO NOT OPEN THIS TEST BOOKLET UNTIL YOU ARE ASKED TO DO SO
DO NOT OPEN THIS TEST BOOKLET UNTIL YOU ARE ASKED TO DO SO T.B.C. : P-AQNA-L-ZNGU Serial No.- TEST BOOKLET MATHEMATICS Test Booklet Series Time Allowed : Two Hours and Thirty Minutes Maximum Marks : 00
More informationChapter 2. Matrix Arithmetic. Chapter 2
Matrix Arithmetic Matrix Addition and Subtraction Addition and subtraction act element-wise on matrices. In order for the addition/subtraction (A B) to be possible, the two matrices A and B must have the
More informationRVs and their probability distributions
RVs and their probability distributions RVs and their probability distributions In these notes, I will use the following notation: The probability distribution (function) on a sample space will be denoted
More informationMATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation)
MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) Last modified: March 7, 2009 Reference: PRP, Sections 3.6 and 3.7. 1. Tail-Sum Theorem
More informationProbability Theory. Introduction to Probability Theory. Principles of Counting Examples. Principles of Counting. Probability spaces.
Probability Theory To start out the course, we need to know something about statistics and probability Introduction to Probability Theory L645 Advanced NLP Autumn 2009 This is only an introduction; for
More informationPart IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015
Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.
More informationSection 13.3 Probability
288 Section 13.3 Probability Probability is a measure of how likely an event will occur. When the weather forecaster says that there will be a 50% chance of rain this afternoon, the probability that it
More informationTopic -2. Probability. Larson & Farber, Elementary Statistics: Picturing the World, 3e 1
Topic -2 Probability Larson & Farber, Elementary Statistics: Picturing the World, 3e 1 Probability Experiments Experiment : An experiment is an act that can be repeated under given condition. Rolling a
More informationPhysics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I
Physics 342 Lecture 2 Linear Algebra I Lecture 2 Physics 342 Quantum Mechanics I Wednesday, January 27th, 21 From separation of variables, we move to linear algebra Roughly speaking, this is the study
More informationSTAT2201. Analysis of Engineering & Scientific Data. Unit 3
STAT2201 Analysis of Engineering & Scientific Data Unit 3 Slava Vaisman The University of Queensland School of Mathematics and Physics What we learned in Unit 2 (1) We defined a sample space of a random
More informationGEOMETRY OF MATRICES x 1
GEOMETRY OF MATRICES. SPACES OF VECTORS.. Definition of R n. The space R n consists of all column vectors with n components. The components are real numbers... Representation of Vectors in R n.... R. The
More informationVector Spaces in Quantum Mechanics
Chapter 8 Vector Spaces in Quantum Mechanics We have seen in the previous Chapter that there is a sense in which the state of a quantum system can be thought of as being made up of other possible states.
More informationChapter 8: An Introduction to Probability and Statistics
Course S3, 200 07 Chapter 8: An Introduction to Probability and Statistics This material is covered in the book: Erwin Kreyszig, Advanced Engineering Mathematics (9th edition) Chapter 24 (not including
More informationProbabilistic Systems Analysis Spring 2018 Lecture 6. Random Variables: Probability Mass Function and Expectation
EE 178 Probabilistic Systems Analysis Spring 2018 Lecture 6 Random Variables: Probability Mass Function and Expectation Probability Mass Function When we introduce the basic probability model in Note 1,
More informationCS 246 Review of Proof Techniques and Probability 01/14/19
Note: This document has been adapted from a similar review session for CS224W (Autumn 2018). It was originally compiled by Jessica Su, with minor edits by Jayadev Bhaskaran. 1 Proof techniques Here we
More informationM378K In-Class Assignment #1
The following problems are a review of M6K. M7K In-Class Assignment # Problem.. Complete the definition of mutual exclusivity of events below: Events A, B Ω are said to be mutually exclusive if A B =.
More information18.440: Lecture 26 Conditional expectation
18.440: Lecture 26 Conditional expectation Scott Sheffield MIT 1 Outline Conditional probability distributions Conditional expectation Interpretation and examples 2 Outline Conditional probability distributions
More informationIntroduction to Probability and Stocastic Processes - Part I
Introduction to Probability and Stocastic Processes - Part I Lecture 1 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark
More informationDeep Learning for Computer Vision
Deep Learning for Computer Vision Lecture 3: Probability, Bayes Theorem, and Bayes Classification Peter Belhumeur Computer Science Columbia University Probability Should you play this game? Game: A fair
More informationAlgorithms for Uncertainty Quantification
Algorithms for Uncertainty Quantification Tobias Neckel, Ionuț-Gabriel Farcaș Lehrstuhl Informatik V Summer Semester 2017 Lecture 2: Repetition of probability theory and statistics Example: coin flip Example
More informationVECTORS. 3-1 What is Physics? 3-2 Vectors and Scalars CHAPTER
CHAPTER 3 VECTORS 3-1 What is Physics? Physics deals with a great many quantities that have both size and direction, and it needs a special mathematical language the language of vectors to describe those
More information1 Proof techniques. CS 224W Linear Algebra, Probability, and Proof Techniques
1 Proof techniques Here we will learn to prove universal mathematical statements, like the square of any odd number is odd. It s easy enough to show that this is true in specific cases for example, 3 2
More informationQuantum Mechanics - I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras. Lecture - 7 The Uncertainty Principle
Quantum Mechanics - I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras Lecture - 7 The Uncertainty Principle (Refer Slide Time: 00:07) In the last lecture, I had spoken
More information