Generalized Central Limit Theorem and Extreme Value Statistics


Ariel Amir, 9/2017

Let us consider some distribution $P(x)$ from which we draw $x_1, x_2, \ldots, x_N$. If the distribution has a finite standard deviation $\sigma$ and mean $\mu$, then we can define:

$$\xi \equiv \frac{\sum_{i=1}^{N}(x_i - \mu)}{\sigma\sqrt{N}}, \tag{1}$$

where by the Central Limit Theorem $P(\xi)$ approaches a Gaussian with vanishing mean and standard deviation of 1 as $N \to \infty$.

What happens when $P(x)$ does not have a finite variance? Or a finite mean? It turns out that there is a generalization of the CLT, which will lead to a different form of scaling, one that depends on the power-law tail of the distribution. Our analysis will have some rather surprising consequences in the case of heavy-tailed distributions where the mean diverges. In that case the sum of $N$ variables will be dominated by rare events, and we will quantify this with the aid of extreme value distributions (characterizing, for example, the maximum of $N$ i.i.d. variables). Fig. 1 shows one such example, where a running sum of variables drawn from a distribution whose tail falls off as $p(x) \sim 1/x^{1+\mu}$ was used.

[Figure 1: Running sum for heavy-tailed distributions (blue: $\mu = 0.5$, red: $\mu = 1.5$, green: $\mu = 2.5$).]
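The behavior in Fig. 1 is easy to reproduce numerically. The sketch below is an illustration, not the code used for the figure (the inverse-CDF sampler, sample sizes, and seed are my own choices): it draws variables with tail $p(x) \sim 1/x^{1+\mu}$ and accumulates their running sum:

```python
import random

def pareto_sample(mu, rng):
    # inverse-CDF sampling of a variable with tail p(x) ~ 1/x^(1+mu), x >= 1
    return (1.0 - rng.random()) ** (-1.0 / mu)

def running_sum(mu, n, seed=0):
    # running sum of n i.i.d. heavy-tailed samples, as in Fig. 1
    rng = random.Random(seed)
    total, out = 0.0, []
    for _ in range(n):
        total += pareto_sample(mu, rng)
        out.append(total)
    return out

# mu < 1: the mean diverges and the sum is dominated by rare events;
# mu > 2: finite variance, ordinary CLT behavior
sums = {mu: running_sum(mu, 10000) for mu in (0.5, 1.5, 2.5)}
```

For $\mu = 0.5$ the trajectory advances in a few macroscopic jumps, while for $\mu = 2.5$ the sum grows essentially linearly at the rate $\mu/(\mu - 1)$.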

1 Probability distribution of sums

An important quantity to define is the characteristic function of the distribution:

$$\hat{f}(\omega) = \int f(t)e^{i\omega t}\,dt. \tag{2}$$

Given two variables $x, y$ with distributions $f(x)$ and $g(y)$, consider their sum $z = x + y$. The distribution of $z$ is given by:

$$P(z) = \int f(x)g(z - x)\,dx, \tag{3}$$

i.e., it is the convolution of the two distributions. Consider now the Fourier transform of the distribution (i.e., its characteristic function):

$$\int e^{i\omega z}P(z)\,dz = \int\!\!\int f(x)g(z - x)e^{i\omega z}\,dx\,dz. \tag{4}$$

Changing variables to $x$ and $y \equiv z - x$, the Jacobian of the transformation is 1, and we find:

$$\int e^{i\omega z}P(z)\,dz = \int\!\!\int f(x)g(y)e^{i\omega(x + y)}\,dx\,dy. \tag{5}$$

This is the product of the characteristic functions of $f$ and $g$, which will shortly prove very useful. This can be generalized to the case of the sum of $n$ variables. Defining:

$$X = x_1 + x_2 + \ldots + x_n, \tag{6}$$

we can write:

$$\hat{p}(\omega) = \int\!\cdots\!\int p(x_1)p(x_2)\cdots p(x_n)\,dx_1\,dx_2\ldots dx_n\int \delta(x_1 + x_2 + \ldots + x_n - X)e^{i\omega X}\,dX, \tag{7}$$

which upon doing the integration over $X$ leads to the product of the characteristic functions.
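The statement that the characteristic function of a sum is the product of the individual characteristic functions can be checked by Monte Carlo. The sketch below is illustrative (the choice of distributions, sample size, seed, and frequency are arbitrary):

```python
import cmath
import random

def emp_char_fn(samples, w):
    # empirical characteristic function:  (1/n) sum_k e^{i w x_k}
    return sum(cmath.exp(1j * w * x) for x in samples) / len(samples)

rng = random.Random(1)
n = 20000
xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # f: uniform on [-1, 1]
ys = [rng.expovariate(1.0) for _ in range(n)]    # g: exponential, rate 1
zs = [x + y for x, y in zip(xs, ys)]             # z = x + y

w = 0.7
lhs = emp_char_fn(zs, w)                       # characteristic function of the sum
rhs = emp_char_fn(xs, w) * emp_char_fn(ys, w)  # product of the individual ones
exact = (cmath.sin(w) / w) / (1 - 1j * w)      # closed forms: sin(w)/w and 1/(1 - iw)
```

Both `lhs` and `rhs` agree with the exact product $\frac{\sin\omega}{\omega}\cdot\frac{1}{1 - i\omega}$ up to sampling noise of order $1/\sqrt{n}$.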

Finally, the same property also holds for the Laplace transform of positive variables, with essentially the same proof.

2 Tauberian theorems

Consider $n$ i.i.d. variables drawn from a distribution $p(x)$. The characteristic function of the sum is given by:

$$g_n(\omega) = (f(\omega))^n. \tag{8}$$

It is clear that the characteristic function is bounded by 1 in magnitude, and that as $\omega \to 0$ it tends to 1. The behavior of the characteristic function near the origin is determined by the tail of $p(x)$ for large $x$, as we shall quantify later, in what's known as a Tauberian theorem (relating large $x$ and small $\omega$ behavior).

Consider first a variable whose distribution has a finite variance $\sigma^2$ (and hence finite mean). Without loss of generality let's assume that the mean is 0; otherwise we can always consider a new variable with the mean subtracted. We can Taylor expand $e^{i\omega x} \approx 1 + i\omega x - \frac{\omega^2}{2}x^2$ near the origin, and find:

$$f(\omega) \approx 1 + a\omega + b\omega^2 + \ldots, \tag{9}$$

with:

$$a = i\int p(x)x\,dx = 0; \quad b = -\frac{1}{2}\int x^2 p(x)\,dx = -\frac{\sigma^2}{2}. \tag{10}$$

Therefore:

$$g_n(\omega) \approx \left(1 - \frac{\sigma^2\omega^2}{2}\right)^n. \tag{11}$$

Thus:

$$g_n(\omega) \approx e^{n\log\left(1 - \frac{\sigma^2\omega^2}{2}\right)} \approx e^{-\frac{n\sigma^2\omega^2}{2}}. \tag{12}$$
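The expansion above can be made concrete with a distribution whose characteristic function is known in closed form. The sketch below uses the uniform distribution on $[-1, 1]$ (an arbitrary choice for illustration), for which $f(\omega) = \sin\omega/\omega$ and $\sigma^2 = 1/3$:

```python
import math

sigma2 = 1.0 / 3.0  # variance of the uniform distribution on [-1, 1]

def char_fn(w):
    # exact characteristic function of uniform[-1, 1]
    return math.sin(w) / w

def quadratic_approx(w):
    # the expansion of Eqs. (9)-(10):  f(w) ~ 1 - sigma^2 w^2 / 2
    return 1.0 - sigma2 * w * w / 2.0

# the error of the expansion is O(w^4), so it shrinks fast near the origin
errors = [abs(char_fn(w) - quadratic_approx(w)) for w in (0.2, 0.1, 0.05)]

# Eq. (12): for the sum of n variables, (f(w))^n ~ exp(-n sigma^2 w^2 / 2)
n, w = 100, 0.05
gn = char_fn(w) ** n
gaussian = math.exp(-n * sigma2 * w * w / 2.0)
```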

In the next section we shall see the significance of the correction to the last expansion. Taking the inverse Fourier transform leads to:

$$P_n(x) \propto e^{-\frac{x^2}{2\sigma^2 n}}, \tag{13}$$

i.e., the sum is approximated by a Gaussian with standard deviation $\sigma\sqrt{n}$, as we know from the CLT. Note that the finiteness of the variance, associated with the properties of the tail of $P(x)$, was essential to establishing the behavior of $f(\omega)$ for small frequencies. This is an example of a Tauberian theorem, where large $x$ behavior determines the small $\omega$ behavior, and we shall see that this kind of relation holds also for cases where the variance diverges. Before going to these interesting scenarios in general, let us consider a particular example where the variance diverges.

2.1 Example: Cauchy distribution

Consider the following distribution, known as the Cauchy distribution:

$$f(x) = \frac{1}{\gamma\pi\left(1 + (x/\gamma)^2\right)} \tag{14}$$

(note that this has the same Lorentzian expression as telegraph noise). Its Fourier transform is:

$$\varphi(\omega) = e^{-\gamma|\omega|}. \tag{15}$$

Thus the characteristic function of a sum of $N$ such variables is:

$$\varphi(\omega) = e^{-N\gamma|\omega|}, \tag{16}$$

and taking the inverse Fourier transform we find that the sum also follows a Cauchy distribution:

$$f(x) = \frac{1}{N\gamma\pi\left(1 + (x/N\gamma)^2\right)}. \tag{17}$$

Thus, the sum does not converge to a Gaussian, but rather retains its Lorentzian form. Moreover, it is interesting to note that the scaling form governing the width of the Lorentzian evolves with $N$ in a different way than in the Gaussian scenario: while in the latter the variance increases linearly with $N$, hence the width increases as $\sqrt{N}$, here the scaling factor is linear in $N$. This remarkable property is in fact useful for certain computer science algorithms [REF].

3 Generalized CLT

We shall now use a different method to generalize this result further, to distributions with arbitrary support. We will look for distributions which are stable: this means that if we add two (or more) variables drawn from this distribution, the distribution of the sum will retain the same shape, i.e., it will be identical up to a potential shift and scaling by some yet undetermined factors. Clearly, if the sum of a large number of variables drawn from any distribution converges to a distribution with a well-defined shape, it must be such a stable distribution. The family of such variables is known as the Levy stable distributions. Clearly, a Gaussian distribution obeys this property. We also saw that the inverse Laplace transform of $e^{-x^\mu}$ with $0 < \mu < 1$ obeys it. We shall now use an RG (renormalization group) approach to find the general form of such distributions, which will turn out to have a simple representation in Fourier rather than real space, because the characteristic function is the natural object to deal with here. Defining the partial sums by $s_n$, the general scaling one may consider is:

$$\xi_n = \frac{s_n - b_n}{a_n}. \tag{18}$$

Here, $a_n$ determines the width of the distribution, and $b_n$ is a shift. If the distribution has a finite mean it seems plausible that we should center it by choosing $b_n = n\langle x\rangle$.
We will show that this is indeed the case, and that if its mean is infinite we can set $b_n = 0$. Note that we have seen this previously for the case of positive variables.
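The stability of the Cauchy example above can be verified by sampling: sums of $N$ standard Cauchy variables, scaled by $1/N$, should again be standard Cauchy, so quantile-based width measures do not shrink with $N$. The sketch below is illustrative (seed, sample sizes, and tolerances are arbitrary choices):

```python
import math
import random

def std_cauchy(rng):
    # inverse-CDF sampling of a standard Cauchy variable (gamma = 1)
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(2)
N, m = 10, 20000
# each entry is (sum of N standard Cauchy variables) / N:
# by stability, this is again standard Cauchy, NOT narrower
scaled = sorted(sum(std_cauchy(rng) for _ in range(N)) / N for _ in range(m))
iqr = scaled[3 * m // 4] - scaled[m // 4]  # interquartile range
```

For the standard Cauchy the quartiles sit at $\tan(\pm\pi/4) = \pm 1$, so the interquartile range of the scaled sums stays near 2, in sharp contrast to the $1/\sqrt{N}$ shrinkage a finite-variance distribution would show.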

The scaling we are seeking is of the form of Eq. (18), and our hope is that:

$$\lim_{n\to\infty} P(\xi_n = \xi) \tag{19}$$

exists, i.e., the scaled sum converges to some distribution, which is not necessarily Gaussian (or symmetric). Let us denote the characteristic function of the scaled variable by $\varphi(\omega)$ (assumed to be independent of $n$ as $n \to \infty$). Consider the variable $y_n = a_n\xi_n$. We have:

$$P(y_n) = \frac{1}{a_n}P(y_n/a_n), \tag{20}$$

with $P$ the limiting distribution of Eq. (19). Therefore the characteristic function of the variable $y_n$ is:

$$\varphi_y(\omega) = \varphi(a_n\omega). \tag{21}$$

Note that the $\frac{1}{a_n}$ prefactor canceled, as it should, since the characteristic function must be 1 at $\omega = 0$. Consider next the distribution of the sum $s_n$. We have $s_n = y_n + b_n$, and:

$$\hat{P}(s_n) = P(s_n - b_n). \tag{22}$$

It is straightforward to see that shifting a distribution by $b_n$ implies multiplying the characteristic function by $e^{i\omega b_n}$. Therefore the characteristic function of the sum is:

$$\Phi_N(\omega) = e^{ib_N\omega}\varphi_y(\omega) = e^{ib_N\omega}\varphi(a_N\omega). \tag{23}$$

This form will be the basis for the rest of the derivation, where we emphasize that the characteristic function $\varphi$ is $N$-independent. Consider $N = n \cdot m$, where $n, m$ are two large numbers. The important insight is to realize that one may compute $s_N$ in two ways: as the sum of $N$ of the original variables, or as the sum of $m$ variables, each one being the sum of $n$ of the original variables. The sum of $n$ variables drawn from the original distribution

is given by:

$$\Phi_n(\omega) = e^{ib_n\omega}\varphi(a_n\omega). \tag{24}$$

If we take a sum of $m$ variables drawn from that distribution (i.e., the one corresponding to the sums of $n$'s), then its characteristic function will be, on one hand:

$$f(\omega) = e^{imb_n\omega}\left(\varphi(a_n\omega)\right)^m, \tag{25}$$

and on the other hand it is the distribution of $n \cdot m = N$ variables drawn from the original distribution, and hence does not depend on $n$ or $m$ separately but only on their product $N$. Therefore:

$$\frac{\partial}{\partial n}\left[i\frac{N}{n}b_n\omega + \frac{N}{n}\log[\varphi(a_n\omega)]\right] = 0. \tag{26}$$

Defining $b_n/n \equiv c_n$, we find:

$$iN\omega\frac{dc_n}{dn} - \frac{N}{n^2}\log(\varphi) + \frac{N}{n}\frac{\varphi'}{\varphi}\frac{da_n}{dn}\omega = 0, \tag{27}$$

hence:

$$\frac{\varphi'(a_n\omega)\,\omega}{\varphi(a_n\omega)}\frac{da_n}{dn} = \frac{\log\varphi(a_n\omega)}{n} - in\omega\frac{dc_n}{dn}. \tag{28}$$

Multiplying both sides by $a_n/\frac{da_n}{dn}$ and defining $z \equiv a_n\omega$, we find that:

$$\frac{\varphi'(z)z}{\varphi(z)} - \log(\varphi(z))\,\frac{a_n}{n\frac{da_n}{dn}} + iz\,\frac{n\frac{dc_n}{dn}}{\frac{da_n}{dn}} = 0. \tag{29}$$

The equation for $\varphi(z)$ has the mathematical structure:

$$\frac{\varphi'(z)}{\varphi(z)}z - C_1\log(\varphi(z)) = iC_2 z, \tag{30}$$

with $C_1 \equiv \frac{a_n}{n\,da_n/dn}$ and $C_2 \equiv -\frac{n\,dc_n/dn}{da_n/dn}$ constants, since $\varphi$ is $n$-independent.
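As a sanity check on Eq. (30), the Gaussian characteristic function $\varphi(z) = e^{-\sigma^2 z^2/2}$ should satisfy it with $C_1 = 2$ and $C_2 = 0$, since $z\varphi'/\varphi = -\sigma^2 z^2 = 2\log\varphi$; this is consistent with the familiar $a_n \propto \sqrt{n}$ scaling of the standard CLT. The sketch below verifies this numerically with a central finite difference (the variance and test points are arbitrary choices):

```python
import math

sigma2 = 0.7   # arbitrary variance for the check

def log_phi(z):
    # log of the Gaussian characteristic function exp(-sigma^2 z^2 / 2)
    return -sigma2 * z * z / 2.0

h = 1e-6
dev = 0.0
for z in (0.5, 1.0, 2.0):
    # z * phi'(z)/phi(z) = z * d/dz log(phi), via a central difference
    lhs = z * (log_phi(z + h) - log_phi(z - h)) / (2 * h)
    rhs = 2.0 * log_phi(z)   # C1 = 2, C2 = 0
    dev = max(dev, abs(lhs - rhs))
```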

If $C_2 = 0$, we can write the equation in the form:

$$\frac{\varphi'/\varphi}{\log(\varphi)} = \left[\log(\log(\varphi))\right]' = \frac{C_1}{z}. \tag{31}$$

Hence:

$$\log(\log(\varphi)) = C_1\log z + D \implies \varphi = e^{Az^{C_1}}. \tag{32}$$

This is the general solution to the homogeneous equation. Guessing a particular solution to the inhomogeneous equation of the form $\log(\varphi) = Dz$ leads to:

$$D - C_1 D = iC_2 \implies D = \frac{iC_2}{1 - C_1}. \tag{33}$$

As long as $C_1 \neq 1$, we have found a solution! In the case $C_1 = 1$, we can guess a solution of the form $\log(\varphi) = Dz\log(z)$, and find:

$$D\log(z) + D - D\log(z) = iC_2, \tag{34}$$

hence we have a solution when $D = iC_2$. Going back to Eq. (29), we can also get the scaling of the coefficients:

$$\frac{a_n}{n\frac{da_n}{dn}} = C_1 \implies \left(\log a_n\right)' = \frac{1}{C_1 n}. \tag{35}$$

This implies that:

$$\log(a_n) = \frac{1}{C_1}\log(n) + \text{const} \implies a_n \propto n^{1/C_1}. \tag{36}$$

Similarly:

$$\frac{n\frac{dc_n}{dn}}{\frac{da_n}{dn}} = -C_2. \tag{37}$$

Hence:

$$\frac{dc_n}{dn} \propto n^{1/C_1 - 2}. \tag{38}$$

Therefore:

$$c_n = C_3 n^{1/C_1 - 1} + C_4 \implies b_n = C_3 n^{1/C_1} + C_4 n. \tag{39}$$

The first term will become a constant when we divide by the term $a_n \propto n^{1/C_1}$ of Eq. (18), leading to a simple shift of the resulting distribution. The second term will vanish for $C_1 < 1$. We shall soon see that the case $C_1 > 1$ corresponds to the case of a variable with finite mean, in which case the $C_4 n$ term will be associated with centering of the scaled variable by subtracting the mean, as in the standard CLT.

3.1 General formula for the characteristic function

According to Eqs. (32) and (33), the general formula for the characteristic function for $C_1 \neq 1$ is:

$$\varphi(z) = e^{Az^{C_1} + Dz}. \tag{40}$$

The $D$ term is associated with a trivial shift of the distribution (related to the linear scaling of $b_n$) and can be eliminated; we will therefore not consider it in the following. The case of $C_1 = 1$ will be considered in the next section. The requirement that the inverse Fourier transform of $\varphi$ is a probability distribution imposes $\varphi(-z) = \varphi^*(z)$. Therefore the characteristic function takes the form:

$$\varphi = \begin{cases} e^{A\omega^{C_1}}, & \omega > 0 \\ e^{A^*|\omega|^{C_1}}, & \omega < 0. \end{cases} \tag{41}$$

This may be rewritten as:

$$\varphi = e^{-a|\omega|^{\mu}\left[1 - i\beta\,\mathrm{sign}(\omega)\tan\left(\frac{\pi\mu}{2}\right)\right]}. \tag{42}$$

The asymmetry of the distribution is determined by $\beta$. For this representation of $\varphi$, we will show that

$-1 \leq \beta \leq 1$, and that $\beta = 1$ corresponds to a strictly positive distribution, $\beta = -1$ to a strictly negative distribution, and $\beta = 0$ to a symmetric one. This formula will not be valid for $\mu = 1$, a case that we will discuss in the next section.

Consider $f(x)$ which decays, for $x > x^*$, as:

$$f(x) = \frac{A_+}{x^{1+\mu}}, \tag{43}$$

with $0 < \mu < 1$. We shall now derive yet another Tauberian relation, finding the form of the Fourier transform of $f(x)$ near the origin in terms of the tail of the distribution. For small, positive $\omega$ we find:

$$\Phi(\omega) \approx \int_{x^*}^{\infty}\frac{A_+}{x^{1+\mu}}e^{i\omega x}\,dx = A_+\omega^{\mu}\int_{x^*\omega}^{\infty}\frac{e^{im}}{m^{1+\mu}}\,dm, \tag{44}$$

where we substituted $m = \omega x$. Evaluating the integral on the RHS by parts, we obtain:

$$I = \left[-\frac{m^{-\mu}}{\mu}e^{im}\right]_{x^*\omega}^{\infty} + \frac{i}{\mu}\int_{x^*\omega}^{\infty}\frac{e^{im}}{m^{\mu}}\,dm. \tag{45}$$

For $\mu < 1$, we may approximate the second integral by replacing the lower limit of integration by 0, to find:

$$I \approx \frac{(x^*\omega)^{-\mu}}{\mu}e^{ix^*\omega} + \frac{i}{\mu}\int_0^{\infty}\frac{e^{im}}{m^{\mu}}\,dm, \tag{46}$$

and $\int_0^{\infty}\frac{e^{im}}{m^{\mu}}\,dm = i\Gamma(1-\mu)e^{-i\frac{\pi}{2}\mu}$ (this can easily be evaluated using contour integration). Thus, for small $\omega$, the singular part of $\Phi(\omega)$ scales as:

$$\Phi(\omega) \sim -C\omega^{\mu}, \tag{47}$$

with $C \propto e^{-i\frac{\pi}{2}\mu}$. It is possible to bound the rest of the integral to be at most linear in $\omega$ for small $\omega$ (up to a constant). Therefore the characteristic function near the origin is approximated by:

$$\varphi(\omega) \approx 1 - C\omega^{\mu}, \tag{48}$$

with:

$$\frac{\mathrm{Im}(C)}{\mathrm{Re}(C)} = -\tan\left(\frac{\pi}{2}\mu\right) \implies C \propto 1 - i\tan\left(\frac{\pi}{2}\mu\right). \tag{49}$$

This corresponds to $\beta = 1$ in our previous representation. If we similarly look at a distribution with a left tail, a similar analysis leads to the same form of Eq. (48), albeit with $C \propto 1 + i\tan\left(\frac{\pi}{2}\mu\right)$, corresponding to $\beta = -1$ in Eq. (42). In the general case where both $A_+$ and $A_-$ exist, we obtain (up to an overall positive constant in the correction):

$$\varphi(\omega) \approx 1 - \omega^{\mu}\left(A_+e^{-i\mu\frac{\pi}{2}} + A_-e^{i\mu\frac{\pi}{2}}\right), \tag{50}$$

with:

$$\frac{\mathrm{Im}(C)}{\mathrm{Re}(C)} = -\frac{\sin\left(\frac{\pi}{2}\mu\right)}{\cos\left(\frac{\pi}{2}\mu\right)}\left(\frac{A_+ - A_-}{A_+ + A_-}\right) = -\tan\left(\frac{\pi}{2}\mu\right)\beta, \tag{51}$$

so that $\beta$ is identified as:

$$\beta = \frac{A_+ - A_-}{A_+ + A_-}. \tag{52}$$

What about the case where $1 < \mu < 2$? Note that in this case we can write $p(x) = g'(x)$, where $g$ approaches a finite constant at infinity rather than decaying to zero, with a correction that decays as $1/x^{\mu} = 1/x^{1+(\mu-1)}$, i.e., an effective tail exponent $\mu - 1 \in (0, 1)$. Its Fourier transform will thus, by the same logic, be approximated by the following form for small $\omega$:

$$\hat{g}(\omega) \approx a_1/\omega + a_2\omega^{\mu - 1}, \tag{53}$$

where the $1/\omega$ divergence stems from the constant value of $g$ at infinity. The Fourier transforms of the two functions are related by $\hat{p} = -i\omega\hat{g}$. Therefore for small $\omega$ we have:

$$\hat{p}(\omega) = 1 - (-i\omega)C\omega^{\mu - 1} + iB\omega, \tag{54}$$

where the constant $B$ must be equal to the expectation value of $x$, and Eq. (49) (applied with the exponent $\mu - 1$) gives the form of $C$. We

can simplify this using the trigonometric relation $\tan\left(\frac{\pi}{2}(\mu - 1)\right) = \tan\left(\frac{\pi}{2}\mu - \frac{\pi}{2}\right) = -\cot\left(\frac{\pi}{2}\mu\right)$, to find:

$$-iC \propto -i\left[1 - i\tan\left(\frac{\pi}{2}(\mu - 1)\right)\right] = -\tan\left(\frac{\pi}{2}(\mu - 1)\right) - i. \tag{55}$$

Up to a real constant, this is proportional to:

$$1 + i\cot\left(\frac{\pi}{2}(\mu - 1)\right) = 1 - i\tan\left(\frac{\pi}{2}\mu\right). \tag{56}$$

Thus the form of Eq. (42) (with $\beta = 1$) is still intact also for $1 < \mu < 2$. Note that the linear term will drop out due to the shift of Eq. (18). Also, it is worth mentioning that the asymmetry term vanishes as $\mu \to 2$: in the case of finite variance we always obtain a symmetric Gaussian, i.e., we become insensitive to the asymmetry of the original distribution.

Special Cases

$\mu = 1/2$, $\beta = 1$: the Levy distribution. Consider the Levy distribution:

$$p(x) = \sqrt{\frac{C}{2\pi}}\,\frac{e^{-\frac{C}{2x}}}{x^{3/2}} \quad (x \geq 0). \tag{57}$$

The Fourier transform of $p(x)$ for $\omega > 0$ is:

$$\varphi(\omega) = e^{-\sqrt{-2iC\omega}}, \tag{58}$$

which indeed corresponds to Eq. (42) with $\tan\left(\frac{\pi\mu}{2}\right)\Big|_{\mu = 1/2} = 1$ and $\beta = 1$.

The case $\mu = 1$ and $\beta = 0$ corresponds to the previously discussed Cauchy distribution. In the general case $\mu = 1$ and $\beta \neq 0$, we have seen that the general form of the characteristic function is, according to Eqs. (32) and (34):

$$\varphi(z) = e^{Az + Dz\log(z)}. \tag{59}$$
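As a quick numerical check on Eq. (57), the sketch below integrates the Levy density and compares the result against the known closed form of its cumulative function, $F(x) = \mathrm{erfc}\left(\sqrt{C/(2x)}\right)$ (the value $C = 1$, the grid, and the cutoff are arbitrary choices):

```python
import math

C = 1.0

def levy_pdf(x):
    # Eq. (57): Levy stable density for mu = 1/2, beta = 1 (support x >= 0)
    return math.sqrt(C / (2 * math.pi)) * math.exp(-C / (2 * x)) / x ** 1.5

M, h = 50.0, 0.0005
# Riemann sum of the density over (0, M]
mass = sum(levy_pdf(k * h) for k in range(1, int(M / h) + 1)) * h
# known cumulative of the Levy distribution at the cutoff
exact = math.erfc(math.sqrt(C / (2 * M)))
```

Because of the heavy $x^{-3/2}$ tail, the mass missing beyond the cutoff decays only as $M^{-1/2}$, which is why the comparison is made against $F(M)$ rather than against 1.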

Repeating the logic we used before to establish the coefficients $D$ for $z > 0$ and $z < 0$, based on the power-law tails of the distributions (which in this case fall off like $1/x^2$), leads to:

$$\varphi(\omega) = e^{-C|\omega|\left[1 + i\beta\,\mathrm{sign}(\omega)\phi\right]}; \quad \phi = \frac{2}{\pi}\log|\omega|. \tag{60}$$

This is the only exception to the form of Eq. (42). It remains to be shown why $\beta = 1$ corresponds to the case of a strictly positive distribution (which would thus justify the $\frac{2}{\pi}$ factor in the definition of $\phi$). To see this, note that the logic leading up to Eq. (45) is still intact for the case $\mu = 1$. However, we can no longer replace the lower limit of integration by 0. Instead, note that the imaginary part of the integral diverges as $\log(\omega)$, while the real part does not suffer from such a divergence, and we can well approximate it by replacing the lower limit of integration with 0. Therefore, for small $\omega > 0$, the correction to the characteristic function is proportional to:

$$\omega\left(\int_0^{\infty}\frac{\sin(x)}{x}\,dx + i\log(\omega)\right) = \omega\left(\frac{\pi}{2} + i\log(\omega)\right). \tag{61}$$

This leads to the form of Eq. (60).

4 Exploring the stable distributions numerically

Since we defined the stable distributions in terms of their Fourier transforms, it is very easy to find them numerically by calculating the inverse Fourier transform. Here is a short MATLAB code that finds the stable distribution for $\mu = 0.5$ and $\beta = 0.5$ (i.e., corresponding to a distribution with right and left power-law tails of asymmetric magnitudes). It is easy to check that if $\beta$ exceeds one, the inverse Fourier transform leads to a function with negative values, and hence does not correspond to a probability distribution, as we anticipated.

mu=0.5; beta=0.5; dt=0.001;
w_vec=-10000:dt:10000;
for indx=1:length(w_vec)
    w=w_vec(indx);
    f(indx)=exp(-abs(w)^mu/2*(1-1i*beta*tan(pi*mu/2)*sign(w)));

end;
x_vec=-5:0.1:5;
for indx=1:length(x_vec)
    x=x_vec(indx);
    y(indx)=0;
    for tmp=1:length(f)
        y(indx)=y(indx)+exp(-1i*w_vec(tmp)*x)*f(tmp)*dt;
    end;
end;
y=y/(2*pi);
plot(x_vec,y);

Fig. 2 shows the result. There are much better numerical methods to evaluate the Levy stable distribution [1]. Fig. 3 shows examples of Levy distributions obtained using such a method. Note that for $\mu < 1$ and $\beta = 1$ the support of the stable distribution is strictly positive, while this is not the case for $\mu > 1$. This is expected, since for $\mu \leq 1$ we have seen that we can set the shift $b_n$ to zero and still have convergence to the Levy stable distribution. Since the initial distribution is strictly positive, the scaled sum will also be strictly positive, and hence the result for the limiting distribution follows. Note that this argument fails for $\mu > 1$.

5 RG Approach for Extreme Value Distributions

Consider the maximum of $n$ variables drawn from some distribution $p(x)$, characterized by a cumulative distribution $C(x)$. We will now find the behavior of the maximum for large $n$, which will turn out to also follow universal statistics that depend on the tail of $p(x)$, much like in the case of the generalized CLT. This was discovered by Fisher and Tippett, motivated by an attempt to characterize the distribution of strengths of cotton fibers [2]. If we define:

$$X_n \equiv \max(x_1, x_2, \ldots, x_n), \tag{62}$$

then we have:

$$\mathrm{Prob}(X_n < x) = \mathrm{Prob}(x_1 < x)\,\mathrm{Prob}(x_2 < x)\cdots\mathrm{Prob}(x_n < x) = C^n(x). \tag{63}$$

For this reason it is natural to work with the cumulative distribution when dealing with extreme value statistics, akin to the role which the characteristic function played in the previous chapter. Clearly, it is easy to convert the question of the minimum of $n$ variables to one related to the maximum, if we define $\tilde{p}(x) = p(-x)$.

[Figure 2: Levy stable distribution for $\mu = 0.5$ and asymmetry $\beta = 1$, obtained by numerically inverting Eq. (42).]
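Equation (63) is easy to test by simulation. The sketch below is illustrative (uniform variables, seed, and tolerance are arbitrary choices): for $n$ i.i.d. uniform variables on $[0, 1]$ we have $C(x) = x$, so the maximum should satisfy $\mathrm{Prob}(X_n < x) = x^n$:

```python
import random

rng = random.Random(4)
n, trials = 5, 20000
# maxima of n i.i.d. uniform[0, 1] variables; here C(x) = x
maxima = [max(rng.random() for _ in range(n)) for _ in range(trials)]

x = 0.9
empirical = sum(1 for m in maxima if m < x) / trials  # empirical Prob(X_n < x)
predicted = x ** n                                    # Eq. (63): C^n(x)
```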

[Figure 3: Levy stable distributions for $\mu$ ranging from 0.5 to 1.5 and asymmetry $\beta$ ranging from 0 to 1.]

5.1 Example I: the Gumbel distribution

Consider the distribution:

$$p(x) = e^{-x} \quad (x \geq 0). \tag{64}$$

The cumulative is therefore:

$$C(x) = 1 - e^{-x}. \tag{65}$$

The cumulative distribution for the maximum of $n$ variables is therefore:

$$G(x) = \left(1 - e^{-x}\right)^n \approx e^{-ne^{-x}} = e^{-e^{-(x - \mu)}}, \tag{66}$$

with $\mu \equiv \log(n)$. This is an example of the Gumbel distribution. Taking the derivative to find the probability distribution of the maximum, we find:

$$p(x) = e^{-e^{-(x - \mu)}}e^{-(x - \mu)}. \tag{67}$$

Denoting $l \equiv e^{-(x - \mu)}$, we have:

$$p(x) = e^{-l}l, \tag{68}$$

and we can find the maximum by taking the derivative with respect to $l$, leading to:

$$\frac{d}{dl}\left(e^{-l}l\right) = e^{-l}(1 - l) = 0 \implies l = 1. \tag{69}$$

Hence the distribution is peaked at $x = \log(n)$. It is easy to see that its width is of order unity. We can now revisit the approximation we made in Eq. (66), and check its validity. We have seen before that making the approximation:

$$\left(1 - x/n\right)^n \approx e^{-x} \tag{70}$$

is valid under the condition $x \ll \sqrt{n}$. In our case, this implies:

$$ne^{-x} \ll \sqrt{n}. \tag{71}$$

At the peak of the distribution we have $ne^{-x} = 1$, and the approximation is valid for $n \gg 1$. From

Eq. (71) we see that this would hold in a region of size $O(\log(\sqrt{n}))$ around it; since the width of the distribution is of order unity, this implies that for large $n$ the Gumbel distribution would approximate the exact solution well, until we are sufficiently far in the tail that the probability distribution is vanishingly small. However, a note of caution is in place: the logarithmic dependence we found signals a very slow convergence to the limiting form. This is also true in the case where the distribution $p(x)$ is Gaussian, as was already noted in Fisher and Tippett's original work [2]. We will soon show that for the maximum of a large number of variables drawn from any distribution with a sufficiently fast decaying tail, the Gumbel distribution arises.

5.2 Example II: the Weibull distribution

Consider the minimum of the same distribution we had in the previous example. The same logic would give us that:

$$\mathrm{Prob}[\min(x_1, \ldots, x_n) > \xi] = \mathrm{Prob}[x_1 > \xi]\,\mathrm{Prob}[x_2 > \xi]\cdots\mathrm{Prob}[x_n > \xi] = e^{-\xi n}. \tag{72}$$

This is an example of the Weibull distribution, which occurs when the variable is bounded (e.g., in this case the variable is never negative). The general case, for the maximum of $n$ variables with a distribution bounded by $C$, would be:

$$G(x) = \begin{cases} e^{-\left(\frac{C - x}{b}\right)^{\alpha}}, & x \leq C \\ 1, & x > C. \end{cases} \tag{73}$$

5.3 Example III: the Frechet distribution

The final example belongs to the third possible universality class, corresponding to variables with a power-law tail. If at large $x$ we have:

$$p(x) = \frac{a}{(x - b)^{1+\mu}}, \tag{74}$$

then the cumulative distribution is:

$$C(x) = 1 - \frac{a}{\mu(x - b)^{\mu}}. \tag{75}$$

Therefore, taking it to a large power $n$ we find:

$$C^n(x) \approx e^{-\frac{an}{\mu(x - b)^{\mu}}}. \tag{76}$$

Upon appropriately scaling the variable, we find that:

$$G(x) = e^{-A\left(\frac{x - B}{n^{1/\mu}}\right)^{-\mu}} \tag{77}$$

(where $A$ and $B$ do not depend on $n$). We shall now show that these 3 cases can be derived in a general framework, using a similar approach to the one we used earlier.

5.4 General form for extreme value distributions

We will follow similar logic to the RG used to find the form of the characteristic functions in the generalized CLT. Let us assume that there exist some scaling coefficients $a_n, b_n$ such that, when we define:

$$\xi_n \equiv \frac{X - b_n}{a_n}, \tag{78}$$

the following limit exists:

$$\lim_{n\to\infty} \mathrm{Prob}(\xi_n = \xi) = g(\xi) \tag{79}$$

(note that this limit is not unique: we can always shift and rescale by a constant). This would imply that $p(x) \approx \frac{1}{a_n}g\left(\frac{X - b_n}{a_n}\right)$, and the cumulative is given by $G\left(\frac{X - b_n}{a_n}\right)$. By the same logic we used before, we know that:

$$G^m\left(\frac{X - b_n}{a_n}\right) \tag{80}$$

depends only on the quantity $N = n \cdot m$. Therefore we have:

$$\frac{d}{dn}\left[G^{N/n}\left(\frac{X - b_n}{a_n}\right)\right] = 0. \tag{81}$$

This implies that:

$$\frac{d}{dn}\left[\frac{N}{n}\log\left(G\left(\frac{X - b_n}{a_n}\right)\right)\right] = 0. \tag{82}$$

From this we can deduce that:

$$-\frac{N}{n^2}\log G + \frac{N}{n}\frac{G'}{G}\left[-\frac{\frac{db_n}{dn}}{a_n} - \frac{(X - b_n)\frac{da_n}{dn}}{a_n^2}\right] = 0. \tag{83}$$

Now, fixing $n$ and denoting $z = \frac{X - b_n}{a_n}$, this takes the form:

$$\frac{G'(z)\,(a_1 z + a_2)}{G(z)\log G(z)} = \text{const}. \tag{84}$$

From this we can see that:

$$\log(-\log(G)) = \frac{1}{a_3}\log(a_1 z + a_2) + \text{const}. \tag{85}$$

Now we simply solve for $G$ to recover that $G$ takes the form:

$$G = e^{-(a_1 z + a_2)^{1/a_3}}. \tag{86}$$

How can the three forms of extreme value distribution be recovered from this form?

Frechet distribution: one choice of the coefficients yields the Frechet distribution:

$$e^{-a(x - b)^{-\mu}} \quad \text{(Frechet)}, \tag{87}$$

where we have $\mu > 0$.

Weibull distribution:

A different choice of the signs of the coefficients of Eq. (86) yields the Weibull distribution:

$$e^{-a(C - x)^{\mu}} \quad \text{(Weibull)}, \tag{88}$$

with $\mu > 0$.

Gumbel distribution: finally, if we take the limit $a_3 = 1/\mu \to 0$, one can choose the coefficients such that:

$$\left(1 - \frac{x - d}{b\mu}\right)^{\mu} \longrightarrow \exp\left(-\frac{x - d}{b}\right). \tag{89}$$

This gives us the Gumbel distribution:

$$\exp\left(-\exp\left(-\frac{x - d}{b}\right)\right) \quad \text{(Gumbel)}. \tag{90}$$

Let us now consider the scaling coefficients $a_n$ and $b_n$. For the Frechet distribution, we have:

$$G^{N/n}(X) = e^{-\frac{N}{n}a\left(\frac{X - b_n}{a_n} - b\right)^{-\mu}}. \tag{91}$$

Since this has to be $n$-independent, it is easy to see that we must have $b_n = \text{const}$, and:

$$\frac{a_n^{\mu}}{n} = \text{const} \implies a_n \propto n^{1/\mu}. \tag{92}$$

Note that for $\mu < 1$ this implies that the maximum scales in the same way as the sum! This elucidates the behavior of the Levy flights which we have seen earlier. By the same reasoning we find that for the Weibull distribution of Eq. (88) we have:

$$G(X) = e^{-\left(\frac{b_n - X}{a_n}\right)^{\mu}}, \tag{93}$$

with $b_n = \text{const}$ and $a_n \propto n^{-1/\mu}$. Note that in this case, because the original distribution is bounded, the distribution of the maximum

becomes narrower with larger $n$. In a similar fashion the scaling for the Gumbel distribution can be determined. By the same logic, the following expression should be $n$-independent:

$$\exp\left(-\frac{N}{n}\exp\left(-\frac{\frac{X - b_n}{a_n} - d}{b}\right)\right). \tag{94}$$

Equivalently, we can demand the following expression to be $n$-independent:

$$\frac{\frac{X - b_n}{a_n} - d}{b} - \log(N/n). \tag{95}$$

This implies, remarkably, that $b_n \propto \log(n)$ while $a_n = \text{const}$: the distribution retains the same shape and width, but (slowly) shifts towards larger values.

Acknowledgments

I thank Ori Hirschberg, Felix Wong and Po-Yi Ho for useful comments, and the students of AM 203 at Harvard where this material was first tested.

References

[1]

[2] Fisher, R.A. and Tippett, L.H.C., 1928. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Mathematical Proceedings of the Cambridge Philosophical Society, 24(2). Cambridge University Press.


More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

f(x) cos dx L L f(x) sin L + b n sin a n cos

f(x) cos dx L L f(x) sin L + b n sin a n cos Chapter Fourier Series and Transforms. Fourier Series et f(x be an integrable functin on [, ]. Then the fourier co-ecients are dened as a n b n f(x cos f(x sin The claim is that the function f then can

More information

Taylor and Laurent Series

Taylor and Laurent Series Chapter 4 Taylor and Laurent Series 4.. Taylor Series 4... Taylor Series for Holomorphic Functions. In Real Analysis, the Taylor series of a given function f : R R is given by: f (x + f (x (x x + f (x

More information

221B Lecture Notes on Resonances in Classical Mechanics

221B Lecture Notes on Resonances in Classical Mechanics 1B Lecture Notes on Resonances in Classical Mechanics 1 Harmonic Oscillators Harmonic oscillators appear in many different contexts in classical mechanics. Examples include: spring, pendulum (with a small

More information

Part IB. Complex Analysis. Year

Part IB. Complex Analysis. Year Part IB Complex Analysis Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 Paper 1, Section I 2A Complex Analysis or Complex Methods 7 (a) Show that w = log(z) is a conformal

More information

MATHS & STATISTICS OF MEASUREMENT

MATHS & STATISTICS OF MEASUREMENT Imperial College London BSc/MSci EXAMINATION June 2008 This paper is also taken for the relevant Examination for the Associateship MATHS & STATISTICS OF MEASUREMENT For Second-Year Physics Students Tuesday,

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

Expectation, variance and moments

Expectation, variance and moments Expectation, variance and moments John Appleby Contents Expectation and variance Examples 3 Moments and the moment generating function 4 4 Examples of moment generating functions 5 5 Concluding remarks

More information

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 = Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

Damped harmonic motion

Damped harmonic motion Damped harmonic motion March 3, 016 Harmonic motion is studied in the presence of a damping force proportional to the velocity. The complex method is introduced, and the different cases of under-damping,

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

The random variable 1

The random variable 1 The random variable 1 Contents 1. Definition 2. Distribution and density function 3. Specific random variables 4. Functions of one random variable 5. Mean and variance 2 The random variable A random variable

More information

04. Random Variables: Concepts

04. Random Variables: Concepts University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative

More information

4 Uniform convergence

4 Uniform convergence 4 Uniform convergence In the last few sections we have seen several functions which have been defined via series or integrals. We now want to develop tools that will allow us to show that these functions

More information

PENULTIMATE APPROXIMATIONS FOR WEATHER AND CLIMATE EXTREMES. Rick Katz

PENULTIMATE APPROXIMATIONS FOR WEATHER AND CLIMATE EXTREMES. Rick Katz PENULTIMATE APPROXIMATIONS FOR WEATHER AND CLIMATE EXTREMES Rick Katz Institute for Mathematics Applied to Geosciences National Center for Atmospheric Research Boulder, CO USA Email: rwk@ucar.edu Web site:

More information

Wave Phenomena Physics 15c. Lecture 11 Dispersion

Wave Phenomena Physics 15c. Lecture 11 Dispersion Wave Phenomena Physics 15c Lecture 11 Dispersion What We Did Last Time Defined Fourier transform f (t) = F(ω)e iωt dω F(ω) = 1 2π f(t) and F(w) represent a function in time and frequency domains Analyzed

More information

Example 4.1 Let X be a random variable and f(t) a given function of time. Then. Y (t) = f(t)x. Y (t) = X sin(ωt + δ)

Example 4.1 Let X be a random variable and f(t) a given function of time. Then. Y (t) = f(t)x. Y (t) = X sin(ωt + δ) Chapter 4 Stochastic Processes 4. Definition In the previous chapter we studied random variables as functions on a sample space X(ω), ω Ω, without regard to how these might depend on parameters. We now

More information

Lecture 34. Fourier Transforms

Lecture 34. Fourier Transforms Lecture 34 Fourier Transforms In this section, we introduce the Fourier transform, a method of analyzing the frequency content of functions that are no longer τ-periodic, but which are defined over the

More information

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get 18:2 1/24/2 TOPIC. Inequalities; measures of spread. This lecture explores the implications of Jensen s inequality for g-means in general, and for harmonic, geometric, arithmetic, and related means in

More information

Fourier Series Example

Fourier Series Example Fourier Series Example Let us compute the Fourier series for the function on the interval [ π,π]. f(x) = x f is an odd function, so the a n are zero, and thus the Fourier series will be of the form f(x)

More information

f (t) K(t, u) d t. f (t) K 1 (t, u) d u. Integral Transform Inverse Fourier Transform

f (t) K(t, u) d t. f (t) K 1 (t, u) d u. Integral Transform Inverse Fourier Transform Integral Transforms Massoud Malek An integral transform maps an equation from its original domain into another domain, where it might be manipulated and solved much more easily than in the original domain.

More information

Stable Limit Laws for Marginal Probabilities from MCMC Streams: Acceleration of Convergence

Stable Limit Laws for Marginal Probabilities from MCMC Streams: Acceleration of Convergence Stable Limit Laws for Marginal Probabilities from MCMC Streams: Acceleration of Convergence Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham NC 778-5 - Revised April,

More information

Quiz Solutions, Math 136, Sec 6, 7, 16. Spring, Warm-up Problems - Review

Quiz Solutions, Math 136, Sec 6, 7, 16. Spring, Warm-up Problems - Review Quiz Solutions, Math 36, Sec 6, 7, 6. Spring, Warm-up Problems - Review. Find appropriate substitutions to evaluate the following indefinite integrals: a) b) t + 4) 3 dt. Set u t + 4. We have du dt t.

More information

Solutions to Calculus problems. b k A = limsup n. a n limsup b k,

Solutions to Calculus problems. b k A = limsup n. a n limsup b k, Solutions to Calculus problems. We will prove the statement. Let {a n } be the sequence and consider limsupa n = A [, ]. If A = ±, we can easily find a nondecreasing (nonincreasing) subsequence of {a n

More information

Physics 307. Mathematical Physics. Luis Anchordoqui. Wednesday, August 31, 16

Physics 307. Mathematical Physics. Luis Anchordoqui. Wednesday, August 31, 16 Physics 307 Mathematical Physics Luis Anchordoqui 1 Bibliography L. A. Anchordoqui and T. C. Paul, ``Mathematical Models of Physics Problems (Nova Publishers, 2013) G. F. D. Duff and D. Naylor, ``Differential

More information

Discrete Fourier Transform

Discrete Fourier Transform Discrete Fourier Transform DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 13, 2013 Mårten Björkman

More information

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows. Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage

More information

2. As we shall see, we choose to write in terms of σ x because ( X ) 2 = σ 2 x.

2. As we shall see, we choose to write in terms of σ x because ( X ) 2 = σ 2 x. Section 5.1 Simple One-Dimensional Problems: The Free Particle Page 9 The Free Particle Gaussian Wave Packets The Gaussian wave packet initial state is one of the few states for which both the { x } and

More information

THE NATIONAL UNIVERSITY OF IRELAND, CORK COLÁISTE NA hollscoile, CORCAIGH UNIVERSITY COLLEGE, CORK. Summer Examination 2009.

THE NATIONAL UNIVERSITY OF IRELAND, CORK COLÁISTE NA hollscoile, CORCAIGH UNIVERSITY COLLEGE, CORK. Summer Examination 2009. OLLSCOIL NA héireann, CORCAIGH THE NATIONAL UNIVERSITY OF IRELAND, CORK COLÁISTE NA hollscoile, CORCAIGH UNIVERSITY COLLEGE, CORK Summer Examination 2009 First Engineering MA008 Calculus and Linear Algebra

More information

Final Exam May 4, 2016

Final Exam May 4, 2016 1 Math 425 / AMCS 525 Dr. DeTurck Final Exam May 4, 2016 You may use your book and notes on this exam. Show your work in the exam book. Work only the problems that correspond to the section that you prepared.

More information

Differential Equations

Differential Equations Differential Equations Problem Sheet 1 3 rd November 2011 First-Order Ordinary Differential Equations 1. Find the general solutions of the following separable differential equations. Which equations are

More information

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1

2tdt 1 y = t2 + C y = which implies C = 1 and the solution is y = 1 Lectures - Week 11 General First Order ODEs & Numerical Methods for IVPs In general, nonlinear problems are much more difficult to solve than linear ones. Unfortunately many phenomena exhibit nonlinear

More information

Fourier transforms, Generalised functions and Greens functions

Fourier transforms, Generalised functions and Greens functions Fourier transforms, Generalised functions and Greens functions T. Johnson 2015-01-23 Electromagnetic Processes In Dispersive Media, Lecture 2 - T. Johnson 1 Motivation A big part of this course concerns

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

MA22S3 Summary Sheet: Ordinary Differential Equations

MA22S3 Summary Sheet: Ordinary Differential Equations MA22S3 Summary Sheet: Ordinary Differential Equations December 14, 2017 Kreyszig s textbook is a suitable guide for this part of the module. Contents 1 Terminology 1 2 First order separable 2 2.1 Separable

More information

8 Laws of large numbers

8 Laws of large numbers 8 Laws of large numbers 8.1 Introduction We first start with the idea of standardizing a random variable. Let X be a random variable with mean µ and variance σ 2. Then Z = (X µ)/σ will be a random variable

More information

Synopsis of Complex Analysis. Ryan D. Reece

Synopsis of Complex Analysis. Ryan D. Reece Synopsis of Complex Analysis Ryan D. Reece December 7, 2006 Chapter Complex Numbers. The Parts of a Complex Number A complex number, z, is an ordered pair of real numbers similar to the points in the real

More information

Math 201 Solutions to Assignment 1. 2ydy = x 2 dx. y = C 1 3 x3

Math 201 Solutions to Assignment 1. 2ydy = x 2 dx. y = C 1 3 x3 Math 201 Solutions to Assignment 1 1. Solve the initial value problem: x 2 dx + 2y = 0, y(0) = 2. x 2 dx + 2y = 0, y(0) = 2 2y = x 2 dx y 2 = 1 3 x3 + C y = C 1 3 x3 Notice that y is not defined for some

More information

4. Complex Oscillations

4. Complex Oscillations 4. Complex Oscillations The most common use of complex numbers in physics is for analyzing oscillations and waves. We will illustrate this with a simple but crucially important model, the damped harmonic

More information

First-Passage Statistics of Extreme Values

First-Passage Statistics of Extreme Values First-Passage Statistics of Extreme Values Eli Ben-Naim Los Alamos National Laboratory with: Paul Krapivsky (Boston University) Nathan Lemons (Los Alamos) Pearson Miller (Yale, MIT) Talk, publications

More information

Math 241 Final Exam Spring 2013

Math 241 Final Exam Spring 2013 Name: Math 241 Final Exam Spring 213 1 Instructor (circle one): Epstein Hynd Wong Please turn off and put away all electronic devices. You may use both sides of a 3 5 card for handwritten notes while you

More information

Lecture 9. = 1+z + 2! + z3. 1 = 0, it follows that the radius of convergence of (1) is.

Lecture 9. = 1+z + 2! + z3. 1 = 0, it follows that the radius of convergence of (1) is. The Exponential Function Lecture 9 The exponential function 1 plays a central role in analysis, more so in the case of complex analysis and is going to be our first example using the power series method.

More information

Spectral Broadening Mechanisms

Spectral Broadening Mechanisms Spectral Broadening Mechanisms Lorentzian broadening (Homogeneous) Gaussian broadening (Inhomogeneous, Inertial) Doppler broadening (special case for gas phase) The Fourier Transform NC State University

More information

Fourier Transforms of Measures

Fourier Transforms of Measures Fourier Transforms of Measures Sums of Independent andom Variables We begin our study of sums of independent random variables, S n = X 1 + X n. If each X i is square integrable, with mean µ i = EX i and

More information

Partial differential equations (ACM30220)

Partial differential equations (ACM30220) (ACM3. A pot on a stove has a handle of length that can be modelled as a rod with diffusion constant D. The equation for the temperature in the rod is u t Du xx < x

More information

Functions of a Complex Variable and Integral Transforms

Functions of a Complex Variable and Integral Transforms Functions of a Complex Variable and Integral Transforms Department of Mathematics Zhou Lingjun Textbook Functions of Complex Analysis with Applications to Engineering and Science, 3rd Edition. A. D. Snider

More information

A class of probability distributions for application to non-negative annual maxima

A class of probability distributions for application to non-negative annual maxima Hydrol. Earth Syst. Sci. Discuss., doi:.94/hess-7-98, 7 A class of probability distributions for application to non-negative annual maxima Earl Bardsley School of Science, University of Waikato, Hamilton

More information

(a) To show f(z) is analytic, explicitly evaluate partials,, etc. and show. = 0. To find v, integrate u = v to get v = dy u =

(a) To show f(z) is analytic, explicitly evaluate partials,, etc. and show. = 0. To find v, integrate u = v to get v = dy u = Homework -5 Solutions Problems (a) z = + 0i, (b) z = 7 + 24i 2 f(z) = u(x, y) + iv(x, y) with u(x, y) = e 2y cos(2x) and v(x, y) = e 2y sin(2x) u (a) To show f(z) is analytic, explicitly evaluate partials,,

More information

sech(ax) = 1 ch(x) = 2 e x + e x (10) cos(a) cos(b) = 2 sin( 1 2 (a + b)) sin( 1 2

sech(ax) = 1 ch(x) = 2 e x + e x (10) cos(a) cos(b) = 2 sin( 1 2 (a + b)) sin( 1 2 Math 60 43 6 Fourier transform Here are some formulae that you may want to use: F(f)(ω) def = f(x)e iωx dx, F (f)(x) = f(ω)e iωx dω, (3) F(f g) = F(f)F(g), (4) F(e α x ) = α π ω + α, F( α x + α )(ω) =

More information

Series Solutions of Linear ODEs

Series Solutions of Linear ODEs Chapter 2 Series Solutions of Linear ODEs This Chapter is concerned with solutions of linear Ordinary Differential Equations (ODE). We will start by reviewing some basic concepts and solution methods for

More information

Math Exam 2, October 14, 2008

Math Exam 2, October 14, 2008 Math 96 - Exam 2, October 4, 28 Name: Problem (5 points Find all solutions to the following system of linear equations, check your work: x + x 2 x 3 2x 2 2x 3 2 x x 2 + x 3 2 Solution Let s perform Gaussian

More information

Physics 116C The Distribution of the Sum of Random Variables

Physics 116C The Distribution of the Sum of Random Variables Physics 116C The Distribution of the Sum of Random Variables Peter Young (Dated: December 2, 2013) Consider a random variable with a probability distribution P(x). The distribution is normalized, i.e.

More information

MTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6

MTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6 MTH739U/P: Topics in Scientific Computing Autumn 16 Week 6 4.5 Generic algorithms for non-uniform variates We have seen that sampling from a uniform distribution in [, 1] is a relatively straightforward

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Lecture 9 February 2, 2016

Lecture 9 February 2, 2016 MATH 262/CME 372: Applied Fourier Analysis and Winter 26 Elements of Modern Signal Processing Lecture 9 February 2, 26 Prof. Emmanuel Candes Scribe: Carlos A. Sing-Long, Edited by E. Bates Outline Agenda:

More information

Math 1B Final Exam, Solution. Prof. Mina Aganagic Lecture 2, Spring (6 points) Use substitution and integration by parts to find:

Math 1B Final Exam, Solution. Prof. Mina Aganagic Lecture 2, Spring (6 points) Use substitution and integration by parts to find: Math B Final Eam, Solution Prof. Mina Aganagic Lecture 2, Spring 20 The eam is closed book, apart from a sheet of notes 8. Calculators are not allowed. It is your responsibility to write your answers clearly..

More information

Math 4B Notes. Written by Victoria Kala SH 6432u Office Hours: T 12:45 1:45pm Last updated 7/24/2016

Math 4B Notes. Written by Victoria Kala SH 6432u Office Hours: T 12:45 1:45pm Last updated 7/24/2016 Math 4B Notes Written by Victoria Kala vtkala@math.ucsb.edu SH 6432u Office Hours: T 2:45 :45pm Last updated 7/24/206 Classification of Differential Equations The order of a differential equation is the

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

FIXED POINT ITERATION

FIXED POINT ITERATION FIXED POINT ITERATION The idea of the fixed point iteration methods is to first reformulate a equation to an equivalent fixed point problem: f (x) = 0 x = g(x) and then to use the iteration: with an initial

More information

FIRST YEAR CALCULUS W W L CHEN

FIRST YEAR CALCULUS W W L CHEN FIRST YER CLCULUS W W L CHEN c W W L Chen, 994, 28. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

18.04 Practice problems exam 2, Spring 2018 Solutions

18.04 Practice problems exam 2, Spring 2018 Solutions 8.04 Practice problems exam, Spring 08 Solutions Problem. Harmonic functions (a) Show u(x, y) = x 3 3xy + 3x 3y is harmonic and find a harmonic conjugate. It s easy to compute: u x = 3x 3y + 6x, u xx =

More information

Wavefield extrapolation via Sturm-Liouville transforms

Wavefield extrapolation via Sturm-Liouville transforms Extrapolation via Sturm-Liouville Wavefield extrapolation via Sturm-Liouville transforms Michael P. Lamoureux, Agnieszka Pawlak and Gary F. Margrave ABSTACT We develop a procedure for building a wavefield

More information

Week 2 Notes, Math 865, Tanveer

Week 2 Notes, Math 865, Tanveer Week 2 Notes, Math 865, Tanveer 1. Incompressible constant density equations in different forms Recall we derived the Navier-Stokes equation for incompressible constant density, i.e. homogeneous flows:

More information

PUTNAM PROBLEMS DIFFERENTIAL EQUATIONS. First Order Equations. p(x)dx)) = q(x) exp(

PUTNAM PROBLEMS DIFFERENTIAL EQUATIONS. First Order Equations. p(x)dx)) = q(x) exp( PUTNAM PROBLEMS DIFFERENTIAL EQUATIONS First Order Equations 1. Linear y + p(x)y = q(x) Muliply through by the integrating factor exp( p(x)) to obtain (y exp( p(x))) = q(x) exp( p(x)). 2. Separation of

More information

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Alan Edelman Department of Mathematics, Computer Science and AI Laboratories. E-mail: edelman@math.mit.edu N. Raj Rao Deparment

More information

A Numerical Approach To The Steepest Descent Method

A Numerical Approach To The Steepest Descent Method A Numerical Approach To The Steepest Descent Method K.U. Leuven, Division of Numerical and Applied Mathematics Isaac Newton Institute, Cambridge, Feb 15, 27 Outline 1 One-dimensional oscillatory integrals

More information

Statistics, Probability Distributions & Error Propagation. James R. Graham

Statistics, Probability Distributions & Error Propagation. James R. Graham Statistics, Probability Distributions & Error Propagation James R. Graham Sample & Parent Populations Make measurements x x In general do not expect x = x But as you take more and more measurements a pattern

More information