Monte Carlo Methods with Reduced Error

As has been shown, the probable error in Monte Carlo algorithms that use no information about the smoothness of the function is

r_N = c \sqrt{D\xi}\, N^{-1/2}.

For such computational schemes it is important to choose the random variable \xi so that its variance is as small as possible. Monte Carlo algorithms with reduced variance, compared to the Plain Monte Carlo algorithm, are usually called efficient Monte Carlo algorithms, and the techniques used to achieve such a reduction are called variance-reduction techniques. Let us consider several classical algorithms of this kind.
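As a point of reference for the variance-reduction schemes below, the plain estimator can be sketched in a few lines. This is a hypothetical example, not from the text: the integrand f(x) = x^2 and the sample size are illustrative.

```python
import random

def plain_mc(f, n, rng=None):
    """Plain Monte Carlo: theta_N = (1/N) * sum f(xi), xi ~ Uniform(0, 1)."""
    rng = rng or random.Random(0)
    vals = [f(rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    # The sample variance estimates D(theta); the probable error behaves
    # like c * sqrt(D(theta)) / sqrt(N).
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

est, var = plain_mc(lambda x: x * x, 100_000)   # exact value: 1/3
```

Reducing `var` while keeping the estimator unbiased is exactly what the algorithms of this chapter aim for.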

Separation of Principal Part

Consider again the integral

I = \int_\Omega f(x) p(x)\,dx,   (16)

where f \in L_2(\Omega, p) and x \in \Omega \subset \mathbb{R}^d. Let the function h(x) \in L_2(\Omega, p) be close to f(x) with respect to the L_2 norm, i.e. \|f - h\|_{L_2} \le \varepsilon, and suppose that the value of the integral

I' = \int_\Omega h(x) p(x)\,dx

is known. The random variable \theta' = f(\xi) - h(\xi) + I' generates the following estimate for

the integral (16):

\theta'_N = I' + \frac{1}{N} \sum_{i=1}^{N} [f(\xi_i) - h(\xi_i)].   (17)

Obviously E\theta'_N = I, and (17) defines a Monte Carlo algorithm, which is called the Separation of Principal Part algorithm. The variance of \theta' is

D\theta' = \int_\Omega [f(x) - h(x)]^2 p(x)\,dx - (I - I')^2 \le \varepsilon^2.

This means that the variance and the probable error will be quite small if the function h(x) is chosen so that the integral I' can be calculated analytically. The function h(x) is often chosen to be a piecewise linear function, so that the value of I' is easy to compute.
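The estimator (17) is straightforward to implement. In the sketch below (a hypothetical example, not from the text) we take f(x) = e^x on [0, 1] with p \equiv 1, and choose the principal part h(x) = 1 + x, whose integral I' = 3/2 is known analytically.

```python
import math
import random

def principal_part_mc(f, h, I_h, n, rng=None):
    """theta'_N = I' + (1/N) * sum [f(xi) - h(xi)], xi ~ Uniform(0, 1)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += f(x) - h(x)
    return I_h + total / n

# f(x) = e^x, h(x) = 1 + x (the first terms of its Taylor series), I' = 3/2.
est = principal_part_mc(math.exp, lambda x: 1.0 + x, 1.5, 20_000)
# exact value of the integral: e - 1
```

Only the small residual f - h is sampled, so the variance is driven by \varepsilon^2 rather than by the variance of f itself.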

Integration on a Subdomain

Let us suppose that it is possible to calculate the integral analytically on a subdomain \Omega' \subset \Omega:

\int_{\Omega'} f(x) p(x)\,dx = I', \qquad \int_{\Omega'} p(x)\,dx = c,

where 0 < c < 1. Then the integral (16) can be represented as

I = \int_{\Omega_1} f(x) p(x)\,dx + I',

where \Omega_1 = \Omega - \Omega'. Let us define a random point \xi' in \Omega_1 with probability density function p_1(x') = p(x')/(1 - c), and a random variable

\theta' = I' + (1 - c) f(\xi').   (18)

Obviously E\theta' = I. Therefore, the following estimator can be used to compute I:

\theta'_N = I' + \frac{1 - c}{N} \sum_{i=1}^{N} f(\xi'_i),

where \xi'_i are independent realizations of the d-dimensional random point \xi'. This presentation defines the Integration on a Subdomain Monte Carlo algorithm. The next theorem compares the accuracy of this Monte Carlo algorithm with that of Plain Monte Carlo.

Theorem 1. If the variance D\theta' exists, then

D\theta' \le (1 - c)\, D\theta.

Proof 1. Let us calculate the variances of both random variables \theta (defined

by (11)) and \theta' (defined by (18)):

D\theta = \int_\Omega f^2 p\,dx - I^2 = \int_{\Omega_1} f^2 p\,dx + \int_{\Omega'} f^2 p\,dx - I^2;   (19)

D\theta' = (1 - c)^2 \left[ \int_{\Omega_1} f^2 p_1\,dx - \left( \int_{\Omega_1} f p_1\,dx \right)^2 \right] = (1 - c) \int_{\Omega_1} f^2 p\,dx - \left( \int_{\Omega_1} f p\,dx \right)^2.   (20)

Multiplying both sides of (19) by (1 - c) and subtracting the result from (20) yields (recall that \int_{\Omega_1} f p\,dx = I - I')

(1 - c) D\theta - D\theta' = (1 - c) \int_{\Omega'} f^2 p\,dx - (1 - c) I^2 + (I - I')^2.

Using the nonnegative value

b^2 = \int_{\Omega'} \left( f - \frac{I'}{c} \right)^2 p(x)\,dx = \int_{\Omega'} f^2 p\,dx - \frac{I'^2}{c},

one can obtain the inequality

(1 - c) D\theta - D\theta' = (1 - c)\, b^2 + \left( \sqrt{c}\, I - I'/\sqrt{c} \right)^2 \ge 0,

and the theorem is proved.
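The estimator (18) in code, under hypothetical illustrative assumptions: f(x) = x^2 on \Omega = [0, 1] with p \equiv 1, and \Omega' = [0, 1/2], so that I' = 1/24 and c = 1/2.

```python
import random

def subdomain_mc(f, I_sub, c, sample_omega1, n, rng=None):
    """theta'_N = I' + (1 - c) * (1/N) * sum f(xi'), xi' drawn from Omega_1."""
    rng = rng or random.Random(0)
    total = sum(f(sample_omega1(rng)) for _ in range(n))
    return I_sub + (1 - c) * total / n

# Hypothetical setup: f(x) = x^2 on Omega = [0, 1], p(x) = 1,
# Omega' = [0, 1/2], hence I' = 1/24, c = 1/2, and Omega_1 = [1/2, 1).
est = subdomain_mc(lambda x: x * x, 1 / 24, 0.5,
                   lambda rng: 0.5 + 0.5 * rng.random(), 50_000)
# exact value of the integral: 1/3
```

Only the part of the integral over \Omega_1 is sampled; the factor (1 - c) in Theorem 1 quantifies the resulting variance reduction.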

Symmetrization of the Integrand

For a one-dimensional integral I_0 = \int_a^b f(x)\,dx over a finite interval [a, b], let us consider the random point \xi, uniformly distributed in this interval, and the random variable \theta = (b - a) f(\xi). Since E\theta = I_0, the Plain Monte Carlo algorithm leads to the following approximate estimate for I_0:

\theta_N = \frac{b - a}{N} \sum_{i=1}^{N} f(\xi_i),

where \xi_i are independent realizations of \xi. Consider the symmetric function

f_1(x) = \frac{1}{2} [f(x) + f(a + b - x)],

whose integral over [a, b] is equal to I_0. Consider also the random variable \theta' defined as \theta' = (b - a) f_1(\xi). Since E\theta' = I_0, the following symmetrized approximate estimate of the integral may be employed:

\theta'_N = \frac{b - a}{2N} \sum_{i=1}^{N} [f(\xi_i) + f(a + b - \xi_i)].

Theorem 2. If the partially continuous function f is monotonic in the interval a \le x \le b, then

D\theta' \le \frac{1}{2} D\theta.

Proof 2. The variances of \theta and \theta' may be expressed as

D\theta = (b - a) \int_a^b f^2(x)\,dx - I_0^2,   (21)

2 D\theta' = (b - a) \int_a^b f^2\,dx + (b - a) \int_a^b f(x) f(a + b - x)\,dx - 2 I_0^2.   (22)

From (21) and (22) it follows that the assertion of the theorem is equivalent to establishing the inequality

(b - a) \int_a^b f(x) f(a + b - x)\,dx \le I_0^2.   (23)

Without loss of generality, suppose that f is non-decreasing with f(b) > f(a), and introduce the function

v(x) = (b - a) \int_a^x f(a + b - t)\,dt - (x - a) I_0,

which is equal to zero at the points x = a and x = b. The derivative of v, namely

v'(x) = (b - a) f(a + b - x) - I_0,

is monotonic, and since v'(a) > 0 and v'(b) < 0, we see that v(x) \ge 0 for x \in [a, b]. Obviously,

\int_a^b v(x) f'(x)\,dx \ge 0.   (24)

Thus, integrating (24) by parts (and using v(a) = v(b) = 0), one obtains

\int_a^b f(x) v'(x)\,dx \le 0.   (25)

Now (23) follows by substituting the expression for v'(x) into (25). (The case of a non-increasing function f can be treated analogously.)
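The symmetrized estimator in code. The example is hypothetical: f(x) = e^x is monotonic on [0, 1], so Theorem 2 guarantees at least a factor-of-two variance reduction per sample.

```python
import math
import random

def symmetrized_mc(f, a, b, n, rng=None):
    """theta'_N = (b-a)/(2N) * sum [f(xi) + f(a+b-xi)], xi ~ Uniform(a, b)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x = a + (b - a) * rng.random()
        total += f(x) + f(a + b - x)       # f evaluated at xi and its mirror
    return (b - a) * total / (2 * n)

est = symmetrized_mc(math.exp, 0.0, 1.0, 20_000)   # exact value: e - 1
```

Each sample costs two function evaluations, but the symmetrized integrand f_1 varies far less than f, which is where the variance reduction comes from.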

Importance Sampling Algorithm

Consider the problem of computing the integral

I_0 = \int_\Omega f(x)\,dx, \qquad x \in \Omega \subset \mathbb{R}^d.

Let \Omega_0 be the set of points x for which f(x) = 0, and \Omega_+ = \Omega - \Omega_0.

Definition 1. The probability density function p(x) is called tolerant to f(x) if p(x) > 0 for x \in \Omega_+ and p(x) \ge 0 for x \in \Omega_0.

For an arbitrary probability density function p(x) tolerant to f(x) in \Omega, let us define the random variable \theta_0 in the following way:

\theta_0(x) = \begin{cases} f(x)/p(x), & x \in \Omega_+, \\ 0, & x \in \Omega_0. \end{cases}

It is interesting to consider the problem of finding a tolerant density p(x) that minimizes the variance of \theta_0. The existence of such a density means that an optimal Monte Carlo algorithm with minimal probable error exists.

Theorem 3 (Kahn). The probability density function \hat{p}(x) = c\,|f(x)| minimizes D\theta_0, and the value of the minimum variance is

D\hat{\theta}_0 = \left[ \int_\Omega |f(x)|\,dx \right]^2 - I_0^2.   (26)

Proof 3. Let us note that the constant in the expression for \hat{p}(x) is c = \left[ \int_\Omega |f(x)|\,dx \right]^{-1}, because the normalization condition for a probability density must be satisfied. At the same time,

D\hat{\theta}_0 = \int_{\Omega_+} \frac{f^2(x)}{\hat{p}(x)}\,dx - I_0^2 = \left[ \int_\Omega |f(x)|\,dx \right]^2 - I_0^2.   (27)

It is necessary only to prove that for every other tolerant probability density function p(x) the inequality D\theta_0 \ge D\hat{\theta}_0 holds. Indeed,

\left[ \int_\Omega |f(x)|\,dx \right]^2 = \left[ \int_{\Omega_+} |f|\,dx \right]^2 = \left[ \int_{\Omega_+} |f|\, p^{-1/2}\, p^{1/2}\,dx \right]^2.

Applying the Cauchy-Schwarz inequality to the last expression, one gets

\left[ \int_\Omega |f(x)|\,dx \right]^2 \le \int_{\Omega_+} f^2 p^{-1}\,dx \int_{\Omega_+} p\,dx \le \int_{\Omega_+} f^2 p^{-1}\,dx.   (28)

Corollary 1. If f does not change sign in \Omega, then D\hat{\theta}_0 = 0.

Proof 4. The corollary follows from (26). Indeed, let us assume that the integrand f(x) is non-negative. Then \int_\Omega |f(x)|\,dx = I_0, and according to (26) the variance D\hat{\theta}_0 is zero. If f(x) is non-positive, i.e. f(x) \le 0, then \int_\Omega |f(x)|\,dx = -I_0, and again D\hat{\theta}_0 = 0.

For practical algorithms this assertion shows how to construct random variables with small variances (and consequently small probable errors): use a higher random-point probability density in the subdomains of \Omega where the integrand has a large absolute value. It is intuitively clear that such an approach should increase the accuracy of the algorithm.
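A small sketch of the estimator \theta_0 with a density shaped like |f|. The setup is hypothetical: I_0 = \int_0^1 x e^x\,dx = 1, the tolerant density p(x) = 2x roughly follows |f|, and by the inversion method \xi = \sqrt{U} with U uniform has exactly this density.

```python
import math
import random

def importance_mc(f, p, sample_p, n, rng=None):
    """theta_0 = f(xi)/p(xi), with xi drawn from the tolerant density p."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x = sample_p(rng)
        total += f(x) / p(x)
    return total / n

# Hypothetical example: I0 = int_0^1 x e^x dx = 1.  The density p(x) = 2x
# roughly follows |f|; by inversion, xi = sqrt(U) has exactly this density.
est = importance_mc(lambda x: x * math.exp(x),
                    lambda x: 2.0 * x,
                    lambda rng: math.sqrt(rng.random()),
                    50_000)
```

Here \theta_0 = e^\xi / 2 varies only mildly over [0, 1], so the variance is much smaller than with uniform sampling, in the spirit of Theorem 3.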

Weight Functions Approach

Monte Carlo quadratures with weight functions are considered for the computation of

S(g; m) = \int g(\theta) m(\theta)\,d\theta,

where g is some function (possibly vector- or matrix-valued). The unnormalized posterior density m is expressed as the product of two functions w and f, where w is called the weight function:

m(\theta) = w(\theta) f(\theta).

The weight function w is nonnegative and integrates to one, i.e. \int w(\theta)\,d\theta = 1, and it is chosen to have properties similar to those of m. Most numerical integration algorithms then replace the function m(\theta) by a discrete approximation of the form

\hat{m}(\theta) = \begin{cases} w_i f(\theta), & \theta = \theta_i,\ i = 1, 2, \ldots, n, \\ 0 & \text{elsewhere}, \end{cases}

so that the integral S(g; m) may be estimated by

\hat{S}(g; m) = \sum_{i=1}^{N} w_i f(\theta_i) g(\theta_i).   (29)

Integration algorithms use the weight function w as the kernel of the approximation of the integrand:

S(g; m) = \int g(\theta) m(\theta)\,d\theta = \int g(\theta) w(\theta) f(\theta)\,d\theta   (30)

= E_w\big(g(\theta) f(\theta)\big).   (31)

This suggests a Monte Carlo approach to numerical integration: generate nodes \theta_1, \ldots, \theta_N independently from the distribution w and estimate S(g; m) by \hat{S}(g; m) in (29) with w_i = 1/N. If g(\theta) f(\theta) is a constant, then \hat{S}(g; m) will

be exact. More generally, \hat{S}(g; m) is unbiased and its variance will be small if w(\theta) has a shape similar to that of g(\theta) m(\theta). The above procedure is also known as importance sampling. Such an approach is efficient if one deals with a set of integrals with different weight functions. Determination of the weight function can be done iteratively using a posteriori information.
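The estimator (29) with w_i = 1/N in code. The example is hypothetical: w is the uniform density on [0, 1], the unnormalized factor is f(t) = 1 + t, and g(t) = t, so S(g; m) = \int_0^1 t (1 + t)\,dt = 5/6.

```python
import random

def s_hat(g, f, sample_w, n, rng=None):
    """S_hat(g; m) = (1/N) * sum f(theta_i) g(theta_i), theta_i ~ w, as in (29)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        t = sample_w(rng)            # node drawn from the weight function w
        total += f(t) * g(t)
    return total / n

# Hypothetical example: w = Uniform(0, 1), f(t) = 1 + t, g(t) = t,
# so S(g; m) = int_0^1 t (1 + t) dt = 5/6.
est = s_hat(lambda t: t, lambda t: 1.0 + t, lambda rng: rng.random(), 50_000)
```

Reusing the same nodes \theta_i with different g (or different f) is what makes the approach economical for families of related integrals.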

Superconvergent Monte Carlo Algorithms

As was shown earlier, the probable error usually has the form of (9): R_N = c N^{-1/2}. The speed of convergence can be increased if an algorithm with a probable error

R_N = c N^{-1/2 - \psi(d)}

can be constructed, where c is a constant, \psi(d) > 0, and d is the dimension of the space. As will be shown later, such algorithms exist. This class of Monte Carlo algorithms exploits the existing smoothness of the integrand. Often the exploitation of smoothness is combined with subdividing the domain of integration into a number of non-overlapping subdomains; each subdomain is called a stratum. This is the reason to call the techniques leading to superconvergent Monte Carlo algorithms stratified sampling, or Latin hypercube sampling. Let us note that Plain Monte Carlo, as well as the algorithms based on variance-reduction techniques, does not exploit any smoothness (regularity) of the integrand. We will show how one can exploit the regularity to increase the convergence of the algorithm.

Definition 2. Monte Carlo algorithms with a probable error

R_N = c N^{-1/2 - \psi(d)}   (32)

(c is a constant, \psi(d) > 0) are called Monte Carlo algorithms with a superconvergent probable error.

Error Analysis

Let us consider the problem of computing the integral

I = \int_\Omega f(x) p(x)\,dx,

where \Omega \subset \mathbb{R}^d, f \in L_2(\Omega; p) \cap W^1(\alpha; \Omega), and p is a probability density function, i.e. p(x) \ge 0 and \int_\Omega p(x)\,dx = 1. The class W^1(\alpha; \Omega) contains functions f(x) with continuous and bounded derivatives (|\partial f / \partial x^{(k)}| \le \alpha for every k = 1, 2, \ldots, d).

Let \Omega coincide with the unit cube, \Omega = E_d = \{0 \le x^{(i)} < 1;\ i = 1, 2, \ldots, d\}, let p(x) \equiv 1, and consider the partition of \Omega into the subdomains (strata) \Omega_j, j = 1, 2, \ldots, m^d, of N = m^d equal cubes with edge 1/m (evidently p_j = 1/N

and d_j = \sqrt{d}/m), so that the following conditions hold:

\Omega = \bigcup_{j=1}^{m^d} \Omega_j, \qquad \Omega_i \cap \Omega_j = \emptyset,\ i \ne j,

p_j = \int_{\Omega_j} p(x)\,dx \le \frac{c_1}{N},   (33)

d_j = \sup_{x_1, x_2 \in \Omega_j} |x_1 - x_2| \le c_2 N^{-1/d},   (34)

where c_1 and c_2 are constants.

Then I = \sum_{j=1}^{m^d} I_j, where I_j = \int_{\Omega_j} f(x) p(x)\,dx, and obviously I_j is the mean of the random variable p_j f(\xi_j), where \xi_j is a random point in \Omega_j with probability density function p(x)/p_j. So it is possible to estimate I_j by the average of N_j

observations, and I by

\theta_N = \sum_{j=1}^{m^d} \theta_{N_j}, \qquad \theta_{N_j} = \frac{p_j}{N_j} \sum_{s=1}^{N_j} f(\xi_{js}), \qquad \sum_{j=1}^{m^d} N_j = N.

Theorem 4. Let N_j = 1 for j = 1, \ldots, m^d (so that m^d = N), let the function f have continuous and bounded derivatives (|\partial f / \partial x^{(k)}| \le \alpha for every k = 1, 2, \ldots, d), and let there exist constants c_1, c_2 such that conditions (33) and (34) hold. Then for the variance of \theta_N the following relation is fulfilled:

D\theta_N \le (c_1 c_2 \alpha)^2\, N^{-1 - 2/d}.

Using Tchebychev's inequality it is possible to obtain

R_N = \sqrt{2}\, c_1 c_2 \alpha\, N^{-1/2 - 1/d}.   (35)
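For d = 1 the stratified estimator of Theorem 4 (one random point per stratum, p_j = 1/N) is a few lines of code. The integrand f(x) = e^x is a hypothetical illustration, not from the text.

```python
import math
import random

def stratified_mc(f, n_strata, rng=None):
    """One random point per stratum [j/n, (j+1)/n); p_j = 1/n, N_j = 1."""
    rng = rng or random.Random(0)
    n = n_strata
    # (j + U)/n is uniform on the j-th stratum.
    return sum(f((j + rng.random()) / n) for j in range(n)) / n

# With d = 1 the probable error behaves like N^(-3/2), far better than the
# N^(-1/2) of plain Monte Carlo at the same N.  Exact value: e - 1.
est = stratified_mc(math.exp, 1_000)
```

Within each tiny stratum f varies only by O(1/n), which is exactly the mechanism behind the extra N^{-1/d} factor in (35).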

The Monte Carlo algorithm constructed above has a superconvergent probable error: comparing (35) with (32), one can conclude that \psi(d) = 1/d.

The conditions of the last theorem are rather strong, so we can try to relax them. The following problem may be considered: is it possible to obtain the same rate of convergence for functions that are only continuous?

Let us first consider the problem in \mathbb{R}. Let [a, b] be partitioned into N subintervals [x_{j-1}, x_j], and let d_j = x_j - x_{j-1}. Then if \xi_j is a random point in [x_{j-1}, x_j] with probability density function p(x)/p_j, where p_j = \int_{x_{j-1}}^{x_j} p(x)\,dx, the probable error of the estimator \theta_N is given by the following theorem.

Theorem 5. Let f be continuous in [a, b], and let there exist positive constants c_1, c_2, c_3

satisfying p_j \le c_1/N and c_3/N \le d_j \le c_2/N for j = 1, 2, \ldots, N. Then

r_N \le 4\sqrt{2}\, \frac{c_1 c_2}{c_3}\, \tau(f; \delta)_{L_2}\, N^{-3/2},

where \delta = \max_j d_j and \tau(f; \delta)_{L_2} is the averaged modulus of smoothness, i.e.

\tau(f; \delta)_{L_q} = \|\omega(f, \cdot; \delta)\|_{L_q} = \left( \int_a^b (\omega(f, x; \delta))^q\,dx \right)^{1/q}, \qquad 1 \le q \le \infty,

\delta \in [0, b - a], and

\omega(f, x; \delta) = \sup \{ |\Delta_h f(t)| : t,\ t + h \in [x - \delta/2, x + \delta/2] \cap [a, b] \},

where \Delta_h is the difference operator. In \mathbb{R}^d the following theorem holds:

Theorem 6 (Dimov, Tonev). Let f be continuous in \Omega \subset \mathbb{R}^d, and let there exist positive constants c_1, c_2, c_3 such that p_j \le c_1/N, d_j \le c_2 N^{-1/d}, and S_j(\delta, c_3) \subset \Omega_j, j = 1, 2, \ldots, N, where S_j(\delta, c_3) is a sphere with radius c_3 \delta. Then

r_N \le 4\sqrt{2}\, \frac{c_1 c_2}{c_3}\, \tau(f; \delta)_{L_2}\, N^{-1/2 - 1/d}.

Let us note that the best quadrature formula with fixed nodes in \mathbb{R} in the sense of Nikolskiy for the class of functions W^{(1)}(l; [a, b]) is the rectangular rule with equidistant nodes, for which the error is approximately equal to c/N. For the Monte Carlo algorithm given by Theorem 6, when N_j = 1 the rate of convergence is improved by an order of 1/2; this is an essential improvement. At the same time, the estimate given in Theorem 5 for the rate of convergence attains the lower bound obtained by Bakhvalov for the error of an arbitrary random quadrature formula for the class of continuous functions on an interval [a, b]. Some further developments in this direction will be presented in a later chapter.

II. Optimal Monte Carlo Method for Multidimensional Integrals of Smooth Functions

An optimal Monte Carlo method for numerical integration of multidimensional integrals is proposed and studied. It is known that the best possible order of the mean square error of a Monte Carlo integration method over the class of the k times differentiable functions of d variables is O(N^{-1/2 - k/d}). We present two algorithms implementing the method under consideration. Estimates for the computational complexity are obtained. Numerical tests showing the efficiency of the algorithms are also given.

Here a Monte Carlo method for calculating multidimensional integrals of smooth functions is considered. Let d and k be integers, d, k \ge 1. We consider the class W^k(\|f\|; E_d) (sometimes abbreviated to W^k) of real functions f defined over the unit cube E_d = [0, 1)^d, possessing all the partial derivatives

\frac{\partial^r f(x)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}, \qquad \alpha_1 + \cdots + \alpha_d = r \le k,

which are continuous when r < k and bounded in sup norm when r = k. The semi-norm on W^k is defined as

\|f\| = \sup \left\{ \left| \frac{\partial^k f(x)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} \right| :\ \alpha_1 + \cdots + \alpha_d = k,\ x \equiv (x_1, \ldots, x_d) \in E_d \right\}.

There are two classes of methods for numerical integration of such functions over E_d: deterministic methods and stochastic, or Monte Carlo, methods. We consider the following quadrature formula:

I(f) = \sum_{i=1}^{N} c_i f(x^{(i)}),   (36)

where x^{(i)} \equiv (x_1^{(i)}, \ldots, x_d^{(i)}) \in E_d, i = 1, \ldots, N, are the nodes and c_i, i = 1, \ldots, N, are the weights. If the x^{(i)} and c_i are fixed real values, formula (36) defines a deterministic quadrature formula. If x^{(i)}, i = 1, \ldots, N, are random points

defined in E_d and the c_i are random variables defined in \mathbb{R}, then (36) defines a Monte Carlo quadrature formula. The following results of Bakhvalov establish lower bounds for the integration error in both cases.

Theorem 7 (Bakhvalov). There exists a constant c(d, k) such that for every deterministic quadrature formula I(f) that uses the function values at N points there exists a function f \in W^k such that

\left| \int_{E_d} f(x)\,dx - I(f) \right| \ge c(d, k)\, \|f\|\, N^{-k/d}.

Theorem 8 (Bakhvalov). There exists a constant c(d, k) such that for every quadrature formula I(f) that involves random variables and uses the function values at N points there exists a function f \in W^k such that

\left\{ E \left[ \int_{E_d} f(x)\,dx - I(f) \right]^2 \right\}^{1/2} \ge c(d, k)\, \|f\|\, N^{-1/2 - k/d}.

When d is sufficiently large, it is evident that methods involving random variables will outperform the deterministic methods. Monte Carlo methods that achieve the order O(N^{-1/2 - k/d}) are called optimal. In fact, methods of this kind are superconvergent following Definition 2 given before; they have an unimprovable rate of convergence. It is not an easy task to construct a unified method with such a rate of convergence for any dimension d and any value of k. Various methods for Monte Carlo integration that achieve the order O(N^{-1/2 - k/d}) are known. While in the case of k = 1 and k = 2 these methods are fairly simple and widely used, for k \ge 3 such methods become much more sophisticated. The first optimal stochastic method was proposed by Dupach for k = 1. This method uses the idea of separating the domain of integration into uniformly small (with respect both to the probability and to the sizes) disjoint subdomains and generating one or a small number of points in each subdomain. This idea has been widely used in the creation of Monte Carlo methods with a high rate of convergence. There also exist the so-called adaptive Monte Carlo methods proposed by Lautrup, which use a priori and/or a posteriori information obtained during the

calculations. The main idea of this approach is to adapt the Monte Carlo quadrature formula to the element of integration. The idea of separating the domain into uniformly small subdomains has been combined with the idea of adaptivity to obtain an optimal Monte Carlo quadrature for the case k = 1. We also combine both ideas, separation and adaptivity, and present an optimal Monte Carlo quadrature for any k.

We separate the domain of integration into disjoint subdomains. Since we consider the cube E_d, we divide it into N = n^d disjoint cubes K_j, j = 1, \ldots, N. In each cube K_j we calculate the coordinates of \binom{d+k-1}{d} points y^{(r)}. We select m uniformly distributed and mutually independent random points from each cube K_j and consider the Lagrange interpolation polynomial of the function f at a point z, which uses the information from the function values at the points y^{(r)}. After that we approximate the integral over the cube K_j using the values of the function and of the Lagrange polynomial at the m selected random points. Then we sum these estimates over all cubes K_j, j = 1, \ldots, N. The adaptivity is used when we consider the Lagrange interpolation polynomial. The estimates for the probable error and for the mean square error are proven. It is shown that the presented

method has the best possible rate of convergence, i.e. it is an optimal method.

Two algorithms for Monte Carlo integration that achieve such an order of the integration error are presented, along with estimates of their computational complexity. The computational complexity is defined as the number of operations needed to perform the algorithm on the sequential (von Neumann) model of computer architecture. It is important to be able to compare different Monte Carlo algorithms for solving the same problem with the same accuracy (with the same probable or mean square error). We have shown that the computational complexity can be estimated as the product t\,\sigma^2(\theta), where t is the time (number of operations) needed to calculate one value of the random variable \theta, whose mathematical expectation is equal to the exact value of the integral under consideration, and \sigma^2(\theta) is its variance. Here we do not use this presentation and, instead, estimate the computational complexity directly as the number of floating point operations (flops) used to calculate the approximate value of the integral.

One can also use other estimators of the quality of the algorithm (if parallel machines are available), such as speed-up and parallel efficiency. It

is easy to see that the speed-up of our algorithms is linear and the parallel efficiency is close to 1, due to the relatively small communication costs. The numerical tests performed on 2-processor and 4-processor machines confirm this.

Description of the Method and Theoretical Estimates

Definition 3. Given a Monte Carlo integration formula for the functions in the space W^k, we denote by err(f, I) the integration error

\int_{E_d} f(x)\,dx - I(f),

by \varepsilon(f) the probable error, meaning that \varepsilon(f) is the least possible real number with

P\big( |err(f, I)| < \varepsilon(f) \big) \ge \frac{1}{2},

and by r(f) the mean square error

r(f) = \left\{ E \left[ \int_{E_d} f(x)\,dx - I(f) \right]^2 \right\}^{1/2}.

For each integer n, d, k \ge 1 we define a Monte Carlo integration formula,

depending on an integer parameter m \ge 1 and \binom{d+k-1}{d} points in E_d, in the following way. The \binom{d+k-1}{d} points x^{(r)} have to fulfill the condition that if some polynomial P(x) of combined degree less than k satisfies P(x^{(r)}) = 0 for all r, then P \equiv 0. Let N = n^d, n \ge 1. We divide the unit cube E_d into n^d disjoint cubes

E_d = \bigcup_{j=1}^{n^d} K_j, \qquad K_j = \prod_{i=1}^{d} [a_i^j, b_i^j),

with b_i^j - a_i^j = 1/n for all i = 1, \ldots, d. Now in each cube K_j we calculate the coordinates of the \binom{d+k-1}{d} points y^{(r)}, defined by

y_i^{(r)} = a_i^j + \frac{1}{n} x_i^{(r)}.

Suppose we select m random points \xi(j, s) = (\xi_1(j, s), \ldots, \xi_d(j, s)) from each cube K_j, such that all \xi_i(j, s) are uniformly distributed and mutually independent, calculate all f(y^{(r)}) and f(\xi(j, s)), and consider the Lagrange interpolation polynomial of the function f at the point z, which uses the information from the function values at the points y^{(r)}. We call it L_k(f, z). For all polynomials P of degree at most k - 1 we have L_k(P, z) = P(z). We approximate

\int_{K_j} f(x)\,dx \approx \frac{1}{m n^d} \sum_{s=1}^{m} [f(\xi(j, s)) - L_k(f, \xi(j, s))] + \int_{K_j} L_k(f, x)\,dx.

Then we sum these estimates over all j = 1, \ldots, N to achieve

I(f) = \frac{1}{m n^d} \sum_{j=1}^{N} \sum_{s=1}^{m} [f(\xi(j, s)) - L_k(f, \xi(j, s))] + \sum_{j=1}^{N} \int_{K_j} L_k(f, x)\,dx.
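For d = 1 and k = 2 the construction above reduces to using the linear (Lagrange) interpolant of f at the endpoints of each cell as a control variate: integrate L_k exactly, and correct the residual f - L_k by Monte Carlo. A minimal sketch, with the hypothetical integrand f(x) = e^x and illustrative values of n and m:

```python
import math
import random

def optimal_mc_1d(f, n, m, rng=None):
    """d = 1, k = 2 sketch: in each of the n cells, integrate the linear
    interpolant L_k of f exactly and correct f - L_k with m random points."""
    rng = rng or random.Random(0)
    h = 1.0 / n
    total = 0.0
    for j in range(n):
        a = j * h
        fa, fb = f(a), f(a + h)
        total += h * (fa + fb) / 2.0          # exact integral of L_k over the cell
        for _ in range(m):
            x = a + h * rng.random()
            L = fa + (fb - fa) * (x - a) / h  # Lagrange interpolant, degree < k
            total += h * (f(x) - L) / m       # Monte Carlo correction term
    return total

est = optimal_mc_1d(math.exp, 100, 2)         # exact value: e - 1
```

The residual f - L_k is O(n^{-k}) in each cell, which is what drives the N^{-1/2 - k/d} rate proven below.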

We prove the following

Theorem 9. The quadrature formula constructed above satisfies

\varepsilon(f, k, d, m) \le c'_{d,k}\, \frac{1}{\sqrt{m}}\, \|f\|\, N^{-1/2 - k/d}

and

r(f, k, d, m) \le c''_{d,k}\, \frac{1}{\sqrt{m}}\, \|f\|\, N^{-1/2 - k/d},

where the constants c'_{d,k} and c''_{d,k} depend implicitly on the points x^{(r)}, but not on N.

Proof 5. One can see that

E \left\{ \frac{1}{m n^d} \sum_{s=1}^{m} [f(\xi(j, s)) - L_k(f, \xi(j, s))] + \int_{K_j} L_k(f, x)\,dx \right\} = \int_{K_j} f(x)\,dx

and

D \left\{ \frac{1}{m n^d} \sum_{s=1}^{m} [f(\xi(j, s)) - L_k(f, \xi(j, s))] + \int_{K_j} L_k(f, x)\,dx \right\} = \frac{1}{m}\, D \left\{ \frac{1}{n^d} [f(\xi(j, 1)) - L_k(f, \xi(j, 1))] \right\}.

Note that

\int_{K_j} L_k(f, x)\,dx = \frac{1}{n^d} \sum_r A(r) f(y^{(r)}),

where the coefficients A(r) are the same for all cubes K_j and depend only on the points x^{(r)}. Using a Taylor series expansion of f around the center of the cube K_j, one can see that

|f(\xi(j, s)) - L_k(f, \xi(j, s))| \le c_{d,k}\, n^{-k}\, \|f\|,

and therefore

D \left\{ \frac{1}{n^d} [f(\xi(j, s)) - L_k(f, \xi(j, s))] \right\} \le c_{d,k}^2\, n^{-2d}\, n^{-2k}\, \|f\|^2.

Taking into account that the \xi(j, s) are independent, we obtain

D \left\{ \sum_{j=1}^{n^d} \left( \frac{1}{m n^d} \sum_{s=1}^{m} [f(\xi(j, s)) - L_k(f, \xi(j, s))] + \int_{K_j} L_k(f, x)\,dx \right) \right\} \le n^d\, \frac{1}{m}\, c_{d,k}^2\, n^{-2d}\, n^{-2k}\, \|f\|^2 = \frac{1}{m}\, c_{d,k}^2\, n^{-d - 2k}\, \|f\|^2,

and therefore (N = n^d)

r(f, k, d, m) \le \frac{1}{\sqrt{m}}\, c(d, k)\, N^{-1/2 - k/d}\, \|f\|.

The application of Tchebychev's inequality yields

\varepsilon(f, k, d, m) \le \frac{1}{\sqrt{m}}\, c'(d, k)\, \|f\|\, N^{-1/2 - k/d}

for the probable error \varepsilon, where c'(d, k) = \sqrt{2}\, c(d, k), which concludes the proof.

One can replace the Lagrange approximation polynomial with other approximation schemes that use the function values at some fixed points, provided they are exact for all polynomials of degree less than k. For quasi-Monte Carlo integration of smooth functions, an approach with Tchebychev polynomial approximation has been developed. We (Dimov, Atanasov) show that the proposed technique allows one to formulate optimal algorithms in the Hölder class of functions H_k^\lambda(\alpha, E_d), 0 < \lambda \le 1. The class H_k^\lambda(\alpha, E_d) is defined as the set of functions from C^k whose derivatives of order k satisfy a Hölder condition with parameter \lambda:

H_k^\lambda(\alpha, E_d) \equiv \left\{ f \in C^k : |D^k f(y_1, \ldots, y_d) - D^k f(z_1, \ldots, z_d)| \le \alpha \sum_{j=1}^{d} |y_j - z_j|^\lambda \right\}.   (37)

For the class H_k^\lambda(\alpha, E_d) we prove the following theorem.

Theorem 10. The cubature formula constructed above satisfies

r_N(f, k + \lambda, d, m) \le c'(d, k + \lambda)\, \frac{1}{\sqrt{m}}\, \alpha\, N^{-1/2 - (k + \lambda)/d}

and

\left( E \left( \int_{E_d} f(x)\,dx - I(f) \right)^2 \right)^{1/2} \le c''(d, k + \lambda)\, \frac{1}{\sqrt{m}}\, \alpha\, N^{-1/2 - (k + \lambda)/d},

where the constants c'(d, k + \lambda) and c''(d, k + \lambda) depend implicitly on the points x^{(r)}, but not on N.

The above theorem shows that the convergence of the method can be improved by adding the parameter \lambda to the factor of smoothness k if the integrand belongs to the Hölder class of functions H_k^\lambda(\alpha, E_d), 0 < \lambda \le 1.

Estimates of the Computational Complexity

Two algorithms implementing our method are given, and estimates of the computational complexity of both algorithms are presented.

*Algorithm 1 (A.1)

In the first algorithm the points x^{(r)} are selected so that they fulfill certain conditions that assure a good Lagrange approximation of any function from W^k. Let us order all the monomials of d variables and degree less than k as \mu_1, \ldots, \mu_t; note that there are exactly t = \binom{d+k-1}{d} of them. We use a pseudo-random number generator to obtain many sets of points x^{(r)}, and then select the one for which the norm of the inverse of the matrix A = (a_{ij}), with

a_{ij} = \mu_i(x^{(j)}),

is the smallest. Once it is selected, for fixed k and d, the same set of points is used for integrating every function from the space W^k. We do not need to store the

coordinates of the points x^{(j)} if we record the state of the generator just before it produces the best set of x^{(j)}. Since the calculations of the elements of the matrix A and its inverse, as well as of the coefficients of the interpolation-type quadrature formula, are made only once when we know the state of the pseudo-random number generator that produces the set of points x^{(j)}, they count as O(1) in our estimates of the number of flops used to calculate the integral of a given function from W^k. These calculations can be considered as preprocessing. We prove the following

Theorem 11. The computational complexity of the numerical integration of a function from W^k using Algorithm A.1 is estimated by

N_{fp} \le N \left[ m + \binom{d+k-1}{d} \right] a_f + mN \left[ d(b_r + 2) + 1 \right] + \binom{d+k-1}{d} N \left[ 2m + 2\binom{d+k-1}{d} + 2d + 2 \right] + c(d, k),   (38)

where b_r denotes the number of flops used to produce a uniformly distributed random number in [0, 1), a_f stands for the number of flops needed for each calculation of a function value, and c(d, k) depends only on d and k.

Proof 6. As was pointed out above, the calculation of the coordinates of the points x^{(r)}, of the elements of A and A^{-1}, and of the coefficients of the interpolation-type quadrature formula can be considered to be done with c_1(d, k) flops, if we know the initial state of the pseudo-random number generator that produces the set of points x^{(r)}. Then in

2 d N \binom{d+k-1}{d}

operations we calculate the coordinates of the points y^{(r)} in each cube, and with m N d (b_r + 2) flops we obtain the uniformly distributed random points we shall use. The

calculation of the values of f at all these points takes

N \left[ m + \binom{d+k-1}{d} \right] a_f

flops. Next we apply the interpolation-type quadrature formula with the previously calculated coefficients (reordering the terms), using

\binom{d+k-1}{d} (N + 1)

operations. For the contribution of the Lagrange interpolation polynomials, note that

S = \sum_{j=1}^{N} \sum_{s=1}^{m} L_k(f, \xi(j, s)) = \sum_{j=1}^{N} \sum_{s=1}^{m} \sum_{i=1}^{\binom{d+k-1}{d}} f(y^{(i)}) \sum_{r=1}^{\binom{d+k-1}{d}} t_{ir}\, \mu_r(\xi(j, s)),

where the \mu_r are all the monomials of degree less than k. Reordering the terms, we obtain

S = \sum_{j=1}^{N} \sum_{i=1}^{\binom{d+k-1}{d}} f(y^{(i)}) \sum_{r=1}^{\binom{d+k-1}{d}} t_{ir} \sum_{s=1}^{m} \mu_r(\xi(j, s)).

Using the fact that the value of each monomial can be obtained with only one multiplication, once we know the values of all the monomials of lesser degree at the same point, we see that S can be calculated with fewer than

\binom{d+k-1}{d} (2m + 1) N + 2 \binom{d+k-1}{d}^2 N

flops. The summing of all function values at the random points \xi(j, s) takes

47 mn 1 flops. Now we sum all these estimates to obtain ( ) + k 1 N fp 2N + mn (b r + 2) [ ( )] ( ) + k 1 + k 1 + N m + a f + (2m + 1) N ( ) 2 ( ) + k 1 + k 1 + 2N + (N + 1) + mn + c 1 (, k) [ ( )] + k 1 = N m + a f + mn ( (b r + 2) + 1) ( ) [ ( ) ] + k 1 + k 1 + N 2m c(, k). The theorem is proven. *Algorithm 2 (A.2)

The second algorithm is a variation of the first one, in which first k different points z_i^{(j)} \in (0, 1) are selected in each dimension, and then the points x^{(r)} have coordinates

\left\{ (z_1^{(j_1)}, \ldots, z_d^{(j_d)}) : j_1 + \cdots + j_d < k \right\}.

In this case the interpolation polynomial is calculated in the form of Newton, namely if

w_r^{(t)} = a_r^j + (b_r^j - a_r^j)\, z_r^{(t)},

then

L_k(f, \xi(j, s)) = \sum_{j_1 + \cdots + j_d < k} R(j_1, \ldots, j_d, 0, \ldots, 0) \prod_{i=1}^{d} \left( \xi_i(j, s) - w_i^{(1)} \right) \cdots \left( \xi_i(j, s) - w_i^{(j_i - 1)} \right),

where R(j_1, \ldots, j_d, l_1, \ldots, l_d) = f(w_1^{j_1}, \ldots, w_d^{j_d}) if all j_i = l_i,

and

R(j_1, \ldots, j_i, \ldots, j_d, l_1, \ldots, l_i, \ldots, l_d) = \frac{1}{w_i^{j_i} - w_i^{l_i}} \big[ R(j_1, \ldots, j_i, \ldots, j_d, l_1, \ldots, l_i + 1, \ldots, l_d) - R(j_1, \ldots, j_i - 1, \ldots, j_d, l_1, \ldots, l_i, \ldots, l_d) \big]   (39)

if j_i > l_i. For this modification we have the following

Theorem 12. The computational complexity of the numerical integration of a function from W^k using Algorithm A.2 is estimated by

N_{fp} \le N \left[ m + \binom{d+k-1}{d} \right] a_f + Nm \left[ d(b_r + k + 2) + 1 \right] + \binom{d+k-1}{d} N \left[ 2m + 2d + 2k + 2 \right] + c'(d, k),

where a_f and b_r are as above.

Proof 7. One can see that the calculation of the divided differences in d dimensions required for the Lagrange-Newton approximation needs the application of (39) exactly

\binom{d+k-1}{d} N + 1

times, which is less than \binom{d+k-1}{d} N k times.

For the calculation of the sum

S = \sum_{j=1}^{N} \sum_{s=1}^{m} L_k(f, \xi(j, s)) = \sum_{j=1}^{N} \sum_{s=1}^{m} \sum_{j_1 + \cdots + j_d < k} R(j_1, \ldots, j_d, 0, \ldots, 0) \prod_{i=1}^{d} \left( \xi_i(j, s) - w_i^{(1)} \right) \cdots \left( \xi_i(j, s) - w_i^{(j_i - 1)} \right)

we make use of the fact that each term

\prod_{i=1}^{d} \left( \xi_i(j, s) - w_i^{(1)} \right) \cdots \left( \xi_i(j, s) - w_i^{(j_i - 1)} \right)

can be obtained from a previously computed one through one multiplication, provided we have computed all the d k differences \xi_i(j, s) - w_i^{(r)}. Thus we are able to compute the sum S with fewer than

\binom{d+k-1}{d} (2m + 1) N + k m d N

flops.

The other estimates are done in the same way as in Theorem 11, to obtain

N_{fp} \le 2 d N \binom{d+k-1}{d} + m N d (b_r + 2) + N \left[ m + \binom{d+k-1}{d} \right] a_f + \binom{d+k-1}{d} (N + 1) + \binom{d+k-1}{d} (2m + 1) N + k m d N + 2 N k \binom{d+k-1}{d} + mN + c_1(d, k)

= N \left[ m + \binom{d+k-1}{d} \right] a_f + Nm \left[ d(b_r + k + 2) + 1 \right] + \binom{d+k-1}{d} N \left[ 2m + 2d + 2k + 2 \right] + c'(d, k),

which proves the theorem.

Numerical Tests

Numerical tests showing the computational efficiency of the algorithms under consideration are given. Here we present results for the following integrals:

I_1 = \int_{E_4} \frac{e^{x_1 + 2 x_2} \cos(x_3)}{x_2 + x_3 + x_4}\,dx_1\,dx_2\,dx_3\,dx_4;

I_2 = \int_{E_4} x_1 x_2^2\, e^{x_1 x_2} \sin x_3 \cos x_4\,dx_1\,dx_2\,dx_3\,dx_4;

I_3 = \int_{E_4} e^{x_1} \sin x_2 \cos x_3 \log(1 + x_4)\,dx_1\,dx_2\,dx_3\,dx_4;

I_4 = \int_{E_4} e^{x_1 + x_2 + x_3 + x_4}\,dx_1\,dx_2\,dx_3\,dx_4.

In Tables 1 to 4 the results of some numerical experiments in the case k = d = 4 are presented. They are performed on an ORIGIN-2000 machine using only one CPU.

Table 1: Results of MC numerical integration performed on ORIGIN-2000 for I_1, for algorithms A.1 and A.2 at several values of n (columns: n, algorithm, error, err_rel * n^6, CPU time in seconds). [Numeric entries and the exact value were lost in extraction.]

Table 2: Results of MC numerical integration performed on ORIGIN-2000 for I_2 (same columns as Table 1). [Numeric entries and the exact value were lost in extraction.]

Table 3: Results of MC numerical integration performed on ORIGIN-2000 for I_3 (same columns as Table 1). [Numeric entries and the exact value were lost in extraction.]

Table 4: Results of MC numerical integration performed on ORIGIN-2000 for I_4 (same columns as Table 1). [Numeric entries and the exact value were lost in extraction.]

Figure 1: Relative errors for the integral I_1 versus the number of function values taken (algorithms A.1, A.2, and Simpson's rule). [Plot not recovered.]

Figure 2: Relative errors for the integral I_2 versus the number of function values taken (algorithms A.1, A.2, and Simpson's rule). [Plot not recovered.]

Figure 3: Computational time for the calculation of I_2 on ORIGIN-2000 (algorithms A.1 and A.2; CPU time in seconds versus n). [Plot not recovered.]

Figure 4: Effect of the parameter m on CPU time and relative error for I_2, algorithm A.2, with m = 1 and m = 16. [Plot not recovered.]

Some results are presented in Figures 1-4. The dependence of the error on the number of points at which the function values are taken, when both algorithms

are applied to the integral I_1, is presented in Figure 1. The same dependence for the integral I_2 is presented in Figure 2. The results of applying the iterated Simpson's method are provided for comparison. Figure 3 shows how the computational time for the calculation of the integral I_2 increases as n = N^{1/d} increases. In Figure 4 we compare the CPU time and the relative error of the calculation of the integral I_2 using algorithm A.2 with two different values of the parameter m, namely 1 and 16. One can see that setting m = 16 yields roughly twice smaller errors for the same amount of CPU time.

Concluding Remarks

A Monte Carlo method for calculating multidimensional integrals of smooth functions is presented and studied. It is proven that the method has the highest possible rate of convergence, i.e. it is an optimal, superconvergent method. Two algorithms implementing the method are described. Estimates for the computational complexity of both algorithms (A.1 and A.2) are presented. The numerical examples show that both algorithms give comparable results for the same number of points at which the function values are taken. In all our examples Algorithm A.2 is quicker; it offers more possibilities for use in high dimensions and for functions with high-order smoothness. We demonstrate how one can achieve better results for the same computational time using a carefully chosen m > 1.

64 Both algorithms are easy to implement on parallel machines, because the calculations performed for each cube are independent from those for any other cube. The fine granularity of the tasks and the low communication costs allow for efficient parallelization. It is important to know how expensive the most important parts of the algorithms are. Our measurements in the case of integral I2 yield the following results:

Algorithm A.1:
2% to obtain the uniformly distributed random points;
73% to calculate all the values of f at all points;
1% to apply the interpolation-type quadrature formula;
18% to calculate the Lagrange interpolation polynomial at the random points;
6% other tasks.

Algorithm A.2:
2% to obtain the uniformly distributed random points;
86% to calculate the values of f at all points;

65
1% to apply the interpolation-type quadrature formula;
5% to calculate the Lagrange-Newton approximation for the function f at all random points;
6% other tasks.
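Because the per-cube computations are independent, the estimator is an embarrassingly parallel map over cubes. The following is a minimal sketch of that decomposition with an illustrative integrand and illustrative names (not the paper's implementation); a real implementation would distribute the cubes over processes or MPI ranks rather than threads:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

D = 2                      # dimension of the integration domain [0, 1]^D
CUBES_PER_AXIS = 8         # the domain is split into 8^D equal cubes
POINTS_PER_CUBE = 4        # random points drawn inside each cube

def f(x):
    return math.exp(sum(x))            # smooth test integrand on [0, 1]^2

def cube_estimate(index):
    """Monte Carlo estimate of the integral over one small cube.
    Each cube gets its own independent random stream, so no state
    is shared between tasks."""
    rng = random.Random(index)
    h = 1.0 / CUBES_PER_AXIS
    # lower corner of this cube, decoded from its flat index
    corner = [(index // CUBES_PER_AXIS**k) % CUBES_PER_AXIS * h for k in range(D)]
    vol = h ** D
    total = 0.0
    for _ in range(POINTS_PER_CUBE):
        x = [c + rng.random() * h for c in corner]
        total += f(x)
    return vol * total / POINTS_PER_CUBE

indices = range(CUBES_PER_AXIS ** D)
with ThreadPoolExecutor() as pool:     # processes/MPI in a real implementation
    estimate = sum(pool.map(cube_estimate, indices))

print("estimate:", estimate)           # exact value is (e - 1)^2
```

Since each task touches only its own cube and returns a single number, the communication cost is one float per cube, which is why the granularity and communication profile described above make the parallelization efficient.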


More information

Chapter 4. Electrostatics of Macroscopic Media

Chapter 4. Electrostatics of Macroscopic Media Chapter 4. Electrostatics of Macroscopic Meia 4.1 Multipole Expansion Approximate potentials at large istances 3 x' x' (x') x x' x x Fig 4.1 We consier the potential in the far-fiel region (see Fig. 4.1

More information

Logarithmic spurious regressions

Logarithmic spurious regressions Logarithmic spurious regressions Robert M. e Jong Michigan State University February 5, 22 Abstract Spurious regressions, i.e. regressions in which an integrate process is regresse on another integrate

More information

Math 115 Section 018 Course Note

Math 115 Section 018 Course Note Course Note 1 General Functions Definition 1.1. A function is a rule that takes certain numbers as inputs an assigns to each a efinite output number. The set of all input numbers is calle the omain of

More information

THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE

THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE Journal of Soun an Vibration (1996) 191(3), 397 414 THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE E. M. WEINSTEIN Galaxy Scientific Corporation, 2500 English Creek

More information

Optimization Notes. Note: Any material in red you will need to have memorized verbatim (more or less) for tests, quizzes, and the final exam.

Optimization Notes. Note: Any material in red you will need to have memorized verbatim (more or less) for tests, quizzes, and the final exam. MATH 2250 Calculus I Date: October 5, 2017 Eric Perkerson Optimization Notes 1 Chapter 4 Note: Any material in re you will nee to have memorize verbatim (more or less) for tests, quizzes, an the final

More information

Spectral properties of a near-periodic row-stochastic Leslie matrix

Spectral properties of a near-periodic row-stochastic Leslie matrix Linear Algebra an its Applications 409 2005) 66 86 wwwelseviercom/locate/laa Spectral properties of a near-perioic row-stochastic Leslie matrix Mei-Qin Chen a Xiezhang Li b a Department of Mathematics

More information