Multidimensional Fast Gauss Transforms by Chebyshev Expansions
Johannes Tausch and Alexander Weckiewicz

May 9, 2009

Abstract. A new version of the fast Gauss transform (FGT) is introduced which is based on a truncated Chebyshev series expansion of the Gaussian. Unlike the traditional fast algorithms, the scheme does not subdivide sources and evaluation points into multiple clusters. Instead, the whole problem geometry is treated as a single cluster. Estimates for the error as a function of the dimension $d$ and the expansion order $p$ will be derived. The new algorithm has order $\binom{d+p}{d}(p+1)(N+M)$ complexity, where $M$ and $N$ are the numbers of source and evaluation points. For a fixed $p$, this estimate is only polynomial in $d$. However, to maintain accuracy it is necessary to increase $p$ with $d$. The precise relationship between $d$ and $p$ is investigated analytically and numerically.

1 Introduction

The Gauss transform is important in many applications, ranging from financial calculus [4], image processing [5, 14], multivariate data analysis [11], machine learning [15, 9] and approximation by radial basis functions [8], to name only a few. The task is to find the potentials $\Phi(x_i)$, $1 \le i \le N$, for given sources $q_j$, $1 \le j \le M$, such that

$$\Phi(x_i) = \sum_{j=1}^{M} \exp\Big( -\frac{\|x_i - y_j\|^2}{\delta} \Big)\, q_j \qquad (1)$$

where $x_i \in \mathbb{R}^d$ is the location of the $i$-th evaluation point, $y_j \in \mathbb{R}^d$ is the location of the $j$-th source, $\delta > 0$ is the variance, $d$ is the dimension and $\|\cdot\|$ is the Euclidean norm. We assume that the scaling is such that all sources and evaluation points are located in the unit cube $C := [-1, 1]^d$. The direct computation of the sum at every evaluation point is an $O(NM)$ algorithm, which is prohibitively expensive even for moderate values of $N$ and $M$. However, it was recognized early on that a variant of the fast multipole method can be used to evaluate the potentials in optimal $O(N + M)$ complexity. This is the fast Gauss transform, introduced by Greengard and Strain [6].
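The direct evaluation that the FGT is compared against can be sketched in a few lines of NumPy (a sketch of mine, not the authors' code; array names and shapes are illustrative):

```python
import numpy as np

def gauss_transform_direct(x, y, q, delta):
    """Direct O(N*M) evaluation of the Gauss transform (1).

    x: (N, d) evaluation points, y: (M, d) source points, q: (M,) strengths.
    Returns the N potentials Phi(x_i) = sum_j exp(-||x_i - y_j||^2 / delta) q_j.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    dist2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)
    return np.exp(-dist2 / delta) @ q
```

Storing the full $(N, M)$ distance matrix trades memory for simplicity; looping over evaluation points avoids it at the same $O(NM)$ arithmetic cost.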
The algorithm subdivides the unit cube into smaller cubes and approximates interactions between the small cubes using moments-to-local (MtL) translation operators. The complete algorithm consists of computing the moments, computing the MtL translations, and evaluating the truncated series to obtain the potentials. The fast Gauss transform is simpler than the tree codes developed for potential theory in that it suffices to have only one level of interacting cubes. The cost of the fast Gauss transform depends on the complexity of one MtL translation and the number of interacting cubes. Since the Gauss kernel decays exponentially, only nearby interactions have to be computed; thus it is possible to choose the parameters (number of cubes in a linear direction, number of expansion terms) such that the complexity is optimal and independent of the variance.

The fast algorithm computes only an approximation of the potential, because the Gauss kernel is replaced by a truncated series expansion. The conventional approach is to work with the Taylor series of the Gaussian. This is also known as the Hermite expansion, since derivatives of the Gaussian involve Hermite polynomials. After the original papers, several optimizations and extensions of this algorithm have been discussed, e.g., [2, 7, 10, 12, 14]. In addition to the original papers, there is a number of papers that discuss the error that is introduced by the truncation of the Hermite expansion, e.g., [3, 13]. From these papers it is evident that it is difficult to obtain realistic error estimates in dimensions larger than one.

In this article we are primarily interested in the case of high dimensions. Although the optimal $O(N + M)$ complexity of the original FGT holds in any dimension, the constants grow exponentially with $d$ and thus the algorithm suffers from the curse of dimensionality. There are two aspects of the algorithm that contribute to this exponential dependence.
The first is that if the domain is subdivided into $L$ cubes in each direction, then the number of interacting cubes grows at least like $L^d$, assuming that the distribution of sources and evaluation points is fairly uniform. The second aspect is that the number of terms that have to be retained in the expansion grows exponentially with the dimension. To address the first issue, more efficient space subdivision schemes have been considered [10]. In our work we avoid the first problem altogether by using only one cube that contains all sources and evaluation points. In this setting it is necessary to have an expansion of the Gauss kernel that converges rapidly in the whole unit cube, especially for small values of the variance. It is well known that the Hermite expansion converges rapidly only in a neighborhood of the expansion point. On the other hand, we will show that the Chebyshev expansion has good global convergence properties. To address the second issue, we retain the terms in the multivariate Chebyshev series where the sum of the orders is at most $p$. A similar approach has been considered for the Taylor series expansion in [10]. Thus the number of terms is $\binom{d+p}{d}$, which grows only polynomially in $d$. Furthermore, we will show how to exploit the Kronecker product form to compute the MtL translation using less than $d(p+1)\binom{d+p}{d}$ multiplications. Thus for a given order the algorithm has polynomial complexity.
However, in the complexity estimates it is important to take the approximation error under consideration. Our analysis shows that to control the error of the potential in a weighted $L^2$-norm it is necessary that the order be proportional to the dimension. Thus the complexity still has an exponential dependence on the dimension. However, our numerical results suggest that this dependence is very mild, and hence we are able to compute Gauss transforms in dimensions as high as 8 on an eight gigabyte workstation.

The outline of the paper is as follows. In Section 2 we derive the approximation theory of the Gaussian using Chebyshev expansions and demonstrate that the expansion coefficients can be represented in closed form using Bessel functions. Then in Section 3 we describe the one-dimensional fast Gauss transform. Section 4 extends the methodology to arbitrary dimensions and describes how to write the MtL transform in Kronecker product form. Error and complexity estimates for the multidimensional case are derived in Sections 5 and 6. Section 7 concludes with numerical results.

2 Chebyshev Expansion of the Gaussian

We consider the expansion of the one-dimensional Gaussian in Chebyshev polynomials. It is well known that the Chebyshev polynomials $T_n(x) = \cos(n \arccos x)$, $x \in [-1,1]$, are $L^2_w[-1,1]$-orthogonal with weight function $w(x) = (1 - x^2)^{-1/2}$. That is,

$$\int_{-1}^{1} T_n(x)\, T_m(x)\, w(x)\,dx = \frac{\pi}{\gamma_n}\, \delta_{n,m},$$

where $\gamma_0 = 1$ and $\gamma_n = 2$ for $n \ge 1$. Thus the Gaussian has the expansion

$$\exp\Big(-\frac{r^2}{\delta}\Big) = \sum_{n=0}^{\infty} E_n(\delta)\, T_n(r), \qquad r \in [-1,1], \qquad (2)$$

with coefficients

$$E_n(\delta) = \frac{\gamma_n}{\pi} \int_{-1}^{1} \exp\Big(-\frac{x^2}{\delta}\Big)\, T_n(x)\, w(x)\,dx. \qquad (3)$$

The integral can be expressed in closed form using Bessel functions. To see this, recall the Jacobi-Anger formulas of [1], which can be combined as

$$\exp(iz\cos\theta) = \sum_{n=0}^{\infty} \gamma_n\, J_n(z)\, i^n \cos(n\theta).$$

Here $J_n$ denotes the Bessel function of order $n$. If we replace $\theta$ by $2\theta$, set $z = i/(2\delta)$ and multiply by $\exp(-1/(2\delta))$, we obtain

$$\exp\Big(-\frac{1}{\delta}\cos^2\theta\Big) = \sum_{n=0}^{\infty} \gamma_n\, i^n \exp\Big(-\frac{1}{2\delta}\Big)\, J_n\Big(\frac{i}{2\delta}\Big) \cos(2n\theta).$$
If $r = \cos\theta$ then the left hand side is the Gaussian, and the right hand side is the expansion in Chebyshev polynomials. Thus the coefficients in (2) are

$$E_n(\delta) = \begin{cases} \gamma_n\, i^{n/2} \exp\big(-\tfrac{1}{2\delta}\big)\, J_{n/2}\big(\tfrac{i}{2\delta}\big), & n \text{ even}, \\[2pt] 0, & n \text{ odd}. \end{cases} \qquad (4)$$

To obtain error bounds of the truncated series in (2), we need to estimate the coefficients. Since it is hard to find uniform bounds for Bessel functions of arbitrary order and argument, it is more promising to work with the integral definition of the coefficients in (3). The estimate below depends on the following technical result.

Lemma 2.1 Let
$$I(b) := \frac{1}{\pi} \int_0^{\pi} \exp\big(-b\cos^2\theta\big)\,d\theta.$$
Then
$$I(b) \le \frac{1}{2}\sqrt{\frac{\pi}{b}}$$
holds for $b > 0$.

Proof. It is easily checked that
$$\sin\theta \ge \frac{2\theta}{\pi}, \qquad 0 \le \theta \le \frac{\pi}{2},$$
thus
$$I(b) = \frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \exp\big(-b\sin^2\theta\big)\,d\theta \le \frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \exp\Big(-\frac{4b}{\pi^2}\theta^2\Big)\,d\theta \le \frac{1}{\pi}\int_{-\infty}^{\infty} \exp\Big(-\frac{4b}{\pi^2}\theta^2\Big)\,d\theta = \frac{1}{2}\sqrt{\frac{\pi}{b}}.$$

Lemma 2.2 For every $a > 0$ the estimate
$$|E_n(\delta)| \le \gamma_n \sqrt{\frac{\delta\pi}{2}}\; a^{-(n+1)} \exp\Big( \frac{1}{4\delta}\Big(a - \frac{1}{a}\Big)^2 \Big)$$
holds.

Proof. After the change of variables $x = \cos\theta$ the integral in (3) becomes

$$E_n(\delta) = \frac{\gamma_n}{2\pi} \int_0^{2\pi} \exp\Big(-\frac{1}{\delta}\cos^2\theta\Big) \exp(in\theta)\,d\theta.$$
This integrand is periodic with period $2\pi$ and is an entire function in the complex $\theta$-plane. By the Cauchy integral theorem, the contour of integration can be moved by $\tilde a$ units into the upper imaginary half plane. Hence

$$E_n(\delta) = \frac{\gamma_n}{2\pi} \int_0^{2\pi} \exp\Big[-\frac{1}{\delta}\cos^2(\theta + i\tilde a)\Big] \exp\big[in(\theta + i\tilde a)\big]\,d\theta,$$

and we may estimate

$$|E_n(\delta)| \le \frac{\gamma_n}{2\pi}\, e^{-n\tilde a} \int_0^{2\pi} \exp\Big(-\frac{1}{\delta}\,\mathrm{Re}\,\cos^2(\theta + i\tilde a)\Big)\,d\theta.$$

Elementary manipulations lead to

$$\mathrm{Re}\,\cos^2(\theta + i\tilde a) = \frac{1}{2}\Big(a^2 + \frac{1}{a^2}\Big)\cos^2\theta - \frac{1}{4}\Big(a - \frac{1}{a}\Big)^2,$$

where $a = \exp(\tilde a)$. Thus

$$|E_n(\delta)| \le \frac{\gamma_n}{2\pi\, a^{n}} \exp\Big(\frac{1}{4\delta}\Big(a - \frac{1}{a}\Big)^2\Big) \int_0^{2\pi} \exp\big(-b\cos^2\theta\big)\,d\theta = \frac{\gamma_n}{a^{n}} \exp\Big(\frac{1}{4\delta}\Big(a - \frac{1}{a}\Big)^2\Big)\, I(b),$$

where $I(b)$ was defined in Lemma 2.1 and

$$b = \frac{1}{2\delta}\Big(a^2 + \frac{1}{a^2}\Big). \qquad (5)$$

Because of $b \ge a^2/(2\delta)$ the assertion can be derived from Lemma 2.1.

Since the estimate of Lemma 2.2 is valid for any positive $a$, we can choose the $a$ that minimizes the last estimate. Simple calculus shows that the optimal value of $a$ is

$$a = \Big[\delta\tilde n + \big((\delta\tilde n)^2 + 1\big)^{1/2}\Big]^{1/2},$$

where $\tilde n = n + 1$. Thus

$$|E_n(\delta)| \le \gamma_n \sqrt{\frac{\delta\pi}{2}}\, \exp\big(-\tilde n\, \kappa(\tilde n \delta)\big), \qquad (6)$$

where

$$\kappa(t) = \frac{1}{2}\ln\Big(t + \big(t^2+1\big)^{1/2}\Big) - \frac{1}{2t}\Big(\big(t^2+1\big)^{1/2} - 1\Big).$$
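The closed form (4) can be cross-checked against the integral definition (3). The sketch below (mine, assuming SciPy is available) uses the identity $J_m(ix) = i^m I_m(x)$ to rewrite (4) with the modified Bessel function $I_{n/2}$, which keeps the arithmetic real:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_v

def E_bessel(n, delta):
    """E_n(delta) via (4), rewritten with J_m(ix) = i^m I_m(x) so it is real."""
    if n % 2 == 1:
        return 0.0          # odd coefficients vanish
    gamma_n = 1.0 if n == 0 else 2.0
    z = 1.0 / (2.0 * delta)
    return gamma_n * (-1.0) ** (n // 2) * np.exp(-z) * iv(n // 2, z)

def E_quad(n, delta, m=200):
    """E_n(delta) from the integral (3) by m-point Gauss-Chebyshev quadrature."""
    gamma_n = 1.0 if n == 0 else 2.0
    theta = (2.0 * np.arange(1, m + 1) - 1.0) * np.pi / (2.0 * m)
    # (gamma_n/pi) int f T_n w dx  ->  (gamma_n/m) sum f(cos th_j) cos(n th_j)
    return gamma_n / m * np.sum(np.exp(-np.cos(theta) ** 2 / delta) * np.cos(n * theta))
```

For smooth integrands the Gauss-Chebyshev rule converges spectrally, so both routines should agree to machine precision for moderate $n$ and $\delta$.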
The function $\kappa(t)$ is monotonically increasing for $t \ge 0$; furthermore,

$$\kappa(t) \sim \frac{t}{4}, \qquad t \to 0, \qquad (7)$$
$$\kappa(t) \sim \frac{1}{2}\ln(t), \qquad t \to \infty. \qquad (8)$$

We denote the $p$-term expansion of the Gaussian by $G_p$, i.e.,

$$G_p(\delta, r) = \sum_{n=0}^{p} E_n(\delta)\, T_n(r), \qquad (9)$$

and the remainder by $R_p$. Since $|T_n| \le 1$ in $[-1,1]$ it follows that

$$|R_p(\delta, r)| = \Big| \sum_{n=p+1}^{\infty} E_n(\delta)\, T_n(r) \Big| \le \sum_{n=p+1}^{\infty} |E_n(\delta)|, \qquad |r| \le 1. \qquad (10)$$

Using estimate (6) and the remainder of the geometric series, the following bound can be derived:

$$|R_p(\delta, r)| \le \sqrt{2\delta\pi} \sum_{\substack{n \ge p+1 \\ n \text{ even}}} \exp\big(-\tilde n\, \kappa(\delta \tilde n)\big) \le \sqrt{2\delta\pi}\; \frac{\exp\big(-\tilde p\, \kappa(\delta\tilde p)\big)}{1 - \exp\big(-2\kappa(\delta\tilde p)\big)}, \qquad (11)$$

where $\tilde p = p + 2$ if $p$ is odd and $\tilde p = p + 3$ if $p$ is even. The second step is justified because $\kappa(\cdot)$ is monotonically increasing. If $\delta$ is fixed and $p \to \infty$, then we obtain from the asymptotics (8) that

$$|R_p(\delta, r)| \lesssim \sqrt{2\delta\pi}\, \big[\delta\tilde p\big]^{-\tilde p/2}. \qquad (12)$$

Thus the convergence is super-exponential, which has to do with the fact that the Gauss kernel is an entire function. Furthermore, the estimate makes clear that the convergence is slower when $\delta$ gets smaller. The Chebyshev coefficients and their estimates are shown in Figure 1.

3 One-Dimensional Gauss Transform

The fast Gauss transform described below depends on a truncated series expansion of the Gauss kernel, which is the exponential in (1) considered as a function of the $x$- and the $y$-variable. We will consider two different truncation methods: the first is to truncate the Chebyshev expansion in both variables, the other is to truncate the expansion of the Gaussian in the $r$-variable and then use an addition theorem of the Chebyshev polynomials.
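The super-exponential decay claimed in (12) is easy to observe numerically. A small sketch (mine; the coefficients are computed by quadrature as above rather than by the closed form):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

def truncation_error(p, delta, m=400):
    """Sup-norm of the remainder R_p(delta, r) on a fine grid, per (10)."""
    # Chebyshev coefficients E_0 .. E_p by m-point Gauss-Chebyshev quadrature.
    theta = (2.0 * np.arange(1, m + 1) - 1.0) * np.pi / (2.0 * m)
    g = np.exp(-np.cos(theta) ** 2 / delta)
    E = np.array([(1.0 if n == 0 else 2.0) / m * np.sum(g * np.cos(n * theta))
                  for n in range(p + 1)])
    r = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.exp(-r ** 2 / delta) - Cheb.chebval(r, E)))
```

Doubling $p$ drives the error down super-exponentially until it saturates at rounding level, and for smaller $\delta$ the decay sets in later, consistent with Figure 1.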
Figure 1: Comparison of $|E_n|$ with the estimate (6), for $\delta = 1/4$, $1/16$, $1/64$ (exact and estimate in each case).

3.1 $L^2_w$-orthogonal approximation

If $x, y \in [-1,1]$, the two-variate Chebyshev expansion of the Gauss kernel is

$$\exp\Big(-\frac{(x-y)^2}{\delta}\Big) = \sum_{k,l \in \mathbb{N}_0} E_{k,l}(\delta)\, T_k(x)\, T_l(y), \qquad (13)$$

where the coefficients are

$$E_{k,l}(\delta) = \frac{\gamma_k \gamma_l}{\pi^2} \int_{-1}^{1}\int_{-1}^{1} \exp\Big(-\frac{(x-y)^2}{\delta}\Big)\, T_k(x)\, T_l(y)\, w(x)\, w(y)\,dy\,dx. \qquad (14)$$

Unlike the case of the Gaussian, there does not appear to be a closed-form expression for the coefficients of the Gauss kernel. Thus the integrals have to be
computed by quadrature when the method is implemented. We will comment on this issue in Section 7. The expansion (13) is truncated by retaining all terms of total order at most $p$; the resulting approximation is denoted by

$$G_p(x, y) = \sum_{k+l \le p} E_{k,l}(\delta)\, T_k(x)\, T_l(y), \qquad (15)$$

and the remainder is denoted by $R_p(x,y)$. By orthogonality, $G_p$ is the best approximation in the $L^2_w \times L^2_w$-norm.

3.2 Expansion based on an Addition Theorem

An alternative way to obtain an approximation of the Gauss kernel is based on the following addition theorem of Chebyshev polynomials.

Lemma 3.1 There are coefficients $a^{(n)}_{k,l}$ such that

$$T_n\Big(\frac{x-y}{2}\Big) = \sum_{k+l \le n} a^{(n)}_{k,l}\, T_k(x)\, T_l(y). \qquad (16)$$

For any $p > n$, the coefficients are given by

$$a^{(n)}_{k,l} = \frac{\gamma_k \gamma_l}{p^2} \sum_{i=0}^{p-1} \sum_{j=0}^{p-1} T_n\Big(\frac{x^{(p)}_i - x^{(p)}_j}{2}\Big)\, T_k\big(x^{(p)}_i\big)\, T_l\big(x^{(p)}_j\big), \qquad (17)$$

where $x^{(p)}_j = \cos\big(\pi\,\tfrac{2j+1}{2p}\big)$.

Proof. Since $T_n$ is a polynomial of degree $n$, it follows that $T_n(\tfrac{1}{2}(x-y))$ is a linear combination of the monomials $x^k y^l$, $0 \le k+l \le n$. Since $T_k(x)T_l(y)$, $0 \le k+l \le n$, is also a basis for this subspace, assertion (16) follows. By orthogonality, the coefficients are given by

$$a^{(n)}_{k,l} = \frac{\gamma_k \gamma_l}{\pi^2} \int_{-1}^{1}\int_{-1}^{1} T_n\Big(\frac{x-y}{2}\Big)\, T_k(x)\, T_l(y)\, w(y)\, w(x)\, dy\, dx.$$

If the integrals are replaced by the $p$-th order Gauss-Chebyshev quadrature rule, which is exact for this integrand when $p > n$, then assertion (17) follows.

To approximate the Gauss kernel, begin with the expansion of the Gaussian in equation (9):

$$\exp\Big(-\frac{1}{\delta}(x-y)^2\Big) = \exp\Big(-\frac{4}{\delta}\Big(\frac{x-y}{2}\Big)^2\Big) = G_p\Big(\frac{\delta}{4}, \frac{x-y}{2}\Big) + R_p\Big(\frac{\delta}{4}, \frac{x-y}{2}\Big),$$

where, by the addition theorem,

$$G_p\Big(\frac{\delta}{4}, \frac{x-y}{2}\Big) = \sum_{n=0}^{p} E_n\Big(\frac{\delta}{4}\Big)\, T_n\Big(\frac{x-y}{2}\Big) = \sum_{k+l \le p} E^{(p)}_{k,l}(\delta)\, T_k(x)\, T_l(y).$$
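Lemma 3.1 is directly implementable. The sketch below (mine; the node and loop conventions are an assumption, not taken from the paper's code) computes the $a^{(n)}_{k,l}$ by the quadrature rule (17) and lets one verify the identity (16) pointwise:

```python
import numpy as np

def T(n, z):
    """Chebyshev polynomial T_n on [-1, 1] (clip guards rounding at the ends)."""
    return np.cos(n * np.arccos(np.clip(z, -1.0, 1.0)))

def addition_coeffs(n, m=None):
    """a^{(n)}_{k,l} of (16)-(17); an m-point Gauss-Chebyshev rule with m > n is exact."""
    m = n + 2 if m is None else m
    theta = (2.0 * np.arange(m) + 1.0) * np.pi / (2.0 * m)
    x = np.cos(theta)                                  # quadrature nodes
    Tn = T(n, (x[:, None] - x[None, :]) / 2.0)         # T_n((x_i - x_j)/2)
    a = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        for l in range(n + 1 - k):                     # only k + l <= n survive
            gk = 1.0 if k == 0 else 2.0
            gl = 1.0 if l == 0 else 2.0
            a[k, l] = gk * gl / m ** 2 * np.sum(
                Tn * np.cos(k * theta)[:, None] * np.cos(l * theta)[None, :])
    return a
```

Since the integrand is a polynomial of degree at most $2n$ in each variable, the rule is exact and the reconstruction error is pure rounding.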
The coefficients are given by

$$E^{(p)}_{k,l}(\delta) = \sum_{n=k+l}^{p} a^{(n)}_{k,l}\, E_n\Big(\frac{\delta}{4}\Big), \qquad (18)$$

and the remainder of the truncated series of the Gauss kernel is

$$R_p\Big(\frac{\delta}{4}, \frac{x-y}{2}\Big) = \sum_{n=p+1}^{\infty} E_n\Big(\frac{\delta}{4}\Big)\, T_n\Big(\frac{x-y}{2}\Big). \qquad (19)$$

The convergence results of Section 2 apply since $|x-y|/2 \le 1$.

3.3 Fast Gauss Transform

To compute the sum in (1) efficiently, replace the Gauss kernel by one of the two truncated Chebyshev series expansions described above. Thus the potential at the evaluation points $\Phi(x_i)$ is approximated by $\Phi_p(x_i)$, given by

$$\Phi_p(x_i) = \sum_{j=1}^{M} \sum_{k+l \le p} E_{k,l}\, T_k(x_i)\, T_l(y_j)\, q_j.$$

Here, $E_{k,l} = E_{k,l}(\delta)$ in the case of the two-variate expansion and $E_{k,l} = E^{(p)}_{k,l}(\delta)$ in the case of the expansion with the addition theorem. Rearranging the order of summation, we see that the approximate potential has an expansion in terms of Chebyshev polynomials

$$\Phi_p(x_i) = \sum_{k=0}^{p} \lambda_k\, T_k(x_i), \qquad (20)$$

where the expansion coefficients are

$$\lambda_k = \sum_{l=0}^{p-k} E_{k,l}\, \mu_l. \qquad (21)$$

Here the $\mu_l$ are the moments of the sources, which are given by

$$\mu_l = \sum_{j=1}^{M} T_l(y_j)\, q_j. \qquad (22)$$

Instead of evaluating the exact potentials $\Phi(x_i)$ via the sum in (1), we use the following procedure for computing the approximate potentials $\Phi_p(x_i)$.

Fast Gauss Transform
1. QtM translation: Compute the moments in (22).
2. MtL translation: Compute the expansion coefficients in (21).
3. LtP translation: Evaluate the truncated series (20) at the evaluation points.

The complexities of Steps 1 and 3 are $O(Mp)$ and $O(Np)$, respectively; the complexity of the MtL translation is $O(p^2)$. Thus the method will be faster than the direct evaluation if the expansion order satisfies $p(N + M) \ll NM$.

3.4 Approximation Error

If the Gauss kernel is approximated with the addition theorem, error estimates for the potential follow directly from the previously derived estimates for the remainder of the Gaussian. The case of the $L^2_w$-orthogonal expansion is more difficult, since it is hard to estimate the magnitude of the coefficients in the two-variate Chebyshev series (13). Instead of finding bounds for these coefficients, we will use the optimality of the approximation in the $L^2_w$-norm. Since this type of argument will reappear in the analysis of the multi-dimensional transform, we discuss the details below.

In view of (20), the error of the potential when using the $L^2_w$-orthogonal expansion can be expressed by the Chebyshev series

$$\Phi(x) - \Phi_p(x) = \sum_{k=0}^{\infty} \hat\lambda_k\, T_k(x),$$

where the expansion coefficients are

$$\hat\lambda_k = \sum_{l:\, k+l > p} E_{k,l}(\delta)\, \mu_l. \qquad (23)$$

By orthogonality, it follows that

$$\|\Phi - \Phi_p\|^2_{L^2_w} = \sum_{k=0}^{\infty} \frac{\pi}{\gamma_k}\, |\hat\lambda_k|^2 =: \|\hat\lambda\|^2_{l^2_w}. \qquad (24)$$

To estimate the $l^2_w$-norm of $\hat\lambda$ we apply the Cauchy-Schwarz inequality to (23):

$$|\hat\lambda_k|^2 \le \frac{4}{\pi^2} \Big[ \sum_{l:\, k+l > p} \frac{\pi}{\gamma_l}\, E_{k,l}(\delta)^2 \Big] \sum_{l} \frac{\pi}{\gamma_l}\, \mu_l^2.$$

Multiplying by $\pi/\gamma_k$ and summing over $k$ leads to

$$\|\hat\lambda\|^2_{l^2_w} \le \frac{4}{\pi^2} \Big[ \sum_{k+l > p} \frac{\pi^2}{\gamma_k \gamma_l}\, E_{k,l}(\delta)^2 \Big] \|\mu\|^2_{l^2_w}. \qquad (25)$$

The expression in the brackets is the squared $L^2_w \times L^2_w$-norm of the residual. Since the $L^2_w$-orthogonal approximation is optimal, we can use the approximation based on the addition theorem as an upper bound; hence

$$\sum_{k+l > p} \frac{\pi^2}{\gamma_k \gamma_l}\, E_{k,l}(\delta)^2 = \|R_p\|^2_{L^2_w \times L^2_w} \le \pi^2 \max_{|x|,|y| \le 1} \Big| R_p\Big(\frac{\delta}{4}, \frac{x-y}{2}\Big) \Big|^2.$$
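The whole one-dimensional pipeline (QtM, MtL, LtP) fits in a few lines of NumPy. This is my sketch, not the authors' C implementation; the $E_{k,l}$ are obtained by a tensor Gauss-Chebyshev rule for (14), which is one way to do the quadrature the paper calls for:

```python
import numpy as np

def fgt_1d(x, y, q, delta, p, m=64):
    """1D fast Gauss transform with the truncated two-variate expansion (15)."""
    k = np.arange(p + 1)
    gam = np.where(k == 0, 1.0, 2.0)
    theta = (2.0 * np.arange(m) + 1.0) * np.pi / (2.0 * m)
    t = np.cos(theta)
    Tk = np.cos(np.outer(k, theta))                         # T_k at the nodes
    Kmat = np.exp(-(t[:, None] - t[None, :]) ** 2 / delta)  # kernel at the nodes
    E = gam[:, None] * gam[None, :] / m ** 2 * (Tk @ Kmat @ Tk.T)   # (14)
    E[np.add.outer(k, k) > p] = 0.0                         # truncate: k + l <= p
    mu = np.cos(np.outer(k, np.arccos(y))) @ q              # 1. QtM, moments (22)
    lam = E @ mu                                            # 2. MtL, coefficients (21)
    return np.cos(np.outer(k, np.arccos(x))).T @ lam        # 3. LtP, series (20)
```

After the one-time setup of $E$, the per-call cost is $O((M+N)p + p^2)$ versus $O(NM)$ for the direct sum, matching the complexity counts above.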
Combining the last estimate with (24), (25) and (11) leads to

$$\|\Phi - \Phi_p\|_{L^2_w} \le \sqrt{2\delta\pi}\; \frac{\exp\big[-\tilde p\,\kappa(\tilde p\,\delta/4)\big]}{1 - \exp\big[-2\kappa(\tilde p\,\delta/4)\big]}\; \|\mu\|_{l^2_w}.$$

4 Multidimensional Transforms

If the sources and evaluation points are located in a $d$-dimensional Euclidean space, then the Gauss kernel separates:

$$\exp\Big(-\frac{\|x-y\|^2}{\delta}\Big) = \exp\Big(-\frac{(x_1-y_1)^2}{\delta}\Big) \cdots \exp\Big(-\frac{(x_d-y_d)^2}{\delta}\Big).$$

Expanding each term in the product shows that the expansion coefficients of the multivariate Gauss kernel are products of single-variate expansion coefficients:

$$\exp\Big(-\frac{\|x-y\|^2}{\delta}\Big) = \sum_{\alpha,\beta} E_{\alpha,\beta}(\delta)\, T_\alpha(x)\, T_\beta(y), \qquad x, y \in [-1,1]^d. \qquad (26)$$

Here and in the following we use the usual multi-index notation $\alpha = (\alpha_1, \ldots, \alpha_d)$, $|\alpha| = \alpha_1 + \cdots + \alpha_d$, and define the multivariate Chebyshev polynomial by $T_\alpha(x) = T_{\alpha_1}(x_1) \cdots T_{\alpha_d}(x_d)$ and the multivariate coefficients by

$$E_{\alpha,\beta}(\delta) = E_{\alpha_1,\beta_1}(\delta) \cdots E_{\alpha_d,\beta_d}(\delta). \qquad (27)$$

We seek an approximation of the Gauss kernel in the space of $d$-variate polynomials of total degree $p$,

$$\Pi^d_p := \mathrm{span}\,\{x^\alpha y^\beta : |\alpha| + |\beta| \le p\} = \mathrm{span}\,\{T_\alpha(x)\, T_\beta(y) : |\alpha| + |\beta| \le p\}.$$

The index set of the basis of $\Pi^d_p$ is the $d$-dimensional simplex

$$S^d_p = \big\{ (\alpha_1, \ldots, \alpha_d) : |\alpha| \le p \big\}.$$

4.1 $L^2_w$-orthogonal approximation

In analogy to the one-dimensional case we can simply truncate the Chebyshev series and obtain

$$G_p(x, y) = \sum_{|\alpha| + |\beta| \le p} E_{\alpha,\beta}(\delta)\, T_\alpha(x)\, T_\beta(y). \qquad (28)$$

The corresponding residual is denoted by $R_p(x,y)$. By orthogonality of the Chebyshev polynomials, (28) is the best approximation of the Gauss kernel in $\Pi^d_p$ with respect to the $L^2_w$-norm, and the residual is $L^2_w$-orthogonal to $G_p$. Furthermore, the coefficients are products of the single-variate coefficients, which
will be important for efficient MtL translations. Thus the approximation (28) will be the starting point for the multivariate fast Gauss transform. Because of the products in (27) it is difficult to estimate the approximation error. Therefore we consider a second approximation scheme for the multivariate Gauss kernel, for which the error estimates for the one-dimensional Gaussian apply.

4.2 Radially symmetric approximation

The derivation of the radially symmetric approximation begins with the one-dimensional Chebyshev series of the Gaussian:

$$\exp\Big(-\frac{\|x-y\|^2}{\delta}\Big) = \exp\Big(-\frac{4}{\delta}\Big(\frac{\|x-y\|}{2}\Big)^2\Big) = G_p\Big(\frac{\delta}{4}, \frac{\|x-y\|}{2}\Big) + R_p\Big(\frac{\delta}{4}, \frac{\|x-y\|}{2}\Big),$$

where $R_p$ is the remainder of the Gaussian, defined in (10), and

$$G_p\Big(\frac{\delta}{4}, \frac{\|x-y\|}{2}\Big) = \sum_{n=0}^{p} E_n\Big(\frac{\delta}{4}\Big)\, T_n\Big(\frac{\|x-y\|}{2}\Big). \qquad (29)$$

Note that if $x, y \in C$ then the argument of the Chebyshev polynomials is less than unity; therefore the estimates of the remainder derived in Section 2 apply. Furthermore, because of (4), the sum contains only terms of even order; thus (29) is in $\Pi^d_p$. This follows directly from the following lemma.

Lemma 4.1 If $n$ is even then $T_n(\|x-y\|/2) \in \Pi^d_n$.

Proof. For even $n$, $T_n(z)$ is a polynomial in $z^2$; hence there are coefficients $a_k$ such that

$$T_n\Big(\frac{\|x-y\|}{2}\Big) = \sum_{k=0}^{n/2} a_k\, \|x-y\|^{2k}.$$

From the multinomial and binomial formulas it follows that

$$\frac{1}{k!}\|x-y\|^{2k} = \frac{1}{k!}\big[(x_1-y_1)^2 + \cdots + (x_d-y_d)^2\big]^k = \sum_{|\gamma|=k} \frac{1}{\gamma!}\,(x-y)^{2\gamma} = \sum_{|\gamma|=k}\; \sum_{\alpha+\beta=2\gamma} \frac{(\alpha+\beta)!}{\gamma!\,\alpha!\,\beta!}\,(-1)^{|\beta|}\, x^\alpha y^\beta.$$

Combining the last two results implies the assertion.
Because of the lemma, there are coefficients $E^{(p)}_{\alpha,\beta}$ such that

$$G_p\Big(\frac{\delta}{4}, \frac{\|x-y\|}{2}\Big) = \sum_{|\alpha|+|\beta| \le p} E^{(p)}_{\alpha,\beta}(\delta)\, T_\alpha(x)\, T_\beta(y).$$

Unfortunately, the coefficients $E^{(p)}_{\alpha,\beta}$ are not products of single-variate coefficients; thus the tensor-product form of the Chebyshev series is lost, and therefore the radially symmetric approximation is not attractive for numerical computations. It will, however, be important for the error estimates, which will be discussed in Section 6.

4.3 Multivariate Fast Gauss Transform

Replacing the Gauss kernel in (1) by $G_p$ leads to the multidimensional Gauss transform. Similar to the one-dimensional case, the potential has the expansion

$$\Phi_p(x) = \sum_{|\alpha| \le p} \lambda_\alpha\, T_\alpha(x). \qquad (30)$$

The expansion coefficients are determined by the MtL translation

$$\lambda_\alpha = \sum_{|\alpha| + |\beta| \le p} E_{\alpha,\beta}\, \mu_\beta, \qquad \alpha \in S^d_p, \qquad (31)$$

where the moments are given by

$$\mu_\beta = \sum_{j=1}^{M} T_\beta(y_j)\, q_j, \qquad \beta \in S^d_p. \qquad (32)$$

We illustrate the issues of evaluating the MtL translation for the case of three dimensions. Writing out all indices explicitly, the translation is

$$\lambda_{\alpha_1,\alpha_2,\alpha_3} = \sum_{\beta_3=0}^{p-|\alpha|} E_{\alpha_3,\beta_3} \sum_{\beta_2=0}^{p-|\alpha|-\beta_3} E_{\alpha_2,\beta_2} \sum_{\beta_1=0}^{p-|\alpha|-\beta_3-\beta_2} E_{\alpha_1,\beta_1}\, \mu_{\beta_1,\beta_2,\beta_3}. \qquad (33)$$

The expansion coefficients can be computed by a sequence of one-dimensional transforms. Set $\lambda^{(0)} = \mu$ and compute

$$\lambda^{(1)}_{\alpha_1,\alpha_2,\alpha_3,\beta_2,\beta_3} = \sum_{\beta_1=0}^{p-|\alpha|-\beta_2-\beta_3} E_{\alpha_1,\beta_1}\, \lambda^{(0)}_{\beta_1,\beta_2,\beta_3},$$

$$\lambda^{(2)}_{\alpha_1,\alpha_2,\alpha_3,\beta_3} = \sum_{\beta_2=0}^{p-|\alpha|-\beta_3} E_{\alpha_2,\beta_2}\, \lambda^{(1)}_{\alpha_1,\alpha_2,\alpha_3,\beta_2,\beta_3},$$

$$\lambda^{(3)}_{\alpha_1,\alpha_2,\alpha_3} = \sum_{\beta_3=0}^{p-|\alpha|} E_{\alpha_3,\beta_3}\, \lambda^{(2)}_{\alpha_1,\alpha_2,\alpha_3,\beta_3},$$
then $\lambda = \lambda^{(3)}$. Unfortunately, $\lambda^{(1)}$ has five indices and $\lambda^{(2)}$ has four indices instead of just three. Thus their computation is expensive because of the large number of terms that have to be computed. It is possible to better exploit the Kronecker product form of the Gauss kernel by including more terms in the computation of the $\lambda$'s. In the three-dimensional case this can be accomplished by evaluating the following expression:

$$\lambda_{\alpha_1,\alpha_2,\alpha_3} = \sum_{\beta_3=0}^{p-\alpha_1-\alpha_2-\alpha_3} E_{\alpha_3,\beta_3} \sum_{\beta_2=0}^{p-\alpha_1-\beta_3} E_{\alpha_2,\beta_2} \sum_{\beta_1=0}^{p-\beta_2-\beta_3} E_{\alpha_1,\beta_1}\, \lambda^{(0)}_{\beta_1,\beta_2,\beta_3}.$$

In this case $\lambda^{(1)}$ depends only on three indices:

$$\lambda^{(1)}_{\alpha_1,\beta_2,\beta_3} = \sum_{\beta_1=0}^{p-\beta_2-\beta_3} E_{\alpha_1,\beta_1}\, \lambda^{(0)}_{\beta_1,\beta_2,\beta_3}. \qquad (34)$$

If the $\lambda^{(1)}$'s are computed for indices $(\alpha_1,\beta_2,\beta_3) \in S^3_p$, it is sufficient to compute the $\lambda^{(0)}$'s for indices in $S^3_p$, since the indices in the above sum satisfy $\beta_1 + \beta_2 + \beta_3 \le p$. Now $\lambda^{(2)}$ is given by

$$\lambda^{(2)}_{\alpha_1,\alpha_2,\beta_3} = \sum_{\beta_2=0}^{p-\alpha_1-\beta_3} E_{\alpha_2,\beta_2}\, \lambda^{(1)}_{\alpha_1,\beta_2,\beta_3}.$$

If the $\lambda^{(2)}$'s are computed for indices $(\alpha_1,\alpha_2,\beta_3) \in S^3_p$ then, by the same token, we only need the coefficients of $\lambda^{(1)}$ in $S^3_p$. Finally, $\lambda = \lambda^{(3)}$ is given by

$$\lambda^{(3)}_{\alpha_1,\alpha_2,\alpha_3} = \sum_{\beta_3=0}^{p-\alpha_1-\alpha_2} E_{\alpha_3,\beta_3}\, \lambda^{(2)}_{\alpha_1,\alpha_2,\beta_3}.$$

As before, an inspection of the upper bound in the summation shows that we only need the coefficients of $\lambda^{(2)}$ in $S^3_p$ to compute the coefficients of $\lambda^{(3)}$ in $S^3_p$. The same principle generalizes to any dimension; thus the algorithm in the general case is
Multivariate MtL translation

    for j = 1 : d
        for $(\alpha_1, \ldots, \alpha_j, \beta_{j+1}, \ldots, \beta_d) \in S^d_p$
            $\lambda^{(j)}_{\alpha_1,\ldots,\alpha_j,\beta_{j+1},\ldots,\beta_d} = \sum_{\beta_j=0}^{p-\alpha_1-\cdots-\alpha_{j-1}-\beta_{j+1}-\cdots-\beta_d} E_{\alpha_j,\beta_j}\, \lambda^{(j-1)}_{\alpha_1,\ldots,\alpha_{j-1},\beta_j,\ldots,\beta_d}$
        end
    end

5 Complexity and Implementation Details

It is well known that the cardinality of $S^d_p$ is given by

$$\#S^d_p = \binom{p+d}{d} = \frac{(d+p)!}{d!\,p!}.$$

A bound for $\#S^d_p$ can be obtained with Stirling's formula (see [1]),

$$n! = \sqrt{2\pi n}\; n^n \exp\Big(-n + \frac{\theta}{12n}\Big), \qquad n > 0,$$

for some $\theta \in (0,1)$. To obtain an upper bound for $\#S^d_p$, set $\theta = 1$ for the factorial in the numerator and $\theta = 0$ in the denominator. This leads to

$$\#S^d_p \le \mu_0 \Big(\frac{p+d}{pd}\Big)^{1/2} \Big(1 + \frac{d}{p}\Big)^{p} \Big(1 + \frac{p}{d}\Big)^{d}, \qquad (35)$$

where $\mu_0 = \exp(1/12)/\sqrt{2\pi} \approx 0.43$. Since the term $(1 + p/d)^d \le e^p$ is bounded in $d$, the cardinality of $S^d_p$ grows like a polynomial in the dimension if $p$ is fixed. However, the subsequent error analysis shows that the expansion order must be increased when the dimension is increased. We will provide more accurate complexity estimates later on.

We will now discuss the complexity of the three translations in the FGT algorithm.

MtL translation. The sum in the inner loop of the MtL operation is computed once for every $(\alpha_1,\ldots,\alpha_j,\beta_{j+1},\ldots,\beta_d) \in S^d_p$. The number of terms in the sum is variable, but bounded above by $p+1$. Finally, the $j$-loop is executed once for every direction. We choose the number of multiplications as a measure for the complexity; thus we see that there are

$$N_{MtL} \le d\, \#S^d_p\, (p+1) = d\,(p+1)\binom{p+d}{d}$$

operations.
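The count $\#S^d_p = \binom{p+d}{d}$ is a standard stars-and-bars identity, and it is cheap to sanity-check by brute force (a sketch):

```python
from itertools import product
from math import comb

def simplex_card(p, d):
    """#S_p^d: number of multi-indices alpha in N_0^d with |alpha| <= p."""
    return comb(p + d, d)

def simplex_card_brute(p, d):
    """Direct enumeration, for checking small cases only."""
    return sum(1 for a in product(range(p + 1), repeat=d) if sum(a) <= p)
```

The brute-force version is exponential in $d$ and only serves to confirm the closed form on small cases.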
QtM and LtP translations. The computation of the moments is done by a straightforward evaluation of the sum (32). Likewise, the computation of the potentials is done by evaluation of the sum (30) for every evaluation point. Hence the complexities of these operations are $N_{QtM} = N_T\, M$ and $N_{LtP} = N_T\, N$, respectively, where $N_T$ is the cost to compute $T_\alpha(x)$ for a given $x$ and all $\alpha \in S^d_p$.

The obvious method for evaluating the multivariate Chebyshev polynomials is the following algorithm.

Direct computation of $T_\alpha(x)$, $\alpha \in S^d_p$.
1. Compute the single-variate Chebyshev polynomials $T_k(x_j)$ for $k = 0,\ldots,p$ and $j = 1,\ldots,d$.
2. Set $T_\alpha(x) = T_{\alpha_1}(x_1)\cdots T_{\alpha_d}(x_d)$ for $\alpha \in S^d_p$.

The cost of Step 1 is of lower order and will be neglected. The main contribution comes from Step 2, which entails

$$N_{T,D} = (d-1)\,\#S^d_p = (d-1)\binom{p+d}{d}$$

multiplications. Since $T_\alpha$ must be computed once for every source and every evaluation point, it is worthwhile to optimize this calculation. An alternative algorithm is based on the observation that the multi-indices can be generated by recurrence. From the definitions it is clear that

$$S^j_p = \bigcup_{\alpha_j=0}^{p} \big\{ (\hat\alpha, \alpha_j) : \hat\alpha \in S^{j-1}_{p-\alpha_j} \big\}.$$

This motivates the following algorithm.

Computation of $T_\alpha(x)$, $\alpha \in S^d_p$, by recurrence.
1. Compute the single-variate Chebyshev polynomials $T_k(x_j)$ for $k = 0,\ldots,p$ and $j = 1,\ldots,d$.
2. Set $T^{(1)}_{\alpha_1} = T_{\alpha_1}(x_1)$ for $\alpha_1 = 0,\ldots,p$.
3. Compute

    for j = 2 : d
        for $\alpha_j$ = 0 : p
            for $\hat\alpha \in S^{j-1}_{p-\alpha_j}$
                $T^{(j)}_{\hat\alpha,\alpha_j} = T_{\alpha_j}(x_j)\, T^{(j-1)}_{\hat\alpha}$
            end
        end
    end
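The recurrence can be sketched compactly with a dictionary keyed by multi-index (my illustration for clarity; an optimized C implementation would use flat arrays). Each entry of $S^d_p$ beyond the first dimension costs exactly one multiplication:

```python
import numpy as np

def cheb_simplex(x, p):
    """T_alpha(x) for all alpha in S_p^d, built dimension by dimension."""
    x = np.asarray(x, dtype=float)
    # Step 1: single-variate values T_k(x_j), shape (p+1, d).
    T1 = np.cos(np.outer(np.arange(p + 1), np.arccos(x)))
    # Step 2: start with the first dimension.
    vals = {(a,): T1[a, 0] for a in range(p + 1)}
    # Step 3: extend index by index, one multiplication per new entry.
    for j in range(1, len(x)):
        vals = {ah + (aj,): v * T1[aj, j]
                for ah, v in vals.items()
                for aj in range(p - sum(ah) + 1)}   # keep |alpha| <= p
    return vals
```

The final dictionary has exactly $\binom{p+d}{d}$ entries, and each value equals the product $T_{\alpha_1}(x_1)\cdots T_{\alpha_d}(x_d)$.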
To assess the complexity of this algorithm we count the number of multiplications in Step 3:

$$N_{T,R} = \sum_{j=2}^{d} \sum_{\alpha_j=0}^{p} \#S^{j-1}_{p-\alpha_j} = \sum_{j=2}^{d} \sum_{\alpha_j=0}^{p} \binom{p-\alpha_j+j-1}{j-1} = \sum_{j=2}^{d} \binom{p+j}{j} = \binom{p+d+1}{d} - (p+2),$$

where we have used well-known summation properties of the binomial coefficients [1]. Comparing the direct computation of $T_\alpha(x)$ with the computation by recurrence leads, up to lower order terms, to the ratio

$$\frac{N_{T,R}}{N_{T,D}} \approx \frac{p+d+1}{(p+1)(d-1)},$$

which shows that the computation by recurrence can be significantly faster than the direct computation. Neglecting lower order terms, the total cost of the fast Gauss transform is

$$N_{FGT} = N_{MtL} + N_{T,R}\,(N + M) \approx \binom{p+d}{d}\Big( d\,(p+1) + \frac{p+d+1}{p+1}\,(M+N) \Big).$$

In a typical situation $N, M \gg p, d$; thus the cost of computing the QtM and LtP translations dominates over the cost of computing the MtL transform.

6 Multivariate Approximation Error

To estimate the error introduced by the fast Gauss transform we proceed in a similar manner as in Section 3.4. We begin with the expansion of the error

$$\Phi(x) - \Phi_p(x) = \sum_{\alpha \in \mathbb{N}_0^d} \hat\lambda_\alpha\, T_\alpha(x).$$

With a calculation similar to the one-dimensional case in Section 3.4 we obtain

$$\|\Phi - \Phi_p\|_{L^2_w} = \Big( \sum_{\alpha} \frac{\pi^d}{\gamma_\alpha} |\hat\lambda_\alpha|^2 \Big)^{1/2} = \|\hat\lambda\|_{l^2_w} \le \Big(\frac{2}{\pi}\Big)^{d} \|\hat R_p\|_{L^2_w \times L^2_w}\, \|\mu\|_{l^2_w},$$
where $\hat R_p$ is the residual

$$\hat R_p(x,y) = \sum_{(\alpha,\beta) \notin I^d_p} E_{\alpha,\beta}\, T_\alpha(x)\, T_\beta(y).$$

Here, the index set $I^d_p$ denotes the indices that are included in the MtL translation, see equation (34). In the estimate of the residual below, it is not necessary to completely characterize this set. The only important property is that, by construction, $\{(\alpha,\beta) : |\alpha| + |\beta| \le p\} \subset I^d_p$ holds. There are fewer terms in $\hat R_p$ than there are in $R_p$, defined by (28); hence it follows from orthogonality that

$$\|\hat R_p\|_{L^2_w \times L^2_w} \le \|R_p\|_{L^2_w \times L^2_w}. \qquad (36)$$

Because of the best approximation property we can use the radially symmetric approximation as an upper bound:

$$\|R_p\|^2_{L^2_w \times L^2_w} \le \int_C \int_C \Big| R_p\Big(\frac{\delta}{4}, \frac{\|x-y\|}{2}\Big) \Big|^2 w(x)\, w(y)\,dx\,dy \le \pi^{2d} \max_{x, y \in C} \Big| R_p\Big(\frac{\delta}{4}, \frac{\|x-y\|}{2}\Big) \Big|^2.$$

Thus estimate (11) for the remainder of the one-dimensional Gaussian can be applied. It follows that

$$\|\Phi - \Phi_p\|_{L^2_w} \le 2^{d+1} \sqrt{\frac{\delta\pi}{8}}\; \frac{\exp\big[-\tilde p\,\kappa(\tfrac{\delta}{4}\tilde p)\big]}{1 - \exp\big[-2\kappa(\tfrac{\delta}{4}\tilde p)\big]}\; \|\mu\|_{l^2_w}. \qquad (37)$$

We now discuss how the order should be adjusted to the dimension such that the error is controlled. We consider a relationship of the form

$$p = \mu d \qquad (38)$$

for some constant $\mu > 0$. Then, using the monotonicity of $\kappa$, simple algebra leads to the estimate

$$2^{d+1} \sqrt{\frac{\delta\pi}{8}}\; \frac{\exp\big[-\tilde p\,\kappa(\tfrac{\delta}{4}\tilde p)\big]}{1 - \exp\big[-2\kappa(\tfrac{\delta}{4}\tilde p)\big]} \le c \sqrt{\frac{\delta\pi}{8}}\; \exp\Big[ d\Big( \ln 2 - \mu\,\kappa\Big(\frac{\delta\mu}{4}\Big) \Big) \Big], \qquad (39)$$

where $c = \big(1 - \exp[-2\kappa(\delta\mu/4)]\big)^{-1}$ is an unimportant constant. For error control, the argument of the exponential function must not be positive. This leads to a condition on $\mu$:

$$\mu\,\kappa\Big(\frac{\delta\mu}{4}\Big) \ge \ln 2.$$

Since $\kappa$ is positive and monotonically increasing, this condition can be satisfied for any given $\delta$. With condition (38) the complexity of the FGT does no longer
grow algebraically in the dimension. In fact, estimate (35) implies that the growth is exponential,

$$\#S^d_p \le \mu_1\, \mu_2^{\,d},$$

with constants

$$\mu_1 = \mu_0\Big(1 + \frac{1}{\mu}\Big)^{1/2} \qquad\text{and}\qquad \mu_2 = (1+\mu)\Big(1 + \frac{1}{\mu}\Big)^{\mu}.$$

7 Numerical Results

We have implemented the fast Gauss transform for arbitrary dimension and expansion order. The code was written in C, compiled with the gcc compiler and tested on a single core of a dual core AMD Opteron with 2400 MHz clock speed. The system's memory of eight Gbyte was sufficient to run all examples described below in core memory.

We compute the Gauss transform for the case that the source and evaluation points coincide. Their locations and the source strengths are generated randomly with the C-library routine rand(). The output of the random number generator is normalized such that the points are in $C$ and the source strengths are in the interval $[0,1]$. We do not re-seed this routine between successive runs, which ensures that we obtain the same sources when comparing results with different parameters. To assess the accuracy we compute the potentials at the source points using the direct $O(N^2)$ algorithm and compute the maximal relative error at the node points,

$$\epsilon := \frac{\displaystyle\max_{i=1,\ldots,N} |\Phi(x_i) - \Phi_p(x_i)|}{\displaystyle\max_{i=1,\ldots,N} |\Phi(x_i)|}.$$

Note that our theoretical results mostly pertain to the $L^2_w$-error. However, this quantity is not directly available, and hence we resort to the maximal error. Since the expansion coefficients of the Gauss kernel $E_{k,l}$ have no closed-form expression, we investigated computing the integrals (14) by Gauss-Chebyshev quadrature as well as replacing the coefficients by the $E^{(p)}_{k,l}$ defined in (18). This resulted in only marginal differences in the results. To illustrate the behavior of the fast Gauss transform, we have performed three experiments, which are described as follows.

1. Test of the error as a function of the variance $\delta$ and the expansion order. The results are shown in Figure 2. The dimension is three and there are 10,000 points. The error decreases super-exponentially with the order, and the rate of decrease deteriorates if the variance is reduced.
Thus the behavior is similar to that of the Chebyshev coefficients illustrated in Figure 1. The CPU time for the fast Gauss transform varies between 0.03 sec ($p = 4$) and 1.63 sec ($p = 48$); the CPU time for the direct computation is about
20 sec. Note that the plots only show errors for even values of $p$. The error for the next-larger odd value is hardly smaller, because there are only even terms in the Chebyshev series of the Gaussian. If odd values of $p$ were included, the plots would exhibit staircase-like behavior.

Figure 2: Error vs. order for various values of $\delta$ ($\delta = 1/4$, $1/16$, $1/64$) and fixed $d = 3$.

2. Dependence of the CPU time as a function of the number of points and the dimension, for fixed variance $\delta = 0.5$. We select the expansion order such that the relative error is not greater than 0.02. By experimentation we find empirically that this can be accomplished if the orders are chosen as in Table 1.

Table 1: Expansion orders used for each dimension in the results of Figure 3.

The results in Figure 3 clearly show the linear scaling in the number of points for a given dimension. For a higher dimension the constant factor is larger, resulting in parallel lines on the logarithmic plot. The figure also displays the quadratic scaling of the direct evaluation. Here the constant factor depends in a much weaker way on the dimension. As a result, the fast Gauss transform is only faster if the number of points is sufficiently large, especially in high dimensions. As is evident from the plot, the cross-over for dimensions less than or equal to eight occurs before 2,000, and for dimension ten at about N = 30,000.
Figure 3: CPU time vs. number of points, $\delta = 0.5$; fast (F) and direct (D) evaluation for $d = 4, 6, 8, 10$.

3. Dependence of the CPU time as a function of the dimension, for a fixed number of points and fixed expansion order. To compensate for bigger errors in higher dimensions we increase the variance by the formula $\delta = 0.2\,d$. According to the theory, the $L^2_w$-error for a fixed expansion order should still increase with the dimension. However, as Figure 4 suggests, the relative maximal error does not appear to grow. According to our complexity estimates the CPU time should grow like a polynomial. This is well reproduced in the logarithmic plot, which shows that the curve for the CPU time approximates a straight line well when the dimension is high enough.

8 Acknowledgments

The proof of Lemma 2.1 was suggested by an anonymous reviewer. Alexander Weckiewicz's research was supported by a grant from John McCaw.

References

[1] M. Abramowitz and I. Stegun, editors. Handbook of Mathematical Functions. U.S. Govt. Print. Off., 1964.

[2] F. Andersson and G. Beylkin. The fast Gauss transform with complex parameters. J. Comput. Phys., 203(1):274-286, 2005.
Figure 4: CPU time (o) and relative error (x) vs. dimension, when $p = 9$, $N = 10^4$ and $\delta = 0.2\,d$.

[3] B. Baxter and G. Roussos. A new error estimate of the fast Gauss transform. SIAM J. Sci. Comput., 24(1):257-259, 2002.

[4] M. Broadie and Y. Yamamoto. Application of the fast Gauss transform to option pricing. Management Science, 49(8), 2003.

[5] A. Elgammal, R. Duraiswami, and L.S. Davis. Efficient kernel density estimation using the fast Gauss transform with applications to color modeling and tracking. IEEE Trans. Pattern Anal. and Mach. Intell., 25(11), 2003.

[6] L. Greengard and J. Strain. The fast Gauss transform. SIAM J. Sci. Statist. Comput., 12:79-94, 1991.

[7] L. Greengard and X. Sun. A new version of the fast Gauss transform. Doc. Math. J. DMV, Extra Volume ICM 1998, III, 1998.

[8] O. Livne and B. Wright. Fast evaluation of smooth radial basis function expansions. Electronic Transactions on Numerical Analysis, 23:263-287, 2006.

[9] M. Mahdaviani, N. de Freitas, B. Fraser, and F. Hamze. Fast computational methods for visually guided robots. In Proc. IEEE Intl. Conf. on Robotics and Automation, 2005.

[10] V.C. Raykar, C. Yang, R. Duraiswami, and N. Gumerov. Fast computation of sums of Gaussians in high dimensions. Technical Report CS-TR-4767,
Department of Computer Science, University of Maryland, College Park, 2005.

[11] D.W. Scott. Multivariate Density Estimation. Wiley, 1992.

[12] X. Sun and Y. Bao. A Kronecker product representation of the fast Gauss transform. SIAM J. Matrix Anal. Appl., 24(3), 2003.

[13] X. Wan and G.E. Karniadakis. A sharp error estimate for the fast Gauss transform. J. Comput. Phys., 219:7–12, 2006.

[14] C. Yang, R. Duraiswami, N.A. Gumerov, and L. Davis. Improved fast Gauss transform and efficient kernel density estimation. In Proceedings of the Ninth IEEE International Conference on Computer Vision, 2003.

[15] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In 20th International Conference on Machine Learning, 2003.
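The polynomial-in-d growth of the CPU time observed in the experiments can be checked against the dominant factor in the operation count, namely the number of d-variate Chebyshev terms of total degree at most p, which is the binomial coefficient binom(d+p, p). The following minimal Python sketch tabulates this factor (the helper name `expansion_terms` is illustrative, not from the paper):

```python
from math import comb

def expansion_terms(d: int, p: int) -> int:
    """Number of d-variate polynomial terms of total degree <= p,
    i.e. the size of a truncated degree-p expansion."""
    return comb(d + p, p)

# For a fixed expansion order p this factor grows like d**p / p!,
# i.e. only polynomially in the dimension d.
p = 9  # the expansion order used in Figure 4
for d in (2, 4, 6, 8, 10):
    print(d, expansion_terms(d, p))  # e.g. d = 2 gives 55 terms
```

For fixed p = 9 the count grows like d^9/9!, so a plot of log(CPU time) against log(d) approaches a straight line of slope roughly p for large d, consistent with the behavior seen in the logarithmic plot.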