Liberating the Dimension for Function Approximation and Integration


G. W. Wasilkowski

Abstract We discuss recent results on the complexity and tractability of problems dealing with ∞-variate functions. Such problems, especially path integrals, arise in many areas including mathematical finance, quantum physics and chemistry, and stochastic differential equations. It is possible to replace the ∞-variate problem by one that has only d variables, since the difference between the two problems diminishes as d approaches infinity. Therefore, one could use algorithms obtained in the Information-Based Complexity study, where problems with arbitrarily large but fixed d have been analyzed. However, to get optimal results, the choice of a specific value of d should be a part of an efficient algorithm. This is why the approach discussed in the present paper is called liberating the dimension. Such a choice should depend on the cost of sampling d-variate functions and on the error demand ε. Actually, as recently observed for a specific class of problems, optimal algorithms come from a family of changing dimension algorithms, which approximate ∞-variate functions by a combination of special functions, each depending on a different set of variables. Moreover, each such set contains no more than d(ε) = O(ln(1/ε)/ln(ln(1/ε))) variables. This is why the new algorithms have total cost polynomial in 1/ε even if the cost of sampling a d-variate function is exponential in d.

1 Introduction

We discuss some recent results on computational problems dealing with functions of infinitely many variables, which are called ∞-variate functions. Such problems arise in many areas including mathematical finance, quantum physics and chemistry, and the solution of deterministic and stochastic differential equations. The main source of such problems is probably given by path integrals. A partial list of references includes [3, 4, 5, 6, 7, 12, 13, 14, 17, 36]. Actually, all problems involving expectations of
stochastic processes X(t) can be viewed as integration problems for ∞-variate functions, i.e., path integration. Indeed, consider the expectation E(V(X(t))) for a function V and a Gaussian process X, e.g., Brownian motion. Due to the Karhunen-Loève expansion, X(t) = ∑_{j=1}^∞ x_j g_j(t) for i.i.d. N(0,1) random variables x_j and some functions g_j, the expectation is the integral of the ∞-variate function

f(x_1, x_2, ...) := V( ∑_{j=1}^∞ x_j g_j(t) )

with respect to the probability density functions ρ(x_i) = e^{−x_i²/2}/√(2π). One of the main tools used so far in practice is a variant of the Monte Carlo algorithm; however, it may be slow. Since typical ∞-variate functions can be approximated by functions with a finite but sufficiently large number d of variables, the volume of results from Information-Based Complexity (IBC for short) could be applied; see, e.g., [21]. However, we believe that such an approach to ∞-variate problems would not yield the most efficient algorithms.

G. W. Wasilkowski, Department of Computer Science, University of Kentucky, Lexington, KY 40506, USA, greg@cs.uky.edu
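As a concrete illustration of the path-integration setup above, here is a minimal Monte Carlo sketch (not one of the optimal algorithms discussed later): it truncates the Karhunen-Loève expansion of Brownian motion on [0, T] after d terms and averages V over i.i.d. N(0,1) coordinates. The function names and the choices d = 100, n = 4000 are illustrative assumptions only.

```python
import math
import random

def kl_brownian(x, t, T=1.0):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, T]:
    X(t) ~ (sqrt(2*T)/pi) * sum_{j=1}^d sin((j - 1/2)*pi*t/T) * x_j / (j - 1/2)."""
    s = sum(math.sin((j - 0.5) * math.pi * t / T) * xj / (j - 0.5)
            for j, xj in enumerate(x, start=1))
    return math.sqrt(2.0 * T) / math.pi * s

def mc_expectation(V, t, d=100, n=4000, seed=7):
    """Plain Monte Carlo estimate of E(V(X(t))) with d KL terms and n samples."""
    rng = random.Random(seed)
    return sum(V(kl_brownian([rng.gauss(0.0, 1.0) for _ in range(d)], t))
               for _ in range(n)) / n

# Sanity check: for standard Brownian motion on [0, 1], E(X(1)^2) = Var(X(1)) = 1.
est = mc_expectation(lambda y: y * y, t=1.0)
```

Note that the cost of each sample grows with the truncation level d, which is exactly the dependence the cost function $(d) introduced below is meant to capture.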

Indeed, the majority of IBC papers on the complexity of multivariate problems consider spaces of functions with d variables for finite yet arbitrarily large d; see again [21] and the papers cited there. A typical question addressed in these papers is: how does the cost depend on the error demand ε and on d? There are many positive results. However, since d may be arbitrarily large independently of ε, there are also many negative results. We are convinced that when dealing with ∞-variate problems the selection of d should be a part of efficient algorithms and, in particular, should depend on the cost of sampling d-variate functions, which is denoted here by $(d). For instance, sampling a d-variate polynomial of degree 2 requires $(d) = O(d²) arithmetic operations, whereas sampling polynomials of degree 10 is more expensive, $(d) = O(d^10). When simulating the Brownian path X(t), the Karhunen-Loève expansion is usually truncated. For instance, we may have

X(t; x_1, x_2, ...) ≈ (√(2T)/π) ∑_{j=1}^d sin((j − 1/2)πt/T) x_j/(j − 1/2).

Hence, again, the cost depends on d and it would be reasonable to take $(d) = O(d) in this case. Equally importantly, the value of d should depend on the error demand ε. More precisely, d = d(ε) should be a function of ε chosen so that the cost of computing an ε-approximation to the original ∞-variate problem is minimized. As we shall see later, for some problems d(ε) increases surprisingly slowly with decreasing ε. This point is important and shows the difference between the study of ∞-variate problems and the study of tractability of multivariate problems. For ∞-variate problems, we are interested in algorithms that have good properties (e.g., small cost) only for the pairs (ε, d(ε)) with ε ∈ (0,1), whereas for multivariate problems, good properties should hold for all pairs (ε, d) with ε ∈ (0,1) and d = 1, 2, .... This is why there are problems with negative tractability results if all pairs (ε, d) are considered and positive results if only the pairs (ε, d(ε)) are of interest. Such an approach
for ∞-variate functions was considered in [24, 31] for approximating Feynman-Kac type integrals, and more recently in [2, 8, 9, 10, 16, 18, 19, 23] for approximating more general integrals, as well as in [33, 34] for function approximation. The presentation of this paper is based on results from [16, 23, 33, 34]. As in the four papers just mentioned, the functions to be integrated or approximated belong to a quasi-reproducing kernel Hilbert space (or Q-RKH space for short). This means that function evaluation may be a discontinuous functional at some sampling points. We restrict the attention to the changing dimension algorithms (or CD algorithms for short) introduced in [16], since they provide optimal results, modulo logarithmic terms, with sharp bounds on the tractability exponents. The CD algorithms approximate only the most important terms from a special Fourier expansion of the function being approximated. Each term depends on a different set of variables. Quite surprisingly, each set contains at most O(ln(1/ε)/ln(ln(1/ε))) variables. This allows efficient algorithms even when the cost function $(d) is exponential in d. The approach of using optimal algorithms for approximating the original ∞-variate problem without prespecifying the value of d is what we call liberating the dimension.
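To get a feel for how slowly a bound of the form d(ε) = O(ln(1/ε)/ln(ln(1/ε))) grows, here is a small numeric sketch; taking the implied constant to be 1 is an assumption for illustration only.

```python
import math

def d_eps(eps):
    """Illustrative bound d(eps) = ln(1/eps) / ln(ln(1/eps)) on the number
    of active variables per term (implied constant taken as 1)."""
    L = math.log(1.0 / eps)
    return L / math.log(L)

# Shrinking eps by many orders of magnitude barely moves d(eps) ...
table = {eps: d_eps(eps) for eps in (1e-2, 1e-4, 1e-8, 1e-16)}

# ... so even an exponential per-sample cost $(d) = e^d stays polynomial
# in 1/eps, since e^{d(eps)} = (1/eps)^{1 / ln(ln(1/eps))}.
cost_per_sample = math.exp(d_eps(1e-8))
```

For instance, d(ε) stays around 10 even at ε = 10⁻¹⁶, so e^{d(ε)} is far smaller than 1/ε itself.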

2 Basic Concepts

In this section, we recall basic definitions and concepts used in the paper. For more detailed discussions, we refer to [16, 23, 33, 34].

2.1 Quasi-Reproducing Kernel Hilbert Spaces

We follow here the model introduced in [16] and extended in [33]. The spaces F_γ of ∞-variate functions are defined as weighted sums of tensor products of a space of univariate real functions. More precisely, for a Borel measurable set D ⊆ ℝ, let F be a separable reproducing kernel Hilbert space (or RKH space for short) of real functions with domain D whose kernel is denoted by K.¹ To omit the trivial case, we always assume that K ≠ 0. To stress that F is generated by K, we will often write F = H(K). We will assume throughout the paper that

1 ∉ F,  (1)

where 1 denotes the constant function f ≡ 1. When the information used by algorithms is restricted to function values, we will additionally assume that

K(a, a) = 0  (2)

for a point a ∈ D called an anchor. We are ready to define the class F_γ. Let D^∞ be the set of infinite sequences x = [x_1, x_2, ...] with x_i ∈ D. For a finite subset u of ℕ_+ = {1, 2, ...}, define the reproducing kernel K_u: D^∞ × D^∞ → ℝ by

K_u(x, y) := ∏_{j∈u} K(x_j, y_j) for all x, y ∈ D^∞, with K_∅ ≡ 1.

The RKH space generated by K_u is denoted by H_u = H(K_u), where H_∅ is the space of constant functions. Although, formally, the functions from H_u have D^∞ as their domain, they depend only on the variables whose indices are listed in u. Such variables are referred to as active variables. In a number of important applications, consecutive variables of the functions have diminishing importance and/or the spaces of functions have small effective dimension; see, e.g., [1, 27, 28]. Such function spaces can be modeled by using weights γ = {γ_u}_u, where each γ_u is a non-negative number. The role of γ_u is to quantify the importance of the group of variables with indices in u; the larger γ_u, the more important the group. In particular, γ_u = 0 means that
the corresponding group of variables does not contribute to the functions. For example, when γ_u = 0 whenever |u| ≥ 3, every f is a sum of functions each depending on at most two variables. Although the results of [33, 34] hold for general weights, for simplicity of presentation we restrict the attention to product weights of the form

γ_u = ∏_{j∈u} γ_j and γ_∅ = 1,

where the γ_j are positive numbers. Without loss of generality, we assume that they are ordered, γ_j ≥ γ_{j+1} for j ≥ 1. Consider next H_γ, the pre-Hilbert space spanned by the spaces H_u and equipped with the inner product

⟨f, g⟩ := ∑_u γ_u^{−1} ⟨f_u, g_u⟩_{H_u}

¹ The results of [33] hold for general Hilbert spaces F. We restrict the attention to RKH spaces to simplify the presentation.

for f and g such that ∑_{u⊂ℕ_+} γ_u^{−1} ‖f_u‖²_{H_u} < ∞ and ∑_{u⊂ℕ_+} γ_u^{−1} ‖g_u‖²_{H_u} < ∞. Finally, the space F_γ is the completion of H_γ with respect to the inner product introduced above. Since 1 ∉ H_u for all u ≠ ∅, the subspaces H_u are mutually orthogonal and any function f ∈ F_γ has the unique representation

f = ∑_u f_u with f_u ∈ H_u.  (3)

Clearly, F_γ is also separable. Moreover, it is a RKH space iff

∑_u γ_u K_u(x, x) < ∞ for all x ∈ D^∞.  (4)

Since ∑_{u⊂ℕ_+} γ_u K_u(x, x) = ∏_{j=1}^∞ (1 + γ_j K(x_j, x_j)), the condition (4) holds iff

sup_{x∈D} K(x, x) < ∞ and ∑_{j=1}^∞ γ_j < ∞,

and then

K_γ(x, y) := ∑_u γ_u K_u(x, y) = ∏_{j=1}^∞ (1 + γ_j K(x_j, y_j))

is well defined and is the reproducing kernel of F_γ. If (4) does not hold, then function sampling, L_x(f) := f(x), is a discontinuous or ill-defined functional for some x ∈ D^∞. However, even then, L_x is continuous when x has only finitely many components different from the anchor a. Indeed, for given x ∈ D^∞ and u, let [x; u] be shorthand notation for the point with active variables listed in u, i.e.,

[x; u] := y = [y_1, y_2, ...] with y_j := x_j if j ∈ u, and y_j := a if j ∉ u.  (5)

Then f([x; u]) = ∑_{v⊆u} f_v(x) and

‖L_{[x;u]}‖² = ∑_{v⊆u} γ_v K_v(x, x) < ∞.

Of course, [x; ∅] = a = [a, a, ...] and f([x; ∅]) = f_∅ for any x ∈ D^∞ and any f ∈ F_γ. If (4) does not hold, we refer to such spaces as quasi-reproducing kernel Hilbert spaces (Q-RKH spaces for short). Important examples of such spaces are provided by those generated by the Wiener kernel discussed in the following example.

Example Consider K(x, y) = min(x, y) with D = [0, 1] or D = [0, ∞). In this case, F consists of (locally) absolutely continuous functions with f(0) = 0 and f′ ∈ L_2(D), and the anchor equals a = 0. Clearly, if ∑_j γ_j < ∞, then F_γ is a RKH space when D = [0, 1], and it is only a Q-RKH space when D = [0, ∞), since sup_{x∈[0,∞)} K(x, x) = ∞.

2.2 Integration Problem

Let ρ be a given probability density (p.d.) function on D. We are interested in approximating integrals

INT(f) := lim_{d→∞} ∫_{D^d} f(x_1, ..., x_d, a, a, ...) ∏_{j=1}^d ρ(x_j) d(x_1, ..., x_d)

for f ∈ F_γ. We assume that INT is a well defined and continuous functional on F_γ. Then

‖INT‖² = ∑_u γ_u C_0^{|u|} = ∏_{j=1}^∞ (1 + γ_j C_0) < ∞,  (6)

where

C_0 := ∫_D ∫_D ρ(x) K(x, y) ρ(y) dy dx.

Thus ‖INT‖ < ∞ iff

C_0 < ∞ and ∑_{j=1}^∞ γ_j < ∞.  (7)

This is why we will assume that (7) holds whenever the integration problem is considered. Moreover, we will also assume that C_0 > 0 since, otherwise, INT(f) = f(a) for all functions from F_γ, which makes the integration problem trivial.

2.3 Function Approximation Problem

As in the previous section, ρ is a given probability density on D. Without loss of generality, we assume that it is positive almost everywhere on D. Then L_2(D, ρ), endowed with the norm

‖f‖²_{L_2(D,ρ)} := ∫_D f(x)² ρ(x) d(x),

is a well defined Hilbert space. Following [33, 34], we assume that H(K) is continuously embedded in L_2(D, ρ), i.e., H(K) ⊆ L_2(D, ρ) and

C_1 := sup_{f∈H(K)} ‖f‖²_{L_2(D,ρ)} / ‖f‖²_{H(K)} < ∞,

with the convention 0/0 = 0. Next, consider the space G_∞ consisting of functions from F_γ with the norm defined by

‖f‖²_{G_∞} := ∑_u ‖f_u‖²_{L_2(D^u,ρ_u)}.  (8)

Note that the last norm is always finite. We are interested in approximating the embedding operator

APP: F_γ → G_∞ given by APP(f) = f.

The problem is well defined if APP is continuous, and this holds iff

‖APP‖² = sup_u γ_u C_1^{|u|} < ∞.

It is well known that C_1 is the largest eigenvalue of the integral operator W: H(K) → H(K) given by

W(f)(x) := ∫_D f(t) K(t, x) ρ(t) dt.  (9)

We want to stress that the space G_∞ is very special and, perhaps, not always interesting from a practical point of view. In particular, it can happen that the approximation problem is easier than the integration problem, and that

sup_{f∈F_γ} |INT(f)| / ‖f‖_{G_∞} < ∞ does not hold in general.  (10)

Indeed, take the reproducing kernel K such that C_0 > 0 and C_1 < ∞. Note that ∫_D K(x, x) ρ(x) d(x) < ∞ implies that C_1 < ∞. For γ_j = j^{−β} with β ∈ (0, 1], we then have

‖INT‖ = ∞ while ‖APP‖² = max_{k∈ℕ} C_1^k/(k!)^β < ∞.

We chose such a space G_∞ in [33, 34] as the first step in the study of approximation for ∞-variate functions. The results obtained there will be used in a forthcoming paper [35], see also Section 4.3, where function approximation is considered with G_∞ replaced by the Hilbert space L_2(D^∞, ρ^∞) whose norm is given by

‖f‖²_{L_2(D^∞,ρ^∞)} = lim_{d→∞} ∫_{D^d} f(x_1, ..., x_d, a, a, ...)² ∏_{j=1}^d ρ(x_j) d(x_1, ..., x_d).  (11)

Of course, we will need stronger assumptions on F_γ for ‖f‖_{L_2(D^∞,ρ^∞)} to be well defined for all f ∈ F_γ. However, then |INT(f)| ≤ ‖f‖_{L_2(D^∞,ρ^∞)} for all f ∈ F_γ, and integration is no harder than approximation.

2.4 Algorithms

Let T be the solution operator whose values T(f) we want to approximate; T = INT for the integration problem, and T = APP for the approximation problem. Since F_γ is a Hilbert space, we may restrict the attention to linear algorithms, see, e.g., [26],

A_n(f) = ∑_{i=1}^n L_i(f) g_i.  (12)

Here the L_i are continuous linear functionals and their values {L_i(f)}_{i=1}^n provide information about the specific function f. The elements g_i are numbers for integration and functions from G_∞ for approximation. If the L_i may be arbitrary continuous linear functionals, then we deal with unrestricted linear information. In many applications, including integration, only function samplings L_i(f) = f(t_i) are allowed. Then A_n(f) = ∑_{i=1}^n f(t_i) g_i with t_i ∈ D^∞, and this corresponds to standard information. Since, in general, F_γ is only a Q-RKH space, the sampling points t_i used by the algorithms have to be restricted to those that have only finitely many active variables, see (5), i.e., t_i = [x_i; u_i] for some x_i ∈ D^∞ and u_i. That is, the algorithms using standard information are of the form

A_n(f) = ∑_{i=1}^n f([x_i; u_i]) g_i.  (13)

We believe that the cost of evaluating f at t_i = [x_i; u_i] should depend on the number |u_i| of active variables in t_i. That is why we assume that the cost equals $(|u_i|) for a given cost function $: ℕ → [1, ∞]. At this moment, we only require that $ is monotonically non-decreasing. Examples of $ include $(d) = (1+d)^α, $(d) = e^{d^α}, and $(d) = e^{e^{d^α}} for α ≥ 0. The (information) cost of A_n is defined as the total cost of sampling f at the points t_i = [x_i; u_i], i.e.,

cost(A_n) := ∑_{i=1}^n $(|u_i|).
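The cost model above is easy to make concrete. A minimal sketch (the function names and the sampling plan are illustrative assumptions, not taken from the paper):

```python
import math

def cost(sample_sets, dollar):
    """Information cost of an algorithm using standard information:
    cost(A_n) = sum_i $(|u_i|), where u_i is the set of active variables
    of the i-th sample point [x_i; u_i]."""
    return sum(dollar(len(u)) for u in sample_sets)

# Two cost functions of the kinds mentioned in the text.
poly = lambda d: (1 + d) ** 2   # $(d) = (1+d)^alpha with alpha = 2
expo = lambda d: math.exp(d)    # $(d) = e^d

# A changing-dimension sampling plan: most points use few active variables.
plan = [set(), {1}, {1}, {1, 2}, {1, 2, 3}]
c_poly = cost(plan, poly)  # 1 + 4 + 4 + 9 + 16 = 34
```

This makes visible why keeping the sets u_i small matters: with an exponential $ the few high-dimensional points dominate the total cost.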

For algorithms that use linear functionals L_i(f), the definition of the cost is extended in a natural way, with the cost of evaluating L_i given as follows. Let L_i(f) = ⟨f, h_i⟩_{F_γ}, where h_i ∈ F_γ is the generator of L_i. For any h ∈ F_γ, let

Var(h) := ⋃ {u : h_u ≠ 0} for h = ∑_u h_u.

Then |Var(h)| is the number of active variables in h, and the cost of L_i(f) is defined as $(|Var(h_i)|). We say that an algorithm A_n of the form (12), with L_i(f) = ⟨f, h_i⟩_{F_γ}, is of fixed dimension (FD) if there is a finite set V ⊂ ℕ_+ such that Var(h_i) = V for all i = 1, 2, ..., n. For example, we may have Var(h_i) = {1, ..., d} for all i. Otherwise, the algorithm is of changing dimension (CD). As observed in [16], CD algorithms may be significantly superior to FD algorithms.

In the worst case setting, the error of A_n is defined by

error^wor(A_n) = error^wor(A_n; F_γ, T) := sup_{‖f‖_{F_γ}≤1} ‖T(f) − A_n(f)‖_G.

In the randomized setting, the choice of the functionals L_i or function sample points [x_i; u_i] may be random. Then the error of a randomized algorithm is defined by

error^ran(A_n) = error^ran(A_n; F_γ, T) := sup_{‖f‖_{F_γ}≤1} ( E ‖T(f) − A_n(f)‖²_G )^{1/2},

where E denotes the expectation with respect to all random parameters in the randomized algorithm A_n.

2.5 Complexity and Tractability

For a given error demand ε > 0, let

comp^sett(ε) = comp^sett(ε; F_γ, T) := inf { cost(A_n) : error^sett(A_n) ≤ ε }

be the minimal cost among algorithms with errors not exceeding ε. Here and elsewhere, sett ∈ {wor, ran} denotes the setting. When only standard information is allowed, we consider of course only algorithms that use function values. To distinguish the complexities with standard and unrestricted linear information, we will sometimes write comp^sett(ε; Λ) or comp^sett(ε; Λ, F_γ, T) with Λ = Λ^std for standard information and Λ = Λ^all for unrestricted linear information. We say that the problem is weakly tractable if the complexity is not exponential in 1/ε, i.e.,

lim_{ε→0} ε · ln( comp^sett(ε) ) = 0.

A stronger notion is polynomial tractability, which means that there are non-negative C and p such that

comp^sett(ε) ≤ C ε^{−p} for all ε > 0.

The smallest such p (more precisely, the infimum of such p) is called the exponent of polynomial tractability, i.e.,

p^sett := limsup_{ε→0} ln( comp^sett(ε) ) / ln(1/ε).

We sometimes write p^sett = p^sett(Λ) or p^sett(Λ, F_γ, T) with Λ ∈ {Λ^all, Λ^std} to stress what type of information is used.
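The two tractability notions above can be checked numerically for given complexity functions. A small sketch (the two sample complexity functions are illustrative assumptions; logs of the complexity are passed directly to avoid overflow):

```python
import math

def eps_ln_comp(ln_comp, eps):
    """The quantity eps * ln(comp(eps)) from the definition of weak
    tractability: the problem is weakly tractable iff this tends to 0
    as eps -> 0. ln_comp(eps) must return ln(comp(eps))."""
    return eps * ln_comp(eps)

ln_poly = lambda e: 3 * math.log(1 / e)  # comp(eps) = eps^-3: polynomially tractable
ln_expo = lambda e: 1 / e                # comp(eps) = e^{1/eps}: the curse

vals_poly = [eps_ln_comp(ln_poly, 10.0 ** -k) for k in (1, 3, 6)]  # tends to 0
vals_expo = [eps_ln_comp(ln_expo, 10.0 ** -k) for k in (1, 3, 6)]  # stays near 1
```

For comp(ε) = ε⁻³ the quantity 3ε·ln(1/ε) vanishes (and the exponent of polynomial tractability is 3), while for comp(ε) = e^{1/ε} it is identically 1, so weak tractability fails.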

3 Results for Integration

We present in this section selected results from [23] for CD algorithms. Recall that these algorithms were defined for the first time in [16] and have the following form:

A_n(f) = ∑_{i=1}^n f([x_i; u_i]) g_i

for some points x_i, sets of active variables u_i, and numbers g_i which may depend on n. Moreover, in the randomized setting, all parameters x_i, u_i, and g_i may be chosen randomly. In what follows, the operator I is the integration operator for functions from H(K),

I(f) = ∫_D f(x) ρ(x) dx.

Theorem 1 Let sett ∈ {wor, ran}. Suppose that the product weights satisfy

γ_j = O(j^{−β}) for β > 1,  (14)

and that there exist a sequence of algorithms {A_n}_n for the univariate problem and positive constants α, c such that A_n uses at most n function evaluations and the error of A_n for the univariate integration problem over the space H(K) satisfies

error^sett(A_n; H(K), I) ≤ c n^{−α} for all n ∈ ℕ.  (15)

Then there are algorithms {A_ε}_ε for the ∞-variate integration problem such that

error^sett(A_ε; F_γ, INT) ≤ ε for all ε > 0

with the following bounds on their cost.

If $(d) = O(e^{k·d}) for some k ≥ 0, then for all p > max(1/α, 2/(β−1)) there exists a number C, depending in particular on p, such that

cost(A_ε) ≤ C ε^{−p} for all ε > 0.

This means that the ∞-variate integration problem is polynomially tractable with exponent at most

max(1/α, 2/(β−1)).

Furthermore, in the worst case setting, the exponent is equal to the maximum above if α and β are sharp and $(d) = Ω(d).

If $(d) = O(e^{e^{k·d}}) for some k ≥ 0, then

lim_{ε→0} ε · ln(cost(A_ε)) = 0.

This means that the ∞-variate integration problem is weakly tractable.

We now comment on this theorem. As shown in [16] for the integration problem in the worst case setting with the Wiener kernel and D = [0, 1], the assumption (14) is necessary for polynomial tractability. If the algorithms A_n are deterministic, then so are the algorithms A_ε. The proof is constructive. The algorithms A_ε are based on Smolyak's construction from [25] and results from [30]. Moreover, the algorithms A_ε use function values at points that have at most d(ε) = o(ln(1/ε)) active variables. This is why the problem is polynomially tractable even when the cost function $ is exponential, and is weakly tractable even when the cost function $ is doubly exponential.

Assume now that the complexity of the univariate integration problem over H(K) is Θ(ε^{−p}). Then we can find algorithms A_n for which (15) holds with α = 1/p, and this value of α is the largest one. Then the exponent of polynomial tractability equals

p^sett = α^{−1} = p whenever β ≥ 1 + 2/p.  (16)

In this case, the ∞-variate problem is roughly of the same complexity as the univariate problem. If β ∈ (1, 1 + 2/p), then the ∞-variate problem is harder than the univariate problem, but we still have polynomial tractability. In this case, however, the exponent can be arbitrarily large. The proof that the exponent is sharp also in this case is based on a lower bound from [16] for the ∞-variate integration problem in the worst case setting. We illustrate the theorem for the Wiener kernel.

Example (continued) For K(x, y) = min(x, y), D = [0, 1], and ρ ≡ 1, the condition (15) holds with α = 1 in the worst case setting and with α = 3/2 in the randomized setting, and both values are sharp. Hence

p^wor = max(1, 2/(β−1)) and p^ran ≤ max(2/3, 2/(β−1)).

Note that the exponent in the randomized setting is smaller than the exponent in the worst case setting if β > 3. It is open what the actual value of p^ran is for β ∈ (1, 4).

4 Results for Approximation

We present in this section selected results from [33] for unrestricted linear information and from [34] for standard information. All of them are for the worst case setting and for the range space G_∞. We then discuss extensions of the results to the randomized setting and to the range space L_2(D^∞, ρ^∞). Recall that for the approximation problem, APP is the embedding operator from F_γ to G_∞. We will also use S to denote the embedding from H(K) to L_2(D, ρ).

4.1 Unrestricted Linear Information

Consider the operator W defined by (9). It is well known, see, e.g., [26], that a necessary condition for polynomial tractability of the approximation problem is a polynomial dependence of the eigenvalues λ_j of W, i.e.,

λ_j = O(j^{−2α}) for some α > 0.  (17)

This is because the errors of optimal algorithms A_n for the univariate approximation over H(K) are equal to

error^wor(A_n; H(K), S) = √(λ_{n+1}) = O(n^{−α}),

or, equivalently,

comp^wor(ε; Λ^all, H(K), S) = $(1) · inf { n : λ_{n+1} ≤ ε² }.

One of the results in [33] is the construction of optimal algorithms for the ∞-variate problem, which allows one to get a necessary and sufficient condition for polynomial tractability with general weights γ_u. Here we state one special result for product weights.

Theorem 2 Consider the worst case setting. Suppose that the product weights satisfy

γ_j = O(j^{−β}) for β > 0  (18)

and the eigenvalues satisfy (17). Then there are algorithms {A_ε}_ε for the ∞-variate approximation problem such that error^wor(A_ε; F_γ, APP) ≤ ε, with the following bounds on their cost.

If $(d) = O(e^{k·d}) for some k ≥ 0, then for all p > max(1/α, 2/β) there exists a number C, depending in particular on p, such that

cost(A_ε) ≤ C ε^{−p} for all ε > 0.

This means that the ∞-variate problem is polynomially tractable with exponent at most

max(1/α, 2/β).

Furthermore, the exponent is equal to the maximum above if α and β are sharp and $(d) = Ω(d).

If $(d) = O(e^{e^{k·d}}) for some k ≥ 0, then

lim_{ε→0} ε · ln(cost(A_ε)) = 0.

This means that the problem is weakly tractable.

As before, the proof is constructive. Moreover, A_ε uses inner products with generators having at most d(ε) active variables, where now

d(ε) = O( ln(1/ε) / ln(ln(1/ε)) ).

Example (continued) For the Wiener kernel, D = [0, 1], and ρ ≡ 1, we have α = 1 and, hence,

p^wor(Λ^all) = max(1, 2/β).

4.2 Standard Information

We have a similar result for algorithms using standard information, see [34, Thm. 7].

Theorem 3 Consider the worst case setting. Suppose that the product weights satisfy (18) and there exists a sequence of algorithms {A_n}_n, each using at most n function evaluations, such that their errors for the univariate approximation problem over the space H(K) satisfy

error^wor(A_n; H(K), S) ≤ c n^{−α} for some α > 0.  (19)

Then there are algorithms {A_ε}_ε for the ∞-variate approximation problem using standard information such that error^wor(A_ε; F_γ, APP) ≤ ε, with the following bounds on their cost.

If $(d) = O(e^{k·d}) for some k ≥ 0, then for all p > max(1/α, 2/β) there exists a number C, depending in particular on p, such that

cost(A_ε) ≤ C ε^{−p} for all ε ∈ (0, 1).

This means that the problem is polynomially tractable with exponent at most

max(1/α, 2/β).

Furthermore, the exponent is equal to the maximum above if α and β are sharp and $(d) = Ω(d).

If $(d) = O(e^{e^{k·d}}) for some k ≥ 0, then

lim_{ε→0} ε · ln(cost(A_ε)) = 0.

This means that the ∞-variate problem is weakly tractable.

Again, the proof is constructive, and the sampling points used by A_ε have at most d(ε) active variables with

d(ε) = O( ln(1/ε) / ln(ln(1/ε)) ).

We stress that the parameters α in (17) and (19) are not necessarily the same. The parameter α in (17) describes the power of unrestricted linear information, given by the decay of the eigenvalues λ_j. The parameter α in (19) describes the power of standard information, given by the best speed of convergence of algorithms using n function evaluations. There are examples of spaces H(K) for which the values of α for unrestricted linear and standard information are different, see [11]. It is still an open problem whether they are the same if we assume that α > 1/2 for unrestricted linear information; see [22] for more details.

Example (continued) For the Wiener kernel, D = [0, 1], and ρ ≡ 1, the conditions (17) and (19) hold with the same α = 1. Therefore, the exponent of polynomial tractability for standard information is the same as for unrestricted linear information,

p^wor(Λ^std) = p^wor(Λ^all) = max(1, 2/β).

For this particular space, standard and unrestricted linear information are equally powerful. Note that for β ∈ (0, 3), the exponent for the approximation problem is smaller than the corresponding exponent for the integration problem. This is due to the special form of the space G_∞, see Section 2.3.

4.3 L_2-Approximation

As already mentioned, the space G_∞ was chosen for the approximation problem since it has a relatively simple structure of the eigenpairs of the operator W = APP* APP. In the forthcoming paper [35], we will present results for the L_2-approximation problem with the space G_∞ replaced by the space L_2^∞ = L_2(D^∞, ρ^∞) whose norm is given by (11). Here are some results for product weights.

It is easy to see that L_2^∞ = G_∞ if C_0 = 0. This is why we assume that C_0 > 0 also for the L_2-approximation problem. The first result of [35] is for $(d) = O(e^{k·d}). Then the L_2-approximation problem is polynomially tractable iff the exponent β satisfies β > 1. Recall that for the G_∞-approximation problem, we only need β > 0. Next, if β > 1, then the exponent of polynomial tractability of the L_2-approximation problem is bounded by

p^wor(Λ) ≤ max(1/α, 2/(β−1)),

where α is from Theorem 2 for Λ = Λ^all and from Theorem 3 for Λ = Λ^std. Moreover, if α and β are sharp and Ω(d) = $(d) = O(e^{k·d}), then

p^wor(Λ^std) = max(1/α, 2/(β−1)).

Furthermore, if $(d) = O(e^{e^{k·d}}), then the L_2-approximation problem is weakly tractable. In other words, we have similar results for L_2-approximation as for G_∞-approximation, with β replaced by β − 1.

4.4 Randomized Setting

It has been known for quite some time, see [20, 29], that randomization does not help for multivariate approximation defined over Hilbert spaces when unrestricted linear information is allowed. More precisely, for a Hilbert space F_d of d-variate functions and the space G_d with norm

‖f‖²_{G_d} = ∫_{D^d} f(x)² ρ_d(x) dx,

consider the problem of approximating the corresponding embedding operator S_d: F_d → G_d. Let sett ∈ {wor, ran}. Denote by κ^sett(Λ^all, S_d) the order of convergence of optimal algorithms in the worst case and randomized settings, respectively. That is, κ^sett(Λ^all, S_d) is the supremum of α for which the worst case (or randomized) error of an optimal algorithm using n linear functionals is of order n^{−α}. Then the results of [20, 29] imply that

κ^ran(Λ^all, S_d) = κ^wor(Λ^all, S_d).

More recently, it has been constructively proved in [32] that standard information is as powerful as Λ^all in the randomized setting. That is, if κ^sett(Λ^std, S_d) denotes the order of convergence of optimal algorithms using standard information, then

κ^ran(Λ^std, S_d) = κ^ran(Λ^all, S_d) = κ^wor(Λ^all, S_d).

As already mentioned, the power of standard information in the worst case setting is not yet completely known. In terms of the orders of convergence, it was recently shown, see [11], that there exist reproducing kernel Hilbert spaces F_d for which

κ^wor(Λ^std, S_d) = 0 and κ^wor(Λ^all, S_d) = 1/2.

However, it is still open whether

κ^wor(Λ^std, S_d) = κ^wor(Λ^all, S_d) if κ^wor(Λ^all, S_d) > 1/2;

see [22] for more details. Since in the study of multivariate problems the cost is measured by the number of linear functionals or function values used by an algorithm, these and further results translate into complexity and tractability results. Namely, the complexity and tractability of {S_d} in the worst case setting with Λ^all are equivalent to the complexity and tractability in the randomized setting with Λ^all and/or Λ^std. It turns out that similar results hold for the ∞-variate approximation problem with the cost depending on the number of active variables. More precisely, we have the following theorem.

Theorem 4 Assume that the cost function $(d) = O(e^{k·d}) for some k ≥ 0, the eigenvalues satisfy λ_j = O(j^{−2α}) for some α > 0, and the product weights are γ_j = O(j^{−β}), with the exponent β > 0 for the G_∞-approximation problem and β > 1 for the L_2-approximation problem. Then

p^ran(Λ^std) = p^ran(Λ^all) = p^wor(Λ^all).
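One ingredient of the proof outlined below is the fact from [15] that a value of a decomposition term f_u can be recovered from at most 2^{|u|} values of f at anchored points [x; v], v ⊆ u. A minimal sketch of this inclusion-exclusion identity, which inverts the relation f([x; u]) = ∑_{v⊆u} f_v(x) from Section 2.1 (the toy function, the sparse-dict point representation, and the names `at_anchor`, `f_u` are illustrative assumptions):

```python
from itertools import combinations

def at_anchor(f, x, v):
    """Evaluate f at [x; v]: coordinates with index in v are taken from x,
    all remaining coordinates sit at the anchor (a = 0 in this sketch)."""
    return f({j: x[j] for j in v})

def f_u(f, x, u):
    """Recover the decomposition term f_u(x) from at most 2^|u| anchored
    values of f via inclusion-exclusion:
    f_u(x) = sum over v subset of u of (-1)^{|u|-|v|} f([x; v])."""
    u = sorted(u)
    total = 0.0
    for k in range(len(u) + 1):
        for v in combinations(u, k):
            total += (-1) ** (len(u) - k) * at_anchor(f, x, v)
    return total

# Toy function with anchor a = 0: f(x) = x_1 + x_1*x_2, represented sparsely
# (coordinates missing from the dict are at the anchor value 0).
f = lambda p: p.get(1, 0.0) + p.get(1, 0.0) * p.get(2, 0.0)
x = {1: 3.0, 2: 5.0}
# f_{{1}}(x) = 3.0, f_{{1,2}}(x) = 15.0, f_{{2}}(x) = 0.0, f_emptyset = 0.0
```

Each anchored evaluation touches at most |u| active variables, which is why the replacement costs at most $(|u|)·2^{|u|} per term in the argument below.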

We now outline the proof of this theorem. We do it only for the G_∞-approximation problem, since similar arguments, together with arguments like those in [34], can be used for the L_2-approximation. For this purpose, we need to recall some facts about the optimal algorithms for G_∞-approximation in the worst case setting with Λ^all. The optimal algorithm A_ε whose error is at most ε has the form

A_ε(f) = ∑_{u∈U(ε)} A_{u,n(u,ε)}(f_u) for f = ∑_{u∈U_γ} f_u,

where the A_{u,n(u,ε)} are special projections into H_u that use n(u, ε) linear functional evaluations. The set U(ε) is a special finite subset of U_γ. In particular, A_ε(f_v) = 0 for f_v with v ∉ U(ε). Furthermore, for all u ∈ U(ε) we have |u| ≤ d(ε), where d(ε) is the maximal number of active variables. As already mentioned, we have

d(ε) = max_{u∈U(ε)} |u| = O( ln(1/ε) / ln(ln(1/ε)) ).

The cost of A_ε is given by

cost(A_ε) = ∑_{u∈U(ε)} $(|u|) n(u, ε) ≤ $(d(ε)) ∑_{u∈U(ε)} n(u, ε).

Each algorithm A_{u,n(u,ε)} can be replaced by a corresponding randomized algorithm that uses standard information, due to the already cited result from [32]. However, these randomized algorithms need to evaluate the functions f_u for u ∈ U(ε) instead of the whole function f. As shown in [15], a value of f_u can be obtained by computing at most 2^{|u|} values of f at points with at most |u| active variables. Note that 2^{|u|} ≤ 2^{d(ε)} and $(d(ε)) · 2^{d(ε)} ≤ (1/ε)^{c/ln(ln(1/ε))} for a positive constant c. This implies that

limsup_{ε→0} ln(cost(A_ε)) / ln(1/ε) ≤ limsup_{ε→0} ln( ∑_{u∈U(ε)} n(u,ε) ) / ln(1/ε) + limsup_{ε→0} ln( $(d(ε)) · 2^{d(ε)} ) / ln(1/ε) = limsup_{ε→0} ln( ∑_{u∈U(ε)} n(u,ε) ) / ln(1/ε),

since

limsup_{ε→0} ln( $(d(ε)) · 2^{d(ε)} ) / ln(1/ε) ≤ lim_{ε→0} c / ln(ln(1/ε)) = 0.

Hence p^ran(Λ^std; APP) ≤ p^wor(Λ^all; APP). To show the opposite inequality, i.e., p^wor(Λ^all; APP) ≤ p^ran(Λ^all; APP), note that the bound above means the cost function $ does not contribute to the tractability exponents p^sett(Λ^all; APP), so we can replace it by $(d) ≡ 1. For such a constant cost function, the worst case ε-complexity is the same as the complexity with respect to the space F_d given by the span of the H_u for u ∈ U(ε/2), as follows from the proof in [33]. Moreover, the complexity in the randomized setting is bounded from below if F_γ is replaced by F_d. Hence the results from [20, 29] complete the proof for the G_∞-approximation.

Acknowledgements I would like to thank Henryk Woźniakowski for valuable comments and suggestions on this paper.

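To see numerically why an exponential sampling cost cannot spoil the polynomial tractability exponent, note that the factor $(d(ε)) 2^d(ε) contributes only on the order of c/ln(ln(1/ε)) to the cost exponent. A small illustration, taking the worst allowed case $(d) = 2^d (the constant C below is hypothetical; the theorem only fixes the order of d(ε)):

```python
import math

def d_eps(eps, C=1.0):
    # hypothetical d(eps) = C * ln(1/eps) / ln(ln(1/eps)); the theorem
    # guarantees only this order of growth, not the constant C
    t = math.log(1.0 / eps)
    return C * t / math.log(t)

def cost_exponent(eps):
    # contribution ln($(d) * 2^d) / ln(1/eps) with $(d) = 2^d,
    # i.e. 2 * d(eps) * ln(2) / ln(1/eps)
    return 2.0 * d_eps(eps) * math.log(2.0) / math.log(1.0 / eps)

for eps in (1e-2, 1e-4, 1e-8, 1e-16, 1e-32):
    print(f"eps = {eps:.0e}   exponent contribution = {cost_exponent(eps):.3f}")
```

The printed contribution decreases like 2C ln(2)/ln(ln(1/ε)) and tends to 0 as ε → 0, so the total cost remains polynomial in 1/ε, as claimed in the introduction.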
References

1. Caflisch, R. E., Morokoff, M., Owen, A. B.: Valuation of mortgage backed securities using Brownian bridges to reduce effective dimension. J. Computational Finance (1997)
2. Creutzig, J., Dereich, S., Müller-Gronbach, T., Ritter, K.: Infinite-dimensional quadrature and approximation of distributions. Found. Comput. Math. (2009)
3. Das, A.: Field Theory: A Path Integral Approach. Lecture Notes in Physics, Vol. 52, World Scientific, Singapore
4. DeWitt-Morette, C. (ed.): Special Issue on Functional Integration. J. Math. Physics 36 (1995)
5. Duffie, D.: Dynamic Asset Pricing Theory. Princeton University Press, Princeton, NJ
6. Egorov, R. P., Sobolevsky, P. I., Yanovich, L. A.: Functional Integrals: Approximate Evaluation and Applications. Kluwer Academic, Dordrecht
7. Feynman, R. P., Hibbs, A. R.: Quantum Mechanics and Path-Integrals. McGraw-Hill, New York
8. Gnewuch, M.: Infinite-dimensional integration on weighted Hilbert spaces. Submitted (2010)
9. Hickernell, F. J., Müller-Gronbach, T., Niu, B., Ritter, K.: Multi-level Monte Carlo algorithms for infinite-dimensional integration on R^N. J. Complexity 26 (2010)
10. Hickernell, F. J., Wang, X.: The error bounds and tractability of quasi-Monte Carlo algorithms in infinite dimension. Math. Comp. (2002)
11. Hinrichs, A., Novak, E., Vybiral, J.: Linear information versus function evaluations for L2-approximation. J. Complexity (2008)
12. Hull, J.: Options, Futures, and Other Derivative Securities. 2nd ed., Prentice Hall, Englewood Cliffs, NJ
13. Khandekar, D. C., Lawande, S. V., Bhagwat, K. V.: Path-Integral Methods and their Applications. World Scientific, Singapore
14. Kleinert, H.: Path Integrals in Quantum Mechanics, Statistics and Polymer Physics. World Scientific, Singapore
15. Kuo, F. Y., Sloan, I. H., Wasilkowski, G. W., Woźniakowski, H.: On decompositions of multivariate functions. Math. Comp. (2010)
16. Kuo, F. Y., Sloan, I. H., Wasilkowski, G. W., Woźniakowski, H.: Liberating the dimension. J. Complexity (2010)
17. Merton, R.: Continuous Time Finance. Basil Blackwell, Oxford
18. Niu, B., Hickernell, F. J.: Monte Carlo simulation of stochastic integrals when the cost of function evaluation is dimension dependent. In: L'Ecuyer, P., Owen, A. B. (eds.) Monte Carlo and Quasi-Monte Carlo Methods 2008, Springer (2008)
19. Niu, B., Hickernell, F. J., Müller-Gronbach, T., Ritter, K.: Deterministic multi-level algorithms for infinite-dimensional integration on R^N. Submitted (2010)
20. Novak, E.: Optimal linear randomized methods for linear operators in Hilbert spaces. J. Complexity 8, 22-36 (1992)
21. Novak, E., Woźniakowski, H.: Tractability of Multivariate Problems. European Mathematical Society, Zürich (2008)
22. Novak, E., Woźniakowski, H.: On the power of function values for the approximation problem in various settings. Submitted (2010)
23. Plaskota, L., Wasilkowski, G. W.: Tractability of infinite-dimensional integration in the worst case and randomized settings. Submitted (2010)
24. Plaskota, L., Wasilkowski, G. W., Woźniakowski, H.: A new algorithm and worst case complexity for Feynman-Kac path integration. J. Computational Physics 164 (2000)
25. Smolyak, S. A.: Quadrature and interpolation formulas for tensor products of certain classes of functions. Dokl. Akad. Nauk SSSR 148 (1963)
26. Traub, J. F., Wasilkowski, G. W., Woźniakowski, H.: Information-Based Complexity. Academic Press, New York (1988)
27. Wang, X., Fang, K.-T.: Effective dimensions and quasi-Monte Carlo integration. J. Complexity 19 (2003)
28. Wang, X., Sloan, I. H.: Why are high-dimensional finance problems often of low effective dimension? SIAM J. Sci. Comput. 27 (2005)
29. Wasilkowski, G. W.: Randomization for continuous problems. J. Complexity 5 (1989)
30. Wasilkowski, G. W., Woźniakowski, H.: Explicit cost bounds for multivariate tensor product problems. J. Complexity 11, 1-56 (1995)
31. Wasilkowski, G. W., Woźniakowski, H.: On tractability of path integration. J. Math. Physics 37 (1996)
32. Wasilkowski, G. W., Woźniakowski, H.: The power of standard information for multivariate approximation in the randomized setting. Math. Comp. 76 (2007)
33. Wasilkowski, G. W., Woźniakowski, H.: Liberating the dimension for function approximation. J. Complexity 27 (2011)
34. Wasilkowski, G. W., Woźniakowski, H.: Liberating the dimension for function approximation: standard information. J. Complexity, to appear (2011)
35. Wasilkowski, G. W., Woźniakowski, H.: Liberating the dimension for L2-function approximation. In progress (2011)
36. Wiegel, F. W.: Path Integral Methods in Physics and Polymer Physics. World Scientific, Singapore (1986)


Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland Random sets Distributions, capacities and their applications Ilya Molchanov University of Bern, Switzerland Molchanov Random sets - Lecture 1. Winter School Sandbjerg, Jan 2007 1 E = R d ) Definitions

More information

2014:05 Incremental Greedy Algorithm and its Applications in Numerical Integration. V. Temlyakov

2014:05 Incremental Greedy Algorithm and its Applications in Numerical Integration. V. Temlyakov INTERDISCIPLINARY MATHEMATICS INSTITUTE 2014:05 Incremental Greedy Algorithm and its Applications in Numerical Integration V. Temlyakov IMI PREPRINT SERIES COLLEGE OF ARTS AND SCIENCES UNIVERSITY OF SOUTH

More information

1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem.

1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. STATE EXAM MATHEMATICS Variant A ANSWERS AND SOLUTIONS 1 1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. Definition

More information

Econ Lecture 3. Outline. 1. Metric Spaces and Normed Spaces 2. Convergence of Sequences in Metric Spaces 3. Sequences in R and R n

Econ Lecture 3. Outline. 1. Metric Spaces and Normed Spaces 2. Convergence of Sequences in Metric Spaces 3. Sequences in R and R n Econ 204 2011 Lecture 3 Outline 1. Metric Spaces and Normed Spaces 2. Convergence of Sequences in Metric Spaces 3. Sequences in R and R n 1 Metric Spaces and Metrics Generalize distance and length notions

More information

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert

More information

On the Lebesgue constant of barycentric rational interpolation at equidistant nodes

On the Lebesgue constant of barycentric rational interpolation at equidistant nodes On the Lebesgue constant of barycentric rational interpolation at equidistant nodes by Len Bos, Stefano De Marchi, Kai Hormann and Georges Klein Report No. 0- May 0 Université de Fribourg (Suisse Département

More information

Absolute Value Information from IBC perspective

Absolute Value Information from IBC perspective Absolute Value Information from IBC perspective Leszek Plaskota University of Warsaw RICAM November 7, 2018 (joint work with Paweł Siedlecki and Henryk Woźniakowski) ABSOLUTE VALUE INFORMATIONFROM IBC

More information

1. Subspaces A subset M of Hilbert space H is a subspace of it is closed under the operation of forming linear combinations;i.e.,

1. Subspaces A subset M of Hilbert space H is a subspace of it is closed under the operation of forming linear combinations;i.e., Abstract Hilbert Space Results We have learned a little about the Hilbert spaces L U and and we have at least defined H 1 U and the scale of Hilbert spaces H p U. Now we are going to develop additional

More information

Quasi-Monte Carlo integration over the Euclidean space and applications

Quasi-Monte Carlo integration over the Euclidean space and applications Quasi-Monte Carlo integration over the Euclidean space and applications f.kuo@unsw.edu.au University of New South Wales, Sydney, Australia joint work with James Nichols (UNSW) Journal of Complexity 30

More information

General Theory of Large Deviations

General Theory of Large Deviations Chapter 30 General Theory of Large Deviations A family of random variables follows the large deviations principle if the probability of the variables falling into bad sets, representing large deviations

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

Only Intervals Preserve the Invertibility of Arithmetic Operations

Only Intervals Preserve the Invertibility of Arithmetic Operations Only Intervals Preserve the Invertibility of Arithmetic Operations Olga Kosheleva 1 and Vladik Kreinovich 2 1 Department of Electrical and Computer Engineering 2 Department of Computer Science University

More information

NOTES ON VECTOR-VALUED INTEGRATION MATH 581, SPRING 2017

NOTES ON VECTOR-VALUED INTEGRATION MATH 581, SPRING 2017 NOTES ON VECTOR-VALUED INTEGRATION MATH 58, SPRING 207 Throughout, X will denote a Banach space. Definition 0.. Let ϕ(s) : X be a continuous function from a compact Jordan region R n to a Banach space

More information

Some Results on the Complexity of Numerical Integration

Some Results on the Complexity of Numerical Integration Some Results on the Complexity of Numerical Integration Erich Novak Abstract We present some results on the complexity of numerical integration. We start with the seminal paper of Bakhvalov (1959) and

More information

Stochastic Processes

Stochastic Processes Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space

More information

Self-adjoint extensions of symmetric operators

Self-adjoint extensions of symmetric operators Self-adjoint extensions of symmetric operators Simon Wozny Proseminar on Linear Algebra WS216/217 Universität Konstanz Abstract In this handout we will first look at some basics about unbounded operators.

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

Online gradient descent learning algorithm

Online gradient descent learning algorithm Online gradient descent learning algorithm Yiming Ying and Massimiliano Pontil Department of Computer Science, University College London Gower Street, London, WCE 6BT, England, UK {y.ying, m.pontil}@cs.ucl.ac.uk

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Kernel B Splines and Interpolation

Kernel B Splines and Interpolation Kernel B Splines and Interpolation M. Bozzini, L. Lenarduzzi and R. Schaback February 6, 5 Abstract This paper applies divided differences to conditionally positive definite kernels in order to generate

More information