Tractability results for weighted Banach spaces of smooth functions


Markus Weimar
Mathematisches Institut, Universität Jena
Ernst-Abbe-Platz 2, 07740 Jena, Germany
email: markus.weimar@uni-jena.de

March 11, 2011

Abstract

We study the L_∞-approximation problem for weighted Banach spaces of smooth d-variate functions, where d can be arbitrarily large. We consider the worst case error for algorithms that use finitely many pieces of information from different classes. Adaptive algorithms are also allowed. For a scale of Banach spaces we prove necessary and sufficient conditions for tractability in the case of product weights. Furthermore, we show the equivalence of weak tractability with the fact that the problem does not suffer from the curse of dimensionality.

1 Introduction

The so-called curse of dimensionality can often be observed for multivariate approximation problems. That is, the minimal number of information operations needed to compute an ε-approximation of a d-variate problem depends exponentially on the dimension d. The phrase "curse of dimensionality" was already coined by Bellman in 1957. Since the late 1980s there has been a considerable interest in finding optimal algorithms, also concerning the optimal dependence on d, and a theory called information-based complexity (IBC) has been created, see, e.g., [10]. Since there are different ways to measure the lack of exponential behavior, several kinds of tractability were introduced. A brief history of the studies of multivariate problems, as well as general tractability results and many concrete examples, can be found in, e.g., [5, 6, 8].

In this paper we especially consider the L_∞-approximation problem defined on some Banach spaces F_d of real-valued d-variate functions. In Section 2 we formulate the problem exactly and recall the usual error definitions, as well as notions of tractability. Afterwards, in Section 3, we illustrate the hardness of the problem with an example studied by Novak and Woźniakowski [7] and show how weighted spaces can help to improve this negative result. Thereby, we especially concentrate on so-called product weights. While there exists a well-developed concept to handle problems defined on Hilbert spaces, we need an essentially new approach to conclude results in the general Banach space setting. These new ideas are presented in Section 4. Using this technique we prove a lower error bound for a very small class of functions, i.e. we consider the space P_d^γ of d-variate polynomials of degree at most one in each coordinate, equipped with some weighted norm. In Section 5 we recall a known result of Kuo, Wasilkowski and Woźniakowski [3] about upper error bounds on a certain weighted reproducing kernel Hilbert space H_d^γ. Next, in Section 6, we prove the three main theorems of this paper. That is, we show necessary and sufficient conditions for several kinds of tractability for a whole scale of weighted Banach function spaces F_d^γ, where P_d^γ ⊂ F_d^γ ⊂ H_d^γ, in terms of the weights γ. In particular, we provide a characterization of weak tractability and the curse of dimensionality. It is shown that for these kinds of tractability results we can restrict ourselves to linear non-adaptive algorithms. We illustrate our results by applying them to selected examples and discuss a typical case of product weights. Finally, in Section 7, we add some remarks about possible extensions of the results to other domains. In addition, we briefly consider the L_p-approximation problem for 1 ≤ p < ∞ and correct a small mistake stated in [7].

2 The approximation problem

We investigate tractability properties of the approximation problem defined on some Banach spaces F_d of bounded functions f: [0,1]^d → R. We want to minimize the worst case error

$$e^{\mathrm{wor}}(A_{n,d}; F_d) = \sup_{f \in B(F_d)} \| f - A_{n,d}(f) \|_{L_\infty([0,1]^d)}$$

with respect to all algorithms A_{n,d} ∈ A_n^d that use n pieces of information in dimension d from a certain class Λ. Here B(F_d) = {f ∈ F_d | ‖f‖_{F_d} ≤ 1} denotes the unit ball of F_d. Hence, we study the n-th minimal error

$$e(n, d; F_d) = \inf_{A_{n,d} \in A_n^d} e^{\mathrm{wor}}(A_{n,d}; F_d)$$

of L_∞-approximation on F_d. An algorithm A_{n,d} ∈ A_n^d is modeled as a mapping φ: R^n → L_∞([0,1]^d) and a function N: F_d → R^n such that A_{n,d} = φ ∘ N. In detail, the information

map N is given by

$$N(f) = (L_1(f), L_2(f), \ldots, L_n(f)), \quad f \in F_d, \qquad (1)$$

where L_j ∈ Λ. Here we distinguish certain classes Λ of information operations. In one case we assume that we can compute arbitrary continuous linear functionals. Then Λ = Λ^all coincides with F_d', the dual space of F_d. Often only function evaluations are permitted, i.e. L_j(f) = f(t^(j)) for a certain fixed t^(j) ∈ [0,1]^d. In this case Λ = Λ^std is called standard information. If function evaluation is continuous for all t ∈ [0,1]^d we have Λ^std ⊂ Λ^all. If L_j depends continuously on f but is not necessarily linear, the class is denoted by Λ^cont. Note that in this case also N is continuous and we obviously have Λ^all ⊂ Λ^cont.

Furthermore, we distinguish between adaptive and non-adaptive algorithms. The latter case is described above in formula (1), where L_j does not depend on the previously computed values L_1(f), ..., L_{j-1}(f). In contrast, we also discuss algorithms of the form A_{n,d} = φ ∘ N with

$$N(f) = (L_1(f), L_2(f; y_1), \ldots, L_n(f; y_1, \ldots, y_{n-1})), \quad f \in F_d, \qquad (2)$$

where y_1 = L_1(f) and y_j = L_j(f; y_1, ..., y_{j-1}) for j = 2, 3, ..., n. If N is adaptive we restrict ourselves to the case where L_j depends linearly on f, i.e. L_j(·; y_1, ..., y_{j-1}) ∈ Λ^all. In all cases of information maps, the mapping φ can be chosen arbitrarily and is not necessarily linear or continuous.

The smallest class of algorithms under consideration is the class of linear, non-adaptive algorithms of the form

$$(A_{n,d} f)(x) = \sum_{j=1}^{n} L_j(f)\, g_j(x), \quad x \in [0,1]^d,$$

with some g_j ∈ L_∞ and L_j ∈ Λ^all, or even L_j ∈ Λ^std. We denote the class of all such algorithms by A_n^lin. On the other hand, the most general classes consist of algorithms A_{n,d} = φ ∘ N, where φ is arbitrary and N either uses non-adaptive continuous or adaptive linear information. We denote the respective classes by A_n^cont and A_n^adapt.

The minimal number of information operations needed to achieve an error smaller than a given ε > 0,

$$n(\varepsilon, d; F_d) = \min\{ n \in \mathbb{N}_0 \mid e(n, d; F_d) \le \varepsilon \},$$

is called the information complexity. If for a given problem, like the L_∞-approximation (with respect to a given class of algorithms) considered here, n(ε, d; F_d) increases exponentially in the dimension d, we say the

problem suffers from the curse of dimensionality. That is, there exist constants c > 0 and C > 1 such that for at least one ε > 0 we have

$$n(\varepsilon, d; F_d) \ge c\, C^d \quad \text{for infinitely many } d \in \mathbb{N}.$$

More generally, if the information complexity depends exponentially on d or ε^{-1}, we call the problem intractable. Otherwise we have weak tractability, which can be expressed by

$$\lim_{\varepsilon^{-1} + d \to \infty} \frac{\ln\big(n(\varepsilon, d; F_d)\big)}{\varepsilon^{-1} + d} = 0.$$

We want to stress the point that weak tractability implies the absence of the curse of dimensionality, but in general the converse is not true. Since there are many ways to measure the lack of exponential dependence, we later distinguish between different types of tractability. The most important type is polynomial tractability. We say that the problem is polynomially tractable if there exist constants c, p, q > 0 such that

$$n(\varepsilon, d; F_d) \le c\, \varepsilon^{-p}\, d^{\,q} \quad \text{for all } d \in \mathbb{N},\ \varepsilon > 0.$$

If this inequality holds with q = 0, the problem is called strongly polynomially tractable. For more specific definitions and relations between these classes of tractability see, e.g., [6].

3 The concept of weighted spaces

In [7] it is shown that the approximation problem defined on C^∞([0,1]^d) is intractable. In fact, Novak and Woźniakowski considered the linear space F_d of all real-valued infinitely differentiable functions f defined on the unit cube [0,1]^d in d dimensions for which the norm

$$\|f\|_{F_d} = \sup_{\alpha \in \mathbb{N}_0^d} \|D^\alpha f\|_\infty$$

is finite. Here ‖·‖_∞ denotes the usual sup-norm over [0,1]^d and

$$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}},$$

where |α| = Σ_{j=1}^d α_j denotes the length of the multi-index α ∈ N_0^d. The initial error of this problem is given by e(0, d; F_d) = 1, the norm of the embedding F_d ↪ L_∞, since A_{0,d} ≡ 0 is a valid choice of an algorithm which does not use any information on f. This means that the problem is well-scaled. In detail, Theorem 1 in [7] yields that for L_∞-approximation defined on F_d we have

$$e(n, d; F_d) = 1 \quad \text{for all } n = 0, 1, \ldots, 2^{\lfloor d/2 \rfloor} - 1.$$

Therefore, for all d ∈ N and ε ∈ (0,1),

$$n(\varepsilon, d; F_d) \ge 2^{\lfloor d/2 \rfloor}.$$

Hence, the problem suffers from the curse of dimensionality; in particular, it is intractable.

One possibility to avoid this exponential dependence on d, i.e. to break the curse, is to shrink the function space F_d. A closer look at the norm yields that for f ∈ B(F_d) we have

$$\|D^\alpha f\|_\infty \le 1 \quad \text{for all } \alpha \in \mathbb{N}_0^d. \qquad (3)$$

Hence, every derivative is equally important. In order to shrink the space, for each α ∈ N_0^d we replace the right-hand side of inequality (3) by a weight 0 ≤ γ_α ≤ 1. For α with |α| = 1 this means that we control the importance of every single variable. So, the norm in the weighted space is now given by

$$\|f\|_{F_d^\gamma} = \sup_{\alpha \in \mathbb{N}_0^d} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha f\|_\infty,$$

where we demand D^α f to be equal to zero if γ_α = 0.

The idea to introduce weights directly into the norm of the function space appeared for the first time in a paper of Sloan and Woźniakowski in 1998, see [9]. They studied the integration problem defined over some Sobolev Hilbert space, equipped with so-called product weights, to explain the overwhelming success of QMC integration rules. Thenceforth, weighted problems attracted a lot of attention. For example, it turned out that tractability of approximation of linear operators between Hilbert spaces can be fully characterized in terms of the weights and the singular values of the linear operators if we use information operations from the class Λ^all.

Let us have a closer look at product weights. Assume that for every d ∈ N there exists an ordered and bounded sequence 1 ≥ γ_{d,1} ≥ γ_{d,2} ≥ ... ≥ γ_{d,d} ≥ 0. Then for d ∈ N, the product weight sequence γ_d = (γ_{d,α})_{α ∈ N_0^d} is given by

$$\gamma_{d,\alpha} = \prod_{j=1}^{d} (\gamma_{d,j})^{\alpha_j}, \quad \alpha \in \mathbb{N}_0^d. \qquad (4)$$

Note that the dependence of f on x_j is now controlled by the so-called generator weight γ_{d,j}. Since γ_{d,j} = 0 for some j ∈ {1, ..., d} implies that f does not depend on x_j, ..., x_d, we assume that γ_{d,d} > 0 in the rest of the paper.
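To make the product-weight construction (4) concrete, the following short Python sketch evaluates γ_{d,α} from a given generator sequence. It is purely illustrative; the function name and the example weights are ours, not part of the paper.

```python
def product_weight(generators, alpha):
    """gamma_{d,alpha} = prod_j (gamma_{d,j})**alpha_j, cf. formula (4)."""
    assert len(generators) == len(alpha)
    # ordered, bounded generator weights: 1 >= gamma_{d,1} >= ... >= gamma_{d,d} > 0
    assert all(0.0 < g <= 1.0 for g in generators)
    assert all(generators[j] >= generators[j + 1] for j in range(len(generators) - 1))
    w = 1.0
    for g, a in zip(generators, alpha):
        w *= g ** a
    return w

# Example: d = 3, generator weights gamma_{3,j} = j**(-2), multi-index alpha = (1, 0, 2)
gens = [1.0, 0.25, 1.0 / 9.0]
print(product_weight(gens, (1, 0, 2)))   # = 1 * 1 * (1/9)**2 ≈ 0.0123
```

A multi-index entry α_j > 0 thus multiplies in a factor (γ_{d,j})^{α_j} ≤ 1, so derivatives with respect to coordinates with small generator weights are penalized most strongly in the weighted norm.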

Moreover, the ordering of the γ_{d,j} is without loss of generality. Later on we will see that tractability of our problem depends only on summability properties of the generator weights. Among other things, it turns out that for the L_∞-approximation problem defined on the Banach space with the norm given above and generator weights γ_{d,j} ≡ γ_j = Θ(j^{-β}) we have

- intractability for β = 0,
- weak tractability but no polynomial tractability for 0 < β < 1,
- strong polynomial tractability if 1 < β.

Moreover, we prove that for β = 1 the problem is not strongly polynomially tractable.

4 Lower bounds

First, we want to describe the main ideas used in the Hilbert space setting. Hence, for a moment, consider the problem of L_2-approximation with respect to linear algorithms defined on a reproducing kernel Hilbert space H(K_d) of functions f: [0,1]^d → R. Let

$$W_d : H(K_d) \to H(K_d), \qquad W_d(g) = \int_{[0,1]^d} g(x)\, K_d(\cdot, x)\, dx.$$

We assume that W_d is compact. Then the worst case error is fully characterized by the spectrum of W_d, which is also a self-adjoint and non-negative definite operator. Let {(λ_{d,j}, η_{d,j}) | j ∈ N} denote a complete orthonormal system of eigenpairs of W_d, indexed according to the non-increasing order of the eigenvalues, i.e. W_d(η_{d,j}) = λ_{d,j} η_{d,j} and ⟨η_{d,i}, η_{d,j}⟩_{H(K_d)} = δ_{ij} with λ_{d,j} ≥ λ_{d,j+1} ≥ 0. For λ_{d,n} > 0, it is well known that the algorithm

$$A^*_{n,d}(f) = \sum_{j=1}^{n} \langle f, \eta^*_{d,j} \rangle_{L_2}\, \eta_{d,j}, \qquad \text{where } \eta^*_{d,j} = \frac{\eta_{d,j}}{\lambda_{d,j}},$$

is optimal. Then the n-th minimal error is given by

$$e(n, d; H(K_d)) = e^{\mathrm{wor}}(A^*_{n,d}; H(K_d)) = \sqrt{\lambda_{d,n+1}}.$$
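As a small numerical illustration of this spectral description (a sketch under the stated assumptions, using a hypothetical, truncated eigenvalue sequence; not taken from the paper), the n-th minimal errors and the resulting information complexity can be read off directly from the non-increasing eigenvalues of W_d:

```python
import math

def minimal_errors(eigenvalues):
    """n-th minimal errors e(n, d; H(K_d)) = sqrt(lambda_{d,n+1}) for n = 0, 1, ...

    `eigenvalues` is the non-increasing sequence lambda_{d,1} >= lambda_{d,2} >= ...
    (here a finite, hypothetical truncation of the spectrum of W_d).
    """
    return [math.sqrt(lam) for lam in eigenvalues]

def information_complexity(eigenvalues, eps):
    """Smallest n with e(n, d) <= eps, i.e. the number of j with sqrt(lambda_{d,j}) > eps."""
    return sum(1 for lam in eigenvalues if math.sqrt(lam) > eps)

# Hypothetical spectrum lambda_{d,j} = j**(-2), just to exercise the formulas:
lams = [j ** (-2.0) for j in range(1, 1001)]
print(minimal_errors(lams)[:3])            # e(0,d) = 1.0, e(1,d) = 0.5, e(2,d) = 1/3
print(information_complexity(lams, 0.1))   # 9, since sqrt(lambda_{d,10}) = 0.1 <= 0.1
```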

For more details see, e.g., [4] and [6], as well as the references therein. For a comprehensive introduction to reproducing kernel Hilbert spaces see, for instance, Chapter 1 in the book of Wahba [11].

In the general Banach space setting this approach obviously does not work. Our technique is based on the ideas of Werschulz and Woźniakowski [12], as well as Novak and Woźniakowski [7]. Among other things it uses a result from Banach space theory and nonlinear functional analysis, namely the theorem of Borsuk-Ulam. The proof of the following proposition can be found in Chapter 1.4.2 of [1].

Proposition 1 (Borsuk-Ulam). Let V be a linear normed space over R with dim V = m and, moreover, let N: V → R^n be a continuous mapping for n < m. Then there exists an element f* ∈ V with ‖f*‖_V = 1 such that N(f*) = N(−f*).

The main tool to conclude lower bounds in the Banach space setting now reads as follows.

Lemma 1. Assume that F and G are linear normed spaces such that F ⊂ G. Furthermore, suppose that V ⊂ F is a linear subspace of dimension m and there exists a constant a > 0 such that

$$\|f\|_F \le \frac{1}{a}\, \|f\|_G \quad \text{for all } f \in V. \qquad (5)$$

Then for every n < m and every A_n ∈ A_n^cont ∪ A_n^adapt,

$$e^{\mathrm{wor}}(A_n; F) = \sup_{f \in B(F)} \| f - A_n(f) \|_G \ge a.$$

Proof. For A_n ∈ A_n^cont the assertion is a simple conclusion of Proposition 1 and can be found in [7]. On the other hand, if A_n ∈ A_n^adapt, the proof can be obtained by arguments from linear algebra, which are indicated in the proof of Theorem 3.1 in [12]. In any case we exclusively use norm properties of the space G; no additional structure of G is used. Therefore, this tool is available for any kind of approximation problem, not only for L_∞-approximation.

In the following we use Lemma 1 to conclude a lower bound for the approximation error for the space

$$P_d^\gamma = \mathrm{span}\Big\{ p_i : [0,1]^d \to \mathbb{R},\ p_i(x) = \prod_{j=1}^{d} (x_j)^{i_j} \,\Big|\, i = (i_1, \ldots, i_d) \in \{0,1\}^d \Big\}$$

of all real-valued d-variate polynomials of degree at most one in each coordinate direction, defined on the unit cube [0,1]^d. We equip this linear space with the weighted norm

$$\|f\|_{P_d^\gamma} = \max_{\alpha \in \{0,1\}^d} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha f\|_\infty, \quad f \in P_d^\gamma,$$

where γ is the product weight sequence described in Section 3.

Theorem 1. Let e(n, d; P_d^γ) be the n-th minimal error of L_∞-approximation on P_d^γ with respect to the class A_n^cont ∪ A_n^adapt of all algorithms described in Section 2. Then

$$e(n, d; P_d^\gamma) \ge 1 \quad \text{for all } n < 2^s,$$

with some integer s ∈ [0, d] satisfying

$$s > \frac{1}{3}\Big( \sum_{j=1}^{d} \gamma_{d,j} - 2 \Big). \qquad (6)$$

Proof. The proof of the lower error bound consists of several steps. At first, we construct a partition of the set {1, ..., d} into s+1 parts which we will need later, with s satisfying (6). In a second step, we define a special linear subspace V ⊂ P_d^γ with dim V = 2^s. Step 3 then shows that V satisfies the assumptions of Lemma 1. The proof is completed in Step 4.

Step 1. For k ∈ {0, ..., d}, we define inductively m_0 = 0 and

$$m_k = \inf\Big\{ t \in \mathbb{N} \,\Big|\, m_{k-1} < t \le d \ \text{ and } \ 2 \le \sum_{j=m_{k-1}+1}^{t} \gamma_{d,j} \Big\},$$

with the usual convention inf ∅ = ∞. Note that the infimum coincides with the minimum in the finite case, since then m_k ∈ N. Moreover, we set

$$s = \max\{ k \in \{0, \ldots, d\} \mid m_k < \infty \}.$$

We denote I_k = {m_{k−1}+1, m_{k−1}+2, ..., m_k} for k = 1, ..., s. Thus, this gives a uniquely defined disjoint partition of the set

$$\{1, \ldots, d\} = \Big( \bigcup_{k=1}^{s} I_k \Big) \cup \{m_s + 1, \ldots, d\},$$

and m_k denotes the last element of the block I_k. For all k = 1, ..., s, we conclude

$$2 \le \sum_{j \in I_k} \gamma_{d,j} < 2 + \gamma_{d,m_k} < 3.$$

Finally, summation of these inequalities gives

$$\sum_{j=1}^{d} \gamma_{d,j} < \sum_{k=1}^{s} \sum_{j \in I_k} \gamma_{d,j} + 2 < 3s + 2,$$

and (6) follows immediately.
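The construction in Step 1 is a simple greedy blocking of the coordinates. The following Python sketch (ours, for illustration only; the example weights are hypothetical) computes the blocks I_k and the index s and checks the bound (6):

```python
def greedy_blocks(gamma):
    """Greedy partition from Step 1: consecutive blocks I_k with 2 <= sum_{j in I_k} gamma_j < 3.

    Returns the blocks (lists of 1-based coordinate indices) and s, the number of blocks.
    """
    blocks, current, total = [], [], 0.0
    for j, g in enumerate(gamma, start=1):
        current.append(j)
        total += g
        if total >= 2.0:            # block is full; since gamma_j <= 1, the sum stays below 3
            blocks.append(current)
            current, total = [], 0.0
    return blocks, len(blocks)      # leftover indices {m_s + 1, ..., d} remain unblocked

# Example with gamma_{d,j} = 0.8 for d = 10:
gamma = [0.8] * 10
blocks, s = greedy_blocks(gamma)
print(blocks)                          # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(s, (sum(gamma) - 2.0) / 3.0)     # s = 3 > (8 - 2)/3 = 2, consistent with (6)
```

In this example Theorem 1 gives e(n, d; P_d^γ) ≥ 1 for all n < 2^3 = 8.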

If s = 0 we can stop at this point, since the initial error is 1 as the norm of the embedding P_d^γ ↪ L_∞ and the remaining assertion is trivial. Hence, from now on we can assume that s > 0 and m_s ≥ 1.

Step 2. To apply Lemma 1 we have to construct a linear subspace V of F_d = P_d^γ such that condition (5) holds for G = L_∞([0,1]^d) and a = 1. First, we restrict ourselves to the set F̃_d = {f ∈ F_d | f depends only on x_1, ..., x_{m_s}}. By a simple isometric isomorphism we can interpret F̃_d as the space P_{m_s}^γ. We are ready to construct a suitable space V using the partition from Step 1. We define V as the span of all functions g_i: X = [0,1]^{m_s} → R, i = (i_1, ..., i_s) ∈ {0,1}^s, of the form

$$g_i(x) = \prod_{k=1}^{s} \Big( \sum_{j \in I_k} \gamma_{d,j}\, x_j \Big)^{i_k}, \quad x \in X.$$

Obviously, V is a linear subspace of P_{m_s}^γ and, with the interpretation above, also a linear subspace of F_d. Moreover, it is easy to see that we have by construction ‖g‖_{P_{m_s}^γ} = ‖g‖_{F_d} and ‖g‖_{L_∞(X)} = ‖g‖_{L_∞([0,1]^d)} for g ∈ V. Finally, we note that dim V = #{0,1}^s = 2^s. It remains to show that this subspace is the right choice to prove the claim using Lemma 1.

Step 3. The proof of the needed condition (5),

$$\|g\|_{P_{m_s}^\gamma} \le \|g\|_{L_\infty(X)} \quad \text{for all } g \in V,$$

is a little bit technical. Due to the special structure of the functions g ∈ V, the left-hand side reduces to

$$\max\big\{ \gamma_{d,\alpha}^{-1}\, \|D^\alpha g\|_{L_\infty(X)} \ \big|\ \alpha \in M \big\},$$

where the maximum is taken over the set

$$M = \Big\{ \alpha \in \{0,1\}^{m_s} \,\Big|\, \sum_{j \in I_k} \alpha_j \le 1 \ \text{for all } k = 1, \ldots, s \Big\}.$$

This is simply because for α ∉ M we have D^α g ≡ 0 and the inequality is trivial. To simplify the notation let us define

$$T : \{0,1\}^{m_s} \to \mathbb{N}_0^s, \quad \alpha \mapsto T(\alpha) = \sigma = (\sigma_1, \ldots, \sigma_s), \quad \text{where } \sigma_k = \sum_{j \in I_k} \alpha_j \ \text{for } k = 1, \ldots, s.$$

Note that T(M) = {0,1}^s. Moreover, for every g = Σ_{i ∈ {0,1}^s} a_i g_i(·) ∈ V define the function

$$h_g : Z = \prod_{k=1}^{s} \Big[ 0, \sum_{j \in I_k} \gamma_{d,j} \Big] \to \mathbb{R}, \qquad h_g(z) = \sum_{i \in \{0,1\}^s} a_i \prod_{k=1}^{s} z_k^{i_k}.$$

Hence, h_g(z) = g(x) under the transformation x ↦ z such that

$$z_k = \sum_{j \in I_k} \gamma_{d,j}\, x_j \quad \text{for every } k = 1, \ldots, s \text{ and every } x \in X.$$

The span W of all functions h: Z → R with this structure is also a linear space. Furthermore, easy calculus yields

$$(D_x^\alpha g)(x) = \Big( \prod_{j=1}^{m_s} (\gamma_{d,j})^{\alpha_j} \Big) \big( D_z^{T(\alpha)} h_g \big)(z) \quad \text{for all } g \in V,\ \alpha \in M \text{ and } x \in X. \qquad (7)$$

Here the subscripts x and z in D_x^α and D_z^{T(α)} indicate differentiation with respect to x and z, respectively. Since the mapping x ↦ z is surjective, we obtain ‖D^α g‖_{L_∞(X)} = γ_{d,α} ‖D^{T(α)} h_g‖_{L_∞(Z)} by the form of γ given in (4). Hence,

$$\max_{\alpha \in M} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha g\|_{L_\infty(X)} = \max_{\sigma \in \{0,1\}^s} \|D^\sigma h_g\|_{L_\infty(Z)}.$$

Note that (7) with α = 0 yields ‖g‖_{L_∞(X)} = ‖h_g‖_{L_∞(Z)}. Therefore, the claim reduces to

$$\max_{\sigma \in \{0,1\}^s} \|D^\sigma h_g\|_{L_\infty(Z)} \le \|h_g\|_{L_\infty(Z)} \quad \text{for every } g \in V.$$

We show this estimate for every h ∈ W, i.e.,

$$\|D^\sigma h\|_{L_\infty(Z)} \le \|h\|_{L_\infty(Z)} \quad \text{for all } \sigma \in \{0,1\}^s. \qquad (8)$$

We start with the special case of one derivative, i.e. σ = e_k for a certain k ∈ {1, ..., s}. Since h is affine in each coordinate, we can represent it as h(z) = a(ẑ_k) z_k + b(ẑ_k) with functions a and b which only depend on ẑ_k = (z_1, ..., z_{k−1}, z_{k+1}, ..., z_s). Thus, we have D^{e_k} h(z) = a(ẑ_k) and need to show that

$$|a(\hat{z}_k)| \le \max\Big\{ |b(\hat{z}_k)|,\ \Big| a(\hat{z}_k) \sum_{j \in I_k} \gamma_{d,j} + b(\hat{z}_k) \Big| \Big\}. \qquad (9)$$

This is obviously true for every z ∈ Z with a(ẑ_k) = 0. For a(ẑ_k) ≠ 0 we can divide by |a(ẑ_k)| to get

$$1 \le \max\Big\{ |t|,\ \Big| \sum_{j \in I_k} \gamma_{d,j} + t \Big| \Big\}$$

if we set t = b(ẑ_k)/a(ẑ_k). The last maximum is minimal if both of its entries coincide. This is the case for t = −(1/2) Σ_{j∈I_k} γ_{d,j}. Hence, we need to demand 2 ≤ Σ_{j∈I_k} γ_{d,j} to conclude (9) for all admissible z ∈ Z. But this is true for every k ∈ {1, ..., s} by definition of the sets I_k in Step 1. This proves (8) for the special case σ = e_k for all k ∈ {1, ..., s}.

The inequality (8) also holds for every σ ∈ {0,1}^s by an easy inductive argument on the cardinality of σ. Indeed, if |σ| ≥ 2 then σ = σ' + e_k with |σ'| = |σ| − 1. We now need to estimate ‖D^{σ'+e_k} h‖_{L_∞(Z)}. Since D^{e_k} h(z) = a(ẑ_k) has the same structure as the function h itself, we have ‖D^{σ'+e_k} h‖_{L_∞(Z)} = ‖D^{σ'} a(ẑ_k)‖_{L_∞(Z)} and the proof is completed by the inductive step.

Step 4. For every g ∈ V we have

$$\|g\|_{P_d^\gamma} = \|g\|_{P_{m_s}^\gamma} = \max_{\substack{\alpha \in \{0,1\}^{m_s} \\ T(\alpha) \in \{0,1\}^s}} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha g\|_{L_\infty(X)} = \max_{\sigma \in \{0,1\}^s} \|D^\sigma h_g\|_{L_\infty(Z)} \le \|h_g\|_{L_\infty(Z)} = \|g\|_{L_\infty(X)} = \|g\|_{L_\infty([0,1]^d)},$$

where V is a linear subspace of F_d = P_d^γ with dim V = 2^s. Therefore, Lemma 1 with a = 1 yields that the worst case error of any algorithm A_{n,d} we consider, with n < dim V pieces of information, is bounded from below by one. That is, e^wor(A_{n,d}; P_d^γ) ≥ 1. We complete the proof by taking the infimum with respect to A_{n,d} ∈ A_n^cont ∪ A_n^adapt.

5 Upper bounds

The approximation problem has been studied in many different settings. We restrict ourselves to the case of L_∞-approximation defined on a special weighted anchored Sobolev Hilbert space H_d^γ = H(K_d^γ). For d = 1 and γ > 0, this is the space of all absolutely continuous functions f: [0,1] → R whose first derivatives belong to L_2([0,1]). The inner product in the space H_1^γ is defined as

$$\langle f, g \rangle_{H_1^\gamma} = f(0)\,g(0) + \gamma^{-1} \int_0^1 f'(x)\, g'(x)\, dx, \quad f, g \in H_1^\gamma,$$

where the derivatives have to be understood in the weak sense. For γ = 0 the space consists only of constant functions. It turns out that H_1^γ is a reproducing kernel Hilbert space H(K_1^γ) whose kernel is

$$K_1^\gamma(x, y) = 1 + \gamma \min\{x, y\} \quad \text{for } x, y \in [0,1].$$

For d > 1, the space H_d^γ = H(K_d^γ) is defined as the d-fold tensor product of the spaces H(K_1^{γ_{d,j}}), where we once again assume product weights, see (4), with 1 ≥ γ_{d,1} ≥ γ_{d,2} ≥ ... ≥ γ_{d,d} ≥ 0. Due to the product structure of γ_{d,α}, the corresponding reproducing kernel of H_d^γ is a weighted Wiener sheet kernel,

$$K_d^\gamma(x, y) = \prod_{j=1}^{d} \big( 1 + \gamma_{d,j} \min\{x_j, y_j\} \big), \quad x, y \in [0,1]^d.$$

The associated inner product is given by

$$\langle f, g \rangle_{H_d^\gamma} = \sum_{\alpha \in \{0,1\}^d} \frac{1}{\gamma_{d,\alpha}} \int_{[0,1]^{|\alpha|}} \frac{\partial^{|\alpha|} f}{\partial x_\alpha}(x_\alpha, 0)\, \frac{\partial^{|\alpha|} g}{\partial x_\alpha}(x_\alpha, 0)\, dx_\alpha, \quad f, g \in H_d^\gamma.$$

Here the term (x_α, a) means the d-dimensional vector with (x_α, a)_j = x_j for all coordinates j with α_j = 1 and (x_α, a)_j = a_j otherwise. For α = 0 we replace the integral by f(a)g(a). Therefore, the point a = 0 ∈ [0,1]^d is sometimes called an anchor of the space. A closer look at the respective norm justifies referring to H(K_d^γ) as a Sobolev space of dominating mixed smoothness. For γ_{d,d} > 0, the space H(K_d^γ) algebraically coincides with the space

$$\Big\{ f : [0,1]^d \to \mathbb{R} \,\Big|\, D^\alpha f \in L_2([0,1]^d) \ \text{for all } \alpha = (\alpha_1, \ldots, \alpha_d) \text{ with } \max_{j=1,\ldots,d} \alpha_j \le 1 \Big\},$$

where D^α f once again denotes the weak derivative in the Sobolev sense. Equipped with the usual norm, this space is often denoted by W^{(1,...,1)}_{2,mix}([0,1]^d), or S^1_2 W([0,1]^d), respectively. If γ_{d,j} = 0 for some j ∈ {1, ..., d}, we obtain a proper subspace of functions that are constant with respect to x_j, ..., x_d. Therefore, we always assume γ_{d,d} > 0. Kuo, Wasilkowski and Woźniakowski [3, Example 8] showed

Proposition 2. There exists a linear algorithm A*_{n,d} for L_∞-approximation on H_d^γ that uses n non-adaptively chosen linear functionals such that for every τ ∈ (1/2, 1) there are constants a_τ, b_τ > 0, independent of γ and d, with

$$e^{\mathrm{wor}}(A^*_{n,d}; H_d^\gamma) \le b_\tau\, n^{-(1-\tau)/(2\tau)} \prod_{j=1}^{d} \big( 1 + a_\tau\, (\gamma_{d,j})^\tau \big)^{1/(2\tau)}.$$

Furthermore, A*_{n,d} is close to optimal in the class A_n^lin.

6 Conclusions and applications

We now combine the lower and upper bounds presented before and prove general results for L_∞-approximation on weighted Banach function spaces. More precisely, consider a sequence of Banach spaces F_d^γ of functions f: [0,1]^d → R which fulfills the following simple assumptions:

(A1) P_d^γ ⊂ F_d^γ with an embedding factor C_{1,d} ≤ 1 for all d,

(A2) F_d^γ ⊂ H_d^γ with an embedding factor C_{2,d} for all d and

$$C_{2,d} \le a \cdot \exp\Big( b \sum_{j=1}^{d} (\gamma_{d,j})^t \Big)$$

for some constants a, b ≥ 0 and a parameter t ∈ (0,1], independent of d and γ.

By A ⊂ B with an embedding factor C, we mean that the normed linear space A is continuously embedded in the normed linear space B and ‖f‖_B ≤ C ‖f‖_A for all f ∈ A. That is, we can take C = ‖i‖_{L(A,B)} as the (operator) norm of the identity i: A → B. Moreover, γ is once again a product weight sequence given by formula (4). The spaces P_d^γ and H_d^γ are defined in Section 4 and Section 5, respectively.

To simplify the notation for necessary and sufficient conditions of tractability, we use the commonly known definitions of the so-called sum exponents for the weight sequence γ,

$$p(\gamma) = \inf\Big\{ \kappa \ge 0 \,\Big|\, P_\kappa(\gamma) = \limsup_{d \to \infty} \sum_{j=1}^{d} (\gamma_{d,j})^\kappa < \infty \Big\}$$

and

$$q(\gamma) = \inf\Big\{ \kappa \ge 0 \,\Big|\, Q_\kappa(\gamma) = \limsup_{d \to \infty} \frac{\sum_{j=1}^{d} (\gamma_{d,j})^\kappa}{\ln(d+1)} < \infty \Big\},$$

with the convention that inf ∅ = ∞.
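The sum exponents are easy to explore numerically. The sketch below (our own helper functions, not from [3] or [6]) approximates P_κ(γ) and Q_κ(γ) for dimension-independent weights γ_j = j^{-β} by evaluating the partial sums for large d; it merely illustrates the definitions and is of course not a proof.

```python
import math

def P_kappa(beta, kappa, d):
    """Partial sum sum_{j<=d} (j**-beta)**kappa, approximating P_kappa(gamma)."""
    return sum(j ** (-beta * kappa) for j in range(1, d + 1))

def Q_kappa(beta, kappa, d):
    """The same partial sum divided by ln(d+1), approximating Q_kappa(gamma)."""
    return P_kappa(beta, kappa, d) / math.log(d + 1)

# For gamma_j = j**(-2) the sums sum_j j**(-2*kappa) stay bounded exactly for kappa > 1/2,
# so one expects p(gamma) = 1/2; the normalized sums below grow for kappa < 1/2.
for d in (10**3, 10**4, 10**5):
    print(d, round(P_kappa(2.0, 0.6, d), 3), round(Q_kappa(2.0, 0.4, d), 3))
```

The first column of values stabilizes (finite P_κ for κ above the sum exponent), while the second keeps growing, which is the qualitative behavior the definitions above are meant to capture.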

Theorem 2 (Necessary conditions). Assume that (A1) holds. Consider L_∞-approximation over F_d^γ with respect to the class of algorithms A_n^cont ∪ A_n^adapt. Then

$$n(\varepsilon, d; F_d^\gamma) > \exp\Big( \frac{\ln 2}{3} \Big( \sum_{j=1}^{d} \gamma_{d,j} - 2 \Big) \Big) \quad \text{for all } d \in \mathbb{N} \text{ and } \varepsilon \in (0,1). \qquad (10)$$

Therefore, if the problem is

- polynomially tractable, then q(γ) ≤ 1,
- strongly polynomially tractable, then p(γ) ≤ 1.

Proof. Due to (A1), every algorithm A_{n,d} ∈ A_n^cont ∪ A_n^adapt for L_∞-approximation defined on F_d^γ also applies to the embedded space P_d^γ. Furthermore, C_{1,d} ≤ 1 implies that the unit ball B(P_d^γ) is contained in the unit ball B(F_d^γ). Therefore,

$$e^{\mathrm{wor}}(A_{n,d}; F_d^\gamma) \ge e^{\mathrm{wor}}(A_{n,d}|_{P_d^\gamma}; P_d^\gamma) \ge e(n, d; P_d^\gamma).$$

From Theorem 1 we have e(n, d; P_d^γ) ≥ 1 for n < 2^s, where s = s(γ, d) ∈ [0, d] satisfies (6). Hence, for d ∈ N and ε ∈ (0,1) we conclude

$$n(\varepsilon, d; F_d^\gamma) \ge 2^s > \frac{1}{4^{1/3}}\, 2^{(1/3) \sum_{j=1}^{d} \gamma_{d,j}},$$

as claimed in (10). Suppose now that the problem is polynomially tractable. Then there are non-negative constants C, p and q such that

$$n(\varepsilon, d; F_d^\gamma) \le C\, \varepsilon^{-p}\, d^{\,q} \quad \text{for all } d \in \mathbb{N},\ \varepsilon > 0.$$

Take now an arbitrarily fixed ε in (0,1). Then (10) implies that there is a positive C such that

$$2^{(1/3) \sum_{j=1}^{d} \gamma_{d,j}} \le C\, d^{\,q} \quad \text{for all } d \in \mathbb{N}.$$

This is equivalent to the boundedness of Σ_{j=1}^d γ_{d,j} / ln(d+1), and therefore q(γ) ≤ 1, as claimed.

Suppose that the problem is strongly polynomially tractable. Then q = 0 in the bound above, and Σ_{j=1}^d γ_{d,j} is uniformly bounded in d. Hence, p(γ) ≤ 1, as claimed.

Of course, the conditions q(γ) ≤ 1 and p(γ) ≤ 1 are also necessary for polynomial and strong polynomial tractability with respect to smaller classes of algorithms. We next assume (A2) and show that slightly stronger conditions on the weights γ than in Theorem 2 are sufficient for polynomial and strong polynomial tractability.

Theorem 3 (Sufficient conditions). Assume that (A2) holds with a parameter t ∈ (0,1]. Consider L_∞-approximation over F_d^γ with respect to the class of linear algorithms A_n^lin. Then

- q(γ) < t implies that the problem is polynomially tractable,
- p(γ) < t implies that the problem is strongly polynomially tractable.

Proof. Due to (A2), the restriction of the algorithm A*_{n,d} in Proposition 2 from H_d^γ to F_d^γ is a valid linear algorithm for L_∞-approximation over F_d^γ. Furthermore, due to the linearity of A*_{n,d}, for all f ∈ H_d^γ we have

$$\| f - A^*_{n,d} f \|_{L_\infty([0,1]^d)} \le e^{\mathrm{wor}}(A^*_{n,d}; H_d^\gamma)\, \|f\|_{H_d^\gamma} \le e^{\mathrm{wor}}(A^*_{n,d}; H_d^\gamma)\, C_{2,d}\, \|f\|_{F_d^\gamma}.$$

Therefore, we can estimate the n-th minimal error by

$$e(n, d; F_d^\gamma) \le e^{\mathrm{wor}}(A^*_{n,d}|_{F_d^\gamma}; F_d^\gamma) \le C_{2,d}\, e^{\mathrm{wor}}(A^*_{n,d}; H_d^\gamma) \le a \exp\Big( b \sum_{j=1}^{d} (\gamma_{d,j})^t \Big)\, b_\tau\, n^{-(1-\tau)/(2\tau)} \prod_{j=1}^{d} \big( 1 + a_\tau (\gamma_{d,j})^\tau \big)^{1/(2\tau)},$$

where τ is an arbitrary number from (1/2, 1). Using 1 + x ≤ e^x for x ≥ 0, we have

$$e(n, d; F_d^\gamma) \le a\, b_\tau\, n^{-(1-\tau)/(2\tau)} \exp\Big( b \sum_{j=1}^{d} (\gamma_{d,j})^t + \frac{a_\tau}{2\tau} \sum_{j=1}^{d} (\gamma_{d,j})^\tau \Big).$$

Choosing n such that the right-hand side is at most ε, we obtain an estimate for the information complexity with respect to the class of linear algorithms,

$$n(\varepsilon, d; F_d^\gamma) \le c_1\, \varepsilon^{-2\tau/(1-\tau)} \exp\Big( c_2 \sum_{j=1}^{d} (\gamma_{d,j})^t + c_3 \sum_{j=1}^{d} (\gamma_{d,j})^\tau \Big), \qquad (11)$$

where the positive constants c_1, c_2 and c_3 only depend on τ, a and b.

Suppose that q(γ) < t. Then Q_κ(γ) is finite for every κ > q(γ). Taking κ = t we obtain

$$c_2 \sum_{j=1}^{d} (\gamma_{d,j})^t = c_2\, \frac{\sum_{j=1}^{d} (\gamma_{d,j})^t}{\ln(d+1)}\, \ln(d+1) \le c_2\, \big( Q_t(\gamma) + \delta \big)\, \ln(d+1) = \ln\big( (d+1)^{c_2 (Q_t(\gamma)+\delta)} \big)$$

for every δ > 0 whenever d is larger than a certain d_δ. This means that the factor exp(c_2 Σ_{j=1}^d (γ_{d,j})^t) in (11) is polynomially dependent on d. On the other hand, we can choose τ ∈ (max{q(γ), 1/2}, 1) such that Q_τ(γ) is finite and the factor exp(c_3 Σ_{j=1}^d (γ_{d,j})^τ) in (11) is also polynomially dependent on d. So, for this value of τ we can rewrite (11) as

$$n(\varepsilon, d; F_d^\gamma) = \mathcal{O}\big( \varepsilon^{-2\tau/(1-\tau)}\, (d+1)^{c_4} \big),$$

with c_4 independent of d and ε. This means that the problem is polynomially tractable, as claimed.

Suppose finally that p(γ) < t. Then the sums Σ_{j=1}^d (γ_{d,j})^t and Σ_{j=1}^d (γ_{d,j})^τ for τ ∈ (max{p(γ), 1/2}, 1) are both uniformly bounded in d. Therefore (11) yields strong polynomial tractability, which completes the proof.

The conditions in Theorem 3 are obviously also sufficient if we consider larger classes of algorithms. Moreover, the proof of Theorem 3 also provides explicit upper bounds for the exponents of tractability.

We now discuss the role of assumptions (A1) and (A2). They are quite different. The assumption (A1) is used to find a lower bound on the information complexity for the space F_d^γ as long as the space P_d^γ is continuously embedded in F_d^γ with an embedding factor at most one. Such an embedding can be shown for several different classes of functions. The assumption (A2) is used to find an upper bound on the information complexity for the space F_d^γ as long as it is continuously embedded in the space H_d^γ with an embedding factor depending exponentially on the sum of some power of the product weights. This considerably restricts the choice of F_d^γ. We need this assumption in order to use the linear algorithm A*_{n,d} defined on the space H_d^γ due to Kuo et al. [3] and the error bound they proved.

Obviously, we can replace the space H_d^γ in (A2) by some other space which contains at least P_d^γ and for which we know a linear algorithm using n linear functionals whose worst case error is polynomial in n^{-1} with an explicit dependence on the product weights.

We now show that the assumptions (A1) and (A2) allow us to characterize weak tractability and the curse of dimensionality.

Theorem 4 (Weak tractability and the curse of dimensionality). Suppose that (A1) and (A2) with a parameter t ∈ (0,1] hold. Then for L_∞-approximation defined on the space F_d^γ the following statements are equivalent:

(i) The problem is weakly tractable with respect to the class A_n^lin.
(ii) The problem is weakly tractable with respect to the class A_n^cont ∪ A_n^adapt.
(iii) There is no curse of dimensionality for the class A_n^lin.
(iv) There is no curse of dimensionality for the class A_n^cont ∪ A_n^adapt.
(v) For all κ > 0 we have lim_{d→∞} (1/d) Σ_{j=1}^d (γ_{d,j})^κ = 0.
(vi) There exists κ ∈ (0, t) such that lim_{d→∞} (1/d) Σ_{j=1}^d (γ_{d,j})^κ = 0.

Proof. We start by showing that (vi) implies (i), i.e.,

$$\lim_{\varepsilon^{-1} + d \to \infty} \frac{\ln\big(n(\varepsilon, d; F_d^\gamma)\big)}{\varepsilon^{-1} + d} = 0,$$

where the information complexity is taken with respect to linear algorithms A_n^lin. By the arguments used in the proof of Theorem 3 we obtain estimate (11) for all ε > 0, as well as for every d ∈ N and all τ ∈ (1/2, 1), due to assumption (A2). Clearly, for κ ∈ (0, t) as in the hypothesis and t ∈ (0,1] as in the embedding condition, we can find τ ∈ (1/2, 1) such that κ < min{t, τ}. So, since γ_{d,j} ≤ 1, we can estimate both sums on the right-hand side of (11) from above by

$$\sum_{j=1}^{d} (\gamma_{d,j})^{\min\{t,\tau\}} \le \sum_{j=1}^{d} (\gamma_{d,j})^{\kappa}.$$

Thus,

$$\frac{\ln\big(n(\varepsilon, d; F_d^\gamma)\big)}{\varepsilon^{-1} + d} \le \frac{\ln(c_1)}{\varepsilon^{-1} + d} + \frac{2\tau}{1-\tau}\, \frac{\ln(\varepsilon^{-1})}{\varepsilon^{-1} + d} + \max\{c_2, c_3\}\, \frac{\sum_{j=1}^{d} (\gamma_{d,j})^{\kappa}}{\varepsilon^{-1} + d}$$

tends to zero when ε^{-1} + d approaches infinity, as claimed.

Clearly, (i) ⇒ (ii) ⇒ (iv), (i) ⇒ (iii) ⇒ (iv), and (v) ⇒ (vi). Hence, we only need to show that (iv) ⇒ (v). From (A1) we have estimate (10). Then the absence of the curse of dimensionality implies

$$\lim_{d \to \infty} \frac{1}{d} \sum_{j=1}^{d} \gamma_{d,j} = 0.$$

Now, Jensen's inequality yields

$$\frac{1}{d} \sum_{j=1}^{d} \gamma_{d,j} \ge \Big( \frac{1}{d} \sum_{j=1}^{d} (\gamma_{d,j})^\kappa \Big)^{1/\kappa} \quad \text{for } 0 < \kappa \le 1,$$

since f(y) = y^κ is a concave function for y > 0. Thus,

$$\lim_{d \to \infty} \frac{1}{d} \sum_{j=1}^{d} (\gamma_{d,j})^\kappa = 0 \quad \text{for all } 0 < \kappa \le 1.$$

Finally, for every κ ≥ 1 we can estimate γ_{d,j} ≥ (γ_{d,j})^κ, since γ_{d,j} ≤ 1 for j = 1, ..., d. Therefore, lim_{d→∞} (1/d) Σ_{j=1}^d (γ_{d,j})^κ = 0 also holds for κ > 1, and the proof is complete.

In the last part of this section, we give some examples to illustrate the results. In the following we only have to prove the embeddings, i.e. assumptions (A1) and (A2) from the beginning of this section.

Example 1 (Limiting cases P_d^γ and H_d^γ). To begin with, we check the case F_d^γ = P_d^γ. Then (A1) obviously holds with C_{1,d} = 1. To prove (A2), note that the algebraic inclusion F_d^γ ⊂ H_d^γ is trivial by arguments given in Section 5. For f ∈ F_d^γ = P_d^γ we calculate

$$\|f\|^2_{H_d^\gamma} \le \sum_{\alpha \in \{0,1\}^d} \frac{1}{\gamma_{d,\alpha}} \int_{[0,1]^{|\alpha|}} \|D^\alpha f\|_\infty^2 \, dx_\alpha \le \|f\|^2_{F_d^\gamma} \sum_{\alpha \in \{0,1\}^d} \gamma_{d,\alpha}.$$

Hence, the norm of the embedding F_d^γ ⊂ H_d^γ is bounded by

$$\Big( \sum_{\alpha \in \{0,1\}^d} \gamma_{d,\alpha} \Big)^{1/2} = \Big( \prod_{j=1}^{d} (1 + \gamma_{d,j}) \Big)^{1/2} \le \exp\Big( \frac{1}{2} \sum_{j=1}^{d} \gamma_{d,j} \Big).$$

So, with a = 1, b = 1/2 and t = 1 also assumption (A2) is fulfilled and we can apply the stated theorems for the space F_d^γ = P_d^γ.

We now turn to the case F_d^γ = H_d^γ. Unfortunately, the estimate above indicates that (A1) may not hold for F_d^γ = H_d^γ with C_{1,d} ≤ 1. Nevertheless, in this case assumption (A2) is true with C_{2,d} = 1, i.e., a = 1, b = 0 and t = 1. Therefore, we can apply Theorem 3 for this space. Then the problem is strongly polynomially tractable if p(γ) < 1. Moreover, we have polynomial tractability if q(γ) < 1. It is known that these conditions are also necessary; see, e.g., Theorem 12 in [3].

Example 2 (C^(1,...,1)). Consider the space

$$F_d^\gamma = \Big\{ f : [0,1]^d \to \mathbb{R} \,\Big|\, f \in C^{(1,\ldots,1)} \ \text{with} \ \|f\|_{F_d^\gamma} = \max_{\alpha \in \{0,1\}^d} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha f\|_\infty < \infty \Big\}.$$

Since P_d^γ is a linear subset of F_d^γ and the norm of P_d^γ is simply the restriction of the norm of F_d^γ, we have P_d^γ ⊂ F_d^γ with an embedding factor C_{1,d} = 1, and (A1) holds. For the factor C_{2,d} of the embedding F_d^γ ⊂ H_d^γ, exactly the same estimates hold as in the previous example and, moreover, the set inclusion is obvious. Therefore, also assumption (A2) is fulfilled and we can apply the theorems of this section to the space F_d^γ.

Finally, the last example shows that even very high smoothness does not improve the conditions for tractability.

Example 3 (C^∞). Assume

$$F_d^\gamma = \Big\{ f : [0,1]^d \to \mathbb{R} \,\Big|\, f \in C^\infty \ \text{with} \ \|f\|_{F_d^\gamma} = \sup_{\alpha \in \mathbb{N}_0^d} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha f\|_\infty < \infty \Big\}.$$

Obviously, P_d^γ ⊂ C^∞, and functions from P_d^γ are at most linear in each coordinate. Hence, D^α f ≡ 0 for all α ∈ N_0^d \ {0,1}^d. Therefore, once again we have

$$\|f\|_{P_d^\gamma} = \max_{\alpha \in \{0,1\}^d} \frac{1}{\gamma_{d,\alpha}} \|D^\alpha f\|_\infty = \|f\|_{F_d^\gamma} \quad \text{for all } f \in P_d^\gamma.$$

This yields P_d^γ ⊂ F_d^γ with an embedding factor C_{1,d} = 1. In addition, also (A2) can be concluded as in the examples above. So, even infinite smoothness leads to the same conditions for tractability and the curse of dimensionality as before.

Note that in the last example we do not need to claim a product structure for the weights according to multi-indices α ∈ N_0^d \ {0,1}^d. Moreover, this example is a generalization of the space considered in [7]. For γ_α ≡ 1 we reproduce the intractability result stated there.

In conclusion, we discuss the tractability behavior of L_∞-approximation defined on one of the spaces F_d^γ above using product weights which are independent of the dimension, i.e., γ_{d,j} ≡ γ_j = Θ(j^{-β}) for some β ≥ 0.

This is a typical example in the theory of product weights, and p(γ) is finite if and only if β > 0. If so, then p(γ) = 1/β. See, e.g., Section 5.3.4 in [6].

If β = 0, then the problem is intractable due to Theorem 4, assertion (v), since (1/d) Σ_{j=1}^d γ_{d,j} does not tend to zero. For β ∈ (0,1), easy calculus yields q(γ) > 1. So, using Theorem 2 we conclude polynomial intractability in this case. On the other hand, for all δ and κ with 0 < δ < κ ≤ 1, we have

$$\frac{1}{d} \sum_{j=1}^{d} j^{-\kappa} \le \frac{1}{d}\, \Big( \sum_{j=1}^{d} j^{-(1+\delta)} \Big)^{\frac{\kappa}{1+\delta}}\, d^{\,1-\frac{\kappa}{1+\delta}} = \Big( \sum_{j=1}^{d} j^{-(1+\delta)} \Big)^{\frac{\kappa}{1+\delta}}\, d^{-\frac{\kappa}{1+\delta}} \longrightarrow 0 \quad \text{as } d \to \infty,$$

and if the exponent exceeds one, the fraction obviously tends to zero, too. Hence, condition (vi) of Theorem 4 holds and the problem is weakly tractable if β > 0.

For β = 1, we use inequality (10) from Theorem 2 and estimate

$$\sum_{j=1}^{d} \gamma_{d,j} = \sum_{j=1}^{d} j^{-1} \ge c\, \ln(d+1)$$

for some positive c. Therefore,

$$n(\varepsilon, d; F_d^\gamma) \ge \frac{1}{2^{2/3}}\, (d+1)^{\,c \ln(2)/3} \quad \text{for all } d \in \mathbb{N},\ \varepsilon \in (0,1).$$

Hence, strong polynomial tractability does not hold. Moreover, it is easy to show that for β = 1 the sufficient condition q(γ) < 1 for polynomial tractability is not fulfilled. So, we do not know if polynomial tractability holds.

If β > 1, we easily see that p(γ) = 1/β < 1 = t. Hence, Theorem 3 provides strong polynomial tractability in this case.
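The case distinction of this discussion can be summarized in a small, purely illustrative Python helper. It assumes the exponents p(γ) = q(γ) = 1/β for γ_j = Θ(j^{-β}) with β > 0 (the value of p(γ) is from Section 5.3.4 in [6]; the identity for q(γ) is an easy calculation along the lines above) and the embedding parameter t = 1 of the examples; it is a restatement of the conclusions, not an independent result.

```python
def tractability_for_beta(beta, t=1.0):
    """Tractability classification of Section 6 for generator weights gamma_j = Theta(j**-beta)."""
    if beta == 0.0:
        return "intractable (curse of dimensionality; Theorem 4, condition (v), fails)"
    p = q = 1.0 / beta               # assumed sum exponents for gamma_j = Theta(j**-beta)
    if p < t:
        return "strongly polynomially tractable (Theorem 3)"
    if q > 1.0:
        return "weakly tractable, but not polynomially tractable (Theorems 2 and 4)"
    # remaining case beta = 1, i.e. p = q = 1
    return "weakly tractable, not strongly polynomially tractable; polynomial tractability open"

for beta in (0.0, 0.5, 1.0, 2.0):
    print(beta, "->", tractability_for_beta(beta))
```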

7 Final remarks

Note that the main result of this paper, the lower bound given in Theorem 1, can be easily transferred from [0,1]^d to more general domains. Indeed, the case Ω = [c_1, c_2]^d, where c_1 < c_2, can be immediately obtained using our techniques. It turns out that in this case we have to modify estimate (6) by a constant which depends only on the length of the interval [c_1, c_2]. Thus, the general tractability behavior does not change.

Another extension of the results is possible if we consider the L_p-norms (1 ≤ p < ∞) instead of the L_∞-norm. We want to briefly discuss these norms for the unweighted case. Then the modifications for the weighted case are obvious. Following Novak and Woźniakowski [7], let

$$F_{d,p} = \Big\{ f : [c_1, c_2]^d \to \mathbb{R} \,\Big|\, f \in C^\infty \ \text{with} \ \|f\|_{F_{d,p}} = \sup_{\alpha \in \mathbb{N}_0^d} \|D^\alpha f\|_{L_p} < \infty \Big\}$$

for 1 ≤ p < ∞ and d ∈ N. Let l = c_2 − c_1 > 0. We want to approximate f ∈ F_{d,p} in the norm of L_p, i.e., we consider the n-th minimal error

$$e_p(n, d; F_{d,p}) = \inf_{A_{n,d} \in A_n^d} e_p^{\mathrm{wor}}(A_{n,d}; F_{d,p}) = \inf_{A_{n,d} \in A_n^d} \sup_{f \in B(F_{d,p})} \| f - A_{n,d}(f) \|_{L_p([c_1, c_2]^d)}.$$

Without loss of generality, we restrict ourselves to the case [c_1, c_2] = [0, l]. In order to conclude a lower bound analogous to Theorem 1, i.e., e_p(n, d; F_{d,p}) ≥ 1 for n < 2^s, we once again use Lemma 1 with F = F_{d,p} and G = L_p([0, l]^d). The authors of [7] suggest to use the subspace V_d^(k) ⊂ F_{d,p} defined as

$$V_d^{(k)} = \mathrm{span}\Big\{ g_i : [c_1, c_2]^d \to \mathbb{R},\ x \mapsto g_i(x) = \prod_{j=1}^{s} \Big( \sum_{m=(j-1)k+1}^{jk} x_m \Big)^{i_j} \,\Big|\, i \in \{0,1\}^s \Big\},$$

where s = ⌊d/k⌋ and k ∈ N is such that kl ≥ 2(p+1)^{1/p}. Hence, if l < 2(p+1)^{1/p} we have to use blocks of variables of size k > 1 in order to guarantee (5), i.e., to fulfill the condition

$$\|g\|_{F_{d,p}} \le \|g\|_{L_p} \quad \text{for all } g \in V_d^{(k)}. \qquad (12)$$

Therefore, Novak and Woźniakowski define k = ⌈2(p+1)^{1/p}/l⌉, but this is too small, as the following example shows. Take l = 1, i.e. [c_1, c_2] = [0,1], and p = 1. Then k = 4 should be a proper choice, but for g(x) = (x_1 + x_2 + x_3 + x_4) − 2 we obtain ‖g‖_{L_1} = 7/15 by using Maple, while ‖∂g/∂x_1‖_{L_1} = 1. This contradicts (12).
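The counterexample above is easy to double-check numerically. The following Monte Carlo sketch (ours; the exact value 7/15 was computed symbolically in the text) estimates ‖g‖_{L_1([0,1]^4)} for g(x) = x_1 + x_2 + x_3 + x_4 − 2 and compares it with ‖∂g/∂x_1‖_{L_1} = 1:

```python
import random

def l1_norm_of_g(samples=1_000_000, seed=0):
    """Monte Carlo estimate of ||g||_{L_1([0,1]^4)} for g(x) = x1 + x2 + x3 + x4 - 2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += abs(sum(rng.random() for _ in range(4)) - 2.0)
    return total / samples

print(l1_norm_of_g(), 7.0 / 15.0)   # estimate ~ 0.4667 = 7/15, whereas ||dg/dx1||_{L_1} = 1
# Since 7/15 < 1, condition (12) indeed fails for k = 4, l = 1 and p = 1.
```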

Proposition 3. Let 1 ≤ p < ∞ and k ∈ N with

$$k \ge \frac{8\,(p+1)^{2/p}}{l^2}. \qquad (13)$$

Then condition (12) holds for V_d^(k) ⊂ F_{d,p}. Hence, the problem remains intractable, since e_p(n, d; F_{d,p}) ≥ 1 for all n < 2^{⌊d/k⌋}.

Proof. Step 1. Due to the structure of the functions g from V_d^(k), it suffices to show

$$\| D^\alpha g \|_{L_p([0,l]^{ks})} \le \| g \|_{L_p([0,l]^{ks})} \quad \text{for all } g \in V_d^{(k)} \text{ and every } \alpha \in M_d^{(k)},$$

where the set of multi-indices M_d^(k) is defined by

$$M_d^{(k)} = \Big\{ \alpha \in \{0,1\}^{ks} \,\Big|\, \sum_{m \in I_j} \alpha_m \le 1 \ \text{for all } j = 1, \ldots, s \Big\}$$

and I_j = {(j−1)k + 1, ..., jk}. Similarly to the proof of Theorem 1, we only consider the case α = e_t ∈ {0,1}^{ks} with t ∈ I_j. The rest then follows by induction. We can represent g ∈ V_d^(k), as well as D^{e_t} g, by functions a, b: [0,l]^{k(s−1)} → R such that

$$g(x) = a(\tilde{x}) \sum_{m=1}^{k} y_m + b(\tilde{x}) \quad \text{and} \quad D^{e_t} g(x) = a(\tilde{x}),$$

where x = (x_{I_1}, ..., x_{I_{j−1}}, y, x_{I_{j+1}}, ..., x_{I_s}) ∈ [0,l]^{ks} and x̃ = (x_{I_1}, ..., x_{I_{j−1}}, x_{I_{j+1}}, ..., x_{I_s}) ∈ [0,l]^{k(s−1)}, as well as y = (y_1, ..., y_k) ∈ [0,l]^k. Here x_{I_j} denotes the k-dimensional vector of the components x_m with coordinates m ∈ I_j. Therefore, we can rewrite the inequality ‖D^{e_t} g‖_{L_p([0,l]^{ks})} ≤ ‖g‖_{L_p([0,l]^{ks})} as

$$\int_{[0,l]^{k(s-1)}} \int_{[0,l]^{k}} |a(\tilde{x})|^p \, dy \, d\tilde{x} \;\le\; \int_{[0,l]^{k(s-1)}} \int_{[0,l]^{k}} \Big| a(\tilde{x}) \sum_{m=1}^{k} y_m + b(\tilde{x}) \Big|^p \, dy \, d\tilde{x},$$

such that it is enough to prove a pointwise estimate of the inner integrals for fixed x̃ ∈ [0,l]^{k(s−1)} with a = a(x̃) ≠ 0. Easy calculus yields

$$\int_{[0,l]^{k}} \Big| a \sum_{m=1}^{k} y_m + b \Big|^p \, dy = l^{\,p+k} \int_{[-1/2,1/2]^{k}} \Big| a \sum_{m=1}^{k} z_m + \tilde{b} \Big|^p \, dz$$

for some constant b̃ ∈ R. The right-hand side is minimized for b̃ = 0. So, we can estimate this integral from below by

$$\int_{[0,l]^{k}} \Big| a \sum_{m=1}^{k} y_m + b \Big|^p \, dy \;\ge\; l^{\,p+k}\, |a|^p \int_{[-1/2,1/2]^{k}} \Big| \sum_{m=1}^{k} z_m \Big|^p \, dz \;=\; l^{\,p}\, |a|^p \int_{[0,l]^{k}} dy \int_{[-1/2,1/2]^{k}} \Big| \sum_{m=1}^{k} z_m \Big|^p \, dz.$$

Hence, it remains to show that the choice of k implies that

$$\int_{[-1/2,1/2]^{k}} \Big| \sum_{m=1}^{k} z_m \Big|^p \, dz \ge l^{-p}.$$

Step 2. In this last part, we will show by arguments from Banach space geometry that

$$\int_{[-1/2,1/2]^{k}} \Big| \sum_{m=1}^{k} z_m \Big|^p \, dz \;\ge\; \Big( \frac{k}{2} \Big)^{p/2} \frac{1}{2^p\,(1+p)} \;=\; \Big( \frac{k}{2} \Big)^{p/2} \int_{-1/2}^{1/2} |x|^p \, dx. \qquad (14)$$

Obviously, we only need to prove the inequality for k ≥ 2, since the equation on the right, as well as the case k = 1, are trivial. To abbreviate the notation, we define

$$f : \mathbb{R}^k \to \mathbb{R}, \quad z = (z_1, \ldots, z_k) \mapsto \sum_{m=1}^{k} z_m$$

for fixed k ≥ 2. For given vectors z, ξ ∈ R^k, let ⟨z, ξ⟩ denote the scalar product Σ_{m=1}^k z_m ξ_m. In the special case ξ* = (1/√k)(1, ..., 1) ∈ S^{k−1} we have ⟨z, ξ*⟩ = t, for a given t ∈ R, if and only if f(z) = t√k. Furthermore, note that every ξ in the k-dimensional unit sphere S^{k−1} uniquely defines a hyperplane ξ^⊥ = {z ∈ R^k | ⟨z, ξ⟩ = 0} perpendicular to ξ which contains zero. Therefore, for every t ∈ [0, ∞), the set ξ^⊥ + tξ = {z ∈ R^k | ⟨z, ξ⟩ = t} describes a parallel shifted hyperplane with distance t to the origin. Using Fubini's theorem, this leads to the following representation:

$$\int_{[-1/2,1/2]^k} |f(z)|^p\, dz = 2 \int_{\{z \in [-1/2,1/2]^k \mid \langle z, \xi^* \rangle \ge 0\}} |f(z)|^p\, dz = 2\, k^{p/2} \int_{0}^{\infty} t^p \int_{\{z \in [-1/2,1/2]^k \mid \langle z, \xi^* \rangle = t\}} 1\, dz\, dt.$$

Now we see that the inner integral describes the (k−1)-dimensional volume

$$v(t) = \lambda_{k-1}\big( [-1/2, 1/2]^k \cap (\xi^{*\perp} + t\xi^*) \big)$$

of the parallel section of the unit cube with the hyperplane defined above. Because of Ball's famous theorem we know v(0) ≤ √2, independent of k; see, e.g., [2, Chapter 7]. Moreover, ξ^{*⊥} provides a central hyperplane section of the unit cube, such that we have

$$\int_{0}^{\infty} v(t)\, dt = \frac{1}{2}\, \lambda_k\big( [-1/2, 1/2]^k \big) = \frac{1}{2}$$

and, by Brunn's theorem (see Theorem 2.3 in [2]), v ≥ 0 is non-increasing on [0, ∞). Thus, v is related to the distribution function of a certain non-negative real-valued random variable X, up to a normalizing factor, i.e. v(t) = v(0) P(X ≥ t). Using Hölder's inequality we obtain E(X^{1+p}) ≥ (EX)^{1+p} and, respectively,

$$\int_{0}^{\infty} t^p\, v(t)\, dt \ge \frac{1}{v(0)^p\,(1+p)} \Big( \int_{0}^{\infty} v(t)\, dt \Big)^{1+p}$$

by integration by parts. Altogether we conclude inequality (14) and, with k bounded from below by (13), even

$$\int_{[-1/2,1/2]^k} |f(z)|^p\, dz \ge l^{-p}.$$

Therefore, the proof is complete.

Using other methods, we can improve inequality (14) in Step 2 of the last proof. In detail, we can represent the integral on the left as an expectation E(|f(Y)|^p) with a suitable random vector Y. For p = 2N with N ∈ N, this can be calculated exactly. Finally, it turns out that it is enough to take

$$k \ge \begin{cases} 12/l^2, & \text{if } 2 \le p < 4, \\ 8/l^2, & \text{if } 4 \le p, \end{cases}$$

in order to conclude the claimed intractability result for the L_p-approximation problem. Nevertheless, we want to stress the point that also with these improvements the lower bounds on k are not sharp, since we know from [7] that in the limit case p = ∞ we can take k = ⌈2/l⌉. On the other hand, upper bounds for the k-dimensional integral, concluded using Hoeffding's inequality, yield that k of order p/2 is the right order.

Acknowledgments

The author thanks A. Hinrichs, E. Novak and H. Woźniakowski for their useful hints and valuable comments on this paper.

References

[1] K. Deimling, Nonlinear Functional Analysis, Springer-Verlag, Berlin, 1985.

[2] A. Koldobsky, Fourier Analysis in Convex Geometry, Amer. Math. Soc., Providence, RI, 2005.

[3] F. Y. Kuo, G. W. Wasilkowski, and H. Woźniakowski, Multivariate L_∞ approximation in the worst case setting over reproducing kernel Hilbert spaces, J. Approx. Theory, 152, 135–160, 2008.

[4] F. Y. Kuo, G. W. Wasilkowski, and H. Woźniakowski, On the power of standard information for multivariate approximation in the worst case setting, J. Approx. Theory, 158, 97–125, 2009.

[5] E. Novak, I. H. Sloan, J. F. Traub, and H. Woźniakowski, Essays on the Complexity of Continuous Problems, Europ. Math. Soc., Zürich, 2009.

[6] E. Novak and H. Woźniakowski, Tractability of Multivariate Problems. Vol. I: Linear Information, Europ. Math. Soc., Zürich, 2008.

[7] E. Novak and H. Woźniakowski, Approximation of infinitely differentiable multivariate functions is intractable, J. Complexity, 25, 398–404, 2009.

[8] E. Novak and H. Woźniakowski, Tractability of Multivariate Problems. Vol. II: Standard Information for Functionals, Europ. Math. Soc., Zürich, 2010.

[9] I. H. Sloan and H. Woźniakowski, When are quasi-Monte Carlo algorithms efficient for high-dimensional integrals?, J. Complexity, 14, 1–33, 1998.

[10] J. F. Traub, G. W. Wasilkowski, and H. Woźniakowski, Information-Based Complexity, Academic Press, Boston, 1988.

[11] G. Wahba, Spline Models for Observational Data, Soc. Indust. Appl. Math. (SIAM), Philadelphia, 1990.

[12] A. G. Werschulz and H. Woźniakowski, Tractability of multivariate approximation over a weighted unanchored Sobolev space, Constr. Approx., 30, 395–421, 2009.