Recent Developments of Approximation Theory and Greedy Algorithms


1 Recent Developments of Approximation Theory and Greedy Algorithms. Peter Binev, Department of Mathematics and Interdisciplinary Mathematics Institute, University of South Carolina. Reduced Order Modeling in General Relativity, Pasadena, CA, June 6-7, 2013.

2 Outline
Greedy Algorithms: Initial Remarks, Greedy Bases, Examples for Greedy Algorithms.
Tree Approximation: Initial Setup, Binary Partitions, Near-Best Approximation, Near-Best Tree Approximation.
Parameter Dependent PDEs: Reduced Basis Method, Kolmogorov Widths, Results, Robustness.

3 Polynomial Approximations
Find the best L2-approximation via polynomials to a function on an interval [a, b]:
- space X = L2[a, b] of functions f with ||f|| < ∞
- norm ||f|| = ||f||_X = ||f||_2 := ( ∫_[a,b] |f(x)|^2 dx )^{1/2}
- basis φ_1 = 1, φ_2 = x, φ_3 = x^2, ..., φ_n = x^{n-1}, ...
- space of polynomials of degree ≤ n-1: Φ_n := span{φ_1, φ_2, ..., φ_n}
- approximation p_n := argmin_{p ∈ Φ_n} ||f - p||
- representation p_n = Σ_{j=1}^n c_{n,j}(f) φ_j
In general the coefficients c_{n,j}(f) are not easy to find; in this case we can use orthogonality → Hilbert spaces.

4 Polynomial Approximations on a Domain
Find the best L2-approximation via polynomials to a function on a domain Ω:
- space X = L2(Ω) of functions f with ||f|| < ∞
- norm ||f|| = ||f||_X = ||f||_2 := ( ∫_Ω |f(x)|^2 dx )^{1/2}
- basis φ_1, φ_2, φ_3, ..., φ_n, ...
- space of polynomials Φ_n := span{φ_1, φ_2, ..., φ_n}
- approximation p_n := argmin_{p ∈ Φ_n} ||f - p||
- representation p_n = Σ_{j=1}^n c_{n,j}(f) φ_j
In general the coefficients c_{n,j}(f) are not easy to find; in this case we can use orthogonality → Hilbert spaces.

5 Hilbert Space Setup
- Banach space X: a normed linear space, f ∈ X
- Hilbert space H: a Banach space with a scalar product ⟨f, g⟩
- L2(Ω) is a Hilbert space with ⟨f, g⟩ := ∫_Ω f(x) conj(g(x)) dx (conj: complex conjugation)
- (induced) norm ||f|| := ⟨f, f⟩^{1/2}
- orthogonality: f ⊥ g ⟺ ⟨f, g⟩ = 0
- orthogonal basis via Gram-Schmidt: ψ_n = φ_n + Σ_{j=1}^{n-1} q_j φ_j with ψ_n ⊥ Φ_{n-1}
- space of polynomials Φ_n = span{φ_1, φ_2, ..., φ_n} = span{ψ_1, ψ_2, ..., ψ_n}
- representation p_n = Σ_{j=1}^n C_j(f) ψ_j = argmin_{p ∈ Φ_n} ||f - p|| with C_j(f) := ⟨f, ψ_j⟩ / ⟨ψ_j, ψ_j⟩; the C_j do not depend on n
- in case ||ψ_j|| = 1, we have p_n = Σ_{j=1}^n ⟨f, ψ_j⟩ ψ_j
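The Gram-Schmidt construction above translates directly into a short computation. The following Python sketch is purely illustrative (the uniform-grid discretization of the inner product and the helper name `l2_projection` are assumptions, not part of the slides): it orthonormalizes the monomials on [a, b] and projects f onto them.

```python
import numpy as np

def l2_projection(f, degree, a=0.0, b=1.0, m=2000):
    """Best L2[a,b] polynomial approximation of f via Gram-Schmidt.

    The inner product <u, v> = int_a^b u(x) v(x) dx is discretized on a
    uniform grid (a sketch; a quadrature rule would be more accurate)."""
    x = np.linspace(a, b, m)
    w = (b - a) / m                        # weight of each grid cell
    dot = lambda u, v: np.sum(u * v) * w   # discrete <u, v>

    # monomial basis phi_j(x) = x^(j-1), j = 1, ..., degree + 1
    phis = [x**j for j in range(degree + 1)]

    # Gram-Schmidt: subtract projections onto previous psi_j, then normalize
    psis = []
    for phi in phis:
        psi = phi - sum(dot(phi, q) * q for q in psis)
        psis.append(psi / np.sqrt(dot(psi, psi)))

    # orthonormal expansion p = sum_j <f, psi_j> psi_j
    fx = f(x)
    p = sum(dot(fx, q) * q for q in psis)
    return x, p

x, p = l2_projection(np.exp, degree=3)
err = np.sqrt(np.sum((np.exp(x) - p)**2) / len(x))
```

Replacing the grid sum with a proper quadrature rule changes only the `dot` helper; the structure of the algorithm stays the same.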

6 Approximation in Hilbert Spaces
- orthonormal basis {ψ_j}_{j=1}^∞ of H: ⟨ψ_j, ψ_k⟩ = δ_{j,k} (0 if j ≠ k, 1 if j = k); H = closure of span{ψ_j}_{j=1}^∞
- Parseval's identity: ||f||^2 = Σ_{j=1}^∞ |⟨f, ψ_j⟩|^2
- linear approximation p_n = Σ_{j=1}^n ⟨f, ψ_j⟩ ψ_j ∈ Φ_n, with approximation error ||f - p_n||^2 = Σ_{j=n+1}^∞ |⟨f, ψ_j⟩|^2
- nonlinear approximation g_n = Σ_{j=1}^n ⟨f, ψ_{k_j}⟩ ψ_{k_j}, with index set Λ_n = {k_1, k_2, ..., k_n} ⊂ ℕ and Λ_n = Λ_{n-1} ∪ {k_n}
How to find Λ_n? Order the coefficients: |⟨f, ψ_{k_1}⟩| ≥ |⟨f, ψ_{k_2}⟩| ≥ |⟨f, ψ_{k_3}⟩| ≥ ...
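Choosing Λ_n by ordering the coefficients is easy to make concrete. A minimal sketch (the helper name `best_n_term` and the toy coefficient vector are hypothetical): given the coefficients ⟨f, ψ_k⟩, keep the n largest in absolute value; by Parseval's identity the squared error is the sum of the squares of the dropped ones.

```python
import numpy as np

def best_n_term(coeffs, n):
    """Best n-term approximation in an orthonormal basis.

    coeffs[k] = <f, psi_k>.  Keeps the n largest coefficients in absolute
    value (the index set Lambda_n); by Parseval, the squared error equals
    the sum of squares of the discarded coefficients."""
    order = np.argsort(-np.abs(coeffs))    # indices by decreasing |<f, psi_k>|
    Lambda_n = order[:n]
    kept = np.zeros_like(coeffs)
    kept[Lambda_n] = coeffs[Lambda_n]
    sigma_n = np.sqrt(np.sum((coeffs - kept)**2))
    return Lambda_n, kept, sigma_n

c = np.array([0.1, 2.0, 0.05, -1.5, 0.3])
Lambda_2, g2, sigma_2 = best_n_term(c, 2)   # keeps the entries 2.0 and -1.5
```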

7 Nonlinear Approximation
Given a basis {ψ_j}_{j=1}^∞, choose any n elements from it and form the linear combinations Σ_{j=1}^n C_j ψ_{k_j}:
- approximation class (not a space!) Σ_n = Σ_n({ψ_j}) := { g = Σ_{k∈Λ} C_k ψ_k : Λ ⊂ ℕ, #Λ ≤ n }; it can be defined for any basis in a Banach space X
- approximate f ∈ X via functions from Σ_n; best approximation σ_n(f) := inf_{g ∈ Σ_n} ||f - g||
- basic question: how to find g_n ∈ Σ_n such that ||f - g_n|| ≤ C σ_n?
Note that although {ψ_j} and {φ_j} might yield the same polynomial spaces Φ_n, it is usually the case that Σ_n({ψ_j}) ≠ Σ_n({φ_j}) for all n, and even the rates of σ_n can be completely different.

8 Greedy Approximation
How to find efficiently g_n ∈ Σ_n that approximates f well?
The case of an orthonormal basis {ψ_j}_{j=1}^∞ in a Hilbert space X: incremental algorithm for finding g_n = Σ_{k∈Λ_n} ⟨f, ψ_k⟩ ψ_k with Λ_0 = ∅ and Λ_j = Λ_{j-1} ∪ {k_j}, where
k_j = argmax_{k ∈ ℕ\Λ_{j-1}} |⟨f, ψ_k⟩| = argmax_{k ∈ ℕ} |⟨f - g_{j-1}, ψ_k⟩|.
For a general basis in a Banach space X, let f ∈ X have the representation f = Σ_{j=1}^∞ c_j(f) ψ_j.
Greedy Algorithm: define g_n = Σ_{k∈Λ_n} c_k(f) ψ_k for Λ_0 = ∅ and Λ_j = Λ_{j-1} ∪ {k_j}, where k_j = argmax_{k ∈ ℕ\Λ_{j-1}} |c_k(f)|.
Note that in the general case g_n is no longer the best approximation from Σ_n to f.

9 Greedy Bases
The bases for which g_n is a good approximation:
Greedy Basis {ψ_j}_{j=1}^∞: for any f ∈ X the greedy approximation g_n = g_n(f) to f satisfies ||f - g_n|| ≤ G σ_n(f) with a constant G independent of f and n.
Unconditional Basis {ψ_j}_{j=1}^∞: for any sign sequence {θ_j}_{j=1}^∞, θ_j = ±1, the operator M_θ defined by M_θ( Σ_j a_j ψ_j ) = Σ_j θ_j a_j ψ_j is bounded.
Democratic Basis {ψ_j}_{j=1}^∞: there exists a constant D such that for any two finite index sets P and Q with the same cardinality #P = #Q we have || Σ_{k∈P} ψ_k || ≤ D || Σ_{k∈Q} ψ_k ||.
Theorem [Konyagin, Temlyakov]: A basis is greedy if and only if it is unconditional and democratic.

10 Weak Greedy Algorithm
Often it is difficult (or even impossible) to find the maximizing element ψ_k; settle for an element that is at least γ times the best, with 0 < γ ≤ 1:
define g_n(f) := Σ_{k∈Λ_n} c_k(f) ψ_k for Λ_0 = ∅ and Λ_j = Λ_{j-1} ∪ {k_j}, where |c_{k_j}(f)| ≥ γ max_{k ∈ ℕ\Λ_{j-1}} |c_k(f)|.
Theorem [Konyagin, Temlyakov]: For any greedy basis of a Banach space X and any γ ∈ (0, 1] there is a basis-specific constant C(γ), independent of f and n, such that ||f - g_n(f)|| ≤ C(γ) σ_n(f).

11 General Greedy Strategy
- start with g_0 = 0 and Λ_0 = ∅; set j = 1 and loop through the next items
- analyze the element f - g_{j-1} together with the possible improvements related to (some of) the elements from {ψ_k}_{k=1}^∞ by calculating a decision functional λ_j(f - g_{j-1}, ψ_k) for each possible ψ_k
  (in tree approximation the number of possible elements is bounded by (a multiple of) j; in the classical settings λ_j is usually related to inf_{k,C} ||f - g_{j-1} - C ψ_k||)
- be greedy: use the element ψ_{k_j} with the largest λ_j, or at least one for which λ_j(f - g_{j-1}, ψ_{k_j}) ≥ γ sup_k λ_j(f - g_{j-1}, ψ_k)
- set Λ_j = Λ_{j-1} ∪ {k_j}
- calculate the next approximation g_j based on {ψ_k}_{k∈Λ_j} (in the classical settings g_j is found in the form g_{j-1} + C ψ_{k_j})
- set j := j + 1 and continue the loop

12-16 Greedy Algorithms for Dictionaries in Hilbert Spaces
{ψ_k}_{k=1}^∞ is a dictionary (not a basis!) in a Hilbert space, with ||ψ_k|| = 1.
Pure Greedy Algorithm: k_j := argmax_k |⟨f - g_{j-1}, ψ_k⟩| and g_j = g_{j-1} + ⟨f - g_{j-1}, ψ_{k_j}⟩ ψ_{k_j}.
Orthogonal Greedy Algorithm: k_j := argmax_k |⟨f - g_{j-1}, ψ_k⟩| and g_j = P_{{ψ_k}_{k∈Λ_j}} f, where P_Ψ f is the orthogonal projection of f onto the space span Ψ.
Weak Greedy Algorithm with 0 < γ ≤ 1: |⟨f - g_{j-1}, ψ_{k_j}⟩| ≥ γ sup_k |⟨f - g_{j-1}, ψ_k⟩| and g_j = g_{j-1} + ⟨f - g_{j-1}, ψ_{k_j}⟩ ψ_{k_j}.
Weak Orthogonal Greedy Algorithm with 0 < γ ≤ 1: |⟨f - g_{j-1}, ψ_{k_j}⟩| ≥ γ sup_k |⟨f - g_{j-1}, ψ_k⟩| and g_j = P_{{ψ_k}_{k∈Λ_j}} f.
More in [V. Temlyakov, Greedy Approximation, Cambridge University Press, 2011].
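The orthogonal variant can be sketched for a finite dictionary in R^d. The Python example below is illustrative only (the random dictionary and the helper name `orthogonal_greedy` are made up): each step picks k_j = argmax |⟨f - g_{j-1}, ψ_k⟩| and then re-projects f onto the span of all selected atoms; a pure greedy step would instead just add ⟨f - g_{j-1}, ψ_{k_j}⟩ ψ_{k_j}.

```python
import numpy as np

def orthogonal_greedy(f, D, n):
    """Orthogonal Greedy Algorithm for a dictionary in R^d.

    D: (d, K) array whose columns are unit-norm dictionary elements.
    Each step selects the atom most correlated with the residual, then
    sets g_j to the orthogonal projection of f onto the selected span."""
    Lambda = []
    g = np.zeros_like(f)
    for _ in range(n):
        residual = f - g
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in Lambda:
            Lambda.append(k)
        # least squares = orthogonal projection onto span{psi_k : k in Lambda}
        coef, *_ = np.linalg.lstsq(D[:, Lambda], f, rcond=None)
        g = D[:, Lambda] @ coef
    return g, Lambda

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 30))
D /= np.linalg.norm(D, axis=0)           # normalize: ||psi_k|| = 1
f = D[:, 3] + 0.5 * D[:, 17]             # f is a combination of two atoms
g, Lambda = orthogonal_greedy(f, D, 4)
```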

17-18 Two Additional Examples
Coarse-to-Fine Algorithms in Tree Approximation
- framework for adaptive partitioning strategies
- g_n corresponds to a (binary) tree with complexity n
- the functionals λ_k are estimators of the local errors
- the search for k_j is limited to the leaves of the tree corresponding to g_{j-1}
- the plain greedy strategy does not work and needs modifications
- theoretical estimates ensure near-best approximation with essentially linear complexity
Greedy Approach to the Reduced Basis Method
- the problem is to approximate a high dimensional parametric set via a low dimensional subspace
- the error cannot be calculated efficiently, so one has to settle for computing a surrogate; using weak greedy strategies is a must
- the general comparison of the greedy approximation with the best approximation involves exponential constants, so finer estimation techniques should be applied

19 Adaptive Approximation on Binary Partitions
Function f : X → Y, where X ⊂ ℝ^d is a domain equipped with a measure ρ_X such that ρ_X(X) = 1, and Y ⊂ [-M, M] ⊂ ℝ for a given constant M.
Adaptive binary partitions of X with building blocks Δ_{j,k}, j = 1, 2, ... and k = 0, 1, ..., 2^j - 1 (k represents a bitstream of length j):
- Δ_{0,0} = X
- Δ_{1,0} ∪ Δ_{1,1} = Δ_{0,0} and ρ_X(Δ_{1,0} ∩ Δ_{1,1}) = 0
- Δ_{j+1,2k} ∪ Δ_{j+1,2k+1} = Δ_{j,k} and ρ_X(Δ_{j+1,2k} ∩ Δ_{j+1,2k+1}) = 0
Adaptive partition P: start with Δ_{0,0} and, for certain pairs (j, k), replace Δ_{j,k} with Δ_{j+1,2k} and Δ_{j+1,2k+1}; the corresponding binary tree T = T(P) has nodes Δ_{j,k}.

20-22 Binary Partitions (figures: examples of adaptive binary partitions and their associated binary trees)

23-24 Adaptive Approximation on Binary Partitions
Piecewise polynomial approximation of f on the partition P: f_P(x) := Σ_{Δ∈P} p_{Δ,f}(x) χ_Δ(x).
The process of finding an appropriate partition P can be defined on the corresponding tree T = T(P): tree algorithms.
- in T the node Δ_{j,k} is the parent of its children Δ_{j+1,2k} and Δ_{j+1,2k+1}
- not every tree corresponds to a partition: admissible trees; T is admissible if for each node Δ ∈ T its sibling is also in T
- the elements of P are the terminal nodes of T, its leaves L(T)
- usually the complexity of P is measured by the number N = #P of its elements; the number of nodes of the binary tree T(P) is an equivalent measure, since #T(P) = 2N - 1

25-26 Near-Best Approximation
Best approximation: σ_N(f) := inf_{P : #P ≤ N} ||f - f_P||.
Approximation class A^s(X): f ∈ A^s(X) ⟺ σ_N(f) = O(N^{-s/d}); this describes the asymptotic behavior of the approximation; note the dependence on the dimension (curse of dimensionality).
Usually the theoretical results are stated in terms of how the algorithms perform for functions from an approximation class; this does not provide any assurance about the performance for an individual function. Can we do better?
Near-best approximation of f: there exist constants C_1 < ∞ and c_2 > 0 such that ||f - f_N|| ≤ C_1 σ_{c_2 N}(f). Sometimes referred to as instance optimality.

27 Error Functionals
A functional e assigns to each node Δ ∈ T an error e(Δ) ≥ 0; total error E(T) := Σ_{Δ∈L(T)} e(Δ).
Subadditivity: for any node Δ ∈ T, if C(Δ) is the set of its children, then Σ_{Δ'∈C(Δ)} e(Δ') ≤ e(Δ).
Weak subadditivity: there exists C_0 ≥ 1 such that for any Δ ∈ T and any finite subtree T_Δ ⊂ T with root node Δ, Σ_{Δ'∈L(T_Δ)} e(Δ') ≤ C_0 e(Δ).

28 Greedy Strategy for Tree Approximation
Example: approximation in L2[0, 1] of a function f defined as a linear combination of scaled Haar functions, f(x) := A H_{Δ_0} + B Σ_{I∈I} H_I, where
- Δ_0 := [0, 2^{-M}], with M a huge constant
- I is the set of 2^{k-1} dyadic subintervals of [1/2, 1] with length 2^{-k}
- ||H_Δ||_{L2[0,1]} = 1 and A = B + ε with ε > 0 arbitrarily small (B > 0)
- e([0, 2^{-m}]) = A^2 for m ≤ M and e(I) = B^2 for I ∈ I
The greedy algorithm will first subdivide [1/2, 1] and its descendants until we obtain the set of intervals I. From then on it will subdivide [0, 2^{-m}] for m ≤ M (the ancestors of Δ_0). After N := 2^k + M - 2 subdivisions, the greedy algorithm will give the tree T with error E(T) = ||f||^2_{L2} = A^2 + 2^{k-1} B^2. If we had instead subdivided [1/2, 1] and its descendants to dyadic level k + 1, we would have used just n := 2^{k+1} subdivisions and gotten an error σ_n(f) = A^2. Thus σ_{2^{k+1}} = A^2 < A^2 + 2^{k-1} B^2 = E(T), while #T = 2^k + M - 2 >> 2^{k+1}.

29 Modified Greedy Strategy for Tree Approximation
- the standard greedy strategy does not work for tree approximation
- a modification is needed that changes the decision functional
- design modified error functionals that appropriately penalize the depth of subdivision
- use the greedy strategy based on these modified error functionals
- use dynamic instead of static decision functionals
- extensions of the algorithms to high dimensions: sparse occupancy trees

30 Basic Idea of the Tree Algorithm [B., DeVore 2004]
For all nodes Δ of the initial tree T_0 we define ẽ(Δ) = e(Δ). Then, for each child Δ_j, j = 1, ..., m(Δ), of Δ we define
ẽ(Δ_j) := q(Δ) := ( Σ_{j=1}^{m(Δ)} e(Δ_j) / ( e(Δ) + ẽ(Δ) ) ) ẽ(Δ).
Note that ẽ is constant on the children of Δ. Define the penalty terms p(Δ_j) := e(Δ_j) / ẽ(Δ_j).
The main property of ẽ: Σ_{j=1}^{m(Δ)} p(Δ_j) = p(Δ) + 1.

31 Adaptive Algorithm on Binary Trees [2007]
Modified error ẽ:
- for Δ in the initial partition subtree T_0 ⊂ T: ẽ(Δ) := e(Δ)
- for each child Δ_j of Δ: ẽ(Δ_j) := ( 1/e(Δ_j) + 1/ẽ(Δ) )^{-1}
Adaptive tree algorithm (creates a sequence of trees T_j, j = 1, 2, ...):
- start with T_0
- subdivide the leaves Δ ∈ L(T_{j-1}) with the largest ẽ(Δ) to produce T_j
To eliminate sorting, we can consider all ẽ(Δ) with 2^l ≤ ẽ(Δ) < 2^{l+1} as if they were equally large.
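A minimal sketch of this adaptive tree algorithm on dyadic intervals of [0, 1] (the sampling-based local error e and the helper name `adaptive_tree` are assumptions made for the illustration; the harmonic-mean update of ẽ follows the rule above):

```python
import numpy as np

def adaptive_tree(f, n_subdivisions, m=1024):
    """Adaptive tree algorithm on dyadic intervals of [0, 1].

    e(I): local squared L2 error of the best constant on I (on a grid).
    etilde: modified error with etilde(child) = 1/(1/e(child) + 1/etilde(parent)),
    which penalizes deep subdivision.  At each step the leaf with the
    largest etilde is split (greedy strategy on the modified functional)."""
    x = np.linspace(0.0, 1.0, m, endpoint=False)
    fx = f(x)

    def e(a, b):                  # squared local error of the best constant
        vals = fx[(x >= a) & (x < b)]
        return 0.0 if vals.size == 0 else np.sum((vals - vals.mean())**2) / m

    root = (0.0, 1.0)
    leaves = {root: e(*root)}     # leaf -> etilde (equals e at the root)
    for _ in range(n_subdivisions):
        (a, b), et = max(leaves.items(), key=lambda kv: kv[1])
        del leaves[(a, b)]
        mid = 0.5 * (a + b)
        for child in ((a, mid), (mid, b)):
            ec = e(*child)
            # harmonic-mean update of the modified error
            leaves[child] = 0.0 if ec == 0.0 or et == 0.0 else 1.0 / (1.0/ec + 1.0/et)
    total_error = sum(e(a, b) for (a, b) in leaves)
    return leaves, total_error

leaves, E = adaptive_tree(lambda t: np.sqrt(t), 20)
```

Each subdivision replaces one leaf by two, so after 20 steps the partition has 21 cells, concentrated where the local error of sqrt is largest (near 0).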

32 Near-Best Approximation on Binary Trees
Best approximation: σ_N(f) := inf_{P : #P ≤ N} ||f - f_P||.
Assume that the error functionals e(Δ) ≥ 0 satisfy the subadditivity condition. Then the adaptive tree algorithm producing a tree T_N, corresponding to a partition P_N with N ≥ n elements, satisfies
E(T_N) = ||f - f_N|| ≤ ( N / (N - n + 1) ) σ_n(f).
This gives the constant C_1 = N / ((1 - c_2)N + 1) in the general estimate ||f - f_N|| ≤ C_1 σ_{c_2 N}(f), for any chosen 0 < c_2 ≤ 1.

33 Parameter Dependent PDEs
- input parameters µ ∈ D ⊂ ℝ^p
- differential operator A_µ : H → H'
- functional l ∈ H'
- solution u_µ of A_µ u_µ = l
- quantity of interest I(µ) = l(u_µ); find the optimal µ ∈ D
Example: ⟨A_µ u, v⟩ := a_µ(u, v) = Σ_{j=1}^p θ_j(µ_j) ∫_{Ω_j} ∇u · ∇v dx, with the norm ||·||_H^2 induced by a_µ(·, ·);
uniform ellipticity: c_1 ||v||^2 ≤ a_µ(v, v) ≤ C_1 ||v||^2 for all v ∈ H, µ ∈ D.
[Y. Maday, T. Patera, G. Turinici, ...]

34 Reduced Basis Method
Exploit sparsity: the solution manifold is a compact set F := { u_µ = A_µ^{-1} l : µ ∈ D } ⊂ H.
Offline: compute f_0, f_1, ..., f_{n-1} such that for F_n := span{f_0, ..., f_{n-1}}: σ_n := max_{f∈F} ||f - P_n f|| ≤ [tolerance].
Online: for each µ-query solve a small Galerkin problem in F_n: a_µ(u^n_µ, f_j) = l(f_j), j = 0, ..., n - 1.

35 Example
a_µ(u, v) = Σ_{j=1}^3 µ_j ∫_{Ω_j} ∇u · ∇v dx + ∫_{Ω_4} ∇u · ∇v dx, l(v) = ∫_{Γ_C} v ds.
(figures: computed solutions and the corresponding values of l(u) for µ_1 = 0.1, µ_2 = 0.3, µ_3 = 0.8 and for µ_1 = 0.4, µ_2 = 0.4, µ_3 = 7.1)

36 Reduced Basis Method (continued)
Offline: f_0, f_1, ..., f_{n-1} such that for F_n := span{f_0, ..., f_{n-1}}: σ_n := max_{f∈F} ||f - P_n f|| ≤ [tolerance].
Online: for each µ solve a small Galerkin problem in F_n: a_µ(u^n_µ, f_j) = l(f_j), j = 0, ..., n - 1.
Then l(u_µ) - l(u^n_µ) = a_µ(u_µ - u^n_µ, u_µ - u^n_µ) ≤ C_1 σ_n(F)^2 ≤ C_1 [tolerance]^2, so u^n_µ can be used for solving the optimization problem.

37 Basis Construction: the Greedy Approach
Ideal algorithm (pure greedy):
- f_0 := argmax_{f∈F} ||f||, F_1 := span{f_0}, σ_1(F) := max_{f∈F} ||f - P_1 f||
- given F_n := span{f_0, ..., f_{n-1}} and σ_n(F) := max_{f∈F} ||f - P_n f||, set f_n := argmax_{f∈F} ||f - P_n f||
A feasible variant (weak greedy algorithm): use a computable surrogate R_n(f) for which c_2 R_n(f) ≤ ||f - P_n f|| ≤ C_2 R_n(f), and choose f_n with ||f_n - P_n f_n|| ≥ γ σ_n(F), e.g. f_n := argmax_{f∈F} R_n(f) with γ = c_2 / C_2.
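On a finite training set of snapshots the greedy basis construction reduces to a few lines of linear algebra. In the sketch below (the helper name `rb_greedy` and the synthetic snapshot matrix are assumptions) the projection error is computed exactly in place of a surrogate R_n(f), i.e. the pure greedy case γ = 1:

```python
import numpy as np

def rb_greedy(F, n):
    """Greedy reduced-basis construction on a finite training set.

    F: (d, K) array of snapshots u_mu for K parameter samples.
    Here ||f - P_n f|| is computed exactly; in practice it is replaced
    by a cheap surrogate R_n(f), giving the weak greedy algorithm."""
    Q = np.zeros((F.shape[0], 0))        # orthonormal basis of F_n
    chosen, sigmas = [], []
    for _ in range(n):
        R = F - Q @ (Q.T @ F)            # residuals f - P_n f, per column
        errs = np.linalg.norm(R, axis=0)
        k = int(np.argmax(errs))         # f_n := argmax ||f - P_n f||
        sigmas.append(float(errs[k]))
        chosen.append(k)
        Q = np.column_stack([Q, R[:, k] / errs[k]])   # extend the basis
    return chosen, sigmas

# synthetic "solution manifold": smooth dependence on one parameter mu
mus = np.linspace(0, 1, 50)
F = np.array([[np.sin((j + 1) * m) / (j + 1)**2 for m in mus] for j in range(20)])
chosen, sigmas = rb_greedy(F, 5)
```

Since F_{n+1} ⊃ F_n, the computed sequence σ_n is non-increasing, matching the monotonicity of the exact greedy errors.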

38 Kolmogorov Widths
d_n(F) := inf_{dim(Y)=n} sup_{f∈F} dist_H(f, Y) ≤ σ_n(F).
Can one bound σ_n(F) in terms of d_n(F)? Are optimal subspaces spanned by elements of F?
d̃_n(F) := inf_{Y = span of n elements of F} sup_{f∈F} dist_H(f, Y) ≤ σ_n(F).
Theorem: for any compact set F we have d̃_n(F) ≤ (n + 1) d_n(F); given any ε > 0 there is a set F such that d̃_n(F) ≥ (n - 1 - ε) d_n(F).
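For a finite training set a computable proxy for d_n(F) is the worst-case error of the POD (SVD) subspace; since the SVD minimizes the mean-square rather than the worst-case error over F, this only gives an upper bound on d_n(F). A sketch (the helper name `pod_width_bound` and the snapshot matrix are made up for the example):

```python
import numpy as np

def pod_width_bound(F, n):
    """Upper bound on the Kolmogorov width d_n(F) of a finite set.

    Uses the leading n left singular vectors of the snapshot matrix F
    (a POD subspace).  d_n(F) minimizes the worst-case error while the
    SVD minimizes the mean-square error, so the returned value is only
    an upper bound on d_n(F): a cheap computable proxy."""
    U, _, _ = np.linalg.svd(F, full_matrices=False)
    Un = U[:, :n]
    R = F - Un @ (Un.T @ F)          # errors f - P_Y f for Y = span(Un)
    return float(np.max(np.linalg.norm(R, axis=0)))

mus = np.linspace(0, 1, 40)
F = np.array([[np.cos((j + 1) * m) / 2**j for m in mus] for j in range(12)])
bounds = [pod_width_bound(F, n) for n in range(1, 6)]
```

Because the POD subspaces are nested, the bounds are non-increasing in n; comparing them with the greedy errors σ_n illustrates how close the greedy subspaces come to optimal ones.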

39 Kolmogorov Widths vs Greedy Basis (figures: comparison of the optimal subspaces and the greedy subspaces for n = 1 and n = 2)

40-41 Results
Pure greedy (γ = 1) [Buffa, Maday, Patera, Prudhomme, Turinici]: σ_n(F) ≤ C n 2^n d_n(F).
Slight improvement: σ_n(F) ≤ (2^{n+1}/√3) d_n(F).
For any n > 0 and any ε > 0 there exists a set F = F_n such that for the pure greedy algorithm σ_n(F) ≥ (1 - ε) 2^n d_n(F).
What if 2^n d_n(F) → 0? What if γ < 1?

42-43 Polynomial Convergence Rates
Theorem [Binev, Cohen, Dahmen, DeVore, Petrova, Wojtaszczyk]: Suppose that d_0(F) ≤ M. Then
d_n(F) ≤ M n^{-α} for n > 0 implies σ_n(F) ≤ C M n^{-α} for n > 0,
with C := 4^α q^{α+1/2} and q := ⌈2^{α+1} γ^{-1}⌉^2.
Proved using the Flatness Lemma: let 0 < θ < 1 and assume that for q := ⌈2 θ^{-1} γ^{-1}⌉^2 and some integers m and n we have σ_{n+qm}(F) ≥ θ σ_n(F). Then σ_n(F) ≤ q^{1/2} d_m(F).

44-45 Idea of the Proof
Flatness Lemma: σ_{n+qm}(F) ≥ θ σ_n(F) implies σ_n(F) ≤ q^{1/2} d_m(F).
To show σ_n(F) ≤ C M n^{-α} for n ≥ N_0: assume the bound fails for some N > N_0; then the sequence σ_n must be flat for some m ≤ n; apply the flatness lemma to obtain a contradiction with d_n(F) ≤ M n^{-α}. Hence d_n(F) ≤ M n^{-α} for n > 0 implies σ_n(F) ≤ C M n^{-α} for n > 0.

46 Sub-Exponential Rates
Finer resolutions between n^{-α} and e^{-an}.
Theorem [DeVore, Petrova, Wojtaszczyk]: For any compact set F and n ≥ 1, we have
σ_n(F) ≤ (√2/γ) min_{1 ≤ m < n} d_m(F)^{(n-m)/n}.
In particular, σ_{2n}(F) ≤ (√2/γ) d_n(F)^{1/2}, and
d_n(F) ≤ C_0 e^{-a n^α} for n ≥ 1 implies σ_n(F) ≤ (√(2 C_0)/γ) e^{-c_1 a n^α} for n ≥ 1, with c_1 = 2^{-1-2α}.

47 Robustness
In reality the f_j cannot be computed exactly: we receive f̂_j (which might not be in F) with ||f_j - f̂_j|| ≤ ε, and instead of F_n we use F̂_n := span{f̂_0, ..., f̂_{n-1}}.
Performance of the noisy weak greedy algorithm: σ̂_n(F) := sup_{f∈F} dist_H(f, F̂_n).
Theorem [polynomial rates, n > 0]: d_n(F) ≤ M n^{-α} implies σ̂_n(F) ≤ C max{M n^{-α}, ε} with C = C(α, γ).
A similar result holds for sub-exponential rates.

48 Thank you!


More information

Iterative Methods for Linear Systems

Iterative Methods for Linear Systems Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should

More information

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Roman Andreev ETH ZÜRICH / 29 JAN 29 TOC of the Talk Motivation & Set-Up Model Problem Stochastic Galerkin FEM Conclusions & Outlook Motivation

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

On Riesz-Fischer sequences and lower frame bounds

On Riesz-Fischer sequences and lower frame bounds On Riesz-Fischer sequences and lower frame bounds P. Casazza, O. Christensen, S. Li, A. Lindner Abstract We investigate the consequences of the lower frame condition and the lower Riesz basis condition

More information

Approximation numbers of Sobolev embeddings - Sharp constants and tractability

Approximation numbers of Sobolev embeddings - Sharp constants and tractability Approximation numbers of Sobolev embeddings - Sharp constants and tractability Thomas Kühn Universität Leipzig, Germany Workshop Uniform Distribution Theory and Applications Oberwolfach, 29 September -

More information

a 1 a 2 a 3 a 4 v i c i c(a 1, a 3 ) = 3

a 1 a 2 a 3 a 4 v i c i c(a 1, a 3 ) = 3 AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 17 March 30th 1 Overview In the previous lecture, we saw examples of combinatorial problems: the Maximal Matching problem and the Minimum

More information

Algorithms for sparse analysis Lecture I: Background on sparse approximation

Algorithms for sparse analysis Lecture I: Background on sparse approximation Algorithms for sparse analysis Lecture I: Background on sparse approximation Anna C. Gilbert Department of Mathematics University of Michigan Tutorial on sparse approximations and algorithms Compress data

More information

Riesz Bases of Wavelets and Applications to Numerical Solutions of Elliptic Equations

Riesz Bases of Wavelets and Applications to Numerical Solutions of Elliptic Equations Riesz Bases of Wavelets and Applications to Numerical Solutions of Elliptic Equations Rong-Qing Jia and Wei Zhao Department of Mathematical and Statistical Sciences University of Alberta Edmonton, Canada

More information

An Introduction to Wavelets and some Applications

An Introduction to Wavelets and some Applications An Introduction to Wavelets and some Applications Milan, May 2003 Anestis Antoniadis Laboratoire IMAG-LMC University Joseph Fourier Grenoble, France An Introduction to Wavelets and some Applications p.1/54

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

CONDITIONAL QUASI-GREEDY BASES IN HILBERT AND BANACH SPACES

CONDITIONAL QUASI-GREEDY BASES IN HILBERT AND BANACH SPACES CONDITIONAL QUASI-GREEDY BASES IN HILBERT AND BANACH SPACES G. GARRIGÓS AND P. WOJTASZCZYK Abstract. For quasi-greedy bases B in Hilbert spaces, we give an improved bound of the associated conditionality

More information

Deep Learning: Approximation of Functions by Composition

Deep Learning: Approximation of Functions by Composition Deep Learning: Approximation of Functions by Composition Zuowei Shen Department of Mathematics National University of Singapore Outline 1 A brief introduction of approximation theory 2 Deep learning: approximation

More information

Algorithms and Error Bounds for Multivariate Piecewise Constant Approximation

Algorithms and Error Bounds for Multivariate Piecewise Constant Approximation Algorithms and Error Bounds for Multivariate Piecewise Constant Approximation Oleg Davydov Department of Mathematics and Statistics, University of Strathclyde, 26 Richmond Street, Glasgow G1 1XH, Scotland,

More information

Besov regularity for the solution of the Stokes system in polyhedral cones

Besov regularity for the solution of the Stokes system in polyhedral cones the Stokes system Department of Mathematics and Computer Science Philipps-University Marburg Summer School New Trends and Directions in Harmonic Analysis, Fractional Operator Theory, and Image Analysis

More information

Greediness of higher rank Haar wavelet bases in L p w(r) spaces

Greediness of higher rank Haar wavelet bases in L p w(r) spaces Stud. Univ. Babeş-Bolyai Math. 59(2014), No. 2, 213 219 Greediness of higher rank Haar avelet bases in L (R) saces Ekaterine Kaanadze Abstract. We rove that higher rank Haar avelet systems are greedy in

More information

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm Chapter 13 Radon Measures Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm (13.1) f = sup x X f(x). We want to identify

More information

CHAPTER 6. Representations of compact groups

CHAPTER 6. Representations of compact groups CHAPTER 6 Representations of compact groups Throughout this chapter, denotes a compact group. 6.1. Examples of compact groups A standard theorem in elementary analysis says that a subset of C m (m a positive

More information

Isogeometric Analysis:

Isogeometric Analysis: Isogeometric Analysis: some approximation estimates for NURBS L. Beirao da Veiga, A. Buffa, Judith Rivas, G. Sangalli Euskadi-Kyushu 2011 Workshop on Applied Mathematics BCAM, March t0th, 2011 Outline

More information

Nonparametric estimation using wavelet methods. Dominique Picard. Laboratoire Probabilités et Modèles Aléatoires Université Paris VII

Nonparametric estimation using wavelet methods. Dominique Picard. Laboratoire Probabilités et Modèles Aléatoires Université Paris VII Nonparametric estimation using wavelet methods Dominique Picard Laboratoire Probabilités et Modèles Aléatoires Université Paris VII http ://www.proba.jussieu.fr/mathdoc/preprints/index.html 1 Nonparametric

More information

A Primal-Dual Weak Galerkin Finite Element Method for Second Order Elliptic Equations in Non-Divergence Form

A Primal-Dual Weak Galerkin Finite Element Method for Second Order Elliptic Equations in Non-Divergence Form A Primal-Dual Weak Galerkin Finite Element Method for Second Order Elliptic Equations in Non-Divergence Form Chunmei Wang Visiting Assistant Professor School of Mathematics Georgia Institute of Technology

More information

Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs

Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs Tensor Sparsity and Near-Minimal Rank Approximation for High-Dimensional PDEs Wolfgang Dahmen, RWTH Aachen Collaborators: Markus Bachmayr, Ron DeVore, Lars Grasedyck, Endre Süli Paris, Oct.11, 2013 W.

More information

Construction of wavelets. Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam

Construction of wavelets. Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam Construction of wavelets Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam Contents Stability of biorthogonal wavelets. Examples on IR, (0, 1), and (0, 1) n. General domains

More information

MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics

MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Technical Report Nr. 05.2013 February 2013 (NEW 10.2013 A weighted empirical

More information

Boundary Value Problems and Iterative Methods for Linear Systems

Boundary Value Problems and Iterative Methods for Linear Systems Boundary Value Problems and Iterative Methods for Linear Systems 1. Equilibrium Problems 1.1. Abstract setting We want to find a displacement u V. Here V is a complete vector space with a norm v V. In

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Greedy controllability of finite dimensional linear systems

Greedy controllability of finite dimensional linear systems Greedy controllability of finite dimensional linear systems Martin Lazar a Enrique Zuazua b a University of Dubrovnik, Ćira Carića 4, 2 Dubrovnik, Croatia b Departamento de Matemáticas, Universidad Autónoma

More information

Interpolation via weighted l 1 -minimization

Interpolation via weighted l 1 -minimization Interpolation via weighted l 1 -minimization Holger Rauhut RWTH Aachen University Lehrstuhl C für Mathematik (Analysis) Matheon Workshop Compressive Sensing and Its Applications TU Berlin, December 11,

More information

Data Assimilation and Sampling in Banach spaces

Data Assimilation and Sampling in Banach spaces Data Assimilation and Sampling in Banach spaces Ronald DeVore, Guergana Petrova, and Przemyslaw Wojtaszczyk August 2, 2016 Abstract This paper studies the problem of approximating a function f in a Banach

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 P and NP P: The family of problems that can be solved quickly in polynomial time.

More information

Mixed Hybrid Finite Element Method: an introduction

Mixed Hybrid Finite Element Method: an introduction Mixed Hybrid Finite Element Method: an introduction First lecture Annamaria Mazzia Dipartimento di Metodi e Modelli Matematici per le Scienze Applicate Università di Padova mazzia@dmsa.unipd.it Scuola

More information

Course Notes Bilbao 2013 Numerical Methods in High Dimensions Lecture 1: The Role of Approximation Theory

Course Notes Bilbao 2013 Numerical Methods in High Dimensions Lecture 1: The Role of Approximation Theory Course Notes Bilbao 2013 Numerical Methods in High Dimensions Lecture 1: The Role of Approximation Theory Ronald DeVore April 11, 2013 Abstract In this first lecture, we give the setting of classical approximation

More information

MULTIVARIATE HISTOGRAMS WITH DATA-DEPENDENT PARTITIONS

MULTIVARIATE HISTOGRAMS WITH DATA-DEPENDENT PARTITIONS Statistica Sinica 19 (2009), 159-176 MULTIVARIATE HISTOGRAMS WITH DATA-DEPENDENT PARTITIONS Jussi Klemelä University of Oulu Abstract: We consider estimation of multivariate densities with histograms which

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

Orthogonality of hat functions in Sobolev spaces

Orthogonality of hat functions in Sobolev spaces 1 Orthogonality of hat functions in Sobolev spaces Ulrich Reif Technische Universität Darmstadt A Strobl, September 18, 27 2 3 Outline: Recap: quasi interpolation Recap: orthogonality of uniform B-splines

More information

MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design

MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications Class 19: Data Representation by Design What is data representation? Let X be a data-space X M (M) F (M) X A data representation

More information

Mathematical Methods for Computer Science

Mathematical Methods for Computer Science Mathematical Methods for Computer Science Computer Laboratory Computer Science Tripos, Part IB Michaelmas Term 2016/17 Professor J. Daugman Exercise problems Fourier and related methods 15 JJ Thomson Avenue

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

Convergence and Rate of Convergence of Approximate Greedy-Type Algorithms

Convergence and Rate of Convergence of Approximate Greedy-Type Algorithms University of South Carolina Scholar Commons Theses and Dissertations 2017 Convergence and Rate of Convergence of Approximate Greedy-Type Algorithms Anton Dereventsov University of South Carolina Follow

More information

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem

More information

Wavelets and multiresolution representations. Time meets frequency

Wavelets and multiresolution representations. Time meets frequency Wavelets and multiresolution representations Time meets frequency Time-Frequency resolution Depends on the time-frequency spread of the wavelet atoms Assuming that ψ is centred in t=0 Signal domain + t

More information

Space-Time Adaptive Wavelet Methods for Parabolic Evolution Problems

Space-Time Adaptive Wavelet Methods for Parabolic Evolution Problems Space-Time Adaptive Wavelet Methods for Parabolic Evolution Problems Rob Stevenson Korteweg-de Vries Institute for Mathematics University of Amsterdam joint work with Christoph Schwab (ETH, Zürich), Nabi

More information

Linear reconstructions and the analysis of the stable sampling rate

Linear reconstructions and the analysis of the stable sampling rate Preprint submitted to SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING DVI file produced on Linear reconstructions and the analysis of the stable sampling rate Laura Terhaar Department of Applied Mathematics

More information

Model reduction for multiscale problems

Model reduction for multiscale problems Model reduction for multiscale problems Mario Ohlberger Dec. 12-16, 2011 RICAM, Linz , > Outline Motivation: Multi-Scale and Multi-Physics Problems Model Reduction: The Reduced Basis Approach A new Reduced

More information

Functional Analysis (2006) Homework assignment 2

Functional Analysis (2006) Homework assignment 2 Functional Analysis (26) Homework assignment 2 All students should solve the following problems: 1. Define T : C[, 1] C[, 1] by (T x)(t) = t x(s) ds. Prove that this is a bounded linear operator, and compute

More information

ECE 901 Lecture 16: Wavelet Approximation Theory

ECE 901 Lecture 16: Wavelet Approximation Theory ECE 91 Lecture 16: Wavelet Approximation Theory R. Nowak 5/17/29 1 Introduction In Lecture 4 and 15, we investigated the problem of denoising a smooth signal in additive white noise. In Lecture 4, we considered

More information

A subspace of l 2 (X) without the approximation property

A subspace of l 2 (X) without the approximation property A subspace of l (X) without the approximation property by Christopher Chlebovec A dissertation submitted in partial fulfillment of the requirements for the degree of Master of Science (Mathematics) Department

More information

N-Widths and ε-dimensions for high-dimensional approximations

N-Widths and ε-dimensions for high-dimensional approximations N-Widths and ε-dimensions for high-dimensional approximations Dinh Dũng a, Tino Ullrich b a Vietnam National University, Hanoi, Information Technology Institute 144 Xuan Thuy, Hanoi, Vietnam dinhzung@gmail.com

More information

Game Theory and its Applications to Networks - Part I: Strict Competition

Game Theory and its Applications to Networks - Part I: Strict Competition Game Theory and its Applications to Networks - Part I: Strict Competition Corinne Touati Master ENS Lyon, Fall 200 What is Game Theory and what is it for? Definition (Roger Myerson, Game Theory, Analysis

More information

Quarkonial frames of wavelet type - Stability, approximation and compression properties

Quarkonial frames of wavelet type - Stability, approximation and compression properties Quarkonial frames of wavelet type - Stability, approximation and compression properties Stephan Dahlke 1 Peter Oswald 2 Thorsten Raasch 3 ESI Workshop Wavelet methods in scientific computing Vienna, November

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

DISCRETE DIFFERENTIAL OPERATORS IN MULTIDIMENSIONAL HAAR WAVELET SPACES

DISCRETE DIFFERENTIAL OPERATORS IN MULTIDIMENSIONAL HAAR WAVELET SPACES IJMMS 2004:44 2347 2355 PII. S0161171204307234 http://ijmms.hindawi.com Hindawi Publishing Corp. DISCRETE DIFFERENTIAL OPERATORS IN MULTIDIMENSIONAL HAAR WAVELET SPACES CARLO CATTANI and LUIS M. SÁNCHEZ

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

TD 1: Hilbert Spaces and Applications

TD 1: Hilbert Spaces and Applications Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.

More information

Mathematical Methods in Quantum Mechanics With Applications to Schrödinger Operators 2nd edition

Mathematical Methods in Quantum Mechanics With Applications to Schrödinger Operators 2nd edition Errata G. Teschl, Mathematical Methods in Quantum Mechanics With Applications to Schrödinger Operators nd edition Graduate Studies in Mathematics, Vol. 157 American Mathematical Society, Providence, Rhode

More information

Near best tree approximation

Near best tree approximation Advances in Computational Mathematics 16: 357 373, 2002. 2002 Kluwer Academic Publishers. Printed in the Netherlands. Near best tree approximation R.G. Baraniuk a,,r.a.devore b,g.kyriazis c and X.M. Yu

More information

Math Models of OR: Branch-and-Bound

Math Models of OR: Branch-and-Bound Math Models of OR: Branch-and-Bound John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA November 2018 Mitchell Branch-and-Bound 1 / 15 Branch-and-Bound Outline 1 Branch-and-Bound

More information

Strengthened Sobolev inequalities for a random subspace of functions

Strengthened Sobolev inequalities for a random subspace of functions Strengthened Sobolev inequalities for a random subspace of functions Rachel Ward University of Texas at Austin April 2013 2 Discrete Sobolev inequalities Proposition (Sobolev inequality for discrete images)

More information

Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines

Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the

More information

Chapter 6. Integration. 1. Integrals of Nonnegative Functions. a j µ(e j ) (ca j )µ(e j ) = c X. and ψ =

Chapter 6. Integration. 1. Integrals of Nonnegative Functions. a j µ(e j ) (ca j )µ(e j ) = c X. and ψ = Chapter 6. Integration 1. Integrals of Nonnegative Functions Let (, S, µ) be a measure space. We denote by L + the set of all measurable functions from to [0, ]. Let φ be a simple function in L +. Suppose

More information