Numerical Methods for Elliptic Partial Differential Equations with Random Coefficients


MSC MATHEMATICS MASTER THESIS

Numerical Methods for Elliptic Partial Differential Equations with Random Coefficients

Author: Chiara Pizzigoni
Supervisors: dr. S.G. Cox, prof. dr. R.P. Stevenson
Examination date: March 9, 2017

Korteweg-de Vries Institute for Mathematics

Abstract

This thesis analyses the stochastic collocation method for approximating the solution of elliptic partial differential equations with random coefficients. The method consists of a finite element approximation in the spatial domain and a collocation at the zeros of suitable tensor product orthogonal polynomials in the probability space, and naturally leads to the solution of uncoupled deterministic problems. The computational cost of the method depends on the choice of the collocation points, and thus we compare a few possible constructions. Although the random fields describing the coefficients of the problem are in general infinite-dimensional, an approximation with certain optimality properties is obtained by truncating the Karhunen-Loève expansion of these random fields. We estimate the convergence rates of the method, depending on the regularity of the random coefficients. In particular we prove exponential convergence in probability space. Numerical examples illustrate the theoretical results.

Title: Numerical Methods for Elliptic Partial Differential Equations with Random Coefficients
Author: Chiara Pizzigoni, chiarapizzigoni9@gmail.com
Supervisors: dr. S.G. Cox, prof. dr. R.P. Stevenson
Examination date: March 9, 2017

Korteweg-de Vries Institute for Mathematics
University of Amsterdam
Science Park, 1098 XG Amsterdam

Contents

Acknowledgements v
Introduction vii
1 Preliminaries
  1.1 The Karhunen-Loève Expansion
    1.1.1 Existence of the Karhunen-Loève Expansion
    1.1.2 Truncation of the Karhunen-Loève Expansion
  1.2 The Karhunen-Loève Expansion of a Random Field
    1.2.1 More on Convergence of the Karhunen-Loève Expansion
2 Problem Setting 15
  2.1 Strong and Weak Formulations
  2.2 Finite-Dimensional Stochastic Space
3 The Stochastic Collocation Method
  3.1 Full Tensor Grid of Collocation Points
  3.2 Sparse Tensor Grid of Collocation Points
  3.3 Anisotropic Version
4 Convergence of the Method
  4.1 Regularity Results
  4.2 Convergence Analysis
    4.2.1 Error of the Truncation
    Finite Element Error
    Collocation Error - Full Tensor Method
    Convergence Result
    Collocation Error - Sparse Tensor Method
    Convergence of Moments
    Anisotropic Sparse Method - Selection of Weights α
5 Numerical Implementation
  Example
  Conclusions
Summary 61
Bibliography 63


Acknowledgements

I am more than grateful to Sonja and Rob. Thank you for supporting me and not only supervising. In Italian the word support is translated as supporto, and if you change only one letter, getting sopporto, the meaning becomes the slightly more negative tolerate. So I really have to thank you for both supporting and standing me. Thank you for the time, the energy and the attention that both of you have dedicated to me. When the mathematics is not the biggest problem, I can always rely on them. Thank you Sara for enlightening me when I can see nothing but darkness. Thank you Marco for always being next to me even if you are far away. Thanks to my soul mates, Francesca, Linda and Alessia, loyal mates and beautiful souls. Silvia, thank you for being simply the best person standing by my side and for giving your all to me. I personally think that behind great men and women there are always great parents. So thank you mum and dad for being great even if I am not. Grazie!


Introduction

Elliptic second order partial differential equations are well suited to describe a wide variety of phenomena which present a static behaviour, e.g. the stationary solution to a diffusion problem. An entire branch of research is dedicated to the theoretical analysis and numerical implementation of methods which allow one to approximate the exact solution of these equations. On the other hand, one should always keep in mind that in practice these models are approximations of the physical system and they do not describe the given problem exactly. Therefore we aim to study equations which account as much as possible for the uncertainties arising naturally from the mathematical observation of the real world. Sources of such uncertainties can be, for example, errors in the measurements, the intrinsic nature of the system, partially known data sets or an overall knowledge extrapolated from only a few spatial locations. The mathematical model describing a phenomenon can be thought of as an input-output machine which takes some functions and returns the solution of the model. Hence we expect the inaccuracies in the inputs to easily propagate to the output. In order to include in our mathematical examination all these moderately predictable factors, we model them as noises following some probability distribution. Thus we turn our attention to partial differential equations where the coefficients are random fields depending on these uncertain parameters. Namely, instead of focusing on elliptic equations, defined on a spatial domain $D \subset \mathbb{R}^d$, of the form
\[ -\nabla \cdot (a(x)\nabla u(x)) = f(x), \qquad x \in D, \]
we consider the following model:
\[ -\nabla \cdot (a(\omega,x)\nabla u(\omega,x)) = f(\omega,x), \qquad (\omega,x) \in \Omega \times D, \tag{0.0.1} \]
where $\Omega$ is the set of possible outcomes. Again the main difference lies in the fact that the functions of the second problem have a stochastic representation emphasizing the inexactness of the coefficients, of which we suppose to know the law.
In our discussion we consider a particular parametrization of the random coefficients given by the Karhunen-Loève expansion, a linear combination of infinitely many uncorrelated random variables. This choice is motivated because it guarantees some optimality results, as has been proved in [14]. We underline the fact that in general this decomposition is infinite-dimensional. The computational necessity to deal with finite-dimensional objects leads to the need of truncating the Karhunen-Loève expansion and the quantification of the consequent error. To approximate numerically the solution of problem (0.0.1), appended with some suitable boundary conditions, we will use the stochastic collocation method, which employs

standard deterministic techniques in the spatial domain and tensor product polynomial approximation in the random domain. This method is extensively analysed in [11]. Unfortunately tensor product spaces suffer from the so-called curse of dimensionality, as the dimension of the polynomial space grows exponentially fast in the number of terms that we keep in the Karhunen-Loève truncation. In case this number becomes quite large, we may switch to sparse tensor product spaces, which highly reduce the computational complexity of the method while preserving good effectiveness (see [1]). Concretely, the procedure consists in approximating, using the Galerkin finite element scheme, the solutions of problems which are deterministic in the domain once we evaluate the probabilistic variables at several collocation points. The final approximation is then recovered by interpolating the semi-discrete finite element approximations. This approach differs from the widely used Monte Carlo method in the selection of the evaluation points. The latter adopts a random pick subject to a given distribution. By applying the stochastic collocation method instead, the points are chosen as the roots of (possibly sparse) tensor product polynomials orthogonal with respect to the joint probability density of the random variables appearing in the Karhunen-Loève truncation. This preference benefits from the nice properties of the zeros of orthogonal polynomials (see [15], Chapter III). While keeping the advantage of solving uncoupled deterministic problems as in the Monte Carlo approach, the stochastic collocation achieves a faster convergence rate, as we will prove in Chapter 4. The main goals of this work are to provide a rigorous investigation of the existence of the Karhunen-Loève parametrization and how its truncation influences our analysis, to describe the numerical techniques to solve a stochastic model problem, and to estimate in detail the errors arising from the subsequent approximations that we introduce.
The outline of the thesis is as follows: Chapter 1 focuses on the existence of the Karhunen-Loève expansion of an element in a tensor product Hilbert space and on the estimation of the error, measured in some suitable norm, which arises as a consequence of truncating the expansion. Afterwards this abstract setting is tuned to our specific situation, analysing the results in the special case of random fields. Chapter 2 describes the infinite-dimensional boundary value problem whose solution we want to approximate. In particular we show that such a solution indeed exists, and in order to give an approximation we turn to the corresponding finite-dimensional problem where all the random fields are represented by truncations of the Karhunen-Loève decomposition. In Chapter 3 we present the stochastic collocation method, both in its full and sparse versions, and briefly discuss the anisotropic (direction dependent) variant of the latter. The aim of Chapter 4 is to provide a rigorous error estimate of the entire method by splitting the error into three parts: the truncation error, the approximation error in the spatial domain and the approximation error in the stochastic space. In particular the latter is analysed both for the full and sparse versions of the method, comparing the convergence rates of the two. This is done under some mild regularity assumptions on the coefficients entering the problem, which we need to impose in order to ensure corresponding regularity of the solution in the random domain. The theoretical result concerning the collocation error, which is obtained by conducting a stepwise one-dimensional analysis, guarantees exponential convergence in the random space for both variants of the stochastic collocation method. Chapter 5 is devoted to some computational examples, including a numerical comparison with the Monte Carlo method.


1 Preliminaries

1.1 The Karhunen-Loève Expansion

Uncertainties in a physical model are very often modelled as random fields. The aim of this section is to prove that any infinite-dimensional random field can be represented by the so-called Karhunen-Loève expansion, which is, roughly speaking, an infinite linear combination of uncorrelated random variables. Moreover, the best (in terms of mean square error) finite-dimensional approximation of the random field is obtained by truncating this expansion. We will give precise bounds for this error. Following [14], we develop our construction in the general framework of Hilbert spaces. Throughout this chapter all Hilbert spaces are real and separable unless stated otherwise.

1.1.1 Existence of the Karhunen-Loève Expansion

Let $(H_1, \langle\cdot,\cdot\rangle_{H_1})$, $(H_2, \langle\cdot,\cdot\rangle_{H_2})$ and $(S, \langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces over $\mathbb{R}$. For $i \in \{1,2\}$, let $(e_n^{(i)})_{n \in I_i}$ and $(s_m)_{m \in J}$ be orthonormal bases of $H_i$ and $S$, respectively, with $I_i$ and $J$ countable sets. For $x \in H_i$ and $y \in S$ define the bilinear form $x \otimes y : H_i \times S \to \mathbb{R}$,
\[ [x \otimes y](a,b) := \langle x, a\rangle_{H_i}\, \langle y, b\rangle_S, \qquad (a,b) \in H_i \times S. \]
Let $E$ be the set of all finite linear combinations of such bilinear forms. We define the inner product $\langle\cdot,\cdot\rangle_{H_i \otimes S}$ on $E$ as follows:
\[ \langle x_1 \otimes y_1, x_2 \otimes y_2\rangle_{H_i \otimes S} := \langle x_1, x_2\rangle_{H_i}\, \langle y_1, y_2\rangle_S, \]
where $x_1, x_2 \in H_i$ and $y_1, y_2 \in S$.

Definition 1.1.1. The tensor product of $H_i$ and $S$, denoted by $H_i \otimes S$, is defined as the completion of $E$ under the inner product $\langle\cdot,\cdot\rangle_{H_i \otimes S}$.

It can be shown that $(e_n^{(i)} \otimes s_m)_{(n,m) \in I_i \times J}$ is an orthonormal basis for the space $H_i \otimes S$ [1, Section II.4]. Thus we can represent any element $f \in H_i \otimes S$ as
\[ f = \sum_{(n,m) \in I_i \times J} c_{n,m}\, e_n^{(i)} \otimes s_m, \]
where $c_{n,m} = \langle f, e_n^{(i)} \otimes s_m\rangle_{H_i \otimes S}$ and $\sum_{(n,m) \in I_i \times J} c_{n,m}^2 = \|f\|_{H_i \otimes S}^2$. Define $f_m := \sum_{n \in I_i} c_{n,m}\, e_n^{(i)} \in H_i$. It is useful for later use to observe that $\|f_m\|_{H_i}^2 = \sum_{n \in I_i} c_{n,m}^2$. We get the following expansion for any element $f \in H_i \otimes S$:
\[ f = \sum_{m \in J} f_m \otimes s_m. \]
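In finite dimensions the coefficient calculus above is plain matrix algebra, which makes it easy to sanity-check. A minimal sketch, under our own illustrative choices $H_i = \mathbb{R}^3$, $S = \mathbb{R}^4$ with the standard bases (none of these numbers come from the thesis):

```python
import numpy as np

# Finite-dimensional sketch: an element f of H_i (x) S is identified with its
# coefficient matrix C, where C[n, m] = <f, e_n (x) s_m>.
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 4))

# Parseval: ||f||^2_{H_i (x) S} is the sum of squared coefficients.
norm_sq = np.sum(C**2)

# f_m = sum_n C[n, m] e_n is the m-th column, so ||f_m||^2 = sum_n C[n, m]^2,
# and summing over m recovers the full norm.
col_norms_sq = np.sum(C**2, axis=0)
assert np.isclose(norm_sq, col_norms_sq.sum())
```

Summing the squared column norms to recover the full norm is exactly the identity $\|f\|^2_{H_i\otimes S} = \sum_m \|f_m\|^2_{H_i}$ used repeatedly in this section.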

We can prove the following:

Proposition 1.1.2. Let $f \in H_1 \otimes S$ and $g \in H_2 \otimes S$. The map $C_{\cdot,\cdot} : (H_1 \otimes S) \times (H_2 \otimes S) \to H_1 \otimes H_2$ defined as
\[ C_{fg} := \sum_{m \in J} f_m \otimes g_m \]
is bilinear, well-defined and independent of the choice of the orthonormal basis in $S$.

Proof. Bilinearity comes directly from the distributive property of the tensor product. We prove that the map is well-defined:
\[ \|C_{fg}\|_{H_1 \otimes H_2} \le \sum_{m \in J} \|f_m \otimes g_m\|_{H_1 \otimes H_2} = \sum_{m \in J} \|f_m\|_{H_1} \|g_m\|_{H_2} \le \Big(\sum_{m \in J} \|f_m\|_{H_1}^2\Big)^{1/2} \Big(\sum_{m \in J} \|g_m\|_{H_2}^2\Big)^{1/2} = \|f\|_{H_1 \otimes S}\, \|g\|_{H_2 \otimes S}, \]
where we have used the Cauchy-Schwarz inequality. Finally we show that the map is independent of the choice of the orthonormal basis in $S$. Let $\{s_m\}_{m \in J}$ be an orthonormal basis of $S$ and $\{s'_m\}_{m \in J}$ be a second one. Consequently for each $m \in J$ there exist $(\gamma_{n,m})_{n \in J} \subset \mathbb{R}$ such that $s'_m = \sum_{n \in J} \gamma_{n,m}\, s_n$. By the orthonormality condition it follows that $\sum_{n \in J} \gamma_{n,m}\gamma_{n,k} = \delta_{mk}$. Indeed,
\[ \delta_{mk} = \langle s'_m, s'_k\rangle_S = \Big\langle \sum_{n \in J} \gamma_{n,m}\, s_n,\ \sum_{l \in J} \gamma_{l,k}\, s_l \Big\rangle_S = \sum_{(n,l) \in J \times J} \gamma_{n,m}\gamma_{l,k}\, \delta_{nl} = \sum_{n \in J} \gamma_{n,m}\gamma_{n,k}; \]
since the change-of-basis matrix $(\gamma_{n,m})$ is orthogonal, its rows are orthonormal as well: $\sum_{k \in J} \gamma_{m,k}\gamma_{n,k} = \delta_{mn}$. Let $\{e_n^{(i)}\}_{n \in I_i}$ be an orthonormal basis of $H_i$. As $f \in H_1 \otimes S$ there exist $(\alpha_{n,m})_{(n,m) \in I_1 \times J} \subset \mathbb{R}$ such that
\[ f = \sum_{(n,m) \in I_1 \times J} \alpha_{n,m}\, e_n^{(1)} \otimes s_m. \]
Similarly, as $g \in H_2 \otimes S$ there exist $(\beta_{n,m})_{(n,m) \in I_2 \times J} \subset \mathbb{R}$ such that
\[ g = \sum_{(n,m) \in I_2 \times J} \beta_{n,m}\, e_n^{(2)} \otimes s_m. \]
Now we expand $f$ and $g$ with respect to the basis $\{s'_m\}_{m \in J}$, using $s_k = \sum_{m \in J} \gamma_{k,m}\, s'_m$:
\[ f = \sum_{(n,k) \in I_1 \times J} \alpha_{n,k}\, e_n^{(1)} \otimes s_k = \sum_{(n,m) \in I_1 \times J} \Big(\sum_{k \in J} \alpha_{n,k}\gamma_{k,m}\Big)\, e_n^{(1)} \otimes s'_m, \]
\[ g = \sum_{(n,k) \in I_2 \times J} \beta_{n,k}\, e_n^{(2)} \otimes s_k = \sum_{(n,m) \in I_2 \times J} \Big(\sum_{k \in J} \beta_{n,k}\gamma_{k,m}\Big)\, e_n^{(2)} \otimes s'_m. \]
By uniqueness of the expansion it holds that
\[ \alpha'_{n,m} = \sum_{k \in J} \alpha_{n,k}\gamma_{k,m}, \qquad \beta'_{n,m} = \sum_{k \in J} \beta_{n,k}\gamma_{k,m}. \]
Therefore
\[ C_{fg}^{(s')} = \sum_{m \in J} \Big(\sum_{n \in I_1} \alpha'_{n,m}\, e_n^{(1)}\Big) \otimes \Big(\sum_{l \in I_2} \beta'_{l,m}\, e_l^{(2)}\Big) = \sum_{(n,l) \in I_1 \times I_2} \Big( \sum_{(k,i) \in J \times J} \alpha_{n,k}\beta_{l,i} \sum_{m \in J} \gamma_{k,m}\gamma_{i,m} \Big)\, e_n^{(1)} \otimes e_l^{(2)} \]
\[ = \sum_{(n,l) \in I_1 \times I_2} \Big( \sum_{(k,i) \in J \times J} \alpha_{n,k}\beta_{l,i}\, \delta_{ki} \Big)\, e_n^{(1)} \otimes e_l^{(2)} = \sum_{k \in J} \Big(\sum_{n \in I_1} \alpha_{n,k}\, e_n^{(1)}\Big) \otimes \Big(\sum_{l \in I_2} \beta_{l,k}\, e_l^{(2)}\Big) = C_{fg}^{(s)}. \]

As a consequence of the previous result we are allowed to introduce the following:

Definition 1.1.3. $C_{fg}$ defined in Proposition 1.1.2 is called the correlation of $f$ and $g$.

Now we investigate the possible existence of an operator which can be associated to the correlation $C_f := C_{ff}$ for $f \in H \otimes S$. Before doing this we recall some concepts and results from functional analysis. Let $C : H \to H$ be an operator on the real Hilbert space $H$. We can associate to $C$ the norm
\[ \|C\| := \sup_{v \in H,\, \|v\|_H = 1} \|Cv\|_H. \]
We say that a linear and bounded operator $C$ on $H$ is compact if there exists a sequence $(C_n)_n$ of finite rank operators such that $\|C - C_n\| \to 0$. The Spectral Theorem for compact and symmetric operators on a separable Hilbert space states (see [2], Theorem 5.1):

Theorem 1.1.4. Let $H$ be a separable real Hilbert space. Let $C$ be a symmetric and compact operator on $H$. Then for any $v \in H$
\[ Cv = \sum_{m \in J} \lambda_m \langle v, e_m\rangle_H\, e_m, \]
where $(e_m)$ form an orthonormal basis for $H$ and $(\lambda_m) \subset \mathbb{R}$ are the eigenvalues corresponding to the eigenvectors $e_m$ of $C$, with $\lambda_m \downarrow 0$. The notation $\lambda_m \downarrow 0$ denotes a non-increasing sequence converging to $0$.

Define, for any orthonormal basis $(e_m)_{m \in J}$, the trace of the non-negative definite symmetric operator $C$ as
\[ \mathrm{Tr}(C) := \sum_{m \in J} \langle Ce_m, e_m\rangle_H. \]
Indeed it can be proved that this definition is independent of the choice of the basis in $H$; see for example [1]. We say that a non-negative definite symmetric compact operator is trace class if $\mathrm{Tr}(C) < \infty$. Equivalently, by the characterization of the Spectral Theorem, a non-negative definite symmetric compact operator is trace class if $\sum_m \lambda_m < \infty$. We can now proceed to prove the following:

Theorem 1.1.5. Let $(H, \langle\cdot,\cdot\rangle_H)$ and $(S, \langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces of the same dimension and let $(e_m)_{m \in J}$, $(s_m)_{m \in J}$ be orthonormal bases of $H$ and $S$, respectively. The map
\[ \Phi : \{C_f \in H \otimes H : f \in H \otimes S\} \to \{C : C \text{ non-negative definite trace class operator}\} \]
given by
\[ \Phi(C_f)(v) = \Phi\Big(\sum_{m \in J} f_m \otimes f_m\Big)(v) := \sum_{m \in J} \langle f_m, v\rangle_H\, f_m, \qquad v \in H, \tag{1.1.1} \]
is a one-to-one correspondence.

Proof. For $f \in H \otimes S$ we denote $\Phi(C_f)$ by $C_f$. First we prove that $C_f$ indeed has the required properties. By definition we can immediately conclude that $C_f$ is compact, as it is a norm limit of finite-rank operators. Indeed, if we define $C_n := \sum_{m \le n} \langle f_m, \cdot\rangle_H\, f_m$, then
\[ \|C_f - C_n\| \le \sum_{m > n} \|f_m\|_H^2 \to 0, \qquad n \to \infty. \]

We show that $C_f$ is non-negative definite. Let $v \in H$, $v \ne 0$. Then
\[ \langle C_f v, v\rangle_H = \sum_{m \in J} \langle f_m, v\rangle_H \langle f_m, v\rangle_H = \sum_{m \in J} \langle f_m, v\rangle_H^2 \ge 0. \]
Now we check that $C_f$ is a trace class operator. Namely, plugging the expression for $C_f$ into the definition of the trace,
\[ \mathrm{Tr}(C_f) = \sum_{m \in J} \langle C_f e_m, e_m\rangle_H = \sum_{m \in J} \sum_{n \in J} \langle f_n, e_m\rangle_H \langle f_n, e_m\rangle_H = \sum_{n \in J} \Big\langle \sum_{m \in J} \langle f_n, e_m\rangle_H\, e_m,\ \sum_{m \in J} \langle f_n, e_m\rangle_H\, e_m \Big\rangle_H = \sum_{n \in J} \|f_n\|_H^2 = \|f\|_{H \otimes S}^2 < \infty. \]
Therefore we can conclude that $C_f$ is a non-negative definite trace class operator. Now it remains to show that the map $\Phi$ is a one-to-one correspondence. Observe that the following chain of equalities holds true for $v, w \in H$:
\[ \langle C_f v, w\rangle_H = \sum_{m \in J} \langle f_m, v\rangle_H \langle f_m, w\rangle_H = \sum_{m \in J} \langle f_m \otimes f_m, v \otimes w\rangle_{H \otimes H} = \langle C_f, v \otimes w\rangle_{H \otimes H}. \tag{1.1.2} \]
We are going to check that our map is injective, i.e. that $\Phi(C_f) = \Phi(C_g)$ for $f, g \in H \otimes S$ implies $C_f = C_g$. Assuming that $\Phi(C_f) = \Phi(C_g)$, we have by (1.1.2) that for all $v, w \in H$
\[ \langle C_f, v \otimes w\rangle_{H \otimes H} = \langle C_g, v \otimes w\rangle_{H \otimes H}. \]
By linearity we also have, for all $n \in \mathbb{N}$ and for all $v_i, w_i \in H$,
\[ \Big\langle C_f, \sum_{i=1}^n v_i \otimes w_i \Big\rangle_{H \otimes H} = \Big\langle C_g, \sum_{i=1}^n v_i \otimes w_i \Big\rangle_{H \otimes H}, \]
or equivalently
\[ \Big\langle C_f - C_g, \sum_{i=1}^n v_i \otimes w_i \Big\rangle_{H \otimes H} = 0. \]
The claim is proved after observing that the set $\{\sum_{i=1}^n v_i \otimes w_i : v_i, w_i \in H,\ n \in \mathbb{N}\}$ is a dense subset of $H \otimes H$, and therefore $C_f - C_g = 0$. It remains to show that the map is also surjective. Let $C$ be a non-negative definite trace class operator. We want to show that there exists $f \in H \otimes S$ such that $\Phi(C_f) = C$. Let $(\varphi_m)_{m \in J}$ be the sequence of eigenvectors of $C$ forming an orthonormal basis for $H$ and $(\lambda_m)_{m \in J} \subset \mathbb{R}_+$ be the eigenvalues of $C$, i.e.
\[ C\varphi_m = \lambda_m \varphi_m. \tag{1.1.3} \]
Moreover $\sum_m \lambda_m < \infty$ as $C$ is trace class. As a consequence we get convergence of the following series:
\[ f = \sum_{m \in J} \sqrt{\lambda_m}\, \varphi_m \otimes s_m. \]

For this element we have $f_m = \sqrt{\lambda_m}\, \varphi_m$. Hence
\[ C_f = \sum_{m \in J} \lambda_m\, \varphi_m \otimes \varphi_m. \]
Thus for all $v \in H$ it holds that
\[ C_f v = \sum_{m \in J} \lambda_m \langle \varphi_m, v\rangle_H\, \varphi_m. \]
From this we can state that the spectrum of $C_f$ equals (1.1.3) and therefore $\Phi(C_f) = C$.

Corollary 1.1.6. Let $(H, \langle\cdot,\cdot\rangle_H)$ be a separable Hilbert space and let $C \in H \otimes H$ be a correlation. Let the operator defined by (1.1.1) have eigenpairs $(\lambda_m, \varphi_m)_{m \in J}$. Then $C$ can be represented as
\[ C = \sum_{m \in J} \lambda_m\, \varphi_m \otimes \varphi_m. \tag{1.1.4} \]

The following theorem is crucial for our purpose.

Theorem 1.1.7. Let $(H, \langle\cdot,\cdot\rangle_H)$ and $(S, \langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces of the same dimension. Let $(\varphi_m)_{m \in J}$ be an orthonormal basis for $H$. Let $C \in H \otimes H$ be a correlation represented as in (1.1.4). Let $f \in H \otimes S$. Then $C_f = C$ if and only if there exists an orthonormal family $(Y_m)_{m \in J} \subset S$ such that
\[ f = \sum_{m \in J} \sqrt{\lambda_m}\, \varphi_m \otimes Y_m. \tag{1.1.5} \]

Proof. Assume that there exists an orthonormal family $(Y_m)_{m \in J} \subset S$ such that $f = \sum_m \sqrt{\lambda_m}\, \varphi_m \otimes Y_m$. Then by Proposition 1.1.2 we can conclude the statement, possibly after having completed the family $(Y_m)_{m \in J}$ to a basis for $S$. The other way around, assume that $C_f = C$. We have already observed at the beginning of the section that we can represent the element $f \in H \otimes S$ as
\[ f = \sum_{m \in J} \varphi_m \otimes X_m, \tag{1.1.6} \]
where $(X_m) \subset S$. Moreover we have the expansion $X_m = \sum_{n \in J} \langle X_m, s_n\rangle_S\, s_n$ for $(s_n)_{n \in J}$ an orthonormal basis of $S$. Thus we get
\[ f = \sum_{m \in J} \varphi_m \otimes X_m = \sum_{m \in J} \sum_{n \in J} \langle X_m, s_n\rangle_S\, \varphi_m \otimes s_n. \]

If we call $f_n := \sum_{m \in J} \langle X_m, s_n\rangle_S\, \varphi_m$, we have
\[ C_f = \sum_{n \in J} f_n \otimes f_n = \sum_{(m,m') \in J \times J} \Big( \sum_{n \in J} \langle X_m, s_n\rangle_S \langle X_{m'}, s_n\rangle_S \Big)\, \varphi_m \otimes \varphi_{m'} = \sum_{(m,m') \in J \times J} \langle X_m, X_{m'}\rangle_S\, \varphi_m \otimes \varphi_{m'}. \]
If we compare the previous result with (1.1.4), we conclude that necessarily $\langle X_m, X_{m'}\rangle_S = \lambda_m \delta_{mm'}$. Therefore $(X_m)$ is an orthogonal family with $\|X_m\|_S = \sqrt{\lambda_m}$. Hence from (1.1.6) we get the statement with $Y_m = X_m / \sqrt{\lambda_m}$.

Definition 1.1.8. The representation (1.1.5) of $f$ given the spectrum of $C_f$ is called the Karhunen-Loève expansion of $f$.

1.1.2 Truncation of the Karhunen-Loève Expansion

For any element $f \in H \otimes S$ with a given correlation we have established the existence of an expansion taking the form
\[ f = \sum_{m \in J} \sqrt{\lambda_m}\, \varphi_m \otimes Y_m, \]
where $(\varphi_m)_{m \in J}$ and $(Y_m)_{m \in J}$ are orthonormal systems and $\lambda_m \downarrow 0$. Now we are interested in investigating the properties of such an expansion. In order to do that we introduce the following standard notation: for $U$ a closed subspace of $H$, $P_U$ will indicate the orthogonal projection of $H$ onto $U$.

Theorem 1.1.9. If $f \in H \otimes S$ has the Karhunen-Loève expansion (1.1.5), then for any $N \in \mathbb{N}$ it holds that
\[ \inf_{\substack{U \subset H \text{ closed} \\ \dim U = N}} \|f - P_{U \otimes S} f\|_{H \otimes S}^2 \ \ge\ \sum_{m \ge N+1} \lambda_m, \tag{1.1.7} \]
with equality in the case $U = \mathrm{span}\{\varphi_1, \ldots, \varphi_N\}$.

Proof. If $U = \mathrm{span}\{\varphi_1, \ldots, \varphi_N\}$, then $P_{U \otimes S} f = \sum_{m=1}^N \sqrt{\lambda_m}\, \varphi_m \otimes Y_m$, which is exactly the $N$-th truncation of the Karhunen-Loève expansion. Thus
\[ \|f - P_{U \otimes S} f\|_{H \otimes S}^2 = \Big\| \sum_{m \ge N+1} \sqrt{\lambda_m}\, \varphi_m \otimes Y_m \Big\|_{H \otimes S}^2 = \sum_{m,n \ge N+1} \sqrt{\lambda_m \lambda_n}\, \langle \varphi_m \otimes Y_m, \varphi_n \otimes Y_n\rangle_{H \otimes S} = \sum_{m \ge N+1} \lambda_m, \]
and so we have equality in (1.1.7). We prove the first statement by induction. If $N = 0$ then (1.1.7) holds. Firstly observe that we are allowed to take $(Y_m)_{m \in J}$ to be an orthonormal basis of $S$, with $J$ finite or countably infinite. Let $U \subset H$ be closed and such that $\dim U = N$. Consider $g \in U \otimes S$. Then we can represent $g$ as $g = \sum_{m \in J} u'_m \otimes Y_m$, where $(u'_m) \subset U$. It holds that $g = \sum_{m \in J} \sqrt{\lambda_m}\, u_m \otimes Y_m$ with $u_m := u'_m / \sqrt{\lambda_m} \in U$. Using this we get
\[ \|f - g\|_{H \otimes S}^2 = \langle f - g, f - g\rangle_{H \otimes S} = \Big\langle \sum_{m \in J} \sqrt{\lambda_m}\, (\varphi_m - u_m) \otimes Y_m,\ \sum_{n \in J} \sqrt{\lambda_n}\, (\varphi_n - u_n) \otimes Y_n \Big\rangle_{H \otimes S} \]
\[ = \sum_{m,n \in J} \sqrt{\lambda_m \lambda_n}\, \langle \varphi_m - u_m, \varphi_n - u_n\rangle_H\, \langle Y_m, Y_n\rangle_S = \sum_{m \in J} \lambda_m \|\varphi_m - u_m\|_H^2. \tag{1.1.8} \]
Consider $\mathrm{span}\{u_1, \ldots, u_{N-1}\} \subset U$. If $\dim(\mathrm{span}\{u_1, \ldots, u_{N-1}\}) = N-1$, set $W := \mathrm{span}\{u_1, \ldots, u_{N-1}\}$. Otherwise, if $\dim(\mathrm{span}\{u_1, \ldots, u_{N-1}\}) < N-1$, take
\[ W := \mathrm{span}\{u_1, \ldots, u_{N-1}, \varphi_N, \ldots, \varphi_{N+k}\} \]
for some $k \in \mathbb{N}$ such that $\dim W = N-1$. We use the notation $W^{\perp_U}$ for the space such that $U = W \oplus W^{\perp_U}$. Observe that $\dim(W^{\perp_U}) = 1$ and therefore $W^{\perp_U} = \mathrm{span}\{e\}$ for some $e \in U$ with $\|e\|_H = 1$. We have
\[ \|f - g\|_{H \otimes S}^2 = \sum_{m \le N-1} \lambda_m \|\varphi_m - u_m\|_H^2 + \sum_{m \ge N} \lambda_m \|\varphi_m - u_m\|_H^2 \]
\[ = \sum_{m \le N-1} \lambda_m \|\varphi_m - P_W u_m\|_H^2 + \sum_{m \ge N} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \sum_{m \ge N} \lambda_m \big( \|\varphi_m - P_W u_m\|_H^2 - \|\varphi_m - u_m\|_H^2 \big) \]
\[ \overset{(*)}{\ge} \sum_{m \in J} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \sum_{m \ge N} \lambda_m \|P_{W^{\perp_U}} \varphi_m\|_H^2 = \sum_{m \in J} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \sum_{m \ge N} \lambda_m \langle e, \varphi_m\rangle_H^2 \]
\[ \ge \sum_{m \in J} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \sup_{m \ge N} \lambda_m \sum_{m \ge N} \langle e, \varphi_m\rangle_H^2 \ge \sum_{m \in J} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \lambda_N \|e\|_H^2 = \sum_{m \in J} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \lambda_N, \]
where in the last inequality we have used Bessel's inequality and the assumption that the sequence of eigenvalues is non-increasing. The inequality marked with $(*)$ holds true because
\[ \|\varphi_m - P_W u_m\|_H^2 = \|P_W \varphi_m - P_W u_m + P_{W^{\perp_U}} \varphi_m\|_H^2 = \|P_W(\varphi_m - u_m) + P_{W^{\perp_U}} \varphi_m\|_H^2 = \|P_W(\varphi_m - u_m)\|_H^2 + \|P_{W^{\perp_U}} \varphi_m\|_H^2 \le \|\varphi_m - u_m\|_H^2 + \|P_{W^{\perp_U}} \varphi_m\|_H^2. \]
By (1.1.8), for $w = \sum_{m \in J} \sqrt{\lambda_m}\, P_W u_m \otimes Y_m \in W \otimes S$ we have
\[ \sum_{m \in J} \lambda_m \|\varphi_m - P_W u_m\|_H^2 - \lambda_N = \|f - w\|_{H \otimes S}^2 - \lambda_N \ge \inf_{h \in W \otimes S} \|f - h\|_{H \otimes S}^2 - \lambda_N. \]
Thus we have shown
\[ \|f - g\|_{H \otimes S}^2 \ \ge \inf_{\substack{W \subset H \text{ closed} \\ \dim W = N-1}}\ \inf_{h \in W \otimes S} \|f - h\|_{H \otimes S}^2 - \lambda_N. \]

Applying the inductive hypothesis we finally get
\[ \inf_{\substack{U \subset H \text{ closed} \\ \dim U = N}}\ \inf_{g \in U \otimes S} \|f - g\|_{H \otimes S}^2 \ \ge \inf_{\substack{W \subset H \text{ closed} \\ \dim W = N-1}}\ \inf_{h \in W \otimes S} \|f - h\|_{H \otimes S}^2 - \lambda_N \ \ge\ \sum_{m \ge N} \lambda_m - \lambda_N = \sum_{m \ge N+1} \lambda_m. \]

1.2 The Karhunen-Loève Expansion of a Random Field

The main goal of this section is to specialize the results we have previously obtained to the particular case of infinite-dimensional random fields. In the notation of the previous section, let $H = L^2(D)$ and $S = L^2_P(\Omega)$. Let $a \in L^2_P(\Omega) \otimes L^2(D)$ (i.e. $a$ is a second order random field) with mean
\[ E_a(x) := \int_\Omega a(\omega, x)\, dP(\omega) \]
and covariance function
\[ V_a(x, x') := \int_\Omega a(\omega, x)\, a(\omega, x')\, dP(\omega) - E_a(x) E_a(x') = \int_\Omega [a(\omega, x) - E_a(x)][a(\omega, x') - E_a(x')]\, dP(\omega). \]
Observe that $E_a \in L^2(D)$ and $V_a \in L^2(D) \otimes L^2(D)$. Consider
\[ \tilde{a}(\omega, x) := a(\omega, x) - E_a(x) \in L^2_P(\Omega) \otimes L^2(D). \]
Let $(s_m)_{m \in \mathbb{N}}$ and $(e_n)_{n \in \mathbb{N}}$ be orthonormal bases for $L^2_P(\Omega)$ and $L^2(D)$, respectively. According to our discussion at the beginning of Section 1.1.1, we have the following representation:
\[ \tilde{a} = \sum_{(n,m) \in \mathbb{N} \times \mathbb{N}} \tilde{a}_{n,m}\, e_n \otimes s_m, \qquad (\tilde{a}_{n,m}) \in \ell^2(\mathbb{N} \times \mathbb{N}). \]
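In finite dimensions the truncation theorem reduces to the Eckart-Young property of the singular value decomposition, so its optimality statement can be verified numerically. A sketch under our own illustrative choices (a random $6 \times 8$ coefficient matrix standing in for $f$; nothing here is from the thesis):

```python
import numpy as np

# In finite dimensions an element f of H (x) S is a coefficient matrix C, its
# correlation operator is C C^T, and the eigenvalues lam_m are the squared
# singular values of C. The N-th Karhunen-Loeve truncation is the rank-N SVD.
rng = np.random.default_rng(1)
C = rng.standard_normal((6, 8))
U, s, Vt = np.linalg.svd(C, full_matrices=False)
lam = s**2                       # eigenvalues, non-increasing

N = 3
# Keep the N leading terms sqrt(lam_m) phi_m (x) Y_m:
C_N = U[:, :N] @ np.diag(s[:N]) @ Vt[:N, :]
err_sq = np.sum((C - C_N)**2)    # squared norm of f - P_{U (x) S} f

# Equality case of (1.1.7): the error equals the tail sum of the eigenvalues.
assert np.isclose(err_sq, lam[N:].sum())
```

Any other rank-$N$ approximation produces a strictly larger error, which is the content of the infimum bound.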

Define $\tilde{a}_m := \sum_{n \in \mathbb{N}} \tilde{a}_{n,m}\, e_n \in L^2(D)$. Now we verify that $C_{\tilde{a}}$ as given in Definition 1.1.3 coincides with $V_a$ defined as above. Indeed, if we identify $\tilde{a}$ with an element of $L^2(\Omega \times D)$,
\[ V_a(x, x') = \int_\Omega \tilde{a}(\omega, x)\, \tilde{a}(\omega, x')\, dP(\omega) = \int_\Omega \Big( \sum_{(n,m) \in \mathbb{N} \times \mathbb{N}} \tilde{a}_{n,m}\, e_n(x) s_m(\omega) \Big) \Big( \sum_{(n',m') \in \mathbb{N} \times \mathbb{N}} \tilde{a}_{n',m'}\, e_{n'}(x') s_{m'}(\omega) \Big)\, dP(\omega) \]
\[ = \sum_{(n,m)} \sum_{(n',m')} \tilde{a}_{n,m} \tilde{a}_{n',m'}\, e_n(x) e_{n'}(x') \int_\Omega s_m(\omega) s_{m'}(\omega)\, dP(\omega) = \sum_{(n,m)} \sum_{(n',m')} \tilde{a}_{n,m} \tilde{a}_{n',m'}\, e_n(x) e_{n'}(x')\, \delta_{mm'} \]
\[ = \sum_{m \in \mathbb{N}} \Big( \sum_{n \in \mathbb{N}} \tilde{a}_{n,m}\, e_n(x) \Big) \Big( \sum_{n' \in \mathbb{N}} \tilde{a}_{n',m}\, e_{n'}(x') \Big) = \sum_{m \in \mathbb{N}} \tilde{a}_m(x)\, \tilde{a}_m(x') = C_{\tilde{a}}. \]
Therefore $V_a$ is indeed the correlation of $\tilde{a}$. By Theorem 1.1.5 we can associate to $V_a$ a non-negative definite trace class operator $V_a : L^2(D) \to L^2(D)$ taking the form
\[ (V_a v)(x) = \sum_{m \in \mathbb{N}} \langle \tilde{a}_m, v\rangle_{L^2(D)}\, \tilde{a}_m(x) = \sum_{m \in \mathbb{N}} \Big( \int_D \tilde{a}_m(x') v(x')\, dx' \Big)\, \tilde{a}_m(x) = \int_D \sum_{m \in \mathbb{N}} \tilde{a}_m(x)\, \tilde{a}_m(x')\, v(x')\, dx' = \int_D V_a(x, x')\, v(x')\, dx'. \tag{1.2.1} \]
This is called the Carleman operator. In particular $V_a$ is compact and has an eigenpair sequence $(\lambda_m, \varphi_m)_{m=1}^\infty$ such that
\[ V_a \varphi_m = \lambda_m \varphi_m, \tag{1.2.2} \]
with the properties that the eigenvalues are non-negative reals such that $\lambda_m \downarrow 0$ and the eigenvectors $(\varphi_m)$ form an orthonormal sequence in $L^2(D)$. By Corollary 1.1.6 we have a representation for the covariance given by
\[ V_a(x, x') = \sum_{m \in \mathbb{N}} \lambda_m\, \varphi_m(x)\, \varphi_m(x'). \]
Finally we have the following equivalent formulation of Theorem 1.1.7:

Corollary 1.2.1. If $a \in L^2_P(\Omega) \otimes L^2(D)$, then there exists a sequence $(Y_m) \subset L^2_P(\Omega)$ such that $EY_m = 0$, $\mathrm{Cov}(Y_m, Y_n) = \delta_{mn}$ for all $m, n \in \mathbb{N}$, and
\[ a(\omega, x) = E_a(x) + \sum_{m \in \mathbb{N}} \sqrt{\lambda_m}\, \varphi_m(x)\, Y_m(\omega), \tag{1.2.3} \]
where $(\lambda_m)$ and $(\varphi_m)$ are the eigenvalues and eigenvectors of the Carleman operator $V_a$ defined in (1.2.1). Moreover,
\[ Y_m(\omega) = \frac{1}{\sqrt{\lambda_m}} \int_D [a(\omega, x) - E_a(x)]\, \varphi_m(x)\, dx. \]

Definition 1.2.2. Decomposition (1.2.3) is called the Karhunen-Loève expansion of the random field $a$.

In this context the analogue of Theorem 1.1.9 states that:

Corollary 1.2.3. If $a \in L^2_P(\Omega) \otimes L^2(D)$ has the Karhunen-Loève expansion (1.2.3), then for any $N \in \mathbb{N}$ it holds that
\[ \Big\| \tilde{a} - \sum_{m=1}^N \sqrt{\lambda_m}\, \varphi_m Y_m \Big\|_{L^2_P(\Omega) \otimes L^2(D)}^2 = \sum_{m \ge N+1} \lambda_m. \tag{1.2.4} \]

1.2.1 More on Convergence of the Karhunen-Loève Expansion

Under certain conditions it can be shown that the convergence of the truncated Karhunen-Loève expansion is in $L^\infty(\Omega \times D)$. Namely, assuming that the sequence $(Y_m)$ is uniformly bounded in $L^\infty(\Omega)$ and that $\sum_m \sqrt{\lambda_m}\, \|\varphi_m\|_{L^\infty(D)}$ converges, we have
\[ \Big\| \tilde{a} - \sum_{m=1}^N \sqrt{\lambda_m}\, \varphi_m Y_m \Big\|_{L^\infty(\Omega \times D)} \le C \sum_{m \ge N+1} \sqrt{\lambda_m}\, \|\varphi_m\|_{L^\infty(D)}, \tag{1.2.5} \]
where the constant $C$ is independent of $N$. We will use this important observation in Section 4.2.1, where we analyse the overall error of the stochastic collocation method. Therefore, inspired by the bound (1.2.5), we are interested in studying the eigenvalue decay and point-wise eigenfunction bounds. The results we are going to present are strictly related to the regularity of the covariance $V_a$, as stated in the following definition.

Definition 1.2.4. Let $p, q \in [0, \infty)$. A covariance function $V_a : D \times D \to \mathbb{R}$ is said to be piecewise analytic (respectively smooth and $H^{p,q}$) if there exists a finite partition $\mathcal{D} = \{D_j\}_{j=1}^K$ of $D$ into simplices $D_j$ and there exists a finite family $G = \{G_j\}_{j=1}^K$ of open sets in $\mathbb{R}^d$ such that
\[ \overline{D} = \bigcup_{j=1}^K \overline{D}_j, \qquad \overline{D}_j \subset G_j, \quad j = 1, \ldots, K, \]
and such that $V_a|_{D_j \times D_{j'}}$ has an extension to $G_j \times G_{j'}$ which is analytic in $G_j \times G_{j'}$ (respectively smooth in $G_j \times G_{j'}$, and in $(H^p(G_j) \otimes L^2(D)) \cap (H^q(G_{j'}) \otimes L^2(D))$) for any pair $(j, j')$.
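Corollary 1.2.1 translates directly into a small numerical experiment: discretize a covariance function on a grid, solve the eigenvalue problem of the discretized Carleman operator (1.2.1), and assemble a truncated expansion. Everything below is an illustrative sketch of ours, not the thesis's setup: we take an exponential covariance on $D = (0,1)$ with mean zero, and we draw the uncorrelated $Y_m$ as i.i.d. standard normals (a choice, not a consequence of the corollary):

```python
import numpy as np

# Discretize V_a(x, x') = sigma^2 exp(-|x - x'| / gamma) on a midpoint grid.
# Parameters are illustrative assumptions.
n, sigma, gamma = 200, 1.0, 0.3
x = (np.arange(n) + 0.5) / n                     # midpoints, cell width 1/n
V = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / gamma)

# (V_a v)(x) = int_D V_a(x, x') v(x') dx'  ->  eigenproblem of (1/n) V.
lam, phi = np.linalg.eigh(V / n)
lam, phi = lam[::-1], phi[:, ::-1] * np.sqrt(n)  # sort decreasing, L^2-normalize

# One realization of the truncated field a_N = E_a + sum sqrt(lam_m) phi_m Y_m,
# with E_a = 0 and Y_m assumed i.i.d. N(0, 1).
N = 10
rng = np.random.default_rng(2)
Y = rng.standard_normal(N)
a_N = phi[:, :N] @ (np.sqrt(lam[:N]) * Y)
```

The sum of the discrete eigenvalues reproduces the total variance $\sigma^2$ (the discrete analogue of $V_a$ being trace class), and the tail $\sum_{m>N} \lambda_m$ is exactly the truncation error (1.2.4).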

We refer the reader to [14, Section 2.2] for a proof of the following result.

Proposition 1.2.5. Let $V_a \in L^2(D \times D)$ and let $\lambda_m$ be as in (1.2.2).
1) If $V_a$ is piecewise analytic, then the eigenvalues satisfy $\lambda_m \le C_1 e^{-C_2 m^{1/d}}$, for some constants $C_1, C_2 > 0$ depending only on $V_a$.
2) If $V_a$ is piecewise $H^{k,0}$, then the eigenvalues satisfy $\lambda_m \le C_3\, m^{-k/d}$, for a constant $C_3 > 0$ depending only on $V_a$.
3) If $V_a$ is piecewise smooth, then the eigenvalues satisfy $\lambda_m \le C_4\, m^{-s}$, for any $s > 0$, with a constant $C_4 > 0$ depending only on $V_a$ and $s$.
4) If $V_a$ is a Gaussian covariance function taking the form
\[ V_a(x, x') = \sigma^2 e^{-\frac{|x - x'|^2}{(\gamma\, \mathrm{diam}(D))^2}} \]
for $\sigma, \gamma > 0$, called the standard deviation and correlation length respectively, then the eigenvalues satisfy
\[ \lambda_m \le C_5\, \frac{\sigma^2}{\gamma}\, \frac{(1/\gamma)^{m^{1/d}}}{\Gamma(0.5\, m^{1/d})}, \]
where $\Gamma(\cdot)$ denotes the Gamma function and $C_5 > 0$ is independent of $m$.

The proof of the following proposition can be found in [14, Section 2.3].

Proposition 1.2.6. Let $V_a \in L^2(D \times D)$ and let $\varphi_m$ be as in (1.2.2).
1) If $V_a$ is piecewise $H^{k,0}$, then $\varphi_m \in H^k(D_j)$ for all $j$, and for every $\varepsilon \in (0, k - d/2]$ there exists a constant $C_6 = C_6(\varepsilon, k) > 0$ such that
\[ \|\varphi_m\|_{L^\infty(D)} \le C_6\, \lambda_m^{-(d/2 + \varepsilon)/k}, \qquad m \in \mathbb{N}. \]
2) If $V_a$ is piecewise smooth, then for any $s > 0$ there exists a constant $C_7 = C_7(s, d) > 0$ such that
\[ \|\varphi_m\|_{L^\infty(D)} \le C_7\, \lambda_m^{-s}, \qquad m \in \mathbb{N}. \]

Combining Propositions 1.2.5 and 1.2.6, we finally get control on the series in (1.2.5).

Corollary 1.2.7. If $V_a$ is piecewise $H^{k,0}$, then $\varphi_m \in H^k(D_j)$ for all $j$ and
\[ \sqrt{\lambda_m}\, \|\varphi_m\|_{L^\infty(D)} \le K\, m^{-s}, \]
where $s = \frac{1}{d}\big( \frac{k - d}{2} - \varepsilon \big)$ for an arbitrary $\varepsilon \in (0, \frac{1}{2}(k - d)]$, and the constant $K$ is independent of $m$.
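The decay claims of Proposition 1.2.5 are easy to observe numerically. A sketch for the Gaussian covariance of item 4 on $D = (0,1)$ (so $\mathrm{diam}(D) = 1$); the grid size and the parameters $\sigma, \gamma$ are illustrative choices of ours:

```python
import numpy as np

# Discrete eigenvalues of the Gaussian covariance on (0, 1); diam(D) = 1, so
# V_a(x, x') = sigma^2 exp(-((x - x') / gamma)^2). Parameters are illustrative.
n, sigma, gamma = 200, 1.0, 0.5
x = (np.arange(n) + 0.5) / n
V = sigma**2 * np.exp(-((x[:, None] - x[None, :]) / gamma)**2)
lam = np.linalg.eigvalsh(V / n)[::-1]        # sorted in decreasing order

# The spectrum collapses extremely fast, as the super-exponential bound in
# item 4 predicts: a handful of modes captures essentially all the variance.
assert lam[15] < 1e-6 * lam[0]
assert lam[:15].sum() > 0.999 * lam.sum()
```

This rapid collapse is what makes a short Karhunen-Loève truncation, and hence a low-dimensional parameter space, viable for smooth covariances.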


2 Problem Setting

2.1 Strong and Weak Formulations

For $d \in \mathbb{N}$, let $D \subset \mathbb{R}^d$ be a bounded Lipschitz domain and $(\Omega, \mathcal{F}, P)$ be a complete probability space. Let $a, f : \Omega \times D \to \mathbb{R}$ be known random functions. Our model problem is an elliptic partial differential equation with random coefficients and takes the form: $P$-a.s.
\[ \begin{cases} -\nabla \cdot (a(\omega, x)\, \nabla u(\omega, x)) = f(\omega, x), & x \in D, \\ u(\omega, x) = 0, & x \in \partial D, \end{cases} \tag{2.1.1} \]
where the gradient operator is taken with respect to $x$. The main goal is to find a random field $u : \Omega \times D \to \mathbb{R}$ which satisfies the above problem. As we will see, under certain assumptions on $a$ and $f$, it is relatively easy to show that such a solution exists and is unique. On the other hand, no explicit expression for $u$ is known in general. Thus we aim to approximate numerically the solution to (2.1.1). Consider the Hilbert space $L^2_P(\Omega)$ of square integrable functions with respect to $P$ with the usual norm, and the Hilbert space $H^1_0(D)$ of $H^1(D)$-functions vanishing at the boundary of $D$ in a trace sense, equipped with the norm
\[ \|\varphi\|_{H^1_0(D)}^2 = \int_D |\nabla \varphi(x)|^2\, dx. \]
In what follows we are going to make several assumptions:

(A1) $a \in L^2_P(\Omega) \otimes L^2(D)$ with mean and covariance function defined as
\[ E_a(x) := \int_\Omega a(\omega, x)\, dP(\omega), \qquad V_a(x, x') := \int_\Omega a(\omega, x)\, a(\omega, x')\, dP(\omega) - E_a(x) E_a(x'). \]

(A2) $a$ is uniformly bounded from below:
\[ \exists\, a_{\min} > 0 : \quad P\big(\omega \in \Omega : a(\omega, x) > a_{\min}\ \forall x \in \overline{D}\big) = 1. \]

(A3) $f \in L^2_P(\Omega) \otimes L^2(D)$ with mean and covariance function defined as
\[ E_f(x) := \int_\Omega f(\omega, x)\, dP(\omega), \qquad V_f(x, x') := \int_\Omega f(\omega, x)\, f(\omega, x')\, dP(\omega) - E_f(x) E_f(x'). \]

In this framework we deal with stochastic functions living on a domain which is the Cartesian product of spatial and probabilistic domains. For this reason the choice of

tensor product spaces is natural. In particular we focus on the tensor product Hilbert space $V_P := L^2_P(\Omega) \otimes H^1_0(D)$ endowed with the inner product
\[ \langle v, w\rangle_{V_P} = E\Big[ \int_D \nabla v(\omega, x) \cdot \nabla w(\omega, x)\, dx \Big]. \]
Moreover we introduce the subspace
\[ V_{P,a} := \Big\{ v \in V_P : E\Big[ \int_D a(\omega, x)\, |\nabla v(\omega, x)|^2\, dx \Big] < \infty \Big\} \]
with the norm
\[ \|v\|_{V_{P,a}}^2 = E\Big[ \int_D a(\omega, x)\, |\nabla v(\omega, x)|^2\, dx \Big]. \]
Observe that due to (A2) we have the continuous embedding $V_{P,a} \subset V_P$ with
\[ \|v\|_{V_P} \le \frac{1}{\sqrt{a_{\min}}}\, \|v\|_{V_{P,a}}. \tag{2.1.2} \]
Define the bilinear form $B : V_P \times V_P \to \mathbb{R}$ and the linear functional $F : V_P \to \mathbb{R}$ as follows:
\[ B(w, v) := E\Big[ \int_D a(\omega, x)\, \nabla w(\omega, x) \cdot \nabla v(\omega, x)\, dx \Big], \qquad F(v) := E\Big[ \int_D f(\omega, x)\, v(\omega, x)\, dx \Big]. \]
We can now reformulate problem (2.1.1) in a variational form: find $u \in V_P$ such that
\[ B(u, v) = F(v), \qquad \forall v \in V_P. \tag{2.1.3} \]
We show in a moment that problem (2.1.3) is well-posed, in the sense that a solution exists and is unique. More precisely, if we can prove that $B$ is continuous and coercive and $F$ is continuous, then by the Lax-Milgram Theorem we have the claim (see [9], Theorem 2.7.7). In the proof of this result we will use Poincaré's inequality:
\[ \|v\|_{L^2(D)} \le C_P\, \|\nabla v\|_{L^2(D)}, \qquad v \in H^1_0(D). \tag{2.1.4} \]

Lemma 2.1.1. Under assumptions (A2) and (A3), problem (2.1.3) admits a unique solution $u \in V_P$ such that
\[ \|u\|_{V_P} \le \frac{C_P}{a_{\min}}\, \|f\|_{L^2_P(\Omega) \otimes L^2(D)}. \]

Proof. Hölder's inequality applied twice gives that $|B(u, v)| \le \|u\|_{V_{P,a}} \|v\|_{V_{P,a}}$, and therefore $B$ is continuous with continuity constant equal to $1$. It is straightforward to show also that
\[ B(v, v) = \|v\|_{V_{P,a}}^2, \]
which in turn gives coercivity with constant equal to $1$. For what concerns the continuity of $F$, we have
\[ |F(v)| = \Big| E\Big[\int_D f v\, dx\Big] \Big| \le E\Big[\int_D |f v|\, dx\Big] \le E\big[ \|f\|_{L^2(D)} \|v\|_{L^2(D)} \big] \le C_P\, E\big[ \|f\|_{L^2(D)} \|\nabla v\|_{L^2(D)} \big] \le \frac{C_P}{\sqrt{a_{\min}}}\, E\big[ \|f\|_{L^2(D)} \|\sqrt{a}\, \nabla v\|_{L^2(D)} \big] \le \frac{C_P}{\sqrt{a_{\min}}}\, \big( E[\|f\|_{L^2(D)}^2] \big)^{1/2} \|v\|_{V_{P,a}} = \frac{C_P}{\sqrt{a_{\min}}}\, \|f\|_{L^2_P(\Omega) \otimes L^2(D)}\, \|v\|_{V_{P,a}}, \]
where we have used Hölder's inequality again and Poincaré's inequality (2.1.4). Thus, by assumption (A3), $F$ is continuous with continuity constant $\frac{C_P}{\sqrt{a_{\min}}} \|f\|_{L^2_P(\Omega) \otimes L^2(D)}$. Having shown continuity of $B$ and $F$ and coercivity of $B$, the first part of the lemma is then proved by applying the Lax-Milgram Theorem and the fact that $V_{P,a} \subset V_P$. It remains to show that the estimate in the statement holds. We have
\[ \|u\|_{V_P} \le \frac{1}{\sqrt{a_{\min}}} \Big( \int_D E[a\, |\nabla u|^2]\, dx \Big)^{1/2} \le \frac{C_P}{a_{\min}} \Big( \int_D E[(\nabla \cdot (a \nabla u))^2]\, dx \Big)^{1/2} = \frac{C_P}{a_{\min}} \Big( \int_D E[f^2]\, dx \Big)^{1/2} = \frac{C_P}{a_{\min}}\, \|f\|_{L^2_P(\Omega) \otimes L^2(D)}, \]
where the first inequality comes from (A2) and the second one from (2.1.4).

2.2 Finite-Dimensional Stochastic Space

In order to deal with the infinite-dimensional problem (2.1.3), we aim to replace the coefficients by finite-dimensional counterparts which can be treated numerically. More precisely, we consider a new problem whose coefficients depend only on a finite number $N$ of random variables. We have shown in Chapter 1 that we can decompose a random field with the Karhunen-Loève expansion and that we can consider a finite-dimensional truncation of the latter. Hence we choose the new coefficient
\[ a_N(\omega, x) = E_a(x) + \sum_{m=1}^N \sqrt{\lambda_m}\, \varphi_m(x)\, Y_m(\omega), \]
which converges to $a(\omega, x)$ in the space $L^2_P(\Omega) \otimes L^2(D)$. A similar result holds for $f$. We emphasize the dependence on the random variables $Y_m$ by introducing the notation
\[ a_N(\omega, x) = a_N(Y_1(\omega), \ldots, Y_N(\omega), x), \qquad f_N(\omega, x) = f_N(Y_1(\omega), \ldots, Y_N(\omega), x), \]
where $(Y_m)_{m=1}^N$ are real valued with $EY_m = 0$ and $\mathrm{Cov}(Y_m, Y_n) = \delta_{mn}$ (see Corollary 4.1.4). The new problem is then to find $u_N \in V_P$ such that
\[ E\Big[ \int_D a_N(\omega, x)\, \nabla u_N(\omega, x) \cdot \nabla v(\omega, x)\, dx \Big] = E\Big[ \int_D f_N(\omega, x)\, v(\omega, x)\, dx \Big], \qquad \forall v \in V_P. \tag{2.2.1} \]
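The a priori bound of Lemma 2.1.1 can be checked for a single deterministic sample of the coefficient, which is also the kind of problem the collocation method of Chapter 3 will solve at each evaluation point. A one-dimensional piecewise-linear finite element sketch, with our own choice of coefficient and right-hand side and with $C_P = 1/\pi$, the sharp Poincaré constant on $(0,1)$:

```python
import numpy as np

# Solve -(a u')' = f on (0, 1), u(0) = u(1) = 0, with P1 finite elements, and
# check |u|_{H^1_0} <= (C_P / a_min) ||f||_{L^2}. Coefficient, mesh and data
# are our own illustrative choices, not the thesis's test case.
n = 100                                      # number of elements
h = 1.0 / n
x_mid = (np.arange(n) + 0.5) * h             # element midpoints
a = 1.0 + 0.5 * np.sin(2 * np.pi * x_mid)    # one coefficient sample, a_min = 0.5
f = np.ones(n - 1)                           # f = 1 at the interior nodes

# Stiffness matrix K[i, j] = int_0^1 a phi_i' phi_j' dx (interior nodes only).
K = np.zeros((n - 1, n - 1))
for e in range(n):                           # element e joins global nodes e, e+1
    for i in (e - 1, e):                     # interior index = global node - 1
        for j in (e - 1, e):
            if 0 <= i < n - 1 and 0 <= j < n - 1:
                K[i, j] += (1.0 if i == j else -1.0) * a[e] / h

u = np.linalg.solve(K, h * f)                # load: int f phi_i = h for f = 1

energy = u @ K @ u                           # B(u, u); equals F(u) at the solution
slopes = np.diff(np.r_[0.0, u, 0.0]) / h     # u' per element (zero boundary values)
h1_seminorm = np.sqrt(np.sum(slopes**2) * h)

C_P = 1.0 / np.pi                            # sharp Poincare constant on (0, 1)
assert h1_seminorm <= C_P / 0.5 * 1.0        # Lemma 2.1.1 with ||f||_{L^2} = 1
```

For this sample the computed seminorm sits well below the guaranteed ceiling, consistent with the crude chain of inequalities used in the proof.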

Observe that by the Doob-Dynkin Lemma (see [3], Theorem 0.1) it holds true that the function $u_N$ also depends on the same random variables, i.e.
\[ u_N(\omega, x) = u_N(Y_1(\omega), \ldots, Y_N(\omega), x). \]
With this characterization the infinite-dimensional probability space $\Omega$ has been substituted by an $N$-dimensional one. We underline the fact that $a_N$ and $f_N$ are inexact representations of the coefficients appearing in (2.1.3), and therefore the solution $u_N$ will also be an approximation of the exact solution $u$; the truncation error $u - u_N$ has to be estimated (see Section 4.2.1). We will adopt the following notation. Let $\Gamma_m := Y_m(\Omega) \subset \mathbb{R}$, which can be either bounded or unbounded. Define
\[ \Gamma := \prod_{m=1}^N \Gamma_m. \]
Moreover, let us assume that the random variables $Y_1, \ldots, Y_N$ have a joint probability density function $\rho : \Gamma \to \mathbb{R}_+$. We now consider the tensor product space $V_\rho = L^2_\rho(\Gamma) \otimes H^1_0(D)$. Introducing the new notation, we can reformulate problem (2.2.1) as follows: find $u_N \in V_\rho$ such that
\[ \int_\Gamma \int_D a_N(y, x)\, \nabla u_N(y, x) \cdot \nabla v_N(y, x)\, \rho(y)\, dx\, dy = \int_\Gamma \int_D f_N(y, x)\, v_N(y, x)\, \rho(y)\, dx\, dy, \qquad \forall v_N \in V_\rho. \tag{2.2.2} \]
We can equivalently consider $u_N$, $a_N$ and $f_N$ as functions of the parameter $y \in \Gamma$, with $u_N : \Gamma \to H^1_0(D)$. The analogue of problem (2.2.2) is then: find $u_N \in V_\rho$ such that
\[ \int_D a_N(y)\, \nabla u_N(y) \cdot \nabla \psi(x)\, dx = \int_D f_N(y)\, \psi(x)\, dx, \qquad \forall \psi \in H^1_0(D), \quad \rho\text{-a.e. in } \Gamma. \tag{2.2.3} \]

Remark 1. The use of the finite-dimensional truncation of the Karhunen-Loève expansion has turned the stochastic problem (2.1.3) into the deterministic parametric elliptic problem (2.2.3) with an $N$-dimensional parameter.

3 The Stochastic Collocation Method

The stochastic collocation method aims to approximate numerically the solution $u_N$ of problem (2.2.3). We follow [11] in the description of the method. It is based on standard finite element approximations in the space $H^1_0(D)$, where $D$ is a bounded Lipschitz domain, and a collocation, in the space $L^2_\rho(\Gamma)$, on a tensor grid built upon the zeros of polynomials orthogonal with respect to the joint probability density function of the random variables $Y_1,\dots,Y_N$.

3.1 Full Tensor Grid of Collocation Points

We seek an approximation in the finite-dimensional tensor space $V_{p,h} := \mathcal{P}_p(\Gamma)\otimes S^q_h(D)$ where:

$S^q_h(D)\subset H^1_0(D)$ is a finite element space of dimension $N_h$ containing piecewise polynomials of degree $q$ on a uniform triangulation $\mathcal{T}_h$ with mesh size $h$.

$\mathcal{P}_p(\Gamma)\subset L^2_\rho(\Gamma)$ is the span of tensor product polynomials with degree $p=(p_1,\dots,p_N)$, i.e. $\mathcal{P}_p(\Gamma):=\bigotimes_{m=1}^N\mathcal{P}_{p_m}(\Gamma_m)$, where for each $m\in\{1,\dots,N\}$, $\mathcal{P}_{p_m}(\Gamma_m):=\mathrm{span}\{y_m^i : i=0,\dots,p_m\}$. Hence

$$N_p := \dim(\mathcal{P}_p(\Gamma)) = \prod_{m=1}^{N}(p_m+1).$$

The first step of the approximation process consists in choosing a set of collocation points and applying Lagrange interpolation to $u_N$ at those points. One way to do so is by using a full tensor grid interpolation operator. The evaluation points are chosen to be the roots of polynomials which are orthogonal with respect to a certain auxiliary density function. Standard probability density functions, such as Gaussian or uniform, lead to well-known Gauss quadrature nodes and weights. Recall that in general the random variables $Y_m$ are uncorrelated but not independent, and therefore the joint probability density function $\rho$ does not factorize. In this case we introduce $\hat\rho:\Gamma\to\mathbb{R}_+$ such that

$$\hat\rho(y) = \prod_{m=1}^{N}\hat\rho_m(y_m),\qquad \Big\|\frac{\rho}{\hat\rho}\Big\|_{L^\infty(\Gamma)} < \infty. \tag{3.1.1}$$

See Section 4.1 for conditions on the existence of $\hat\rho$. For each $m\in\{1,\dots,N\}$, we denote by $\{y_{m,k_m}\}_{k_m=1}^{p_m+1}\subset\Gamma_m$ the roots of the non-trivial

polynomial $q_{p_m+1}\in\mathcal{P}_{p_m+1}(\Gamma_m)$ such that for any $v\in\mathcal{P}_{p_m}(\Gamma_m)$

$$\int_{\Gamma_m} q_{p_m+1}(y_m)\,v(y_m)\,\hat\rho_m(y_m)\,dy_m = 0.$$

We consider the tensorized grid of collocation points $\mathcal{Y} := \{y_k = (y_{m,k_m})_{m=1}^N : 1\le k_m\le p_m+1\}$. We order this set by introducing the following index associated to the vector $k=(k_1,\dots,k_N)$:

$$\bar k := k_1 + \sum_{i=1}^{N-1}(k_{i+1}-1)\prod_{j=1}^{i}(p_j+1).$$

More precisely, we introduce the bijection $\Psi:\{k=(k_1,\dots,k_N) : k_m\in\{1,\dots,p_m+1\}\}\to\{1,2,\dots,N_p\}$, $\Psi(k)=\bar k$. Then we denote by $y_k$ the collocation point $y_k=(y_{1,\Psi^{-1}(k)_1},\dots,y_{N,\Psi^{-1}(k)_N})$.

We define the full tensor Lagrange interpolant $I_{\mathrm{full}}: C^0(\Gamma;H^1_0(D))\to\mathcal{P}_p(\Gamma)\otimes H^1_0(D)$,

$$I_{\mathrm{full}}v(y,x) = \sum_{y_k\in\mathcal{Y}} v(y_k,x)\,l_k(y)$$

where $l_k(y)=\prod_{m=1}^{N} l_{m,k_m}(y_m)$ and $l_{m,k_m}\in\mathcal{P}_{p_m}(\Gamma_m)$,

$$l_{m,k_m}(y_m) = \prod_{\substack{i=1\\ i\ne k_m}}^{p_m+1}\frac{y_m-y_{m,i}}{y_{m,k_m}-y_{m,i}}.$$

Note that $l_{m,j}(y_{m,i})=\delta_{ji}$ for $j,i=1,\dots,p_m+1$ and consequently $l_k(y_j)=\delta_{kj}$. Equivalently, by using the global index notation we get

$$I_{\mathrm{full}}v(y,x) = \sum_{k=1}^{N_p} v(y_k,x)\,l_k(y) \tag{3.1.2}$$

where $l_k(y) = \prod_{m=1}^{N} l_{m,\Psi^{-1}(k)_m}(y_m)$. By defining the one-dimensional interpolants $I_{p_m}: C^0(\Gamma_m;H^1_0(D))\to\mathcal{P}_{p_m}(\Gamma_m)\otimes H^1_0(D)$,

$$I_{p_m}v(y,x) = \sum_{k_m=1}^{p_m+1} v(y_{m,k_m},x)\,l_{m,k_m}(y_m) \tag{3.1.3}$$

we have the equivalent formulation $I_{\mathrm{full}} = \bigotimes_{m=1}^{N} I_{p_m}$. The semi-discrete approximation in the stochastic domain is obtained by applying the operator $I_{\mathrm{full}}$ to $u_N$, the solution of problem (2.2.3):

$$u_{N,p} = I_{\mathrm{full}}u_N \in \mathcal{P}_p(\Gamma)\otimes H^1_0(D).$$
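The tensorized grid and the global index $\bar k$ above can be sketched in a few lines. Here the one-dimensional nodes are taken as Gauss–Hermite points (an illustrative choice corresponding to Gaussian $\hat\rho_m$; other densities give other orthogonal families):

```python
import itertools
import numpy as np

p = (2, 3)   # polynomial degrees p_1, p_2; grid has (p_1+1)(p_2+1) points

# roots of the degree-(p_m + 1) orthogonal polynomial for each direction
nodes_1d = [np.polynomial.hermite_e.hermegauss(pm + 1)[0] for pm in p]

def global_index(k, p):
    """Map a 1-based multi-index k to the global index
    k_1 + sum_i (k_{i+1} - 1) prod_{j<=i} (p_j + 1) of (3.1.2)."""
    idx, stride = k[0], 1
    for i in range(1, len(k)):
        stride *= p[i - 1] + 1
        idx += (k[i] - 1) * stride
    return idx

# the full tensor grid Y, keyed by its global index
grid = {global_index(k, p): tuple(nodes_1d[m][k[m] - 1] for m in range(len(p)))
        for k in itertools.product(*(range(1, pm + 2) for pm in p))}
```

Since the map $\Psi$ is a bijection, the dictionary keys run exactly over $1,\dots,N_p$.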

The second step of the approximation procedure consists in projecting the semi-discrete approximation $u_{N,p}$ onto the finite element space $S^q_h(D)$. We may do this using the Galerkin method, and we get the final approximation $u_{N,p,h}\in\mathcal{P}_p(\Gamma)\otimes S^q_h(D)$ satisfying

$$\int_D a_N(y)\,\nabla u_{N,p,h}(y)\cdot\nabla\psi_h(x)\,dx = \int_D f_N(y)\,\psi_h(x)\,dx,\qquad \forall\,\psi_h\in S^q_h(D),\ \forall\,y\in\mathcal{Y}.$$

The main goal of the implementations in Chapter 5 will be to approximate the statistics of the solution $u$ of problem (2.1.3). Therefore we explain how to recover the expected value of the final approximation $u_{N,p,h}$. We define the Gauss quadrature weights

$$\omega_{m,k_m} := \int_{\Gamma_m} l^2_{m,k_m}(y_m)\,\hat\rho_m(y_m)\,dy_m,\qquad \omega_k = \prod_{m=1}^{N}\omega_{m,\Psi^{-1}(k)_m}. \tag{3.1.4}$$

Given these weights, we introduce the following Gauss quadrature formula (see [11], (2.3)):

$$\mathbb{E}_\rho[u_{N,p,h}](x) = \mathbb{E}_\rho\big[I_{\mathrm{full}}u_{N,h}(y,x)\big] = \mathbb{E}_\rho\Big[\sum_{k=1}^{N_p} u_{N,h}(y_k,x)\,l_k(y)\Big] = \sum_{k=1}^{N_p} u_{N,h}(y_k,x)\,\omega_k \tag{3.1.5}$$

where $u_{N,h}(y_k,x)\in S^q_h(D)$ denotes the finite element solution of the problem with coefficients evaluated at the point $y_k$.

Remark 2. The stochastic collocation method is equivalent to solving $N_p$ deterministic problems. Note that each of these problems is naturally decoupled. On the other hand, $N_p$ grows exponentially in the number $N$ of random variables, giving rise to a huge computational effort, the so-called curse of dimensionality.

3.2 Sparse Tensor Grid of Collocation Points

We propose an alternative to the approach described in the previous section, based on a different choice of the collocation points. We construct the grid by the Smolyak algorithm as described in [11] and [1]. The main goal is to keep the number of points moderate. Let $i\in\mathbb{N}_+$ be a positive integer denoting the level of approximation and $t:\mathbb{N}_+\to\mathbb{N}_+$ an increasing function denoting the number of collocation points used to build the approximation at level $i$, with $t(1)=1$. In this context, for $m=1,\dots,N$ the one-dimensional interpolation operator $I^{t(i)}_m: C^0(\Gamma_m;H^1_0(D))\to\mathcal{P}_{t(i)-1}(\Gamma_m)\otimes H^1_0(D)$ takes the form

$$I^{t(i)}_m v(y,x) = \sum_{k_m=1}^{t(i)} v(y_{m,k_m},x)\,l_{m,k_m}(y_m). \tag{3.2.1}$$

Here, similarly as before, $\{y_{m,k_m}\}_{k_m=1}^{t(i)}$ are the roots of the polynomial $q_{m,t(i)}\in\mathcal{P}_{t(i)}(\Gamma_m)$ orthogonal to $\mathcal{P}_{t(i)-1}(\Gamma_m)$ with respect to $\hat\rho_m$. We introduce the difference operators

$$\Delta^{t(i)}_m = \begin{cases} I^{t(i)}_m - I^{t(i-1)}_m, & i\ge 2,\\ I^{t(i)}_m, & i=1.\end{cases}$$

Given a multi-index $i=(i_1,\dots,i_N)\in\mathbb{N}^N_+$, we consider a function $g:\mathbb{N}^N_+\to\mathbb{N}$ strictly increasing in each argument. Let $w\in\mathbb{N}$. Then the isotropic sparse grid approximation $u_{N,p,h}$ is obtained by projecting onto the finite element space $S^q_h(D)$ the semi-discrete approximation $u_{N,p} = S^{t,g}_w u_N$ where

$$S^{t,g}_w = \sum_{\substack{i\in\mathbb{N}^N_+\\ g(i)\le w}}\ \bigotimes_{m=1}^{N}\Delta^{t(i_m)}_m. \tag{3.2.2}$$

For $|j| = \sum_{m=1}^{N} j_m$, the following equivalent formulation can be proved by induction:

$$S^{t,g}_w = \sum_{\substack{i\in\mathbb{N}^N_+\\ g(i)\le w}}\ \sum_{\substack{j\in\{0,1\}^N\\ g(i+j)>w}} (-1)^{|j|}\ \bigotimes_{m=1}^{N} I^{t(i_m)}_m. \tag{3.2.3}$$

Therefore the sparse grid approximation can be seen as a linear combination of full tensor product interpolations, and the sparse grid $\mathcal{Y}_{\mathrm{sparse}}\subset\Gamma$ is obtained as the superposition of all full tensor grids used in (3.2.3) which correspond to non-zero coefficients. Namely

$$\mathcal{Y}_{\mathrm{sparse}} = \bigcup_{\substack{i\in\mathbb{N}^N_+,\ g(i)\le w\\ \exists\, j\in\{0,1\}^N:\ g(i+j)>w}} \big(\mathcal{Y}^{t(i_1)}\times\dots\times\mathcal{Y}^{t(i_N)}\big) \tag{3.2.4}$$

where $\mathcal{Y}^{t(i)}\subset\Gamma_m$ denotes the grid of points used by $I^{t(i)}_m$. Recall that the full tensor product grid is obtained as $\mathcal{Y} = \mathcal{Y}^{p_1}\times\dots\times\mathcal{Y}^{p_N}$, with $\mathcal{Y}^{p_m}\subset\Gamma_m$ the grid of points used by $I_{p_m}$. So the choice of $w$ and $g$ in the sparse construction is driven by the idea of reducing the number of non-zero terms in (3.2.4). Figure 3.2.1 shows the significant difference between a full grid and a sparse grid. The effectiveness of the sparse collocation method strongly relies on a proper selection of the functions $t$ and $g$. A typical choice, which leads to the isotropic Smolyak algorithm, is

$$t(i) = \begin{cases}1, & i=1,\\ 2^{\,i-1}+1, & i\ge 2,\end{cases}\qquad g(i) = \sum_{m=1}^{N}(i_m-1). \tag{3.2.5}$$
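The gain from the Smolyak construction can be made concrete by counting points. The sketch below uses the doubling rule (3.2.5) and assumes nested one-dimensional node families (so that the level-$i$ rule reuses the $t(i-1)$ nodes of the previous level), which is an assumption about the node family, not something the formulas above require:

```python
import itertools
from math import prod

def t(i):
    """Number of nodes at level i for the Smolyak choice (3.2.5): 1, 3, 5, 9, ..."""
    return 1 if i == 1 else 2 ** (i - 1) + 1

def new_nodes(i):
    """Nodes added at level i when the 1-d families are nested."""
    return t(i) if i == 1 else t(i) - t(i - 1)

def sparse_size(N, w):
    """Points in the sparse grid for g(i) = sum_m (i_m - 1) <= w."""
    return sum(prod(new_nodes(im) for im in i)
               for i in itertools.product(range(1, w + 2), repeat=N)
               if sum(im - 1 for im in i) <= w)

full = t(4) ** 2            # full tensor grid with the same maximum level
sparse = sparse_size(2, 3)  # isotropic Smolyak grid at level w = 3
```

Already for $N=2$ the sparse grid is far smaller than the full one, and the gap widens rapidly with $N$.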

Figure 3.2.1: A full tensor product grid and a sparse tensor product grid for $\Gamma = [-1,1]^2$, both with maximum polynomial degree in each direction equal to 33.

On the other hand, it is interesting to notice that the full tensor product method is recovered when we consider

$$t(i) = i,\qquad g(i) = \max_{1\le m\le N}(i_m-1).$$

The advantage of the sparse tensor product grid over the full tensor product grid can already be seen for a 2-dimensional stochastic domain (see Figure 3.2.2). Since in general the function $t$ is not surjective, we define a left-inverse by

$$t^{-1}(k) := \min\{i\in\mathbb{N}_+ : t(i)\ge k\}.$$

Note that $t^{-1}(t(i))=i$ and $t(t^{-1}(k))\ge k$. Let $t(i)=(t(i_1),\dots,t(i_N))$ and consider the following set:

$$\Theta = \{p\in\mathbb{N}^N : g(t^{-1}(p+1))\le w\}.$$

It is not hard to see that the Smolyak functions (3.2.5) give rise to

$$\Theta = \Big\{p\in\mathbb{N}^N : \sum_{m=1}^{N} f(p_m)\le w\Big\},\qquad f(p) = \begin{cases}0, & p=0,\\ 1, & p=1,\\ \lceil\log_2(p)\rceil, & p\ge 2.\end{cases} \tag{3.2.6}$$

Define the polynomial space

$$\mathcal{P}_\Theta(\Gamma) := \mathrm{span}\Big\{\prod_{m=1}^{N} y_m^{p_m} : p=(p_1,\dots,p_N)\in\Theta\Big\}.$$

Then $S^{t,g}_w u_N\in\mathcal{P}_\Theta(\Gamma)\otimes H^1_0(D)$.
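The index set $\Theta$ of (3.2.6) is easy to enumerate directly, which is a useful sanity check on which monomials the sparse space $\mathcal{P}_\Theta(\Gamma)$ actually contains (the truncation bound `pmax` is an implementation convenience, not part of the definition):

```python
import itertools
from math import ceil, log2

def f(p):
    """The level-counting function of (3.2.6)."""
    if p == 0:
        return 0
    if p == 1:
        return 1
    return ceil(log2(p))

def theta(N, w, pmax=40):
    """Multi-degrees p with sum_m f(p_m) <= w (truncated at pmax)."""
    return [p for p in itertools.product(range(pmax + 1), repeat=N)
            if sum(f(pm) for pm in p) <= w]

Theta = theta(2, 3)
```

For example, with $w=3$ the pure degree $y_1^8$ is retained ($f(8)=3$) while $y_1^9$ is not, and mixed degrees are penalised: $(2,4)\in\Theta$ but $(3,3)\notin\Theta$.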

Figure 3.2.2: A comparison between the number of collocation points in a full grid and a sparse grid for $N=2$. We plot the log of the number of points versus the maximum number of points $t_{\max}$ employed in each direction.

Similarly to the previous section, we describe how to compute the first moment of the final approximation $u_{N,p,h}$. For the Smolyak algorithm it can be shown (see [1], formula (3.9)) that

$$\mathbb{E}_\rho[u_{N,p,h}](x) = \sum_{\substack{i\in\mathbb{N}^N_+\\ w+1\le|i|\le w+N}} (-1)^{w+N-|i|}\binom{N-1}{w+N-|i|}\,\mathbb{E}_\rho\Big[\bigotimes_{m=1}^{N} I^{t(i_m)}_m u_{N,h}(y,x)\Big]$$

and from (3.1.5)

$$\mathbb{E}_\rho\Big[\bigotimes_{m=1}^{N} I^{t(i_m)}_m u_{N,h}(y,x)\Big] = \sum_{k=1}^{\prod_{m=1}^{N} t(i_m)} u_{N,h}(y_k,x)\,\omega_k \tag{3.2.7}$$

where the $\omega_k$ are the same as in (3.1.4) and again $u_{N,h}(y_k,x)$ is the finite element solution of the problem with coefficients evaluated at the point $y_k$.

3.2.1 Anisotropic Version

It is possible to construct an even more refined algorithm acting differently on each direction $y_m$. This may be useful in situations where the convergence rate is poor in some

directions with respect to others. This anisotropy can be described by introducing weights $\alpha=(\alpha_1,\dots,\alpha_N)$ in the function $g$. One possible choice is, for instance,

$$g(i;\alpha) := \sum_{m=1}^{N}\frac{\alpha_m}{\alpha_{\min}}(i_m-1),\qquad \alpha_{\min} := \min_{1\le m\le N}\alpha_m.$$

Consequently, the corresponding anisotropic sparse interpolation operator takes the form

$$S^{t,g}_{w,\alpha} = \sum_{\substack{i\in\mathbb{N}^N_+\\ g(i;\alpha)\le w}}\ \bigotimes_{m=1}^{N}\Delta^{t(i_m)}_m.$$

Observe that the isotropic Smolyak method described in the previous section is the special case of the anisotropic formula in which all the components of the weight vector are equal, i.e. $\alpha_1=\dots=\alpha_N$. We will see in the next chapter (cf. Section 4.4) how the selection of the weights is related to the analytic dependence of the solution $u_N$ on each of the random variables $Y_m$. The key idea is to place more points in those directions where the convergence rate in the random domain is slower.
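The effect of the weights $\alpha_m$ is visible directly in the index set $\{i : g(i;\alpha)\le w\}$: directions with large $\alpha_m$ (fast convergence) are refined less. A minimal enumeration sketch, with illustrative weight values:

```python
import itertools

def anisotropic_indices(alpha, w):
    """All i in N_+^N with g(i; alpha) = sum_m (alpha_m/alpha_min)(i_m - 1) <= w."""
    a_min = min(alpha)
    N = len(alpha)
    return [i for i in itertools.product(range(1, w + 2), repeat=N)
            if sum(am / a_min * (im - 1) for am, im in zip(alpha, i)) <= w]

iso = anisotropic_indices((1.0, 1.0), 3)    # isotropic special case
aniso = anisotropic_indices((1.0, 3.0), 3)  # direction 2 converges faster
```

With $\alpha=(1,3)$ the second direction never exceeds level 2, while the first still reaches level $w+1$; the anisotropic set is a strict subset of the isotropic one.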


4 Convergence of the Method

Before studying the convergence of the stochastic collocation method we need to impose some more requirements on the data of the problem, which in turn, as we will see, imply regularity of the stochastic behaviour of the solution $u_N$ of the problem

$$\int_D a_N(y)\,\nabla u_N(y)\cdot\nabla\psi(x)\,dx = \int_D f_N(y)\,\psi(x)\,dx,\qquad \forall\,\psi\in H^1_0(D),\ \rho\text{-a.e. in }\Gamma. \tag{4.0.1}$$

The results presented in this chapter can be partially found in [11]. In the following section, for convenience, we drop the subscript $N$ which indicates the finite-dimensional noise dependence.

4.1 Regularity Results

We introduce the following weight $\sigma:\Gamma\to\mathbb{R}_+$, which allows us to control the exponential growth at infinity of some functions:

$$\sigma(y) = \prod_{m=1}^{N}\sigma_m(y_m),\qquad \sigma_m(y_m) = \begin{cases}1, & \Gamma_m\text{ is bounded},\\ e^{-\alpha_m|y_m|}, & \Gamma_m\text{ is unbounded},\end{cases}$$

for some $\alpha_m>0$. We consider the corresponding functional space

$$C^0_\sigma(\Gamma;V) = \Big\{v:\Gamma\to V,\ v\text{ continuous in }y,\ \max_{y\in\Gamma}\sigma(y)\|v(y)\|_V<\infty\Big\}$$

where $V$ is a Hilbert space defined on $D$. As we announced at the beginning of the chapter, we make the following assumptions on the data of problem (2.2.3):

(A4) $f\in C^0_\sigma(\Gamma;L^2(D))$.

(A5) The joint probability density function $\rho$ is such that

$$\rho(y)\le C_\rho\,e^{-\sum_{m=1}^{N}(\delta_m y_m)^2},\qquad \forall\,y\in\Gamma, \tag{4.1.1}$$

for some $C_\rho>0$, with $\delta_m=0$ in the case $\Gamma_m$ bounded and $\delta_m>0$ when $\Gamma_m$ is unbounded.

We remarked in Section 3.1 that we need to introduce an auxiliary probability density function satisfying (3.1.1). The following condition is sufficient:

$$C^{(m)}_{\min}\,e^{-(\delta_m y_m)^2}\le\hat\rho_m(y_m)\le C^{(m)}_{\max}\,e^{-(\delta_m y_m)^2},\qquad \forall\,y_m\in\Gamma_m,$$

for some constants $C^{(m)}_{\min}, C^{(m)}_{\max}>0$ which do not depend on $y_m$. Then (3.1.1) holds with

$$\Big\|\frac{\rho}{\hat\rho}\Big\|_{L^\infty(\Gamma)}\le\frac{C_\rho}{C_{\min}},\qquad C_{\min} := \prod_{m=1}^{N} C^{(m)}_{\min}.$$

Under the previous assumptions the following continuous embeddings hold true:

$$C^0_\sigma(\Gamma;V)\hookrightarrow L^2_{\hat\rho}(\Gamma;V)\hookrightarrow L^2_\rho(\Gamma;V).$$

Indeed, we have

$$\|v\|_{L^2_\rho(\Gamma;V)}\le\Big\|\frac{\rho}{\hat\rho}\Big\|^{1/2}_{L^\infty(\Gamma)}\|v\|_{L^2_{\hat\rho}(\Gamma;V)}\le\Big(\frac{C_\rho}{C_{\min}}\Big)^{1/2}\|v\|_{L^2_{\hat\rho}(\Gamma;V)} \tag{4.1.2}$$

proving continuity of the second embedding. For the first one we have

$$\|v\|^2_{L^2_{\hat\rho}(\Gamma;V)} = \int_\Gamma\big(\sigma(y)\|v(y)\|_V\big)^2\,\frac{\hat\rho(y)}{\sigma^2(y)}\,dy\le\|v\|^2_{C^0_\sigma(\Gamma;V)}\int_\Gamma\frac{\hat\rho(y)}{\sigma^2(y)}\,dy = \|v\|^2_{C^0_\sigma(\Gamma;V)}\prod_{m=1}^{N}\int_{\Gamma_m}\frac{\hat\rho_m(y_m)}{\sigma^2_m(y_m)}\,dy_m.$$

Denote $M_m := \int_{\Gamma_m}\hat\rho_m(y_m)\,\sigma^{-2}_m(y_m)\,dy_m$. We have two possible cases. If $\Gamma_m$ is bounded, then $\sigma_m(y_m)=1$ and $\delta_m=0$, and therefore

$$M_m = \int_{\Gamma_m}\hat\rho_m(y_m)\,dy_m\le\int_{\Gamma_m}C^{(m)}_{\max}\,dy_m = C^{(m)}_{\max}\,|\Gamma_m|$$

where $|\Gamma_m|$ denotes the volume of $\Gamma_m$. On the other hand, if $\Gamma_m$ is unbounded, then $\sigma_m(y_m)=e^{-\alpha_m|y_m|}$ and $\delta_m>0$. Thus

$$M_m = \int_{\Gamma_m}\hat\rho_m(y_m)\,e^{2\alpha_m|y_m|}\,dy_m\le C^{(m)}_{\max}\int_{\mathbb{R}}e^{-(\delta_m y_m)^2+2\alpha_m|y_m|}\,dy_m = 2\,C^{(m)}_{\max}\,e^{(\alpha_m/\delta_m)^2}\int_0^\infty e^{-(\delta_m y_m-\alpha_m/\delta_m)^2}\,dy_m\le 2\,C^{(m)}_{\max}\,\frac{\sqrt{\pi}}{\delta_m}\,e^{(\alpha_m/\delta_m)^2}.$$

Hence $C^0_\sigma(\Gamma;V)$ is continuously embedded in $L^2_{\hat\rho}(\Gamma;V)$ for $\Gamma_m$ either bounded or unbounded.

Lemma 4.1.1. If $f\in C^0_\sigma(\Gamma;L^2(D))$ and $a$ is uniformly bounded from below by $a_{\min}>0$, then the solution of (4.0.1) satisfies $u\in C^0_\sigma(\Gamma;H^1_0(D))$.

Proof. By the definition of the functional space $C^0_\sigma(\Gamma;H^1_0(D))$ we have

$$\|u\|_{C^0_\sigma(\Gamma;H^1_0(D))} = \sup_{y\in\Gamma}\sigma(y)\|u(y)\|_{H^1_0(D)} = \sup_{y\in\Gamma}\sigma(y)\|\nabla u(y)\|_{L^2(D)}\le\frac{1}{a_{\min}}\sup_{y\in\Gamma}\sigma(y)\|a(y)\nabla u(y)\|_{L^2(D)}\le\frac{C_P}{a_{\min}}\sup_{y\in\Gamma}\sigma(y)\|f(y)\|_{L^2(D)} = \frac{C_P}{a_{\min}}\|f\|_{C^0_\sigma(\Gamma;L^2(D))}$$

where in the second inequality we have used Poincaré's inequality.

The last result we prove before investigating the convergence of the stochastic collocation method concerns the regularity of the solution $u$ in the random domain. For later convenience we introduce the following notation:

$$\Gamma^*_m := \prod_{\substack{j=1\\ j\ne m}}^{N}\Gamma_j,\qquad y^*_m := (y_1,\dots,y_{m-1},y_{m+1},\dots,y_N)\in\Gamma^*_m.$$

Similarly we write

$$\hat\rho^*_m := \prod_{\substack{j=1\\ j\ne m}}^{N}\hat\rho_j,\qquad \sigma^*_m := \prod_{\substack{j=1\\ j\ne m}}^{N}\sigma_j.$$

With slight abuse of notation we write $v(y,x)=v(y_m,y^*_m,x)$ for any $m=1,\dots,N$.

Lemma 4.1.2. Assume that for every $y\in\Gamma$ and any $m\in\{1,\dots,N\}$ there exists $\gamma_m<\infty$ such that for all $k\in\mathbb{N}$

$$\Big\|\frac{\partial^k_{y_m}a(y)}{a(y)}\Big\|_{L^\infty(D)}\le\gamma_m^k\,k!,\qquad \frac{\|\partial^k_{y_m}f(y)\|_{L^2(D)}}{1+\|f(y)\|_{L^2(D)}}\le\gamma_m^k\,k!.$$

Then the solution $u(y_m,y^*_m,x)$ of problem (4.0.1), as a function $u:\Gamma_m\to C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))$, admits an analytic extension $\tilde u(z,y^*_m,x)$ in the region $\Sigma(\Gamma_m;\tau_m) := \{z\in\mathbb{C} : \mathrm{dist}(z,\Gamma_m)\le\tau_m\}\subset\mathbb{C}$ with $0<\tau_m<\frac{1}{2\gamma_m}$. Moreover, for all $z\in\Sigma(\Gamma_m;\tau_m)$,

$$\sigma_m(\mathrm{Re}\,z)\,\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\le\frac{C_P\,e^{\alpha_m\tau_m}}{a_{\min}(1-2\tau_m\gamma_m)}\big(1+\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\big)$$

where $C_P$ is the constant appearing in Poincaré's inequality (2.1.4).

Proof. Consider the weak problem (4.0.1) which, after suppressing subscripts, takes the form

$$\int_D a(y)\,\nabla u(y)\cdot\nabla\psi(x)\,dx = \int_D f(y)\,\psi(x)\,dx,\qquad \forall\,\psi\in H^1_0(D),\ \rho\text{-a.e. in }\Gamma.$$

Differentiating $k$ times with respect to $y_m$ and using Leibniz's product rule we end up with

$$\sum_{l=0}^{k}\binom{k}{l}\int_D\big(\partial^l_{y_m}a(y)\big)\,\nabla\partial^{k-l}_{y_m}u(y)\cdot\nabla\psi(x)\,dx = \int_D\big(\partial^k_{y_m}f(y)\big)\,\psi(x)\,dx$$

and, reordering terms, we obtain

$$\int_D a(y)\,\nabla\partial^k_{y_m}u(y)\cdot\nabla\psi(x)\,dx = -\sum_{l=1}^{k}\binom{k}{l}\int_D\big(\partial^l_{y_m}a(y)\big)\,\nabla\partial^{k-l}_{y_m}u(y)\cdot\nabla\psi(x)\,dx + \int_D\big(\partial^k_{y_m}f(y)\big)\,\psi(x)\,dx.$$

Setting $\psi=\partial^k_{y_m}u$ and dropping the variable dependence, we get

$$\int_D a\,|\nabla\partial^k_{y_m}u|^2 = -\sum_{l=1}^{k}\binom{k}{l}\int_D\big(\partial^l_{y_m}a\big)\,\nabla\partial^{k-l}_{y_m}u\cdot\nabla\partial^k_{y_m}u + \int_D\big(\partial^k_{y_m}f\big)\,\partial^k_{y_m}u.$$

Thus

$$\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|^2_{L^2(D)}\le\sum_{l=1}^{k}\binom{k}{l}\int_D\Big|\frac{\partial^l_{y_m}a}{a}\Big|\,\big|\sqrt{a}\,\nabla\partial^{k-l}_{y_m}u\big|\,\big|\sqrt{a}\,\nabla\partial^k_{y_m}u\big| + \Big|\int_D\big(\partial^k_{y_m}f\big)\,\partial^k_{y_m}u\Big|.$$

Applying Hölder's inequality to the terms in absolute value, we have

$$\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|^2_{L^2(D)}\le\sum_{l=1}^{k}\binom{k}{l}\Big\|\frac{\partial^l_{y_m}a}{a}\Big\|_{L^\infty(D)}\|\sqrt{a}\,\nabla\partial^{k-l}_{y_m}u\|_{L^2(D)}\,\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|_{L^2(D)} + \|\partial^k_{y_m}f\|_{L^2(D)}\,\|\partial^k_{y_m}u\|_{L^2(D)}.$$

Dividing both sides by $\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|_{L^2(D)}$ and recalling assumption (A2), it holds that

$$\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|_{L^2(D)}\le\sum_{l=1}^{k}\binom{k}{l}\Big\|\frac{\partial^l_{y_m}a}{a}\Big\|_{L^\infty(D)}\|\sqrt{a}\,\nabla\partial^{k-l}_{y_m}u\|_{L^2(D)} + \frac{\|\partial^k_{y_m}f\|_{L^2(D)}\,\|\partial^k_{y_m}u\|_{L^2(D)}}{\sqrt{a_{\min}}\,\|\nabla\partial^k_{y_m}u\|_{L^2(D)}}.$$

An application of Poincaré's inequality gives

$$\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|_{L^2(D)}\le\sum_{l=1}^{k}\binom{k}{l}\Big\|\frac{\partial^l_{y_m}a}{a}\Big\|_{L^\infty(D)}\|\sqrt{a}\,\nabla\partial^{k-l}_{y_m}u\|_{L^2(D)} + \frac{C_P}{\sqrt{a_{\min}}}\,\|\partial^k_{y_m}f\|_{L^2(D)}. \tag{4.1.3}$$

Define $R_k := \dfrac{\|\sqrt{a}\,\nabla\partial^k_{y_m}u\|_{L^2(D)}}{k!}$. Using the bounds in the assumption of the lemma it holds true that

$$R_k\le\sum_{l=1}^{k}\gamma_m^l\,R_{k-l} + \frac{C_P\,\gamma_m^k}{\sqrt{a_{\min}}}\big(1+\|f\|_{L^2(D)}\big). \tag{4.1.4}$$

By induction we are going to prove that the following relation holds:

$$R_k\le\frac12\,(2\gamma_m)^k\Big(R_0+\frac{C_P}{\sqrt{a_{\min}}}\big(1+\|f\|_{L^2(D)}\big)\Big). \tag{4.1.5}$$

For convenience set the constant $C := \frac{C_P}{\sqrt{a_{\min}}}\big(1+\|f\|_{L^2(D)}\big)$. It is easily verified that for

$k=1$, (4.1.5) follows from (4.1.4). Now suppose (4.1.5) holds true up to $k>1$. Then

$$R_{k+1}\le\sum_{l=1}^{k+1}\gamma_m^l\,R_{k+1-l} + \gamma_m^{k+1}C = \sum_{l=1}^{k}\gamma_m^l\,R_{k+1-l} + \gamma_m^{k+1}R_0 + \gamma_m^{k+1}C\le\sum_{l=1}^{k}\gamma_m^l\,\frac12\,(2\gamma_m)^{k+1-l}(R_0+C) + \gamma_m^{k+1}(R_0+C) = \frac12\,(2\gamma_m)^{k+1}(R_0+C)\Big(1-\frac{1}{2^k}\Big) + \gamma_m^{k+1}(R_0+C) = \frac12\,(2\gamma_m)^{k+1}(R_0+C).$$

Observe that (4.1.3) gives, for $k=0$,

$$R_0 = \|\sqrt{a}\,\nabla u\|_{L^2(D)}\le\frac{C_P}{\sqrt{a_{\min}}}\|f\|_{L^2(D)}.$$

Moreover

$$\frac{\|\nabla\partial^k_{y_m}u\|_{L^2(D)}}{k!}\le\frac{R_k}{\sqrt{a_{\min}}}.$$

Therefore we finally achieve

$$\frac{\|\nabla\partial^k_{y_m}u\|_{L^2(D)}}{k!}\le\frac{R_k}{\sqrt{a_{\min}}}\le\frac{1}{2\sqrt{a_{\min}}}\,(2\gamma_m)^k\Big(\frac{C_P}{\sqrt{a_{\min}}}\|f\|_{L^2(D)}+\frac{C_P}{\sqrt{a_{\min}}}\big(1+\|f\|_{L^2(D)}\big)\Big)\le\frac{C_P}{a_{\min}}\,(2\gamma_m)^k\big(1+\|f\|_{L^2(D)}\big). \tag{4.1.6}$$

Now fix $y_m\in\Gamma_m$ and define the power series $\tilde u:\mathbb{C}\to C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))$,

$$\tilde u(z,y^*_m,x) := \sum_{k=0}^{\infty}\frac{(z-y_m)^k}{k!}\,\partial^k_{y_m}u(y_m,y^*_m,x).$$

We aim to prove that the above series converges to the solution $u$ in a complex disc centred at $y_m$. For $z\in\mathbb{C}$ we have

$$\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\le\sum_{k=0}^{\infty}\frac{|z-y_m|^k}{k!}\,\|\partial^k_{y_m}u(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} = \sum_{k=0}^{\infty}\frac{|z-y_m|^k}{k!}\,\|\nabla\partial^k_{y_m}u(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}.$$

From (4.1.6) we get, for all $k\in\mathbb{N}_0$,

$$\|\partial^k_{y_m}u(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} = \max_{y^*_m\in\Gamma^*_m}\sigma^*_m(y^*_m)\,\|\nabla\partial^k_{y_m}u(y_m,y^*_m)\|_{L^2(D)}\le\max_{y^*_m\in\Gamma^*_m}\sigma^*_m(y^*_m)\,\frac{C_P}{a_{\min}}\,(2\gamma_m)^k\,k!\,\big(1+\|f(y)\|_{L^2(D)}\big)\le\frac{C_P}{a_{\min}}\,(2\gamma_m)^k\,k!\,\Big(\max_{y^*_m\in\Gamma^*_m}\sigma^*_m(y^*_m)+\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\Big)\le\frac{C_P}{a_{\min}}\,(2\gamma_m)^k\,k!\,\big(1+\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\big)$$

where the last inequality follows from the definition of the weight $\sigma$. Inserting this bound in the previous display,

$$\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\le\sum_{k=0}^{\infty}\frac{|z-y_m|^k}{k!}\,\frac{C_P}{a_{\min}}\,(2\gamma_m)^k\,k!\,\big(1+\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\big) = \frac{C_P}{a_{\min}}\,\big(1+\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\big)\sum_{k=0}^{\infty}\big[\,|z-y_m|\,2\gamma_m\,\big]^k. \tag{4.1.7}$$

Therefore we can conclude that the power series converges for all $z\in\mathbb{C}$ such that $\mathrm{dist}(z,y_m)\le\tau_m<\frac{1}{2\gamma_m}$. This reasoning is independent of the choice of $y_m\in\Gamma_m$. Hence, by a continuation argument, the series converges in $\Sigma(\Gamma_m;\tau_m)$ for $\tau_m<\frac{1}{2\gamma_m}$. Finally, observe that the power series converges exactly to $u$. Indeed, for all $z\in\mathbb{C}$,

$$\Big\|u(z)-\sum_{k=0}^{n}\frac{(z-y_m)^k}{k!}\,\partial^k_{y_m}u(y_m,y^*_m,x)\Big\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\le\sup_{\mathrm{dist}(s,y_m)\le|z-y_m|}\|\partial^{n+1}_{y_m}u(s,y^*_m,x)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\,\frac{|z-y_m|^{n+1}}{(n+1)!}\le\frac{C_P}{a_{\min}}\,\big[\,|z-y_m|\,2\gamma_m\,\big]^{n+1}\big(1+\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\big)$$

and this upper bound goes to zero as $n\to\infty$ whenever $|z-y_m|<\frac{1}{2\gamma_m}$. Note that for $z\in\Sigma(\Gamma_m;\tau_m)$ it holds that $\sigma_m(\mathrm{Re}\,z)\le e^{\alpha_m\tau_m}\,\sigma_m(y_m)$.

Along the same lines as above, the estimate in the statement follows from

$$\sigma_m(\mathrm{Re}\,z)\,\|u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\le e^{\alpha_m\tau_m}\,\sigma_m(y_m)\,\|u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\le e^{\alpha_m\tau_m}\,\frac{C_P}{a_{\min}}\Big(\max_{y_m\in\Gamma_m}\sigma_m(y_m)+\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\Big)\sum_{k=0}^{\infty}\big[\,|z-y_m|\,2\gamma_m\,\big]^k\le e^{\alpha_m\tau_m}\,\frac{C_P}{a_{\min}}\big(1+\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\big)\sum_{k=0}^{\infty}(2\tau_m\gamma_m)^k\le\frac{C_P\,e^{\alpha_m\tau_m}}{a_{\min}(1-2\tau_m\gamma_m)}\big(1+\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\big)$$

where again we have used in the second-to-last inequality the fact that $\sigma_m\le1$ in $\Gamma_m$.

Remark 3. The coefficients $a_N$ and $f_N$ in (4.0.1) are decomposed by using truncations of the Karhunen–Loève expansions of the corresponding infinite-dimensional random fields. In this case the assumptions of Lemma 4.1.2 are fulfilled. In particular, for

$$a_N(\omega,x) = \mathbb{E}a(x)+\sum_{m=1}^{N}\sqrt{\lambda_m}\,\phi_m(x)\,Y_m(\omega),$$

provided that $a_N(\omega,x)\ge a_{\min}$ in $D$ and $P$-a.s. in $\Omega$, we have

$$\Big\|\frac{\partial^k_{y_m}a_N(y)}{a_N(y)}\Big\|_{L^\infty(D)}\le\begin{cases}\dfrac{\sqrt{\lambda_m}\,\|\phi_m\|_{L^\infty(D)}}{a_{\min}}, & k=1,\\[4pt] 0, & k>1,\end{cases}$$

and we can take $\gamma_m = \frac{\sqrt{\lambda_m}\,\|\phi_m\|_{L^\infty(D)}}{a_{\min}}$. Similarly, for

$$f_N(\omega,x) = \mathbb{E}f(x)+\sum_{m=1}^{N}\sqrt{\mu_m}\,\varphi_m(x)\,Y_m(\omega)$$

we have

$$\frac{\|\partial^k_{y_m}f_N(y)\|_{L^2(D)}}{1+\|f_N(y)\|_{L^2(D)}}\le\begin{cases}\sqrt{\mu_m}\,\|\varphi_m\|_{L^2(D)}, & k=1,\\ 0, & k>1,\end{cases}$$

and we can take $\gamma_m = \sqrt{\mu_m}\,\|\varphi_m\|_{L^2(D)}$. We may also be interested, as we will be in Chapter 5, in a truncated exponential decomposition of the diffusion coefficient. More precisely,

$$\log\big[a_N(\omega,x)-a_{\min}\big] = \mathbb{E}a(x)+\sum_{m=1}^{N}\sqrt{\lambda_m}\,\phi_m(x)\,Y_m(\omega).$$

In this case we have

$$\Big\|\frac{\partial^k_{y_m}a_N(y)}{a_N(y)}\Big\|_{L^\infty(D)}\le\big(\sqrt{\lambda_m}\,\|\phi_m\|_{L^\infty(D)}\big)^k,\qquad k\ge1,$$

and we can take $\gamma_m = \sqrt{\lambda_m}\,\|\phi_m\|_{L^\infty(D)}$.

4.2 Convergence Analysis

The main goal of this section is to provide an a priori estimate of the total error between the exact solution $u$ of problem (2.1.3) and the approximation $u_{N,p,h}$ we finally recover by applying the stochastic collocation method. Considering the successive approximations we have introduced in our analysis, we can naturally split the total error in the following way:

$$\|u-u_{N,p,h}\|_{V_P}\le\|u-u_N\|_{V_P}+\|u_N-u_{N,p}\|_{V_P}+\|u_{N,p}-u_{N,p,h}\|_{V_P}$$

where $V_P = L^2_P(\Omega)\otimes H^1_0(D)$ as defined in Chapter 2.

4.2.1 Error of the Truncation

We first focus on the term $\|u-u_N\|_{V_P}$. Here $u_N$ denotes the exact solution of the weak problem (2.2.3), where the random coefficients of (2.1.3) have been substituted by the corresponding truncated Karhunen–Loève expansions. Thus we aim to give an upper bound for the truncation error of $u$ depending on the truncation errors of $a$ and $f$, which were investigated in Section 1.2. Consider the two variational formulations of finding $u, u_N\in V_P$ such that, respectively, it holds for all $\psi\in H^1_0(D)$, $P$-a.e. in $\Omega$:

$$\int_D a(\omega,x)\,\nabla u(\omega,x)\cdot\nabla\psi(x)\,dx = \int_D f(\omega,x)\,\psi(x)\,dx, \tag{4.2.1}$$

$$\int_D a_N(\omega,x)\,\nabla u_N(\omega,x)\cdot\nabla\psi(x)\,dx = \int_D f_N(\omega,x)\,\psi(x)\,dx. \tag{4.2.2}$$

Now, dropping the variable dependence for brevity of notation, the following holds $P$-a.e. in $\Omega$ and for all $\psi\in H^1_0(D)$:

$$\int_D a\,\nabla(u-u_N)\cdot\nabla\psi\,dx = \int_D a\,\nabla u\cdot\nabla\psi\,dx-\int_D a_N\,\nabla u_N\cdot\nabla\psi\,dx+\int_D a_N\,\nabla u_N\cdot\nabla\psi\,dx-\int_D a\,\nabla u_N\cdot\nabla\psi\,dx = \int_D(f-f_N)\,\psi\,dx+\int_D(a_N-a)\,\nabla u_N\cdot\nabla\psi\,dx.$$

Applying Hölder's and Poincaré's inequalities we get

$$\int_D a\,\nabla(u-u_N)\cdot\nabla\psi\,dx\le\|f-f_N\|_{L^2(D)}\|\psi\|_{L^2(D)}+\|a-a_N\|_{L^\infty(D)}\|\nabla u_N\|_{L^2(D)}\|\nabla\psi\|_{L^2(D)}\le C_P\,\|f-f_N\|_{L^2(D)}\|\psi\|_{H^1_0(D)}+\|a-a_N\|_{L^\infty(D)}\|\nabla u_N\|_{L^2(D)}\|\psi\|_{H^1_0(D)}.$$

Therefore we end up with the $P$-a.e. estimate

$$\|u-u_N\|_{H^1_0(D)}\le\frac{1}{a_{\min}}\sup_{\psi\in H^1_0(D)}\frac{\int_D a\,\nabla(u-u_N)\cdot\nabla\psi\,dx}{\|\psi\|_{H^1_0(D)}}\lesssim\|f-f_N\|_{L^2(D)}+\|a-a_N\|_{L^\infty(D)}\|\nabla u_N\|_{L^2(D)}\lesssim\|f-f_N\|_{L^2(D)}+\|a-a_N\|_{L^\infty(D)}\|f_N\|_{L^2(D)}$$

where the notation $b\lesssim c$ means that there exists a constant $K$ such that $b\le Kc$. Remarkably, the constants we have omitted in the previous estimate do not depend on $N$. Finally we obtain

$$\|u-u_N\|_{V_P}\lesssim\|f-f_N\|_{L^2_P(\Omega)\otimes L^2(D)}+\|a-a_N\|_{L^2_P(\Omega;L^\infty(D))}\,\|f_N\|_{L^2_P(\Omega)\otimes L^2(D)}.$$

Given the Karhunen–Loève expansions of $a$ and $f$,

$$a(\omega,x) = \mathbb{E}a(x)+\sum_{m=1}^{\infty}\sqrt{\lambda_m}\,\phi_m(x)\,Y_m(\omega),\qquad f(\omega,x) = \mathbb{E}f(x)+\sum_{m=1}^{\infty}\sqrt{\mu_m}\,\varphi_m(x)\,Y_m(\omega),$$

we have shown in Section 1.2 that the following estimate holds:

$$\|f-f_N\|^2_{L^2_P(\Omega)\otimes L^2(D)} = \sum_{m=N+1}^{\infty}\mu_m.$$

On the other hand, assuming that the sequence $(Y_m)_m$ is uniformly bounded in $L^\infty_P(\Omega)$,

$$\|a-a_N\|_{L^2_P(\Omega;L^\infty(D))}\le\|a-a_N\|_{L^\infty(\Omega\times D)}\le C\sum_{m=N+1}^{\infty}\sqrt{\lambda_m}\,\|\phi_m\|_{L^\infty(D)}.$$

Moreover, depending on the regularity, in the sense of Definition 1.2.4, of the covariance functions $V_a$ and $V_f$ of $a$ and $f$ respectively, we have presented explicit bounds for the eigenvalues and eigenfunctions (see Section 1.2.1).

4.2.2 Finite Element Error

Here we concentrate on the analysis of the error caused by the finite element approximation as described at the end of Section 3.1. We refer the reader to [9]. We are interested in estimating $\|u_{N,p}-u_{N,p,h}\|_{V_P}$. First of all, observe that by passing to the probability space $L^2_\rho(\Gamma)$ we have

$$\|u_{N,p}-u_{N,p,h}\|_{V_P} = \|u_{N,p}-u_{N,p,h}\|_{V_\rho}$$

47 where V ρ = L ρ(γ) H 1 0 () as defined in Section.. Consider V h() a finite eleent space of continuous piecewise polynoials defined on a unifor triangulation T h of the doain R d with axiu esh size h. The approxiation error can be controlled by the esh size. Naely u N,p u N,p,h Vρ C in u v L ρ (Γ) V N,p v Vρ C(u N,p, n)h n h() where both C and C(u N,p, n) are independent of the esh size and n is a positive integer deterined by the soothness of u N,p in and the degree of the finite eleent space. In particular for S q h () H1 0 () containing all continuous piecewise polynoials of degree q we have for l {0, 1} u N,p u N,p,h L ρ (Γ) H l () h r l u N,p L ρ (Γ) H r () for 1 r in{q + 1, s} whenever u N,p L ρ(γ) H s () (see [9], Reark 4.4.7) Collocation Error - Full Tensor Method The evaluation of the last error ter u N u N,p VP deriving fro the approxiation in the rando space requires a ore extensive treatent. We first conduct a one-diensional analysis using the sae notation as in the previous chapters with = 1,..., N. Let Γ R be a bounded or unbounded doain. Consider the density ˆρ : Γ R +, ˆρ (y ) C () axe (δy) for soe C () ax > 0 and δ = 0 in the case Γ is bounded and δ > 0 when Γ is unbounded. Moreover let σ : Γ R + such that σ (y ) C e (δy) 4 for soe C > 0. Observe that this requireent is fulfilled both by a Gaussian weight σ (y ) = e (µy) for µ δ / and an exponential weight σ (y ) = e α y for soe α > 0. Recall the functional space { } Cσ 0 (Γ ; V ) = v : Γ V, v continuous in y, ax σ (y ) v(y ) V < y Γ where V is an Hilbert space. Under the previous assuptions it easily follows that the following continuous ebedding holds and we denote the continuity constant with C 1. C 0 σ (Γ ; V ) L ρ (Γ ; V ) (4..3) 37

Denote by $\{y_{m,k_m}\}_{k_m=1}^{p_m+1}$ the zeros of the non-trivial polynomial $q_{p_m+1}\in\mathcal{P}_{p_m+1}(\Gamma_m)$ orthogonal to the space $\mathcal{P}_{p_m}(\Gamma_m)$ with respect to $\hat\rho_m$, and by $I_{p_m}$ the Lagrange interpolation operator $I_{p_m}:C^0_{\sigma_m}(\Gamma_m;V)\to L^2_{\hat\rho_m}(\Gamma_m;V)$,

$$I_{p_m}v(y_m) = \sum_{k_m=1}^{p_m+1}v(y_{m,k_m})\,l_{m,k_m}(y_m) \tag{4.2.4}$$

where $l_{m,k_m}\in\mathcal{P}_{p_m}(\Gamma_m)$,

$$l_{m,k_m}(y_m) = \prod_{\substack{i=1\\ i\ne k_m}}^{p_m+1}\frac{y_m-y_{m,i}}{y_{m,k_m}-y_{m,i}}.$$

Observe that $l_{m,j}(y_{m,i})=\delta_{ji}$ for $j,i=1,\dots,p_m+1$. In the following we exploit several properties of the above polynomials. The first one is their mutual orthogonality with respect to $\hat\rho_m$. Indeed, by noting that $l_{m,j}(y_{m,j'})\,l_{m,i}(y_{m,j'})=0$ for all $j'=1,\dots,p_m+1$ and $i\ne j$, we can conclude that there exists a polynomial $h\in\mathcal{P}_{p_m-1}(\Gamma_m)$ such that $l_{m,j}(y_m)\,l_{m,i}(y_m)=h(y_m)\,q_{p_m+1}(y_m)$. Therefore, by the assumption on $q_{p_m+1}$, we obtain

$$\int_{\Gamma_m}l_{m,j}(y_m)\,l_{m,i}(y_m)\,\hat\rho_m(y_m)\,dy_m = \int_{\Gamma_m}h(y_m)\,q_{p_m+1}(y_m)\,\hat\rho_m(y_m)\,dy_m = 0.$$

The second useful property reads

$$\sum_{k_m=1}^{p_m+1}l_{m,k_m}(y_m) = 1. \tag{4.2.5}$$

This easily follows after observing that $\sum_{k_m=1}^{p_m+1}l_{m,k_m}(y_m)-1$ is a polynomial of degree $p_m$ with the $p_m+1$ roots $y_{m,k_m}$. Consequently it has to be $\sum_{k_m=1}^{p_m+1}l_{m,k_m}(y_m)-1\equiv0$. Recall from Section 3.1 the Gauss quadrature weights

$$\omega_{m,k_m} = \int_{\Gamma_m}l^2_{m,k_m}(y_m)\,\hat\rho_m(y_m)\,dy_m.$$

Observe that $\sum_{k_m=1}^{p_m+1}\omega_{m,k_m}=1$. Indeed, by orthogonality and equation (4.2.5) we have

$$\sum_{k_m=1}^{p_m+1}\omega_{m,k_m} = \int_{\Gamma_m}\Big(\sum_{k_m=1}^{p_m+1}l_{m,k_m}(y_m)\Big)^2\,\hat\rho_m(y_m)\,dy_m = \int_{\Gamma_m}\hat\rho_m(y_m)\,dy_m = 1.$$

We can prove the following

49 Lea The operator I p : C 0 σ (Γ ; V ) L ρ (Γ ; V ) defined in (4..4) is continuous with I p v L ρ (Γ ;V ) C v C 0 σ (Γ ;V ). Proof. For any v C 0 σ (Γ ; V ) we have I p v L ρ (Γ;V ) = Γ I p v(y ) V ˆρ (y )dy = = Γ p +1 v(y,k )l,k (y ) k =1 p +1 Γ k,k n=1 p +1 k =1 V ˆρ (y )dy l,k (y )l,kn (y ) v(y,k ) V v(y,kn ) V ˆρ (y )dy v(y,k ) V ω,k p +1 ax k σ (y,k ) v(y,k ) ω,k V {1,...,p +1} σ k =1 (y,k ) p +1 v Cσ 0 (Γ;V ) k =1 ω,k σ (y,k ) (4..6) where we have used the orthogonality property of the Lagrange polynoials. Now we have to possible cases. If Γ is bounded then δ = 0 and consequently σ C. Thus (4..6) gives I p v 1 L (Γ;V ) ρ C p +1 v Cσ 0 (Γ;V ) k =1 ω k = 1 C v Cσ 0 (Γ;V ). In the case of Γ unbounded we apply a result fro [5, Section 7]. Naely as all the even oents ˆρ (y )dy are bounded by the oents of the Gaussian density Γ y n e (δy) by assuption on ˆρ, we can conclude p +1 k =1 Therefore fro (4..6) ω,k p ˆρ (y ) σ(y,k ) Γ I p v L ρ (Γ;V ) v C 0 σ (Γ;V ) k =1 σ(y ) dy C() ax ω,k C σ(y,k ) C() ax C π δ. π δ v C 0 σ (Γ;V ). 39

Hence the lemma is proved with continuity constant, independent of $p_m$, defined as

$$C_2 := \begin{cases}\dfrac{1}{C_m}, & \Gamma_m\text{ is bounded},\\[6pt] \sqrt{\dfrac{C^{(m)}_{\max}}{C^2_m}\,\dfrac{\sqrt{2\pi}}{\delta_m}}, & \Gamma_m\text{ is unbounded.}\end{cases}$$

Lemma 4.2.2. For any function $v\in C^0_{\sigma_m}(\Gamma_m;V)$ the interpolation error satisfies

$$\|v-I_{p_m}v\|_{L^2_{\hat\rho_m}(\Gamma_m;V)}\le C_3\inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|v-w\|_{C^0_{\sigma_m}(\Gamma_m;V)} \tag{4.2.7}$$

where $C_3$ is independent of $p_m$.

Proof. Note that for all $w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V$ it holds that $I_{p_m}w=w$. Thus

$$\|v-I_{p_m}v\|_{L^2_{\hat\rho_m}(\Gamma_m;V)}\le\|v-w\|_{L^2_{\hat\rho_m}(\Gamma_m;V)}+\|I_{p_m}w-I_{p_m}v\|_{L^2_{\hat\rho_m}(\Gamma_m;V)} = \|v-w\|_{L^2_{\hat\rho_m}(\Gamma_m;V)}+\|I_{p_m}(w-v)\|_{L^2_{\hat\rho_m}(\Gamma_m;V)}\le C_1\,\|v-w\|_{C^0_{\sigma_m}(\Gamma_m;V)}+C_2\,\|v-w\|_{C^0_{\sigma_m}(\Gamma_m;V)}\le C_3\,\|v-w\|_{C^0_{\sigma_m}(\Gamma_m;V)}$$

where in the second-to-last inequality we have used the embedding (4.2.3) and Lemma 4.2.1. The constant $C_3$ is equal to $2\max\{C_1,C_2\}$. Since $w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V$ was arbitrary, the claim follows.

Now we examine the error on the right-hand side of (4.2.7). In particular we consider the case of a function $v:\Gamma_m\to V$ which admits an analytic extension, again denoted by $v$, in the region $\Sigma(\Gamma_m;\tau_m) := \{z\in\mathbb{C} : \mathrm{dist}(z,\Gamma_m)\le\tau_m\}\subset\mathbb{C}$ for some $\tau_m>0$. We split the analysis into the two cases of $\Gamma_m$ bounded or unbounded. Let us first focus on the bounded case, where $\sigma_m=1$.

Lemma 4.2.3. Let $v:\Gamma_m\to V$ be a function which admits an analytic extension $v$ in the region $\Sigma(\Gamma_m;\tau_m)$ for some $\tau_m>0$. Then

$$\inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|v-w\|_{C^0(\Gamma_m;V)}\le\frac{2}{\varrho-1}\,\varrho^{-p_m}\max_{z\in\Sigma(\Gamma_m;\tau_m)}\|v(z)\|_V$$

where $1<\varrho=\dfrac{2\tau_m}{|\Gamma_m|}+\sqrt{\dfrac{4\tau_m^2}{|\Gamma_m|^2}+1}$.

Proof. Consider the following change of variable: $g(t)=y_0+\frac{|\Gamma_m|}{2}\,t$, with $y_0$ the midpoint of $\Gamma_m$. Consequently $g([-1,1])=\Gamma_m$. Let $\tilde v(t):=v(g(t))$. Then $\tilde v$ admits an analytic extension in the region $\Sigma\big([-1,1];\frac{2\tau_m}{|\Gamma_m|}\big)$.

The function $x\mapsto\tilde v(\cos x)$ is even, $2\pi$-periodic, and has a continuous derivative. Thus it has a convergent Fourier series

$$\tilde v(\cos x) = \frac{a_0}{2}+\sum_{k=1}^{\infty}a_k\cos(kx)$$

where $a_k\in V$ is defined as

$$a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}\tilde v(\cos x)\cos(kx)\,dx.$$

Define the Chebyshev polynomials $C_k(\cos x):=\cos(kx)$ and let $t=\cos x$. We can reformulate the previous expansion in terms of the $C_k$'s on $[-1,1]$, and we get, for $\tilde v:[-1,1]\to V$,

$$\tilde v(t) = \frac{a_0}{2}+\sum_{k=1}^{\infty}a_k\,C_k(t). \tag{4.2.8}$$

Thanks to [6, Theorem 7] and [7, proof of Theorem 8.1] we know that for all $\varrho>1$ such that $\tilde v$ is analytic on

$$\mathcal{E}_\varrho := \Big\{z=y+iw : \frac{y^2}{a^2}+\frac{w^2}{b^2}\le1,\ a=\frac{\varrho+\varrho^{-1}}{2},\ b=\frac{\varrho-\varrho^{-1}}{2}\Big\},$$

it holds that the series (4.2.8) converges within $\mathcal{E}_\varrho$ and for all $k\in\mathbb{N}$ the coefficients satisfy the following bound:

$$\|a_k\|_V\le2\,\varrho^{-k}\max_{z\in\mathcal{E}_\varrho}\|\tilde v(z)\|_V. \tag{4.2.9}$$

Denote by $\Pi_{p_m}$ the truncation of the Chebyshev expansion (4.2.8) at degree $p_m$. Then

$$\inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|v-w\|_{C^0(\Gamma_m;V)} = \inf_{w\in\mathcal{P}_{p_m}([-1,1])\otimes V}\|\tilde v-w\|_{C^0([-1,1];V)}\le\|\tilde v-\Pi_{p_m}\tilde v\|_{C^0([-1,1];V)} = \Big\|\sum_{k=p_m+1}^{\infty}a_k\,C_k(t)\Big\|_{C^0([-1,1];V)}\le\sum_{k=p_m+1}^{\infty}\|a_k\|_V\le\max_{z\in\mathcal{E}_\varrho}\|\tilde v(z)\|_V\sum_{k=p_m+1}^{\infty}2\,\varrho^{-k} = \max_{z\in\mathcal{E}_\varrho}\|\tilde v(z)\|_V\,\frac{2\,\varrho^{-p_m}}{\varrho-1}\le\max_{z\in\Sigma(\Gamma_m;\tau_m)}\|v(z)\|_V\,\frac{2\,\varrho^{-p_m}}{\varrho-1}$$

where we have used in the second inequality the fact that $\max_{t\in[-1,1]}|C_k(t)|=1$ and in the third inequality the bound (4.2.9). Finally, observe that the largest ellipse that can be drawn inside $\Sigma\big([-1,1];\frac{2\tau_m}{|\Gamma_m|}\big)$ is obtained by equating the minor semi-axis of $\partial\mathcal{E}_\varrho$, the boundary of $\mathcal{E}_\varrho$, with the radius $\frac{2\tau_m}{|\Gamma_m|}$, i.e.

$$\frac{\varrho-\varrho^{-1}}{2} = \frac{2\tau_m}{|\Gamma_m|}.$$

Solving the equation we find $\varrho=\frac{2\tau_m}{|\Gamma_m|}+\sqrt{\frac{4\tau_m^2}{|\Gamma_m|^2}+1}$.

Now we turn to the case of $\Gamma_m$ unbounded, i.e. $\Gamma_m=\mathbb{R}$. We first present a result concerning Hermite polynomials. Let $H_n(y)\in\mathcal{P}_n(\mathbb{R})$ denote the normalized Hermite polynomials

$$H_n(y) = \frac{1}{\sqrt{\sqrt{\pi}\,2^n\,n!}}\,(-1)^n\,e^{y^2}\,\frac{\partial^n}{\partial y^n}\big(e^{-y^2}\big)$$

and $h_n$ the Hermite functions $h_n(y)=e^{-\frac{y^2}{2}}\,H_n(y)$. From [13] we know that the Hermite polynomials form a complete orthonormal basis of $L^2(\mathbb{R})$ with respect to the weight $e^{-y^2}$, i.e.

$$\int_{\mathbb{R}}H_n(y)\,H_m(y)\,e^{-y^2}\,dy = \delta_{nm}.$$

It has been proved in [8, Theorem 1] that the following holds true.

Lemma 4.2.4. Let $f$ be an analytic function in $S(\mathbb{R};\tau) := \{z=y+iw : w\in[-\tau,\tau]\}\subset\mathbb{C}$. The Hermite–Fourier series

$$\sum_{n=0}^{\infty}f_n\,h_n(z),\qquad f_n = \int_{\mathbb{R}}f(y)\,h_n(y)\,dy \tag{4.2.10}$$

exists and converges to $f$ in $S(\mathbb{R};\tau)$ if and only if for every $\beta\in[0,\tau)$ there exists a finite positive constant $C(\beta)$ such that

$$|f(y+iw)|\le C(\beta)\,e^{-|y|\sqrt{\beta^2-w^2}},\qquad y\in\mathbb{R},\ w\in[-\beta,\beta].$$

Moreover it holds that

$$\|f_n\|\le C\,e^{-\tau\sqrt{2n+1}}. \tag{4.2.11}$$
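The orthonormality of the normalized Hermite polynomials above can be verified numerically via Gauss–Hermite quadrature, which integrates polynomials exactly against the weight $e^{-y^2}$ (a sketch; the truncation order `nmax` is an arbitrary choice):

```python
import numpy as np
from math import factorial, pi, sqrt

nmax = 6
y, w = np.polynomial.hermite.hermgauss(2 * nmax)   # exact for degree <= 4*nmax - 1

def H(n, t):
    """Normalised Hermite polynomial H_n = H~_n / sqrt(sqrt(pi) 2^n n!),
    with H~_n the physicists' Hermite polynomial."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.polynomial.hermite.hermval(t, c) / sqrt(sqrt(pi) * 2 ** n * factorial(n))

# Gram matrix int H_i H_j e^{-y^2} dy: should be the identity
G = np.array([[np.sum(w * H(i, y) * H(j, y)) for j in range(nmax)]
              for i in range(nmax)])
```

The same normalisation makes the Hermite functions $h_n(y)=e^{-y^2/2}H_n(y)$ uniformly bounded, the fact invoked from [10] below.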

Recall from the beginning of the section the two weights

$$G_m(y) := e^{-\frac{(\delta_m y)^2}{4}},\qquad \sigma_m(y) = e^{-\alpha_m|y|}$$

for some $\alpha_m,\delta_m>0$. Clearly $C^0_{\sigma_m}(\Gamma_m;V)\subset C^0_{G_m}(\Gamma_m;V)$.

Lemma 4.2.5. Let $v\in C^0_{\sigma_m}(\Gamma_m;V)$ admit an analytic extension in $\Sigma(\mathbb{R};\tau_m)\subset\mathbb{C}$ for some $\tau_m>0$, and be such that for all $z=y+iw\in\Sigma(\mathbb{R};\tau_m)$ it holds that

$$\sigma_m(y)\,\|v(z)\|_V\le C_v(\tau_m)$$

for some positive finite constant $C_v(\tau_m)$. Then, for any $\delta_m>0$, there exist a constant $K_m$, independent of $p_m$, and a function $\chi(p_m)=O(\sqrt{p_m})$ such that

$$\inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|v-w\|_{C^0_{G_m}(\Gamma_m;V)}\le K_m\,\chi(p_m)\,e^{-\tau_m\delta_m\sqrt{p_m}}.$$

Proof. Consider the following change of variable: $g(t)=\frac{\sqrt{2}}{\delta_m}\,t$. Let $\tilde v(t):=v(g(t))$. Then $\tilde v$ admits an analytic extension in the region

$$\Sigma\Big(\mathbb{R};\frac{\tau_m\delta_m}{\sqrt{2}}\Big).$$

Moreover, observe that $\tilde v\in C^0_{\tilde\sigma_m}(\Gamma_m;V)$ with $\tilde\sigma_m(t)=e^{-\frac{\sqrt{2}\,\alpha_m}{\delta_m}|t|}$. We consider the expansion of $\tilde v$ in Hermite polynomials

$$\tilde v(t) = \sum_{n=0}^{\infty}v_n\,H_n(t),\qquad v_n = \int_{\mathbb{R}}\tilde v(t)\,H_n(t)\,e^{-t^2}\,dt. \tag{4.2.12}$$

Now define $f(z):=\tilde v(z)\,e^{-\frac{z^2}{2}}$. Note that the Hermite–Fourier series of $f$ defined in (4.2.10) has the same coefficients as the expansion (4.2.12). Indeed

$$f_n = \int_{\mathbb{R}}f(t)\,h_n(t)\,dt = \int_{\mathbb{R}}\tilde v(t)\,e^{-\frac{t^2}{2}}\,e^{-\frac{t^2}{2}}\,H_n(t)\,dt = v_n.$$

As a product of analytic functions, $f$ is also analytic in $\Sigma\big(\mathbb{R};\frac{\tau_m\delta_m}{\sqrt{2}}\big)$. Moreover

$$\|f(y+iw)\|_V = \|\tilde v(z)\|_V\,e^{-\frac{y^2-w^2}{2}}\le\frac{1}{\tilde\sigma_m(y)}\,C_v(\tau_m)\,e^{-\frac{y^2-w^2}{2}} = C_v(\tau_m)\,e^{-\frac{y^2-w^2}{2}+\frac{\sqrt{2}\,\alpha_m}{\delta_m}|y|}.$$

For $\beta_m\in\big[0,\frac{\tau_m\delta_m}{\sqrt{2}}\big)$ define

$$C(\beta_m) := \max_{\substack{y\in\mathbb{R}\\ w\in[-\beta_m,\beta_m]}}\exp\Big\{-\frac{y^2-w^2}{2}+\frac{\sqrt{2}\,\alpha_m}{\delta_m}|y|+|y|\sqrt{\beta_m^2-w^2}\Big\}.$$

Note that this is a bounded constant for all $\beta_m$. Observe also that

$$\|f(y+iw)\|_V\le C_v(\tau_m)\,e^{-\frac{y^2-w^2}{2}+\frac{\sqrt{2}\,\alpha_m}{\delta_m}|y|+|y|\sqrt{\beta_m^2-w^2}}\,e^{-|y|\sqrt{\beta_m^2-w^2}}\le C_v(\tau_m)\,C(\beta_m)\,e^{-|y|\sqrt{\beta_m^2-w^2}}.$$

Therefore $f$ satisfies the assumptions of Lemma 4.2.4. Hence the Hermite–Fourier series of $f$ converges in $\Sigma\big(\mathbb{R};\frac{\tau_m\delta_m}{\sqrt{2}}\big)$ and the coefficients $f_n$ satisfy the bound (4.2.11). Consequently

$$\|v_n\|_V\le C\,e^{-\frac{\tau_m\delta_m}{\sqrt{2}}\sqrt{2n+1}}.$$

Denote by $\Pi_{p_m}$ the truncation of the Hermite expansion (4.2.12) at degree $p_m$. Then

$$\inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|v-w\|_{C^0_{G_m}(\Gamma_m;V)} = \inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|\tilde v-w\|_{C^0_{\tilde G_m}(\Gamma_m;V)}\le\|\tilde v-\Pi_{p_m}\tilde v\|_{C^0_{\tilde G_m}(\Gamma_m;V)} = \max_{t\in\mathbb{R}}\Big\|\sum_{n=p_m+1}^{\infty}v_n\,H_n(t)\,e^{-t^2/2}\Big\|_V = \max_{t\in\mathbb{R}}\Big\|\sum_{n=p_m+1}^{\infty}v_n\,h_n(t)\Big\|_V\le\max_{t\in\mathbb{R}}\sum_{n=p_m+1}^{\infty}\|v_n\|_V\,|h_n(t)|.$$

From [10, bound (2.2)] we know that $|h_n(t)|\le1$ for all $t\in\mathbb{R}$ and all $n$. Thus

$$\inf_{w\in\mathcal{P}_{p_m}(\Gamma_m)\otimes V}\|v-w\|_{C^0_{G_m}(\Gamma_m;V)}\le\sum_{n=p_m+1}^{\infty}\|v_n\|_V\le C\sum_{n=p_m+1}^{\infty}e^{-\frac{\tau_m\delta_m}{\sqrt{2}}\sqrt{2n+1}}.$$

Finally, we use the following result, given in [11, Lemma A.2]: for $r\in\mathbb{R}_+$, $r<1$, it holds that

$$\sum_{n=p+1}^{\infty}r^{\sqrt{2(n+1)}}\le\Big(\frac{\sqrt{2(p+1)}}{r\,(1-r)^2}+O(1)\Big)\,r^{\sqrt{2p}},$$

which, applied with $r=e^{-\tau_m\delta_m/\sqrt{2}}$, yields the claimed bound with $\chi(p_m)=O(\sqrt{p_m})$.

We can now proceed to prove the following

Theorem 4.2.6. Under the assumptions of Section 4.1, there exist positive constants $(r_m)_{m=1}^N$ and $C$, independent of $p=(p_1,\dots,p_N)$, such that the approximation error of the full tensor collocation method satisfies

$$\|u_N-I_{\mathrm{full}}u_N\|_{V_P}\le C\sum_{m=1}^{N}\chi_m(p_m)\,e^{-r_m p_m^{\theta_m}}$$

where

$$\theta_m = \begin{cases}1, & \Gamma_m\text{ is bounded},\\ \tfrac12, & \Gamma_m\text{ is unbounded},\end{cases}\qquad \chi_m(p_m) = \begin{cases}1, & \Gamma_m\text{ is bounded},\\ O(\sqrt{p_m}), & \Gamma_m\text{ is unbounded},\end{cases}\qquad r_m = \begin{cases}\log\Big(\dfrac{2\tau_m}{|\Gamma_m|}+\sqrt{1+\dfrac{4\tau_m^2}{|\Gamma_m|^2}}\Big), & \Gamma_m\text{ is bounded},\\ \tau_m\,\delta_m, & \Gamma_m\text{ is unbounded},\end{cases}$$

with $\tau_m$ defined in Lemma 4.1.2 and $\delta_m$ as in (4.1.1).

Proof. Firstly, observe that by passing to the probability space $L^2_\rho(\Gamma)$ we have $\|u_N-I_{\mathrm{full}}u_N\|_{V_P}=\|u_N-I_{\mathrm{full}}u_N\|_{V_\rho}$, where $V_\rho=L^2_\rho(\Gamma)\otimes H^1_0(D)$. Consequently, we want to provide an upper bound for $\|u_N-I_{\mathrm{full}}u_N\|_{V_\rho}$. In order to lighten the notation we will drop the subscript $N$. Now observe that from (4.1.2) we have

$$\|u-I_{\mathrm{full}}u\|_{L^2_\rho(\Gamma)\otimes H^1_0(D)}\le\Big\|\frac{\rho}{\hat\rho}\Big\|^{1/2}_{L^\infty(\Gamma)}\,\|u-I_{\mathrm{full}}u\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)}.$$

To examine the error on the right-hand side we want to exploit the one-dimensional analysis we have conducted in Section 4.2.3. The term $u-I_{\mathrm{full}}u$ is an interpolation error. Indeed, recall that $I_{\mathrm{full}}=\bigotimes_{m=1}^{N}I_{p_m}$, with $I_{p_m}$ the one-dimensional interpolant defined in (3.1.3). We define the following interpolation operator acting on the last $N-j+1$ directions, for $j=1,\dots,N$:

$$I^j_p v(y,x) = \sum_{k_j=1}^{p_j+1}\dots\sum_{k_N=1}^{p_N+1}v(y_1,\dots,y_{j-1},y_{j,k_j},\dots,y_{N,k_N},x)\,l_{j,k_j}(y_j)\cdots l_{N,k_N}(y_N).$$

Equivalently, $I^j_p = I_{p_j}\otimes\dots\otimes I_{p_N}$. Let us consider the first direction. Since $I_{\mathrm{full}}=I_{p_1}\otimes I^2_p$, we get

$$\|u-I_{\mathrm{full}}u\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} = \|u-I_{p_1}u+I_{p_1}u-(I_{p_1}\otimes I^2_p)u\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)}\le\|u-I_{p_1}u\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)}+\|I_{p_1}(u-I^2_p u)\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)}.$$

We introduce the following notation:

$$\Gamma^j := \prod_{i=j}^{N} \Gamma_i, \qquad \hat\rho^j := \prod_{i=j}^{N} \hat\rho_i, \qquad \sigma^j := \prod_{i=j}^{N} \sigma_i.$$

We view $u$ as a function $u : \Gamma_1 \to L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D)$. Then $u \in L^2_{\hat\rho_1}\big(\Gamma_1;\, L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D)\big)$. This allows us to write

$$\|u - I_{p_1} u\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} = \|u - I_{p_1} u\|_{L^2_{\hat\rho_1}(\Gamma_1;\, L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D))}, \tag{4.2.13}$$

$$\|I_{p_1}(u - I_p^2\, u)\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} = \|I_{p_1}(u - I_p^2\, u)\|_{L^2_{\hat\rho_1}(\Gamma_1;\, L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D))}. \tag{4.2.14}$$

For $V = L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D)$, we have already proved that the following continuous embeddings hold:

$$C^0_{\sigma_1}(\Gamma_1; V) \subset C^0_{G_1}(\Gamma_1; V) \subset L^2_{\hat\rho_1}(\Gamma_1; V)$$

with $\sigma_1 = G_1 = 1$ if $\Gamma_1$ is bounded, and $G_1(y_1) = e^{-(\delta_1 y_1)^2/4}$, $\sigma_1(y_1) = e^{-\alpha_1 |y_1|}$ for some $\alpha_1, \delta_1 > 0$ if $\Gamma_1$ is unbounded. From Lemma 4.2.1 we know that $I_{p_1}$ is continuous as an operator from both $C^0_{\sigma_1}(\Gamma_1; V)$ and $C^0_{G_1}(\Gamma_1; V)$ to $L^2_{\hat\rho_1}(\Gamma_1; V)$. Thus, thanks to Lemma 4.2.2, we have the following bound for (4.2.13):

$$\|u - I_{p_1} u\|_{L^2_{\hat\rho_1}(\Gamma_1; V)} \le C_3 \inf_{w_1 \in \mathcal{P}_{p_1}(\Gamma_1)\otimes V} \|u - w_1\|_{C^0_{G_1}(\Gamma_1; V)}.$$

By Lemmas 4.2.5 and 4.2.3 we finally get

$$\inf_{w_1 \in \mathcal{P}_{p_1}(\Gamma_1)\otimes V} \|u - w_1\|_{C^0_{G_1}(\Gamma_1; V)} \le \begin{cases} \dfrac{2}{\varrho_1 - 1} \max_{z\in\Sigma(\Gamma_1;\tau_1)} \|u(z)\|_V\; e^{-p_1\log\varrho_1}, & \Gamma_1 \text{ is bounded} \\ K_1\, O(\sqrt{p_1})\, e^{-\tau_1\delta_1\sqrt{p_1}}, & \Gamma_1 \text{ is unbounded} \end{cases} \le C_1 \begin{cases} e^{-p_1\log\varrho_1}, & \Gamma_1 \text{ is bounded} \\ O(\sqrt{p_1})\, e^{-\tau_1\delta_1\sqrt{p_1}}, & \Gamma_1 \text{ is unbounded} \end{cases}$$

where $\varrho_1 = \frac{2\tau_1}{|\Gamma_1|} + \sqrt{1 + \frac{4\tau_1^2}{|\Gamma_1|^2}}$ and $C_1 := \max\big\{ K_1,\ \frac{2}{\varrho_1-1}\max_{z\in\Sigma(\Gamma_1;\tau_1)}\|u(z)\|_V \big\}$ is independent of $p_1$. Note that in both cases we need the regularity results as stated in Lemma 4.1.2. Moreover, the assumption $u \in C^0_{\sigma_1}(\Gamma_1; V)$ of Lemma 4.2.5 is fulfilled.

It remains to bound the term in (4.2.14). We can do that by using Lemma 4.2.1, and we get

$$\|I_{p_1}(u - I_p^2\, u)\|_{L^2_{\hat\rho_1}(\Gamma_1; V)} \le C\, \|u - I_p^2\, u\|_{C^0_{\sigma_1}(\Gamma_1; V)}.$$

Observe that for the right-hand side we have

$$\|u - I_p^2\, u\|_{C^0_{\sigma_1}(\Gamma_1;\, L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D))} = \max_{y_1\in\Gamma_1} \sigma_1(y_1)\, \|u - I_p^2\, u\|_{L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D)} \le \max_{y_1\in\Gamma_1} \|u - I_p^2\, u\|_{L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D)}. \tag{4.2.15}$$
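The one-dimensional mechanism behind these estimates is easy to observe numerically: interpolating a function analytic on $[-1,1]$ at the $p+1$ Gauss-Legendre nodes produces an error decaying exponentially in $p$, as in Lemma 4.2.3. A minimal sketch (the target $1/(2+y)$ is an arbitrary analytic test function, not the PDE solution):

```python
import numpy as np

def interp_error(f, p, n_test=1001):
    """Max error on [-1,1] of Lagrange interpolation of f at the
    p+1 Gauss-Legendre nodes, evaluated via the barycentric formula."""
    nodes, _ = np.polynomial.legendre.leggauss(p + 1)
    # barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j))
                  for j in range(p + 1)])
    vals = f(nodes)
    t = np.linspace(-1.0, 1.0, n_test)
    approx = np.empty_like(t)
    for i, ti in enumerate(t):
        d = ti - nodes
        hit = np.isclose(d, 0.0, atol=1e-14)
        if hit.any():                      # evaluation point equals a node
            approx[i] = vals[hit.argmax()]
        else:
            approx[i] = np.sum(w * vals / d) / np.sum(w / d)
    return float(np.max(np.abs(approx - f(t))))

f = lambda y: 1.0 / (2.0 + y)   # analytic on [-1,1], nearest pole at y = -2
errs = {p: interp_error(f, p) for p in (2, 4, 8, 16)}
```

Each doubling of the degree shrinks the measured error by several orders of magnitude, consistent with the $e^{-p\log\varrho}$ rate, with $\varrho$ determined by the analyticity region of $f$.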

Therefore we can proceed iteratively, considering $I_p^2$ and $I_p^3$ such that $I_p^2 = I_{p_2} \otimes I_p^3$; then (4.2.15) can be bounded, uniformly in $\Gamma_1$, in the following way:

$$\|u - I_p^2\, u\|_{L^2_{\hat\rho^2}(\Gamma^2)\otimes H^1_0(D)} \le \|u - I_{p_2} u\|_{L^2_{\hat\rho_2}(\Gamma_2;\, L^2_{\hat\rho^3}(\Gamma^3)\otimes H^1_0(D))} + \|I_{p_2}(u - I_p^3\, u)\|_{L^2_{\hat\rho_2}(\Gamma_2;\, L^2_{\hat\rho^3}(\Gamma^3)\otimes H^1_0(D))}.$$

The right-most term in the previous display will again be bounded uniformly in $\Gamma_2$, and so on, considering all the remaining directions $y_3,\dots,y_N$. At each step, for $j = 2,\dots,N-1$ we define

$$C_j = \max\Big\{ K_j,\ \frac{2}{\varrho_j - 1} \max_{z\in\Sigma(\Gamma_j;\tau_j)} \|u(z)\|_{L^2_{\hat\rho^{j+1}}(\Gamma^{j+1})\otimes H^1_0(D)} \Big\}$$

and

$$C_N = \max\Big\{ K_N,\ \frac{2}{\varrho_N - 1} \max_{z\in\Sigma(\Gamma_N;\tau_N)} \|u(z)\|_{H^1_0(D)} \Big\}.$$

The constant $C$ appearing in the statement will then be

$$C = \max\Big\{ C_1,\ \max_{y_1\in\Gamma_1} C_2,\ \dots,\ \max_{y_1\in\Gamma_1}\cdots\max_{y_{N-1}\in\Gamma_{N-1}} C_N \Big\}.$$

4.2.4 Convergence Result

We can collect all the results obtained so far in the following

Theorem 4.2.7. If the following assumptions are satisfied:

(A1) $a \in L^2_P(\Omega; L^\infty(D))$.
(A2) $a$ is uniformly bounded from below by $a_{\min} > 0$.
(A3) $f \in L^2_P(\Omega; L^2(D))$.
(A4) $f \in C^0_\sigma(\Gamma; L^2(D))$.
(A5) $\forall y\in\Gamma$, $\rho(y) \le C_\rho\, e^{-\sum_{m=1}^N (\delta_m y_m)^2}$ for some $C_\rho > 0$.
(A6) $\forall y\in\Gamma$ and every $m$ there exists $\gamma_m < \infty$ such that
$$\Big\|\frac{\partial^k_{y_m} a(y)}{a(y)}\Big\|_{L^\infty(D)} \le \gamma_m^k\, k! \qquad\text{and}\qquad \frac{\|\partial^k_{y_m} f(y)\|_{L^2(D)}}{1 + \|f(y)\|_{L^2(D)}} \le \gamma_m^k\, k!.$$
(A7) $(Y_m)_{m\ge1}$ is uniformly bounded in $L^2(\Omega)$.
(A8) $u_{N,p} \in L^2_\rho(\Gamma)\otimes\big(H^s(D)\cap H^1_0(D)\big)$ for some $s \ge 1$.
(A9) $\forall y_m \in \Gamma_m$, $\sigma_m(y_m) \le C_m\, e^{-(\delta_m y_m)^2/4}$ for some $C_m > 0$.

Then, for $1 \le r \le \min\{q+1, s\}$, with $q$ the degree of the polynomial approximation in $D$, the following error bound for the stochastic full tensor collocation approximation holds true:

$$\|u - u_{N,p,h}\|_{L^2_P(\Omega)\otimes H^1_0(D)} \le C\Big( \sqrt{\mu_{N+1}} + \|f\|_{L^2_P(\Omega)\otimes L^2(D)} \sum_{m=N+1}^{\infty} \sqrt{\lambda_m}\, \|\varphi_m\|_{L^\infty(D)} \Big) + C\, h^{r-1}\, \|u_{N,p}\|_{L^2_\rho(\Gamma)\otimes H^r(D)} + C \sum_{m=1}^{N} \chi_m(p_m)\, e^{-r_m p_m^{\theta_m}} \tag{4.2.16}$$

where $\mu$, $\lambda_m$ and $\varphi_m$ are as in Section 4.2.1, $h$ is the mesh size introduced in Section 4.2.2, and $\chi_m$, $r_m$ and $\theta_m$ are defined in Theorem 4.2.6.

Proof. The first two terms on the right-hand side of (4.2.16) are related to the truncation error, and the bound can be found in Section 4.2.1. The second line of the claim concerns the spatial approximation in $D$, and it has been examined in Section 4.2.2. The last term controls the convergence in the random domain $\Omega$, and the result is given in Theorem 4.2.6.

4.2.5 Collocation Error - Sparse Tensor Method

We have an analogous result to Theorem 4.2.6 when we adopt a sparse collocation grid instead of the full one. For a more detailed proof see [1]. We adopt the same notation as in Section 3.2.

Theorem 4.2.8. Under the assumptions of Section 4.1, in the case of $\Gamma$ bounded, the approximation error of the isotropic Smolyak sparse tensor collocation method satisfies

$$\|u_N - S^{t,g}_w u_N\|_{V_P} \le C(r_{\min}, N)\, w\, e^{-r_{\min} w},$$

where $r_{\min} := \min_{1\le m\le N} r_m$, $r_m$ is defined in Theorem 4.2.6, and the constant $C(r_{\min}, N)$ does not depend on $w$.

Proof. As in the proof of Theorem 4.2.6, we have

$$\|u_N - S^{t,g}_w u_N\|_{V_P} = \|u_N - S^{t,g}_w u_N\|_{V_\rho} \le \Big\|\frac{\rho}{\hat\rho}\Big\|^{1/2}_{L^\infty(\Gamma)} \|u_N - S^{t,g}_w u_N\|_{V_{\hat\rho}}.$$

For convenience we drop the subscript $N$. The proof is based on a one-dimensional argument. Let $w \in \mathbb{N}$ and recall the operator (3.2.2)

$$S^{t,g}_w = \sum_{\substack{i\in\mathbb{N}^N_+ \\ g(i)\le w}}\ \bigotimes_{m=1}^{N} \Delta^{t(i_m)}$$

where

$$t(i) = \begin{cases} 1, & i = 1 \\ 2^{i-1}+1, & i \ge 2, \end{cases} \qquad g(i) = \sum_{m=1}^{N} (i_m - 1).$$

Let $m \in \{1,\dots,N\}$ and $V = L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)$, with the notation as in Section 4.1. From Lemma 4.2.2 we have, for the one-dimensional operator (3.2.1),

$$\|u - I_{t(i_m)} u\|_{L^2_{\hat\rho_m}(\Gamma_m; V)} \le C_3 \inf_{w\in\mathcal{P}_{t(i_m)-1}(\Gamma_m)\otimes V} \|u - w\|_{C^0_{\sigma_m}(\Gamma_m; V)}.$$

From Lemma 4.2.3 we also have

$$\inf_{w\in\mathcal{P}_{t(i_m)-1}(\Gamma_m)\otimes V} \|u - w\|_{C^0_{\sigma_m}(\Gamma_m; V)} \le \frac{2}{e^{r_m}-1}\, M_m(u)\, e^{-[t(i_m)-1]\, r_m}$$

where

$$M_m(u) := \max_{z\in\Sigma(\Gamma_m;\tau_m)} \|u(z)\|_V, \qquad r_m = \log\Big( \frac{2\tau_m}{|\Gamma_m|} + \sqrt{1 + \frac{4\tau_m^2}{|\Gamma_m|^2}} \Big),$$

and $\tau_m$ is defined in Lemma 4.1.2. Now, by definition of the difference operator, we get

$$\|\Delta^{t(i_m)} u\|_{L^2_{\hat\rho_m}(\Gamma_m; V)} = \|(I_{t(i_m)} - I_{t(i_m-1)})\, u\|_{L^2_{\hat\rho_m}(\Gamma_m; V)} \le \|u - I_{t(i_m)} u\|_{L^2_{\hat\rho_m}(\Gamma_m; V)} + \|u - I_{t(i_m-1)} u\|_{L^2_{\hat\rho_m}(\Gamma_m; V)} \le \frac{4}{e^{r_m}-1}\, M_m(u)\, e^{-[t(i_m-1)-1]\, r_m}.$$

Let $\mathrm{Id} : \Gamma \to \Gamma$ be the identity operator on $\Gamma$. Observe that

$$u = \sum_{i\in\mathbb{N}^N}\ \bigotimes_{m=1}^{N} \Delta^{t(i_m+1)}\, u.$$

Hence

$$\Big\|\Big(\mathrm{Id} - \sum_{\substack{i\in\mathbb{N}^N \\ |i|\le w}} \bigotimes_{m=1}^{N} \Delta^{t(i_m+1)}\Big) u\Big\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} \le \sum_{\substack{i\in\mathbb{N}^N \\ |i|>w}} \Big\| \bigotimes_{m=1}^{N} \Delta^{t(i_m+1)}\, u \Big\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} \le \sum_{\substack{i\in\mathbb{N}^N \\ |i|>w}}\ \prod_{m=1}^{N} \frac{4}{e^{r_m}-1}\, M_m(u)\, e^{-[t(i_m)-1]\, r_m}.$$

Define $M(u) := \max_{1\le m\le N} M_m(u)$ and $r_{\min} := \min_{1\le m\le N} r_m$.
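The index set $\{i \in \mathbb{N}^N_+ : g(i) \le w\}$ and the level-to-points map $t$ used by the Smolyak operator can be enumerated directly; a minimal sketch (the values $N = 3$, $w = 2$ are arbitrary test choices):

```python
from itertools import product

def t(i):
    """Number of 1-D collocation points on level i: t(1)=1, then doubling."""
    return 1 if i == 1 else 2 ** (i - 1) + 1

def smolyak_index_set(N, w):
    """Multi-indices i in N_+^N with g(i) = sum_m (i_m - 1) <= w."""
    return [i for i in product(range(1, w + 2), repeat=N)
            if sum(im - 1 for im in i) <= w]

idx = smolyak_index_set(3, 2)
```

For $N = 3$, $w = 2$ this yields the 10 multi-indices with $\sum_m (i_m - 1) \le 2$; the coarsest tensor grid $(1,1,1)$ is always included, and each unit increase of a component doubles the number of points in that direction.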

Then, for $C_4 := \frac{4}{e^{r_{\min}}-1}$, we obtain

$$\Big\| u - \sum_{\substack{i\in\mathbb{N}^N \\ |i|\le w}} \bigotimes_{m=1}^{N} \Delta^{t(i_m+1)}\, u \Big\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} \le C_4^N\, M(u)^N \sum_{\substack{i\in\mathbb{N}^N \\ |i|>w}}\ \prod_{m=1}^{N} e^{-r_{\min}[t(i_m)-1]} = C_4^N\, M(u)^N \sum_{\substack{i\in\mathbb{N}^N \\ |i|>w}}\ \prod_{m=1}^{N} e^{-r_{\min}\, 2^{i_m-1}}.$$

From the fact that for any $i \in \mathbb{N}_+$ the inequality $2^{i-1} \ge i$ holds, we get

$$\Big\| u - \sum_{\substack{i\in\mathbb{N}^N \\ |i|\le w}} \bigotimes_{m=1}^{N} \Delta^{t(i_m+1)}\, u \Big\|_{L^2_{\hat\rho}(\Gamma)\otimes H^1_0(D)} \le C(r_{\min}, N) \sum_{\substack{i\in\mathbb{N}^N \\ |i|>w}} e^{-r_{\min}\sum_{m=1}^N i_m}$$

where

$$C(r_{\min}, N) = [C_4\, M(u)]^N = \Big[\frac{4\, M(u)}{e^{r_{\min}}-1}\Big]^N.$$

Finally,

$$C(r_{\min}, N) \sum_{\substack{i\in\mathbb{N}^N \\ |i|>w}}\ \prod_{m=1}^{N} e^{-r_{\min} i_m} \le C(r_{\min}, N)\, w\, e^{-r_{\min} w}.$$

Remark 4. A related result in the case of $\Gamma$ unbounded holds. In particular, in the proof Lemma 4.2.5 has to be invoked instead of Lemma 4.2.3.

Now we want to relate the number of collocation points in the Smolyak sparse grid to the level $w$.

Lemma 4.2.9. The cardinality of the Smolyak sparse tensor grid at level $w$ satisfies the upper bound

$$\eta = \eta(w, N) \lesssim 2^w\, w^{N-1},$$

where the omitted constant is of the order $2^N$.
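The bound in the lemma above can be sanity-checked by exact enumeration of the count $\sum_{|i|\le w} \prod_m t(i_m+1)$ used in its proof; a small sketch (the `envelope` function is the right-hand side of the lemma with the $2^N$ constant made explicit, as reconstructed here, and the tested values of $N$ and $w$ are arbitrary):

```python
from itertools import product

def t(i):
    """Number of 1-D collocation points on level i: t(1)=1, then doubling."""
    return 1 if i == 1 else 2 ** (i - 1) + 1

def eta(N, w):
    """Sum over i in N^N with |i| <= w of prod_m t(i_m + 1):
    the point count bounded in the lemma."""
    total = 0
    for i in product(range(w + 1), repeat=N):
        if sum(i) <= w:
            pts = 1
            for im in i:
                pts *= t(im + 1)
            total += pts
    return total

def envelope(N, w):
    """The envelope 2^N * 2^w * w^(N-1), with w^0 read as 1 when w = 0."""
    return 2 ** N * 2 ** w * max(1, w) ** (N - 1)
```

In every tested case the exact count stays below the envelope, and doubling $w$ roughly doubles the count per unit of $w$, in line with the dominant $2^w$ factor.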

Proof. By definition of the Smolyak operator we have

$$\eta = \sum_{\substack{i\in\mathbb{N}^N \\ |i|\le w}}\ \prod_{m=1}^{N} t(i_m+1) = \sum_{j=0}^{w} \sum_{|i|=j}\ \prod_{m=1}^{N} t(i_m+1).$$

Observe that $t(i_m+1) \le 2^{i_m}+1 \le 2^{i_m+1}$, so that $\prod_{m=1}^N t(i_m+1) \le 2^N 2^j$ whenever $|i| = j$. Moreover,

$$\#\{i\in\mathbb{N}^N : |i| = j\} = \binom{j+N-1}{N-1} \le \frac{(j+N-1)^{N-1}}{(N-1)!} \lesssim j^{N-1}.$$

Therefore we get

$$\eta \le 2^N \sum_{j=0}^{w} 2^j\, \#\{i\in\mathbb{N}^N : |i| = j\} \lesssim 2^N \sum_{j=0}^{w} 2^j j^{N-1} \lesssim 2^N\, 2^w\, w^{N-1}.$$

4.3 Convergence of Moments

The main goal of the implementations, which are the subject of the next chapter, is the computation of the mean value and possibly higher moments of the final approximation $u_{N,p,h}$. Therefore we are interested in providing bounds for the error in the first two moments with respect to some suitable norm.

Lemma 4.3.1. For $V(D) := L^2(D)$ or $V(D) := H^1_0(D)$ it holds that

$$\|\mathbb{E}(u - u_{N,p,h})\|_{V(D)} \le \|u - u_{N,p,h}\|_{L^2_P(\Omega)\otimes V(D)}.$$

Proof. Let us write $\varepsilon := u - u_{N,p,h}$. We start with the case $V(D) = L^2(D)$. By Jensen's inequality we have

$$\|\mathbb{E}(\varepsilon)\|_{L^2(D)} = \Big( \int_D \big(\mathbb{E}(\varepsilon)\big)^2 \Big)^{1/2} \le \Big( \int_D \mathbb{E}(\varepsilon^2) \Big)^{1/2} = \|\varepsilon\|_{L^2_P(\Omega)\otimes L^2(D)}$$

and we have the claim for $L^2(D)$. Now consider the case $V(D) = H^1_0(D)$. Again by Jensen's inequality we get

$$\|\mathbb{E}(\varepsilon)\|_{H^1_0(D)} = \Big( \int_D |\nabla\mathbb{E}(\varepsilon)|^2 \Big)^{1/2} \le \Big( \int_D \mathbb{E}\big(|\nabla\varepsilon|^2\big) \Big)^{1/2} = \|\varepsilon\|_{L^2_P(\Omega)\otimes H^1_0(D)}.$$

For a proof of the following lemma we refer the reader to [11, Lemma 4.8].

Lemma 4.3.2. It holds that

$$\|\mathbb{E}(u^2 - u^2_{N,p,h})\|_{L^1(D)} \le C\, \|u - u_{N,p,h}\|_{L^2_P(\Omega)\otimes L^2(D)}$$

with a constant $C$ independent of $h$ and $p$.

Remark 5. In order to estimate the convergence rate of the error of higher moments, or the convergence rate of the error of the second moment in stronger norms, we need more regularity assumptions on the solution.

4.4 Anisotropic Sparse Method - Selection of the Weights α

Recall from Section 4.2.3 the two Lemmas 4.2.3 and 4.2.5, which state that, in the univariate case, assuming analyticity of the solution in the random space, the error of the approximation by polynomials of the dependence on each random variable decays exponentially fast in the polynomial degree. The size $\tau_m$ of the analyticity region depends in general on the direction $y_m$ (cf. Lemma 4.1.2). Therefore the decay coefficients in both lemmas depend on the same direction as well; respectively, for the bounded and the unbounded case,

$$r_m = \log\Big( \frac{2\tau_m}{|\Gamma_m|} + \sqrt{\frac{4\tau_m^2}{|\Gamma_m|^2} + 1} \Big), \qquad r_m = \tau_m\delta_m.$$

As we have already remarked in Section 3.2.1, the main point of the anisotropic version of the sparse collocation method is to increase the number of points in those directions where the convergence rate is poor, i.e. where $r_m$ is small. One possible choice to relate the weights $\alpha = (\alpha_1,\dots,\alpha_N)$ to the rates $r = (r_1,\dots,r_N)$ is the following:

$$\alpha_m = r_m, \qquad m \in \{1,\dots,N\}.$$
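With this choice the weights are computable directly from the per-direction analyticity parameters; a minimal sketch (the values of $\tau_m$ below are hypothetical, and the interval length corresponds to $\Gamma_m = [-\sqrt{3},\sqrt{3}]$):

```python
import math

def rate_bounded(tau, gamma_len):
    """r_m = log(2*tau/|Gamma_m| + sqrt(4*tau^2/|Gamma_m|^2 + 1)),
    i.e. asinh(2*tau/|Gamma_m|), for a bounded direction."""
    s = 2.0 * tau / gamma_len
    return math.log(s + math.sqrt(s * s + 1.0))

def rate_unbounded(tau, delta):
    """r_m = tau * delta for an unbounded direction."""
    return tau * delta

# hypothetical analyticity radii for four directions, largest first
taus = [2.0, 1.0, 0.5, 0.25]
length = 2.0 * math.sqrt(3.0)
alpha = [rate_bounded(tau, length) for tau in taus]   # alpha_m = r_m
```

Directions with a small analyticity radius receive a small weight, and the anisotropic Smolyak rule therefore refines them more aggressively.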

5 Numerical Implementation

The last part of this work is devoted to the implementation of the stochastic collocation method in its isotropic version, with the aim of validating the theoretical results we have proved in the previous chapters.

5.1 Example

This example is taken from [1]. We analyse the following elliptic boundary value problem:

$$\begin{cases} -\nabla\cdot\big(a(\omega,x)\,\nabla u(\omega,x)\big) = f(\omega,x), & x\in D \\ u(\omega,x) = 0, & x\in\partial D \end{cases}$$

where $D = [0,1]^2 \subset \mathbb{R}^2$ and $x = (x_1, x_2)$. The coefficient $f$ is deterministic, i.e. $f(x) = \cos(x_1)\sin(x_2)$. On the other hand, the truncated diffusion coefficient $a_N$ is given by

$$\log\big(a_N(\omega,x) - 0.5\big) = 1 + Y_1(\omega)\Big(\frac{\sqrt{\pi}\, L}{2}\Big)^{1/2} + \sum_{m=2}^{N} \sqrt{\lambda_m}\, \varphi_m(x)\, Y_m(\omega) \tag{5.1.1}$$

where, for $m \ge 2$,

$$\sqrt{\lambda_m} = (\sqrt{\pi}\, L)^{1/2} \exp\Big( -\frac{(\lfloor m/2\rfloor\, \pi L)^2}{8} \Big), \qquad \varphi_m(x) = \begin{cases} \sin\big( \lfloor m/2\rfloor\, \pi x_1 / L_p \big), & m \text{ even} \\ \cos\big( \lfloor m/2\rfloor\, \pi x_1 / L_p \big), & m \text{ odd}. \end{cases}$$

It can be shown (see [1]) that (5.1.1) is the truncation of a random field with covariance

$$\mathrm{Cov}\big[\log(a - 0.5)\big](x_1, x_1') = \exp\Big( -\frac{(x_1 - x_1')^2}{L_c^2} \Big).$$

Here $L_c$ denotes a physical correlation length. This means that the random fields $\log[a(x) - 0.5]$ and $\log[a(x') - 0.5]$ are only very weakly correlated for $|x_1 - x_1'| \gg L_c$. So we take $L$ and $L_p$ in the previous displays as $L_p = \max\{1, 2L_c\}$ and $L = L_c/L_p$. In our computations we will fix $L_c = 0.02$.

We consider independent random variables $(Y_m)_{m=1}^N$ which are uniformly distributed on the interval $[-\sqrt{3}, \sqrt{3}]$. Thus the stochastic domain $\Gamma$ is given by $[-\sqrt{3},\sqrt{3}]^N$. Moreover, the collocation points in this case will be the roots of the Legendre polynomials.
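Formula (5.1.1) translates directly into code; a minimal sketch (the correlation length `Lc` below is an arbitrary test value, not the one fixed in the computations, and the formulas follow the reconstruction of (5.1.1) given above):

```python
import math

def a_N(x1, y, Lc=0.5):
    """Evaluate the truncated coefficient a_N at x1 in [0,1], given a
    realization y = (y_1, ..., y_N) of the uniform variables Y_m.
    Lc is an arbitrary test correlation length."""
    Lp = max(1.0, 2.0 * Lc)
    L = Lc / Lp
    # leading terms of the exponent: 1 + Y_1 * (sqrt(pi)*L/2)^(1/2)
    g = 1.0 + y[0] * math.sqrt(math.sqrt(math.pi) * L / 2.0)
    for m in range(2, len(y) + 1):
        k = m // 2
        sqrt_lam = math.sqrt(math.sqrt(math.pi) * L) \
            * math.exp(-(k * math.pi * L) ** 2 / 8.0)
        phi = (math.sin(k * math.pi * x1 / Lp) if m % 2 == 0
               else math.cos(k * math.pi * x1 / Lp))
        g += sqrt_lam * phi * y[m - 1]
    return 0.5 + math.exp(g)
```

By construction $a_N > 0.5$ for every realization, so the diffusion coefficient is uniformly bounded away from zero, as required by assumption (A2).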

Figure 5.1.1: The triangular mesh of the domain $D$ corresponding to $n = 8$.

The spatial approximation is conducted by applying a finite element method which employs piecewise linear polynomials over a uniform triangulation of $D$ of mesh size $\sqrt{2}/n$, $n \in \mathbb{N}$ (see Figure 5.1.1 for an example). In this section we are particularly interested in studying the convergence of the collocation error as described in Section 4.2.3, and therefore we fix both the dimension $N$ of the truncation and the dimension $n$ of the finite element space. For $N = 2$ and $n = 8$, the first two statistics of the approximated solution are shown in Figure 5.1.2.

Figure 5.1.2: Expectation (left) and variance (right) of the final approximation $u_{N,h,p}$.

They have been computed as described in (3.1.5) with $p = (5, 5)$. Reasonably, the same results can be recovered by means of the Smolyak formula (3.2.7). To investigate the convergence of the full tensor product method we vary the polynomial degree of the space $\mathcal{P}_p(\Gamma)$ defined in Section 3.1. We may estimate the error in each direction separately. Namely, we choose the $m$-th direction, we keep fixed all


More information

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs International Cobinatorics Volue 2011, Article ID 872703, 9 pages doi:10.1155/2011/872703 Research Article On the Isolated Vertices and Connectivity in Rando Intersection Graphs Yilun Shang Institute for

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Kinetic Theory of Gases: Elementary Ideas

Kinetic Theory of Gases: Elementary Ideas Kinetic Theory of Gases: Eleentary Ideas 9th February 011 1 Kinetic Theory: A Discussion Based on a Siplified iew of the Motion of Gases 1.1 Pressure: Consul Engel and Reid Ch. 33.1) for a discussion of

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung

More information

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany.

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany. New upper bound for the B-spline basis condition nuber II. A proof of de Boor's 2 -conjecture K. Scherer Institut fur Angewandte Matheati, Universitat Bonn, 535 Bonn, Gerany and A. Yu. Shadrin Coputing

More information

ORIGAMI CONSTRUCTIONS OF RINGS OF INTEGERS OF IMAGINARY QUADRATIC FIELDS

ORIGAMI CONSTRUCTIONS OF RINGS OF INTEGERS OF IMAGINARY QUADRATIC FIELDS #A34 INTEGERS 17 (017) ORIGAMI CONSTRUCTIONS OF RINGS OF INTEGERS OF IMAGINARY QUADRATIC FIELDS Jürgen Kritschgau Departent of Matheatics, Iowa State University, Aes, Iowa jkritsch@iastateedu Adriana Salerno

More information

APPROXIMATION BY GENERALIZED FABER SERIES IN BERGMAN SPACES ON INFINITE DOMAINS WITH A QUASICONFORMAL BOUNDARY

APPROXIMATION BY GENERALIZED FABER SERIES IN BERGMAN SPACES ON INFINITE DOMAINS WITH A QUASICONFORMAL BOUNDARY NEW ZEALAND JOURNAL OF MATHEMATICS Volue 36 007, 11 APPROXIMATION BY GENERALIZED FABER SERIES IN BERGMAN SPACES ON INFINITE DOMAINS WITH A QUASICONFORMAL BOUNDARY Daniyal M. Israfilov and Yunus E. Yildirir

More information

Kinetic Theory of Gases: Elementary Ideas

Kinetic Theory of Gases: Elementary Ideas Kinetic Theory of Gases: Eleentary Ideas 17th February 2010 1 Kinetic Theory: A Discussion Based on a Siplified iew of the Motion of Gases 1.1 Pressure: Consul Engel and Reid Ch. 33.1) for a discussion

More information

Numerical issues in the implementation of high order polynomial multidomain penalty spectral Galerkin methods for hyperbolic conservation laws

Numerical issues in the implementation of high order polynomial multidomain penalty spectral Galerkin methods for hyperbolic conservation laws Nuerical issues in the ipleentation of high order polynoial ultidoain penalty spectral Galerkin ethods for hyperbolic conservation laws Sigal Gottlieb 1 and Jae-Hun Jung 1, 1 Departent of Matheatics, University

More information

Statistics and Probability Letters

Statistics and Probability Letters Statistics and Probability Letters 79 2009 223 233 Contents lists available at ScienceDirect Statistics and Probability Letters journal hoepage: www.elsevier.co/locate/stapro A CLT for a one-diensional

More information

Max-Product Shepard Approximation Operators

Max-Product Shepard Approximation Operators Max-Product Shepard Approxiation Operators Barnabás Bede 1, Hajie Nobuhara 2, János Fodor 3, Kaoru Hirota 2 1 Departent of Mechanical and Syste Engineering, Bánki Donát Faculty of Mechanical Engineering,

More information

The degree of a typical vertex in generalized random intersection graph models

The degree of a typical vertex in generalized random intersection graph models Discrete Matheatics 306 006 15 165 www.elsevier.co/locate/disc The degree of a typical vertex in generalized rando intersection graph odels Jerzy Jaworski a, Michał Karoński a, Dudley Stark b a Departent

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

A NEW CHARACTERIZATION OF THE CR SPHERE AND THE SHARP EIGENVALUE ESTIMATE FOR THE KOHN LAPLACIAN. 1. Introduction

A NEW CHARACTERIZATION OF THE CR SPHERE AND THE SHARP EIGENVALUE ESTIMATE FOR THE KOHN LAPLACIAN. 1. Introduction A NEW CHARACTERIZATION OF THE CR SPHERE AND THE SHARP EIGENVALUE ESTIATE FOR THE KOHN LAPLACIAN SONG-YING LI, DUONG NGOC SON, AND XIAODONG WANG 1. Introduction Let, θ be a strictly pseudoconvex pseudo-heritian

More information

Lecture October 23. Scribes: Ruixin Qiang and Alana Shine

Lecture October 23. Scribes: Ruixin Qiang and Alana Shine CSCI699: Topics in Learning and Gae Theory Lecture October 23 Lecturer: Ilias Scribes: Ruixin Qiang and Alana Shine Today s topic is auction with saples. 1 Introduction to auctions Definition 1. In a single

More information

ANALYSIS OF A NUMERICAL SOLVER FOR RADIATIVE TRANSPORT EQUATION

ANALYSIS OF A NUMERICAL SOLVER FOR RADIATIVE TRANSPORT EQUATION ANALYSIS OF A NUMERICAL SOLVER FOR RADIATIVE TRANSPORT EQUATION HAO GAO AND HONGKAI ZHAO Abstract. We analyze a nuerical algorith for solving radiative transport equation with vacuu or reflection boundary

More information

On Certain C-Test Words for Free Groups

On Certain C-Test Words for Free Groups Journal of Algebra 247, 509 540 2002 doi:10.1006 jabr.2001.9001, available online at http: www.idealibrary.co on On Certain C-Test Words for Free Groups Donghi Lee Departent of Matheatics, Uni ersity of

More information

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1.

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1. M ath. Res. Lett. 15 (2008), no. 2, 375 388 c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS Van H. Vu Abstract. Let F q be a finite field of order q and P be a polynoial in F q[x

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE CHRISTOPHER J. HILLAR Abstract. A long-standing conjecture asserts that the polynoial p(t = Tr(A + tb ] has nonnegative coefficients whenever is

More information

A Study on B-Spline Wavelets and Wavelet Packets

A Study on B-Spline Wavelets and Wavelet Packets Applied Matheatics 4 5 3-3 Published Online Noveber 4 in SciRes. http://www.scirp.org/ournal/a http://dx.doi.org/.436/a.4.5987 A Study on B-Spline Wavelets and Wavelet Pacets Sana Khan Mohaad Kaliuddin

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

Numerically repeated support splitting and merging phenomena in a porous media equation with strong absorption. Kenji Tomoeda

Numerically repeated support splitting and merging phenomena in a porous media equation with strong absorption. Kenji Tomoeda Journal of Math-for-Industry, Vol. 3 (C-), pp. Nuerically repeated support splitting and erging phenoena in a porous edia equation with strong absorption To the eory of y friend Professor Nakaki. Kenji

More information

Research Article Robust ε-support Vector Regression

Research Article Robust ε-support Vector Regression Matheatical Probles in Engineering, Article ID 373571, 5 pages http://dx.doi.org/10.1155/2014/373571 Research Article Robust ε-support Vector Regression Yuan Lv and Zhong Gan School of Mechanical Engineering,

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Journal of Machine Learning Research 5 (2004) 529-547 Subitted 1/03; Revised 8/03; Published 5/04 Coputable Shell Decoposition Bounds John Langford David McAllester Toyota Technology Institute at Chicago

More information

Sharp Time Data Tradeoffs for Linear Inverse Problems

Sharp Time Data Tradeoffs for Linear Inverse Problems Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used

More information

A := A i : {A i } S. is an algebra. The same object is obtained when the union in required to be disjoint.

A := A i : {A i } S. is an algebra. The same object is obtained when the union in required to be disjoint. 59 6. ABSTRACT MEASURE THEORY Having developed the Lebesgue integral with respect to the general easures, we now have a general concept with few specific exaples to actually test it on. Indeed, so far

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

lecture 37: Linear Multistep Methods: Absolute Stability, Part I lecture 38: Linear Multistep Methods: Absolute Stability, Part II

lecture 37: Linear Multistep Methods: Absolute Stability, Part I lecture 38: Linear Multistep Methods: Absolute Stability, Part II lecture 37: Linear Multistep Methods: Absolute Stability, Part I lecture 3: Linear Multistep Methods: Absolute Stability, Part II 5.7 Linear ultistep ethods: absolute stability At this point, it ay well

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

An Improved Particle Filter with Applications in Ballistic Target Tracking

An Improved Particle Filter with Applications in Ballistic Target Tracking Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information