Numerical Evaluation of Functionals based on Variance Minimisation

Oliver Pajonk

Numerical Evaluation of Functionals based on Variance Minimisation

Diploma Thesis

    ∫_a^b f(x) dx ≈ Σ_{i=1}^n w_i f(x_i)

Pajonk, Oliver: Numerical Evaluation of Functionals based on Variance Minimisation. Diploma Thesis, Version 1.0 (Build 53). This document was created on July 10, 2008 using LaTeX 2ε. Institute of Scientific Computing, Technical University Braunschweig, Hans-Sommer-Strasse 65, D-38106 Braunschweig, Germany.

Declaration: I affirm that I have written this thesis without outside assistance and without using any sources other than those cited, and that the thesis has not previously been submitted in the same or a similar form to any other examination authority, nor been accepted by one as part of an examination. All passages adopted verbatim or in substance from other works are marked as such.

Braunschweig, 10 July 2008
Oliver Pajonk


Abstract

Numerically evaluating the effect of a functional on a function is a very common task in scientific computing. The definite integral of a function over a domain is one example; differentiating a function at a certain point in a certain direction is another. In this diploma thesis we develop, implement and evaluate a generic method to compute the effect of a functional using a linear approximation formula. The method is designed to generate the nodes and weights needed to approximate different functionals using a single set of tools: it regards the target function as a stochastic field and uses a user-defined covariance function for this field to minimise the error made by the approximation formula. The resulting formulas are optimal in an average-case sense: all possible realisations of this stochastic field are taken into account while computing the solution. This results in nodes and weights that evaluate the target functional applied to any realisation with a minimised average error. The space of all realisations of such a stochastic field can be infinite-dimensional, whereas classical approaches often only consider a finite-dimensional space of functions.


Table of Contents

List of Figures
List of Tables

1 Theory
  1.1 The Variance Minimisation Approach
    1.1.1 Preliminaries and Definitions
    1.1.2 Reformulation of the Problem
    1.1.3 Defining the Semi-Norm
    1.1.4 Minimising the Error Functional
    1.1.5 Measuring the Error
    1.1.6 Gaussian Covariance Function
    1.1.7 Error Estimation (Homogeneous Covariance Functions, Frequency Domain Solution, Reproducing Kernels, Reproducing Kernels: Example)
  1.2 Example Functionals
    1.2.1 Integration Functional (Analytic Results, Error Estimation)
    1.2.2 Directional Derivative Functional (Analytic Results)
  1.3 Classical Approaches: Numerical Quadrature
    1.3.1 Newton-Cotes Quadrature Rules (Derivation, Error Estimation, Generalisation, Summary)
    1.3.2 Gaussian Quadrature Rules (Derivation, Error Estimation, Generalisation, Summary)
    1.3.3 Monte Carlo Quadrature
  1.4 Classical Approaches: Numerical Differentiation
    1.4.1 Finite Difference Approach
    1.4.2 Central Difference Approach
    1.4.3 Multiple Dimensions
    1.4.4 Higher Orders of Convergence
    1.4.5 Higher Orders of Differentiation
  1.5 Comparison of the Classical Approaches with Our Approach

2 Implementation and Evaluation
  2.1 Introduction
  2.2 Software Overview (Target Platforms and Additional Software, Architecture)
  2.3 Implementation Details (Determining the Start Values, Integration Functional, Directional Differentiation Functional, Normalised Covariance Function, Preconditioning the Residual, Iterative Solution Strategy, Preconditioning the System of Equations)
  2.4 Results
    2.4.1 Verification with Monomial Covariance Function (The Solutions, Summary)
    2.4.2 Verification of Error Estimation (Verification One: H = I_A, Verification Two: H = Σ_i H_i)
    2.4.3 Definite Integration Functional (Parameter Dependency, Order of Convergence, Problematic Solutions, Example Integrations, Conclusion)
    2.4.4 Directional Derivative Functional (Parameter Dependency, Example Differentiations, Conclusion)
  2.5 Summary

3 Final Remarks
  3.1 Next Research Steps (General Theory, Covariance Function, Solution Process of Definite Integral)
  3.2 Prospects

A Implementation Description
Bibliography

List of Figures

1.1 Error Functions for One Dimension (Integration Functional)
1.2 Convolution Examples for a Homogeneous Covariance Function
1.3 Lowpass Covariance Example
1.4 Gauss-Legendre Example
2.1 UML Diagram of the Architectural Layout
2.2 Typical Start Values for the Two Functionals
2.3 NormalizedDefaultCovariance σ-dependency
2.4 Graphical System of Equations
2.5 Solutions Continuous in σ
2.6 Continuous Path Connecting Solutions
2.7 Iterative Solution Process Schema
2.8 Difficult Solutions for the Integration Functional in Two Dimensions
2.9 Verification for Monomials
2.10 Verification of Error Estimation (1)
2.11 Example of a Linear Combination of H_i
2.12 Second Example of a Linear Combination of H_i
2.13 Verification of Error Estimation (2)
2.14 Verification of Error Estimation (3)
2.15 σ-dependency of Integration Nodes and Weights
2.16 Example Nodes for the Integration Functional
2.17 Order of Convergence for the Integration Functional in Two Dimensions
2.18 Order of Convergence for the Integration Functional in Higher Dimensions
2.19 Example Integration of a Function in Two Dimensions
2.20 Example Integration of a Function in Six Dimensions
2.21 Example Integration of a Function in Ten Dimensions
2.22 σ-dependency of Differentiation Nodes and Weights
2.23 Example Nodes for the Differentiation Functional
2.24 Differentiation Error Dependency on Dimension
2.25 Differentiation Error Dependency on Nodes


List of Tables

2.1 NormalizedNDCovariance Residual σ-dependency for Four Dimensions
2.2 NormalizedNDCovariance Residual σ-dependency for Ten Dimensions
2.3 Condition of M for Ten Nodes in One and Two Dimensions
2.4 Success Rates for Difficult Solutions of the Integration Functional
2.5 Parameters for the Convergence Formula in Two Dimensions
2.6 Parameters for the Convergence Formula in Higher Dimensions


Introduction

One of the best-known problems in scientific computing is to approximate numerically the effect of a certain operation Γ on a function f. A common example is the definite integral of a function over a closed set A ⊆ R^d:

    Γ(f) = ∫_A f(x) dx.

Functional analysis tells us that integration can be regarded as another function working on elements f of a certain function space: it maps each element of this function space (each function) to an element of the base field¹, namely its definite integral over the set A. Such functions working on other functions are called functionals. They will be the object of our interest in this diploma thesis, because the problem stated above can be reformulated very conveniently in this context: we want to approximate the effect of a functional Γ on a function f by a weighted sum of evaluations of f at certain points x_i. Evaluating a function f at x is the effect of the evaluation functional δ_x on f, which is defined by

    δ_x(f) := f(x).

Some may recognise this functional as the Dirac delta, which emerges in many scientific contexts. Now let Γ be the functional that we want to approximate. Then we are able to express our approximation by

    Γ(f) = Σ_{i=1}^n w_i δ_{x_i}(f) + E(f)        (1)

where the w_i ∈ R with i ∈ {1, 2, ..., n} are called weights, the x_i ∈ R^d are called nodes, and E is a functional describing the error that almost inevitably² occurs with any approximation. The goal is to find n nodes and weights which approximate the effect of the functional Γ with a small³ error E. The classical approaches use different ways to obtain the nodes and weights for evaluating a certain functional. In this diploma thesis we develop a single method to find a solution for all functionals in a certain space.

¹ The base field in this diploma thesis will always be R.
² For certain special cases the error may be zero, but this is certainly not the common case.
³ "Small" is understood in an intuitive sense here, as we have not defined it yet.
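Equation (1) is more familiar than its abstract form suggests: every classical quadrature scheme is exactly such a list of nodes and weights for the integration functional. The following minimal sketch (an illustration added here, not part of the thesis software; the function name is hypothetical) spells this out for the composite trapezoidal rule in one dimension:

import numpy as np

def trapezoid_nodes_weights(a, b, n):
    """n equidistant nodes on [a, b] with the composite trapezoidal weights."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0        # the two end points only count half
    return x, w

x, w = trapezoid_nodes_weights(0.0, np.pi, 101)
print(w @ np.sin(x))              # ~2.0, the definite integral of sin over [0, pi]

Here Γ(f) ≈ Σ_i w_i δ_{x_i}(f) = w · f(x); this thesis is about choosing the x_i and w_i well for a given functional Γ.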

Contents

In the first chapter we describe and develop the variance minimisation approach theoretically. We introduce two example functionals for which we develop the concrete sets of equations that form the basis of our implementation. Afterwards we introduce several classical approaches to numerical integration and differentiation; these are the two problems that we want to solve using our variance minimisation technique. Finally we compare them to our approach. The second chapter describes the implementation and discusses the problems that came up, together with their solutions. It presents the results obtained by the variance minimisation technique, describes their properties and compares them to results obtained by classical approaches. The third chapter concludes this diploma thesis with a summary of what we achieved, ideas for further development of the variance minimisation approach and prospects for future applications.

Notation Convention

The following notational conventions are used throughout this thesis. They are straightforward and adopted from the general mathematical literature.

Symbol | Meaning
N | natural numbers without zero
N_0 | natural numbers with zero
d ∈ N | the dimension of something (for example R^d)
n ∈ N | the number of nodes and weights used in a linear approximation
i, j, k, l ∈ N | indices for nodes, weights and elements of a vector
w ∈ R | a weight
x, y ∈ R^d | d-dimensional variables
(x)_i | the i-th component of a d-dimensional variable
x_0 ∈ R^d | the point at which a differentiation takes place
v ∈ R^d | the direction of a differentiation
U ⊆ R^d | an open subset
A ⊆ R^d | a closed subset
Γ, G, H ∈ D'(U) | distributions
Γ_f ∈ D'(U) | a regular distribution
G_x(H_y(t(x, y))) | two distributions applied to a target function; the indices denote on which variable a distribution works: here G works on the x-part of t and H on the y-part
t | a covariance function
u | a homogeneous covariance function
supp φ | the closure of {x | φ(x) ≠ 0}, called the support of the function φ
M := (m_{i,j}) | a matrix with entries m_{i,j}

Chapter 1: Theory

This chapter describes the theory of the variance minimisation approach. We introduce two example functionals that will accompany us throughout the thesis. We also present several classical approaches that solve these two problems and compare them to our approach.

1.1 The Variance Minimisation Approach

The first obstacle on our way to a mathematical foundation for our approach is to find a function space such that, together with a certain topology, all the functionals that we want to approximate are linear and continuous. Then we can identify these functionals as members of the dual space of this function space and can use the wealth of functional analysis for our considerations.

1.1.1 Preliminaries and Definitions

It turns out that the theory of generalised functions, or distributions, is a suitable mathematical context for our approach. If you are familiar with the basic concepts of distribution theory you can safely skip this section and continue with the main idea of the variance minimisation approach in section 1.1.2. Let U ⊆ R^d be an open set. Then we call D(U) = C_0^∞(U) the space of all infinitely often differentiable (or smooth) functions with compact support in U, meaning that φ ∈ D(U) implies supp φ ⊆ U. D(U) is obviously a vector space. To turn it into a topological vector space we have to introduce a notion of convergence. Most of the following definitions and theorems are adopted from [3]. A slightly different way of introducing D(U) is performed in [3], p. 399ff.

Definition 1.1.1. We call the vector α = (α_1, α_2, ..., α_d) with α_i ∈ N_0 a d-dimensional multi-index. We define the following:

1. |α| := Σ_{i=1}^d α_i is called the order of the multi-index;
2. x^α := (x)_1^{α_1} ··· (x)_d^{α_d} defines a component-wise exponent for the vector x ∈ R^d;
3. D^α := ∂^{|α|} / (∂x_1^{α_1} ··· ∂x_d^{α_d}) simplifies the notation for partial derivatives considerably.

Definition 1.1.2. We say that the sequence (φ_ν), φ_ν ∈ D(U), converges to φ ∈ D(U) iff

1. there exists a compact set K ⊆ U such that supp φ_ν ⊆ K for each ν ∈ N, and
2. the sequence D^α φ_ν converges to D^α φ uniformly on K for each multi-index α ∈ N_0^d.

Definition 1.1.3. A linear form defined on D(U) which is continuous with respect to the convergence introduced in definition 1.1.2 is called a distribution. The set of distributions is denoted by D'(U).

Now we are in a position to say when a linear form on D(U) is continuous:

Theorem 1.1.1. A linear form Γ defined on D(U) is continuous iff for every compact set K ⊆ U there exist a constant C > 0 and an index m such that

    |Γ(φ)| ≤ C ||φ||_{m,K},        (1.1)

where ||φ||_{m,K} = max_{|α| ≤ m} sup_{x ∈ K} |(D^α φ)(x)| and supp φ ⊆ K (without proof, see [3]).

Corollary 1.1.1. The evaluation functional δ_x is, together with this notion of convergence, a continuous linear functional on D(U).

Corollary 1.1.2. Let f ∈ L¹_loc, the space of locally integrable functions. We write

    Γ_f(φ) := ∫_{R^d} f φ.

Then Γ_f is in D' and the distribution Γ_f is said to be regular.

Definition 1.1.4. Let G, H ∈ D'(U). The expression G + H defined as

    (G + H)(φ) := G(φ) + H(φ),   φ ∈ D(U),

is called the sum of the distributions G and H.

Definition 1.1.5. Let g be in C^∞(R^d), φ ∈ D(U) and Γ ∈ D'(U) a distribution. Then the mapping φ ↦ Γ(gφ) defines a new distribution on D(U). We conveniently write gΓ for this type of product.

Theorem 1.1.2. If G ∈ D'(R^{d_1}) and H ∈ D'(R^{d_2}), then there exists exactly one distribution L ∈ D'(R^{d_1+d_2}) such that

    L(φ_1 ⊗ φ_2) = G(φ_1) H(φ_2)

for φ_1 ∈ D(R^{d_1}) and φ_2 ∈ D(R^{d_2}). Moreover, for φ ∈ D(R^{d_1} × R^{d_2}),

    L(φ(·,·)) = G_x(H_y(φ(x, y))) = H_y(G_x(φ(x, y)))

(without proof, see [3]).

Definition 1.1.6. Let φ be in D(U). Then the linear form defined by

    φ ↦ (−1)^{|α|} Γ(D^α φ)

is called the α-derivative of Γ. This form will be denoted by D^{(α)} Γ. This differentiation of distributions is a linear continuous operation in D'(U).

Using these definitions we can reformulate equation (1) in the context of distributions.

1.1.2 Reformulation of the Problem

Let f: R^d → R be a function from D(R^d) and Γ a distribution (or functional¹) from the dual space D'(R^d), meaning that Γ: D(R^d) → R. We now recall equation (1), which describes what we want to achieve:

    Γ(f) = Σ_{i=1}^n w_i δ_{x_i}(f) + E(f).

¹ We use these two terms synonymously in this diploma thesis.

This formulation of the problem does not help us very much because we do not know the target function f. Thus we know neither Γ(f) nor E(f). The only thing we have is δ_x(f), meaning that we can evaluate the function f at some point x. Now we exploit the fact that we turned the space where Γ, E and δ come from into a vector space, namely the dual space of D(R^d). We can reformulate the problem independently of f:

    Γ(f) = Σ_{i=1}^n w_i δ_{x_i}(f) + E(f)   ⟺   Γ = Σ_{i=1}^n w_i δ_{x_i} + E.

The goal is to find nodes and weights which create a small error E, so we express the error functional for this approximation in the following obvious way:

    E = Γ − Σ_{i=1}^n w_i δ_{x_i}.

Now we need a notion of size on the space where the error functional comes from, to be able to measure it. This will in turn enable us to minimise the error functional with respect to w_i and x_i. The way to go is to introduce a norm, or at least a semi-norm, on D'(R^d).

1.1.3 Defining the Semi-Norm

Since the target function f is unknown, it makes sense to regard it as a stochastic field. This enables us to treat it within probability theory, allowing us to derive some interesting results. Let (Ω, Σ, P) be a probability space², meaning that P(Ω) = 1. Now let f ∈ D(R^d) and ω ∈ Ω. Then we define

    ω ↦ f_ω,

which formally turns the unknown function f into a stochastic field f_ω. As f_ω is in D(R^d) for each ω, we can apply a functional Γ ∈ D'(R^d) to f_ω. This immediately implies the following definition of a scalar-valued random variable X_Γ: Ω → R:

    X_Γ: ω ↦ Γ(f_ω).

The next step is to remember that the covariance of two random variables is a positive semi-definite, symmetric and bilinear form, and that any such form s defines a semi-norm by ||·|| := √(s(·,·)). This is what we exploit to create the semi-norm we need.

Definition 1.1.7. Let X_G, X_H be the random variables associated to the functionals G, H ∈ D'(R^d) in the way described above. Then we define the following symmetric, positive semi-definite bilinear form on D'(R^d):

    s_t(G, H) := cov(X_G, X_H).        (1.2)

We use this form to define the following semi-norm on D'(R^d):

    ||G||_t := √(s_t(G, G)).        (1.3)

We call this the t-(semi-)norm.

² When the tuple (Ω, Σ, P) is a probability space we call Ω the set of states of nature, Σ the set of events and P a probability measure. The exact definition does not matter, though; we only use it as a formal tool.

The fact that we use a covariance function to define the semi-norm justifies the title of this diploma thesis: as described above we will minimise the error functional in this norm to determine the nodes and weights, so we in fact minimise the variance³ of a functional. The last missing piece now is the covariance function cov(X_G, X_H). You may have wondered why the semi-norm is called the t-norm when in fact there is no t in its definition, but this becomes clear now as we dismantle the covariance function. Since cov(X_G, X_H) is a very abstract form, it does not help us when it comes to creating concrete formulas. We have to see how this function works internally: for a fixed but unknown ω we have

    cov(X_G, X_H) = cov(G(f_ω), H(f_ω))
                  = G_x(H_y(cov(f_ω(x), f_ω(y))))
                  = G_x(H_y(cov(δ_x(f_ω), δ_y(f_ω)))).

This shows us that if we know the covariance function for the Dirac distribution applied to our unknown target function f_ω for all combinations of x, y, we can derive the covariance function for all distributions. We call this special covariance function the t function and write

    t(x, y) := cov(δ_x(f_ω), δ_y(f_ω))   ⟹   cov(X_G, X_H) = G_x(H_y(t(x, y))).

For symmetry reasons it is obvious that the following equalities are also true:

    G_x(H_y(t(x, y))) = H_y(G_x(t(x, y))) = H_x(G_y(t(x, y))) = G_y(H_x(t(x, y))).

The only requirements on t are that it indeed has to be a covariance function and that t ∈ D(R^d × R^d). If we look at theorem 1.1.2 we immediately see that this definition is consistent with distribution theory. The last requirement is indeed a really strong one, but we only need it for our theoretical considerations. For concrete functionals we can decide how many times differentiable the function really has to be as the case arises.

³ Remember that var(X) = cov(X, X).
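The identity t(x, y) = cov(δ_x(f_ω), δ_y(f_ω)) can be made tangible numerically: sampling realisations f_ω of a zero-mean field whose covariance is the Gaussian t of section 1.1.6, the empirical covariance of the point evaluations must reproduce t. The sketch below assumes a Gaussian field purely for sampling convenience; the theory above makes no such distributional assumption:

import numpy as np

sigma = 0.3
t = lambda x, y: np.exp(-0.5 * ((x - y) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, 1.0, 30)
K = t(grid[:, None], grid[None, :])              # covariance of (delta_x(f))_x on the grid
L = np.linalg.cholesky(K + 1e-8 * np.eye(30))    # small jitter for numerical stability
samples = L @ np.random.randn(30, 50000)         # 50000 realisations f_omega on the grid

K_hat = np.cov(samples)                          # empirical cov(delta_x(f), delta_y(f))
print(np.abs(K_hat - K).max())                   # -> small, consistent with t(x, y)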

1.1.4 Minimising the Error Functional

Now that we have a notion of size on the space of distributions, we are able to measure the error functional E. This enables us to formulate our goal in a formal way: we want to find nodes x_i and weights w_i such that

    min_{w_i, x_i} ||E||_t².

If we simply look at ||E||_t² as a function that maps the nodes and weights into R, we see that at a minimum the variations in w_i and x_i, namely the first partial derivatives, have to be zero. This results in the following necessary condition at a minimum:

    ∂||E||_t²/∂(δE) = ∂s_t(E, E)/∂(δE) = 2 s_t(E, ∂E/∂(δE)) ≐ 0.        (1.4)

The formal argument δE stands for the variations of E, which are ∂E/∂w_k and ∂E/∂(x_k)_l for all k ∈ [1, ..., n] and l ∈ [1, ..., d], as we have mentioned above. Thus we compute:

    ∂E/∂w_k = ∂(Γ − Σ_{i=1}^n w_i δ_{x_i})/∂w_k = −∂(w_k δ_{x_k})/∂w_k = −δ_{x_k}.

Now we insert this result into equation (1.4):

    −2 s_t(E, δ_{x_k}) = −2 E_x(δ_{x_k,y}(t(x, y)))
                       = −2 E_x(t(x, x_k))
                       = −2 [Γ_x(t(x, x_k)) − Σ_{i=1}^n w_i δ_{x_i,x}(t(x, x_k))]
                       = 2 [Σ_{i=1}^n w_i t(x_i, x_k) − Γ_x(t(x, x_k))].        (1.5)

The second partial derivative works in essentially the same way:

    ∂E/∂(x_k)_l = ∂(Γ − Σ_{i=1}^n w_i δ_{x_i})/∂(x_k)_l = −w_k ∂δ_{x_k}/∂(x_k)_l.

The partial derivative of a distribution may seem awkward here, and we are tempted to apply definition 1.1.6, but we have to consider the following: we want to compute the partial derivative with respect to the point at which the functional is applied, and not the derivative of the functional itself. If we formally apply the functional to a function f and then compute the partial derivative we have:

    ∂δ_{x_k}(f)/∂(x_k)_l = ∂f(x_k)/∂(x_k)_l = δ_{x_k}(∂f(x)/∂(x)_l).

We insert the result of our second partial derivative into equation (1.4):

    −2 s_t(E, w_k ∂δ_{x_k}/∂(x_k)_l) = −2 w_k s_t(E, ∂δ_{x_k}/∂(x_k)_l)
        = −2 w_k E_x((∂δ_{x_k,y}/∂(x_k)_l)(t(x, y)))
        = −2 w_k E_x(δ_{x_k,y}(∂t(x, y)/∂(y)_l))
        = −2 w_k [Γ_x(δ_{x_k,y}(∂t(x, y)/∂(y)_l)) − Σ_{i=1}^n w_i δ_{x_i,x}(∂t(x, y)/∂(y)_l)|_{y=x_k}]
        = 2 w_k [Σ_{i=1}^n w_i ∂t(x_i, y)/∂(y)_l |_{y=x_k} − Γ_x(∂t(x, y)/∂(y)_l |_{y=x_k})].        (1.6)

Equation (1.5) states that s_t(E, δ_x) has zeros at x = x_k, whereas equation (1.6) states that its first partial derivatives vanish in these zeros, too. We use these properties in our error estimation theory in section 1.1.7.

1.1.5 Measuring the Error

To determine the quality of a solution, or to decide whether some nodes and weights are a solution at all, we need a concrete formula for the t-norm of E itself. This is easy enough to derive:

    ||E||_t² = s_t(E, E) = E_x(E_y(t(x, y)))
             = E_x(Γ_y(t(x, y)) − Σ_{j=1}^n w_j δ_{x_j,y}(t(x, y)))
             = E_x(Γ_y(t(x, y)) − Σ_{j=1}^n w_j t(x, x_j))
             = Γ_x(Γ_y(t(x, y))) − Σ_{j=1}^n w_j Γ_x(t(x, x_j)) − Σ_{i=1}^n w_i [Γ_y(t(x_i, y)) − Σ_{j=1}^n w_j t(x_i, x_j)]
             = Γ_x(Γ_y(t(x, y))) − 2 Σ_{i=1}^n w_i Γ_y(t(x_i, y)) + Σ_{i=1}^n Σ_{j=1}^n w_i w_j t(x_i, x_j).        (1.7)

The ability to verify whether the t-norm of the error functional is indeed small for a possible solution allows us to identify problems with our solution algorithm: it could happen that the algorithm gets stuck in a local minimum and reports a solution; since ||E||_t² is not small enough there, this problem can be identified.
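Equation (1.7) reduces the t-norm of E to three ingredients: the scalar Γ_x(Γ_y(t)), the function y ↦ Γ(t(·, y)) evaluated at the nodes, and the kernel matrix t(x_i, x_j). A minimal helper for the one-dimensional case (a sketch; the argument names are ours):

import numpy as np

def error_norm_sq(gamma_gamma_t, gamma_t, t, x, w):
    """||E||_t^2 = Gamma_x Gamma_y(t) - 2 sum_j w_j Gamma(t(., x_j))
                   + sum_ij w_i w_j t(x_i, x_j), cf. equation (1.7)."""
    x, w = np.asarray(x), np.asarray(w)
    return (gamma_gamma_t
            - 2.0 * w @ gamma_t(x)
            + w @ t(x[:, None], x[None, :]) @ w)

For the definite integration functional all three ingredients are available in closed form; see the analytic results in section 1.2.1.1.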

If the nodes and weights we insert into equation (1.7) really are a solution, we can simplify it, because in a solution we have s_t(E, δ_{x_i}) = 0 for all x_i, as required by equation (1.5):

    ||E||_t² = s_t(E, E)
             = E_x(Γ_y(t(x, y)) − Σ_{i=1}^n w_i δ_{x_i,y}(t(x, y)))
             = E_x(Γ_y(t(x, y))) − Σ_{i=1}^n w_i δ_{x_i,y}(E_x(t(x, y)))
             = s_t(E, Γ) − Σ_{i=1}^n w_i s_t(E, δ_{x_i})   [the sum is 0 in a solution]
             = s_t(E, Γ).        (1.8)

This can save us superfluous calculations when computing the t-norm for a solution. Additionally, this result helps us with the error estimation, as we will see later.

1.1.6 Gaussian Covariance Function

Before we start to develop the error estimation theory for our approach in section 1.1.7, we introduce one of the two last components that are missing in our theory: a concrete covariance function. We then use it in the next section to show some examples and depict the theoretical results. We choose

    t(x, y) = (1/(σ√(2π)))^d e^{−(1/2)(||x−y||₂/σ)²} = (1/(σ√(2π)))^d e^{−Σ_{i=1}^d ((x)_i−(y)_i)²/(2σ²)}        (1.9)

as our first covariance function. You may recognise this as a d-dimensional Gaussian (or normal) distribution over the distance of the two points x and y. Its mean value is zero and it uses the same standard deviation σ for each dimension. This covariance function has several advantages that will help us on our way to create concrete solutions for some functional Γ:

- It is relatively easy to differentiate and integrate. This reduces the complexity when it comes to applying a functional Γ to it and its partial derivatives.
- As it is easy to handle, we can apply parametrised functionals to it. For example, the integration region for the definite integration functional in section 1.2.1 is introduced by parameters rather than concrete values. This allows us to change it easily later, and we do not have to re-calculate our analytic results.
- Its multi-dimensional content is normalised to 1.0, which turns out to have numerical advantages when it comes to higher-dimensional computations (see also the normalised covariance function in chapter 2).
- We can adapt it to different target functions by changing the standard deviation σ. The effect of different settings for the standard deviation is simple: the function changes its width and, through normalisation, its height. A small implementation sketch follows below.
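A direct implementation of (1.9) is short; the following sketch (function name illustrative) also checks the normalisation claim for d = 1 by integrating out one argument:

import numpy as np
from scipy.integrate import quad

def gauss_cov(x, y, sigma):
    """Equation (1.9): t(x, y) = (sigma*sqrt(2*pi))**(-d) * exp(-||x-y||^2 / (2 sigma^2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x.size
    return (sigma * np.sqrt(2.0 * np.pi)) ** (-d) * np.exp(
        -0.5 * np.sum((x - y) ** 2) / sigma ** 2)

# Content normalised to 1.0: integrating over one argument gives 1 (d = 1 check).
print(quad(lambda y: gauss_cov([0.0], [y], sigma=0.7), -np.inf, np.inf)[0])  # ~1.0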

Figure 1.1: Error function f(y) = E_x(t(x, y)) = s_t(E, δ_y) for the integration functional in one dimension. The left figure shows it for a solution with four nodes, the right one for 20 nodes.

Note that this covariance function is not fitted to a special class of functions like the covariance functions created in [5]. We choose a different way: we first define the covariance function that fits our needs, and then try to find functions for which we can evaluate Γ(f) with a defined error.

1.1.7 Error Estimation

The error estimation for our variance minimisation approach turns out to be quite straightforward. The interesting part is to find the space of functions for which we can give an error estimation. For the Newton-Cotes rules and Gaussian quadrature, for example, it is the space of interpolation polynomials. For our approach this depends on the chosen covariance function t(x, y).

Theorem 1.1.3. Let f be a function of the form

    f(x) = s_t(δ_x, H)        (1.10)

with H ∈ D'(R^d). Then we can show that E(f) = s_t(E, H). Moreover, if the distribution H is regular, its associated function h is integrable over R^d, and E_x(t(x, y)) ≥ 0 and is integrable over R^d, then

    E(f) = α s_t(E, I_{R^d})

for some α ∈ [inf(h), sup(h)].

Figure 1.1 shows why E_x(t(x, y)) ≥ 0 in a solution is a reasonable assumption. It depicts the error function for two example solutions of the definite integration functional (see section 1.2.1) in one dimension. We see that it is positive and all forced zeros (the nodes) are stationary zeros (remember the end of section 1.1.4). The same behaviour can be observed in multiple dimensions.

Proof (I). Let f and H be as described in theorem 1.1.3. Then we have:

    E(f) = Γ_x(f) − Σ_{i=1}^n w_i δ_{x_i,x}(f)
         = Γ_x(s_t(δ_x, H)) − Σ_{i=1}^n w_i δ_{x_i,x}(s_t(δ_x, H))
         = s_t(Γ, H) − s_t(Σ_{i=1}^n w_i δ_{x_i}, H)
         = s_t(Γ − Σ_{i=1}^n w_i δ_{x_i}, H)
         = s_t(E, H).        (1.11)

Equation (1.11) gives us the exact error that we make with our approximation for functions f of this special form.

Proof (II). If furthermore H and t have the additional properties stated above, we can refine the previous result as follows:

    s_t(E, H) = E_x(H_y(t(x, y))) = H_y(E_x(t(x, y))) = ∫_{R^d} h(y) E_x(t(x, y)) dy.        (1.12)

With the above assumptions placed on h we can now write, according to the first mean value theorem for integration:

    ∫_{R^d} h(y) E_x(t(x, y)) dy = α ∫_{R^d} E_x(t(x, y)) dy.

Finally we end up with

    E(f) = α s_t(E, I_{R^d}),        (1.13)

which can be estimated as

    |E(f)| ≤ sup(h) s_t(E, I_{R^d}).        (1.14)

Equation (1.14) gives us a simpler version of equation (1.11): we do not have to know the whole functional H; it is sufficient to know sup(h) to give an error bound.

1.1.7.1 Homogeneous Covariance Functions

For covariance functions like the Gaussian covariance given in section 1.1.6 we can derive an interesting result: if t is homogeneous, meaning that t(x, y) = u(x − y), and f is as in theorem 1.1.3, we can further concretise the form of f:

    f(x) = s_t(δ_x, H) = ∫_{R^d} h(y) u(x − y) dy = (h ∗ u)(x)        (1.15)

where ∗ is the convolution operation. Equation (1.15) shows that if we have a homogeneous covariance function, the class of functions for which we know an error estimation consists of convolutions⁴ of our covariance function u with some other function h. Figure 1.2 shows the convolution of two examples for h with the Gaussian covariance function introduced in section 1.1.6. The classes of functions created by homogeneous covariance functions are easy to understand and analyse. For other, non-homogeneous covariance functions, s_t(δ_x, H) does not define functions f via the convolution operation. For example, the covariance function used for verification in section 2.4.1 is not a homogeneous one.

⁴ The introduction of the convolution operation immediately leads to the Fourier transform. We exploit this in the next section.

Figure 1.2: Convolution of a homogeneous covariance function ((b) and (e)) with two examples of h functions. The resulting functions (c) and (f) are smoothed versions of the original functions (a) and (d). The parameter σ of the covariance function is set to 1.

1.1.7.2 Frequency Domain Solution

For any homogeneous covariance function t(x, y) = u(x − y) and two regular distributions Γ_g, Γ_h ∈ D'(U) we can derive the following (using that g is real-valued):

    s_t(Γ_g, Γ_h) = ∫_{R^d} ∫_{R^d} g(x) u(x − y) h(y) dx dy
                  = ∫_{R^d} g(x) ∫_{R^d} h(y) u(x − y) dy dx
                  = ∫_{R^d} g(x) [h ∗ u](x) dx
                  = ∫_{R^d} g(x) ∫_{R^d} h̃(ω) ũ(ω) e^{i2πω·x} dω dx
                  = ∫_{R^d} h̃(ω) ũ(ω) ∫_{R^d} g(x) e^{i2πω·x} dx dω
                  = ∫_{R^d} h̃(ω) ũ(ω) conj(∫_{R^d} g(x) e^{−i2πω·x} dx) dω
                  = ∫_{R^d} h̃(ω) ũ(ω) conj(g̃(ω)) dω        (1.16)

where conj(g̃) denotes the complex conjugate of g̃, and g̃, h̃, ũ denote the Fourier transforms of g, h, u. We see that the Fourier transform diagonalises the covariance function and we end up with a single integral over all three functions. This result can lower the complexity of finding the analytical solutions we need for a concrete regular functional Γ. Additionally, it can be easier to find positive definite functions u in Fourier space, as their Fourier transforms ũ are simply positive functions.
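Equation (1.16) can be verified numerically on a periodic grid, where the Fourier transform turns into an FFT. In the following sketch the densities g and h and all grid parameters are arbitrary illustrative choices; the two evaluations of s_t(Γ_g, Γ_h) agree essentially to machine precision because all supports stay well inside the grid:

import numpy as np

n, L = 1024, 40.0
dx = L / n
x = (np.arange(n) - n // 2) * dx
sigma = 0.5

u = lambda z: np.exp(-0.5 * (z / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
g = np.exp(-x ** 2)                               # density of the regular distribution Gamma_g
h = np.where(np.abs(x - 1.0) < 1.0, 1.0, 0.0)     # density of Gamma_h

# Direct double integral: s_t(Gamma_g, Gamma_h) = int int g(x) u(x - y) h(y) dy dx
s_direct = g @ u(x[:, None] - x[None, :]) @ h * dx ** 2

# Frequency domain, equation (1.16): int h~(w) u~(w) conj(g~(w)) dw
ft = lambda v: np.fft.fft(np.fft.ifftshift(v)) * dx      # FT samples at w_k = k/(n*dx)
s_freq = (np.sum(ft(h) * ft(u(x)) * np.conj(ft(g))) / (n * dx)).real
print(s_direct, s_freq)                                  # the two values agree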

1.1.7.3 Reproducing Kernels

One drawback of formula (1.10) is that normally we do not know the distribution H that creates a function f of this special form. However, there are cases where the function f has a very interesting property for a matching covariance function t. First we remember that every locally integrable function f defines a distribution Γ_f (see corollary 1.1.2):

    Γ_f(φ) := ∫_{R^d} f(x) φ(x) dx.        (1.17)

If we are able to find functions f with the property

    f(x) = s_t(δ_x, Γ_f)        (1.18)

we have gained an advantage: we know the function h in formula (1.13): it is the function f itself. This helps us in determining sup(h), of course. The converse way works, too: if we can find a covariance function that makes equation (1.18) hold for a given function, we have the same advantage. The interesting part now is to describe those functions that have the special property stated in equation (1.18), and how this property is connected with the covariance function. To be able to do this we first have to turn s_t(G, H) into a non-degenerate bilinear form, which allows us to define a scalar product on the space of distributions. The popular way is to define an equivalence relation in the following way:

    G ∼ 0  :⟺  s_t(G, G) = 0.        (1.19)

Then we create a factor space with the help of this equivalence relation⁵ and define s̃_t(G, H) := s_t(G, H) on D'(R^d)/∼. The equivalence relation defines equivalence classes via G ∼ H :⟺ s_t(G − H, G − H) = 0. From now on we calculate with representatives of the equivalence classes. The procedure applied here is widely known, so we will not go into details. We additionally do not rename every instance of s_t; it is implied that we mean the correct one. The important result achieved by changing from s_t to s̃_t is that

    ⟨G, H⟩ := s̃_t(G, H)        (1.20)

now is a scalar product, as s̃_t is turned from a positive semi-definite bilinear form into a positive definite one. Note that this turns ||·||_t from a semi-norm into a norm. This scalar product enables us to perform the following: for each y we define a special regular distribution:

    Γ_{t_y}(φ) := ∫_{R^d} t(x, y) φ(x) dx = ∫_{R^d} t_y(x) φ(x) dx.        (1.21)

Then we define a space of distributions with a special property:

    F_t := {Γ_f | ⟨Γ_f, Γ_{t_y}⟩ = ⟨Γ_f, δ_y⟩}.        (1.22)

⁵ Remember that s_t(G, G) = 0 does currently not imply that G = 0. This is the problem that we are going to solve here.

Theorem 1.1.4. The space F_t contains those regular distributions Γ_f that are based on functions fulfilling equation (1.18), and if equation (1.18) also holds for f(x) = t_y(x), then Γ_{t_y} is the reproducing kernel for this space.

Proof. We have to show that (1) Γ_{t_y} ∈ F_t for any y, and (2) ⟨Γ_f, Γ_{t_y}⟩ = ⟨Γ_f, δ_y⟩ for any Γ_f built from a function f of the form (1.18). Part (2) is called the reproducing property in the literature. Part (1) is immediately evident if equation (1.18) also holds for f(x) = t_y(x). Part (2) requires a little more work:

    ⟨Γ_f, Γ_{t_y}⟩ = s_t(Γ_f, Γ_{t_y}) = Γ_{t_y,η}(s_t(Γ_f, δ_η)),

which leads with equation (1.18) to

    = Γ_{t_y,η}(f(η))
    = ∫_{R^d} f(η) t_y(η) dη
    = ∫_{R^d} f(η) t(η, y) dη
    = s_t(Γ_f, δ_y)
    = ⟨Γ_f, δ_y⟩.

Now we have shown that F_t contains distributions built from functions of the form of equation (1.18), and if Γ_{t_y} ∈ F_t then Γ_{t_y} is indeed the reproducing kernel for these distributions.

This result opens up the wide field of reproducing kernel Hilbert spaces, so we can use it for further development of our variance minimisation approach; possibly not only for the error estimation, but for the structure of the approach itself. A very good and in-depth overview of RKHS, as these spaces are abbreviated, can be found in [1]. Further analysis of this application of reproducing kernels is omitted here, as it would go beyond the scope of this diploma thesis.

1.1.7.4 Reproducing Kernels: Example

We conclude this section with an example of a reproducing kernel. We can depict its reproducing property if we apply the theory to a special homogeneous covariance function as introduced in section 1.1.7.1. From signal processing we know the famous sinus cardinalis or sinc function⁶. It is a perfect lowpass filter: its Fourier transform is the rect function⁷. Figure 1.3 shows both functions. Additionally, it has all properties of a homogeneous covariance function, so we can rewrite equation (1.18) using convolution:

    f(x) = (f ∗ sinc)(x).

This is an easy statement: the functions f that match this equation are those which contain only frequencies lower than the cut-off frequency of the lowpass filter sinc. And theorem 1.1.4 states that if we convolve these functions with the same lowpass filter again, the functions do not change, which is of course true:

⁶ This is defined as sinc(x) := sin(x)/x. Its discontinuity at x = 0 is removed by setting sinc(0) := 1.
⁷ This is defined as rect(x) := 1 for |x| ≤ 1/2, and 0 else.

Figure 1.3: Lowpass covariance function: the left part shows the sinc function, the right part its Fourier transform, the rect function. It is easy to see that it is a perfect lowpass filter.

    f(x) ∗ sinc(x) = (f ∗ sinc)(x) ∗ sinc(x) = (f ∗ sinc ∗ sinc)(x) = (f ∗ sinc)(x) = f(x).

This is what we would expect from the signal transmission point of view: filtering a signal multiple times with the same perfect lowpass filter does not change the signal more than filtering it once. Additionally, we can identify the convolution operation used here with the scalar product introduced above, so we have just depicted the identity ⟨Γ_f, Γ_{sinc_y}⟩ = ⟨Γ_f, δ_y⟩.

1.2 Example Functionals

To demonstrate the capabilities of our general functional evaluation approach, we develop the abstract equations (1.5) and (1.6) for two different functionals and apply the results to the Gaussian covariance function described in section 1.1.6. The resulting equations are then used in chapter 2 to implement these two functionals and evaluate the variance minimisation approach.

1.2.1 Integration Functional

Let I_A: φ ↦ ∫_A φ be the definite d-dimensional integration functional over A ⊆ R^d. This defines a regular distribution of the form Γ_f (see corollary 1.1.2) with f = χ_A being the indicator function⁸ of A. We insert this functional into equation (1.5), which gives:

    ∂||E||_t²/∂w_k = 2 [Σ_{i=1}^n w_i t(x_i, x_k) − I_{A,x}(t(x, x_k))]
                   = 2 [Σ_{i=1}^n w_i t(x_i, x_k) − ∫_A t(x, x_k) dx].        (1.23)

⁸ Remember that χ_A(ξ) = 1 if ξ ∈ A, and 0 if ξ ∉ A.

The same procedure is applied to equation (1.6):

    ∂||E||_t²/∂(x_k)_l = 2 w_k [Σ_{i=1}^n w_i ∂t(x_i, y)/∂(y)_l |_{y=x_k} − I_{A,x}(∂t(x, y)/∂(y)_l |_{y=x_k})]
                       = 2 w_k [Σ_{i=1}^n w_i ∂t(x_i, y)/∂(y)_l |_{y=x_k} − ∫_A ∂t(x, y)/∂(y)_l |_{y=x_k} dx].        (1.24)

1.2.1.1 Analytic Results

Now we know which operations on the covariance function t we have to solve analytically in order to solve these equations numerically: ∫_A t(x, y) dx, ∂t(x, y)/∂(y)_k, and ∫_A ∂t(x, y)/∂(y)_k dx. We apply them to our Gaussian covariance function introduced in section 1.1.6, with A = [a, b] a d-dimensional box:

    ∫_A t(x, y) dx = ∏_{i=1}^d (1/2) [erf(((b)_i − (y)_i)/(√2 σ)) − erf(((a)_i − (y)_i)/(√2 σ))]        (1.25)

    ∂t(x, y)/∂(y)_k = (((x)_k − (y)_k)/σ²) t(x, y)        (1.26)

    ∫_A ∂t(x, y)/∂(y)_k dx = (1/(σ√(2π)))^d (σ√(π/2))^{d−1} ∏_{i≠k} [erf(((b)_i − (y)_i)/(√2 σ)) − erf(((a)_i − (y)_i)/(√2 σ))]
                             × [e^{−((a)_k − (y)_k)²/(2σ²)} − e^{−((b)_k − (y)_k)²/(2σ²)}]        (1.27)

where k, l ∈ [1, ..., d] and erf is the error function⁹. With these results we are ready to implement equations (1.23) and (1.24) and calculate some integration nodes and weights. As we want to be able to compute ||E||_t² too, we additionally need the following result (see also section 1.1.5):

    ∫_A ∫_A t(x, y) dx dy = ∏_{i=1}^d [((b)_i − (a)_i) erf(((b)_i − (a)_i)/(√2 σ)) + (2σ/√(2π)) (e^{−((a)_i − (b)_i)²/(2σ²)} − 1)]        (1.28)

⁹ The error function (sometimes called the Gauss error function) is defined as erf(x) := (2/√π) ∫_0^x e^{−t²} dt.
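The analytic results (1.25) and (1.28), inserted into the error norm (1.7), are everything needed to actually compute integration nodes and weights in one dimension, A = [a, b]. The sketch below minimises ||E||_t² with a general-purpose BFGS optimiser; the start values and σ are illustrative choices, and this is a stand-in for (not a copy of) the iterative solution strategy implemented in chapter 2:

import numpy as np
from scipy.special import erf
from scipy.optimize import minimize

a, b, sigma, n = 0.0, 1.0, 0.15, 5
s2 = np.sqrt(2.0) * sigma

t = lambda x, y: np.exp(-0.5 * ((x - y) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
I_t = lambda y: 0.5 * (erf((b - y) / s2) - erf((a - y) / s2))           # (1.25), d = 1
II_t = ((b - a) * erf((b - a) / s2)                                     # (1.28), d = 1
        + sigma * np.sqrt(2 / np.pi) * (np.exp(-(b - a) ** 2 / (2 * sigma ** 2)) - 1))

def err_norm_sq(p):                                                     # (1.7) with Gamma = I_A
    w, x = p[:n], p[n:]
    return II_t - 2.0 * w @ I_t(x) + w @ t(x[:, None], x[None, :]) @ w

p0 = np.concatenate([np.full(n, (b - a) / n),                           # equal start weights,
                     np.linspace(a, b, n + 2)[1:-1]])                   # equidistant inner nodes
res = minimize(err_norm_sq, p0, method="BFGS")
w, x = res.x[:n], res.x[n:]
print("||E||_t^2  =", res.fun)                 # small if the minimisation succeeded
print("weight sum =", w.sum())                 # ~ b - a, cf. (1.30)
print("integral of sin:", w @ np.sin(x), " exact:", 1.0 - np.cos(1.0))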

1.2.1.2 Error Estimation

For the definite integration functional we can further develop our error estimation theory. When Γ = I_A and t(x, y) = u(x − y) is a homogeneous covariance function (see section 1.1.7.1), we can rewrite equation (1.13):

    α ∫_{R^d} E_x(u(x − y)) dy = α [∫_{R^d} ∫_A u(x − y) dx dy − Σ_{i=1}^n w_i ∫_{R^d} u(x_i − y) dy]
                               = α [∫_A (∫_{R^d} u(x − y) dy) dx − Σ_{i=1}^n w_i ∫_{R^d} u(x_i − y) dy]
                               = α c [∫_A dx − Σ_{i=1}^n w_i],   with c := ∫_{R^d} u(z) dz,        (1.29)

which gives us a hard error estimate for the integration functional in the special case where f(x) = s_t(δ_x, H): the integration error is

    E(f) = α c [∫_A dx − Σ_{i=1}^n w_i]

and goes down with

    |E(f)| ≤ c sup(h) |∫_A dx − Σ_{i=1}^n w_i| → 0.        (1.30)

Note that for a normalised covariance function u, like the one we are developing in section 1.1.6, we have c = 1.0. If we furthermore have supp(h) ⊆ A, we can rewrite equation (1.12) as

    ∫_{R^d} h(y) E_x(t(x, y)) dy = α ∫_A E_x(u(x − y)) dy = α s_t(E, I_A).        (1.31)

Remembering the second part of section 1.1.5, we know that for a solution w_i, x_i we can write this as

    = α s_t(E, E) ≤ sup(h) ||E||_t²,        (1.32)

resulting in an even sharper error bound.

1.2.2 Directional Derivative Functional

Let Γ be the directional derivative functional of first order, D_{x_0,v}, where D_{x_0,v}(f) = ⟨δ_{x_0}(∇f), v⟩. If we insert this distribution into equation (1.5) we have:

    ∂||E||_t²/∂w_k = 2 [Σ_{i=1}^n w_i t(x_i, x_k) − D_{x_0,v,x}(t(x, x_k))]
                   = 2 [Σ_{i=1}^n w_i t(x_i, x_k) − Σ_{j=1}^d (v)_j ∂t(x, x_k)/∂(x)_j |_{x=x_0}]        (1.33)

and with equation (1.6) we have:

    ∂||E||_t²/∂(x_k)_l = 2 w_k [Σ_{i=1}^n w_i ∂t(x_i, y)/∂(y)_l |_{y=x_k} − D_{x_0,v,x}(∂t(x, y)/∂(y)_l |_{y=x_k})]
                       = 2 w_k [Σ_{i=1}^n w_i ∂t(x_i, y)/∂(y)_l |_{y=x_k} − Σ_{j=1}^d (v)_j ∂²t(x, y)/(∂(y)_l ∂(x)_j) |_{x=x_0, y=x_k}].        (1.34)

1.2.2.1 Analytic Results

To be able to evaluate this functional we need the solution for ∂²t(x, y)/(∂(x)_k ∂(y)_l), in addition to the operations required by the definite integration functional. Again we compute it for our Gaussian covariance function:

    ∂²t(x, y)/(∂(x)_k ∂(y)_l) = (1/σ² − ((x)_k − (y)_k)²/σ⁴) t(x, y)              for k = l,
                                −(((x)_k − (y)_k)((x)_l − (y)_l)/σ⁴) t(x, y)      else.        (1.35)

1.3 Classical Approaches: Numerical Quadrature

This section introduces three classical approaches to evaluating the definite integration functional and describes their main properties.

1.3.1 Newton-Cotes Quadrature Rules

The Newton-Cotes rules, named after Isaac Newton and Roger Cotes, place n equally spaced nodes x_i over the interval [a, b] ⊆ R and compute the associated weights w_i using the Lagrange polynomials. This results in quadrature formulas which integrate polynomials of degree n − 1 exactly.

1.3.1.1 Derivation

Let a = x_1 < x_2 < ... < x_n = b be the n equidistant nodes over [a, b] =: A, and let h be the distance between two nodes. We want to approximate I_A(f) in the following way:

    I_A(f) = ∫_a^b f(x) dx ≈ ∫_a^b p(x) dx,

where p(x) is an interpolation polynomial. We insert its definition into our approximation formula:

    = ∫_a^b Σ_{i=1}^n f(x_i) L_i(x) dx,   where L_i(x) = ∏_{j=1, j≠i}^n (x − x_j)/(x_i − x_j)

are the well-known Lagrange polynomials. These have the property that L_i(x_i) = 1 and L_i(x_j) = 0 for x_j ≠ x_i. By linearity we can pull out the sum:

    = Σ_{i=1}^n f(x_i) ∫_a^b L_i(x) dx,

and we define w_i := ∫_a^b L_i(x) dx, which results in the closed Newton-Cotes formula with n nodes:

    = Σ_{i=1}^n w_i f(x_i).

The weights w_i are computed analytically, and we are ready to approximate the definite integral of functions over [a, b].

1.3.1.2 Error Estimation

The Newton-Cotes rule with n nodes integrates polynomials of degree n − 1 exactly (and even degree n for odd n), so the error estimation results in a formula of the following form¹⁰:

    ∫_{x_1}^{x_n} f(x) dx − Σ_{i=1}^n w_i f(x_i) = O(h^{n+2} f^{(n+1)})   for n odd,
                                                   O(h^{n+1} f^{(n)})     for n even,        (1.36)

for n (respectively n + 1) times continuously differentiable f.

1.3.1.3 Generalisation

The classical Newton-Cotes rules integrate functions over subsets of R, but we can easily extend them to higher dimensions by simply taking the tensor product of the integration points: having the one-dimensional integration nodes x_1, ..., x_m, we create the two-dimensional nodes as

    x_1 := (x_1, x_1),  x_2 := (x_1, x_2),  ...,  x_{n−1} := (x_m, x_{m−1}),  x_n := (x_m, x_m)

and their weights as

    w_1 := w_1 w_1,  w_2 := w_1 w_2,  ...,  w_{n−1} := w_m w_{m−1},  w_n := w_m w_m.

The extension to dimensions greater than two is obvious; a sketch follows below. Note that for d dimensions we have to use n = m^d nodes, with m a one-dimensional number of nodes, to get a method that has the same integration error estimate as the original method in one dimension. For simplicity we omit combinations of two or more different one-dimensional rules here. They work in the same way as combinations of identical rules.

¹⁰ The derivation of these formulas is standard and omitted here; we only give the result.
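A sketch of the tensor-product construction just described, fed here with Gauss-Legendre points (anticipating section 1.3.2) since numpy ships them; the helper name is ours:

import numpy as np
from itertools import product

def tensor_rule(x1d, w1d, d):
    """All d-tuples of the 1D nodes, with product weights: n = m**d points."""
    nodes = np.array(list(product(x1d, repeat=d)))
    weights = np.array([np.prod(c) for c in product(w1d, repeat=d)])
    return nodes, weights

x1, w1 = np.polynomial.legendre.leggauss(4)      # 4-point rule on [-1, 1]
X, W = tensor_rule(x1, w1, d=3)                  # 4**3 = 64 points on [-1, 1]^3
print(X.shape, W.sum())                          # (64, 3); weight sum = 8 = volume
print(W @ (X[:, 0] ** 2 * X[:, 1] ** 4))         # exact: (2/3)*(2/5)*2 = 8/15

The exponential growth of n = m^d with the dimension d is visible directly in this construction.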

1.3.1.4 Summary

The family of Newton-Cotes integration rules suffers from some problems that make it unusable for larger numbers of integration points and higher dimensions:

- For some n ≥ 9 the weights take negative values, which makes numerical cancellation possible.
- The integrand has to be n times continuously differentiable to have a valid error estimate for the quadrature rule with n nodes.
- We have to perform a relatively large number of evaluations of the integrand f to achieve a good approximation.
- To extend these rules to A ⊆ R^d we have to use the tensor product of the integration points, which results in an exponentially growing number of nodes. This is of course infeasible for higher dimensions.
- We cannot choose arbitrary numbers of nodes: only n = ∏_{i=1}^d n_i, with n_i one-dimensional numbers of nodes, are allowed.

1.3.2 Gaussian Quadrature Rules

The family of Gaussian quadrature rules, named after Carl Friedrich Gauss, uses a similar ansatz to the Newton-Cotes rules: they integrate a certain polynomial space exactly. The general form is

    ∫_a^b p(x) w(x) dx = Σ_{i=1}^n w_i p(x_i)        (1.37)

for a polynomial p(x) with deg(p(x)) ≤ 2n − 1 and a weighting function w(x). The difference to the Newton-Cotes rules is that they do not use equally spaced integration nodes x_i on the interval [a, b] ⊆ R: the x_i are the zeros of an orthogonal polynomial of degree n, and the w_i are again calculated with the help of the Lagrange polynomials and the weighting function w(x).

1.3.2.1 Derivation

We derive the Gauss-Legendre quadrature formulas here. They use the family of Legendre orthogonal polynomials to derive the nodes x_i ∈ [−1, 1] and weights w_i ∈ R⁺. The weighting function is w(x) = 1. For other orthogonal polynomials the arguments are just the same; only the weighting functions and integration intervals differ. Let p(x) be a polynomial of degree 2n − 1. Then we can represent it as

    p^{(2n−1)}(x) = s^{(n)}(x) q^{(n−1)}(x) + r^{(n−1)}(x),

where s^{(n)}(x) is the Legendre polynomial of degree n and q^{(n−1)}(x), r^{(n−1)}(x) are polynomials of degree at most n − 1. As we want to integrate p^{(2n−1)}(x), we write the following:

    ∫_{−1}^1 p^{(2n−1)}(x) dx = ∫_{−1}^1 s^{(n)}(x) q^{(n−1)}(x) + r^{(n−1)}(x) dx
                              = ∫_{−1}^1 s^{(n)}(x) q^{(n−1)}(x) dx  [= 0]  + ∫_{−1}^1 r^{(n−1)}(x) dx.        (1.38)

Since s^{(n)}(x) is orthogonal to all polynomials of lower degree, this part of the integral is zero:

    = ∫_{−1}^1 r^{(n−1)}(x) dx.

We remember from the family of Newton-Cotes quadrature rules that we can integrate the residual polynomial exactly using n integration points:

    = Σ_{i=1}^n w_i r^{(n−1)}(x_i),

where the w_i are weights computed the same way as for the Newton-Cotes quadrature. The difference is that we do not use equidistant nodes x_i: we have to use the zeros of the Legendre polynomial of degree n, so that the first part of the integral in equation (1.38) becomes zero.

1.3.2.2 Error Estimation

The Gauss quadrature rule with n nodes integrates polynomials of degree 2n − 1 exactly, so the error estimation results in a formula of the following form¹¹:

    ∫_a^b f(x) dx − Σ_{i=1}^n w_i f(x_i) = O(f^{(2n)})        (1.39)

for 2n times continuously differentiable f.

1.3.2.3 Generalisation

The Gauss quadrature rules can be extended to multiple dimensions in the same way as the Newton-Cotes rules, by simply taking the tensor product of the one-dimensional nodes and weights. See section 1.3.1.3 for a description of the process. Figure 1.4 shows two examples of Gauss-Legendre nodes and weights for two dimensions.

1.3.2.4 Summary

The Gauss quadrature rules have some advantages over the Newton-Cotes rules. Most notably:

- They integrate polynomials of degree 2n − 1 exactly using only n nodes and weights.
- They do not create negative weights, so we can use integration rules of arbitrarily high degree.

But one problem remains: for integrals over A ⊆ R^d we have to use the tensor product of the integration nodes, just as with the Newton-Cotes rules. This quickly becomes infeasible for higher dimensions. Nevertheless, the Gauss quadrature rules are good for lower dimensions, so we will compare them later to our integration approach. We limit the number of nodes to n = m^d, with m a one-dimensional number of nodes.

¹¹ The exact error estimation formulas are omitted here; we only present the result.
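The exactness up to degree 2n − 1 is easy to check numerically; a small sketch with n = 5 and monomial test integrands:

import numpy as np

n = 5
x, w = np.polynomial.legendre.leggauss(n)
for k in range(2 * n):                       # degrees 0 .. 2n-1
    exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0
    assert abs(w @ x ** k - exact) < 1e-12   # integral of x^k over [-1, 1]
print("exact for all polynomials up to degree", 2 * n - 1)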

Figure 1.4: Gauss-Legendre quadrature example points. Bigger dots indicate higher weights. The left figure shows 3² = 9 nodes, the right 4² = 16 nodes. For simplicity we do not use rules like 3 · 4 = 12 nodes.

1.3.3 Monte Carlo Quadrature

The simplest approach to generating the weights and nodes is one of the most effective when it comes to high-dimensional integration: we just take random, uniformly distributed points from the integration domain A ⊆ R^d and use equal weights, such that a constant function over the whole integration domain is integrated exactly. For lower dimensions this method is not very efficient, but for higher d it becomes surprisingly effective: one can show that the error scales with 1/√n independently of d, where n is the number of nodes. This is the second numerical quadrature method we will later use for comparison purposes.

1.4 Classical Approaches: Numerical Differentiation

Numerical differentiation is also an example of evaluating the effect of a functional (a differential functional this time) on a function. We can write it as

    D^a_{x_0,v}(f) = Σ_{i=1}^n w_i δ_{x_i}(f) + E(f),

where D^a_{x_0,v}(f) = ⟨δ_{x_0}(∇^a f), v⟩ is the directional derivative functional of order a, evaluated at the point x_0 in the direction of v (see also definition 1.1.6). Note that for a = (1) and in one dimension this complex expression simply becomes f'(x_0).

1.4.1 Finite Difference Approach

The simplest ansatz to evaluate this functional is to take the definition of the derivative of a function f at a point x_0 ∈ R:

    f'(x_0) = lim_{h→0} (f(x_0 + h) − f(x_0))/h.
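As a numerical preview of this and the central difference approach (a sketch with an arbitrary test function): halving h roughly halves the forward-difference error (first order) but quarters the central-difference error (second order):

import numpy as np

f, df = np.sin, np.cos
x0 = 0.7
for h in [1e-1, 1e-2, 1e-3]:
    fwd = (f(x0 + h) - f(x0)) / h            # forward difference, O(h)
    ctr = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference, O(h^2)
    print(f"h={h:.0e}  forward err={abs(fwd - df(x0)):.2e}  "
          f"central err={abs(ctr - df(x0)):.2e}")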


More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Regularity for Poisson Equation

Regularity for Poisson Equation Regularity for Poisson Equation OcMountain Daylight Time. 4, 20 Intuitively, the solution u to the Poisson equation u= f () should have better regularity than the right hand side f. In particular one expects

More information

Numerical Methods I Orthogonal Polynomials

Numerical Methods I Orthogonal Polynomials Numerical Methods I Orthogonal Polynomials Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 Nov. 4th and 11th, 2010 A. Donev (Courant Institute)

More information

PARTIAL DIFFERENTIAL EQUATIONS MIDTERM

PARTIAL DIFFERENTIAL EQUATIONS MIDTERM PARTIAL DIFFERENTIAL EQUATIONS MIDTERM ERIN PEARSE. For b =,,..., ), find the explicit fundamental solution to the heat equation u + b u u t = 0 in R n 0, ). ) Letting G be what you find, show u 0 x) =

More information

Simple Examples on Rectangular Domains

Simple Examples on Rectangular Domains 84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1 Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient

More information

Second Order Elliptic PDE

Second Order Elliptic PDE Second Order Elliptic PDE T. Muthukumar tmk@iitk.ac.in December 16, 2014 Contents 1 A Quick Introduction to PDE 1 2 Classification of Second Order PDE 3 3 Linear Second Order Elliptic Operators 4 4 Periodic

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

Linear maps. Matthew Macauley. Department of Mathematical Sciences Clemson University Math 8530, Spring 2017

Linear maps. Matthew Macauley. Department of Mathematical Sciences Clemson University  Math 8530, Spring 2017 Linear maps Matthew Macauley Department of Mathematical Sciences Clemson University http://www.math.clemson.edu/~macaule/ Math 8530, Spring 2017 M. Macauley (Clemson) Linear maps Math 8530, Spring 2017

More information

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0)

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0) Mollifiers and Smooth Functions We say a function f from C is C (or simply smooth) if all its derivatives to every order exist at every point of. For f : C, we say f is C if all partial derivatives to

More information

Examples of metric spaces. Uniform Convergence

Examples of metric spaces. Uniform Convergence Location Kroghstræde 7, room 63. Main A T. Apostol, Mathematical Analysis, Addison-Wesley. BV M. Bökstedt and H. Vosegaard, Notes on point-set topology, electronically available at http://home.imf.au.dk/marcel/gentop/index.html.

More information

We consider the problem of finding a polynomial that interpolates a given set of values:

We consider the problem of finding a polynomial that interpolates a given set of values: Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.

More information

Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains

Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Sergey E. Mikhailov Brunel University West London, Department of Mathematics, Uxbridge, UB8 3PH, UK J. Math. Analysis

More information

Function Approximation

Function Approximation 1 Function Approximation This is page i Printer: Opaque this 1.1 Introduction In this chapter we discuss approximating functional forms. Both in econometric and in numerical problems, the need for an approximating

More information

NOTES ON PRODUCT SYSTEMS

NOTES ON PRODUCT SYSTEMS NOTES ON PRODUCT SYSTEMS WILLIAM ARVESON Abstract. We summarize the basic properties of continuous tensor product systems of Hilbert spaces and their role in non-commutative dynamics. These are lecture

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

Taylor series. Chapter Introduction From geometric series to Taylor polynomials

Taylor series. Chapter Introduction From geometric series to Taylor polynomials Chapter 2 Taylor series 2. Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Such series can be described informally as infinite

More information

MA257: INTRODUCTION TO NUMBER THEORY LECTURE NOTES

MA257: INTRODUCTION TO NUMBER THEORY LECTURE NOTES MA257: INTRODUCTION TO NUMBER THEORY LECTURE NOTES 2018 57 5. p-adic Numbers 5.1. Motivating examples. We all know that 2 is irrational, so that 2 is not a square in the rational field Q, but that we can

More information

Wavelets For Computer Graphics

Wavelets For Computer Graphics {f g} := f(x) g(x) dx A collection of linearly independent functions Ψ j spanning W j are called wavelets. i J(x) := 6 x +2 x + x + x Ψ j (x) := Ψ j (2 j x i) i =,..., 2 j Res. Avge. Detail Coef 4 [9 7

More information

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows. Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University The Residual and Error of Finite Element Solutions Mixed BVP of Poisson Equation

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

1 Functions of Several Variables 2019 v2

1 Functions of Several Variables 2019 v2 1 Functions of Several Variables 2019 v2 11 Notation The subject of this course is the study of functions f : R n R m The elements of R n, for n 2, will be called vectors so, if m > 1, f will be said to

More information

Fourier Transform & Sobolev Spaces

Fourier Transform & Sobolev Spaces Fourier Transform & Sobolev Spaces Michael Reiter, Arthur Schuster Summer Term 2008 Abstract We introduce the concept of weak derivative that allows us to define new interesting Hilbert spaces the Sobolev

More information

Hilbert Spaces. Contents

Hilbert Spaces. Contents Hilbert Spaces Contents 1 Introducing Hilbert Spaces 1 1.1 Basic definitions........................... 1 1.2 Results about norms and inner products.............. 3 1.3 Banach and Hilbert spaces......................

More information

1.3.1 Definition and Basic Properties of Convolution

1.3.1 Definition and Basic Properties of Convolution 1.3 Convolution 15 1.3 Convolution Since L 1 (R) is a Banach space, we know that it has many useful properties. In particular the operations of addition and scalar multiplication are continuous. However,

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u

More information

26 : Spectral GMs. Lecturer: Eric P. Xing Scribes: Guillermo A Cidre, Abelino Jimenez G.

26 : Spectral GMs. Lecturer: Eric P. Xing Scribes: Guillermo A Cidre, Abelino Jimenez G. 10-708: Probabilistic Graphical Models, Spring 2015 26 : Spectral GMs Lecturer: Eric P. Xing Scribes: Guillermo A Cidre, Abelino Jimenez G. 1 Introduction A common task in machine learning is to work with

More information

Reproducing Kernel Hilbert Spaces

Reproducing Kernel Hilbert Spaces Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 11, 2009 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Pure Quantum States Are Fundamental, Mixtures (Composite States) Are Mathematical Constructions: An Argument Using Algorithmic Information Theory

Pure Quantum States Are Fundamental, Mixtures (Composite States) Are Mathematical Constructions: An Argument Using Algorithmic Information Theory Pure Quantum States Are Fundamental, Mixtures (Composite States) Are Mathematical Constructions: An Argument Using Algorithmic Information Theory Vladik Kreinovich and Luc Longpré Department of Computer

More information

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2.

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2. ANALYSIS QUALIFYING EXAM FALL 27: SOLUTIONS Problem. Determine, with justification, the it cos(nx) n 2 x 2 dx. Solution. For an integer n >, define g n : (, ) R by Also define g : (, ) R by g(x) = g n

More information

Preliminary Examination in Numerical Analysis

Preliminary Examination in Numerical Analysis Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify

More information

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Statistical Inference with Reproducing Kernel Hilbert Space Kenji Fukumizu Institute of Statistical Mathematics, ROIS Department

More information

LECTURE 16 GAUSS QUADRATURE In general for Newton-Cotes (equispaced interpolation points/ data points/ integration points/ nodes).

LECTURE 16 GAUSS QUADRATURE In general for Newton-Cotes (equispaced interpolation points/ data points/ integration points/ nodes). CE 025 - Lecture 6 LECTURE 6 GAUSS QUADRATURE In general for ewton-cotes (equispaced interpolation points/ data points/ integration points/ nodes). x E x S fx dx hw' o f o + w' f + + w' f + E 84 f 0 f

More information

Notes on Mathematics Groups

Notes on Mathematics Groups EPGY Singapore Quantum Mechanics: 2007 Notes on Mathematics Groups A group, G, is defined is a set of elements G and a binary operation on G; one of the elements of G has particularly special properties

More information

Numerical Integration in Meshfree Methods

Numerical Integration in Meshfree Methods Numerical Integration in Meshfree Methods Pravin Madhavan New College University of Oxford A thesis submitted for the degree of Master of Science in Mathematical Modelling and Scientific Computing Trinity

More information

Transformation of Probability Densities

Transformation of Probability Densities Transformation of Probability Densities This Wikibook shows how to transform the probability density of a continuous random variable in both the one-dimensional and multidimensional case. In other words,

More information

Tips and Tricks in Real Analysis

Tips and Tricks in Real Analysis Tips and Tricks in Real Analysis Nate Eldredge August 3, 2008 This is a list of tricks and standard approaches that are often helpful when solving qual-type problems in real analysis. Approximate. There

More information

Quadratic reciprocity (after Weil) 1. Standard set-up and Poisson summation

Quadratic reciprocity (after Weil) 1. Standard set-up and Poisson summation (September 17, 010) Quadratic reciprocity (after Weil) Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ I show that over global fields (characteristic not ) the quadratic norm residue

More information

Lecture - 30 Stationary Processes

Lecture - 30 Stationary Processes Probability and Random Variables Prof. M. Chakraborty Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 30 Stationary Processes So,

More information

A Primer on Three Vectors

A Primer on Three Vectors Michael Dine Department of Physics University of California, Santa Cruz September 2010 What makes E&M hard, more than anything else, is the problem that the electric and magnetic fields are vectors, and

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 Outlines Overview Introduction Linear Algebra Probability Linear Regression

More information

Differentiation and Integration

Differentiation and Integration Differentiation and Integration (Lectures on Numerical Analysis for Economists II) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 12, 2018 1 University of Pennsylvania 2 Boston College Motivation

More information

Reproducing Kernel Hilbert Spaces

Reproducing Kernel Hilbert Spaces 9.520: Statistical Learning Theory and Applications February 10th, 2010 Reproducing Kernel Hilbert Spaces Lecturer: Lorenzo Rosasco Scribe: Greg Durrett 1 Introduction In the previous two lectures, we

More information

Problem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function

Problem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function Problem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function Solution. If we does not need the pointwise limit of

More information

Numerical Analysis: Interpolation Part 1

Numerical Analysis: Interpolation Part 1 Numerical Analysis: Interpolation Part 1 Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar s notes) 2018/2019, Fall Semester BGU CS Interpolation (ver. 1.00) AY 2018/2019,

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

8 A pseudo-spectral solution to the Stokes Problem

8 A pseudo-spectral solution to the Stokes Problem 8 A pseudo-spectral solution to the Stokes Problem 8.1 The Method 8.1.1 Generalities We are interested in setting up a pseudo-spectral method for the following Stokes Problem u σu p = f in Ω u = 0 in Ω,

More information

A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis

A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis Portland State University PDXScholar University Honors Theses University Honors College 2014 A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis Edison Tsai Portland

More information

More on Estimation. Maximum Likelihood Estimation.

More on Estimation. Maximum Likelihood Estimation. More on Estimation. In the previous chapter we looked at the properties of estimators and the criteria we could use to choose between types of estimators. Here we examine more closely some very popular

More information

100 CHAPTER 4. SYSTEMS AND ADAPTIVE STEP SIZE METHODS APPENDIX

100 CHAPTER 4. SYSTEMS AND ADAPTIVE STEP SIZE METHODS APPENDIX 100 CHAPTER 4. SYSTEMS AND ADAPTIVE STEP SIZE METHODS APPENDIX.1 Norms If we have an approximate solution at a given point and we want to calculate the absolute error, then we simply take the magnitude

More information

Reminder on basic differential geometry

Reminder on basic differential geometry Reminder on basic differential geometry for the mastermath course of 2013 Charts Manifolds will be denoted by M, N etc. One should think of a manifold as made out of points (while the elements of a vector

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

Lecture 8: Differential Equations. Philip Moriarty,

Lecture 8: Differential Equations. Philip Moriarty, Lecture 8: Differential Equations Philip Moriarty, philip.moriarty@nottingham.ac.uk NB Notes based heavily on lecture slides prepared by DE Rourke for the F32SMS module, 2006 8.1 Overview In this final

More information

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u.

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u. 5. Fields 5.1. Field extensions. Let F E be a subfield of the field E. We also describe this situation by saying that E is an extension field of F, and we write E/F to express this fact. If E/F is a field

More information

Review I: Interpolation

Review I: Interpolation Review I: Interpolation Varun Shankar January, 206 Introduction In this document, we review interpolation by polynomials. Unlike many reviews, we will not stop there: we will discuss how to differentiate

More information

SYDE 112, LECTURE 7: Integration by Parts

SYDE 112, LECTURE 7: Integration by Parts SYDE 112, LECTURE 7: Integration by Parts 1 Integration By Parts Consider trying to take the integral of xe x dx. We could try to find a substitution but would quickly grow frustrated there is no substitution

More information

df(x) = h(x) dx Chemistry 4531 Mathematical Preliminaries Spring 2009 I. A Primer on Differential Equations Order of differential equation

df(x) = h(x) dx Chemistry 4531 Mathematical Preliminaries Spring 2009 I. A Primer on Differential Equations Order of differential equation Chemistry 4531 Mathematical Preliminaries Spring 009 I. A Primer on Differential Equations Order of differential equation Linearity of differential equation Partial vs. Ordinary Differential Equations

More information