Data fitting by vector (V,f)-reproducing kernels
M.-N. Benbourhim

To appear in ESAIM: Proceedings, 2007.

Abstract

In this paper we propose a constructive method to build vector reproducing kernels. We define the notion of a vector (V,f)-reproducing kernel and we prove that every vector reproducing kernel is a (V,f)-reproducing kernel. We study the minimal approximation by these (V,f)-reproducing kernels for different choices of V and f.

Keywords: vector (V,f)-reproducing kernels, approximation theory, smoothing and interpolating (V,f)-splines.

AMS classification: 65Dxx, 41A15, 65D15, 60E05.

Introduction

Kernels are valuable tools in various fields of numerical analysis, including approximation, interpolation, meshless methods for solving partial differential equations, neural networks, and machine learning. This contribution proposes a constructive method to build vector reproducing kernels for use in approximation theory. The problem of computing a function from empirical data is addressed in several areas of mathematics and engineering; depending on the context, it goes under the name of function estimation (statistics) or function approximation and interpolation (approximation theory), among others.

The outline of this paper is as follows. In Section 1, we recall some fundamental results on vector reproducing kernels and define the notion of (V,f)-reproducing kernels; we state the fundamental result that a matrix function is a vector reproducing kernel if and only if it is a (V,f)-reproducing kernel. In Section 2, we give examples of (V,f)-reproducing kernels. In Section 3, we present the first vector approximation problem and prove the existence and/or uniqueness of the solution. In Section 4, we present the second vector approximation problem, preserving a finite-dimensional vector space, and prove the existence and/or uniqueness of the solution.

Laboratoire MIP-UMR 5640, Université Paul Sabatier, UFR MIG, 118, route de Narbonne, F Toulouse Cedex 04, FRANCE. bbourhim@cict.fr.
1 Vector (V,f)-reproducing kernels

1.1 Vector reproducing kernels

For any set $\Omega$ we denote by $(\mathbb{R}^n)^\Omega$ the real vector space of functions $h:\Omega\to\mathbb{R}^n$ equipped with the topology of pointwise convergence.

Definition 1.1 A real matrix-valued function $H(t,s)=(H_{k,l}(t,s))_{1\le k,l\le n}$ defined on $\Omega\times\Omega$ is a reproducing kernel (RK) if

1- It is symmetric: $H(t,s)=H^T(s,t)$ for all $t,s\in\Omega$. (1.1)

2- For every finite set $\{t_j\}_{1\le j\le N}$ of distinct points in $\Omega$ and for every set of real scalars $\{\lambda_{i,l}\}_{1\le i\le N,\,1\le l\le n}$, we have
$$\sum_{1\le k,l\le n}\ \sum_{1\le i,j\le N} H_{k,l}(t_i,t_j)\,\lambda_{j,l}\,\lambda_{i,k}\ \ge\ 0. \qquad (1.2)$$

Remark 1.1 Taking $\lambda_{j,l}=\mu_j c_{j,l}$ in Equation (1.2), it is easy to see that Definition 1.1 is equivalent to the following: for every finite set $\{t_j\}_{j=1}^N$ of distinct points in $\Omega$ and for all elements $c_j=(c_{j,l})_{1\le l\le n}$ of $\mathbb{R}^n$, the matrix $(c_i^T H(t_i,t_j)c_j)_{1\le i,j\le N}$ is a positive matrix.

Proposition 1.1 The RK $H(t,s)$ has the following properties:

1- For all $k=1,\dots,n$, the function $H_{k,k}(t,s)$ is a RK.

2- For all $k,l=1,\dots,n$ and all $t,s$ in $\Omega$, we have the Cauchy-Schwarz inequality
$$H_{k,l}(t,s)^2\ \le\ H_{k,k}(t,t)\,H_{l,l}(s,s). \qquad (1.3)$$

Proof. It is a consequence of Remark 1.1 and the properties of symmetric positive matrices.

Definition 1.2 A vector subspace $\mathcal{H}$ of $(\mathbb{R}^n)^\Omega$ equipped with a scalar product $\langle\cdot\,|\,\cdot\rangle_{\mathcal{H}}$ is called a hilbertian subspace of $(\mathbb{R}^n)^\Omega$ if

1- $(\mathcal{H},\langle\cdot\,|\,\cdot\rangle_{\mathcal{H}})$ is a Hilbert space.

2- The natural injection from $\mathcal{H}$ into $(\mathbb{R}^n)^\Omega$ is continuous.

We recall some important results on a RK and its associated hilbertian subspace, which are studied in [9].

Theorem 1.1 For any reproducing kernel $H(t,s)$ there exists a unique hilbertian subspace $\mathcal{H}_H$ of $(\mathbb{R}^n)^\Omega$ such that:

1- The space
$$\mathcal{H}_{0,H}=\Big\{u\in(\mathbb{R}^n)^\Omega \;\Big|\; u(t)=\sum_{i=1}^N H(t,t_i)c_i,\ c_i\in\mathbb{R}^n,\ 1\le i\le N,\ t\in\Omega\Big\} \qquad (1.4)$$
is a dense subspace of $\mathcal{H}_H$.
2- $H(t,s)$ is the reproducing kernel of $\mathcal{H}_H$:
$$\langle u\,|\,H(\cdot,t)c\rangle_{\mathcal{H}_H}=c^T u(t) \qquad (1.5)$$
for all $u\in\mathcal{H}_H$, $c\in\mathbb{R}^n$ and $t\in\Omega$.

1.2 Vector (V,f)-reproducing kernels

Definition 1.3 Let $H(t,s)=(H_{k,l}(t,s))_{1\le k,l\le n}$ be a real matrix-valued function defined on $\Omega\times\Omega$. We say that $H(t,s)$ is a (V,f)-reproducing kernel ((V,f)-RK) if there exist a real Hilbert space $(V,\langle\cdot\,|\,\cdot\rangle_V)$ and a function $f=(f_k)_{1\le k\le n}$ from $\Omega$ into $V^n$ such that
$$H(t,s)=\big(\langle f_k(t)\,|\,f_l(s)\rangle_V\big)_{1\le k,l\le n}. \qquad (1.6)$$

Theorem 1.2 A real matrix-valued function $H(t,s)=(H_{k,l}(t,s))_{1\le k,l\le n}$ is a reproducing kernel if and only if it is a (V,f)-reproducing kernel.

Proof. It is clear that a (V,f)-RK $H(t,s)=(\langle f_k(t)\,|\,f_l(s)\rangle_V)_{1\le k,l\le n}$ is symmetric and satisfies
$$\sum_{1\le k,l\le n}\ \sum_{1\le i,j\le N}\lambda_{i,k}\lambda_{j,l}\,H_{k,l}(t_i,t_j)=\sum_{1\le k,l\le n}\ \sum_{1\le i,j\le N}\lambda_{i,k}\lambda_{j,l}\,\langle f_k(t_i)\,|\,f_l(t_j)\rangle_V=\Big\|\sum_{k=1}^n\sum_{i=1}^N\lambda_{i,k}f_k(t_i)\Big\|_V^2\ \ge\ 0,$$
which implies that $H(t,s)$ is a RK. Conversely, let $H(t,s)=(H_{k,l}(t,s))_{1\le k,l\le n}$ be a RK. From Theorem 1.1, there exists a hilbertian subspace $\mathcal{H}_H$ of $(\mathbb{R}^n)^\Omega$ which admits $H(t,s)$ as a reproducing kernel. Let $V=\mathcal{H}_H$ and $f_k(t)=H(\cdot,t)e_k\in\mathcal{H}_H$, with $e_k=(\delta_{k,l})_{1\le l\le n}$. From the reproducing formula (1.5), we get
$$H_{k,l}(t,s)=\langle f_k(t)\,|\,f_l(s)\rangle_{\mathcal{H}_H}.$$
Then $H(t,s)$ is a (V,f)-RK.

In the following theorem we establish a characterization of the hilbertian subspace $\mathcal{H}_f$ associated to $H$.

Theorem 1.3 Let $H(t,s)=(\langle f_k(t)\,|\,f_l(s)\rangle_V)_{1\le k,l\le n}$ be a (V,f)-reproducing kernel. Its associated hilbertian subspace $\mathcal{H}_f$ of $(\mathbb{R}^n)^\Omega$ is given by
$$\mathcal{H}_f=\big\{u=(u_k)_{1\le k\le n}\in(\mathbb{R}^n)^\Omega \;\big|\; \exists v\in V:\ u_k(t)=\langle v\,|\,f_k(t)\rangle_V,\ 1\le k\le n,\ t\in\Omega\big\}. \qquad (1.7)$$

Proof. Let $A_f:V\to(\mathbb{R}^n)^\Omega$ be defined by $(A_f v)_k(t)=\langle v\,|\,f_k(t)\rangle_V$, $1\le k\le n$. The map $A_f$ is linear and, from the inequality
$$\|(A_f v)(t)\|^2=\sum_{k=1}^n \langle v\,|\,f_k(t)\rangle_V^2\ \le\ \Big(\sum_{k=1}^n \|f_k(t)\|_V^2\Big)\|v\|_V^2=\Big(\sum_{k=1}^n H_{k,k}(t,t)\Big)\|v\|_V^2,$$
we deduce that $A_f$ is continuous. Let $\ker(A_f)=\{v\in V \mid \langle v\,|\,f_k(t)\rangle_V=0,\ k=1,\dots,n,\ t\in\Omega\}$ and let $B=(\ker(A_f))^\perp$ be its orthogonal space in $V$. One can easily verify that $B$ is the closure of the space $\mathrm{span}\,\{f_k(t)\}_{(k,t)\in\mathbb{N}_n\times\Omega}$, with $\mathbb{N}_n=\{1,\dots,n\}$. We denote by $P_B$ the orthogonal projector on $B$.
We define on $\mathcal{H}_f=A_f(V)$ the bilinear form
$$\langle A_f u\,|\,A_f v\rangle_{\mathcal{H}_f}=\langle P_B u\,|\,P_B v\rangle_V.$$
It is easy to see that this bilinear form is a scalar product on $\mathcal{H}_f$. Then the linear map
$$A_f:\ (\ker(A_f))^\perp\subset V\ \to\ \mathcal{H}_f$$
is an isometry, and consequently $(\mathcal{H}_f,\langle\cdot\,|\,\cdot\rangle_{\mathcal{H}_f})$ is a Hilbert space. For all $s\in\Omega$ and $c=(c_l)_{1\le l\le n}\in\mathbb{R}^n$, the function
$$H(\cdot,s)c:\ t\in\Omega\ \mapsto\ \Big(\sum_{l=1}^n H_{k,l}(t,s)c_l\Big)_{1\le k\le n}=\Big(\Big\langle f_k(t)\,\Big|\,\sum_{l=1}^n f_l(s)c_l\Big\rangle_V\Big)_{1\le k\le n}$$
is an element of $\mathcal{H}_f$ and satisfies the reproducing formula (1.5). Indeed, for all $v$ in $V$,
$$\langle A_f v\,|\,H(\cdot,s)c\rangle_{\mathcal{H}_f}=\Big\langle P_B v\,\Big|\,P_B\Big(\sum_{l=1}^n f_l(s)c_l\Big)\Big\rangle_V=\Big\langle v\,\Big|\,\sum_{l=1}^n f_l(s)c_l\Big\rangle_V=\sum_{l=1}^n c_l\,(A_f v)_l(s)=c^T(A_f v)(s).$$
Consequently (see Theorem 1.1), $\mathcal{H}_f$ is a hilbertian subspace of $(\mathbb{R}^n)^\Omega$ and admits $H$ as a reproducing kernel.

2 Examples of (V,f)-reproducing kernels

2.1 Example 1

Let $V=L^2(a,b)$ and let $\Omega$ be a subset of $\mathbb{R}^d$. For any functions $c_k:\Omega\to\mathbb{R}$, $1\le k\le n$, let $f_k(t)(x)=\exp(c_k(t)x)$. We have
$$H_{k,l}(t,s)=\begin{cases}\dfrac{\exp\big(b\,(c_k(t)+c_l(s))\big)-\exp\big(a\,(c_k(t)+c_l(s))\big)}{c_k(t)+c_l(s)} & \text{if } c_k(t)+c_l(s)\neq 0,\\[2mm] b-a & \text{otherwise,}\end{cases} \qquad (2.1)$$
and
$$\mathcal{H}_f=\Big\{u=(u_k)_{1\le k\le n}\in(\mathbb{R}^n)^\Omega \;\Big|\; \exists v\in L^2(a,b):\ u_k(t)=\int_a^b v(x)\exp(c_k(t)x)\,dx,\ 1\le k\le n,\ t\in\Omega\Big\}.$$

2.2 Example 2

Let $V=L^2(0,+\infty)$ and let $\Omega$ be a subset of $\mathbb{R}^d$. Let $c_k:\Omega\to\,]0,+\infty[$, $1\le k\le n$, be arbitrary functions. Then:

1- If $f_k(t)(x)=\exp(-c_k(t)\,x^2)$ then $H_{k,l}(t,s)=\dfrac{1}{2}\sqrt{\dfrac{\pi}{c_k(t)+c_l(s)}}$.

2- If $f_k(t)(x)=\exp(-c_k(t)\,x)$ then $H_{k,l}(t,s)=\dfrac{1}{c_k(t)+c_l(s)}$; in particular, if $c_k(t)=\dfrac{P_k(t)}{Q_k(t)}$, where $P_k(t)$ and $Q_k(t)$ are polynomials, we obtain a rational reproducing kernel
$$H_{k,l}(t,s)=\frac{Q_k(t)\,Q_l(s)}{P_k(t)\,Q_l(s)+P_l(s)\,Q_k(t)}. \qquad (2.2)$$
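To make Definition 1.1 and Example 2 concrete, here is a short numerical sketch (not part of the paper; the positive functions `c_k` below are hypothetical choices). It assembles the block matrix $(H(t_i,t_j))_{i,j}$ for the kernel $H_{k,l}(t,s)=1/(c_k(t)+c_l(s))$ of Example 2, item 2, and checks numerically the symmetry (1.1) and the positivity (1.2) that the Gram structure $\langle f_k(t)\,|\,f_l(s)\rangle_V$ guarantees.

```python
import numpy as np

def c(k, t):
    """Hypothetical positive functions c_k : Omega -> (0, +inf), k = 0..n-1."""
    return (k + 1.0) + t**2

def H(t, s, n):
    """The n x n kernel matrix H(t, s) = (1 / (c_k(t) + c_l(s)))_{k,l}
    of Example 2, item 2, i.e. the Gram matrix of f_k(t)(x) = exp(-c_k(t) x)
    in L^2(0, +inf)."""
    return np.array([[1.0 / (c(k, t) + c(l, s)) for l in range(n)]
                     for k in range(n)])

def block_kernel_matrix(points, n):
    """Assemble the nN x nN block matrix (H(t_i, t_j))_{i,j}; by Remark 1.1
    it must be symmetric positive (semi)definite for any choice of nodes."""
    N = len(points)
    K = np.zeros((n * N, n * N))
    for i, t in enumerate(points):
        for j, s in enumerate(points):
            K[i * n:(i + 1) * n, j * n:(j + 1) * n] = H(t, s, n)
    return K

if __name__ == "__main__":
    K = block_kernel_matrix([0.0, 0.5, 1.3, 2.0], n=2)
    assert np.allclose(K, K.T)                    # symmetry (1.1)
    assert np.linalg.eigvalsh(K).min() > -1e-10   # positivity (1.2)
```

The positivity check succeeds for any distinct nodes because the assembled matrix is the Gram matrix of the family $\{f_k(t_i)\}$ in $V$, which is exactly the content of Theorem 1.2.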
2.3 Example 3: (V,f)-RK of convolution type

We consider the case where:

1- $V=L^2(\mathbb{R}^d)$ and $\Omega=\mathbb{R}^d$.

2- $f_k(t)(x)=f_k(t-x)$ with $f_k$ in the usual Sobolev space $H^m(\mathbb{R}^d)$.

Then
$$H_{k,l}(t,s)=\int_{\mathbb{R}^d} f_k(t-x)\,f_l(s-x)\,dx.$$

Theorem 2.1 We have the following properties:

1- $H_{k,l}(t,s)=G_{k,l}(t-s)$ with $\mathcal{F}(G_{k,l})(\xi)=\mathcal{F}(f_k)(\xi)\,\mathcal{F}(\check f_l)(\xi)$, where $\check f_l(x)=f_l(-x)$ and $\mathcal{F}$ is the Fourier transform.

2- $G_{k,l}\in C_0^m(\mathbb{R}^d)$, where $C_0^m(\mathbb{R}^d)$ is the space of functions of class $C^m$ on $\mathbb{R}^d$ which vanish at infinity together with their derivatives up to order $m$.

3- The associated hilbertian subspace of $H$ is
$$\mathcal{H}_f=\big\{u=(u_k)_{1\le k\le n}\in C(\mathbb{R}^d;\mathbb{R}^n) \;\big|\; \exists v\in L^2(\mathbb{R}^d):\ u_k=f_k * v,\ 1\le k\le n\big\}\subset \big(C_0^m(\mathbb{R}^d)\big)^n.$$

4- If $f_k$ is radial for $1\le k\le n$, then the (V,f)-RK is radial: $H_{k,l}(t,s)=G_{k,l}(t-s)$ depends only on $\|t-s\|$, for $1\le k,l\le n$.

Proof.

1- We have
$$H_{k,l}(t,s)=\int_{\mathbb{R}^d} f_k(t-x)\,f_l(s-x)\,dx=\int_{\mathbb{R}^d} f_k(y)\,f_l\big(y-(t-s)\big)\,dy=(f_k*\check f_l)(t-s).$$

2- Since $f_k\in H^m(\mathbb{R}^d)$, we have $D^\alpha G_{k,l}=D^\alpha(f_k*\check f_l)\in C_0^0(\mathbb{R}^d)$ for $|\alpha|\le m$ (see [4]).

3- It is a consequence of Theorem 1.2 and the property given in item 2.

4- For any orthogonal matrix $A$,
$$G_{k,l}(At)=\int_{\mathbb{R}^d} f_k(x)\,f_l(x-At)\,dx=\int_{\mathbb{R}^d} f_k(Ax)\,f_l\big(A(x-t)\big)\,dx=\int_{\mathbb{R}^d} f_k(x)\,f_l(x-t)\,dx=G_{k,l}(t),$$
since $f_k(Ax)=f_k(x)$ and $|\det A|=1$.

3 Data fitting by vector (V,f)-reproducing kernels

Given a subset $\Omega_N=\{t_1,\dots,t_N\}$ of $\Omega$ and a set of vectors $Z_N=\{z_1,\dots,z_N\}$ in $\mathbb{R}^n$, the scattered data approximation problem consists in finding a vector-valued function $\sigma_\epsilon$ such that the system of equations
$$\sigma_\epsilon(t_i)=z_i+\theta_i(\epsilon),\quad 1\le i\le N, \qquad (3.1)$$
has a solution of the form
$$\sigma_\epsilon(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i^\epsilon, \qquad (3.2)$$
where the unknown error functions $\theta_i(\epsilon)$ satisfy $\theta_i(0)=0$. Let $A_N$ be the linear operator from $\mathcal{H}_f$ into $(\mathbb{R}^n)^N$ defined by
$$A_N u=\big(u_1(t_1),\dots,u_1(t_N),\dots,u_n(t_1),\dots,u_n(t_N)\big).$$
First, we give the following definition.

Definition 3.1 For all $Z_N\in(\mathbb{R}^n)^N$ and $\epsilon\ge 0$, we define a (V,f)-spline function as a solution $\sigma_\epsilon$ of the following approximation problem:
$$(P_\epsilon(Z_N)):\quad \min_{v\in C_\epsilon(Z_N)}\ \Big((1-\epsilon)\,\|v\|_{\mathcal{H}_f}^2+\epsilon\,\|A_N v-Z_N\|_{(\mathbb{R}^n)^N}^2\Big), \qquad (3.3)$$
where
$$C_\epsilon(Z_N)=\begin{cases} A_N^{-1}(Z_N) & \text{for } \epsilon=0 \text{ (interpolating problem)},\\ \mathcal{H}_f & \text{for } \epsilon>0 \text{ (smoothing problem)},\end{cases} \qquad (3.4)$$
and
$$A_N^{-1}(Z_N)=\{v\in\mathcal{H}_f : A_N v=Z_N\}. \qquad (3.5)$$
Here $\|\cdot\|_{(\mathbb{R}^n)^N}$ denotes the standard Euclidean norm on $(\mathbb{R}^n)^N$.

The explicit expression of the solution of the problem (3.3) is given in the following theorem.

Theorem 3.1 For all $u\in\mathcal{H}_f$, the problem (3.3) with $Z_N=A_N u$ admits a unique solution $\sigma_\epsilon\in\mathcal{H}_f$ which is explicitly given by
$$\sigma_\epsilon(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i^\epsilon. \qquad (3.6)$$
The coefficients $a_i^\epsilon$, $i=1,\dots,N$, are obtained by solving the $nN\times nN$ linear system
$$\big(H_f^N+c_\epsilon I_{nN}\big)\,a^\epsilon=Z_N \quad\text{with}\quad c_\epsilon=\begin{cases}0 & \text{if } \epsilon=0,\\[1mm] \dfrac{1-\epsilon}{\epsilon} & \text{if } \epsilon>0,\end{cases} \qquad (3.7)$$
where $a^\epsilon$ and $Z_N$ are the vectors given by
$$a^\epsilon=(a_{1,1}^\epsilon,\dots,a_{N,1}^\epsilon,\dots,a_{1,n}^\epsilon,\dots,a_{N,n}^\epsilon)^t\in\mathbb{R}^{nN},\qquad Z_N=(z_{1,1},\dots,z_{N,1},\dots,z_{1,n},\dots,z_{N,n})^t\in\mathbb{R}^{nN},$$
and $I_{nN}$ and $H_f^N=(H_f^{N,(l,k)})_{1\le l,k\le n}$ are the $nN\times nN$ identity matrix and an $nN\times nN$ block matrix, respectively. The blocks $H_f^{N,(l,k)}$ of $H_f^N$ are given by
$$H_f^{N,(l,k)}=\big(\langle f_k(t_i)\,|\,f_l(t_j)\rangle_V\big)_{1\le i\le N,\,1\le j\le N}.$$

Proof. From the continuous embedding $\mathcal{H}_f\hookrightarrow(\mathbb{R}^n)^\Omega$ (see Theorem 1.1), we deduce that $A_N$ is continuous. Let $I$ be the identity operator in $\mathcal{H}_f$. We have:
1- $A_N(\mathcal{H}_f)$ is a closed subspace of $(\mathbb{R}^n)^N$.

2- $I(\mathcal{H}_f)=\mathcal{H}_f$ is closed.

3- $\ker(A_N)\cap\ker(I)=\{0\}$.

4- $\ker(A_N)+\ker(I)=\ker(A_N)$ is closed in $\mathcal{H}_f$.

According to the general spline theory (see [2, 5]), we get the theorem.

Using the general spline theory, we obtain the following proposition in the case of the smoothing problem ($\epsilon>0$).

Proposition 3.1 For all $(\epsilon,Z_N)\in\,]0,1[\,\times(\mathbb{R}^n)^N$, the problem (3.3) admits a unique solution $\sigma_\epsilon\in\mathcal{H}_f$ which is explicitly given by
$$\sigma_\epsilon(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i^\epsilon. \qquad (3.8)$$
The coefficients $a_i^\epsilon$, $i=1,\dots,N$, are obtained by solving the $nN\times nN$ nonsingular linear system
$$\Big(H_f^N+\frac{1-\epsilon}{\epsilon}\,I_{nN}\Big)\,a^\epsilon=Z_N. \qquad (3.9)$$

For the case of the interpolating problem ($\epsilon=0$) we have the following proposition.

Proposition 3.2 The following properties are equivalent:

1- The linear map $A_N$ is surjective from $\mathcal{H}_f$ onto $(\mathbb{R}^n)^N$.

2- For all $Z_N\in(\mathbb{R}^n)^N$, the problem $(P_0(Z_N))$ admits a unique solution.

3- The matrix $H_f^N$ is nonsingular, i.e. positive definite.

4- The system $\{f_k(t_i)\}_{1\le k\le n,\,1\le i\le N}$ is linearly independent in $V$.

Proof.

$1\Leftrightarrow 2$: It is a consequence of the general spline theory (see [2, 5]).

$2\Rightarrow 3$: Let $a=(a_1,\dots,a_N)$ be a solution of the homogeneous system $H_f^N a=0$ and let $\sigma(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i$. We have $A_N\sigma=H_f^N a=0$, so item 2 implies that $\sigma=0$. According to the reproducing formula (1.5), we get that for all $v$ in $\mathcal{H}_f$
$$0=\langle\sigma\,|\,v\rangle_{\mathcal{H}_f}=\sum_{i=1}^N (a_i)^T v(t_i). \qquad (3.10)$$
Let $Z^{(i,k)}$ be the data vector equal to $(\delta_{k,l})_{1\le l\le n}$ at the node $t_i$ and to $0$ at the other nodes, for $i=1,\dots,N$ and $k=1,\dots,n$. There exists an interpolating (V,f)-spline function $\sigma^{0,(i,k)}$, $1\le i\le N$ and $1\le k\le n$, solution of the problem $(P_0(Z^{(i,k)}))$ given by (3.3). Taking $v=\sigma^{0,(i,k)}$ successively in equation (3.10), we get $a_i=0$ for $i=1,\dots,N$; hence $H_f^N$ is nonsingular.
$3\Leftrightarrow 4$: Since the matrix $H_f^N=(\langle f_k(t_i)\,|\,f_l(t_j)\rangle_V)$ is a Gram matrix, it is invertible if and only if the system $\{f_k(t_i)\}_{1\le k\le n,\,1\le i\le N}$ is linearly independent in $V$.

$4\Rightarrow 1$: Since the matrix $H_f^N$ is invertible, for all $Z_N\in(\mathbb{R}^n)^N$ there exists $a=(a_i)_{1\le i\le N}$ in $(\mathbb{R}^n)^N$ such that $H_f^N a=Z_N$. The element $\sigma^0$ of $\mathcal{H}_f$ defined by $\sigma^0(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i$ satisfies $A_N\sigma^0=Z_N$.

4 Data fitting preserving polynomials

Let $\mathcal{P}$ be a finite-dimensional vector subspace of $(\mathbb{R}^n)^\Omega$ and let $\{p_1,\dots,p_m\}$ be a basis of $\mathcal{P}$. In this second scattered data approximation problem, the $\mathcal{P}$-reproduction property is required. Given a subset $\Omega_N=\{t_1,\dots,t_N\}$ of $\Omega$ and a set of vectors $Z_N=\{z_1,\dots,z_N\}$ in $\mathbb{R}^n$, the scattered data approximation problem consists in finding a vector-valued function $\sigma_\epsilon$ such that the system of equations
$$\sigma_\epsilon(t_i)=z_i+\theta_i(\epsilon) \qquad (4.1)$$
has a solution of the form
$$\sigma_\epsilon(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i^\epsilon+\sum_{j=1}^m p_j(t)\,b_j^\epsilon, \qquad (4.2)$$
where the unknown error functions $\theta_i(\epsilon)$ satisfy $\theta_i(0)=0$. Hereafter we assume that:

(H1) For all $p\in\mathcal{P}$: $\big(p(t_i)=0,\ 1\le i\le N\big)\implies p=0$. (4.3)

(H2) $\mathcal{H}_f\cap\mathcal{P}=\{0\}$. (4.4)

Let $\mathcal{H}_f^{\mathcal{P}}$ denote the Hilbert direct sum $\mathcal{H}_f^{\mathcal{P}}=\mathcal{H}_f\oplus\mathcal{P}$. We denote by $\Pi_f$ the orthogonal projector from $\mathcal{H}_f^{\mathcal{P}}$ onto $\mathcal{H}_f$ and we define on $\mathcal{H}_f^{\mathcal{P}}$ the linear map
$$A_N u=\big(u_1(t_1),\dots,u_1(t_N),\dots,u_n(t_1),\dots,u_n(t_N)\big).$$
First, we give the following definition.

Definition 4.1 For all $Z_N\in(\mathbb{R}^n)^N$ and $\epsilon\ge 0$, we define a (V,f)-spline function as a solution $\sigma_\epsilon$ of the following approximation problem:
$$(P_\epsilon(Z_N)):\quad \min_{v\in C_\epsilon(Z_N)}\ \Big((1-\epsilon)\,\|\Pi_f v\|_{\mathcal{H}_f}^2+\epsilon\,\|A_N v-Z_N\|_{(\mathbb{R}^n)^N}^2\Big), \qquad (4.5)$$
where
$$C_\epsilon(Z_N)=\begin{cases} A_N^{-1}(Z_N) & \text{for } \epsilon=0 \text{ (interpolating problem)},\\ \mathcal{H}_f^{\mathcal{P}} & \text{for } \epsilon>0 \text{ (smoothing problem)},\\ \mathcal{P} & \text{for } \epsilon=1 \text{ (least squares)},\end{cases} \qquad (4.6)$$
and
$$A_N^{-1}(Z_N)=\{v\in\mathcal{H}_f^{\mathcal{P}} : A_N v=Z_N\}. \qquad (4.7)$$
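To make the constrained fitting problem concrete before stating its solution, here is a small numerical sketch of the interpolating case $\epsilon=0$ for $n=1$ (an illustration, not from the paper: the kernel $H(t,s)=1/(c(t)+c(s))$ is borrowed from Example 2, item 2, and the choice $c(t)=1+t^2$, the nodes and the affine data are hypothetical). It solves the bordered linear system that couples the kernel coefficients $a$ with the polynomial coefficients $b$ under the side condition $M^t a=0$, and checks the polynomial-reproduction property: for affine data the kernel part vanishes and the fit is the polynomial itself.

```python
import numpy as np

# Kernel of Example 2, item 2, in the scalar case n = 1; c is a hypothetical
# positive function on Omega.
def kernel(t, s, c=lambda t: 1.0 + t**2):
    return 1.0 / (c(t) + c(s))

t_nodes = np.array([0.3, 0.7, 1.1, 1.6, 2.2])  # distinct nodes t_i
z = 2.0 - 3.0 * t_nodes                        # affine data z_i = q(t_i), q in P

N = len(t_nodes)
K = np.array([[kernel(t, s) for s in t_nodes] for t in t_nodes])  # H_f^N
M = np.column_stack([np.ones(N), t_nodes])     # basis p_1 = 1, p_2 = t of P

# Bordered system for interpolation (c_eps = 0):
#   [ K    M ] [a]   [z]
#   [ M^t  O ] [b] = [0]
A = np.block([[K, M], [M.T, np.zeros((2, 2))]])
sol = np.linalg.solve(A, np.concatenate([z, np.zeros(2)]))
a, b = sol[:N], sol[N:]

def sigma(t):
    """The fitted (V,f)-spline: kernel part plus polynomial part."""
    return sum(kernel(t, ti) * ai for ti, ai in zip(t_nodes, a)) + b[0] + b[1] * t

# Polynomial reproduction: affine data are matched by the polynomial part alone.
assert np.allclose(a, 0.0, atol=1e-6)
assert np.allclose(b, [2.0, -3.0], atol=1e-6)
```

The side condition $M^t a=0$ is what forces the kernel coefficients to vanish when the data already lie in $\mathcal{P}$; the bordered matrix is nonsingular because $K$ is positive definite for distinct nodes and $M$ has full column rank, which mirrors hypothesis (H1).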
The explicit expression of the solution of the problem (4.5) is given in the following theorem.

Theorem 4.1 For all $u\in\mathcal{H}_f^{\mathcal{P}}$, the problem (4.5) with $Z_N=A_N u$ admits a unique solution $\sigma_\epsilon\in\mathcal{H}_f^{\mathcal{P}}$ which is explicitly given by
$$\sigma_\epsilon(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i^\epsilon+\sum_{j=1}^m p_j(t)\,b_j^\epsilon. \qquad (4.8)$$
The coefficients $a_i^\epsilon$ and $b_j^\epsilon$ are obtained by solving the $(nN+nm)\times(nN+nm)$ linear system
$$\begin{pmatrix} H_f^N+c_\epsilon I_{nN} & M\\ M^t & O\end{pmatrix}\begin{pmatrix} a^\epsilon\\ b^\epsilon\end{pmatrix}=\begin{pmatrix} Z_N\\ 0\end{pmatrix} \quad\text{with}\quad c_\epsilon=\begin{cases}0 & \text{if }\epsilon=0,\\[1mm] \dfrac{1-\epsilon}{\epsilon} & \text{if }\epsilon>0,\end{cases} \qquad (4.9)$$
where $a^\epsilon$, $b^\epsilon$ and $Z_N$ are the vectors given by
$$a^\epsilon=(a_{1,1}^\epsilon,\dots,a_{N,1}^\epsilon,\dots,a_{1,n}^\epsilon,\dots,a_{N,n}^\epsilon)^t\in\mathbb{R}^{nN},\qquad b^\epsilon=(b_{1,1}^\epsilon,\dots,b_{m,1}^\epsilon,\dots,b_{1,n}^\epsilon,\dots,b_{m,n}^\epsilon)^t\in\mathbb{R}^{nm},$$
$$Z_N=(z_{1,1},\dots,z_{N,1},\dots,z_{1,n},\dots,z_{N,n})^t\in\mathbb{R}^{nN},$$
and $H_f^N=(H_f^{N,(l,k)})_{1\le l,k\le n}$ and $M=(M^{(l,k)})_{1\le l,k\le n}$ are $nN\times nN$ and $nN\times nm$ block matrices, respectively. The blocks $H_f^{N,(l,k)}$ and $M^{(l,k)}$ are given by
$$H_f^{N,(l,k)}=\big(\langle f_k(t_i)\,|\,f_l(t_j)\rangle_V\big)_{1\le i\le N,\,1\le j\le N} \qquad\text{and}\qquad M^{(l,k)}=\delta_{l,k}\,\big(p_j(t_i)\big)_{1\le i\le N,\,1\le j\le m},$$
respectively. In particular, the property of preserving polynomials holds: if there exists $q\in\mathcal{P}$ such that $q(t_i)=z_i$ for $i=1,\dots,N$, then $\sigma_\epsilon=q$.

Proof. We have:

1- $A_N$ is continuous and $A_N(\mathcal{H}_f^{\mathcal{P}})$ is closed: from the continuous embedding property $\mathcal{H}_f\hookrightarrow(\mathbb{R}^n)^\Omega$ (see Theorem 1.1), we deduce that $A_N$ is continuous; $A_N(\mathcal{H}_f^{\mathcal{P}})$ is closed because it is a finite-dimensional space.

2- $\Pi_f(\mathcal{H}_f^{\mathcal{P}})=\mathcal{H}_f$ is closed.

3- $\ker(A_N)\cap\ker(\Pi_f)=\{0\}$.

4- $\ker(A_N)+\ker(\Pi_f)$ is closed: it is a consequence of the fact that $\ker(A_N)$ is closed and $\ker(\Pi_f)=\mathcal{P}$ is a finite-dimensional space.

According to the general spline theory (see [2, 5]), we get the theorem.

Using the general spline theory, we obtain the following proposition in the case of the smoothing problem ($\epsilon>0$).
Proposition 4.1 For all $(\epsilon,Z_N)\in\,]0,1[\,\times(\mathbb{R}^n)^N$, the problem (4.5) admits a unique solution $\sigma_\epsilon\in\mathcal{H}_f^{\mathcal{P}}$ which is explicitly given by
$$\sigma_\epsilon(t)=\sum_{i=1}^N H_f(t,t_i)\,a_i^\epsilon+\sum_{j=1}^m p_j(t)\,b_j^\epsilon. \qquad (4.10)$$
The coefficients $a_i^\epsilon$ and $b_j^\epsilon$ are the solution of the nonsingular linear system
$$\begin{pmatrix} H_f^N+\dfrac{1-\epsilon}{\epsilon}\,I_{nN} & M\\ M^t & O\end{pmatrix}\begin{pmatrix} a^\epsilon\\ b^\epsilon\end{pmatrix}=\begin{pmatrix} Z_N\\ 0\end{pmatrix}. \qquad (4.11)$$

For the case of the interpolating problem ($\epsilon=0$), using a similar proof as in Proposition 3.2, we get:

Proposition 4.2 The following properties are equivalent:

1. The linear map $A_N$ is surjective.

2. For all $Z_N\in(\mathbb{R}^n)^N$, the problem $(P_0(Z_N))$ admits a unique solution.

3. The matrix $H_f^N$ is nonsingular, i.e. positive definite.

4. The system $\{f_k(t_i)\}_{1\le k\le n,\,1\le i\le N}$ is linearly independent in $V$.

References

[1] L. Amodei, Reproducing Kernels of Vector-Valued Function Spaces, in Curve and Surface Fitting: Chamonix 1996, A. Le Méhauté, C. Rabut and L.L. Schumaker, eds, Vanderbilt University Press, Nashville, 2000, 1-9.

[2] M. Attéia, Hilbertian Kernels and Spline Functions, Elsevier Science, North-Holland.

[3] M.N. Benbourhim, Constructive Approximation by (V,f)-Reproducing Kernels, in Curve and Surface Fitting: Saint-Malo 1999, A. Cohen, C. Rabut and L.L. Schumaker, eds, Vanderbilt University Press, Nashville, 2000.

[4] W.F. Donoghue, Distributions and Fourier Transforms, Academic Press.

[5] P.J. Laurent, Approximation et Optimisation, Hermann, Paris.

[6] C.A. Micchelli and M. Pontil, On Learning Vector-Valued Functions, Research Note RN/03/08, Department of Computer Science, University College London.

[7] S. Saitoh, Theory of Reproducing Kernels and its Applications, Pitman Research Notes in Mathematics Series, Longman Scientific and Technical.

[8] R. Schaback and H. Wendland, Kernel Techniques: From Machine Learning to Meshless Methods, Acta Numerica (2006), 1-97, Cambridge University Press.

[9] L. Schwartz, Sous-espaces hilbertiens d'espaces vectoriels topologiques et noyaux associés, J. Analyse Math. 13 (1964).
[10] Reproducing kernel, in Encyclopedia of Mathematics, Supplemental Vol. 3, Kluwer Acad. Publishers, Dordrecht, 2001, 328-329.
Solving the 3D Laplace Equation by Meshless Collocation via Harmonic Kernels Y.C. Hon and R. Schaback April 9, Abstract This paper solves the Laplace equation u = on domains Ω R 3 by meshless collocation
More informationVector Spaces - Definition
Vector Spaces - Definition Definition Let V be a set of vectors equipped with two operations: vector addition and scalar multiplication. Then V is called a vector space if for all vectors u,v V, the following
More informationD. Shepard, Shepard functions, late 1960s (application, surface modelling)
Chapter 1 Introduction 1.1 History and Outline Originally, the motivation for the basic meshfree approximation methods (radial basis functions and moving least squares methods) came from applications in
More informationChapter 2 Linear Transformations
Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 9, 2011 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationMath The Laplacian. 1 Green s Identities, Fundamental Solution
Math. 209 The Laplacian Green s Identities, Fundamental Solution Let be a bounded open set in R n, n 2, with smooth boundary. The fact that the boundary is smooth means that at each point x the external
More informationGeometric control and dynamical systems
Université de Nice - Sophia Antipolis & Institut Universitaire de France 9th AIMS International Conference on Dynamical Systems, Differential Equations and Applications Control of an inverted pendulum
More informationKernels for Multi task Learning
Kernels for Multi task Learning Charles A Micchelli Department of Mathematics and Statistics State University of New York, The University at Albany 1400 Washington Avenue, Albany, NY, 12222, USA Massimiliano
More informationMath 113 Final Exam: Solutions
Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P
More informationAn Introduction to Kernel Methods 1
An Introduction to Kernel Methods 1 Yuri Kalnishkan Technical Report CLRC TR 09 01 May 2009 Department of Computer Science Egham, Surrey TW20 0EX, England 1 This paper has been written for wiki project
More informationRKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets Class 22, 2004 Tomaso Poggio and Sayan Mukherjee
RKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets 9.520 Class 22, 2004 Tomaso Poggio and Sayan Mukherjee About this class Goal To introduce an alternate perspective of RKHS via integral operators
More informationFourier Transform & Sobolev Spaces
Fourier Transform & Sobolev Spaces Michael Reiter, Arthur Schuster Summer Term 2008 Abstract We introduce the concept of weak derivative that allows us to define new interesting Hilbert spaces the Sobolev
More information3. Some tools for the analysis of sequential strategies based on a Gaussian process prior
3. Some tools for the analysis of sequential strategies based on a Gaussian process prior E. Vazquez Computer experiments June 21-22, 2010, Paris 21 / 34 Function approximation with a Gaussian prior Aim:
More informationMath Linear algebra, Spring Semester Dan Abramovich
Math 52 0 - Linear algebra, Spring Semester 2012-2013 Dan Abramovich Fields. We learned to work with fields of numbers in school: Q = fractions of integers R = all real numbers, represented by infinite
More informationInner product spaces. Layers of structure:
Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty
More information1 Discretizing BVP with Finite Element Methods.
1 Discretizing BVP with Finite Element Methods In this section, we will discuss a process for solving boundary value problems numerically, the Finite Element Method (FEM) We note that such method is a
More informationThere are two things that are particularly nice about the first basis
Orthogonality and the Gram-Schmidt Process In Chapter 4, we spent a great deal of time studying the problem of finding a basis for a vector space We know that a basis for a vector space can potentially
More informationADVANCED TOPICS IN ALGEBRAIC GEOMETRY
ADVANCED TOPICS IN ALGEBRAIC GEOMETRY DAVID WHITE Outline of talk: My goal is to introduce a few more advanced topics in algebraic geometry but not to go into too much detail. This will be a survey of
More informationFunctional Analysis Review
Functional Analysis Review Lorenzo Rosasco slides courtesy of Andre Wibisono 9.520: Statistical Learning Theory and Applications September 9, 2013 1 2 3 4 Vector Space A vector space is a set V with binary
More informationReproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Differential Operators
Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Differential Operators Qi Ye Abstract In this paper we introduce a generalization of the classical L 2 ( )-based Sobolev
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 11, 2009 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More information3. Fourier decomposition of functions
22 C. MOUHOT 3.1. The Fourier transform. 3. Fourier decomposition of functions Definition 3.1 (Fourier Transform on L 1 (R d )). Given f 2 L 1 (R d ) define its Fourier transform F(f)( ) := R d e 2i x
More informationIntroduction to Signal Spaces
Introduction to Signal Spaces Selin Aviyente Department of Electrical and Computer Engineering Michigan State University January 12, 2010 Motivation Outline 1 Motivation 2 Vector Space 3 Inner Product
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More informationArchiv der Mathematik Holomorphic approximation of L2-functions on the unit sphere in R^3
Manuscript Number: Archiv der Mathematik Holomorphic approximation of L-functions on the unit sphere in R^ --Manuscript Draft-- Full Title: Article Type: Corresponding Author: Holomorphic approximation
More informationScattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions
Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the
More informationChapter 7: Bounded Operators in Hilbert Spaces
Chapter 7: Bounded Operators in Hilbert Spaces I-Liang Chern Department of Applied Mathematics National Chiao Tung University and Department of Mathematics National Taiwan University Fall, 2013 1 / 84
More informationMicrolocal Methods in X-ray Tomography
Microlocal Methods in X-ray Tomography Plamen Stefanov Purdue University Lecture I: Euclidean X-ray tomography Mini Course, Fields Institute, 2012 Plamen Stefanov (Purdue University ) Microlocal Methods
More informationLecture 3: Review of Linear Algebra
ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,
More informationAn Attempt of Characterization of Functions With Sharp Weakly Complete Epigraphs
Journal of Convex Analysis Volume 1 (1994), No.1, 101 105 An Attempt of Characterization of Functions With Sharp Weakly Complete Epigraphs Jean Saint-Pierre, Michel Valadier Département de Mathématiques,
More informationExercise Solutions to Functional Analysis
Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n
More informationRIESZ BASES AND UNCONDITIONAL BASES
In this paper we give a brief introduction to adjoint operators on Hilbert spaces and a characterization of the dual space of a Hilbert space. We then introduce the notion of a Riesz basis and give some
More informationLecture 3: Review of Linear Algebra
ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,
More informationOverview of normed linear spaces
20 Chapter 2 Overview of normed linear spaces Starting from this chapter, we begin examining linear spaces with at least one extra structure (topology or geometry). We assume linearity; this is a natural
More informationRadial Basis Functions I
Radial Basis Functions I Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 14, 2008 Today Reformulation of natural cubic spline interpolation Scattered
More informationMath 210B. Artin Rees and completions
Math 210B. Artin Rees and completions 1. Definitions and an example Let A be a ring, I an ideal, and M an A-module. In class we defined the I-adic completion of M to be M = lim M/I n M. We will soon show
More information16 1 Basic Facts from Functional Analysis and Banach Lattices
16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness
More informationLECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups
LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS 1. Lie groups A Lie group is a special smooth manifold on which there is a group structure, and moreover, the two structures are compatible. Lie groups are
More informationReproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators
Noname manuscript No. (will be inserted by the editor) Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators Gregory E. Fasshauer Qi Ye Abstract
More informationExistence of minimizers for the pure displacement problem in nonlinear elasticity
Existence of minimizers for the pure displacement problem in nonlinear elasticity Cristinel Mardare Université Pierre et Marie Curie - Paris 6, Laboratoire Jacques-Louis Lions, Paris, F-75005 France Abstract
More informationSpring, 2012 CIS 515. Fundamentals of Linear Algebra and Optimization Jean Gallier
Spring 0 CIS 55 Fundamentals of Linear Algebra and Optimization Jean Gallier Homework 5 & 6 + Project 3 & 4 Note: Problems B and B6 are for extra credit April 7 0; Due May 7 0 Problem B (0 pts) Let A be
More information