Function Spaces


A function space is a set of functions $\mathcal{F}$ that has some structure. Often a nonparametric regression function or classifier is chosen to lie in some function space, where the assumed structure is exploited by algorithms and theoretical analysis. Here we review some basic facts about function spaces.

As motivation, consider nonparametric regression. We observe $(X_1, Y_1), \ldots, (X_n, Y_n)$ and we want to estimate $m(x) = \mathbb{E}(Y \mid X = x)$. We cannot simply choose $m$ to minimize the training error $\sum_i (Y_i - m(X_i))^2$, as this will lead to interpolating the data. One approach is to minimize $\sum_i (Y_i - m(X_i))^2$ while restricting $m$ to lie in a well-behaved function space.

1 Hilbert Spaces

Let $V$ be a vector space. A norm is a mapping $\|\cdot\| : V \to [0, \infty)$ that satisfies

1. $\|x + y\| \le \|x\| + \|y\|$.
2. $\|ax\| = |a| \, \|x\|$ for all $a \in \mathbb{R}$.
3. $\|x\| = 0$ implies that $x = 0$.

An example of a norm on $V = \mathbb{R}^k$ is the Euclidean norm $\|x\| = \sqrt{\sum_i x_i^2}$. A sequence $x_1, x_2, \ldots$ in a normed space is a Cauchy sequence if $\|x_m - x_n\| \to 0$ as $m, n \to \infty$. The space is complete if every Cauchy sequence converges to a limit. A complete, normed space is called a Banach space.

An inner product is a mapping $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ that satisfies, for all $x, y, z \in V$ and $a \in \mathbb{R}$:

1. $\langle x, x \rangle \ge 0$ and $\langle x, x \rangle = 0$ if and only if $x = 0$
2. $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$
3. $\langle x, ay \rangle = a \langle x, y \rangle$
4. $\langle x, y \rangle = \langle y, x \rangle$

An example of an inner product on $V = \mathbb{R}^k$ is $\langle x, y \rangle = \sum_i x_i y_i$. Two vectors $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0$. An inner product defines a norm $\|v\| = \sqrt{\langle v, v \rangle}$. We then have the Cauchy-Schwarz inequality

$$ |\langle x, y \rangle| \le \|x\| \, \|y\|. \qquad (1) $$
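As a quick numerical sanity check (a small NumPy sketch, not part of the original notes), the triangle inequality and the Cauchy-Schwarz inequality can be verified on random vectors in $\mathbb{R}^k$ with the Euclidean inner product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y|| for the Euclidean inner product.
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12

# Triangle inequality: ||x + y|| <= ||x|| + ||y||.
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y) + 1e-12
```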

A Hilbert space is a complete inner product space. Every Hilbert space is a Banach space but the reverse is not true in general. In a Hilbert space, we write $f_n \to f$ to mean that $\|f_n - f\| \to 0$ as $n \to \infty$. Note that $\|f_n - f\| \to 0$ does NOT imply that $f_n(x) \to f(x)$. For this to be true, we need the space to be a reproducing kernel Hilbert space, which we discuss later.

If $V$ is a Hilbert space and $L$ is a closed subspace, then for any $v \in V$ there is a unique $y \in L$, called the projection of $v$ onto $L$, which minimizes $\|v - z\|$ over $z \in L$. The set of elements orthogonal to every $z \in L$ is denoted by $L^\perp$. Every $v \in V$ can be written uniquely as $v = w + z$ where $z$ is the projection of $v$ onto $L$ and $w \in L^\perp$. In general, if $L$ and $M$ are subspaces such that every $l \in L$ is orthogonal to every $m \in M$, then we define the orthogonal sum (or direct sum) as

$$ L \oplus M = \{ l + m : \ l \in L, \ m \in M \}. \qquad (2) $$

A set of vectors $\{e_t : t \in T\}$ is orthonormal if $\langle e_s, e_t \rangle = 0$ when $s \neq t$ and $\|e_t\| = 1$ for all $t \in T$. If $\{e_t : t \in T\}$ is orthonormal, and the only vector orthogonal to each $e_t$ is the zero vector, then $\{e_t : t \in T\}$ is called an orthonormal basis. Every Hilbert space has an orthonormal basis. A Hilbert space is separable if there exists a countable orthonormal basis.

Theorem 1 Let $V$ be a separable Hilbert space with countable orthonormal basis $e_1, e_2, \ldots$. Then, for any $x \in V$, we have $x = \sum_{j=1}^\infty \theta_j e_j$ where $\theta_j = \langle x, e_j \rangle$. Furthermore, $\|x\|^2 = \sum_{j=1}^\infty \theta_j^2$, which is known as Parseval's identity.

The coefficients $\theta_j = \langle x, e_j \rangle$ are called Fourier coefficients.

The set $\mathbb{R}^d$ with inner product $\langle v, w \rangle = \sum_j v_j w_j$ is a Hilbert space. Another example of a Hilbert space is the set of functions $f : [a,b] \to \mathbb{R}$ such that $\int_a^b f^2(x)\,dx < \infty$ with inner product $\int f(x) g(x)\,dx$. This space is denoted by $L_2(a,b)$.

2 $L_p$ Spaces

Let $\mathcal{F}$ be a collection of functions taking $[a,b]$ into $\mathbb{R}$. The $L_p$ norm on $\mathcal{F}$ is defined by

$$ \|f\|_p = \left( \int_a^b |f(x)|^p \, dx \right)^{1/p} \qquad (3) $$

where $0 < p < \infty$. For $p = \infty$ we define

$$ \|f\|_\infty = \sup_x |f(x)|. \qquad (4) $$
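Theorem 1 is easy to check numerically in the finite-dimensional case (a sketch, not from the original notes; an orthonormal basis of $\mathbb{R}^k$ is generated here by a QR factorization of a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 8

# Columns of Q form a random orthonormal basis e_1, ..., e_k of R^k.
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))

x = rng.normal(size=k)
theta = Q.T @ x          # Fourier coefficients theta_j = <x, e_j>

# x = sum_j theta_j e_j, and ||x||^2 = sum_j theta_j^2 (Parseval's identity).
assert np.allclose(Q @ theta, x)
assert np.isclose(np.sum(theta ** 2), np.sum(x ** 2))
```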

Sometimes we write $\|f\|_2$ simply as $\|f\|$. The space $L_p(a,b)$ is defined as follows:

$$ L_p(a,b) = \left\{ f : [a,b] \to \mathbb{R} \ : \ \|f\|_p < \infty \right\}. \qquad (5) $$

Every $L_p$ is a Banach space. Some useful inequalities are:

Cauchy-Schwarz: $\left( \int f(x) g(x)\,dx \right)^2 \le \int f^2(x)\,dx \int g^2(x)\,dx$

Minkowski: $\|f + g\|_p \le \|f\|_p + \|g\|_p$ where $p \ge 1$

Hölder: $\|fg\|_1 \le \|f\|_p \, \|g\|_q$ where $(1/p) + (1/q) = 1$.

Special Properties of $L_2$. As we mentioned earlier, the space $L_2(a,b)$ is a Hilbert space. The inner product between two functions $f$ and $g$ in $L_2(a,b)$ is $\int_a^b f(x) g(x)\,dx$ and the norm of $f$ is $\|f\| = \left( \int_a^b f^2(x)\,dx \right)^{1/2}$. With this inner product, $L_2(a,b)$ is a separable Hilbert space. Thus we can find a countable orthonormal basis $\phi_1, \phi_2, \ldots$; that is, $\|\phi_j\| = 1$ for all $j$, $\int_a^b \phi_i(x)\, \phi_j(x)\,dx = 0$ for $i \neq j$, and the only function that is orthogonal to each $\phi_j$ is the zero function. (In fact, there are many such bases.) It follows that if $f \in L_2(a,b)$ then

$$ f(x) = \sum_{j=1}^\infty \theta_j \phi_j(x) \qquad (6) $$

where

$$ \theta_j = \int_a^b f(x)\, \phi_j(x)\,dx \qquad (7) $$

are the coefficients. Also, recall Parseval's identity

$$ \int_a^b f^2(x)\,dx = \sum_{j=1}^\infty \theta_j^2. \qquad (8) $$

The set of functions $\left\{ \sum_{j=1}^n a_j \phi_j(x) : \ a_1, \ldots, a_n \in \mathbb{R} \right\}$ is called the span of $\phi_1, \ldots, \phi_n$. The projection of $f = \sum_{j=1}^\infty \theta_j \phi_j(x)$ onto the span of $\phi_1, \ldots, \phi_n$ is $f_n = \sum_{j=1}^n \theta_j \phi_j(x)$. We call $f_n$ the $n$-term linear approximation of $f$. Let $\Lambda_n$ denote all functions of the form $g = \sum_j a_j \phi_j(x)$ such that at most $n$ of the $a_j$'s are non-zero. Note that $\Lambda_n$ is not a linear space, since if $g_1, g_2 \in \Lambda_n$ it does not follow that $g_1 + g_2$ is in $\Lambda_n$. The best approximation to $f$ in $\Lambda_n$ is

$$ f_n = \sum_{j \in A_n} \theta_j \phi_j(x) \qquad (9) $$

where $A_n$ is the set of indices corresponding to the $n$ largest $|\theta_j|$'s. We call $f_n$ the $n$-term nonlinear approximation of $f$.
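The linear and nonlinear $n$-term approximations can be compared numerically (a sketch, not from the notes; it uses the cosine basis defined in the next section and a midpoint-rule quadrature, with an arbitrary test function):

```python
import numpy as np

# Cosine basis on [0,1]: phi_0 = 1, phi_j(x) = sqrt(2) cos(pi j x).
N = 20000
x = (np.arange(N) + 0.5) / N           # midpoint grid on [0,1]

def phi(j):
    return np.ones(N) if j == 0 else np.sqrt(2) * np.cos(np.pi * j * x)

def integrate(g):                       # midpoint rule for int_0^1 g
    return g.sum() / N

f = np.abs(x - 0.3)                     # a test function in L_2(0,1)
J = 200
theta = np.array([integrate(f * phi(j)) for j in range(J)])

n = 10
f_lin = sum(theta[j] * phi(j) for j in range(n))       # first n coefficients
top = np.argsort(-np.abs(theta))[:n]                   # n largest |theta_j|
f_nonlin = sum(theta[j] * phi(j) for j in top)

err_lin = integrate((f - f_lin) ** 2)
err_nonlin = integrate((f - f_nonlin) ** 2)
assert err_nonlin <= err_lin + 1e-12   # nonlinear approximation is never worse
```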

The Fourier basis on $[0,1]$ is defined by setting $\phi_1(x) = 1$ and

$$ \phi_{2j}(x) = \sqrt{2}\,\cos(2\pi j x), \quad \phi_{2j+1}(x) = \sqrt{2}\,\sin(2\pi j x), \quad j = 1, 2, \ldots \qquad (10) $$

The cosine basis on $[0,1]$ is defined by

$$ \phi_0(x) = 1, \quad \phi_j(x) = \sqrt{2}\,\cos(\pi j x), \quad j = 1, 2, \ldots \qquad (11) $$

The Legendre basis on $(-1, 1)$ is defined by

$$ P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}(3x^2 - 1), \quad P_3(x) = \tfrac{1}{2}(5x^3 - 3x), \ \ldots \qquad (12) $$

These polynomials are defined by the relation

$$ P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n. \qquad (13) $$

The Legendre polynomials are orthogonal but not orthonormal, since

$$ \int_{-1}^{1} P_n^2(x)\,dx = \frac{2}{2n + 1}. \qquad (14) $$

However, we can define modified Legendre polynomials $Q_n(x) = \sqrt{(2n+1)/2}\; P_n(x)$, which then form an orthonormal basis for $L_2(-1, 1)$.

The Haar basis on $[0,1]$ consists of the functions

$$ \left\{ \phi(x), \ \psi_{jk}(x) : \ j = 0, 1, \ldots, \ k = 0, 1, \ldots, 2^j - 1 \right\} \qquad (15) $$

where $\psi_{jk}(x) = 2^{j/2}\, \psi(2^j x - k)$ and

$$ \phi(x) = \begin{cases} 1 & 0 \le x < 1 \\ 0 & \text{otherwise,} \end{cases} \qquad (16) $$

$$ \psi(x) = \begin{cases} -1 & 0 \le x \le \tfrac{1}{2} \\ 1 & \tfrac{1}{2} < x \le 1. \end{cases} \qquad (17) $$

This is a doubly indexed set of functions, so when $f$ is expanded in this basis we write

$$ f(x) = \alpha\, \phi(x) + \sum_{j=0}^\infty \sum_{k=0}^{2^j - 1} \beta_{jk}\, \psi_{jk}(x) \qquad (18) $$

where $\alpha = \int_0^1 f(x)\, \phi(x)\,dx$ and $\beta_{jk} = \int_0^1 f(x)\, \psi_{jk}(x)\,dx$. The Haar basis is an example of a wavelet basis.
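The Haar functions are simple enough to check directly (a sketch, not part of the notes; orthonormality is verified on a dyadic grid where the midpoint rule is exact for these piecewise-constant functions):

```python
import numpy as np

N = 4096
x = (np.arange(N) + 0.5) / N

def psi(t):                              # Haar mother wavelet, eq. (17)
    return np.where(t <= 0.5, -1.0, 1.0) * ((t >= 0) & (t <= 1))

def psi_jk(j, k):                        # psi_jk(x) = 2^{j/2} psi(2^j x - k)
    return 2 ** (j / 2) * psi(2 ** j * x - k)

# phi plus all psi_jk for j = 0,...,3: the Gram matrix should be the identity.
funcs = [np.ones(N)] + [psi_jk(j, k) for j in range(4) for k in range(2 ** j)]
G = np.array([[(u * v).sum() / N for v in funcs] for u in funcs])
assert np.allclose(G, np.eye(len(funcs)), atol=1e-8)

# A truncated Haar expansion of f(x) = sqrt(x) beats the constant approximation.
f = np.sqrt(x)
coef = np.array([(f * u).sum() / N for u in funcs])
approx = sum(c * u for c, u in zip(coef, funcs))
assert ((f - approx) ** 2).sum() / N < ((f - f.mean()) ** 2).sum() / N
```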

Let $[a,b]^d = [a,b] \times \cdots \times [a,b]$ be the $d$-dimensional cube and define

$$ L_2\left([a,b]^d\right) = \left\{ f : [a,b]^d \to \mathbb{R} \ : \ \int_{[a,b]^d} f^2(x_1, \ldots, x_d)\, dx_1 \cdots dx_d < \infty \right\}. \qquad (19) $$

Suppose that $B = \{\phi_1, \phi_2, \ldots\}$ is an orthonormal basis for $L_2([a,b])$. Then the set of functions

$$ B_d = B \otimes \cdots \otimes B = \left\{ \phi_{i_1}(x_1)\, \phi_{i_2}(x_2) \cdots \phi_{i_d}(x_d) \ : \ i_1, i_2, \ldots, i_d \in \{1, 2, \ldots\} \right\} \qquad (20) $$

is called the tensor product of $B$, and forms an orthonormal basis for $L_2([a,b]^d)$.

3 Hölder Spaces

Let $\beta$ be a positive integer.¹ Let $T \subset \mathbb{R}$. The Hölder space $H(\beta, L)$ is the set of functions $g : T \to \mathbb{R}$ such that

$$ |g^{(\beta - 1)}(y) - g^{(\beta - 1)}(x)| \le L\, |x - y|, \quad \text{for all } x, y \in T. \qquad (21) $$

The special case $\beta = 1$ is sometimes called the Lipschitz space. If $\beta = 2$ then we have $|g'(x) - g'(y)| \le L\, |x - y|$ for all $x, y$. Roughly speaking, this means that the functions have bounded second derivatives.

There is also a multivariate version of Hölder spaces. Let $T \subset \mathbb{R}^d$. Given a vector $s = (s_1, \ldots, s_d)$, define $|s| = s_1 + \cdots + s_d$, $s! = s_1! \cdots s_d!$, $x^s = x_1^{s_1} \cdots x_d^{s_d}$ and

$$ D^s = \frac{\partial^{s_1 + \cdots + s_d}}{\partial x_1^{s_1} \cdots \partial x_d^{s_d}}. $$

The Hölder class $H(\beta, L)$ is the set of functions $g : T \to \mathbb{R}$ such that

$$ |D^s g(x) - D^s g(y)| \le L\, \|x - y\|^{\beta - |s|} \qquad (22) $$

for all $x, y$ and all $s$ such that $|s| = \beta - 1$.

If $g \in H(\beta, L)$ then $g(x)$ is close to its Taylor series approximation:

$$ |g(u) - g_{x,\beta}(u)| \le L\, \|u - x\|^{\beta} \qquad (23) $$

where

$$ g_{x,\beta}(u) = \sum_{|s| \le \beta - 1} \frac{(u - x)^s}{s!}\, D^s g(x). \qquad (24) $$

¹ It is possible to define Hölder spaces for non-integers but we will not need this generalization.
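The tensor-product construction (20) can be illustrated numerically (a sketch, not from the notes; it takes the cosine basis on $[0,1]$, forms products on $[0,1]^2$, and checks that the Gram matrix is the identity under a midpoint-rule quadrature):

```python
import numpy as np
from itertools import product

N = 500
x = (np.arange(N) + 0.5) / N

def phi(j):                             # cosine basis on [0,1]
    return np.ones(N) if j == 0 else np.sqrt(2) * np.cos(np.pi * j * x)

# Tensor products phi_i(x1) phi_j(x2) on the N x N grid, for i, j in {0,1,2}.
fns = [np.outer(phi(i), phi(j)) for i, j in product(range(3), repeat=2)]
G = np.array([[(u * v).sum() / N ** 2 for v in fns] for u in fns])
assert np.allclose(G, np.eye(9), atol=1e-8)   # still orthonormal on the square
```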

In the case of $\beta = 2$, this means that

$$ \left| g(u) - \left[ g(x) + (u - x)^T \nabla g(x) \right] \right| \le L\, \|x - u\|^2. $$

We will see that in function estimation, the optimal rate of convergence over $H(\beta, L)$ under $L_2$ loss is $O(n^{-2\beta/(2\beta + d)})$.

4 Sobolev Spaces

Let $f$ be integrable on every bounded interval. Then $f$ is weakly differentiable if there exists a function $f'$ that is integrable on every bounded interval, such that $\int_x^y f'(s)\,ds = f(y) - f(x)$ whenever $x \le y$. We call $f'$ the weak derivative of $f$. Let $D^j f$ denote the $j$-th weak derivative of $f$.

The Sobolev space of order $m$ is defined by

$$ W_{m,p} = \left\{ f \in L_p(0,1) : \ D^m f \in L_p(0,1) \right\}. \qquad (25) $$

The Sobolev ball of order $m$ and radius $c$ is defined by

$$ W_{m,p}(c) = \left\{ f : \ f \in W_{m,p}, \ \|D^m f\|_p \le c \right\}. \qquad (26) $$

For the rest of this section we take $p = 2$ and write $W_m$ instead of $W_{m,2}$.

Theorem 2 The Sobolev space $W_m$ is a Hilbert space under the inner product

$$ \langle f, g \rangle = \sum_{k=0}^{m-1} f^{(k)}(0)\, g^{(k)}(0) + \int_0^1 f^{(m)}(x)\, g^{(m)}(x)\,dx. \qquad (27) $$

Define

$$ K(x, y) = \sum_{k=0}^{m-1} \frac{x^k y^k}{(k!)^2} + \int_0^1 \frac{(x - u)_+^{m-1}\, (y - u)_+^{m-1}}{((m-1)!)^2}\, du. \qquad (28) $$

Then, for each $f \in W_m$ we have

$$ f(y) = \langle f, K(\cdot, y) \rangle \qquad (29) $$

and

$$ K(x, y) = \langle K(\cdot, x), K(\cdot, y) \rangle. \qquad (30) $$

We say that $K$ is a kernel for the space and that $W_m$ is a reproducing kernel Hilbert space or RKHS. See Section 7 for more on reproducing kernel Hilbert spaces.
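The kernel (28) is concrete enough to compute (a sketch, not part of the notes; the integral is approximated by a midpoint rule, and we check that the resulting Gram matrix is symmetric and, up to quadrature error, positive semidefinite):

```python
import numpy as np
from math import factorial

def sobolev_kernel(x, y, m=2, ngrid=2000):
    """Eq. (28): sum_{k<m} x^k y^k / (k!)^2
    + int_0^1 (x-u)_+^{m-1} (y-u)_+^{m-1} / ((m-1)!)^2 du (midpoint rule)."""
    u = (np.arange(ngrid) + 0.5) / ngrid
    poly = sum((x ** k) * (y ** k) / factorial(k) ** 2 for k in range(m))
    integrand = (np.maximum(x - u, 0) ** (m - 1) * np.maximum(y - u, 0) ** (m - 1)
                 / factorial(m - 1) ** 2)
    return poly + integrand.sum() / ngrid

pts = np.linspace(0.05, 0.95, 10)
G = np.array([[sobolev_kernel(a, b) for b in pts] for a in pts])
assert np.allclose(G, G.T)                      # symmetric
assert np.linalg.eigvalsh(G).min() > -1e-6      # PSD up to quadrature error
```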

It follows from Mercer's theorem (Theorem 4) that there is an orthonormal basis $e_1, e_2, \ldots$ for $L_2(a,b)$ and real numbers $\lambda_1, \lambda_2, \ldots$ such that

$$ K(x, y) = \sum_{j=1}^\infty \lambda_j\, e_j(x)\, e_j(y). \qquad (31) $$

The functions $e_j$ are eigenfunctions of $K$ and the $\lambda_j$'s are the corresponding eigenvalues:

$$ \int K(x, y)\, e_j(y)\,dy = \lambda_j\, e_j(x). \qquad (32) $$

Hence, the inner product defined in (27) can be written as

$$ \langle f, g \rangle = \sum_{j=1}^\infty \frac{\theta_j \beta_j}{\lambda_j} \qquad (33) $$

where $f(x) = \sum_{j=1}^\infty \theta_j e_j(x)$ and $g(x) = \sum_{j=1}^\infty \beta_j e_j(x)$.

Next we discuss how the functions in a Sobolev space can be parameterized by using another convenient basis. An ellipsoid is a set of the form

$$ \Theta = \left\{ \theta : \ \sum_{j=1}^\infty a_j^2 \theta_j^2 \le c^2 \right\} \qquad (34) $$

where $a_j$ is a sequence of numbers such that $a_j \to \infty$ as $j \to \infty$. If $\Theta$ is an ellipsoid and if $a_j \sim (\pi j)^m$ as $j \to \infty$, we call $\Theta$ a Sobolev ellipsoid and we denote it by $\Theta_m(c)$.

Theorem 3 Let $\{\phi_j : j = 1, 2, \ldots\}$ be the Fourier basis:

$$ \phi_1(x) = 1, \quad \phi_{2j}(x) = \sqrt{2}\,\cos(2\pi j x), \quad \phi_{2j+1}(x) = \sqrt{2}\,\sin(2\pi j x), \quad j = 1, 2, \ldots \qquad (35) $$

Then

$$ W_m(c) = \left\{ f : \ f = \sum_{j=1}^\infty \theta_j \phi_j, \ \sum_{j=1}^\infty a_j^2 \theta_j^2 \le c^2 \right\} \qquad (36) $$

where $a_j = (\pi j)^m$ for $j$ even and $a_j = (\pi (j-1))^m$ for $j$ odd. Thus, a Sobolev space corresponds to a Sobolev ellipsoid with $a_j \sim (\pi j)^m$.

Note that (36) allows us to define the Sobolev space $W_m$ for fractional values of $m$ as well as integer values.

A multivariate version of Sobolev spaces can be defined as follows. Let $\alpha = (\alpha_1, \ldots, \alpha_d)$ be non-negative integers and define $|\alpha| = \alpha_1 + \cdots + \alpha_d$. Given $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$ write $x^\alpha = x_1^{\alpha_1} \cdots x_d^{\alpha_d}$ and

$$ D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}. \qquad (37) $$
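The eigendecomposition (31)-(32) can be illustrated by discretizing the integral operator on a grid, a Nyström-style sketch (not part of the notes; the kernel and bandwidth are arbitrary choices):

```python
import numpy as np

# Discretize (T_K f)(x) = int K(x,y) f(y) dy on a uniform grid of [0,1]:
# the eigenvectors approximate the eigenfunctions e_j, and K(x,y) is
# recovered as sum_j lambda_j e_j(x) e_j(y), as in (31)-(32).
N = 400
x = (np.arange(N) + 0.5) / N
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)   # a Mercer kernel on [0,1]

w = 1.0 / N                                         # quadrature weight
lam, E = np.linalg.eigh(K * w)                      # discretized T_K e = lambda e
E = E / np.sqrt(w)                                  # normalize so int e_j^2 = 1

K_rec = (E * lam) @ E.T                             # sum_j lambda_j e_j(x) e_j(y)
assert np.allclose(K_rec, K, atol=1e-8)
```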

Then the Sobolev space is defined by

$$ W_{m,p} = \left\{ f \in L_p\left([a,b]^d\right) : \ D^\alpha f \in L_p\left([a,b]^d\right) \ \text{for all } |\alpha| \le m \right\}. \qquad (38) $$

We will see that in function estimation, the optimal rate of convergence over $W_{\beta,2}$ under $L_2$ loss is $O(n^{-2\beta/(2\beta + d)})$.

5 Besov Spaces*

Functions in Sobolev spaces are homogeneous, meaning that their smoothness does not vary substantially across the domain of the function. Besov spaces are richer classes of functions that include inhomogeneous functions.

Let

$$ \Delta_h^{(r)} f(x) = \sum_{k=0}^{r} \binom{r}{k} (-1)^{r-k} f(x + k h). \qquad (39) $$

Thus, $\Delta_h^{(0)} f(x) = f(x)$ and

$$ \Delta_h^{(r)} f(x) = \Delta_h^{(r-1)} f(x + h) - \Delta_h^{(r-1)} f(x). \qquad (40) $$

Next define

$$ w_{r,p}(f; t) = \sup_{|h| \le t} \left\| \Delta_h^{(r)} f \right\|_p \qquad (41) $$

where $\|g\|_p = \left( \int |g(x)|^p\,dx \right)^{1/p}$. Given $(p, q, \varsigma)$, let $r$ be such that $r - 1 \le \varsigma \le r$. The Besov seminorm is defined by

$$ |f|^{\varsigma}_{p,q} = \left[ \int_0^\infty \left( \frac{w_{r,p}(f; h)}{h^{\varsigma}} \right)^q \frac{dh}{h} \right]^{1/q}. \qquad (42) $$

For $q = \infty$ we define

$$ |f|^{\varsigma}_{p,\infty} = \sup_{0 < h < 1} \frac{w_{r,p}(f; h)}{h^{\varsigma}}. \qquad (43) $$

The Besov space $B^{\varsigma}_{p,q}(c)$ is defined to be the set of functions $f$ mapping $[0,1]$ into $\mathbb{R}$ such that $\|f\|_p < \infty$ and $|f|^{\varsigma}_{p,q} \le c$.

Besov spaces include a wide range of familiar function spaces. The Sobolev space $W_{m,2}$ corresponds to the Besov ball $B^m_{2,2}$. The generalized Sobolev space $W_{m,p}$, which uses an $L_p$ norm on the $m$-th derivative, is almost a Besov space in the sense that $B^m_{p,1} \subset W_{m,p} \subset B^m_{p,\infty}$. The Hölder space $H^\alpha$ with $\alpha = k + \beta$ is equivalent to $B^{k+\beta}_{\infty,\infty}$, and the set $T$ consisting of functions of bounded variation satisfies $B^1_{1,1} \subset T \subset B^1_{1,\infty}$.
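The modulus of smoothness $w_{r,p}(f; t)$ of (39)-(41) can be approximated directly (a sketch, not part of the notes; the supremum over $h$ and the $L_p$ norm are both discretized, and the test function is an arbitrary smooth choice):

```python
import numpy as np
from math import comb

def delta_r(f, x, h, r):
    """r-th difference: sum_k C(r,k) (-1)^{r-k} f(x + k h), eq. (39)."""
    return sum(comb(r, k) * (-1) ** (r - k) * f(x + k * h) for k in range(r + 1))

def w_rp(f, t, r, p=2, ngrid=10000):
    """w_{r,p}(f; t) = sup_{|h|<=t} ||Delta_h^r f||_p, approximated on a grid
    of h values; f must be defined on all of R here."""
    x = (np.arange(ngrid) + 0.5) / ngrid
    best = 0.0
    for h in np.linspace(1e-4, t, 50):
        g = delta_r(f, x, h, r)
        best = max(best, (np.abs(g) ** p).sum() ** (1 / p) / ngrid ** (1 / p))
    return best

# For the smooth function f(x) = sin(2 pi x), w_{1,2}(f; t) shrinks with t:
f = lambda x: np.sin(2 * np.pi * x)
assert w_rp(f, 0.01, r=1) < w_rp(f, 0.1, r=1) < w_rp(f, 0.5, r=1)
```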

6 Entropy and Dimension

Given a norm $\|\cdot\|$ on a function space $\mathcal{F}$, a sphere of radius $\epsilon$ is a set of the form $\{f \in \mathcal{F} : \|f - g\| \le \epsilon\}$ for some $g$. A set of spheres covers $\mathcal{F}$ if $\mathcal{F}$ is contained in their union. The covering number $N(\epsilon, \|\cdot\|)$ is the smallest number of spheres of radius $\epsilon$ required to cover $\mathcal{F}$. We drop the dependence on the norm $\|\cdot\|$ when it is understood from context. The metric entropy of $\mathcal{F}$ is $H(\epsilon) = \log N(\epsilon)$. The class $\mathcal{F}$ has dimension $d$ if, for all small $\epsilon$, $N(\epsilon) = c (1/\epsilon)^d$ for some constant $c$.

A finite set $f_1, \ldots, f_k$ is an $\epsilon$-net if $\|f_i - f_j\| > \epsilon$ for all $i \neq j$. The packing number $M(\epsilon)$ is the size of the largest $\epsilon$-net, and the packing entropy is $V(\epsilon) = \log M(\epsilon)$. The packing entropy and metric entropy are related by

$$ V(2\epsilon) \le H(\epsilon) \le V(\epsilon). \qquad (44) $$

Here are some common spaces and their entropies:

    Space                            H(epsilon)
    Sobolev $W_{m,p}$                $\epsilon^{-d/m}$
    Besov $B^{\varsigma}_{p,q}$      $\epsilon^{-d/\varsigma}$
    Hölder $H^{\alpha}$              $\epsilon^{-d/\alpha}$

7 Mercer Kernels and Reproducing Kernel Hilbert Spaces

Intuitively, a reproducing kernel Hilbert space (RKHS) is a class of smooth functions defined by an object called a Mercer kernel. Here are the details.

Mercer Kernels. A Mercer kernel is a continuous function $K : [a,b] \times [a,b] \to \mathbb{R}$ such that $K(x,y) = K(y,x)$, and such that $K$ is positive semidefinite, meaning that

$$ \sum_{i=1}^n \sum_{j=1}^n K(x_i, x_j)\, c_i c_j \ge 0 \qquad (45) $$

for all finite sets of points $x_1, \ldots, x_n \in [a,b]$ and all real numbers $c_1, \ldots, c_n$. The function

$$ K(x, y) = \sum_{k=0}^{m-1} \frac{x^k y^k}{(k!)^2} + \int_0^1 \frac{(x - u)_+^{m-1}\, (y - u)_+^{m-1}}{((m-1)!)^2}\, du \qquad (46) $$

introduced in Section 4 on Sobolev spaces is an example of a Mercer kernel. The most commonly used kernel is the Gaussian kernel

$$ K(x, y) = e^{-\|x - y\|^2 / \sigma^2}. $$
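The packing-covering relationship from Section 6 can be illustrated on a finite point cloud (a sketch, not part of the notes; a greedily built maximal $\epsilon$-separated set is automatically an $\epsilon$-cover, which is the key fact behind the inequality $N(\epsilon) \le M(\epsilon)$):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(size=(2000, 2))          # F: a cloud of points in the unit square

def greedy_packing(X, eps):
    """Greedily keep points pairwise more than eps apart (a maximal eps-net)."""
    centers = []
    for p in X:
        if all(np.linalg.norm(p - c) > eps for c in centers):
            centers.append(p)
    return np.array(centers)

eps = 0.1
C = greedy_packing(X, eps)

# Maximality: every point of X lies within eps of some center, so the
# packing is also an eps-cover and N(eps) <= M(eps).
d = np.min(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
assert d.max() <= eps

# Coarser packings are smaller, consistent with M(2 eps) <= M(eps).
M_eps, M_2eps = len(C), len(greedy_packing(X, 2 * eps))
assert M_2eps <= M_eps
```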

Theorem 4 (Mercer's theorem) Suppose that $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is symmetric and satisfies $\sup_{x,y} K(x,y) < \infty$, and define

$$ T_K f(x) = \int_{\mathcal{X}} K(x, y)\, f(y)\,dy. \qquad (47) $$

Suppose that $T_K : L_2(\mathcal{X}) \to L_2(\mathcal{X})$ is positive semidefinite; thus,

$$ \int_{\mathcal{X}} \int_{\mathcal{X}} K(x, y)\, f(x)\, f(y)\,dx\,dy \ge 0 \qquad (48) $$

for any $f \in L_2(\mathcal{X})$. Let $\lambda_i, \Psi_i$ be the eigenvalues and eigenfunctions of $T_K$, with

$$ \int_{\mathcal{X}} K(x, y)\, \Psi_i(y)\,dy = \lambda_i\, \Psi_i(x) \qquad (49) $$

and $\int_{\mathcal{X}} \Psi_i^2(x)\,dx = 1$. Then $\sum_i \lambda_i < \infty$, $\sup_x |\Psi_i(x)| < \infty$, and

$$ K(x, y) = \sum_{i=1}^\infty \lambda_i\, \Psi_i(x)\, \Psi_i(y), \qquad (50) $$

where the convergence is uniform in $x, y$.

This gives the mapping into feature space as

$$ x \mapsto \Phi(x) = \left( \sqrt{\lambda_1}\, \Psi_1(x), \ \sqrt{\lambda_2}\, \Psi_2(x), \ \ldots \right). \qquad (51) $$

The positive semidefinite requirement for Mercer kernels is generally difficult to verify. But the following basic results show how one can build up kernels in pieces. If $K_1 : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and $K_2 : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ are Mercer kernels then so are the following:

$$ K(x,y) = K_1(x,y) + K_2(x,y) \qquad (52) $$
$$ K(x,y) = c\,K_1(x,y) + d\,K_2(x,y) \quad \text{for } c, d \in \mathbb{R}_+ \qquad (53) $$
$$ K(x,y) = K_1(x,y) + c \quad \text{for } c \in \mathbb{R}_+ \qquad (54) $$
$$ K(x,y) = K_1(x,y)\, K_2(x,y) \qquad (55) $$
$$ K(x,y) = f(x)\, f(y) \quad \text{for } f : \mathcal{X} \to \mathbb{R} \qquad (56) $$
$$ K(x,y) = \left( K_1(x,y) + c \right)^d \quad \text{for } c \in \mathbb{R}_+ \text{ and } d \in \mathbb{N} \qquad (57) $$
$$ K(x,y) = \exp\left( K_1(x,y) / \sigma^2 \right) \quad \text{for } \sigma \in \mathbb{R} \qquad (58) $$
$$ K(x,y) = \exp\left( -\left( K_1(x,x) - 2 K_1(x,y) + K_1(y,y) \right) / \sigma^2 \right) \qquad (59) $$
$$ K(x,y) = K_1(x,y) \Big/ \sqrt{K_1(x,x)\, K_1(y,y)} \qquad (60) $$
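The closure rules above can be spot-checked numerically on Gram matrices, since (45) at a fixed set of points is exactly positive semidefiniteness of the Gram matrix (a sketch, not from the notes; the data and tolerances are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances

K1 = X @ X.T                          # linear kernel <x, y>
K2 = np.exp(-D2 / 2.0)                # Gaussian kernel

def is_psd(K, tol=1e-6):
    return np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() > -tol

# Kernels built with the closure rules stay positive semidefinite:
assert is_psd(K1) and is_psd(K2)
assert is_psd(K1 + K2)                # sum, rule (52)
assert is_psd(3.0 * K1)               # positive scaling, rule (53)
assert is_psd(K1 * K2)                # elementwise (Schur) product, rule (55)
assert is_psd((K1 + 1.0) ** 3)        # polynomial kernel, rule (57)
assert is_psd(np.exp(K1 / 2.0))       # exponentiated kernel, rule (58)
```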

RKHS. Given a kernel $K$, let $K_x(\cdot)$ be the function obtained by fixing the first coordinate. That is, $K_x(y) = K(x, y)$. For the Gaussian kernel, $K_x$ is a Gaussian bump centered at $x$. We can create functions by taking linear combinations of the kernel:

$$ f(x) = \sum_{j=1}^k \alpha_j\, K_{x_j}(x). $$

Let $\mathcal{H}_0$ denote all such functions:

$$ \mathcal{H}_0 = \left\{ f : \ f(x) = \sum_{j=1}^k \alpha_j\, K_{x_j}(x) \right\}. $$

Given two such functions $f(x) = \sum_{j=1}^k \alpha_j K_{x_j}(x)$ and $g(x) = \sum_{j=1}^m \beta_j K_{y_j}(x)$ we define an inner product

$$ \langle f, g \rangle = \langle f, g \rangle_K = \sum_i \sum_j \alpha_i \beta_j K(x_i, y_j). $$

In general, $f$ (and $g$) might be representable in more than one way. You can check that $\langle f, g \rangle_K$ is independent of how $f$ (or $g$) is represented. The inner product defines a norm:

$$ \|f\|_K = \sqrt{\langle f, f \rangle} = \sqrt{\sum_j \sum_k \alpha_j \alpha_k K(x_j, x_k)} = \sqrt{\alpha^T \mathbb{K} \alpha} $$

where $\alpha = (\alpha_1, \ldots, \alpha_k)^T$ and $\mathbb{K}$ is the $k \times k$ matrix with $\mathbb{K}_{jk} = K(x_j, x_k)$.

The Reproducing Property. Let $f(x) = \sum_i \alpha_i K_{x_i}(x)$. Note the following crucial property:

$$ \langle f, K_x \rangle = \sum_i \alpha_i K(x_i, x) = f(x). $$

This follows from the definition of $\langle f, g \rangle$ where we take $g = K_x$. This implies that $\langle K_x, K_x \rangle = K(x, x)$. This is called the reproducing property. It also implies that $K_x$ is the representer of the evaluation functional.

The completion of $\mathcal{H}_0$ with respect to $\|\cdot\|_K$ is denoted by $\mathcal{H}_K$ and is called the RKHS generated by $K$. To verify that this is a well-defined Hilbert space, you should check that the following properties hold:

1. $\langle f, g \rangle = \langle g, f \rangle$
2. $\langle c f + d g, h \rangle = c \langle f, h \rangle + d \langle g, h \rangle$
3. $\langle f, f \rangle = 0$ if and only if $f = 0$.
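For $f \in \mathcal{H}_0$ the inner product $\langle f, K_x \rangle$ reduces to the finite sum $\sum_i \alpha_i K(x_i, x)$, so the reproducing property can be computed exactly (a sketch, not from the notes; the Gaussian kernel and the random points are arbitrary choices):

```python
import numpy as np

def K(x, y, sigma=1.0):                   # Gaussian kernel on R
    return np.exp(-(x - y) ** 2 / sigma ** 2)

rng = np.random.default_rng(4)
xs = rng.uniform(-1, 1, size=5)           # points x_1,...,x_k defining f
alpha = rng.normal(size=5)                # coefficients alpha_i

def f(x):                                 # f = sum_i alpha_i K_{x_i}
    return sum(a * K(xi, x) for a, xi in zip(alpha, xs))

# Reproducing property: <f, K_x>_K = sum_i alpha_i K(x_i, x) = f(x).
for x in [-0.7, 0.0, 0.3, 0.9]:
    inner = sum(a * K(xi, x) for a, xi in zip(alpha, xs))
    assert np.isclose(inner, f(x))

# ||f||_K^2 = alpha^T K alpha >= 0, with K the Gram matrix.
G = K(xs[:, None], xs[None, :])
assert alpha @ G @ alpha >= 0
```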

The last one is not obvious, so let us verify it here. It is easy to see that $f = 0$ implies that $\langle f, f \rangle = 0$. Now we must show that $\langle f, f \rangle = 0$ implies that $f(x) = 0$. So suppose that $\langle f, f \rangle = 0$. Pick any $x$. Then, using Cauchy-Schwarz,

$$ 0 \le f^2(x) = \langle f, K_x \rangle^2 \le \|f\|_K^2\, \|K_x\|_K^2 = \langle f, f \rangle\, \|K_x\|_K^2 = 0. $$

So $0 \le f^2(x) \le 0$, which means that $f(x) = 0$.

Evaluation Functionals. A key property of RKHSs is the behavior of the evaluation functional. The evaluation functional $\delta_x$ assigns a real number to each function. It is defined by $\delta_x f = f(x)$. In general, the evaluation functional is not continuous. This means we can have $f_n \to f$ but $\delta_x f_n$ does not converge to $\delta_x f$. For example, let $f(x) = 0$ and $f_n(x) = \sqrt{n}\, I(x < 1/n^2)$. Then $\|f_n - f\| = 1/\sqrt{n} \to 0$. But $\delta_0 f_n = \sqrt{n}$, which does not converge to $\delta_0 f = 0$. Intuitively, this is because Hilbert spaces can contain very unsmooth functions.

But in an RKHS, the evaluation functional is continuous. Intuitively, this means that the functions in the space are well-behaved. To see this, suppose that $f_n \to f$. Then

$$ \delta_x f_n = f_n(x) = \langle f_n, K_x \rangle \to \langle f, K_x \rangle = f(x) = \delta_x f, $$

so the evaluation functional is continuous. In fact: a Hilbert space is an RKHS if and only if the evaluation functionals are continuous.

Examples. Here are some examples of RKHSs.

Example 5 Let $\mathcal{H}$ be all functions $f$ on $\mathbb{R}$ such that the support of the Fourier transform of $f$ is contained in $[-a, a]$. Then

$$ K(x, y) = \frac{\sin(a(y - x))}{a(y - x)} $$

and $\langle f, g \rangle = \int f g$.

Example 6 Let $\mathcal{H}$ be all functions $f$ on $(0,1)$ such that

$$ \int_0^1 \left( f^2(x) + (f'(x))^2 \right) x^2\,dx < \infty. $$
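The $L_2$ counterexample above is easy to simulate (a sketch, not from the notes; the integral is approximated on a fine grid): $f_n(x) = \sqrt{n}\, I(x < 1/n^2)$ has $\|f_n\|_2 = 1/\sqrt{n} \to 0$ while $\delta_0 f_n = f_n(0) = \sqrt{n} \to \infty$.

```python
import numpy as np

for n in [10, 100, 1000]:
    x = np.linspace(0, 1, 10 ** 6)
    fn = np.sqrt(n) * (x < 1 / n ** 2)
    l2 = np.sqrt((fn ** 2).sum() / x.size)     # approximate L_2 norm
    assert np.isclose(l2, 1 / np.sqrt(n), rtol=0.1)   # norm -> 0 ...
    assert fn[0] == np.sqrt(n)                        # ... but f_n(0) blows up
```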

Then

$$ K(x, y) = (xy)^{-1} \left( e^{-x} \sinh(y)\, I(0 < x \le y) + e^{-y} \sinh(x)\, I(0 < y \le x) \right) $$

and

$$ \|f\|^2 = \int_0^1 \left( f^2(x) + (f'(x))^2 \right) x^2\,dx. $$

Example 7 The Sobolev space of order $m$ is (roughly speaking) the set of functions $f$ such that $\int (f^{(m)})^2 < \infty$. For $m = 2$ and $\mathcal{X} = [0,1]$ the kernel is

$$ K(x, y) = \begin{cases} 1 + xy + \dfrac{x y^2}{2} - \dfrac{y^3}{6} & 0 \le y \le x \le 1 \\[2mm] 1 + xy + \dfrac{y x^2}{2} - \dfrac{x^3}{6} & 0 \le x \le y \le 1 \end{cases} $$

and

$$ \|f\|_K^2 = f^2(0) + (f'(0))^2 + \int_0^1 (f''(x))^2\,dx. $$

Spectral Representation. Suppose that $\sup_{x,y} K(x,y) < \infty$. Define eigenvalues $\lambda_j$ and orthonormal eigenfunctions $\psi_j$ by

$$ \int K(x, y)\, \psi_j(y)\,dy = \lambda_j\, \psi_j(x). $$

Then $\sum_j \lambda_j < \infty$ and $\sup_x |\psi_j(x)| < \infty$. Also,

$$ K(x, y) = \sum_j \lambda_j\, \psi_j(x)\, \psi_j(y). $$

Define the feature map $\Phi$ by

$$ \Phi(x) = \left( \sqrt{\lambda_1}\, \psi_1(x), \ \sqrt{\lambda_2}\, \psi_2(x), \ \ldots \right). $$

We can expand $f$ either in terms of $K$ or in terms of the basis $\psi_1, \psi_2, \ldots$:

$$ f(x) = \sum_i \alpha_i K(x_i, x) = \sum_j \beta_j \psi_j(x). $$

Furthermore, if $f(x) = \sum_j a_j \psi_j(x)$ and $g(x) = \sum_j b_j \psi_j(x)$, then

$$ \langle f, g \rangle = \sum_j \frac{a_j b_j}{\lambda_j}. $$

Roughly speaking, when $\|f\|_K$ is small, then $f$ is smooth.

Representer Theorem. Let $\ell$ be a loss function depending on $(X_1, Y_1), \ldots, (X_n, Y_n)$ and on $f(X_1), \ldots, f(X_n)$. Let $\hat{f}$ minimize $\ell + g(\|f\|_K^2)$ where $g$ is any monotone increasing function. Then $\hat{f}$ has the form

$$ \hat{f}(x) = \sum_{i=1}^n \alpha_i K(x_i, x) $$

for some $\alpha_1, \ldots, \alpha_n$.

RKHS Regression. Define $\hat{m}$ to minimize

$$ R = \sum_i (Y_i - m(X_i))^2 + \lambda \|m\|_K^2. $$

By the representer theorem, $\hat{m}(x) = \sum_{i=1}^n \alpha_i K(X_i, x)$. Plug this into $R$ and we get

$$ R = \|Y - \mathbb{K}\alpha\|^2 + \lambda\, \alpha^T \mathbb{K} \alpha $$

where $\mathbb{K}_{jk} = K(X_j, X_k)$ is the Gram matrix. The minimizer over $\alpha$ is

$$ \hat{\alpha} = (\mathbb{K} + \lambda I)^{-1} Y $$

and $\hat{m}(x) = \sum_j \hat{\alpha}_j K(X_j, x)$. The fitted values are

$$ \hat{Y} = \mathbb{K}\hat{\alpha} = \mathbb{K}(\mathbb{K} + \lambda I)^{-1} Y = L Y. $$

So this is a linear smoother. We will discuss this in detail later.

Support Vector Machines. Suppose $Y_i \in \{-1, +1\}$. Recall that the linear SVM minimizes the penalized hinge loss:

$$ J = \sum_i \left[ 1 - Y_i (\beta_0 + \beta^T X_i) \right]_+ + \lambda \|\beta\|^2. $$

The dual is to maximize

$$ \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j Y_i Y_j \langle X_i, X_j \rangle $$

subject to $0 \le \alpha_i \le C$. The RKHS version is to minimize

$$ J = \sum_i \left[ 1 - Y_i f(X_i) \right]_+ + \lambda \|f\|_K^2. $$
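The RKHS regression recipe is a few lines of linear algebra (a sketch, not from the notes; the Gaussian kernel, bandwidth, penalty $\lambda$, and simulated data are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
X = np.sort(rng.uniform(0, 1, n))
Y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=n)

sigma, lam = 0.2, 1e-3
K = np.exp(-(X[:, None] - X[None, :]) ** 2 / sigma ** 2)   # Gram matrix

alpha = np.linalg.solve(K + lam * np.eye(n), Y)            # (K + lam I)^{-1} Y
Y_hat = K @ alpha                                          # fitted values L Y

def m_hat(x):                    # m_hat(x) = sum_j alpha_j K(X_j, x)
    return np.exp(-(x - X) ** 2 / sigma ** 2) @ alpha

mse = np.mean((Y_hat - np.sin(2 * np.pi * X)) ** 2)
assert mse < 0.05                             # recovers the smooth truth well
assert np.isclose(m_hat(X[0]), Y_hat[0])      # consistent with the fitted values
```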

The dual is the same except that $\langle X_i, X_j \rangle$ is replaced with $K(X_i, X_j)$. This is called the kernel trick.

The Kernel Trick. This is a fairly general trick. In many algorithms you can replace $\langle x_i, x_j \rangle$ with $K(x_i, x_j)$ and get a nonlinear version of the algorithm. This is equivalent to replacing $x$ with $\Phi(x)$ and replacing $\langle x_i, x_j \rangle$ with $\langle \Phi(x_i), \Phi(x_j) \rangle$. However, $K(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle$ and $K(x_i, x_j)$ is much easier to compute. In summary, by replacing $\langle x_i, x_j \rangle$ with $K(x_i, x_j)$ we turn a linear procedure into a nonlinear procedure without adding much computation.

Hidden Tuning Parameters. There are hidden tuning parameters in the RKHS. Consider the Gaussian kernel $K(x, y) = e^{-\|x - y\|^2 / \sigma^2}$. For nonparametric regression we minimize $\sum_i (Y_i - m(X_i))^2$ subject to $\|m\|_K \le L$. We control the bias-variance tradeoff by doing cross-validation over $L$. But what about $\sigma$? This parameter seems to get mostly ignored.

Suppose we have a uniform distribution on a circle. The eigenfunctions of $K(x, y)$ are the sines and cosines. The eigenvalues $\lambda_k$ die off like $(1/\sigma)^{2k}$. So $\sigma$ affects the bias-variance tradeoff, since it weights things towards lower order Fourier functions. In principle we can compensate for this by varying $L$. But clearly there is some interaction between $L$ and $\sigma$. The practical effect is not well understood.

Now consider the polynomial kernel $K(x, y) = (1 + \langle x, y \rangle)^d$. This kernel has the same eigenfunctions, but the eigenvalues decay at a polynomial rate depending on $d$. So there is an interaction between $L$, $d$, and the choice of kernel itself.
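The hidden effect of $\sigma$ can be seen in the spectrum of a Gaussian Gram matrix on a fixed grid (a sketch, not from the notes; the grid, bandwidths, and eigenvalue threshold are arbitrary): a larger $\sigma$ concentrates the spectrum on fewer low-frequency directions, which is exactly the smoothing bias discussed above.

```python
import numpy as np

x = np.linspace(0, 1, 200)

def gram_eigs(sigma):
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / sigma ** 2)
    return np.sort(np.linalg.eigvalsh(K))[::-1]   # descending eigenvalues

e_wide, e_narrow = gram_eigs(1.0), gram_eigs(0.05)

def effective_dim(e):            # eigenvalues above 1e-6 of the largest
    return int((e > 1e-6 * e[0]).sum())

# A wide kernel (large sigma) has a much faster-decaying spectrum:
assert effective_dim(e_wide) < effective_dim(e_narrow)
```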


Many problems in physics, engineering, and chemistry fall in a general class of equations of the form. d dx. d dx Math 53 Notes on turm-liouville equations Many problems in physics, engineering, an chemistry fall in a general class of equations of the form w(x)p(x) u ] + (q(x) λ) u = w(x) on an interval a, b], plus

More information

Math 300 Winter 2011 Advanced Boundary Value Problems I. Bessel s Equation and Bessel Functions

Math 300 Winter 2011 Advanced Boundary Value Problems I. Bessel s Equation and Bessel Functions Math 3 Winter 2 Avance Bounary Value Problems I Bessel s Equation an Bessel Functions Department of Mathematical an Statistical Sciences University of Alberta Bessel s Equation an Bessel Functions We use

More information

A Sketch of Menshikov s Theorem

A Sketch of Menshikov s Theorem A Sketch of Menshikov s Theorem Thomas Bao March 14, 2010 Abstract Let Λ be an infinite, locally finite oriente multi-graph with C Λ finite an strongly connecte, an let p

More information

The Sokhotski-Plemelj Formula

The Sokhotski-Plemelj Formula hysics 25 Winter 208 The Sokhotski-lemelj Formula. The Sokhotski-lemelj formula The Sokhotski-lemelj formula is a relation between the following generalize functions (also calle istributions), ±iǫ = iπ(),

More information

Self-normalized Martingale Tail Inequality

Self-normalized Martingale Tail Inequality Online-to-Confience-Set Conversions an Application to Sparse Stochastic Banits A Self-normalize Martingale Tail Inequality The self-normalize martingale tail inequality that we present here is the scalar-value

More information

Separation of Variables

Separation of Variables Physics 342 Lecture 1 Separation of Variables Lecture 1 Physics 342 Quantum Mechanics I Monay, January 25th, 2010 There are three basic mathematical tools we nee, an then we can begin working on the physical

More information

An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback

An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback Journal of Machine Learning Research 8 07) - Submitte /6; Publishe 5/7 An Optimal Algorithm for Banit an Zero-Orer Convex Optimization with wo-point Feeback Oha Shamir Department of Computer Science an

More information

6 General properties of an autonomous system of two first order ODE

6 General properties of an autonomous system of two first order ODE 6 General properties of an autonomous system of two first orer ODE Here we embark on stuying the autonomous system of two first orer ifferential equations of the form ẋ 1 = f 1 (, x 2 ), ẋ 2 = f 2 (, x

More information

Jointly continuous distributions and the multivariate Normal

Jointly continuous distributions and the multivariate Normal Jointly continuous istributions an the multivariate Normal Márton alázs an álint Tóth October 3, 04 This little write-up is part of important founations of probability that were left out of the unit Probability

More information

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert

More information

13.1: Vector-Valued Functions and Motion in Space, 14.1: Functions of Several Variables, and 14.2: Limits and Continuity in Higher Dimensions

13.1: Vector-Valued Functions and Motion in Space, 14.1: Functions of Several Variables, and 14.2: Limits and Continuity in Higher Dimensions 13.1: Vector-Value Functions an Motion in Space, 14.1: Functions of Several Variables, an 14.2: Limits an Continuity in Higher Dimensions TA: Sam Fleischer November 3 Section 13.1: Vector-Value Functions

More information

Math 1B, lecture 8: Integration by parts

Math 1B, lecture 8: Integration by parts Math B, lecture 8: Integration by parts Nathan Pflueger 23 September 2 Introuction Integration by parts, similarly to integration by substitution, reverses a well-known technique of ifferentiation an explores

More information

The Representor Theorem, Kernels, and Hilbert Spaces

The Representor Theorem, Kernels, and Hilbert Spaces The Representor Theorem, Kernels, and Hilbert Spaces We will now work with infinite dimensional feature vectors and parameter vectors. The space l is defined to be the set of sequences f 1, f, f 3,...

More information

IMPLICIT DIFFERENTIATION

IMPLICIT DIFFERENTIATION IMPLICIT DIFFERENTIATION CALCULUS 3 INU0115/515 (MATHS 2) Dr Arian Jannetta MIMA CMath FRAS Implicit Differentiation 1/ 11 Arian Jannetta Explicit an implicit functions Explicit functions An explicit function

More information

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k

Robust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine

More information

SINGULAR PERTURBATION AND STATIONARY SOLUTIONS OF PARABOLIC EQUATIONS IN GAUSS-SOBOLEV SPACES

SINGULAR PERTURBATION AND STATIONARY SOLUTIONS OF PARABOLIC EQUATIONS IN GAUSS-SOBOLEV SPACES Communications on Stochastic Analysis Vol. 2, No. 2 (28) 289-36 Serials Publications www.serialspublications.com SINGULAR PERTURBATION AND STATIONARY SOLUTIONS OF PARABOLIC EQUATIONS IN GAUSS-SOBOLEV SPACES

More information

MATH 120 Theorem List

MATH 120 Theorem List December 11, 2016 Disclaimer: Many of the theorems covere in class were not name, so most of the names on this sheet are not efinitive (they are escriptive names rather than given names). Lecture Theorems

More information

UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics

UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics UC Berkeley Department of Electrical Engineering an Computer Science Department of Statistics EECS 8B / STAT 4B Avance Topics in Statistical Learning Theory Solutions 3 Spring 9 Solution 3. For parti,

More information

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations

Lecture XII. where Φ is called the potential function. Let us introduce spherical coordinates defined through the relations Lecture XII Abstract We introuce the Laplace equation in spherical coorinates an apply the metho of separation of variables to solve it. This will generate three linear orinary secon orer ifferential equations:

More information

1 Lecture 13: The derivative as a function.

1 Lecture 13: The derivative as a function. 1 Lecture 13: Te erivative as a function. 1.1 Outline Definition of te erivative as a function. efinitions of ifferentiability. Power rule, erivative te exponential function Derivative of a sum an a multiple

More information

The Sokhotski-Plemelj Formula

The Sokhotski-Plemelj Formula hysics 24 Winter 207 The Sokhotski-lemelj Formula. The Sokhotski-lemelj formula The Sokhotski-lemelj formula is a relation between the following generalize functions (also calle istributions), ±iǫ = iπ(),

More information

A Spectral Method for the Biharmonic Equation

A Spectral Method for the Biharmonic Equation A Spectral Metho for the Biharmonic Equation Kenall Atkinson, Davi Chien, an Olaf Hansen Abstract Let Ω be an open, simply connecte, an boune region in Ê,, with a smooth bounary Ω that is homeomorphic

More information

Lecture 6 : Dimensionality Reduction

Lecture 6 : Dimensionality Reduction CPS290: Algorithmic Founations of Data Science February 3, 207 Lecture 6 : Dimensionality Reuction Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will consier the roblem of maing

More information

II. First variation of functionals

II. First variation of functionals II. First variation of functionals The erivative of a function being zero is a necessary conition for the etremum of that function in orinary calculus. Let us now tackle the question of the equivalent

More information

Partial Differential Equations

Partial Differential Equations Chapter Partial Differential Equations. Introuction Have solve orinary ifferential equations, i.e. ones where there is one inepenent an one epenent variable. Only orinary ifferentiation is therefore involve.

More information

Students need encouragement. So if a student gets an answer right, tell them it was a lucky guess. That way, they develop a good, lucky feeling.

Students need encouragement. So if a student gets an answer right, tell them it was a lucky guess. That way, they develop a good, lucky feeling. Chapter 8 Analytic Functions Stuents nee encouragement. So if a stuent gets an answer right, tell them it was a lucky guess. That way, they evelop a goo, lucky feeling. 1 8.1 Complex Derivatives -Jack

More information

A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks

A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks A PAC-Bayesian Approach to Spectrally-Normalize Margin Bouns for Neural Networks Behnam Neyshabur, Srinah Bhojanapalli, Davi McAllester, Nathan Srebro Toyota Technological Institute at Chicago {bneyshabur,

More information

Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines

Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the

More information

1 Lecture 20: Implicit differentiation

1 Lecture 20: Implicit differentiation Lecture 20: Implicit ifferentiation. Outline The technique of implicit ifferentiation Tangent lines to a circle Derivatives of inverse functions by implicit ifferentiation Examples.2 Implicit ifferentiation

More information

Grothendieck s Inequality

Grothendieck s Inequality Grothendieck s Inequality Leqi Zhu 1 Introduction Let A = (A ij ) R m n be an m n matrix. Then A defines a linear operator between normed spaces (R m, p ) and (R n, q ), for 1 p, q. The (p q)-norm of A

More information

Analysis Preliminary Exam Workshop: Hilbert Spaces

Analysis Preliminary Exam Workshop: Hilbert Spaces Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H

More information

Generalized Tractability for Multivariate Problems

Generalized Tractability for Multivariate Problems Generalize Tractability for Multivariate Problems Part II: Linear Tensor Prouct Problems, Linear Information, an Unrestricte Tractability Michael Gnewuch Department of Computer Science, University of Kiel,

More information

SYMPLECTIC GEOMETRY: LECTURE 3

SYMPLECTIC GEOMETRY: LECTURE 3 SYMPLECTIC GEOMETRY: LECTURE 3 LIAT KESSLER 1. Local forms Vector fiels an the Lie erivative. A vector fiel on a manifol M is a smooth assignment of a vector tangent to M at each point. We think of M as

More information

Kernel Methods. Jean-Philippe Vert Last update: Jan Jean-Philippe Vert (Mines ParisTech) 1 / 444

Kernel Methods. Jean-Philippe Vert Last update: Jan Jean-Philippe Vert (Mines ParisTech) 1 / 444 Kernel Methods Jean-Philippe Vert Jean-Philippe.Vert@mines.org Last update: Jan 2015 Jean-Philippe Vert (Mines ParisTech) 1 / 444 What we know how to solve Jean-Philippe Vert (Mines ParisTech) 2 / 444

More information

AN INTRODUCTION TO THE THEORY OF REPRODUCING KERNEL HILBERT SPACES

AN INTRODUCTION TO THE THEORY OF REPRODUCING KERNEL HILBERT SPACES AN INTRODUCTION TO THE THEORY OF REPRODUCING KERNEL HILBERT SPACES VERN I PAULSEN Abstract These notes give an introduction to the theory of reproducing kernel Hilbert spaces and their multipliers We begin

More information

Basic Thermoelasticity

Basic Thermoelasticity Basic hermoelasticity Biswajit Banerjee November 15, 2006 Contents 1 Governing Equations 1 1.1 Balance Laws.............................................. 2 1.2 he Clausius-Duhem Inequality....................................

More information

Monte Carlo Methods with Reduced Error

Monte Carlo Methods with Reduced Error Monte Carlo Methos with Reuce Error As has been shown, the probable error in Monte Carlo algorithms when no information about the smoothness of the function is use is Dξ r N = c N. It is important for

More information

Kernel Methods. Machine Learning A W VO

Kernel Methods. Machine Learning A W VO Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

More information

EIGEN-ANALYSIS OF KERNEL OPERATORS FOR NONLINEAR DIMENSION REDUCTION AND DISCRIMINATION

EIGEN-ANALYSIS OF KERNEL OPERATORS FOR NONLINEAR DIMENSION REDUCTION AND DISCRIMINATION EIGEN-ANALYSIS OF KERNEL OPERATORS FOR NONLINEAR DIMENSION REDUCTION AND DISCRIMINATION DISSERTATION Presente in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Grauate

More information

Some Examples. Uniform motion. Poisson processes on the real line

Some Examples. Uniform motion. Poisson processes on the real line Some Examples Our immeiate goal is to see some examples of Lévy processes, an/or infinitely-ivisible laws on. Uniform motion Choose an fix a nonranom an efine X := for all (1) Then, {X } is a [nonranom]

More information

model considered before, but the prey obey logistic growth in the absence of predators. In

model considered before, but the prey obey logistic growth in the absence of predators. In 5.2. First Orer Systems of Differential Equations. Phase Portraits an Linearity. Section Objective(s): Moifie Preator-Prey Moel. Graphical Representations of Solutions. Phase Portraits. Vector Fiels an

More information

1 Heisenberg Representation

1 Heisenberg Representation 1 Heisenberg Representation What we have been ealing with so far is calle the Schröinger representation. In this representation, operators are constants an all the time epenence is carrie by the states.

More information

An extension of Alexandrov s theorem on second derivatives of convex functions

An extension of Alexandrov s theorem on second derivatives of convex functions Avances in Mathematics 228 (211 2258 2267 www.elsevier.com/locate/aim An extension of Alexanrov s theorem on secon erivatives of convex functions Joseph H.G. Fu 1 Department of Mathematics, University

More information

Second order differentiation formula on RCD(K, N) spaces

Second order differentiation formula on RCD(K, N) spaces Secon orer ifferentiation formula on RCD(K, N) spaces Nicola Gigli Luca Tamanini February 8, 018 Abstract We prove the secon orer ifferentiation formula along geoesics in finite-imensional RCD(K, N) spaces.

More information

ORDINARY DIFFERENTIAL EQUATIONS AND SINGULAR INTEGRALS. Gianluca Crippa

ORDINARY DIFFERENTIAL EQUATIONS AND SINGULAR INTEGRALS. Gianluca Crippa Manuscript submitte to AIMS Journals Volume X, Number 0X, XX 200X Website: http://aimsciences.org pp. X XX ORDINARY DIFFERENTIAL EQUATIONS AND SINGULAR INTEGRALS Gianluca Crippa Departement Mathematik

More information

Multidimensional Fast Gauss Transforms by Chebyshev Expansions

Multidimensional Fast Gauss Transforms by Chebyshev Expansions Multiimensional Fast Gauss Transforms by Chebyshev Expansions Johannes Tausch an Alexaner Weckiewicz May 9, 009 Abstract A new version of the fast Gauss transform FGT) is introuce which is base on a truncate

More information

Differentiation Rules Derivatives of Polynomials and Exponential Functions

Differentiation Rules Derivatives of Polynomials and Exponential Functions Derivatives of Polynomials an Exponential Functions Differentiation Rules Derivatives of Polynomials an Exponential Functions Let s start with the simplest of all functions, the constant function f(x)

More information

REAL ANALYSIS I HOMEWORK 5

REAL ANALYSIS I HOMEWORK 5 REAL ANALYSIS I HOMEWORK 5 CİHAN BAHRAN The questions are from Stein an Shakarchi s text, Chapter 3. 1. Suppose ϕ is an integrable function on R with R ϕ(x)x = 1. Let K δ(x) = δ ϕ(x/δ), δ > 0. (a) Prove

More information

Problem set 2: Solutions Math 207B, Winter 2016

Problem set 2: Solutions Math 207B, Winter 2016 Problem set : Solutions Math 07B, Winter 016 1. A particle of mass m with position x(t) at time t has potential energy V ( x) an kinetic energy T = 1 m x t. The action of the particle over times t t 1

More information

Linear First-Order Equations

Linear First-Order Equations 5 Linear First-Orer Equations Linear first-orer ifferential equations make up another important class of ifferential equations that commonly arise in applications an are relatively easy to solve (in theory)

More information

Proof of Proposition 1

Proof of Proposition 1 A Proofs of Propositions,2,. Before e look at the MMD calculations in various cases, e prove the folloing useful characterization of MMD for translation invariant kernels like the Gaussian an Laplace kernels.

More information

1 Definition of the derivative

1 Definition of the derivative Math 20A - Calculus by Jon Rogawski Chapter 3 - Differentiation Prepare by Jason Gais Definition of the erivative Remark.. Recall our iscussion of tangent lines from way back. We now rephrase this in terms

More information

Final Exam Study Guide and Practice Problems Solutions

Final Exam Study Guide and Practice Problems Solutions Final Exam Stuy Guie an Practice Problems Solutions Note: These problems are just some of the types of problems that might appear on the exam. However, to fully prepare for the exam, in aition to making

More information

Lagrangian and Hamiltonian Mechanics

Lagrangian and Hamiltonian Mechanics Lagrangian an Hamiltonian Mechanics.G. Simpson, Ph.. epartment of Physical Sciences an Engineering Prince George s Community College ecember 5, 007 Introuction In this course we have been stuying classical

More information

4.2 First Differentiation Rules; Leibniz Notation

4.2 First Differentiation Rules; Leibniz Notation .. FIRST DIFFERENTIATION RULES; LEIBNIZ NOTATION 307. First Differentiation Rules; Leibniz Notation In this section we erive rules which let us quickly compute the erivative function f (x) for any polynomial

More information

1 Parametric Bessel Equation and Bessel-Fourier Series

1 Parametric Bessel Equation and Bessel-Fourier Series 1 Parametric Bessel Equation an Bessel-Fourier Series Recall the parametric Bessel equation of orer n: x 2 y + xy + (a 2 x 2 n 2 )y = (1.1) The general solution is given by y = J n (ax) +Y n (ax). If we

More information

The Ehrenfest Theorems

The Ehrenfest Theorems The Ehrenfest Theorems Robert Gilmore Classical Preliminaries A classical system with n egrees of freeom is escribe by n secon orer orinary ifferential equations on the configuration space (n inepenent

More information

6 Wave equation in spherical polar coordinates

6 Wave equation in spherical polar coordinates 6 Wave equation in spherical polar coorinates We now look at solving problems involving the Laplacian in spherical polar coorinates. The angular epenence of the solutions will be escribe by spherical harmonics.

More information

Schrödinger s equation.

Schrödinger s equation. Physics 342 Lecture 5 Schröinger s Equation Lecture 5 Physics 342 Quantum Mechanics I Wenesay, February 3r, 2010 Toay we iscuss Schröinger s equation an show that it supports the basic interpretation of

More information