Kernel PCA and De-Noising in Feature Spaces


Kernel PCA and De-Noising in Feature Spaces

Sebastian Mika, Bernhard Schölkopf, Alex Smola, Klaus-Robert Müller, Matthias Scholz, Gunnar Rätsch
GMD FIRST, Rudower Chaussee 5, Berlin, Germany
{mika, bs, smola, klaus, scholz,

Abstract

Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.

1 PCA and Feature Spaces

Principal Component Analysis (PCA) (e.g. [3]) is an orthogonal basis transformation. The new basis is found by diagonalizing the centered covariance matrix of a data set {x_k ∈ R^N | k = 1,...,ℓ}, defined by C = ⟨(x_i - ⟨x_k⟩)(x_i - ⟨x_k⟩)^T⟩. The coordinates in the Eigenvector basis are called principal components. The size of an Eigenvalue λ corresponding to an Eigenvector v of C equals the amount of variance in the direction of v. Furthermore, the directions of the first n Eigenvectors corresponding to the biggest n Eigenvalues cover as much variance as possible by n orthogonal directions. In many applications they contain the most interesting information: for instance, in data compression, where we project onto the directions with biggest variance to retain as much information as possible, or in de-noising, where we deliberately drop directions with small variance. Clearly, one cannot assert that linear PCA will always detect all structure in a given data set. By the use of suitable nonlinear features, one can extract more information.
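As a concrete reference for the linear case, PCA by diagonalizing the centered covariance matrix can be sketched in a few lines of NumPy (function names are ours, not from the paper):

```python
import numpy as np

def linear_pca(X, n_components):
    """Linear PCA via eigendecomposition of the centered covariance matrix.

    X: (l, N) array of l points in R^N. Returns (eigvals, eigvecs, mean),
    with eigenvectors as columns, sorted by decreasing eigenvalue.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc.T @ Xc / len(X)                # centered covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # ascending order for symmetric C
    order = np.argsort(eigvals)[::-1]     # biggest variance first
    return eigvals[order][:n_components], eigvecs[:, order][:, :n_components], mean

def reconstruct(x, eigvecs, mean, n):
    """Project x onto the first n principal directions and map back."""
    V = eigvecs[:, :n]
    return mean + V @ (V.T @ (x - mean))
```

Using all N components, this is just a basis change and reconstruction is exact; de-noising corresponds to choosing n < N and dropping the small-variance directions.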
Kernel PCA is very well suited to extract interesting nonlinear structures in the data [9]. The purpose of this work is therefore (i) to consider nonlinear de-noising based on kernel PCA and (ii) to clarify the connection between feature space expansions and meaningful patterns in input space. Kernel PCA first maps the data into some feature space F via a (usually nonlinear) function Φ and then performs linear PCA on the mapped data. As the feature space F might be very high-dimensional (e.g. when mapping into the space of all possible d-th order monomials of input space), kernel PCA employs Mercer kernels instead of carrying

out the mapping Φ explicitly. A Mercer kernel is a function k(x, y) which for all data sets {x_i} gives rise to a positive matrix K_ij = k(x_i, x_j) [6]. One can show that using k instead of a dot product in input space corresponds to mapping the data with some Φ to a feature space F [1], i.e. k(x, y) = (Φ(x) · Φ(y)). Kernels that have proven useful include Gaussian kernels k(x, y) = exp(-||x - y||^2/c) and polynomial kernels k(x, y) = (x · y)^d. Clearly, all algorithms that can be formulated in terms of dot products, e.g. Support Vector Machines [1], can be carried out in some feature space F without mapping the data explicitly. All these algorithms construct their solutions as expansions in the potentially infinite-dimensional feature space.

The paper is organized as follows: in the next section, we briefly describe the kernel PCA algorithm. In section 3, we present an algorithm for finding approximate pre-images of expansions in feature space. Experimental results on toy and real-world data are given in section 4, followed by a discussion of our findings (section 5).

2 Kernel PCA and Reconstruction

To perform PCA in feature space, we need to find Eigenvalues λ > 0 and Eigenvectors V ∈ F\{0} satisfying λV = CV with C = ⟨Φ(x_k)Φ(x_k)^T⟩.¹ Substituting C into the Eigenvector equation, we note that all solutions V must lie in the span of Φ-images of the training data. This implies that we can consider the equivalent system

    λ(Φ(x_k) · V) = (Φ(x_k) · CV)   for all k = 1,...,ℓ   (1)

and that there exist coefficients α_1,...,α_ℓ such that

    V = Σ_{i=1}^ℓ α_i Φ(x_i)   (2)

Substituting C and (2) into (1), and defining an ℓ × ℓ matrix K by K_ij := (Φ(x_i) · Φ(x_j)) = k(x_i, x_j), we arrive at a problem which is cast in terms of dot products: solve

    ℓλα = Kα   (3)

where α = (α_1,...,α_ℓ)^T (for details see [7]). Normalizing the solutions V^k, i.e. (V^k · V^k) = 1, translates into λ_k(α^k · α^k) = 1.
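A sketch of this dot-product formulation for a Gaussian kernel, assuming for simplicity (as the paper does) that the mapped data are already centered in F; all names are hypothetical:

```python
import numpy as np

def gaussian_kernel_matrix(X, c):
    """K_ij = k(x_i, x_j) = exp(-||x_i - x_j||^2 / c)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / c)

def kernel_pca(K):
    """Diagonalize K and rescale the coefficient vectors alpha^k so that
    the feature-space Eigenvectors V^k = sum_i alpha_i^k Phi(x_i) have
    unit length, i.e. (V^k . V^k) = alpha^T K alpha = 1."""
    l = len(K)
    eigvals, alphas = np.linalg.eigh(K)            # ascending
    eigvals, alphas = eigvals[::-1], alphas[:, ::-1]
    lambdas = eigvals / l                          # eigenvalue of K is l*lambda
    pos = lambdas > 1e-12                          # keep nonzero-variance directions
    lambdas, alphas = lambdas[pos], alphas[:, pos]
    alphas = alphas / np.sqrt(l * lambdas)         # alpha^T K alpha = 1
    return lambdas, alphas

def project(x, X, alphas, c, n):
    """beta_k = sum_i alpha_i^k k(x, x_i) for the first n components."""
    kx = np.exp(-((X - x) ** 2).sum(-1) / c)
    return alphas[:, :n].T @ kx
```

For a training point x_i, the projections can equivalently be read off as the i-th row of K @ alphas, which is a convenient sanity check.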
To extract nonlinear principal components for the Φ-image of a test point x we compute the projection onto the k-th component by β_k := (V^k · Φ(x)) = Σ_{i=1}^ℓ α_i^k k(x, x_i). For feature extraction, we thus have to evaluate ℓ kernel functions instead of a dot product in F, which is expensive if F is high-dimensional (or, as for Gaussian kernels, infinite-dimensional). To reconstruct the Φ-image of a vector x from its projections β_k onto the first n principal components in F (assuming that the Eigenvectors are ordered by decreasing Eigenvalue size), we define a projection operator P_n by

    P_n Φ(x) = Σ_{k=1}^n β_k V^k   (4)

If n is large enough to take into account all directions belonging to Eigenvectors with nonzero Eigenvalue, we have P_n Φ(x_i) = Φ(x_i). Otherwise (kernel) PCA still satisfies (i) that the overall squared reconstruction error Σ_i ||P_n Φ(x_i) - Φ(x_i)||^2 is minimal and (ii) the retained variance is maximal among all projections onto orthogonal directions in F. In common applications, however, we are interested in a reconstruction in input space rather than in F. The present work attempts to achieve this by computing a vector z satisfying Φ(z) = P_n Φ(x). The hope is that for the kernel used, such a z will be a good approximation of x in input space. However, (i) such a z will not always exist and (ii) if it exists,

¹ For simplicity, we assume that the mapped data are centered in F. Otherwise, we have to go through the same algebra using Φ̃(x) := Φ(x) - ⟨Φ(x_i)⟩.

it need not be unique.² As an example for (i), we consider a possible representation of F. One can show [7] that Φ can be thought of as a map Φ(x) = k(x, ·) into a Hilbert space H_k of functions Σ_i a_i k(x_i, ·) with a dot product satisfying (k(x, ·) · k(y, ·)) = k(x, y). Then H_k is called a reproducing kernel Hilbert space (e.g. [6]). Now, for a Gaussian kernel, H_k contains all linear superpositions of Gaussian bumps on R^N (plus limit points), whereas by definition of Φ only single bumps k(x, ·) have pre-images under Φ.

When the vector P_n Φ(x) has no pre-image z we try to approximate it by minimizing

    ρ(z) = ||Φ(z) - P_n Φ(x)||^2   (5)

This is a special case of the reduced set method [2]. Replacing terms independent of z by Ω, we obtain

    ρ(z) = ||Φ(z)||^2 - 2(Φ(z) · P_n Φ(x)) + Ω   (6)

Substituting (4) and (2) into (6), we arrive at an expression which is written in terms of dot products. Consequently, we can introduce a kernel to obtain a formula for ρ (and thus ∇_z ρ) which does not rely on carrying out Φ explicitly:

    ρ(z) = k(z, z) - 2 Σ_{k=1}^n β_k Σ_{i=1}^ℓ α_i^k k(z, x_i) + Ω   (7)

3 Pre-Images for Gaussian Kernels

To optimize (7) we employed standard gradient descent methods. If we restrict our attention to kernels of the form k(x, y) = k(||x - y||^2) (and thus satisfying k(x, x) ≡ const. for all x), an optimal z can be determined as follows (cf. [8]): we deduce from (6) that we have to maximize

    ρ̃(z) = (Φ(z) · P_n Φ(x)) + Ω' = Σ_{i=1}^ℓ γ_i k(z, x_i) + Ω'   (8)

where we set γ_i = Σ_{k=1}^n β_k α_i^k (for some Ω' independent of z). For an extremum, the gradient with respect to z has to vanish: ∇_z ρ̃(z) = Σ_{i=1}^ℓ γ_i k'(||z - x_i||^2)(z - x_i) = 0. This leads to a necessary condition for the extremum: z = Σ_i δ_i x_i / Σ_j δ_j, with δ_i = γ_i k'(||z - x_i||^2). For a Gaussian kernel k(x, y) = exp(-||x - y||^2/c) we get

    z = Σ_{i=1}^ℓ γ_i exp(-||z - x_i||^2/c) x_i / Σ_{i=1}^ℓ γ_i exp(-||z - x_i||^2/c)   (9)

We note that the denominator equals (Φ(z) · P_n Φ(x)) (cf. (8)). Making the assumption that P_n Φ(x) ≠ 0, we have (Φ(x) · P_n Φ(x)) = (P_n Φ(x) · P_n Φ(x)) > 0.
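The necessary condition above can be read as a fixed-point equation and iterated directly; a minimal sketch for the Gaussian kernel, with the weights γ_i assumed precomputed and the function name our own:

```python
import numpy as np

def preimage_fixed_point(X, gamma, z0, c, n_iter=100, tol=1e-10):
    """Iterate z <- sum_i gamma_i exp(-||z - x_i||^2/c) x_i / (same sum)
    to approximate the pre-image of P_n Phi(x) for a Gaussian kernel.

    X: (l, N) training data; gamma: (l,) weights gamma_i = sum_k beta_k alpha_i^k;
    z0: starting value (for de-noising: the noisy point itself).
    """
    z = z0.copy()
    for _ in range(n_iter):
        w = gamma * np.exp(-((X - z) ** 2).sum(-1) / c)
        denom = w.sum()
        if abs(denom) < 1e-12:
            # numerical instability: the paper suggests restarting elsewhere
            raise RuntimeError("denominator near zero; restart with another z0")
        z_new = (w[:, None] * X).sum(0) / denom
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

Every iterate is a convex-like combination of the training points, so any fixed point is a linear combination of the x_i, as noted in the text.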
As k is smooth, we conclude that there exists a neighborhood of the extremum of (8) in which the denominator of (9) is nonzero. Thus we can devise an iteration scheme for z by

    z_{t+1} = Σ_{i=1}^ℓ γ_i exp(-||z_t - x_i||^2/c) x_i / Σ_{i=1}^ℓ γ_i exp(-||z_t - x_i||^2/c)   (10)

Numerical instabilities related to (Φ(z) · P_n Φ(x)) being small can be dealt with by restarting the iteration with a different starting value. Furthermore we note that any fixed point of (10) will be a linear combination of the kernel PCA training data x_i. If we regard (10) in the context of clustering we see that it resembles an iteration step for the estimation of

² If the kernel allows reconstruction of the dot product in input space, and under the assumption that a pre-image exists, it is possible to construct it explicitly (cf. [7]). But clearly, these conditions do not hold true in general.

the center of a single Gaussian cluster. The weights or 'probabilities' γ_i reflect the (anti-)correlation between the amount of Φ(x) in Eigenvector direction V^k and the contribution of Φ(x_i) to this Eigenvector. So the 'cluster center' z is drawn towards training patterns with positive γ_i and pushed away from those with negative γ_i, i.e. for a fixed point z_∞ the influence of training patterns with smaller distance to x will tend to be bigger.

4 Experiments

To test the feasibility of the proposed algorithm, we ran several toy and real-world experiments. They were performed using (10) and Gaussian kernels of the form k(x, y) = exp(-||x - y||^2/(nc)) where n equals the dimension of input space. We mainly focused on the application of de-noising, which differs from reconstruction by the fact that we are allowed to make use of the original test data as starting points in the iteration.

Toy examples: In the first experiment (table 1), we generated a data set from eleven Gaussians in R^10 with zero mean and variance σ^2 in each component, by selecting from each source 100 points as a training set and 33 points for a test set (centers of the Gaussians randomly chosen in [-1, 1]^10). Then we applied kernel PCA to the training set and computed the projections β_k of the points in the test set. With these, we carried out de-noising, yielding an approximate pre-image in R^10 for each test point. This procedure was repeated for different numbers of components in reconstruction, and for different values of σ. For the kernel, we used c = 2σ^2. We compared the results provided by our algorithm to those of linear PCA via the mean squared distance of all de-noised test points to their corresponding center. Table 1 shows the ratio of these values; here and below, ratios larger than one indicate that kernel PCA performed better than linear PCA. For almost every choice of n and σ, kernel PCA did better.
Note that using all components, linear PCA is just a basis transformation and hence cannot de-noise. The extreme superiority of kernel PCA for small σ is due to the fact that all test points are in this case located close to the eleven spots in input space, and linear PCA has to cover them with less than ten directions. Kernel PCA moves each point to the correct source even when using only a small number of components.

Table 1: De-noising Gaussians in R^10 (see text). Performance ratios larger than one indicate how much better kernel PCA did, compared to linear PCA, for different choices of the Gaussians' std. dev. σ, and different numbers of components used in reconstruction.

To get some intuitive understanding in a low-dimensional case, figure 1 depicts the results of de-noising a half circle and a square in the plane, using kernel PCA, a nonlinear autoencoder, principal curves, and linear PCA. The principal curves algorithm [4] iteratively estimates a curve capturing the structure of the data. The data are projected to the closest point on a curve which the algorithm tries to construct such that each point is the average of all data points projecting onto it. It can be shown that the only straight lines satisfying the latter are principal components, so principal curves are a generalization of the latter. The algorithm uses a smoothing parameter which is annealed during the iteration. In the nonlinear autoencoder algorithm, a 'bottleneck' 5-layer network is trained to reproduce the input values as outputs (i.e. it is used in autoassociative mode). The hidden unit activations in the third layer form a lower-dimensional representation of the data, closely related to

PCA (see for instance [3]). Training is done by conjugate gradient descent. In all algorithms, parameter values were selected such that the best possible de-noising result was obtained. The figure shows that on the closed square problem, kernel PCA does (subjectively) best, followed by principal curves and the nonlinear autoencoder; linear PCA fails completely. However, note that all algorithms except for kernel PCA actually provide an explicit one-dimensional parameterization of the data, whereas kernel PCA only provides us with a means of mapping points to their de-noised versions (in this case, we used four kernel PCA features, and hence obtain a four-dimensional parameterization).

Figure 1: De-noising in 2-d (see text), for kernel PCA, a nonlinear autoencoder, principal curves, and linear PCA. Depicted are the data set (small points) and its de-noised version (big points, joining up to solid lines). For linear PCA, we used one component for reconstruction, as using two components, reconstruction is perfect and thus does not de-noise. Note that all algorithms except for our approach have problems in capturing the circular structure in the bottom example.

USPS example: To test our approach on real-world data, we also applied the algorithm to the USPS database of 256-dimensional handwritten digits. For each of the ten digits, we randomly chose 300 examples from the training set and 50 examples from the test set. We used (10) and Gaussian kernels with c = 0.50, equaling twice the average of the data's variance in each dimension. In figure 2, we give two possible depictions of the Eigenvectors found by kernel PCA, compared to those found by linear PCA for the USPS set.

Figure 2: Visualization of Eigenvectors (see text). Depicted are the 2^0,...,2^8-th Eigenvector (from left to right). First row: linear PCA, second and third row: different visualizations for kernel PCA.
The second row shows the approximate pre-images of the Eigenvectors V^k, k = 2^0,...,2^8, found by our algorithm. In the third row each image is computed as follows: pixel i is the projection of the Φ-image of the i-th canonical basis vector in input space onto the corresponding Eigenvector in feature space (upper left (Φ(e_1) · V^k), lower right (Φ(e_256) · V^k)). In the linear case, both methods would simply yield the Eigenvectors of linear PCA depicted in the first row; in this sense, they may be considered as generalized Eigenvectors in input space. We see that the first Eigenvectors are almost identical (except for signs). But we also see that Eigenvectors in linear PCA start to concentrate on high-frequency structures already at smaller Eigenvalue size. To understand this, note that in linear PCA we only have a maximum number of 256 Eigenvectors, contrary to kernel PCA which gives us the number of training examples (here 3000) possible Eigenvectors. This

Figure 3: Reconstruction of USPS data. Depicted are the reconstructions of the first digit in the test set (original in last column) from the first n = 1,...,20 components for linear PCA (first row) and kernel PCA (second row). The numbers in between denote the fraction of squared distance measured towards the original example. For a small number of components both algorithms do nearly the same. For more components, we see that linear PCA yields a result resembling the original digit, whereas kernel PCA gives a result resembling a more prototypical 'three'.

also explains some of the results we found when working with the USPS set. In these experiments, linear and kernel PCA were trained with the original data. Then we added (i) additive Gaussian noise with zero mean and standard deviation σ = 0.5 or (ii) 'speckle' noise with probability p = 0.4 (i.e. each pixel flips to black or white with probability p/2) to the test set. For the noisy test sets we computed the projections onto the first n linear and nonlinear components, and carried out reconstruction for each case. The results were compared by taking the mean squared distance of each reconstructed digit of the noisy test set to its original counterpart. As a third experiment we did the same for the original test set (hence doing reconstruction, not de-noising). In the latter case, where the task is to reconstruct a given example as exactly as possible, linear PCA did better, at least when using more than about 10 components (figure 3). This is due to the fact that linear PCA starts earlier to account for fine structures, but at the same time it starts to reconstruct noise, as we will see in figure 4. Kernel PCA, on the other hand, yields recognizable results even for a small number of components, representing a prototype of the desired example. This is one reason why our approach did better than linear PCA for the de-noising example (figure 4).
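The two noise models can be stated compactly; a sketch under the assumption that pixel values lie in [-1, 1] (the function names and pixel range are our choices, not from the paper):

```python
import numpy as np

def add_gaussian_noise(x, sigma=0.5, rng=None):
    """(i) Additive Gaussian noise with zero mean and std. dev. sigma."""
    if rng is None:
        rng = np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def add_speckle_noise(x, p=0.4, rng=None):
    """(ii) 'Speckle' noise: each pixel flips to black or white with
    probability p/2 each (so with probability p in total)."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(x.shape)
    y = x.copy()
    y[u < p / 2] = -1.0              # flip to black (assumed pixel value -1)
    y[(u >= p / 2) & (u < p)] = 1.0  # flip to white (assumed pixel value +1)
    return y
```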
Taking the mean squared distance measured over the whole test set for the optimal number of components in linear and kernel PCA, our approach did better by a factor of 1.6 for the Gaussian noise, and 1.2 times better for the 'speckle' noise (the optimal numbers of components were 32 in linear PCA, and 512 and 256 in kernel PCA, respectively). Taking identical numbers of components in both algorithms, kernel PCA becomes up to 8 (!) times better than linear PCA. However, note that kernel PCA comes with a higher computational complexity.

5 Discussion

We have studied the problem of finding approximate pre-images of vectors in feature space, and proposed an algorithm to solve it. The algorithm can be applied to both reconstruction and de-noising. In the former case, results were comparable to linear PCA, while in the latter case, we obtained significantly better results. Our interpretation of this finding is as follows. Linear PCA can extract at most N components, where N is the dimensionality of the data. Being a basis transform, all N components together fully describe the data. If the data are noisy, this implies that a certain fraction of the components will be devoted to the extraction of noise. Kernel PCA, on the other hand, allows the extraction of up to ℓ features, where ℓ is the number of training examples. Accordingly, kernel PCA can provide a larger number of features carrying information about the structure in the data (in our experiments, we had ℓ > N). In addition, if the structure to be extracted is nonlinear, then linear PCA must necessarily fail, as we have illustrated with toy examples. These methods, along with depictions of pre-images of vectors in feature space, provide some understanding of kernel methods which have recently attracted increasing attention. Open questions include (i) what kind of results kernels other than Gaussians will provide,

Figure 4: De-Noising of USPS data (see text). The left half shows, top: the first occurrence of each digit in the test set; second row: the upper digit with additive Gaussian noise (σ = 0.5); following five rows: the reconstruction for linear PCA using n = 1, 4, 16, 64, 256 components; and, last five rows: the results of our approach using the same number of components. In the right half we show the same but for 'speckle' noise with probability p = 0.4.

(ii) whether there is a more efficient way to solve either (6) or (8), and (iii) the comparison (and connection) to alternative nonlinear de-noising methods (cf. [5]).

References

[1] B. Boser, I. Guyon, and V.N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proc. COLT, Pittsburgh. ACM Press.
[2] C.J.C. Burges. Simplified support vector decision rules. In L. Saitta, editor, Proceedings, 13th ICML, pages 71-77, San Mateo, CA.
[3] K.I. Diamantaras and S.Y. Kung. Principal Component Neural Networks. Wiley, New York, 1996.
[4] T. Hastie and W. Stuetzle. Principal curves. JASA, 84, 1989.
[5] S. Mallat and Z. Zhang. Matching Pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12), December.
[6] S. Saitoh. Theory of Reproducing Kernels and its Applications. Longman Scientific & Technical, Harlow, England.
[7] B. Schölkopf. Support vector learning. Oldenbourg Verlag, Munich.
[8] B. Schölkopf, P. Knirsch, A. Smola, and C. Burges. Fast approximation of support vector kernel expansions, and an interpretation of clustering as approximation in feature spaces. In P. Levi et al., editors, DAGM'98, Berlin. Springer.
[9] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10, 1998.

Kernel PCA and De-Noising in Feature Spaces

Kernel PCA and De-Noising in Feature Spaces , Kernel PCA and De-Noising in Feature Spaces Sebastian Mika, Bernhard Schölkopf, Alex Smola Klaus-Robert Müller, Matthias Schol, Gunnar Rätsch GMD FIRST, Rudower Chaussee 5, 1489 Berlin, Germany mika,

More information

below, kernel PCA Eigenvectors, and linear combinations thereof. For the cases where the pre-image does exist, we can provide a means of constructing

below, kernel PCA Eigenvectors, and linear combinations thereof. For the cases where the pre-image does exist, we can provide a means of constructing Kernel PCA Pattern Reconstruction via Approximate Pre-Images Bernhard Scholkopf, Sebastian Mika, Alex Smola, Gunnar Ratsch, & Klaus-Robert Muller GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany fbs,

More information

CS229 Lecture notes. Andrew Ng

CS229 Lecture notes. Andrew Ng CS229 Lecture notes Andrew Ng Part IX The EM agorithm In the previous set of notes, we taked about the EM agorithm as appied to fitting a mixture of Gaussians. In this set of notes, we give a broader view

More information

Problem set 6 The Perron Frobenius theorem.

Problem set 6 The Perron Frobenius theorem. Probem set 6 The Perron Frobenius theorem. Math 22a4 Oct 2 204, Due Oct.28 In a future probem set I want to discuss some criteria which aow us to concude that that the ground state of a sef-adjoint operator

More information

Stochastic Variational Inference with Gradient Linearization

Stochastic Variational Inference with Gradient Linearization Stochastic Variationa Inference with Gradient Linearization Suppementa Materia Tobias Pötz * Anne S Wannenwetsch Stefan Roth Department of Computer Science, TU Darmstadt Preface In this suppementa materia,

More information

A Brief Introduction to Markov Chains and Hidden Markov Models

A Brief Introduction to Markov Chains and Hidden Markov Models A Brief Introduction to Markov Chains and Hidden Markov Modes Aen B MacKenzie Notes for December 1, 3, &8, 2015 Discrete-Time Markov Chains You may reca that when we first introduced random processes,

More information

SVM: Terminology 1(6) SVM: Terminology 2(6)

SVM: Terminology 1(6) SVM: Terminology 2(6) Andrew Kusiak Inteigent Systems Laboratory 39 Seamans Center he University of Iowa Iowa City, IA 54-57 SVM he maxima margin cassifier is simiar to the perceptron: It aso assumes that the data points are

More information

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries c 26 Noninear Phenomena in Compex Systems First-Order Corrections to Gutzwier s Trace Formua for Systems with Discrete Symmetries Hoger Cartarius, Jörg Main, and Günter Wunner Institut für Theoretische

More information

An Algorithm for Pruning Redundant Modules in Min-Max Modular Network

An Algorithm for Pruning Redundant Modules in Min-Max Modular Network An Agorithm for Pruning Redundant Modues in Min-Max Moduar Network Hui-Cheng Lian and Bao-Liang Lu Department of Computer Science and Engineering, Shanghai Jiao Tong University 1954 Hua Shan Rd., Shanghai

More information

Statistical Learning Theory: A Primer

Statistical Learning Theory: A Primer Internationa Journa of Computer Vision 38(), 9 3, 2000 c 2000 uwer Academic Pubishers. Manufactured in The Netherands. Statistica Learning Theory: A Primer THEODOROS EVGENIOU, MASSIMILIANO PONTIL AND TOMASO

More information

THE THREE POINT STEINER PROBLEM ON THE FLAT TORUS: THE MINIMAL LUNE CASE

THE THREE POINT STEINER PROBLEM ON THE FLAT TORUS: THE MINIMAL LUNE CASE THE THREE POINT STEINER PROBLEM ON THE FLAT TORUS: THE MINIMAL LUNE CASE KATIE L. MAY AND MELISSA A. MITCHELL Abstract. We show how to identify the minima path network connecting three fixed points on

More information

SVM-based Supervised and Unsupervised Classification Schemes

SVM-based Supervised and Unsupervised Classification Schemes SVM-based Supervised and Unsupervised Cassification Schemes LUMINITA STATE University of Pitesti Facuty of Mathematics and Computer Science 1 Targu din Vae St., Pitesti 110040 ROMANIA state@cicknet.ro

More information

Cryptanalysis of PKP: A New Approach

Cryptanalysis of PKP: A New Approach Cryptanaysis of PKP: A New Approach Éiane Jaumes and Antoine Joux DCSSI 18, rue du Dr. Zamenhoff F-92131 Issy-es-Mx Cedex France eiane.jaumes@wanadoo.fr Antoine.Joux@ens.fr Abstract. Quite recenty, in

More information

Partial permutation decoding for MacDonald codes

Partial permutation decoding for MacDonald codes Partia permutation decoding for MacDonad codes J.D. Key Department of Mathematics and Appied Mathematics University of the Western Cape 7535 Bevie, South Africa P. Seneviratne Department of Mathematics

More information

Multilayer Kerceptron

Multilayer Kerceptron Mutiayer Kerceptron Zotán Szabó, András Lőrincz Department of Information Systems, Facuty of Informatics Eötvös Loránd University Pázmány Péter sétány 1/C H-1117, Budapest, Hungary e-mai: szzoi@csetehu,

More information

FRST Multivariate Statistics. Multivariate Discriminant Analysis (MDA)

FRST Multivariate Statistics. Multivariate Discriminant Analysis (MDA) 1 FRST 531 -- Mutivariate Statistics Mutivariate Discriminant Anaysis (MDA) Purpose: 1. To predict which group (Y) an observation beongs to based on the characteristics of p predictor (X) variabes, using

More information

BALANCING REGULAR MATRIX PENCILS

BALANCING REGULAR MATRIX PENCILS BALANCING REGULAR MATRIX PENCILS DAMIEN LEMONNIER AND PAUL VAN DOOREN Abstract. In this paper we present a new diagona baancing technique for reguar matrix pencis λb A, which aims at reducing the sensitivity

More information

4 Separation of Variables

4 Separation of Variables 4 Separation of Variabes In this chapter we describe a cassica technique for constructing forma soutions to inear boundary vaue probems. The soution of three cassica (paraboic, hyperboic and eiptic) PDE

More information

A proposed nonparametric mixture density estimation using B-spline functions

A proposed nonparametric mixture density estimation using B-spline functions A proposed nonparametric mixture density estimation using B-spine functions Atizez Hadrich a,b, Mourad Zribi a, Afif Masmoudi b a Laboratoire d Informatique Signa et Image de a Côte d Opae (LISIC-EA 4491),

More information

XSAT of linear CNF formulas

XSAT of linear CNF formulas XSAT of inear CN formuas Bernd R. Schuh Dr. Bernd Schuh, D-50968 Kön, Germany; bernd.schuh@netcoogne.de eywords: compexity, XSAT, exact inear formua, -reguarity, -uniformity, NPcompeteness Abstract. Open

More information

Paper presented at the Workshop on Space Charge Physics in High Intensity Hadron Rings, sponsored by Brookhaven National Laboratory, May 4-7,1998

Paper presented at the Workshop on Space Charge Physics in High Intensity Hadron Rings, sponsored by Brookhaven National Laboratory, May 4-7,1998 Paper presented at the Workshop on Space Charge Physics in High ntensity Hadron Rings, sponsored by Brookhaven Nationa Laboratory, May 4-7,998 Noninear Sef Consistent High Resoution Beam Hao Agorithm in

More information

Explicit overall risk minimization transductive bound

Explicit overall risk minimization transductive bound 1 Expicit overa risk minimization transductive bound Sergio Decherchi, Paoo Gastado, Sandro Ridea, Rodofo Zunino Dept. of Biophysica and Eectronic Engineering (DIBE), Genoa University Via Opera Pia 11a,

More information

$, (2.1) n="# #. (2.2)

$, (2.1) n=# #. (2.2) Chapter. Eectrostatic II Notes: Most of the materia presented in this chapter is taken from Jackson, Chap.,, and 4, and Di Bartoo, Chap... Mathematica Considerations.. The Fourier series and the Fourier

More information

MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES

MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES Separation of variabes is a method to sove certain PDEs which have a warped product structure. First, on R n, a inear PDE of order m is

More information

A Solution to the 4-bit Parity Problem with a Single Quaternary Neuron

A Solution to the 4-bit Parity Problem with a Single Quaternary Neuron Neura Information Processing - Letters and Reviews Vo. 5, No. 2, November 2004 LETTER A Soution to the 4-bit Parity Probem with a Singe Quaternary Neuron Tohru Nitta Nationa Institute of Advanced Industria

More information

From Margins to Probabilities in Multiclass Learning Problems

From Margins to Probabilities in Multiclass Learning Problems From Margins to Probabiities in Muticass Learning Probems Andrea Passerini and Massimiiano Ponti 2 and Paoo Frasconi 3 Abstract. We study the probem of muticass cassification within the framework of error

More information

18-660: Numerical Methods for Engineering Design and Optimization

18-660: Numerical Methods for Engineering Design and Optimization 8-660: Numerica Methods for Engineering esign and Optimization in i epartment of ECE Carnegie Meon University Pittsburgh, PA 523 Side Overview Conjugate Gradient Method (Part 4) Pre-conditioning Noninear

More information

A. Distribution of the test statistic

In the sequential test, we first compute the test statistic from a mini-batch of size m. If a decision cannot be made with this statistic, we keep increasing the mini-batch

LECTURE NOTES 8 THE TRACELESS SYMMETRIC TENSOR EXPANSION AND STANDARD SPHERICAL HARMONICS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY, Physics Department, Physics 8.07: Electromagnetism II, October 2012, Prof. Alan Guth. LECTURE NOTES 8: THE TRACELESS SYMMETRIC TENSOR EXPANSION AND STANDARD SPHERICAL HARMONICS

FFTs in Graphics and Vision. Spherical Convolution and Axial Symmetry Detection

Outline: Math Review; Symmetry; General Convolution; Spherical Convolution; Axial Symmetry Detection. Math Review, Symmetry: Given a unitary

Two view learning: SVM-2K, Theory and Practice

Jason D.R. Farquhar jdrf99r@ecs.soton.ac.uk, Hongying Meng hongying@cs.york.ac.uk, David R. Hardoon drh@ecs.soton.ac.uk, John Shawe-Taylor jst@ecs.soton.ac.uk

An explicit Jordan Decomposition of Companion matrices

Fermín S. V. Bazán, Departamento de Matemática, CFM, UFSC, 88040-900 Florianópolis, SC. E-mail: fermin@mtmufscbr. S. Gratton, CERFACS, 42 Av. Gaspard Coriolis, 31057

C. Fourier Sine Series Overview

12 PHILIP D. LOEWEN. C. Fourier Sine Series Overview. Let some constant l > 0 be given. The symbolic form of the FSS eigenvalue problem combines an ordinary differential equation (ODE) on the interval (0, l) with a

A Summary of Gaussian Processes, Coryn A.L. Bailer-Jones

Coryn A.L. Bailer-Jones, Cavendish Laboratory, University of Cambridge, caj@mrao.cam.ac.uk. Introduction: A general prediction problem can be posed as follows. We consider that the variable

Mode in Output Participation Factors for Linear Systems

2010 American Control Conference, Marriott Waterfront, Baltimore, MD, USA, June 30 - July 02, 2010, WeB05.5. Li Sheng, Eyad H. Abed, Munther A. Hassouneh, Huizhong

arXiv:cond-mat/ v1 [cond-mat.dis-nn] 13 Feb 2003

Brownian Motion in wedges, last passage time and the second arc-sine law. Alain Comtet and Jean Desbois, Laboratoire de Physique Théorique et Modèles

4 1-D Boundary Value Problems: Heat Equation

The main purpose of this chapter is to study boundary value problems for the heat equation on a finite rod a ≤ x ≤ b: u_t(x, t) = k u_xx(x, t), a < x < b, t > 0, u(x, 0) = ϕ(x)
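The boundary value problem quoted above (u_t = k u_xx on a finite rod) can be illustrated numerically. This is a minimal explicit finite-difference (FTCS) sketch under assumptions of my own (homogeneous Dirichlet data, hypothetical function names), not code from the chapter:

```python
import math

def heat_ftcs(phi, a, b, k, n, dt, steps):
    """Explicit (FTCS) scheme for u_t = k*u_xx on [a, b] with u(a,t) = u(b,t) = 0."""
    dx = (b - a) / n
    r = k * dt / dx ** 2
    assert r <= 0.5, "stability condition k*dt/dx^2 <= 1/2 violated"
    u = [phi(a + i * dx) for i in range(n + 1)]
    u[0] = u[n] = 0.0  # Dirichlet boundary conditions
    for _ in range(steps):
        # interior update: u_i += r * (u_{i+1} - 2*u_i + u_{i-1})
        u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, n)] + [0.0]
    return u

# sin(pi*x) initial data on [0, 1] decays like exp(-k*pi^2*t)
u = heat_ftcs(lambda x: math.sin(math.pi * x), 0.0, 1.0, 1.0, 50, 1e-4, 1000)
```

At t = 0.1 the exact peak value is exp(-pi^2/10) ≈ 0.37, which the scheme reproduces to a few digits.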

More Scattering: the Partial Wave Expansion

Michael Fowler. Plane Waves and Partial Waves: We are considering the solution to Schrödinger's equation for scattering of an incoming plane wave in the z-direction

Higher dimensional PDEs and multidimensional eigenvalue problems

1 Problems with three independent variables. Consider the prototypical equations u_t = ∆u (Diffusion), u_tt = ∆u (Wave), u_zz = −∆u (Laplace), where u = u

Algorithms to solve massively under-defined systems of multivariate quadratic equations

Yasufumi Hashimoto. Abstract: It is well known that the problem of solving a set of randomly chosen multivariate quadratic equations

Discrete Techniques. Chapter Introduction

Chapter 3: Discrete Techniques. 3.1 Introduction. In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, as well as various

A unified framework for Regularization Networks and Support Vector Machines. Theodoros Evgeniou, Massimiliano Pontil, Tomaso Poggio

MASSACHUSETTS INSTITUTE OF TECHNOLOGY, ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING, DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES. A.I. Memo No. 1654, March 23, 1999

Lecture Note 3: Stationary Iterative Methods

MATH 5330: Computational Methods of Linear Algebra. Xianyi Zeng, Department of Mathematical Sciences, UTEP. Stationary Iterative Methods: The Gaussian elimination (or

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents

ARINDRIMA DATTA. Abstract: In this paper, we begin with a formal introduction to probability and explain the concept of random variables and stochastic processes. After

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons

Brandon Malone, Department of Computer Science, University of Helsinki, February 18, 2014. Abstract: This document derives, in excruciating
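The EM derivation previewed above can be exercised in a few lines. The following is a generic sketch of EM for a two-component Poisson mixture; the initial guesses, sample sizes, Knuth-style sampler, and all function names are illustrative choices of mine, not taken from the document:

```python
import math
import random

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sample_poisson(lam, rng):
    # Knuth's multiplication method for Poisson draws
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def em_two_poissons(data, lam1, lam2, pi1, iters=100):
    """EM for the mixture pi1*Poisson(lam1) + (1-pi1)*Poisson(lam2)."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observed count
        r = [pi1 * poisson_pmf(x, lam1)
             / (pi1 * poisson_pmf(x, lam1) + (1 - pi1) * poisson_pmf(x, lam2))
             for x in data]
        # M-step: responsibility-weighted means and mixing proportion
        n1 = sum(r)
        lam1 = sum(ri * x for ri, x in zip(r, data)) / n1
        lam2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
        pi1 = n1 / len(data)
    return lam1, lam2, pi1

rng = random.Random(0)
data = [sample_poisson(2.0, rng) if rng.random() < 0.6 else sample_poisson(10.0, rng)
        for _ in range(500)]
lam1, lam2, pi1 = em_two_poissons(data, 1.0, 5.0, 0.5)
```

With the components this well separated, the estimates land near the true (2, 10, 0.6) up to sampling noise.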

An approximate method for solving the inverse scattering problem with fixed-energy data

J. Inv. Ill-Posed Problems, Vol. 7, No. 6, pp. 561-571 (1999), © VSP 1999. A. G. Ramm and W. Scheid. Received May 12, 1999

Discriminant Analysis: A Unified Approach

Peng Zhang & Jing Peng, Tulane University, Electrical Engineering & Computer Science Department, New Orleans, LA 708, {zhangp,jp}@eecs.tulane.edu. Norbert Riedel, Tulane University

The Group Structure on a Smooth Tropical Cubic

Ethan Lake, April 20, 2015. Abstract: Just as in classical algebraic geometry, it is possible to define a group law on a smooth tropical cubic curve. In this note,

The EM Algorithm applied to determining new limit points of Mahler measures

Control and Cybernetics vol. 39 (2010) No. 4. By Souad El Otmani, Georges Rhin and Jean-Marc Sac-Épée, Université Paul Verlaine-Metz, LMAM,

Kernel Trick Embedded Gaussian Mixture Model

Jingdong Wang, Jianguo Lee, and Changshui Zhang, State Key Laboratory of Intelligent Technology and Systems, Department of Automation, Tsinghua University, Beijing,

Approximated MLC shape matrix decomposition with interleaf collision constraint

Thomas Kalinowski, Antje Kiesel. Abstract: Shape matrix decomposition is a subproblem in radiation therapy planning. A given fluence

Support Vector Machine and Its Application to Regression and Classification

BearWorks Institutional Repository, MSU Graduate Theses, Spring 2017. Xiaotong Hu. As with any intellectual project, the content and views

Statistical Learning Theory: a Primer

© Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. THEODOROS EVGENIOU AND MASSIMILIANO PONTIL, Center for Biological and Computational

An Information Geometrical View of Stationary Subspace Analysis

Motoaki Kawanabe, Wojciech Samek, Paul von Bünau, and Frank C. Meinecke, Fraunhofer Institute FIRST, Kekuléstr. 7, 12489 Berlin, Germany; Berlin

ORTHOGONAL MULTI-WAVELETS FROM MATRIX FACTORIZATION

J. Korean Math. Soc. 46 (2009), No. 2, pp. 281-294. Hongying Xiao. Abstract: Accuracy of the scaling function is very crucial in wavelet theory, or correspondingly,

Componentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems

L. V. Kolev, Dept. of Theoretical Electrotechnics, Faculty of Automatics, Technical University of Sofia, 1000 Sofia,

Smoothness equivalence properties of univariate subdivision schemes and their projection analogues

Numerische Mathematik manuscript No. (will be inserted by the editor). Philipp Grohs, TU Graz, Institute of Geometry

λ 2 l 2 < C 2, λ 1 l 1 < C 1

Telecommunication Network Control and Management (EE E694), Prof. A. A. Lazar. Notes for the lecture of 7/Feb/95 by Huayan Wang (this document was last LaTeX-ed on May 9, 1995). Queueing Primer for Multiclass Optimal

DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM

MIKAEL NILSSON, MATTIAS DAHL AND INGVAR CLAESSON, Blekinge Institute of Technology, Department of Telecommunications and Signal Processing

CONJUGATE GRADIENT WITH SUBSPACE OPTIMIZATION

SAHAR KARIMI AND STEPHEN VAVASIS. Abstract: In this paper we present a variant of the conjugate gradient (CG) algorithm in which we invoke a subspace minimization

Interpolating function and Stokes Phenomena

Masazumi Honda and Dileep P. Jatkar, arXiv:1504.02276v3 [hep-th] 2 Jul 2015. Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad, India. Abstract: When

Physics 505 Fall Homework Assignment #4 Solutions

Physics 505, Fall 2005, Homework Assignment #4 Solutions. Textbook problems: Ch. 3: 3.4, 3.6, 3.9, 3.10. 3.4 The surface of a hollow conducting sphere of inner radius a is divided into an even number of equal segments

Data Mining Technology for Failure Prognostic of Avionics

IEEE Transactions on Aerospace and Electronic Systems, Volume 38, pp. 388-403, 2002. V.A. Skormin, Binghamton University, Binghamton, NY, USA

Physics 235 Chapter 8. Chapter 8 Central-Force Motion

Physics 235, Chapter 8: Central-Force Motion. In this chapter we will use the theory we have discussed in Chapters 6 and 7 and apply it to very important problems in physics, in which we study the motion

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel

Vera Miloslavskaya, Peter Trifonov, Saint-Petersburg State Polytechnic University. Email: {veram, petert}@dcn.icc.spbstu.ru. Abstract: The problem of efficient

In-plane shear stiffness of bare steel deck through shell finite element models. G. Bian, B.W. Schafer. June 2017

G. Bian, B.W. Schafer, June 2017. COLD-FORMED STEEL RESEARCH CONSORTIUM REPORT SERIES, CFSRC R-7-, SDII Steel Diaphragm Innovation Initiative

A SIMPLIFIED DESIGN OF MULTIDIMENSIONAL TRANSFER FUNCTION MODELS

Stefan Petrausch, Rudolf Rabenstein, Multimedia Communications and Signal Processing, University of Erlangen-Nuremberg, Cauerstr. 7, 91058 Erlangen, GERMANY

T.C. Banwell, S. Galli. Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA

ON THE SYMMETRY OF THE POWER LINE CHANNEL. T.C. Banwell, S. Galli, {bct, sgalli}@research.telcordia.com, Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA. Abstract: The indoor power line network

Separation of Variables and a Spherical Shell with Surface Charge

In class we worked out the electrostatic potential due to a spherical shell of radius R with a surface charge density σ(θ) = σ0 cos θ. This calculation

Some Measures for Asymmetry of Distributions

Georgi N. Boshnakov. First version: 31 January 2006. Research Report No. 5, 2006, Probability and Statistics Group, School of Mathematics, The University of Manchester

Theory and implementation behind: Universal surface creation - smallest unitcell

Bjarke Brink Buus, Jakob Howalt & Thomas Bligaard, September 15, 2018. 1 Construction of surface slabs. The aim for this part of the project is

A Robust Voice Activity Detection based on Noise Eigenspace Projection

Dongwen Ying, Yu Shi, Frank Soong, Jianwu Dang, and Xugang Lu. Japan Advanced Institute of Science and Technology, Nomi

Minimizing Total Weighted Completion Time on Uniform Machines with Unbounded Batch

The Eighth International Symposium on Operations Research and Its Applications (ISORA 09), Zhangjiajie, China, September 20-22, 2009. Copyright 2009 ORSC & APORC, pp. 402-408. Minimizing Total Weighted Completion

Gauss Law. 2. Gauss's Law: connects charge and field 3. Applications of Gauss's Law

Gauss Law. 1. Review of 1) Coulomb's Law (charge and force), 2) Electric Field (field and force). 2. Gauss's Law: connects charge and field. 3. Applications of Gauss's Law. Coulomb's Law and Electric Field: Coulomb's

6 Wave Equation on an Interval: Separation of Variables

6.1 Dirichlet Boundary Conditions. Ref: Strauss, Chapter 4. We now use the separation of variables technique to study the wave equation on a finite interval.

Multicategory Classification by Support Vector Machines

Erin J. Bredensteiner, Department of Mathematics, University of Evansville, 800 Lincoln Avenue, Evansville, Indiana 47722, eb6@evansville.edu. Kristin P. Bennett, Department

arXiv: v2 [cond-mat.stat-mech] 14 Nov 2008

Random Boolean Networks. Barbara Drossel, Institute of Condensed Matter Physics, Darmstadt University of Technology, Hochschulstraße 6, 64289 Darmstadt, Germany (Dated: June 2007). arXiv:0706.3351v2 [cond-mat.stat-mech]

Math 124B January 31, 2012

Math 124B, January 31, 2012. Viktor Grigoryan. 7 Inhomogeneous boundary value problems: Having studied the theory of Fourier series, with which we successfully solved boundary value problems for the homogeneous heat

Approximated MLC shape matrix decomposition with interleaf collision constraint

Algorithmic Operations Research Vol. 4 (2009) 49-57. Antje Kiesel and Thomas Kalinowski, Institut für Mathematik, Universität Rostock,

Bayesian Learning. You hear a word which could equally be Thanks or Tanks, which would you go with?

Bayesian Learning: A powerful and growing approach in machine learning. We use it in our own decision making all the time. You hear a word which could equally be Thanks or Tanks; which would you go with? Combine
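The "Thanks or Tanks" teaser above is just Bayes' rule with a strong prior; here is a toy sketch where the prior and likelihood numbers are invented for illustration (nothing here comes from the slides):

```python
# Posterior P(word | sound) is proportional to P(sound | word) * P(word)
prior = {"thanks": 0.999, "tanks": 0.001}   # "thanks" is far more common in speech
likelihood = {"thanks": 0.5, "tanks": 0.6}  # the acoustics slightly favor "tanks"

unnorm = {w: likelihood[w] * prior[w] for w in prior}
z = sum(unnorm.values())
posterior = {w: p / z for w, p in unnorm.items()}

best = max(posterior, key=posterior.get)
```

Even though the sound slightly favors "tanks", the prior dominates and the posterior picks "thanks".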

Vectors in many applications, most of the α_i, which are found by solving a quadratic program, turn out to be 0. Excellent classification accuracies in bo

Fast Approximation of Support Vector Kernel Expansions, and an Interpretation of Clustering as Approximation in Feature Spaces. Bernhard Scholkopf, Phil Knirsch, Alex Smola, and Chris Burges. GMD

Preconditioned Locally Harmonic Residual Method for Computing Interior Eigenpairs of Certain Classes of Hermitian Matrices

MITSUBISHI ELECTRIC RESEARCH LABORATORIES, http://www.merl.com. Vecharynski, E.; Knyazev,

LECTURE NOTES 9 TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL HARMONICS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY, Physics Department, Physics 8.07: Electromagnetism II, October 7, 2012, Prof. Alan Guth. LECTURE NOTES 9: TRACELESS SYMMETRIC TENSOR APPROACH TO LEGENDRE POLYNOMIALS AND SPHERICAL

Convergence Property of the Iri-Imai Algorithm for Some Smooth Convex Programming Problems

S. Zhang, communicated by Z.Q. Luo. Assistant Professor, Department of Econometrics, University of Groningen, Groningen,

SOME OPEN PROBLEMS ON MULTIPLE ERGODIC AVERAGES

NIKOS FRANTZIKINAKIS. Problems related to polynomial sequences: In this section we give a list of problems related to the study of multiple ergodic averages involving iterates

Formulas for Angular-Momentum Barrier Factors Version II

BNL PREPRINT BNL-QGS-06-101, brfactor1.tex. S. U. Chung, Physics Department, Brookhaven National Laboratory, Upton, NY 11973. March 19, 2015. Abstract: A

Active Learning & Experimental Design

Daniel Ting. Heavily modified, of course, by Lyle Ungar. Original slides by Barbara Engelhardt and Alex Shyr. Lyle Ungar, University of Pennsylvania. Motivation: Data collection

Input-to-state stability for a class of Lurie systems

Automatica 38 (2002) 945-949, www.elsevier.com/locate/automatica. Brief Paper. Murat Arcak, Andrew Teel. Department of Electrical, Computer and Systems

SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS

By Yingying Fan and Jinchi Lv, University of Southern California. This Supplementary Material

Math 124B January 17, 2012

Math 124B, January 17, 2012. Viktor Grigoryan. 3 Full Fourier series: We saw in previous lectures how the Dirichlet and Neumann boundary conditions lead respectively to sine and cosine Fourier series of the initial

Kernel Matching Pursuit

Pascal Vincent and Yoshua Bengio, Dept. IRO, Université de Montréal, C.P. 6128, Montreal, Qc, H3C 3J7, Canada, {vincentp,bengioy}@iro.umontreal.ca. Technical Report #1179, Département d'Informatique

Robust Sensitivity Analysis for Linear Programming with Ellipsoidal Perturbation

Ruotian Gao and Wenxun Xing, Department of Mathematical Sciences, Tsinghua University, Beijing, China, 100084. September 27, 2017

8 Digital Circuits and devices

8.1 Introduction. In analog electronics, voltage is a continuous variable. This is useful because most physical quantities we encounter are continuous: sound levels, light intensity,

Strauss PDEs 2e: Section 5.6, Exercise 2, Page 1 of 12. For problem (1), complete the calculation of the series in case j(t) = 0 and h(t) = e^t.

Exercise 2: For problem (1), complete the calculation of the series in case j(t) = 0 and h(t) = e^t. Solution: With j(t) = 0 and h(t) = e^t, problem (1) on page 147 becomes

Approximation and Fast Calculation of Non-local Boundary Conditions for the Time-dependent Schrödinger Equation

Anton Arnold, Matthias Ehrhardt, and Ivan Sofronov. Universität Münster, Institut für Numerische

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION

Hsiao-Chang Chen, Dept. of Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104-6315, U.S.A. Chun-Hung Chen

NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS

TONY ALLEN, EMILY GEBHARDT, AND ADAM KLUBALL. ADVISOR: DR. TIFFANY KOLBA. Abstract: The phenomenon of noise-induced stabilization occurs

hole h vs. e configurations for N > 2l + 1; H as example of localization, delocalization, tunneling

Infinite 1-D Lattice. CTDL, pages 1156-1168. LAST TIME: hole h vs. e configurations for N > 2l + 1; Hund's 3rd Rule (Lowest L - S term of

Absolute Value Preconditioning for Symmetric Indefinite Linear Systems

MITSUBISHI ELECTRIC RESEARCH LABORATORIES, http://www.merl.com. Vecharynski, E.; Knyazev, A.V. TR2013-016, March 2013. Abstract: We introduce