Local Linear ICA for Mutual Information Estimation in Feature Selection


Local Linear ICA for Mutual Information Estimation in Feature Selection

Tian Lan, Deniz Erdogmus
Department of Biomedical Engineering, OGI, Oregon Health & Science University, Portland, Oregon, USA

Abstract: Mutual information (MI) is an important tool in many applications. Specifically, in classification systems, feature selection based on MI estimation between features and class labels helps to identify the features most directly related to classification performance. MI estimation is extremely difficult and imprecise in high-dimensional feature spaces with arbitrary distributions. We propose a framework using ICA and sample-spacing based entropy estimators to estimate MI. In this framework, the high-dimensional MI estimation problem is reduced to multiple independent one-dimensional MI estimation problems. This approach is computationally efficient; however, its precision relies heavily on the result of the ICA. In our previous work, we assumed that the feature space has linear structure, hence linear ICA was adopted. Nevertheless, this is a weak assumption, which might not hold in many applications. Although nonlinear ICA can in theory solve any ICA problem, its complexity and its requirement for data samples restrict its application. A better trade-off between linear and nonlinear ICA is local linear ICA, which uses piecewise linear ICA to approximate the nonlinear relationships among the data. In this paper, we propose that by replacing linear ICA with local linear ICA, we obtain precise MI estimates without greatly increasing the computational complexity. The experimental results substantiate this claim.

Keywords: Local Linear ICA, Mutual Information, Entropy Estimation, Feature Selection

I. INTRODUCTION

Mutual information is an important tool in many applications, such as communications, signal processing, and machine learning.
Specifically, in pattern recognition, dimensionality reduction and feature selection based on mutual information maximization between features and class labels have attracted increasing attention, because this approach can find the most relevant features; it therefore (i) reduces the computational load in real-time systems; (ii) eliminates irrelevant or noisy features, increasing the robustness of the system; and (iii) is a filter approach, independent of the classifier design, and hence more flexible. The MI-based method for feature selection is motivated by lower and upper bounds in information theory [2,3]. The average probability of error has been shown to be related to the MI between the feature vectors and the class labels. Specifically, Fano's and Hellman & Raviv's bounds demonstrate that the probability of error is bounded from below and above by quantities that depend on the Shannon MI between these variables. Maximizing this MI reduces both bounds and therefore forces the probability of error to decrease. One difficulty of this MI-based feature selection method is that estimating MI requires knowledge of the joint pdf of the data in the feature space. However, since features are generally mutually dependent, feature selection in this manner is typically suboptimal in the sense of the maximum joint mutual information principle. Several MI-based methods have been developed for feature selection in past years [4-9]. Unfortunately, none of these methods handles the high-dimensional situation. In practice, the mutual information must be estimated nonparametrically from the training samples. Although this is a challenging problem for multiple continuous-valued random variables, the class labels are discrete-valued in the classification context. This reduces the problem to estimating entropies of continuous random vectors. Furthermore, if the components of the random vector are independent, the joint entropy becomes the sum of marginal entropies.
Thus, the joint mutual information of a feature vector with the class labels is equal to the sum of the marginal mutual information of each individual feature with the class labels, provided that the features are independent. In our previous work, we exploited this fact and proposed a framework using an ICA transformation and sample-spacing estimators to estimate the mutual information between features and class labels [1]. This framework is attractive because it is open to diverse algorithms; each component, including the ICA transformation and the entropy estimator, can be replaced by any qualified alternative.
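The core computation in this framework is the one-dimensional quantity I(y; c) = H(y) − Σ_c p_c H(y|c), estimated with the sample-spacing entropy estimator described in Section II. A minimal sketch of this computation follows; the two-class toy data, the m = √N choice, and the helper names are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

def spacing_entropy(y, m=None):
    """m-spacing entropy estimator for 1-D samples (based on order statistics)."""
    y = np.sort(np.asarray(y, dtype=float))
    N = len(y)
    if m is None:
        m = max(1, int(round(np.sqrt(N))))   # typical bias-variance trade-off choice
    gaps = y[m:] - y[:-m]                    # y_(i+m) - y_(i), i = 1..N-m
    return np.mean(np.log((N + 1) * gaps / m))

def feature_mi(y, c):
    """I(y; c) = H(y) - sum_c p_c H(y|c) for a 1-D feature and discrete labels."""
    labels, counts = np.unique(c, return_counts=True)
    priors = counts / len(c)
    h_cond = sum(p * spacing_entropy(y[c == k]) for k, p in zip(labels, priors))
    return spacing_entropy(y) - h_cond

# Toy check: a class-dependent feature versus class-independent noise.
rng = np.random.default_rng(0)
c = rng.integers(0, 2, 5000)
informative = rng.normal(loc=2.0 * c, scale=1.0)   # mean shifts with the class
noise = rng.normal(size=5000)                      # independent of the class
mi_inf, mi_noise = feature_mi(informative, c), feature_mi(noise, c)
```

On data like this, the class-dependent feature receives a clearly larger MI estimate than the noise feature, which is exactly the signal used for ranking features.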

Figure 1. Block diagram of local linear ICA for mutual information estimation: the data x_1, ..., x_d are clustered into clusters 1 through n, a linear ICA transformation W^(i) is applied within each cluster, the MI with the class label c is estimated per cluster, and the cluster-wise estimates are combined.

We employed the cumulant-based generalized eigenvalue decomposition (GED) approach [10] to determine the linear ICA transformation, and the sample-spacing estimator [11] as the marginal entropy estimator, in EEG signal classification [12]. Although linear ICA yields good performance in some applications, it is a weak assumption in many real-world problems. Since this framework relies heavily on the performance of the ICA, the precision of the MI estimate is greatly impaired in nonlinear situations. Recently, nonlinear ICA has attracted more attention due to its ability to capture the nonlinear relationships within the data [13]. However, the complexity of finding robust regularized nonlinear transformations makes it difficult to use in many situations. Furthermore, nonlinear ICA has the following shortcomings: (i) it requires more training data, especially in higher-dimensional situations; (ii) our framework needs to be revised according to the form of the nonlinear ICA. In this case, local linear ICA (which will be used in this paper) presents a good trade-off [14]. Local ICA uses piecewise linear ICA transformations to approximate nonlinear ICA. In practice, a clustering algorithm is applied first to divide the data into segments. We assume the data within each segment have a linear structure, so we can use the previously proposed framework to estimate the MI within each segment. By the principle of information additivity, the total MI equals the sum of the MIs of the segments. In this way, we can easily extend our previous framework to local ICA-based nonlinear MI estimation. The system block diagram of local ICA-based MI estimation is shown in Fig. 1. In the following sections, we introduce the theoretical feasibility of local ICA for MI estimation.
To verify the proposed approach, we also present experimental results on synthetic and EEG data sets.

II. THEORETICAL BACKGROUND

Consider a group of nonlinearly distributed, d-dimensional feature vectors x = [x_1, x_2, ..., x_d]^T. We first apply a suitable clustering algorithm to segment the data into n partitions: x^(1), x^(2), ..., x^(n). We assume that within each partition x^(i) the data is d-dimensional and distributed in accordance with the linear ICA model. Within each partition, we apply the linear ICA transformation to get feature vectors y^(1), y^(2), ..., y^(n), where y^(i) = [y^(i)_1, y^(i)_2, ..., y^(i)_d]^T. Since a linear ICA transformation does not change the mutual information, we have

    I_S(x^(i); c) = I_S(y^(i); c)    (1)

where i = 1, ..., n denotes the cluster, x^(i) are the original dependent components, and y^(i) are the independent components. It is well known that the mutual information can be expressed in terms of conditional entropy as follows:

    I_S(y^(i); c) = H_S(y^(i)) - Σ_c p_c H_S(y^(i) | c)    (2)

where c is the class label and p_c are the prior class probabilities. If the components of the random vector y^(i) are independent, the joint and joint-conditional entropies become the sums of marginal and marginal-conditional entropies:¹

    I_S(y^(i)_1, y^(i)_2, ..., y^(i)_d; c) ≈ Σ_j H_S(y^(i)_j) - Σ_c p_c Σ_j H_S(y^(i)_j | c)    (3)

It is easy to show that the total MI between x and c equals the sum of the MI within each segmented data group:

    I_S(x; c) = Σ_i I_S(x^(i); c)    (4)

Combining (1)-(4), we get

    I_S(x_1, x_2, ..., x_d; c) ≈ Σ_i [ Σ_j H_S(y^(i)_j) - Σ_c p_c Σ_j H_S(y^(i)_j | c) ]    (5)

In principle, the proposed local ICA for MI estimation contains three parts: a clustering algorithm, a linear ICA algorithm, and a one-dimensional entropy

¹ Here we assume that the same ICA transformation achieves independence in the overall and conditional distributions simultaneously.

estimator. For each component, one can use any established method. In this paper, we use K-means clustering, GED-based ICA transformations, and sample-spacing entropy estimators.

K-means clustering: The K-means method is defined as minimizing the overall distance of the data to the centers of K clusters:

    J = Σ_{i=1}^{K} Σ_{x ∈ S_i} ||x - m_i||²    (6)

where m_i is the center of cluster S_i. This algorithm first selects K arbitrary cluster centers, and then calculates the distance from every data point to each cluster center. By assigning each data point to its nearest center, we get a segmentation of the data set. Because the initial cluster centers are chosen arbitrarily, this segmentation may not minimize (6). Replacing each cluster center with the mean of its group, we get a new set of cluster centers. This process is repeated until J converges to a minimum.

ICA Using Generalized Eigenvalue Decomposition: The square linear ICA problem is expressed in (7), where X is the n × N observation matrix, A is the n × n mixing matrix, and S is the n × N independent source matrix:

    X = AS    (7)

Each column of X and S represents one data sample. Many effective and efficient algorithms exist to solve the ICA problem, based on a variety of assumptions including maximization of non-Gaussianity and minimization of mutual information [11,15,16]. Those utilizing fourth-order cumulants can be compactly formulated as a generalized eigendecomposition problem that gives the ICA solution in analytical form [10]. According to this formulation, one possible assumption set that leads to an ICA solution utilizes higher-order statistics (specifically, fourth-order cumulants). Under this set of assumptions, the separation matrix W is the solution to the following generalized eigendecomposition problem:

    R_x W = Q_x W Λ    (8)

where R_x is the covariance matrix and Q_x is the cumulant matrix estimated using sample averages: Q_x = E[x^T x · xx^T] - R_x tr(R_x) - E[xx^T]E[xx^T] - R_x R_x.
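As a concrete sketch, the separation matrix in (8) can be obtained from sample estimates of R_x and Q_x with a plain numpy eigendecomposition. This is an illustrative reconstruction under the stated fourth-order cumulant assumptions, not the authors' implementation; the mixing matrix and source distributions below are our own toy choices.

```python
import numpy as np

def ged_ica(X):
    """Cumulant-based ICA: solve R_x W = Q_x W Lambda for the separation matrix W.

    X is a d x N observation matrix (one sample per column).
    """
    X = X - X.mean(axis=1, keepdims=True)           # work with zero-mean data
    N = X.shape[1]
    R = X @ X.T / N                                  # covariance matrix R_x
    sq = np.sum(X ** 2, axis=0)                      # ||x||^2 for each sample
    Q = (X * sq) @ X.T / N - R * np.trace(R) - 2 * R @ R   # cumulant matrix Q_x
    # R W = Q W Lambda  is equivalent to  (Q^{-1} R) W = W Lambda
    _, W = np.linalg.eig(np.linalg.solve(Q, R))
    return np.real(W)

# Two independent sources with distinct kurtoses, linearly mixed.
rng = np.random.default_rng(1)
S = np.vstack([rng.uniform(-1, 1, 20000), rng.laplace(0, 1, 20000)])
A = np.array([[2.0, 1.0], [1.0, 1.0]])
X = A @ S
W = ged_ica(X)
Y = W.T @ X          # recovered components, up to scaling and permutation
```

Separation succeeds here because the two sources have distinct fourth-order cumulants (negative for the uniform source, positive for the Laplacian one), so the generalized eigenvalues are distinct.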
Given the estimates for these matrices, the ICA solution can be easily determined using efficient generalized eigendecomposition algorithms (or the eig command in Matlab).

Estimating MI Using Sample-Spacings: Many entropy estimators exist in the literature for single-dimensional variables. Here, we use an estimator based on sample-spacings [11], which stems from order statistics. This estimator is selected because of its consistency, rapid asymptotic convergence, and simplicity. Consider a one-dimensional random variable Y. Given a set of iid samples of Y, {y_1, ..., y_N}, these samples are first sorted in increasing order such that y_(1) ≤ ... ≤ y_(N). The m-spacing entropy estimator is given by

    H(Y) ≈ (1/(N-m)) Σ_{i=1}^{N-m} log( (N+1)(y_(i+m) - y_(i)) / m )    (9)

The selection of the parameter m is determined by a bias-variance trade-off; typically m = √N. In general, for asymptotic consistency the sequence m(N) should satisfy

    lim_{N→∞} m(N) = ∞,  lim_{N→∞} m(N)/N = 0    (10)

Figure 2. Synthetic dataset. Top: distribution of x_1 and x_2; bottom: distribution of x_3 and x_4.

III. EXPERIMENTS AND RESULTS

Synthetic Dataset: To illustrate the feasibility and performance of the proposed local ICA for MI estimation approach, we apply it to a synthetic dataset. This dataset consists of four-dimensional feature vectors x_i (i = 1, ..., 4), where x_1 and x_2 are nonlinearly related (Fig. 2, top), and x_3 and x_4 are Gaussian distributed with different means and variances (Fig. 2, bottom). There are two classes in this dataset (shown in blue and red in Fig. 2). The two classes are separable in the x_1-x_2 plane but overlap in the x_3-x_4 plane. Clearly this dataset can be well classified using only x_1 and x_2, while x_3 and x_4 provide redundant and insufficient information for classification. From Fig. 2 we can see that x_2 has less overlap than x_1, while x_3 has less overlap than x_4. So ideally, the feature ranking in descending order of importance in terms of classification rate should be x_2, x_1, x_3, x_4. In our experiments, we choose a sample size of 10000 and segment the data into 50 partitions. The '+' marks in Fig. 2 represent the cluster centers. In the top panel, these centers are distributed evenly along the curve of each class, segmenting the nonlinear data into piecewise linear groups. The choice of the number of centers K is critical. For larger K, the linear relationship within each segment becomes stronger. However, a large K diminishes the sample size within some partitions, which causes inaccurate ICA transformation estimates and entropy estimates. Applying the proposed method to this synthetic dataset, we get the feature ranking x_2, x_1, x_3, x_4, which is consistent with our expectation.

EEG data classification: To compare the performance of local ICA for MI estimation with the previous linear ICA approximation, we apply both approaches to an EEG-based brain-computer interface (BCI) dataset.
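The segment-then-sum recipe used on the synthetic data can be sketched end-to-end on a small toy problem. This is an illustrative reconstruction with simplified ingredients: K-means is a few hand-rolled Lloyd iterations, and the per-cluster ICA step is omitted because the toy features are generated independently within each cluster (so the ICA transformation would reduce to the identity); all data and names are our own assumptions.

```python
import numpy as np

def spacing_entropy(y, m=None):
    """m-spacing entropy estimator (order-statistics based)."""
    y = np.sort(np.asarray(y, dtype=float))
    N = len(y)
    if m is None:
        m = max(1, int(round(np.sqrt(N))))
    gaps = y[m:] - y[:-m]
    return np.mean(np.log((N + 1) * gaps / m))

def feature_mi(y, c):
    """I(y; c) = H(y) - sum_c p_c H(y|c) for one feature and discrete labels."""
    labels, counts = np.unique(c, return_counts=True)
    return spacing_entropy(y) - sum(
        (n / len(c)) * spacing_entropy(y[c == k]) for k, n in zip(labels, counts))

def kmeans(X, K, iters=30, seed=0):
    """A few Lloyd iterations; returns the cluster assignment of each row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[assign == k].mean(0) if (assign == k).any()
                            else centers[k] for k in range(K)])
    return assign

# Toy data: feature 0 carries the class, feature 2 carries the cluster
# structure (two blobs unrelated to the class), feature 1 is pure noise.
rng = np.random.default_rng(0)
n = 6000
c = rng.integers(0, 2, n)
blob = rng.integers(0, 2, n)
X = np.column_stack([
    2.0 * c - 1.0 + 0.5 * rng.standard_normal(n),   # class-informative
    rng.standard_normal(n),                          # noise
    8.0 * blob + rng.standard_normal(n),             # class-irrelevant clusters
])

assign = kmeans(X, K=2)
# Eq. (4): total MI per feature = sum of the per-cluster MI estimates.
totals = np.zeros(X.shape[1])
for k in range(2):
    idx = assign == k
    totals += [feature_mi(X[idx, j], c[idx]) for j in range(X.shape[1])]
```

Ranking the features by `totals` then puts the class-informative feature first, while the noise and cluster-structure features receive near-zero scores.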
The EEG data were collected as part of an augmented cognition project, in which the estimated cognitive state is used to assess the mental load of the subject, in order to modify the subject's interaction with a computer system with the goal of increasing user performance. During data collection, two subjects were required to execute different levels of mental tasks, classified as high workload and low workload. EEG data were collected using a BioSemi ActiveTwo system with a 31-channel (AF3, AF4, C3, C4, CP1, CP2, CP5, CP6, Cz, F3, F4, F7, F8, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P3, P4, P7, P8, PO3, PO4, Pz) EEG cap and eye electrodes. Vertical and horizontal eye movements and blinks were recorded with electrodes below and lateral to the left eye. EEG was sampled and recorded at 256 Hz from 31 channels. The EEG signals were pre-processed to remove eye blinks using an adaptive linear filter based on the Widrow-Hoff training rule (LMS) [17]. Information from the VEOGLB ocular reference channel was used as the noise reference source for the adaptive ocular filter. DC drifts were removed using high-pass filters (0.5 Hz cutoff). A band-pass filter (between 2 Hz and 50 Hz) was also employed, as this interval is generally associated with cognitive activity. The power spectral density (PSD) of the EEG signals, estimated using the Welch method [18] with 50%-overlapping 1-second windows, was integrated over 5 frequency bands: 4-8 Hz (theta), 8-12 Hz (alpha), 12-16 Hz (low beta), 16-30 Hz (high beta), and 30-44 Hz (gamma). These band powers, sampled every 0.25 seconds, are used as the basic features for cognitive classification.

Figure 3. EEG channel ranking in terms of classification rate for two subjects, by linear ICA and local linear ICA (classification rate vs. number of best EEG channels).
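The band-power feature extraction just described can be sketched with scipy's Welch estimator. The parameter choices mirror the text (256 Hz sampling, 1-second windows, 50% overlap, the five listed bands); the helper name and the synthetic test signal are our own assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # EEG sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 12), "low_beta": (12, 16),
         "high_beta": (16, 30), "gamma": (30, 44)}

def band_powers(x, fs=FS):
    """Welch PSD (1-s windows, 50% overlap), integrated over each frequency band."""
    f, psd = welch(x, fs=fs, nperseg=fs, noverlap=fs // 2)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        out[name] = psd[mask].sum()   # 1 Hz bins here, so the sum approximates the integral
    return out

# A 10 Hz rhythm buried in noise should dominate the alpha band.
t = np.arange(0, 8, 1 / FS)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
powers = band_powers(x)
```

In the paper's pipeline, one such 5-band vector is computed per channel every 0.25 s, yielding the 31 x 5 = 155-dimensional feature vectors used below.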
The novelty in this application is that the subjects move around freely, in contrast to the typical brain-computer interface (BCI) experimental setups where the subjects are in a strictly controlled setting. The assessment of cognitive state in ambulatory subjects is particularly difficult, since the movements introduce strong artifacts irrelevant to the mental task/load. Furthermore, the features extracted from the EEG data exhibit strong nonlinearity. In this case, feature selection by local ICA becomes important due to its ability to precisely keep the useful information and eliminate the information irrelevant for classification. To test the performance of local ICA for MI estimation in feature selection, we feed the EEG data into a classification system that contains four parts: preprocessing, feature extraction and selection, classification, and postprocessing. Preprocessing filters out noise and removes the artifacts as mentioned above. Feature extraction and selection generates features from the clean EEG signal, and selects useful EEG channels (each channel contains 5 frequency

bands) using the proposed method. Considering that we have around 2500 data samples for each subject, and the feature dimension is 155 (31 EEG channels, 5 frequency bands each), we use K = 4, i.e., we segment the data into 4 groups. This is a rough approximation; however, we cannot choose a larger K because on average we only have around 600 data samples per group in 155 dimensions. For classification, the K-nearest-neighbor (KNN) classifier is utilized. The postprocessing uses the assumption that the variations in cognitive state for a given continuous task vary slowly in time. A median filter operating on a window of the most recent 2 seconds of decisions generated by the classifier is used to eliminate a portion of the erroneous decisions made by the classification system. The EEG channel selection results, evaluated by classification rate, are shown in Fig. 3. The red lines represent the classification performance with different numbers of optimal EEG channels for subject 1, while the blue lines are for subject 2. As a comparison, we also illustrate the performance using linear ICA for EEG channel ranking on both subjects. The solid lines with stars illustrate the classification results for local ICA, while the dashed lines with circles illustrate the classification results for linear ICA. From Fig. 3 we can see that the ability of local ICA to find an optimal subset is superior to that of linear ICA: for subject 1, we get better classification performance with 7 optimal EEG channels; for subject 2, with even 1 optimal EEG channel. To compare the feature ranking/selection results for linear ICA and local ICA more clearly, we list the EEG channel rankings in descending order of contribution to classification for both subjects in the Table.

Table. EEG channel ranking (descending order) in terms of contribution to classification rate for subject 1 and subject 2 with the linear ICA and local ICA methods.
Sub 1, Linear ICA: FC2, AF3, CPZ, FP1, CP5, CP1, C4, CP6, P3, CP2, F4, F3, PO4, O2, P4, O1, PZ, P8, FCZ, FC1, FC6, AF4, FC5, FZ, P7, F8, CZ, FP2, F7, PO3, OZ
Sub 1, Local ICA: FC2, AF3, CPZ, AF4, FC5, F7, CZ, O2, F3, F4, FC6, C4, F8, P3, FP2, CP6, P8, PZ, P7, FZ, FC1, OZ, PO3, FCZ, FP1, CP2, CP1, P4, CP5, PO4, O1
Sub 2, Linear ICA: FC1, CP1, CZ, O1, C4, F3, FCZ, FC2, FZ, CP2, AF3, FP1, CP6, F4, P3, CPZ, CP5, AF4, FC6, P7, PO4, OZ, PZ, PO3, P4, F8, FC5, O2, F7, FP2, P8
Sub 2, Local ICA: CP1, O1, FP1, CZ, FC1, P8, PO4, FP2, FCZ, P7, F4, P3, P4, PO3, CP6, FC6, CPZ, FC5, AF4, FZ, F3, CP5, F7, F8, AF3, CP2, C4, PZ, FC2, O2, OZ

IV. CONCLUSIONS

In this paper, we presented a local ICA approach for MI estimation in feature selection. This work extends our previous proposal of using linear ICA transformations plus sample-spacing entropy estimators for MI estimation. The local ICA approach combines a clustering algorithm of choice (K-means in this paper), the cumulant-based analytical solution for linear ICA transformations, and a computationally efficient mutual information estimator, exploiting the fact that minimization of the Bayes classification error can be approximately achieved by maximizing the mutual information between the features and the class labels. Local ICA is used to approximate nonlinear ICA piecewise linearly for data whose distribution is nonlinearly generated. This yields increased performance and accuracy compared with linear ICA transformations. In theory, nonlinear ICA can perfectly transform dependent components into independent feature vectors; however, its complexity and its requirement for large amounts of data samples limit its applications. On the other hand, local ICA is relatively simple compared with nonlinear ICA and requires fewer data samples, which is preferred in most applications.
We must mention that in order to estimate MI precisely, both nonlinear and local ICA need a large data set, while local ICA leaves much more room for a trade-off between accuracy and complexity. Experiments using synthetic and real (EEG) data demonstrate the utility, feasibility, and effectiveness of the proposed technique. In comparison with the previous linear ICA approach, it is observed (as expected) that local linear ICA yields better performance than linear ICA.

ACKNOWLEDGEMENTS

This work was supported by DARPA under contract DAAD C. The EEG data was collected at the Human-Centered Systems Lab., Honeywell, Minneapolis, Minnesota.

REFERENCES

[1] T. Lan, D. Erdogmus, A. Adami, M. Pavel, "Feature Selection by Independent Component Analysis and Mutual Information Maximization in EEG Signal Classification," accepted to IJCNN 2005, Montreal, Canada, 2005.
[2] R.M. Fano, Transmission of Information: A Statistical Theory of Communications, Wiley, New York, 1961.

[3] M.E. Hellman, J. Raviv, "Probability of Error, Equivocation and the Chernoff Bound," IEEE Transactions on Information Theory, vol. 16.
[4] K. Torkkola, "Feature Extraction by Non-Parametric Mutual Information Maximization," Journal of Machine Learning Research, vol. 3.
[5] R. Battiti, "Using Mutual Information for Selecting Features in Supervised Neural Networks Learning," IEEE Trans. Neural Networks, vol. 5, no. 4.
[6] A. Ai-ani, M. Deriche, "An Optimal Feature Selection Technique Using the Concept of Mutual Information," Proceedings of ISSPA.
[7] N. Kwak, C-H. Choi, "Input Feature Selection for Classification Problems," IEEE Transactions on Neural Networks, vol. 13, no. 1.
[8] H.H. Yang, J. Moody, "Feature Selection Based on Joint Mutual Information," in Advances in Intelligent Data Analysis and Computational Intelligent Methods and Applications.
[9] H.H. Yang, J. Moody, "Data Visualization and Feature Selection: New Algorithms for Nongaussian Data," Advances in NIPS.
[10] L. Parra, P. Sajda, "Blind Source Separation via Generalized Eigenvalue Decomposition," Journal of Machine Learning Research, vol. 4.
[11] E.G. Learned-Miller, J.W. Fisher III, "ICA Using Spacings Estimates of Entropy," Journal of Machine Learning Research, vol. 4.
[12] Deniz Erdogmus, Andre Adami, Michael Pavel, Tian Lan, Santosh Mathan, Stephen Whitlow, Michael Dorneich, "Cognitive State Estimation Based on EEG for Augmented Cognition," Proceedings of the 2nd International IEEE EMBS Conference on Neural Engineering, Arlington, Virginia, March 2005.
[13] C. Jutten, J. Karhunen, "Advances in Nonlinear Blind Source Separation," Proc. of the 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003), Nara, Japan, April 1-4, 2003.
[14] J. Karhunen, S. Malaroiu, "Locally Linear Independent Component Analysis," Proc. of the Int. Joint Conf. on Neural Networks (IJCNN'99), July 1999, Washington DC, USA.
[15] K.E. Hild II, D. Erdogmus, J.C.
Principe, "Blind Source Separation Using Renyi's Mutual Information," IEEE Signal Processing Letters, vol. 8, no. 6.
[16] A. Hyvärinen, E. Oja, "A Fast Fixed-Point Algorithm for Independent Component Analysis," Neural Computation, vol. 9, no. 7.
[17] B. Widrow and M.E. Hoff, "Adaptive Switching Circuits," in IRE WESCON Convention Record, 1960.
[18] P. Welch, "The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short Modified Periodograms," IEEE Transactions on Audio and Electroacoustics, vol. 15, no. 2, 1967.


More information

Lectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs

Lectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs Lectures - Week 10 Introuction to Orinary Differential Equations (ODES) First Orer Linear ODEs When stuying ODEs we are consiering functions of one inepenent variable, e.g., f(x), where x is the inepenent

More information

Robust Low Rank Kernel Embeddings of Multivariate Distributions

Robust Low Rank Kernel Embeddings of Multivariate Distributions Robust Low Rank Kernel Embeings of Multivariate Distributions Le Song, Bo Dai College of Computing, Georgia Institute of Technology lsong@cc.gatech.eu, boai@gatech.eu Abstract Kernel embeing of istributions

More information

Lecture 6 : Dimensionality Reduction

Lecture 6 : Dimensionality Reduction CPS290: Algorithmic Founations of Data Science February 3, 207 Lecture 6 : Dimensionality Reuction Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will consier the roblem of maing

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Keywors: multi-view learning, clustering, canonical correlation analysis Abstract Clustering ata in high-imensions is believe to be a har problem in general. A number of efficient clustering algorithms

More information

Estimation of the Maximum Domination Value in Multi-Dimensional Data Sets

Estimation of the Maximum Domination Value in Multi-Dimensional Data Sets Proceeings of the 4th East-European Conference on Avances in Databases an Information Systems ADBIS) 200 Estimation of the Maximum Domination Value in Multi-Dimensional Data Sets Eleftherios Tiakas, Apostolos.

More information

Lower bounds on Locality Sensitive Hashing

Lower bounds on Locality Sensitive Hashing Lower bouns on Locality Sensitive Hashing Rajeev Motwani Assaf Naor Rina Panigrahy Abstract Given a metric space (X, X ), c 1, r > 0, an p, q [0, 1], a istribution over mappings H : X N is calle a (r,

More information

CS9840 Learning and Computer Vision Prof. Olga Veksler. Lecture 2. Some Concepts from Computer Vision Curse of Dimensionality PCA

CS9840 Learning and Computer Vision Prof. Olga Veksler. Lecture 2. Some Concepts from Computer Vision Curse of Dimensionality PCA CS9840 Learning an Computer Vision Prof. Olga Veksler Lecture Some Concepts from Computer Vision Curse of Dimensionality PCA Some Slies are from Cornelia, Fermüller, Mubarak Shah, Gary Braski, Sebastian

More information

Expected Value of Partial Perfect Information

Expected Value of Partial Perfect Information Expecte Value of Partial Perfect Information Mike Giles 1, Takashi Goa 2, Howar Thom 3 Wei Fang 1, Zhenru Wang 1 1 Mathematical Institute, University of Oxfor 2 School of Engineering, University of Tokyo

More information

Generalization of the persistent random walk to dimensions greater than 1

Generalization of the persistent random walk to dimensions greater than 1 PHYSICAL REVIEW E VOLUME 58, NUMBER 6 DECEMBER 1998 Generalization of the persistent ranom walk to imensions greater than 1 Marián Boguñá, Josep M. Porrà, an Jaume Masoliver Departament e Física Fonamental,

More information

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments Problem F U L W D g m 3 2 s 2 0 0 0 0 2 kg 0 0 0 0 0 0 Table : Dimension matrix TMA 495 Matematisk moellering Exam Tuesay December 6, 2008 09:00 3:00 Problems an solution with aitional comments The necessary

More information

Cascaded redundancy reduction

Cascaded redundancy reduction Network: Comput. Neural Syst. 9 (1998) 73 84. Printe in the UK PII: S0954-898X(98)88342-5 Cascae reunancy reuction Virginia R e Sa an Geoffrey E Hinton Department of Computer Science, University of Toronto,

More information

Lecture 6: Generalized multivariate analysis of variance

Lecture 6: Generalized multivariate analysis of variance Lecture 6: Generalize multivariate analysis of variance Measuring association of the entire microbiome with other variables Distance matrices capture some aspects of the ata (e.g. microbiome composition,

More information

CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, and Tony Wu

CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, and Tony Wu CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, an Tony Wu Abstract Popular proucts often have thousans of reviews that contain far too much information for customers to igest. Our goal for the

More information

Monte Carlo Methods with Reduced Error

Monte Carlo Methods with Reduced Error Monte Carlo Methos with Reuce Error As has been shown, the probable error in Monte Carlo algorithms when no information about the smoothness of the function is use is Dξ r N = c N. It is important for

More information

Flexible High-Dimensional Classification Machines and Their Asymptotic Properties

Flexible High-Dimensional Classification Machines and Their Asymptotic Properties Journal of Machine Learning Research 16 (2015) 1547-1572 Submitte 1/14; Revise 9/14; Publishe 8/15 Flexible High-Dimensional Classification Machines an Their Asymptotic Properties Xingye Qiao Department

More information

arxiv: v1 [cs.lg] 22 Mar 2014

arxiv: v1 [cs.lg] 22 Mar 2014 CUR lgorithm with Incomplete Matrix Observation Rong Jin an Shenghuo Zhu Dept. of Computer Science an Engineering, Michigan State University, rongjin@msu.eu NEC Laboratories merica, Inc., zsh@nec-labs.com

More information

KNN Particle Filters for Dynamic Hybrid Bayesian Networks

KNN Particle Filters for Dynamic Hybrid Bayesian Networks KNN Particle Filters for Dynamic Hybri Bayesian Networs H. D. Chen an K. C. Chang Dept. of Systems Engineering an Operations Research George Mason University MS 4A6, 4400 University Dr. Fairfax, VA 22030

More information

Fast image compression using matrix K-L transform

Fast image compression using matrix K-L transform Fast image compression using matrix K-L transform Daoqiang Zhang, Songcan Chen * Department of Computer Science an Engineering, Naning University of Aeronautics & Astronautics, Naning 2006, P.R. China.

More information

arxiv: v2 [cond-mat.stat-mech] 11 Nov 2016

arxiv: v2 [cond-mat.stat-mech] 11 Nov 2016 Noname manuscript No. (will be inserte by the eitor) Scaling properties of the number of ranom sequential asorption iterations neee to generate saturate ranom packing arxiv:607.06668v2 [con-mat.stat-mech]

More information

Sparse Reconstruction of Systems of Ordinary Differential Equations

Sparse Reconstruction of Systems of Ordinary Differential Equations Sparse Reconstruction of Systems of Orinary Differential Equations Manuel Mai a, Mark D. Shattuck b,c, Corey S. O Hern c,a,,e, a Department of Physics, Yale University, New Haven, Connecticut 06520, USA

More information

Separation of Variables

Separation of Variables Physics 342 Lecture 1 Separation of Variables Lecture 1 Physics 342 Quantum Mechanics I Monay, January 25th, 2010 There are three basic mathematical tools we nee, an then we can begin working on the physical

More information

A Hybrid Approach for Modeling High Dimensional Medical Data

A Hybrid Approach for Modeling High Dimensional Medical Data A Hybri Approach for Moeling High Dimensional Meical Data Alok Sharma 1, Gofrey C. Onwubolu 1 1 University of the South Pacific, Fii sharma_al@usp.ac.f, onwubolu_g@usp.ac.f Abstract. his work presents

More information

Classifying Biomedical Text Abstracts based on Hierarchical Concept Structure

Classifying Biomedical Text Abstracts based on Hierarchical Concept Structure Classifying Biomeical Text Abstracts base on Hierarchical Concept Structure Rozilawati Binti Dollah an Masai Aono Abstract Classifying biomeical literature is a ifficult an challenging tas, especially

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Kamalika Chauhuri ITA, UC San Diego, 9500 Gilman Drive, La Jolla, CA Sham M. Kakae Karen Livescu Karthik Sriharan Toyota Technological Institute at Chicago, 6045 S. Kenwoo Ave., Chicago, IL kamalika@soe.ucs.eu

More information

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische

More information

Lower Bounds for the Smoothed Number of Pareto optimal Solutions

Lower Bounds for the Smoothed Number of Pareto optimal Solutions Lower Bouns for the Smoothe Number of Pareto optimal Solutions Tobias Brunsch an Heiko Röglin Department of Computer Science, University of Bonn, Germany brunsch@cs.uni-bonn.e, heiko@roeglin.org Abstract.

More information

Necessary and Sufficient Conditions for Sketched Subspace Clustering

Necessary and Sufficient Conditions for Sketched Subspace Clustering Necessary an Sufficient Conitions for Sketche Subspace Clustering Daniel Pimentel-Alarcón, Laura Balzano 2, Robert Nowak University of Wisconsin-Maison, 2 University of Michigan-Ann Arbor Abstract This

More information

HMM and IOHMM Modeling of EEG Rhythms for Asynchronous BCI Systems

HMM and IOHMM Modeling of EEG Rhythms for Asynchronous BCI Systems HMM and IOHMM Modeling of EEG Rhythms for Asynchronous BCI Systems Silvia Chiappa and Samy Bengio {chiappa,bengio}@idiap.ch IDIAP, P.O. Box 592, CH-1920 Martigny, Switzerland Abstract. We compare the use

More information

Analytic Scaling Formulas for Crossed Laser Acceleration in Vacuum

Analytic Scaling Formulas for Crossed Laser Acceleration in Vacuum October 6, 4 ARDB Note Analytic Scaling Formulas for Crosse Laser Acceleration in Vacuum Robert J. Noble Stanfor Linear Accelerator Center, Stanfor University 575 San Hill Roa, Menlo Park, California 945

More information

Hyperbolic Moment Equations Using Quadrature-Based Projection Methods

Hyperbolic Moment Equations Using Quadrature-Based Projection Methods Hyperbolic Moment Equations Using Quarature-Base Projection Methos J. Koellermeier an M. Torrilhon Department of Mathematics, RWTH Aachen University, Aachen, Germany Abstract. Kinetic equations like the

More information

Generalizing Kronecker Graphs in order to Model Searchable Networks

Generalizing Kronecker Graphs in order to Model Searchable Networks Generalizing Kronecker Graphs in orer to Moel Searchable Networks Elizabeth Boine, Babak Hassibi, Aam Wierman California Institute of Technology Pasaena, CA 925 Email: {eaboine, hassibi, aamw}@caltecheu

More information

Tutorial on Maximum Likelyhood Estimation: Parametric Density Estimation

Tutorial on Maximum Likelyhood Estimation: Parametric Density Estimation Tutorial on Maximum Likelyhoo Estimation: Parametric Density Estimation Suhir B Kylasa 03/13/2014 1 Motivation Suppose one wishes to etermine just how biase an unfair coin is. Call the probability of tossing

More information

Entanglement is not very useful for estimating multiple phases

Entanglement is not very useful for estimating multiple phases PHYSICAL REVIEW A 70, 032310 (2004) Entanglement is not very useful for estimating multiple phases Manuel A. Ballester* Department of Mathematics, University of Utrecht, Box 80010, 3508 TA Utrecht, The

More information

Table of Common Derivatives By David Abraham

Table of Common Derivatives By David Abraham Prouct an Quotient Rules: Table of Common Derivatives By Davi Abraham [ f ( g( ] = [ f ( ] g( + f ( [ g( ] f ( = g( [ f ( ] g( g( f ( [ g( ] Trigonometric Functions: sin( = cos( cos( = sin( tan( = sec

More information

Non-Linear Bayesian CBRN Source Term Estimation

Non-Linear Bayesian CBRN Source Term Estimation Non-Linear Bayesian CBRN Source Term Estimation Peter Robins Hazar Assessment, Simulation an Preiction Group Dstl Porton Down, UK. probins@stl.gov.uk Paul Thomas Hazar Assessment, Simulation an Preiction

More information

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz A note on asymptotic formulae for one-imensional network flow problems Carlos F. Daganzo an Karen R. Smilowitz (to appear in Annals of Operations Research) Abstract This note evelops asymptotic formulae

More information

Nested Saturation with Guaranteed Real Poles 1

Nested Saturation with Guaranteed Real Poles 1 Neste Saturation with Guarantee Real Poles Eric N Johnson 2 an Suresh K Kannan 3 School of Aerospace Engineering Georgia Institute of Technology, Atlanta, GA 3332 Abstract The global stabilization of asymptotically

More information

Delay Limited Capacity of Ad hoc Networks: Asymptotically Optimal Transmission and Relaying Strategy

Delay Limited Capacity of Ad hoc Networks: Asymptotically Optimal Transmission and Relaying Strategy Delay Limite Capacity of A hoc Networks: Asymptotically Optimal Transmission an Relaying Strategy Eugene Perevalov Lehigh University Bethlehem, PA 85 Email: eup2@lehigh.eu Rick Blum Lehigh University Bethlehem,

More information

Harmonic Modelling of Thyristor Bridges using a Simplified Time Domain Method

Harmonic Modelling of Thyristor Bridges using a Simplified Time Domain Method 1 Harmonic Moelling of Thyristor Briges using a Simplifie Time Domain Metho P. W. Lehn, Senior Member IEEE, an G. Ebner Abstract The paper presents time omain methos for harmonic analysis of a 6-pulse

More information

A Quantitative Analysis of Coupling for a WPT System Including Dielectric/Magnetic Materials

A Quantitative Analysis of Coupling for a WPT System Including Dielectric/Magnetic Materials Progress In Electromagnetics Research Letters, Vol. 72, 127 134, 2018 A Quantitative Analysis of Coupling for a WPT System Incluing Dielectric/Magnetic Materials Yangjun Zhang *, Tatsuya Yoshiawa, an Taahiro

More information

UNIFYING PCA AND MULTISCALE APPROACHES TO FAULT DETECTION AND ISOLATION

UNIFYING PCA AND MULTISCALE APPROACHES TO FAULT DETECTION AND ISOLATION UNIFYING AND MULISCALE APPROACHES O FAUL DEECION AND ISOLAION Seongkyu Yoon an John F. MacGregor Dept. Chemical Engineering, McMaster University, Hamilton Ontario Canaa L8S 4L7 yoons@mcmaster.ca macgreg@mcmaster.ca

More information

PD Controller for Car-Following Models Based on Real Data

PD Controller for Car-Following Models Based on Real Data PD Controller for Car-Following Moels Base on Real Data Xiaopeng Fang, Hung A. Pham an Minoru Kobayashi Department of Mechanical Engineering Iowa State University, Ames, IA 5 Hona R&D The car following

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Kamalika Chauhuri ITA, UC San Diego, 9500 Gilman Drive, La Jolla, CA Sham M. Kakae Karen Livescu Karthik Sriharan Toyota Technological Institute at Chicago, 6045 S. Kenwoo Ave., Chicago, IL kamalika@soe.ucs.eu

More information

Thermal conductivity of graded composites: Numerical simulations and an effective medium approximation

Thermal conductivity of graded composites: Numerical simulations and an effective medium approximation JOURNAL OF MATERIALS SCIENCE 34 (999)5497 5503 Thermal conuctivity of grae composites: Numerical simulations an an effective meium approximation P. M. HUI Department of Physics, The Chinese University

More information

Equilibrium in Queues Under Unknown Service Times and Service Value

Equilibrium in Queues Under Unknown Service Times and Service Value University of Pennsylvania ScholarlyCommons Finance Papers Wharton Faculty Research 1-2014 Equilibrium in Queues Uner Unknown Service Times an Service Value Laurens Debo Senthil K. Veeraraghavan University

More information

Recipes for the Linear Analysis of EEG and applications

Recipes for the Linear Analysis of EEG and applications Recipes for the Linear Analysis of EEG and applications Paul Sajda Department of Biomedical Engineering Columbia University Can we read the brain non-invasively and in real-time? decoder 1001110 if YES

More information

A POSTFILTER TO MODIFY THE MODULATION SPECTRUM IN HMM-BASED SPEECH SYNTHESIS

A POSTFILTER TO MODIFY THE MODULATION SPECTRUM IN HMM-BASED SPEECH SYNTHESIS 24 IEEE International Conference on Acoustic, Speech an Signal Processing (ICASSP A POSTFILTER TO MODIFY THE MODULATION SPECTRUM IN HMM-BASED SPEECH SYNTHESIS Shinnosuke Takamichi, Tomoki Toa, Graham Neubig,

More information

Experimental Robustness Study of a Second-Order Sliding Mode Controller

Experimental Robustness Study of a Second-Order Sliding Mode Controller Experimental Robustness Stuy of a Secon-Orer Sliing Moe Controller Anré Blom, Bram e Jager Einhoven University of Technology Department of Mechanical Engineering P.O. Box 513, 5600 MB Einhoven, The Netherlans

More information

Hybrid Fusion for Biometrics: Combining Score-level and Decision-level Fusion

Hybrid Fusion for Biometrics: Combining Score-level and Decision-level Fusion Hybri Fusion for Biometrics: Combining Score-level an Decision-level Fusion Qian Tao Raymon Velhuis Signals an Systems Group, University of Twente Postbus 217, 7500AE Enschee, the Netherlans {q.tao,r.n.j.velhuis}@ewi.utwente.nl

More information

FREQUENCY ESTIMATION BASED ON ADJACENT DFT BINS

FREQUENCY ESTIMATION BASED ON ADJACENT DFT BINS FREQUENCY ESTIMATION BASED ON ADJACENT DFT BINS Michaël Betser, Patrice Collen, France Telecom R&D 3551 Cesson-Sévigné, France first.name@francetelecom.com Gaël Richar. Telecom Paris 75634 Paris, France

More information

Hyperbolic Systems of Equations Posed on Erroneous Curved Domains

Hyperbolic Systems of Equations Posed on Erroneous Curved Domains Hyperbolic Systems of Equations Pose on Erroneous Curve Domains Jan Norström a, Samira Nikkar b a Department of Mathematics, Computational Mathematics, Linköping University, SE-58 83 Linköping, Sween (

More information

On the Surprising Behavior of Distance Metrics in High Dimensional Space

On the Surprising Behavior of Distance Metrics in High Dimensional Space On the Surprising Behavior of Distance Metrics in High Dimensional Space Charu C. Aggarwal, Alexaner Hinneburg 2, an Daniel A. Keim 2 IBM T. J. Watson Research Center Yortown Heights, NY 0598, USA. charu@watson.ibm.com

More information

IERCU. Institute of Economic Research, Chuo University 50th Anniversary Special Issues. Discussion Paper No.210

IERCU. Institute of Economic Research, Chuo University 50th Anniversary Special Issues. Discussion Paper No.210 IERCU Institute of Economic Research, Chuo University 50th Anniversary Special Issues Discussion Paper No.210 Discrete an Continuous Dynamics in Nonlinear Monopolies Akio Matsumoto Chuo University Ferenc

More information

Math Notes on differentials, the Chain Rule, gradients, directional derivative, and normal vectors

Math Notes on differentials, the Chain Rule, gradients, directional derivative, and normal vectors Math 18.02 Notes on ifferentials, the Chain Rule, graients, irectional erivative, an normal vectors Tangent plane an linear approximation We efine the partial erivatives of f( xy, ) as follows: f f( x+

More information

State observers and recursive filters in classical feedback control theory

State observers and recursive filters in classical feedback control theory State observers an recursive filters in classical feeback control theory State-feeback control example: secon-orer system Consier the riven secon-orer system q q q u x q x q x x x x Here u coul represent

More information

TWO METHODS FOR ESTIMATING OVERCOMPLETE INDEPENDENT COMPONENT BASES. Mika Inki and Aapo Hyvärinen

TWO METHODS FOR ESTIMATING OVERCOMPLETE INDEPENDENT COMPONENT BASES. Mika Inki and Aapo Hyvärinen TWO METHODS FOR ESTIMATING OVERCOMPLETE INDEPENDENT COMPONENT BASES Mika Inki and Aapo Hyvärinen Neural Networks Research Centre Helsinki University of Technology P.O. Box 54, FIN-215 HUT, Finland ABSTRACT

More information

APPROXIMATE SOLUTION FOR TRANSIENT HEAT TRANSFER IN STATIC TURBULENT HE II. B. Baudouy. CEA/Saclay, DSM/DAPNIA/STCM Gif-sur-Yvette Cedex, France

APPROXIMATE SOLUTION FOR TRANSIENT HEAT TRANSFER IN STATIC TURBULENT HE II. B. Baudouy. CEA/Saclay, DSM/DAPNIA/STCM Gif-sur-Yvette Cedex, France APPROXIMAE SOLUION FOR RANSIEN HEA RANSFER IN SAIC URBULEN HE II B. Bauouy CEA/Saclay, DSM/DAPNIA/SCM 91191 Gif-sur-Yvette Ceex, France ABSRAC Analytical solution in one imension of the heat iffusion equation

More information

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes Leaving Ranomness to Nature: -Dimensional Prouct Coes through the lens of Generalize-LDPC coes Tavor Baharav, Kannan Ramchanran Dept. of Electrical Engineering an Computer Sciences, U.C. Berkeley {tavorb,

More information

Bandwidth expansion in joint source- channel coding and twisted modulation

Bandwidth expansion in joint source- channel coding and twisted modulation Banwith expansion in joint source- channel coing an twiste moulation Pål Aners Floor Tor A Ramsta Outline. Introuction to nonlinear mappings. Banwith expansion an Twiste moulation. Some history. Optimum

More information

One Dimensional Convection: Interpolation Models for CFD

One Dimensional Convection: Interpolation Models for CFD One Dimensional Convection: Interpolation Moels for CFD ME 448/548 Notes Geral Recktenwal Portlan State University Department of Mechanical Engineering gerry@p.eu ME 448/548: D Convection-Di usion Equation

More information

Optimal Signal Detection for False Track Discrimination

Optimal Signal Detection for False Track Discrimination Optimal Signal Detection for False Track Discrimination Thomas Hanselmann Darko Mušicki Dept. of Electrical an Electronic Eng. Dept. of Electrical an Electronic Eng. The University of Melbourne The University

More information

State-Space Model for a Multi-Machine System

State-Space Model for a Multi-Machine System State-Space Moel for a Multi-Machine System These notes parallel section.4 in the text. We are ealing with classically moele machines (IEEE Type.), constant impeance loas, an a network reuce to its internal

More information

Balancing Expected and Worst-Case Utility in Contracting Models with Asymmetric Information and Pooling

Balancing Expected and Worst-Case Utility in Contracting Models with Asymmetric Information and Pooling Balancing Expecte an Worst-Case Utility in Contracting Moels with Asymmetric Information an Pooling R.B.O. erkkamp & W. van en Heuvel & A.P.M. Wagelmans Econometric Institute Report EI2018-01 9th January

More information

TIME-DELAY ESTIMATION USING FARROW-BASED FRACTIONAL-DELAY FIR FILTERS: FILTER APPROXIMATION VS. ESTIMATION ERRORS

TIME-DELAY ESTIMATION USING FARROW-BASED FRACTIONAL-DELAY FIR FILTERS: FILTER APPROXIMATION VS. ESTIMATION ERRORS TIME-DEAY ESTIMATION USING FARROW-BASED FRACTIONA-DEAY FIR FITERS: FITER APPROXIMATION VS. ESTIMATION ERRORS Mattias Olsson, Håkan Johansson, an Per öwenborg Div. of Electronic Systems, Dept. of Electrical

More information

6. Friction and viscosity in gasses

6. Friction and viscosity in gasses IR2 6. Friction an viscosity in gasses 6.1 Introuction Similar to fluis, also for laminar flowing gases Newtons s friction law hols true (see experiment IR1). Using Newton s law the viscosity of air uner

More information

New Bounds for Distributed Storage Systems with Secure Repair

New Bounds for Distributed Storage Systems with Secure Repair New Bouns for Distribute Storage Systems with Secure Repair Ravi Tanon 1 an Soheil Mohajer 1 Discovery Analytics Center & Department of Computer Science, Virginia Tech, Blacksburg, VA Department of Electrical

More information

Optimal Cooperative Spectrum Sensing in Cognitive Sensor Networks

Optimal Cooperative Spectrum Sensing in Cognitive Sensor Networks Optimal Cooperative Spectrum Sensing in Cognitive Sensor Networks Hai Ngoc Pham, an Zhang, Paal E. Engelsta,,3, Tor Skeie,, Frank Eliassen, Department of Informatics, University of Oslo, Norway Simula

More information

7.1 Support Vector Machine

7.1 Support Vector Machine 67577 Intro. to Machine Learning Fall semester, 006/7 Lecture 7: Support Vector Machines an Kernel Functions II Lecturer: Amnon Shashua Scribe: Amnon Shashua 7. Support Vector Machine We return now to

More information

Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification

Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection an System Ientification Borhan M Sananaji, Tyrone L Vincent, an Michael B Wakin Abstract In this paper,

More information

arxiv: v4 [stat.ml] 21 Dec 2016

arxiv: v4 [stat.ml] 21 Dec 2016 Online Active Linear Regression via Thresholing Carlos Riquelme Stanfor University riel@stanfor.eu Ramesh Johari Stanfor University rjohari@stanfor.eu Baosen Zhang University of Washington zhangbao@uw.eu

More information