Combining Classifiers and Learning Mixture-of-Experts

Lei Xu, Chinese University of Hong Kong, Hong Kong & Peking University, China
Shun-ichi Amari, Brain Science Institute, Japan

INTRODUCTION

Expert combination is a classic strategy that has been widely used in various problem solving tasks. A team of individuals with diverse and complementary skills tackles a task jointly, such that a performance better than any single individual can make is achieved via integrating the strengths of individuals. Starting from the late 1980s in the handwritten character recognition literature, studies have been made on combining multiple classifiers. Also, from the early 1990s in the fields of neural networks and machine learning, efforts have been made under the names of ensemble learning and mixture of experts on how to learn jointly a mixture of experts (parametric models) and a combining strategy for integrating them in an optimal sense.

The article aims at a general sketch of two streams of studies, not only with a re-elaboration of essential tasks, basic ingredients, and typical combining rules, but also with a general combination framework (especially one concise and more useful one-parameter modulated special case called α-integration) suggested to unify a number of typical classifier combination rules and several mixture based learning models, as well as the max rule and min rule used in the literature on fuzzy systems.

BACKGROUND

Both streams of studies are featured by two periods of development. The first period is roughly from the late 1980s to the early 1990s. In the handwritten character recognition literature, various classifiers have been developed from different methodologies and different features, which motivated studies on combining multiple classifiers for a better performance. A systematic effort on the early stage of studies was made in (Xu, Krzyzak & Suen, 1992), with an attempt at setting up a general framework for classifier combination. As re-elaborated in Tab. 1,
not only were two essential tasks identified and a framework of three-level combination presented for the second task to cope with different types of classifier output information, but several rules were also investigated for two of the three levels, especially with the Bayes voting rule, product rule, and Dempster-Shafer rule proposed. Subsequently, the remaining one (i.e., the rank level) was soon studied in (Ho, Hull & Srihari, 1994) via Borda count.

Interestingly and complementarily, almost in the same period the first task happened to be the focus of studies in the neural networks learning literature. Encountering the problems that there are different choices for the same type of neural net by varying its scale (e.g., the number of hidden units in a three-layer net), and different local optimal results on the same neural net due to different initializations, studies have been made on how to train an ensemble of diverse and complementary networks via cross-validation partitioning, correlation reduction pruning, performance guided re-sampling, etc., such that the resulting combination produces a better generalization performance (Hansen & Salamon, 1990; Xu, Krzyzak & Suen, 1991; Wolpert, 1992; Baxt, 1992; Breiman, 1992 & 1994; Drucker et al., 1994). In addition to classification, this stream also handles function regression via integrating individual estimators by a linear combination (Perrone & Cooper, 1993). Furthermore, this stream progresses to consider the two tasks in Tab. 1 jointly with the help of the mixture-of-experts (ME) models (Jacobs et al., 1991; Jordan & Jacobs, 1994; Xu & Jordan, 1993; Xu, Jordan & Hinton, 1994), which can learn either or both of the combining mechanism and the individual experts in a maximum likelihood sense. The two streams of studies in the first period jointly set up a landscape of this emerging research area, together

Copyright 2009, IGI Global; distributing in print or electronic forms without written permission of IGI Global is prohibited.

Table 1. Essential tasks and their implementations

with a number of typical topics or directions. Thereafter, further studies have been conducted on each of these typical directions. First, theoretical analyses have been made for deeper insights and improved performance. For example, convergence analyses of the EM algorithm for mixture based learning are conducted in (Jordan & Xu, 1995; Xu & Jordan, 1996). In Tumer & Ghosh (1996), the additive errors of posteriori probabilities by classifiers or experts are considered, with the variances and correlations of these errors investigated for improving the performance of a sum based combination. In Kittler et al. (1998), the effect of these errors on the sensitivity of the sum rule vs the product rule is further investigated, with a conclusion that summation is much preferred. Also, a theoretical framework is suggested for taking several combining rules as special cases (Kittler, 1998), being unaware that this framework is actually the mixture-of-experts model that was proposed firstly for combining multiple function regressions in (Jacobs et al., 1991) and then for combining multiple classifiers in (Xu & Jordan, 1993). In addition, another theoretical study is made on six classifier fusion strategies in (Kuncheva, 2002). Second, there are further studies on the Dempster-Shafer rule (Al-Ani, 2002) and other combining methods, such as rank based and boosting based ones, as well as local accuracy estimates (Woods, Kegelmeyer & Bowyer, 1997). Third, there are a large number of applications. Due to space limits, details are referred to Ranawana & Palade (2006) and Sharkey & Sharkey (1999).

A GENERAL ARCHITECTURE, TWO TASKS, AND THREE INGREDIENTS

We consider a general architecture shown in Fig. 1. There are experts {e_j} with each e_j(x) as either a classifier or an estimator. As shown in Tab. 2, a classifier outputs one of three types of information, on which we have three levels of combination.
The first two can be regarded as special cases of the third one, which outputs a vector of measurements. A typical example is [p(1|x), ..., p(m|x)]^T, with each p(j|x) >= 0 expressing a posteriori probability that x is classified to the j-th class. Also, p(j|x) can be further extended to p_j(y|x), which describes a distribution for a regression x -> y in R^m. In Figure 1, there is also a gating net that generates signals {a_j(x)} to modulate the experts by a combining mechanism M(x).
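As an illustration of measurement-level combination under this architecture, the following sketch (Python, with made-up numbers that are not from the article) represents each expert's output as a posterior vector [p_j(1|x), ..., p_j(m|x)] and lets the gating signals a_j(x) modulate them linearly through M(x):

```python
# Measurement-level outputs: each expert j returns a posterior vector
# [p_j(1|x), ..., p_j(m|x)], and a gating net supplies weights a_j(x).
def combine(posteriors, gate):
    """Linear combining mechanism M(x) = sum_j a_j(x) * p_j(.|x)."""
    m = len(posteriors[0])
    assert abs(sum(gate) - 1.0) < 1e-9        # gating weights sum to one
    return [sum(a * p[c] for a, p in zip(gate, posteriors)) for c in range(m)]

# Three experts, three classes (illustrative values only)
posteriors = [[0.7, 0.2, 0.1],
              [0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2]]
gate = [0.5, 0.3, 0.2]               # a_j(x) produced by the gating net for this x
M = combine(posteriors, gate)        # combined posterior vector over the m classes
label = max(range(len(M)), key=M.__getitem__)  # decide by the largest entry
```

Since each expert outputs a proper distribution and the gate is a convex weighting, the combined vector M is again a distribution over the class labels.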

Figure 1. A general architecture for expert combination

Table 2. Three levels of combination

Based on this architecture, the two essential tasks of expert combination could still be quoted from (Xu, Krzyzak & Suen, 1992), with a slight modification on Task 1 as shown in Tab. 1: the phrase "for a specific application" should be deleted, in consideration of the previously introduced studies (Hansen & Salamon, 1990; Xu, Krzyzak & Suen, 1991; Wolpert, 1992; Baxt, 1992; Breiman, 1992 & 1994; Drucker et al., 1994; Tumer & Ghosh, 1996).

Insights can be obtained by considering three basic ingredients of the two streams of studies, as shown in Fig. 2. Combinatorial choices of different ingredients lead to different specific models for expert combination, and differences in the roles of each ingredient highlight the different focuses of the two streams. In the stream of neural networks and machine learning, provided with a structure for each e_j(x), a gating structure, and a combining structure M(x), all the remaining unknowns are determined under the guidance of a learning theory in terms of minimizing an error cost. Such a minimization is implemented via an optimizing procedure by a learning algorithm based on a training set {x_t, y_t}_{t=1}^N that teaches a target y_t in R^m for each mapping x_t -> y_t. In the stream of combining classifiers, all {p_j} are known, without unknowns left to be specified. Also, M is designed according to certain heuristics or principles, with or without the help of a training set, and studies are mainly placed on developing and analyzing different combining mechanisms, which we will further discuss subsequently. The final combining performance is empirically evaluated by the misclassification rate, but there is no effort yet on developing a theory for one M that minimizes the misclassification rate or a cost function, though there are some investigations on how estimated posteriori probabilities can be improved by a sum rule and on the error sensitivity of estimated posteriori probabilities (Tumer & Ghosh, 1996; Kittler et al., 1998). This under-explored direction also motivates future studies.

f-COMBINATION

The arithmetic, geometric, and harmonic means of nonnegative numbers b_j >= 0 have been extended into one called the f-mean: m_f = f^{-1}(Σ_j a_j f(b_j)), where f(r) is a monotonic scalar function, a_j >= 0, and Σ_j a_j = 1 (Hardy, Littlewood & Polya, 1952). We can further generalize this f-mean to the general architecture shown in Fig. 1, resulting in the following f-combination:

M = f^{-1}(Σ_j a_j(x) f(p_j)), or f(M) = Σ_j a_j(x) f(p_j), where p_j = p_j(y|x), a_j(x) >= 0, Σ_j a_j(x) = 1.

In the following, we discuss using it as a general framework to unify not only typical classifier combining rules but also mixture-of-experts learning and RBF net learning, as shown in Tab. 3. We observe the three columns for three special cases of f(r). For the first column, the case f(r) = r, we return to the ME model M = Σ_j a_j(x) p_j, which was proposed firstly for combining multiple regressions in (Jacobs et al., 1991) and then for combining classifiers in (Xu & Jordan, 1993). For different special cases of a_j(x), we are led to a number of existing typical examples. As already pointed out in (Kittler et al., 1998), the first three rows are four typical classifier combining rules (the 2nd row directly applies to the min-rule too). The next three rows are three types of ME learning models, and a rather systematic summary

Figure 2. Three basic ingredients

Table 3. Typical examples (including several existing rules and potential topics)

is referred to Sec. 4.3 in (Xu, 2001). The last row is a recent development of the 3rd row.

The 2nd row of the 2nd column is the geometric mean M = Π_j p_j^{a_j(x)}, which is equal to the product rule (Xu, Krzyzak and Suen, 1992; Kittler et al., 1998; Hinton, 2002) if each a priori is equal, i.e., a_j(x) = 1/m. Generally, if a_j(x) ≠ 1/m, there is a difference by a scaling factor. The product rule works in a probability theory sense under the condition that the classifiers are mutually independent. In (Kittler et al., 1998), attempting to discuss a number of rules under a unified system, the sum rule is approximately derived from the product rule under an extra condition that is usually difficult to satisfy. Actually, such an imposed link between the product rule and the sum rule is unnecessary: the sum M = Σ_j a_j(x) p_j(y|x) is just a marginal probability p(y|x), which is already in the framework of probability theory. That is, both the sum rule and the product rule already coexist in the framework of probability theory. On the other hand, it can be observed that the sum ln M = Σ_j a_j(x) ln p_j is dominated by a p_j if it is close to 0. That is, this combination expects that every expert should cast enough votes; otherwise the combined votes will still be very low, just because there is only one expert that casts a very low vote. In other words, this combination can

be regarded as a relaxed logical AND that is beyond the framework of probability theory when a_j(x) ≠ 1/m. However, staying within the framework of probability theory does not mean that it is better, not only because it requires that the classifiers be mutually independent but also because there lacks a theoretical analysis of both rules in the sense of classification errors, for which further investigations are needed.

In Tab. 3, the 2nd row of the third column is the harmonic mean. It can be observed that the problem of combining the degrees of support is changed into a problem of combining the degrees of disagreement. This is interesting. Unfortunately, efforts of this kind are seldom found yet. Exceptionally, there are also examples that can not be included in the f-combination, such as the Dempster-Shafer rule (Xu, Krzyzak and Suen, 1992; Al-Ani, 2002) and the rank based rule (Ho, Hull & Srihari, 1994).

α-INTEGRATION

After completing the above f-combination, the first author became aware of the work by (Hardy, Littlewood & Polya, 1952) through one coming paper (Amari, 2007) that studies a more concise and more useful one-parameter modulated special case called α-integration, with the help of a concrete mathematical foundation from an information geometry perspective. Imposing an additional but reasonable nature that the f-mean should be linear scale-free, i.e., c m_f = f^{-1}(Σ_j a_j f(c b_j)) for any scale c, the alternative choices of f(r) reduce to only the following one:

f_α(r) = r^{0.5(1-α)} for α ≠ 1, and f_α(r) = ln r for α = 1.

It is not difficult to check that f_{-1}(r) = r, f_1(r) = ln r, and f_3(r) = 1/r. Thus, the discussions on the examples in Tab. 3 are applicable to this f_α(r). Moreover, the first row in Tab. 3 holds when α -> -∞ and α -> +∞ for whatever gating net, which thus includes the two typical operators of the fuzzy system as special cases too. Also, the family is systematically modulated by a parameter -∞ < α < +∞, which provides not only a spectrum from the most optimistic integration to the most pessimistic integration as α varies from -∞ to +∞, but also a possibility of adapting α for the best combining performance.
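The f-combination and its α-parametrized special case can be checked numerically. The sketch below (an illustrative Python fragment; the values and tolerances are assumptions, not from the article) implements the α-mean with f_α(r) = r^{0.5(1-α)} for α ≠ 1 and f_α(r) = ln r for α = 1, recovering the arithmetic (α = -1), geometric (α = 1), and harmonic (α = 3) means, and approaching the max and min operators as α goes to -∞ and +∞:

```python
import math

def alpha_mean(p, a, alpha):
    """alpha-integration of positive values p with weights a (summing to 1)."""
    if alpha == 1:                       # f_1(r) = ln r  -> geometric mean
        return math.exp(sum(w * math.log(v) for w, v in zip(a, p)))
    e = (1 - alpha) / 2                  # f_alpha(r) = r^{(1-alpha)/2}
    return sum(w * v**e for w, v in zip(a, p)) ** (1 / e)

p, a = [0.9, 0.6, 0.3], [1/3, 1/3, 1/3]   # expert outputs, uniform gating
arith = alpha_mean(p, a, -1)     # f_{-1}(r) = r   -> arithmetic mean (sum rule)
geom  = alpha_mean(p, a,  1)     # f_1(r) = ln r   -> geometric mean (product rule)
harm  = alpha_mean(p, a,  3)     # f_3(r) = 1/r    -> harmonic mean
near_max = alpha_mean(p, a, -60)   # alpha -> -inf: most optimistic (max)
near_min = alpha_mean(p, a,  60)   # alpha -> +inf: most pessimistic (min)
```

Sweeping α thus moves a single combining mechanism continuously between the familiar rules and the fuzzy max/min operators, which is the spectrum described above.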
Furthermore, Amari (2007) also provides a theoretical justification that α-integration is optimal in the sense of minimizing a weighted average of α-divergence. Moreover, it provides a potential road for studies on combining classifiers and learning mixture models from the perspective of information geometry.

FUTURE TRENDS

Further studies are expected along several directions as follows:

- Empirical and analytical comparisons of performance are needed for those unexplored or less explored items in Tab. 3.
- Is there a best structure for a_j(x)? Comparisons need to be made on different types of a_j(x), especially the ones of the MUV type in the last row and the ME types from the 4th to the 7th rows.
- Is it necessary to relax the constraint a_j(x) > 0, Σ_j a_j(x) = 1, for all x (e.g., removing the non-negative requirement), and to relax the distribution p_j to other types of functions?
- How can the weights a_j(x) be learned under a generalization error bound?
- As discussed in Fig. 2, classifier combination and mixture based learning are two aspects with different features. How can each part be made to take its best role in an integrated system?

CONCLUSION

Updating the purpose of (Xu, Krzyzak & Suen, 1992), the article provides not only a general sketch of studies

on combining classifiers and learning mixture models, but also a general combination framework to unify a number of classifier combination rules and mixture based learning models, as well as a number of directions for further investigation.

ACKNOWLEDGMENT

The work is supported by the Chang Jiang Scholars Program of the Chinese Ministry of Education for a Chang Jiang Chair Professorship at Peking University.

REFERENCES

Al-Ani, A. (2002). A new technique for combining multiple classifiers using the Dempster-Shafer theory of evidence. Journal of Artificial Intelligence Research.

Amari, S. (2007). Integration of stochastic models by minimizing α-divergence. Neural Computation, 19(10).

Baxt, W. G. (1992). Improving the accuracy of an artificial neural network using multiple differently trained networks. Neural Computation.

Breiman, L. (1992). Stacked regression. Department of Statistics, Berkeley, TR-367.

Breiman, L. (1994). Bagging predictors. Department of Statistics, Berkeley, TR-421.

Drucker, H., Cortes, C., Jackel, L. D., LeCun, Y., & Vapnik, V. (1994). Boosting and other ensemble methods. Neural Computation.

Hardy, G. H., Littlewood, J. E., & Polya, G. (1952). Inequalities (2nd ed.). Cambridge University Press.

Hansen, L. K., & Salamon, P. (1990). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10).

Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation.

Ho, T. K., Hull, J. J., & Srihari, S. N. (1994). Decision combination in multiple classifier systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(1).

Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation.

Jordan, M. I., & Jacobs, R. A. (1994). Hierarchical mixtures of experts and the EM algorithm. Neural Computation.

Jordan, M. I., & Xu, L. (1995). Convergence results for the EM approach to mixtures of experts architectures. Neural Networks.

Kittler, J., Hatef, M., Duin, R. P. W., & Matas, J. (1998). On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3).

Kittler, J. (1998). Combining classifiers: A theoretical framework. Pattern Analysis and Applications.

Kuncheva, L. I. (2002). A theoretical study on six classifier fusion strategies. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2).

Lee, C., Greiner, R., & Wang, S. (2006). Using query-specific variance estimates to combine Bayesian classifiers. Proc. of the 23rd International Conference on Machine Learning (ICML), Pittsburgh, June.

Perrone, M. P., & Cooper, L. N. (1993). When networks disagree: Ensemble methods for neural networks. In R. J. Mammone (Ed.), Neural Networks for Speech and Image Processing. Chapman-Hall.

Ranawana, R., & Palade, V. (2006). Multi-classifier systems: Review and a roadmap for developers. International Journal of Hybrid Intelligent Systems, 3(1).

Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton University Press.

Sharkey, A. J., & Sharkey, N. E. (1997). Combining diverse neural nets. Knowledge Engineering Review, 12(3).

Sharkey, A. J., & Sharkey, N. E. (1999). Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems. Springer-Verlag New York Inc.

Tumer, K., & Ghosh, J. (1996). Error correlation and error reduction in ensemble classifiers. Connection Science, 8(3/4).

Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5(2).

Woods, K., Kegelmeyer, W. P., & Bowyer, K. (1997). Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4).

Xu, L. (2007). A unified perspective and new results on RHT computing, mixture based learning, and multi-learner based problem solving. Pattern Recognition.

Xu, L. (2001). Best harmony, unified RPCL and automated model selection for unsupervised and supervised learning on Gaussian mixtures, three-layer nets and ME-RBF-SVM models. International Journal of Neural Systems, 11(1).

Xu, L. (1998). RBF nets, mixture experts, and Bayesian Ying-Yang learning. Neurocomputing.

Xu, L., & Jordan, M. I. (1996). On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8(1).

Xu, L., Jordan, M. I., & Hinton, G. E. (1995). An alternative model for mixtures of experts. In Cowan, Tesauro, & Alspector (Eds.), Advances in Neural Information Processing Systems 7. MIT Press.

Xu, L., Jordan, M. I., & Hinton, G. E. (1994). A modified gating network for the mixtures of experts architecture. Proc. WCNN94, San Diego, CA, (2).

Xu, L., & Jordan, M. I. (1993). EM learning on a generalized finite mixture model for combining multiple classifiers. Proc. of WCNN93, (IV).

Xu, L., Krzyzak, A., & Suen, C. Y. (1992). Several methods of combining multiple classifiers and their applications in handwritten character recognition. IEEE Transactions on Systems, Man and Cybernetics.

Xu, L., Krzyzak, A., & Suen, C. Y. (1991). Associative switch for combining multiple classifiers. Proc. of IJCNN91, July 8-12, Seattle, WA, (I).

Xu, L., Oja, E., & Kultanen, P. (1990). A new curve detection method: Randomized Hough transform (RHT). Pattern Recognition Letters.

KEY TERMS

Conditional Distribution p(y|x): Describes the uncertainty that an input x is mapped into an output y that simply takes one of several labels. In this case, x is classified into the class label y with a probability p(y|x).
Also, y can be a real-valued vector, for which x is mapped into y according to a density distribution p(y|x).

Classifier Combination: Given a number of classifiers, each classifying a same input x into a class label, the labels may be different for different classifiers. We seek a rule M(x) that combines these classifiers into a new one that performs better than any one of them.

Sum Rule (Bayes Voting): A classifier classifying x to a label y can be regarded as casting one vote to this label, and the simplest combination is to count the votes received by every candidate label. The j-th classifier classifying x to a label y with a probability p_j(y|x) means that one vote is divided among different candidates in fractions. We can sum up Σ_j a_j(x) p_j(y|x) to count the votes on a candidate label y, which is called Bayes voting since p(y|x) is usually called the Bayes posteriori probability.

Product Rule: When the classifiers {e_j} are mutually independent, a combination is given by

p(y | x, e_1, ..., e_k) = p_1(y|x) p_2(y|x) ... p_k(y|x) / p(y)^{k-1},

or concisely p(y|x) ∝ p(y)^{1-k} Π_j p_j(y|x), which is also called the product rule.

Mixture of Experts: Each expert is described by a conditional distribution p_j(y|x), either with y taking one of several labels for a classification problem or with y being a real-valued vector for a regression problem. A combination of experts is given by M = Σ_j a_j(x) p_j(y|x), with a_j(x) > 0 and Σ_j a_j(x) = 1, which is called a mixture-of-experts model. Particularly, for y being a real-valued vector, its regression form is E(y|x) = ∫ y p(y|x) dy = Σ_j a_j(x) ∫ y p_j(y|x) dy.

f-Mean: Given a set of non-negative numbers b_j >= 0, the f-mean is given by m_f = f^{-1}(Σ_j a_j f(b_j)), where f(r) is a monotonic scalar function, a_j >= 0, and Σ_j a_j = 1. Particularly, one most interesting special case is that f(r) satisfies c m_f = f^{-1}(Σ_j a_j f(c b_j)) for any scale c, which is called an α-mean.

Performance Evaluation Approach: It is usually adopted in the literature on classifier combination, with a flow that considers a set of classifiers {e_j}, designs a combining mechanism M(x) according to certain principles, and evaluates the performance of the combination empirically via misclassification rates with the help of samples with known correct labels.

Error-Reduction Approach: It is usually adopted in the literature on mixture based learning, where what needs to be pre-designed is the structures of the classifiers or experts as well as the combining structure M(x), with unknown parameters. A cost or error measure is evaluated via a set of training samples and then minimized through learning all the unknown parameters.
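The Sum Rule and Product Rule entries can be put side by side in code. The sketch below (illustrative Python; the expert outputs are made-up numbers) normalizes the concise product-rule form p(y|x) ∝ p(y)^{1-k} Π_j p_j(y|x) and shows the "relaxed logical AND" behavior discussed earlier: one expert assigning a near-zero probability vetoes a label under the product rule, while the sum rule merely averages the votes.

```python
import math

def sum_rule(posteriors, gate):
    """Bayes voting: fractional votes sum_j a_j(x) p_j(y|x) per label y."""
    return [sum(a * p[y] for a, p in zip(gate, posteriors))
            for y in range(len(posteriors[0]))]

def product_rule(posteriors, priors):
    """p(y|x) proportional to p(y)^{1-k} * prod_j p_j(y|x),
    valid when the classifiers are mutually independent."""
    k = len(posteriors)
    scores = [priors[y] ** (-(k - 1)) * math.prod(p[y] for p in posteriors)
              for y in range(len(priors))]
    z = sum(scores)                      # normalize to a distribution
    return [s / z for s in scores]

posteriors = [[0.8, 0.2], [0.7, 0.3], [0.05, 0.95]]   # third expert vetoes label 0
gate = [1/3, 1/3, 1/3]
priors = [0.5, 0.5]
votes = sum_rule(posteriors, gate)        # tolerant: label 0 still wins, narrowly
probs = product_rule(posteriors, priors)  # one low p_j(y|x) dominates: label 1 wins
```

With the same three experts, the two rules disagree on the decision, which is exactly the sum-vs-product sensitivity analyzed in (Tumer & Ghosh, 1996; Kittler et al., 1998).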


More information

A Rough Sets Based Classifier for Induction Motors Fault Diagnosis

A Rough Sets Based Classifier for Induction Motors Fault Diagnosis A ough Sets Based Classiier or Induction Motors Fault Diagnosis E. L. BONALDI, L. E. BOGES DA SILVA, G. LAMBET-TOES AND L. E. L. OLIVEIA Electronic Engineering Department Universidade Federal de Itajubá

More information

Overriding the Experts: A Stacking Method For Combining Marginal Classifiers

Overriding the Experts: A Stacking Method For Combining Marginal Classifiers From: FLAIRS-00 Proceedings. Copyright 2000, AAAI (www.aaai.org). All rights reserved. Overriding the Experts: A Stacking ethod For Combining arginal Classifiers ark D. Happel and Peter ock Department

More information

Improvement of Sparse Computation Application in Power System Short Circuit Study

Improvement of Sparse Computation Application in Power System Short Circuit Study Volume 44, Number 1, 2003 3 Improvement o Sparse Computation Application in Power System Short Circuit Study A. MEGA *, M. BELKACEMI * and J.M. KAUFFMANN ** * Research Laboratory LEB, L2ES Department o

More information

Basic Principles of Unsupervised and Unsupervised

Basic Principles of Unsupervised and Unsupervised Basic Principles of Unsupervised and Unsupervised Learning Toward Deep Learning Shun ichi Amari (RIKEN Brain Science Institute) collaborators: R. Karakida, M. Okada (U. Tokyo) Deep Learning Self Organization

More information

Complexity and Algorithms for Two-Stage Flexible Flowshop Scheduling with Availability Constraints

Complexity and Algorithms for Two-Stage Flexible Flowshop Scheduling with Availability Constraints Complexity and Algorithms or Two-Stage Flexible Flowshop Scheduling with Availability Constraints Jinxing Xie, Xijun Wang Department o Mathematical Sciences, Tsinghua University, Beijing 100084, China

More information

Heterogeneous mixture-of-experts for fusion of locally valid knowledge-based submodels

Heterogeneous mixture-of-experts for fusion of locally valid knowledge-based submodels ESANN'29 proceedings, European Symposium on Artificial Neural Networks - Advances in Computational Intelligence and Learning. Bruges Belgium), 22-24 April 29, d-side publi., ISBN 2-9337-9-9. Heterogeneous

More information

On a Closed Formula for the Derivatives of e f(x) and Related Financial Applications

On a Closed Formula for the Derivatives of e f(x) and Related Financial Applications International Mathematical Forum, 4, 9, no. 9, 41-47 On a Closed Formula or the Derivatives o e x) and Related Financial Applications Konstantinos Draais 1 UCD CASL, University College Dublin, Ireland

More information

Midterm Review CS 6375: Machine Learning. Vibhav Gogate The University of Texas at Dallas

Midterm Review CS 6375: Machine Learning. Vibhav Gogate The University of Texas at Dallas Midterm Review CS 6375: Machine Learning Vibhav Gogate The University of Texas at Dallas Machine Learning Supervised Learning Unsupervised Learning Reinforcement Learning Parametric Y Continuous Non-parametric

More information

Kalman filtering based probabilistic nowcasting of object oriented tracked convective storms

Kalman filtering based probabilistic nowcasting of object oriented tracked convective storms Kalman iltering based probabilistic nowcasting o object oriented traced convective storms Pea J. Rossi,3, V. Chandrasear,2, Vesa Hasu 3 Finnish Meteorological Institute, Finland, Eri Palménin Auio, pea.rossi@mi.i

More information

CLASSIFICATION OF MULTIPLE ANNOTATOR DATA USING VARIATIONAL GAUSSIAN PROCESS INFERENCE

CLASSIFICATION OF MULTIPLE ANNOTATOR DATA USING VARIATIONAL GAUSSIAN PROCESS INFERENCE 26 24th European Signal Processing Conerence EUSIPCO CLASSIFICATION OF MULTIPLE ANNOTATOR DATA USING VARIATIONAL GAUSSIAN PROCESS INFERENCE Emre Besler, Pablo Ruiz 2, Raael Molina 2, Aggelos K. Katsaggelos

More information

Fast Inference and Learning for Modeling Documents with a Deep Boltzmann Machine

Fast Inference and Learning for Modeling Documents with a Deep Boltzmann Machine Fast Inference and Learning for Modeling Documents with a Deep Boltzmann Machine Nitish Srivastava nitish@cs.toronto.edu Ruslan Salahutdinov rsalahu@cs.toronto.edu Geoffrey Hinton hinton@cs.toronto.edu

More information

COMP 408/508. Computer Vision Fall 2017 PCA for Recognition

COMP 408/508. Computer Vision Fall 2017 PCA for Recognition COMP 408/508 Computer Vision Fall 07 PCA or Recognition Recall: Color Gradient by PCA v λ ( G G, ) x x x R R v, v : eigenvectors o D D with v ^v (, ) x x λ, λ : eigenvalues o D D with λ >λ v λ ( B B, )

More information

Least-Squares Spectral Analysis Theory Summary

Least-Squares Spectral Analysis Theory Summary Least-Squares Spectral Analysis Theory Summary Reerence: Mtamakaya, J. D. (2012). Assessment o Atmospheric Pressure Loading on the International GNSS REPRO1 Solutions Periodic Signatures. Ph.D. dissertation,

More information

Learning Algorithms for RBF Functions and Subspace Based Functions

Learning Algorithms for RBF Functions and Subspace Based Functions 60 Chapter 3 Learning Algorithms for RBF Functions and Subspace Based Functions Lei Xu Chinese University of Hong Kong and Beiing University, PR China ABSTRACT Among extensive studies on radial basis function

More information

Numerical Methods - Lecture 2. Numerical Methods. Lecture 2. Analysis of errors in numerical methods

Numerical Methods - Lecture 2. Numerical Methods. Lecture 2. Analysis of errors in numerical methods Numerical Methods - Lecture 1 Numerical Methods Lecture. Analysis o errors in numerical methods Numerical Methods - Lecture Why represent numbers in loating point ormat? Eample 1. How a number 56.78 can

More information

Part I: Thin Converging Lens

Part I: Thin Converging Lens Laboratory 1 PHY431 Fall 011 Part I: Thin Converging Lens This eperiment is a classic eercise in geometric optics. The goal is to measure the radius o curvature and ocal length o a single converging lens

More information

Two Further Gradient BYY Learning Rules for Gaussian Mixture with Automated Model Selection

Two Further Gradient BYY Learning Rules for Gaussian Mixture with Automated Model Selection Two Further Gradient BYY Learning Rules for Gaussian Mixture with Automated Model Selection Jinwen Ma, Bin Gao, Yang Wang, and Qiansheng Cheng Department of Information Science, School of Mathematical

More information

Detecting Doubly Compressed JPEG Images by Factor Histogram

Detecting Doubly Compressed JPEG Images by Factor Histogram APSIPA ASC 2 Xi an Detecting Doubly Compressed JPEG Images by Factor Histogram Jianquan Yang a, Guopu Zhu a, and Jiwu Huang b a Shenzhen Institutes o Advanced Technology, Chinese Academy o Sciences, Shenzhen,

More information

Solving Multi-Mode Time-Cost-Quality Trade-off Problem in Uncertainty Condition Using a Novel Genetic Algorithm

Solving Multi-Mode Time-Cost-Quality Trade-off Problem in Uncertainty Condition Using a Novel Genetic Algorithm International Journal o Management and Fuzzy Systems 2017; 3(3): 32-40 http://www.sciencepublishinggroup.com/j/ijms doi: 10.11648/j.ijms.20170303.11 Solving Multi-Mode Time-Cost-Quality Trade-o Problem

More information

Bayesian Technique for Reducing Uncertainty in Fatigue Failure Model

Bayesian Technique for Reducing Uncertainty in Fatigue Failure Model 9IDM- Bayesian Technique or Reducing Uncertainty in Fatigue Failure Model Sriram Pattabhiraman and Nam H. Kim University o Florida, Gainesville, FL, 36 Copyright 8 SAE International ABSTRACT In this paper,

More information

Margin Maximizing Loss Functions

Margin Maximizing Loss Functions Margin Maximizing Loss Functions Saharon Rosset, Ji Zhu and Trevor Hastie Department of Statistics Stanford University Stanford, CA, 94305 saharon, jzhu, hastie@stat.stanford.edu Abstract Margin maximizing

More information

Bayesian ensemble learning of generative models

Bayesian ensemble learning of generative models Chapter Bayesian ensemble learning of generative models Harri Valpola, Antti Honkela, Juha Karhunen, Tapani Raiko, Xavier Giannakopoulos, Alexander Ilin, Erkki Oja 65 66 Bayesian ensemble learning of generative

More information

What the No Free Lunch Theorems Really Mean; How to Improve Search Algorithms

What the No Free Lunch Theorems Really Mean; How to Improve Search Algorithms What the No Free Lunch Theorems Really Mean; How to Improve Search Algorithms David H. Wolpert SFI WORKING PAPER: 2012-10-017 SFI Working Papers contain accounts o scientiic work o the author(s) and do

More information

An Alternative Poincaré Section for Steady-State Responses and Bifurcations of a Duffing-Van der Pol Oscillator

An Alternative Poincaré Section for Steady-State Responses and Bifurcations of a Duffing-Van der Pol Oscillator An Alternative Poincaré Section or Steady-State Responses and Biurcations o a Duing-Van der Pol Oscillator Jang-Der Jeng, Yuan Kang *, Yeon-Pun Chang Department o Mechanical Engineering, National United

More information

3. Several Random Variables

3. Several Random Variables . Several Random Variables. Two Random Variables. Conditional Probabilit--Revisited. Statistical Independence.4 Correlation between Random Variables. Densit unction o the Sum o Two Random Variables. Probabilit

More information

Decision Level Fusion: An Event Driven Approach

Decision Level Fusion: An Event Driven Approach 2018 26th European Signal Processing Conerence (EUSIPCO) ecision Level Fusion: An Event riven Approach Siddharth Roheda epartment o Electrical and Computer Engineering sroheda@ncsu.edu Hamid Krim epartment

More information

On the Girth of (3,L) Quasi-Cyclic LDPC Codes based on Complete Protographs

On the Girth of (3,L) Quasi-Cyclic LDPC Codes based on Complete Protographs On the Girth o (3,L) Quasi-Cyclic LDPC Codes based on Complete Protographs Sudarsan V S Ranganathan, Dariush Divsalar and Richard D Wesel Department o Electrical Engineering, University o Caliornia, Los

More information

Abstract. In this paper we propose recurrent neural networks with feedback into the input

Abstract. In this paper we propose recurrent neural networks with feedback into the input Recurrent Neural Networks for Missing or Asynchronous Data Yoshua Bengio Dept. Informatique et Recherche Operationnelle Universite de Montreal Montreal, Qc H3C-3J7 bengioy@iro.umontreal.ca Francois Gingras

More information

Upper bound on singlet fraction of mixed entangled two qubitstates

Upper bound on singlet fraction of mixed entangled two qubitstates Upper bound on singlet raction o mixed entangled two qubitstates Satyabrata Adhikari Indian Institute o Technology Jodhpur, Rajasthan Collaborator: Dr. Atul Kumar Deinitions A pure state ψ AB H H A B is

More information

DETC A GENERALIZED MAX-MIN SAMPLE FOR RELIABILITY ASSESSMENT WITH DEPENDENT VARIABLES

DETC A GENERALIZED MAX-MIN SAMPLE FOR RELIABILITY ASSESSMENT WITH DEPENDENT VARIABLES Proceedings o the ASME International Design Engineering Technical Conerences & Computers and Inormation in Engineering Conerence IDETC/CIE August 7-,, Bualo, USA DETC- A GENERALIZED MAX-MIN SAMPLE FOR

More information

An Evolutionary Strategy for. Decremental Multi-Objective Optimization Problems

An Evolutionary Strategy for. Decremental Multi-Objective Optimization Problems An Evolutionary Strategy or Decremental Multi-Objective Optimization Problems Sheng-Uei Guan *, Qian Chen and Wenting Mo School o Engineering and Design, Brunel University, UK Department o Electrical and

More information

Incorporating User Control into Recommender Systems Based on Naive Bayesian Classification

Incorporating User Control into Recommender Systems Based on Naive Bayesian Classification Incorporating User Control into Recommender Systems Based on Naive Bayesian Classiication ABSTRACT Verus Pronk Philips Research Europe High Tech Campus 34 5656 AE Eindhoven The Netherlands verus.pronk@philips.com

More information

Advanced Signal Processing Techniques for Feature Extraction in Data Mining

Advanced Signal Processing Techniques for Feature Extraction in Data Mining Volume 9 o.9, April 0 Advanced Signal Processing Techniques or Feature Extraction in Data Mining Maya ayak Proessor Orissa Engineering College BPUT, Bhubaneswar Bhawani Sankar Panigrahi Asst. Proessor

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 218 Outlines Overview Introduction Linear Algebra Probability Linear Regression 1

More information

Syllabus Objective: 2.9 The student will sketch the graph of a polynomial, radical, or rational function.

Syllabus Objective: 2.9 The student will sketch the graph of a polynomial, radical, or rational function. Precalculus Notes: Unit Polynomial Functions Syllabus Objective:.9 The student will sketch the graph o a polynomial, radical, or rational unction. Polynomial Function: a unction that can be written in

More information

Enhancing Generalization Capability of SVM Classifiers with Feature Weight Adjustment

Enhancing Generalization Capability of SVM Classifiers with Feature Weight Adjustment Enhancing Generalization Capability of SVM Classifiers ith Feature Weight Adjustment Xizhao Wang and Qiang He College of Mathematics and Computer Science, Hebei University, Baoding 07002, Hebei, China

More information

TESTING TIMED FINITE STATE MACHINES WITH GUARANTEED FAULT COVERAGE

TESTING TIMED FINITE STATE MACHINES WITH GUARANTEED FAULT COVERAGE TESTING TIMED FINITE STATE MACHINES WITH GUARANTEED FAULT COVERAGE Khaled El-Fakih 1, Nina Yevtushenko 2 *, Hacene Fouchal 3 1 American University o Sharjah, PO Box 26666, UAE kelakih@aus.edu 2 Tomsk State

More information

Neural Networks and Machine Learning research at the Laboratory of Computer and Information Science, Helsinki University of Technology

Neural Networks and Machine Learning research at the Laboratory of Computer and Information Science, Helsinki University of Technology Neural Networks and Machine Learning research at the Laboratory of Computer and Information Science, Helsinki University of Technology Erkki Oja Department of Computer Science Aalto University, Finland

More information

Robust feedback linearization

Robust feedback linearization Robust eedback linearization Hervé Guillard Henri Bourlès Laboratoire d Automatique des Arts et Métiers CNAM/ENSAM 21 rue Pinel 75013 Paris France {herveguillardhenribourles}@parisensamr Keywords: Nonlinear

More information

( x) f = where P and Q are polynomials.

( x) f = where P and Q are polynomials. 9.8 Graphing Rational Functions Lets begin with a deinition. Deinition: Rational Function A rational unction is a unction o the orm ( ) ( ) ( ) P where P and Q are polynomials. Q An eample o a simple rational

More information

Lecture : Feedback Linearization

Lecture : Feedback Linearization ecture : Feedbac inearization Niola Misovic, dipl ing and Pro Zoran Vuic June 29 Summary: This document ollows the lectures on eedbac linearization tought at the University o Zagreb, Faculty o Electrical

More information

OBSERVER/KALMAN AND SUBSPACE IDENTIFICATION OF THE UBC BENCHMARK STRUCTURAL MODEL

OBSERVER/KALMAN AND SUBSPACE IDENTIFICATION OF THE UBC BENCHMARK STRUCTURAL MODEL OBSERVER/KALMAN AND SUBSPACE IDENTIFICATION OF THE UBC BENCHMARK STRUCTURAL MODEL Dionisio Bernal, Burcu Gunes Associate Proessor, Graduate Student Department o Civil and Environmental Engineering, 7 Snell

More information

Educational Procedure for Designing and Teaching Reflector Antennas in Electrical Engineering Programs. Abstract. Introduction

Educational Procedure for Designing and Teaching Reflector Antennas in Electrical Engineering Programs. Abstract. Introduction Educational Procedure or Designing and Teaching Relector Antennas in Electrical Engineering Programs Marco A.B. Terada Klipsch School o Electrical and Computer Engineering New Mexico State University Las

More information

Ensemble learning 11/19/13. The wisdom of the crowds. Chapter 11. Ensemble methods. Ensemble methods

Ensemble learning 11/19/13. The wisdom of the crowds. Chapter 11. Ensemble methods. Ensemble methods The wisdom of the crowds Ensemble learning Sir Francis Galton discovered in the early 1900s that a collection of educated guesses can add up to very accurate predictions! Chapter 11 The paper in which

More information

Fuzzy control of a class of multivariable nonlinear systems subject to parameter uncertainties: model reference approach

Fuzzy control of a class of multivariable nonlinear systems subject to parameter uncertainties: model reference approach International Journal of Approximate Reasoning 6 (00) 9±44 www.elsevier.com/locate/ijar Fuzzy control of a class of multivariable nonlinear systems subject to parameter uncertainties: model reference approach

More information

Active Sonar Target Classification Using Classifier Ensembles

Active Sonar Target Classification Using Classifier Ensembles International Journal of Engineering Research and Technology. ISSN 0974-3154 Volume 11, Number 12 (2018), pp. 2125-2133 International Research Publication House http://www.irphouse.com Active Sonar Target

More information

INPUT GROUND MOTION SELECTION FOR XIAOWAN HIGH ARCH DAM

INPUT GROUND MOTION SELECTION FOR XIAOWAN HIGH ARCH DAM 3 th World Conerence on Earthquake Engineering Vancouver, B.C., Canada August -6, 24 Paper No. 2633 INPUT GROUND MOTION LECTION FOR XIAOWAN HIGH ARCH DAM CHEN HOUQUN, LI MIN 2, ZHANG BAIYAN 3 SUMMARY In

More information

Machine Learning Lecture 2

Machine Learning Lecture 2 Announcements Machine Learning Lecture 2 Eceptional number of lecture participants this year Current count: 449 participants This is very nice, but it stretches our resources to their limits Probability

More information

Estimation and detection of a periodic signal

Estimation and detection of a periodic signal Estimation and detection o a periodic signal Daniel Aronsson, Erik Björnemo, Mathias Johansson Signals and Systems Group, Uppsala University, Sweden, e-mail: Daniel.Aronsson,Erik.Bjornemo,Mathias.Johansson}@Angstrom.uu.se

More information

Robust Residual Selection for Fault Detection

Robust Residual Selection for Fault Detection Robust Residual Selection or Fault Detection Hamed Khorasgani*, Daniel E Jung**, Gautam Biswas*, Erik Frisk**, and Mattias Krysander** Abstract A number o residual generation methods have been developed

More information

Sparse Support Vector Machines by Kernel Discriminant Analysis

Sparse Support Vector Machines by Kernel Discriminant Analysis Sparse Support Vector Machines by Kernel Discriminant Analysis Kazuki Iwamura and Shigeo Abe Kobe University - Graduate School of Engineering Kobe, Japan Abstract. We discuss sparse support vector machines

More information

Using Rough Sets Techniques as a Fault Diagnosis Classifier for Induction Motors

Using Rough Sets Techniques as a Fault Diagnosis Classifier for Induction Motors Using ough Sets Techniques as a Fault Diagnosis Classiier or Induction Motors E. L. Bonaldi, L. E. Borges da Silva*, G. Lambert-Torres, L. E. L. Oliveira and F. O. Assunção * Electronic Engineering Departament

More information

Landslide susceptibility zonation - A comparison of natural and cut slopes using statistical models

Landslide susceptibility zonation - A comparison of natural and cut slopes using statistical models Landslide susceptibility zonation - A comparison o natural and cut slopes using statistical models QuocPhi Nguyen a, TruongThanh Phi b, AnhNguyet Nguyen b, VanAnh Lam c a Hanoi University o Mining and

More information

Digital Image Processing. Lecture 6 (Enhancement) Bu-Ali Sina University Computer Engineering Dep. Fall 2009

Digital Image Processing. Lecture 6 (Enhancement) Bu-Ali Sina University Computer Engineering Dep. Fall 2009 Digital Image Processing Lecture 6 (Enhancement) Bu-Ali Sina University Computer Engineering Dep. Fall 009 Outline Image Enhancement in Spatial Domain Spatial Filtering Smoothing Filters Median Filter

More information

Ensemble Learning. Synonyms Committee machines; Multiple classier systems

Ensemble Learning. Synonyms Committee machines; Multiple classier systems Chapter No: 00400 Page Proof Page 1 29-1-2010 #1 E Ensemble Learning G B e University of Manchester, School of Computer Science, Kilburn Building, Oxford Road, Manchester, M PL, UK Synonyms Committee machines;

More information

A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models

A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models Jeff A. Bilmes (bilmes@cs.berkeley.edu) International Computer Science Institute

More information

Neural Networks: Introduction

Neural Networks: Introduction Neural Networks: Introduction Machine Learning Fall 2017 Based on slides and material from Geoffrey Hinton, Richard Socher, Dan Roth, Yoav Goldberg, Shai Shalev-Shwartz and Shai Ben-David, and others 1

More information

10-701/15-781, Machine Learning: Homework 4

10-701/15-781, Machine Learning: Homework 4 10-701/15-781, Machine Learning: Homewor 4 Aarti Singh Carnegie Mellon University ˆ The assignment is due at 10:30 am beginning of class on Mon, Nov 15, 2010. ˆ Separate you answers into five parts, one

More information

Analysis Scheme in the Ensemble Kalman Filter

Analysis Scheme in the Ensemble Kalman Filter JUNE 1998 BURGERS ET AL. 1719 Analysis Scheme in the Ensemble Kalman Filter GERRIT BURGERS Royal Netherlands Meteorological Institute, De Bilt, the Netherlands PETER JAN VAN LEEUWEN Institute or Marine

More information

Bayesian Mixtures of Bernoulli Distributions

Bayesian Mixtures of Bernoulli Distributions Bayesian Mixtures of Bernoulli Distributions Laurens van der Maaten Department of Computer Science and Engineering University of California, San Diego Introduction The mixture of Bernoulli distributions

More information

ROBUST STABILITY AND PERFORMANCE ANALYSIS OF UNSTABLE PROCESS WITH DEAD TIME USING Mu SYNTHESIS

ROBUST STABILITY AND PERFORMANCE ANALYSIS OF UNSTABLE PROCESS WITH DEAD TIME USING Mu SYNTHESIS ROBUST STABILITY AND PERFORMANCE ANALYSIS OF UNSTABLE PROCESS WITH DEAD TIME USING Mu SYNTHESIS I. Thirunavukkarasu 1, V. I. George 1, G. Saravana Kumar 1 and A. Ramakalyan 2 1 Department o Instrumentation

More information

A Bayesian Local Linear Wavelet Neural Network

A Bayesian Local Linear Wavelet Neural Network A Bayesian Local Linear Wavelet Neural Network Kunikazu Kobayashi, Masanao Obayashi, and Takashi Kuremoto Yamaguchi University, 2-16-1, Tokiwadai, Ube, Yamaguchi 755-8611, Japan {koba, m.obayas, wu}@yamaguchi-u.ac.jp

More information

Sparse Bayesian Logistic Regression with Hierarchical Prior and Variational Inference

Sparse Bayesian Logistic Regression with Hierarchical Prior and Variational Inference Sparse Bayesian Logistic Regression with Hierarchical Prior and Variational Inference Shunsuke Horii Waseda University s.horii@aoni.waseda.jp Abstract In this paper, we present a hierarchical model which

More information

a subset of these N input variables. A naive method is to train a new neural network on this subset to determine this performance. Instead of the comp

a subset of these N input variables. A naive method is to train a new neural network on this subset to determine this performance. Instead of the comp Input Selection with Partial Retraining Pierre van de Laar, Stan Gielen, and Tom Heskes RWCP? Novel Functions SNN?? Laboratory, Dept. of Medical Physics and Biophysics, University of Nijmegen, The Netherlands.

More information

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing 1 Zhong-Yuan Zhang, 2 Chris Ding, 3 Jie Tang *1, Corresponding Author School of Statistics,

More information

2.6 Two-dimensional continuous interpolation 3: Kriging - introduction to geostatistics. References - geostatistics. References geostatistics (cntd.

2.6 Two-dimensional continuous interpolation 3: Kriging - introduction to geostatistics. References - geostatistics. References geostatistics (cntd. .6 Two-dimensional continuous interpolation 3: Kriging - introduction to geostatistics Spline interpolation was originally developed or image processing. In GIS, it is mainly used in visualization o spatial

More information