A Correntropy-based Periodogram for Light Curves & Semi-supervised Classification of VVV Periodic Variables
1 A Correntropy-based Periodogram for Light Curves & Semi-supervised Classification of VVV Periodic Variables
Los Pablos (P4J): Pablo Huijse, Pavlos Protopapas, Pablo Estévez, Pablo Zegers, Jose Principe
Harvard-Chile Data Science School
2 Motivation
Light curve analysis challenges:
- Uneven sampling.
- Different noise sources and heteroscedastic errors.
- Light curves may have few points.
- Databases can be huge.
3 Least Squares Spectral Analysis (LSSA)
For a given frequency, the Lomb-Scargle (LS) power is equivalent to the L2 norm of the coefficients of the sinusoidal model that best fits the data in a least-squares sense.
The Generalized Lomb-Scargle (GLS) periodogram extends LS with a floating mean and measurement-error weights.
M. Zechmeister & M. Kürster, The Generalized Lomb-Scargle Periodogram, A&A, 2009
J. VanderPlas & Z. Ivezic, Periodograms for Multiband Astronomical Time Series, ApJ, 2015
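The least-squares equivalence can be sketched in a few lines of NumPy (a toy illustration, not the normalized LS periodogram; the data, frequencies, and amplitudes below are made up for this demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unevenly sampled sinusoid (frequency 0.2 is an assumption of this demo)
t = np.sort(rng.uniform(0.0, 50.0, 80))
y = 1.2 * np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(80)
y = y - y.mean()

def ls_power(t, y, f):
    # Least-squares fit of a*sin(2*pi*f*t) + b*cos(2*pi*f*t); the LS power
    # at f is taken as the squared L2 norm of the fitted coefficients
    X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(coef @ coef)

p_true = ls_power(t, y, 0.2)   # at the injected frequency
p_off = ls_power(t, y, 0.33)   # at an arbitrary off frequency
```

The power at the injected frequency dominates the off-frequency value, which is the quantity a periodogram scans over a frequency grid.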
4 Maximum correntropy criterion
For two arbitrary random variables X and Y with N realizations, the sample correntropy is V(X, Y) = (1/N) Σ_{i=1}^{N} κ_σ(x_i − y_i), where κ_σ is a kernel (typically Gaussian).
- Generalizes correlation to higher-order moments.
- Samples are compared through a kernel.
- Parameter: kernel bandwidth σ.
J. Principe, Information Theoretic Learning, Springer, 2010
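A minimal sample estimator of correntropy, assuming a Gaussian kernel (the example vectors are made up):

```python
import numpy as np

def correntropy(x, y, sigma):
    # Sample estimator: average of a Gaussian kernel applied to paired differences
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.mean(np.exp(-d ** 2 / (2.0 * sigma ** 2))))

v_same = correntropy([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], sigma=0.5)  # identical samples
v_far = correntropy([0.0, 0.0], [3.0, 3.0], sigma=0.5)             # distant samples
```

Identical realizations give the maximum value 1, while samples far apart relative to the bandwidth contribute almost nothing, which is what makes the bandwidth the key parameter.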
5 Maximum correntropy criterion
- Fit a model to the data by maximizing correntropy.
- MCC is equivalent to maximizing the pdf of the error at e = 0 (Principe 2010).
- An M-estimator: robust to non-Gaussian noise and outliers.
- Assumes homoscedastic noise.
J. Principe, Information Theoretic Learning, Springer, 2010
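The M-estimator behavior can be illustrated with a toy location estimate; the data, the bandwidth σ = 0.3, and the grid search standing in for a proper solver are all assumptions of this sketch:

```python
import numpy as np

# Made-up data clustered near 1.0 with one gross outlier
x = np.array([0.9, 1.0, 1.1, 0.95, 1.05, 10.0])

def correntropy_of_errors(mu, sigma=0.3):
    # MCC objective: mean Gaussian kernel of the errors e = x - mu
    e = x - mu
    return np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2)))

# A dense grid search stands in for the fixed-point/gradient solver used in practice
grid = np.linspace(0.0, 11.0, 2201)
mcc_loc = grid[np.argmax([correntropy_of_errors(m) for m in grid])]
ls_loc = x.mean()   # least-squares estimate, dragged toward the outlier
```

The least-squares estimate lands at 2.5, pulled by the outlier, while the MCC estimate stays at the bulk of the data near 1.0: the Gaussian kernel gives the outlier a vanishing weight instead of a quadratic penalty.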
6 Weighted Maximum Correntropy Criterion
Simple sample weighting through the kernel bandwidth.
Fixed-point updates:
1. Assuming the bandwidth σ fixed, update the model coefficients β.
2. Assuming β fixed, update σ.
3. Check WMCC convergence: stop, or go back to 1.
7 Statistical test for period significance
- LS has an analytical expression for the false alarm probability (assumes Gaussian noise).
- Generalized Extreme Value (GEV) statistics: the maxima from several realizations of an experiment follow a GEV distribution.
- Procedure: bootstrap the light curve, find the periodogram maxima on a subset of frequencies, fit a GEV, and compute the false alarm probability [1].
[1] M. Süveges, Extreme-value modelling for the significance assessment of periodogram peaks, MNRAS, 2012
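The bootstrap-plus-extreme-value recipe can be sketched as follows. For simplicity this assumes a Gumbel law (GEV with zero shape) fitted by moments, and a made-up chi-squared null standing in for the real shuffled-periodogram maxima, rather than the full GEV fit of Süveges (2012):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the bootstrap step: per realization, take the maximum of a null
# "periodogram" evaluated on a subset of frequencies (chi-squared null assumed)
maxima = np.array([rng.chisquare(2, size=50).max() for _ in range(500)])

# Moment fit of a Gumbel law (GEV with zero shape) to the bootstrap maxima
euler_gamma = 0.57721566
scale = maxima.std() * np.sqrt(6.0) / np.pi
loc = maxima.mean() - euler_gamma * scale

def false_alarm_prob(peak):
    # P(maximum > peak) under the fitted extreme-value null distribution
    z = (peak - loc) / scale
    return float(1.0 - np.exp(-np.exp(-z)))
```

A peak far above the fitted location gets a small false alarm probability; a peak near or below it gets a probability close to one, exactly the significance levels the `get_fap` call in the examples below returns.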
8 Synthetic test
Simple irregular sampling: generate a linearly spaced time vector, add jitter proportional to 1/Fs, and discard 80% of the points.
Model:
import P4J
t = P4J.irregular_sampling(T=100.0, N=100)
y_clean = P4J.trigonometric_model(t, f0=2.0, A=[1.0, 0.5, 0.25])
y, y_noisy, dy = P4J.contaminate_time_series(t, y_clean, SNR=0.0, red_noise_ratio=0.25, outlier_ratio=0.0)
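If P4J is not at hand, the sampling recipe described above can be approximated in plain NumPy. This is a hypothetical re-implementation: P4J's actual jitter model and the harmonic layout of `trigonometric_model` may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 100.0, 100
n_full = 5 * N                 # keep 20% -> start with 5x the points
fs = n_full / T                # sampling frequency of the regular grid
t = np.linspace(0.0, T, n_full)
t = t + rng.standard_normal(n_full) / fs           # jitter proportional to 1/Fs
t = np.sort(rng.choice(t, size=N, replace=False))  # discard 80% of the points

# Harmonic model: fundamental f0 plus two harmonics (amplitude layout assumed)
f0 = 2.0
y_clean = (1.0 * np.sin(2 * np.pi * f0 * t)
           + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
           + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))
```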
9 Example (SNR = 10.0, red_noise_var = 0.25)
import numpy as np
my_per = P4J.periodogram(M=3, method='wmcc')
my_per.fit(t, y_noisy, dy)
freq, per = my_per.grid_search(0.0, 5.0, 1.0, 0.1, n_local_max=10)
my_per.fit_extreme_cdf(n_bootstrap=100, n_frequencies=100)
per_levels = my_per.get_fap(np.asarray([0.05, 0.01, 0.001]))
10 Example (SNR = 0.0, red_noise_var = 0.25)
import numpy as np
my_per = P4J.periodogram(M=3, method='wmcc')
my_per.fit(t, y_noisy, dy)
freq, per = my_per.grid_search(0.0, 5.0, 1.0, 0.1, n_local_max=10)
my_per.fit_extreme_cdf(n_bootstrap=100, n_frequencies=100)
per_levels = my_per.get_fap(np.asarray([0.05, 0.01, 0.001]))
11 Results
- Performance: the % of cases where the relative error is below the tolerance.
- Confidence: the average significance at f = f0.
No outliers, red_noise_ratio = 0.0, i.e. the noise is perfectly explained by the uncertainties.
10 random time vectors, 100 noise realizations.
12 Results
- Performance: the % of cases where the relative error is below the tolerance.
- Confidence: the average significance at f = f0.
No outliers, red_noise_ratio = 1/8.
10 random time vectors, 100 noise realizations.
13 Results
- Performance: the % of cases where the relative error is below the tolerance.
- Confidence: the average significance at f = f0.
No outliers, red_noise_ratio = 1/4.
10 random time vectors, 100 noise realizations.
14 VISTA Variables in the Vía Láctea (VVV)
ESO public survey. Most measurements in the K band (near infrared) using 7 apertures.
Goal: study the structure of the Galactic bulge and the origin of our Galaxy.
15 VISTA Variables in the Vía Láctea (VVV)
F. Gran et al. 2016:
- 1,019 RRab light curves
- Fields b201–b228 (~47 sq. deg)
- Detected with AoV, corrected manually
16 VISTA Variables in the Vía Láctea (VVV): Method
1. Grab light curves (LC) from fields b201–b228 (~47 sq. deg).
2. Discard LC with chi2 < 2.0 and N < 30.
3. Discard LC with periodicity confidence below a threshold.
4. Create features.
5. Semi-supervised PU classification.
17 Analysis of VVV periodic variables
First N light curves sorted by periodicity confidence:
- Number of reported RRL left out of this set (lost RRL)
- Relative error of the detected periods vs. the reported periods
Lost RRL for N = 10,000 / 20,000 / 50,000:
Lomb-Scargle: 154 / / /
Generalized LS: 554 / / /
WMCC periodogram: 75 / / / 0.018
18 Analysis of VVV periodic variables
22 Semi-supervised and PU Learning
Semi-supervised classification:
- Manifold assumption
- Clustering assumption
- Low-density separation
- Self-learning / graph-based / avoiding changes in dense regions [1]
Our setting:
- >10,000 unlabeled periodic light curves
- ~1,000 labeled RRab (positive class)
- No other survey to crossmatch
- Positive/Unlabeled (PU) scenario
[1] X. Zhu, Semi-supervised Learning Literature Survey, 2005 (online, public)
23 Efficient SS/PU Learning
Bagging PU [1] (transductive version):
- Positive dataset (size NP), unlabeled dataset (size NU).
- Draw T bootstrap sets from the unlabeled data (size K).
- Train T weak learners on NP+K samples; predict on the OOB set (NU − unique[K]).
- Average the OOB predictions.
No graph computation / few parameters / highly parallel / simple.
github.com/phuijse/bagging_pu
[1] F. Mordelet and J.-P. Vert, A bagging SVM to learn from positive and unlabeled examples, Pattern Recognition Letters, v. 37, 2014
[2] M. Claesen et al., A robust ensemble approach to learn from positive and unlabeled data using SVM base models, Neurocomputing, 2014
[3] M. Claesen et al., Assessing binary classifiers using only positive and unlabeled data, Neurocomputing, 2015
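A minimal transductive bagging-PU sketch in NumPy, with a nearest-centroid score standing in for the SVM weak learner of Mordelet & Vert (the toy 2-D data and all parameters are assumptions of this demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D features: positives cluster at (2, 2); the unlabeled set mixes
# hidden positives with negatives at (-2, -2) (all made up for this demo)
P = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(30, 2))
U = np.vstack([rng.normal([2.0, 2.0], 0.5, (20, 2)),     # hidden positives
               rng.normal([-2.0, -2.0], 0.5, (80, 2))])  # true negatives

T, K = 50, 30            # number of bootstrap rounds, bootstrap size
scores = np.zeros(len(U))
counts = np.zeros(len(U))

for _ in range(T):
    bag = rng.choice(len(U), size=K, replace=True)  # unlabeled sample treated as negative
    oob = np.setdiff1d(np.arange(len(U)), bag)      # out-of-bag unlabeled points
    # Weak learner: nearest-centroid score (distance to the "negative" centroid
    # minus distance to the positive centroid; larger = more positive-like)
    c_pos, c_neg = P.mean(axis=0), U[bag].mean(axis=0)
    s = (np.linalg.norm(U[oob] - c_neg, axis=1)
         - np.linalg.norm(U[oob] - c_pos, axis=1))
    scores[oob] += s
    counts[oob] += 1

avg = scores / np.maximum(counts, 1)   # averaged OOB predictions
```

Averaging many weak learners over different bootstrap "negative" sets washes out the hidden positives that contaminate each sample, so the hidden positives in the unlabeled set end up with clearly higher averaged scores than the true negatives.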
24 Analysis of VVV periodic variables
L.J.P. van der Maaten and G.E. Hinton, Visualizing High-Dimensional Data Using t-SNE, JMLR, 2008
25 Analysis of VVV periodic variables
26 Analysis of VVV periodic variables
27 Probability in [0.95, 1.00]: 132 light curves
28 Probability in [0.85, 0.95]: 101 light curves
29 Probability in [0.65, 0.85]: 102 light curves
30 Probability in [0.50, 0.65]: 82 light curves
31 Conclusions and future work
- Periodicity detection based on information-theoretic functionals is more precise and less sensitive to false positives.
- New set of VVV RR Lyrae candidates to confirm, and more fields to run.
- Compare with more periodicity detection methods (conditional entropy, AoV, PDM), and test different features (FATS) and PU/SS methods.
- Test other surveys (Pan-STARRS, CRTS, synthetic LSST light curves).
- Improve computational implementations.
Links:
- pypi.python.org/pypi/p4j
- github.com/phuijse/p4j
- github.com/phuijse/bagging_pu