Extracellular Recording and Spike Train Analysis


1 Extracellular Recording and Spike Train Analysis. Christophe Pouzat, CNRS UMR 8118 and Paris-Descartes University, Paris, France. ENP 2009: July 3rd

2 Outline

3 Outline

4 Extracellular recordings are attractive. Why? They produce a (whole) lot of data with moderate tissue damage. They potentially provide sub-millisecond resolution of neuronal spiking activity. They are relatively cheap to implement. But... They require a lot of processing. They cannot provide some very important pieces of information, like the neuronal cell type, the sub-threshold activity, or the morphology of the recorded cells.

5 Outline

6 Multi-Electrodes In Vivo Recordings in Insects From the outside the neuronal activity appears as brief electrical impulses: the action potentials or spikes. Left, the brain and the probe with 16 electrodes (bright spots). Width of one probe shank: 80 µm. Right, 1 sec of raw data from 4 electrodes. The local extrema are the action potentials.

7 More info on the experimental setting. The brain shown on the figure belongs to a locust (Schistocerca americana). The part of the brain close to the probe is the antennal lobe, the first olfactory relay of the insect brain. Its diameter is 400 µm. The thickness of the probe's shanks is µm. The electrodes are squares of 13 x 13 µm². The center of one tetrode is separated by 150 µm from its two nearest neighbors. The data shown on the right-hand side of the figure come from the lowest right tetrode, which was located approximately 100 µm below the antennal lobe surface. The data were filtered between 300 Hz and 5 kHz.
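
Since the slide specifies the 300 Hz to 5 kHz band, here is a minimal R sketch of such a filtering step. It is my illustration only: the `signal` package, the fake `raw` trace and the 15 kHz sampling rate are assumptions, not details from the talk.

```r
## A hedged sketch, not the talk's actual code: band-pass filtering one
## electrode's trace between 300 Hz and 5 kHz with the 'signal' package.
## 'raw' and the 15 kHz sampling rate are made-up placeholders.
library(signal)
sampling.rate <- 15e3
raw <- rnorm(sampling.rate)                       # 1 s of fake data
## 4th-order Butterworth band-pass; corner frequencies are given as
## fractions of the Nyquist frequency
bp <- butter(4, c(300, 5000) / (sampling.rate / 2), type = "pass")
filtered <- filtfilt(bp, raw)                     # zero-phase filtering
```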

8 Why Are Tetrodes Used? The last 200 ms of the previous figure. With the upper site only, it would be difficult to properly classify the first two large spikes (**). With the lower site only, it would be difficult to properly classify the two spikes labeled by ##.

9 Tetrodes are not always useful. You should not conclude from the last slide that tetrodes will always improve the classification of the recorded events. The gain due to tetrodes depends a lot on the tissue and / or on the species. For the antennal lobe, for instance, tetrodes are required to get good classification in the locust, Schistocerca americana, but they do not bring anything in the cockroach, Periplaneta americana, where the density of active neurites is much lower, so that isolated sites already perform well. In the honeybee, Apis mellifera, the density of active neurites is so high that the signal-to-noise ratio on each site makes the data useless.

10 Other Experimental Techniques Can Also be Used A single neuron patch-clamp coupled to calcium imaging. Data from Moritz Paehler and Peter Kloppenburg (Cologne University).

11 Extracellular recordings limitations. The last slide reminds you that it is always a good idea to use several techniques on the same preparation. Extracellular recordings have many advantages, but this comes at the price of harder-to-interpret data. In particular, the question of which cell type is / was recorded is very difficult to answer by observing only the spike trains. It is then very useful to get other viewpoints on your favorite system, for instance with patch or sharp electrode recordings combined with a morphological study.

12 Outline

13 Data Preprocessing: Spike Sorting. To exploit our recordings we must first: Find out how many neurons are recorded. For each neuron, estimate some features like the spike waveform, the discharge statistics, etc. For each detected event, find the probability with which each neuron could have generated it. Find an automatic method to answer these questions.

14 A Similar Problem (1). Think of a room with many seated people who are talking to each other in a language we do not know. Assume that microphones have been placed in the room and that their recordings are given to us. Our task is to isolate the discourse of each person.

15 A Similar Problem (2) We have therefore a situation like... With apologies to Brueghel...

16 A Similar Problem (3). To fulfill our task we could make use of the following features: Some people have a low-pitched voice while others have a high-pitched one. Some people speak loudly while others do not. One person can be close to one microphone and far from another, so that his or her speech is simultaneously recorded by the two with different amplitudes. Some people speak all the time while others just utter a comment here and there; that is, the discourse statistics change from person to person.

17 Spike Sorting as a Set of Standard Statistical Problems. Efficient spike sorting requires: 1. Event detection followed by a dimension reduction of the event space. 2. A clustering stage; this can be partially or fully automated depending on the data. 3. Event classification.

18 Avoid losing time. As I just wrote, spike sorting involves a sequence of standard statistical problems. So do not lose (too much) time with the dedicated spike-sorting literature, whose average quality is appallingly low! A good reference on classification / clustering problems is Brian Ripley's book (1996) Pattern Recognition and Neural Networks. For statistics in general, A. Davison (2003) Statistical Models is an excellent start. It is also very important to choose good software implementing all the classical clustering algorithms: k-means, Gaussian mixtures, etc. Don't hesitate: go for R (https://www.r-project.org/).

19 Detection illustration
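
As a companion to the illustration, here is one common detection recipe sketched in R: keep threshold crossings of a robust noise estimate, restricted to local minima. This is an assumption of mine, not necessarily the rule used in the talk; `filtered` is the hypothetical trace from the earlier filtering sketch.

```r
## A minimal detection sketch (an assumption, not the talk's method):
## flag local minima of the filtered trace that fall below a multiple
## of a robust noise estimate.
detect.spikes <- function(x, threshold.factor = 4) {
  noise.sd <- mad(x)                          # robust noise SD
  below <- x < -threshold.factor * noise.sd   # threshold crossings
  ## local minima: the slope changes from negative to positive
  is.min <- c(FALSE, diff(sign(diff(x))) > 0, FALSE)
  which(below & is.min)                       # sample indices of events
}
spike.idx <- detect.spikes(filtered)          # 'filtered' from above
```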

20 Dimension Reduction. Left, 100 spikes shown on the four recording sites (Site 1 to Site 4). Right, 1000 spikes projected on the subspace defined by the first 4 principal components.
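
A minimal sketch of this reduction step in R, using `prcomp`; the `evts` matrix (one detected event per row) is a simulated stand-in for the real cut-out waveforms.

```r
## Dimension-reduction sketch with PCA: 'evts' stands in for a matrix
## holding one detected event per row (the waveform samples of the
## four sites concatenated); here it is just simulated noise.
evts <- matrix(rnorm(1000 * 120), nrow = 1000)  # 1000 events, 120 samples
pca <- prcomp(evts)
scores <- pca$x[, 1:4]              # projection on the first 4 PCs
pairs(scores, pch = ".")            # quick look at the reduced space
```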

21 Clustering Visualisation. Gaussian mixture clustering (number of clusters minimizing the BIC) with mclust. Projection plane selected with the projection pursuit of GGobi.
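
The slide names mclust, so a minimal call might look as follows; `scores` is the hypothetical PCA matrix from the previous sketch, and the range of candidate cluster numbers is my choice.

```r
## Gaussian mixture clustering sketch with mclust: Mclust() fits
## mixtures for a range of cluster numbers and keeps the model with
## the best BIC ('scores' comes from the PCA sketch above).
library(mclust)
fit <- Mclust(scores, G = 1:15)    # candidate numbers of clusters
summary(fit)                       # selected model and cluster sizes
labels <- fit$classification       # one cluster label per event
```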

22 Spikes Attributed to Two Neurons of the Model. Spikes of neurons 3 and 10, shown on the four recording sites (Site 1 to Site 4).

23 Outline

24 After the rather heavy spike sorting pre-processing stage, spike trains are obtained.

25 Studying spike trains per se A central working hypothesis of systems neuroscience is that action potential or spike occurrence times, as opposed to spike waveforms, are the sole information carrier between brain regions. This hypothesis legitimates and leads to the study of spike trains per se. It also encourages the development of models whose goal is to predict the probability of occurrence of a spike at a given time, without necessarily considering the biophysical spike generation mechanisms.

26 Spike trains are not Poisson processes. The raw data of one bursty neuron of the cockroach antennal lobe: 1 minute of spontaneous activity.

27 Spike trains are not renewal processes. Some renewal tests applied to the previous data.

28 A counting process formalism (1). Probabilists and statisticians working on series of events whose only (or most prominent) feature is their occurrence time (car accidents, earthquakes) use a formalism based on the following three quantities (Brillinger, 1988, Biol Cybern 59:189). Counting Process: for points $\{t_j\}$ randomly scattered along a line, the counting process $N(t)$ gives the number of points observed in the interval $(0, t]$:

$$N(t) = \sharp\{t_j : 0 < t_j \le t\}$$

where $\sharp$ stands for the cardinality (number of elements) of a set.
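
In R, the counting process of a spike train is just a step function; a small sketch on a simulated train (the 5 Hz Poisson train is a made-up example):

```r
## Counting process sketch: N(t) from a vector of spike times 'st'
## (here a simulated homogeneous Poisson train at 5 Hz).
st <- cumsum(rexp(100, rate = 5))   # fake spike times
N <- stepfun(st, 0:length(st))      # N(t) = number of t_j in (0, t]
plot(N, do.points = FALSE, xlab = "t (s)", ylab = "N(t)",
     main = "Counting process of a simulated spike train")
```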

29 A counting process formalism (2). History: the history, $H_t$, consists of the variates determined up to and including time $t$ that are necessary to describe the evolution of the counting process. Conditional Intensity: for the process $N$ and history $H_t$, the conditional intensity at time $t$ is defined as:

$$\lambda(t \mid H_t) = \lim_{h \downarrow 0} \frac{\mathrm{Prob}\{\text{event in } (t, t+h] \mid H_t\}}{h}$$

For small $h$ one has the interpretation:

$$\mathrm{Prob}\{\text{event in } (t, t+h] \mid H_t\} \approx \lambda(t \mid H_t)\, h$$

30 Goodness of fit tests for counting processes. All goodness of fit tests derive from a mapping or "time transformation" of the observed process realization. Namely, one introduces the integrated conditional intensity:

$$\Lambda(t) = \int_0^t \lambda(u \mid H_u)\, du$$

If $\Lambda$ is correct, it is not hard to show that the process defined by

$$\{t_1, \ldots, t_n\} \mapsto \{\Lambda(t_1), \ldots, \Lambda(t_n)\}$$

is a Poisson process with rate 1.
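
A small simulated check of this result in R: for a homogeneous Poisson process the integrated intensity is known exactly, and the transformed inter-event intervals should be unit-mean exponentials (the rate and sample size are arbitrary choices):

```r
## Time-transformation sketch: for a homogeneous Poisson process with
## rate 5, Lambda(t) = 5 t, so the transformed times should form a
## rate-1 Poisson process, i.e. have unit-mean exponential intervals.
st <- cumsum(rexp(200, rate = 5))   # simulated spike times
Lambda <- function(t) 5 * t         # integrated intensity (known here)
tst <- Lambda(st)                   # transformed times
ks.test(diff(tst), "pexp", 1)       # intervals against Exp(1)
```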

31 An illustration with simulated data. Time transformation illustrated

32 A goodness of fit test based on Donsker's theorem. Y. Ogata (1988, JASA, 83:9) introduced several procedures testing the time-transformed event sequence against the uniform Poisson hypothesis. We propose an additional test built as follows:

$$X_j = \Lambda(t_{j+1}) - \Lambda(t_j) - 1$$

$$S_m = \sum_{j=1}^{m} X_j$$

$$W_n(t) = S_{\lfloor nt \rfloor} / \sqrt{n}$$

Donsker's theorem (Billingsley, 1999, pp 86-91) states that if $\Lambda$ is correct then $W_n$ converges weakly to a standard Wiener process. We therefore test whether the observed $W_n$ stays within the tight confidence bands obtained by Kendall et al (2007, Statist Comput 17:1) for standard Wiener processes.
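
A sketch of this test on the transformed times from the previous block; note that the boundary constants used below are illustrative stand-ins, not the exact Kendall et al (2007) values:

```r
## Sketch of the Wiener-process test on the transformed times 'tst'
## from the previous block. The boundary constants are illustrative
## placeholders, not the exact Kendall et al (2007) values.
X <- diff(tst) - 1                  # X_j = Lambda(t_{j+1}) - Lambda(t_j) - 1
n <- length(X)
t <- (1:n) / n
Wn <- cumsum(X) / sqrt(n)           # W_n evaluated at t = j / n
a <- 0.3; b <- 2.35                 # illustrative boundary constants
plot(t, Wn, type = "l", ylim = c(-3, 3), xlab = "t", ylab = "W_n(t)")
lines(t,  a + b * sqrt(t), lty = 2) # upper boundary a + b * sqrt(t)
lines(t, -a - b * sqrt(t), lty = 2) # lower boundary
```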

33 Illustration of the proposed test. The proposed test applied to the simulated data. The boundaries have the form: $f(x; a, b) = a + b\sqrt{x}$.

34 Where Are We? We are now in the fairly unusual situation (from the neuroscientist's viewpoint) of knowing how to show that the model we entertain is wrong without having an explicit expression for this model... We need a way to find candidates for the conditional intensity $\lambda(t \mid H_t)$.

35 What Do We Put in $H_t$? It is common to summarize the stationary discharge of a neuron by its inter-spike interval (ISI) histogram. If the latter histogram is not a pure decreasing mono-exponential, that implies that $\lambda(t \mid H_t)$ will at least depend on the elapsed time since the last spike, $t - t_l$. For the real data we saw previously we also expect at least a dependence on the length of the previous inter-spike interval, $isi_1$. We would then have:

$$\lambda(t \mid H_t) = \lambda(t - t_l, isi_1)$$

36 What About The Functional Form? We haven't even started yet and we are already considering a function of at least 2 variables: $t - t_l$ and $isi_1$. What about its functional form? Following Brillinger (1988) we discretize our time axis into bins of size $h$ small enough to contain at most 1 spike each. We are then led to a binomial regression problem. For analytical and computational convenience we use the logistic transform:

$$\log \left( \frac{\lambda(t - t_l, isi_1)\, h}{1 - \lambda(t - t_l, isi_1)\, h} \right) = \eta(t - t_l, isi_1)$$
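
To make the discretization concrete, here is a hedged sketch with plain `glm`; the talk itself uses the nonparametric gss machinery, so this parametric logistic fit is only a simplified stand-in, and `st` is an assumed spike-time vector like the simulated ones above.

```r
## Binomial regression sketch with plain glm(); the talk's actual fits
## are nonparametric (gss), so this is only a simplified stand-in.
h <- 0.001                                  # 1 ms bins
nb <- ceiling(max(st) / h)
event <- integer(nb)
event[ceiling(st / h)] <- 1L                # at most one spike per bin
t0 <- (seq_len(nb) - 1) * h                 # left edge of each bin
last <- findInterval(t0, st)                # index of last spike <= t0
ok <- last >= 2                             # need two previous spikes
t.tl <- t0[ok] - st[last[ok]]               # elapsed time since last spike
isi1 <- st[last[ok]] - st[last[ok] - 1]     # previous inter-spike interval
fit <- glm(event[ok] ~ t.tl + isi1, family = binomial)
summary(fit)
```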

37 Smoothing spline (0). Since cellular biophysics does not provide much guidance on how to build $\eta(t - t_l, isi_1)$, we have chosen to use the nonparametric smoothing spline approach implemented in the gss package. $\eta(t - t_l, isi_1)$ is then uniquely decomposed as:

$$\eta(t - t_l, isi_1) = \eta_{\emptyset} + \eta_l(t - t_l) + \eta_1(isi_1) + \eta_{l,1}(t - t_l, isi_1)$$

where, for instance, $\int \eta_1(u)\, du = 0$, the integral being evaluated on the definition domain of the variable $isi_1$.

38 Smoothing spline (1). Given data $Y_i = \eta(x_i) + \epsilon_i$, $i = 1, \ldots, n$, where $x_i \in [0, 1]$ and $\epsilon_i \sim N(0, \sigma^2)$, we want to find $\eta_\rho$ minimizing:

$$\frac{1}{n} \sum_{i=1}^{n} \big(Y_i - \eta_\rho(x_i)\big)^2 + \rho \int_0^1 \left( \frac{d^2 \eta_\rho}{dx^2} \right)^2 dx$$
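
A one-dimensional toy fit in R, in the spirit of the simple example on the next slide; base R's `smooth.spline` solves exactly this penalized problem (the talk's multivariate fits use the gss package instead):

```r
## One-dimensional smoothing-spline sketch on simulated data; the true
## curve and noise level are made-up choices for illustration.
set.seed(1)
x <- sort(runif(100))
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)  # true eta plus noise
fit <- smooth.spline(x, y)                   # smoothing parameter by GCV
plot(x, y)
lines(fit, col = "red")                      # fitted eta_rho
```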

39 Smoothing spline (2). A simple example with simulated data. [Figure: noisy observations y plotted against x.]

40 Smoothing spline (3). It can be shown (but it's not exactly trivial) that, for a given $\rho$, the solution of the functional minimization problem can be expressed on a finite basis:

$$\eta_\rho(x) = \sum_{\nu=0}^{m-1} d_\nu \phi_\nu(x) + \sum_{i=1}^{n} c_i R_1(x_i, x)$$

where the functions $\phi_\nu(\cdot)$ and $R_1(x_i, \cdot)$ are known.

41 Smoothing spline (4). [Figure: the constant and linear unpenalized terms, and several penalized basis functions (among them #40 and #60), each plotted against x.]

42 Smoothing spline (5): What about ρ? [Figure: fits of the simulated data for decreasing smoothing parameters λ = 0.5, 0.05, ..., 5e-04, 5e-05, 5e-06.]

43 Smoothing spline (6): Cross-validation. Ideally we would like the $\rho$ minimizing:

$$\frac{1}{n} \sum_{i=1}^{n} \big(\eta_\rho(x_i) - \eta(x_i)\big)^2$$

... but we don't know the true $\eta$. So we choose the $\rho$ minimizing:

$$V_0(\rho) = \frac{1}{n} \sum_{i=1}^{n} \big(\eta_\rho^{[i]}(x_i) - Y_i\big)^2$$

where $\eta_\rho^{[k]}$ is the minimizer of the delete-one functional:

$$\frac{1}{n} \sum_{i \neq k} \big(Y_i - \eta_\rho(x_i)\big)^2 + \rho \int_0^1 \left( \frac{d^2 \eta_\rho}{dx^2} \right)^2 dx$$
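
In base R's `smooth.spline`, both criteria are directly available: `cv = TRUE` selects the smoothing parameter by the ordinary leave-one-out criterion $V_0$ above, while the default uses generalized cross-validation; `x` and `y` are from the earlier simulated sketch.

```r
## Selecting the smoothing parameter by ordinary leave-one-out CV
## (the V_0 above) versus generalized cross-validation.
fit.cv  <- smooth.spline(x, y, cv = TRUE)   # ordinary leave-one-out CV
fit.gcv <- smooth.spline(x, y, cv = FALSE)  # GCV (the default)
c(cv = fit.cv$lambda, gcv = fit.gcv$lambda) # selected values
```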

44 Smoothing spline (7). [Figure: further fits of the simulated data for a decreasing sequence of λ values.]

45 Smoothing spline (8). [Figure: the rss, GCV and exact criteria as functions of log(λ), with the selected λ marked.]

46 Smoothing spline (9): the theory (worked out by Grace Wahba) also gives us confidence bands. [Figure: data, confidence band and actual η.]

47 Application to real data. We fitted the following additive model (that is, without the interaction term $\eta_{l,1}$) to the last 30 s of the data set: $\text{event} \sim (t - t_l) + isi_1$.

48 The tests applied to the first 30 s

49 The functional forms

50 Outline

51 I would like to warmly thank: Matthieu Delescluse, Ofer Mazor, Gilles Laurent, Jean Diebolt and Pascal Viot for working with (or allowing me to work) on the spike sorting problem. Antoine Chaffiol and the whole Kloppenburg lab (Univ. of Cologne) for providing high quality data and for being patient enough with a slow developer like myself. Chong Gu, Pierre Bohec, Vilmos Prokaj, Olivier Faugeras, Jonathan Touboul and Carl van Vreeswijk for working with me or providing feedback on spike train analysis. The AFM / Decrypthon project and the CNRS GDR 2904 for funding my research. The statisticians for showing me an example of a healthy scientific community. You for listening to / reading me up to that point.
