Statistical Data Models, Non-parametrics, Dynamics
Non-informative, proper and improper priors
- For a real quantity bounded to an interval, the standard prior is the uniform distribution.
- For an unbounded real quantity, the standard prior is uniform - but with what density?
- For a real quantity on a half-open interval (a scale parameter), the standard prior is f(s) = 1/s - but its integral diverges!
- Divergent priors are called improper - they can only be used with likelihoods that make the posterior integral converge.
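As a worked illustration (added; not on the original slide): with the scale prior $f(s) = 1/s$ and $n \ge 1$ observations $x_i$ from $N(0, s^2)$, the posterior

$$f(s \mid x_1, \dots, x_n) \propto \frac{1}{s} \prod_{i=1}^n \frac{1}{s} e^{-x_i^2/(2s^2)} \propto s^{-(n+1)} e^{-\sum_i x_i^2/(2s^2)}$$

has a finite integral over $s \in (0,\infty)$, so the improper prior combines with a convergent likelihood into a proper posterior.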
Dirichlet Distribution - prior for discrete distributions
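For reference (the standard definition; the slide's own formula is not in the extracted text), the Dirichlet density over a discrete distribution $p = (p_1, \dots, p_k)$ with parameters $\alpha_1, \dots, \alpha_k > 0$:

$$f(p \mid \alpha) = \frac{\Gamma\!\left(\sum_{i=1}^k \alpha_i\right)}{\prod_{i=1}^k \Gamma(\alpha_i)} \prod_{i=1}^k p_i^{\alpha_i - 1}, \qquad p_i \ge 0,\ \sum_{i=1}^k p_i = 1.$$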
Mean of Dirichlet - Laplace's estimator
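The mean of $\mathrm{Dir}(\alpha)$ is $E[p_i] = \alpha_i / \sum_j \alpha_j$. With a uniform prior ($\alpha_i = 1$) and observed counts $n_i$, the posterior mean is Laplace's estimator (the rule of succession):

$$\hat p_i = \frac{n_i + 1}{n + k}, \qquad n = \sum_{i=1}^k n_i.$$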
Occurrence table probability
Occurrence table probability - uniform prior:
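The formula on this slide did not survive extraction; the standard result it most plausibly shows (an assumption on my part) is the probability of a particular sequence of observations with occurrence counts $(n_1, \dots, n_k)$ under the uniform Dirichlet prior:

$$P(x_1, \dots, x_n) = \frac{(k-1)!\, \prod_{i=1}^k n_i!}{(n + k - 1)!}, \qquad n = \sum_{i=1}^k n_i.$$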
Non-parametric inference
How to perform inference about a distribution without assuming a distribution family?
- A distribution over the reals can be approximated by a piecewise uniform distribution, or by a mixture of real distributions.
- But how many parts? This is non-parametric inference.
Non-parametric inference: change-points, Rao-Blackwellization
- Given times for events (e.g. coal-mining disasters), infer a piecewise constant intensity function (the change-point problem).
- The state is a set of change-points with intensities in between. But how many pieces? This is non-parametric inference.
- MCMC: given the current state, propose a change in a segment boundary or intensity. But it is also possible to integrate out the proposed intensities (Rao-Blackwellization).
Probability ratio in MCMC
For a proposed merge of intervals j and j+1, with sizes proportional to $(\alpha, 1-\alpha)$: were the counts $n_j$ and $n_{j+1}$ obtained by tossing a coin with success probability $\alpha$, or not? Compute the model probability ratio as in HW1. Also, the total number of breakpoints has a Poisson prior distribution with parameter (average) $\lambda$. Probability ratio in favor of a split:
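The ratio itself did not survive extraction; as a sketch under the conventions above (uniform prior on the split's success probability, $k$ current breakpoints), the standard Bayes-factor computation gives

$$\frac{P(\text{split})}{P(\text{merge})} = \frac{n_j!\, n_{j+1}!}{(n_j + n_{j+1} + 1)!}\; \frac{1}{\alpha^{n_j} (1-\alpha)^{n_{j+1}}}\; \frac{\lambda}{k+1},$$

where the last factor comes from the Poisson($\lambda$) prior on the number of breakpoints; proposal probabilities may add further factors in the Metropolis-Hastings acceptance ratio.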
Averaging MCMC run, positions and number of breakpoints
Averaging MCMC run, positions with uniform test data
Mixture of Normals
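For reference (the standard model; the slide's own formula is not in the extracted text), the mixture of $k$ normals with mixing proportions $\pi_j$:

$$f(x) = \sum_{j=1}^k \pi_j\, N(x; \mu_j, \sigma_j^2), \qquad \pi_j \ge 0,\ \sum_{j=1}^k \pi_j = 1.$$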
Mixture of Normals: elimination of nuisance parameters
Mixture of Normals: elimination of nuisance parameters (integrate using the normalization constants of the Gaussian and Gamma distributions)
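The slide's integral did not survive extraction; as a minimal sketch of the technique, assume a flat (improper) prior on the component mean $\mu$ and an unnormalized Gamma$(a,b)$ prior $\tau^{a-1} e^{-b\tau}$ on the precision $\tau = 1/\sigma^2$. Writing $S = \sum_i (x_i - \bar{x})^2$ for the $m$ points in a component, the nuisance parameters integrate out by recognizing Gaussian and Gamma normalization constants:

$$\int_0^\infty\!\!\int_{-\infty}^{\infty} \left(\frac{\tau}{2\pi}\right)^{m/2} e^{-\frac{\tau}{2}\sum_i (x_i-\mu)^2}\, \tau^{a-1} e^{-b\tau}\, d\mu\, d\tau = \frac{(2\pi)^{-(m-1)/2}}{\sqrt{m}}\, \frac{\Gamma\!\left(a + \tfrac{m-1}{2}\right)}{\left(b + S/2\right)^{a + (m-1)/2}}.$$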
Matlab Mixture of Normals, MCMC (AutoClass method)

function [lh,lab,trlpost,trm,trstd,trlab,trct,nbounc]= mmnonu1(x,N,k,labi,NN);
%[lh,lab,trlpost,trm,trstd,trlab,trct,nbounc]=
%    MMNONU1(x,N,k,labi,NN);
% 1D MCMC mixture modelling
% inputs:
% x    - 1D data column vector
% N    - MCMC iterations
% k    - number of components
% labi - initial component labelling of the data vector (lab: final labelling)
% NN   - thinning (optional)
Matlab Mixture of Normals, MCMC

function [lh,lab,trlpost,trm,trstd,trlab,trct,nbounc]= mmnonu1(x,N,k,labi,NN);
% outputs:
% trlpost - thinned trace of log probability (optional)
% trm     - thinned trace of means vector (optional)
% trstd   - thinned trace of standard deviations (optional)
% trlab   - thinned trace of labels vector, size(x,1) by N/NN (optional)
% trct    - thinned trace of mixing proportions
Matlab Mixture of Normals, MCMC

N=10000; NN=100;
x=[randn(100,1)-1; randn(100,1)*3; randn(100,1)+1]; % 3-component synthetic data
k=2; labi=ceil(rand(size(x))*k);
[llhc,lab2,trl,trm,trstd,trlab,trct,nbounc]= mmnonu1(x,N,k,labi,NN);
[llhc2,lab2,trl2,trm2,trstd2,trlab2,trct2,nbounc]= mmnonu1(x,N,k,lab2,NN); % restart from last labelling
% (similarly for k=3, 4, 5)
Matlab Mixture of Normals, MCMC - the three components and the joint empirical distribution (figure)
Matlab Mixture of Normals, MCMC Putting them together makes the identification seem harder.
Matlab Mixture of Normals, MCMC - K=2: trace of means and standard deviations (figure)
Matlab Mixture of Normals, MCMC - K=3: burn-in progressing; trace of means and standard deviations (figure)
Matlab Mixture of Normals, MCMC - K=3: burnt in; trace of means and standard deviations (figure)
Matlab Mixture of Normals, MCMC - K=4: low probability; no focus, no interpretation as 4 clusters (figure: means and standard deviations)
Matlab Mixture of Normals, MCMC - K=5: low probability (figure: means and standard deviations)
Matlab Mixture of Normals, MCMC - trace of state labels. X sample: points 1-100 from N(-1,1), 101-200 from N(0,3^2), 201-300 from N(1,1). (Figure: label trace for the unsorted and sorted sample.)
Mixtures of multivariate normals
This works the same way, but instead of a Gamma distribution for the precision (inverse variance) we use the Wishart distribution, a matrix-valued distribution over precision (inverse covariance) matrices. It competes well with both clustering and Expectation Maximization, which are prone to overfitting (and clustering cannot handle overlapping components).
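For reference (standard definition, added here), the Wishart density over a $d \times d$ positive definite matrix $\Lambda$, with scale matrix $V$ and $\nu$ degrees of freedom, is

$$f(\Lambda \mid V, \nu) \propto |\Lambda|^{(\nu - d - 1)/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}(V^{-1} \Lambda)\right),$$

which reduces to a Gamma density when $d = 1$.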
Dynamic systems, time series
- An abundance of linear prediction models exists.
- For non-linear and chaotic systems, methods were developed in the 1990s (Santa Fe); see Gershenfeld and Weigend, The Future of Time Series.
- Online/offline: prediction/retrodiction.
Hidden Markov Models
Given a sequence of discrete signals x_i: is there a model likely to have produced x_i from a sequence of states s_i of a finite Markov chain?
- P(. | s) - transition probability in state s
- S(. | s) - signal probability in state s
Applications: speech recognition, bioinformatics, ...
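For reference (the standard factorization, added here), the joint probability of a signal sequence $x_{1:T}$ and state sequence $s_{1:T}$ is

$$P(x_{1:T}, s_{1:T}) = P(s_1)\, S(x_1 \mid s_1) \prod_{t=2}^{T} P(s_t \mid s_{t-1})\, S(x_t \mid s_t),$$

and MCMC over HMMs samples the parameters $P$, $S$ and the hidden states $s_{1:T}$ given the observed $x_{1:T}$.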
Hidden Markov Models

function [Pn,Sn,stn,trP,trS,trst,tll]= hmmsim(a,N,n,s,prop,Po,So,sto,NN);
%[Pn,Sn,stn,trP,trS,trst,tll]=HMMSIM(A,N,n,s,prop,Po,So,sto,NN);
% Compute trace of posterior for HMM parameters
% A    - the sequence of signals
% N    - the length of trace
% n    - number of states in Markov chain
% s    - number of signal values
% prop - proposal stepsize
% optional inputs:
% Po   - starting transition matrix (each of n columns a discrete pdf in n-vector)
% So   - starting signal matrix (each of n columns a discrete pdf in s-vector)
% sto  - starting state sequence (congruent to vector A)
% NN   - thinning of trace, default 10
% outputs:
% Pn   - last transition matrix in trace
% Sn   - last signal emission matrix
% stn  - last hidden state vector (congruent to A)
% trP  - trace of transition matrices
% trS  - trace of signal matrices
% trst - trace of hidden state vectors
% tll  - trace of log likelihood
Evidence Based Education: EBE Home Page
Evidence is often incomplete or equivocal. One of the problems that commonly afflicts politicians is feeling the need to act, or at least to be seen to be acting, despite the absence of any clear evidence about what action is most appropriate. A more mature response in many areas of educational policy would be to acknowledge that we do not really know enough to support a clear decision. Claims that have been enshrined in textbooks are suddenly unprovable ("The Truth Wears Off", Lehrer, 2010).
Hidden Markov Models - 2 states, 2 signals: over 100,000 iterations, burn-in is visible (figure: traces of the P transition matrix and S signal matrix)
3 states (figure: trace)
2 vs 3 states: log probability traces (figure)
MCMC Convergence
Kolmogorov-Smirnov test
- Is a sample of n points from a given distribution? D = max of the absolute difference between the empirical and theoretical CDFs; compute D*sqrt(n), and if it is larger than ca 2, reject.
- Are two samples of sizes n1, n2 from the same distribution? D = max absolute difference between the two empirical CDFs; test statistic D*sqrt(n1*n2/(n1+n2)).
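A minimal Matlab sketch of the two-sample statistic, using base Matlab only; the data are illustrative, and the slide's "ca 2" cutoff is rough (the 5% critical value is about 1.36):

% Two-sample Kolmogorov-Smirnov statistic, computed directly
x1 = randn(200,1);                      % sample 1 (illustrative data)
x2 = randn(150,1);                      % sample 2
xs = sort([x1; x2]);                    % pooled evaluation points
F1 = arrayfun(@(t) mean(x1 <= t), xs);  % empirical CDF of sample 1
F2 = arrayfun(@(t) mean(x2 <= t), xs);  % empirical CDF of sample 2
D  = max(abs(F1 - F2));                 % max absolute CDF difference
n1 = numel(x1); n2 = numel(x2);
stat   = D * sqrt(n1*n2/(n1+n2));
reject = stat > 2;                      % slide's rough rule of thumb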
Block-wise KS test for 4 MCMC runs, red/black = non-reject (figure: four panels labelled "long", "short", "conv", "X")
Berry and Linoff have eloquently stated their preferences with the often-quoted sentence: "Neural networks are a good choice for most classification problems when the results of the model are more important than understanding how the model works." Neural networks typically give the right answer.
Dynamic Systems and Takens' Theorem
- Lag vectors (x_i, x_{i-1}, ..., x_{i-T}), for all i, occupy a submanifold of E^T, if T is large enough.
- This manifold is diffeomorphic to the original state space and can be used to create a good dynamic model.
- Takens' theorem assumes no noise, so the embedding must be empirically verified.
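A minimal Matlab sketch of building the lag-vector (delay) embedding; the series and the lag dimension T are illustrative assumptions:

% Delay embedding: row i of V is the lag vector (x_t, x_{t-1}, ..., x_{t-T+1})
T = 3;                          % embedding dimension (assumed for illustration)
x = sin(0.05*(1:2000)').^3;     % placeholder scalar series; use real data here
m = numel(x) - T + 1;           % number of lag vectors
V = zeros(m, T);
for j = 1:T
    V(:, j) = x(T-j+1 : end-j+1);  % column j is x delayed by j-1 steps
end
% For large enough T, the rows of V trace out a manifold diffeomorphic to the
% attractor (Takens' theorem, noise-free case); a predictor maps V(i,:) to
% x(i+T), e.g. by averaging over nearest neighbours on the manifold.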
Santa Fe 1992 Competition
- Unstable laser
- Intensive care unit data (apnea)
- Exchange rate data
- Synthetic series with drift
- White dwarf star data
- Bach's unfinished fugue
Stereoscopic 3D view of state space manifold, series A (Laser)
The points seem to lie on a surface, which means that a lag vector of length 3 gives good prediction of the time series. The surface is either fitted to a training batch, or produced on the fly from neighboring data points (possibly downweighting very old points).
Figure in book is misleading: the origin is where the surface touches the ground.
Variational Bayes
True trajectory in state space (Valpola-Karhunen 2002)
Reconstructed trajectory in inferred state space
Chapman-Kolmogorov version of Bayes' rule

$$f(\theta_t \mid D_t) \propto f(d_t \mid \theta_t) \int f(\theta_t \mid \theta_{t-1})\, f(\theta_{t-1} \mid D_{t-1})\, d\theta_{t-1}$$
Observation and video based particle filter tracking
- Defence: tracking with heterogeneous observations
- Crowd analysis: tracking from video
Cycle in particle filter
Time step cycle: importance (weighted) sample -> resampled ordinary sample -> diffused sample -> weighted by likelihood. X = state, Z = observation.
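A minimal Matlab sketch of one such cycle (a bootstrap particle filter) for an assumed 1D random-walk state with Gaussian observation noise; all values and noise levels are illustrative, not the lecture's model:

% One particle filter time step: resample -> diffuse -> reweight
Np = 1000;
xp = randn(Np,1);                % particles approximating f(x_{t-1} | Z_{1:t-1})
w  = ones(Np,1)/Np;              % importance weights
z  = 0.7;                        % hypothetical new observation Z_t
% 1. resample: turn the weighted sample into an ordinary (unweighted) sample
c   = cumsum(w);
idx = arrayfun(@(u) find(c >= u, 1), rand(Np,1));
xp  = xp(idx);
% 2. diffuse: propagate particles through the state dynamics f(x_t | x_{t-1})
xp = xp + 0.1*randn(Np,1);       % assumed random-walk dynamics
% 3. reweight by the observation likelihood f(z_t | x_t)
w = exp(-0.5*((z - xp)/0.2).^2); % assumed Gaussian observation noise, std 0.2
w = w / sum(w);                  % weighted sample approximates f(x_t | Z_{1:t})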
Particle filter - general tracking