Dynamic network sampling
Slide 1: Dynamic network sampling
Steve Thompson, Simon Fraser University
Graybill Conference, Colorado State University, June 10, 2013
Slide 2: Dynamic network sampling
- The population of interest has spatial structure, often has network structure, and moves or changes over time.
- Designs for selecting sample units use spatial and network relationships and progress dynamically.
Slide 3: Population and sample processes
- Population: a stochastic process {Y_t}.
- Sample: a stochastic process {S_t}.
- Time t, such as day, with t = 0, 1, 2, ....
- The values Y_t describe units, such as locations, states, and relationships between units at time t.
- The sample S_t is the set of units in the sample at time t.
Slide 4: Spatial-temporal population model
The purpose of the dynamic population model is to evaluate the effectiveness of different sampling designs. We want it to be simple but to incorporate those characteristics that affect the effectiveness of sampling strategies:
- clustering, mixing, migration: point process
- movements within and among groups: small random displacements, MCMC selections, autoregressive processes
- insertions and deletions of objects: birth and death process, immigration and emigration
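The slide's three ingredients can be put together in a minimal simulation sketch. Nothing here is the speaker's actual model: the unit square, Gaussian displacements, and all rate values are illustrative assumptions.

```python
import random

def step_population(points, move_sd=0.05, birth_rate=0.1, death_rate=0.1,
                    rng=random):
    """One time step of a toy spatial-temporal population (a sketch).

    `points` is a list of (x, y) locations on the unit square; all rates
    are per-step probabilities and purely illustrative.
    """
    # movements within groups: small random displacements, clamped to [0, 1]
    moved = []
    for (x, y) in points:
        nx = min(1.0, max(0.0, x + rng.gauss(0, move_sd)))
        ny = min(1.0, max(0.0, y + rng.gauss(0, move_sd)))
        moved.append((nx, ny))
    # deletions: each unit independently dies or emigrates
    survivors = [p for p in moved if rng.random() > death_rate]
    # insertions: births appear near an existing unit, producing clustering
    for (x, y) in list(survivors):
        if rng.random() < birth_rate:
            survivors.append((min(1.0, max(0.0, x + rng.gauss(0, move_sd))),
                              min(1.0, max(0.0, y + rng.gauss(0, move_sd)))))
    return survivors

rng = random.Random(1)
pop = [(rng.random(), rng.random()) for _ in range(50)]
for _ in range(20):
    pop = step_population(pop, rng=rng)
```

With birth and death rates equal, the expected population size stays roughly constant while the spatial pattern clusters around surviving lineages.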
Slide 5: Dynamic network model
Builds on the spatial-temporal point process:
- link probabilities depend on distance between nodes, node characteristics, and current degree or target degree distribution
- renewal process for link formation, persistence, and dissolution
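A sketch of one such link-renewal step, under assumptions of my own: formation probability decays exponentially with distance, is damped once a node reaches a target degree, and existing links dissolve at a constant rate. The functional forms and parameter values are illustrative, not the speaker's.

```python
import math
import random

def update_links(points, links, form_scale=5.0, dissolve_rate=0.1,
                 target_degree=3, rng=random):
    """One renewal step for the link process (illustrative parameters)."""
    n = len(points)
    degree = [0] * n
    for (i, j) in links:
        degree[i] += 1
        degree[j] += 1
    # dissolution: each existing link persists with prob 1 - dissolve_rate
    links = {e for e in links if rng.random() > dissolve_rate}
    # formation: probability decays with distance, damped at target degree
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in links:
                continue
            d = math.dist(points[i], points[j])
            p = math.exp(-form_scale * d)
            if degree[i] >= target_degree or degree[j] >= target_degree:
                p *= 0.1
            if rng.random() < p:
                links.add((i, j))
    return links

rng = random.Random(2)
pts = [(rng.random(), rng.random()) for _ in range(30)]
links = set()
for _ in range(10):
    links = update_links(pts, links, rng=rng)
```

Run for many steps, the link set reaches a statistical equilibrium in which nearby nodes are much more likely to be connected than distant ones.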
Slide 6: Sampling process
A sampling design in the static situation is a procedure for selecting units to include in the sample. For the dynamic situation we use a sampling process that includes:
- an acquisition process, by which units are added to the sample
- an attrition process, by which units are removed from the sample
Slide 7: Equilibrium distributions
Many properties of the population and sample processes are ergodic and have stationary and limiting distributions. Sample size tends to increase when the acquisition rate exceeds the attrition rate, and to decrease when the attrition rate exceeds the acquisition rate.
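The sample-size balance can be made concrete with a toy birth-and-death chain, under the assumption (mine, not the slide's) that an average of a = 5 units are acquired per step and each sampled unit is removed independently with probability r = 0.1 per step; the equilibrium mean is then a / r = 50.

```python
import random

# Sample size as a birth-and-death chain: acquisitions arrive with mean
# a = 5 per step, each sampled unit leaves with probability r = 0.1.
# In equilibrium E[n] solves a = r * E[n], i.e. E[n] = a / r = 50.
rng = random.Random(3)
a, r = 5, 0.1
n = 0
sizes = []
for t in range(5000):
    n += sum(1 for _ in range(2 * a) if rng.random() < 0.5)  # acquisitions, mean a
    n -= sum(1 for _ in range(n) if rng.random() < r)        # attrition
    if t >= 500:                                             # discard burn-in
        sizes.append(n)
mean_size = sum(sizes) / len(sizes)  # settles near a / r = 50
```

Raising the acquisition rate or lowering the removal probability shifts the equilibrium up, matching the slide's qualitative statement.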
Slide 8: Uses of sampling designs
- Inference about population characteristics
- Experiments on sample units
- Interventions on sample units
Slide 9: Intervention strategy
- Select a sample of units from the population and make interventions on those units, changing their values.
- The objective is to change the population, not just the sample units, in a desired way.
- One strategy interacts with another.
Slide 10: Effect of an intervention
An intervention strategy consists of a sampling design for finding units in the population on which to make interventions, and a plan for the types of interventions to be made, which may depend on sample unit characteristics. A simple way to view the effect of a strategy is the difference in the resulting equilibrium distribution, compared with the equilibrium distribution without the strategy, or with a different strategy.
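Comparing equilibria with and without a strategy can be illustrated with a toy SIS-style epidemic that stands in for the slide's setting; the contact structure, all rates, and the choice to represent a seek-and-treat intervention simply as a higher recovery rate are assumptions of this sketch, not the speaker's model.

```python
import random

def sis_equilibrium(beta, recover, n=200, k=4, steps=2000, seed=4):
    """Long-run prevalence of a toy SIS epidemic with k random contacts
    per unit (all parameters illustrative)."""
    rng = random.Random(seed)
    contacts = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    infected = set(rng.sample(range(n), 10))
    prevalence = []
    for t in range(steps):
        nxt = set(infected)
        for i in infected:
            for j in contacts[i]:          # transmission along contacts
                if rng.random() < beta:
                    nxt.add(j)
            if rng.random() < recover:     # recovery back to susceptible
                nxt.discard(i)
        infected = nxt
        if t >= steps // 2:                # average over the second half
            prevalence.append(len(infected) / n)
    return sum(prevalence) / len(prevalence)

baseline = sis_equilibrium(beta=0.05, recover=0.1)
treated = sis_equilibrium(beta=0.05, recover=0.3)  # seek-and-treat: faster recovery
```

The effect of the intervention is read off exactly as the slide suggests: the difference between the two equilibrium prevalence levels.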
Slide 11: Natural sampling strategies
- A virus selects a sample of people with a link-tracing design.
- Insects select a sample of plants with a temporal-spatial distance design.
Slide 12: Dynamic network sampling
- Initially, and at an ongoing rate, units are selected using a conventional or spatial design.
- New units are added through link tracing; the tracing rate may depend on unit and link values.
- Units are removed from the sample through a removal probability or through deletion from the population.
Slide 13: Random walk in a static network
- The classic random walk in a graph starts with an arbitrary node and at each step selects at random one of the links out from the current node to reach the next node.
- The current sample S_t consists of that one node.
- When sampling is with replacement, the sequence S_0, S_1, S_2, ... is a Markov chain with a constant transition matrix.
- Connected components of the network form closed classes.
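The constant transition matrix has P[i][j] = 1/deg(i) for each link out of i, and for a connected, aperiodic graph the stationary distribution is proportional to node degree. A small self-contained check on a four-node example graph (the graph itself is invented for illustration):

```python
# Random walk on a small undirected graph: 0-1, 0-2, 1-2, 1-3.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
n = len(adj)

# Transition matrix: each link out of i is chosen with probability 1/deg(i).
P = [[0.0] * n for _ in range(n)]
for i, nbrs in adj.items():
    for j in nbrs:
        P[i][j] = 1.0 / len(nbrs)

# Power iteration toward the stationary distribution.
pi = [1.0 / n] * n
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Stationary probabilities are degree / (2 * number of links).
total_degree = sum(len(v) for v in adj.values())
expected = [len(adj[i]) / total_degree for i in range(n)]
```

Here `expected` is [2/8, 3/8, 2/8, 1/8], and the iterates `pi` converge to it, which is why a plain random walk oversamples high-degree nodes.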
Slide 14: Random walk in a dynamic network
- Links between nodes change over time, so the component structure changes; a random walk temporarily stuck in one component can eventually reach nodes of other components.
- Deletions and insertions of nodes can interrupt the walk, requiring reseeding.
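A sketch of such a walk, with reseeding when the current node is deleted or has no links out. The two-component example graph and the intermittent bridge link are assumptions chosen to show the walk escaping a component.

```python
import random

def dynamic_walk(links_at, nodes_at, steps, rng=random):
    """Random walk on a time-varying graph (a sketch). `links_at(t)` and
    `nodes_at(t)` give the link set and node set at time t; the walk
    reseeds uniformly when stuck."""
    path = []
    current = rng.choice(sorted(nodes_at(0)))
    for t in range(steps):
        nodes, links = nodes_at(t), links_at(t)
        nbrs = [j for (i, j) in links if i == current] + \
               [i for (i, j) in links if j == current]
        if current not in nodes or not nbrs:
            current = rng.choice(sorted(nodes))  # reseed
        else:
            current = rng.choice(nbrs)
        path.append(current)
    return path

# Two components {0, 1} and {2, 3}; a bridge 1-2 exists only at even times,
# so a walk stuck in one component can eventually cross to the other.
rng = random.Random(5)
nodes_at = lambda t: {0, 1, 2, 3}
links_at = lambda t: {(0, 1), (2, 3)} | {(1, 2)} if t % 2 == 0 else {(0, 1), (2, 3)}
path = dynamic_walk(links_at, nodes_at, 200, rng=rng)
```

In a static graph with these two components the walk could never leave its starting component; the occasional bridge is what makes the whole node set reachable.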
Slide 15: Random set designs
- The current sample S_t is a set of nodes.
- Acquisition of nodes includes tracing links out from the sample.
- Attrition occurs through removal of nodes from the sample or deletion from the population.
Slide 16: Simple random set design
- with-replacement selection
- independent link tracing
- independent removals
Slide 17: Design options
- Selection and removal probabilities depend on node and link values.
- With or without replacement.
- Links followed from active sample units only.
- Replacement, with activeness values between 0 and 1.
Slide 18: Desired features
- Trace rapidly at first; reseeding.
- Have a target sample size distribution.
- Find units with high degree or interesting values.
Slide 19: Epidemic example
- The HIV virus spreads with a dynamic network sampling design.
- Seek-and-treat designs are used for interventions to reduce incidence.
- The combination of interventions and counter-responses leads to a new equilibrium distribution.
Slide 20: What influences the equilibrium distribution
- Sample volume = number of nodes.
- Sample surface = number of links out; may be weighted by tracing probabilities.
- The surface-to-volume ratio tends to decrease as sample size increases.
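The two quantities are easy to compute for a given sample and link set; the small example graph below and the optional weighting hook are illustrative assumptions.

```python
def volume_and_surface(sample, links, trace_prob=None):
    """Sample volume (nodes in the sample) and surface (links crossing
    the sample boundary), optionally weighted by a tracing probability."""
    volume = len(sample)
    out_links = [(i, j) for (i, j) in links
                 if (i in sample) != (j in sample)]   # exactly one end inside
    if trace_prob is None:
        surface = len(out_links)
    else:
        surface = sum(trace_prob(e) for e in out_links)
    return volume, surface

# A 5-cycle with one chord; the sample {0, 1} has 3 links out:
# (1, 2), (1, 3), and (4, 0).
links = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)}
vol, surf = volume_and_surface({0, 1}, links)
```

As the sample grows to cover a component, its surface shrinks toward zero while its volume keeps growing, which is the mechanism behind the decreasing surface-to-volume ratio.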
Slide 21: Also affecting the equilibrium level
A new node entering the sample (HIV) tends to have higher degree than average for the population, and a higher proportion of links out than average for the sample, especially in the early stages of the epidemic.
Slides 22-24: [Figures: proportion infected by day]
More informationHomework 2 will be posted by tomorrow morning, due Friday, October 16 at 5 PM.
Stationary Distributions: Application Monday, October 05, 2015 2:04 PM Homework 2 will be posted by tomorrow morning, due Friday, October 16 at 5 PM. To prepare to describe the conditions under which the
More informationMarkov Chains and Stochastic Sampling
Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,
More informationMath Homework 5 Solutions
Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram
More informationAt the boundary states, we take the same rules except we forbid leaving the state space, so,.
Birth-death chains Monday, October 19, 2015 2:22 PM Example: Birth-Death Chain State space From any state we allow the following transitions: with probability (birth) with probability (death) with probability
More informationCh.5 Evolution and Community Ecology How do organisms become so well suited to their environment? Evolution and Natural Selection
Ch.5 Evolution and Community Ecology How do organisms become so well suited to their environment? Evolution and Natural Selection Gene: A sequence of DNA that codes for a particular trait Gene pool: All
More informationResearch supported by NSF grant DMS
Research supported by NSF grant DMS-1309998 A critical threshold for network driven sampling Karl Rohe; karlrohe@stat.wisc.edu UW-Madison Department of Statistics Sampling process Seed node Wave 1 Wave
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca
More informationCh. 4 - Population Ecology
Ch. 4 - Population Ecology Ecosystem all of the living organisms and nonliving components of the environment in an area together with their physical environment How are the following things related? mice,
More informationConvex Optimization CMU-10725
Convex Optimization CMU-10725 Simulated Annealing Barnabás Póczos & Ryan Tibshirani Andrey Markov Markov Chains 2 Markov Chains Markov chain: Homogen Markov chain: 3 Markov Chains Assume that the state
More informationThe cost/reward formula has two specific widely used applications:
Applications of Absorption Probability and Accumulated Cost/Reward Formulas for FDMC Friday, October 21, 2011 2:28 PM No class next week. No office hours either. Next class will be 11/01. The cost/reward
More informationUnit 1 Lesson 3 Population Dynamics. Copyright Houghton Mifflin Harcourt Publishing Company
Movin Out How can a population grow or get smaller? If new individuals are added to the population, it grows. If individuals are removed from a population, it gets smaller. The population stays at about
More informationMathematical and Algorithmic Models of Refugee Crises
Mathematical and Algorithmic Models of Refugee Crises Mentor: James Unwin, University of Illinois May 19, 2018 Importance What and Why: 65.6 Million Refugees in the Past Year Asylum Registration Legal
More informationStochastic inference in Bayesian networks, Markov chain Monte Carlo methods
Stochastic inference in Bayesian networks, Markov chain Monte Carlo methods AI: Stochastic inference in BNs AI: Stochastic inference in BNs 1 Outline ypes of inference in (causal) BNs Hardness of exact
More informationThe parallel replica method for Markov chains
The parallel replica method for Markov chains David Aristoff (joint work with T Lelièvre and G Simpson) Colorado State University March 2015 D Aristoff (Colorado State University) March 2015 1 / 29 Introduction
More informationCopyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and
Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and private study only. The thesis may not be reproduced elsewhere
More informationAn Introduction to Reversible Jump MCMC for Bayesian Networks, with Application
An Introduction to Reversible Jump MCMC for Bayesian Networks, with Application, CleverSet, Inc. STARMAP/DAMARS Conference Page 1 The research described in this presentation has been funded by the U.S.
More informationLab 3. Adding Forces with a Force Table
Lab 3. Adding Forces with a Force Table Goals To describe the effect of three balanced forces acting on a ring or disk using vector addition. To practice adding force vectors graphically and mathematically
More informationGozo College Boys Secondary Victoria Gozo, Malta Ninu Cremona
Gozo College Boys Secondary Victoria Gozo, Malta Ninu Cremona Half Yearly Examination 2010 2011 Form 3 J.L. GEOGRAPHY (Option) Time: 1½ Hours Name: Class: Answer all questions in the space provided. Write
More informationBayesian Methods for Machine Learning
Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),
More informationTemporal Point Processes the Conditional Intensity Function
Temporal Point Processes the Conditional Intensity Function Jakob G. Rasmussen Department of Mathematics Aalborg University Denmark February 8, 2010 1/10 Temporal point processes A temporal point process
More information