1 PSY318 Week 10
2 Reminder: homework grades and comments are on OAK; my solutions are on the course web site
3 Modeling Response Times
4 Nosofsky & Palmeri (1997)
5 Nosofsky & Palmeri (1997)
6 Palmeri (1997)
7
8 Response Probabilities: a table with one row per stimulus (S1, S2, S3, ..., Sm) and one column per response (A, B)
9 Response Probabilities and Response Times: for each stimulus (S1, S2, S3, ..., Sm) and each response (A, B), the data are a response probability and a list of raw response times (e.g., 432, 675, 434, 754, ...)
10 Response Probabilities and Response Times (same table): these can be different stimuli or different conditions
11 Response Probabilities and Response Times (same table): how do we summarize the RT data, and what do we actually fit?
12 mean RT: RT_j = (1/n) Σ_{i=1..n} rt_i; the overall mean RT in a given condition, or for a given stimulus; or the mean RT conditionalized on the response
13 mean RT: RT_j = (1/n) Σ_{i=1..n} rt_i; cumulative distribution function (CDF): F_j(t) = (# elements in sample ≤ t) / n
14 mean RT: RT_j = (1/n) Σ_{i=1..n} rt_i; cumulative distribution function (CDF): F_j(t) = (# elements in sample ≤ t) / n; probability density function (PDF): estimated with a histogram or by kernel density estimation
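For concreteness, a minimal MATLAB sketch of these two summaries (my own illustration, not from the slides; the sample of RTs is made up):

% hypothetical sample of response times in ms for one condition
rt = [432 675 434 754 421 798 509 686 523 602 776];
n  = numel(rt);

% mean RT: (1/n) * sum of the rt_i
mean_rt = mean(rt);

% empirical CDF: F(t) = (# elements in sample <= t) / n
t = 0:10:1000;                              % grid of times at which to evaluate the CDF
F = arrayfun(@(x) sum(rt <= x) / n, t);

plot(t, F); xlabel('t (ms)'); ylabel('P(RT < t)');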
15
16 Histograms vs Kernel Density Estimation
17 pts = 0:.1:20; [pdf_kernel, t_kernel] = ksdensity(data, pts, 'support', 'positive', 'function', 'pdf'); the 'support' option can be 'positive' or 'unbounded'; the 'function' option can be 'pdf' or 'cdf'
18 mean RT: RT_j = (1/n) Σ_{i=1..n} rt_i; cumulative distribution function (CDF): F_j(t) = (# elements in sample ≤ t) / n; probability density function (PDF): histogram or kernel density estimation; hazard function: h(t) = f(t) / (1 - F(t)). From actuarial science: the probability that you will die in the next instant given that you're still alive; h(t) increases as you get older and older. Sigh. More generally: the probability that a process will terminate in the next instant given that it has not terminated yet
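One way to estimate the hazard function in MATLAB is to reuse ksdensity for both the density and the CDF (a sketch under my own assumptions, not code from the slides; the shifted-exponential data are made up for illustration):

data = 0.3 + exprnd(0.5, 1000, 1);          % hypothetical RTs in seconds
pts  = 0.01:0.01:5;                         % avoid t = 0, where F(t) = 0

f = ksdensity(data, pts, 'support', 'positive', 'function', 'pdf');
F = ksdensity(data, pts, 'support', 'positive', 'function', 'cdf');

h = f ./ (1 - F);                           % hazard: h(t) = f(t) / (1 - F(t))
                                            % the estimate is unstable far in the right tail, where 1 - F(t) is near 0
plot(pts, h); xlabel('t (s)'); ylabel('h(t)');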
19
20
21 mean RT: RT_j = (1/n) Σ_{i=1..n} rt_i (aren't means enough?); cumulative distribution function (CDF): F_j(t) = (# elements in sample ≤ t) / n; probability density function (PDF): histogram or kernel density estimation; hazard function: h(t) = f(t) / (1 - F(t))
22 What can Response Time Distributions tell you?
23 What can Response Time Distributions tell you? So why would anyone ever care about a response time distribution?
24 What can Response Time Distributions tell you? Mean response times tell you everything you need to know
25 What can Response Time Distributions tell you? Right?
26 What can Response Time Distributions tell you? Imagine an experiment with three conditions A, B, and C. Mean RTs are µ_A < µ_B < µ_C
27 What can Response Time Distributions tell you? Imagine an experiment with three conditions A, B, and C. Mean RTs are µ_A < µ_B < µ_C. (Two panels plot the CDFs P(RT < t) for A, B, and C.) Different qualitative patterns of response time variability can give rise to the same ordering of mean RTs. These different possibilities could imply different mechanisms.
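A hedged illustration (my own simulated example, not from the slides) of two ways to produce the same ordering of mean RTs with qualitatively different CDFs: shifting the whole distribution versus stretching its slow tail.

n = 10000;
base = exprnd(100, n, 1);                 % a common exponential component (ms)

% pattern 1: conditions differ by a pure shift of the whole distribution
A1 = 300 + base;  B1 = 350 + base;  C1 = 400 + base;

% pattern 2: conditions differ in the scale (slow tail) of the distribution
A2 = 300 + exprnd(100, n, 1);
B2 = 300 + exprnd(150, n, 1);
C2 = 300 + exprnd(200, n, 1);

t = 0:5:1500;
subplot(1,2,1); hold on;
for d = {A1, B1, C1}
    plot(t, arrayfun(@(x) mean(d{1} <= x), t));    % empirical CDF of each condition
end
title('shift'); xlabel('t (ms)'); ylabel('P(RT < t)');

subplot(1,2,2); hold on;
for d = {A2, B2, C2}
    plot(t, arrayfun(@(x) mean(d{1} <= x), t));
end
title('scale'); xlabel('t (ms)'); ylabel('P(RT < t)');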
28-36 (figure slides) CDF plots of P(RT < t) for conditions A, B, and C, annotated with an example task: A = verify "animal", B = verify "bird", C = verify "sparrow", each verified by a perception stage followed by a "Yes" response. The slides build up this annotation for two different qualitative patterns of CDFs.
37 What can Response Time Distributions tell you? Wow. That's actually the kind of differences we'd predict. I need to go look at my data again.
38
39 Exemplar-Based Random Walk (EBRW) Model Nosofsky & Palmeri 1997, Palmeri 1997
40 A2 B1 A1 B2 A3 B3 From the Generalized Context Model (Nosofsky, 1986) - categories are represented in terms of stored exemplars - exemplars are represented as points in MDS space - similarity is an exponentially decreasing function of distance
41 A2 B1 A1 B2 A3 B3 From the Generalized Context Model (Nosofsky, 1986) - categories are represented in terms of stored exemplars - exemplars are represented as points in MDS space - similarity is an exponentially decreasing function of distance From Instance Theory (Logan, 1988) - each experience is stored as a new instance (exemplar) - exemplars race to be retrieved
42-47 Race (figure slides)
48 Race Logan (1988) I1 I2 I3 Rule
49 Race Logan (1988) I1 I2 I3 Rule early in learning (non-automatic)
50
51 Race Logan (1988) I1 I2 I3 I4 I5 I6 I7 Rule
52 Race Logan (1988) I1 I2 I3 I4 I5 I6 I7 Rule later in learning (automatic)
53
54-58 Race, Nosofsky & Palmeri (1997): exemplars A1 A2 A3 A4 and B1 B2 B3 B4 (figure slides)
59-60 EBRW, Nosofsky & Palmeri (1997): d_ij = [ Σ_m |x_im - x_jm|^r ]^(1/r), the distance between probe i and exemplar j in MDS space; s_ij = exp(-c d_ij^p), similarity is an exponentially decreasing function of distance; f_ij(t) = s_ij exp(-s_ij t), the retrieval time for exemplar j given probe item i is exponentially distributed with rate s_ij
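A short MATLAB sketch of these three equations (my own illustration; the MDS coordinates and the c, r, and p values are made up):

% exemplar coordinates in a 2-dimensional MDS space (hypothetical)
exemplars = [1 2; 2 1; 1 1; 4 5; 5 4; 5 5];      % rows = stored exemplars
probe     = [1.5 1.5];                            % test item i
c = 2;  r = 2;  p = 1;                            % sensitivity, distance metric, similarity exponent

d = (sum(abs(exemplars - probe).^r, 2)).^(1/r);   % distance d_ij between probe and each exemplar
s = exp(-c .* d.^p);                              % similarity s_ij
t_retrieve = exprnd(1 ./ s);                      % exponential retrieval times with rate s_ij
                                                  % (exprnd takes the mean, which is 1/rate)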
61 Properties of Races of Exponentials: Suppose there are n independent runners with exponentially distributed finishing times with rates λ1, λ2, λ3, ..., λn. What is the probability that runner j wins?
62 Properties of Races of Exponentials: Suppose there are n independent runners with exponentially distributed finishing times with rates λ1, λ2, λ3, ..., λn. What is the probability that runner j wins? P(j wins) = λ_j / Σ_k λ_k. Luce's choice model falls out of the assumption of racing exponentials
63-65 Properties of Races of Exponentials: P(j wins) = λ_j / Σ_k λ_k, so P(an A runner wins) = Σ_{j in A runners} λ_j / Σ_k λ_k = Σ_{j in A runners} λ_j / ( Σ_{j in A runners} λ_j + Σ_{j in B runners} λ_j )
66 Properties of Races of Exponentials: with the similarities as the rates, P(A | S_i) = Σ_{j in A} s_ij / ( Σ_{j in A} s_ij + Σ_{j in B} s_ij ): this is the GCM
67 Properties of Races of Exponentials: Suppose there are n independent runners with exponentially distributed finishing times with rates λ1, λ2, λ3, ..., λn. What is the probability that runner j wins? P(j wins) = λ_j / Σ_k λ_k. What is the average time of the winner?
68 Properties of Races of Exponentials: Suppose there are n independent runners with exponentially distributed finishing times with rates λ1, λ2, λ3, ..., λn. What is the probability that runner j wins? P(j wins) = λ_j / Σ_k λ_k. What is the average time of the winner? E[t] = 1 / Σ_k λ_k: the more runners there are, the faster the winning time, a speedup with experience
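Both properties are easy to check by simulation; a sketch with made-up rates:

lambda = [0.5 1.0 2.0];                      % rates of the three runners (hypothetical)
nrace  = 100000;
t = exprnd(repmat(1 ./ lambda, nrace, 1));   % exponential finishing times (exprnd takes the mean = 1/rate)
[twin, winner] = min(t, [], 2);              % time and identity of the winner in each race

p_win_sim    = histcounts(winner, 0.5:1:3.5) / nrace;   % P(runner j wins), simulated
p_win_theory = lambda / sum(lambda);                    % lambda_j / sum_k lambda_k

mean_win_sim    = mean(twin);                % average time of the winner, simulated
mean_win_theory = 1 / sum(lambda);           % 1 / sum_k lambda_k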
69 Logan's (1988) Instance Theory (stimulus, perceptual processing, retrieval race, response, over time): - assumes the retrieval times are distributed as Weibulls, a generalization of the exponential distribution - the only runners in the race are exact matches to the stimulus - predicts response times and speedups in response times - cannot predict accuracy or changes in accuracy with learning
70 Nosofsky and Palmeri (1997) EBRW stimulus perceptual processing retrieval race response time - assumes the retrieval times are distributed as exponentials - runners race according to their similarity* - can predict accuracy and response times - the winner of each race provides incremental evidence that drives a random walk decision process * instance theory assumes that all mismatches have similarity 0
71 Nosofsky and Palmeri (1997) EBRW stimulus perceptual processing response time - assumes the retrieval times are distributed as exponentials - runners race according to their similarity - can predict accuracy and response times - the winner of each race provides incremental evidence that drives a random walk decision process
72 Nosofsky and Palmeri (1997) EBRW stimulus perceptual processing response TR time
73 Nosofsky and Palmeri (1997) EBRW time
74 Nosofsky and Palmeri (1997) EBRW A B time
75 Nosofsky and Palmeri (1997) EBRW: probe exemplar memory with test object i (random walk between a category A boundary and a category B boundary, plotted against time)
76 Nosofsky and Palmeri (1997) EBRW: exemplars race to be retrieved with rates given by their similarity s_ij
77 Nosofsky and Palmeri (1997) EBRW: imagine exemplar j wins and j is a member of category A
78 Nosofsky and Palmeri (1997) EBRW: Δx = +1 since the winner is from category A
79 Nosofsky and Palmeri (1997) EBRW: Δt = α + t_w, where Δt is the step time, t_w is the retrieval time of the winner, and α is the minimum step time
80-90 Nosofsky and Palmeri (1997) EBRW (figure slides): the random walk steps toward the A or B boundary as successive exemplars are retrieved, here terminating at the A boundary with an A response
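Putting the pieces together, a minimal sketch of a single simulated EBRW trial (my own implementation of the ideas above; the exemplar coordinates, c, the criterion K, and α are made up, and the sketch assumes symmetric +K/-K boundaries):

coords = [1 2; 2 1; 1 1; 4 5; 5 4; 5 5];     % stored exemplars in MDS space (hypothetical)
labels = [1 1 1 2 2 2];                      % 1 = category A, 2 = category B
probe  = [1.5 1.5];                          % test item i
c = 2;  K = 5;  alpha = 0.05;                % sensitivity, random walk criterion, minimum step time

s = exp(-c .* sqrt(sum((coords - probe).^2, 2)));   % similarities s_ij (r = 2, p = 1)

evidence = 0;  t = 0;
while abs(evidence) < K
    t_retrieve = exprnd(1 ./ s);             % all exemplars race; exponential with rate s_ij
    [tw, j] = min(t_retrieve);               % winner of the race and its retrieval time
    if labels(j) == 1
        evidence = evidence + 1;             % winner from category A: step toward the A boundary
    else
        evidence = evidence - 1;             % winner from category B: step toward the B boundary
    end
    t = t + alpha + tw;                      % each step takes alpha plus the winner's retrieval time
end
if evidence >= K
    response = 'A';
else
    response = 'B';
end
decision_time = t;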
91 Easy Stimulus vs. Hard Stimulus: two panels, each showing perceptual processing (TR) followed by the decision process
92 Nosofsky & Palmeri (1997)
93 Early in Learning vs. Later in Learning: two panels, each showing perceptual processing (TR) followed by the decision process
94 Palmeri (1997)
95 perceptual processing (TR), then a race between rules and exemplars; the rule-based algorithmic mechanism follows Logan (1988)
96
97 Probabilistic versus Deterministic Response Rules: stimulus, perceptual processing, category knowledge, decision, response
98 Probabilistic versus Deterministic Response Rules: objects are represented as points in multidimensional psychological space
99 Probabilistic versus Deterministic Response Rules: E_A = evidence the object is in Category A; E_B = evidence the object is in Category B
100 Probabilistic versus Deterministic Response Rules: P(A | i) = E_A(i) / ( E_A(i) + E_B(i) ): probabilistic response rule
101 Probabilistic versus Deterministic Response Rules: respond A if E_A(i) > E_B(i), else respond B
102 Probabilistic versus Deterministic Response Rules: respond A if E_A(i) - E_B(i) > 0, else respond B
103 Probabilistic versus Deterministic Response Rules: respond A if E_A(i) - E_B(i) > c, else respond B: deterministic response rule
104 imagine comparing two models: model 1: stimulus, perceptual processing, rule-based knowledge, deterministic response rule, response; model 2: stimulus, perceptual processing, exemplar-based knowledge, probabilistic response rule, response
105 if testing category knowledge, equate decision rules: model 1: stimulus, perceptual processing, rule-based knowledge, deterministic response rule, response; model 2a: stimulus, perceptual processing, exemplar-based knowledge, deterministic response rule, response
106 Probabilistic versus Deterministic Response Rules: respond A if E_A(i) - E_B(i) > c + noise, else respond B: deterministic response rule (with noise)
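A small MATLAB sketch contrasting the three rules (my own illustration; the evidence values, criterion, and noise level are made up):

EA = 0.7;  EB = 0.3;         % hypothetical evidence for categories A and B
c  = 0;                      % criterion for the deterministic rule
noise_sd = 0.2;              % criterial noise for the noisy deterministic rule
ntrial = 10000;

% probabilistic response rule: respond A with probability EA / (EA + EB)
respA_prob = rand(ntrial, 1) < EA / (EA + EB);

% deterministic response rule: respond A whenever EA - EB > c (always the same response)
respA_det = (EA - EB > c) * ones(ntrial, 1);

% deterministic rule with criterial noise: respond A when EA - EB > c + noise
respA_detnoise = (EA - EB) > c + noise_sd .* randn(ntrial, 1);

[mean(respA_prob) mean(respA_det) mean(respA_detnoise)]   % proportion of A responses under each rule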
107 Ashby, F. G., & Maddox, W. T. (1993). Relations between prototype, exemplar, and decision bound models of categorization. Journal of Mathematical Psychology, 37. [PDF]
108 Properties of Races of Exponentials: P(j wins) = λ_j / Σ_k λ_k; P(an A runner wins) = Σ_{j in A runners} λ_j / Σ_k λ_k; P(A | S_i) = Σ_{j in A} s_ij / ( Σ_{j in A} s_ij + Σ_{j in B} s_ij ): this is the GCM
109 What if racing exponentials feed a random walk (like EBRW)?
110 What if racing exponentials feed a random walk (like EBRW)? P(A | S_i) = ( Σ_{j in A} s_ij )^K / [ ( Σ_{j in A} s_ij )^K + ( Σ_{j in B} s_ij )^K ]: EBRW predicts more deterministic decision rules
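A quick numeric check (a sketch; the summed similarities are made up) of how the random walk criterion K pushes the predicted choice probabilities toward a deterministic rule:

SA = 0.6;  SB = 0.4;                          % summed similarities to categories A and B (hypothetical)
K  = [1 2 4 8 16];                            % random walk criterion
pA = SA.^K ./ (SA.^K + SB.^K);                % K = 1 recovers the GCM; larger K approaches a deterministic rule
disp([K; pA]);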
111
112 Diffusion Model Ratcliff (1979), Ratcliff & Rouder (1998)
113 diffusion is a random walk in the limit as Δt approaches zero time
114 evidence time
115 decision A boundary evidence decision B boundary time
116 Yes boundary evidence No boundary time
117 decision A boundary decision time evidence decision B boundary time
118 decision A boundary evidence drift rate diffusion coefficient (noise) starting point decision B boundary time
119 move boundaries in to stress speed over accuracy (homunculus)
120 move the starting point to bias one response over another response (homunculus)
121 drift rate and diffusion coefficient (noise) are determined by stimuli and knowledge
122 RT = TR + decision time
123
124
125 Diffusion model: - naturally predicts shapes of observed RT distributions - naturally predicts speed-accuracy tradeoffs - parameters are identifiable - experimental manipulations map onto expected parameters
126 Diffusion model: - naturally predicts shapes of observed RT distributions - naturally predicts speed-accuracy tradeoffs - parameters are identifiable - experimental manipulations map onto expected parameters - there is no theory of drift rates - there is no theory of what happens during TR - there is no theory of how starting point and bounds change but see: Purcell, B.A., Schall, J.D., Logan, G.D., & Palmeri, T.J. (2012). Gated stochastic accumulator model of visual search decisions in FEF. Journal of Neuroscience, 32(10). [PDF] Mack, M.L., & Palmeri, T.J. (2010). Modeling categorization of scenes containing consistent versus inconsistent objects. Journal of Vision, 10(3):11. [PDF] Nosofsky, R.M., & Palmeri, T.J. (2014). Exemplar-based random walk model. To appear in J.R. Busemeyer, J. Townsend, Z.J. Wang, & A. Eidels (Eds.), Mathematical and Computational Models of Cognition, Oxford University Press. [PDF]
127 (figure: evidence starts at z between boundaries 0 and a; each step of size dx takes time dt)
evidence = z;
while (evidence < a && evidence > 0)
    time = time + dt;
    r = rand;
    if r < f(mu,sigma)
        evidence = evidence + dx;
    else
        evidence = evidence - dx;
    end
end
128 dx = σ √dt; p = (1/2)(1 + (μ/σ)√dt); q = (1/2)(1 - (μ/σ)√dt)
129 dx = σ √dt; p = (1/2)(1 + (μ/σ)√dt) = 1/2 + μ√dt/(2σ); q = (1/2)(1 - (μ/σ)√dt) = 1/2 - μ√dt/(2σ); μ = drift rate, σ = noise, so μ/σ is a signal-to-noise ratio; the probability of moving up by dx (p) or down by dx (q = 1 - p) is a simple function of the signal-to-noise ratio scaled by the square root of the time increment
130
function [time,which] = diffusion_simulation(mu,s2,tr,a,z)
time = tr;        % start the time at TR
tau = .0001;      % time per step of the diffusion (more accurate with tau= )
evidence = z;     % starting point
while (evidence < a && evidence > 0)
    time = time + tau;
    dx = sqrt(s2.*tau);
    r = rand;
    p = 0.5.*(1 + mu.*dx./s2);
    if r < p
        evidence = evidence + dx;
    else
        evidence = evidence - dx;
    end
end
if evidence < 0
    which = 0;
end
if evidence > a
    which = 1;
end
131
132 core free parameters (figure): boundary a, starting point z (between 0 and a), drift rate, and perceptual processing time TR
133 trial-by-trial variability in TR, z, and drift (figure: boundary a, starting point z, drift, perceptual processing TR)
134 Homework Assignment: explore predictions of the diffusion model; explore the role of trial-to-trial variability
135 How do we simulate trial-to-trial variability? (the diffusion_simulation function from slide 130 is shown again)
136 How do we simulate trial-to-trial variability? TR is a uniform distribution, z is a uniform distribution, mu is a normal distribution. Why? (the diffusion_simulation function from slide 130 is shown again)
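A sketch of one way to wrap the diffusion_simulation function from slide 130 with trial-to-trial variability; the distributional choices follow the slide, but the parameter values and the names st, sz, and eta are my own placeholders:

ntrial = 1000;
a = 0.1;  mu = 0.2;  s2 = 0.01;               % boundary, mean drift, diffusion coefficient
tr = 0.3;  st = 0.1;                          % mean TR and range of its uniform distribution
z  = 0.05; sz = 0.02;                         % mean starting point and range of its uniform distribution
eta = 0.08;                                   % SD of the normal distribution of drift across trials

rt = zeros(ntrial, 1);  resp = zeros(ntrial, 1);
for i = 1:ntrial
    tr_i = tr + st .* (rand - 0.5);           % TR:  uniform on [tr - st/2, tr + st/2]
    z_i  = z  + sz .* (rand - 0.5);           % z:   uniform on [z - sz/2, z + sz/2]
    mu_i = mu + eta .* randn;                 % mu:  normal with mean mu and SD eta
    [rt(i), resp(i)] = diffusion_simulation(mu_i, s2, tr_i, a, z_i);
end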
137