Three Methods for Measuring Perception

1. Magnitude estimation
2. Matching
3. Detection/discrimination

Concerns of the Psychophysicist

- Bias / attentiveness
- Strategy / artifactual cues
- History of stimulation
- Who controls stimulation

Detection / Discrimination

In a detection experiment, the subject's task is to detect small differences in the stimuli.

Procedures for detection/discrimination experiments:
- Method of adjustment
- Method of limits
- Yes-no method of constant stimuli
- Forced choice

Yes-No Method of Constant Stimuli

[Figure: percent "yes" responses as a function of tone intensity for two subjects. Do these data indicate that Laurie's threshold is lower than Chris's threshold?]

Forced Choice

Present the signal on some trials and no signal on other trials (catch trials). The subject is forced to respond on every trial: either "Yes, the thing was presented" or "No, it wasn't." If they are not sure, they must guess.

Advantage: with the forced-choice method we have both types of trials, so we can count both the number of hits and the number of false alarms to get an estimate of discriminability that is independent of the criterion.

Versions: yes-no, 2AFC, 2IFC.
Forced Choice: Four Possible Outcomes

                     Respond "yes"    Respond "no"
Signal present       Hit              Miss
Signal absent        False alarm      Correct reject

Information and Criterion

There are two components to the decision-making: signal strength and criterion.

Information acquisition: acquiring more information is good. The effect of more information is to increase the likelihood of getting either a hit or a correct reject, while reducing the likelihood of an outcome in the two error boxes.

Criterion shift: different people may have a different bias/criterion. Some may choose to err toward "yes" decisions; others may choose to be more conservative and say "no" more often.

[Figure: probability-of-occurrence curves for noise and signal-plus-noise, with the criterion marked.]
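To make the two components concrete, here is a minimal Python simulation (not from the slides) of a Gaussian yes-no observer: signal strength sets the separation of the two distributions, and shifting the criterion trades hits against false alarms. The function name `simulate_yes_no` and the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_yes_no(d_prime, criterion, n_trials=10_000):
    """Simulate equal numbers of signal and noise trials for a Gaussian
    observer: noise ~ N(0,1), signal ~ N(d',1). Respond "yes" whenever
    the internal response exceeds the criterion."""
    noise = rng.normal(0.0, 1.0, n_trials)        # signal-absent trials
    signal = rng.normal(d_prime, 1.0, n_trials)   # signal-present trials
    return {
        "hits":            np.mean(signal > criterion),
        "misses":          np.mean(signal <= criterion),
        "false_alarms":    np.mean(noise > criterion),
        "correct_rejects": np.mean(noise <= criterion),
    }

# A liberal criterion yields more hits but also more false alarms;
# a conservative criterion trades hits for correct rejects.
print(simulate_yes_no(d_prime=1.0, criterion=0.0))   # liberal
print(simulate_yes_no(d_prime=1.0, criterion=1.0))   # conservative
```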
Applications of SDT: Examples

- Vision: detection (something vs. nothing)
- Vision: discrimination (lower vs. greater level of intensity, contrast, depth, slant, size, frequency, loudness, ...)
- Memory (internal response = trace strength = familiarity)
- Neurometric function / discrimination by neurons (internal response = spike count)

Terminology:
- Hits: respond "yes" on a signal trial
- Correct rejects: respond "no" on a no-signal trial
- Misses: respond "no" on a signal trial
- False alarms: respond "yes" on a no-signal trial
Discriminability (d-prime, d')

d' is the distance between the noise (N) and signal-plus-noise (S+) curves:

d' = separation / spread = (\mu_{S+} - \mu_N) / \sigma

SDT: Gaussian Case

With noise distributed as N(0, 1), signal as N(d', 1), and criterion c:

d' = z[p(H)] + z[p(CR)] = z[p(H)] - z[p(FA)]
c = z[p(CR)] = -z[p(FA)]

The likelihood ratio at the criterion is

\beta = \frac{p(x = c \mid S+)}{p(x = c \mid N)} = \frac{e^{-(c - d')^2 / 2}}{e^{-c^2 / 2}}

where the Gaussian density is

G(x; \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-(x - \mu)^2 / 2\sigma^2}

Receiver Operating Characteristic (ROC)

ROC: Gaussian case. [Figure: ROC curves in (p(FA), p(H)) coordinates; replotted in z-transformed coordinates, z(p(H)) against z(p(FA)), the equal-variance Gaussian ROC is a straight line of unit slope with intercept d'.]

Payoff Matrix

                     Respond "Yes"    Respond "No"
Stimulus S+          V_{Yes|S+}       V_{No|S+}
Stimulus N           V_{Yes|N}        V_{No|N}
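A minimal Python sketch of these formulas, assuming the equal-variance Gaussian model above; the helper name `dprime_stats` is hypothetical.

```python
from scipy.stats import norm

def dprime_stats(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: recover d', the criterion c, and the
    likelihood ratio beta at the criterion from hit/false-alarm rates."""
    zH, zFA = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d = zH - zFA                              # d' = z[p(H)] - z[p(FA)]
    c = -zFA                                  # c = z[p(CR)] = -z[p(FA)]
    beta = norm.pdf(c, loc=d) / norm.pdf(c)   # p(x=c | S+) / p(x=c | N)
    return d, c, beta

# With the criterion midway between the peaks, beta comes out to 1.
print(dprime_stats(hit_rate=0.84, fa_rate=0.16))  # d' ~ 2, c ~ 1, beta ~ 1
```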
Optimal Criterion

The expected value of each response, given the observation x:

E(Yes \mid x) = V_{Yes|S+} \, p(S+ \mid x) + V_{Yes|N} \, p(N \mid x)
E(No \mid x) = V_{No|S+} \, p(S+ \mid x) + V_{No|N} \, p(N \mid x)

Say "yes" if E(Yes | x) >= E(No | x), i.e., if

\frac{p(S+ \mid x)}{p(N \mid x)} \geq \frac{V_{No|N} - V_{Yes|N}}{V_{Yes|S+} - V_{No|S+}} = \frac{V(\text{Correct} \mid N)}{V(\text{Correct} \mid S+)}

Aside: Bayes' Rule

p(A | B) = probability of A given that B is asserted to be true = p(A & B) / p(B). Since p(A & B) = p(B) p(A | B) = p(A) p(B | A),

p(A \mid B) = \frac{p(B \mid A) \, p(A)}{p(B)}

Apply Bayes' Rule

p(S+ \mid x) = \frac{p(x \mid S+) \, p(S+)}{p(x)}    (posterior = likelihood x prior / normalizing term)

The denominator p(x) is a nuisance normalizing term, p(x) = \sum_{\nu} p(x \mid \nu) \, p(\nu), and it cancels in the posterior odds, which therefore factor as

\frac{p(S+ \mid x)}{p(N \mid x)} = \frac{p(x \mid S+)}{p(x \mid N)} \cdot \frac{p(S+)}{p(N)}

(posterior odds = likelihood ratio x prior odds)

Optimal Likelihood Ratio

Say "yes" if p(S+ | x) / p(N | x) >= V(Correct | N) / V(Correct | S+), i.e., if

\frac{p(x \mid S+)}{p(x \mid N)} \geq \frac{p(N)}{p(S+)} \cdot \frac{V(\text{Correct} \mid N)}{V(\text{Correct} \mid S+)} = \beta

Example: with equal priors and equal payoffs, say "yes" if the likelihood ratio is greater than one. Optimal binary decisions are a function of the likelihood ratio (it is a sufficient statistic, i.e., no more information is needed or useful).

[Figure: the densities p(x | N) and p(x | S+), their ratio LR(x), and the log densities, whose difference is log(LR).]
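The decision rule above can be written in a few lines of Python. This is a sketch under the equal-variance Gaussian model; the names `optimal_beta` and `say_yes` are illustrative, not from the slides.

```python
from scipy.stats import norm

def optimal_beta(p_signal, v_correct_noise=1.0, v_correct_signal=1.0):
    """beta = [p(N)/p(S+)] * [V(Correct|N)/V(Correct|S+)]."""
    return ((1.0 - p_signal) / p_signal) * (v_correct_noise / v_correct_signal)

def say_yes(x, d_prime, beta):
    """Respond "yes" when the likelihood ratio p(x|S+)/p(x|N) exceeds beta,
    with noise ~ N(0,1) and signal ~ N(d',1)."""
    lr = norm.pdf(x, loc=d_prime) / norm.pdf(x, loc=0.0)
    return lr >= beta

# Equal priors and payoffs: beta = 1, criterion falls at d'/2.
print(say_yes(x=1.2, d_prime=2.0, beta=optimal_beta(0.5)))   # True
# Rare signals raise beta, pushing the criterion toward "no".
print(say_yes(x=1.2, d_prime=2.0, beta=optimal_beta(0.1)))   # False
```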
Gaussian Unequal Variance

When the S+ distribution has greater variance than the N distribution, the likelihood ratio is non-monotonic in x: the two densities cross at two points, c_1 and c_2.

SDT (suboptimal): say "yes" if x >= c_2.
Optimal strategy: say "yes" if x >= c_2 or x <= c_1.

Poisson Distribution

Counting distribution for processes consisting of events that occur randomly at a fixed rate of \lambda events/sec. The time until the next event occurs follows an exponential distribution:

p(t) = \lambda e^{-\lambda t}

Since \lambda is the rate, over a period of \Delta t seconds the expected count is \lambda \Delta t. The distribution of the event count is Poisson:

p(k) = \frac{e^{-\lambda \Delta t} (\lambda \Delta t)^k}{k!}

[Figure: Poisson probability mass functions for several expected counts, e.g., \lambda \Delta t = 3 and \lambda \Delta t = 8.]

Poisson Distribution SDT

With noise rate \lambda_N and signal rate \lambda_{S+}, the likelihood ratio

LR(k) = \left(\frac{\lambda_{S+}}{\lambda_N}\right)^{k} e^{-(\lambda_{S+} - \lambda_N)\Delta t}

is monotonic in the count k, so the optimal observer can apply a criterion directly to k.
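A small Python sketch of the Poisson likelihood ratio; the rates and the helper name `poisson_lr` are illustrative assumptions.

```python
from scipy.stats import poisson

def poisson_lr(k, rate_noise, rate_signal, dt=1.0):
    """Likelihood ratio for an observed count k when noise and signal
    counts are Poisson with rates rate_noise and rate_signal (events/sec)
    over an interval of dt seconds."""
    return (poisson.pmf(k, rate_signal * dt) /
            poisson.pmf(k, rate_noise * dt))

# LR grows monotonically with k, so thresholding the raw count is
# equivalent to thresholding the likelihood ratio.
for k in range(0, 12, 2):
    print(k, round(poisson_lr(k, rate_noise=3.0, rate_signal=8.0), 3))
```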
2-IFC and SDT

Treat the two intervals as independent draws: the signal interval is distributed N(d', 1) and the noise interval N(0, 1); respond with the interval that produced the larger response. Then

P(Correct) = P(N(d', 1) > N(0, 1))
           = P(N(d', 1) - N(0, 1) > 0)
           = P(N(d', \sqrt{2}) > 0)
           = P(N(d'/\sqrt{2}, 1) > 0)
           = P(N(0, 1) < d'/\sqrt{2})

where N(\mu, \sigma) means a random variable with mean \mu and SD \sigma.

[Figure: the two-interval response space (x_1, x_2), with "Say Interval 1" and "Say Interval 2" regions separated by the diagonal decision line.] Note: the criterion (the diagonal) is d'/\sqrt{2} away from each peak.

2-IFC and the Yes-No ROC

P(Correct in 2-IFC) = \int p(S = c) \, P(N < c) \, dc = area under the yes-no ROC.

Measuring Thresholds

[Figure: probability-of-occurrence curves at low and high signal intensity; the psychometric function, proportion correct vs. signal intensity; and d' vs. signal intensity.] Assumptions: the mean internal response grows with signal strength; \sigma is constant.

Aside: 2-IFC and Estimation of Threshold

Frequently one wishes to estimate the signal strength corresponding to a fixed, arbitrary value of d', defined as the threshold signal strength. For this, one can measure performance at multiple signal strengths, estimate d' for each, fit a function (as in the previous slide), and interpolate to estimate threshold. Staircase methods are often used as a more time-efficient alternative: the signal strength tested on each trial is based on the data collected so far, concentrating testing at the levels that are most informative. Methods: 1-up/1-down (for PSE: point of subjective equality), 1-up/2-down, etc., QUEST, APE, PEST, ...

General Gaussian Classification

[Figure: two Gaussian classes in the response space, with "Say Class A" and "Say Class B" regions separated by a decision boundary.]
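A quick numerical check of the relation P(Correct) = \Phi(d'/\sqrt{2}), as a Python sketch with illustrative helper names:

```python
import numpy as np
from scipy.stats import norm

def pc_2ifc(d_prime):
    """Percent correct in 2-IFC for an unbiased observer:
    P(Correct) = Phi(d' / sqrt(2))."""
    return norm.cdf(d_prime / np.sqrt(2))

def d_from_pc(pc):
    """Invert the relation: recover d' from 2-IFC percent correct."""
    return np.sqrt(2) * norm.ppf(pc)

print(pc_2ifc(1.0))       # ~0.76
print(d_from_pc(0.76))    # ~1.0
```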
General Gaussian Classification

The d-dimensional Gaussian distribution:

p(\vec{x}) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu})\right)

where \Sigma is the covariance matrix. In 2 dimensions:

\Sigma = \begin{pmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{pmatrix}

The quadratic form (\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu}) is the squared Mahalanobis distance MD(\vec{x}, \vec{\mu}).

Say Category A if

LR(\vec{x}) = \frac{(2\pi)^{-d/2} |\Sigma_A|^{-1/2} \exp\!\left(-\tfrac{1}{2}(\vec{x} - \vec{\mu}_A)^T \Sigma_A^{-1} (\vec{x} - \vec{\mu}_A)\right)}{(2\pi)^{-d/2} |\Sigma_B|^{-1/2} \exp\!\left(-\tfrac{1}{2}(\vec{x} - \vec{\mu}_B)^T \Sigma_B^{-1} (\vec{x} - \vec{\mu}_B)\right)} > \beta

Take the log and simplify. Say Category A if

(\vec{x} - \vec{\mu}_B)^T \Sigma_B^{-1} (\vec{x} - \vec{\mu}_B) - (\vec{x} - \vec{\mu}_A)^T \Sigma_A^{-1} (\vec{x} - \vec{\mu}_A) = MD(\vec{x}, \vec{\mu}_B) - MD(\vec{x}, \vec{\mu}_A) > C

which is quadratic in \vec{x}.

Application: Absolute Threshold

Question: What is the minimal number of photons required for a stimulus to be visible under the best of viewing conditions? (Hecht, Shlaer & Pirenne, 1942)
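A compact Python sketch of this quadratic classification rule; the function names and the toy means and covariances below are assumptions for illustration (with unequal covariances, the log-determinant and prior/payoff terms are folded into the criterion C).

```python
import numpy as np

def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance (x - mu)^T cov^{-1} (x - mu)."""
    diff = x - mu
    return diff @ np.linalg.solve(cov, diff)

def say_category_A(x, mu_A, cov_A, mu_B, cov_B, C=0.0):
    """Choose A when MD(x, mu_B) - MD(x, mu_A) exceeds the criterion C."""
    return mahalanobis_sq(x, mu_B, cov_B) - mahalanobis_sq(x, mu_A, cov_A) > C

mu_A, mu_B = np.array([0.0, 0.0]), np.array([2.0, 1.0])
cov_A = np.array([[1.0, 0.3], [0.3, 1.0]])
cov_B = np.array([[2.0, 0.0], [0.0, 0.5]])
print(say_category_A(np.array([0.2, 0.1]), mu_A, cov_A, mu_B, cov_B))  # True
```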
[Figure-only slides: Sensitivity by Eccentricity; Spatial Summation; Temporal Summation; Choice of Wavelength; The Data.]
[Figure-only slides: Stimulus Variability (Poisson); Model with No Subject Variability; Model with Subject Variability.]

Application: Ideal Observer

Question: What is the minimal contrast required to carry out various detection and discrimination tasks? (Geisler, 1989)
Sequential Ideal Observer

[Figure-only slides: Point-Spread Function; Idealized Receptor Lattice; Poisson Statistics; Ideal Discriminator; Example: 2-Point Resolution.]

Given two known possible targets A and B with expected photon absorptions a_i and b_i, and actual photon catches Z_i in receptor i, calculate the likelihood ratio p(Z_i | a_i) / p(Z_i | b_i), take the log, do some algebra, and discover that the following quantity is monotonic in the likelihood ratio:

Z = \sum_i Z_i \ln(a_i / b_i)

Decisions are thus based on an ideal receptive field: a weighted sum of the photon catches, with weights \ln(a_i / b_i).
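As an illustrative sketch of this ideal-receptive-field decision rule (the toy lattice and function names are assumptions, not from Geisler's paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def ideal_rf_statistic(Z, a, b):
    """Ideal receptive field: weight each receptor's photon catch Z_i
    by ln(a_i / b_i). The sum is monotonic in the Poisson likelihood
    ratio p(Z | a) / p(Z | b)."""
    return np.sum(Z * np.log(a / b))

# Toy receptor lattice. The expected catches sum to the same total, so
# the constant term sum(a_i - b_i) in the log-likelihood ratio vanishes
# and (with equal priors) the optimal criterion is zero.
a = np.array([9.0, 6.0, 3.0, 1.0])   # expected absorptions under target A
b = np.array([1.0, 3.0, 6.0, 9.0])   # expected absorptions under target B

Z = rng.poisson(a)                     # actual photon catches on an "A" trial
print(ideal_rf_statistic(Z, a, b) > 0) # decide "A" (usually correct here)
```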