Sequential Approval
Paola Manzini, Marco Mariotti, Levent Ülkü
BRIColumbia

Motivation
Consider an individual who:
- Scans headlines and reads some articles (or stores them away for later reading)
- Likes or shares various posts when going down his social media feed
- Progressively fills her online shopping cart
- Progressively matches with potential partners on an online dating site

Motivation - ctnd.
These are diverse examples of behaviour. What do they have in common?
i) Objects come sequentially to the attention of the agent: they form a list.
ii) Objects are not quite chosen, but merely "approved":
- there is not going to be a final choice between articles read or posts Liked or shared;
- an item/partner may or may not be finally selected from the cart/match set;
- the whole cart/match set may even be abandoned (they act as "consideration sets").
iii) The set of potentially approvable objects is very large, even endless, so:
- the order aspect overrides the menu aspect;
- some capacity constraint is likely to apply (the agent cannot go on forever).

Sequential Approval
Build a model of sequential approval with features i)-iii):
X is a finite set of alternatives, n its cardinality (thought of as being very large), and N = {1, ..., n}.
A list is any linear order λ on X, sometimes denoted xyz...; Λ is the set of all lists.
A stochastic approval function is a map p : X × Λ → [0, 1]. The number p(x, λ) is the probability that x is approved when the decision maker faces list λ.

Sequential approval - ctnd.
Have we forgotten the adding-up constraint on probabilities?
No: the sum of the p(x, λ) over alternatives is typically not one. This is approval, not choice...
Formally, a stochastic approval function could be equivalently defined as a stochastic correspondence C : 2^X × Λ → [0, 1] associating with each list the probabilities of the possible approval sets. The adding-up constraint applies over these objects, provided that C(∅, λ) is allowed to absorb the residual probability.
Unlike a standard stochastic choice correspondence, our domain comprises lists, not menus. The menu X is held fixed in the analysis; the variation comes only from lists.
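
To make the set-level view concrete, here is a small illustrative sketch in Python (the correspondence C below is hypothetical, not from the paper): a stochastic correspondence over approval sets, with C(∅, λ) absorbing the residual probability, induces the approval function through p(x, λ) = Σ_{A ∋ x} C(A, λ), and the resulting marginals need not sum to one.

```python
C = {  # hypothetical set-level probabilities for one fixed list
    frozenset(): 0.2,               # nothing approved (residual)
    frozenset({"x"}): 0.3,
    frozenset({"x", "y"}): 0.4,
    frozenset({"z"}): 0.1,
}
assert abs(sum(C.values()) - 1.0) < 1e-12     # adding-up holds at the set level

def marginal(a):
    """p(a, lambda): probability that a belongs to the realised approval set."""
    return sum(prob for A, prob in C.items() if a in A)

print({a: marginal(a) for a in "xyz"})        # {'x': 0.7, 'y': 0.4, 'z': 0.1}
print(sum(marginal(a) for a in "xyz"))        # 1.2 -- approval, not choice
```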

Two models of sequential approval - 1 of 4
At the moment of approving, the agent is defined by:
1) A preference ≻ (a linear order over X).
2) An approval threshold (an element of X).
3) A stopping rule (a number expressing a capacity constraint - more on this later).

Two models of sequential approval - 2 of 4
We take preference to be the stable element of the agent's psychology. Approval thresholds and capacity constraints are subject to random shocks. E.g.:
- each morning you may be more or less strict with your FB Likes;
- each morning you may have more or less time/patience to go down the list.

Two models of sequential approval - 3 of 4
Let π be a (strictly positive) joint probability distribution over N × X that describes this randomness. π(i, t) is the joint probability that the approval threshold is t and the capacity constraint is i. An agent is a pair (≻, π).
Consider two possibilities regarding the capacity constraint:
i) Depth constraint: the constraint acts on the number of alternatives you examine.
ii) Approval constraint: the constraint acts on the number of alternatives you approve.

Two models of sequential approval - 4 of 4
Let λ(x) be the position of x in list λ. The DCM is represented as
p_D(x, λ) = Σ_{t: x ⪰ t} Σ_{i ≥ λ(x)} π(i, t)
Let b(λ, j, t) be the number of alternatives that are ⪰ t and that in list λ are in a position ≤ j. The ACM is represented as
p_A(x, λ) = Σ_{t: x ⪰ t} Σ_{i ≥ b(λ, λ(x), t)} π(i, t)
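
A minimal computational sketch of the two formulas (the names p_D and p_A are illustrative; the preference is passed as a ranked list, best first, and π as a dict keyed by (capacity, threshold) pairs):

```python
def p_D(x, lam, pref, pi):
    """DCM: x is approved iff the threshold t satisfies x >= t in the preference
    and the examination depth i reaches x's position in the list."""
    pos = lam.index(x) + 1                       # lambda(x), 1-based position
    rank = {a: k for k, a in enumerate(pref)}    # smaller rank = better
    return sum(prob for (i, t), prob in pi.items()
               if rank[x] <= rank[t] and i >= pos)

def p_A(x, lam, pref, pi):
    """ACM: x is approved iff x >= t and the number of alternatives >= t listed
    no later than x, b(lambda, lambda(x), t), is within the capacity i."""
    pos = lam.index(x) + 1
    rank = {a: k for k, a in enumerate(pref)}
    def b(t):
        return sum(1 for a in lam[:pos] if rank[a] <= rank[t])
    return sum(prob for (i, t), prob in pi.items()
               if rank[x] <= rank[t] and i >= b(t))
```

These two helpers are reused in the sketches further below.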

Related literature
- Rubinstein & Salant TE06 (mostly observable lists, menu variation, choice functions - or correspondences by taking unions over lists)
- Yildiz TE16 (menu variation, rationalisation by lists)
- Aguiar, Boccardi & Dean JET16 (menu variation, rationalisation by - random - lists)
- Kovach & Ülkü 2017 (menu variation, rationalisation by lists, random threshold)
- Caplin, Dean & Martin AER11 (experimental choice process data, infer search order)

Our Questions
1) Identification: Assume that approvals are generated by the model(s). Can an observer of approvals and lists identify the parameters, i.e. preferences and the joint probabilities π(i, t)?
2) List design: Given an objective (e.g. the total number of approvals), which list maximises the objective?
3) Characterisation: Which exact constraints on observed approval behaviour do the models impose?
4) Comparative statics: How are changes in the primitives manifested in behaviour?

3-Alternative example: DCM
Let x ≻ y ≻ z. Denote the marginals of π by π_i and π_t.

DCM | p(x, λ) | p(y, λ) | p(z, λ)
λ = xyz | Σ_{i=1..3} Σ_{t∈{x,y,z}} π(i,t) = 1 | Σ_{i≥2} Σ_{t∈{y,z}} π(i,t) | π(3,z)
λ = xzy | Σ_{i=1..3} Σ_{t∈{x,y,z}} π(i,t) = 1 | Σ_{t∈{y,z}} π(3,t) | Σ_{i≥2} π(i,z)
λ = yxz | Σ_{i≥2} π_i | π_y + π_z | π(3,z)
λ = yzx | π_3 | π_y + π_z | Σ_{i≥2} π(i,z)
λ = zxy | Σ_{i≥2} π_i | Σ_{t∈{y,z}} π(3,t) | π_z
λ = zyx | π_3 | Σ_{i≥2} Σ_{t∈{y,z}} π(i,t) | π_z

Only the position in the list matters.

3-Alternative example: ACM

ACM | p(x, λ) | p(y, λ) | p(z, λ)
λ = xyz | Σ_{i=1..3} Σ_{t∈{x,y,z}} π(i,t) = 1 | Σ_{i≥2} Σ_{t∈{y,z}} π(i,t) | π(3,z)
λ = xzy | Σ_{i=1..3} Σ_{t∈{x,y,z}} π(i,t) = 1 | Σ_{i≥2} π(i,y) + π(3,z) | Σ_{i≥2} π(i,z)
λ = yxz | π(1,x) + Σ_{i≥2} π_i | π_y + π_z | π(3,z)
λ = yzx | π(1,x) + Σ_{t∈{x,y}} π(2,t) + π_3 | π_y + π_z | Σ_{i≥2} π(i,z)
λ = zxy | Σ_{t∈{x,y}} π(1,t) + Σ_{i≥2} π_i | Σ_{i≥2} π(i,y) + π(3,z) | π_z
λ = zyx | π(1,x) + Σ_{t∈{x,y}} π(2,t) + π_3 | π(1,y) + Σ_{i≥2} Σ_{t∈{y,z}} π(i,t) | π_z

Here the predecessor set matters for approval probabilities, not just the position.
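
As a numerical sanity check on the two tables, one can enumerate all six lists for a concrete π (the numbers below are arbitrary) and confirm that DCM probabilities depend only on an alternative's position, while ACM probabilities also depend on which alternatives precede it. The sketch reuses the p_D and p_A helpers above.

```python
from itertools import permutations

pref = ["x", "y", "z"]                                  # x > y > z
pi = {(1, "x"): 0.10, (1, "y"): 0.05, (1, "z"): 0.05,
      (2, "x"): 0.15, (2, "y"): 0.10, (2, "z"): 0.05,
      (3, "x"): 0.20, (3, "y"): 0.15, (3, "z"): 0.15}   # strictly positive, sums to 1

for lam in permutations(pref):
    lam = list(lam)
    print("".join(lam),
          "DCM:", [round(p_D(a, lam, pref, pi), 3) for a in pref],
          "ACM:", [round(p_A(a, lam, pref, pi), 3) for a in pref])

# e.g. p(x, yxz) == p(x, zxy) under the DCM (x is second in both lists),
# but the two differ under the ACM (the predecessor sets {y} and {z} differ).
```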

Identification (DCM)
The identification news regarding DCM is good:
Theorem. In the DCM, preferences and joint probabilities π(i, t) are uniquely identified by approval probabilities.
The result hinges strongly on the fact that here the position of an alternative in a list determines the approval probability.

Proof (sketch)
Preferences between any two alternatives x and y are identified by the ranking of approval probabilities in any lists in which x and y are in the same position (see example).
Relabel the alternatives in decreasing order of preference: x_1, x_2, ..., x_n.
The approval probability of the worst alternative x_n in a list where it is in the last position identifies π(n, x_n).
If x_n is moved up one position, then:
- x_n is still chosen when the threshold is x_n and the capacity is maximal;
- but now it is also chosen with the same threshold when the capacity is only n - 1.
Hence the difference in approval when x_n is in the last position and when it is in position n - 1 identifies π(n - 1, x_n).
Pushing x_n up the list notch by notch then pins down π(n - 2, x_n), ..., π(1, x_n).

Proof (sketch) - ctnd.
Consider the difference in approval between the k-th best alternative and the (k + 1)-th best alternative when they are last. The only event in which x_k is approved while x_{k+1} is not is when the threshold is x_k and the capacity is n (if the capacity is < n or the threshold is strictly preferred to x_k, then neither is approved). Hence the difference pins down π(n, x_k) for all k < n.

Proof (sketch) - and finally...
Fix a position j < n and for any k < n compare the approval probabilities of x_k and x_{k+1} at position j. The difference in approval probabilities is the probability of all the events in which the threshold is x_k and the capacity is at least j, namely
π(j, x_k) + ... + π(n, x_k)
Repeat the exercise with position j + 1: now the difference is
π(j + 1, x_k) + ... + π(n, x_k)
As good applied economists, now take a diff-in-diff to identify π(j, x_k).
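
The recovery steps can be written out directly. The sketch below is an illustrative reconstruction (not the paper's algorithm verbatim): it assumes approvals are generated by a DCM, that the preference has already been read off from equal-position comparisons, and it uses the diff-in-diff π(j, x_k) = P(k, j) - P(k, j+1) - P(k+1, j) + P(k+1, j+1), where P(k, j) denotes the approval probability of the k-th best alternative at position j.

```python
def identify_DCM(pref, p):
    """Recover pi(i, t) from DCM approval probabilities.

    pref: alternatives from best to worst; p(x, lam): observed approval probability.
    """
    n = len(pref)

    def P(k, j):
        # approval probability of the k-th best alternative at position j
        # (0 by convention when k or j exceeds n)
        if k > n or j > n:
            return 0.0
        others = [a for a in pref if a != pref[k - 1]]
        lam = others[:j - 1] + [pref[k - 1]] + others[j - 1:]
        return p(pref[k - 1], lam)

    return {(j, pref[k - 1]):
            P(k, j) - P(k, j + 1) - P(k + 1, j) + P(k + 1, j + 1)
            for k in range(1, n + 1) for j in range(1, n + 1)}

# check on the example primitives: feeding DCM-generated approvals returns pi
recovered = identify_DCM(pref, lambda x, lam: p_D(x, lam, pref, pi))
assert all(abs(recovered[key] - pi[key]) < 1e-12 for key in pi)
```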

Identification (ACM)
In the ACM, we need an additional assumption for full identification.
Theorem. In the ACM, preferences are uniquely identified. Moreover, if the probability distributions on thresholds and capacities are independent, they are uniquely identified by approval probabilities.
The need for some restriction is seen from the simplest example: x ≻ y. Preferences are identified as in the DCM. However...

Identification (ACM) - ctd.
...although we also identify π(2, y) = p(y, xy) and π(1, y) = p(y, yx) - p(y, xy), it is impossible to break down π(1, x) + π(2, x) from π(1, x) + π(2, x) + π(2, y) = p(x, yx) and 1 = p(x, xy).

ACM | p(x, λ) | p(y, λ)
λ = xy | π(1,x) + π(2,x) + π(1,y) + π(2,y) | π(2,y)
λ = yx | π(1,x) + π(2,x) + π(2,y) | π(1,y) + π(2,y)

Identification (ACM) - cntd.
In general, for any k < n, we can only hope to identify Σ_{l ≥ k} π(l, x_k).
On the other hand, assume independence, i.e. π(i, t) = π_i π_t. Then in the two-alternative case we have:
π_y = p(y, yx), identifying π_y (and therefore π_x);
π_y π_2 = p(y, xy), identifying π_2 (and therefore π_1).
The fact that π_1 is identified by the approval probabilities of only x_2 does generalise: the approval probabilities of x_{k+1}, ..., x_n identify π_k. This is the key for the recursive identifying algorithm in the proof (spared).
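
For the two-alternative case under independence, these identification steps are simple arithmetic. A small sketch (reusing the p_A helper; the marginals are arbitrary illustrative numbers):

```python
pref2 = ["x", "y"]                               # x > y
pi_cap = {1: 0.4, 2: 0.6}                        # capacity marginal pi_i
pi_thr = {"x": 0.7, "y": 0.3}                    # threshold marginal pi_t
pi2 = {(i, t): pi_cap[i] * pi_thr[t] for i in pi_cap for t in pi_thr}

# approval probabilities generated by the ACM
p_y_yx = p_A("y", ["y", "x"], pref2, pi2)        # equals pi_y
p_y_xy = p_A("y", ["x", "y"], pref2, pi2)        # equals pi_y * pi_2

pi_y_hat = p_y_yx                                # identifies pi_y, hence pi_x = 1 - pi_y
pi_2_hat = p_y_xy / pi_y_hat                     # identifies pi_2, hence pi_1 = 1 - pi_2
assert abs(pi_y_hat - pi_thr["y"]) < 1e-12 and abs(pi_2_hat - pi_cap[2]) < 1e-12
```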

List Design
Primitives can be learned from the observation of approval behaviour across lists.
Suppose now you: (1) can control the lists, and (2) have an objective you want to maximise (the list is a choice variable).
Which list maximises the objective?

List Design
Suppose you want to maximise the total number of approvals (with the large-numbers assumption that approval probabilities are identified with the fraction of times, over a large total of times, that an alternative is approved).
This objective makes sense in several instances:
- maximise the number of clicks;
- maximise the number of news pieces read;
- maximise social network involvement through Likes and sharing;
- maximise the size of an online shopping cart, etc.
We consider a (much) more general objective: maximise a weighted sum of the p(x, λ). The weights w(x) allow us to include objectives such as revenue per click or favouring some specific alternatives.
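
For a small X, the design problem can be solved by brute force: enumerate the lists and pick one maximising Σ_x w(x) p(x, λ). A sketch reusing the p_D/p_A helpers and the example primitives; the weights below are hypothetical.

```python
from itertools import permutations

def best_list(pref, pi, w, p_model):
    """Brute-force list design: maximise the weighted sum of approval probabilities."""
    def value(lam):
        return sum(w[a] * p_model(a, lam, pref, pi) for a in pref)
    return max((list(lam) for lam in permutations(pref)), key=value)

w_clicks = {"x": 1.0, "y": 1.0, "z": 1.0}        # equal weights: count approvals
w_revenue = {"x": 1.0, "y": 3.0, "z": 2.0}       # hypothetical revenue per approval
print(best_list(pref, pi, w_clicks, p_D))        # DCM, equal weights
print(best_list(pref, pi, w_revenue, p_A))       # ACM, weighted objective
```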

Two benchmark results in list design
The two models have very contrasting implications for list design in respect of the stated objective.
Theorem. In the ACM, a list λ is optimal iff it agrees with the order of the weights w(x), i.e. w(x) > w(y) ⟹ xλy.
Corollary (List Invariance Principle): In the ACM, if the weights are all the same, then any list is optimal.
Flash proof of Corollary (credit: Yuhta Ishii): Take any capacity-threshold pair (i, t).
1) If |{x : x ⪰ t}| = k ≤ i, then k items are approved.
2) Otherwise, i items are approved.
3) Neither k nor i depends on the list. QED

Two benchmark results in list design (ctnd.)
Theorem. In the DCM:
1) If all weights are the same, then the unique maximiser of the number of approvals is the list that coincides with the preference order.
2) If the weights can differ and the probability distributions on thresholds and capacities are independent, then a list λ is optimal iff
w(x) Σ_{t: x ⪰ t} π(t) > w(y) Σ_{t: y ⪰ t} π(t) ⟹ xλy
3) In general, a list is optimal iff ...
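
Both benchmark results can be checked numerically on the three-alternative example (a verification sketch reusing the helpers and the example π above): under the ACM with equal weights every list yields the same total number of approvals, while under the DCM the maximiser is the preference order.

```python
from itertools import permutations

totals_ACM = {"".join(lam): sum(p_A(a, list(lam), pref, pi) for a in pref)
              for lam in permutations(pref)}
assert max(totals_ACM.values()) - min(totals_ACM.values()) < 1e-12   # list invariance

totals_DCM = {"".join(lam): sum(p_D(a, list(lam), pref, pi) for a in pref)
              for lam in permutations(pref)}
assert max(totals_DCM, key=totals_DCM.get) == "xyz"                  # preference order wins
```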

Intuition for proof with equal weights (DCM)
Since there are finitely many lists, a maximiser exists. Suppose x ≻ y and look at swaps.
1. Take a list λ in which λ(x) > λ(y).
2. Swap x and y. Let λ' be the same as λ apart from λ'(y) = λ(x) and λ'(x) = λ(y).
The approval probabilities of all z different from x and y are not affected by the swap.
Any loss for y is a gain for x. If (i, t) leads to the approval of y in λ but not in λ', then y ⪰ t and λ(y) ≤ i < λ(x). Hence (i, t) leads to the approval of x in λ' and not in λ.
Some gain for x is not a loss for y. The pair (i, t) = (λ(y), x) leads to the approval of x in λ' but not in λ. But it never leads to the approval of y.

Intuition for proof (ACM)
As in the DCM, a maximiser exists. Suppose x ≻ y.
1. Take a list λ in which λ(x) = λ(y) + 1.
2. Swap to λ' - the same as λ apart from λ'(y) = λ(x) and λ'(x) = λ(y).
Because x and y are adjacent, this can only affect the approval probabilities of x and y.
Any loss for y is a gain for x. Suppose (i, t) leads to the approval of y in λ but not in λ'. Then (i, t) cannot lead to the approval of x in λ (capacity is exhausted) but has to lead to the approval of x in λ'.

Intuition for proof (ACM) - ctnd.
Is any gain for x a loss for y?
Yes - this is the crucial difference from the DCM.
If (i, t) leads to the approval of x in λ' but not in λ, then y must have exhausted capacity in λ, meaning y ⪰ t. Furthermore, capacity cannot have been exhausted in λ when the agent reaches y. Hence (i, t) must lead to the approval of y in λ. But since x will consume the last bit of capacity, (i, t) cannot lead to the approval of y in λ'.
(In the DCM, x gains from the swap in events that do not benefit y before the swap. In the ACM this cannot happen: y must have absorbed capacity pre-swap for x to gain from the swap. Hence y loses what x gains.)

Comparatives
For a given preference, consider probability distributions π_a and π_b and their associated approval functions p_a and p_b, respectively.
Say that a is strongly more approving than b iff p_a(x, λ) ≥ p_b(x, λ) for all alternatives x and lists λ.
Say that a is weakly more approving than b iff, for any list λ, the total number of approvals by a in λ is greater than that by b, i.e. Σ_x p_a(x, λ) ≥ Σ_x p_b(x, λ).
Numbering the alternatives from best to worst, any π defines uniquely a (univariate) numerical random variable X_π on {1, ..., n} that gives the minimum of the capacity and the preference rank of the threshold in any capacity-threshold pair, i.e. Pr(X_π = i) = π({(m, x_j) : min(m, j) = i}).

Comparatives
Given π_a, let F_a denote the cdf of X_{π_a}.
Theorem. In the DCM, a is strongly more approving than b if and only if F_a first-order stochastically dominates F_b.
Theorem. In the ACM, a is weakly more approving than b if and only if E(X_{π_a}) ≥ E(X_{π_b}).
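
The random variable X_π and both comparisons are easy to compute. The sketch below builds Pr(X_π = i) = π({(m, x_j) : min(m, j) = i}) and checks the FOSD and expectation conditions; π_a reuses the example π above and π_b is another hypothetical distribution.

```python
def X_distribution(pref, pi):
    """Distribution of X_pi = min(capacity, preference rank of the threshold)."""
    rank = {a: k + 1 for k, a in enumerate(pref)}          # rank 1 = best
    dist = {i: 0.0 for i in range(1, len(pref) + 1)}
    for (m, t), prob in pi.items():
        dist[min(m, rank[t])] += prob
    return dist

def fosd(dist_a, dist_b):
    """True if dist_a first-order stochastically dominates dist_b (cdf pointwise lower)."""
    cdf_a = cdf_b = 0.0
    for i in sorted(dist_a):
        cdf_a += dist_a[i]
        cdf_b += dist_b[i]
        if cdf_a > cdf_b + 1e-12:
            return False
    return True

def mean(dist):
    return sum(i * prob for i, prob in dist.items())

pi_a = pi
pi_b = {(1, "x"): 0.30, (1, "y"): 0.10, (1, "z"): 0.05,    # more mass on low capacities
        (2, "x"): 0.15, (2, "y"): 0.10, (2, "z"): 0.05,    # and stricter thresholds
        (3, "x"): 0.10, (3, "y"): 0.05, (3, "z"): 0.10}
Xa, Xb = X_distribution(pref, pi_a), X_distribution(pref, pi_b)
print(fosd(Xa, Xb), mean(Xa) >= mean(Xb))      # True True: by the theorems, a is more approving
```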

Characterisation
How do we know whether the agent is representable through the DCM or the ACM?
A1. If λ(x) = λ'(x), then p(x, λ) = p(x, λ'). (only the position matters)
A2. If λ(x) < λ'(x), then p(x, λ) > p(x, λ'). (higher positions are better)
A3. If λ(x) = λ'(y) = k, µ(x) = µ'(y) = k - 1 and p(x, λ) > p(y, λ'), then p(x, µ) - p(y, µ') > p(x, λ) - p(y, λ'). (supermodularity in quality and position)
A4a. There exists x such that if λ(x) = 1, then p(x, λ) = 1. (dominant alternative)
A4b. For all x and λ, p(x, λ) > 0. (positivity)
A4c. If p(x, λ) = p(y, λ') and λ(x) = λ'(y), then x = y. (linearity)
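
The first two axioms can be tested mechanically on any candidate stochastic approval function by enumerating lists. A sketch reusing p_D, p_A and the example primitives: with a strictly positive π the DCM passes, while the ACM fails A1 because its approval probabilities depend on the predecessor set and not only on the position.

```python
from itertools import permutations

def check_A1_A2(alts, p):
    """A1: equal positions give equal probabilities; A2: earlier positions give strictly higher ones."""
    lists = [list(lam) for lam in permutations(alts)]
    for x in alts:
        for lam in lists:
            for mu in lists:
                px_lam, px_mu = p(x, lam), p(x, mu)
                if lam.index(x) == mu.index(x) and abs(px_lam - px_mu) > 1e-12:
                    return False                              # A1 violated
                if lam.index(x) < mu.index(x) and px_lam <= px_mu + 1e-12:
                    return False                              # A2 violated
    return True

print(check_A1_A2(pref, lambda x, lam: p_D(x, lam, pref, pi)))   # True
print(check_A1_A2(pref, lambda x, lam: p_A(x, lam, pref, pi)))   # False (A1 fails)
```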

Characterisation (DCM)
Theorem. A stochastic approval function is a DCM if and only if it satisfies A1-A4.
Various extensions are possible with minor variations of the axioms:
- Preferences can be weak orders
- Depth can be zero
- Probabilities can be zero

Characterisation (DCM) - cntd.
B1. If λ(x) ≤ λ'(x), then p(x, λ) ≥ p(x, λ'). (higher positions are weakly better)
B2. If λ(x) = λ'(y) = k, µ(x) = µ'(y) = k - 1 and p(x, λ) ≥ p(y, λ'), then p(x, µ) - p(y, µ') ≥ p(x, λ) - p(y, λ'). (the advantage of better alternatives is weakly enhanced in higher positions)
Theorem. A stochastic approval function is a generalised DCM if and only if it satisfies B1 and B2.

Characterisation (ACM)
Partly but not entirely parallel to that of the DCM.
A1. Only the predecessor set matters.
A2. Smaller predecessor sets are better.
A3. Supermodularity in quality and smallness of the predecessor set.
A4. Positivity, linearity, dominant alternative.
A5. If consecutive x and y are switched, x gains exactly what y loses.

Characterisation (ACM)
Wishful Theorem. A stochastic approval function is an ACM only if it satisfies A1-A5, and perhaps also if.
The big problem for a neat characterisation is that only the number of better alternatives in the predecessor set counts, whereas the identity (and not just the number) of the worse alternatives also counts.

Concluding remarks
A model of approval, not choice. We have studied situations in which both the menu and the selections are typically large (either "pre-choice" or "non-choice").
Approval is intrinsically related to satisficing behaviour. We have provided two models that seem plausible, but others are possible.
In particular, it is natural to look at non-stationary thresholds.
List design seems to offer ample scope for further relevant research.

THANK YOU!