Dynamic Matching under Preferences
Martin Hoefer, Max-Planck-Institut für Informatik (mhoefer@mpi-inf.mpg.de)
Kolkata, 11 March 2015
How to find a stable relationship?
Stable Marriage
A set of women and a set of men. Every person has a preference list.
Stable Matching
{x, y} is a blocking pair if and only if x and y prefer each other to their current matches.
Matching M is a stable matching if and only if it has no blocking pair.
Some Results and Extensions:
A stable matching always exists, and there is an efficient algorithm to compute one. [Gale, Shapley 62]
Many further results since the 1960s: roommates, ties, incomplete preferences, many-to-many matchings, ...
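As a concrete illustration of the [Gale, Shapley 62] result, deferred acceptance can be sketched in a few lines. This is a minimal Python sketch; the two-agents-per-side instance at the bottom is a hypothetical example, not one from the talk.

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: men propose, women tentatively accept.

    men_prefs[m]  : list of women, most preferred first
    women_prefs[w]: list of men, most preferred first
    Returns a stable matching as a dict woman -> man.
    """
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}  # next index to propose to
    engaged = {}                               # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to her match
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m
    return engaged

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # no pair prefers each other to their match
```

The returned matching has no blocking pair by construction: a woman only ever trades up, and a man proposes in decreasing order of preference.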
Applications: residents/hospitals, college admission, job markets, trading, P2P networks, etc.
Question: What happens when there is no central authority that dictates the matches, and agents have only limited information about the population? Do agents reach a stable matching? How long does it take?
Blocking-Pair Dynamics: while the matching is not stable, choose a blocking pair and resolve it.
Results
Blocking-pair dynamics can cycle. [Knuth 90]
From every matching there is a sequence of blocking-pair resolutions of polynomial length that leads to a stable matching. [Roth, Vande Vate 90]
The latter result implies that when a blocking pair is chosen uniformly at random in each step, such random dynamics converge with probability 1. Still, random dynamics might take exponential time with high probability to reach a stable matching. [Ackermann, Goldberg, Mirrokni, Röglin, Vöcking 08]
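The random dynamics can be simulated directly. The sketch below (instance and names are illustrative) resolves a uniformly random blocking pair until none remains; on this tiny instance every trajectory terminates.

```python
import random

def blocking_pairs(M, men_prefs, women_prefs):
    """All pairs (m, w) who prefer each other to their current partners."""
    def prefers(prefs, a, b):  # a strictly preferred to b (None = single)
        return b is None or prefs.index(a) < prefs.index(b)
    return [(m, w)
            for m, prefs in men_prefs.items() for w in prefs
            if w != M.get(m)
            and prefers(prefs, w, M.get(m))
            and prefers(women_prefs[w], m, M.get(w))]

def random_dynamics(men_prefs, women_prefs, rng=random.Random(0)):
    M = {}  # partner map, stored in both directions
    while True:
        bps = blocking_pairs(M, men_prefs, women_prefs)
        if not bps:
            return M
        m, w = rng.choice(bps)          # uniformly random blocking pair
        for x in (M.get(m), M.get(w)):  # old partners become single
            if x is not None:
                del M[x]
        M[m], M[w] = w, m               # resolve: match m with w

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["a", "b"], "y": ["a", "b"]}
M = random_dynamics(men, women)
print(M)  # the unique stable matching of this instance
```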
Locally Stable Matching [Arcaute, Vassilvitskii 09]
Agents are nodes in a static (social) network N with undirected links L. The network imposes an information structure based on triadic closure:
Each man (woman) can match to any woman (man) in his (her) 2-hop neighborhood in N.
Given a match {x, y}, x (resp. y) can re-match to a direct neighbor of y (resp. x) in N.
Given a (partial) matching M, x and y are accessible if they are at hop distance 2 in the graph G = (V, L ∪ M).
{x, y} is a local blocking pair if it is a blocking pair and x and y are accessible.
Matching M is a locally stable matching if it has no local blocking pair.
What happens if agents iteratively re-match in local blocking pairs?
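Accessibility can be checked straight from the definition. Below is a small Python sketch; the instance, the names, and the choice to also treat direct neighbors as accessible are illustrative assumptions.

```python
def accessible(x, y, links, matching):
    """True if x and y are within 2 hops in the graph G = (V, L union M)."""
    edges = {frozenset(e) for e in links} | {frozenset(e) for e in matching}
    nbrs = lambda v: {u for e in edges if v in e for u in e} - {v}
    return y in nbrs(x) or any(y in nbrs(z) for z in nbrs(x))

def local_blocking_pairs(matching, links, men_prefs, women_prefs):
    """Blocking pairs whose members are also accessible to each other."""
    partner = {}
    for a, b in matching:
        partner[a], partner[b] = b, a
    def prefers(prefs, a, b):          # a strictly preferred to b
        return b is None or prefs.index(a) < prefs.index(b)
    return [(m, w)
            for m, prefs in men_prefs.items() for w in prefs
            if w != partner.get(m)
            and prefers(prefs, w, partner.get(m))
            and prefers(women_prefs[w], m, partner.get(w))
            and accessible(m, w, links, matching)]

# Path-shaped social network a - x - b - y: the blocking pair (a, y)
# exists, but it is not local, since a and y are 3 hops apart.
links = [("a", "x"), ("x", "b"), ("b", "y")]
men = {"a": ["y", "x"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["a", "b"]}
print(local_blocking_pairs([], links, men, women))
```

The example shows exactly the phenomenon behind locally stable matchings: a globally blocking pair can be invisible under local information.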
Locally Stable Marriage with Strict Preferences [H., Wagner ICALP 13]
Reachability for locally stable marriage with arbitrary strict preferences:

                      Stable Matching    Locally Stable Matching
  Reachability        Yes [RVV 90]       NP-hard
  Shortest Sequence   O(n^2) [RVV 90]    2^Omega(n)
Reaching a locally stable matching might be impossible:
(1) It can be impossible to create enough edges. [Figure: example instance]
(2) Edge improvements can be mutually destroying. [Figure: example instance]
Memory: What if agents can recall some of their previous matches?
Random Memory: each agent picks uniformly at random one agent he was matched to before, who becomes temporarily accessible in the next round.
Deterministic Memory: each agent deterministically remembers one previous match every round.
  Quality memory: remember the highest-benefit match so far.
  Recency memory: remember the most recent match.
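The three memory variants differ only in which past match is kept accessible. A minimal sketch, where the function name, the interface, and the numeric benefits are illustrative assumptions:

```python
import random

def remembered(past, mode, rng=random):
    """Pick which previous match stays temporarily accessible.

    past: list of (benefit, partner) pairs in the order they occurred.
    mode: 'quality' -> highest-benefit past match,
          'recency' -> most recent past match,
          'random'  -> uniform sample over all past matches.
    """
    if not past:
        return None
    if mode == "quality":
        return max(past)[1]     # compares on benefit first
    if mode == "recency":
        return past[-1][1]
    return rng.choice(past)[1]

history = [(5, "x"), (9, "z"), (3, "y")]
print(remembered(history, "quality"))  # "z" (benefit 9)
print(remembered(history, "recency"))  # "y" (most recent)
```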
With quality memory, reachability is Yes, in O(n^2) rounds, for stable matching, but NP-hard for locally stable matching even if one side has no internal links.
Quality Memory is Easily Fooled!
For a hardness instance starting from the empty matching, attach a separate gadget to selected vertices that wipes out their memory.
With recency memory, reachability is Yes, in O(n^2) rounds, for stable matching, and Yes, in O(n^3) rounds, for locally stable matching if one side has no internal links.
Recency Memory is More Powerful
Convergence with full information works in two phases:
1. Only matched men resolve blocking pairs (men improve).
2. Only unmatched men resolve blocking pairs (women improve).
With no network between the men, the second phase can be augmented with recency memory.
Locally Stable Marriage with Strict Preferences [H., Wagner ICALP 13]
Reachability for locally stable marriage with arbitrary strict preferences:

                                 Stable Matching   Locally Stable Matching
  Reachability                   Yes [RVV 90]      NP-hard
  Shortest Sequence              O(n^2) [RVV 90]   2^Omega(n)
  Reachability, Quality Memory   Yes, O(n^2)       NP-hard even if one side has no internal links
  Reachability, Recency Memory   Yes, O(n^2)       Yes, O(n^3) if one side has no internal links
  Reachability, Random Memory    w. prob. 1        w. prob. 1
Correlated or Weighted Matching
Each possible match e = {x, y} has a benefit b_e > 0. [Figure: example instance with benefits 35, 25, 20, 12]
Blocking-pair dynamics do not cycle: the sorted vector of match benefits is lexicographically increasing.
Best-response dynamics always resolve the blocking pair of maximum benefit. They converge in time O(n) to a stable matching. [Ackermann, Goldberg, Mirrokni, Röglin, Vöcking 08]
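The lexicographic-potential argument is easy to watch in action. Below is a best-response sketch for correlated matching; the four-edge instance reuses the benefit values from the slide, but its exact topology is an illustrative assumption.

```python
def best_response_dynamics(benefits):
    """Repeatedly resolve the blocking pair of maximum benefit.

    benefits: dict mapping frozenset({x, y}) -> b_e > 0 (allowed matches).
    A pair blocks iff its benefit exceeds what both endpoints currently
    get, so each step raises the sorted benefit vector lexicographically.
    """
    match = {}   # agent -> (partner, benefit of current match)
    steps = 0
    while True:
        blocking = [(b, e) for e, b in benefits.items()
                    if all(b > match.get(v, (None, 0))[1] for v in e)]
        if not blocking:
            return match, steps
        b, e = max(blocking, key=lambda t: t[0])
        x, y = tuple(e)
        for v in (x, y):                 # old partners become unmatched
            if v in match:
                del match[match[v][0]]
        match[x], match[y] = (y, b), (x, b)
        steps += 1

benefits = {frozenset({"m1", "w1"}): 35, frozenset({"m1", "w2"}): 12,
            frozenset({"m2", "w1"}): 25, frozenset({"m2", "w2"}): 20}
match, steps = best_response_dynamics(benefits)
print(steps, match["m1"][0], match["m2"][0])  # 2 steps: m1-w1, m2-w2
```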
Results for Weighted Matching [H. ICALP 11]
Parameters: there is a subset E of allowed matches, with m = |E|. Each agent can build k >= 1 matching edges and has lookahead l >= 2 in the graph G = (V, L ∪ M).

                                  Shortest Seq.   Rand. Dynamics   w. Rand. Memory
  Best-Response, k = 1, l = 2     2^Omega(n)      2^Omega(n)       O(n m^2)
  Better-Response, k = 1, l = 2   O(n m^2)        2^Omega(n)       O(n m^2)
  k > 1 or l > 2                  2^Omega(n)      2^Omega(n)       O(n k m^2)

Random memory allows random dynamics to converge in polynomial time.
Dynamic Matching [Bhattacharya, H., Huang, Kavitha, Wagner 15]
Dynamic Matching with Preferences:
Agents are nodes of a graph G = (V, E); each agent has a strict preference list over its neighbors.
In each round, one edge appears or disappears: a sequence of subgraphs G_t = (V, E_t), for t = 1, 2, ..., with E_0 = ∅ and E_t, E_{t+1} differing in exactly one edge.
Goal: maintain a stable matching in every G_t with a small amortized number of changes.
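The linear lower bound can be reproduced on a path. With correlated benefits the unique stable matching is obtained greedily, and in the arrival order below every new edge flips the entire matching. A Python sketch; the growing weights form a hypothetical worst-case instance in the spirit of the slide.

```python
def stable_matching_correlated(edges):
    """With correlated benefits the unique stable matching is greedy:
    repeatedly take the maximum-benefit edge among unmatched agents."""
    matched, M = set(), set()
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            M.add((u, v))
            matched |= {u, v}
    return M

n = 40
edges, prev, total_changes = [], set(), 0
for i in range(n):                    # edge i joins path nodes i and i+1
    edges.append((i, i + 1, i + 1))   # benefit grows with each arrival
    cur = stable_matching_correlated(edges)
    total_changes += len(prev ^ cur)  # edges added or removed this round
    prev = cur

print(total_changes / n)  # amortized changes per round grows like n/2
```

Each arrival makes the new heaviest edge displace its neighbor, and the replacement cascades down the whole path, so round t costs about t changes.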
Maintaining a stable matching requires Θ(n) amortized changes per round, even on a simple path, with correlated preferences and edge arrivals only.
Stable matching is not a robust concept in dynamic markets. [Khuller, Mitchell, Vazirani 91]
Popular Matchings
A more global idea: [Gärdenfors 78]
Matching M' is more popular than M if a strict majority of agents prefer their partner in M' to their partner in M.
M is a popular matching if no M' is more popular than M.
Maintaining popular matchings also requires Θ(n) amortized changes, in the same instance as before.
Unpopularity Factor
Unpopularity factor α >= 1: [McCutchen 08]
V+ = agents that strictly prefer their partner in M'; V- = agents that strictly prefer their partner in M.
M' is α-more popular than M if |V+| > α·|V-|.
M is an α-popular matching if no M' is α-more popular than M.
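Counting the votes behind these definitions is straightforward. A small sketch, where the interface and the instance are illustrative assumptions:

```python
def votes(M_new, M_old, prefs):
    """Return (|V+|, |V-|): agents strictly preferring M_new resp. M_old.

    M_new, M_old: partner maps (stored in both directions).
    prefs[v]: v's preference list; being unmatched is worst.
    """
    def rank(v, M):
        return prefs[v].index(M[v]) if v in M else len(prefs[v])
    plus = sum(rank(v, M_new) < rank(v, M_old) for v in prefs)
    minus = sum(rank(v, M_old) < rank(v, M_new) for v in prefs)
    return plus, minus

def alpha_more_popular(M_new, M_old, prefs, alpha=1.0):
    plus, minus = votes(M_new, M_old, prefs)
    return plus > alpha * minus

prefs = {"a": ["x", "y"], "b": ["x", "y"], "x": ["a", "b"], "y": ["a", "b"]}
M_old = {"b": "x", "x": "b"}                      # a and y unmatched
M_new = {"a": "x", "x": "a", "b": "y", "y": "b"}
print(votes(M_new, M_old, prefs))  # (3, 1): a, x, y gain, only b loses
```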
Greedy Algorithm:
1. Given some α >= 1, let M = ∅ and repeat indefinitely:
2. Decide whether there is an M' that is α-more popular than M; if yes, replace M by M'.
Theorem: Let Δ be the maximum degree of any node in any G_t. For every k > 0, the Greedy Algorithm can be used to maintain a (Δ + k)-popular matching while making O(Δ + Δ²/k) amortized changes per round.
Extensions and Notes:
The same result holds for one-sided instances, and when G is not bipartite.
Nearly-popular matchings exist in sparse graphs.
Step 2 can be implemented in polynomial time.
Greedy computes a voting path of approximately popular matchings.
Can we improve the unpopularity factor?
There are one-sided instances in which all matchings are only Δ-popular.
Two-sided instances have 1-popular matchings, but Greedy might not converge for α = Δ − 1.
Can we improve the amortized number of changes? Can we improve the running time of the algorithm?
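For tiny instances, step 2 of the Greedy Algorithm can be brute-forced by enumerating all matchings, which makes the dynamics easy to experiment with. This is an exponential toy sketch only, with a hypothetical four-agent instance; the polynomial-time implementation mentioned above is more involved.

```python
from itertools import combinations

def all_matchings(edges):
    """Every subset of pairwise-disjoint edges (including the empty one)."""
    out = [frozenset()]
    for k in range(1, len(edges) + 1):
        for combo in combinations(edges, k):
            ends = [v for e in combo for v in e]
            if len(ends) == len(set(ends)):
                out.append(frozenset(combo))
    return out

def greedy(edges, prefs, alpha):
    """Repeat: while some matching is alpha-more popular than M, switch."""
    def pmap(M):
        d = {}
        for a, b in M:
            d[a], d[b] = b, a
        return d
    def rank(v, d):   # rank of v's partner; being unmatched is worst
        return prefs[v].index(d[v]) if v in d else len(prefs[v])
    M = frozenset()
    while True:
        d_old = pmap(M)
        for M2 in all_matchings(edges):
            d_new = pmap(M2)
            plus = sum(rank(v, d_new) < rank(v, d_old) for v in prefs)
            minus = sum(rank(v, d_old) < rank(v, d_new) for v in prefs)
            if plus > alpha * minus:   # M2 is alpha-more popular than M
                M = M2
                break
        else:
            return M                   # M is an alpha-popular matching

edges = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
prefs = {"a": ["x", "y"], "b": ["x", "y"], "x": ["a", "b"], "y": ["a", "b"]}
print(greedy(edges, prefs, alpha=1))
```

On this instance, Greedy with α = 1 walks a short voting path from the empty matching to the popular (here also stable) matching {a-x, b-y}.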
Take-Home Points
Matching Dynamics and Locality:
Classic convergence results do not extend under local information.
Random memory helps to overcome locality constraints; cache-based memory is effective only in special cases.
Convergence in polynomial time for weighted matching with random memory.
Dynamic Popular Matching:
Stable and popular matchings can change entirely in each round.
The unpopularity factor trades off the number of changes against agent preferences.
The greedy algorithm acts as a dynamics over α-popular matchings.
Additional results on computing α-more popular matchings, existence of nearly-popular matchings in sparse graphs, voting paths, etc.
Thanks for your attention!