Adaptive Rumor Spreading

Transcription:

José Correa¹, Marcos Kiwi¹, Neil Olver², Alberto Vera¹. ² VU Amsterdam and CWI. July 27, 2015 1/21

The situation 2/21

Introduction Rumors in social networks: contents, updates, new technology, etc. In viral marketing campaigns, the selection of vertices is crucial. Domingos and Richardson (2001) An agent (service provider) wants to efficiently speed up the communication process. 3/21

Rumor spreading Models differ in timing and communication protocol. Demers et al. (1987) and Boyd et al. (2006) In simple cases, the time to activate the whole network is largely understood. Even in random networks the estimates are logarithmic in the number of nodes. Doerr et al. (2012) and Chierichetti et al. (2011) 4/21

Opportunistic networks We have an overload problem; one option is to exploit opportunistic communications. A fixed-deadline scenario has been studied heuristically along with real large-scale data. Whitbeck et al. (2011) Control-theory-based algorithms greatly outperform static ones. Sciancalepore et al. (2014) 5/21

The model Bob communicates and shares information. Bob meets Alice according to a Poisson process of rate λ/n. Every pair of nodes can meet and gossip. 6/21
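The complete-graph dynamics above can be simulated directly: when k nodes are active, the next opportunistic activation arrives at rate λ·k(n − k)/n. A minimal sketch (the function name and the choice λ = 1 are ours, not from the talk):

```python
import math
import random

def spread_time(n, k0=1, lam=1.0, rng=random):
    """Time until all n nodes are active, starting from k0 active ones.
    Each of the k*(n-k) active-inactive pairs meets at rate lam/n, so
    the next activation arrives at rate lam*k*(n-k)/n."""
    t, k = 0.0, k0
    while k < n:
        t += rng.expovariate(lam * k * (n - k) / n)
        k += 1
    return t

random.seed(0)
n = 1000
avg = sum(spread_time(n) for _ in range(200)) / 200
print(avg, 2 * math.log(n))  # expectation is 2*H_{n-1} = 2*log(n) + O(1)
```

With λ = 1 the expected full-activation time is 2H₍n−1₎ ≈ 2 log n, matching the logarithmic estimates the talk quotes for the complete graph.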

The problem There is a unit cost for pushing the rumor. Opportunistic communications have no cost. At time τ the whole graph must be active. We want a strategy that minimizes the overall number of pushes. 7/21

Introduction Model Proofs Other results and open questions Adaptive and non-adaptive A non-adaptive strategy pushes only at times t = 0 and t = τ. An adaptive strategy may push at any time, with full knowledge of the process evolution. [Figure: number of active nodes over time; non-adaptive pushes only at t = 0 and t = τ, while adaptive makes an additional push at an intermediate time t₃.] 8/21

Main result Define the adaptivity gap as the ratio between the expected costs of non-adaptive and adaptive. Theorem In the complete graph the adaptivity gap is constant. 9/21

Adaptive can be arbitrarily better With a small deadline, non-adaptive activates all of the vᵢ. Adaptive activates only the root, then at some time t pushes to the inactive vᵢ. An adaptivity gap of log k / log log k is easy to prove. [Figure: star graph with root r and leaves v₁, v₂, v₃, …, v_k.] 10/21
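A toy computation for the star: we assume each root-leaf pair meets at rate 1 (a normalization of ours), non-adaptive pushes the root and every leaf up front, and adaptive pushes the root and then, at the deadline, only the leaves that have not yet met the root. This is a deliberately simplified sketch of the slide's argument, not the talk's exact accounting:

```python
import math
import random

def star_costs(k, tau, rng):
    """One run on a star with root r and k leaves, each root-leaf pair
    meeting at rate 1 (our normalization).  Adaptive pushes the root at
    t = 0 and, at the deadline, every leaf that has not yet met the root;
    non-adaptive pushes the root and all k leaves up front."""
    inactive = sum(1 for _ in range(k) if rng.expovariate(1.0) > tau)
    return 1 + inactive, 1 + k  # (adaptive cost, non-adaptive cost)

rng = random.Random(1)
k = 10_000
tau = math.log(math.log(k))  # a deadline chosen purely for illustration
runs = [star_costs(k, tau, rng) for _ in range(50)]
gap = sum(na for _, na in runs) / sum(a for a, _ in runs)
print(gap)  # adaptive pays an order of magnitude less in this toy setup
```

The precise log k / log log k bound claimed on the slide comes from a finer analysis; the sketch only illustrates that adaptive can be much cheaper on the star.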

Non-adaptive Optimal non-adaptive pays almost the same at t = 0 and at t = τ. - A 2-approximation is easy to see. Non-adaptive does not push more than n/2 rumors. Therefore, neither does adaptive. [Figure: the activation rate λ_k := k(n − k)/n (here λ = 1) as a function of k, maximized at k = n/2.] 11/21
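The "pays almost the same at t = 0 and at t = τ" observation can be seen numerically: a non-adaptive strategy that pushes k₀ rumors at time 0 and finishes off the stragglers at τ has cost k₀ + E[inactive at τ], and at the optimal k₀ the two terms roughly balance. A simulation sketch (the values of n, τ and the grid are arbitrary choices of ours):

```python
import random

def inactive_at(n, k0, tau, rng):
    """Nodes still inactive at the deadline tau, starting from k0 active;
    with k active the next activation arrives at rate k*(n - k)/n."""
    t, k = 0.0, k0
    while k < n:
        t += rng.expovariate(k * (n - k) / n)
        if t > tau:
            break
        k += 1
    return n - k

n, tau, runs = 2000, 4.0, 200

def parts(k0):
    """Cost paid at t = 0 and expected cost paid at t = tau."""
    rng = random.Random(k0)  # fixed seed per k0, for reproducibility
    end = sum(inactive_at(n, k0, tau, rng) for _ in range(runs)) / runs
    return k0, end

best = min(range(25, n // 2, 25), key=lambda k0: sum(parts(k0)))
up, end = parts(best)
print(best, up, end)  # at the optimum the two payments are comparable
```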

Big deadline: τ ≥ (2 + δ) log n Starting from a single active node, the time until everyone is active is 2 log n + O(1). The time is exponentially concentrated. Janson (1999) Just starting with one node has cost 1 + ε; therefore adaptivity does not help. 12/21
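The 2 log n figure can be checked directly from the activation rates: with λ = 1 the k-th activation arrives at rate λ_k = k(n − k)/n, so the expected time to full activation telescopes into two harmonic sums (a standard computation, sketched here):

```latex
\mathbb{E}[T]
  = \sum_{k=1}^{n-1} \frac{1}{\lambda_k}
  = \sum_{k=1}^{n-1} \frac{n}{k(n-k)}
  = \sum_{k=1}^{n-1} \left( \frac{1}{k} + \frac{1}{n-k} \right)
  = 2H_{n-1}
  = 2\log n + O(1).
```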

Small deadline: τ ≤ 2 log log n A Poisson process of unit rate gives the randomness. Given the points S_i and S_{i+1}, the rescaled gap (S_{i+1} − S_i)/λ_i is the inter-arrival time. A push can be seen as adding a point. [Figure: unit-rate Poisson points on the time line, with the stretches between activations labeled by the rates λ_k, λ_{k+1}, …, λ_i, λ_{i+1}.] 13/21
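The rescaling construction can be made concrete: sample a unit-rate Poisson process and divide each gap by the current rate λ_i to obtain the activation times. A sketch (function name ours):

```python
import random

def arrivals_via_rescaling(rates, rng):
    """Activation times of the pure-birth process built from a unit-rate
    Poisson process: if the S_i are the unit-rate points, then the i-th
    inter-arrival time is (S_{i+1} - S_i) / lambda_i."""
    t, times = 0.0, []
    for lam in rates:
        gap = rng.expovariate(1.0)  # S_{i+1} - S_i under the unit rate
        t += gap / lam              # rescaled inter-arrival time
        times.append(t)
    return times

n = 1000
rates = [k * (n - k) / n for k in range(1, n)]  # lambda_k with lambda = 1
times = arrivals_via_rescaling(rates, random.Random(3))
print(times[-1])  # full-activation time; its expectation is 2*H_{n-1}
```

A push inserts an extra point into the underlying unit-rate process, which is what lets the clairvoyant analysis work on one fixed source of randomness.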

Small deadline (cont.): τ ≤ 2 log log n A clairvoyant strategy knows the realization, and therefore outperforms adaptive. We show that clairvoyant adds points only at the beginning. Clairvoyant chooses the best number of initial pushes, given the realization. 14/21

Small deadline (cont.): τ ≤ 2 log log n Say we start with k initial pushes. We know the inter-arrival distributions. We know the non-adaptive cost; it pays Ω(n / log n). Lemma Clairvoyant is considerably better than non-adaptive with probability at most 1/n². In this case we can prove the gap to be 1 + o(1). 15/21
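The lemma yields the 1 + o(1) gap by a crude expectation split (a sketch of ours; it assumes the cost is trivially bounded by n, so the bad event of probability at most 1/n² contributes at most 1/n):

```latex
\mathbb{E}[\mathrm{cost}_{\mathrm{C}}]
  \;\ge\; (1-o(1))\,\mathbb{E}[\mathrm{cost}_{\mathrm{NA}}]
          \;-\; n \cdot \Pr[\text{clairvoyant much better}]
  \;\ge\; (1-o(1))\,\mathbb{E}[\mathrm{cost}_{\mathrm{NA}}] - \tfrac{1}{n},
```

and since E[cost_NA] = Ω(n/log n), the ratio E[cost_NA]/E[cost_C] is 1 + o(1).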

Other deadlines Insight: adaptive interferes when the cost of pushing is less than or equal to that of not pushing, i.e., 1 + cost(k + 1 active nodes) ≤ cost(k active nodes). (∗) The expected cost remains the same (it is a martingale), thus the condition should be met only a few times. A relaxed strategy pushes for free, but under certain conditions: - Pushes only when (∗) holds. - Does not push after n/2 active nodes. Relaxed outperforms adaptive. 16/21
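The relaxed rule is easy to state in code once a cost-to-go function is available. Below, fluid_cost is a hypothetical stand-in of ours: the expected number of nodes still inactive at the deadline under the fluid (logistic) approximation of the complete-graph dynamics. Only the decision rule itself, condition (∗) plus the n/2 cap, comes from the slide:

```python
import math

def should_push(k, n, cost):
    """Relaxed rule: push only while (*) holds, i.e.
    1 + cost(k + 1) <= cost(k), and never beyond n/2 active nodes."""
    return k < n // 2 and 1 + cost(k + 1) <= cost(k)

def fluid_cost(k, n=1000, tau=3.0):
    """Hypothetical cost-to-go: expected nodes still inactive at the
    deadline, in the fluid (logistic) approximation x' = x(1 - x)."""
    x = k / n
    x_tau = x * math.exp(tau) / (1 - x + x * math.exp(tau))
    return n * (1 - x_tau)

pushes = [k for k in range(1, 1000) if should_push(k, 1000, fluid_cost)]
print(len(pushes))  # the rule pushes up to some threshold, then stops
```

Under this approximation (∗) holds exactly up to a threshold on k, which is why the relaxed strategy can be described by thresholds φ_k, as the next slide does.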

Other deadlines (cont.) Relaxed adaptive can be described by thresholds φ_k. Let K(t) be the number of active nodes at time t. We transform the process: H(L(t)) := λ_{K(t)} log( cost(K(t)) / φ_{K(t)} ). [Figure: the cost process cost(K(t)) crossing the thresholds φ_k, φ_{k+1}, φ_{k+2}, together with the time-changed process H(L(t)).] 17/21

Other deadlines (cont.) We show that each time H touches zero, relaxed wins exactly 1 compared to non-adaptive. Essentially, H(s) is dominated by s − 2 Poiss(s). The number of times H(s) touches zero is constant. 18/21

Additional results The target set version has a constant adaptivity gap. The maximization problem has a 1 + o(1) adaptivity gap. 19/21

General model We need to keep track of the set of active nodes. Even the non-adaptive problem is difficult in this setting! [Figure: a five-node example with heterogeneous rates: λ_{1,2} = 1, an edge with rate λ_{1,3}, and λ_{4,5} = 0.] 20/21
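In the general model the state is the whole active set, as a direct simulation makes clear. The rate matrix below is a made-up instance of ours echoing the figure (λ_{1,2} = 1, λ_{4,5} = 0, nodes 0-indexed), not data from the talk:

```python
import random

def spread_general(rates, seeds, tau, rng):
    """Simulate opportunistic spreading with pairwise meeting rates
    rates[u][v]; returns the set of nodes active by the deadline tau.
    Unlike the complete graph, the state is the whole active set."""
    active, t = set(seeds), 0.0
    while True:
        # meetings that would inform a new node, with their rates
        cut = [(u, v, rates[u][v]) for u in active
               for v in range(len(rates))
               if v not in active and rates[u][v] > 0]
        total = sum(r for _, _, r in cut)
        if total == 0:
            return active  # nobody new can ever be reached
        t += rng.expovariate(total)
        if t > tau:
            return active
        x = rng.uniform(0, total)  # pick an edge proportionally to rate
        for _, v, r in cut:
            x -= r
            if x <= 0:
                active.add(v)
                break

rates = [
    [0, 1.0, 0.5, 0.2, 0],
    [1.0, 0, 0, 0, 0],
    [0.5, 0, 0, 0, 0],
    [0.2, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],  # the last node's only pair has rate 0
]
active = spread_general(rates, {0}, tau=100.0, rng=random.Random(5))
print(sorted(active))  # the last node never hears the rumor for free
```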

Conjectures and open problems Is there a broader class of graphs maintaining the constant-gap result? - High conductance/connectivity. - Metric-induced rates. The additive gap for the complete graph is constant, i.e., cost_NA − cost_A = O(1). 21/21