DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING
Jay Wagenpfel, Adrian Trachte

Outline
Shortest Communication Path Searching: Bellman-Ford algorithm, algorithm for the dynamic case, modifications to our algorithm. Synchronized Task Switching: combining tasks, an algorithm for synchronized task switching, time complexity. Summary.

Communication
Setup: A network of agents transmits data to the base. Communication costs, which increase with the distance between agents, should be kept low. A routing protocol is needed to find the shortest path to the base.

Bellman-Ford Shortest Path [1]
Setup: a set of edges connected over vertices. Goal: find the shortest path from each agent to the base.
Notation:
- c_i: communication cost to base of agent i
- N_i: set of neighbors of agent i
- v_ij: communication cost from agent i to agent j, with j in N_i
- d_i: downstream neighbor of agent i
Update rule for every agent:

  c_i = min_{j in N_i} (c_j + v_ij)
  d_i = argmin_{j in N_i} (c_j + v_ij)

Bellman-Ford Example
1. Initialization: a) c_B = 0, b) c_i = infinity for every other agent.
2. Agents search for a possible new shortest path to the base using the update rule above.
(Example figure: costs to base computed step by step via the update rule; the numeric values were lost in transcription.)

[1] Richard Bellman, "On a Routing Problem".
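The per-agent update rule above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the three-agent graph and its costs are made-up example data.

```python
# Sketch of the distributed Bellman-Ford update: every agent repeatedly
# applies c_i = min_{j in N_i}(c_j + v_ij) and remembers the argmin as
# its downstream neighbor d_i.  Graph and costs are illustrative.
import math

def bellman_ford_to_base(neighbors, v, base):
    """neighbors: dict agent -> iterable of neighbor agents
    v: dict (i, j) -> communication cost from i to j
    Returns (c, d): cost-to-base and downstream neighbor per agent."""
    c = {i: math.inf for i in neighbors}
    c[base] = 0.0                        # initialization: c_B = 0, c_i = inf
    d = {i: None for i in neighbors}
    # With N agents, the costs settle after at most N - 1 sweeps.
    for _ in range(len(neighbors) - 1):
        for i in neighbors:
            if i == base:
                continue
            for j in neighbors[i]:       # c_i = min_{j in N_i}(c_j + v_ij)
                if c[j] + v[i, j] < c[i]:
                    c[i] = c[j] + v[i, j]
                    d[i] = j             # d_i = argmin_{j in N_i}(c_j + v_ij)
    return c, d

# Tiny example: chain base - a - b.
nbrs = {"base": ["a"], "a": ["base", "b"], "b": ["a"]}
cost = {("a", "base"): 1.0, ("base", "a"): 1.0,
        ("a", "b"): 2.0, ("b", "a"): 2.0}
c, d = bellman_ford_to_base(nbrs, cost, "base")
print(c["b"], d["b"])  # 3.0 a
```

Each sweep lets cost information propagate one more hop away from the base, which is why a chain of N agents needs up to N - 1 sweeps.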

Bellman-Ford Example (continued)
3. The shortest path is found after a maximum of N iterations.

Bellman-Ford Algorithm: Restrictions in the Dynamic Case
Changes compared to the static case: the topology of the network changes, and the weightings of the vertices vary over time. Two problems occur in the dynamic case:
- Looping: communication loops occur and the connection to the base gets lost.
- Longest path search: because of old information in the network, agents choose the wrong path to the base.

Looping in the dynamic case:
1) Agents are in steady state.
2) The downstream neighbor moves away; the communication cost increases.
3) Agents search for a new shortest path.
The same three steps, with stale cost information in the network, also produce the longest path search problem.

Idea of Dynamic Shortest Path Search
Problem: changing weights and time-delayed information propagation lead to loops and wrong paths. In the static case this is no problem, because there the communication cost only decreases while converging to the shortest path.
Idea: Fix the communication costs and the topology between agents and use the static computation to find the shortest path.
Advantages: finds the real shortest path for the given setup (no longest path search, no loops).
Disadvantages: needs time to converge; during this time the path is not optimal.

Realization of Dynamic Shortest Path Search
Procedure:
1) Fix the communication costs to all neighbor agents.
2) Start a new static shortest path search.
3) When the shortest path search is finished, set the newly found downstream neighbor as the new downstream neighbor.
4) Go to the first step.
Two problems:
- How do the agents know when to finish the shortest path search and start a new one with updated communication costs?
- How to ensure that the newly found shortest-path downstream neighbor is still in communication range?

First problem:
Idea: The base is a central processing unit and can therefore be used as a quasi-synchronization module to start the new search. A new search should start after the shortest path to the base has been found, so the base waits the worst-case time for the shortest path search. The new-search signal is propagated over the whole communication range from each agent and travels faster than the shortest path search itself. The base sends the new-search signal to all agents in communication range after the worst-case computation time for the shortest path.
Worst-case time for a new shortest path to the base: the worst-case topology is a connected chain, and the worst-case update order is when the agents farthest away from the base update first.
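The four-step procedure above can be sketched as a single search round. Everything here is an illustrative assumption: the function names, the trivial stand-in search, and the toy topology are placeholders, not the authors' code.

```python
# Sketch of one round of the dynamic search: freeze the communication
# costs, run a static search on the frozen snapshot, then commit the
# newly found downstream neighbors.  All names are illustrative.

def search_round(neighbors, measure_cost, static_search):
    # 1) fix communication costs to all neighbor agents
    frozen = {(i, j): measure_cost(i, j)
              for i in neighbors for j in neighbors[i]}
    # 2) run the static shortest path search on the frozen snapshot;
    #    costs cannot change mid-search, so no loops can form
    downstream = static_search(neighbors, frozen)
    # 3) only now do agents adopt the new downstream neighbors;
    # 4) the base triggers the next round after the worst-case time
    return downstream

# Stand-in "static search": every agent points at its cheapest neighbor.
def cheapest_neighbor(neighbors, costs):
    return {i: min(neighbors[i], key=lambda j: costs[i, j])
            for i in neighbors if neighbors[i]}

nbrs = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
d = search_round(nbrs, lambda i, j: 1.0, cheapest_neighbor)
print(d["b"])  # a
```

The point of the structure is that step 3 happens strictly after step 2 finishes, which is exactly what removes the looping and longest-path problems of the naive dynamic update.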
Second problem:
Idea: Agents should not move out of communication range during the shortest path search. Assume a maximum agent speed v_max; the worst case is when two agents move in opposite directions with maximum speed during the whole worst-case computation time. This gives the safety radius

  R_S = R - 2 * v_max * (N - 1) * Δt,

where Δt is the maximum time within which a communicator updates (all communicators update at least once within the timespan Δt), and

  t_total = (N - 1) * Δt

is the worst-case time to find the shortest path.
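The safety-radius bound above is easy to evaluate numerically. The parameter values below are made-up examples chosen only to show the formula in action (the slides' own values were lost in transcription).

```python
# Sketch of the safety-radius bound R_S = R - 2*v_max*(N-1)*dt.
# Parameter values are illustrative, not the slides' values.

def safety_radius(R, v_max, N, dt):
    """Range within which a downstream neighbor is guaranteed to stay
    reachable during one worst-case search of (N - 1)*dt seconds."""
    t_total = (N - 1) * dt          # worst-case time to find shortest path
    return R - 2.0 * v_max * t_total

# Two agents moving apart at v_max can add 2*v_max*t_total of distance,
# so only neighbors inside R_S are safe choices for the next search.
print(round(safety_radius(R=20.0, v_max=1.0, N=40, dt=0.2), 2))  # 4.4
```

Note how quickly the bound shrinks: with 40 agents, more than three quarters of a 20 m communication range is already consumed by the worst-case motion during one search.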

Dynamic Shortest Path Search: Analysis
(Simulation figure, based on Li et al. [2]: dynamic shortest path search with parameters N, Δt = 0.2 s, v_max, R; the remaining digits of the parameter values, R_S, and t_total were lost in transcription.)

Dynamic Shortest Path Search: Limitations
- Worst-case waiting time (N - 1) * Δt: increases with a growing number of agents N and computation time Δt. The time between new searches becomes too big, and therefore the error increases.
- Limited neighbor range R_S = R - 2 * v_max * (N - 1) * Δt: decreases with growing N, Δt, and v_max. R_S may become too small for a proper shortest path search.

High Search Frequency: Motivation
Idea: Increase the frequency with which a new search starts. The safety radius R_S would then be bigger and the time between searches smaller.
(Simulation figure: regular search vs. higher new-search frequency, again based on Li et al. [2]; step counts lost in transcription.)

[2] Li et al., "Distributed Cooperative Coverage Control of Sensor Networks".

High Search Frequency
Error in % of the optimal communication cost to base:

  error = (realCCtB - optCCtB) / optCCtB * 100

For high-frequency search, the shortest path is not guaranteed!

Discussion
Regular search:
- Positive: finds the shortest path using only local information (no looping etc.).
- Negative: strong dependence on the number of agents etc.; error accumulates while waiting for the worst-case convergence time.
High-frequency search:
- Positive: reduces the distance to the optimum.
- Negative: the shortest path is not guaranteed to be found within the computation time; rough knowledge of the topology is needed.

Synchronized Task Switching
Outline: combining tasks, an algorithm for synchronized task switching, time complexity, simulation results.

Combining Tasks
Coverage control: maximizing the probability of detecting events, so that the most important areas of the mission space are well covered.
Exploration of the mission space: use of deployment algorithms to maximize the area covered by all agents.
Combine both tasks: first explore the mission space, then cover the most important areas. This enables the agents to cover areas that are unreachable if only coverage control is used. The task switch happens when the exploration task is finished.

Combining Tasks (continued)
How do the agents know that the exploration task is finished? For each agent, only local information is available, but information about all agents (= global information) is necessary. Consensus-like algorithms enable each agent to determine the state of the network. The task switch is performed when all agents agree that the exploration task is finished.

Notations
Bidirectional communication between agent i and its neighbors N_i:

  N_i(k) = { j in {1, …, n} : ||s_i - s_j|| < R }

The communication topology is an undirected graph G.
A: adjacency matrix of G, with a_ij = a_ji = 1 if j in N_i(k).
D: degree matrix of G, with d_i = sum_j a_ij.
State variables for consensus:
- z_i: task state of agent i, with z_i = 1 if agent i has finished the first task
- x_i: consensus state

An Algorithm
Assumptions:
- There exists a time k_1 such that A(k) = A(k_1) for all k >= k_1.
- There exists a time k_2 >= k_1 such that z(k) = 1 for all k >= k_2.
- If A(k+1) ≠ A(k), that is, there exists i such that N_i(k+1) ≠ N_i(k), then z_i(k+1) = 0 even if agent i has finished the first task.
More notations:
- Z = diag(z(k)); I is the n×n identity matrix; 1 is the vector with all elements equal to 1.
- d_i = |N_i| is the cardinality of the set N_i.

Algorithm: If z_i = 1, each agent sets its consensus variable x_i to the average value of the sum of its own task state z_i and the consensus states of its neighbors; else it sets x_i = 0. Update rule for each agent:

  x_i(k+1) = z_i / (d_i + 1) * ( sum_{j in N_i} x_j(k) + 1 )

For the whole network:

  x(k+1) = Z (D + I)^{-1} [ A x(k) + 1 ]

State of the network and task switch:
- If at least one agent has not finished the first task, x_i(k) < 1 for all agents.
- If all agents have finished the first task, then z(k) = 1 and, for every agent, x_i(k) → 1 as k → ∞.
- The task switch is performed when x_i is sufficiently close to 1.
Convergence of the algorithm: for constant z(k), the system is an asymptotically stable LTI system with constant input. There is always one unique equilibrium point x_EP, and x_EP = 1 for z = 1.

Threshold for Task Switch
When is x_i sufficiently close to 1? The task switch happens if x_i > δ, with δ < 1. If δ is too small, a false task switch might happen. How to determine δ? Derive it from the static case where no topology changes happen.
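The per-agent update rule above can be checked on a tiny example. The three-agent chain and the run length are illustrative choices, not from the slides; the sketch only demonstrates that x converges to the equilibrium 1 when every z_i = 1.

```python
# Sketch of the consensus update
#   x_i(k+1) = z_i/(d_i + 1) * (sum_{j in N_i} x_j(k) + 1)
# on a made-up 3-agent chain where all agents finished the first task.

def consensus_step(x, z, neighbors):
    return [z[i] * (sum(x[j] for j in neighbors[i]) + 1.0)
            / (len(neighbors[i]) + 1.0)
            for i in range(len(x))]

neighbors = [[1], [0, 2], [1]]   # chain topology 0 - 1 - 2
z = [1.0, 1.0, 1.0]              # every agent finished the first task
x = [0.0, 0.0, 0.0]
for _ in range(200):
    x = consensus_step(x, z, neighbors)
print(all(abs(xi - 1.0) < 1e-9 for xi in x))  # True: equilibrium x_EP = 1
```

Setting any z_i = 0 in this sketch pins x_i to 0 and keeps every other consensus value strictly below 1, which is exactly the property the task switch relies on.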
It can then be shown that, even under a switching topology, x_i(k) is never larger than the maximum value max_i x_i^wc attained in the worst-case static topology.

Time Complexity
In [3] the term time complexity is introduced: the time complexity TC is the time an algorithm needs to perform, depending on the number of agents n. For the task switch, a sensible notion is the time from when the last agent finishes the first task until the last agent starts the second task.

Upper bound: an upper bound on the order of the time complexity is given by

  TC_A ∈ O( ln(1 - δ(n)) / ln((n-1)/n) )

Proof: At step k, let i be the agent such that x_min(k) := x_i(k) <= x_j(k) for all agents j. Then, in the next step, for agent i:

  x_i(k+1) = 1/(d_i + 1) * ( sum_{j in N_i(k)} x_j(k) + z_i(k) ) >= 1/(d_i + 1) * ( d_i * x_i(k) + 1 )

The smallest possible value for x_i(k+1) is obtained by maximizing the number of neighbors:

  x_i(k+1) >= 1/n * ( (n-1) * x_i(k) + 1 )

The value x_min(k+1) := x_i(k+1) provides a lower bound on the consensus values of all agents j in step k+1:

  x_j(k+1) >= x_min(k+1) = 1/n * ( (n-1) * x_min(k) + 1 )

This can easily be seen: suppose there exists x_l(k+1) < x_min(k+1). Then

  x_l(k+1) = 1/(d_l + 1) * ( sum_{j in N_l(k)} x_j(k) + 1 )
           >= 1/(d_l + 1) * ( d_l * x_min(k) + 1 )
           >= 1/n * ( (n-1) * x_min(k) + 1 ) = x_min(k+1),

a contradiction.

Proof of the time complexity: the lower bound on the consensus values of all agents can generally be described by

  x_min(k+1) = 1/n * ( (n-1) * x_min(k) + 1 )

The solution to this difference equation for k >= k_2 is:

  x_min(k) = 1 - ((n-1)/n)^(k - k_2) * (1 - x_min(k_2))

With the switching condition x_i > δ and k_T the step at which all agents have switched to the second task, it follows that

  TC = k_T - k_2 <= ln(1 - δ(n)) / ln((n-1)/n)

[3] Martínez, Bullo, Cortés, Frazzoli, "On synchronous robotic networks — Part II: Time complexity of rendezvous and deployment algorithms".

Simulations
Simulation of the task switch for different topologies and numbers of agents n in the task before the switch (plots of the consensus value over time; axis values lost in transcription):
- Chain topology: waiting for only one agent vs. waiting for all agents.
- Random topology: waiting for one agent vs. waiting for all agents.
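The difference equation and its closed-form solution above can be verified numerically, and the step count can be compared against the stated TC bound. The values of n and δ below are arbitrary illustration choices.

```python
# Check the lower-bound recursion x_min(k+1) = ((n-1)*x_min(k) + 1)/n
# against its closed form x_min(k) = 1 - ((n-1)/n)**(k-k2)*(1 - x_min(k2)),
# then count steps until the switching threshold delta is crossed.
import math

n, x0 = 10, 0.0                 # x0 plays the role of x_min(k_2)
x_rec = x0
for _ in range(50):
    x_rec = ((n - 1) * x_rec + 1.0) / n          # recursion, 50 steps
closed = 1.0 - ((n - 1) / n) ** 50 * (1.0 - x0)  # closed form at k_2 + 50
print(abs(x_rec - closed) < 1e-12)  # True: recursion matches closed form

delta = 0.99
steps, x = 0, x0
while x <= delta:               # iterate until the switching condition holds
    x = ((n - 1) * x + 1.0) / n
    steps += 1
bound = math.log(1.0 - delta) / math.log((n - 1) / n)
print(steps, math.ceil(bound))  # 44 44: step count matches the TC bound
```

Since ln((n-1)/n) ≈ -1/n for large n, the bound behaves like n·ln(1/(1-δ)), i.e. the waiting time for the task switch grows roughly linearly in the number of agents for a fixed threshold.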

Simulation Results
Joint event detection rate, without exploration vs. with exploration (the task switch is marked in the with-exploration run; axis values lost in transcription). The combined strategy reaches a higher detection rate than coverage control alone.

Summary
- We discussed problems in searching the shortest communication path in the dynamic case.
- We presented an algorithm to compute the shortest path in the dynamic case.
- We introduced an algorithm to synchronize a task switch in a distributed network.
- We discussed the time complexity of the presented algorithm.
- We presented an example simulation to show the improved performance of the combined tasks.