Australian National University WORKSHOP ON SYSTEMS AND CONTROL Canberra, AU December 7, 2017

A Distributed Algorithm for Finding a Common Fixed Point of a Family of Paracontractions and Some of Its Applications
A. S. Morse (Yale University), Lili Wang, Daniel Fullmer, Ji Liu
Canberra, AU, December 7, 2017

The Network
m > 1 agents with labels 1, 2, ..., m. Each agent i has its own reception radius r_i; the agents within that radius are the neighbors of agent i. Each agent is a neighbor of itself.

Motivating Problem: Solving Ax = b over a network of m agents

Each agent i knows a pair of real matrices (A_i, b_i), where A_i is n_i × n, b_i is n_i × 1, and A and b are the stacked matrices A = [A_1; A_2; ...; A_m], b = [b_1; b_2; ...; b_m].
Standing assumption: at least one solution to the equation Ax = b exists. There are no other assumptions about the A_i and b_i.
Time is discrete: t ∈ {1, 2, ...}. Let N_i(t) denote the set of labels of agent i's neighbors at time t. At each time t, agent i stores an n × 1 state vector x_i(t) which it can update, receives the state vectors x_j(t), j ∈ N_i(t), of each of its current neighbors, and knows nothing more.

Problem: Devise local update rules for the x_i which will cause all m agents to iteratively arrive at the same solution to Ax = b.

Motivating Problem: Algorithm

x_i(t+1) = M_i( (1/n_i(t)) Σ_{j ∈ N_i(t)} x_j(t) ),   where   M_i(x) = x − A_i' G_i (A_i x − b_i),

n_i(t) = number of labels in N_i(t), and G_i is a positive definite gain matrix chosen small enough so that spectrum(I − A_i' G_i A_i) ⊂ (−1, 1]. With this choice the fixed points of M_i are exactly the solutions of A_i x = b_i; therefore any solution to Ax = b is a common fixed point of the M_i and conversely.

Can a distributed algorithm be constructed for computing a common fixed point for other types of possibly nonlinear M_i?
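A minimal sketch (not the speaker's code) of how the map M_i and the gain-size condition just described might be realized; the function name, NumPy usage, and tolerance are my own choices.

```python
import numpy as np

def make_local_map(Ai, bi, Gi):
    """Return M_i(x) = x - Ai' Gi (Ai x - bi) after checking the slide's condition
    that the spectrum of (I - Ai' Gi Ai) lies in (-1, 1]."""
    n = Ai.shape[1]
    eigs = np.linalg.eigvals(np.eye(n) - Ai.T @ Gi @ Ai).real
    assert eigs.min() > -1 and eigs.max() <= 1 + 1e-12, \
        "choose the positive definite gain Gi smaller"
    return lambda x: x - Ai.T @ Gi @ (Ai @ x - bi)
```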

Problem Generalization

Each agent i knows a (possibly nonlinear) map M_i : R^n → R^n. Suppose the M_i have at least one common fixed point y; that is, M_i(y) = y for all i ∈ {1, 2, ..., m}. Each agent i generates a state x_i(t) ∈ R^n which is its current estimate of y, and receives the state x_j(t) of each of its current neighbors j ∈ N_i(t).

Problem: Devise a distributed recursive algorithm which enables all m agents to asymptotically compute the same common fixed point of the M_i.

But for what kinds of M_i? Each M_i is a paracontraction with respect to the 2-norm.

Paracontraction

A map M : R^n → R^n is a paracontraction with respect to a given norm |·| on R^n if it is continuous and |M(x) − y| < |x − y| for all y ∈ R^n satisfying M(y) = y and all x ∈ R^n satisfying M(x) ≠ x. Thus if M is a paracontraction and x is not a fixed point, then M(x) is closer to the set of fixed points than x is.

Paracontraction Example: Orthogonal Projection

The orthogonal projection M onto a closed convex set C, with respect to the 2-norm, is a paracontraction; the fixed points of M are the vectors in C. Such projections can be used to find a vector in the intersection C_1 ∩ C_2 ∩ ... ∩ C_m of m closed convex sets.
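A quick numerical illustration (my example, not from the slides): the 2-norm projection onto a closed ball, together with a check of the paracontraction inequality for a point outside the ball.

```python
import numpy as np

def project_ball(x, center, radius):
    """Orthogonal (2-norm) projection onto the closed ball C = {z : |z - center| <= radius};
    its fixed points are exactly the points of C."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

x = np.array([3.0, 0.0])                 # outside the unit ball, hence not a fixed point
y = np.array([0.5, 0.0])                 # inside the unit ball, hence a fixed point
Mx = project_ball(x, np.zeros(2), 1.0)
assert np.linalg.norm(Mx - y) < np.linalg.norm(x - y)   # |M(x) - y| < |x - y|
```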

The Problem (again)

Paracontractions M_1, M_2, ..., M_m with a common fixed point y. Agent i has a state x_i(t) which is to be its current estimate of a common fixed point; agent i knows M_i and receives the state x_j(t) of each of its current neighbors j ∈ N_i(t).

Problem: Devise a distributed recursive algorithm which enables all m agents to asymptotically compute the same common fixed point of the M_i.

The Algorithm

x_i(t+1) = M_i( (1/n_i(t)) Σ_{j ∈ N_i(t)} x_j(t) ),

where n_i(t) is the number of labels in N_i(t), the set of labels of agent i's neighbors at time t.

Generalization: replace the simple average by a weighted average with weights w_ij(t), requiring that for each i and j, and all t ≥ 1, w_ij(t) ∈ W_ij = a finite set.
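A minimal sketch of one synchronous step of this update as written above; the data layout and function name are my own, and each agent is assumed to include itself in its neighbor set, per the earlier slide.

```python
def step(x, neighbors, maps):
    """x[i]: agent i's state vector; neighbors[i]: the labels in N_i(t) (including i itself);
    maps[i]: the paracontraction M_i. Returns the list of x_i(t+1)."""
    x_next = []
    for i, Mi in enumerate(maps):
        avg = sum(x[j] for j in neighbors[i]) / len(neighbors[i])   # (1/n_i(t)) * sum over N_i(t)
        x_next.append(Mi(avg))
    return x_next
```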

Neighbor Graph

G = the set of all directed graphs with vertex set V = {1, 2, ..., m}.
N(t) = the graph in G with an arc from j to i whenever j ∈ N_i(t), i ∈ {1, 2, ..., m}; that is, an arc from j to i means that j is a neighbor of i.

Main Result

Suppose that, for some p with 1 < p < ∞, the maps M_1, M_2, ..., M_m are all paracontractions with respect to the same p-norm and, in addition, that the M_i all share at least one common fixed point. Then for any sequence of strongly connected neighbor graphs N(t), t ≥ 1, all x_i(t) converge to the same point as t → ∞, and this point is a fixed point of all of the M_i. {NOLCOS 2016}

This result also holds if the sequence of neighbor graphs N(t), t ≥ 1, is only repeatedly jointly strongly connected: for some finite integer T > 0, the union of each successive subsequence of T graphs in the sequence N(t), t ≥ 1, is strongly connected.

This result also holds if the agents do not share a common clock and updates are performed asynchronously. {CDC 2016}
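For concreteness, here is one way to check the "repeatedly jointly strongly connected" condition on a finite window of neighbor graphs; the function name, the use of networkx, and the reading of "successive subsequence" as non-overlapping blocks of length T are my assumptions.

```python
import networkx as nx

def repeatedly_jointly_strongly_connected(graphs, T):
    """graphs: a finite window of the sequence N(t), each an nx.DiGraph on the same vertex set."""
    blocks = [graphs[k:k + T] for k in range(0, len(graphs) - T + 1, T)]
    # The union (edge-wise composition) of each block of T consecutive graphs
    # must itself be strongly connected.
    return all(nx.is_strongly_connected(nx.compose_all(block)) for block in blocks)
```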

Elsner, Koltracht, Neumann (1992): Suppose P is a finite set of paracontractions P : R^n → R^n with respect to some norm on R^n, and suppose the paracontractions share a common fixed point. Then the state of the iteration x(t+1) = P_t(x(t)), P_t ∈ P, converges to a common fixed point of those paracontractions which occur in the sequence P_t, t ≥ 1, infinitely often.
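A toy instance of this result (my own example): cycling between two hyperplane projections, each a paracontraction with respect to the 2-norm, drives the iterate to a point fixed by both.

```python
import numpy as np

def proj_hyperplane(a, b):
    """2-norm projection onto {x : a.x = b}; a paracontraction whose fixed points are that set."""
    return lambda x: x - (a @ x - b) / (a @ a) * a

P = [proj_hyperplane(np.array([1.0, 0.0]), 1.0),     # fixed points: x1 = 1
     proj_hyperplane(np.array([1.0, 1.0]), 3.0)]     # fixed points: x1 + x2 = 3
x = np.array([10.0, -4.0])
for t in range(200):
    x = P[t % 2](x)          # both maps occur infinitely often in the sequence
print(x)                      # -> approximately [1, 2], their common fixed point
```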

Simulation: Solving Ax = b

Each G_i is a positive definite gain matrix chosen small enough so that spectrum(I − A_i' G_i A_i) ⊂ (−1, 1]. Assume for simplicity that each A_i has linearly independent rows.

Two simulations were run: one with ..., the other with ...
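Since the simulation data and plots are not reproduced in this transcript, here is a self-contained toy run in the same spirit (my own data and neighbor graph, not the talk's): three agents, each knowing one row of an invertible A, with G_i = (A_i A_i')^{-1}, which satisfies the spectrum condition above and makes each M_i the orthogonal projection onto {x : A_i x = b_i}.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))                       # agent i knows row A[i] and b[i]
x_true = rng.standard_normal(3)
b = A @ x_true
neighbors = [[0, 1], [1, 2], [2, 0]]                  # fixed, strongly connected; each agent includes itself
x = [rng.standard_normal(3) for _ in range(3)]        # arbitrary initial estimates
for _ in range(2000):
    avg = [sum(x[j] for j in neighbors[i]) / len(neighbors[i]) for i in range(3)]
    x = [avg[i] - A[i] * (A[i] @ avg[i] - b[i]) / (A[i] @ A[i]) for i in range(3)]
print(max(np.linalg.norm(xi - x_true) for xi in x))   # all estimates agree on the solution
```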

Application: Distributed State Estimation

n-dimensional Process

ẋ = Ax,   y_i = C_i x,   i ∈ {1, 2, ..., m}

Agent i knows C_i and A and measures y_i. Joint observability: (C, A) is an observable pair, where C = [C_1; C_2; ...; C_m].

Objective: Devise a family of m estimators, one for each agent, whose outputs x_i, i ∈ {1, 2, ..., m}, are all asymptotically correct estimates of x.

Suppose that each agent i generates its estimate x_i as the output of an n_i-dimensional linear time-invariant system with state z_i.
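A small sketch of the joint-observability check implied here (my own helper; the Kalman rank test is standard): no single (C_i, A) need be observable, only the stacked pair.

```python
import numpy as np

def observable(C, A, tol=1e-9):
    """Kalman rank test: (C, A) is observable iff rank [C; CA; ...; CA^(n-1)] = n."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O, tol=tol) == n

# Joint observability of the stacked pair, e.g.:
# observable(np.vstack([C1, C2, C3, C4]), A)
```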

Suppose further that each agent i broadcasts its state z_i and its measurement y_i; thus estimator i receives z_j and y_j for j ∈ N_i, along with its own measurement y_i, and generates an output x_i ≈ x.

Limitations of existing designs:
Parks & Martins, TAC 2017: cannot be used with most time-varying graphs.
Wang, Fullmer, Morse, ACC 2017: not resilient.

Hybrid Estimator = local observers + linear equation solvers

Local observer for agent i: first construct a classical observer for each agent i which is capable of correctly estimating that part of the process state, namely L_i x, which is observable to agent i. Here L_i is any n_i × n matrix whose kernel is the unobservable space of (C_i, A). Let (C̄_i, Ā_i) be the unique pair of matrices which satisfy the linear equations C_i = C̄_i L_i and L_i A = Ā_i L_i, and note that (C̄_i, Ā_i) is an observable pair. Choose K_i to make the convergence rate of Ā_i + K_i C̄_i as large as desired.

Local observer i: ż_i = (Ā_i + K_i C̄_i) z_i − K_i y_i.
Estimation error: e_i = z_i − L_i x satisfies ė_i = (Ā_i + K_i C̄_i) e_i.
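Under the reading above (rows of L_i span the orthogonal complement of the unobservable subspace of (C_i, A)), here is a minimal numerical sketch of the construction; the code and function name are my own, not the speaker's.

```python
import numpy as np

def local_observer_matrices(Ci, A, tol=1e-9):
    """Return (Li, Abar_i, Cbar_i) with ker(Li) = unobservable subspace of (Ci, A),
    Li A = Abar_i Li and Ci = Cbar_i Li."""
    n = A.shape[0]
    O = np.vstack([Ci @ np.linalg.matrix_power(A, k) for k in range(n)])
    _, s, Vt = np.linalg.svd(O)
    ni = int(np.sum(s > tol))
    Li = Vt[:ni, :]              # orthonormal rows spanning the row space of O
    Abar = Li @ A @ Li.T         # unique solution of Li A = Abar Li (ker Li is A-invariant)
    Cbar = Ci @ Li.T             # unique solution of Ci  = Cbar Li
    return Li, Abar, Cbar

# (Cbar_i, Abar_i) is observable; pick K_i (e.g. by pole placement) so that Abar_i + K_i Cbar_i
# converges as fast as desired, then run the local observer
#   dz_i/dt = (Abar_i + K_i Cbar_i) z_i - K_i y_i,   so that z_i - L_i x -> 0.
```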

Pick a set of equally spaced event times t_0, t_1, t_2, ..., where t_0 = 0 and t_j − t_{j−1} = T, j ≥ 1. Generate agent i's estimate x_i(t) of x(t) as follows:
1. For each fixed j > 0, iterate the previously discussed linear equation solver q times within the interval [t_{j−1}, t_j) to obtain an estimate z_ij of a parameter p_j, which is assumed to satisfy a set of linear equations given on the slide.
2. Take z_ij to be an after-the-fact estimate of x(t_{j−1}).
3. Between event times, generate x_i(t) from this after-the-fact estimate.
One can prove that, with q properly chosen and with appropriate network connectivity (e.g., strong connectivity for all t), all x_i(t) converge to x(t) exponentially fast at any pre-specified rate. {CDC 2017}

Additional Properties
Robustness: The estimation is robust to small differences in the agents' event times, provided A is a stability matrix.
Asynchronism: The linear equation solver computations can be performed asynchronously.
Resilience: With enough redundancy, the overall estimator is "resilient."

Resilience
By a resilient algorithm for a distributed process is meant an algorithm which, by exploiting built-in network and data redundancies, is able to continue to function correctly in the face of abrupt changes in the number of nodes and edges in the inter-agent communication graph upon which the algorithm depends. Such changes might arise as a result of a network communication failure, a component failure, a sensor temporarily being put to sleep to conserve energy, or even a malicious attack. Distributed estimators which are linear time-invariant systems are not resilient. It is easy to see that the hybrid estimator just described is.

Simulations

System: ẋ = Ax + bv, v = noise, y_i = C_i x, i ∈ {1, 2, 3, 4}; 4 agents; stable; 4-dimensional with eigenvalues at −0.1, −0.1, −0.05 ± j0.614.

Simulations

Same system: ẋ = Ax + bv, y_i = C_i x, i ∈ {1, 2, 3, 4}.
Redundantly jointly observable: remains jointly observable if any one agent dies.
Neighbor graph: redundantly strongly connected, i.e., it remains strongly connected if any one vertex is removed; it is not a complete graph.
One simulation with v a sinusoid, v = 7 cos(10t); one simulation with v white noise (zero mean, variance 1).
In both simulations, agent 4 leaves the network at t = 5 and returns at t = 7.
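A small sketch (my own, using networkx) of the redundancy property claimed for this neighbor graph: strong connectivity must survive the removal of any one vertex.

```python
import networkx as nx

def redundantly_strongly_connected(G):
    """True if the directed graph G is strongly connected and remains so after
    deleting any single vertex (together with its edges)."""
    if not nx.is_strongly_connected(G):
        return False
    return all(nx.is_strongly_connected(G.subgraph(set(G) - {v})) for v in G)
```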

[Plot: 3rd components of x and x_1 (the state of estimator 1) for ẋ = Ax + bv, y_i = C_i x, i ∈ {1, 2, 3, 4}, with v a sine wave; agent 4 leaves the network at t = 5 and returns at t = 7.]

[Plot: 3rd components of x and x_1 (the state of estimator 1) for the same system with v white noise; agent 4 leaves the network at t = 5 and returns at t = 7.]

Why is Stability Needed for Robustness?

Process: ẋ = Ax with A unstable. When the estimator is perturbed (e.g., by small differences in the agents' event times), the estimation error equation is driven by a signal which is unbounded if A is unstable. Therefore, for state estimation problems for processes without controlled inputs, internal stability must be assumed if the estimation problem is to make sense. The same conclusion applies to distributed state estimation problems.

Concluding Remarks
Concerning paracontractions, it would be interesting to
1. derive necessary conditions on neighbor graph sequences for convergence,
2. derive worst-case bounds on the convergence rate,
3. discover other types of maps which are paracontractions.
Concerning hybrid estimators, it would be interesting to
1. study what happens if different agents have different clocks,
2. take into account transmission delays across the network.
Concerning linear time-invariant distributed observers, it would be interesting to
1. prove that they cannot be resilient.