Communication constraints and latency in Networked Control Systems

Communication constraints and latency in Networked Control Systems
João P. Hespanha, Center for Control Engineering and Computation, University of California, Santa Barbara
In collaboration with Antonio Ortega (USC) and Yonggang Xu (UCSB)

Outline
1. Feedback control over a digital channel with limited bit-rate (process, encoder, digital channel, decoder).
2. Decentralized cooperative control with limited message-rate and delays (multiple processes and nodes connected by a communication network).

Control with finite bit-rate
Setup: a process with control input u(t), disturbance d(t), measurement noise n(t), and output y(t); an encoder sends bits b_k ∈ {0,1} at times t_0, t_1, t_2, ... over a digital channel; a decoder produces the reconstructed output y_q(t) used by the controller. The average bit-rate is the number of bits transmitted per unit of time.
Motivation: control of systems with sensors and actuators far from each other, connected by a digital network, e.g., control of an autonomous flying vehicle using measurements from a camera on the ground.

Control with finite bit-rate (cont.)
Questions:
1. What is the minimum bit-rate for which stabilization (boundedness) is possible?
2. How to divide the bits among the distinct components of the output?
3. How to choose the quantization intervals?

Minimum bit-rate
Setting: no noise or disturbance; the encoder observes y(t), sends bits b_k ∈ {0,1}, and the decoder reconstructs y_q(t), from which u(t) is computed.
Theorem [Tatikonda & Mitter]: Stabilization is not possible with average bit-rate smaller than
  r_min = (1/ln 2) Σ_{Re λ_i ≥ 0} Re λ_i   (continuous-time process)
  r_min = Σ_{|λ_i| ≥ 1} log_2 |λ_i|        (discrete-time process)
where the λ_i are the eigenvalues of A.
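
For illustration (not part of the talk), a short Python snippet that evaluates these bounds from the eigenvalues of an assumed A; the function names are mine.

```python
# Sketch: evaluate the minimum average bit-rate bounds from the eigenvalues of A.
import numpy as np

def r_min_continuous(A):
    """Sum of the real parts of the unstable eigenvalues, in bits per second."""
    lam = np.linalg.eigvals(A)
    return sum(l.real for l in lam if l.real >= 0) / np.log(2)

def r_min_discrete(A):
    """Sum of log2|lambda_i| over the eigenvalues outside the unit circle."""
    lam = np.linalg.eigvals(A)
    return sum(np.log2(abs(l)) for l in lam if abs(l) >= 1)

# Example: the scalar unstable process dx/dt = x gives r_min = 1/ln(2) ≈ 1.44 bits/s.
print(r_min_continuous(np.array([[1.0]])))
```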

Proof outline
If A is stable, x decays to zero without control, so the minimum bit-rate is zero (just set u ≡ 0). More generally, the component of x in the asymptotically stable invariant subspace of A goes to zero without control, so we can restrict A to its unstable invariant subspace. From here on we assume that all eigenvalues of A have real part ≥ 0.

Proof outline (cont.)
Take y(t) = x(t). Let Ω_k denote the set to which x(t_k) is known to belong after bit b_k is received: Ω_0 for x(t_0) after b_0, Ω_1 for x(t_1) after b_1, and so on.

Proof outline (cont.)
Three cases for the sequence of sets Ω_k, with ρ(Ω_k) denoting the diameter of Ω_k:
Case 1: For some k, Ω_k has a single element ⇒ the controller could take x to zero (even in finite time) after t_k.
Case 2: As k → ∞, ρ(Ω_k) → 0 ⇒ the controller could take x to zero as k → ∞.
Case 3: As k → ∞, ρ(Ω_k) is unbounded ⇒ no matter what control we use, x(t_k) is unbounded.

Proof outline (cont.)
Main idea: if the average bit-rate is smaller than r_min, then the volume µ(Ω_k) is unbounded, hence the diameter ρ(Ω_k) is unbounded, and stabilization is not possible (Case 3).

Proof outline (cont.)
Between consecutive bits the set of possible states is propagated by the flow: before b_{k+1} is received, it is only known that x(t_{k+1}) belongs to the image of Ω_k under the flow over [t_k, t_{k+1}], shifted by the effect of the (known) control u. The volume of this set is e^{trace(A)(t_{k+1} − t_k)} µ(Ω_k).

Proof outline (cont.)
Q: Which coding makes µ(Ω_{k+1}) as small as possible?
A: Divide the propagated set into two sets of equal volume and use bit b_{k+1} to indicate which one contains x(t_{k+1}). Even this volume-minimizing coding can do no better than
  µ(Ω_{k+1}) ≥ (1/2) e^{trace(A)(t_{k+1} − t_k)} µ(Ω_k).

Proof outline (cont.)
At best, and only for the volume-minimizing coding, the bound holds with equality. Iterating,
  µ(Ω_k) ≥ 2^{−k} e^{trace(A)(t_k − t_0)} µ(Ω_0),
which explodes if the average bit-rate k/(t_k − t_0) is smaller than trace(A)/ln 2; since all remaining eigenvalues have Re λ_i ≥ 0, this threshold is exactly r_min.
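
For reference, the volume bookkeeping above can be summarized compactly (notation as in the slides; µ denotes Lebesgue volume, and the last identity uses the assumption that all remaining eigenvalues have nonnegative real part):

```latex
% Volume recursion behind the converse argument:
\mu(\Omega_{k+1}) \;\ge\; \tfrac12\, e^{\operatorname{tr}(A)\,(t_{k+1}-t_k)}\,\mu(\Omega_k)
\quad\Longrightarrow\quad
\mu(\Omega_k) \;\ge\; 2^{-k}\, e^{\operatorname{tr}(A)\,(t_k-t_0)}\,\mu(\Omega_0),
% which stays bounded only if the average bit-rate satisfies
\frac{k}{t_k-t_0} \;\ge\; \frac{\operatorname{tr}(A)}{\ln 2}
\;=\; \sum_i \operatorname{Re}\lambda_i\,\log_2 e \;=\; r_{\min}.
```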

Minimum bit-rate (achievability)
Theorem: Assume A is diagonalizable.
1. It is possible to keep the state bounded with any average bit-rate larger than or equal to r_min.
2. It is possible to make the state converge to zero with any average bit-rate strictly larger than r_min.
So the previous bound is tight. However, the minimum-volume coding used in the converse proof may not work in practice: unbounded volume implies unbounded diameter, but bounded volume does not imply bounded diameter.

Encoding/decoding schemes
Setup: process with disturbance d(t) and noise n(t); at each sampling time (fixed sampling interval T_s) the encoder transmits a word w_k ∈ {1, ..., N}, so the bit-rate is log_2(N)/T_s.
Assume the closed loop is stable under transparent encoding/decoding, i.e., A + B K is asymptotically stable.
Questions:
1. How to design the encoder/decoder pair to make the closed loop stable?
2. How much larger than r_min does the bit-rate need to be for stability?

State-prediction coding
Inspired by Differential Pulse Code Modulation (DPCM):
1. Encoder and decoder maintain consistent estimates of the state, based only on the quantized information sent to the decoder.
2. At sampling times, the difference between the measured state and its estimate (based on previously transmitted data) is quantized and transmitted digitally (hopefully the state-estimation error has a smaller dynamic range than the state itself).
3. Upon transmission, the state estimates are corrected using the quantized error.

State-prediction coding (details)
Encoder and decoder each run a copy of the process model, driven by the control input and corrected at every sampling time: the difference between the measured state and the predicted state is passed through a quantizer Q (A2D), the resulting word is transmitted, and both copies add the quantized error to their estimate. Without quantization (Q = identity) and without noise, the corrected estimates would match the state exactly at the sampling times.

Encoder/decoder pair
Block diagram: the measured output y is quantized (A2D) at the encoder, sent as a word over the channel, and converted back (D2A) at the decoder; encoder and decoder run identical model-based estimators that are corrected by the quantized error, following steps 1 to 3 above. A minimal sketch of this recursion is given below.
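
Below is a minimal, self-contained sketch (mine, not the paper's code) of the state-prediction recursion for an assumed scalar sampled process x+ = a·x + b·u + d, with a hypothetical uniform quantizer; it only illustrates the structure of steps 1 to 3.

```python
# Sketch of DPCM-style state-prediction coding for a scalar sampled process.
import numpy as np

def uniform_quantizer(e, K, delta):
    """K-level uniform quantizer on [-delta, delta]; returns index and value."""
    step = 2 * delta / K
    idx = int(np.clip(np.floor((e + delta) / step), 0, K - 1))
    return idx, -delta + (idx + 0.5) * step          # reconstruction level

a, b, k_gain = 1.2, 1.0, -1.0                        # assumed process and gain (a + b*k stable)
K, delta = 8, 2.0                                    # quantizer parameters
x, xhat_enc, xhat_dec = 3.0, 0.0, 0.0                # true state and the two estimate copies
rng = np.random.default_rng(0)

for _ in range(50):
    idx, eq = uniform_quantizer(x - xhat_enc, K, delta)  # quantize the prediction error
    xhat_enc += eq                                       # correction at the encoder...
    xhat_dec += eq                                       # ...and, via the received idx, at the decoder
    u = k_gain * xhat_dec                                # control computed from the decoder estimate
    x = a * x + b * u + 0.01 * rng.standard_normal()     # process with a small disturbance
    xhat_enc = a * xhat_enc + b * u                      # model-based prediction,
    xhat_dec = a * xhat_dec + b * u                      #   replicated at both ends
print(abs(x - xhat_dec))                                 # residual estimation error stays small
```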

Fixed-step quantization
For simplicity assume A is diagonal with real eigenvalues, and quantize the error vector with a scalar quantizer on each of its components. (If A were not diagonal, one would precede the componentwise quantization by a diagonalizing linear transformation; see [JH, Ortega, Vasudevan, 02] for the case of A with complex eigenvalues.)
For the ith component: K_i equidistributed quantization levels over a fixed saturation interval of size Δ_i. The number of words needed is Π_i K_i, so the bit-rate is log_2(Π_i K_i)/T_s.

Fixed-step quantization (cont.)
Theorem: The state of the closed-loop system remains bounded provided that the number of quantization levels for the ith component, K_i, exceeds e^{λ_i T_s} by a margin determined by constants η_i, δ_i that depend on upper bounds on the noise and disturbance; this fixes the bit allocation across components.
Interpretation: λ_i large ⇒ e^{λ_i T_s} large ⇒ K_i large ⇒ many bits are needed for the ith eigenspace.
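
For illustration only, a short sketch of how such a bit allocation translates into a bit-rate; the additive margins standing in for η_i + δ_i are placeholders, not the theorem's exact expression, and only the dependence K_i ~ e^{λ_i T_s} is the point.

```python
# Sketch: bit allocation and resulting bit-rate for fixed-step quantization.
import numpy as np

def bit_allocation(lams, Ts, margins):
    # margins are placeholder stand-ins for the noise/disturbance constants.
    K = [int(np.ceil(np.exp(l * Ts) + m)) for l, m in zip(lams, margins)]
    words = int(np.prod(K))                 # one word w_k in {1, ..., N} per sample
    rate = np.log2(words) / Ts              # average bit-rate in bits/s
    return K, rate

lams, Ts = [1.0, 0.5], 1.0                  # assumed unstable eigenvalues and sample period
print(bit_allocation(lams, Ts, margins=[0.2, 0.2]))
# Levels [3, 2] give about 2.58 bits/s, versus r_min = 1.5/ln(2) ≈ 2.16 bits/s.
```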

Fixed-step quantization (cont.)
The required bit-rate is log_2(Π_i K_i)/T_s, and it is approximately r_min when Δ_i ≫ η_i + δ_i for every i; but such coarse quantization leads to a large state x even without noise or disturbance.

Variable-step quantization
Same componentwise scalar quantization as before (A diagonal with real eigenvalues; see the paper for the case of complex eigenvalues), but now the saturation level Δ_i(k) of the ith component's quantizer varies with the sampling time t_k. As before, K_i levels are equidistributed over the saturation interval, the number of words needed is Π_i K_i, and the bit-rate is log_2(Π_i K_i)/T_s.

Variable-step quantization (cont.)
Theorem: The state of the closed-loop system remains bounded provided the bit allocation satisfies a condition involving e^{λ_i T_s} and constants η_i, δ_i that depend on upper bounds on the noise and disturbance. Again, λ_i large ⇒ e^{λ_i T_s} large ⇒ K_i large ⇒ many bits needed for the ith eigenspace. The required bit-rate can now be made arbitrarily close to r_min: the minimum rate can be achieved.
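
A toy illustration (my own, not the paper's exact scheme) of why a time-varying saturation level lets the rate approach the minimum: the recursion below contracts exactly when K > e^{λ T_s}, i.e., when the per-mode rate log_2(K)/T_s exceeds λ/ln 2.

```python
# Toy scalar "zooming" recursion for the saturation level Delta(k): after
# quantization with K levels the uncertainty is Delta/K; one sample period
# grows it by exp(lambda*Ts), and an assumed disturbance bound c is added.
import numpy as np

lam, Ts, K, c = 1.0, 1.0, 3, 0.05   # K = 3 > exp(1) ≈ 2.72, so the recursion contracts
Delta = 10.0
for _ in range(40):
    Delta = np.exp(lam * Ts) * Delta / K + c
print(Delta)                        # settles near c / (1 - exp(lam*Ts)/K) ≈ 0.53
```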

Conclusions (part 1)
- There exists a minimum bit-rate below which stabilization is not possible.
- We proposed encoder/decoder pairs (inspired by DPCM) that can achieve rates arbitrarily close to the minimum.
- Variable-step quantization allows one to achieve the minimum bit-rate.
- Performance/robustness vs. bit-rate trade-offs are still poorly understood.
- Need to investigate the problem in a stochastic setting (will entropy-like coding work at lower rates?).

Distributed Control
Spatially distributed process to be controlled (e.g., autonomous vehicles), with physical couplings between the sub-processes. Each node senses and actuates its local process; the couplings between nodes are supported by a communication network. Goals: minimize communication (stealth, bandwidth) and study the effect of nonideal communication (delays, drops, blackouts).

Scenarios
- Rendezvous in minimum time or with minimum energy (in spite of disturbances).
- A group of autonomous agents cooperating in the search for a target (perhaps mobile: search & pursuit).

Communication minimization
Two paradigms for minimizing the communication between nodes:
The every-bit-counts paradigm (the previous case). Goal: design each node to minimize the number of bits per second that need to be exchanged between nodes (quantization, compression, ...). Domain: media with little capacity and low-overhead protocols (bit-at-a-time), e.g., underwater acoustic communication between a small number of nodes.
The cost-per-message paradigm (the current focus). Goal: design each node to minimize the number of message exchanges between nodes (scheduling, estimation, ...). Domain: media shared by a large number of nodes with a nontrivial medium access control (MAC) protocol (packet-at-a-time), e.g., 802.11 wireless communication between a large number of nodes.

Prototype problem
In this talk: decoupled linear processes (each with a stochastic disturbance d), one per node, with a coupled quadratic control objective. Notation: x+ denotes x(k+1).

Prototype problem (cont.)
Example: rendezvous of two vehicles (decoupled dynamics, coupled quadratic cost).

Prototype problem (cont.)
Two extreme solutions: the minimum-cost (centralized) solution, which requires each node to know the other processes' states, and the completely decentralized solution, which uses no communication at all.
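
As an aside, a minimal sketch (assumed numbers, not the talk's) of the centralized minimum-cost solution: with decoupled dynamics and a coupled quadratic cost this is a standard discrete-time LQR, and the resulting gain is generally a full matrix, which is exactly why the centralized solution needs communication between nodes.

```python
# Sketch: centralized LQR for two decoupled scalar processes with a coupled
# (rendezvous-like) quadratic cost. All numbers are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

A = 1.1 * np.eye(2)                          # two decoupled, unstable processes
B = np.eye(2)
Q = np.array([[2.0, -1.0], [-1.0, 2.0]])     # cost couples the two states
R = 0.1 * np.eye(2)

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback u(k) = -K x(k)
print(K)   # off-diagonal entries are nonzero: each node needs the other's state
```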

Communication vs. performance trade-off
Plot of control cost versus communication rate: the achievable cost/communication pairs lie above a trade-off curve traced by the optimal communication logic; pairs below it are not achievable. The two endpoints are the minimum-cost (centralized) solution and the minimum communication needed for solvability (zero when the decentralized solution yields a finite cost).

Centralized architecture
In the centralized architecture, the ith local controller relies on constant communication between the local process and the external process(es), i.e., it has continuous access to the jth external process's state. (For simplicity, here we assume only two processes.) The resulting closed-loop system serves as the benchmark for what follows.

Estimator-based distributed architecture [Yook & Tilbury; Montestruque & Antsaklis; Xu & JH]
Each node runs, for every external process, a continuous open-loop estimator with discrete updates received from the network; the ith local controller is driven by these estimates instead of the true external states. The closed-loop system is then the centralized one plus an additive perturbation that depends on the estimation errors.

Communication logic
The ith node's estimator of the jth external process is driven by a fusion logic (what to do with received data) and a scheduling logic at the sender (when to send data).
How to fuse data? With no noise and no delay (for now), x_j(k) received from the network at time k is the best estimate based on the data received, so it simply replaces the current estimate.
When to send data? The scheduling-logic action a_j(k) can be chosen
- periodically: a_j(k) = 1 iff k is divisible by T, with an optimal period T ∈ N, or
- by a feedback policy: a_j(k) = F(x_j(k), ...), e.g., based on the current estimation error.
A small sketch of these two options follows.
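
The sketch below (assumed scalar error dynamics; the threshold and period are illustrative, not the talk's values) contrasts the two scheduling options.

```python
# Sketch: periodic vs. feedback (threshold) scheduling of transmissions.
import numpy as np

def periodic_logic(k, T=5):
    return k % T == 0                        # a_j(k) = 1 every T samples

def feedback_logic(err, threshold=1.0):
    return abs(err) > threshold              # transmit only when the estimation error is large

a_e, rng = 1.1, np.random.default_rng(1)     # assumed open-loop error dynamics
err, sent = 0.0, 0
for k in range(1000):
    if feedback_logic(err):                  # swap in periodic_logic(k) to compare
        err, sent = 0.0, sent + 1            # a transmission resets the error
    err = a_e * err + 0.1 * rng.standard_normal()
print(sent / 1000, "transmissions per step")
```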

Optimal scheduling logic
Assume the disturbance d is Gaussian i.i.d. with zero mean and covariance Σ.
Goals: minimize the estimation error, and with it the cost penalty with respect to the centralized solution (measured by an average L2 norm), while minimizing the number of transmissions, i.e., the communication bandwidth (measured by the average transmission rate). A relative weight λ between the two criteria leads to a Pareto-optimal solution.

Dynamic Programming solution
This is an undiscounted average-cost problem. Let T denote the dynamic programming (DP) operator.
Theorem:
1. There exist J* ∈ R and a bounded h*: R^n → R satisfying the average-cost Bellman equation J* + h* = T h*.
2. J* is the optimal average cost, and it is achieved by a (deterministic) stationary policy obtained from h*.
3. h* can be found by value iteration.

Dynamic Programming solution (cont.)
Proof outline:
1. The estimation error e(k) is a Markov process and its transition distribution satisfies an ergodicity property (this requires a mild restriction on the set of admissible policies, omitted here).
2. The DP operator T is a span contraction [Hernandez-Lerma 96].
3. The result follows using standard arguments based on Banach's fixed-point theorem for semi-norms.
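
To make the value-iteration step concrete, here is a minimal relative value iteration sketch (my own discretization and weights, scalar error state, not the talk's implementation): action a = 1 transmits and resets the error at price λ, action a = 0 lets the error evolve with the disturbance.

```python
# Relative value iteration for the average-cost scheduling problem (sketch).
# State: scalar estimation error e; a = 1 resets e to 0 at cost lam, a = 0 waits.
# Dynamics: e+ = a_e * e + d, d ~ N(0, sigma^2). Stage cost: e^2 + lam * a.
import numpy as np

a_e, sigma, lam = 1.0, 1.0, 10.0
grid = np.linspace(-10, 10, 201)                   # discretized error state
d = np.linspace(-4 * sigma, 4 * sigma, 81)         # disturbance quadrature points
w = np.exp(-0.5 * (d / sigma) ** 2); w /= w.sum()  # Gaussian weights

def expected_h(h, e_vals):
    """E[h(a_e * e + d)] for each e in e_vals, interpolated on the grid."""
    return np.array([w @ np.interp(a_e * e + d, grid, h) for e in e_vals])

h = np.zeros_like(grid)
for _ in range(500):
    q_wait = grid ** 2 + expected_h(h, grid)               # a = 0: keep the error
    q_send = grid ** 2 + lam + expected_h(h, [0.0])[0]     # a = 1: reset, pay lam
    Th = np.minimum(q_wait, q_send)
    J = Th[len(grid) // 2]                                 # normalize at e = 0
    h = Th - J                                             # relative value function
send, pos = q_send < q_wait, grid > 0                      # threshold-like transmit region
print("J* ≈", round(float(J), 2),
      "; send when e >", grid[pos][send[pos]][0] if send[pos].any() else "never")
```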

Example (2-dim)
For a 2-dimensional local process with weight λ = 100, the optimal scheduling logic partitions the error plane (e_1, e_2) into a region where a(k) = 0 (do not transmit) and a region where a(k) = 1 (transmit). The transmit region lies outside a threshold set around the origin, and its boundary is not an ellipse.

Example (2-dim, cont.)
Comparing λ = 100 with λ = 10: a large weight on the communication cost yields a large error threshold, i.e., the node only communicates when the estimation error is very large.

Communication with latency
Now the network introduces a delay of τ: the scheduling logic sends x_j(k) at time k, the fusion logic receives it at time k+τ, and x_j(k) is incorporated into the estimate at time k+τ+1 (the previous case corresponds to τ = 0). A sketch of how a delayed sample can be fused appears below.
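
A small sketch of one way to fuse a τ-step-old sample (assumed scalar model; whether the inputs applied during the delay are available to the receiver is an assumption made here for illustration): the received value is rolled forward through the model before replacing the local estimate.

```python
# Sketch: fusing a tau-step-delayed sample by propagating it through the model.
def fuse_delayed(x_received, inputs_since_send, a, b):
    """Roll the delayed sample x_j(k - tau) forward using the stored inputs."""
    x = x_received
    for u in inputs_since_send:        # the tau inputs applied while the sample
        x = a * x + b * u              #   was in transit (assumed known or estimated)
    return x                           # becomes the new estimate of process j
```

This corresponds to forming the estimate given all the information en route, as in the trade-off curves on the next slide.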

Example (1-dim)
Optimal communication vs. estimation-error trade-off: curves of estimation-error variance versus communication rate for latencies τ = 0, 1, 2, 3. With optimal scheduling under network latency, the same error variance requires more bandwidth. The estimation error is computed given all the information en route.

Conclusions (part 2)
- We constructed communication logics that minimize communication (measured as the message-sending rate).
- We considered networks with (fixed) latency.
Future work:
- Study the effect of packet losses (especially important in wireless networks).
- Nonlinear processes.
- Coupled control/communication-logic design.
Papers available at http://www.ece.ucsb.edu/~hespanha