Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles


Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles
Commercialization Fellowship Technical Presentation
Taylor Clawson, Ithaca, NY, June 12, 2018

About Me: Taylor Clawson
3rd-year PhD student in Mechanical and Aerospace Engineering. Advisor: Silvia Ferrari.
Past experience: Mechanical Engineering BS, Utah State University, 2013; lead developer of custom reliability analysis software at Northrop Grumman.
Graduate research: dynamics and controls emphasizing neuromorphic sensing and control; flight modeling for insect-scale robots; neuromorphic autonomy for micro-aerial vehicles; autonomous target tracking and obstacle avoidance.

Neuromorphic Sensing and Control: The Innovation
The innovation: neuromorphic sensing and control algorithms for intelligent, energy-efficient autonomy.
1. Spiking neural networks (SNNs) can learn online to improve performance or adapt to new conditions.
2. Neuromorphic cameras have 1 μs temporal resolution and require at most a few milliwatts of power.
Image credit: iniVation (https://inivation.com/)

Motivation: Small-Scale Autonomous Flight
Safer: they weigh less than 1 pound and are therefore safe to operate near humans.
Smaller: they can access narrow or confined spaces inaccessible to other vehicles.
Autonomous flight expands the capability of a single operator to monitor a larger area: more effective search and rescue, agricultural monitoring, public event security.
Examples: 27 g, 9 cm Crazyflie 2.0 (https://www.bitcraze.io/crazyflie-2/); RoboBee [Ma, 2013].

Neuromorphic Control

Single-layer SNN Controller
SNN function approximation by connection weights M, W, and b:
$y(t) = W s(t) = W F(M x(t) + b)$
Output connection weights W determined offline by supervised learning:
$W = \arg\min_V \sum_j \left\| f(x_j) - V F(M x_j + b) \right\|^2$
Training data set generated by a stabilizing target control law (e.g., optimal linear control):
$\{ (x_j, f(x_j)) \}, \quad j = 1, \dots, M$
Notation: M, input connection weights; W, output connection weights; b, input bias; s(t), post-synaptic current; F, nonlinear activation function; f(x_j), target control law data; M, number of training data points.
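
As an illustration of the offline supervised-learning step, the sketch below solves the least-squares problem for W with NumPy. Everything in it is assumed for illustration: the tanh activation, the randomly drawn fixed weights M and bias b, and a placeholder linear target law f(x) = -Kx standing in for the stabilizing controller that generates the training data.

```python
# Minimal sketch: fit output weights W = argmin_V sum_j ||f(x_j) - V F(M x_j + b)||^2.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, n_neurons, n_samples = 4, 2, 100, 500

K = rng.normal(size=(n_u, n_x))            # placeholder target control gains (assumed)
f = lambda X: -X @ K.T                     # target control law evaluated on states

M = rng.normal(size=(n_neurons, n_x))      # input connection weights (fixed)
b = rng.normal(size=n_neurons)             # input bias (fixed)
F = np.tanh                                # nonlinear activation function (assumed)

X = rng.uniform(-1, 1, size=(n_samples, n_x))   # training states x_j
S = F(X @ M.T + b)                              # activations F(M x_j + b)
Y = f(X)                                        # targets f(x_j)

W, *_ = np.linalg.lstsq(S, Y, rcond=None)       # least-squares solution
W = W.T                                         # shape (n_u, n_neurons)

u = W @ F(M @ X[0] + b)                         # network output y(t) for one state
```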

SNN Control Model
Neurons generate spike trains ρ(t) based on input current I(t). Synapses filter the spikes and generate post-synaptic current s(t). Synapses are modeled as first-order low-pass filters h(t):
$\rho(t) = \sum_{k=1}^{M} \delta(t - t_k)$
$s(t) = \int_0^t h(t - \tau)\, \rho(\tau)\, d\tau$
$h(t) = \frac{1}{\tau_s} e^{-t/\tau_s}$
Notation: δ, Dirac delta; t_k, time of the kth spike; M, spike count; τ_s, synaptic time constant.
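
A minimal sketch of the synapse model follows: the post-synaptic current s(t) is obtained by summing the exponential kernel h(t) = (1/τ_s) exp(-t/τ_s) over past spike times. The spike times and time constant are illustrative values, not taken from the presentation.

```python
# Sketch of the first-order low-pass synapse filtering a spike train.
import numpy as np

tau_s = 0.01                                           # synaptic time constant (s), assumed
spike_times = np.array([0.02, 0.05, 0.051, 0.09])      # example spike times t_k

def postsynaptic_current(t, spike_times, tau_s):
    """s(t) = sum over spikes with t_k <= t of h(t - t_k)."""
    dt = t - spike_times[spike_times <= t]
    return np.sum(np.exp(-dt / tau_s)) / tau_s

t_grid = np.linspace(0.0, 0.2, 2001)
s = np.array([postsynaptic_current(t, spike_times, tau_s) for t in t_grid])
```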

Adaptive SNN Controller (Hovering Only)
First step towards full flight envelope control. The control signal u(t) is provided entirely by spiking neural networks:
$u(t) = u_0(t) + u_{adapt}(t)$
The non-adaptive term u_0(t) is trained offline by supervised learning to approximate a traditional control law. The adaptive term u_adapt(t) adapts online to minimize the output error.
Notation: x_ref, reference state; u_0, non-adaptive control input; u_adapt, adaptive control input; Δx, state error; u_a, amplitude input; u_p, pitch input; u_r, roll input.
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]

Adaptive SNN Controller (Hovering Only)
The adaptive term u_adapt comprises inputs for flapping amplitude u_a, pitch u_p, and roll u_r:
$u_{adapt}(t) = \begin{bmatrix} u_a(t) & u_p(t) & u_r(t) \end{bmatrix}^T$
The output weights adapt online to minimize the output error
$E(t) = \Lambda^T \left( \Delta\dot{x}(t) + \Delta x(t) \right)$
Every element of u_adapt is computed from a single network of 100 neurons, e.g.
$u_a = W_a s_a(t)$
The connection weights are updated online to minimize the output error:
$\dot{W}_a(t) = -\kappa\, s_a(t)\, E(t)$
Notation: Λ, error weight matrix; Δx, state error; W_a, output connection weights; s_a, post-synaptic current; κ, learning rate.
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]
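
The sketch below illustrates one Euler step of the online weight update for a single output channel (the flapping-amplitude input u_a). The learning rate κ, the error weights Λ, the time step, and the sign convention of the update all depend on how the error E(t) is defined in the paper, so treat them as assumptions.

```python
# Sketch of the online adaptation rule W_a_dot = -kappa * s_a(t) * E(t).
import numpy as np

n_neurons, n_x = 100, 4
rng = np.random.default_rng(3)

kappa = 1e-3                          # learning rate (assumed)
Lam = rng.normal(size=n_x)            # error weights for this channel (placeholder)
W_a = np.zeros(n_neurons)             # adaptive output weights, start at zero

def adapt_step(W_a, s_a, dx, dxdot, dt):
    """One Euler step of the weight update driven by the error signal E(t)."""
    E = Lam @ (dxdot + dx)            # error from state error and its rate
    return W_a - kappa * s_a * E * dt

# One illustrative step with synthetic post-synaptic currents and state errors:
s_a = rng.random(n_neurons)
dx = 0.01 * rng.normal(size=n_x)
dxdot = 0.01 * rng.normal(size=n_x)
W_a = adapt_step(W_a, s_a, dx, dxdot, dt=1e-3)
u_a = W_a @ s_a                       # adaptive amplitude input u_a = W_a s_a
```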

Adaptive SNN Controller (Hovering Only)
Comparison: SNN initialized with the traditional controller. The SNN controller quickly adapts to wing asymmetries to stabilize hovering flight; the traditional controller maintains stability, but drifts significantly from the target.
[Plots: adaptive SNN and PIF controller trajectories versus the hovering target; time axis in seconds.]
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]

SNN Controller: Full Flight Envelope
SNN trained to approximate the steady-state gain of a gain-scheduled controller. The gain matrices depend on the scheduling variables a:
$\Delta\dot{u}(t) = K_1(a)\, \Delta x(t) - K_2(a)\, \Delta u(t) + K_3(a)\, \xi(t)$
The steady-state gain is computed using the transfer function and the final value theorem:
$G(s) = \left( sI + K_2(a) \right)^{-1} K_1(a)$
$G(0) = K_{ss}(a) = K_2(a)^{-1} K_1(a)$
Network output weights computed to approximate the steady-state gain matrix K_ss:
$W = \arg\min_V \sum_j \left\| K_{ss}(a_j) - V F(M a_j + b) \right\|^2$
The SNN control input is a linear transformation of the post-synaptic current:
$u(t) = W(a)\, s(t)$
Notation: K_i, PIF gain matrices; Δu, control deviation; a, scheduling variables; s, Laplace variable; W, output connection weights; Δx, state deviation; ξ, integral of output error; G(s), transfer function; K_ss, steady-state gain matrix; s(t), post-synaptic current.
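
A small sketch of the steady-state gain computation, with a made-up gain schedule K1(a), K2(a) standing in for the gain-scheduled PIF controller; only the relation K_ss(a) = K2(a)^{-1} K1(a) is taken from the slide.

```python
# Sketch: evaluate the steady-state gain of the scheduled controller at one operating point.
import numpy as np

n_x, n_u = 4, 2
rng = np.random.default_rng(1)
K1_base = rng.normal(size=(n_u, n_x))      # placeholder proportional gain shape

def pif_gains(a):
    """Placeholder gain schedule: K1(a), K2(a) as smooth functions of a."""
    K1 = (1.0 + a) * K1_base
    K2 = (2.0 + a) * np.eye(n_u)
    return K1, K2

def steady_state_gain(a):
    """K_ss(a) = G(0) = K2(a)^{-1} K1(a), from G(s) = (sI + K2(a))^{-1} K1(a)."""
    K1, K2 = pif_gains(a)
    return np.linalg.solve(K2, K1)

K_ss = steady_state_gain(0.5)              # steady-state gain at scheduling point a = 0.5
```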

SNN Control: Climbing Turn

SNN Control: Complete Turn

Neuromorphic Sensing

Exteroceptive Sensing: Motivation
Onboard exteroceptive sensors are required for full flight autonomy. The fast dominant time scales of small-scale flight require a high sensing rate and low latency. Traditional sensors consume large amounts of power at high sensing rates (e.g., ~100 watts for a high-speed camera), and the high data rate requires additional data processing. Neuromorphic vision sensors have 1 μs temporal resolution and require at most a few milliwatts of power [Lichtsteiner, 8], [Brandli, ].
Image credit: iniVation (https://inivation.com)

Background: Neuromorphic Vision
Neuromorphic cameras generate asynchronous events instead of frames. An event at (x, y) is generated at time t_i with polarity p_i:
$p_i = \begin{cases} +1, & \text{if } \ln I(x, y, t_i) - \ln I(x, y, t_{i-1}) \ge C \\ -1, & \text{if } \ln I(x, y, t_i) - \ln I(x, y, t_{i-1}) \le -C \end{cases}$
where C is the contrast threshold. On events occur when p_i = +1; Off events occur when p_i = -1.
The ith event e_i is described by the tuple $e_i = (x, y, t, p)$, with pixel coordinates (x, y), timestamp $t \in \mathbb{R}^+$, and polarity $p \in \{-1, 1\}$. The set of all events is $\{ e_i \mid i = 1, \dots, N \}$.
[Plots: On events and Off events versus time t (s).]
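
For concreteness, a minimal event representation might look like the sketch below; the sample events are made up, not sensor data.

```python
# Sketch of the event tuple e_i = (x, y, t, p) and a split into On/Off streams.
from typing import List, NamedTuple

class Event(NamedTuple):
    x: int      # pixel column
    y: int      # pixel row
    t: float    # timestamp in seconds (microsecond resolution on the sensor)
    p: int      # polarity: +1 for On events, -1 for Off events

events: List[Event] = [
    Event(12, 40, 0.000113, +1),
    Event(13, 40, 0.000117, +1),
    Event(80, 22, 0.000230, -1),
]

on_events = [e for e in events if e.p == +1]
off_events = [e for e in events if e.p == -1]
```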

The Optical Flow Problem
Standard optical flow problem. Assume $\frac{dI(x, y, t)}{dt} = 0$. Determine the horizontal and vertical flow (v_x, v_y) from
$\frac{dI(x, y, t)}{dt} = \begin{bmatrix} I_x(x, y, t) & I_y(x, y, t) \end{bmatrix} \begin{bmatrix} v_x(x, y, t) \\ v_y(x, y, t) \end{bmatrix} + I_t(x, y, t) = 0$
Neuromorphic optical flow. The coordinates of some point $r = \begin{bmatrix} r_x & r_y \end{bmatrix}^T$ in the image plane are determined by the optical flow:
$\begin{bmatrix} r_x(t_2) - r_x(t_1) \\ r_y(t_2) - r_y(t_1) \end{bmatrix} = \int_{t_1}^{t_2} \begin{bmatrix} v_x(\tau) \\ v_y(\tau) \end{bmatrix} d\tau$
Scattered events are generated by the motion of the point. Determine the optical flow by estimating the motion of points in the scene from the scattered events.
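
The point-trajectory relation can be made concrete by integrating an assumed flow field numerically; the flow function below is a hypothetical stand-in, used only to show the integral r(t2) - r(t1) = ∫ v dτ being evaluated.

```python
# Sketch: forward-Euler integration of an image-plane point along a given flow field.
import numpy as np

def flow(r, t):
    """Placeholder image-plane flow (vx, vy) at point r and time t, in pixels/s."""
    return np.array([10.0, -5.0 * np.sin(2 * np.pi * t)])

def integrate_point(r0, t1, t2, dt=1e-3):
    """Integrate dr/dt = v(r, t) from t1 to t2 to get the point's new coordinates."""
    r, t = np.array(r0, dtype=float), t1
    while t < t2:
        r = r + flow(r, t) * dt
        t += dt
    return r

r_t2 = integrate_point([64.0, 48.0], t1=0.0, t2=0.1)
```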

Neuromorphic Optical Flow
Existing neuromorphic optical flow methods rely on optimization [Benosman, ], [Rueckauer, ]. Here, continuous motion is estimated from the discrete events. Introduce a continuous event rate f through convolution of the events with a continuous kernel K:
$f(x, y, t) = K(x, y, t) * E(x, y, t), \qquad E(x, y, t) = \sum_{i=1}^{N} \delta(x - x_i, y - y_i, t - t_i)$
Assume the gradient n of the event rate is normal to the motion of points in the scene, and that the speed of the motion is inversely proportional to the magnitude of the gradient. The optical flow is then written directly in terms of the event rate gradient:
$n = \begin{bmatrix} a & b & c \end{bmatrix}^T, \qquad w = \begin{bmatrix} t_x & t_y \end{bmatrix}^T = \begin{bmatrix} a/c & b/c \end{bmatrix}^T, \qquad v = \begin{bmatrix} v_x \\ v_y \end{bmatrix} = \frac{1}{w^T w}\, w = \frac{c}{a^2 + b^2} \begin{bmatrix} a \\ b \end{bmatrix}$
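
The sketch below shows one way the event-rate-gradient formula could be evaluated on binned events: build a spatio-temporal event count, smooth it with a Gaussian kernel playing the role of K, and apply v = c/(a² + b²)·[a, b] pointwise. Bin sizes, smoothing widths, and the regularizer are illustrative choices, not the presentation's implementation.

```python
# Sketch: optical flow from the gradient of a smoothed event-rate volume rate[t, y, x].
import numpy as np
from scipy.ndimage import gaussian_filter

def event_rate(events, shape, dt=1e-3):
    """Bin events (x, y, t, p) into a (T, H, W) count volume."""
    T, H, W = shape
    rate = np.zeros(shape)
    for x, y, t, p in events:
        k = min(int(t / dt), T - 1)
        rate[k, y, x] += 1.0
    return rate

def flow_from_rate(rate, sigma=(2.0, 2.0, 2.0), dt=1e-3, dx=1.0, eps=1e-9):
    """v = c/(a^2 + b^2) * [a, b] from the smoothed event-rate gradient n = [a, b, c]."""
    f = gaussian_filter(rate, sigma)          # f = K * E with a Gaussian kernel
    c, b, a = np.gradient(f, dt, dx, dx)      # df/dt, df/dy, df/dx
    denom = a**2 + b**2 + eps                 # small eps avoids division by zero
    return c * a / denom, c * b / denom       # (vx, vy) at every bin

events = [(10, 12, 0.004, 1), (11, 12, 0.006, 1), (12, 12, 0.008, 1)]
rate = event_rate(events, shape=(10, 32, 32))
vx, vy = flow_from_rate(rate)
```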

Neuromorphic Optical Flow: Results
Mean processing time per event: 0.7 μs.

Neuromorphic Motion Detection
Detect motion relative to the environment using a rotating neuromorphic camera.
Assumptions: known camera motion; camera motion dominated by rotation; total derivative of pixel intensity is zero.
[Figure: world view, camera view, and neuromorphic camera view.]

Neuromorphic Motion Detection
Pixel intensity can be recovered by integrating the event rate. By the previous assumptions, future pixel intensity can be predicted if the image-plane motion field (v_x, v_y) is known.
$I(x, y, t) = \exp\!\left( \int_0^t f(x, y, \tau)\, d\tau \right) + I(x, y, 0)$
The motion field due to camera rotation is
$v_x(x, y) = \frac{xy}{f}\,\omega_x(t) - \left( f + \frac{x^2}{f} \right)\omega_y(t) + y\,\omega_z(t)$
$v_y(x, y) = \left( f + \frac{y^2}{f} \right)\omega_x(t) - \frac{xy}{f}\,\omega_y(t) - x\,\omega_z(t)$
Pixel intensity after a short time Δt is predicted from the motion field:
$I'(x, y, t) = I(x - v_x \Delta t,\ y - v_y \Delta t,\ t - \Delta t)$
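
The rotational motion field is straightforward to evaluate on a pixel grid; the sketch below does so with an assumed focal length and body rates (the sensor resolution and calibration are placeholders).

```python
# Sketch: image-plane velocity (vx, vy) at every pixel for a purely rotating camera.
import numpy as np

def rotational_motion_field(W, H, f, wx, wy, wz):
    """Evaluate the rotation-induced motion field over a W x H pixel grid."""
    xs = np.arange(W) - (W - 1) / 2.0      # pixel coords relative to the principal point
    ys = np.arange(H) - (H - 1) / 2.0
    x, y = np.meshgrid(xs, ys)
    vx = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    vy = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return vx, vy

vx, vy = rotational_motion_field(W=240, H=180, f=200.0, wx=0.0, wy=0.5, wz=0.1)
```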

Neuromorphic Motion Detection: Results
1. Compute the difference between the predicted and measured intensity:
$\Delta I(x, y, t) = I(x, y, t) - I'(x, y, t)$
2. Denoise by convolving with a multivariate Gaussian kernel K_Σ with covariance Σ:
$\Delta I'(x, y, t) = K_\Sigma(x, y, t) * \Delta I(x, y, t)$
3. Detect motion by comparing the smoothed intensity difference with a threshold γ:
$m(x, y, t) = \begin{cases} 1, & \text{if } \Delta I'(x, y, t) \ge \gamma \\ 0, & \text{otherwise} \end{cases}$
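
The three steps map directly to array operations; below is a sketch under assumed values of the smoothing covariance and threshold γ, with random images standing in for the measured and predicted intensities.

```python
# Sketch of the detection pipeline: difference, Gaussian denoising, thresholding.
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_motion(I_meas, I_pred, sigma=(2.0, 2.0), gamma=0.1):
    """Return a binary motion mask m(x, y) from measured and predicted intensity."""
    dI = I_meas - I_pred                          # 1. prediction error
    dI_smooth = gaussian_filter(dI, sigma)        # 2. denoise with Gaussian kernel
    return (dI_smooth >= gamma).astype(np.uint8)  # 3. compare with threshold

# Example with synthetic images standing in for real data:
rng = np.random.default_rng(2)
I_meas = rng.random((180, 240))
I_pred = I_meas + 0.05 * rng.standard_normal((180, 240))
mask = detect_motion(I_meas, I_pred)
```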

Summary
Innovation: neuromorphic sensing and control algorithms for intelligent, energy-efficient autonomy.
Potential applications: autonomous flight for small-scale UAVs; real-time target detection; obstacle avoidance.
[Images: 27 g, 9 cm Crazyflie 2.0 (https://www.bitcraze.io/crazyflie-2/); standard camera versus neuromorphic camera views, with direction of motion and detected motion.]

Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles. Taylor Clawson, June 12, 2018.
Related Work
T. S. Clawson, S. Ferrari, S. B. Fuller, R. J. Wood, "Spiking Neural Network (SNN) Control of a Flapping Insect-scale Robot," Proc. of the IEEE Conference on Decision and Control, Las Vegas, NV, pp. 3381-3388, December 2016.
T. S. Clawson, S. B. Fuller, R. J. Wood, S. Ferrari, "A Blade Element Approach to Modeling Aerodynamic Flight of an Insect-scale Robot," American Control Conference (ACC), Seattle, WA, May 2017.
T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017.