Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles Commercialization Fellowship Technical Presentation Taylor Clawson Ithaca, NY June 12, 2018
About Me: Taylor Clawson
3rd-year PhD student in Mechanical and Aerospace Engineering
Advisor: Silvia Ferrari
Past experience:
Mechanical Engineering BS, Utah State University, 2013
Lead developer of custom reliability analysis software at Northrop Grumman
Graduate research:
Dynamics and controls emphasizing neuromorphic sensing and control
Flight modeling for insect-scale robots
Neuromorphic autonomy for micro-aerial vehicles
Autonomous target tracking and obstacle avoidance
Neuromorphic Sensing and Control
The Innovation: neuromorphic sensing and control algorithms for intelligent, energy-efficient autonomy.
1. Spiking neural networks (SNNs) can learn online to improve performance or adapt to new conditions.
2. Neuromorphic cameras have 1 μs temporal resolution and require at most a few milliwatts of power.
Image credit: iniVation (https://inivation.com/)
Motivation: Small-Scale Autonomous Flight
Safer: weigh less than 1 pound and are thus safe to operate near humans.
Smaller: can access narrow or confined spaces inaccessible to other vehicles.
Autonomous flight expands the capability of a single operator to monitor a larger area:
More effective search and rescue
Agricultural monitoring
Public event security
Examples: Crazyflie 2.0, 27 g, 9 cm (https://www.bitcraze.io/crazyflie-2/); RoboBee [Ma, 2013]
Neuromorphic Control
Single-Layer SNN Controller
SNN function approximation by connection weights M, W, and b:
y(t) = W s(t) = W F(M x(t) + b)
Output connection weights W determined offline by supervised learning:
W = argmin_V Σ_j || f(x_j) − V F(M x_j + b) ||²
Training data set generated by a stabilizing target control law (e.g., optimal linear control):
D = {(x_j, f(x_j))}, j = 1, …, M
Legend:
M: input connection weights
W: output connection weights
b: input bias
s(t): post-synaptic current
F: nonlinear activation function
f(x_j): target control law data
M: number of training data points
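The offline supervised-learning step on this slide reduces to a linear least-squares problem in the output weights, since M and b are fixed. A minimal sketch, assuming a tanh activation in place of the neuron response curve F and a synthetic linear target control law f(x) = −Kx (all dimensions and values here are illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 states, 2 control outputs, 100 neurons.
n_x, n_u, n_neurons = 4, 2, 100

# Fixed input weights M and bias b, chosen randomly (not trained).
M = rng.normal(size=(n_neurons, n_x))
b = rng.normal(size=n_neurons)

def F(v):
    """Nonlinear activation; tanh stands in for the neuron response curve."""
    return np.tanh(v)

# Training data from a stand-in target control law f(x) = -K x.
K = rng.normal(size=(n_u, n_x))
X = rng.normal(size=(200, n_x))   # training states x_j
U = X @ (-K).T                    # targets f(x_j)

# Activations for every training state: row j is F(M x_j + b).
A = F(X @ M.T + b)

# W = argmin_V sum_j || f(x_j) - V F(M x_j + b) ||^2  (linear least squares).
W, *_ = np.linalg.lstsq(A, U, rcond=None)
W = W.T                           # shape (n_u, n_neurons)

# Network output y = W F(M x + b) approximates the target law.
x = rng.normal(size=n_x)
y = W @ F(M @ x + b)
```

Because only W is trained, no backpropagation through the spiking dynamics is needed; the fit is a single `lstsq` call.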
SNN Control Model
Neurons generate spike trains ρ(t) based on input current I(t).
Synapses filter the spikes and generate post-synaptic current s(t).
Synapses are modeled as first-order low-pass filters h(t):
ρ(t) = Σ_{k=1…M} δ(t − t_k)
s(t) = ∫_0^t h(t − τ) ρ(τ) dτ
h(t) = (1/τ_s) e^(−t/τ_s)
Legend:
δ: Dirac delta
t_k: time of the kth spike
M: spike count
τ_s: synaptic time constant
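The synaptic filter above has a simple discrete-time form: between spikes the current decays exponentially, and each spike adds an instantaneous jump of 1/τ_s. A sketch under assumed values for τ_s and the time step (both illustrative):

```python
import numpy as np

tau_s = 0.01   # synaptic time constant (s); illustrative value
dt = 0.001     # simulation step (s)

def postsynaptic_current(spike_times, t_end, dt=dt, tau_s=tau_s):
    """Filter a spike train rho(t) = sum_k delta(t - t_k) with the
    first-order low-pass kernel h(t) = (1/tau_s) * exp(-t/tau_s)."""
    n = int(round(t_end / dt))
    spikes = np.zeros(n)
    for t_k in spike_times:
        k = int(round(t_k / dt))
        if k < n:
            spikes[k] += 1.0
    s = np.zeros(n)
    decay = np.exp(-dt / tau_s)
    for i in range(1, n):
        # Exponential decay plus a jump of 1/tau_s per spike.
        s[i] = s[i - 1] * decay + spikes[i] / tau_s
    return s

# Post-synaptic current for three spikes at 20, 25, and 50 ms.
s = postsynaptic_current([0.02, 0.025, 0.05], t_end=0.1)
```

The per-step exponential update is exact for this kernel, so no explicit convolution integral is needed.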
Adaptive SNN Controller (Hovering Only)
First step towards full flight envelope control.
Control signal u(t) provided entirely by spiking neural networks:
u(t) = u_0(t) + u_adapt(t)
Non-adaptive term u_0(t) trained offline by supervised learning to approximate a traditional control law.
Adaptive term u_adapt(t) adapts online to minimize the output error.
Legend:
x_ref: reference state
u_0: non-adaptive control input
u_adapt: adaptive control input
Δx: state error
u_a: amplitude input
u_p: pitch input
u_r: roll input
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]
Adaptive SNN Controller (Hovering Only)
Adaptive term u_adapt comprises inputs for flapping amplitude u_a, pitch u_p, and roll u_r:
u_adapt(t) = [u_a(t)  u_p(t)  u_r(t)]^T
Output weights adapt online to minimize the output error
E(t) = Λ^T (x(t) − x_ref(t)) = Λ^T Δx(t)
Every element of u_adapt is computed from a single network of 100 neurons, e.g.,
u_a = W_a s_a(t)
The connection weights are updated online to minimize the output error:
Ẇ_a(t) = −κ s_a(t) E(t)
Legend:
Λ: error weight matrix
Δx: state error
W_a: output connection weights
s_a: post-synaptic current
κ: learning rate
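The online weight update has the form of a prescribed-error-sensitivity rule: the decoded output moves opposite the error, at a rate set by κ. A toy Euler-discretized sketch, assuming a scalar error, fixed stand-in post-synaptic currents, and illustrative values for κ and the time step (none of these specifics are from the source):

```python
import numpy as np

rng = np.random.default_rng(1)

kappa = 1e-2     # learning rate (illustrative)
dt = 0.001       # time step (s)
n_neurons = 100

W_a = np.zeros(n_neurons)   # output weights for the amplitude input u_a

def pes_step(W, s, error, kappa=kappa, dt=dt):
    """One Euler step of dW/dt = -kappa * s * E: the decoded output
    u_a = W . s is driven so as to reduce the error E."""
    return W - kappa * error * s * dt

# Toy loop: drive the decoded output toward a constant target value.
s = np.abs(rng.normal(size=n_neurons))   # fixed stand-in currents
target = 0.5
for _ in range(5000):
    u_a = W_a @ s
    E = u_a - target                     # scalar stand-in output error
    W_a = pes_step(W_a, s, E)
```

With fixed currents the error contracts geometrically by a factor (1 − κ·dt·‖s‖²) per step, which is why the decoded output converges to the target without any model of the plant.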
Adaptive SNN Controller (Hovering Only)
Comparison: SNN initialized with the traditional controller.
The SNN controller quickly adapts to wing asymmetries to stabilize hovering flight.
The traditional controller maintains stability, but drifts significantly from the target.
[Figure: position vs. time (s) for the adaptive SNN and the PIF controller relative to the hovering target]
SNN Controller: Full Flight Envelope
SNN trained to approximate the steady-state gain of a gain-scheduled controller.
Gain matrices depend on the scheduling variables a:
u̇(t) = −K_1(a) x̃(t) − K_2(a) ũ(t) − K_3(a) ξ(t)
Steady-state gain computed using the transfer function and the final value theorem:
G(s) = (sI + K_2(a))⁻¹ K_1(a)
G(0) = K_ss(a) = K_2(a)⁻¹ K_1(a)
Network output weights computed to approximate the steady-state gain matrix K_ss:
W = argmin_V Σ_j || K_ss(a_j) − V F(M a_j + b) ||²
SNN control input is a linear transformation of the post-synaptic current:
u(t) = W(a) s(t)
Legend:
K_i: PIF gain matrices
ũ: control deviation
a: scheduling variables
s: Laplace variable
W: output connection weights
x̃: state deviation
ξ: integral of output error
G(s): transfer function
K_ss: steady-state gain matrix
s(t): post-synaptic current
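The final-value-theorem step can be checked numerically: evaluating the transfer function at s = 0 must reproduce K_2⁻¹K_1. A minimal sketch with hypothetical gain matrices at one value of the scheduling variable (the +3I shift is only there to keep the stand-in K_2 well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 3
# Hypothetical PIF gain matrices at one scheduling point a.
K1 = rng.normal(size=(n, n))
K2 = rng.normal(size=(n, n)) + 3.0 * np.eye(n)  # keep K2 invertible

def G(s, K1, K2):
    """Transfer function G(s) = (s I + K2)^-1 K1."""
    return np.linalg.solve(s * np.eye(n) + K2, K1)

# Final value theorem: the steady-state gain is G(0) = K2^-1 K1.
K_ss = np.linalg.solve(K2, K1)
```

In the full scheme this computation would be repeated over a grid of scheduling points a_j, and the resulting K_ss(a_j) matrices become the targets for the least-squares fit of the network output weights.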
SNN Control: Climbing Turn
SNN Control: Complete Turn
Neuromorphic Sensing
Exteroceptive Sensing Motivation
Onboard exteroceptive sensors are required for full flight autonomy.
The fast dominant time scales of small-scale flight require a high sensing rate and low latency.
Traditional sensors consume large amounts of power at high sensing rates (e.g., ~100 W for a high-speed camera).
The high data rate requires additional data processing.
Neuromorphic vision sensors have 1 μs temporal resolution and require at most a few milliwatts of power [Lichtsteiner, 2008], [Brandli, 2014].
Image credit: iniVation (https://inivation.com/)
Background: Neuromorphic Vision
Neuromorphic cameras generate asynchronous events instead of frames.
An event at (x, y) is generated at time t_i with polarity p_i:
p_i = +1 (ON event) if ln(I(x, y, t_i)) − ln(I(x, y, t_{i−1})) ≥ C
p_i = −1 (OFF event) if ln(I(x, y, t_i)) − ln(I(x, y, t_{i−1})) ≤ −C
where C is the contrast threshold.
The ith event e_i is described by the tuple e_i = (x_i, y_i, t_i, p_i), with (x, y) the pixel coordinates, t ∈ ℝ⁺, and p ∈ {−1, 1}.
The set of all events is E = {e_i | i = 1, …, N}.
[Figure: example streams of ON events and OFF events vs. time t (s)]
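The event-generation rule above can be sketched for a single pixel: emit an event each time the log-intensity has moved by the contrast threshold since the last event. A minimal sketch, assuming a value for the threshold C and a synthetic sinusoidal intensity trace (both illustrative):

```python
import numpy as np

C = 0.2   # contrast threshold (illustrative value)

def events_from_intensity(times, intensity, C=C):
    """Emit (t, p) events whenever the log-intensity at one pixel has
    changed by at least C since the last event: p = +1 (ON), -1 (OFF)."""
    events = []
    ref = np.log(intensity[0])      # log-intensity at the last event
    for t, I in zip(times[1:], intensity[1:]):
        while np.log(I) - ref >= C:
            ref += C
            events.append((t, +1))
        while np.log(I) - ref <= -C:
            ref -= C
            events.append((t, -1))
    return events

t = np.linspace(0.0, 1.0, 1000)
I = 1.0 + 0.5 * np.sin(2 * np.pi * t)   # pixel brightens, then dims
ev = events_from_intensity(t, I)
```

Note the output is a sparse, asynchronous list of timestamped polarities rather than a sequence of frames, which is what makes the 1 μs timing and milliwatt power budget possible in hardware.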
The Optical Flow Problem
Standard optical flow problem. Assume brightness constancy: dI(x, y, t)/dt = 0.
Determine the horizontal and vertical flow (v_x, v_y) from
dI(x, y, t)/dt = I_x(x, y, t) v_x(x, y, t) + I_y(x, y, t) v_y(x, y, t) + I_t(x, y, t) = 0
Neuromorphic optical flow. The coordinates of a point r = [r_x  r_y]^T in the image plane are determined by the optical flow:
r_x(t_2) − r_x(t_1) = ∫_{t_1}^{t_2} v_x(τ) dτ
r_y(t_2) − r_y(t_1) = ∫_{t_1}^{t_2} v_y(τ) dτ
Scattered events are generated by the motion of the point.
Determine optical flow by estimating the motion of points in the scene from the scattered events.
Neuromorphic Optical Flow
Existing neuromorphic optical flow methods rely on optimization [Benosman, 2014], [Rueckauer, 2016].
Estimate continuous motion from discrete events.
Introduce a continuous event rate f through convolution of the events with a continuous kernel K:
f(x, y, t) = K(x, y, t) * E(x, y, t)
E(x, y, t) = Σ_{i=1…N} δ(x − x_i, y − y_i, t − t_i)
Assume the gradient n = [a  b  c]^T of the event rate is normal to the motion of points in the scene.
The speed of the motion is inversely proportional to the magnitude of the gradient.
Optical flow is written directly in terms of the event rate gradient:
w = [∂t/∂x  ∂t/∂y]^T = −(1/c) [a  b]^T
[v_x  v_y]^T = w / ||w||²
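The core of the gradient-based estimate can be sketched without the continuous kernel: locally, event timestamps lie on a plane t(x, y), and the flow is the plane's slope vector w normalized by its squared magnitude, so faster motion gives a flatter plane. A sketch, assuming a least-squares plane fit and synthetic events from an edge sweeping at a known speed (all values illustrative):

```python
import numpy as np

def flow_from_events(xs, ys, ts):
    """Fit a plane t = alpha*x + beta*y + gamma to local events and
    recover flow as v = w / ||w||^2 with w = (alpha, beta):
    the speed is inversely proportional to the gradient magnitude."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (alpha, beta, _), *_ = np.linalg.lstsq(A, ts, rcond=None)
    w = np.array([alpha, beta])
    return w / (w @ w)

# Synthetic events from a vertical edge sweeping in +x at 200 px/s:
# the edge crosses pixel column x at time t = x / 200.
v_true = 200.0
xs = np.repeat(np.arange(0.0, 20.0), 5)   # 20 columns, 5 rows each
ys = np.tile(np.arange(0.0, 5.0), 20)
ts = xs / v_true
v = flow_from_events(xs, ys, ts)
```

In practice the fit would be done over a small spatio-temporal neighborhood per event; here a single global fit suffices to show the inversion from timestamp gradient to velocity.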
Neuromorphic Optical Flow Results
Mean processing time per event: 0.7 μs
Neuromorphic Motion Detection
Detect motion relative to the environment using a rotating neuromorphic camera.
Assumptions:
Known camera motion
Camera motion dominated by rotation
Total derivative of pixel intensity is zero
[Figure: world view, camera view, and the neuromorphic camera]
Neuromorphic Motion Detection
Pixel intensity can be recovered by integrating the event rate:
I(x, y, t) = exp( ∫_0^t f(x, y, τ) dτ ) I(x, y, 0)
By the previous assumptions, future pixel intensity can be predicted if the image-plane motion field (v_x, v_y) is known.
The motion field due to camera rotation is
v_x(x, y) = (xy/f) ω_x(t) − (f + x²/f) ω_y(t) + y ω_z(t)
v_y(x, y) = (f + y²/f) ω_x(t) − (xy/f) ω_y(t) − x ω_z(t)
Pixel intensity after a short time Δt is predicted from the motion field:
Î(x, y, t) = I(x − v_x Δt, y − v_y Δt, t − Δt)
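The prediction step above combines two pieces: the rotational motion field and a backward warp of the previous intensity image. A minimal sketch, assuming a focal length value, the standard rotational-flow sign conventions, and a crude nearest-neighbor warp in place of interpolation (all assumptions, not the source's implementation):

```python
import numpy as np

f = 100.0   # focal length in pixels (illustrative)

def rotational_flow(x, y, wx, wy, wz, f=f):
    """Image-plane motion field induced by pure camera rotation
    (standard rotational flow equations; sign conventions assumed)."""
    vx = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    vy = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return vx, vy

def predict_intensity(I, vx, vy, dt):
    """Predict I(x, y, t) = I(x - vx*dt, y - vy*dt, t - dt) by shifting
    the previous image along the (rounded) motion field."""
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs_src = np.clip(np.round(xs - vx * dt).astype(int), 0, w - 1)
    ys_src = np.clip(np.round(ys - vy * dt).astype(int), 0, h - 1)
    return I[ys_src, xs_src]

# Pure pitch rotation at the principal point shifts the image along x.
vx0, vy0 = rotational_flow(0.0, 0.0, 0.0, 1.0, 0.0)
```

Because the flow here is purely rotational, it depends only on pixel position and the rotation rates, not on scene depth; that is what makes the prediction possible without any 3-D reconstruction.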
Neuromorphic Motion Detection Results
1. Compute the difference between the predicted and measured intensity:
ΔI(x, y, t) = I(x, y, t) − Î(x, y, t)
2. Denoise by convolving with a multivariate Gaussian kernel K_Σ with covariance Σ:
ΔI′(x, y, t) = K_Σ(x, y, t) * ΔI(x, y, t)
3. Detect motion by comparing the smoothed intensity difference with a threshold γ:
m(x, y, t) = 1 if ΔI′(x, y, t) ≥ γ, 0 otherwise
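The three steps above can be sketched end to end. This sketch assumes an isotropic Gaussian for K_Σ, an illustrative threshold γ, and an absolute-value difference in step 1; the padded double loop stands in for a library convolution so the block is self-contained:

```python
import numpy as np

gamma = 0.5   # detection threshold (illustrative)

def gaussian_kernel(size=5, sigma=1.0):
    """Isotropic 2-D Gaussian kernel standing in for K_Sigma."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def detect_motion(I_meas, I_pred, gamma=gamma):
    """Steps 1-3: difference, smooth with K_Sigma, threshold at gamma."""
    dI = np.abs(I_meas - I_pred)                 # step 1: |Delta I|
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    dI_p = np.pad(dI, pad, mode="edge")
    sm = np.zeros_like(dI)
    for i in range(dI.shape[0]):                 # step 2: 2-D convolution
        for j in range(dI.shape[1]):
            sm[i, j] = np.sum(dI_p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return (sm >= gamma).astype(int)             # step 3: binary mask m

I_pred = np.zeros((16, 16))
I_meas = np.zeros((16, 16))
I_meas[6:10, 6:10] = 2.0   # a moving object changes a 4x4 patch
m = detect_motion(I_meas, I_pred)
```

The Gaussian smoothing is what suppresses isolated single-pixel differences (sensor noise) while keeping spatially coherent residuals, which are the signature of independently moving objects.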
Summary
Innovation: neuromorphic sensing and control algorithms for intelligent, energy-efficient autonomy.
Potential applications:
Autonomous flight for small-scale UAVs
Real-time target detection
Obstacle avoidance
Crazyflie 2.0, 27 g, 9 cm (https://www.bitcraze.io/crazyflie-2/)
[Figure: standard camera vs. neuromorphic camera, showing the direction of motion and the detected motion]
Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles
Taylor Clawson, June 12, 2018
Related Work
T. S. Clawson, S. Ferrari, S. B. Fuller, R. J. Wood, "Spiking Neural Network (SNN) Control of a Flapping Insect-scale Robot," Proc. of the IEEE Conference on Decision and Control, Las Vegas, NV, pp. 3381-3388, December 2016.
T. S. Clawson, S. B. Fuller, R. J. Wood, S. Ferrari, "A Blade Element Approach to Modeling Aerodynamic Flight of an Insect-scale Robot," American Control Conference (ACC), Seattle, WA, May 2017.
T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017.