ARBITRARY-ORDER REAL-TIME EXACT ROBUST DIFFERENTIATION

A. Levant
Applied Mathematics Dept., Tel-Aviv University, Israel
E-mail: levant@post.tau.ac.il
Homepage: http://www.tau.ac.il/~levant/

[Figure: 5th-order differentiation]
Differentiation problematics

Division by zero in the definition:
  f'(t) = lim_{τ→0} (f(t+τ) - f(t))/τ.
Let f(t) = f0(t) + η(t), η(t) - noise. Then
  (f(t+τ) - f(t))/τ = (f0(t+τ) - f0(t))/τ + (η(t+τ) - η(t))/τ,
and the noise term (η(t+τ) - η(t))/τ explodes as τ → 0.

1st way: linear filtering. Since (sin ωt)' = ω cos ωt, the Fourier transform is considered, high harmonics are neglected, and a linear filter with transfer function P_m(p)/Q_n(p) ≈ p, m ≤ n, is used, e.g. p/(0.01p + 1)^2.
Advantages: robustness. Drawbacks: not exact, asymptotic convergence.

2nd way: numeric differentiation, Df(t) = F(f(t_1), ..., f(t_n)).
Advantages: exact on certain functions (for example, polynomials P_m(t)). Drawbacks: actual division by zero as t_i → t.
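The noise-amplification drawback of the 2nd way is easy to see numerically. A minimal sketch (the signal f0 = sin t, the noise magnitude ε = 10^{-3}, and the fixed random seed are illustrative assumptions, not from the slides):

```python
import math
import random

random.seed(0)
eps = 1e-3  # assumed noise magnitude

def noisy(t):
    # measured signal f = f0 + eta, with f0(t) = sin t and |eta| <= eps
    return math.sin(t) + eps * (2 * random.random() - 1)

def forward_diff(f, t, tau):
    return (f(t + tau) - f(t)) / tau

errors = {}
for tau in (1e-1, 1e-3, 1e-5):
    errors[tau] = abs(forward_diff(noisy, 1.0, tau) - math.cos(1.0))
# As tau -> 0 the truncation error vanishes, but the noise term
# (eta(t+tau) - eta(t))/tau grows like eps/tau, so the total error blows up.
```

For moderate τ the estimate is reasonable; for small τ the result is dominated by the differentiated noise, exactly as the slide's decomposition predicts.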
Transfer Functions

y^(n) + a_1 y^(n-1) + ... + a_n y = b_0 u^(m) + b_1 u^(m-1) + ... + b_m u
(D^n + a_1 D^(n-1) + ... + a_n) y = (b_0 D^m + b_1 D^(m-1) + ... + b_m) u

G(p) = (b_0 p^m + b_1 p^(m-1) + ... + b_m) / (p^n + a_1 p^(n-1) + ... + a_n)

u → [G(p)] → y:  u = e^{λt} ⇒ y = G(λ) e^{λt}; in particular, for λ = iω, y = G(iω) e^{iωt}.

With n < m the gain |G(iω)| grows unboundedly as ω → ∞. Realization:
  y^(n) + a_1 y^(n-1) + ... + a_n y = u,  G = 1/(p^n + a_1 p^(n-1) + ... + a_n),
  Y = (b_0 p^m + b_1 p^(m-1) + ... + b_m) U / (p^n + a_1 p^(n-1) + ... + a_n).
Thus, only n ≥ m is feasible.
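For illustration, the proper approximate differentiator mentioned above, G(p) = p/(0.01p + 1)^2, can be realized as the 2nd-order system 10^{-4} w'' + 0.02 w' + w = u with output y = w'. The explicit-Euler step, the horizon, and the test input sin t are assumptions of this sketch:

```python
import math

# Realize G(p) = p / (0.01 p + 1)^2 in state space:
#   1e-4 * w'' + 0.02 * w' + w = u,   y = w'
# and integrate with explicit Euler (assumed step h).
h = 1e-3
w, v = 0.0, 0.0   # state: w and its derivative w'
t = 0.0
while t < 2.0 - 1e-12:
    u = math.sin(t)
    a = (u - w - 0.02 * v) / 1e-4   # w'' from the ODE
    w, v = w + h * v, v + h * a
    t += h
y = v  # approximates (sin t)' = cos t at t = 2, up to filter + Euler error
```

After the fast transient decays, y tracks cos t only approximately: the filter is robust but not exact, matching the drawbacks listed for the 1st way.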
Differentiation error = intrinsic differentiator error + noise differentiation error.

The goal: to build a differentiator with zero intrinsic error on a large class of functions, with error depending continuously on the noise magnitude, and with finite-time convergence.

The main trade-off: if D: f1 ↦ d/dt f1 and D: f2 ↦ d/dt f2, consider the "noise" η = f2 - f1; then D: f1 + η ↦ d/dt f1 + d/dt η, i.e. exactness on both functions forces exact differentiation of the noise. Hence no robust differentiator is exact on all polynomials, or on all smooth functions.
Differentiation Problem

Input: f(t) = f0(t) + η(t), |η| ≤ ε; η(t) is a Lebesgue-measurable function; f0, η, ε are unknown. Known: |f0^(n+1)(t)| ≤ L (i.e. the Lipschitz constant of f0^(n) is at most L).

The goal: real-time estimation of f0'(t), f0''(t), ..., f0^(n)(t).

Theorem (optimal differentiator existence). There exist γ_i(n) ≥ 1 and D_n: f(·) ↦ D_n^i(f)(·), i = 0, 1, ..., n, such that
  sup_t |D_n^i(f)(t) - f0^(i)(t)| ≤ γ_i(n) L^{i/(n+1)} ε^{(n-i+1)/(n+1)};
in particular γ_1(1) = 2 (Landau, Kolmogorov), and for the highest derivative
  sup_t |D_n^n(f)(t) - f0^(n)(t)| ≤ γ_n(n) L^{n/(n+1)} ε^{1/(n+1)}.

Example: ε = 10^{-6}, |f0^(6)(t)| ≤ 1: sup_t |D_5^5(f)(t) - f0^(5)(t)| ~ 0.1.

That asymptotics cannot be improved.
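The worked example is a direct substitution into the bound: with n = 5, L = 1, ε = 10^{-6}, the factor for the highest derivative is L^{5/6} ε^{1/6} = (10^{-6})^{1/6} = 0.1. A one-line arithmetic check:

```python
# n = 5 (5th derivative), L = 1, eps = 1e-6, as in the slide's example
n, L, eps = 5, 1.0, 1e-6
# error-bound factor for i = n:  L^{n/(n+1)} * eps^{1/(n+1)}
factor = L ** (n / (n + 1)) * eps ** (1 / (n + 1))
# factor = 0.1: even a tiny 1e-6 noise caps 5th-derivative accuracy near 0.1
```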
First-order differentiator idea

Auxiliary dynamic system: z' = u.
Tracking problem: σ = z - f(t) → 0.
With any 2-sliding controller applied, σ = σ' = 0 is kept: z - f(t) = 0, u - f'(t) = 0.
With input noises: z - f(t) ≈ 0, u - f'(t) ≈ 0.
A proof sketch

Consider a smooth noise η(t) = ε sin((L/ε)^{1/(n+1)} t). Then
  sup_t |η^(i)(t)| = L^{i/(n+1)} ε^{(n-i+1)/(n+1)}, i = 0, 1, ..., n+1.
An exact differentiator D differentiates both 0 and 0 + η(t), so its worst-case error must be at least of that order.

Lemma. There exist β_i(n) ≥ 1, β_0(n) = β_{n+1}(n) = 1, such that for any f with Lipschitz constant of f^(n) at most L and sup_t |f(t)| ≤ ε,
  sup_t |f^(i)(t)| ≤ β_i(n) L^{i/(n+1)} ε^{(n-i+1)/(n+1)}.
Some evaluations: β_1(1) = 2√2, β_1(2) = 7.07, β_2(2) = 6.24.

W(C, n) = {f: [a, b] → R, Lipschitz constant of f^(n) ≤ C}.
The optimal differentiator: let g ↦ g~, where g~ is a function closest to g in W(L, n) (not unique); set D^i g = (g~)^(i).
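The supremum formula for η^(i) holds because each differentiation of the test noise multiplies its amplitude by ω = (L/ε)^{1/(n+1)}, so sup|η^(i)| = ε ω^i = L^{i/(n+1)} ε^{(n+1-i)/(n+1)}. A numeric identity check (the particular n, L, ε are arbitrary assumed values):

```python
import math

n, L, eps = 2, 4.0, 1e-3                 # assumed example values
omega = (L / eps) ** (1 / (n + 1))       # frequency of eta(t)
checks = []
for i in range(n + 2):                   # i = 0, 1, ..., n+1
    amplitude = eps * omega ** i         # sup |eta^{(i)}|: amplitude * omega^i
    closed = L ** (i / (n + 1)) * eps ** ((n + 1 - i) / (n + 1))
    checks.append(math.isclose(amplitude, closed, rel_tol=1e-12))
# at i = 0 the amplitude is eps, at i = n+1 it is exactly L
```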
Robust first-order differentiator

Input: f(t) = f0(t) + η(t), |f0''| ≤ L, |η| ≤ ε, η(t) - a measurable (Lebesgue) function.

  z0' = v0,  v0 = -λ0 |z0 - f|^{1/2} sign(z0 - f) + z1,
  z1' = -λ1 sign(z0 - f)  (= -λ1 sign(z1 - v0)).

Output: z1(t), v0(t);  |z1 - f0'| ~ L^{1/2} ε^{1/2},  σ = z0 - f(t) → 0.
For example: λ1 = 1.1 L, λ0 = 1.5 L^{1/2}.
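A minimal Euler-discretized sketch of this differentiator, using the suggested gains with L = 1 (so λ0 = 1.5, λ1 = 1.1). The test signal f0 = sin t, the step size, the horizon, and the zero initial conditions are assumptions of this sketch:

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

L = 1.0                              # bound on |f0''| for f0 = sin t
lam0, lam1 = 1.5 * L ** 0.5, 1.1 * L # gains from the slide
h = 1e-4                             # assumed integration/sampling step
z0, z1 = 0.0, 0.0
t = 0.0
while t < 10.0 - 1e-12:
    f = math.sin(t)                  # noise-free measurement (eps = 0)
    e = z0 - f
    v0 = -lam0 * abs(e) ** 0.5 * sign(e) + z1
    z0 += h * v0
    z1 += h * (-lam1 * sign(e))
    t += h
# after finite-time convergence, z1 tracks f0'(t) = cos t
err = abs(z1 - math.cos(10.0))
```

The signum term makes z1 chatter by about λ1·h per step, so the discrete accuracy is limited by the step size rather than by any asymptotic filter dynamics.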
Filippov Definition

x' = v(x) is replaced by the differential inclusion x' ∈ V(x), where x(t) is an absolutely continuous function and
  V(x) = ∩_{ε>0} ∩_{µN=0} conv v(O_ε(x) \ N).
With limit values v+, v- on a discontinuity surface: x' = p v+ + (1-p) v-, p ∈ [0, 1].
Solutions exist for any locally bounded Lebesgue-measurable v(x); or for any upper-semicontinuous, locally bounded V(x) with convex, closed, non-empty values.
1-differentiator explanation

Let σ = z0 - f0(t), ε = 0 (with λ = λ0, α = λ1):
  σ'' = -f0'' - (1/2) λ σ' |σ|^{-1/2} - α sign σ,
  σ'' ∈ -(1/2) λ σ' |σ|^{-1/2} - [α - L, α + L] sign σ.
The inclusion is invariant with respect to g_ν: (σ, σ', t) ↦ (ν^2 σ, ν σ', ν t).
Optimal n-differentiator

Recursive form:
  z0' = v0,  v0 = -λ0 L^{1/(n+1)} |z0 - f(t)|^{n/(n+1)} sign(z0 - f(t)) + z1,
  z1' = v1,  v1 = -λ1 L^{1/n} |z1 - v0|^{(n-1)/n} sign(z1 - v0) + z2,
  ...
  zi' = vi,  vi = -λi L^{1/(n-i+1)} |zi - v_{i-1}|^{(n-i)/(n-i+1)} sign(zi - v_{i-1}) + z_{i+1},
  ...
  z_{n-1}' = v_{n-1},  v_{n-1} = -λ_{n-1} L^{1/2} |z_{n-1} - v_{n-2}|^{1/2} sign(z_{n-1} - v_{n-2}) + z_n,
  z_n' = -λ_n L sign(z_n - v_{n-1}).

Recursive construction: D^0: z' = -µ sign(z - f(t)), µ > L;
D_n: z0' = v, v = -µ |z0 - f(t)|^{n/(n+1)} sign(z0 - f(t)) + z1, with
  z1 = D_0^{n-1}(v(·), L), ..., z_n = D_{n-1}^{n-1}(v(·), L).

Non-recursive form:
  z0' = -κ0 |z0 - f(t)|^{n/(n+1)} sign(z0 - f(t)) + z1,
  ...
  zi' = -κi |z0 - f(t)|^{(n-i)/(n+1)} sign(z0 - f(t)) + z_{i+1}, i = 1, ..., n-1,
  ...
  z_n' = -κ_n sign(z0 - f(t)).
5th-order differentiator, L = 1

  z0' = v0,  v0 = -12 |z0 - f(t)|^{5/6} sign(z0 - f(t)) + z1,
  z1' = v1,  v1 = -8 |z1 - v0|^{4/5} sign(z1 - v0) + z2,
  z2' = v2,  v2 = -5 |z2 - v1|^{3/4} sign(z2 - v1) + z3,
  z3' = v3,  v3 = -3 |z3 - v2|^{2/3} sign(z3 - v2) + z4,
  z4' = v4,  v4 = -1.5 |z4 - v3|^{1/2} sign(z4 - v3) + z5,
  z5' = -1.1 sign(z5 - v4);
  f(t) = 0.5 sin 0.5t + 0.5 cos t.
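A direct Euler implementation of these equations with the slide's test signal. The step τ = 5·10^{-4}, the horizon, and the initialization z0(0) = f(0) with the other states zero are assumptions of this sketch:

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def f(t):
    return 0.5 * math.sin(0.5 * t) + 0.5 * math.cos(t)

def df(t):  # exact f'(t), used only for checking
    return 0.25 * math.cos(0.5 * t) - 0.5 * math.sin(t)

tau = 5e-4
z = [f(0.0), 0.0, 0.0, 0.0, 0.0, 0.0]      # states z0, ..., z5
gains = (12.0, 8.0, 5.0, 3.0, 1.5, 1.1)    # slide's gains for L = 1
powers = (5 / 6, 4 / 5, 3 / 4, 2 / 3, 1 / 2)
t = 0.0
while t < 20.0 - 1e-12:
    v = [0.0] * 6
    e = z[0] - f(t)
    v[0] = -gains[0] * abs(e) ** powers[0] * sign(e) + z[1]
    for i in range(1, 5):
        e = z[i] - v[i - 1]
        v[i] = -gains[i] * abs(e) ** powers[i] * sign(e) + z[i + 1]
    v[5] = -gains[5] * sign(z[5] - v[4])
    for i in range(6):
        z[i] += tau * v[i]
    t += tau
err0 = abs(z[0] - f(20.0))   # tracking error of z0
err1 = abs(z[1] - df(20.0))  # error of the 1st-derivative estimate
```

After finite-time convergence the low-order estimates are extremely accurate, while (as the next slides show) the accuracy of z4, z5 is limited by the sampling step.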
Theorem 1. If f(t) = f0(t), then in finite time
  z0 = f0(t);  zi = v_{i-1} = f0^(i)(t), i = 1, ..., n.
2-sliding modes: zi ≡ f0^(i)(t), i = 0, ..., n-1.

Theorem 2. If |f(t) - f0(t)| ≤ ε, then
  |zi - f0^(i)(t)| ≤ µi ε^{(n-i+1)/(n+1)}, i = 0, ..., n;
  |vi - f0^(i+1)(t)| ≤ νi ε^{(n-i)/(n+1)}, i = 0, ..., n-1;
in particular |z_n - f0^(n)(t)| ≤ µ_n ε^{1/(n+1)}.

Discrete-sampling case: z0(t_j) - f(t_j) is substituted for z0 - f(t), with t_j ≤ t < t_{j+1}, t_{j+1} - t_j = τ > 0.

Theorem 3. |zi - f0^(i)(t)| ≤ µi τ^{n-i+1}, i = 0, ..., n;
  |vi - f0^(i+1)(t)| ≤ νi τ^{n-i}, i = 0, ..., n-1;
in particular |z_n - f0^(n)(t)| ≤ µ_n τ.
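Theorem 3's τ-asymptotics can be observed with the first-order differentiator (gains λ0 = 1.5, λ1 = 1.1 for L = 1, as on the earlier slide). For n = 1 the theorem predicts |z1 - f0'| = O(τ); the test signal f0 = sin t, the horizons, and the two sample steps are assumptions of this sketch:

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def z1_error(tau):
    # first-order differentiator (L = 1) on f0 = sin t, Euler step = tau
    z0, z1, t = 0.0, 0.0, 0.0
    while t < 10.0 - 1e-12:          # let the differentiator converge
        e = z0 - math.sin(t)
        z0 += tau * (-1.5 * abs(e) ** 0.5 * sign(e) + z1)
        z1 += tau * (-1.1 * sign(e))
        t += tau
    worst = 0.0                       # worst error over t in [10, 16]
    while t < 16.0 - 1e-12:
        e = z0 - math.sin(t)
        z0 += tau * (-1.5 * abs(e) ** 0.5 * sign(e) + z1)
        z1 += tau * (-1.1 * sign(e))
        worst = max(worst, abs(z1 - math.cos(t)))
        t += tau
    return worst

coarse, fine = z1_error(1e-2), z1_error(1e-4)
# Theorem 3 (n = 1): the z1 error is O(tau), so shrinking tau by 100
# should shrink the post-convergence error roughly proportionally.
```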
Software restrictions of higher-order differentiation (5th-order differentiator)

τ = 5·10^{-4}:
  |z0 - f0| ≤ 1.46·10^{-13},  |z1 - f0'| ≤ 7.16·10^{-10},  |z2 - f0''| ≤ 9.86·10^{-7},
  |z3 - f0^(3)| ≤ 3.76·10^{-4},  |z4 - f0^(4)| ≤ 0.0306,  |z5 - f0^(5)| ≤ 0.449.
τ = 5·10^{-5}:
  |z0 - f0| ≤ 4.44·10^{-16},  |z1 - f0'| ≤ 6.34·10^{-12},  |z2 - f0''| ≤ 2.24·10^{-8},
  |z3 - f0^(3)| ≤ 2.44·10^{-5},  |z4 - f0^(4)| ≤ 0.00599,  |z5 - f0^(5)| ≤ 0.265.
Sensitivity to noises

3rd-order differentiator, ε = 0.0001, ω ≤ 1000:
  |D_3^1 - f0'| ≤ 2.6·10^{-3},  |D_3^2 - f0''| ≤ 0.14,  |D_3^3 - f0^(3)| ≤ 0.48.
5th-order differentiator, ε = 0.01, ω ≤ 10 (the worst case):
  |z1 - f0'| ≤ 0.014,  |z2 - f0''| ≤ 0.18,  |z3 - f0^(3)| ≤ 0.65,
  |z4 - f0^(4)| ≤ 1.05,  |z5 - f0^(5)| ≤ 0.74.
On-line calculation of the angular motor velocity and acceleration (data from Volvo Ltd)

Experimental data, τ = 0.004; 2nd-order differentiation, L = 625.
On-line 2nd-order differentiation: comparison with optimal spline approximation
Image Processing: Crack Elimination
Edge Detection