Optimal input design for nonlinear dynamical systems: a graph-theory approach
Patricio E. Valenzuela
Department of Automatic Control and ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden
Seminar, Uppsala University, January 16, 2015
System identification
[Block diagram: input $u_t$ and noise $v_t$ enter the System, producing output $y_t$.]
System identification: modeling based on input-output data.
Basic entities: data set, model structure, identification method.
System identification
The maximum likelihood method:
$\hat{\theta}_{n_{\rm seq}} = \arg\max_{\theta \in \Theta} \ell_\theta(y_{1:n_{\rm seq}})$, where $\ell_\theta(y_{1:n_{\rm seq}}) := \log p_\theta(y_{1:n_{\rm seq}})$.
As $n_{\rm seq} \to \infty$, we have $\hat{\theta}_{n_{\rm seq}} \to \theta_0$, and $\sqrt{n_{\rm seq}}\,(\hat{\theta}_{n_{\rm seq}} - \theta_0)$ is asymptotically normal with zero mean and covariance $\{I_F^e\}^{-1}$.
System identification
where $\sqrt{n_{\rm seq}}\,(\hat{\theta}_{n_{\rm seq}} - \theta_0)$ is asymptotically normal with zero mean and covariance $\{I_F^e\}^{-1}$, and
$I_F^e := \mathrm{E}\big\{ \nabla_\theta \ell_\theta(y_{1:n_{\rm seq}})\big|_{\theta=\theta_0} \, \nabla_\theta \ell_\theta(y_{1:n_{\rm seq}})^\top\big|_{\theta=\theta_0} \big\}$.
Different $u_{1:n_{\rm seq}}$'s give different $I_F^e$'s: the covariance of $\sqrt{n_{\rm seq}}\,(\hat{\theta}_{n_{\rm seq}} - \theta_0)$ is affected by $u_{1:n_{\rm seq}}$!
Input design for dynamic systems
Input design: maximize the information obtained from an experiment.
Existing methods focus mostly on linear systems. Recent developments for nonlinear systems (Hjalmarsson 2007, Larsson 2010, Gopaluni 2011, De Cock-Gevers-Schoukens 2013, Forgione-Bombois-Van den Hof-Hjalmarsson 2014).
Input design for dynamic systems Challenges for nonlinear systems: Problem complexity (usually non-convex). Model restrictions. Input restrictions. How could we overcome these limitations?
Summary 1. We present a method for input design for dynamic systems. 2. The method is also suitable for nonlinear systems.
Outline Problem formulation for output-error models Input design based on graph theory Extension to nonlinear SSM Closed-loop application oriented input design Conclusions and future work
Section: Problem formulation for output-error models
Problem formulation for output-error models
[Block diagram: input $u_t$ and noise $e_t$ enter the system $G_0$, producing output $y_t$.]
$G_0$ is a system, where:
-$e_t$: white noise (variance $\lambda_e$)
-$u_t$: input
-$y_t$: system output
Goal: design $u_{1:n_{\rm seq}} := (u_1, \ldots, u_{n_{\rm seq}})$ as a realization of a stationary process maximizing $I_F$.
Problem formulation for output-error models
Here,
$I_F := \frac{1}{\lambda_e} \mathrm{E}\Big\{ \sum_{t=1}^{n_{\rm seq}} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top \Big\} = \frac{1}{\lambda_e} \int \sum_{t=1}^{n_{\rm seq}} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top \, dP(u_{1:n_{\rm seq}})$
where $\psi_t^{\theta_0}(u_t) := \nabla_\theta \hat{y}_t(u_t)\big|_{\theta=\theta_0}$ and $\hat{y}_t(u_t) := G(u_t; \theta)$.
Design $u_{1:n_{\rm seq}} \in \mathbb{R}^{n_{\rm seq}}$ → Design $P(u_{1:n_{\rm seq}}) \in \mathcal{P}$.
Problem formulation for output-error models
Assumption: $u_t \in \mathcal{C}$ ($\mathcal{C}$ a finite set). Then
$I_F = \frac{1}{\lambda_e} \sum_{u_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}} \sum_{t=1}^{n_{\rm seq}} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top \, p(u_{1:n_{\rm seq}})$
Problem formulation for output-error models
Characterizing $p(u_{1:n_{\rm seq}}) \in \mathcal{P}_{\mathcal{C}}$: $p$ nonnegative, $\sum_x p(x) = 1$, $p$ shift invariant.
Problem formulation for output-error models
Problem: design $u^{\rm opt}_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}$ as a realization from $p^{\rm opt}(u_{1:n_{\rm seq}})$, where
$p^{\rm opt}(u_{1:n_{\rm seq}}) := \arg\max_{p \in \mathcal{P}_{\mathcal{C}}} h(I_F(p))$,
$h: \mathbb{R}^{n_\theta \times n_\theta} \to \mathbb{R}$ is a matrix concave function, and
$I_F(p) = \frac{1}{\lambda_e} \sum_{u_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}} \sum_{t=1}^{n_{\rm seq}} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top \, p(u_{1:n_{\rm seq}})$
Section: Input design based on graph theory
Input design problem
Problem: design $u^{\rm opt}_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}$ as a realization from $p^{\rm opt}(u_{1:n_{\rm seq}})$, where
$p^{\rm opt}(u_{1:n_{\rm seq}}) := \arg\max_{p \in \mathcal{P}_{\mathcal{C}}} h(I_F(p))$,
$h: \mathbb{R}^{n_\theta \times n_\theta} \to \mathbb{R}$ is a matrix concave function, and
$I_F(p) = \frac{1}{\lambda_e} \sum_{u_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}} \sum_{t=1}^{n_{\rm seq}} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top \, p(u_{1:n_{\rm seq}})$
Issues:
1. $I_F(p)$ requires a sum of $n_{\rm seq}$-dimensional terms ($n_{\rm seq}$ large).
2. How could we represent an element in $\mathcal{P}_{\mathcal{C}}$?
Input design problem
Solving issue 1 ($I_F(p)$ requires a sum of $n_{\rm seq}$-dimensional terms, $n_{\rm seq}$ large):
Assumption: $u_{1:n_{\rm seq}}$ is a realization of a stationary process with memory $n_m$ ($n_m < n_{\rm seq}$). Then $I_F(p)$ only requires a sum of $n_m$-dimensional terms. The minimum $n_m$ is related to the memory of the system.
Input design problem
Solving issue 2 (how to represent an element in $\mathcal{P}_{\mathcal{C}}$):
$\mathcal{P}_{\mathcal{C}}$ is a polyhedron. Let $\mathcal{V}_{\mathcal{P}_{\mathcal{C}}}$ be the set of extreme points of $\mathcal{P}_{\mathcal{C}}$; every element of $\mathcal{P}_{\mathcal{C}}$ can be described as a convex combination of elements of $\mathcal{V}_{\mathcal{P}_{\mathcal{C}}}$.
The elements in $\mathcal{V}_{\mathcal{P}_{\mathcal{C}}}$ can be found by using graph theory!
Graph theory in input design
Example: de Bruijn graph, $\mathcal{C} := \{0, 1\}$, $n_m := 2$. [Graph with nodes $(u_{t-1}, u_t) \in \{(0,0), (0,1), (1,0), (1,1)\}$.]
Elements in $\mathcal{V}_{\mathcal{P}_{\mathcal{C}}}$ ↔ prime cycles in $G_{\mathcal{C}^{n_m}}$ ↔ elementary cycles in $G_{\mathcal{C}^{(n_m - 1)}}$.
Graph theory in input design
There are algorithms to find all elementary cycles of a directed graph (Johnson 1975, Tarjan 1972).
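The cycle enumeration above can be sketched in code. A minimal illustration (not the talk's implementation): build the de Bruijn graph for the alphabet $\mathcal{C} = \{0, 1\}$ and enumerate its elementary cycles by depth-first search; Johnson's (1975) algorithm is the scalable choice for larger graphs.

```python
from itertools import product

def de_bruijn_edges(C, order):
    """Edges of the de Bruijn graph whose nodes are words of length `order`:
    (a, b) is an edge when the last order-1 symbols of a equal the first of b."""
    nodes = list(product(C, repeat=order))
    return [(a, b) for a in nodes for b in nodes if a[1:] == b[:-1]]

def elementary_cycles(nodes, edges):
    """All elementary cycles (self-loops included), by DFS from each start node.
    Fine for tiny graphs; Johnson's 1975 algorithm is the scalable choice."""
    adj = {n: [b for (a, b) in edges if a == n] for n in nodes}
    cycles = []
    def dfs(start, node, path):
        for nxt in adj[node]:
            if nxt == start:
                cycles.append(path + [start])
            elif nxt not in path and nxt > start:  # canonical: start = min node
                dfs(start, nxt, path + [nxt])
    for n in sorted(nodes):
        dfs(n, n, [n])
    return cycles

# G_{C^(n_m - 1)} for C = {0, 1}, n_m = 2: nodes are single symbols
C = (0, 1)
nodes = list(product(C, repeat=1))
cycles = elementary_cycles(nodes, de_bruijn_edges(C, 1))
print(cycles)  # the self-loops at 0 and 1, plus the cycle 0 -> 1 -> 0
```

Each elementary cycle found here corresponds to one prime cycle of $G_{\mathcal{C}^{n_m}}$, i.e., one extreme point of $\mathcal{P}_{\mathcal{C}}$.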
Graph theory in input design
Once $v_i \in \mathcal{V}_{\mathcal{P}_{\mathcal{C}}}$ is known:
-the distribution for each $v_i$ is known;
-an input signal $\{u_t^i\}_{t=0}^{N}$ can be drawn from $v_i$.
Therefore,
$I_F^{(i)} := \frac{1}{\lambda_e} \sum_{u_{1:n_m} \in \mathcal{C}^{n_m}} \sum_{t=1}^{n_m} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top \, v_i(u_{1:n_m}) \approx \frac{1}{\lambda_e N} \sum_{t=1}^{N} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top$
for all $v_i \in \mathcal{V}_{\mathcal{P}_{\mathcal{C}}}$.
Graph theory in input design
The sum over $\mathcal{C}^{n_m}$ defining $I_F^{(i)}$ is approximated by Monte Carlo: simulate a long realization $u_{1:N}$ from $v_i$ and average $\psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top$ over $t$.
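The Monte-Carlo approximation of $I_F^{(i)}$ can be sketched as follows, for an assumed toy predictor $\hat{y}_t = \theta_1 u_t + \theta_2 u_t^2$ whose gradient $\psi_t = [u_t, u_t^2]^\top$ is available in closed form (the model and the cycle are illustrative, not from the talk):

```python
import numpy as np

# Toy predictor yhat_t = theta1*u_t + theta2*u_t^2, so the gradient is
# psi_t(u_t) = [u_t, u_t^2]^T in closed form (assumed model, for illustration).
def psi_t(u_t):
    return np.array([u_t, u_t ** 2])

lam_e = 1.0

# One extreme point v_i: replaying a prime cycle, here the period-2 cycle
# (-1, 1), yields a stationary realization u_{1:N}.
cycle = [-1.0, 1.0]
N = 10_000
u = [cycle[t % 2] for t in range(N)]

# Monte-Carlo estimate: I_F^(i) ~= (1/(lam_e*N)) * sum_t psi_t psi_t^T
I_F = sum(np.outer(psi_t(ut), psi_t(ut)) for ut in u) / (lam_e * N)
print(I_F)  # -> [[1, 0], [0, 1]] for this cycle (odd powers average to zero)
```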
Input design based on graph theory
To design an experiment in $\mathcal{C}^{n_m}$:
1. Compute all the prime cycles of $G_{\mathcal{C}^{n_m}}$.
2. Generate the input signals $\{u_t^i\}_{t=0}^{N}$ from the prime cycles of $G_{\mathcal{C}^{n_m}}$, for each $i \in \{1, \ldots, n_V\}$.
3. For each $i \in \{1, \ldots, n_V\}$, approximate $I_F^{(i)}$ by using $I_F^{(i)} \approx \frac{1}{\lambda_e N} \sum_{t=1}^{N} \psi_t^{\theta_0}(u_t)\,\psi_t^{\theta_0}(u_t)^\top$.
Input design based on graph theory
To design an experiment in $\mathcal{C}^{n_m}$ (cont.):
4. Define $\gamma := \{\alpha_1, \ldots, \alpha_{n_V}\} \in \mathbb{R}^{n_V}$ and solve
$\gamma^{\rm opt} := \arg\max_{\gamma \in \mathbb{R}^{n_V}} h(I_F^{\rm app}(\gamma))$
subject to $\sum_{i=1}^{n_V} \alpha_i = 1$ and $\alpha_i \geq 0$ for all $i \in \{1, \ldots, n_V\}$, where
$I_F^{\rm app}(\gamma) := \sum_{i=1}^{n_V} \alpha_i I_F^{(i)}$.
Input design based on graph theory
To design an experiment in $\mathcal{C}^{n_m}$ (cont.):
5. The optimal pmf $p^{\rm opt}$ is given by $p^{\rm opt} = \sum_{i=1}^{n_V} \alpha_i^{\rm opt} v_i$.
6. Sample $u_{1:n_{\rm seq}}$ from $p^{\rm opt}$ using Markov chains.
$I_F^{\rm app}(\gamma)$ is linear in the decision variables → the problem is convex!
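Step 4 is a concave maximization over the probability simplex. A minimal sketch with $h = \log\det$ and two illustrative information matrices, solved by exponentiated-gradient ascent (any convex solver would do; the matrices are made up for the sketch):

```python
import numpy as np

# Two illustrative per-cycle information matrices I_F^(1), I_F^(2)
# (numbers invented for the sketch, not from the talk).
I_list = [np.diag([1.0, 0.1]), np.diag([0.1, 1.0])]

# Maximize h(sum_i alpha_i * I_i) with h = log det over the simplex.
# Exponentiated-gradient (mirror) ascent keeps alpha on the simplex; the
# objective is concave, so the iteration converges to the optimum.
alpha = np.full(len(I_list), 1.0 / len(I_list))
for _ in range(500):
    M = sum(a * I for a, I in zip(alpha, I_list))
    Minv = np.linalg.inv(M)                      # grad of log det(M) is M^{-1}
    grad = np.array([np.trace(Minv @ I) for I in I_list])
    alpha = alpha * np.exp(0.05 * grad)
    alpha /= alpha.sum()
print(alpha)  # -> [0.5, 0.5]: the symmetric mixture is optimal here
```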
Example I
[Block diagram: $u_t$ and $e_t$ enter $G_0$, producing $y_t$.]
$G(u_t; \theta)$:
$x_{t+1} = \dfrac{\theta_1}{1 + x_t^2} + u_t$, $\quad y_t = \theta_2 x_t^2 + e_t$, $\quad x_1 = 0$,
with $\theta = [\theta_1 \;\; \theta_2]^\top$, $\theta_0 = [0.8 \;\; 2]^\top$.
$e_t$: white Gaussian noise, zero mean, variance $\lambda_e = 1$.
Example I
[Block diagram: $u_t$ and $e_t$ enter $G_0$, producing $y_t$.]
$G(u_t; \theta)$: $x_{t+1} = \dfrac{\theta_1}{1 + x_t^2} + u_t$, $y_t = \theta_2 x_t^2 + e_t$, $x_1 = 0$, with $\theta = [\theta_1 \;\; \theta_2]^\top$, $\theta_0 = [0.8 \;\; 2]^\top$.
We consider $h(\cdot) = \log\det(\cdot)$ and $n_{\rm seq} = 10^4$.
Example I
Results for $x_{t+1} = \dfrac{\theta_1}{1 + x_t^2} + u_t$, $y_t = \theta_2 x_t^2 + e_t$, $x_1 = 0$:
$h(I_F)$: Case 1 | Case 2 | Case 3 | Binary
$\log\det(I_F)$: 3.82 | 4.50 | 4.48 | 3.47
Case 1: $n_m = 2$, $\mathcal{C} = \{-1, 0, 1\}$
Case 2: $n_m = 1$, $\mathcal{C} = \{-1, -1/3, 1/3, 1\}$
Case 3: $n_m = 1$, $\mathcal{C} = \{-1, -0.5, 0, 0.5, 1\}$
Example I
Results (95% confidence ellipsoids): [Plot of $\hat{\theta}_2$ versus $\hat{\theta}_1$ around $(0.8, 2)$. Red: Case 2; blue: binary input.]
Section: Extension to nonlinear SSM
Extension to nonlinear SSM
Nonlinear state space model:
$x_0 \sim \mu(x_0)$
$x_t \mid x_{t-1} \sim f_\theta(x_t \mid x_{t-1}, u_{t-1})$
$y_t \mid x_t \sim g_\theta(y_t \mid x_t, u_t)$
where $\theta \in \Theta$:
-$f_\theta$, $g_\theta$, $\mu$: pdfs
-$x_t$: states
-$u_t$: input
-$y_t$: system output
Goal: design $u_{1:n_{\rm seq}} := (u_1, \ldots, u_{n_{\rm seq}})$ as a realization of a stationary process maximizing $I_F$.
Extension to nonlinear SSM
Here,
$I_F = \mathrm{E}\{ \mathcal{S}(\theta_0)\,\mathcal{S}^\top(\theta_0) \}$, where $\mathcal{S}(\theta_0) = \nabla_\theta \log p_\theta(y_{1:n_{\rm seq}} \mid u_{1:n_{\rm seq}})\big|_{\theta=\theta_0}$.
Design $u_{1:n_{\rm seq}} \in \mathbb{R}^{n_{\rm seq}}$ → Design $P(u_{1:n_{\rm seq}}) \in \mathcal{P}$.
Extension to nonlinear SSM
Fisher's identity:
$\nabla_\theta \log p_\theta(y_{1:T} \mid u_{1:T}) = \mathrm{E}\{ \nabla_\theta \log p_\theta(x_{1:T}, y_{1:T} \mid u_{1:T}) \mid y_{1:T}, u_{1:T} \}$
$= \sum_{t=1}^{T} \int_{\mathcal{X}^2} \xi_\theta(x_{t-1:t}, u_t)\, p_\theta(x_{t-1:t} \mid y_{1:T})\, dx_{t-1:t}$
with
$\xi_\theta(x_{t-1:t}, u_t) = \nabla_\theta \big[ \log f_\theta(x_t \mid x_{t-1}, u_{t-1}) + \log g_\theta(y_t \mid x_t, u_t) \big]$.
Assumption: $u_t \in \mathcal{C}$ ($\mathcal{C}$ a finite set).
Extension to nonlinear SSM
Recall $\mathcal{P}_{\mathcal{C}}$: $p$ nonnegative, $\sum_x p(x) = 1$, $p$ shift invariant.
Extension to nonlinear SSM
Problem: design $u^{\rm opt}_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}$ as a realization from $p^{\rm opt}(u_{1:n_{\rm seq}})$, where
$p^{\rm opt}(u_{1:n_{\rm seq}}) := \arg\max_{p \in \mathcal{P}_{\mathcal{C}}} h(I_F(p))$,
$h: \mathbb{R}^{n_\theta \times n_\theta} \to \mathbb{R}$ is a matrix concave function, and
$I_F(p) = \mathrm{E}\{ \mathcal{S}(\theta_0)\,\mathcal{S}^\top(\theta_0) \}$
Input design problem for nonlinear SSM
Problem: design $u^{\rm opt}_{1:n_{\rm seq}} \in \mathcal{C}^{n_{\rm seq}}$ as a realization from $p^{\rm opt}(u_{1:n_{\rm seq}})$, where
$p^{\rm opt}(u_{1:n_{\rm seq}}) := \arg\max_{p \in \mathcal{P}_{\mathcal{C}}} h(I_F(p))$,
$h: \mathbb{R}^{n_\theta \times n_\theta} \to \mathbb{R}$ is a matrix concave function, and $I_F(p) = \mathrm{E}\{ \mathcal{S}(\theta_0)\,\mathcal{S}^\top(\theta_0) \}$.
Issues:
1. How could we represent an element in $\mathcal{P}_{\mathcal{C}}$? → Use the graph theory approach!
2. How could we compute $I_F(p)$?
Input design based on graph theory (revisited)
To design an experiment in $\mathcal{C}^{n_m}$:
1. Compute all the prime cycles of $G_{\mathcal{C}^{n_m}}$.
2. Generate the input signals $\{u_t^i\}_{t=0}^{N}$ from the prime cycles of $G_{\mathcal{C}^{n_m}}$, for each $i \in \{1, \ldots, n_V\}$.
3. For each $i \in \{1, \ldots, n_V\}$, approximate $I_F^{(i)} := \mathrm{E}_{v_i(u_{1:n_m})}\{ \mathcal{S}(\theta_0)\,\mathcal{S}^\top(\theta_0) \}$ (a new expression is required!).
Estimating $I_F$
Approximate $I_F$ as
$\hat{I}_F := \frac{1}{M} \sum_{m=1}^{M} \mathcal{S}_m(\theta_0)\,\mathcal{S}_m^\top(\theta_0)$
Difficulty: $\mathcal{S}_m(\theta_0)$ is not available.
Solution: estimate $\mathcal{S}_m(\theta_0)$ using particle methods!
Particle methods to estimate $\mathcal{S}_m(\theta_0)$
Goal: approximate $\{p_\theta(x_{1:t} \mid y_{1:t})\}_{t \geq 1}$ by a particle system $\{x_{1:t}^{(i)}, w_t^{(i)}\}_{i=1}^{N}$.
Approach: auxiliary particle filter + fixed-lag smoother.
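For intuition, here is a bootstrap particle filter, a simpler relative of the auxiliary particle filter used in the talk, run on an assumed linear-Gaussian toy model (all numbers illustrative). The propagate/weight/resample steps are the sampling and weighting steps described in Appendix V.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian SSM standing in for f_theta, g_theta (assumed numbers):
#   x_t = 0.7 x_{t-1} + v_t,  v_t ~ N(0, 0.1^2)
#   y_t = x_t + e_t,          e_t ~ N(0, 0.1^2)
T, N = 50, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + 0.1 * rng.standard_normal()
y = x + 0.1 * rng.standard_normal(T)

# Bootstrap particle filter: propagate with f_theta, weight with g_theta,
# then resample (the auxiliary PF refines the proposal/resampling step).
particles = 0.1 * rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = 0.7 * particles + 0.1 * rng.standard_normal(N)  # sample/propagate
    logw = -0.5 * ((y[t] - particles) / 0.1) ** 2               # weight via g_theta
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                                      # filtered mean
    particles = rng.choice(particles, size=N, p=w)              # resample
print(np.max(np.abs(est - x)))                                  # small filtering error
```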
Particle methods to estimate $\mathcal{S}_m(\theta_0)$
Estimate $\mathcal{S}_m(\theta_0)$ as
$\hat{\mathcal{S}}_m(\theta_0) := \sum_{t=1}^{T} \sum_{i=1}^{N} w_{\kappa_t}^{(i)} \, \xi_{\theta_0}\big(x_{t-1}^{a_{\kappa_t, t-1}(i)}, x_t^{a_{\kappa_t, t}(i)}, u_t\big)$
where
$\xi_\theta(x_{t-1:t}, u_t) = \nabla_\theta \big[ \log f_\theta(x_t \mid x_{t-1}, u_{t-1}) + \log g_\theta(y_t \mid x_t, u_t) \big]$.
Input design based on graph theory (revisited)
To design an experiment in $\mathcal{C}^{n_m}$:
1. Compute all the prime cycles of $G_{\mathcal{C}^{n_m}}$.
2. Generate the input signals $\{u_t^i\}_{t=0}^{N}$ from the prime cycles of $G_{\mathcal{C}^{n_m}}$, for each $i \in \{1, \ldots, n_V\}$.
3. For each $i \in \{1, \ldots, n_V\}$, approximate $I_F^{(i)} := \mathrm{E}_{v_i(u_{1:n_m})}\{ \mathcal{S}(\theta_0)\,\mathcal{S}^\top(\theta_0) \}$ by using $I_F^{(i)} \approx \frac{1}{M} \sum_{m=1}^{M} \hat{\mathcal{S}}_m(\theta_0)\,\hat{\mathcal{S}}_m^\top(\theta_0)$.
Input design based on graph theory (revisited)
To design an experiment in $\mathcal{C}^{n_m}$ (cont.):
4. Define $\gamma := \{\alpha_1, \ldots, \alpha_{n_V}\} \in \mathbb{R}^{n_V}$. For $k \in \{1, \ldots, K\}$, solve
$\gamma^{{\rm opt},k} := \arg\max_{\gamma_k \in \mathbb{R}^{n_V}} h(I_F^{{\rm app},k}(\gamma_k))$
subject to $\sum_{i=1}^{n_V} \alpha_{i,k} = 1$ and $\alpha_{i,k} \geq 0$ for all $i \in \{1, \ldots, n_V\}$, where
$I_F^{{\rm app},k}(\gamma_k) := \sum_{i=1}^{n_V} \alpha_{i,k} I_F^{(i),k}$.
Compute $\bar{\gamma}$ as the sample mean of $\{\gamma^{{\rm opt},k}\}_{k=1}^{K}$.
Input design based on graph theory (revisited)
To design an experiment in $\mathcal{C}^{n_m}$ (cont.):
5. The optimal pmf $p^{\rm opt}$ is given by $p^{\rm opt} = \sum_{i=1}^{n_V} \alpha_i^{\rm opt} v_i$.
6. Sample $u_{1:n_{\rm seq}}$ from $p^{\rm opt}$ using Markov chains.
$I_F^{\rm app}(\gamma)$ is linear in the decision variables → the problem is convex!
Example II
Nonlinear state space model:
$x_{t+1} = \theta_1 x_t + \theta_2 \dfrac{x_t}{1 + x_t^2} + u_t + v_t$, $\quad v_t \sim \mathcal{N}(0, 0.1^2)$
$y_t = \frac{1}{2} x_t + \frac{2}{5} x_t^2 + e_t$, $\quad e_t \sim \mathcal{N}(0, 0.1^2)$
where $\theta = [\theta_1 \;\; \theta_2]^\top$, $\theta_0 = [0.7 \;\; 0.6]^\top$.
Input design: $n_{\rm seq} = 5 \cdot 10^3$, $n_m = 2$, $\mathcal{C} = \{-1, 0, 1\}$, and $h(\cdot) = \log\det(\cdot)$.
Example II
Input sequence: [Plot of $u_t \in \{-1, 0, 1\}$ for $t = 0, \ldots, 100$.]
Example II
Results:
Input / $h(\hat{I}_F)$: $\log\det(\hat{I}_F)$
Optimal: 25.34
Binary: 24.75
Uniform: 24.38
Section: Closed-loop application oriented input design
Closed-loop application oriented input design
[Block diagram: external excitation $r_t$ and the feedback $K_{y,t}(y_t - y_d)$ form the input $u_t$; the System, driven by noise $e_t$ through $H$, produces $y_t$; $y_d$ is the setpoint.]
System ($\theta_0 \in \Theta$):
$x_{t+1} = A_{\theta_0} x_t + B_{\theta_0} u_t$
$y_t = C_{\theta_0} x_t + \nu_t$, $\quad \nu_t = H(q; \theta_0) e_t$
$\{e_t\}$: white noise with known distribution.
Feedback: $u_t = r_t + K_{y,t}(y_t - y_d)$.
Closed-loop application oriented input design
Model ($\theta \in \Theta$):
$x_{t+1} = A(\theta) x_t + B(\theta) u_t$
$y_t = C(\theta) x_t + \nu_t$, $\quad \nu_t = H(q; \theta) e_t$
Goal: perform an experiment to obtain $\hat{\theta}_{n_{\rm seq}}$ → design $r_{1:n_{\rm seq}}$!
Requirements:
1. $y_t$, $u_t$ should not be perturbed excessively.
2. $\hat{\theta}_{n_{\rm seq}}$ must satisfy quality constraints.
Closed-loop application oriented input design
Minimize the control objective:
$J = \mathrm{E}\Big\{ \sum_{t=1}^{n_{\rm seq}} \|y_t - y_d\|_Q^2 + \|u_t - u_{t-1}\|_R^2 \Big\}$
Requirement 1 ($y_t$, $u_t$ should not be perturbed excessively) → probabilistic bounds:
$\mathrm{P}\{ \|y_t - y_d\| \leq y_{\max} \} > 1 - \epsilon_y$
$\mathrm{P}\{ \|u_t\| \leq u_{\max} \} > 1 - \epsilon_x$
for $t = 1, \ldots, n_{\rm seq}$.
Closed-loop application oriented input design
Requirement 2: $\hat{\theta}_{n_{\rm seq}}$ must satisfy quality constraints.
Maximum likelihood: as $n_{\rm seq} \to \infty$, we have $\sqrt{n_{\rm seq}}\,(\hat{\theta}_{n_{\rm seq}} - \theta_0) \sim \mathrm{AsN}(0, \{I_F^e\}^{-1})$.
Identification set:
$\Theta_{SI}(\alpha) = \big\{ \theta : (\theta - \theta_0)^\top I_F^e (\theta - \theta_0) \leq \chi_\alpha^2(n_\theta) \big\}$
Closed-loop application oriented input design
Requirement 2 (cont.): quality constraint via the application set
$\Theta(\gamma) = \big\{ \theta : V_{\rm app}(\theta) \leq \tfrac{1}{\gamma} \big\}$
Relaxation: application ellipsoid
$\Theta_{\rm app}(\gamma) := \big\{ \theta : (\theta - \theta_0)^\top \nabla_\theta^2 V_{\rm app}(\theta)\big|_{\theta=\theta_0} (\theta - \theta_0) \leq \tfrac{2}{\gamma} \big\}$
Closed-loop application oriented input design
Requirement 2 (cont.): the quality constraint $\Theta_{SI}(\alpha) \subseteq \Theta_{\rm app}(\gamma)$ is achieved by
$I_F^e \succeq \dfrac{\gamma \chi_\alpha^2(n_\theta)}{2} \nabla_\theta^2 V_{\rm app}(\theta)\big|_{\theta=\theta_0}$
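The inclusion condition is a matrix inequality that can be checked numerically. A sketch with illustrative matrices (the only fact used beyond the slide is $\chi_{0.99}^2(2) \approx 9.21$):

```python
import numpy as np

# Illustrative quantities (not from the talk): an achieved information matrix,
# the Hessian of V_app at theta_0, gamma, and chi^2_{0.99}(2) ~= 9.21.
I_F = np.array([[10.0, 1.0], [1.0, 8.0]])
H = np.array([[2.0, 0.0], [0.0, 1.0]])     # Hessian of V_app at theta_0
gamma, chi2 = 1.0, 9.21

# Theta_SI(alpha) lies inside Theta_app(gamma) iff
# I_F - (gamma*chi2/2) * H is positive semidefinite.
M = I_F - 0.5 * gamma * chi2 * H
ok = bool(np.all(np.linalg.eigvalsh(M) >= 0))
print(ok)  # -> True for these numbers
```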
Closed-loop application oriented input design
Optimization problem:
$\min_{\{r_t\}} \; J = \mathrm{E}\Big\{ \sum_{t=1}^{n_{\rm seq}} \|y_t - y_d\|_Q^2 + \|u_t\|_R^2 \Big\}$
subject to: system constraints,
$\mathrm{P}\{ \|y_t - y_d\| \leq y_{\max} \} > 1 - \epsilon_y$, $\mathrm{P}\{ \|u_t\| \leq u_{\max} \} > 1 - \epsilon_x$,
$I_F \succeq \dfrac{\gamma \chi_\alpha^2(n_\theta)}{2} \nabla_\theta^2 V_{\rm app}(\theta)$
Difficulty: $\mathrm{P}$ (and $I_F^e$) hard to optimize. Solution: use the graph-theory approach!
Closed-loop application oriented input design
Graph theory approach: $r_{1:n_{\rm seq}}$ is a realization from $p(r_{1:n_m})$ with alphabet $\mathcal{C}$.
To design an experiment in $\mathcal{C}^{n_m}$:
1. Compute all the prime cycles of $G_{\mathcal{C}^{n_m}}$.
2. Generate the signals $\{r_t^i\}_{t=0}^{N}$ from the prime cycles of $G_{\mathcal{C}^{n_m}}$, for each $i \in \{1, \ldots, n_V\}$.
Closed-loop application oriented input design
Given $e_{1:N_{\rm sim}}$ and $r_{1:N_{\rm sim}}^{(j)}$, approximate
$J^{(j)} \approx \frac{1}{N_{\rm sim}} \sum_{t=1}^{N_{\rm sim}} \|y_t^{(j)} - y_d\|_Q^2 + \|u_t^{(j)} - u_{t-1}^{(j)}\|_R^2$
$\mathrm{P}_{e_t, r_t^{(j)}}\{ \|u_t^{(j)}\| \leq u_{\max} \}$ → Monte Carlo
$\mathrm{P}_{e_t, r_t^{(j)}}\{ \|y_t^{(j)} - y_d\| \leq y_{\max} \}$ → Monte Carlo
$I_F^{(j)}$ computed as in the previous parts.
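A sketch of the Monte-Carlo estimation of one chance constraint, using an assumed scalar closed loop in the spirit of Example III (all numbers illustrative, not the talk's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte-Carlo check of P{|y_t - y_d| <= y_max for all t} for one candidate
# reference r^(j). Toy scalar closed loop; all numbers are illustrative.
theta1, theta2, k_y = 0.6, 0.9, 0.5
y_d, y_max, n_mc, T = 0.0, 2.0, 500, 100
r = np.tile([0.5, -0.5], T // 2)                 # one candidate reference

ok_runs = 0
for _ in range(n_mc):
    e = 0.1 * rng.standard_normal(T)             # noise realization e_{1:T}
    x, ok = 0.0, True
    for t in range(T):
        y = theta1 * x + e[t]                    # y_t = theta1 * x_t + e_t
        ok = ok and abs(y - y_d) <= y_max
        x = theta2 * x + (r[t] - k_y * y)        # u_t = r_t - k_y * y_t
    ok_runs += ok
prob = ok_runs / n_mc
print(prob)                                      # close to 1 for this mild excitation
```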
Closed-loop application oriented input design
Optimization problem (graph theory):
$\min_{\{\beta_1, \ldots, \beta_{n_V}\}} \; \sum_{j=1}^{n_V} \beta_j J^{(j)}$
subject to: system constraints, constraints on $\{\beta_j\}_{j=1}^{n_V}$,
$\sum_{j=1}^{n_V} \beta_j \, \mathrm{P}_{e_t, r_t^{(j)}}\{ \|u_t^{(j)}\| \leq u_{\max} \} > 1 - \epsilon_x$
$\sum_{j=1}^{n_V} \beta_j \, \mathrm{P}_{e_t, r_t^{(j)}}\{ \|y_t^{(j)} - y_d\| \leq y_{\max} \} > 1 - \epsilon_y$
$\sum_{j=1}^{n_V} \beta_j I_F^{(j)} \succeq \dfrac{\gamma \chi_\alpha^2(n_\theta)}{2\, n_{\rm seq}} \nabla_\theta^2 V_{\rm app}(\theta)$
Closed-loop application oriented input design
Optimal pmf:
$p^{\rm opt} := \sum_{j=1}^{n_V} \beta_j^{\rm opt} p_j$
Example III
Consider the open-loop, SISO state space system
$x_{t+1} = \theta_2^0 x_t + u_t$, $\quad y_t = \theta_1^0 x_t + e_t$, with $[\theta_1^0 \;\; \theta_2^0]^\top = [0.6 \;\; 0.9]^\top$.
Input: $u_t = r_t - k_y y_t$, with $k_y = 0.5$ known.
Goal: estimate $[\theta_1 \;\; \theta_2]^\top$ using indirect identification.
Example III
Design $\{r_t\}_{t=1}^{500}$, $n_m = 2$, $r_t \in \mathcal{C} = \{-0.5, -0.25, 0, 0.25, 0.5\}$.
Performance degradation:
$V_{\rm app}(\theta) = \frac{1}{500} \sum_{t=1}^{500} \|y_t(\theta_0) - y_t(\theta)\|_2^2$
$y_d = 0$, $\epsilon_y = \epsilon_x = 0.07$, $y_{\max} = 2$, $u_{\max} = 1$.
Example III
Reference: [Plots of three realizations of $r_t \in [-0.5, 0.5]$ for $t = 0, \ldots, 100$.]
Example III
Input: [Plots of three realizations of $u_t \in [-1, 1]$ for $t = 0, \ldots, 100$.]
Example III
Output: [Plots of three realizations of $y_t \in [-2, 2]$ for $t = 0, \ldots, 100$.]
Example III
Application and identification ellipsoids (98% confidence level): [Plot of $\theta_2$ versus $\theta_1$ showing $\Theta_{\rm app}(\gamma)$, $\Theta_{SI}^{\rm binary}(\alpha)$, and $\Theta_{SI}^{\rm opt}(\alpha)$.]
Section: Conclusions and future work
Conclusions A new method for input design was introduced. The method can be used for nonlinear systems. The resulting optimization problem is convex even for nonlinear systems.
Future work Reducible Markov chains. Computational complexity. Robust input design. Application oriented input design for MPC.
Thanks for your attention.
Appendix I: Graph theory in input design
Example: elementary cycles for a de Bruijn graph, $\mathcal{C} := \{0, 1\}$, $n_m := 1$. [Graph with nodes $u_t = 0$ and $u_t = 1$.]
One elementary cycle: $z = (0, 1, 0)$.
The corresponding prime cycle for the de Bruijn graph with $\mathcal{C} := \{0, 1\}$, $n_m := 2$: $v_1 = ((0,1), (1,0), (0,1))$.
Appendix II: Graph theory in input design
Example: generation of an input signal from a prime cycle. Consider a de Bruijn graph, $\mathcal{C} := \{0, 1\}$, $n_m := 2$, and
$v_1 = ((0,1), (1,0), (0,1))$.
$\{u_t^1\}_{t=0}^{N}$: take the last element of each node. Finally,
$\{u_t^1\}_{t=0}^{N} = \{1, 0, 1, 0, \ldots, ((-1)^N + 1)/2\}$
Appendix III: Building $A$
For $i \in \mathcal{C}^{n_m}$, define $\mathcal{A}_i := \{ j \in \mathcal{C}^{n_m} : (j, i) \in E \}$ (the set of ancestors of $i$). For each $i \in \mathcal{C}^{n_m}$, let
$A_{ij} = \dfrac{\mathrm{P}\{i\}}{\sum_{k \in \mathcal{A}_i} \mathrm{P}\{k\}}$, if $j \in \mathcal{A}_i$ and $\sum_{k \in \mathcal{A}_i} \mathrm{P}\{k\} \neq 0$;
$A_{ij} = \dfrac{1}{\#\mathcal{A}_i}$, if $j \in \mathcal{A}_i$ and $\sum_{k \in \mathcal{A}_i} \mathrm{P}\{k\} = 0$;
$A_{ij} = 0$, otherwise.
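The Markov-chain sampling of step 6 can be sketched as follows: derive transition probabilities from an assumed stationary pmf on $\mathcal{C}^2$ (illustrative numbers, not the talk's), run the chain, and check that the realized pair frequencies match the design.

```python
import numpy as np

rng = np.random.default_rng(2)
C = [-1, 1]

# Assumed stationary pmf p(u_{t-1}, u_t) on C^2, e.g. the mixture from step 5.
# Illustrative numbers; both marginals are 0.5/0.5, so p is shift invariant.
p = {(-1, -1): 0.1, (-1, 1): 0.4, (1, -1): 0.4, (1, 1): 0.1}

def transition(prev):
    """One row of A: P(u_t = c | u_{t-1} = prev) from the joint pmf."""
    probs = np.array([p[(prev, c)] for c in C])
    return probs / probs.sum()

# Run the Markov chain to realize u_{1:n_seq} with the designed statistics.
n_seq = 20_000
u = [int(rng.choice(C))]
for _ in range(n_seq - 1):
    u.append(int(rng.choice(C, p=transition(u[-1]))))

# Empirical pair frequencies approach the designed pmf.
pairs = list(zip(u[:-1], u[1:]))
freq = sum(1 for q in pairs if q == (-1, 1)) / len(pairs)
print(freq)  # close to 0.4 by design
```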
Appendix IV: Example, nonlinear case
[Block diagram: $u_t$ and $e_t$ enter $G_0$, producing $y_t$.]
$G_0(u_t) = G_1(q, \theta)\, u_t + G_2(q, \theta)\, u_t^2$
where $G_1(q, \theta) = \theta_1 + \theta_2 q^{-1}$ and $G_2(q, \theta) = \theta_3 + \theta_4 q^{-1}$.
$e_t$: Gaussian white noise, zero mean, variance $\lambda_e = 1$.
Appendix IV: Example, nonlinear case (cont.)
For the model $G_0(u_t) = G_1(q, \theta)\, u_t + G_2(q, \theta)\, u_t^2$, we consider $h(\cdot) = \det(\cdot)$, $c_{\rm seq} = 3$, $n_m = 2$, $\mathcal{C} := \{-1, 0, 1\}$, and $N = 5 \cdot 10^3$.
Appendix IV: Example, nonlinear case (cont.)
Stationary probabilities: [Plot of the pmf over $(u_{t-1}, u_t) \in \{-1, 0, 1\}^2$.]
$\det(I_F^{\rm app}) = 0.1796$. Results consistent with previous contributions (Larsson et al. 2010).
Appendix V: Particle methods to estimate $\mathcal{S}_m(\theta_0)$
Auxiliary particle filter: $\{x_{1:t}^{(i)}, w_t^{(i)}\}_{i=1}^{N}$ is the particle system, and
$\hat{p}_\theta(x_{1:t} \mid y_{1:t}) := \sum_{i=1}^{N} \dfrac{w_t^{(i)}}{\sum_{k=1}^{N} w_t^{(k)}}\, \delta(x_{1:t} - x_{1:t}^{(i)})$
Two-step procedure to compute $\{x_{1:t}^{(i)}, w_t^{(i)}\}_{i=1}^{N}$:
1. Sampling/propagation.
2. Weighting.
Appendix V: Particle methods to estimate $\mathcal{S}_m(\theta_0)$
Step 1, sampling/propagation:
$\{a_t^{(i)}, x_t^{(i)}\} \sim \dfrac{w_{t-1}^{a_t}}{\sum_{k=1}^{N} w_{t-1}^{(k)}}\, R_{\theta,t}(x_t \mid x_{t-1}^{a_t}, u_{t-1})$
Appendix V: Particle methods to estimate $\mathcal{S}_m(\theta_0)$
Step 2, weighting:
$w_t^{(i)} := \dfrac{g_\theta(y_t \mid x_t^{(i)}, u_t)\, f_\theta(x_t \mid x_{t-1}^{(i)}, u_{t-1})}{R_{\theta,t}(x_t \mid x_{t-1}^{(i)}, u_{t-1})}$
Appendix V: Particle methods to estimate $\mathcal{S}_m(\theta_0)$
Difficulty: the auxiliary particle filter suffers from particle degeneracy.
Solution: use a fixed-lag smoother. Main idea of the FL smoother:
$p_\theta(x_t \mid y_{1:T}, u_{1:T}) \approx p_\theta(x_t \mid y_{1:\kappa_t}, u_{1:\kappa_t})$
for $\kappa_t = \min(t + \Delta, T)$, for some fixed lag $\Delta > 0$.
Appendix VI: Equivalence between time and frequency domain
Design variable: $\Phi_u(\omega)$ (frequency); $p(u_{1:n_m})$ (time).
Restrictions: $\Phi_u(\omega) \geq 0$ and $\frac{1}{2\pi}\int_{-\pi}^{\pi} \Phi_u(\omega)\, d\omega \leq 1$ (frequency); $p \geq 0$, $\sum_{u_{1:n_m}} p(u_{1:n_m}) = 1$, $p$ stationary (time).
Information matrix: $I_F(\Phi_u)$ (frequency); $I_F(p)$ (time).