Quiescent Steady State (DC) Analysis: The Newton-Raphson Method
J. Roychowdhury, University of California at Berkeley
Solving the System's DAEs

$$\frac{d}{dt}\vec q(\vec x(t)) + \vec f(\vec x(t)) + \vec b(t) = \vec 0$$

DAEs: many types of solutions are useful
- DC steady state: no time variations
- transient: circuit waveforms changing with time
- periodic steady state: changes periodic with time
  - linear(ized): all sinusoidal waveforms: AC analysis
  - nonlinear steady state: shooting, harmonic balance
- noise analysis: random/stochastic waveforms
- sensitivity analysis: effects of changes in circuit parameters
QSS: Quiescent Steady State (DC) Analysis

$$\frac{d}{dt}\vec q(\vec x(t)) + \vec f(\vec x(t)) + \vec b(t) = \vec 0$$

Assumption: nothing changes with time
- $\vec x$, $\vec b$ are constant vectors; the $d/dt$ term vanishes:

$$\underbrace{\vec f(\vec x) + \vec b}_{\vec g(\vec x)} = \vec 0$$

Why do QSS?
- quiescent operation: first step in verifying functionality
- stepping stone to other analyses: AC, transient, noise, ...

Nonlinear system of equations
- the problem: solving them numerically
- most common/useful technique: the Newton-Raphson method
The Newton-Raphson Method

Iterative numerical algorithm to solve $\vec g(\vec x) = \vec 0$:
1. start with some guess for the solution
2. repeat:
   a. check if the current guess solves the equation
      i. if yes: done!
      ii. if no: do something to update/improve the guess

Newton-Raphson algorithm (a minimal code sketch follows below):
- start with an initial guess $\vec x^0$; $i = 0$
- repeat until convergence (or a maximum number of iterations):
  - compute the Jacobian matrix: $J_i = \frac{d\vec g}{d\vec x}(\vec x^i)$
  - solve for the update $\delta\vec x$: $J_i\,\delta\vec x = -\vec g(\vec x^i)$
  - update the guess: $\vec x^{i+1} = \vec x^i + \delta\vec x$; $i \leftarrow i + 1$
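A minimal runnable sketch of this loop (my illustration, not from the slides; names like `newton_raphson` and the default tolerances are hypothetical), using a dense linear solve for the update step:

```python
import numpy as np

def newton_raphson(g, jacobian, x0, abstol=1e-9, reltol=1e-3, maxiter=100):
    """Solve g(x) = 0 by Newton-Raphson; names/defaults are illustrative."""
    x = np.asarray(x0, dtype=float)
    for i in range(maxiter):
        J = jacobian(x)                   # J_i = dg/dx evaluated at x_i
        dx = np.linalg.solve(J, -g(x))    # solve J_i * deltax = -g(x_i)
        x = x + dx                        # x_{i+1} = x_i + deltax
        if np.all(np.abs(dx) <= abstol + reltol * np.abs(x)):
            return x                      # converged (per-entry deltax test)
    raise RuntimeError("no convergence in %d iterations" % maxiter)

# Usage on a scalar QSS example (made-up values): voltage source E in series
# with resistor R driving a diode to ground, g(v) = (v-E)/R + Is*(exp(v/Vt)-1)
E, R, Is, Vt = 2.0, 1e3, 1e-12, 0.025
g  = lambda v: np.array([(v[0] - E) / R + Is * (np.exp(v[0] / Vt) - 1.0)])
dg = lambda v: np.array([[1.0 / R + (Is / Vt) * np.exp(v[0] / Vt)]])
v_dc = newton_raphson(g, dg, [0.6])       # starting guess near diode turn-on
```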
Newton-Raphson Graphically

[Figure: successive tangent-line iterations on a scalar $g(x)$; each tangent's zero crossing gives the next guess]
- scalar case shown above
- key property: generalizes to the vector case
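In symbols, the tangent-line construction in the picture is the scalar NR update (a standard identity, spelled out here for reference): the tangent to $g$ at $x^i$ is set to zero to obtain the next guess,

$$0 = g(x^i) + g'(x^i)\,(x^{i+1} - x^i) \;\Longrightarrow\; x^{i+1} = x^i - \frac{g(x^i)}{g'(x^i)}$$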
Newton-Raphson (contd.)

Does it always work? No. Conditions for NR to converge reliably:
- $g(x)$ must be smooth: continuous, differentiable
- the starting guess must be close enough to the solution
- practical NR: needs application-specific heuristics (a small divergence demo follows below)
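A tiny demonstration of the "close enough" condition (my example, not from the slides): NR on $g(x) = \arctan(x)$ diverges from a starting guess of 2, even though $g$ is perfectly smooth:

```python
import numpy as np

# NR update for g(x) = atan(x):  x <- x - atan(x) * (1 + x^2)
x = 2.0                     # starting guess too far from the root x* = 0
for i in range(5):
    x = x - np.arctan(x) * (1.0 + x**2)
    print(f"iteration {i+1}: x = {x:.4g}")   # |x| grows: the iteration diverges
```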
NR: Convergence Rate

Key property of NR: quadratic convergence
- suppose $x^*$ is the exact solution of $g(x) = 0$
- at the $i$-th NR iteration, define the error $\epsilon^i = x^i - x^*$
- meaning of quadratic convergence: $|\epsilon^{i+1}| < c\,|\epsilon^i|^2$ (where $c$ is a constant)

NR's quadratic convergence properties: if
- $g(x)$ is smooth (at least continuous 1st and 2nd derivatives), and
- $g'(x^*) \neq 0$, and
- $\|x^i - x^*\|$ is small enough,
then NR features quadratic convergence.
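Where the squared error comes from (a standard one-step Taylor argument, not spelled out on the slide): expand $g(x^*) = 0$ about the current iterate $x^i$,

$$0 = g(x^*) = g(x^i) + g'(x^i)\,(x^* - x^i) + \tfrac{1}{2}\,g''(\xi)\,(x^* - x^i)^2,$$

then divide by $g'(x^i)$ and substitute the NR update $x^{i+1} = x^i - g(x^i)/g'(x^i)$ to obtain

$$\epsilon^{i+1} = \frac{g''(\xi)}{2\,g'(x^i)}\,\bigl(\epsilon^i\bigr)^2,$$

so near the solution $c \approx \left| g''(x^*)/(2\,g'(x^*)) \right|$, which is finite precisely because $g'(x^*) \neq 0$.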
Convergence Rate in Digits of Accuracy

[Figure: correct digits vs. iteration number: quadratic convergence roughly doubles the digits each iteration; linear convergence adds a fixed number of digits each iteration]
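A quick numerical illustration of the doubling (my example, not from the slides), using scalar NR on $g(x) = x^2 - 2$:

```python
import numpy as np

x, xstar = 1.0, np.sqrt(2.0)
for i in range(5):
    x = x - (x**2 - 2.0) / (2.0 * x)         # scalar NR update
    err = max(abs(x - xstar), 1e-16)         # floor to avoid log10(0)
    print(f"iteration {i+1}: digits of accuracy ~ {-np.log10(err):.1f}")
# digits go roughly 1 -> 2.6 -> 5.7 -> 11.8 -> machine precision: doubling
```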
NR: Convergence Strategies

reltol-abstol criterion on $\delta\vec x$:
- stop if norm(deltax) <= tolerance, with tolerance = abstol + reltol*|x|
- reltol ~ 1e-3 to 1e-6; abstol ~ 1e-9 to 1e-12
- better: apply the test to individual vector entries (and AND the results); see the sketch after this list
  - organize x into variable groups: e.g., voltages, currents, ...
  - (scale the DAE equations/unknowns first)
- more sophisticated criteria are possible
  - e.g., use the sequence of x values to estimate the convergence rate

residual convergence criterion:
- stop if $\|\vec g(\vec x)\| < \epsilon_{\text{residual}}$

combinations of the deltax and residual criteria are used; ultimately: heuristics, tuned to the application
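A minimal sketch of the per-entry reltol-abstol test combined with the residual criterion (the name `converged` and the default tolerances are mine, chosen from the ranges above):

```python
import numpy as np

def converged(dx, x, g_of_x, reltol=1e-3, abstol=1e-9, eps_residual=1e-9):
    # in practice, voltages and currents would get separate abstols
    # (variable groups), after scaling the DAE equations/unknowns
    tol = abstol + reltol * np.abs(x)           # per-entry tolerance
    deltax_ok = np.all(np.abs(dx) <= tol)       # AND across all entries
    residual_ok = np.linalg.norm(g_of_x) < eps_residual
    return deltax_ok and residual_ok            # combine both criteria
```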
Newton-Raphson Update Step

$J\,\delta\vec x = -\vec g(\vec x)$: need to solve a linear matrix equation (an "Ax = b" problem)
- $J = \frac{d\vec g}{d\vec x}$: the Jacobian matrix

Derivatives of vector functions: if

$$\vec x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},\qquad
\vec g(\vec x) = \begin{bmatrix} g_1(x_1,\ldots,x_n) \\ \vdots \\ g_n(x_1,\ldots,x_n) \end{bmatrix},$$

then

$$\frac{d\vec g}{d\vec x} \;\triangleq\;
\begin{bmatrix}
\frac{dg_1}{dx_1} & \frac{dg_1}{dx_2} & \cdots & \frac{dg_1}{dx_n}\\
\frac{dg_2}{dx_1} & \frac{dg_2}{dx_2} & \cdots & \frac{dg_2}{dx_n}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{dg_n}{dx_1} & \frac{dg_n}{dx_2} & \cdots & \frac{dg_n}{dx_n}
\end{bmatrix}$$

(a finite-difference sketch of forming this matrix follows below)
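For reference, a finite-difference sketch of forming $d\vec g/d\vec x$ column by column (my illustration; production circuit simulators instead assemble the Jacobian analytically from device models, as on the next slide):

```python
import numpy as np

def jacobian_fd(g, x, h=1e-7):
    """Approximate dg/dx one column at a time; h is an illustrative step."""
    x = np.asarray(x, dtype=float)
    g0 = np.asarray(g(x))
    J = np.zeros((g0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(g(xp)) - g0) / h   # column j: dg/dx_j
    return J
```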
DAE Jacobian Matrices

Circuit DAE: $\frac{d}{dt}\vec q(\vec x(t)) + \vec f(\vec x(t)) + \vec b(t) = \vec 0$

[Figure: example circuit with nodes 1 and 2, carrying source branch current $i_E$ and inductor current $i_L$]

$$\vec x(t) = \begin{bmatrix} e_1(t)\\ e_2(t)\\ i_L(t)\\ i_E(t)\end{bmatrix},\quad
\vec q(\vec x) = \begin{bmatrix} 0\\ C\,e_2\\ 0\\ L\,i_L\end{bmatrix},\quad
\vec f(\vec x) = \begin{bmatrix} \text{diode}(-e_1;\, I_S, V_t) - i_E\\ i_E + i_L + \frac{e_2}{R}\\ e_2 - e_1\\ -e_2\end{bmatrix},\quad
\vec b(t) = \begin{bmatrix} 0\\ 0\\ -E(t)\\ 0\end{bmatrix}$$

$$J_q \triangleq \frac{d\vec q}{d\vec x} =
\begin{bmatrix} 0&0&0&0\\ 0&C&0&0\\ 0&0&0&0\\ 0&0&L&0\end{bmatrix},\qquad
J_f \triangleq \frac{d\vec f}{d\vec x} =
\begin{bmatrix} -\frac{d\,\text{diode}}{dv}(-e_1)&0&0&-1\\ 0&\frac{1}{R}&1&1\\ -1&1&0&0\\ 0&-1&0&0\end{bmatrix}$$
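A sketch of assembling these two Jacobians in code (my translation of the matrices above; the sign conventions follow the $\vec f$ written there, and `ddiode_dv` is a hypothetical device-model derivative):

```python
import numpy as np

def ddiode_dv(v, Is=1e-12, Vt=0.025):
    return (Is / Vt) * np.exp(v / Vt)       # d/dv of Is*(exp(v/Vt) - 1)

def circuit_jacobians(x, R, C, L):
    e1, e2, iL, iE = x
    Jq = np.zeros((4, 4))
    Jq[1, 1] = C                            # d(C*e2)/de2
    Jq[3, 2] = L                            # d(L*iL)/diL
    Jf = np.array([
        [-ddiode_dv(-e1), 0.0,     0.0, -1.0],  # KCL at node 1
        [0.0,             1.0 / R, 1.0,  1.0],  # KCL at node 2
        [-1.0,            1.0,     0.0,  0.0],  # source branch equation
        [0.0,            -1.0,     0.0,  0.0],  # inductor branch equation
    ])
    return Jq, Jf
```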
Newton-Raphson: Computation

$J\,\delta\vec x = -\vec g(\vec x)$: need to solve a linear matrix equation (an "Ax = b" problem)

Ax = b: where much of the computation lies
- large circuits (many nodes): large DAE systems, large Jacobian matrices
- in general (for arbitrary matrices of size n), solving Ax = b requires:
  - $O(n^2)$ memory
  - $O(n^3)$ computation (using, e.g., Gaussian elimination)
- but for most circuit Jacobian matrices:
  - $O(n)$ memory, $\sim O(n^{1.4})$ computation
  - because circuit Jacobians are typically sparse (a sparse-solve sketch follows below)
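A small sketch of exploiting this sparsity (my illustration): store the Jacobian in compressed sparse form and use a sparse direct solver instead of dense Gaussian elimination. The tridiagonal matrix here is just a stand-in for a sparse circuit Jacobian with roughly 3n non-zeros:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000
J = sp.diags([-np.ones(n - 1), 3.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")   # ~3n stored non-zeros
rhs = np.ones(n)                                 # stands in for -g(x_i)
dx = spla.spsolve(J, rhs)                        # sparse LU: far below O(n^3)
```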
Dense vs Sparse Matrices

- sparse circuit Jacobians: typically $3N$ to $4N$ non-zeros
- compare against $N^2$ for a dense matrix: e.g., for $N = 10^5$ unknowns, a few hundred thousand non-zeros versus $10^{10}$ dense entries