Passivity Preserving Model Reduction and Selection of Spectral Zeros
MARYAM SAADVANDI
Master of Science Thesis
Stockholm, Sweden 2008
Master's Thesis in Numerical Analysis (30 ECTS credits)
at the Scientific Computing International Master Program
Royal Institute of Technology, 2008
Supervisor at CSC was Axel Ruhe
Examiner was Michael Hanke
TRITA-CSC-E 2008:15
ISRN-KTH/CSC/E--08/15--SE
ISSN
Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL:
Abstract
In this work we study two projection methods, developed by Sorensen and Antoulas, for model order reduction. The algorithms are designed for passivity preserving model reduction of linear time invariant systems. They are based upon interpolation at selected spectral zeros of the original transfer function, producing a reduced transfer function that has the selected roots as its spectral zeros. We exhibit a numerical problem which may occur during application of the methods and discuss ways to deal with it. We also discuss which spectral zeros should be chosen to obtain a better approximation.
Referat
Computing passive reduced models: selection of spectral zeros
Summary: We study two projection methods for model reduction developed by Antoulas and Sorensen. They are intended for passivity preserving model reduction of linear time invariant systems. They are based on interpolation at selected spectral zeros of the original transfer function, so that the reduced transfer function has the selected roots as its spectral zeros. We exhibit a numerical problem that arises in this computation. We also discuss which spectral zeros should be chosen to obtain the best approximation.
Acknowledgments
This research has been divided between Kungliga Tekniska Högskolan (KTH) in Stockholm, Sweden and NXP Semiconductors/Corporate I&T/DTF/DM/PDM in Eindhoven, Netherlands. The study was financed by NXP Semiconductors, which I gratefully acknowledge. Professor Axel Ruhe was my supervisor at KTH; Dr. Jan ter Maten and Dr. Joost Rommes were my industrial supervisors. I thank them for their support and contributions. I would like to express my gratitude to Dr. Lennart Edsberg, the coordinator of the scientific computing program at KTH, and to thank my colleagues and friends at NXP. Last but not least, I would like to thank especially my husband, Kasra Mohaghegh, and my parents for their permanent love and support.
Contents
1 Introduction
2 Circuits
   2.1 Introduction
   2.2 Electric Circuits
   2.3 Kirchhoff's Laws
      Kirchhoff's Current Law (KCL)
      Kirchhoff's Voltage Law (KVL)
   2.4 Branch Constitutive Relations (BCR)
   2.5 Circuit Components
      Resistive Components
      Reactive Components
   2.6 Controlled Components
3 Circuit Equations
   3.1 Introduction
   3.2 Incidence Matrix
   3.3 Nodal Analysis (NA)
   3.4 Modified Nodal Analysis (MNA)
4 Analysis of Circuit Equations
   4.1 Introduction
   4.2 Direct Current Analysis (DC)
   4.3 Small Signal (Alternating Current) Analysis (AC)
   4.4 Transient Analysis (TR)
   4.5 Pole-zero Analysis (PZ)
      System Poles and Zeros
      Transfer Function
   4.6 Backward Differential Formula Method (BDF)
      Newton-Raphson Method
5 Differential Algebraic Equation (DAE)
   5.1 Introduction
   5.2 Theory of Differential Algebraic Equations
      Initial Value Problem and Solvability
      Stability
      Index of DAEs
      Semi-Explicit DAE
6 Dynamical Systems and Passivity Preserving MOR
   6.1 Introduction
   6.2 Dynamical System
   6.3 Model Reduction via Projection Matrices
   6.4 Passive Systems
   6.5 Spectral Zeros
      Spectral Zeros and Generalized Eigenvalue Problem
   6.6 Passivity Preserving Model Reduction
      Projection Method
      Model Reduction by Projection (Sorensen)
      Model Reduction by Projection (Antoulas)
7 Numerical Results
   7.1 Introduction
   7.2 Choosing the Spectral Zeros
      Preserving Real Spectral Zeros
      Common Poles and Spectral Zeros
      Effect of Real Spectral Zeros
   7.3 Reducing the Descriptor System
8 Conclusions
Bibliography
Chapter 1
Introduction
This thesis is concerned with linear time invariant (LTI) systems as they arise in circuit simulation. The tendency to analyze and design systems of ever increasing complexity is becoming more and more a dominating factor in the progress of chip design. Along with this tendency, the complexity of the mathematical models increases both in structure and dimension. Complex models are more difficult to analyze, and consequently it is also more difficult to develop control algorithms. Therefore model order reduction is of utmost importance [1]. One of the most important targets is that the reduction procedure preserves the stability and passivity of the original system [2, 25]. To reach this goal we need some background on circuits. In chapter 2 we introduce electric circuits and Kirchhoff's laws; circuit components and some of their features are explained as well [8, 16]. Nodal analysis, a way to set up the circuit equations, is studied in chapter 3 [22, 27]. In chapter 4 we discuss the analysis of circuit equations and their uses. Poles and zeros are defined, and the transfer function, one of the central concepts of this thesis, is introduced in this chapter as well. For time integration, the backward differentiation formula method is explained [22]. Most circuit equations are differential algebraic equations (DAEs); DAEs and their stability are studied in chapter 5 [3]. The main contribution of this work starts in chapter 6. In this chapter the LTI systems are introduced. We reduce the system by the projection methods presented by Antoulas and Sorensen; for reduction by projection, the projection matrices are constructed via interpolation at the spectral zeros. The concept of spectral zeros and their computation is explained in detail in this chapter [2, 15, 19, 25]. In chapter 7 we study some examples and apply the projection methods.
Finally, in chapter 8 we conclude which spectral zeros affect the low and high frequency behavior of the reduced model.
Chapter 2
Circuits
2.1 Introduction
In this chapter a brief introduction to electrical circuits is given. Kirchhoff's voltage law, Kirchhoff's current law and various circuit components are discussed. We also define the circuit components and show their symbols and related equations.
2.2 Electric Circuits
In this report electric circuits are modeled as a graph with nodes and branches; the branches connect the nodes to each other. i_k denotes the current through branch k and v_j denotes the voltage of node j. The electric properties of some branches, such as a voltage source, require a definite direction, due to the difference between their positive and negative end nodes. Figure 2.1 shows an RCL circuit with 3 nodes and 4 branches. There are two kinds of equations that describe the circuit:
[Figure 2.1. An RCL circuit with 3 nodes and 4 branches: a voltage source e, a resistor R between nodes 1 and 2, and a capacitor C and inductor L at node 2.]
Equations that reflect the topology of the circuit.
Equations that reflect the properties of the circuit elements.
First we describe the equations that reflect the topology of the circuit.
2.3 Kirchhoff's Laws
The equations that reflect the topology of the circuit do not depend on the type of the branches, but describe the way in which the branches are connected. These equations are given by Kirchhoff's laws:
Kirchhoff's Current Law (KCL): charge is not stored in any node, and the algebraic sum of the currents at each node is zero.
Kirchhoff's Voltage Law (KVL), or Kirchhoff's loop rule: this rule is a consequence of the electrostatic field being conservative. It states that the total voltage around a closed loop must be zero.
Kirchhoff's Current Law (KCL)
The sum of all incoming currents is equal to the sum of all outgoing currents:
    sum_{k in node} i_k = 0.    (2.1)
Kirchhoff's Voltage Law (KVL)
The sum of all branch voltages around each closed loop is equal to zero:
    sum_{k in loop} v_k = 0.    (2.2)
Note that v_k is the potential difference between the two nodes the branch connects:
    v_k = v_k^+ - v_k^-.    (2.3)
2.4 Branch Constitutive Relations (BCR)
The branch constitutive relations (BCR) describe the electrical features of the branches; together with the two Kirchhoff laws they complete the description of the circuit. The branch equations
can contain branch variables, such as the current through a branch, and expressions containing branch variables. In general a branch equation only contains branch variables of the branch concerned. However, it is possible that branch variables associated with other branches are included. These branch variables, called controlling variables (as in a voltage-controlled current source), make the branch a controlled branch.
2.5 Circuit Components
In this section circuit components and the BCRs of circuit elements are introduced. There are two types of circuit components:
Resistive components
Reactive components
and each type has specific properties.
Resistive Components
Resistive components are defined by the algebraic branch equation
    x_i = f(t, x),    (2.4)
where x_i in R is the circuit variable concerned, x in R^n is a vector containing all circuit variables and f : R x R^n -> R is a function depending on one or more circuit variables.
Resistor
In figure 2.2 a resistor is shown.
[Figure 2.2. A resistor.]
Resistors are characterized by a relation between their current and voltage given by Ohm's law,
    V = IR.    (2.5)
The BCR of a resistor in the linear case is
    i_R = v_R / R.    (2.6)
In general the BCR is given by
    i_R = i(v_R),    (2.7)
covering linear and non-linear resistors, where v_R is the potential difference between v^+ and v^- at the two nodes connected by the resistor. Because for linear resistors the currents are explicitly known in terms of the voltage, they are called current-defined and voltage-controlled.
Independent Current Source
The symbol of the current source is shown in figure 2.3.
[Figure 2.3. An independent current source.]
In circuit theory, an ideal current source is a circuit element where the current through it is independent of the voltage across it. If the current through an ideal current source can be specified independently of any other variable in the circuit, it is called an independent current source. Conversely, if the current through an ideal current source is determined by some other voltage or current in the circuit, it is called a dependent or controlled current source. The BCR of an independent current source is
    i_I = I(t),    v_I = any value,
where v_I is implicitly determined by the system of equations.
Independent Voltage Source
In circuit theory, an ideal voltage source is a circuit element where the voltage across it is independent of the current through it. However, it may be a function of time. If the voltage across an ideal voltage source can be specified independently of any other variable in the circuit, it is called an independent voltage source. The symbol of the voltage source is shown in figure 2.4. An independent voltage source is given by the BCR
    v_V = V(t),    i_V = any value,
where i_V is implicitly defined by the system.
[Figure 2.4. An independent voltage source.]
Reactive Components
The reactive components are determined by the differential equation
    x_i = d/dt f(t, x),    (2.8)
where x contains the unknowns, such as node voltages or the current through the component. Note that the notation
    x_dot = dx/dt    (2.9)
can be used.
Capacitor
The capacitor's capacitance C is a measure of the amount of charge q stored on each plate for a given potential difference, or voltage, v between the plates:
    C = q / v,    (2.10)
or q = Cv. The capacitor is given by the symbol in figure 2.5.
[Figure 2.5. A capacitor.]
The current i through the capacitor is the rate at which the charge q is forced through the capacitor, d/dt q_C. The charge-voltage relationship for a capacitor (which may be nonlinear) is given by:
    q_C = q(v_C).
Therefore the BCR of a linear capacitor, with constant capacitance C, is
    i_C = d/dt q_C = C dv_C/dt,
or i_C = q_dot_C = C v_dot_C.
Inductor
An inductor is a passive electrical device employed in electrical circuits for its property of inductance. While a capacitor opposes changes in voltage, an inductor opposes changes in current. An inductor is characterized by a relationship between its current i_L and the magnetic flux Phi_L:
[Figure 2.6. An inductor.]
    Phi_L = Phi(i_L).
The magnetic flux is related to the voltage across the inductor by
    v_L = Phi_dot_L.
Since Phi_L = L i_L, it follows that
    d/dt Phi_L = L di_L/dt.
Therefore the BCR for the inductor, with inductance L, becomes
    v_L = L di_L/dt.    (2.11)
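The linear reactive BCRs can be checked numerically. The sketch below (with an illustrative test voltage and capacitance that are assumptions, not values from the text) compares i_C = C dv_C/dt against a finite difference of the stored charge q = C v:

```python
import math

# Finite-difference check of the linear capacitor BCR i_C = C * dv/dt,
# with an illustrative test voltage v(t) = sin(t) and C = 2 (assumed values).
C, h, t = 2.0, 1e-6, 0.3
v = math.sin
# central difference of the charge q(t) = C * v(t)
i_C = (C * v(t + h) - C * v(t - h)) / (2 * h)
print(abs(i_C - C * math.cos(t)) < 1e-6)   # agrees with C * dv/dt
```

The same finite-difference test applies verbatim to the inductor relation v_L = L di_L/dt with the roles of current and voltage exchanged.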
2.6 Controlled Components
If the current through an ideal current source is determined by some other voltage or current in a circuit, it is called a dependent or controlled current source. Also, if the voltage across an ideal voltage source is determined by some other voltage or current in a circuit, it is called a dependent or controlled voltage source.
Chapter 3
Circuit Equations
3.1 Introduction
In this chapter we study the relations between the incidence matrix and Kirchhoff's laws. Nodal analysis for a test circuit is introduced. The chapter ends with modified nodal analysis (MNA) for a test example.
3.2 Incidence Matrix
The incidence matrix A in R^{n x b} (n corresponds to the number of nodes and b to the number of branches) is defined by
    a_ij =  1  if branch j is incident at node i and the current direction points toward node i,
            0  if branch j is not incident at node i,
           -1  if branch j is incident at node i and the current direction points away from node i.
The circuit considered here is a directed graph and each branch connects two distinct nodes. Every column of the incidence matrix A has exactly two nonzero elements, a 1 and a -1 (the rest are zeros). One important property of the incidence matrix is that it is time independent. Figure 3.1 shows the currents of the circuit of figure 2.1 and their directions.
[Figure 3.1. Current directions in the RCL circuit.]
With the branches ordered (e, R, C, L) and the rows ordered (node 0, node 1, node 2), the incidence matrix of figure 3.1 becomes
            e   R   C   L
    n0  [   1   0   1  -1 ]
A = n1  [  -1   1   0   0 ]
    n2  [   0  -1  -1   1 ]
(the signs depend on the current directions chosen in the figure). For a circuit with n nodes and b branches the incidence matrix has rank(A) = n - 1. The vectors v_n in R^3 and i_b in R^4 contain the nodal voltages and branch currents, respectively, arranged in the same order as the rows and columns of A. Simple algebra shows that at any time
    A i_b = 0,    (3.1)
which is KCL (2.1), and, according to KVL (2.2),
    A^T v_n = v_b    (3.2)
at any time, where v_b in R^4 contains the branch voltages. The formulations (3.1) and (3.2) are re-formulations of Kirchhoff's laws.
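The defining properties of the incidence matrix can be verified numerically. In the sketch below the matrix encodes one assumed choice of current directions for the 3-node, 4-branch example; the essential facts hold for any such choice: every column sums to zero (which is why A i_b = 0 expresses KCL) and rank(A) = n - 1.

```python
import numpy as np

# Incidence matrix of the RCL example (rows: nodes 0, 1, 2; columns:
# branches e, R, C, L).  The current directions are an assumed choice.
A = np.array([[ 1,  0,  1, -1],
              [-1,  1,  0,  0],
              [ 0, -1, -1,  1]])

# Each column holds exactly one +1 and one -1, so the columns sum to zero
# and the rows are linearly dependent: rank(A) = n - 1.
print(A.sum(axis=0).tolist())        # [0, 0, 0, 0]
print(int(np.linalg.matrix_rank(A))) # 2
```

Deleting any one row (grounding a node) restores full row rank, which is exactly the grounding step used in nodal analysis below.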
3.3 Nodal Analysis (NA)
According to Kirchhoff's laws, for each node at least two equations need to be written: first the current through the component, i_component, and second the voltage across the component, v_component = v_n1 - v_n2. The current through a component can be non-linear and depends on the electric properties. Writing these equations using the branch constitutive relations (BCR) together with Kirchhoff's laws is called nodal analysis. The equation for the k-th branch is
    i_k = d/dt q(t, v_b) + j(t, v_b),    (3.3)
where i_k is the current through the k-th branch, v_b in R^b contains the branch voltages, q : R x R^b -> R is a function that represents the reactive components and j : R x R^b -> R is a function that represents the resistive components. In matrix-vector form, for all currents, the nodal analysis equation reads
    i_b = d/dt q(t, v_b) + j(t, v_b),    (3.4)
where q, j : R x R^b -> R^b. The system (3.4) cannot be solved as it stands because it contains 2b unknowns (b unknowns belonging to i_b and b unknowns relating to v_b) but only b equations. First left-multiply system (3.4) by the incidence matrix A:
    A i_b = d/dt A q(t, v_b) + A j(t, v_b).
The system of equations obtained from KCL (3.1) and KVL (3.2) becomes
    0 = d/dt A q(t, A^T v_n) + A j(t, A^T v_n).    (3.5)
According to section 3.2, rank(A) = n - 1, so the rows of A are not linearly independent. To solve the system, one of the nodes should be chosen as the ground node. The row of A related to the ground node is omitted, and the ground node itself is omitted from the vector of unknowns v_n. Hence v_n is reduced to v_n_hat, and the system has n - 1 equations and n - 1 unknowns:
    d/dt A_hat q(t, A_hat^T v_n_hat + v_k A^T e_k) + A_hat j(t, A_hat^T v_n_hat + v_k A^T e_k) = 0,    (3.6)
where e_k is the k-th unit vector in R^n. If node k is the ground node then v_k = 0, so the term v_k A^T e_k vanishes:
    A_hat ( d/dt q(t, A_hat^T v_n_hat) + j(t, A_hat^T v_n_hat) ) = 0.
By defining
    q_hat(t, x) = A_hat q(t, A_hat^T v_n_hat),    j_hat(t, x) = A_hat j(t, A_hat^T v_n_hat),
we have
    d/dt q_hat(t, x) + j_hat(t, x) = 0.    (3.7)
3.4 Modified Nodal Analysis (MNA)
Besides the nodal voltages, other unknowns exist that must be found. To this end, apply KCL and write the equations for each node as before: A i_b = 0. Then replace the currents i_k of the voltage-controlled components by their BCR, substituting the BCR equations of the voltage-controlled components into the KCL equations. Treat the currents i_k of the current-controlled components as unknowns in addition to the nodal voltages. Finally, add the voltage-current relations for all current-controlled components, which define the i_k implicitly. This procedure is called modified nodal analysis (MNA).
We apply MNA to an example, step by step. Consider the circuit in figure 3.1 with its incidence matrix A and the current vector
    i_b = ( i_e  i_R  i_C  i_L )^T.
Then A i_b = 0 gives the following equations:
    i_e + i_C - i_L = 0
    -i_e + i_R = 0    (3.8)
    -i_R - i_C + i_L = 0,
which are exactly the KCL equations for all nodes. Now replace the currents by their BCRs:
    i_e + C d/dt (v_0 - v_2) - i_L = 0
    -i_e + (v_1 - v_2)/R = 0    (3.9)
    (v_2 - v_1)/R + C d/dt (v_2 - v_0) + i_L = 0.
The next step is to add the voltage-current relations:
    v_1 - v_0 = e
    v_2 - v_0 = L d/dt i_L.    (3.10)
The matrix form of the above system, with unknowns (v_0, v_1, v_2, i_e, i_L), is
    d/dt ( C v_0 - C v_2, 0, -C v_0 + C v_2, 0, -L i_L )^T
       + ( i_e - i_L, (v_1 - v_2)/R - i_e, (v_2 - v_1)/R + i_L, -v_0 + v_1 - e, -v_0 + v_2 )^T = 0.    (3.11)
Node 0 is grounded, so v_0 = 0 and the first row of the above system can be omitted. The result, rewritten in the general form (3.7), is
    d/dt ( 0, C v_2, 0, -L i_L )^T + ( (v_1 - v_2)/R - i_e, (v_2 - v_1)/R + i_L, v_1 - e, v_2 )^T = 0,
with
    q(t, x) = ( 0, C v_2, 0, -L i_L )^T,
    j(t, x) = ( (v_1 - v_2)/R - i_e, (v_2 - v_1)/R + i_L, v_1 - e, v_2 )^T,
and
    x = ( v_1, v_2, i_e, i_L )^T.
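For a linear circuit the grounded MNA system can be assembled directly as E dx/dt + J x = b. The sketch below does this for the RCL example; the element values R = C = L = 1, e = 1 and the sign convention are assumptions made for the illustration, not values from the text. The singular matrix E shows why MNA produces a DAE rather than an ODE.

```python
import numpy as np

# Grounded MNA equations of the RCL example in linear form
# E * dx/dt + J * x = b with x = (v1, v2, i_e, i_L).
# Element values and signs are illustrative assumptions.
R, C, L, e = 1.0, 1.0, 1.0, 1.0
E = np.array([[0.0, 0.0, 0.0,  0.0],
              [0.0, C,   0.0,  0.0],
              [0.0, 0.0, 0.0,  0.0],
              [0.0, 0.0, 0.0, -L ]])
J = np.array([[ 1/R, -1/R, -1.0, 0.0],
              [-1/R,  1/R,  0.0, 1.0],
              [ 1.0,  0.0,  0.0, 0.0],
              [ 0.0,  1.0,  0.0, 0.0]])
b = np.array([0.0, 0.0, e, 0.0])

# Only two of the four equations carry derivatives, so E is singular:
# the system is a genuine DAE.
print(int(np.linalg.matrix_rank(E)))   # 2
```

Setting the derivative terms to zero and solving J x = b yields the DC operating point, which is exactly the steady-state computation of the next chapter.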
Chapter 4
Analysis of Circuit Equations
4.1 Introduction
In this chapter the following methods of circuit analysis are described:
DC analysis: computes an equilibrium steady state of the circuit.
AC analysis: computes the linearized effect of a sinusoidal input source in the circuit.
TR analysis: studies the time domain behavior of the circuit, and
PZ analysis: gives information about the stability of the circuit.
The circuit equations have the following form:
    f(t, x, x_dot) = d/dt q(t, x) + j(t, x) = 0.    (4.1)
Equation (4.1) is called a differential algebraic equation, or DAE, because it contains differential equations related to capacitors and/or inductors and algebraic equations corresponding to resistors.
4.2 Direct Current Analysis (DC)
DC analysis is the most basic analysis in circuit simulation and concerns the steady state solution of a circuit, so all time derivatives are zero and the time dependent expressions are constant. Hence the system is i(v_DC) = 0. If the circuit is linear, the circuit equations for DC analysis form a system of linear equations, requiring only a method to translate the circuit into equations and a linear solver. MNA is the easiest method to express the circuit in terms of equations, and the resulting system of linear
equations can be solved by LU decomposition, for instance [16]. The steady state x_DC is time independent and defined by
    x_dot_DC = 0.    (4.2)
Equation (4.1) becomes time independent,
    f_DC(x, 0) = 0,    (4.3)
that is,
    d/dt q_DC(x) + (d q_DC/dx)(x) x_dot + j_DC(x) = 0,
and since the time derivatives are zero this reduces to
    j_DC(x) = 0.    (4.4)
The latter system is in general nonlinear and can be solved by an iterative method such as the Newton-Raphson method. We again write the equations for figure 3.1, considering v_0 = 0:
    -i_e + (v_1 - v_2)/R = 0
    (v_2 - v_1)/R + C d/dt v_2 + i_L = 0
    v_1 = e    (4.5)
    v_2 = L d/dt i_L.
In DC analysis all time derivatives are zero:
    C d/dt v_2 = 0,    L d/dt i_L = 0.
Then system (4.5) becomes
    -i_e + (v_1 - v_2)/R = 0
    (v_2 - v_1)/R + i_L = 0
    v_1 = e    (4.6)
    v_2 = 0,
and in matrix form
    [  1/R  -1/R  -1   0 ] [ v_1 ]   [ 0 ]
    [ -1/R   1/R   0   1 ] [ v_2 ] = [ 0 ]    (4.7)
    [  1     0     0   0 ] [ i_e ]   [ e ]
    [  0     1     0   0 ] [ i_L ]   [ 0 ]
The solution of the linear system is
    x_DC = (v_1, v_2, i_e, i_L)^T = (e, 0, e/R, e/R)^T.
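The DC computation above is a single linear solve. A minimal sketch, assuming the arbitrary values R = 2 and e = 10 for the demonstration:

```python
import numpy as np

# Solving the DC system (4.7) numerically; R = 2 and e = 10 are
# arbitrary values chosen only for this demonstration.
R, e = 2.0, 10.0
G = np.array([[ 1/R, -1/R, -1.0, 0.0],
              [-1/R,  1/R,  0.0, 1.0],
              [ 1.0,  0.0,  0.0, 0.0],
              [ 0.0,  1.0,  0.0, 0.0]])
rhs = np.array([0.0, 0.0, e, 0.0])
x_dc = np.linalg.solve(G, rhs)     # (v1, v2, i_e, i_L)
print(x_dc.tolist())               # [10.0, 0.0, 5.0, 5.0], i.e. (e, 0, e/R, e/R)
```

The numerical result reproduces the closed-form solution (e, 0, e/R, e/R) stated above.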
4.3 Small Signal (Alternating Current) Analysis (AC)
AC analysis determines the perturbation of the solution of the circuit due to small sinusoidal sources, i.e. due to adding a small time varying signal e(t). Reactive elements such as capacitors and inductors are taken into account as well. Start from equation (3.7),
    d/dt q(t, x) + j(t, x) = 0.
AC analysis uses as initial value the steady state x(t) = x_DC, which is the solution of the time independent system (4.3), f_DC(x, 0) = 0. The AC analysis problem is defined by adding the small signal e(t) to system (3.7):
    f_AC(t, x) = d/dt q(t, x) + j(t, x) = e(t),    (4.8)
where x_AC = x_DC + x(t). We apply a Taylor expansion of the functions q and j at the point x_DC, assuming that e(t) is independent of x:
    d/dt ( q(x_DC) + (dq/dx)|_{x_DC} x(t) + O(||x||^2) ) + j(x_DC) + (dj/dx)|_{x_DC} x(t) + O(||x||^2) = e(t),    (4.9)
where q(x_DC) is constant, so
    d/dt q(x_DC) = 0,
and (dq/dx)|_{x_DC} is time independent, so
    d/dt ( (dq/dx)|_{x_DC} x(t) ) = (dq/dx)|_{x_DC} x_dot(t).
Since x_DC is a solution of the system j(x) = 0, we have j(x_DC) = 0. Note that the whole O(||x||^2) contribution can be neglected as a second order effect. Then equation (4.9) reduces to
    (dq/dx)|_{x_DC} x_dot(t) + (dj/dx)|_{x_DC} x(t) = e(t).    (4.10)
We choose the following notation for the two Jacobian matrices:
    C = (dq/dx)|_{x_DC},    G = (dj/dx)|_{x_DC}.    (4.11)
Substituting C and G in equation (4.10) gives
    C x_dot(t) + G x(t) = e(t),    (4.12)
a system of linear differential algebraic equations that describes the response to the small signal e(t). The behavior of the solution can also be studied in the frequency domain. Defining x(t) = X exp(iwt) and e(t) = E exp(iwt), where X and E are time independent vectors and w is the common angular frequency, system (4.12) becomes
    C d/dt X exp(iwt) + G X exp(iwt) = E exp(iwt)
    C iw X exp(iwt) + G X exp(iwt) = E exp(iwt).
Since exp(iwt) is never zero, it follows that
    (iwC + G) X = E.    (4.13)
For a small signal e(t) = E exp(iwt) with ||E|| << 1 it follows that
    X = (iwC + G)^{-1} E.
Small signal analysis is also called alternating current analysis because the small signal added in e(t) is a sine wave and can be interpreted as an alternating current [8, 22].
4.4 Transient Analysis (TR)
Since the solution of system (3.7) is time dependent, one has to integrate it over a time interval [0, T]. This is called transient analysis. We divide the time interval into small intervals [0, t_1, t_2, ..., T]. On each interval [t_{k-1}, t_k] the differential equations are transformed by a numerical integration algorithm into algebraic equations, so a system of non-linear algebraic equations has to be solved. The solution at t = 0 is determined by the DC solution, and the solution of the transient analysis is then found step by step. The numerical integration can be implicit as well as explicit. For circuit simulation an implicit method is preferred, because of the algebraic equations and the possibly stiff behavior; an example is the Euler backward method, which is a backward differentiation formula (BDF) method. We approximate
    d/dt q(t, x_k) ~ (1/dt) ( q(t, x_k) - q(t, x_{k-1}) ),    (4.14)
where dt = t_k - t_{k-1} is the time step. Substituting (4.14) in (3.7) gives
    q(t, x_k) = q(t, x_{k-1}) - dt j(t, x_k),    (4.15)
where q(t, x_{k-1}) is known at t_k.
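For a linear DAE the backward Euler step (4.14)-(4.15) amounts to one linear solve per time step. The sketch below integrates the linear RCL example this way; the matrices, element values (R = C = L = 1, e = 1) and sign convention are illustrative assumptions for this demonstration.

```python
import numpy as np

# Backward Euler for E * dx/dt + J * x = b: with q(x) = E x and
# j(x) = J x - b, step (4.15) becomes the linear solve
#   (E/dt + J) x_k = (E/dt) x_{k-1} + b.
# Matrices and element values are illustrative assumptions.
R, C, L, e = 1.0, 1.0, 1.0, 1.0
E = np.diag([0.0, C, 0.0, -L])
J = np.array([[ 1/R, -1/R, -1.0, 0.0],
              [-1/R,  1/R,  0.0, 1.0],
              [ 1.0,  0.0,  0.0, 0.0],
              [ 0.0,  1.0,  0.0, 0.0]])
b = np.array([0.0, 0.0, e, 0.0])

dt, x = 1e-2, np.zeros(4)          # start from the zero state
for _ in range(2000):              # integrate up to t = 20
    x = np.linalg.solve(E / dt + J, E @ x / dt + b)
print(np.round(x, 2).tolist())     # settles at the DC steady state
```

Because backward Euler is implicit, each step requires a solve with E/dt + J, but the method handles the algebraic equations (rows of E that are zero) without difficulty, which is exactly why implicit methods are preferred here.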
For a more accurate approximation, the time step needs to be chosen small enough.
4.5 Pole-zero Analysis (PZ)
Pole-zero analysis is used in electrical engineering to analyze the stability of an electrical circuit. For example, if the circuit is designed to be an oscillator, pole-zero analysis is one way to verify that the circuit indeed oscillates. As circuits become ever more complicated, there is an urgent need for fast and accurate algorithms. Here we introduce pole-zero analysis and two different strategies for dealing with the problem.
System Poles and Zeros
The transfer function provides a basis for determining important system response characteristics without solving the complete differential equation. As defined, the transfer function is a rational function in the complex variable s = sigma + jw, that is,
    H(s) = ( b_m s^m + b_{m-1} s^{m-1} + ... + b_1 s + b_0 ) / ( a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 ).    (4.16)
It is often convenient to factor the polynomials in the numerator and denominator, and to write the transfer function in terms of those factors:
    H(s) = N(s)/D(s) = K (s - z_1)(s - z_2) ... (s - z_{m-1})(s - z_m) / ( (s - p_1)(s - p_2) ... (s - p_{n-1})(s - p_n) ),    (4.17)
where the numerator and denominator polynomials N(s) and D(s) have real coefficients defined by the system's differential equation, and K = b_m / a_n. As written in equation (4.17), the z_i are the roots of the equation N(s) = 0 and are defined to be the system zeros, and the p_i are the roots of the equation D(s) = 0 and are defined to be the system poles. In equation (4.17) the factors in the numerator and denominator are written so that when s = z_i the numerator N(s) = 0 and the transfer function vanishes,
    lim_{s -> z_i} H(s) = 0,    (4.18)
and similarly when s = p_i the denominator polynomial D(s) = 0 and the value of the transfer function becomes unbounded,
    lim_{s -> p_i} H(s) = infinity.    (4.19)
All of the coefficients of the polynomials N(s) and D(s) are real; therefore the poles and zeros must be either purely real or appear in complex conjugate pairs.
In general, for the poles either p_i = sigma_i or p_i, p_{i+1} = sigma_i +/- jw_i. The existence of a single complex pole without a corresponding conjugate pole would generate complex coefficients in the polynomial D(s). Similarly, the system zeros are either real or appear in complex conjugate pairs.
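Given the coefficient vectors of N(s) and D(s), the poles and zeros are just polynomial roots. A minimal sketch with arbitrary example polynomials (an assumption, not a circuit from the text), which also shows the conjugate-pair property:

```python
import numpy as np

# Poles and zeros of a rational transfer function from its coefficient
# vectors, as in (4.16)-(4.17).  The polynomials are arbitrary examples;
# real coefficients force complex roots into conjugate pairs.
num = [1.0, 3.0]        # N(s) = s + 3          -> zero at s = -3
den = [1.0, 2.0, 5.0]   # D(s) = s^2 + 2s + 5   -> poles at s = -1 +/- 2j
zeros = np.roots(num)
poles = np.sort_complex(np.roots(den))
print(zeros.tolist())
print(poles.tolist())
```

Negative real parts of all poles indicate a stable circuit, which is the property pole-zero analysis is used to verify.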
Transfer Function
As already mentioned in section 4.1, the way in which system (4.1) is solved depends on the kind of analysis (DC analysis, AC analysis, transient analysis or pole-zero analysis). To compute the dynamic response of a circuit variable or expression to small pulse excitations by an independent source we use pole-zero analysis, which is explained through the transfer function. The transfer function is defined by its pole-zero representation, and pole-zero analysis thereby provides the stability properties of the circuit. The circuit transfer function H(s) describes the response of a linear circuit to source variations in the Laplace (frequency) domain:
    H(s) = L(zero state response)(s) / L(source variation)(s),    (4.20)
where L(f)(s) is the Laplace transform of a function f defined in the time domain, and s is the (complex) variable in the frequency domain. The zero state response represents the response relative to the stationary solution; it does not depend on the initial condition (solution), only on the excitation. Starting from a linearization around the operating point, the time domain formulation is as follows:
    C d/dt x(t) + G x(t) = e(t),    x(0) = 0,    (4.21)
where e(t) models the excitation and C and G are defined in (4.11). Because not all properties can be computed in the time domain, the problem is transformed to the frequency domain by applying a Laplace transform:
    (sC + G) X(s) = E(s),    (4.22)
where X(s), E(s) are the Laplace transforms of the variables x, e and s is the variable in the frequency domain. The response of the circuit to a variation of the excitation is given by the transfer function
    H(s) = X(s)/E(s).    (4.23)
Hence,
    H(s) = (sC + G)^{-1}.    (4.24)
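Formula (4.24) can be evaluated along the imaginary axis s = jw to obtain the frequency response. A one-node RC low-pass makes this concrete; the element values below are illustrative assumptions, and the check confirms the familiar fact that at the corner frequency w = 1/(RC) the magnitude drops to 1/sqrt(2) of its DC value.

```python
# Evaluating the transfer function (4.24) for a scalar RC low-pass,
# H(s) = 1/(sC + G) with G = 1/R.  Element values are assumptions
# chosen only for this illustration.
R, C = 1e3, 1e-6
G = 1.0 / R
H = lambda w: 1.0 / (1j * w * C + G)

w_c = 1.0 / (R * C)                 # corner (3 dB) frequency
ratio = abs(H(w_c)) / abs(H(0.0))
print(round(ratio, 4))              # 0.7071, i.e. 1/sqrt(2)
```

For the matrix-valued case one replaces the scalar division by a linear solve with (jwC + G) at each frequency point.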
4.6 Backward Differential Formula Method (BDF)
One of the best approaches for solving a DAE is the combination of an implicit integration method and a nonlinear solver. Instead of a one-step method, it is possible to use a linear multistep method (LMM). The integration method can be chosen from the class of LMMs. A linear k-step method computes the solution from
    q(t_{i+k}, x_{i+k}) = sum_{j=0}^{k-1} alpha_j q(t_{i+j}, x_{i+j}) + dt sum_{j=0}^{k} beta_j d/dt q(t_{i+j}, x_{i+j}),    (4.25)
where dt = t_{i+1} - t_i, k in N, alpha_0, alpha_1, ..., alpha_{k-1} in R, beta_0, beta_1, ..., beta_k in R and alpha_0^2 + beta_0^2 > 0 (this guarantees that the formula is a genuine k-step method and not a k'-step method with k' < k). Now, if k = 1, alpha_0 = beta_0 = 1 and beta_1 = 0, equation (4.25) becomes
    q(t_{i+1}, x_{i+1}) = q(t_i, x_i) + dt d/dt q(t_i, x_i),
which is the Euler forward scheme. If k = 1, alpha_0 = beta_1 = 1 and beta_0 = 0, equation (4.25) reduces to the Euler backward scheme
    q(t_{i+1}, x_{i+1}) = q(t_i, x_i) + dt d/dt q(t_{i+1}, x_{i+1}).
We pursue the argument leading to the backward Euler method to derive the family of backward differentiation formulas (BDF). Using (4.25), d/dt q(t, x) at the (i+k)-th point of equation (3.7) can be approximated by
    d/dt q(t_{i+k}, x_{i+k}) ~ (1/(beta_k dt)) ( q(t_{i+k}, x_{i+k}) - sum_{j=0}^{k-1} alpha_j q(t_{i+j}, x_{i+j}) ) - (1/beta_k) sum_{j=0}^{k-1} beta_j d/dt q(t_{i+j}, x_{i+j}).    (4.26)
Then we define b_{i+k} as
    b_{i+k} = -(1/(beta_k dt)) sum_{j=0}^{k-1} alpha_j q(t_{i+j}, x_{i+j}) - (1/beta_k) sum_{j=0}^{k-1} beta_j d/dt q(t_{i+j}, x_{i+j}).
Substituting b_{i+k} in (4.26) gives
    d/dt q(t_{i+k}, x_{i+k}) ~ b_{i+k} + (1/(beta_k dt)) q(t_{i+k}, x_{i+k}),
and hence equation (3.7) can be written as
    b_{i+k} + (1/(beta_k dt)) q(t_{i+k}, x_{i+k}) + j(t_{i+k}, x_{i+k}) = 0.    (4.27)
A Newton method can be used to compute x_{i+k}. In a Newton method a nonlinear equation must be solved at each time step. This requires the Jacobian of equation (4.27), which is given by
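A higher-order member of the family can be checked against the form (4.25). The sketch below uses the standard BDF2 coefficients, stated here as an assumption since the text only derives the k = 1 (Euler backward) case, and verifies that the formula reproduces polynomials up to degree 2 exactly:

```python
# The k = 2 member of the BDF family in the form (4.25):
#   q_{i+2} = (4/3) q_{i+1} - (1/3) q_i + (2/3) dt * q'_{i+2}.
# Standard BDF2 coefficients, assumed for this illustration.
dt = 0.1
t0, t1, t2 = 0.0, dt, 2 * dt
cases = ((lambda t: 1.0,   lambda t: 0.0),    # degree 0
         (lambda t: t,     lambda t: 1.0),    # degree 1
         (lambda t: t * t, lambda t: 2 * t))  # degree 2
for q, dq in cases:
    lhs = q(t2)
    rhs = (4/3) * q(t1) - (1/3) * q(t0) + (2/3) * dt * dq(t2)
    assert abs(lhs - rhs) < 1e-12             # exact up to degree 2
print("BDF2 reproduces polynomials of degree <= 2 exactly")
```

Exactness on polynomials of degree up to k is what makes a k-step BDF method order-k accurate.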
    (1/(beta_k dt)) C(t, x) + G(t, x).
Newton-Raphson Method
Application of MNA to a circuit and the use of a suitable time integration method lead to a system of nonlinear algebraic equations at each discretisation point t_i. Consider the equation
    f(x) = 0    (4.28)
with x = x_i the vector of the unknown variables (voltages or currents) at the time t_i. The Newton-Raphson method is usually used in simulation programs because of its efficiency. It starts from an initial value x_0. The i-th iteration of Newton-Raphson is
    x_{i+1} = x_i - (Jf(x_i))^{-1} f(x_i),    (4.29)
where Jf(x_i) = (df/dx)|_{x_i} is the Jacobian matrix of f evaluated at x_i.
Convergence of the Newton-Raphson Method
We want to state a convergence theorem for the Newton-Raphson method, so first we recall the following lemma.
Lemma 4.1. If the Jacobian matrix Jf(x) exists for all x in a convex region C of R^n, and if a constant gamma exists with
    ||Jf(x) - Jf(y)|| <= gamma ||x - y||  for all x, y in C,
then for all x, y in C the estimate
    ||f(x) - f(y) - Jf(y)(x - y)|| <= (gamma/2) ||x - y||^2
holds.(1)
Recall that a set M in R^n is convex if x, y in M implies that the line segment [x, y] := { z = lambda x + (1 - lambda) y : 0 <= lambda <= 1 } is contained within M. Now we can show that the Newton-Raphson method is quadratically convergent.
(1) Proof: see [26].
Theorem 4.1. Let C in R^n be a given open set. Further, let C_0 be a convex set with C_0 contained in C, and let f : C -> R^n be a function which is differentiable for all x in C_0 and continuous for all x in C. For an x_0 in C_0 let positive constants r, alpha, beta, gamma, h be given with the following properties:
    S_r(x_0) := { x : ||x - x_0|| < r } contained in C_0,
    h := alpha beta gamma / 2 < 1,
    r := alpha / (1 - h),
and let f(x) have the following properties:
    ||Jf(x) - Jf(y)|| <= gamma ||x - y||  for all x, y in C_0;
    Jf(x)^{-1} exists and satisfies ||Jf(x)^{-1}|| <= beta for all x in C_0;
    ||Jf(x_0)^{-1} f(x_0)|| <= alpha.
Then:
1. Beginning at x_0, each point
       x_{i+1} := x_i - Jf(x_i)^{-1} f(x_i),  i = 0, 1, ...,
   is well defined and satisfies x_i in S_r(x_0) for all i.
2. lim_{i -> infinity} x_i = xi exists and satisfies xi in S_r(x_0) and f(xi) = 0.
3. For all i >= 1,
       ||x_i - xi|| <= alpha h^{2^i - 1} / (1 - h^{2^i}).
Since 0 < h < 1, the Newton-Raphson method is at least quadratically convergent.(2)
We give a short proof of the quadratic convergence. Applying (4.29) we get
    x_{i+1} - xi = x_i - Jf(x_i)^{-1} f(x_i) - xi;
because Jf(x_i)^{-1} exists and f(xi) = 0, we get
    ||x_{i+1} - xi|| <= ||Jf(x_i)^{-1}|| ||Jf(x_i)(x_i - xi) - f(x_i) + f(xi)||.
Applying lemma 4.1, quadratic convergence follows:
    ||x_{i+1} - xi|| <= (beta gamma / 2) ||x_i - xi||^2.
(2) Proof: see [26].
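The doubling of correct digits predicted by Theorem 4.1 is easy to observe numerically. A minimal sketch on the scalar example f(x) = x^2 - 2 (chosen for illustration, not taken from the text):

```python
import math

# Quadratic convergence of the Newton-Raphson iteration (4.29) on the
# scalar example f(x) = x^2 - 2, whose root is sqrt(2).
f  = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

x, root = 2.0, math.sqrt(2.0)
errors = []
for _ in range(5):
    x -= f(x) / df(x)
    errors.append(abs(x - root))
print(["%.1e" % e for e in errors])   # each error is roughly the square of the last
```

After three or four steps the error reaches the level of machine precision, exactly the "at least quadratic" behavior guaranteed by the theorem.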
To explain the Newton-Raphson method, it will be applied to the circuit shown in Figure 4.1. One of the resistors has a variable resistance: the current through it is described by the function i_1(v) = v^2 + v. This leads to a nonlinear system.

Figure 4.1. A non-linear circuit: a voltage source e and the nonlinear resistor with i_1(v) = v^2 + v between nodes 1 and 2, loaded by the linear resistors R_2 and R_3.

Modified nodal analysis yields the circuit equations in the node voltages and the current i_e through the voltage source. If node 0 is grounded (v_0 = 0), the unknowns reduce to x = (v_1, v_2, i_e)^T and the system can be written in equation form f(x) = 0:

    f(x) = [ (v_1 - v_2)^2 + (v_1 - v_2) - i_e
             -(v_1 - v_2)^2 - (v_1 - v_2) + v_2/R_2 + v_2/R_3
             v_1 - e ]
         = 0,    (4.30)

and its Jacobian matrix becomes

    Jf(x) = [ 2(v_1 - v_2) + 1      -(2(v_1 - v_2) + 1)                     -1
              -(2(v_1 - v_2) + 1)    2(v_1 - v_2) + 1 + 1/R_2 + 1/R_3        0
              1                      0                                       0 ],

with the unknowns x = (v_1, v_2, i_e)^T. It is assumed that R_2 = 1 Ω, R_3 = 5 Ω and the voltage across the voltage source is e = 1 V. As initial value, take the DC solution x_0 = (1, 8.8, 2.64). Inserting the element values, equation (4.30) becomes

    f(x) = [ (v_1 - v_2)^2 + (v_1 - v_2) - i_e
             -(v_1 - v_2)^2 - (v_1 - v_2) + v_2/1 + v_2/5
             v_1 - 1 ]
         = 0.

The k-th iteration of the Newton-Raphson process is given by equation (4.29). For this example, the iteration converges after 6 steps. The values of x at each iteration step are listed in Table 4.1 (columns: #iteration, v_1, v_2, i_e).

The error ||x_i - x*||, where x* is the exact solution, is given componentwise in Table 4.2; after six iterations the errors are of the order of 10^{-14}. The quadratic convergence is clearly seen in the table.
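As a hedged sketch of the example above (the element values, equation signs and initial guess below are illustrative assumptions, not the thesis's exact data), a circuit of the same flavour as Figure 4.1 can be solved with the Newton-Raphson iteration:

```python
# A nonlinear resistor with i(v) = v^2 + v connects node 1 (held at e by the
# voltage source) to node 2, which is loaded by R2 and R3 to ground.
# All element values are assumptions for illustration.

R2, R3, e = 1.0, 5.0, 1.0

def f(x):
    v1, v2, ie = x
    d = v1 - v2
    i1 = d * d + d                       # nonlinear resistor current
    return [i1 - ie,                     # KCL at node 1
            -i1 + v2 / R2 + v2 / R3,     # KCL at node 2
            v1 - e]                      # voltage source equation

def Jf(x):
    v1, v2, ie = x
    g = 2.0 * (v1 - v2) + 1.0            # derivative of i1 w.r.t. (v1 - v2)
    return [[g, -g, -1.0],
            [-g, g + 1.0 / R2 + 1.0 / R3, 0.0],
            [1.0, 0.0, 0.0]]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    y = [0.0] * n
    for k in range(n - 1, -1, -1):
        y[k] = (M[k][n] - sum(M[k][c] * y[c] for c in range(k + 1, n))) / M[k][k]
    return y

x = [1.0, 0.5, 0.5]                      # assumed initial guess
for _ in range(20):
    dx = solve(Jf(x), f(x))
    x = [xi - di for xi, di in zip(x, dx)]
```

With these values the iteration settles on v_1 = 1 (forced by the source equation) and a node voltage v_2 that balances the nonlinear current against the two linear resistors.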
Chapter 5

Differential Algebraic Equations (DAE)

5.1 Introduction

In this chapter differential algebraic equations (DAEs) are explained. Their solvability and stability are also studied. Finally, the index of a DAE is defined. More information about dynamical systems can be found in [17, 18].

5.2 Theory of Differential Algebraic Equations

5.2.1 Initial Value Problem and Solvability

We consider an electrical circuit system which is described by the following differential algebraic equation:

    (d/dt) q(t, x) + j(t, x) = 0,    x(0) = x_0,    (5.1)

where x ∈ R^b is the state vector and q, j : [0, T] × R^b → R^b (which can be found by Modified Nodal Analysis, Section 3.4). The solution x(t) of (5.1) describes the dynamic behavior of the system for a known initial value, for instance the steady state.

Ordinary differential equations (ODEs) are a special case of DAEs:

    ẋ = f(t, x),    x(0) = x_0,    (5.2)

where f : [0, T] × R^b → R^b. In this case TR analysis can compute the solution. According to
the uniqueness theorem, the IVP (5.2) with f(t, x) ∈ Lip(I × Ω)^1 for some domain I × Ω containing (t_0, x_0) has at most one solution [11].

Here a problem arises in solving DAEs: most DAEs cannot be represented as ODEs. We consider system (5.1) and expand the derivative of the function q:

    (∂q/∂x)(t, x) ẋ + (∂q/∂t)(t, x) + j(t, x) = 0.    (5.3)

Note that with

    C(t, x) = (∂q/∂x)(t, x),

equation (5.3) becomes

    C(t, x) ẋ + (∂q/∂t)(t, x) + j(t, x) = 0.

To solve this equation we need to discuss C(t, x): if C(t, x) is nonsingular, and hence invertible, for all x, the DAE can be changed into an ODE and the system is solved. In many cases, however, C(t, x) is singular. The reason for this singularity is the presence of algebraic equations in the system, so the solution also has to satisfy a number of algebraic equations at t = 0. An initial value that satisfies the algebraic equations is called consistent; this means that not all initial values are consistent. If the initial value equals the steady state, it also satisfies the algebraic equations. Hence, to solve a DAE and approximate its solution accurately and efficiently, suitable numerical tools have to be chosen: the time axis is discretized into small intervals, and on each interval the DAE is approximated by a numerical integration scheme [27].

Theorem 5.1.^2 The DAE system (4.12),

    C ẋ(t) + G x(t) = e(t),

which is linear both in x and ẋ, is solvable if and only if the matrix pencil λC + G is regular.

5.2.2 Stability

We would like to have a numerical method with the property that the numerical solution is close to the exact solution. That means that besides the solvability of the problem, stability is also important and necessary.

^1 Lipschitz continuity: the vector field f(t, x) is Lipschitz continuous on I × Ω if a constant L exists such that for all x, y ∈ Ω and all t ∈ I: ||f(t, y) - f(t, x)||_2 ≤ L ||y - x||_2. If f is Lipschitz continuous on I × Ω we denote this as f ∈ Lip(I × Ω) [11].
^2 The proof can be found in [17].
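The integration idea above can be made concrete with a minimal sketch (made-up matrices and step size): backward Euler applied to a linear DAE C ẋ + G x = 0 whose matrix C is singular, so that the second equation is purely algebraic. The pencil λC + G of Theorem 5.1 is regular for this example.

```python
# Backward Euler for C x'(t) + G x(t) = 0 with singular C (an illustrative
# example, not from the thesis).  The second row of C is zero, so the second
# equation, -x1 + x2 = 0, is an algebraic constraint.
C = [[1.0, 0.0],
     [0.0, 0.0]]
G = [[1.0, 0.0],
     [-1.0, 1.0]]

h, steps = 0.01, 100                 # assumed step size; integrate to t = 1
x = [1.0, 1.0]                       # consistent initial value: x2 = x1

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Backward Euler step: (C/h + G) x_{n+1} = (C/h) x_n
A = [[C[i][j] / h + G[i][j] for j in range(2)] for i in range(2)]
for _ in range(steps):
    rhs = [sum(C[i][j] / h * x[j] for j in range(2)) for i in range(2)]
    x = solve2(A, rhs)

# x1 approximates exp(-t) at t = 1, and the algebraic constraint x2 = x1
# is enforced at every step, not just at t = 0.
```

Note that the implicit method solves the algebraic equation together with the differential one at each step; an explicit method could not do this with a singular C.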
Definition 5.1. Consider the perturbed IVP of (5.1) with initial value x̂_0 and solution x̂(t). The system is stable if

    ∀ε > 0 ∃δ > 0 : ||x̂_0 - x_0|| < δ  ⇒  ∀t : ||x̂(t) - x(t)|| < ε.

Stability guarantees that if the initial value is changed slightly, the difference between the perturbed solution and the exact solution also stays small.

In electrical circuits, the time-dependent behavior is caused only by the source functions. This means that system (5.1) can be written as

    (d/dt) q(x) + j(x) + u(t) = 0,    x(0) = x_0,    (5.4)

where u(t) is a time-dependent input function. Checking local stability around the initial value is easier than checking global stability. A linear time invariant system is stable if its Jacobian matrix is stable.^3

Theorem 5.2.^4 Let x_0 be the steady state of (5.4), with j(x_0) = 0. Consider the linearised homogeneous system around x_0,

    C ẋ + G x = 0,    (5.5)

where C = (∂q/∂x)(x_0) and G = (∂j/∂x)(x_0). This system is stable if all roots of the equation

    det(λC + G) = 0

have strictly negative real part. If G is invertible and -G^{-1}C is a stable matrix, then this condition is satisfied. If (5.5) is stable, then the nonlinear system is locally stable around x_0.

^3 A square matrix is said to be stable if every eigenvalue has negative real part.
^4 The proof can be found in [18].

5.2.3 Index of DAEs

To characterize the degree of difficulty in solving DAEs, we associate an index with them. We consider the following DAE:

    F(t, y, ẏ) = 0.    (5.6)

This system contains algebraic and differential parts. As described in Section 5.2.1, if ∂F/∂ẏ is nonsingular, and hence invertible, then system (5.6) can be changed into an ODE system. When F describes the dynamics of an electrical circuit this is usually not the case, but a DAE system
can be transformed into an ODE system by differentiating the DAE system and substituting the algebraic equations by the extra derived differential equations.

Definition 5.2. For the general DAE system (5.6), the index along a solution y(t) is the minimum number of differentiations of the system required to solve for ẏ uniquely in terms of y and t (i.e., to define an ODE for y). Thus, the index is defined in terms of the overdetermined system

    F(t, y, y') = 0,
    (dF/dt)(t, y, y', y'') = 0,
    ...
    (d^p F/dt^p)(t, y, y', ..., y^(p+1)) = 0    (5.7)

to be the smallest integer p so that y' in (5.7) can be solved for in terms of y and t [3].

In practice, differentiation of the system as in (5.7) is rarely done in a computation. Nevertheless, such a definition is very useful in understanding the underlying mathematical structure of the DAE system, and hence in selecting an appropriate numerical method.

Theorem 5.3. If the matrix pencil λC + G is regular, there exist nonsingular matrices P and Q such that

    PCQ = [ I  0 ]        PGQ = [ A  0   ]
          [ 0  N ],             [ 0  I_l ]    (for some l),

where the matrix N consists of nilpotent Jordan blocks N_i, in other words N = diag(N_1, ..., N_k) for some k, with each N_i, i = 1, ..., k, of the form

    N_i = [ 0  1         ]
          [    0  1      ]
          [       .   1  ]
          [           0  ]    (or possibly N_i = 0),

and A consists of Jordan blocks with nonzero eigenvalue. The nilpotency index μ is defined as

    μ = min{k ∈ N : N^k = 0}.

If C is nonsingular, we define μ = 0, because then N is empty. The nilpotency index is also called the local index of the DAE (5.1) [7].
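The nilpotency index can be computed directly for a small example. The 3x3 matrix below is an illustrative Jordan block of the form N_i in Theorem 5.3 (ones on the superdiagonal, zeros elsewhere):

```python
# Compute mu = min{k : N^k = 0} for a single 3x3 nilpotent Jordan block.
# The matrix is a made-up example in the shape described by Theorem 5.3.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_zero(M):
    return all(all(v == 0 for v in row) for row in M)

N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

P = [row[:] for row in N]    # P holds N^mu while we search for mu
mu = 1
while not is_zero(P):
    P = matmul(P, N)
    mu += 1
```

Each multiplication shifts the superdiagonal of ones one position further up, so a d-by-d block of this form has nilpotency index d; here mu ends up equal to 3.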
5.2.4 Semi-Explicit DAE

Many applications of either linear constant-coefficient or nonlinear DAEs lead to linear time-varying DAEs (5.5),

    C ẋ + G x = 0,

with C singular. This form exhibits the behavior which distinguishes general DAEs from linear constant-coefficient DAEs. System (5.5) is the general, or fully implicit, linear time-varying DAE [7]. The general (or fully implicit) nonlinear DAE is

    F(t, x, ẋ) = 0.

Depending on the application, we sometimes refer to a system as semi-explicit if it has the form

    F(ẋ, x, y, t) = 0,
    G(x, y, t) = 0,

where F_ẋ is nonsingular. The advantage of the semi-explicit form is that it distinguishes the differential equations from the algebraic equations. The semi-explicit form of (5.1), obtained by introducing y = q(t, x), is

    ẏ = -j(t, x),
    0 = y - q(t, x).
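Looking back at Theorem 5.2, the condition that all roots of det(λC + G) = 0 lie in the open left half-plane can be checked directly for a small system. The 2x2 matrices below are made-up examples; for this size the determinant is simply a quadratic in λ whose roots we can inspect:

```python
# Stability check in the sense of Theorem 5.2 for C x' + G x = 0:
# expand det(l*C + G) = a*l^2 + b*l + c for 2x2 matrices and test
# whether both roots have strictly negative real part.
import cmath

C = [[1.0, 0.0], [0.0, 1.0]]
G = [[3.0, 1.0], [1.0, 3.0]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

a = det2(C)                              # coefficient of l^2
c = det2(G)                              # constant term
# mixed C/G terms from expanding the 2x2 determinant
b = (C[0][0] * G[1][1] + G[0][0] * C[1][1]
     - C[0][1] * G[1][0] - G[0][1] * C[1][0])

disc = cmath.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
stable = all(r.real < 0 for r in roots)
```

For these matrices det(λI + G) = (λ + 3)^2 - 1, so the roots are -2 and -4 and the system is stable. For larger systems the same question is a generalized eigenvalue problem for the pencil λC + G.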
Chapter 6

Dynamical Systems and Passivity Preserving MOR

6.1 Introduction

This chapter begins by introducing dynamical systems and their transfer functions (Section 6.2), [23]. In Section 6.3 we give some information about model reduction by projection matrices. In the next two sections we define the passivity of a system and show how to preserve passivity after reduction. In Section 6.5 the spectral zeros and a method for computing them are introduced. Thereafter we describe the projection method for reducing the system, and we introduce two methods for finding the projection matrices, discussed by Sorensen [25] and Antoulas [2]. Both approaches are projection methods that select spectral zeros of the original transfer function to produce a reduced transfer function that has the selected roots as its spectral zeros.

6.2 Dynamical Systems

This chapter is concerned with dynamical systems Σ = (E, A, B, C, D) of the form

    E ẋ(t) = A x(t) + B u(t),
    y(t) = C^T x(t) + D u(t),    (6.1)

where A, E ∈ R^{n×n}, E may be singular (we assume E is symmetric and positive (semi)definite), B ∈ R^{n×m}, C ∈ R^{n×p}, D ∈ R^{p×m}, x(t) ∈ R^n, y(t) ∈ R^p and u(t) ∈ R^m. The matrix E is called the descriptor matrix, the matrix A is called the state space matrix, the matrices B and C are called the input and output maps, respectively, and D is the direct transmission map. The vectors u(t) and x(t) are called the input and state vector, respectively, and y(t) is called the output of the system. The dimension n of the state is defined as the complexity of the system. Such systems arise in circuit simulation, for instance, and in this application the system
is often passive.^1 The transfer function G : C → C^{p×m} of (6.1),

    G(s) = C^T (sE - A)^{-1} B + D,

can be obtained by applying the Laplace transform to (6.1) under the condition x(0) = 0. The transfer function relates outputs to inputs in the frequency domain via Y(s) = G(s) U(s), where Y(s) and U(s) are the Laplace transforms of y(t) and u(t), respectively.^2

We want to reduce the original system to a reduced order model Σ̂ = (Ê, Â, B̂, Ĉ, D),

    Ê ẋ̂(t) = Â x̂(t) + B̂ u(t),
    ŷ(t) = Ĉ^T x̂(t) + D u(t),    (6.2)

where Â, Ê ∈ R^{k×k}, B̂ ∈ R^{k×m}, Ĉ ∈ R^{k×p}, D ∈ R^{p×m}, x̂(t) ∈ R^k, ŷ(t) ∈ R^p, u(t) ∈ R^m and k ≪ n. It is important to produce a reduced model that preserves stability (which is discussed in more detail in Chapter 5) and passivity.

Remark 6.1. Throughout the remainder of this chapter it is assumed that:
- m = p, so that B ∈ R^{n×p}, C^T ∈ R^{p×n} and D ∈ R^{p×p};
- A is a stable matrix, i.e. Re(λ_i) < 0 for all λ_i ∈ σ(A), i = 1, ..., n;
- the system is observable and controllable [29], and it is passive.

6.3 Model Reduction via Projection Matrices

The reduction method in this thesis is based on projection; in Section 6.7 we introduce two methods which are projection methods. In this section a formulation of the projection matrices is given. We develop the structure of a projection method for the linear time invariant (LTI) system (6.1),

    E ẋ(t) = A x(t) + B u(t),
    y(t) = C^T x(t) + D u(t),

where A, E ∈ R^{n×n}, E may be singular (E is symmetric and positive (semi)definite), B ∈ R^{n×m}, C ∈ R^{n×p}, D ∈ R^{p×m}, x(t) ∈ R^n, y(t) ∈ R^p and u(t) ∈ R^m.

^1 The passivity condition is one of the important concepts, and it has been studied in much research [4, 5, 6, 9, 1, 14, 2, 21].
^2 See Subsection.
Now it is assumed that M and N are k-dimensional subspaces of R^n. The matrices V and W are built for reducing the system by a projection method: we construct V = [v_1, ..., v_k] ∈ R^{n×k}, whose column vectors form a basis of M, and W = [w_1, ..., w_k] ∈ R^{n×k}, whose column vectors form a basis of N (we are interested in W^T V = I_k). Assume the system Σ̂ is the reduced model of the original system Σ, where k is the order of Σ̂. Then Σ̂ is acquired as a projection of Σ onto M, and the residual of Σ̂ with respect to Σ is orthogonal to N.

We suppose x̃ is an approximate solution of Σ satisfying the above structure, which means that x̃ is a projection of the solution onto M and the residual is orthogonal to N. So we can write x̃ = V x̂, where x̂ ∈ R^k, and ẋ̃ = V ẋ̂. The residual is then

    E ẋ̃ - A x̃ - B u = E V ẋ̂ - A V x̂ - B u.

The residual being orthogonal to N gives

    W^T (E V ẋ̂ - A V x̂ - B u) = 0  ⇒  W^T E V ẋ̂ - W^T A V x̂ - W^T B u = 0.

The reduced model Σ̂ becomes

    Ê ẋ̂(t) = Â x̂(t) + B̂ u(t),
    ŷ(t) = Ĉ^T x̂(t) + D u(t),

where Â = W^T A V ∈ R^{k×k}, Ê = W^T E V ∈ R^{k×k}, B̂ = W^T B ∈ R^{k×m}, Ĉ = V^T C ∈ R^{k×p}, x̃(t) = V x̂(t) ∈ R^n and ŷ(t) ∈ R^p [19].

6.4 Passive Systems

We can reduce the model with the matrices V and W constructed in Section 6.3, but with arbitrary V and W some features of the original system may not be preserved. One of the properties we are interested in preserving is passivity: when we reduce the system, we should preserve both passivity and stability. The matrix A is assumed to be stable, which means all its eigenvalues lie in the open left half-plane.

Definition 6.1. A system is passive if it does not generate energy internally, and strictly passive if it consumes or dissipates input energy [25].
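The projection step in Section 6.3 is only a handful of matrix products. A minimal sketch with made-up matrices, using the trivial choice V = W = the first k columns of the identity (so that W^T V = I_k and the reduced system is simply the leading k-by-k part of the original), looks as follows:

```python
# Form the reduced matrices A_hat = W^T A V, E_hat = W^T E V, B_hat = W^T B,
# C_hat = V^T C for an illustrative 4th-order system reduced to order 2.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

n, k = 4, 2
A = [[-(i + 1) if i == j else 0.1 for j in range(n)] for i in range(n)]
E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
B = [[1.0] for _ in range(n)]
C = [[1.0] for _ in range(n)]

# Trivial projection basis: V = W = first k columns of the identity
V = [[1.0 if j == i else 0.0 for j in range(k)] for i in range(n)]
W = [row[:] for row in V]

Wt = transpose(W)
A_hat = matmul(Wt, matmul(A, V))
E_hat = matmul(Wt, matmul(E, V))
B_hat = matmul(Wt, B)
C_hat = matmul(transpose(V), C)
```

With this V and W the reduced matrices are exactly the leading blocks of the originals; the methods discussed later differ only in how the columns of V and W are chosen (there, from invariant subspaces associated with selected spectral zeros).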
In other words, Σ is passive if

    Re ∫_{-∞}^{t} u(τ)* y(τ) dτ ≥ 0    for all t ∈ R, u ∈ L_2(R),

and strictly passive if there exists δ > 0 such that

    Re ∫_{-∞}^{t} u(τ)* y(τ) dτ ≥ δ Re ∫_{-∞}^{t} u(τ)* u(τ) dτ    for all t ∈ R, u ∈ L_2(R).

The transfer function of the system Σ is G(s) = C^T (sE - A)^{-1} B + D, which shows the relation between the input u(s) and the output y(s) in the frequency domain.^3 Another, more practical, definition of passivity is the following.

Definition 6.2 ([25]). The system Σ is passive iff the transfer function G(s) is positive real, which means that:
1. G(s) is analytic for Re(s) > 0,
2. G(s̄) = conj(G(s)) for all s ∈ C,
3. G(s) + (G(s))* ≥ 0 for Re(s) > 0, where (G(s))* = B^T (s̄ E^T - A^T)^{-1} C + D^T.

Property 3 implies the existence of a stable rational matrix function K(s) ∈ R^{p×p} (with stable inverse) such that

    G(s) + (G(-s))^T = K(s) (K(-s))^T.

We try to construct V and W in such a way that the transfer function of the reduced model also has these three properties.

6.5 Spectral Zeros

Again we consider the system Σ (6.1), with transfer function

    G(s) = C^T (sE - A)^{-1} B + D.

^3 See Section 6.2.
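On the imaginary axis, property 3 of Definition 6.2 reduces in the scalar case to Re G(jω) ≥ 0. A small illustration with an assumed scalar transfer function G(s) = 1/(s + 1) + 1 (not an example from the thesis):

```python
# Sample the positive-realness condition G(jw) + conj(G(jw)) = 2 Re G(jw) >= 0
# on the imaginary axis for the assumed scalar transfer function
# G(s) = 1/(s + 1) + 1, which corresponds to a passive one-port.

def G(s):
    return 1.0 / (s + 1.0) + 1.0

omegas = [0.0, 0.1, 1.0, 10.0, 1000.0]
values = [(G(1j * w) + G(1j * w).conjugate()).real for w in omegas]
# Here Re G(jw) = 1/(1 + w^2) + 1 > 0, so every sampled value is positive.
```

Sampling frequencies like this is of course only a spot check of positive realness, not a proof; it is the kind of sanity check one runs on a reduced model.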
In Section 6.4 we have seen that if Σ is passive then there exists a stable rational matrix function K(s) ∈ R^{p×p} (with stable inverse): if we write G(s) = n(s)/d(s), then (G(-s))^T = n^T(-s)/d^T(-s). Now we have

    G(s) + (G(-s))^T = n(s)/d(s) + n^T(-s)/d^T(-s)
                     = (n(s) d^T(-s) + d(s) n^T(-s)) / (d(s) d^T(-s))
                     = r(s) r^T(-s) / (d(s) d^T(-s))    (because the numerator of the fraction is a polynomial)
                     = K(s) (K(-s))^T.

This is the spectral factorization of G, and K is a spectral factor of G. The zeros of K, i.e. the λ_i, i = 1, ..., n, such that det(K(λ_i)) = 0, are the spectral zeros of G.

6.5.1 Spectral Zeros and the Generalized Eigenvalue Problem

We start this section by explaining a generalized eigenvalue problem which Sorensen used in [25]. It brings together the theory of positive real interpolation by Antoulas and the invariant subspace method for interpolating the spectral zeros by Sorensen. The components of the generalized eigenvalue problem are constructed from those of the realization of an LTI system. We recall the system Σ (6.1),

    E ẋ(t) = A x(t) + B u(t),
    y(t) = C^T x(t) + D u(t),

and also recall its transfer function

    G(s) = C^T (sE - A)^{-1} B + D.

Now we consider

    (G(-s))^T = B^T (-sE^T - A^T)^{-1} C + D^T = B^T (sE^T - (-A^T))^{-1} (-C) + D^T.

Then we compute G(s) + (G(-s))^T:^4

    G(s) + (G(-s))^T = (C^T (sE - A)^{-1} B + D) + (B^T (sE^T - (-A^T))^{-1} (-C) + D^T)

                     = [ C^T  B^T ] [ (sE - A)^{-1}          0        ] [  B ]
                                    [       0       (sE^T + A^T)^{-1} ] [ -C ]  + (D + D^T).

^4 Blockwise inversion:

    [ A  B ]^{-1}   [ A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1}    -A^{-1} B (D - C A^{-1} B)^{-1} ]
    [ C  D ]      = [ -(D - C A^{-1} B)^{-1} C A^{-1}                      (D - C A^{-1} B)^{-1}           ].
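In the scalar case the spectral zeros can be found by hand: they are the points where G(s) + G(-s) vanishes, the scalar instance of G(s) + (G(-s))^T. As an illustrative example (not from the thesis), take the assumed transfer function G(s) = 1/(s + 1) + 1; then G(s) + G(-s) = (4 - 2s^2)/(1 - s^2), so the spectral zeros are s = +sqrt(2) and s = -sqrt(2).

```python
# Verify that s = sqrt(2) is a spectral zero of the assumed scalar transfer
# function G(s) = 1/(s + 1) + 1, i.e. that G(s) + G(-s) vanishes there.
import math

def G(s):
    return 1.0 / (s + 1.0) + 1.0

zero = math.sqrt(2.0)
residual = G(zero) + G(-zero)    # should vanish at a spectral zero
```

As expected, the spectral zeros come in a pair symmetric about the imaginary axis; the interpolation methods of Sorensen and Antoulas select such stable/antistable pairs and build the projection matrices V and W from the associated (generalized) eigenvectors.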
More informationEE/ME/AE324: Dynamical Systems. Chapter 7: Transform Solutions of Linear Models
EE/ME/AE324: Dynamical Systems Chapter 7: Transform Solutions of Linear Models The Laplace Transform Converts systems or signals from the real time domain, e.g., functions of the real variable t, to the
More informationIntroduction to Modern Control MT 2016
CDT Autonomous and Intelligent Machines & Systems Introduction to Modern Control MT 2016 Alessandro Abate Lecture 2 First-order ordinary differential equations (ODE) Solution of a linear ODE Hints to nonlinear
More informationIdentification Methods for Structural Systems
Prof. Dr. Eleni Chatzi System Stability Fundamentals Overview System Stability Assume given a dynamic system with input u(t) and output x(t). The stability property of a dynamic system can be defined from
More informationNumerical Algorithms as Dynamical Systems
A Study on Numerical Algorithms as Dynamical Systems Moody Chu North Carolina State University What This Study Is About? To recast many numerical algorithms as special dynamical systems, whence to derive
More informationLocal Parametrization and Puiseux Expansion
Chapter 9 Local Parametrization and Puiseux Expansion Let us first give an example of what we want to do in this section. Example 9.0.1. Consider the plane algebraic curve C A (C) defined by the equation
More informationLinear dynamical systems with inputs & outputs
EE263 Autumn 215 S. Boyd and S. Lall Linear dynamical systems with inputs & outputs inputs & outputs: interpretations transfer function impulse and step responses examples 1 Inputs & outputs recall continuous-time
More information3 Gramians and Balanced Realizations
3 Gramians and Balanced Realizations In this lecture, we use an optimization approach to find suitable realizations for truncation and singular perturbation of G. It turns out that the recommended realizations
More informationSubject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)
Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must
More informationNumerical Methods for Differential Equations
Numerical Methods for Differential Equations Chapter 2: Runge Kutta and Linear Multistep methods Gustaf Söderlind and Carmen Arévalo Numerical Analysis, Lund University Textbooks: A First Course in the
More informationModel reduction of large-scale dynamical systems
Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/
More informationMath Ordinary Differential Equations
Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x
More informationIntroduction to Controls
EE 474 Review Exam 1 Name Answer each of the questions. Show your work. Note were essay-type answers are requested. Answer with complete sentences. Incomplete sentences will count heavily against the grade.
More informationLecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University.
Lecture 4 Chapter 4: Lyapunov Stability Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture 4 p. 1/86 Autonomous Systems Consider the autonomous system ẋ
More informationNumerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018
Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More informationTime Response of Systems
Chapter 0 Time Response of Systems 0. Some Standard Time Responses Let us try to get some impulse time responses just by inspection: Poles F (s) f(t) s-plane Time response p =0 s p =0,p 2 =0 s 2 t p =
More informationControl Systems (ECE411) Lectures 7 & 8
(ECE411) Lectures 7 & 8, Professor Department of Electrical and Computer Engineering Colorado State University Fall 2016 Signal Flow Graph Examples Example 3: Find y6 y 1 and y5 y 2. Part (a): Input: y
More information7 Planar systems of linear ODE
7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution
More informationControl Systems. Frequency domain analysis. L. Lanari
Control Systems m i l e r p r a in r e v y n is o Frequency domain analysis L. Lanari outline introduce the Laplace unilateral transform define its properties show its advantages in turning ODEs to algebraic
More informationEE Experiment 11 The Laplace Transform and Control System Characteristics
EE216:11 1 EE 216 - Experiment 11 The Laplace Transform and Control System Characteristics Objectives: To illustrate computer usage in determining inverse Laplace transforms. Also to determine useful signal
More informationZeros and zero dynamics
CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)
More informationEE C128 / ME C134 Final Exam Fall 2014
EE C128 / ME C134 Final Exam Fall 2014 December 19, 2014 Your PRINTED FULL NAME Your STUDENT ID NUMBER Number of additional sheets 1. No computers, no tablets, no connected device (phone etc.) 2. Pocket
More informationFirst and Second Order Circuits. Claudio Talarico, Gonzaga University Spring 2015
First and Second Order Circuits Claudio Talarico, Gonzaga University Spring 2015 Capacitors and Inductors intuition: bucket of charge q = Cv i = C dv dt Resist change of voltage DC open circuit Store voltage
More informationFIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland
FIRST-ORDER SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS III: Autonomous Planar Systems David Levermore Department of Mathematics University of Maryland 4 May 2012 Because the presentation of this material
More information2.161 Signal Processing: Continuous and Discrete Fall 2008
MIT OpenCourseWare http://ocw.mit.edu 2.6 Signal Processing: Continuous and Discrete Fall 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. MASSACHUSETTS
More informationECEN 420 LINEAR CONTROL SYSTEMS. Lecture 2 Laplace Transform I 1/52
1/52 ECEN 420 LINEAR CONTROL SYSTEMS Lecture 2 Laplace Transform I Linear Time Invariant Systems A general LTI system may be described by the linear constant coefficient differential equation: a n d n
More informationSinusoidal Steady State Analysis (AC Analysis) Part I
Sinusoidal Steady State Analysis (AC Analysis) Part I Amin Electronics and Electrical Communications Engineering Department (EECE) Cairo University elc.n102.eng@gmail.com http://scholar.cu.edu.eg/refky/
More informationCHAPTER 5: Linear Multistep Methods
CHAPTER 5: Linear Multistep Methods Multistep: use information from many steps Higher order possible with fewer function evaluations than with RK. Convenient error estimates. Changing stepsize or order
More informationIntroduction to AC Circuits (Capacitors and Inductors)
Introduction to AC Circuits (Capacitors and Inductors) Amin Electronics and Electrical Communications Engineering Department (EECE) Cairo University elc.n102.eng@gmail.com http://scholar.cu.edu.eg/refky/
More informationFIXED POINT ITERATIONS
FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in
More informationTopic # Feedback Control Systems
Topic #17 16.31 Feedback Control Systems Deterministic LQR Optimal control and the Riccati equation Weight Selection Fall 2007 16.31 17 1 Linear Quadratic Regulator (LQR) Have seen the solutions to the
More informationControl Systems I. Lecture 6: Poles and Zeros. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich
Control Systems I Lecture 6: Poles and Zeros Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 27, 2017 E. Frazzoli (ETH) Lecture 6: Control Systems I 27/10/2017
More informationOrdinary Differential Equation Theory
Part I Ordinary Differential Equation Theory 1 Introductory Theory An n th order ODE for y = y(t) has the form Usually it can be written F (t, y, y,.., y (n) ) = y (n) = f(t, y, y,.., y (n 1) ) (Implicit
More informationUnit 2: Modeling in the Frequency Domain Part 2: The Laplace Transform. The Laplace Transform. The need for Laplace
Unit : Modeling in the Frequency Domain Part : Engineering 81: Control Systems I Faculty of Engineering & Applied Science Memorial University of Newfoundland January 1, 010 1 Pair Table Unit, Part : Unit,
More information9. Introduction and Chapter Objectives
Real Analog - Circuits 1 Chapter 9: Introduction to State Variable Models 9. Introduction and Chapter Objectives In our analysis approach of dynamic systems so far, we have defined variables which describe
More informationChapter 2. Engr228 Circuit Analysis. Dr Curtis Nelson
Chapter 2 Engr228 Circuit Analysis Dr Curtis Nelson Chapter 2 Objectives Understand symbols and behavior of the following circuit elements: Independent voltage and current sources; Dependent voltage and
More information16. Local theory of regular singular points and applications
16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic
More informationNotes for course EE1.1 Circuit Analysis TOPIC 4 NODAL ANALYSIS
Notes for course EE1.1 Circuit Analysis 2004-05 TOPIC 4 NODAL ANALYSIS OBJECTIVES 1) To develop Nodal Analysis of Circuits without Voltage Sources 2) To develop Nodal Analysis of Circuits with Voltage
More information