Appendix D: Review of Linear Time-Invariant Network Analysis


Consider a network with input x(t) and output y(t) as shown in Figure D-1. If an input x_1(t) produces an output y_1(t), and an input x_2(t) produces an output y_2(t), then the network is linear if and only if an input x(t) = a x_1(t) + b x_2(t) produces an output y(t) = a y_1(t) + b y_2(t). This property is known as superposition and can be described mathematically by

    y(t) = \sum_{k=1}^{N} a_k \, y_k(t - \tau_k),                                      (D.1)

where y(t) is the overall output, y_k(t) is the output corresponding to a particular input x_k(t), and the overall input x(t) is given by

    x(t) = \sum_{k=1}^{N} a_k \, x_k(t - \tau_k).                                      (D.2)

Figure D-1  An electronic network.
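As a quick illustration of the superposition property, the following Python sketch (not part of the original text; the discrete-time system and all variable names are assumptions chosen for the example) checks numerically that a simple LTI difference equation produces a y_1 + b y_2 when the input is a x_1 + b x_2.

```python
# A minimal numeric sketch (not part of the original text): checking superposition
# for a simple discrete-time LTI system, y[n] = 0.9*y[n-1] + 0.1*x[n].
# The example system and all variable names are assumptions chosen for illustration.
import numpy as np

def lti_system(x):
    """First-order low-pass difference equation with zero initial state."""
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = 0.9 * y[n - 1] + 0.1 * x[n]
    return y

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(200), rng.standard_normal(200)
a, b = 2.0, -3.0

lhs = lti_system(a * x1 + b * x2)              # response to the combined input
rhs = a * lti_system(x1) + b * lti_system(x2)  # combination of the individual responses
print(np.allclose(lhs, rhs))                   # True: superposition holds
```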

Superposition is an important property and can be considered to be the defining property of a linear network; that is, a network is linear if and only if superposition holds. If, in addition to being linear, the response of the network is independent of when an input is applied, then the network is called a linear time-invariant (LTI) network.

The analysis of an LTI network can be simplified even further by recognizing that if arbitrary signals can be represented as a summation of a particular set of basis signals [1], then knowing the output of the network for each of these basis signals is sufficient to predict the output for any input. The property of time invariance is important here, since we only need to know the outputs corresponding to the basis signals at one time, even though the input may be made up of a summation of scaled and time-shifted versions of these basis signals. Therefore, it is of value to explore ways of representing arbitrary signals in terms of simple basis signals [2]. In Section D1 we examine the use of singularity functions to represent signals. Then, in Section D2, the Fourier series representation of periodic signals is examined, and in Section D3 the Fourier series is extended to non-periodic signals, resulting in the Fourier and Laplace transforms. Sections D4 and D5 tie all the material together by presenting the complex exponential representation of signals and the response of networks to these signals.

D1 Singularity Functions

One way of representing complicated signals is to use singularity functions like the unit step function, defined by

    U_1(t) = \begin{cases} 0, & t < 0 \\ 1, & t > 0 \end{cases}                        (D.3)

For example, the function x(t) in Figure D-2 can be approximated by

    x(t) \approx \sum_{n=-\infty}^{\infty} \left[ x(n\Delta t) - x((n-1)\Delta t) \right] U_1(t - n\Delta t),      (D.4)

which is a stair-step approximation, as shown in the figure.

[1] A geometrical picture is sometimes useful here. Picture these basis signals as representing basis vectors in some space (perhaps with more than three dimensions, but, fortunately, this need not be pictured!). This space is formally called a signal space and, if every signal in the space can be represented by a summation of some simple set of signals, then that set of signals spans the space. This geometrical picture can be pursued further: if the set of signals chosen as the basis are all orthogonal to one another, then they represent the minimum set of signals necessary to span the space, and the problem of finding the components of a complicated signal is simplified (the components are the projections of the signal onto the basis signals); see (D.10).

[2] The following treatment closely parallels that presented in The Analysis of Linear Circuits, by C. M. Close, 1966, Harcourt, Brace & World.

If the response to the unit step function is known, then an approximation of the response to the input x(t) can be found using superposition and (D.4). If Δt is allowed to go to zero, then an exact expression for y(t) can be found.

Figure D-2  Representing a complicated function of time as a summation of step functions.

In the limit as Δt goes to zero, each of the steps in the stair-step approximation in Figure D-2 becomes an impulse and the summation in (D.4) becomes an integral. Remember that the Dirac delta function is defined by

    \delta(t) = \begin{cases} 0, & t \ne 0 \\ \infty, & t = 0 \end{cases}
    \qquad \text{with} \qquad \int_{a<0}^{b>0} \delta(t)\, dt = 1,                     (D.5)

and the impulse response of a network, h(t), is defined to be the output of the network when δ(t) is the input (i.e., if x(t) = δ(t), then y(t) = h(t)). Therefore, the output of a network having impulse response h(t) and input x(t) is given by

    y(t) = \lim_{\Delta t \to 0} \sum_{n=-\infty}^{\infty} h(t - n\Delta t)\, x(n\Delta t)\, \Delta t,             (D.6)

where x(nΔt)Δt is equal to the area of the step starting at time nΔt, as shown in Figure D-2. In the limit as Δt goes to zero, (D.6) becomes an integral:

    y(t) = \int_{-\infty}^{\infty} h(t - \lambda)\, x(\lambda)\, d\lambda.             (D.7)

Equation (D.7) is called the superposition or, more commonly, the convolution integral. Because the operation of convolution occurs so frequently and is so important, it is given a special notation. Equation (D.7) can, therefore, also be written

    y(t) = h(t) * x(t).                                                                (D.8)

Although the convolution integral is sometimes difficult to evaluate, it does have the advantage of decomposing any complicated (but real) function of time into an infinite sum of scaled, time-shifted versions of a single function. It also turns out (as will soon be shown) that convolution in the time domain is equivalent to multiplication in the frequency domain, so the calculation indicated by (D.8) can indeed be carried out fairly simply through the use of transforms.
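The Riemann-sum form (D.6) suggests a direct numerical check of the convolution integral. The sketch below is illustrative only; it borrows the single-pole RC impulse response that appears later in Section D5, and the time constant, step size, and unit-step input are assumptions chosen for the example.

```python
# An illustrative sketch (not from the original text): evaluating the convolution
# integral (D.7) as the Riemann sum (D.6).  The impulse response used here,
# h(t) = (1/RC)e^(-t/RC) for t >= 0, belongs to the single-pole RC low-pass
# filter discussed in Section D5; RC, dt, and the input are assumptions.
import numpy as np

RC = 1e-3                                # time constant, seconds
dt = 1e-5                                # the Delta-t of (D.6)
t = np.arange(0.0, 10 * RC, dt)

h = (1.0 / RC) * np.exp(-t / RC)         # sampled impulse response
x = np.ones_like(t)                      # unit step applied at t = 0

# Discrete approximation of y(t) = integral of h(t - lambda) * x(lambda) d(lambda)
y = np.convolve(h, x)[: len(t)] * dt

y_exact = 1.0 - np.exp(-t / RC)          # known step response of the RC filter
print(np.max(np.abs(y - y_exact)))       # small (on the order of dt/RC)
```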

D2 Fourier Series

The Fourier series can be used to decompose any periodic function of time into a sum of sinusoids. These sinusoids all occur at frequencies that are integer multiples of the frequency of the periodic input. In other words, if x(t + T) = x(t) for all t, then x(t) is periodic in t with period T and frequency ω_o = 2π/T, and can be represented as

    x(t) = a_o + \sum_{n=1}^{\infty} \left( a_n \cos n\omega_o t + b_n \sin n\omega_o t \right).       (D.9)

The coefficients in (D.9) are determined by the following equations:

    a_o = \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, dt

    a_n = \frac{2}{T} \int_{-T/2}^{T/2} x(t) \cos n\omega_o t\, dt  \qquad (n \ne 0)                   (D.10)

    b_n = \frac{2}{T} \int_{-T/2}^{T/2} x(t) \sin n\omega_o t\, dt

The a_o term is just the average value of x(t). As an example of how the other terms are calculated, consider a_n: if both sides of (D.9) are multiplied by cos(nω_o t) and integrated over one period, then all of the integrals on the right side except for one are equal to zero, and the middle equation in (D.10) results.

The fact that the other integrals are equal to zero shows that the different terms in the Fourier series are orthogonal over the period T. The a_n's represent the amplitudes of the cosine components of x(t) at the frequencies nω_o and, similarly, the b_n's represent the amplitudes of the sine components of x(t) at the frequencies nω_o. Therefore, these coefficients describe a discrete frequency spectrum for the signal x(t). The frequency spectrum is just a picture that shows how the amplitude (or energy, if we square the coefficients) of x(t) is distributed in frequency rather than in time. All periodic signals have spectra that are discrete (i.e., they consist of lines at discrete frequencies that are multiples of ω_o).

The Fourier series combined with superposition shows why the sinusoidal steady-state response of an LTI network is so important: if the response is known for all frequencies, then the output due to any periodic input can be calculated.

Before moving on to the third way of representing complicated signals, it is worthwhile to show a different form for the Fourier series: the complex exponential form. The complex exponential form is more compact and leads directly to the Fourier transform, as shown in the following section. In order to present the complex exponential form of the Fourier series, a short aside on complex exponentials will be helpful.

Aside AD.1 Complex Exponentials

Remember that solving RLC circuits requires solving differential equations (since i_C = C dv_C/dt and v_L = L di_L/dt); therefore, the solutions will be combinations of exponentials (since de^{at}/dt = ae^{at}). The complex exponential can be defined by requiring that, for z = x + jy, e^z must possess the same characteristics as the real exponential e^x (e.g., e^x e^y = e^{x+y}). In particular, the derivative of e^z must also be e^z. These requirements lead to

    e^z \equiv e^x (\cos y + j \sin y).                                                (AD.1)

For purely imaginary numbers (i.e., z = jθ) we get

    e^{j\theta} = \cos\theta + j\sin\theta,                                            (AD.2)

which leads directly to Euler's identities:

    \cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2}, \qquad
    \sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{2j}.                                (AD.3)
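Euler's identities are easy to verify numerically; the short sketch below (illustrative, not part of the original text) confirms (AD.3) by rebuilding a real cosine and sine from pairs of complex-conjugate exponentials.

```python
# A quick numeric confirmation of Euler's identities (AD.3) (illustrative, not
# part of the original text): a real cosine and sine rebuilt from pairs of
# complex-conjugate exponentials.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 1000)

cos_from_exp = (np.exp(1j * theta) + np.exp(-1j * theta)) / 2.0
sin_from_exp = (np.exp(1j * theta) - np.exp(-1j * theta)) / (2.0 * 1j)

print(np.allclose(np.cos(theta), cos_from_exp))   # True
print(np.allclose(np.sin(theta), sin_from_exp))   # True
```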

Using Euler's identities, the complex form of the Fourier series can be found directly from (D.9):

    x(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_o t}, \qquad
    c_n = \frac{1}{T} \int_{-T/2}^{T/2} x(t) e^{-jn\omega_o t}\, dt.                   (D.11)

D3 Fourier and Laplace Transforms

In this section, we modify the Fourier series to allow for non-periodic signals. The result is the Fourier transform, wherein the original non-periodic signal is essentially decomposed into an infinite summation of complex exponentials. Starting from (D.11), we want to examine the limit as T approaches infinity. The equation must be modified slightly, however, since c_n goes to zero as T approaches infinity. Since c_n is the amplitude of the complex exponential component at the frequency nω_o, it makes sense to define an analogous quantity for the continuous spectrum. Therefore, define the amplitude per unit frequency (or spectral density) x(ω) as follows (note that the spacing of the components is Δω = ω_o):

    x(n\Delta\omega) = \frac{c_n}{\Delta\omega} = \frac{c_n}{\omega_o}
                     = \frac{1}{2\pi} \int_{-T/2}^{T/2} x(t) e^{-jn\omega_o t}\, dt.   (D.12)

Now substitute c_n = x(nω_o)Δω into (D.11) to get

    x(t) = \sum_{n=-\infty}^{\infty} x(n\omega_o) e^{jn\omega_o t}\, \Delta\omega.     (D.13)

Finally, take the limits of (D.12) and (D.13) as T approaches infinity (or, equivalently, as Δω goes to zero) to find the relationships between the time- and frequency-domain representations of a non-periodic signal. The result is the Fourier transform pair shown below (we have substituted x(ω) = X(ω)/2π to obtain the standard form):

    X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t}\, dt, \qquad
    x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega) e^{j\omega t}\, d\omega.   (D.14)

In (D.14), X(ω) is the Fourier transform of x(t), and x(t) is the inverse Fourier transform of X(ω). From (D.14) it can be seen that the frequency spectrum of a non-periodic signal is a continuous function of frequency.
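As a numerical illustration of the forward transform in (D.14) (not part of the original text), the sketch below evaluates the integral for a decaying exponential x(t) = e^{-at}, t >= 0, and compares it with the familiar closed-form result X(ω) = 1/(a + jω); the decay rate and the integration grid are assumptions chosen for the example.

```python
# An illustrative sketch (not part of the original text): evaluating the forward
# integral in (D.14) numerically for x(t) = e^(-a*t), t >= 0, and comparing it
# with the closed-form result X(w) = 1/(a + jw).  The decay rate a and the
# integration grid are assumptions chosen for the example.
import numpy as np

a = 2000.0                                    # decay rate, 1/s
t = np.linspace(0.0, 20.0 / a, 40001)         # x(t) is negligible beyond t = 20/a
dt = t[1] - t[0]
x = np.exp(-a * t)

w = np.linspace(0.0, 10.0 * a, 50)            # frequencies at which to evaluate X
X_num = np.array([np.sum(x * np.exp(-1j * wk * t)) * dt for wk in w])
X_exact = 1.0 / (a + 1j * w)

print(np.max(np.abs(X_num - X_exact)))        # small compared with |X(0)| = 1/a
```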

The Fourier transform is a very useful tool for the analysis of LTI networks, but it does suffer from one problem: it fails to converge for many interesting functions of time (e.g., strictly speaking, the Fourier transform of sin ωt does not exist). This difficulty can be overcome in a straightforward way, however. If the integral for finding X(ω) does not converge, then a new function can be defined as x'(t) = x(t)e^{-σt}, where σ is chosen to be large enough to guarantee convergence of the Fourier transform of x'(t):

    X'(\sigma, \omega) = \int_{-\infty}^{\infty} e^{-\sigma t} x(t) e^{-j\omega t}\, dt
                       = \int_{-\infty}^{\infty} x(t) e^{-(\sigma + j\omega)t}\, dt.   (D.15)

The Fourier transform of x'(t) is a function of both ω and σ, and is equal to the Laplace transform of the original function x(t). Making the substitution s = σ + jω in both (D.15) and the second integral in (D.14) results in the Laplace transform pair

    X(s) = \int_{-\infty}^{\infty} x(t) e^{-st}\, dt, \qquad
    x(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} X(s) e^{st}\, ds.   (D.16)

In practice, the inverse Laplace transform is rarely determined directly from the integral relation, since the complex integration can be very difficult to perform. Instead, partial-fraction expansions [3] and a table of Laplace transforms are usually used.

We have finally arrived at the third and most powerful way of representing complicated input signals as summations of simple signals. In the case of the Laplace transform, the signal is decomposed into an infinite summation (i.e., an integration) of complex exponentials in the variable s = σ + jω. Therefore, remembering superposition, the problem of analyzing the response of an LTI network to any arbitrary input signal reduces to finding the network's response to complex exponential signals. The next section presents some examples of complex exponential signals.

[3] Most books on linear system theory have a discussion of partial-fraction expansions.
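As an illustration of the partial-fraction route around the contour integral in (D.16) (this example is not from the original text), the sketch below expands a simple rational X(s) with scipy.signal.residue and reads the corresponding time function from the standard table entry r/(s - p) <-> r e^{pt}; the particular X(s) is an assumption chosen for the example.

```python
# An illustrative sketch (not from the original text): rather than evaluating
# the contour integral in (D.16), expand a rational X(s) in partial fractions
# with scipy.signal.residue and read the time function from the table entry
# r/(s - p)  <-->  r*e^(p*t), t >= 0.  The particular X(s) is an assumption.
import numpy as np
from scipy.signal import residue

num = [1.0]                                   # numerator of X(s) = 1/((s+1)(s+2))
den = np.polymul([1.0, 1.0], [1.0, 2.0])      # (s + 1)(s + 2) = s^2 + 3s + 2

r, p, k = residue(num, den)                   # X(s) = sum_i r[i]/(s - p[i]) + direct terms k
print(r, p)                                   # residues ~ [1, -1] at poles ~ [-1, -2] (order may vary)

t = np.linspace(0.0, 5.0, 501)
x = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real
# x(t) = e^(-t) - e^(-2t), the inverse Laplace transform of X(s)
```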

D4 Complex Exponential Signals

By replacing z with st in (AD.1), a complex exponential signal can be expressed as

    x(t) = e^{st} = e^{\sigma t}(\cos\omega t + j\sin\omega t).                        (D.17)

Note that if a real sinusoidal function of time is desired, two complex exponentials with complex-conjugate arguments must be used. From (D.17) it follows that

    x(t) = \frac{e^{st} + e^{s^* t}}{2} = e^{\sigma t}\cos\omega t.                    (D.18)

Equation (D.18) is a generalization of the first equation in (AD.3). It is instructive to examine several different functions of time as described by (D.18) with different values for s. Figure D-3 shows some values for s plotted in the complex plane (marked with an "x") along with the corresponding function of time as given by (D.18). If only one value of s is shown, then (D.18) simplifies to a single exponential.

Note that signals with corresponding values of s in the left half plane (LHP) have amplitudes that decrease exponentially with time. These signals do have Fourier transforms and, in fact, the Fourier transform still converges even if the signal is multiplied by an exponentially increasing function of time (with exponent σ) as in (D.15) (remember that σ is negative for s in the LHP). Signals with corresponding values of s in the right half plane (RHP) have amplitudes that increase exponentially with time. These signals do not have Fourier transforms unless they are first multiplied by a function of time that decreases exponentially with time (with exponent -σ). Signals with corresponding values of s on the jω axis have constant amplitudes.

It is somewhat paradoxical that real signals, that is, signals that can exist in steady state, must have corresponding values of s on the imaginary axis. This statement is true because complex exponential signals with s in either the LHP or the RHP have amplitudes that either grow without bound or have been decreasing forever. Therefore, it would require infinite energy to generate such a signal for all time (i.e., in steady state). These signals can, of course, be approximated for short periods of time, but they cannot exist in steady state.
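The following sketch (illustrative, not from the original text; the particular s values are assumptions) generates the real signal of (D.18) for one value of s in the LHP, one on the jω axis, and one in the RHP, showing the decaying, constant-amplitude, and growing behaviors described above.

```python
# An illustrative sketch (not from the original text): the real signal of (D.18),
# x(t) = Re{e^(s*t)} = e^(sigma*t)*cos(omega*t), for one s in the LHP, one on the
# j-omega axis, and one in the RHP.  The particular values of s are assumptions.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)

for s in (-0.5 + 4j,    # LHP: exponentially decaying sinusoid
           0.0 + 4j,    # j-omega axis: constant-amplitude sinusoid
           0.5 + 4j):   # RHP: exponentially growing sinusoid
    x = np.real(np.exp(s * t))
    print(f"s = {s}: envelope goes from {np.exp(s.real * t[0]):.3f} to {np.exp(s.real * t[-1]):.3f}")
```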

Figure D-3  Several values of s and their corresponding signals given by (D.18). When only one value of s is given, (D.18) simplifies to a single exponential.

D5 Response of an LTI Network to e^{st}

Using complex exponential representations of signals leads to a simplified method for finding the forced and transient responses of LTI networks. Rather than solving differential equations, the problem reduces to algebraic manipulation of equations that are functions of the complex frequency s. To see how this simplification comes about, consider the response of a capacitor when the voltage across it is assumed to have the form v_C(t) = V_c e^{st}:

    i_C = C \frac{dv_C}{dt} = V_c \, sC \, e^{st}.                                     (D.19)

The complex impedance of the capacitor is then defined by

    Z_C(s) = \frac{v_C(t)}{i_C(t)} = \frac{1}{sC}.                                     (D.20)

Similarly, the complex impedance of an inductor is found to be Z_L(s) = sL.

For the steady-state response of an LTI circuit with a forcing function of the form v(t) = V e^{st} (the forcing function could just as well be a current), all of the voltages and currents in the circuit will have the same form. That is, they will all be some multiple of e^{st}, as is demonstrated by (D.19). The multipliers may, in general, be complex (as, for example, the one in (D.19) is). Therefore, in performing circuit analysis, the e^{st} can be left off with the understanding that it multiplies all voltages and currents in the circuit.

As an example of circuit analysis using complex exponential forcing functions, consider the single-pole low-pass filter (LPF) shown in Figure D-4. Using the complex impedance for the capacitor we can write [4]

    I(s) = \frac{V_i(s)}{R + \frac{1}{sC}},                                            (D.21)

and

    V_o(s) = I(s) Z_C(s) = \frac{I(s)}{sC},                                            (D.22)

which leads to the transfer function for the circuit, H(s):

    H(s) = \frac{V_o(s)}{V_i(s)} = \frac{1}{1 + sRC}.                                  (D.23)

[4] Upper-case variables with lower-case subscripts are used to denote complex quantities that are not explicit functions of time (i.e., the e^{st} is removed), but which are understood to represent functions of time. These quantities are called phasors.

Figure D-4  A single-pole low-pass filter (LPF).

Even though the analysis has only been carried out once (for an input v_i(t) = V_i e^{st}), the resulting transfer function is valid for all complex frequencies, since they all have the same form. The analysis also proceeded along the same lines as the dc analysis of a purely resistive circuit. All we had to do was use the complex impedance of the capacitor in precisely the same way you would use the resistance of a resistor. We can, therefore, see that the Laplace transform makes the job of analyzing the frequency response of LTI circuits much easier. Before we get too excited, though, we need to finish the example. What is the output, v_o(t), for an arbitrary input v_i(t)? Suppose the input can be expressed as a sum of complex exponentials,

    v_i(t) = \sum_{k=1}^{N} V_{ik} e^{s_k t}.                                          (D.24)

What is the output for this case? Using superposition, (D.23), and (D.24), the output is

    v_o(t) = \sum_{k=1}^{N} V_{ik} H(s_k) e^{s_k t}.                                   (D.25)

By using a summation of discrete frequencies for v_i(t), we have limited the input to being periodic. A more general result can be obtained by converting the amplitude for each discrete complex frequency into an amplitude density (as was done in converting (D.11) into (D.14)):

    V_i(s_k) = \frac{2\pi V_{ik}}{\Delta s}.                                           (D.26)

Substituting (D.26) into (D.25), letting the summation be over k = -∞ to +∞, and taking the limit as Δs goes to zero leads to the inverse Laplace transform for the output:

    v_o(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} H(s) V_i(s) e^{st}\, ds.      (D.27)

In (D.27), we recognize that H(s)V_i(s) = V_o(s). This final equation, therefore, points out a very important property: the transform of the output of an LTI network is equal to the transform of the input multiplied by the Laplace-domain transfer function. So, to find the output in the transform (or frequency) domain, we only have to perform a multiplication, which is much easier than the convolution required in the time domain. Also note that if v_i(t) = δ(t), then V_i(s) = 1 and V_o(s) = H(s); but, by definition, the response to δ(t) is h(t), so H(s) must be the Laplace transform of h(t). Therefore, we have proven that convolving two functions in the time domain is equivalent to multiplying their transforms together, and that the impulse response and transfer function of an LTI network are a Laplace transform pair.

Let's now reconsider the H(s) given by (D.23). Figure D-5 shows both the position of the pole of H(s) in the complex plane and the magnitude and phase plots for H(jω), given by

    H(j\omega) = \frac{1}{1 + j\omega RC} = \frac{-p}{j\omega - p},                    (D.28)

where p = -1/RC. Remember that a pole of a network is simply a value of s for which the magnitude of H(s) goes to infinity. Unless a computer is used, these magnitude and phase plots are usually drawn using an asymptotic approximation. The results are called Bode plots and are described in Aside AD.2 at the end of this appendix.

Since steady-state signals are restricted to the imaginary axis, the plots for H(jω) correspond to the response of the network when driven by an exponential input of the form given in (D.18) while sweeping s from zero to j∞ along the imaginary axis. We only show positive frequencies in Figure D-5 because the magnitude response is an even function of ω and the phase response is an odd function of ω. The Bode plots correspond to the magnitude and phase responses you would measure for the network if you used a sinusoidal input signal and swept the frequency; remember from (AD.3) that we need to sum two exponentials with purely imaginary arguments to obtain a sinusoidal waveform. The negative frequencies that we don't show are an artifact of using complex exponentials to represent the sinusoidal signal.
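A short numerical check of (D.28) (not part of the original text; the component values are assumptions) confirms the -3 dB magnitude and -45° phase at the pole frequency 1/RC quoted in the next paragraph.

```python
# A short numeric check of (D.28) (not from the original text); the component
# values below are assumptions chosen for the example.
import numpy as np

R, C = 1e3, 1e-6                          # 1 kOhm, 1 uF -> pole frequency 1/RC = 1000 rad/s
wp = 1.0 / (R * C)

def H(w):
    """Transfer function of the single-pole RC low-pass filter, (D.28)."""
    return 1.0 / (1.0 + 1j * w * R * C)

print(20 * np.log10(abs(H(wp))))          # about -3.01 dB at the pole frequency
print(np.degrees(np.angle(H(wp))))        # -45 degrees at the pole frequency
print(20 * np.log10(abs(H(100 * wp))))    # about -40 dB two decades above (-20 dB/decade)
```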

The denominator of H(jω) corresponds to the vector shown in Figure D-5 when ω = ω_1. As ω is swept from dc to ω_p, the magnitude of the vector changes slowly from 1/RC to √2/RC. As ω increases to infinity, the magnitude of the vector goes to infinity. Since the magnitude of H(jω) is inversely proportional to the magnitude of the vector, it starts at zero dB, goes to -3 dB at ω_p, and then goes to negative infinity dB (i.e., the gain goes to zero) as ω goes to infinity. The angle of H(jω) is the negative of the angle of the vector and can be seen to start at 0°, go through -45° at ω_p, and then go to -90° as ω goes to infinity.

Figure D-5  The pole position in the complex plane and the Bode plots for the LPF of Figure D-4.

So far, we have only analyzed the steady-state (or forced) response of an LTI network using the Laplace transform. The s-domain transfer function of a circuit, however, also contains information about its natural response. Notice that when H(s) goes to infinity, which for this example occurs at s = p = -1/RC, the implication is that an output can occur with no input. In other words, v_o(t) can be finite even when v_i(t) is zero. Therefore, if there is some initial stored energy in the circuit (in this example it would be charge stored on the capacitor), the natural response will be a complex exponential with s = -1/RC. In general, the natural response of an LTI network is described by the roots of the denominator polynomial in H(s). In fact, the reason we used x's to mark the values of s in Figure D-3 is that, if these are the poles of a network, the corresponding time function shown represents the natural response of the network.

In other words, it is the transient response that results if some initial conditions are present. We will not pursue this topic further here because we are mostly concerned with the forced response.

Aside AD.2 Bode Plots

Bode plots (pronounced "Bo-da," where the a is like the final a in banana, not "Bode") are useful approximations for magnitude and phase plots like those shown in Figure D-5. Consider a single term (1 + jω/ω_o); this term might appear in either the numerator or the denominator of a transfer function. For extreme values of ω, the magnitude of this term can be approximated as

    \left| 1 + \frac{j\omega}{\omega_o} \right| \approx
    \begin{cases} 1, & \omega \ll \omega_o \\ \omega/\omega_o, & \omega \gg \omega_o \end{cases}       (AD.4)

The phase of this term is tan⁻¹(ω/ω_o) and can be approximated in different ways, as will be presented later. Note that the two approximations indicated in (AD.4) represent straight lines if we plot magnitude versus ω. For ω ≪ ω_o the magnitude is approximately constant and equal to 1 (0 dB), while for ω ≫ ω_o the magnitude increases with a constant slope. If we plot amplitude in dB (i.e., 20 times the log of the magnitude) versus the log of frequency, then every time ω is increased ten times (i.e., a change of one decade), the magnitude goes up by 20 dB; in other words, the slope is 20 dB/decade. Figure AD-1 shows both the exact plot of the magnitude of (1 + jω/ω_o) and the two straight lines indicated by the approximations in (AD.4). Notice that the two straight lines intersect at ω = ω_o. If we use the Bode approximation to the magnitude plot, the largest error occurs for ω = ω_o and is 3 dB, as shown in the figure. Considering the ease with which this straight-line asymptotic approximation can be drawn, the error is not too bad.
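The sketch below (illustrative, not from the original text; the corner frequency is an assumption) compares the exact magnitude of a (1 + jω/ω_o) term with the straight-line approximation of (AD.4) and confirms the 3 dB worst-case error at ω = ω_o.

```python
# An illustrative sketch (not from the original text): the straight-line (Bode)
# approximation of (AD.4) compared with the exact magnitude of 1 + jw/wo.
# The corner frequency wo chosen here is an assumption for the example.
import numpy as np

wo = 1e3                                   # corner (break) frequency, rad/s
w = np.logspace(1, 5, 401)                 # two decades on either side of wo

exact_db = 20 * np.log10(np.abs(1 + 1j * w / wo))
bode_db = np.where(w < wo, 0.0, 20 * np.log10(w / wo))   # 0 dB, then +20 dB/decade

err = exact_db - bode_db
print(np.max(err))                         # about 3.01 dB
print(w[np.argmax(err)])                   # occurs at w = wo
```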

Figure AD-1  Exact magnitude plot of (1 + jω/ω_o) and the asymptotic approximation that is called a Bode plot.

Now consider a complete transfer function:

    H(j\omega) = \frac{H_o \left(1 + \frac{j\omega}{\omega_1}\right)}
                      {\left(1 + \frac{j\omega}{\omega_2}\right)\left(1 + \frac{j\omega}{\omega_3}\right)}.        (AD.5)

The magnitude of this transfer function, in dB, can be written

    |H(j\omega)| \text{ in dB} = 20\log|H_o| + 20\log\left|1 + \frac{j\omega}{\omega_1}\right|
                                 - 20\log\left|1 + \frac{j\omega}{\omega_2}\right|
                                 - 20\log\left|1 + \frac{j\omega}{\omega_3}\right|.                                 (AD.6)

Using the approximation given by (AD.4) for the last three terms in (AD.6), Figure AD-2 shows plots of each individual term in (AD.6) as well as the sum of all the terms and the exact magnitude plot. Notice that, so long as the three break frequencies are well separated (in practice one decade is sufficient), the resulting Bode plot for (AD.5) will be off by about 3 dB at each break frequency, but never by more than that.
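The same idea extends to the complete transfer function: because (AD.6) is a sum of terms in dB, the composite Bode plot is just the sum of the straight-line approximations of the individual terms. The sketch below (illustrative, not from the original text; the gain and break frequencies are assumptions) confirms that, with well-separated break frequencies, the error stays near 3 dB at each break.

```python
# An illustrative sketch (not from the original text): building the dB magnitude
# of (AD.5)/(AD.6) as a sum of straight-line terms and comparing it with the
# exact curve.  The gain and break frequencies are assumptions for the example.
import numpy as np

Ho, w1, w2, w3 = 10.0, 1e2, 1e4, 1e6       # well-separated break frequencies
w = np.logspace(0, 8, 801)

def term_db(w, wo):
    """Exact dB magnitude of a (1 + jw/wo) factor."""
    return 20 * np.log10(np.abs(1 + 1j * w / wo))

def term_db_bode(w, wo):
    """Straight-line (Bode) approximation of the same factor, per (AD.4)."""
    return np.where(w < wo, 0.0, 20 * np.log10(w / wo))

exact = 20 * np.log10(Ho) + term_db(w, w1) - term_db(w, w2) - term_db(w, w3)
bode = 20 * np.log10(Ho) + term_db_bode(w, w1) - term_db_bode(w, w2) - term_db_bode(w, w3)

print(np.max(np.abs(exact - bode)))        # about 3 dB, at the break frequencies
```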

Figure AD-2  Exact plots and Bode plots for (AD.6): (a) each individual term in (AD.6); (b) the sum of all the terms.

Different straight-line approximations to the phase of (1 + jω/ω_o) are in use. The most commonly employed approximation is shown in Figure AD-3 along with the exact phase. The approximation uses a low-frequency asymptote of 0°, a high-frequency asymptote of 90°, and a straight line going through 45° at the break frequency with a slope of 45°/decade. The largest error occurs at the breakpoints, 0.1ω_o and 10ω_o, and is equal to 5.7°.

Figure AD-3  Exact phase plot of (1 + jω/ω_o) and the Bode phase plot.
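The phase approximation is just as easy to check numerically. The sketch below (illustrative, not from the original text; ω_o is an assumption) compares the exact phase arctan(ω/ω_o) with the piecewise-linear approximation described above and confirms the 5.7° worst-case error at 0.1ω_o and 10ω_o.

```python
# An illustrative sketch (not from the original text): the piecewise-linear phase
# approximation described above versus the exact phase arctan(w/wo), showing the
# ~5.7 degree worst-case error at 0.1*wo and 10*wo.  wo is an assumption.
import numpy as np

wo = 1e3
w = np.logspace(1, 5, 801)                 # 0.01*wo to 100*wo

exact_deg = np.degrees(np.arctan(w / wo))

# 0 deg below 0.1*wo, 90 deg above 10*wo, 45 deg/decade in between
bode_deg = np.clip(45.0 * (np.log10(w / wo) + 1.0), 0.0, 90.0)

err = np.abs(exact_deg - bode_deg)
print(np.max(err))                         # about 5.7 degrees
print(w[np.argmax(err)] / wo)              # about 0.1 (the same error occurs at 10)
```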