DDAEs in Control Theory


L.F. Shampine*    P. Gahinet**

March 17, 2004

* Mathematics Department, Southern Methodist University, Dallas, TX 75275, lshampin@mail.smu.edu
** The MathWorks, 3 Apple Hill, Natick, MA 01760, pascal@mathworks.com

Abstract

Linear time invariant systems are basic models in control theory. When they are generalized to include state delays, the resulting models are described by a system of DDAEs, delay-differential-algebraic equations. We exploit the special properties of the models that arise in our control theory application to solve these difficult problems with enough accuracy and enough speed for design purposes.

1 Introduction

Linear time invariant (LTI) systems are basic models in control theory. Models with state delays arise naturally when building feedback interconnections of elementary transfer functions with input or output delays. It is well known that delayed arguments can profoundly affect the behavior of a dynamical system. Accordingly, in this paper we study the numerical solution of generalized LTI (GENLTI) systems of the form

    dx/dt = A x(t) + B_1 u(t) + B_2 w(t)        (1)
    z(t) = C_2 x(t) + D_21 u(t) + D_22 w(t)     (2)

The matrices A, B_1, B_2, C_2, D_21, D_22 are constant and the input function u(t) is piecewise smooth for t > 0. The function w is defined in terms of the (column) vector z of N components and N (constant) lags τ_1, ..., τ_N as

    w(t) = [z_1(t - τ_1), ..., z_N(t - τ_N)]^T

The simulation begins at t = 0 from rest: (u(t), x(t), z(t)) = (0, 0, 0) for t <= 0. Along with computing x(t), z(t) on an interval [0, T_f], we must evaluate the output function

    y(t) = C_1 x(t) + D_11 u(t) + D_12 w(t)     (3)

defined by constant matrices C_1, D_11, D_12. Typically the input function u(t) has a jump at t = 0. The solution x(t) is continuous there by definition, but a jump in u(t) induces jumps in z(t), x'(t), y(t). The lags then cause these jumps to propagate throughout [0, T_f]. We recognize that (1,2) is a system of delay differential algebraic equations (DDAEs). As explained in the introductory texts [2, 9], we still have much to learn about how to solve DAEs and DDEs numerically by themselves, much less in combination. DDAEs have received little attention in the literature and the results available are rather specific to the application, cf. [1] and the references therein. That will also be true of the present application. Because we intend to use GENLTI models in design, it is crucial that they be solved quickly. To accomplish this, we must take full advantage of their special form. For instance, we can preprocess the models to guarantee that the lags are all positive. A novelty is that in this natural formulation of the task, the lags may not be distinct. We begin with a few more details about the task and a concrete example so as to understand better what must be done. We then consider the existence and smoothness of solutions. Although it is better for our purposes to study the equations directly, (1,2) can be reduced to a set of DDEs by differentiating the algebraic equations. These equations are of neutral type, which helps us understand why discontinuities propagate throughout the interval of integration. We believe that it is best to deal directly with the DDAEs, but even if we were to solve the equivalent DDEs, we would need a special code instead of relying on an effective code for neutral DDEs such as ARCHI [7], DKLAG6 [3], and DDVERK [4]. Like these codes for DDEs, we have implemented a method of steps based on a one-step method. Unlike them, we must be able to solve stiff problems, too.
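The construction of the delayed vector w(t) from the components of z can be sketched as follows. This is an illustrative Python sketch, not the authors' Matlab code; the helper `make_w` and the toy history function `z` are assumptions for the example.

```python
import numpy as np

def make_w(z, lags):
    """Build w(t) = [z_1(t - tau_1), ..., z_N(t - tau_N)]^T from a
    vector-valued function z(t) (identically zero for t <= 0, the rest
    state) and the list of constant lags."""
    def w(t):
        # Component j uses its own delayed argument t - tau_j.
        return np.array([z(t - tau)[j] for j, tau in enumerate(lags)])
    return w

# Toy history: at rest for t <= 0, constant afterwards.
z = lambda t: np.zeros(2) if t <= 0 else np.array([1.0, 2.0])
w = make_w(z, [0.02, 0.08])
```

Note that each component of w samples a different past time, which is one reason the evaluation of w(t) is awkward to vectorize when the lags differ.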
By exploiting the linearity of the equations (1,2), we can integrate them much faster than a code intended for general equations. We can further exploit the fact that only certain kinds of input functions u(t) arise in the application. Like the DDE solvers DDVERK and ddesd [8], we estimate and control the size of the residual. This is key to solving efficiently problems like (1,2) that can have many jump discontinuities. On physical grounds we expect that the sizes of jumps will decay in the course of integrating a stable GENLTI model, and our codes take this into account. As with the DDE solvers cited, we must also cope with lags that are small compared to the length of the interval of integration. Clearly the numerical solution of GENLTI models is a challenging task, but we have found that by exploiting the form of the equations, it is possible to solve them with an efficiency acceptable for design work. The reference [5] provides more details about how GENLTI models arise in control theory and how the numerical methods discussed are used in a tool for analysis and design.

2 An Example

To illuminate the task we present some further details here and consider a concrete example that we call the sisofeed5 problem. The equations (1,2) show

the nature of the task, but there is an additional complication in practice: the input function might have a delay in its argument, and the same might be true of the output function. This complication is mostly in the description of the problem. Because of the way that the task is formulated in terms of blocks, it is natural to speak of a given input function u(t) and a delay in its argument that might differ from one run to the next. The function u(t) that appears in (1,2) is this input function with a delayed argument. This is something of an abuse of notation, but it conforms to practice and will cause no real difficulties. Similarly, the output function might have a delay in its argument. It will be necessary to account for these delays in evaluating and plotting the output function, but again it is convenient to write the output function as y(t).

For the sisofeed5 problem, u(t) is a step function that changes from u(t) = 0 for t <= 0 to u(t) = 1 for t > 0. The problem is to be solved for 0 <= t <= T_f. The constant matrices A, B_1, B_2 in (1) and C_2, D_21, D_22 in (2) are specified numerically, and the two lags lead to

    w(t) = [z_1(t - 0.02), z_2(t - 0.08)]^T

The matrices in (3) are

    C_1 = [0 0],  D_11 = [0],  D_12 = [1 0]

For the computations reported here, the input function had a delay of 0.1, i.e., wherever u(t) appears in the equations (1, 2, 3) above, it was replaced by u(t - 0.1). This causes a delay in the response of the system as seen for x(t) in Fig. 1. There was also a delay of 0.2 in the output function, so the plot shows y(t - 0.2). In addition to the initial large jump in y(t), there is a jump discontinuity barely visible in Fig. 2 that we show more clearly in Fig. 3. A jump of this magnitude is not important as regards the output per se, but it is important to the numerical solution of this problem.
3 Properties of Solutions

The behavior of solutions of equations (1,2) corresponds in some ways to what we might expect of a DAE and in some, to what we might expect of a DDE. We begin by examining these matters. The smoothness of the input function will play a role, so let us suppose that it has d continuous derivatives for t > 0.

Figure 1: x(t) for the sisofeed5 example.

Figure 2: y(t) for the sisofeed5 example.

Figure 3: Expanded view of Fig. 2 showing the propagated jump.

This does not apply directly to the piecewise constant input functions that we consider in Section 4. However, we show there how to solve such a problem by solving N problems that have input functions which are constant for t > 0.

The first issue when solving a DAE is to determine consistent initial conditions. The matter is easily resolved for GENLTI models. The simulation starts from rest and we require that x(t) be continuous at t = 0. Using this and the fact that τ_j > 0 for each j, we find that the initial values

    x'(0+) = A x(0+) + B_1 u(0+) + B_2 w(0+) = B_1 u(0+)
    z(0+) = C_2 x(0+) + D_21 u(0+) + D_22 w(0+) = D_21 u(0+)

are consistent. Typically u(t) has a jump at t = 0, which generally results in a jump in the first derivative of x(t) and a jump in z(t) itself. Indeed, the system stays in the rest state until the input function has a non-zero value. In the case of the sisofeed5 problem, the input function is zero until t = 0.1 because of a delay. There x'(t) and z(t) jump to the values x'(0.1+) = B_1 u(0.1+) and z(0.1+) = D_21 u(0.1+).

The index of a DAE is of crucial importance. If we differentiate the algebraic equation (2) to obtain the differential equation

    z'(t) = C_2 x'(t) + D_21 u'(t) + D_22 w'(t)

and substitute the expression (1) for x'(t) into this equation, we find that

    z'(t) = C_2 A x(t) + C_2 B_1 u(t) + D_21 u'(t)
            + C_2 B_2 [z_1(t - τ_1), ..., z_N(t - τ_N)]^T
            + D_22 [z_1'(t - τ_1), ..., z_N'(t - τ_N)]^T     (4)

Along with the differential equation (1), this amounts to a system of ordinary differential equations for x(t) and z(t) with lags. In the parlance of DAEs, we have just shown that the problem is of index 1. However, in the parlance of DDEs, we have just learned that the underlying problem is of neutral type, meaning that the first order equations have delayed terms involving the first derivative of the solution. There are results that assert the existence and uniqueness of solutions of DDEs of neutral type and so assert this for the DDAEs (1,2). We need to understand the smoothness of the solution and how discontinuities propagate, so we investigate this matter directly for the original DDAEs using the method of steps.

Because the behavior of solutions is fundamentally different, it is worth noting that GENLTI models with D_22 = 0 are not unusual. For such models, equation (4) does not have delayed terms involving the first derivative of z(t). The DDEs (1,4) are then of retarded type rather than neutral type and solutions become smoother as the integration proceeds.

It will be convenient to let

    τ = min(τ_1, ..., τ_N)

On the interval (0, τ), the coefficients of (1) and (2) are at least as smooth as u(t) because all the arguments t - τ_j <= t - τ < 0, hence z_j(t - τ_j) has its rest value of zero. Because all the values of z that appear in (1) are known for t in this interval, the equation is an ODE for x(t) which obviously has a solution throughout the interval. Clearly x(t) has at least d + 1 continuous derivatives and has a limit as t increases to τ. With x(t) defined on the interval, (2) is an explicit recipe for z(t) there. We see that z(t) has at least d continuous derivatives and approaches a limit as t increases to τ. The idea of the method of steps is to solve a succession of initial value problems. To continue on, we must first define consistent initial conditions at t = τ+.
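The consistent initial values derived above for a start from rest can be checked numerically. A minimal Python sketch with made-up matrices (the function name and the example data are assumptions, not the authors' code):

```python
import numpy as np

def consistent_initial_values(A, B1, B2, C2, D21, D22, u0):
    """Consistent values at t = 0+ for a GENLTI model started from rest.
    Because every lag is positive, w(0+) is still the rest value zero,
    so the relations reduce to x'(0+) = B1 u(0+) and z(0+) = D21 u(0+)."""
    x0 = np.zeros(A.shape[0])            # x is continuous at t = 0
    w0 = np.zeros(B2.shape[1])           # delayed terms still at rest
    xp0 = A @ x0 + B1 @ u0 + B2 @ w0     # = B1 u(0+)
    z0 = C2 @ x0 + D21 @ u0 + D22 @ w0   # = D21 u(0+)
    return xp0, z0

# Illustrative matrices: n = 2 states, N = 2 delayed terms, scalar input.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B1 = np.array([[1.0], [0.0]]); B2 = np.array([[0.0, 0.0], [1.0, 0.5]])
C2 = np.eye(2); D21 = np.array([[0.0], [1.0]]); D22 = 0.5 * np.eye(2)
xp0, z0 = consistent_initial_values(A, B1, B2, C2, D21, D22, np.array([1.0]))
```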
We define x(τ+) by continuity and then define

    x'(τ+) = A x(τ) + B_1 u(τ) + B_2 w(τ+)

As t increases through τ, one of the arguments t - τ_j increases through 0, where z_j(t - τ_j) generally has a jump. Accordingly, this definition generally implies a jump in the first derivative of x. Similarly, we define

    z(τ+) = C_2 x(τ) + D_21 u(τ) + D_22 w(τ+)

and there is generally a jump in z. With these initial values, we can apply the argument made for (0, τ) to draw the same conclusions for an interval in t that starts at τ and extends as far as the next jump in one of the z_j(t - τ_j). The question now is where these jumps occur. Starting from a rest state, the set of discontinuity points of level 0 is the initial point t = 0. The set of discontinuity points of level L is obtained by adding

all the lags to each of the discontinuity points of level L - 1 and retaining only the points that lie in [0, T_f]. Because the first discontinuity point of each level is bigger by τ than the first discontinuity point of the previous level, eventually there are no discontinuity points in [0, T_f] and the construction comes to an end. Let

    D = {t_0 < t_1 < ... < t_M}

be the union of the discontinuity points of all levels. Clearly t_0 = 0 and t_1 = τ. We have shown the existence and uniqueness of x(t), z(t) for t in (t_0, t_1). In this we found that z(t) has at least d continuous derivatives and it has a limit as t approaches t_1. Also, x(t) has at least d + 1 continuous derivatives and it has a limit as t approaches t_1. Suppose that all this is true of the intervals (t_{r-1}, t_r) for r <= k. We define consistent initial conditions for the interval (t_k, t_{k+1}) by continuity of x(t) and

    x'(t_k+) = A x(t_k) + B_1 u(t_k) + B_2 w(t_k+)
    z(t_k+) = C_2 x(t_k) + D_21 u(t_k) + D_22 w(t_k+)

The properties assumed of preceding intervals can be established for this interval exactly as we did for the first interval once we appreciate that none of the delayed terms z_j(t - τ_j) can have a jump: if there were a jump in such a term at t = t*, we would have t* - τ_j = t_r for some t_r in D. From the construction of D, we see that t* = t_r + τ_j is a discontinuity point in (t_k, t_{k+1}), contradicting the definition of t_{k+1} as the next discontinuity point. We conclude that with the assumptions we make, a GENLTI model has a unique solution. The algebraic variable z(t) generally has discontinuities at the points of D, but otherwise has at least d continuous derivatives. The differential variable x(t) is continuous, but generally has discontinuities in its first derivative at the points of D. At all other points it has at least d + 1 continuous derivatives. The method of steps is not only a procedure for the theoretical construction of a solution, but also a way of solving the problem numerically.
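The level-by-level construction of D can be sketched directly. The function below is an illustration, not the authors' code; it rounds sums of lags so that points that coincide mathematically also coincide in floating point.

```python
import numpy as np

def discontinuity_points(lags, Tf):
    """Build the discontinuity set D by propagating the level-0
    discontinuity at t = 0: level L adds every lag to every level L-1
    point, keeping only points in [0, Tf]."""
    level = {0.0}
    D = {0.0}
    while level:
        # Round so that e.g. 0.02 + 0.02 + 0.02 and 0.06 coincide.
        level = {round(t + tau, 12) for t in level for tau in lags
                 if t + tau <= Tf}
        if level <= D:          # no new points: the construction ends
            break
        D |= level
    return np.array(sorted(D))
```

With the sisofeed5 lags 0.02 and 0.08 and T_f = 0.2, for example, every point is a multiple of the shorter lag, so D collapses to the 11 points 0, 0.02, ..., 0.2.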
Unfortunately, there are practical difficulties of two kinds. One is that when τ is small compared to T_f, there can be a good many discontinuity points. In the sisofeed5 problem, the shorter lag is τ = 0.02, and D is the arithmetic progression (in Matlab notation) 0:0.02:T_f. This is a considerable number of discontinuities, and there would be many more if the second lag were not a multiple of the first. The other difficulty is that some of the discontinuity points are close to one another when two lags are nearly equal. The fact that discontinuities persist in z(t) itself and in the first derivative of x(t) can be very troublesome for numerical procedures. To have its usual order, a numerical method must be applied to a smooth function, implying here that on reaching t_n, we cannot take a step that would carry us past the next point in D. Generally u(t) is smooth for t > 0, so x(t) and z(t) are smooth between discontinuity points and easy to approximate. As a consequence, the step size may well be determined by the discontinuities of the solution. If there are many discontinuities or some discontinuities are close to one another, this restriction on the step size can present serious difficulties.

4 Input Functions

Only a few kinds of input functions arise in the application and their forms have important implications for the numerical solution of (1,2). Because the response of a system to a step function is so important in the application, and because we were able to take significant advantage of the form of the input function, we developed a special integrator, ddaeresp, for these problems. They have u(t) equal to a column of an identity matrix for t > 0. Optimization of the integrator is facilitated by preprocessing the matrices defining the problem so that the input function seen by the integrator is always a scalar step function. With this input function, the only discontinuities are those propagated from the jump at t = 0. ddaeresp solves this reduced problem and returns not just solution values at mesh points, but also the information needed to evaluate the continuous extensions. To approximate y(t) and, if required, x(t) for an input function u(t) that is made up of more than one column of an identity matrix, we use ddaeresp to solve for the response to each of the columns and then use linearity to obtain the response to u(t). An auxiliary function mergeresp does this, but it is not merely a matter of using linearity. The various solutions are computed on different meshes, so it is necessary to select a common mesh that reveals jump discontinuities and use the continuous extensions to compute y(t), x(t) on this mesh. The sisofeed5 example of Section 2 illustrates this kind of input function.

All other input functions are defined by sampled data in one of two ways. Zero-order hold, ZOH, defines u(t) as a piecewise constant function and first-order hold, FOH, defines it as a piecewise linear function. Tracking discontinuities is the biggest single difficulty in solving GENLTI models. Generally ZOH input functions involve relatively few samples, but they introduce a severe lack of smoothness in the solution throughout the interval.
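The superposition that linearity and time invariance allow can be sketched for a scalar ZOH input: writing u(t) as a sum of shifted, scaled unit steps, the response is the same combination of shifted step responses. A toy Python illustration with a made-up first-order step response (the helper names are assumptions, not the authors' code):

```python
import numpy as np

def zoh_response(step_resp, t_samples, u_samples, t_eval):
    """Response to a ZOH input, built from the unit-step response by
    linearity and time invariance: u(t) = sum_k du_k * H(t - t_k), so
    y(t) = sum_k du_k * s(t - t_k), where s(t) = 0 for t <= 0."""
    du = np.diff(np.concatenate(([0.0], u_samples)))  # jump sizes du_k
    y = np.zeros_like(t_eval, dtype=float)
    for tk, d in zip(t_samples, du):
        y += d * step_resp(t_eval - tk)
    return y

# Illustrative step response of a first-order lag: s(t) = 1 - exp(-t), t > 0.
s = lambda t: np.where(t > 0, 1.0 - np.exp(-np.maximum(t, 0.0)), 0.0)
```

This is cheap even for many samples: each sample contributes only an interpolation of an already-computed step response and a scalar multiple.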
Fortunately it is possible to overcome this serious difficulty by using the special form of the problems. In the integrator ddaesim we begin the solution of a problem with a ZOH input function by computing successively all the solutions for step response. It is obvious that the equations (1,2) are linear, but closer examination shows that their solutions are time invariant. Using linearity and a shift in the independent variable, the step function responses can be combined to form the solution of the given problem. This is just a matter of interpolating some numerical solutions and forming linear combinations of vectors, so it is not expensive even when there are many samples, i.e., even when u(t) has many jump discontinuities.

The FOH case is fundamentally different because we must deal with input functions that are only piecewise smooth. In contrast to ZOH problems, it is common that there are many samples. For instance, we routinely solved test problems with u(t) defined by 1000 equally spaced samples taken from a smooth function. Jumps in u'(t) imply jumps in derivatives of x(t), z(t), y(t), not just at corresponding points, but also later in the integration because of the lags. It is impractical to track all these places where the unknown functions have reduced smoothness. Instead we must develop algorithms that can cope with this. In particular, we must have a concept of error in the solution and an estimate of

this error that is meaningful when the solution is only piecewise smooth.

For step function response we require the capability of evaluating y(t), and optionally x(t), anywhere in [0, T_f]. For this reason we return the results computed in ddaeresp as a structure that holds all the information needed to interpolate y(t) and x(t). The mergeresp function uses this information both to account for the meshes selected in the various integrations and to get answers on a common mesh by interpolation. Input functions in discrete form, whether ZOH or FOH, are input to ddaesim as arrays tin and uin. For such an input function the application requires y(t), and optionally x(t), only at t in tin. Correspondingly, the integrator ddaesim works entirely with arrays for speed and returns answers only at the designated values of t.

5 Choosing a Formula

Stiffness is a rather vague concept, but its effects and formulas for combating them are well known when solving ODEs. LTIs can be stiff, but when we add the effects of the algebraic equations and especially the effects of delays that are present for GENLTIs, it is far from clear what stiffness means for these problems. Indeed, even for stiff DDEs there are only a few codes and a modest literature, see, for example, [6]. In the context of ODEs, a problem is stiff when the step size is determined by stability. Correspondingly, as long as the effects of lags dominate the selection of step size, a problem cannot be very stiff. In the first instance this is a matter of how the distance between one point of D and the next compares to the length of the interval of integration. Still, it is easy to imagine problems for which the solution of interest is very stable and easy to approximate, problems for which stiffness would be expected when solving ODEs. In the circumstances it seems prudent to choose a formula that will deal with stiff ODEs. After all, stiff LTIs are included in the class of models to be solved.
Methods appropriate for stiff ODEs are all implicit, a fact that causes a variety of difficulties. Fortunately, many of these difficulties are not present when solving GENLTIs. We decided to use a one-step method because a memory is inconvenient when dealing with discontinuous solutions. We return to the general case in Section 7, but for now let us think of using the method of steps so that algebraic variables appear explicitly. The key observation is that the differential equations are linear and the Jacobian is constant, so the algebraic equations defining a step with an implicit Runge-Kutta method amount to a system of linear equations. Solving linear algebraic equations is efficient in Matlab. Typical codes for stiff ODEs work with a constant step size until it is necessary to reduce the step size or clearly advantageous to increase it. This is done to reduce the costs of linear algebra. Similarly, in the present context we save an LU factorization and use it as long as the step size is unchanged. Unlike typical codes for stiff ODEs, there is no iteration for evaluating an implicit formula and therefore no reason to change the linearization and factorization because convergence is unsatisfactory.
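Because the right-hand side is linear with a constant Jacobian, the stage equations of an implicit Runge-Kutta step form one fixed linear system whose matrix changes only with the step size. A minimal Python sketch for x' = A x + g(t) with the two-stage Radau IIA formula; caching the inverse below stands in for the stored LU factorization of the authors' Matlab code, and all names here are illustrative.

```python
import numpy as np

# Two-stage Radau IIA tableau (order 3); b equals the last row of Ark.
Ark = np.array([[5/12, -1/12],
                [3/4,   1/4]])
c = np.array([1/3, 1.0])
b = np.array([3/4, 1/4])

def radau2_integrate(A, g, x0, h, nsteps):
    """Integrate x' = A x + g(t) with constant step h. The stage system
    (I - h*(Ark kron A)) K = [A x + g(t + c_i h)]_i is linear, so its
    matrix is 'factored' once and reused while h is unchanged."""
    n = len(x0)
    M = np.eye(2 * n) - h * np.kron(Ark, A)
    Minv = np.linalg.inv(M)            # factor once, reuse every step
    x, t = np.array(x0, dtype=float), 0.0
    for _ in range(nsteps):
        rhs = np.concatenate([A @ x + g(t + ci * h) for ci in c])
        K = (Minv @ rhs).reshape(2, n)
        x = x + h * (b @ K)            # advance with the weights b
        t += h
    return x

# Scalar test problem x' = -x, x(0) = 1, integrated to t = 1.
x1 = radau2_integrate(np.array([[-1.0]]), lambda t: np.zeros(1),
                      np.array([1.0]), 0.01, 100)
```

No Newton iteration is needed: evaluating the implicit formula is exactly one linear solve per step.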

Our first choice was the implicit Runge-Kutta formula based on the two-point Gauss-Legendre quadrature formula. This formula has the highest order possible with two stages, namely 4, and is A-stable. Our experience with this formula was quite satisfactory until we came upon the following test problem that we call slow. For t > 0 the equations form a GENLTI system with two differential variables, a scalar algebraic variable, and the single delayed term z(t - 0.01). The eigenvalues of the A matrix are {-1, -10000} and the interval of integration is [0, 20], so this would be a stiff ODE if not for the delay term. This is a retarded DDE, so the solution becomes smoother as the integration proceeds, setting the scene for stiffness. When we tried to solve this problem with the A-stable Gauss-Legendre formula, we encountered behavior typical of stiffness when solving ODEs: although the solution is smooth, the code had many failed steps and on average the step size was rather short compared to T_f. Eventually we came to suspect that this behavior was due to the combined effects of a delay and a formula that is not damped at infinity. This led us to try the two-stage Radau IIA formula. It has order three, but it is strongly stable. It is also known to have good stability properties when solving DDEs [6]. The new formula cleared up our difficulties with this problem: a small step size is necessary to resolve an initial transient, but the average step size was much larger and the integration was accomplished in a satisfactory way. Using this formula, all of our test problems were solved with an efficiency comparable to that obtained with the other formula. It might be a little surprising that the higher order of the Gauss-Legendre formula was not more advantageous, but we solve problems to modest accuracy and their solutions may not be smooth. Overhead is important in this application.
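The difference in damping at infinity is visible in the stability function R(z) = 1 + z b^T (I - z A)^{-1} 1 of each tableau. A quick Python check (illustrative only): for z far out on the negative real axis, the Gauss-Legendre formula has |R(z)| near 1 while Radau IIA has R(z) near 0.

```python
import numpy as np

def stability(Ark, bk, z):
    """Stability function R(z) = 1 + z * b^T (I - z*Ark)^{-1} * ones
    of a Runge-Kutta tableau (Ark, bk)."""
    s = len(bk)
    return 1.0 + z * (bk @ np.linalg.solve(np.eye(s) - z * Ark, np.ones(s)))

# Two-stage Gauss-Legendre: order 4, A-stable, |R(-inf)| = 1.
gauss_A = np.array([[1/4, 1/4 - np.sqrt(3)/6],
                    [1/4 + np.sqrt(3)/6, 1/4]])
gauss_b = np.array([1/2, 1/2])
# Two-stage Radau IIA: order 3, strongly damped, R(-inf) = 0.
radau_A = np.array([[5/12, -1/12],
                    [3/4,   1/4]])
radau_b = np.array([3/4, 1/4])

z = -1.0e6   # mimics a very stiff eigenvalue times the step size
g_inf = abs(stability(gauss_A, gauss_b, z))
r_inf = abs(stability(radau_A, radau_b, z))
```

The undamped |R(z)| of the Gauss-Legendre formula means stiff components are propagated almost unattenuated into the delayed terms, which is consistent with the behavior observed on the slow problem.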
Because the last stage of the Radau IIA formula provides the result at the end of the step, the implementation of this formula is simpler and runs faster. We might remark that we used the same scheme for estimating and controlling error for both formulas, a scheme that is explained in the next section. All the numerical results of this paper were computed with the two-stage Radau IIA formula.

6 Estimation and Control of Error

Certainly we need to pay close attention to the jump discontinuities in the algebraic variables as they propagate through the interval of integration. However, if we had to track them all, it would not be practical to solve many of the models that interest us. Fortunately, it seems that in the application the sizes of the jumps decay quickly, so that it is not necessary to track the discontinuities past many levels of propagation. We return to this matter in Section 8 and here note only that in the first portion of an integration there are typically jumps in

z(t) and x'(t) at the points of D. Later in the integration both are approximately continuous, but there are discontinuities in low order derivatives. In the first instance these are due to propagation of jumps, but there are also discontinuities of this kind induced by the piecewise-linear input functions of FOH.

We represent the solution components as piecewise cubic polynomial functions. Except at points in D, the approximation to x(t) has a continuous first derivative and the approximation to z(t) is continuous. The approximation to x(t) is defined on each [t_n, t_{n+1}] as the cubic Hermite interpolant to the values and slopes at the two ends of the interval. The approximation to z(t) is the cubic interpolant to the values at the ends of the interval and two values at interior points. Details will be provided later. If all is going well, the interpolants preserve the accuracy of the implicit Runge-Kutta formula at mesh points and in any case extend the definition of the numerical solution to a function that is piecewise smooth. We can substitute such a solution into the equations and measure how well they are satisfied. If we use upper case to indicate these numerical solutions, the residual R(t) is defined by

    X'(t) = A X(t) + B_1 u(t) + B_2 W(t) + R(t)

where

    W(t) = [Z_1(t - τ_1), ..., Z_N(t - τ_N)]^T

Taking the point of view of backward error analysis, we describe the numerical solution as the exact solution of a perturbed problem and measure the quality of the solution by the size of the perturbation R(t). This is a natural measure of error throughout the interval of integration that is well-defined provided only that the Runge-Kutta formula can be evaluated. This is an important point: the size of the residual is a meaningful measure of the quality of the solution that can be evaluated reliably even when the step size is big and the solution has discontinuities in low-order derivatives.
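A robust way to measure the size of such a residual over a step is an integral norm approximated by quadrature. A sketch of the 5-point Lobatto rule used for that purpose (the nodes and weights are the standard ones; `lobatto5` is an illustrative name):

```python
import numpy as np

# 5-point Gauss-Lobatto rule on [-1, 1]. The endpoints are nodes, and
# the residual of a collocation-style step vanishes at the ends of the
# step, so only the three interior nodes need residual evaluations.
nodes = np.array([-1.0, -np.sqrt(3/7), 0.0, np.sqrt(3/7), 1.0])
weights = np.array([1/10, 49/90, 32/45, 49/90, 1/10])

def lobatto5(f, a, b):
    """Approximate the integral of f over [a, b]; exact through degree 7."""
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(weights * f(x))
```

Applied to the squared, weighted components of R(t) over a step, this yields the weighted RMS measure described next.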
To compute reliably a measure of the size of the residual in difficult circumstances, we use an integral norm. Specifically, we use a weighted RMS norm and approximate the integral with a 5-point Lobatto quadrature formula. By construction the residual is zero at the ends of each step, so this requires evaluation of the residual three times in the course of each step. It will be helpful to give these points the names t_L1, t_L2, t_L3. Each evaluation of the residual involves evaluating the interpolants for x(t), z(t), the derivative of the interpolant for x(t) as an approximation to x'(t), and the differential equations. For the weight on each component, we use a similar measure of the size of the corresponding component of the solution, hence use five values of x(t) in the span of the step. Of course the values at the ends of the step are always available, and the values interior to the step are available from the computation of the residual. An integral norm approximated by a Lobatto quadrature formula of this many points provides an exceptionally robust and reliable scheme for assessing error in the solution. The approximations to the value and slope of x(t) at both ends of the step define a cubic Hermite interpolant that approximates x(t) throughout the step.
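The cubic Hermite interpolant and its derivative, needed to evaluate X(t) and X'(t) anywhere inside a step, can be written with the standard basis functions. A Python sketch for a scalar component (the authors' implementation is in Matlab):

```python
def hermite(t, t0, t1, x0, x1, xp0, xp1):
    """Cubic Hermite interpolant on [t0, t1] matching values x0, x1 and
    slopes xp0, xp1; returns the interpolant and its derivative at t."""
    h = t1 - t0
    s = (t - t0) / h                       # normalized variable in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1              # standard Hermite basis
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    x = h00*x0 + h*h10*xp0 + h01*x1 + h*h11*xp1
    # Derivatives of the basis with respect to t (chain rule with 1/h).
    dh00 = (6*s**2 - 6*s) / h
    dh10 = 3*s**2 - 4*s + 1
    dh01 = (-6*s**2 + 6*s) / h
    dh11 = 3*s**2 - 2*s
    xp = dh00*x0 + dh10*xp0 + dh01*x1 + dh11*xp1
    return x, xp
```

The interpolant reproduces cubics exactly, so it does not degrade the order-3 accuracy of the Radau IIA formula at mesh points.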

This provides an approximation that has a continuous derivative except where special action is taken to account for a jump. Similarly, values approximating z(t) that are formed at the Lobatto points for estimation and control of the error are used to define an approximation to z(t) throughout the step. Specifically, a cubic is defined by interpolating at t_n, t_L1, t_L3, t_{n+1}. This approximation is continuous except where special action is taken to account for a jump.

At each step we control the product of the step size and the weighted RMS norm of the residual. Introducing the step size here connects the size of the residual to the size of the local error when all is going well [8]. Proceeding as we do, we compute a solution that satisfies the equations in a meaningful way even when the formulas do not have their usual order because the step size is too big or the solution is not sufficiently smooth. A relative error tolerance of 10^-3 is hard-coded as being appropriate to the application. The user specifies a vector of absolute error tolerances, which have a small default value. The computations reported in this paper were all obtained with the default error tolerances.

7 Long Steps

Till now we have been supposing that the step size is no bigger than τ, so that the formulas we have considered are only linearly implicit. For many problems it is impractical to work with step sizes this short and the smoothness of the solution does not require it. For instance, in Section 8.1 we discuss the choppy example, which has a lag τ that is very small compared to T_f = 5. If we limited the step size to τ, we would need a great many steps to solve this problem, but ddaeresp solves it in 343 steps. When taking a step longer than the shortest lag, at least one of the terms z_j(t - τ_j) has its argument in (t_n, t_{n+1}]. We have not yet computed z_j(t) on this interval, so the numerical solution is only implicitly defined. This is true even when the Runge-Kutta method is itself explicit.
We use simple iteration to take steps longer than τ. First we predict z(t) on (t_n, t_{n+1}] by extrapolating the approximation on [t_{n-1}, t_n]. In this a function is used to compute values of z(t) at the three new points used to define the cubic polynomial approximation to z(t) on the current step, namely t_L1, t_L3, t_{n+1}. Using these values, another function evaluates w(t) at the arguments needed to compute the stages of the Runge-Kutta formula. With these stages we compute x_{n+1}, x'_{n+1}, and then z_{n+1}. A cubic Hermite interpolant to x(t) is evaluated to obtain approximations to x(t) and x'(t) at the nodes of the Lobatto quadrature formula for assessment of the error. The approximation to w(t) is evaluated at the same points. These values and those for u(t) and x(t) are then used to correct the approximations to z(t) at t_L1, t_L3. If the step size is no bigger than τ, this completes the step and the program goes on to assess the error. Otherwise, the program tests the relative change in z_{n+1}. If it is greater than 10^-3, the computation is repeated, up to 5 times. It is obviously advantageous to evaluate u(t) outside the loop. Useful savings are possible for step sizes comparable to τ when evaluating the stages of the formula by testing whether the stage is

implicitly defined, hence whether it must be evaluated in each iteration.

Because run time is so important in the application, we have made frequent use of the Matlab profiler to identify where the integrators were spending their time. Often this was not the most obvious place. For instance, the function that evaluates w(t) is a bottleneck. Because the argument of w_j(t) = z_j(t - τ_j) depends on j, the evaluation of w(t) is not vectorized. The algebraic variables z(t) are held in the form of a piecewise cubic (vector) polynomial function, so a search is necessary to locate the piece representing the component of interest and then the cubic is evaluated by Lagrangian interpolation. The matter is complicated further by the presence of jump discontinuities. Although we have coded this function carefully, it is an obvious candidate for coding in C.

8 Discontinuities

GENLTI models typically have jump discontinuities in z(t) and x'(t) at the origin that propagate throughout the interval of interest. Certainly it would be best to track these discontinuities and solve the problem numerically only on the portions where the solutions are at least continuous. This can be so expensive that many codes do not track discontinuities or at least have an option to disable tracking. We have implemented a compromise. There are physical reasons for thinking that for the GENLTI models of interest, the sizes of the jumps will decrease as the simulation proceeds. This is our working hypothesis. If it is not valid, i.e., if the model is such that the jumps increase or the solution is unstable, we expect that the user will terminate the run early. We first consider how jumps propagate and then discuss how our programs deal with these jumps.

8.1 Propagation

From our discussion of the input functions we see that we can assume u(t) and x(t) are continuous for t > 0. It will be convenient to use the standard notation

    z(t + 0) - z(t - 0) = [z(t)]

for the size of a jump at t.
Using equation (2), it is seen that at a point t in D,

[z(t)] = D_{22} [w(t)]    (5)

It is not at all unusual that D_{22} = 0. In this situation (5) tells us that an initial jump in z(t) does not propagate at all. There are two other situations in which z(t) does not have jumps. For a general input function it may be that u(0+) = 0, so that there is no initial jump in u(t). Correspondingly there is no initial jump in z(t), and the algebraic variables are continuous on [0, T_f]. A much more common situation is D_{21} = 0. The typical jump in the input function at t = 0 is then suppressed in (2), with the consequence that the algebraic variables are continuous on all of [0, T_f]. In the solvers we recognize all three of these cases and give them special treatment.
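The propagation rule (5) can be checked with a small computation. This is an illustrative sketch, not the authors' code; the matrices and jump vector below are made-up 2-by-2 examples:

```python
import numpy as np

# Relation (5): the jump in z at a point t of D is D22 times the vector
# of jumps in the delayed components, [z(t)] = D22 [w(t)].
w_jump = np.array([1.0, -0.5])        # hypothetical jumps [w(t)]

D22_zero = np.zeros((2, 2))           # the common case D22 = 0
z_jump = D22_zero @ w_jump            # the initial jump does not propagate

D22 = np.array([[0.3, 0.0],
                [0.1, 0.2]])          # a general (here contractive) D22
z_jump2 = D22 @ w_jump                # a nonzero jump propagates forward
```

With D22 = 0 the computed jump in z is identically zero, which is the first of the three special cases recognized by the solvers.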

In the case of scalar D_{22}, it is immediate from (5) that the discontinuities in the algebraic variable are damped exponentially fast when |D_{22}| < 1. If |D_{22}| > 1, the jumps grow exponentially fast and the model is not physically interesting. The case of general D_{22} is much more complex, and we have only limited results. Typically most of the components of [w(t)] in (5) are zero. By definition, at least one of t - τ_1, ..., t - τ_N is in D, but the other points might, or might not, be in the set. If, say, t - τ_k is not in D, the term [z_k(t - τ_k)] is zero because z_k is continuous there. We have not seen how to exploit this observation in general, but we can get some insight by considering a special situation in which only one component is non-zero. Let ζ be a point in D where there is a jump in z_j(ζ). This jump is propagated forward to ζ + τ_j, ζ + 2τ_j, ..., so by definition these points are in D. The propagation of discontinuities is complicated because other discontinuities might be propagated to one of these points by other lags. Let us investigate a simple situation in which at least the first few of these points are reached in no other way. For these points the only non-zero component of [w(t)] is component j. We then have

[z_j(t)] = (D_{22})_{j,j} [z_j(t - τ_j)]

If |(D_{22})_{j,j}| is rather less than 1, this shows that the effects of the lag τ_j decay exponentially fast. Note that the decay is with respect to levels of propagation, not distance into the integration. This last result helps us understand a situation that is computationally very difficult, namely τ = τ_j much smaller than the other lags. A discontinuity at ζ propagates to ζ + τ, ζ + 2τ, ... until it has advanced a distance equal to the second smallest lag. If |(D_{22})_{j,j}| is rather less than 1, we see that the effects of a very small delay τ_j decay exponentially fast. A concrete example is the choppy test problem.
The task is to compute the step response of a system with one input variable, five differential variables, three algebraic variables, and one output variable. The first lag is 0.3; the values of the other two lags and of D_{22} are part of the problem data. (It is inconvenient to provide here all the numerical data for this problem and another discussed in Section 8.2. The data is available as *.mat files from the authors.) The shortest lag, τ = τ_2, is quite small compared to T_f = 5, so if we had to track all the discontinuities at this spacing, the integration would be impractical. Because the (2, 2) entry of D_{22} is zero, this lag does not much affect the integration, and our solver does not find this model difficult, solving it in 1.25 s. The situation would be quite different if the shortest lag were the third one, because the (3, 3) entry of D_{22} is sufficiently large that the effects of the lag would be relatively long-lasting. With τ so small compared to T_f, it is not clear that tracking all the discontinuities would be practical. Even with the limited tracking of our solver, it took more than 12 s to integrate this artificial problem to t = 1.
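The level-by-level decay just described can be illustrated with a few lines of code. This is a sketch under made-up values: the entry 0.4 and the unit initial jump are not data from the paper's models.

```python
# Decay of the jump in component j across propagation levels when each
# point is reached only through its own lag tau_j: every level multiplies
# the jump by the diagonal entry (D22)_{j,j}.
d_jj = 0.4                     # hypothetical entry with |d_jj| < 1
jump = 1.0                     # size of the initial jump in z_j at zeta
jumps = [jump]
for level in range(5):         # jump at zeta + (level+1)*tau_j
    jump = d_jj * jump
    jumps.append(jump)
# jumps[k] = 0.4**k: the decay is per level of propagation, so a very
# small lag tau_j is crossed many times in a short time interval and
# its effects die out quickly in t.
```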

8.2 Tracking

As we have seen, some of the quantities of interest typically have jump discontinuities that propagate throughout the interval of integration. It complicates the coding, but when our solvers are tracking discontinuities, the numerical solution has a jump at a point of discontinuity t. This representation of the numerical solution is not convenient for the application, so in the solution returned by the solvers, we split a discontinuity point into two closely spaced points t - δ, t + δ to the left and right of t and then assign the left and right limits of the solution components to these points, respectively. As seen in Fig. 3, this results in a continuous graph that has sharp corners at discontinuities. In our first draft of a solver we implemented the computation of D essentially as described in Section 3 and tracked all discontinuities. This worked well enough until we ran into a problem for which this computation exhausted the working memory! We then recoded the computation so as to generate discontinuity points in blocks as needed. Unfortunately, there can be so many discontinuities that it is impractical in the application to track them all. We believe that in this application, stable problems have jumps that decrease in magnitude as the integration proceeds. Accordingly, we monitored the jumps and stopped tracking when they became sufficiently small. This approach relies upon a robust and reliable measure of error like that of Section 6 because after we stop tracking, the solution is smooth in pieces, but where the pieces join, the solution may be only approximately continuous. Profiling showed that even this scheme spent an inordinate amount of time tracking discontinuities. For the solvers to be useful as a design tool, we found that we had to compromise on tracking discontinuities. The scheme we have implemented is to propagate discontinuities only to a preset maximum number of levels. The default is maxlevel = 4.
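The splitting of a discontinuity point into the pair t - δ, t + δ for the returned solution can be sketched as follows. This is an illustration, not the solvers' code; `left` and `right` stand in for evaluating the one-sided limits of the stored piecewise solution, and the spacing δ is an arbitrary small value:

```python
def split_discontinuity(t_star, left, right, delta=1e-10):
    """Return the two closely spaced output points that replace the
    discontinuity at t_star: (t_star - delta) carries the left limit
    and (t_star + delta) carries the right limit, so the returned
    solution plots as a continuous curve with a sharp corner."""
    return (t_star - delta, left(t_star)), (t_star + delta, right(t_star))

# A unit jump at t = 1: left limit 0, right limit 1.
(lo_t, lo_y), (hi_t, hi_y) = split_discontinuity(
    1.0, left=lambda t: 0.0, right=lambda t: 1.0)
```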
At most we hope to track discontinuities until the solution components are approximately continuous. As explained in Section 8.1, three situations can be recognized easily for which the solution is continuous after propagating discontinuities only once. In these situations we set maxlevel = 1. Even with a limit of four levels, there can be many discontinuities, so we further limit the number of discontinuities we track. The set of discontinuities D is constructed by repeatedly propagating discontinuities to another level and purging duplicates. If at any level we find there are more than MaxPoints = 100 points, we break off the construction. In what follows, D is this truncated set of discontinuity points. When tracking discontinuities, the solvers step to the points of D and compute jumps in x'(t), z(t), y(t) there. Let p be the last point in D and let T = max(τ_1, ..., τ_N). At points t ≥ p the equations do not depend on the values of variables at points t < p - T. In preparation for discontinuing the tracking of discontinuities at t = p, the solvers step to the discontinuities in [p - T, p], but do not compute jumps at these points. This causes the solvers to recognize the presence of discontinuities when choosing a step size and gradually suppress jumps in the numerical solution. From t = p on, the solvers are free to use whatever step size appears to provide both accuracy and stability. We use the artefact3 test problem to show what can happen when we limit the tracking of discontinuities.

Figure 4: The artefact3 example. The lower curve, an approximation to y(t), was computed with the usual discontinuity tracking; the upper curve, an approximation to y(t) + 0.2, was computed with extended tracking.

The task is to compute the step response of a system defined on [0, 1] that has one input variable, five differential variables, two algebraic variables, and one output variable. The first lag is 0.3000, the second is part of the problem data, and there is no lag in either the input or output variable. The lower curve in Fig. 4 shows the output variable y(t) computed with our usual values for maxlevel and MaxPoints. The upper curve shows what happens when the solver tracks to 50 times as many levels and allows 50 times as many discontinuities. To make it easier to compare the resolution of discontinuities in the two integrations, we added 0.2 to the better solution before plotting it. As we might have expected, the sharp discontinuities are increasingly smeared after we stop tracking. Still, we resolve discontinuities in the first part of the integration, where they are relatively large, and we do obtain an accurate solution in the sense that its residual is no bigger than a specified tolerance. As it happens, it is not expensive to compute the solution of this particular problem with extended tracking, but it does appear to be necessary to limit tracking in these solvers for them to be useful as design tools.
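A minimal sketch of constructing the truncated discontinuity set D by levels, with the maxlevel and MaxPoints limits described in Section 8.2. The lags and interval below are made-up values, and the rounding tolerance is an implementation choice, not taken from the paper:

```python
def build_disc_set(t0, lags, tf, maxlevel=4, max_points=100):
    """Propagate the initial discontinuity at t0 forward by every lag,
    level by level, purging duplicates; stop after maxlevel levels or
    once more than max_points points have been generated."""
    level = [t0]
    points = {t0}
    for _ in range(maxlevel):
        # Propagate the current level by each lag; round to merge
        # floating-point duplicates such as 0.3 + 0.5 and 0.5 + 0.3.
        nxt = sorted({round(t + tau, 12)
                      for t in level for tau in lags if t + tau <= tf})
        nxt = [t for t in nxt if t not in points]
        if not nxt:
            break
        points.update(nxt)
        if len(points) > max_points:
            break                      # break off the construction
        level = nxt
    return sorted(points)

# Two lags on [0, 1.5]: four levels of propagation give 12 points.
D = build_disc_set(0.0, [0.3, 0.5], 1.5)
```

Note how quickly the set grows even with two lags; with many lags and a long interval the untruncated set is what exhausted working memory in the first draft of the solver.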

[2] U. Ascher and L. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, Philadelphia, 1998.
[3] S.P. Corwin, D. Sarafyan, and S. Thompson, DKLAG6: a code based on continuously imbedded sixth order Runge-Kutta methods for the solution of state dependent functional differential equations, Appl. Numer. Math. 24 (1997).
[4] W.H. Enright and H. Hayashi, A delay differential equation solver based on a continuous Runge-Kutta method with defect control, Numer. Alg. 16 (1997).
[5] P. Gahinet and L.F. Shampine, Software for modeling and analysis of linear systems with delays, Proc. American Control Conf., Boston, 2004.
[6] N. Guglielmi and E. Hairer, Implementing Radau IIA methods for stiff delay differential equations, Computing 67 (2001).
[7] C.A.H. Paul, A user-guide to ARCHI, Numer. Anal. Rept. No. 283, Maths. Dept., Univ. of Manchester, UK.
[8] L.F. Shampine, Solving ODEs and DDEs with residual control.
[9] L.F. Shampine, I. Gladwell, and S. Thompson, Solving ODEs with MATLAB, Cambridge Univ. Press, New York, 2003.


More information

Chapter 1. Root Finding Methods. 1.1 Bisection method

Chapter 1. Root Finding Methods. 1.1 Bisection method Chapter 1 Root Finding Methods We begin by considering numerical solutions to the problem f(x) = 0 (1.1) Although the problem above is simple to state it is not always easy to solve analytically. This

More information

STATE VARIABLE (SV) SYSTEMS

STATE VARIABLE (SV) SYSTEMS Copyright F.L. Lewis 999 All rights reserved Updated:Tuesday, August 05, 008 STATE VARIABLE (SV) SYSTEMS A natural description for dynamical systems is the nonlinear state-space or state variable (SV)

More information

Tricky Asymptotics Fixed Point Notes.

Tricky Asymptotics Fixed Point Notes. 18.385j/2.036j, MIT. Tricky Asymptotics Fixed Point Notes. Contents 1 Introduction. 2 2 Qualitative analysis. 2 3 Quantitative analysis, and failure for n = 2. 6 4 Resolution of the difficulty in the case

More information

Answers to Problem Set Number 04 for MIT (Spring 2008)

Answers to Problem Set Number 04 for MIT (Spring 2008) Answers to Problem Set Number 04 for 18.311 MIT (Spring 008) Rodolfo R. Rosales (MIT, Math. Dept., room -337, Cambridge, MA 0139). March 17, 008. Course TA: Timothy Nguyen, MIT, Dept. of Mathematics, Cambridge,

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

Mathematics Background

Mathematics Background For a more robust teacher experience, please visit Teacher Place at mathdashboard.com/cmp3 Patterns of Change Through their work in Variables and Patterns, your students will learn that a variable is a

More information

CS520: numerical ODEs (Ch.2)

CS520: numerical ODEs (Ch.2) .. CS520: numerical ODEs (Ch.2) Uri Ascher Department of Computer Science University of British Columbia ascher@cs.ubc.ca people.cs.ubc.ca/ ascher/520.html Uri Ascher (UBC) CPSC 520: ODEs (Ch. 2) Fall

More information

0. Introduction 1 0. INTRODUCTION

0. Introduction 1 0. INTRODUCTION 0. Introduction 1 0. INTRODUCTION In a very rough sketch we explain what algebraic geometry is about and what it can be used for. We stress the many correlations with other fields of research, such as

More information

EXAMPLES OF CLASSICAL ITERATIVE METHODS

EXAMPLES OF CLASSICAL ITERATIVE METHODS EXAMPLES OF CLASSICAL ITERATIVE METHODS In these lecture notes we revisit a few classical fixpoint iterations for the solution of the linear systems of equations. We focus on the algebraic and algorithmic

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Comparison of Modern Stochastic Optimization Algorithms

Comparison of Modern Stochastic Optimization Algorithms Comparison of Modern Stochastic Optimization Algorithms George Papamakarios December 214 Abstract Gradient-based optimization methods are popular in machine learning applications. In large-scale problems,

More information

AP Calculus. Derivatives.

AP Calculus. Derivatives. 1 AP Calculus Derivatives 2015 11 03 www.njctl.org 2 Table of Contents Rate of Change Slope of a Curve (Instantaneous ROC) Derivative Rules: Power, Constant, Sum/Difference Higher Order Derivatives Derivatives

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

Modeling and Experimentation: Compound Pendulum

Modeling and Experimentation: Compound Pendulum Modeling and Experimentation: Compound Pendulum Prof. R.G. Longoria Department of Mechanical Engineering The University of Texas at Austin Fall 2014 Overview This lab focuses on developing a mathematical

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Algorithms Notes for 2016-10-31 There are several flavors of symmetric eigenvalue solvers for which there is no equivalent (stable) nonsymmetric solver. We discuss four algorithmic ideas: the workhorse

More information

Jim Lambers MAT 460 Fall Semester Lecture 2 Notes

Jim Lambers MAT 460 Fall Semester Lecture 2 Notes Jim Lambers MAT 460 Fall Semester 2009-10 Lecture 2 Notes These notes correspond to Section 1.1 in the text. Review of Calculus Among the mathematical problems that can be solved using techniques from

More information

The collocation method for ODEs: an introduction

The collocation method for ODEs: an introduction 058065 - Collocation Methods for Volterra Integral Related Functional Differential The collocation method for ODEs: an introduction A collocation solution u h to a functional equation for example an ordinary

More information

Numerical Integration of Equations of Motion

Numerical Integration of Equations of Motion GraSMech course 2009-2010 Computer-aided analysis of rigid and flexible multibody systems Numerical Integration of Equations of Motion Prof. Olivier Verlinden (FPMs) Olivier.Verlinden@fpms.ac.be Prof.

More information

Comments on An Improvement to the Brent s Method

Comments on An Improvement to the Brent s Method Comments on An Improvement to the Brent s Method Steven A. Stage IEM 8550 United Plaza Boulevard, Suite 501 Baton Rouge, Louisiana 70808-000, United States of America steve.stage@iem.com Abstract Zhang

More information

Maps and differential equations

Maps and differential equations Maps and differential equations Marc R. Roussel November 8, 2005 Maps are algebraic rules for computing the next state of dynamical systems in discrete time. Differential equations and maps have a number

More information

Report on Numerical Approximations of FDE s with Method of Steps

Report on Numerical Approximations of FDE s with Method of Steps Report on Numerical Approximations of FDE s with Method of Steps Simon Lacoste-Julien May 30, 2001 Abstract This is a summary of a meeting I had with Hans Vangheluwe on Thursday May 24 about numerical

More information

ESC794: Special Topics: Model Predictive Control

ESC794: Special Topics: Model Predictive Control ESC794: Special Topics: Model Predictive Control Discrete-Time Systems Hanz Richter, Professor Mechanical Engineering Department Cleveland State University Discrete-Time vs. Sampled-Data Systems A continuous-time

More information

Time Series Analysis. Smoothing Time Series. 2) assessment of/accounting for seasonality. 3) assessment of/exploiting "serial correlation"

Time Series Analysis. Smoothing Time Series. 2) assessment of/accounting for seasonality. 3) assessment of/exploiting serial correlation Time Series Analysis 2) assessment of/accounting for seasonality This (not surprisingly) concerns the analysis of data collected over time... weekly values, monthly values, quarterly values, yearly values,

More information

Control Systems I. Lecture 7: Feedback and the Root Locus method. Readings: Jacopo Tani. Institute for Dynamic Systems and Control D-MAVT ETH Zürich

Control Systems I. Lecture 7: Feedback and the Root Locus method. Readings: Jacopo Tani. Institute for Dynamic Systems and Control D-MAVT ETH Zürich Control Systems I Lecture 7: Feedback and the Root Locus method Readings: Jacopo Tani Institute for Dynamic Systems and Control D-MAVT ETH Zürich November 2, 2018 J. Tani, E. Frazzoli (ETH) Lecture 7:

More information

EEE 480 LAB EXPERIMENTS. K. Tsakalis. November 25, 2002

EEE 480 LAB EXPERIMENTS. K. Tsakalis. November 25, 2002 EEE 480 LAB EXPERIMENTS K. Tsakalis November 25, 2002 1. Introduction The following set of experiments aims to supplement the EEE 480 classroom instruction by providing a more detailed and hands-on experience

More information

APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS

APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS P. Date paresh.date@brunel.ac.uk Center for Analysis of Risk and Optimisation Modelling Applications, Department of Mathematical

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

CHAPTER 4. Interpolation

CHAPTER 4. Interpolation CHAPTER 4 Interpolation 4.1. Introduction We will cover sections 4.1 through 4.12 in the book. Read section 4.1 in the book on your own. The basic problem of one-dimensional interpolation is this: Given

More information

16.7 Multistep, Multivalue, and Predictor-Corrector Methods

16.7 Multistep, Multivalue, and Predictor-Corrector Methods 16.7 Multistep, Multivalue, and Predictor-Corrector Methods 747 } free_vector(ysav,1,nv); free_vector(yerr,1,nv); free_vector(x,1,kmaxx); free_vector(err,1,kmaxx); free_matrix(dfdy,1,nv,1,nv); free_vector(dfdx,1,nv);

More information

Math 409/509 (Spring 2011)

Math 409/509 (Spring 2011) Math 409/509 (Spring 2011) Instructor: Emre Mengi Study Guide for Homework 2 This homework concerns the root-finding problem and line-search algorithms for unconstrained optimization. Please don t hesitate

More information