Testing time series for nonlinearity


Statistics and Computing 11, 2001. © 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.

Testing time series for nonlinearity

MICHAEL SMALL, KEVIN JUDD and ALISTAIR MEES
Centre for Applied Dynamics and Optimization, Department of Mathematics and Statistics, University of Western Australia
M.A.Small@hw.ac.uk, kevin@maths.uwa.edu.au, alistair@maths.uwa.edu.au
Current address of the first author: Department of Physics, Heriot-Watt University, Riccarton, Edinburgh EH14 4AS, UK.

Received March 1998 and accepted February 1999

The technique of surrogate data analysis may be employed to test the hypothesis that an observed data set was generated by one of several specific classes of dynamical system. Current algorithms for surrogate data analysis enable one, in a generic way, to test for membership of the following three classes of dynamical system: (0) independent and identically distributed noise, (1) linearly filtered noise, and (2) a monotonic nonlinear transformation of linearly filtered noise. We show that one may apply statistics from nonlinear dynamical systems theory, in particular those derived from the correlation integral, as test statistics for the hypothesis that an observed time series is consistent with each of these three linear classes of dynamical system. Using statistics based on the correlation integral we show that it is also possible to test much broader (and not necessarily linear) hypotheses. We illustrate these methods with radial basis models and an algorithm to estimate the correlation dimension. By exploiting some special properties of this correlation dimension estimation algorithm we are able to test very specific hypotheses. Using these techniques we demonstrate that the respiratory control of human infants exhibits a quasi-periodic orbit (the obvious inspiratory/expiratory cycle) together with cyclic amplitude modulation. This cyclic amplitude modulation manifests as a stable focus in the first return map (equivalently, in the sequence of successive peaks).

Keywords: surrogate data analysis, nonlinear surrogates, correlation dimension, infant respiratory patterns, Floquet theory, first return map

1. Introduction

Nonlinear measures such as correlation dimension, Lyapunov exponents, and nonlinear prediction error are often applied to time series with the intention of identifying the presence of nonlinear, possibly chaotic, behavior (see for example Casdagli et al. (1996), Schmid and Dünki (1996), Small et al. (1999), Vibe and Vesin (1996) and the references therein). Estimating these quantities and making an unequivocal classification can prove difficult, and the method of surrogate data (Theiler et al. 1992), a version of bootstrapping, is often employed to clarify and quantify statements about the presence of nonlinear effects. Surrogate methods compare the value of (nonlinear) statistics for the data with the approximate distribution of those statistics for various classes of linear systems, so as to test whether the data has characteristics that are distinct from stochastic linear systems. Surrogate analysis provides a framework for testing specific hypotheses about the nature of the system responsible for the data; nonlinear measures are often used as the discriminating statistic in this hypothesis testing. In this paper we demonstrate that statistics derived from the correlation integral provide a natural choice of test statistic for surrogate data analysis. Using such statistics it is possible to test a broad range of linear and nonlinear hypotheses.
In particular, one may use correlation integral based statistics to test the hypothesis that the data came from one of many classes of nonlinear dynamical system. We illustrate these methods with an application to the analysis of human respiration. In the following section, we introduce some terminology and review some common methods of generating linear surrogates. Following this we introduce the correlation integral and discuss reconstruction from experimental data. In Section 4 we derive some results regarding the usefulness of correlation integral based statistics for nonlinear surrogate data analysis. Finally, we demonstrate the application of these methods to experimental data.

2. The rationale and language of surrogate data

The general procedure of surrogate data methods has been described by Theiler (Theiler 1995, Theiler et al. 1992, Theiler and Prichard 1996, Theiler and Rapp 1996) and Takens (1993). One first assumes that the data comes from some specific class of dynamical process, possibly fitting a parametric model to the data. One then generates surrogate data from this hypothetical process and calculates various statistics of the surrogates and of the original data. The surrogate data will have some distribution of values of each statistic and one can check whether the statistic of the original data is typical. If the original data has atypical statistics, we reject the hypothesis that the process that generated the original data is of the assumed class. One always progresses from simple and specific assumptions to broader and more sophisticated models.

Let $\phi$ be a specific hypothesis and $\mathcal{F}_\phi$ the set of all processes (or systems) consistent with that hypothesis. Let $z \in \mathbb{R}^N$ be a time series (consisting of $N$ scalar measurements) under consideration, and let $T : \mathbb{R}^N \to U$ be a statistic which we will use to test the hypothesis $\phi$ that $z$ was generated by some process $F \in \mathcal{F}_\phi$. Generally $U$ will be $\mathbb{R}$, and one can discriminate between the data $z$ and surrogates $z_i$ consistent with the hypothesis given the approximate probability density $p_{T,F}(t)$, i.e. the probability density of $T$ given $F$.

In a recent paper, Theiler and Prichard (1996) suggest that there are two fundamentally different types of test statistics: pivotal and non-pivotal.

Definition 1. A test statistic $T$ is pivotal if the probability distribution $p_{T,F}$ is the same for all processes $F$ consistent with the hypothesis; otherwise it is non-pivotal.

Similarly there are two different types of hypotheses: simple hypotheses and composite hypotheses.

Definition 2. A hypothesis is simple if the set of all processes consistent with the hypothesis, $\mathcal{F}_\phi$, is a singleton. Otherwise the hypothesis is composite.

When one has a composite hypothesis the problem is not only to generate surrogates consistent with $F$ (a particular process) but also to estimate $F \in \mathcal{F}_\phi$. Theiler argues that it is highly desirable to use a pivotal test statistic if the hypothesis is composite. In the case when the hypothesis is composite, one must specify $F$ unless the test statistic $T$ is pivotal, in which case $p_{T,F}$ is the same for all $F \in \mathcal{F}_\phi$. In cases when non-pivotal statistics are to be applied to hypotheses which are composite (as most interesting hypotheses are), Theiler suggests that a constrained realization scheme be employed.

Definition 3. Let $\hat{F} \in \mathcal{F}_\phi$ be the process estimated from the data $z$, and let $z_i$ be a surrogate data set generated from $F_i \in \mathcal{F}_\phi$. Let $\hat{F}_i \in \mathcal{F}_\phi$ be the process estimated from $z_i$. Then the surrogate $z_i$ is a constrained realization if $\hat{F}_i = \hat{F}$. Otherwise it is non-constrained.

That is, as well as generating surrogates that are typical realizations of a model of the data, one should ensure that the surrogates are realizations of a process that gives identical estimates of the parameters (of that process) to the estimates of those parameters from the data. In Small and Judd (1998c) we discuss constrained realizations in more detail.

2.1. Linear surrogates

Different types of surrogate data are generated to test membership of specific dynamical system classes, referred to as hypotheses. The three types of surrogates described by Theiler et al.
(1992), referred to as algorithms 0, 1 and 2, address the three hypotheses: (0) white noise; (1) linearly filtered white noise; (2) a monotonic nonlinear transformation of linearly filtered noise. Constrained realizations consistent with each of these hypotheses can be generated by (0) shuffling the data, (1) randomizing (or shuffling) the phases of the Fourier transform of the data, and (2) applying a phase randomizing (shuffling) procedure to amplitude adjusted Gaussian noise.

Algorithm 0. The surrogate $z_i$ is created by shuffling the order of the data $z$. Generate an i.i.d. Gaussian data set $y$ and reorder $z$ so that it has the same rank distribution as $y$.

Algorithm 1. An algorithm 1 surrogate $z_i$ is produced by applying algorithm 0 to the phases of the Fourier transform of $z$. Calculate $Z$, the Fourier transform of $z$. Either randomize the phases of $Z$ (preserving the complex conjugate pairs) or shuffle them by applying algorithm 0. Take the inverse Fourier transform to produce the surrogate $z_i$.

Algorithm 2. The procedure for generating surrogates consistent with algorithm 2 is the following (Theiler et al. 1992): start with the data set $z$, generate an i.i.d. Gaussian data set $y$ and reorder $y$ so that it has the same rank distribution as $z$. Then create an algorithm 1 surrogate $y_i$ of $y$ (either by shuffling or randomizing the phases of the Fourier transform of $y$). Finally, reorder the original data $z$ to create a surrogate $z_i$ which has the same rank distribution as $y_i$. Algorithm 2 surrogates are also referred to as amplitude adjusted Fourier transformed (AAFT) surrogates.

Surrogates generated by these three algorithms have become known as algorithm 0, 1 and 2 surrogates. Each of these hypotheses should be rejected for data generated by a nonlinear system. However, rejecting these hypotheses does not necessarily indicate the presence of a nonlinear system, only that it is unlikely that the data is generated by a monotonic nonlinear transformation of linearly filtered noise.
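The three procedures translate directly into a few lines of array manipulation. The following Python sketch is our own illustration of the published algorithms (the function names, the use of NumPy, and the phase-randomization via the real FFT are our choices), not the authors' code:

```python
import numpy as np

def algorithm0(z, rng=np.random.default_rng()):
    """Algorithm 0: reorder z so that it has the rank distribution of i.i.d. Gaussian noise."""
    y = rng.standard_normal(len(z))
    return np.sort(z)[np.argsort(np.argsort(y))]

def algorithm1(z, rng=np.random.default_rng()):
    """Algorithm 1: randomize the Fourier phases of z, preserving the amplitude spectrum."""
    Z = np.fft.rfft(z)
    phases = rng.uniform(0, 2 * np.pi, len(Z))
    phases[0] = 0.0                      # keep the mean (zero-frequency term) real
    if len(z) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(Z) * np.exp(1j * phases), n=len(z))

def algorithm2(z, rng=np.random.default_rng()):
    """Algorithm 2 (AAFT): amplitude adjusted Fourier transform surrogate."""
    # Gaussian data set with the same rank ordering as z
    y = np.sort(rng.standard_normal(len(z)))[np.argsort(np.argsort(z))]
    y_surr = algorithm1(y, rng)          # phase-randomize the Gaussianized data
    # reorder the original data to have the rank distribution of the phase-randomized series
    return np.sort(z)[np.argsort(np.argsort(y_surr))]
```

Each function returns one surrogate; repeated calls give an ensemble against which a test statistic can be compared.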

The system could, for example, involve a non-monotonic transformation, or non-Gaussian or state dependent noise.

In the case of an approximately periodic signal it would be useful to be able to determine the presence of temporal correlation between cycles. In recent papers Theiler (1995) and Theiler and Rapp (1996) address this problem and propose that a logical choice of surrogate for strongly periodic data should also be periodic. To achieve this Theiler decomposes the signal into cycles and shuffles the individual cycles. Theiler's hypothesis for strongly periodic signals is rather simple, but in many ways powerful. Theiler proposes that surrogates generated by shuffling the cycles address the hypothesis that there is no dynamical correlation between cycles.

3. The correlation integral

The correlation integral is a property of the spatial distribution of the system variables of a dynamical system. To estimate the correlation integral from an experimental time series it is necessary to reconstruct the dynamics of the dynamical system which generated the time series. To do this we employ the reconstruction technique of time delay embedding. In this section we briefly review time delay embedding, the correlation integral, and an algorithm which we employ to estimate the correlation dimension.

3.1. Reconstruction

Attractor reconstruction using the method of time delays is now widely applied. We will briefly describe the key points of this technique and the methods we utilize to select an appropriate embedding strategy. Let $M$ be a compact $m$-dimensional manifold, $Z$ a $C^2$ vector field on $M$, and $h : M \to \mathbb{R}$ a $C^2$ function (the measurement function). The vector field $Z$ gives rise to an associated evolution operator (flow) $\phi_t : M \to M$. If $z_t \in M$ is the state at time $t$, then the state at some later time $t + \tau$ is given by $z_{t+\tau} = \phi_\tau(z_t)$. Observations of this state can be made so that at time $t$ we observe $h(z_t) \in \mathbb{R}$ and at time $t + \tau$ we can make a second measurement $h(\phi_\tau(z_t)) = h(z_{t+\tau})$. Takens' embedding theorem (Takens 1981) guarantees that, given the above situation, the map $\Phi_{Z,h} : M \to \mathbb{R}^{2m+1}$ defined by

$$\Phi_{Z,h}(z_t) := (h(z_t), h(\phi_\tau(z_t)), \ldots, h(\phi_{2m\tau}(z_t))) = (h(z_t), h(z_{t+\tau}), \ldots, h(z_{t+2m\tau})) \quad (1)$$

is an embedding. By embedding we mean that the asymptotic behaviors of $\Phi_{Z,h}(z_t)$ and $z_t$ are diffeomorphic. We can apply this result to reconstruct, from a time series of experimental observations $\{y_t\}_{t=1}^{N}$ (where $y_t = h(z_t)$), a system which is (asymptotically) diffeomorphic to the one which generated the underlying dynamics (subject to the usual restrictions of finite data and observational error). We produce from our scalar time series $y_1, y_2, y_3, \ldots, y_N$ a $d_e$-dimensional vector time series via the embedding (1):

$$v_t = (y_{t-\tau}, y_{t-2\tau}, \ldots, y_{t-d_e\tau}), \qquad t > d_e\tau.$$

To perform this transformation one must first identify the embedding lag $\tau$ and the embedding dimension $d_e$. A sufficient condition on $d_e$ is that it exceed $2m + 1$, where $m$ is the attractor dimension. However, to estimate $m$, one must already have embedded the time series. An embedding therefore depends on two parameters, the lag $\tau$ and the embedding dimension $d_e$, and for an embedding to be suitable for successful estimation of dimension and modeling of the system dynamics one must choose suitable values of both; we describe their selection in the following paragraphs.
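Before turning to how $\tau$ and $d_e$ are chosen, the reconstruction step itself is only a few lines of array manipulation. The sketch below is our own illustration (the function names are not from the paper); it builds the matrix of delay vectors $v_t$ for given parameters and includes one of the lag heuristics discussed in the next paragraphs, the first zero crossing of the autocorrelation function.

```python
import numpy as np

def delay_embed(y, dim, lag):
    """Return the matrix of delay vectors v_t = (y[t], y[t-lag], ..., y[t-(dim-1)*lag])."""
    y = np.asarray(y, dtype=float)
    n = len(y) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("time series too short for this embedding")
    return np.column_stack([y[(dim - 1 - k) * lag : (dim - 1 - k) * lag + n]
                            for k in range(dim)])

def first_zero_autocorrelation(y):
    """Heuristic lag: index of the first zero crossing of the autocorrelation function."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    acf = np.correlate(y, y, mode="full")[len(y) - 1:]
    acf /= acf[0]
    crossings = np.where(acf <= 0)[0]
    return int(crossings[0]) if len(crossings) else 1
```

For approximately periodic data this heuristic returns a lag close to one quarter of the quasi-period, which is the shortcut recommended in the text.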
The following paragraphs discuss some commonly used methods to estimate the embedding lag $\tau$ and the embedding dimension $d_e$. Takens' embedding theorem (Noakes 1991, Takens 1981) and, more recently, the work of Ding et al. (1993) give sufficient conditions on $d_e$. Ding et al. give a sufficient condition on the value of $d_e$ necessary to estimate the correlation dimension of an attractor, not to avoid all possible self intersections. Unfortunately, the conditions require prior knowledge of the fractal dimension of the object under study. In practice one could guess a suitable value for $d_e$ by successively embedding in higher dimensions and looking for consistency of results; this is the method that is generally employed. However, other methods, such as the false nearest neighbor technique (Farmer et al. 1983, Theiler 1990, Kennel et al. 1992), are now available to suggest the value of $d_e$.

Any value of $\tau$ is theoretically acceptable, but the shape of the embedded time series will depend critically on the choice of $\tau$, and it is wise to select a value of $\tau$ which separates the data as much as possible. Studies in nonlinear time series (Abarbanel et al. 1993) suggest the first minimum of the mutual information criterion (Rissanen 1989), the first zero of the autocorrelation function (Priestly 1989), or one of several other criteria to choose $\tau$. Our experience and numerical experiments suggest that selecting a lag approximately equal to one quarter of the quasi-period of the time series produces results comparable to the autocorrelation function but is more expedient. Note that the first zero of the autocorrelation function will be approximately the same as one quarter of the quasi-period if the data is approximately periodic. Numerical experiments with infant respiratory data (Small et al. 1999) have shown that either of these methods produces superior results to the mutual information criterion (MIC).

3.2. Correlation dimension and the correlation integral

To define the correlation dimension in a meaningful way we generalize the concept of integer dimension to fractal objects with non-integer dimension.

[Fig. 1. Correlation dimension from the distribution of inter-point distances. The logarithm of the distribution of inter-point distances, and an approximation to its derivative, for one of our data sets embedded in three dimensions. The approximate derivative is a smoothed numerical difference. This calculation used a recording of infant abdominal movement during natural sleep. The data was embedded in 3 dimensions with a lag of 19 data points (380 ms). Even with well behaved data and a smooth, approximately monotonic distribution of inter-point distances, the choice of scaling region is still subjective.]

In dimensions of one, two, or three it is easily established, and intuitively obvious, that a measure of volume $V$ (e.g. length, area or volume) varies as

$$V \propto \varepsilon^{d}, \quad (2)$$

where $\varepsilon$ is a length scale (e.g. the length of a cube's side or the radius of a sphere) and $d$ is the dimension of the object. For a general fractal it is natural to assume that a relation like equation (2) holds true, in which case its dimension is given by

$$d \approx \frac{\log V}{\log \varepsilon}. \quad (3)$$

Let $\{v_t\}_{t=1}^{N}$ be an embedding of a time series in $\mathbb{R}^{d_e}$. Define the correlation function $C_N(\varepsilon)$ by

$$C_N(\varepsilon) = \binom{N}{2}^{-1} \sum_{i<j} I(\|v_i - v_j\| < \varepsilon). \quad (4)$$

Here $I(X)$ is a function whose value is 1 if condition $X$ is satisfied and 0 otherwise, and $\|\cdot\|$ is the usual distance function in $\mathbb{R}^{d_e}$. The sum $\sum_i I(\|v_i - v_j\| < \varepsilon)$ is the number of points within a distance $\varepsilon$ of $v_j$. If the points $v_i$ are distributed uniformly within an object, then this sum is proportional to the volume of the intersection of a sphere of radius $\varepsilon$ with the object, and $C_N(\varepsilon)$ is proportional to the average of such volumes. Comparing with equation (2) one expects that $C_N(\varepsilon) \propto \varepsilon^{d_c}$, where $d_c$ is the dimension of the object. The correlation integral is defined as $\lim_{N\to\infty} C_N(\varepsilon)$. Define the correlation dimension $d_c$ by

$$d_c = \lim_{\varepsilon \to 0} \lim_{N \to \infty} \frac{\log C_N(\varepsilon)}{\log \varepsilon}. \quad (5)$$

The method most often employed to estimate the correlation dimension is the Grassberger-Procaccia algorithm (Grassberger and Procaccia 1983). In this method one calculates the correlation function and plots $\log C_N(\varepsilon)$ against $\log \varepsilon$. The gradient of this graph in the limit as $\varepsilon \to 0$ should approach the correlation dimension. Unfortunately, when using a finite amount of data the graph will jump about irregularly for small values of $\varepsilon$. To avoid this one instead looks at the behavior of this graph for moderately small $\varepsilon$. A typical correlation integral plot will contain a scaling region over which the slope of $\log C_N(\varepsilon)$ remains relatively constant. A common way to examine the slope in the scaling region is to numerically differentiate (or fit a line to) the plot of $\log C_N(\varepsilon)$ against $\log \varepsilon$. This ought to produce a function which is constant over the scaling region, and its value on this region should be the correlation dimension (see Fig. 1). Unfortunately, as Judd (1992) points out, there are several problems with this procedure. The most obvious of these is that the choice of the scaling region is entirely subjective (Fig. 1). For many data sets a slight change in the region used can lead to substantially different results.

Judd assumes that locally the attractor can be modeled as the cross product of a bounded connected subset of a smooth manifold and a Cantor-like set. Judd demonstrates that for such objects (which include smooth manifolds and many fractals) a better description of $C_N(\varepsilon)$ is that, for $\varepsilon$ less than some $\varepsilon_0$,

$$C_N(\varepsilon) \propto \varepsilon^{d_c} q(\varepsilon),$$

where $q(\varepsilon)$ is a polynomial of order $t$, the topological dimension of the set.
Consequently we consider the correlation dimension $d_c$ as a function of $\varepsilon_0$, write $d_c(\varepsilon_0)$, and call this the dimension at scale $\varepsilon_0$.
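As a concrete illustration of equations (4) and (5), the sketch below computes $C_N(\varepsilon)$ from a set of delay vectors and estimates the correlation dimension as the slope of $\log C_N(\varepsilon)$ versus $\log \varepsilon$ over a user-chosen scaling region. This is the basic Grassberger-Procaccia estimate whose subjectivity is criticized in the text, not Judd's scale-dependent algorithm; the function names and the brute-force pairwise-distance computation are our own simplifications.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_sum(vectors, epsilons):
    """C_N(eps): fraction of distinct pairs (i, j) with ||v_i - v_j|| < eps, for each eps."""
    d = pdist(vectors)                         # all pairwise distances
    return np.array([np.mean(d < eps) for eps in epsilons])

def grassberger_procaccia_dimension(vectors, eps_lo, eps_hi, n_eps=20):
    """Slope of log C_N(eps) vs log eps over the (subjectively chosen) scaling region."""
    epsilons = np.logspace(np.log10(eps_lo), np.log10(eps_hi), n_eps)
    c = correlation_sum(vectors, epsilons)
    mask = c > 0                               # avoid log(0) at very small scales
    slope, _ = np.polyfit(np.log(epsilons[mask]), np.log(c[mask]), 1)
    return slope
```

Applied to the output of the delay-embedding sketch in Section 3.1, this yields the kind of scaling-region estimate shown in Fig. 1; Judd's algorithm instead fits $C_N(\varepsilon) \propto \varepsilon^{d_c} q(\varepsilon)$ for all $\varepsilon$ below a single scale $\varepsilon_0$.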

The Grassberger-Procaccia method assumes that $C_N(\varepsilon) \propto \varepsilon^{d_c}$, but this new method allows for the presence of a further polynomial term that takes into account variations of the slope within and outside of a scaling region. This new method dispenses with the need for a scaling region and substitutes a single scale parameter $\varepsilon_0$. This has an interesting benefit. For many natural objects the dimension is not the same at all length scales. If one observes a large river stone, its surface at its largest length scale is very nearly two-dimensional, but at smaller length scales one can discern the details of grains which add to the complexity and increase the dimension at smaller scales. Consequently, it is natural to consider dimension $d_c$ as a function of $\varepsilon_0$ and write $d_c(\varepsilon_0)$. By allowing our dimension to be a function of scale we produce estimates that are both more accurate and more informative. We avoid some of the approximation necessary to define correlation dimension as a single number and we can extract more detailed information about the changes in dimension with scale. For more detail on this correlation dimension estimation algorithm see Judd (1992, 1994); for an alternative treatment of this algorithm see, for example, Ikeguchi and Aihara (1997).

4. On pivotal statistics

Surrogate analysis enables us to test whether the dynamics are consistent with linearly filtered noise or a nonlinear dynamical system. Surrogate data analysis is not, however, entirely straightforward. Theiler's original work on surrogate methods (Theiler et al. 1992) suggested a hierarchy of hypotheses that should be tested with a battery of test statistics. More recent work (Theiler 1995, Theiler and Rapp 1996) has demonstrated that not all test statistics are equally good. Furthermore, not all hypotheses are as straightforward, or interesting, as they may appear. It is possible that one of the surrogate generating algorithms is flawed (Schreiber and Schmitz 1996), and the choice of test statistic and surrogate generation algorithm should be made very carefully (Theiler and Prichard 1996). In a previous publication we have shown that correlation dimension, in particular, offers a good choice of test statistic for surrogate data analysis (Small and Judd 1998c). Using this we have built nonlinear models to test fairly broad hypotheses (Small and Judd 1997, 1998b, Small et al. 1999).

Existing surrogate methods are largely non-parametric and concerned with rejecting the hypothesis that a given data set is generated by some form of linear system. We suggest a new type of surrogate generation method which is both parametric and nonlinear. Identifying a given time series as either chaotic or simply nonlinear is beyond the scope of this paper. We address the simpler set of hypotheses that the data is consistent with a noise driven nonlinear system of a particular form. We model the data using methods described in Judd and Mees (1995) and Small and Judd (1998a) and generate noise driven simulations from that model. It is not necessary to employ this particular modeling algorithm. Any algorithm which can produce independent realizations from a given data set may be applied in exactly the way we outline here. Any other model or modeling algorithm will require only a slight alteration to our methods.
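The essential idea of these nonlinear surrogates is simply "fit a nonlinear model, then run it forward with dynamic noise". The authors use cylindrical basis models selected by minimum description length; the sketch below substitutes a much simpler fixed radial basis model fitted by least squares, purely to illustrate the mechanics of generating a noise driven simulation as a surrogate (all names and the kernel-width choice are our own assumptions, not the paper's algorithm).

```python
import numpy as np

def fit_rbf_surrogate_model(y, dim=3, lag=5, n_centers=20, seed=0):
    """Least-squares fit of y[t] ~ f(y[t-lag], ..., y[t-dim*lag]) with Gaussian radial basis functions."""
    rng = np.random.default_rng(seed)
    idx = np.arange(dim * lag, len(y))
    X = np.column_stack([y[idx - (k + 1) * lag] for k in range(dim)])   # delay vectors
    t = y[idx]                                                          # one-step-ahead targets
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    width = np.mean(np.std(X, axis=0))                                  # crude kernel width
    def design(P):
        D2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.hstack([np.exp(-D2 / (2 * width ** 2)), np.ones((len(P), 1))])
    w, *_ = np.linalg.lstsq(design(X), t, rcond=None)
    resid_std = np.std(t - design(X) @ w)
    return centers, width, w, resid_std

def noise_driven_simulation(y, model, n, dim=3, lag=5, seed=1):
    """Generate a surrogate as a noise-driven free run of the fitted model."""
    centers, width, w, resid_std = model
    rng = np.random.default_rng(seed)
    out = list(y[: dim * lag])                       # seed the simulation with real data
    for _ in range(n - dim * lag):
        v = np.array([out[-(k + 1) * lag] for k in range(dim)])
        D2 = ((v - centers) ** 2).sum(axis=1)
        phi = np.append(np.exp(-D2 / (2 * width ** 2)), 1.0)
        out.append(phi @ w + resid_std * rng.standard_normal())
    return np.array(out)
```

Because such a model is built from the data by a (partly stochastic) fitting procedure, these surrogates are not constrained realizations, which is precisely why a pivotal statistic is required.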
Using correlation dimension (or another nonlinear statistic) we are then able to determine which properties are common to both data and model.

4.1. The pivotalness of dynamic measures

In this section we will show that nonlinear models can be used to generate surrogates (simulations of the data) to test various nonlinear hypotheses. Unlike Theiler's algorithm 0, 1 and 2 surrogates, the hypothesis being tested is not known a priori, but will be determined by the pivotalness of the test statistic. To illustrate our approach we choose to use correlation dimension. Other statistics, particularly measures derived from dynamical systems theory that are invariant under diffeomorphisms and can be reliably estimated (i.e. any quantity one can usually estimate from a time-delay embedding), may serve equally well. It is important to show that one's estimate is close to the actual quantity. We choose to use correlation dimension as a test statistic because we have a reliable algorithm to estimate it, which is well understood (Galka et al. 1998, Judd 1992, 1994).

Correlation dimension is a very complicated statistic. When using correlation dimension to test composite hypotheses we would want to ensure that it is pivotal or asymptotically pivotal. Since correlation dimension is a measure of the complexity and the number of active degrees of freedom of a system, this is a relatively simple matter. Takens (1993) shows that the correlation integral $\lim_{N\to\infty} C_N(\varepsilon_0)$ is independent of the embedding and observation function. Correlation dimension will be pivotal provided all $F \in \mathcal{F}_\phi$ have the same number of degrees of freedom. However, one must ensure that the estimation algorithm provides an accurate estimate of correlation dimension. The linear systems are all forms of filtered noise, and effectively infinite dimensional. The correlation dimension estimates of a monotonic nonlinear transformation of linearly filtered noise will have the same probability distribution regardless of exactly what the power spectrum is (see Small and Judd (1998c)).

For classes of linear systems there are some useful and widely applied algorithms to generate surrogate data consistent with three hypotheses (Theiler et al. 1992). These hypotheses were described in Section 2.1. The power of these surrogate generation algorithms is that they will generate constrained realization surrogates. Theiler and Prichard (1996) argue that by using these algorithms to generate surrogates, one is free to use any statistic one wishes. On the other hand, if one does not use such methods to generate surrogates, it is necessary to select a statistic which has exactly the same distribution of statistic values for all realizations consistent with the hypothesis being tested. When generating nonlinear surrogates, we suggest that it may be easier to use a pivotal test statistic, and choose realizations of any process consistent with that hypothesis as representative.

With such a statistic it would be possible to build a nonlinear model (usually with reference to the data) and generate (noise driven) simulations from that model as surrogates. If $\mathcal{F}_\phi$ is the set of all noise driven processes then $d_c(\varepsilon)$ will not be pivotal. However, if we restrict ourselves to $\mathcal{F}_{\phi'} \subset \mathcal{F}_\phi$, where $T$ is pivotal on $\mathcal{F}_{\phi'}$, then the problem is resolved. It is important that we can identify the set $\mathcal{F}_{\phi'}$ of processes for which $T$ is pivotal. However, with this approach it is necessary to check that the probability distribution of the test statistic is independent of the particular model we have built, or to determine for which models the distribution is the same. We can only test a hypothesis as broad as the set of all processes which have the same probability distribution of test statistic values. For example, if the distribution of the test statistic is different for every model, then the only hypothesis we can test is that the data is consistent with a specific model. However, if all models within some class (for example, two dimensional periodic orbits) have the same distribution of statistic values, then the hypothesis which we can test with realizations from any one of these models is much broader (for example, the hypothesis that the system has a two dimensional periodic orbit).

The most useful test statistics are those for which the probability distribution of statistic values is the same for all models (processes) consistent with the hypothesis being tested. Many statistical measures derived from dynamical systems theory are such statistics; in this paper we focus on correlation dimension. Neither correlation dimension nor the algorithm we employ to estimate it is necessarily unique in its suitability as a test statistic. The rest of this paper presents some theoretical and experimental results concerning the application of correlation dimension as a test statistic for specific (linear and nonlinear) hypotheses. In Small and Judd (1998c) we have shown that correlation dimension is a useful test statistic for linear surrogates generated by traditional (Theiler et al. 1992) or more naive (parametric) methods, as well as for nonlinear surrogates generated as noise driven simulations of nonlinear parametric models. In this paper we demonstrate the application of correlation dimension as a test statistic for nonlinear hypothesis testing with specific experimental data sets. In Section 5 we apply these methods to some experimental data collected from sleeping infants. In Section 4.2 we present some conditions under which a correlation integral based statistic will be pivotal.

4.2. Employing the correlation integral to estimate pivotal test statistics

The arguments of the previous section apply equally to surrogates consistent with Theiler's classes of linear surrogates and to nonlinear, parametric, model based surrogates. The linear processes consistent with the hypotheses addressed by algorithms 0, 1 and 2 are all forms of filtered noise, and hence are infinite dimensional. That is, the correlation dimension will be infinite. We will argue that a dimension estimation algorithm which relies on a time delay embedding will (or should) produce the same probability density of estimates of correlation dimension for any data set consistent with one of these hypotheses. To do this in general we could invoke Takens' embedding theorem (Takens 1981). Takens' theorem ensures that a time delay embedding scheme will produce a faithful reconstruction of an attractor (provided $d_e > 2d_c + 1$) if the measurement function is $C^2$. When $d_c$ is finite, one simply needs a sufficiently large value of $d_e$.
In the case when $d_c$ is infinite, Takens' theorem no longer applies. However, if $d_c$ is infinite (or indeed if $d_c > d_e$) the embedded time series will fill the embedding space. If the time series is of infinite length then the dimension $d_c$ of the embedded time series will be equal to $d_e$. If the time series is finite then the dimension $d_c$ of the embedded time series will be less than $d_e$. This is particularly likely for a short time series and large embedding dimension. For a moderately small embedding dimension this difference is typically not great and is dependent on the estimation algorithm and the length of the time series, and independent of the particular realization. Hence, if the correlation dimension $d_c$ of all surrogates consistent with the hypothesis under consideration exceeds $d_e$, then correlation dimension is a pivotal test statistic for that value of $d_e$.

An examination of the pivotalness of the correlation integral (and therefore correlation dimension) can be found in a recent paper of Takens (1993). Takens' approach is to observe that, if $\rho$ and $\rho'$ are two metrics in the embedded space $X$ and $k$ is some constant such that for all $x, y \in X$

$$k^{-1}\rho(x, y) \le \rho'(x, y) \le k\rho(x, y), \quad (6)$$

then the correlation integral $\lim_{N\to\infty} C_N(\varepsilon)$ with respect to either metric is similarly bounded and hence the correlation dimension with respect to each metric will be the same. This result is independent of the conditions of Takens' embedding theorem (i.e. that $n > 2d_c + 1$ for $X = \mathbb{R}^n$). Hence if we (for example) embed a stochastic signal in $\mathbb{R}^n$ the correlation dimension will have the same value with respect to the two different metrics $\rho$ and $\rho'$.

To show that $d_c$ is pivotal for the various linear hypotheses addressed by algorithms 0, 1 and 2 it is only necessary to show that various transformations can be applied to a realization of such processes which have the effect of producing i.i.d. noise and are equivalent to a bounded change of norm as in (6). Our approach is to show that surrogates consistent with each of the three standard linear hypotheses are at most a $C^2$ function from Gaussian noise $N(0, 1)$. A $C^2$ function on a bounded set (a bounded attractor or a finite time series) distorts distances only by a bounded factor (as in equation (6)) and so the correlation dimension is invariant. We therefore have the following result.

Proposition 1. The correlation dimension $d_c$ is a pivotal test statistic for a hypothesis $\phi$ if, for all $F_1, F_2 \in \mathcal{F}_\phi$ and embeddings $\xi_{1,2} : \mathbb{R} \to X_{1,2}$, there exists a $C^2$ function $f : X_1 \to X_2$ such that $f(\xi_1(F_1(t))) = \xi_2(F_2(t))$ for all $t$.

Proof: The proof of this proposition is in outline as follows.

Let $F_1, F_2 \in \mathcal{F}_\phi$ be particular processes consistent with a given hypothesis and $F_1(t)$ and $F_2(t)$ be realizations of those processes. We have that $f(\xi_1(F_1(t))) = \xi_2(F_2(t))$ for all $t$, and so if $\xi_1(x_1), \xi_1(y_1) \in X_1$ and $\xi_2(x_2), \xi_2(y_2) \in X_2$ are points on the embeddings $\xi_1$ and $\xi_2$ of $F_1(t)$ and $F_2(t)$ respectively, then $f(\xi_1(x_1)) = \xi_2(x_2)$ and $f(\xi_1(y_1)) = \xi_2(y_2)$. Let $\rho_2$ be a distance function on $X_2$, and define $\rho_1(\xi_1(x_1), \xi_1(y_1)) := \rho_2(f(\xi_1(x_1)), f(\xi_1(y_1))) = \rho_2(\xi_2(x_2), \xi_2(y_2))$. Clearly (6) is satisfied, and so $\lim_{N\to\infty} C_N(\varepsilon)$ on $X_1$ and $X_2$ are similarly bounded, and therefore the correlation dimensions of $X_1$ and $X_2$ are identical.

Hence, if any particular realization of a surrogate consistent with a given hypothesis is a $C^2$ function from i.i.d. noise (which in turn is a $C^2$ function from Gaussian noise), then correlation dimension is a pivotal statistic for that hypothesis.

5. Examples

If a set of data is inconsistent with each of the three linear hypotheses addressed by algorithms 0, 1 and 2, one may wish to ask more specific questions: is the data consistent with (for example) a noise driven periodic orbit? A hypothesis similar to this is treated by Theiler and Rapp (Theiler 1995, Theiler and Rapp 1996), and we have applied this method elsewhere (Small and Judd 1998b). In this section we focus on more general hypotheses. In Small et al. (1999) we test the hypothesis that infant respiration during quiet sleep is distinct from a noise driven (or chaotic) quasi-periodic or toroidal attractor (with at least two identifiable periods). Such an apparently abstract hypothesis can have real value: these results have been confirmed by observations of cyclic amplitude modulation in the breathing of sleeping infants during quiet sleep (Small et al. 1996, 1999) and in the resting respiration of adults at high altitude (Waggener et al. 1984).

To test such complex hypotheses we build cylindrical basis models using a minimum description length selection criterion (Judd and Mees 1995, Small and Judd 1998a) and generate noise driven simulations (surrogate data sets) from these models. This modeling scheme has been successful in modeling a wide variety of nonlinear phenomena. However, it involves a stochastic search algorithm. This method of surrogate generation does not produce surrogates that can be used with a constrained realization scheme (the modeling algorithm described in Judd and Mees (1995) and Small and Judd (1998a) is partially stochastic), and so a pivotal statistic is needed.

It is important to determine whether the data is generated by a system consistent with a specific model or with a general class of models. To do this we need to determine exactly how representative a particular model is for a given test statistic: how big is the set $\mathcal{F}_{\phi'}$ for which $T$ is pivotal? By comparing a data set and surrogates generated by a specific model, are we just testing the hypothesis that a system consistent with this specific model generated the data, or can we infer a broader class of models? In either case (unlike constrained realization linear surrogates), it is likely that the hypothesis being tested will be determined by the results of the modeling procedure and will therefore depend on the particular data set one has. The hypothesis one can test will be as broad as the class of all systems with metric bounded by equation (6) (in the case of correlation integral based test statistics); in particular, the proposition of Section 4.2 holds. We wish for $T$ to be a pivotal test statistic for the hypothesis $\phi$.
But $\phi$ is a broad class of nonlinear dynamical systems. For example, if $\mathcal{F}_\phi$ is the set of all noise driven processes then $d_c(\varepsilon)$ will not be pivotal. However, if we are able to restrict ourselves to $\mathcal{F}_{\phi'} \subset \mathcal{F}_\phi$, where $T$ is pivotal on $\mathcal{F}_{\phi'}$, then the problem is resolved. To do this we simply rephrase the hypothesis to be that the data is generated by a noise driven nonlinear function (modeled by a cylindrical basis model) of dimension $d$. For example, this allows us to test whether the data is generated by a periodic orbit with 2 degrees of freedom driven by Gaussian noise. Furthermore, the scale dependent properties of our estimate of $d_c(\varepsilon_0)$ allow some sensitivity to the size (relative to the size of the data) of structure of a particular dimension. This is a much more useful hypothesis than "the system is noisy and nonlinear": if that were our hypothesis, then what would be the alternative?

5.1. Calculations

We wish to test the hypothesis that a data set is consistent with a particular class of nonlinear dynamical system. First we must check that a particular model is representative of the general class of nonlinear models by calculating probability estimates of the test statistic for the particular model we wish to test and for the general class of models. Then we need to compare the value of the test statistic for the data to that probability distribution; a minimal numerical sketch of this comparison step is given below.

[Fig. 2. Experimental data. The abdominal movement measured with inductance plethysmography for a 2 month old male child in quiet (stage 3-4) sleep. The 1600 data points were sampled at 12.5 Hz and digitized using a 12 bit analogue to digital convertor during a sleep study at Princess Margaret Hospital for Children, Subiaco, Western Australia.]

Figures 3 and 4 give examples of this method for the experimental data in Fig. 2.
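The comparison itself can be summarized in a few lines. The sketch below is our own illustration (the rank-based criterion is one common way of quantifying "atypical", not necessarily the exact procedure used for Figs. 3 and 4): it computes a test statistic for the data and for an ensemble of surrogates and reports where the data falls within the surrogate distribution.

```python
import numpy as np

def surrogate_test(data, make_surrogate, statistic, n_surrogates=50, seed=0):
    """Compare statistic(data) with its distribution over surrogates.

    make_surrogate: callable (data, rng) -> one surrogate series, e.g. algorithm2
    from the earlier sketch, or a wrapper around a noise-driven model simulation.
    statistic: callable mapping a series to a scalar, e.g. a correlation dimension
    estimate at some scale.
    """
    rng = np.random.default_rng(seed)
    surrogate_values = np.array([statistic(make_surrogate(data, rng))
                                 for _ in range(n_surrogates)])
    data_value = statistic(data)
    # rank of the data value among the surrogate values; extreme ranks suggest rejection
    rank = int(np.sum(surrogate_values < data_value))
    return data_value, surrogate_values, rank
```

If the data value sits well inside the surrogate distribution the hypothesis is not rejected; if it is extreme relative to the ensemble, the hypothesis is rejected at a significance level set by the ensemble size.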

[Fig. 3. Probability distribution of correlation dimension estimates for linear surrogates of the experimental data. The contour plots represent the probability distribution of the correlation dimension estimate for various values of $\varepsilon_0$; the data used in this calculation is illustrated in Fig. 2. The panels are probability density estimates for surrogates generated from: (i) constrained realization algorithm 2 surrogates; (ii) a monotonic nonlinear transformation of a parametric linear model. In each calculation 50 realizations of 1600 points were generated, and their correlation dimension calculated for $d_e = 3$. The value of $d_c(\varepsilon_0)$ for the data is shown on each plot as a dashed line. The two distributions are practically identical, despite the fact that only one of them generates constrained realisations. The correlation dimension of the data is clearly distinct from this probability distribution.]

We clearly reject the linear hypothesis associated with the calculations of Fig. 3, but are unable to reject the nonlinear hypothesis of Fig. 4. In Small et al. (1999) we have performed similar calculations for 27 observations from 10 infants, and in Small and Judd (1998b) for 14 observations from 14 infants. These calculations concluded that the simulations produced by cylindrical basis models have distributions of correlation dimension estimates which match that of the data. The data is clearly distinct from the linear surrogates but consistent with the nonlinear surrogates. However, the probability density function of the correlation dimension estimate is the same for the constrained (algorithm 2) surrogates and the simple parametric surrogates. The parametric surrogates are generated by rescaling the data to be Gaussian, building a reduced autoregressive model (as described in Judd and Mees (1995) and Small and Judd (1999)) from the data, generating a noise driven simulation of that model, and rescaling it to have the same distribution as the data. This essentially requires a parameterized estimate of the monotonic nonlinear transformation (parameterized by the data) and a parametric linear model.

5.2. Inference

We have established that radial basis models are consistent with the data. The next important issue is to determine which properties are exhibited by these models, and (preferably) only by models which exhibit this distribution of statistic values. If a property is exhibited by models with the observed distribution of values of the statistic then we can infer that this property is consistent with the data (just as the model is). If a property is exhibited only by models with the observed distribution of statistic values then we can conclude that this property is necessary for a system to generate data consistent with this hypothesis.

The most obvious feature is correlation dimension. The correlation dimension of a system should be evident in an estimate of the correlation dimension of data from that system. However, because of the scale dependent properties of the estimate of correlation dimension which we employ, we are able to make more specific observations. For example, Fig. 5 shows the distribution of correlation dimension estimates for two monotonic nonlinear transformations of linearly filtered noise. The shapes of these distributions are quite different and are a characteristic property of the estimates of correlation dimension.

[Fig. 4. Probability distribution of correlation dimension estimates for nonlinear surrogates of the experimental data. The contour plots represent the probability distribution of the correlation dimension estimate for various values of $\varepsilon_0$; the data used in this calculation is illustrated in Fig. 2. The panels are probability density estimates for surrogates generated from: (i) realizations of distinct models; (ii) realizations of the one model used in (i) with the maximum value of correlation dimension ($d_c(\varepsilon_0)$ for $\log \varepsilon_0 = 1.8$). In each calculation 5 realizations of 1600 points were generated, and their correlation dimension calculated for $d_e = 3$. The value of $d_c(\varepsilon_0)$ for the data is shown on each plot as a dashed line. The two distributions are practically identical, despite the fact that the model used in panel (ii) had the highest correlation dimension estimate of the distribution of models in (i). The correlation dimension of the data is clearly similar to this probability distribution.]

The transformation $g(x) = x^3$ has the effect of compressing the blob of points produced by realizations of the linear stochastic process; this decreases the scale $\varepsilon_0$ and increases the correlation dimension at this scale to almost the embedding dimension. The second transformation, $g(x) = \mathrm{sign}(x)|x|^{1/4}$, stretches the data out, creating a shell of points and a relatively empty interior. Hence the two distinct values of correlation dimension: for large length scales the structure is low dimensional (it is only the shell), while for smaller observation scales one observes the $d_e$-dimensional behavior within the surface of that shell.

A property characteristic of these models is the periodic orbit they exhibit. The number of degrees of freedom of that periodic orbit is characterized by the correlation dimension $\lim_{\varepsilon_0 \to 0} d_c(\varepsilon_0)$. For large values of $\varepsilon_0$, $d_c(\varepsilon_0)$ tells us more about the general structure and distribution of the attractor, for example the stability of this periodic orbit. Although other models with different probability distributions of correlation dimension may exhibit periodic orbits with similar stability properties, these properties are exhibited by all the models we have built from this data. Hence this is a property of these models, but not only of these models. To infer the stability of the respiratory motion we apply Floquet theory to analyze the stability of the periodic orbit of the models.

5.3. Floquet theory

From a data set we can build a map $F$, an approximation to the dynamics of respiration. This is the (cylindrical basis) model. Let $z$ be a point on a periodic orbit of period $p$, that is

$$z = F^p(z) = \underbrace{F \circ F \circ \cdots \circ F}_{p\ \text{times}}(z).$$

Hence $z$ is a fixed point of the map $F^p$ and we can calculate the eigenvectors and eigenvalues of that fixed point. These eigenvectors and eigenvalues correspond exactly to the linearized dynamics of the periodic orbit: one eigenvector will be in the direction of $DF(z)$ and will have associated eigenvalue 1, and the others will be determined by the dynamics (Guckenheimer and Holmes 1983). To calculate these eigenvectors and eigenvalues we must first linearize $F^p$ at $z$. We have that

$$D_z F^p(z) = DF(F^{p-1}(z))\, D_z F^{p-1}(z) = DF(F^{p-1}(z))\, DF(F^{p-2}(z)) \cdots DF(z) = \prod_{k=0}^{p-1} DF(F^k(z)), \quad (7)$$

where $DF(x)$ denotes the Jacobian of $F$ evaluated at $x$.
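Equation (7) translates directly into a product of Jacobians evaluated around the orbit. The sketch below is our own illustration (with a numerically differenced Jacobian rather than the analytic derivative of a cylindrical basis model): it estimates an approximate period as described in the next subsection and returns the eigenvalues of the product in (7).

```python
import numpy as np

def numerical_jacobian(F, x, h=1e-6):
    """Central-difference Jacobian of a map F: R^d -> R^d at x."""
    d = len(x)
    J = np.empty((d, d))
    for j in range(d):
        e = np.zeros(d); e[j] = h
        J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
    return J

def floquet_multipliers(F, z, p):
    """Eigenvalues of D_z F^p(z) = prod_k DF(F^k(z)), cf. equation (7)."""
    M = np.eye(len(z))
    x = np.array(z, dtype=float)
    for _ in range(p):
        M = numerical_jacobian(F, x) @ M    # left-multiply, matching the chain rule
        x = F(x)
    return np.linalg.eigvals(M)

def approximate_period(F, z, p_max=60):
    """First local minimum of ||F^p(z) - z|| for p > 1 (an approximately periodic point)."""
    x = np.array(z, dtype=float)
    dists = []
    for p in range(1, p_max + 1):
        x = F(x)
        dists.append(np.linalg.norm(x - z))
    for p in range(2, p_max):
        if dists[p - 1] <= dists[p - 2] and dists[p - 1] <= dists[p]:
            return p
    return int(np.argmin(dists)) + 1
```

Complex conjugate pairs among these eigenvalues with modulus less than one indicate trajectories spiralling towards the orbit, the stable-focus signature discussed below.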

[Fig. 5. Probability distribution of correlation dimension estimates for two monotonic nonlinear transformations of linearly filtered noise. The contour plots represent the probability distribution of the correlation dimension estimate for various values of $\varepsilon_0$. Panels (i), (ii) and (iii) are probability density estimates of correlation dimension for linearly filtered noise with the monotonic nonlinear transformation $g(x) = x^3$, embedded in $\mathbb{R}^3$, $\mathbb{R}^4$ and $\mathbb{R}^5$, respectively. Panels (iv), (v) and (vi) show similar plots of probability density estimates for realizations of the same linear process and the same embedding dimensions, with the monotonic nonlinear transformation $g(x) = \mathrm{sign}(x)|x|^{1/4}$. Note that the shapes of these estimates are completely different; the horizontal scale is different on each plot.]

One may then calculate the eigenvalues of the matrix $\prod_{k=0}^{p-1} DF(F^k(z))$ to determine the stability of the periodic orbit of $z$. Unfortunately, the application of this method has several problems. To calculate (7) one must first be able to identify a point $z$ on a periodic orbit. In practice, a model built by the methods we employ will typically have been embedded in an approximately 20 dimensional space. In this situation we limit ourselves to the study of stable periodic orbits; fortunately, these are a common feature of these models. However, the orbit may not be exactly periodic. The map $F$ is an approximation to the dynamics of a flow and it is unlikely that the periodic orbit of interest will be periodic with exactly period $p$, which will be of the order of the embedding dimension. In most cases it is only possible to find a point $z$ of an approximately periodic orbit. By this we mean that $z$ and $F^p(z)$ are close. If the map $F$ is not chaotic then one can choose a point $z$ such that $\{F^p(z)\}_{p=1}^{\infty}$ is bounded, and $p$ will be chosen to be the first local minimum of $\|F^p(z) - z\|$ for $p > 1$. Having found a point $z$ such that $\{z, F(z), F^2(z), \ldots, F^{p-1}(z)\}$ form the points of an almost periodic orbit, the expression (7) may be evaluated.

However, for our data $p$ is approximately 20 and the periodic orbit $\{z, F(z), F^2(z), \ldots, F^{p-1}(z)\}$ is (presumably) stable: hence, the calculation of the eigenvalues of (7) will be numerically highly sensitive. The eigenvalues will be close to zero and the matrix $\prod_{k=0}^{p-1} DF(F^k(z))$ will be nearly singular. By embedding the data in a lower dimension (perhaps not using a variable embedding strategy) this calculation becomes more stable. However, as the calculation of $\prod_{k=0}^{p-1} DF(F^k(z))$ becomes more stable, the periodic orbit itself will be more approximate, and the model will possibly provide a worse fit of the data. Figure 6 demonstrates some of the common features of models with a low embedding dimension. This is clearly problematic. The probability distribution of such models is typically different to that of the data. Models that predict a short time (less than $\frac{1}{4}$ of the quasi-period) ahead using only the immediately preceding values provide a poor fit of the data. If we embed using a uniform embedding strategy such as $(y_t, y_{t-\tau}, y_{t-2\tau})$, where $\tau \approx \frac{1}{4}$ (quasi-period), we can build a model $y_{t+1} = f(y_t, y_{t-\tau}, y_{t-2\tau})$. However, it is impossible to iterate a model of this form to produce a free run prediction. Models of the form $y_{t+\tau} = f(y_t, y_{t-\tau}, y_{t-2\tau})$ are not likely to produce periodic orbits, as it is unlikely that the relationship $4\tau =$ (quasi-period of data) will hold exactly.
For a given embedding lag $\tau$ and embedding dimension $d$ determined by the methods discussed earlier, we have applied this technique to models of the form $y_{t+1} = f(y_t, y_{t-\tau}, \ldots, y_{t-d\tau})$ (effectively producing periodic orbits with period $d\tau$).

[Fig. 6. Free run prediction from a model with a uniform embedding. The top plot shows a free run prediction of a model $y_{t+\tau} = f(y_t, y_{t-\tau}, y_{t-2\tau})$, where $\tau$ is the closest integer to $\frac{1}{4}$ of the quasi-period of the data. The bottom two panels show an embedding $(x_1, x_2, x_3) = (y_t, y_{t-\tau}, y_{t-2\tau})$ of that free run prediction. The plot on the left shows that the free run prediction is not periodic; the one on the right demonstrates that it does have a bounded one dimensional attractor. The problem with this model is that the quasi-period of the model and $4\tau$ do not agree precisely.]

Note that, using time delay embeddings, one will have that $F(z_t) = F(y_t, y_{t-1}, \ldots, y_{t-d\tau}) = (y_{t+1}, y_t, \ldots, y_{t-d\tau+1})$, where $y_{t+1} = f(y_t, y_{t-\tau}, \ldots, y_{t-d\tau})$. From these models we calculate the eigenvalues and eigenvectors of the periodic orbits. We compared the 6 largest eigenvalues of an almost periodic orbit of the map $F$ generated by models of 38 data sets from 14 infants. These maps are an approximation to a (presumably) periodic orbit of the flow of the original data. In almost all cases the 6 largest eigenvalues include complex conjugate pairs: evidence of a stable focus in the first return map. Most of these models produce complex eigenvalues with the magnitude of the real part less than one. This indicates that the map $F^p$ has a stable focus, or that trajectories will spiral towards the periodic orbit. This provides additional evidence for the presence of cyclic amplitude modulation (CAM).

6. Conclusion

We have shown that statistics based on the correlation integral will often be pivotal statistics for surrogate data analysis. Utilizing this property we may apply (for example) correlation dimension estimates as a test statistic for the hypotheses addressed by Theiler's algorithm 0, 1 and 2 surrogates. Because our test statistic is pivotal there is no requirement to ensure that the surrogate generation method we employ is constrained. Therefore, we are not bound to use these three algorithms to generate surrogate data. More importantly, this greatly expands the scope of surrogate data testing. Because we have precise conditions on the pivotalness of correlation dimension, it is possible to extend surrogate data hypothesis testing to nonlinear hypotheses. With the help of minimum description length cylindrical basis modeling techniques (Judd and Mees 1995, Small and Judd 1998a), correlation dimension provides a useful statistic to test membership of particular classes of nonlinear dynamical processes. The hypothesis being tested is influenced by the results of the modeling procedure and cannot be determined a priori. After checking that all models have the same distribution of test statistic values and are representative of the data (in the sense that the models produce simulations that have the qualitative features of the data), one is able to build a single nonlinear model of the data and test the hypothesis that the data was generated from a process in the class of dynamical processes that share the characteristics (such as periodic structure) of that model.

In general one may take a data set, build nonlinear models of that data set, generate many noise driven simulations from each of these models, and compare the distributions of a test statistic for each model and for broader groups of models (based on qualitative features, such as fixed points or periodic orbits, of these models).
By comparing the value of the test statistic for the data to each of these distributions (for groups of models), one may either accept or reject the hypothesis that the data was generated by a process in the corresponding class.


More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Trust Regions. Charles J. Geyer. March 27, 2013

Trust Regions. Charles J. Geyer. March 27, 2013 Trust Regions Charles J. Geyer March 27, 2013 1 Trust Region Theory We follow Nocedal and Wright (1999, Chapter 4), using their notation. Fletcher (1987, Section 5.1) discusses the same algorithm, but

More information

Two Decades of Search for Chaos in Brain.

Two Decades of Search for Chaos in Brain. Two Decades of Search for Chaos in Brain. A. Krakovská Inst. of Measurement Science, Slovak Academy of Sciences, Bratislava, Slovak Republic, Email: krakovska@savba.sk Abstract. A short review of applications

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

DYNAMICAL SYSTEMS

DYNAMICAL SYSTEMS 0.42 DYNAMICAL SYSTEMS Week Lecture Notes. What is a dynamical system? Probably the best way to begin this discussion is with arguably a most general and yet least helpful statement: Definition. A dynamical

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.2 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

NONLINEAR TIME SERIES ANALYSIS, WITH APPLICATIONS TO MEDICINE

NONLINEAR TIME SERIES ANALYSIS, WITH APPLICATIONS TO MEDICINE NONLINEAR TIME SERIES ANALYSIS, WITH APPLICATIONS TO MEDICINE José María Amigó Centro de Investigación Operativa, Universidad Miguel Hernández, Elche (Spain) J.M. Amigó (CIO) Nonlinear time series analysis

More information

Tips and Tricks in Real Analysis

Tips and Tricks in Real Analysis Tips and Tricks in Real Analysis Nate Eldredge August 3, 2008 This is a list of tricks and standard approaches that are often helpful when solving qual-type problems in real analysis. Approximate. There

More information

PHONEME CLASSIFICATION OVER THE RECONSTRUCTED PHASE SPACE USING PRINCIPAL COMPONENT ANALYSIS

PHONEME CLASSIFICATION OVER THE RECONSTRUCTED PHASE SPACE USING PRINCIPAL COMPONENT ANALYSIS PHONEME CLASSIFICATION OVER THE RECONSTRUCTED PHASE SPACE USING PRINCIPAL COMPONENT ANALYSIS Jinjin Ye jinjin.ye@mu.edu Michael T. Johnson mike.johnson@mu.edu Richard J. Povinelli richard.povinelli@mu.edu

More information

Global Attractors in PDE

Global Attractors in PDE CHAPTER 14 Global Attractors in PDE A.V. Babin Department of Mathematics, University of California, Irvine, CA 92697-3875, USA E-mail: ababine@math.uci.edu Contents 0. Introduction.............. 985 1.

More information

Cross validation of prediction models for seasonal time series by parametric bootstrapping

Cross validation of prediction models for seasonal time series by parametric bootstrapping Cross validation of prediction models for seasonal time series by parametric bootstrapping Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna Prepared

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems Scott Zimmerman MATH181HM: Dynamical Systems Spring 2008 1 Introduction The Hartman-Grobman and Poincaré-Bendixon Theorems

More information

Vulnerability of economic systems

Vulnerability of economic systems Vulnerability of economic systems Quantitative description of U.S. business cycles using multivariate singular spectrum analysis Andreas Groth* Michael Ghil, Stéphane Hallegatte, Patrice Dumas * Laboratoire

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

SURROGATE DATA PATHOLOGIES AND THE FALSE-POSITIVE REJECTION OF THE NULL HYPOTHESIS

SURROGATE DATA PATHOLOGIES AND THE FALSE-POSITIVE REJECTION OF THE NULL HYPOTHESIS International Journal of Bifurcation and Chaos, Vol. 11, No. 4 (2001) 983 997 c World Scientific Publishing Company SURROGATE DATA PATHOLOGIES AND THE FALSE-POSITIVE REJECTION OF THE NULL HYPOTHESIS P.

More information

Fundamentals of Dynamical Systems / Discrete-Time Models. Dr. Dylan McNamara people.uncw.edu/ mcnamarad

Fundamentals of Dynamical Systems / Discrete-Time Models. Dr. Dylan McNamara people.uncw.edu/ mcnamarad Fundamentals of Dynamical Systems / Discrete-Time Models Dr. Dylan McNamara people.uncw.edu/ mcnamarad Dynamical systems theory Considers how systems autonomously change along time Ranges from Newtonian

More information

However, in actual topology a distance function is not used to define open sets.

However, in actual topology a distance function is not used to define open sets. Chapter 10 Dimension Theory We are used to the notion of dimension from vector spaces: dimension of a vector space V is the minimum number of independent bases vectors needed to span V. Therefore, a point

More information

Evaluating nonlinearity and validity of nonlinear modeling for complex time series

Evaluating nonlinearity and validity of nonlinear modeling for complex time series Evaluating nonlinearity and validity of nonlinear modeling for complex time series Tomoya Suzuki, 1 Tohru Ikeguchi, 2 and Masuo Suzuki 3 1 Department of Information Systems Design, Doshisha University,

More information

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence Linear Combinations Spanning and Linear Independence We have seen that there are two operations defined on a given vector space V :. vector addition of two vectors and. scalar multiplication of a vector

More information

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ. Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed

More information

Numerical Algorithms as Dynamical Systems

Numerical Algorithms as Dynamical Systems A Study on Numerical Algorithms as Dynamical Systems Moody Chu North Carolina State University What This Study Is About? To recast many numerical algorithms as special dynamical systems, whence to derive

More information

Chapter 23. Predicting Chaos The Shift Map and Symbolic Dynamics

Chapter 23. Predicting Chaos The Shift Map and Symbolic Dynamics Chapter 23 Predicting Chaos We have discussed methods for diagnosing chaos, but what about predicting the existence of chaos in a dynamical system. This is a much harder problem, and it seems that the

More information

INTRODUCTION TO CHAOS THEORY T.R.RAMAMOHAN C-MMACS BANGALORE

INTRODUCTION TO CHAOS THEORY T.R.RAMAMOHAN C-MMACS BANGALORE INTRODUCTION TO CHAOS THEORY BY T.R.RAMAMOHAN C-MMACS BANGALORE -560037 SOME INTERESTING QUOTATIONS * PERHAPS THE NEXT GREAT ERA OF UNDERSTANDING WILL BE DETERMINING THE QUALITATIVE CONTENT OF EQUATIONS;

More information

Stability Analysis and Synthesis for Scalar Linear Systems With a Quantized Feedback

Stability Analysis and Synthesis for Scalar Linear Systems With a Quantized Feedback IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 48, NO 9, SEPTEMBER 2003 1569 Stability Analysis and Synthesis for Scalar Linear Systems With a Quantized Feedback Fabio Fagnani and Sandro Zampieri Abstract

More information

Nonlinear Biomedical Physics

Nonlinear Biomedical Physics Nonlinear Biomedical Physics BioMed Central Research Estimating the distribution of dynamic invariants: illustrated with an application to human photo-plethysmographic time series Michael Small* Open Access

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Approximation Metrics for Discrete and Continuous Systems

Approximation Metrics for Discrete and Continuous Systems University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science May 2007 Approximation Metrics for Discrete Continuous Systems Antoine Girard University

More information

If one wants to study iterations of functions or mappings,

If one wants to study iterations of functions or mappings, The Mandelbrot Set And Its Julia Sets If one wants to study iterations of functions or mappings, f n = f f, as n becomes arbitrarily large then Julia sets are an important tool. They show up as the boundaries

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Dynamical Systems and Chaos Part I: Theoretical Techniques. Lecture 4: Discrete systems + Chaos. Ilya Potapov Mathematics Department, TUT Room TD325

Dynamical Systems and Chaos Part I: Theoretical Techniques. Lecture 4: Discrete systems + Chaos. Ilya Potapov Mathematics Department, TUT Room TD325 Dynamical Systems and Chaos Part I: Theoretical Techniques Lecture 4: Discrete systems + Chaos Ilya Potapov Mathematics Department, TUT Room TD325 Discrete maps x n+1 = f(x n ) Discrete time steps. x 0

More information

The harmonic map flow

The harmonic map flow Chapter 2 The harmonic map flow 2.1 Definition of the flow The harmonic map flow was introduced by Eells-Sampson in 1964; their work could be considered the start of the field of geometric flows. The flow

More information

401 Review. 6. Power analysis for one/two-sample hypothesis tests and for correlation analysis.

401 Review. 6. Power analysis for one/two-sample hypothesis tests and for correlation analysis. 401 Review Major topics of the course 1. Univariate analysis 2. Bivariate analysis 3. Simple linear regression 4. Linear algebra 5. Multiple regression analysis Major analysis methods 1. Graphical analysis

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

Investigation of Dynamical Systems in Pulse Oximetry Time Series

Investigation of Dynamical Systems in Pulse Oximetry Time Series Investigation of Dynamical Systems in Pulse Oximetry Time Series Jun Liang Abstract Chaotic behavior of human physiology is a problem that can be investigated through various measurements. One of the most

More information

APPPHYS217 Tuesday 25 May 2010

APPPHYS217 Tuesday 25 May 2010 APPPHYS7 Tuesday 5 May Our aim today is to take a brief tour of some topics in nonlinear dynamics. Some good references include: [Perko] Lawrence Perko Differential Equations and Dynamical Systems (Springer-Verlag

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T Heath Chapter 5 Nonlinear Equations Copyright c 2001 Reproduction permitted only for noncommercial, educational

More information

dynamical zeta functions: what, why and what are the good for?

dynamical zeta functions: what, why and what are the good for? dynamical zeta functions: what, why and what are the good for? Predrag Cvitanović Georgia Institute of Technology November 2 2011 life is intractable in physics, no problem is tractable I accept chaos

More information

Wavelet Footprints: Theory, Algorithms, and Applications

Wavelet Footprints: Theory, Algorithms, and Applications 1306 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 5, MAY 2003 Wavelet Footprints: Theory, Algorithms, and Applications Pier Luigi Dragotti, Member, IEEE, and Martin Vetterli, Fellow, IEEE Abstract

More information

Department of Mathematics, University of California, Berkeley. GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2014

Department of Mathematics, University of California, Berkeley. GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2014 Department of Mathematics, University of California, Berkeley YOUR 1 OR 2 DIGIT EXAM NUMBER GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2014 1. Please write your 1- or 2-digit exam number on

More information

0. Introduction 1 0. INTRODUCTION

0. Introduction 1 0. INTRODUCTION 0. Introduction 1 0. INTRODUCTION In a very rough sketch we explain what algebraic geometry is about and what it can be used for. We stress the many correlations with other fields of research, such as

More information

01. Review of metric spaces and point-set topology. 1. Euclidean spaces

01. Review of metric spaces and point-set topology. 1. Euclidean spaces (October 3, 017) 01. Review of metric spaces and point-set topology Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 017-18/01

More information

What is Chaos? Implications of Chaos 4/12/2010

What is Chaos? Implications of Chaos 4/12/2010 Joseph Engler Adaptive Systems Rockwell Collins, Inc & Intelligent Systems Laboratory The University of Iowa When we see irregularity we cling to randomness and disorder for explanations. Why should this

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 5 Nonlinear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Analysis II: The Implicit and Inverse Function Theorems

Analysis II: The Implicit and Inverse Function Theorems Analysis II: The Implicit and Inverse Function Theorems Jesse Ratzkin November 17, 2009 Let f : R n R m be C 1. When is the zero set Z = {x R n : f(x) = 0} the graph of another function? When is Z nicely

More information

One dimensional Maps

One dimensional Maps Chapter 4 One dimensional Maps The ordinary differential equation studied in chapters 1-3 provide a close link to actual physical systems it is easy to believe these equations provide at least an approximate

More information

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008

Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008 Math 541 Fall 2008 Connectivity Transition from Math 453/503 to Math 541 Ross E. Staffeldt-August 2008 Closed sets We have been operating at a fundamental level at which a topological space is a set together

More information

No. 6 Determining the input dimension of a To model a nonlinear time series with the widely used feed-forward neural network means to fit the a

No. 6 Determining the input dimension of a To model a nonlinear time series with the widely used feed-forward neural network means to fit the a Vol 12 No 6, June 2003 cfl 2003 Chin. Phys. Soc. 1009-1963/2003/12(06)/0594-05 Chinese Physics and IOP Publishing Ltd Determining the input dimension of a neural network for nonlinear time series prediction

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Generalized Pigeonhole Properties of Graphs and Oriented Graphs

Generalized Pigeonhole Properties of Graphs and Oriented Graphs Europ. J. Combinatorics (2002) 23, 257 274 doi:10.1006/eujc.2002.0574 Available online at http://www.idealibrary.com on Generalized Pigeonhole Properties of Graphs and Oriented Graphs ANTHONY BONATO, PETER

More information

CONSTRAINED PERCOLATION ON Z 2

CONSTRAINED PERCOLATION ON Z 2 CONSTRAINED PERCOLATION ON Z 2 ZHONGYANG LI Abstract. We study a constrained percolation process on Z 2, and prove the almost sure nonexistence of infinite clusters and contours for a large class of probability

More information

LINEAR CHAOS? Nathan S. Feldman

LINEAR CHAOS? Nathan S. Feldman LINEAR CHAOS? Nathan S. Feldman In this article we hope to convience the reader that the dynamics of linear operators can be fantastically complex and that linear dynamics exhibits the same beauty and

More information

Chaos, Complexity, and Inference (36-462)

Chaos, Complexity, and Inference (36-462) Chaos, Complexity, and Inference (36-462) Lecture 4 Cosma Shalizi 22 January 2009 Reconstruction Inferring the attractor from a time series; powerful in a weird way Using the reconstructed attractor to

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

The small ball property in Banach spaces (quantitative results)

The small ball property in Banach spaces (quantitative results) The small ball property in Banach spaces (quantitative results) Ehrhard Behrends Abstract A metric space (M, d) is said to have the small ball property (sbp) if for every ε 0 > 0 there exists a sequence

More information

Multifractal Models for Solar Wind Turbulence

Multifractal Models for Solar Wind Turbulence Multifractal Models for Solar Wind Turbulence Wiesław M. Macek Faculty of Mathematics and Natural Sciences. College of Sciences, Cardinal Stefan Wyszyński University, Dewajtis 5, 01-815 Warsaw, Poland;

More information

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse March 18, Terence Tao (UCLA)

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse March 18, Terence Tao (UCLA) Least singular value of random matrices Lewis Memorial Lecture / DIMACS minicourse March 18, 2008 Terence Tao (UCLA) 1 Extreme singular values Let M = (a ij ) 1 i n;1 j m be a square or rectangular matrix

More information

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f))

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f)) 1. Basic algebra of vector fields Let V be a finite dimensional vector space over R. Recall that V = {L : V R} is defined to be the set of all linear maps to R. V is isomorphic to V, but there is no canonical

More information

Lecture 15: Exploding and Vanishing Gradients

Lecture 15: Exploding and Vanishing Gradients Lecture 15: Exploding and Vanishing Gradients Roger Grosse 1 Introduction Last lecture, we introduced RNNs and saw how to derive the gradients using backprop through time. In principle, this lets us train

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

Chaos and Liapunov exponents

Chaos and Liapunov exponents PHYS347 INTRODUCTION TO NONLINEAR PHYSICS - 2/22 Chaos and Liapunov exponents Definition of chaos In the lectures we followed Strogatz and defined chaos as aperiodic long-term behaviour in a deterministic

More information

Detecting chaos in pseudoperiodic time series without embedding

Detecting chaos in pseudoperiodic time series without embedding Detecting chaos in pseudoperiodic time series without embedding J. Zhang,* X. Luo, and M. Small Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon,

More information

MATH 415, WEEKS 14 & 15: 1 Recurrence Relations / Difference Equations

MATH 415, WEEKS 14 & 15: 1 Recurrence Relations / Difference Equations MATH 415, WEEKS 14 & 15: Recurrence Relations / Difference Equations 1 Recurrence Relations / Difference Equations In many applications, the systems are updated in discrete jumps rather than continuous

More information

7 Planar systems of linear ODE

7 Planar systems of linear ODE 7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

Quasi-conformal maps and Beltrami equation

Quasi-conformal maps and Beltrami equation Chapter 7 Quasi-conformal maps and Beltrami equation 7. Linear distortion Assume that f(x + iy) =u(x + iy)+iv(x + iy) be a (real) linear map from C C that is orientation preserving. Let z = x + iy and

More information

Documents de Travail du Centre d Economie de la Sorbonne

Documents de Travail du Centre d Economie de la Sorbonne Documents de Travail du Centre d Economie de la Sorbonne Forecasting chaotic systems : The role of local Lyapunov exponents Dominique GUEGAN, Justin LEROUX 2008.14 Maison des Sciences Économiques, 106-112

More information

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations

CS 450 Numerical Analysis. Chapter 5: Nonlinear Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

PHY411 Lecture notes Part 5

PHY411 Lecture notes Part 5 PHY411 Lecture notes Part 5 Alice Quillen January 27, 2016 Contents 0.1 Introduction.................................... 1 1 Symbolic Dynamics 2 1.1 The Shift map.................................. 3 1.2

More information

THE RESIDUE THEOREM. f(z) dz = 2πi res z=z0 f(z). C

THE RESIDUE THEOREM. f(z) dz = 2πi res z=z0 f(z). C THE RESIDUE THEOREM ontents 1. The Residue Formula 1 2. Applications and corollaries of the residue formula 2 3. ontour integration over more general curves 5 4. Defining the logarithm 7 Now that we have

More information