Sparse Reconstruction of Systems of Ordinary Differential Equations


Manuel Mai (a), Mark D. Shattuck (b,c), Corey S. O'Hern (c,a,d,e)

a Department of Physics, Yale University, New Haven, Connecticut 06520, USA
b Benjamin Levich Institute and Physics Department, The City College of New York, New York, New York 10031, USA
c Department of Mechanical Engineering & Materials Science, Yale University, New Haven, Connecticut 06520, USA
d Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA
e Graduate Program in Computational Biology and Bioinformatics, Yale University, New Haven, Connecticut 06520, USA

Corresponding author: corey.ohern@yale.edu (Corey S. O'Hern)

Abstract

We develop a numerical method to reconstruct systems of ordinary differential equations (ODEs) from time series data without a priori knowledge of the underlying ODEs and without pre-selecting a set of basis functions, using sparse basis learning and sparse function reconstruction. We show that employing sparse representations provides more accurate ODE reconstruction compared to least-squares reconstruction techniques for a given amount of time series data. We test and validate the ODE reconstruction method on known 1D, 2D, and 3D systems of ODEs. The 1D system possesses two stable fixed points; the 2D system possesses an oscillatory fixed point with closed orbits; and the 3D system displays chaotic dynamics on a strange attractor. We determine the amount of data required to achieve an error in the reconstructed functions of less than 0.1%. For the reconstructed 1D and 2D systems, we are able to match the trajectories from the original ODEs even at long times. For the 3D system with chaotic dynamics, as expected, the trajectories from the original and reconstructed systems do not match at long times, but the reconstructed and original models possess similar Lyapunov exponents. Now that we have validated this ODE reconstruction method on known models, it can be employed in future studies to identify new systems of ODEs using time series data from deterministic systems for which there is no currently known ODE model.

Keywords: Viral diseases, Immune system diseases, Data analysis: algorithms and implementation, Time series analysis, Nonlinear dynamics and chaos

1. Introduction

We present a methodology to generate systems of ordinary differential equations (ODEs) that can accurately reproduce measured time series data. In the past, physicists have constructed ODEs by writing down the simplest mathematical expressions that are consistent with the symmetries and fixed points of the physical system. One can then compare the dynamics predicted by the ODEs with those obtained from the physical system. However, there are now many large data sets from physical and biological systems for which we know much less about the underlying physical principles. In light of this, an important goal is to be able to generate systems of ODEs that recover the measured time series data and can be used to predict system behavior in parameter regimes or time domains that have not yet been measured.

ODE models are used extensively in computational biology. For example, in systems biology, genetic circuits are modeled as networks of electronic circuit elements [1, 2]. In addition, systems of ODEs are often employed to investigate viral dynamics (e.g. HIV [3, 4, 5], hepatitis [6, 7], and influenza [8, 9]) and the immune system response to infection [10, 11]. Population dynamics and epidemics have also been successfully modeled using systems of ODEs [12].
In some cases, an ad hoc ODE model with several parameters is posited [13], and solutions of the model are compared to experimental data to identify the relevant range of parameter values. A more sophisticated approach [14, 15, 16, 17, 18, 19, 20, 21] is to reconstruct systems of ODEs by expanding the ODEs in a particular, fixed basis such as polynomial, rational, harmonic, or radial basis functions. The basis expansion coefficients can then be obtained by fitting the model to the time series data. A recent study has developed a more systematic computational approach to identify the best ODE model to recapitulate time series data. The approach iteratively generates random mathematical expressions for a candidate ODE model. At each iteration, the ODE model is solved and the solution is compared to the time series data to identify the parameters in the candidate model. The selected parameters minimize the distance between the input trajectories and the solutions of the candidate model. Candidate models with small errors are then co-evolved using a genetic algorithm to improve the fit to the input time series data [22, 23, 24, 25]. The advantage of this method is that it yields an approximate analytical expression for the ODE model for the dynamical system. The disadvantages of this approach include the computational expense of repeatedly solving ODE models and the difficulty in finding optimal solutions for multidimensional nonlinear regression.

Here, we develop a new methodology to build numerical expressions of a system of ODEs that can recapitulate time series data of a dynamical system. This method has the advantage of not needing any input except the time series data, although a priori information about the fixed point structure and basins of attraction of the dynamical system would improve reconstruction. Our method includes several steps. We first learn a basis to sparsely represent the time series data using sparse dictionary learning [26, 27, 28]. We then find the sparsest expansion in the learned basis that is consistent with the measured data. This step can be formulated as solving an underdetermined system of linear equations. We solve the underdetermined systems using L1-norm regularized regression, which finds the solution to the system with the fewest nonzero expansion coefficients in the learned basis. An advantage of our approach is that we learn a sparse representation (basis and expansion coefficients) of the time series data with no prior assumptions concerning the form of the basis. In contrast, many previous studies have pre-selected a particular set of basis functions to reconstruct a system of ODEs from time series data. In this case, the representation of the data is limited by the forms of the basis functions that are pre-selected by the investigator. In fact, other bases that are not selected may yield better representations. A key aspect of our work is that we apply two recent advances in machine learning, sparse learning of a basis and sparse function reconstruction from a known basis, to ODE reconstruction. Sparse reconstruction of ODEs from a known basis has been recently investigated [29, 30, 31]. However, a coupling of the two techniques, i.e. learning a basis from incomplete data that is suitable for sparse reconstruction of a system of ODEs, has not yet been implemented.

We test our ODE reconstruction method on time series data generated from known ODE models in one-, two-, and three-dimensional systems, including both non-chaotic and chaotic dynamics. We quantify the accuracy of the reconstruction for each system of ODEs as a function of the amount of data used by the method. Further, we solve the reconstructed system of ODEs and compare the solution to the original time series data. The method developed and validated here can now be applied to large data sets for physical and biological systems for which there is no known system of ODEs.

Identifying sparse representations of data (i.e. sparse coding) is well studied. For example, sparse coding has been widely used for data compression, yielding the JPEG, MPEG, and MP3 data formats. Sparse coding relies on the observation that for most signals a basis can be identified for which only a few of the expansion coefficients are nonzero [32, 33, 34]. Sparse representations can provide accurate signal recovery while, at the same time, reducing the amount of information required to define the signal. For example, keeping only the ten largest coefficients out of 64 possible coefficients in an 8 × 8 two-dimensional discrete cosine basis (JPEG) leads to a size reduction of approximately a factor of 6. Recent studies have shown that in many cases perfect recovery of a signal is possible from only a small number of measurements of the signal [35, 36, 37, 38, 39]. This work provided a new lower bound for the amount of data required for perfect reconstruction of a signal; for example, in many cases, one can take measurements at frequencies much below the Nyquist sampling rate and still achieve perfect signal recovery.
The related field of compressed sensing emphasizes sampling the signal in compressed form to achieve perfect signal reconstruction [37, 38, 39, 40, 41, 42, 43]. Compressed sensing has a wide range of applications, from speeding up magnetic resonance image reconstruction [44, 45, 46] to more efficient and higher resolution cameras [47, 48].

Our ODE reconstruction method relies on the assumption that the functions that comprise the right-hand sides of the systems of ODEs can be sparsely represented in some basis. A function f(x) can be sparsely represented by a set of basis functions {φ_i}, i = 1, ..., n, if f(x) = Σ_{i=1}^{n} c_i φ_i(x) with only a small number s ≪ n of nonzero coefficients c_i. This assumption is not as restrictive as it may seem at first. For example, suppose we sample a two-dimensional function on a discrete grid. A complete basis would require at least as many basis functions as there are independent grid points. For most applications, we expect that a much smaller set of basis functions will lead to accurate recovery of the function. In fact, the sparsest representation of the function is the basis that contains the function itself, where only one of the coefficients c_i is nonzero. Identifying sparse representations of the system of ODEs is also consistent with the physics paradigm of finding the simplest model to explain a dynamical system.

The remainder of this manuscript is organized as follows. In the Materials and Methods section, we provide a formal definition of sets of ODEs and details about obtaining the right-hand side functions of ODEs by numerically differentiating time series data. We then introduce the concept of L1-regularized regression and apply it to the reconstruction of a sparse, undersampled signal. We introduce the concept of sparse basis learning to identify a basis in which the ODE can be represented sparsely. At the end of the Materials and Methods section, we define the error metric that we will use to quantify the accuracy of the ODE reconstruction. In the Results section, we perform ODE reconstruction on models in one-, two-, and three-dimensional systems. For each system, we measure the reconstruction accuracy as a function of the amount of data that is used for the reconstruction, showing examples of both accurate and inaccurate reconstructions. We end the manuscript in the Discussion section with a summary and future applications of our method for ODE reconstruction.

2. Materials and Methods

In this section, we first introduce the mathematical expressions that define sets of ordinary differential equations (ODEs). We then describe how we obtain the system of ODEs from time series data. We then present the sparse reconstruction and sparse basis learning methods that we will employ to build sparse representations of the ODE models. We also compare the accuracy of sparse versus non-sparse methods for signal reconstruction. At the end of this section, we introduce the specific one-, two-, and three-dimensional ODE models that we use to validate our ODE reconstruction methods.

2.1. Systems of ordinary differential equations

A general system of N nonlinear ordinary differential equations is given by

    dx_1/dt = f_1(x_1, x_2, ..., x_N)
    dx_2/dt = f_2(x_1, x_2, ..., x_N)
    ...
    dx_N/dt = f_N(x_1, x_2, ..., x_N),     (1)

where the f_i with i = 1, ..., N are arbitrary nonlinear functions of all N variables x_i and d/dt denotes a time derivative. Although the functions f_i(x_1, ..., x_N) are defined for all values of x_i within a given domain, the time derivatives of the solution x(t) = (x_1(t), x_2(t), ..., x_N(t))^T only sample the functions along particular trajectories. The functions f_i can be obtained by taking numerical derivatives of the solutions with respect to time:

    f_i(t_0) ≈ [x_i(t_0 + ∆t) − x_i(t_0)] / ∆t.     (2)

(Note that higher-order methods for determining numerical derivatives will improve ODE reconstruction [19].) We will reconstruct the functions from a set of m measurements, y_ik = f_i(x_k), at positions {x_k} with k = 1, ..., m. To do this, we express the functions f_i as linear superpositions of arbitrary, nonlinear basis functions φ_j(x):

    f_i(x) = Σ_{j=1}^{n} c_ij φ_j(x),     (3)

where the c_ij are the expansion coefficients and n is the number of basis functions used in the expansion. The measurements y_ik = f_i(x_k) impose the following constraints on the expansion coefficients:

    f_i(x_k) = y_ik = Σ_{j=1}^{n} c_ij φ_j(x_k)     (4)

for each k = 1, ..., m. The constraints in Eq. 4 can also be expressed as a matrix equation, Σ_{j=1}^{n} Φ_kj c_ij = y_ik for k = 1, ..., m, or

    | φ_1(x_1) ... φ_n(x_1) | | c_i1 |   | y_i1 |
    | φ_1(x_2) ... φ_n(x_2) | | c_i2 |   | y_i2 |
    |    ...         ...    | |  ... | = |  ... |
    | φ_1(x_m) ... φ_n(x_m) | | c_in |   | y_im |,     (5)

where Φ_kj = φ_j(x_k). In most cases of ODE reconstruction, the number of rows of Φ, i.e. the number of measurements m, is smaller than the number of columns of Φ, which is equal to the number of basis functions n used to represent the signals f_i. Thus, in general, the system of equations (Eq. 5) is underdetermined with n > m. Sparse coding is ideally suited for solving underdetermined systems because it seeks to identify the minimum number of basis functions needed to represent the signals f_i. If we identify a basis that can represent a given set of signals f_i sparsely, an L1-regularized minimization scheme will be able to find the sparsest representation of the signals [49].

2.2. Sparse Coding

In general, the least-squares (L2) solution of Eq. 5 possesses many nonzero coefficients c_i, whereas the minimal-L1 solution of Eq. 5 is sparse and possesses only a few nonzero coefficients. In the case of underdetermined systems with many available basis functions, it has been shown that a sparse solution obtained via L1 regularization represents the solution more accurately than superpositions of many basis functions [49]. A sparse solution to Eq. 5 can be obtained by minimizing the squared differences between the measurements of the signal y and the reconstructed signal Φc subject to a constraint on the L1 norm of c [50]:

    ĉ = argmin_c (1/2) ||y − Φc||_2^2 + λ_r ||c||_1,     (6)

where ||·||_p denotes the L_p vector norm and λ_r is a Lagrange multiplier that penalizes a large L1 norm of c. The L_p norm of an N-dimensional vector x is defined as

    ||x||_p = ( Σ_{i=1}^{N} |x_i|^p )^{1/p}     (7)

for p ≥ 1, where N is the number of components of the vector x. An important point to note is that recent work in sparse coding [36] has shown that the minimal-L1 solution to Eq. 6 is the sparsest. We thus minimize Eq. 6 for each λ_r and choose the particular solution with the minimal L1 norm (and a sufficiently small error in the ODE reconstruction). The optimal value of λ_r depends on the ODE system and the specific implementation of the ODE reconstruction methodology.
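The minimization in Eq. 6 can be carried out with standard L1 solvers; the following minimal sketch (our own illustrative code, not the authors' implementation) uses scikit-learn's Lasso, with the basis matrix Φ and the measurement vector y as inputs.

```python
# Minimal sketch of solving Eq. 6 with scikit-learn's Lasso.
# Phi is the m x n matrix of basis functions evaluated at the measurement positions,
# y holds the m measurements y_ik of one function f_i, and lam_r plays the role of lambda_r.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(Phi, y, lam_r=1e-4):
    """Return a sparse coefficient vector c_hat for the underdetermined system y = Phi c."""
    m = Phi.shape[0]
    # Lasso minimizes (1/(2m))||y - Phi c||^2 + alpha ||c||_1, so alpha = lam_r / m
    # reproduces the objective of Eq. 6 up to the overall 1/m factor.
    model = Lasso(alpha=lam_r / m, fit_intercept=False, max_iter=50000)
    model.fit(Phi, y)
    return model.coef_
```

In practice one would call this for a range of lam_r values, as described above, and keep the solution with the smallest L1 norm that still reproduces the measurements.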
To emphasize the advantages of sparse representations, we now demonstrate how the sparse signal reconstruction method compares to a standard least-squares fit. We first construct a sparse signal (with sparsity s) in a given basis. We then sample the signal randomly and attempt to recover the signal using the regularized-L1 and least-squares reconstruction methods. We note that for least-squares regression, we choose the solution with the minimal L2 norm, which is standard for underdetermined least-squares regression algorithms. For this example, we choose the discrete cosine basis. For a signal of 100 values, we have a complete and orthonormal basis of 100 functions φ_n(i) (n = 0, ..., 99), each with 100 values (i = 0, ..., 99):

    φ_n(i) = F(n) cos[ (π/100) (i + 1/2) n ],     (8)

where F(n) is a normalization factor,

    F(n) = √(1/100) for n = 0, and F(n) = √(2/100) for n = 1, ..., 99.     (9)

Note that an orthonormal basis is not a prerequisite for the sparse reconstruction method. Similar to Eq. 3, we can express the signal as a superposition of basis functions,

    g(i) = Σ_{j=0}^{99} c_j φ_j(i).     (10)

The signal g of sparsity s is generated by randomly selecting s of the coefficients c_j and assigning them a random amplitude in the range [−1, 1]. We then evaluate g(i) at m randomly chosen positions i and attempt to recover g(i) from the measurements. If m < 100, recovering the original signal involves solving an underdetermined system of linear equations. Recovering the full signal g from a given number of measurements proceeds as follows. After carrying out the m measurements, we can rewrite Eq. 4 as

    y = P g,     (11)

where y is the vector of the measurements of g and P is the projection matrix with m × n entries that are either 0 or 1. Each row has one nonzero element that corresponds to the position of the measurement. For each random selection of measurements of g, we solve the reduced equation

    y = Θ c,     (12)

where Θ = PΦ. After solving Eq. 12 for c (with solution ĉ), we obtain a reconstruction of the original signal,

    g_rec = Φ ĉ.     (13)

Figure 1: Comparison of the least-squares L2 (green dashed lines) and L1 (blue solid lines) regularized regression to recover a randomly generated, discrete signal (red dots in panel (a)) obtained by randomly sampling points i from the function g(i) = Σ_{n=0}^{99} c_n φ_n(i), where φ_n(i) is given by Eq. 8 and i = 0, ..., 99 is chosen so that all n frequencies can be resolved. The function was constructed to be 3-sparse (s = 3), with only c_0, c_4, and one other coefficient nonzero. The six panels (a)-(f) show the reconstructions for the fractions M = 1, 0.9, 0.8, 0.5, 0.2, and 0.1 of the input signal, respectively, included in the measurements. The red dots in panels (a)-(f) indicate the fraction of the signal that was measured and used for the reconstruction.

Fig. 1 shows examples of the L1 and L2 reconstruction methods for a signal as a function of the fraction of the total signal M = N_m/m, where N_m is the number of measurements (out of m) used for the reconstruction. We studied values of M in the range 0.2 to 1 for the reconstructions. Even when only a small fraction of the signal is included (down to M = 0.2), the L1 reconstruction method achieves nearly perfect signal recovery. In contrast, the least-squares method only achieves adequate recovery of the signal for M > 0.9. Moreover, when only a small fraction of the signal is included, the L2 method is dominated by the mean of the measured points and oscillates rapidly about the mean to match each measurement.

In Fig. 2, we measure the recovery error between the original (g) and recovered (g_rec) signals as a function of the fraction M of the signal included, for several sparsities s and reconstruction penalties λ_r. We define the recovery error as

    ∆(g, g_rec) = 1 − (g · g_rec) / (||g||_2 ||g_rec||_2),     (14)

where g · g_rec denotes the inner product between the two vectors g and g_rec. This distance function satisfies 0 ≤ ∆ ≤ 2, where ∆ ≈ 1 signifies a large difference between g and g_rec and ∆ = 0 indicates g = g_rec.

Figure 2: Comparison of the least-squares L2 (black dashed lines) and L1 (symbols) reconstruction errors (Eq. 14) as a function of the fraction M of the input signal g(i) in Fig. 1 for three values of the sparsity: (a) s = 1, (b) 3, and (c) 20. The different symbols indicate different L1-regularization parameters λ_r = 10^-1 (circles), 10^-2 (stars), 10^-4 (triangles), 10^-5 (squares), and 10^-7 (diamonds).

For sparsity values s = 1 and 3, we find that there is an optimal value of the regularization parameter λ_r. Large values of λ_r imply a large penalty for nonzero reconstruction coefficients, which yields solutions that have mostly, or only, coefficients that are zero. For small λ_r, the solutions converge towards the least-squares solutions (see Eq. 6). For the optimal λ_r, the L1 reconstruction gives small errors (∆ ≈ 0) for M ≳ 0.2 (Fig. 2). In contrast, the error for the L2 reconstruction method is nonzero for all M < 1 for all s. For a non-sparse signal (s = 20), the L1 and L2 reconstruction methods give similar errors for M ≲ 0.2. In this case, the measurements truly undersample the signal, and thus providing fewer than 20 measurements is not enough to constrain the 20 nonzero coefficients c_j. However, when M ≳ 0.2, ∆ from the L1 reconstruction method is less than that from the L2 method and is nearly zero as M approaches 1.
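A minimal sketch of this comparison (our own code, with an illustrative value of λ_r and random seed, not the authors' settings) follows: it builds the discrete cosine basis of Eqs. 8-9, generates a 3-sparse signal, samples a fraction M of it, and recovers the signal with L1-regularized regression (Eq. 6) and with minimum-norm least squares, comparing the two via Eq. 14.

```python
# Sketch of the experiment behind Figs. 1 and 2 (illustrative parameters).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N = 100

# Orthonormal DCT basis, Phi[i, n] = phi_n(i) (Eqs. 8 and 9).
i = np.arange(N)[:, None]
n = np.arange(N)[None, :]
Phi = np.cos(np.pi / N * (i + 0.5) * n)
Phi[:, 0] *= np.sqrt(1.0 / N)
Phi[:, 1:] *= np.sqrt(2.0 / N)

# 3-sparse coefficient vector and the corresponding signal g (Eq. 10).
c_true = np.zeros(N)
c_true[rng.choice(N, size=3, replace=False)] = rng.uniform(-1, 1, size=3)
g = Phi @ c_true

def recovery_error(g, g_rec):
    """Distance of Eq. 14: 0 means identical signals, values near 1 mean very different."""
    return 1.0 - g @ g_rec / (np.linalg.norm(g) * np.linalg.norm(g_rec))

M = 0.2                                   # fraction of the signal that is measured
idx = rng.choice(N, size=int(M * N), replace=False)
Theta, y = Phi[idx], g[idx]               # Theta = P Phi, y = P g (Eqs. 11-12)

# L1-regularized reconstruction (Eq. 6); alpha = lambda_r / m in scikit-learn's convention.
lasso = Lasso(alpha=1e-4 / len(idx), fit_intercept=False, max_iter=100000)
lasso.fit(Theta, y)
g_l1 = Phi @ lasso.coef_                  # Eq. 13

# Minimum-norm least-squares (L2) reconstruction of the same underdetermined system.
c_l2, *_ = np.linalg.lstsq(Theta, y, rcond=None)
g_l2 = Phi @ c_l2

print("L1 error:", recovery_error(g, g_l1), " L2 error:", recovery_error(g, g_l2))
```

Running this for a range of M and λ_r values reproduces the qualitative behavior described above: the L1 error stays near zero well below full sampling, while the L2 error only vanishes as M approaches 1.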
2.3. Sparse Basis Learning

The L1 reconstruction method described in the previous section works well if 1) the signal has a sparse representation in some basis and 2) the basis Φ (or a subset of it) contains functions similar to the basis in which the signal is sparse. How do we proceed with signal reconstruction if we do not know a basis in which the signal is sparse?

Figure 3: (a) A 128 × 128 pixel image of a cat. (b) Using the sparse basis learning method, we obtained 100 8 × 8 pixel basis functions (or "patches") for the image in (a).

Figure 4: (a) A random collection of pixels (sampling mask) that represents 30% of the image in Fig. 3(a). (b) Sparse basis learning reconstruction of the image using samples of the image at the locations of the black pixels in (a).

One method is to use one of the common basis sets, such as wavelets, sines, cosines, or polynomials [51, 52, 53]. Another method is to employ sparse basis learning, which identifies a basis in which a signal can be expressed sparsely. This approach is compelling because it does not require significant prior knowledge about the signal, it does not employ pre-selected basis functions, and it allows the basis to be learned even from noisy or incomplete data.

Sparse basis learning seeks to find a basis Φ that can represent an input of several signals sparsely. We identify Φ by decomposing the signal matrix Y = (y_1, y_2, ..., y_m), where m is the number of signals, into the basis matrix Φ and coefficient matrix C,

    Y = ΦC.     (15)

Columns c_i of C are the sparse coefficient vectors that represent the signals y_i in the basis Φ. Both C and Φ are unknown and can be determined by minimizing the squared differences between the signals and their representations in the basis Φ subject to the constraint that the coefficient matrix is sparse [54]:

    Ĉ, Φ̂ = argmin_{C,Φ} Σ_{i=1}^{m} [ (1/2) ||y_i − Φ c_i||_2^2 + λ_l ||c_i||_1 ],     (16)

where λ_l is a Lagrange multiplier that determines the sparsity of the coefficient matrix C. To solve Eq. 16, we employed the open source solver MiniBatchDictionaryLearning, which is part of the Python machine learning package scikit-learn [50]. Again, we minimize Eq. 16 for each λ_l and choose the particular solution with the minimal L1 norm (and a sufficiently small error in the ODE reconstruction).

To illustrate and provide an example of the basis learning method, we show the results of sparse basis learning on the complex, two-dimensional image of a cat shown in Fig. 3(a). To learn a sparse basis for this image, we decompose the original image (128 × 128 pixels) into all possible 8 × 8 patches, which totals 14,641 unique patches. The patches were then reshaped into one-dimensional signals y_i, each containing 64 values. We chose 100 basis functions, or patches (columns of Φ), to sparsely represent the input matrix Y. Fig. 3(b) shows the 100 basis functions (patches) that were obtained by solving Eq. 16. The matrix Φ was reshaped into 8 × 8 pixel basis functions before plotting. Note that some of the basis functions display complicated features, e.g., lines and ripples of different widths and angles, whereas others are more uniform.

To demonstrate the utility of the sparse basis learning method, we seek to recover the image in Fig. 3(a) from an undersampled version by using the learned basis functions in Fig. 3(b) and then performing sparse reconstruction (Eq. 6) for each 8 × 8 patch of the undersampled image. For this example, we randomly sample 30% of the original image. In Fig. 4(a), the black pixels indicate the random pixels used for the sparse reconstruction of the undersampled image. We decompose the undersampled image into all possible 8 × 8 patches, using only the measurements marked by the black pixels in the sampling mask in Fig. 4(a). While the reconstruction of the image in Fig. 4(b) is somewhat grainy, it clearly resembles the original image even when it is 70% undersampled.
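A minimal sketch of this image example is given below (our own code; the parameter values are illustrative, not the authors' exact settings). As in the demonstration above, the dictionary is learned from the fully sampled image, and only the reconstruction step uses the 30% random sample.

```python
# Sketch: learn an 8x8 patch dictionary for a grayscale image (Eqs. 15-16) and
# reconstruct the image from a random subset of its pixels (Eq. 6).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.linear_model import Lasso

def learn_patch_basis(img, n_atoms=100, patch=(8, 8), lam_l=1e-4):
    """Learn a dictionary of n_atoms basis patches from all 8x8 patches of img."""
    Y = extract_patches_2d(img, patch).reshape(-1, patch[0] * patch[1])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=lam_l, random_state=0)
    return dico.fit(Y).components_                # shape (n_atoms, 64)

def reconstruct_from_mask(img, mask, Phi, patch=(8, 8), lam_r=1e-4):
    """Recover img from the pixels where mask is True, one 8x8 patch at a time."""
    patches = extract_patches_2d(img * mask, patch)
    masks = extract_patches_2d(mask.astype(float), patch)
    rec = np.zeros_like(patches)
    for k, (p, m) in enumerate(zip(patches, masks)):
        meas = m.ravel() > 0                      # measured pixels within this patch
        if not meas.any():
            continue                              # leave an all-unmeasured patch at zero
        lasso = Lasso(alpha=lam_r, fit_intercept=False, max_iter=10000)
        lasso.fit(Phi.T[meas], p.ravel()[meas])   # solve Eq. 6 on the measured entries only
        rec[k] = (Phi.T @ lasso.coef_).reshape(patch)
    return reconstruct_from_patches_2d(rec, img.shape)  # overlapping patches are averaged
```

Looping over every overlapping patch is slow but simple; the averaging of overlapping patches in the final step is what smooths the grainy single-patch estimates.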
In this work, we show that one may also use incomplete data to learn a sparse basis. For example, the case of a discrete representation of a two-dimensional system of ODEs is the same problem as basis learning for image reconstruction (Fig. 3). However, learning the basis from solutions of the system of ODEs does not provide full sampling of the signal (i.e. the right-hand side of the system of ODEs in Eq. 1), because the dynamics of the system is strongly affected by the fixed point structure and the functions are not uniformly sampled. To learn a basis from incomplete data, we decompose the signal into patches of a given size that depends on the specific problem under consideration. However, there are general guidelines to follow concerning the choice of an optimal patch size. The patch size should be chosen to be sufficiently small so that there are enough patches to create variability in the basis set, but also large enough so that there is at least one measurement in each patch. We will study how the reconstruction error depends on the patch size for each ODE model we consider. After we decompose the signal into patches, we fill in missing data with random numbers. Note that other methods, such as filling in missing data with the mean of the data in a given patch or interpolation techniques, can also be used to obtain similar quality ODE reconstruction.

After filling in missing data, we convert the padded patches (i.e. original plus random signal) into a signal matrix Y and learn a basis Φ to sparsely represent the signal by solving Eq. 16. To recover the signal, we find a sparse representation ĉ of the unpadded signal (i.e. without the added random values) in the learned basis Φ by solving Eq. 12, where P is the matrix that selects only the signal entries that have been measured. We then obtain the reconstructed patch by taking the product Φĉ. We repeat this process for each patch to reconstruct the full domain. For cases in which we obtain different values for the signal at the same location from different patches, we average the result.
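The padding and patch-wise recovery described above can be sketched as follows (our own illustrative code; it assumes the patches have already been extracted as rows of an array, with NaN marking unmeasured entries, and the random-fill range is an arbitrary choice).

```python
# Sketch of learning a basis from incomplete data: pad unmeasured patch entries
# (marked NaN) with random numbers before dictionary learning, then reconstruct
# each patch from its measured entries only (as in Eq. 12).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import Lasso

def learn_basis_from_incomplete(patches, n_atoms=100, lam_l=1e-4, seed=0):
    """patches: (n_patches, patch_size) array with NaN at unmeasured locations."""
    rng = np.random.default_rng(seed)
    missing = np.isnan(patches)
    padded = np.where(missing, rng.uniform(-1, 1, size=patches.shape), patches)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=lam_l, random_state=seed)
    return dico.fit(padded).components_.T         # Phi, with basis patches as columns

def reconstruct_patch(patch, Phi, lam_r=1e-4):
    """Sparse reconstruction of one patch from its measured (non-NaN) entries only."""
    meas = ~np.isnan(patch)
    lasso = Lasso(alpha=lam_r, fit_intercept=False, max_iter=10000)
    lasso.fit(Phi[meas], patch[meas])             # Theta = P Phi, y = P g
    return Phi @ lasso.coef_                      # estimate of the full (unpadded) patch
```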
2.4. Models

We test our methods for the reconstruction of systems of ODEs using synthetic data, i.e. data generated by numerically solving systems of ODEs, which allows us to test quantitatively the accuracy as a function of the amount of data used in the reconstruction. We present results from systems of ODEs in one, two, and three dimensions with increasing complexity in the dynamics. For an ODE in one dimension (1D), we only need to reconstruct one nonlinear function f_1 of one variable x_1. In two dimensions (2D), we need to reconstruct two functions (f_1 and f_2) of two variables (x_1 and x_2), and in three dimensions (3D), we need to reconstruct three functions (f_1, f_2, and f_3) of three variables (x_1, x_2, and x_3) to reproduce the dynamics of the system. Each of the systems that we study possesses a different fixed point structure in phase space. The 1D model has two stable fixed points and one unstable fixed point, and thus all trajectories evolve toward one of the stable fixed points. The 2D model has one saddle point and one oscillatory fixed point with closed orbits as solutions. The 3D model we study has no stable fixed points and instead possesses chaotic dynamics on a strange attractor.

1D model. For 1D, we study the Reynolds model for the immune response to infection [10]:

    dx_1/dt = f_1(x_1) = k_pg x_1 (1 − x_1/x_∞) − k_pm s_m x_1 / (µ_m + k_mp x_1),     (17)

where the pathogen load x_1 is unitless, and the other parameters k_pg, k_pm, k_mp, and s_m have units of inverse hours. The right-hand side of Eq. 17 is the sum of two terms. The first term enables logistic growth of the pathogen load. In the absence of any other terms, any positive initial value will cause x_1 to grow logistically to the steady-state value x_∞. The second term mimics a local, non-specific response to an infection, which reduces the pathogen load. For small values of x_1, the decrease is proportional to x_1. For larger values of x_1, the decrease caused by the second term is constant. We employed the parameter values k_pg = 0.6, x_∞ = 20, k_pm = 0.6, s_m = 0.005, µ_m = 0.002, and k_mp = 0.01, which were used in previous studies of this ODE model [10, 55]. In this parameter regime, Eq. 17 exhibits two stable fixed points, at x_1 = 0 and x_1 ≈ 19.49, and one unstable fixed point, separating the two stable fixed points, at x_1 ≈ 0.31 (Fig. 5).

Figure 5: (a) The function f_1(x_1) for the Reynolds ODE model in one spatial dimension (Eq. 17) for the pathogen load in the range 0 ≤ x_1 ≤ 20. Fixed points are marked by solid circles. (b) Close-up of f_1(x_1) near the stable fixed point at x_1 = 0 and the unstable fixed point at x_1 ≈ 0.31. These two fixed points are difficult to discern on the scale shown in panel (a). Trajectories with initial conditions between x_1 = 0 and 0.31 will move towards the stable fixed point at x_1 = 0, while initial conditions with x_1 > 0.31 will move toward the stable fixed point at x_1 ≈ 19.49.

Figure 6: (a) Pathogen load x_1 as a function of time t in the range 0 ≤ t ≤ 30 for the Reynolds ODE model in one spatial dimension (Eq. 17). (b) Close-up of the region 0 ≤ t ≤ 15, which highlights the behavior of the system near the unstable fixed point at x_1 ≈ 0.31. Solutions (red solid lines) with initial conditions 0 ≤ x_1(0) < 0.31 are attracted to the stable fixed point at x_1 = 0, while solutions (blue solid lines) with initial conditions x_1(0) > 0.31 are attracted to the stable fixed point at x_1 ≈ 19.49.

As shown in Fig. 6, solutions to Eq. 17 with initial conditions 0 ≤ x_1(0) < 0.31 are attracted to the stable fixed point at x_1 = 0, while solutions with initial conditions x_1(0) > 0.31 are attracted to the stable fixed point at x_1 ≈ 19.49.
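As a quick numerical check on the fixed-point values quoted above (a sketch we added, not part of the original analysis), the roots of f_1 and their stability can be computed directly from the stated parameter values:

```python
# Locate the fixed points of the 1D Reynolds model (Eq. 17) and classify their
# stability from the sign of df1/dx1, using the parameter values given in the text.
import numpy as np
from scipy.optimize import brentq

k_pg, x_inf, k_pm, s_m, mu_m, k_mp = 0.6, 20.0, 0.6, 0.005, 0.002, 0.01

def f1(x1):
    return k_pg * x1 * (1 - x1 / x_inf) - k_pm * s_m * x1 / (mu_m + k_mp * x1)

# x1 = 0 is a fixed point by inspection; bracket the two nonzero roots.
roots = [0.0, brentq(f1, 0.1, 1.0), brentq(f1, 10.0, 20.0)]
for x_star in roots:
    slope = (f1(x_star + 1e-6) - f1(x_star - 1e-6)) / 2e-6   # numerical df1/dx1
    kind = "stable" if slope < 0 else "unstable"
    print(f"x1* = {x_star:.2f}  ({kind})")   # expect ~0.00 (stable), ~0.31 (unstable), ~19.49 (stable)
```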

2D model. In 2D, we focused on the Lotka-Volterra system of ODEs that describes predator-prey dynamics [56]:

    dx_1/dt = f_1(x_1, x_2) = α x_1 − β x_1 x_2
    dx_2/dt = f_2(x_1, x_2) = −γ x_2 + δ x_1 x_2,     (18)

where x_1 and x_2 describe the prey and predator population sizes, respectively, and are unitless. In this model, prey have a natural growth rate α. In the absence of predators, the prey population x_1 would grow exponentially with time. With predators present, the prey population decreases at a rate proportional to the product of the predator and prey populations, with a proportionality constant β (with units of inverse time). Without predation, the predator population x_2 would decrease with death rate γ. In the presence of prey x_1, the predator population grows proportionally to the product of the two population sizes x_1 and x_2, with a proportionality constant δ (with units of inverse time).

Figure 7: Solutions of the two-dimensional Lotka-Volterra ODE model (Eq. 18) with α = 0.4, β = 0.4, γ = 0.1, and δ = 0.2 plotted parametrically (x_2(t) versus x_1(t)) for 15 random initial conditions in the domain 0 ≤ x_1 ≤ 1 and 0 ≤ x_2 ≤ 1. This model has an unstable fixed point at x_1 = 0 and x_2 = 0 and an oscillatory fixed point at x_1 = 0.5 and x_2 = 1.

Figure 8: Solutions of the three-dimensional Lorenz ODE model (Eq. 20) with σ = 10, ρ = 28, and β = 8/3 plotted parametrically in x_1-x_2-x_3 space for two initial conditions, (x_1, x_2, x_3) = (−7.57, 11.36, 9.51) (red solid line) and (−12.17, 5.23, 11.60) (blue solid line). For this parameter regime, the system has three unstable fixed points, lives on the Lorenz attractor, and possesses chaotic dynamics.

For the Lotka-Volterra system of ODEs, there are two fixed points, one at x_1 = 0 and x_2 = 0 and one at x_1 = γ/δ and x_2 = α/β. The stability of the fixed points is determined by the eigenvalues of the Jacobian matrix evaluated at the fixed points. The Jacobian of the Lotka-Volterra system is given by

    J_LV = | α − β x_2     −β x_1     |
           | δ x_2         −γ + δ x_1 |.     (19)

The eigenvalues of the Jacobian J_LV at the origin are J_1 = α and J_2 = −γ. Since the model is restricted to positive parameters, the fixed point at the origin is a saddle point. The interpretation is that for small populations of predator and prey, the predator population decreases exponentially due to the lack of a food source. While unharmed by the predator, the prey population can grow exponentially, which drives the system away from the zero-population state, x_1 = 0 and x_2 = 0. The eigenvalues of the Jacobian J_LV at the second fixed point, x_1 = γ/δ and x_2 = α/β, are purely imaginary complex conjugates, J_1 = i√(αγ) and J_2 = −i√(αγ), where i^2 = −1. The purely imaginary eigenvalues cause trajectories to revolve around this fixed point and form closed orbits. The interpretation of this fixed point is that the predators decrease the number of prey, then the predators begin to die due to a lack of food, which in turn allows the prey population to grow. The growing prey population provides an abundant food supply for the predators, which allows the predator population to grow faster than the food supply can sustain. The prey population then decreases and the cycle repeats. For the results below, we chose the parameters α = 0.4, β = 0.4, γ = 0.1, and δ = 0.2 for the Lotka-Volterra system, which locates the oscillatory fixed point at x_1 = 0.5 and x_2 = 1.0 (Fig. 7).

3D model. In 3D, we focused on the Lorenz system of ODEs [57], which describes fluid motion in a container that is heated from below and cooled from above:

    dx_1/dt = f_1(x_1, x_2, x_3) = σ(x_2 − x_1)
    dx_2/dt = f_2(x_1, x_2, x_3) = x_1(ρ − x_3) − x_2
    dx_3/dt = f_3(x_1, x_2, x_3) = x_1 x_2 − β x_3,     (20)

where σ, ρ, and β are positive, dimensionless parameters that represent properties of the fluid. In different parameter regimes, the fluid can display quiescent, convective, and chaotic dynamics. The three dimensionless variables x_1, x_2, and x_3 describe the intensity of the convective flow, the temperature difference between the ascending and descending fluid, and the spatial dependence of the temperature profile, respectively. The system possesses three fixed points, at (x_1, x_2, x_3) = (0, 0, 0), (−β^{1/2}(ρ − 1)^{1/2}, −β^{1/2}(ρ − 1)^{1/2}, ρ − 1), and (β^{1/2}(ρ − 1)^{1/2}, β^{1/2}(ρ − 1)^{1/2}, ρ − 1). The Jacobian of the system is given by

    J_L = | −σ        σ      0    |
          | ρ − x_3   −1     −x_1 |
          | x_2       x_1    −β   |.     (21)

When we evaluate the Jacobian (Eq. 21) at the fixed points, we find that each of the three fixed points possesses at least one unstable eigendirection in the parameter regime σ = 10, ρ = 28, and β = 8/3, so all three fixed points are unstable. With these parameters, the Lorenz system displays chaotic dynamics with Lyapunov exponents l_1 = 0.9, l_2 = 0.0, and l_3 = −14.6 [58]. In Fig. 8, we show the time evolution of two initial conditions in x_1-x_2-x_3 configuration space for this parameter regime.

3. Results

In this section, we present the results of our methodology for ODE reconstruction of data generated from the three systems of ODEs described in Materials and Methods.
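A minimal sketch of how the synthetic training data can be generated (our own helper code, not the authors') is shown below: each model's right-hand side is integrated with SciPy from random initial conditions and sampled at intervals ∆t.

```python
# Generate synthetic trajectories for the three models (Eqs. 17, 18, and 20).
import numpy as np
from scipy.integrate import solve_ivp

def reynolds_1d(t, x, k_pg=0.6, x_inf=20.0, k_pm=0.6, s_m=0.005, mu_m=0.002, k_mp=0.01):
    x1 = x[0]                                             # Eq. 17
    return [k_pg * x1 * (1 - x1 / x_inf) - k_pm * s_m * x1 / (mu_m + k_mp * x1)]

def lotka_volterra(t, x, alpha=0.4, beta=0.4, gamma=0.1, delta=0.2):
    x1, x2 = x                                            # Eq. 18
    return [alpha * x1 - beta * x1 * x2, -gamma * x2 + delta * x1 * x2]

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x1, x2, x3 = x                                        # Eq. 20
    return [sigma * (x2 - x1), x1 * (rho - x3) - x2, x1 * x2 - beta * x3]

def sample_trajectories(rhs, x0_list, t_end, dt):
    """Integrate rhs from each initial condition and sample the solution every dt."""
    t_eval = np.arange(0.0, t_end + dt, dt)
    return [solve_ivp(rhs, (0.0, t_end), x0, t_eval=t_eval, rtol=1e-8).y.T
            for x0 in x0_list]

# Example: N_t = 10 random initial conditions for the Lotka-Volterra model.
rng = np.random.default_rng(1)
trajs = sample_trajectories(lotka_volterra, rng.uniform(0, 1, size=(10, 2)), t_end=50.0, dt=0.1)
```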

As a summary, our ODE reconstruction methodology involves the following eight steps: 1) Choose N_t random initial conditions, solve the system of ODEs until time t = t_end for each initial condition, and sample the solutions at times spaced by ∆t; 2) Calculate numerical derivatives from the trajectories x_i(t) to obtain the functions f_i; 3) Decompose each f_i into all possible patches of a given size; 4) Fill in missing values in the patches with random numbers; 5) Learn a basis to sparsely represent all of the patches; 6) Use the learned basis to sparsely reconstruct the functions f_i based only on the measurements obtained from the N_t trajectories and then calculate the non-measured values of f_i; 7) Compute the error in the reconstruction by comparing all values of the reconstructed functions to all values in the original model, even at locations that were not measured; and 8) Average the error over at least 20 reconstructions by repeating steps 1 through 7. (A code sketch of steps 1 and 2 for the 1D model is shown below.) For each system, we measure the error in the reconstruction as a function of the size of the patches used to decompose the signal for basis learning, the sampling time interval ∆t, and the number of trajectories N_t. For each model, we make sure that the total integration time is sufficiently large that the system can reach the stable fixed points, or sample the chaotic attractor in the case of the Lorenz system.

3.1. Reconstruction of ODEs in 1D

We first focus on the reconstruction of the Reynolds ODE model in 1D (Eq. 17) using time series data. We discretize the domain 0 ≤ x_1 ≤ 20 using 256 points, i = 0, ..., 255. Because the unstable fixed point at x_1 = 0.31 is much closer to the stable fixed point at x_1 = 0 than to the stable fixed point at x_1 = 19.49, we sample more frequently in the region 0 ≤ x_1 ≤ 0.6 than in the region 0.6 < x_1 ≤ 20. In particular, we uniformly sample 128 points from the small domain and uniformly sample the same number of points from the large domain.

Figure 9: Error ∆ in the ODE reconstruction for the 1D Reynolds model (Eq. 17) as a function of the patch size p used for basis learning, using N_t = 10 trajectories with sampling time interval ∆t = 1. Note that the maximum signal size includes 256 points.

In Fig. 9, we show the error (Eq. 14) in recovering the right-hand side of Eq. 17 (f_1(x_1)) as a function of the size p of the patches used for basis learning. Each data point in Fig. 9 represents an average over 20 reconstructions using N_t = 10 trajectories with a sampling time interval ∆t = 1. We find that the error achieves a minimum below 10^-3 in the patch size range 30 < p < 50. Patch sizes that are too small do not adequately sample f_1(x_1), while patch sizes that are too large do not include enough variability to select a sufficiently diverse basis set to reconstruct f_1(x_1). For example, in the extreme case that the patch size is the same as the signal size, we are only able to learn the input data itself, which may be missing data. For the remaining studies of the 1D model, we set p = 50 as the basis patch size.

In Fig. 10, we show the dependence of the reconstruction error on the two λ parameters used for sparse reconstruction, λ_r (Eq. 6), and basis learning, λ_l (Eq. 16), for the 1D Reynolds model. While we find a minimum in the reconstruction error near λ_r ≈ 10^-4, ∆ is essentially independent of λ_l. Thus, we chose λ_r = λ_l = 10^-4 for ODE reconstruction of the 1D Reynolds model. This result indicates that the sparse learning procedure is much less dependent on the regularization than the sparse reconstruction process.
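The sketch below illustrates steps 1 and 2 (and the gridding used in step 3) for the 1D Reynolds model, reusing the reynolds_1d and sample_trajectories helpers from the sketch at the start of this section. The helper names and the exact grid construction are our own illustrative choices, not the authors' code.

```python
# Estimate f1 by forward differences (Eq. 2) along each sampled trajectory and
# bin the measurements onto the 256-point grid described in the text.
import numpy as np

def derivative_measurements(trajs, dt):
    """Return (positions, f-estimates) pooled over all trajectories (Eq. 2)."""
    xs, ys = [], []
    for x in trajs:                        # x has shape (n_times, 1) for the 1D model
        xs.append(x[:-1, 0])
        ys.append((x[1:, 0] - x[:-1, 0]) / dt)
    return np.concatenate(xs), np.concatenate(ys)

# Non-uniform 256-point grid: 128 points in [0, 0.6], 128 points in (0.6, 20].
grid = np.concatenate([np.linspace(0.0, 0.6, 128, endpoint=False),
                       np.linspace(0.6, 20.0, 128)])

def bin_onto_grid(xs, ys, grid):
    """Average the derivative estimates that fall nearest to each grid point."""
    f_meas = np.full(grid.size, np.nan)    # NaN marks grid points with no measurement
    idx = np.abs(xs[:, None] - grid[None, :]).argmin(axis=1)
    for k in np.unique(idx):
        f_meas[k] = ys[idx == k].mean()
    return f_meas

rng = np.random.default_rng(2)
ics = [[x0] for x0 in rng.uniform(0.0, 20.0, size=10)]    # N_t = 10 random initial conditions
trajs = sample_trajectories(reynolds_1d, ics, t_end=30.0, dt=1.0)
xs, ys = derivative_measurements(trajs, dt=1.0)
f_measured = bin_onto_grid(xs, ys, grid)   # unmeasured grid points stay NaN (filled in step 4)
```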
In Fig. 11, we plot the error in the reconstruction of f_1(x_1) as a function of the sampling time interval ∆t for several numbers of trajectories, from N_t = 1 to 200. We find that the error decreases with the number of trajectories used in the reconstruction. For N_t = 1, the error is large, with ∆ > 0.1. For large numbers of trajectories (e.g. N_t = 200), the error decreases with decreasing ∆t, reaching ∆ ≈ 10^-5 for small ∆t. The fact that the error in the ODE reconstruction increases with ∆t is consistent with the notion that the accuracy of the numerical derivative of each trajectory decreases with increasing sampling interval. We also investigate the reconstruction error as a function of Gaussian noise added to the measurements of the derivatives in Fig. 12. We quantify the level of noise by the coefficient of variation ν. As expected, the reconstruction error increases with ν for the 1D Reynolds model.

In Fig. 13, we show the error in the reconstruction of f_1(x_1) as a function of the total integration time t_end. We find that ∆ decreases strongly as t_end increases for t_end < 20. For t_end > 20, ∆ reaches a plateau value below 10^-4, which depends weakly on ∆t. For times t > 20, the Reynolds ODE model reaches one of the two stable fixed points, and therefore ∆ becomes independent of t_end.

In Fig. 14, we compare accurate (using N_t = 50 and ∆t = 0.1) and inaccurate (using N_t = 10 and ∆t = 5) reconstructions of f_1(i_1) for the 1D Reynolds ODE model. Note that we plot f_1 as a function of the scaled variable i_1. The indices i_1 = 0, ..., 127 indicate uniformly spaced x_1 values in the interval 0 ≤ x_1 ≤ 0.6, and i_1 = 128, ..., 255 indicate uniformly spaced x_1 values in the interval 0.6 < x_1 ≤ 20. We find that using large ∆t gives rise to inaccurate measurements of the time derivative of x_1 and, thus, of f_1(x_1). In addition, large ∆t does not allow dense sampling of phase space, especially in regions where the trajectories evolve rapidly. The inaccurate reconstruction in Fig. 14 is even worse than it seems at first glance.

The reconstructed function is identically zero over a wide range of i_1 (0 ≤ i_1 ≲ 50) where f_1(i_1) is not well sampled, since the default output of a failed reconstruction is zero. It is a coincidence that f_1(i_1) ≈ 0 in Eq. 17 over the same range of i_1.

Figure 10: Reconstruction error for the 1D Reynolds model as a function of the two λ parameters used for (a) sparse reconstruction, λ_r (Eq. 6), and (b) basis learning, λ_l (Eq. 16), for several different numbers of trajectories N_t = 10 (blue), 20 (green), and 50 (red) used in the reconstruction. The data for each N_t are averaged over 20 independent reconstructions.

Figure 11: Reconstruction error for the 1D Reynolds model as a function of the sampling interval ∆t for several different numbers of trajectories N_t = 1 (circles), 5 (stars), 10 (triangles), 20 (squares), 50 (diamonds), and 200 (pentagons) used in the reconstruction. The data for each N_t are averaged over 20 independent reconstructions.

Figure 12: Reconstruction error for the 1D Reynolds model as a function of the sampling interval ∆t for several different values of the coefficient of variation ν of Gaussian noise added to the measurements of the derivatives. We show errors for ν = 0 (circles), 0.05 (stars), 0.1 (triangles), 0.2 (squares), and 0.5 (diamonds) used in the reconstruction. The data for each ν are averaged over 20 independent reconstructions.

Figure 13: Reconstruction error for the 1D Reynolds ODE model as a function of the total integration time t_end used for the reconstruction, for N_t = 20 trajectories at several values of the sampling time ∆t = 0.05 (circles), 0.1 (stars), and 0.5 (triangles).

We now numerically solve the reconstructed 1D Reynolds ODE model for different initial conditions and times comparable to t_end and compare these trajectories to those obtained from the original model (Eq. 17). In Fig. 15, we compare the trajectories x_1(t) for the accurate (∆ ≈ 10^-5; Fig. 14(a)) and inaccurate (∆ ≈ 10^-2; Fig. 14(b)) representations of f_1(x_1) to the original model for several initial conditions. All of the trajectories for the accurate representation of f_1(x_1) are nearly indistinguishable from the trajectories of the original model, whereas we find large deviations between the original and reconstructed trajectories even at short times for the inaccurate representation of f_1(x_1).

Figure 14: Reconstructions (solid blue lines) of f_1(i_1) for the 1D Reynolds ODE model in Eq. 17 using (a) N_t = 50 and ∆t = 0.1 and (b) N_t = 10 and ∆t = 5. f_1 is discretized using 256 points. The indices i_1 = 0, ..., 127 indicate uniformly spaced x_1 values in the interval 0 ≤ x_1 ≤ 0.6, and i_1 = 128, ..., 255 indicate uniformly spaced x_1 values in the interval 0.6 < x_1 ≤ 20. The exact expression for f_1(i_1) is represented by the open circles.

Figure 15: (a) Comparison of the trajectories x_1(t) for the 1D Reynolds ODE model using the accurate (solid blue lines) and inaccurate (dashed red lines) reconstructions of f_1(x_1) shown in Fig. 14(a) and (b), respectively, for several initial conditions: x_1(0) = 0.1, 0.2, 0.3, 1, 2, 5, and 10. The trajectories obtained from f_1(x_1) in Eq. 17 are given by the open circles. (b) Close-up of the trajectories at small x_1.

3.2. Reconstruction of Systems of ODEs in 2D

We now investigate the reconstruction accuracy of our method for the Lotka-Volterra system of ODEs in 2D. We find that the results are qualitatively similar to those for the 1D Reynolds ODE model. We map the numerical derivatives for N_t trajectories with a sampling time interval ∆t onto a grid with 0 ≤ x_1, x_2 ≤ 2. Similar to the 1D model, we find that the error in the reconstruction of f_1(x_1, x_2) and f_2(x_1, x_2) possesses a minimum as a function of the patch area used for basis learning, where the location and value of the minimum depend on the parameters used for the reconstruction. For example, for N_t = 200, ∆t = 0.1, and averages over 20 independent reconstructions, the error reaches a minimum (∆ ≈ 10^-5) near p²_min ≈ 100, where the regularization parameters have been chosen to be λ_r = λ_l = 10^-4. Note that the maximum signal size includes 128 × 128 points.

Figure 16: Reconstruction error for the 2D Lotka-Volterra model (Eq. 18) as a function of the sampling time interval ∆t for different numbers of trajectories N_t = 5 (circles), 10 (stars), 20 (rightward triangles), 50 (squares), 100 (diamonds), 200 (pentagons), and 500 (upward triangles), averaged over 20 independent reconstructions.

In Fig. 16, we show the reconstruction error as a function of the sampling time interval ∆t for several values of N_t from 5 to 500 trajectories, for a total time t_end that allows several revolutions around the closed orbits, and for patch size p²_min. As in 1D, we find that increasing N_t reduces the reconstruction error. For N_t = 5, ∆ ≈ 10^-1, while ∆ < 10^-3 for N_t = 500. ∆ also decreases with decreasing ∆t, although ∆ reaches a plateau in the small-∆t limit, which depends on the number of trajectories included in the reconstruction.

In Figs. 17 and 18, we show examples of inaccurate (∆ ≈ 10^-2) and accurate (∆ ≈ 10^-5) reconstructions of f_1(i_1, i_2) and f_2(i_1, i_2). The indices i_{1,2} = 0, ..., 127 indicate uniformly spaced x_1 and x_2 values in the interval 0 ≤ x_{1,2} ≤ 2. The parameters for the inaccurate reconstructions were N_t = 30 trajectories and ∆t = 5 (enabling f_1 and f_2 to be sampled over only 2% of the domain), whereas the parameters for the accurate reconstructions were N_t = 100 trajectories and ∆t = 0.01 (enabling f_1 and f_2 to be sampled over 68% of the domain). These results emphasize that even though the derivatives are undetermined over nearly one-third of the domain, we can reconstruct the functions f_1 and f_2 extremely accurately over the full domain.

Using the reconstructions of f_1 and f_2, we solve for the trajectories x_1(t) and x_2(t) (for times comparable to t_end) and compare them to the trajectories from the original Lotka-Volterra model (Eq. 18). In Fig. 19(a) and (b), we show parametric plots (x_2(t) versus x_1(t)) for the inaccurate (Fig. 17) and accurate (Fig. 18) reconstructions of f_1 and f_2, respectively.

We solve both the inaccurate and accurate models with the same four sets of initial conditions. For the inaccurate reconstruction, most of the trajectories from the reconstructed ODE system do not match the original trajectories. In fact, some of the trajectories spiral outward and do not form closed orbits. In contrast, for the accurate reconstruction, the reconstructed trajectories are very close to those of the original model and all possess closed orbits.

Figure 17: Examples of inaccurate reconstructions of f_1(i_1, i_2) and f_2(i_1, i_2) (with errors ∆ = 0.04 and 0.03, respectively) for the 2D Lotka-Volterra system of ODEs (Eq. 18). The indices i_{1,2} = 0, ..., 127 indicate uniformly spaced x_1 and x_2 values in the interval 0 ≤ x_{1,2} ≤ 2. Panels (a) and (d) give the exact functions f_1(i_1, i_2) and f_2(i_1, i_2), panels (b) and (e) give the reconstructions, and panels (c) and (f) indicate the i_1-i_2 values that were sampled (2% of the grid). The white regions in panels (c) and (f) indicate missing data. In the top row, the color scale varies from −0.8 to 0.8 (from dark blue to red). In the bottom row, the color scale varies from −0.2 to 0.6 (from dark blue to red). This reconstruction was obtained using N_t = 30 trajectories, a sampling interval ∆t = 5, and a basis patch area p² = 625.

Figure 18: Examples of accurate reconstructions of f_1(i_1, i_2) and f_2(i_1, i_2) (both with errors ∆ of order 10^-5) for the 2D Lotka-Volterra system of ODEs (Eq. 18). The indices i_{1,2} = 0, ..., 127 indicate uniformly spaced x_1 and x_2 values in the interval 0 ≤ x_{1,2} ≤ 2. Panels (a) and (d) give the exact functions f_1(i_1, i_2) and f_2(i_1, i_2), panels (b) and (e) give the reconstructions, and panels (c) and (f) indicate the i_1-i_2 values that were sampled (68% of the grid). The white regions in panels (c) and (f) indicate missing data. The color scales are the same as in Fig. 17. This reconstruction was obtained using N_t = 100 trajectories, a sampling interval of ∆t = 0.01, and a basis patch area p²_min.

Figure 19: Parametric plots of the trajectories x_1(t) and x_2(t) (solid lines) for the (a) inaccurate and (b) accurate reconstructions of f_1 and f_2 in Figs. 17 and 18, respectively, for four different initial conditions indicated by the crosses. The open circles indicate the trajectories from the Lotka-Volterra system of ODEs (Eq. 18).

3.3. Reconstruction of systems of ODEs in 3D

For the Lorenz ODE model, we need to reconstruct three functions of three variables: f_1(x_1, x_2, x_3), f_2(x_1, x_2, x_3), and f_3(x_1, x_2, x_3). Based on the selected parameters σ = 10, ρ = 28, and β = 8/3, we chose a discretization of the domain −21 ≤ x_1 ≤ 21, −29 ≤ x_2 ≤ 29, and 2 ≤ x_3, with the upper limit of x_3 chosen to cover the attractor. We employed patches taken from each of the 32 slices (along x_3) in the x_1-x_2 plane to perform the basis learning. The choice for the patches to be in the x_1-x_2 plane is arbitrary; one may also choose patches that extend into 3D. Again, we chose λ_r = λ_l = 10^-4.

In Fig. 20, we plot the reconstruction error versus the sampling time interval ∆t for several values of N_t from 500 to 10^4 trajectories. As found for the 1D and 2D ODE models, the reconstruction error decreases with decreasing ∆t and increasing N_t. ∆ reaches a low-∆t plateau that depends on the value of N_t, and for N_t = 10^4 this plateau is the smallest.

Figure 20: Reconstruction error plotted versus the sampling time interval ∆t for the 3D Lorenz model (Eq. 20) for several different numbers of trajectories N_t = 500 (circles), 1000 (stars), 5000 (rightward triangles), and 10^4 (squares). Each data point is averaged over 20 independent reconstructions.

In Fig. 21, we visualize the reconstructed functions f_1, f_2, and f_3 for the Lorenz system of ODEs. Panels (a)-(c) represent f_1, (d)-(f) represent f_2, and (g)-(i) represent f_3. The 3D domain is broken into 32 slices (along x_3) of grid points in the x_1-x_2 plane. Panels (a), (d), and (g) give the original functions f_1, f_2, and f_3 in the Lorenz system of ODEs (Eq. 20). Panels (b), (e), and (h) give the reconstructed versions of f_1, f_2, and f_3, and panels (c), (f), and (i) provide the data that was used for the reconstructions (with white regions indicating missing data). The central regions of the functions are recovered with high accuracy. (The edges of the domain were not well sampled, and thus the reconstruction there was not as accurate.) These results show that even for chaotic systems in 3D we are able to achieve accurate ODE reconstruction.

In Fig. 22, we compare trajectories from the reconstructed functions to those from the original Lorenz system of ODEs for times comparable to the inverse of the largest Lyapunov exponent. In this case, we find that some of the trajectories from the reconstructed model closely match those from the original model, while others differ from the trajectories of the original model. Since chaotic systems are extremely sensitive to initial conditions, we expect that all trajectories of the reconstructed model will differ from the trajectories of the original model at long times. Despite this, the trajectories from the reconstructed model display chaotic dynamics with Lyapunov exponents similar to those of the Lorenz system of ODEs. In particular, our estimate for the largest Lyapunov exponent of the reconstructed Lorenz model is approximately 0.8, which is obtained by measuring the exponential growth of the separation between two trajectories with a small initial separation. Thus, we are able to recover the controlling dynamics of the original model. A more thorough discussion of ODE model validation can be found in Ref. [59].
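The two-trajectory estimate of the largest Lyapunov exponent described above can be sketched as follows (our own code; the initial separation and fit window are illustrative choices, not the paper's, and the same procedure can be applied to the reconstructed model by swapping in its right-hand side).

```python
# Estimate the largest Lyapunov exponent from the exponential growth of the
# separation between two nearby trajectories, by fitting log(separation) vs time.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x1, x2, x3 = x
    return [sigma * (x2 - x1), x1 * (rho - x3) - x2, x1 * x2 - beta * x3]

def largest_lyapunov(rhs, x0, eps=1e-8, t_fit=15.0, dt=0.01):
    t_eval = np.arange(0.0, t_fit, dt)
    a = solve_ivp(rhs, (0.0, t_fit), x0, t_eval=t_eval, rtol=1e-10, atol=1e-12).y
    b = solve_ivp(rhs, (0.0, t_fit), np.asarray(x0) + eps,
                  t_eval=t_eval, rtol=1e-10, atol=1e-12).y
    sep = np.linalg.norm(a - b, axis=0)
    slope, _ = np.polyfit(t_eval, np.log(sep), 1)   # log(sep) ~ lambda_max * t + const
    return slope

print(largest_lyapunov(lorenz, [-7.57, 11.36, 9.51]))   # expect a value near the known ~0.9
```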
4. Discussion

We developed a method for reconstructing sets of nonlinear ODEs from time series data using machine learning methods involving sparse function reconstruction and sparse basis learning. Using only information from the system trajectories, we first learned a sparse basis, with no a priori knowledge of the underlying functions in the system of ODEs and no pre-selected basis functions, and then reconstructed the system of ODEs in this learned basis. A key feature of our method is its reliance on sparse representations of the system of ODEs. Our results also emphasize that sparse representations provide more accurate reconstructions of systems of ODEs than least-squares approaches.

We tested our ODE reconstruction method on time series data obtained from systems of ODEs in 1D, 2D, and 3D. In 1D, we studied the Reynolds model for the immune response to infection. In the parameter regime we considered, this system possesses only two stable fixed points, and thus all initial conditions converge to these fixed points in the long-time limit. In 2D, we studied the Lotka-Volterra model for predator-prey dynamics. In the parameter regime we studied, this system possesses an oscillatory fixed point with closed orbits. In 3D, we studied the Lorenz model for convective flows. In the parameter regime we considered, the system displays chaotic dynamics on a strange attractor.

For each model, we measured the error in the reconstructed system of ODEs as a function of the parameters of the reconstruction method, including the sampling time interval ∆t, the number of trajectories N_t, the total time t_end of the trajectories, and the size of the patches used for basis function learning. For the 1D model, we also studied the effect of measurement noise on the reconstruction error. In general, the error decreases as more data is used for the reconstruction. We determined the parameter regimes for which we could achieve highly accurate reconstruction. We then generated trajectories from the reconstructed systems of ODEs and compared them to the trajectories of the original models. For the 1D model with two stable fixed points, we were able to achieve extremely accurate reconstruction and recapitulation of the trajectories of the original model. Our reconstruction for the 2D model is also accurate and is able to achieve closed orbits for most initial conditions. For some of the initial conditions, smaller sampling time intervals and longer trajectories were needed to achieve reconstructed solutions with closed orbits. In future studies, we will investigate methods to add a constraint that imposes a constant of the motion on the reconstruction method, which will allow us to use larger sampling time intervals and shorter trajectories and still achieve closed orbits. For the 3D chaotic Lorenz system, we can only match the trajectories of the reconstructed and original systems for times that are small compared to the inverse of the largest Lyapunov exponent. Even though the trajectories of the reconstructed and original systems will diverge, we have shown that the reconstructed model displays chaotic dynamics with Lyapunov exponents similar to those of the original Lorenz system.


More information

Lecture 2 Lagrangian formulation of classical mechanics Mechanics

Lecture 2 Lagrangian formulation of classical mechanics Mechanics Lecture Lagrangian formulation of classical mechanics 70.00 Mechanics Principle of stationary action MATH-GA To specify a motion uniquely in classical mechanics, it suffices to give, at some time t 0,

More information

Diagonalization of Matrices Dr. E. Jacobs

Diagonalization of Matrices Dr. E. Jacobs Diagonalization of Matrices Dr. E. Jacobs One of the very interesting lessons in this course is how certain algebraic techniques can be use to solve ifferential equations. The purpose of these notes is

More information

Qubit channels that achieve capacity with two states

Qubit channels that achieve capacity with two states Qubit channels that achieve capacity with two states Dominic W. Berry Department of Physics, The University of Queenslan, Brisbane, Queenslan 4072, Australia Receive 22 December 2004; publishe 22 March

More information

On combinatorial approaches to compressed sensing

On combinatorial approaches to compressed sensing On combinatorial approaches to compresse sensing Abolreza Abolhosseini Moghaam an Hayer Raha Department of Electrical an Computer Engineering, Michigan State University, East Lansing, MI, U.S. Emails:{abolhos,raha}@msu.eu

More information

'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21

'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21 Large amping in a structural material may be either esirable or unesirable, epening on the engineering application at han. For example, amping is a esirable property to the esigner concerne with limiting

More information

THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE

THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE Journal of Soun an Vibration (1996) 191(3), 397 414 THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE E. M. WEINSTEIN Galaxy Scientific Corporation, 2500 English Creek

More information

Parameter estimation: A new approach to weighting a priori information

Parameter estimation: A new approach to weighting a priori information Parameter estimation: A new approach to weighting a priori information J.L. Mea Department of Mathematics, Boise State University, Boise, ID 83725-555 E-mail: jmea@boisestate.eu Abstract. We propose a

More information

Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification

Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection an System Ientification Borhan M Sananaji, Tyrone L Vincent, an Michael B Wakin Abstract In this paper,

More information

Thermal conductivity of graded composites: Numerical simulations and an effective medium approximation

Thermal conductivity of graded composites: Numerical simulations and an effective medium approximation JOURNAL OF MATERIALS SCIENCE 34 (999)5497 5503 Thermal conuctivity of grae composites: Numerical simulations an an effective meium approximation P. M. HUI Department of Physics, The Chinese University

More information

The effect of nonvertical shear on turbulence in a stably stratified medium

The effect of nonvertical shear on turbulence in a stably stratified medium The effect of nonvertical shear on turbulence in a stably stratifie meium Frank G. Jacobitz an Sutanu Sarkar Citation: Physics of Fluis (1994-present) 10, 1158 (1998); oi: 10.1063/1.869640 View online:

More information

THE ACCURATE ELEMENT METHOD: A NEW PARADIGM FOR NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS

THE ACCURATE ELEMENT METHOD: A NEW PARADIGM FOR NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS THE PUBISHING HOUSE PROCEEDINGS O THE ROMANIAN ACADEMY, Series A, O THE ROMANIAN ACADEMY Volume, Number /, pp. 6 THE ACCURATE EEMENT METHOD: A NEW PARADIGM OR NUMERICA SOUTION O ORDINARY DIERENTIA EQUATIONS

More information

u!i = a T u = 0. Then S satisfies

u!i = a T u = 0. Then S satisfies Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace

More information

Equilibrium in Queues Under Unknown Service Times and Service Value

Equilibrium in Queues Under Unknown Service Times and Service Value University of Pennsylvania ScholarlyCommons Finance Papers Wharton Faculty Research 1-2014 Equilibrium in Queues Uner Unknown Service Times an Service Value Laurens Debo Senthil K. Veeraraghavan University

More information

Stable and compact finite difference schemes

Stable and compact finite difference schemes Center for Turbulence Research Annual Research Briefs 2006 2 Stable an compact finite ifference schemes By K. Mattsson, M. Svär AND M. Shoeybi. Motivation an objectives Compact secon erivatives have long

More information

Application of the homotopy perturbation method to a magneto-elastico-viscous fluid along a semi-infinite plate

Application of the homotopy perturbation method to a magneto-elastico-viscous fluid along a semi-infinite plate Freun Publishing House Lt., International Journal of Nonlinear Sciences & Numerical Simulation, (9), -, 9 Application of the homotopy perturbation metho to a magneto-elastico-viscous flui along a semi-infinite

More information

Math 342 Partial Differential Equations «Viktor Grigoryan

Math 342 Partial Differential Equations «Viktor Grigoryan Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite

More information

On Characterizing the Delay-Performance of Wireless Scheduling Algorithms

On Characterizing the Delay-Performance of Wireless Scheduling Algorithms On Characterizing the Delay-Performance of Wireless Scheuling Algorithms Xiaojun Lin Center for Wireless Systems an Applications School of Electrical an Computer Engineering, Purue University West Lafayette,

More information

Linear First-Order Equations

Linear First-Order Equations 5 Linear First-Orer Equations Linear first-orer ifferential equations make up another important class of ifferential equations that commonly arise in applications an are relatively easy to solve (in theory)

More information

Chapter 6: Energy-Momentum Tensors

Chapter 6: Energy-Momentum Tensors 49 Chapter 6: Energy-Momentum Tensors This chapter outlines the general theory of energy an momentum conservation in terms of energy-momentum tensors, then applies these ieas to the case of Bohm's moel.

More information

Conservation laws a simple application to the telegraph equation

Conservation laws a simple application to the telegraph equation J Comput Electron 2008 7: 47 51 DOI 10.1007/s10825-008-0250-2 Conservation laws a simple application to the telegraph equation Uwe Norbrock Reinhol Kienzler Publishe online: 1 May 2008 Springer Scienceusiness

More information

IPA Derivatives for Make-to-Stock Production-Inventory Systems With Backorders Under the (R,r) Policy

IPA Derivatives for Make-to-Stock Production-Inventory Systems With Backorders Under the (R,r) Policy IPA Derivatives for Make-to-Stock Prouction-Inventory Systems With Backorers Uner the (Rr) Policy Yihong Fan a Benamin Melame b Yao Zhao c Yorai Wari Abstract This paper aresses Infinitesimal Perturbation

More information

Construction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems

Construction of the Electronic Radial Wave Functions and Probability Distributions of Hydrogen-like Systems Construction of the Electronic Raial Wave Functions an Probability Distributions of Hyrogen-like Systems Thomas S. Kuntzleman, Department of Chemistry Spring Arbor University, Spring Arbor MI 498 tkuntzle@arbor.eu

More information

Table of Common Derivatives By David Abraham

Table of Common Derivatives By David Abraham Prouct an Quotient Rules: Table of Common Derivatives By Davi Abraham [ f ( g( ] = [ f ( ] g( + f ( [ g( ] f ( = g( [ f ( ] g( g( f ( [ g( ] Trigonometric Functions: sin( = cos( cos( = sin( tan( = sec

More information

Hyperbolic Systems of Equations Posed on Erroneous Curved Domains

Hyperbolic Systems of Equations Posed on Erroneous Curved Domains Hyperbolic Systems of Equations Pose on Erroneous Curve Domains Jan Norström a, Samira Nikkar b a Department of Mathematics, Computational Mathematics, Linköping University, SE-58 83 Linköping, Sween (

More information

Math Notes on differentials, the Chain Rule, gradients, directional derivative, and normal vectors

Math Notes on differentials, the Chain Rule, gradients, directional derivative, and normal vectors Math 18.02 Notes on ifferentials, the Chain Rule, graients, irectional erivative, an normal vectors Tangent plane an linear approximation We efine the partial erivatives of f( xy, ) as follows: f f( x+

More information

COUPLING REQUIREMENTS FOR WELL POSED AND STABLE MULTI-PHYSICS PROBLEMS

COUPLING REQUIREMENTS FOR WELL POSED AND STABLE MULTI-PHYSICS PROBLEMS VI International Conference on Computational Methos for Couple Problems in Science an Engineering COUPLED PROBLEMS 15 B. Schrefler, E. Oñate an M. Paparakakis(Es) COUPLING REQUIREMENTS FOR WELL POSED AND

More information

u t v t v t c a u t b a v t u t v t b a

u t v t v t c a u t b a v t u t v t b a Nonlinear Dynamical Systems In orer to iscuss nonlinear ynamical systems, we must first consier linear ynamical systems. Linear ynamical systems are just systems of linear equations like we have been stuying

More information

How the potentials in different gauges yield the same retarded electric and magnetic fields

How the potentials in different gauges yield the same retarded electric and magnetic fields How the potentials in ifferent gauges yiel the same retare electric an magnetic fiels José A. Heras a Departamento e Física, E. S. F. M., Instituto Politécnico Nacional, México D. F. México an Department

More information

Harmonic Modelling of Thyristor Bridges using a Simplified Time Domain Method

Harmonic Modelling of Thyristor Bridges using a Simplified Time Domain Method 1 Harmonic Moelling of Thyristor Briges using a Simplifie Time Domain Metho P. W. Lehn, Senior Member IEEE, an G. Ebner Abstract The paper presents time omain methos for harmonic analysis of a 6-pulse

More information

ANALYSIS OF A GENERAL FAMILY OF REGULARIZED NAVIER-STOKES AND MHD MODELS

ANALYSIS OF A GENERAL FAMILY OF REGULARIZED NAVIER-STOKES AND MHD MODELS ANALYSIS OF A GENERAL FAMILY OF REGULARIZED NAVIER-STOKES AND MHD MODELS MICHAEL HOLST, EVELYN LUNASIN, AND GANTUMUR TSOGTGEREL ABSTRACT. We consier a general family of regularize Navier-Stokes an Magnetohyroynamics

More information

Energy behaviour of the Boris method for charged-particle dynamics

Energy behaviour of the Boris method for charged-particle dynamics Version of 25 April 218 Energy behaviour of the Boris metho for charge-particle ynamics Ernst Hairer 1, Christian Lubich 2 Abstract The Boris algorithm is a wiely use numerical integrator for the motion

More information

Dynamical Systems and a Brief Introduction to Ergodic Theory

Dynamical Systems and a Brief Introduction to Ergodic Theory Dynamical Systems an a Brief Introuction to Ergoic Theory Leo Baran Spring 2014 Abstract This paper explores ynamical systems of ifferent types an orers, culminating in an examination of the properties

More information

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013 Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing

More information

Euler equations for multiple integrals

Euler equations for multiple integrals Euler equations for multiple integrals January 22, 2013 Contents 1 Reminer of multivariable calculus 2 1.1 Vector ifferentiation......................... 2 1.2 Matrix ifferentiation........................

More information

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische

More information

6 General properties of an autonomous system of two first order ODE

6 General properties of an autonomous system of two first order ODE 6 General properties of an autonomous system of two first orer ODE Here we embark on stuying the autonomous system of two first orer ifferential equations of the form ẋ 1 = f 1 (, x 2 ), ẋ 2 = f 2 (, x

More information

SYNCHRONOUS SEQUENTIAL CIRCUITS

SYNCHRONOUS SEQUENTIAL CIRCUITS CHAPTER SYNCHRONOUS SEUENTIAL CIRCUITS Registers an counters, two very common synchronous sequential circuits, are introuce in this chapter. Register is a igital circuit for storing information. Contents

More information

Polynomial Inclusion Functions

Polynomial Inclusion Functions Polynomial Inclusion Functions E. e Weert, E. van Kampen, Q. P. Chu, an J. A. Muler Delft University of Technology, Faculty of Aerospace Engineering, Control an Simulation Division E.eWeert@TUDelft.nl

More information

Function Spaces. 1 Hilbert Spaces

Function Spaces. 1 Hilbert Spaces Function Spaces A function space is a set of functions F that has some structure. Often a nonparametric regression function or classifier is chosen to lie in some function space, where the assume structure

More information

Optimization of Geometries by Energy Minimization

Optimization of Geometries by Energy Minimization Optimization of Geometries by Energy Minimization by Tracy P. Hamilton Department of Chemistry University of Alabama at Birmingham Birmingham, AL 3594-140 hamilton@uab.eu Copyright Tracy P. Hamilton, 1997.

More information

Advanced Partial Differential Equations with Applications

Advanced Partial Differential Equations with Applications MIT OpenCourseWare http://ocw.mit.eu 18.306 Avance Partial Differential Equations with Applications Fall 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.eu/terms.

More information

We G Model Reduction Approaches for Solution of Wave Equations for Multiple Frequencies

We G Model Reduction Approaches for Solution of Wave Equations for Multiple Frequencies We G15 5 Moel Reuction Approaches for Solution of Wave Equations for Multiple Frequencies M.Y. Zaslavsky (Schlumberger-Doll Research Center), R.F. Remis* (Delft University) & V.L. Druskin (Schlumberger-Doll

More information

Introduction to Markov Processes

Introduction to Markov Processes Introuction to Markov Processes Connexions moule m44014 Zzis law Gustav) Meglicki, Jr Office of the VP for Information Technology Iniana University RCS: Section-2.tex,v 1.24 2012/12/21 18:03:08 gustav

More information

Differentiability, Computing Derivatives, Trig Review. Goals:

Differentiability, Computing Derivatives, Trig Review. Goals: Secants vs. Derivatives - Unit #3 : Goals: Differentiability, Computing Derivatives, Trig Review Determine when a function is ifferentiable at a point Relate the erivative graph to the the graph of an

More information

The Exact Form and General Integrating Factors

The Exact Form and General Integrating Factors 7 The Exact Form an General Integrating Factors In the previous chapters, we ve seen how separable an linear ifferential equations can be solve using methos for converting them to forms that can be easily

More information

Calculus of Variations

Calculus of Variations 16.323 Lecture 5 Calculus of Variations Calculus of Variations Most books cover this material well, but Kirk Chapter 4 oes a particularly nice job. x(t) x* x*+ αδx (1) x*- αδx (1) αδx (1) αδx (1) t f t

More information

Situation awareness of power system based on static voltage security region

Situation awareness of power system based on static voltage security region The 6th International Conference on Renewable Power Generation (RPG) 19 20 October 2017 Situation awareness of power system base on static voltage security region Fei Xiao, Zi-Qing Jiang, Qian Ai, Ran

More information

Implicit Differentiation

Implicit Differentiation Implicit Differentiation Thus far, the functions we have been concerne with have been efine explicitly. A function is efine explicitly if the output is given irectly in terms of the input. For instance,

More information

Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions. Paul M. Mwasame, Norman J. Wagner, Antony N.

Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions. Paul M. Mwasame, Norman J. Wagner, Antony N. Submitte to the Journal of Rheology Moeling the effects of polyispersity on the viscosity of noncolloial har sphere suspensions Paul M. Mwasame, Norman J. Wagner, Antony N. Beris a) epartment of Chemical

More information

On the Surprising Behavior of Distance Metrics in High Dimensional Space

On the Surprising Behavior of Distance Metrics in High Dimensional Space On the Surprising Behavior of Distance Metrics in High Dimensional Space Charu C. Aggarwal, Alexaner Hinneburg 2, an Daniel A. Keim 2 IBM T. J. Watson Research Center Yortown Heights, NY 0598, USA. charu@watson.ibm.com

More information

ELEC3114 Control Systems 1

ELEC3114 Control Systems 1 ELEC34 Control Systems Linear Systems - Moelling - Some Issues Session 2, 2007 Introuction Linear systems may be represente in a number of ifferent ways. Figure shows the relationship between various representations.

More information

A Review of Multiple Try MCMC algorithms for Signal Processing

A Review of Multiple Try MCMC algorithms for Signal Processing A Review of Multiple Try MCMC algorithms for Signal Processing Luca Martino Image Processing Lab., Universitat e València (Spain) Universia Carlos III e Mari, Leganes (Spain) Abstract Many applications

More information

The Three-dimensional Schödinger Equation

The Three-dimensional Schödinger Equation The Three-imensional Schöinger Equation R. L. Herman November 7, 016 Schröinger Equation in Spherical Coorinates We seek to solve the Schröinger equation with spherical symmetry using the metho of separation

More information

II. First variation of functionals

II. First variation of functionals II. First variation of functionals The erivative of a function being zero is a necessary conition for the etremum of that function in orinary calculus. Let us now tackle the question of the equivalent

More information

Applications of First Order Equations

Applications of First Order Equations Applications of First Orer Equations Viscous Friction Consier a small mass that has been roppe into a thin vertical tube of viscous flui lie oil. The mass falls, ue to the force of gravity, but falls more

More information

Differentiability, Computing Derivatives, Trig Review

Differentiability, Computing Derivatives, Trig Review Unit #3 : Differentiability, Computing Derivatives, Trig Review Goals: Determine when a function is ifferentiable at a point Relate the erivative graph to the the graph of an original function Compute

More information

Optimized Schwarz Methods with the Yin-Yang Grid for Shallow Water Equations

Optimized Schwarz Methods with the Yin-Yang Grid for Shallow Water Equations Optimize Schwarz Methos with the Yin-Yang Gri for Shallow Water Equations Abessama Qaouri Recherche en prévision numérique, Atmospheric Science an Technology Directorate, Environment Canaa, Dorval, Québec,

More information

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1 Lecture 5 Some ifferentiation rules Trigonometric functions (Relevant section from Stewart, Seventh Eition: Section 3.3) You all know that sin = cos cos = sin. () But have you ever seen a erivation of

More information

Image Denoising Using Spatial Adaptive Thresholding

Image Denoising Using Spatial Adaptive Thresholding International Journal of Engineering Technology, Management an Applie Sciences Image Denoising Using Spatial Aaptive Thresholing Raneesh Mishra M. Tech Stuent, Department of Electronics & Communication,

More information

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1 Assignment 1 Golstein 1.4 The equations of motion for the rolling isk are special cases of general linear ifferential equations of constraint of the form g i (x 1,..., x n x i = 0. i=1 A constraint conition

More information

A Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation

A Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation A Novel ecouple Iterative Metho for eep-submicron MOSFET RF Circuit Simulation CHUAN-SHENG WANG an YIMING LI epartment of Mathematics, National Tsing Hua University, National Nano evice Laboratories, an

More information

The total derivative. Chapter Lagrangian and Eulerian approaches

The total derivative. Chapter Lagrangian and Eulerian approaches Chapter 5 The total erivative 51 Lagrangian an Eulerian approaches The representation of a flui through scalar or vector fiels means that each physical quantity uner consieration is escribe as a function

More information

Computing Exact Confidence Coefficients of Simultaneous Confidence Intervals for Multinomial Proportions and their Functions

Computing Exact Confidence Coefficients of Simultaneous Confidence Intervals for Multinomial Proportions and their Functions Working Paper 2013:5 Department of Statistics Computing Exact Confience Coefficients of Simultaneous Confience Intervals for Multinomial Proportions an their Functions Shaobo Jin Working Paper 2013:5

More information

7.1 Support Vector Machine

7.1 Support Vector Machine 67577 Intro. to Machine Learning Fall semester, 006/7 Lecture 7: Support Vector Machines an Kernel Functions II Lecturer: Amnon Shashua Scribe: Amnon Shashua 7. Support Vector Machine We return now to

More information

ON THE OPTIMALITY SYSTEM FOR A 1 D EULER FLOW PROBLEM

ON THE OPTIMALITY SYSTEM FOR A 1 D EULER FLOW PROBLEM ON THE OPTIMALITY SYSTEM FOR A D EULER FLOW PROBLEM Eugene M. Cliff Matthias Heinkenschloss y Ajit R. Shenoy z Interisciplinary Center for Applie Mathematics Virginia Tech Blacksburg, Virginia 46 Abstract

More information

inflow outflow Part I. Regular tasks for MAE598/494 Task 1

inflow outflow Part I. Regular tasks for MAE598/494 Task 1 MAE 494/598, Fall 2016 Project #1 (Regular tasks = 20 points) Har copy of report is ue at the start of class on the ue ate. The rules on collaboration will be release separately. Please always follow the

More information

Math 1B, lecture 8: Integration by parts

Math 1B, lecture 8: Integration by parts Math B, lecture 8: Integration by parts Nathan Pflueger 23 September 2 Introuction Integration by parts, similarly to integration by substitution, reverses a well-known technique of ifferentiation an explores

More information

Switching Time Optimization in Discretized Hybrid Dynamical Systems

Switching Time Optimization in Discretized Hybrid Dynamical Systems Switching Time Optimization in Discretize Hybri Dynamical Systems Kathrin Flaßkamp, To Murphey, an Sina Ober-Blöbaum Abstract Switching time optimization (STO) arises in systems that have a finite set

More information

Flexible High-Dimensional Classification Machines and Their Asymptotic Properties

Flexible High-Dimensional Classification Machines and Their Asymptotic Properties Journal of Machine Learning Research 16 (2015) 1547-1572 Submitte 1/14; Revise 9/14; Publishe 8/15 Flexible High-Dimensional Classification Machines an Their Asymptotic Properties Xingye Qiao Department

More information

4. Important theorems in quantum mechanics

4. Important theorems in quantum mechanics TFY4215 Kjemisk fysikk og kvantemekanikk - Tillegg 4 1 TILLEGG 4 4. Important theorems in quantum mechanics Before attacking three-imensional potentials in the next chapter, we shall in chapter 4 of this

More information

The effect of dissipation on solutions of the complex KdV equation

The effect of dissipation on solutions of the complex KdV equation Mathematics an Computers in Simulation 69 (25) 589 599 The effect of issipation on solutions of the complex KV equation Jiahong Wu a,, Juan-Ming Yuan a,b a Department of Mathematics, Oklahoma State University,

More information

Vectors in two dimensions

Vectors in two dimensions Vectors in two imensions Until now, we have been working in one imension only The main reason for this is to become familiar with the main physical ieas like Newton s secon law, without the aitional complication

More information

Agmon Kolmogorov Inequalities on l 2 (Z d )

Agmon Kolmogorov Inequalities on l 2 (Z d ) Journal of Mathematics Research; Vol. 6, No. ; 04 ISSN 96-9795 E-ISSN 96-9809 Publishe by Canaian Center of Science an Eucation Agmon Kolmogorov Inequalities on l (Z ) Arman Sahovic Mathematics Department,

More information

Lie symmetry and Mei conservation law of continuum system

Lie symmetry and Mei conservation law of continuum system Chin. Phys. B Vol. 20, No. 2 20 020 Lie symmetry an Mei conservation law of continuum system Shi Shen-Yang an Fu Jing-Li Department of Physics, Zhejiang Sci-Tech University, Hangzhou 3008, China Receive

More information

F 1 x 1,,x n. F n x 1,,x n

F 1 x 1,,x n. F n x 1,,x n Chapter Four Dynamical Systems 4. Introuction The previous chapter has been evote to the analysis of systems of first orer linear ODE s. In this chapter we will consier systems of first orer ODE s that

More information

arxiv:hep-th/ v1 3 Feb 1993

arxiv:hep-th/ v1 3 Feb 1993 NBI-HE-9-89 PAR LPTHE 9-49 FTUAM 9-44 November 99 Matrix moel calculations beyon the spherical limit arxiv:hep-th/93004v 3 Feb 993 J. Ambjørn The Niels Bohr Institute Blegamsvej 7, DK-00 Copenhagen Ø,

More information

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments Problem F U L W D g m 3 2 s 2 0 0 0 0 2 kg 0 0 0 0 0 0 Table : Dimension matrix TMA 495 Matematisk moellering Exam Tuesay December 6, 2008 09:00 3:00 Problems an solution with aitional comments The necessary

More information

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes Leaving Ranomness to Nature: -Dimensional Prouct Coes through the lens of Generalize-LDPC coes Tavor Baharav, Kannan Ramchanran Dept. of Electrical Engineering an Computer Sciences, U.C. Berkeley {tavorb,

More information

Divergence-free and curl-free wavelets in two dimensions and three dimensions: application to turbulent flows

Divergence-free and curl-free wavelets in two dimensions and three dimensions: application to turbulent flows Journal of Turbulence Volume 7 No. 3 6 Divergence-free an curl-free wavelets in two imensions an three imensions: application to turbulent flows ERWAN DERIAZ an VALÉRIE PERRIER Laboratoire e Moélisation

More information

MATH 205 Practice Final Exam Name:

MATH 205 Practice Final Exam Name: MATH 205 Practice Final Eam Name:. (2 points) Consier the function g() = e. (a) (5 points) Ientify the zeroes, vertical asymptotes, an long-term behavior on both sies of this function. Clearly label which

More information

UNIFYING PCA AND MULTISCALE APPROACHES TO FAULT DETECTION AND ISOLATION

UNIFYING PCA AND MULTISCALE APPROACHES TO FAULT DETECTION AND ISOLATION UNIFYING AND MULISCALE APPROACHES O FAUL DEECION AND ISOLAION Seongkyu Yoon an John F. MacGregor Dept. Chemical Engineering, McMaster University, Hamilton Ontario Canaa L8S 4L7 yoons@mcmaster.ca macgreg@mcmaster.ca

More information

Problems Governed by PDE. Shlomo Ta'asan. Carnegie Mellon University. and. Abstract

Problems Governed by PDE. Shlomo Ta'asan. Carnegie Mellon University. and. Abstract Pseuo-Time Methos for Constraine Optimization Problems Governe by PDE Shlomo Ta'asan Carnegie Mellon University an Institute for Computer Applications in Science an Engineering Abstract In this paper we

More information

Simulation of Angle Beam Ultrasonic Testing with a Personal Computer

Simulation of Angle Beam Ultrasonic Testing with a Personal Computer Key Engineering Materials Online: 4-8-5 I: 66-9795, Vols. 7-73, pp 38-33 oi:.48/www.scientific.net/kem.7-73.38 4 rans ech ublications, witzerlan Citation & Copyright (to be inserte by the publisher imulation

More information

Simple Electromagnetic Motor Model for Torsional Analysis of Variable Speed Drives with an Induction Motor

Simple Electromagnetic Motor Model for Torsional Analysis of Variable Speed Drives with an Induction Motor DOI: 10.24352/UB.OVGU-2017-110 TECHNISCHE MECHANIK, 37, 2-5, (2017), 347-357 submitte: June 15, 2017 Simple Electromagnetic Motor Moel for Torsional Analysis of Variable Spee Drives with an Inuction Motor

More information