NSL Report, September. To appear: IEEE Conference on Decision and Control, December.

Stable Adaptive Control and Recursive Identification Using Radial Gaussian Networks

Robert M. Sanner and Jean-Jacques E. Slotine
Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139 USA

Abstract

Previous work has provided the theoretical foundations of a constructive design procedure for uniform approximation of smooth functions to a chosen degree of accuracy using networks of gaussian radial basis functions. This construction and the guaranteed uniform bounds were then shown to provide the basis for stable adaptive neurocontrol algorithms for a class of nonlinear plants. This paper details and extends these ideas in three directions: first, some practical details of the construction are provided, explicitly illustrating the relation between the free parameters in the network design and the degree of approximation error on a particular set. Next, the original adaptive control algorithm is modified to permit incorporation of additional prior knowledge of the system dynamics, allowing the neurocontroller to operate in parallel with conventional fixed or adaptive controllers. Finally, it is shown how the gaussian network construction may also be utilized in recursive identification algorithms with similar guarantees of stability and convergence.

1 Introduction

The current algorithms for stable adaptive control and recursive system identification can be seen as subsets of the more general learning problem. Each of these methods is attempting to sufficiently recover, in some manner, an unknown, possibly nonlinear function, f(x), of a set of input signals, x. In order that this problem be practical, something must be assumed about the nature of f, for example that it is continuous, or differentiable, or periodic, etc. Full-state, linear adaptive control and identification algorithms assume that f can be written as a linear combination of the input signals x_i, or in other words that the function f(x) lies in the span of the basis functions Y_i(x) = x_i. This intuition also allows the linear adaptive schemes to be extended to nonlinear systems for which a set of spanning basis functions is assumed known a priori [17]. In the more general case where f is perhaps only known to be continuous, such basis function expansions are still possible, but generally require an infinite number of Y_i(x). For example, the classical Weierstrass theorem states that a univariate f(x) can be represented as a linear combination of Y_i(x) = x^i, for i = 0, 1, 2, ...; the theorem extends naturally to n-dimensional input spaces. Of course, any application of these infinite dimensional expansions must acknowledge that a practical implementation will be capable of evaluating only a finite (possibly very large) number of the basis functions. The resulting expansions will thus be capable of only approximating the actual function f on a particular subset of the input space. In this context, it is necessary to ensure that the infinite expansion converges uniformly on the chosen set as the number of basis functions increases, and to quantify the relation between the size of the set, the magnitude of the uniform approximation error, and the number of basis functions employed. Exploiting these ideas for identification and control is not new, see especially [1, 19], and more recently [9]; however, such ideas have not generally taken hold in the adaptive community, largely because of the computational expense (and resulting control loop delay) incurred in computing the required basis functions.
However, the recent results demonstrating the uniform approximation capabilities of neural networks, coupled with the extremely efficient computational paradigm offered by these models, especially if implemented in electronic or optical hardware, suggest that it is useful to re-examine the basis function approach, and to rigorously derive stable adaptation algorithms which explicitly account for the limited region of validity of the approximation, and for the unavoidable approximation errors even within this region. Any additional prior information about f can be used to select the set of basis functions employed. Under the assumption that f is infinitely differentiable, [13] shows that, for a class of least squares learning problems, gaussian radial functions are natural basis functions, and map directly onto a class of three-layered, feedforward neural networks [3, 5]. By assuming further that f can be well approximated by a function with compact spectral support, [14] demonstrates a procedure for explicitly designing a gaussian network whose uniform approximation errors can be guaranteed to be within any prespecified bound; this construction and the guaranteed bound are then used to derive stable, closed-loop adaptation laws for a class of nonlinear systems controlled by the resulting networks.

In this paper, we consider a number of extensions and clarifications to the ideas in [14, 15]. First, practical details of the construction are provided, explicitly illustrating the relation between the free parameters in the network design and the degree of approximation error on a particular set. Next, the original adaptive control algorithm is modified to permit incorporation of additional prior knowledge of the system dynamics, allowing the neurocontroller to operate in parallel with conventional fixed or adaptive controllers. Finally, it is shown how the gaussian network construction and adaptive control designs may also be utilized in recursive identification algorithms which provide similar guarantees of stability and convergence.

2 Gaussian Approximation

It is by now well known that three layer feedforward neural networks, i.e. networks with one hidden layer of nonlinear nodes, can uniformly approximate continuous functions over compact subsets of their domains, e.g. [4, 6, 5]. This holds both for networks whose hidden nodes output smoothly saturating functions of their argument, such as the standard "sigmoid" operating on the dot product of the input signals and feed-in weights, as well as for "radial basis function" nodes, which output nonlinear functions of the euclidean distance from the network inputs to the corresponding set of feed-in weights. The approximation implemented by such three layered networks can be represented mathematically as:

    f_A(x) = Σ_{i=1}^{N} c_i g_i(x; ξ_i)    (2.1)

where g_i is the nonlinear function implemented by node i, the vector ξ_i represents the input weights (or "center" in the radial basis function literature) of node i, and c_i represents the output weight for that node. The above referenced theorems thus state that, given a network (2.1), with the g_i chosen from a large class of "neural" models, and a desired tolerance level ε_f, the inequality |f(x) − f_A(x)| ≤ ε_f holds for all x in a chosen set A for some (unspecified) values of the free parameters N, c_i, and ξ_i. As such they are existence theorems, yielding no information about the number of nodes or the required values of the feed-in or feed-out weights. Many heuristics have been proposed for selecting these parameters, usually involving the size and distribution of the set of examples used to train the network. A key metric in evaluating the performance of these heuristics is the extent to which the network can "generalize" from the examples in the training set, which is really a method of assessing the ε_f provided by the chosen algorithm. As will be demonstrated below, however, in order to derive adaptation laws which are guaranteed stable and convergent, it is necessary to ensure that the network structure is capable of being tuned to achieve a prespecified ε_f; good generalization is not just desirable, it is essential when using networks for estimation and control. Further, the points in the "training set" of the neurocontroller will be the continuously evolving values of the outputs of a dynamic system; what subset of these should be used to design and train the network?

Intuitively, the answers to these questions will depend upon the degree of smoothness exhibited by the function. Smoothness can be quantified in several ways; from an engineering standpoint one of the most familiar and useful of these is through the spatial Fourier transform:

    F(ω) = (F f)(ω) = ∫_{R^n} f(x) e^{−j ω^T x} dx.

Note that f is being considered here as a multivariate function of the components of the vector x, and not as an implicit function of time, as is more usual in control and filtering applications. Assume that f is continuous and that F f exists and has compact support, Supp(F(ω)) ⊂ B, where B = [−β, β]^n. Under these conditions f can be exactly represented in terms of the values it assumes on a uniform, rectangular lattice over R^n [11]:

    f(x) = Σ_{I ∈ Z^n} f(ξ_I) g_c(x − ξ_I),    (2.2)

provided the lattice mesh satisfies Δ ≤ π/β. Here g_c is the canonical interpolating function, whose spectrum is Δ^n on B and zero elsewhere. The subscript I is an n-tuple of integers used to label the sampling points ξ_I, so that if I = {i_1, ..., i_n}, the corresponding lattice point is given by ξ_I = i_1 Δ e_1 + i_2 Δ e_2 + ... + i_n Δ e_n, where the e_i are the standard basis vectors of R^n.
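As a simple illustration of such an expansion, the sketch below evaluates a one-dimensional network of the form (2.1) whose nodes are radial gaussians centered on a uniform lattice, as in the construction developed below. All numerical values (target function, mesh, width, spillover) are placeholders chosen purely for illustration, and the output weights are obtained here by an ordinary batch least-squares fit rather than by the constructive or recursive procedures of this paper.

    # Minimal sketch: gaussian nodes on a uniform lattice, form (2.1).
    # Target, mesh, width, and spillover are hypothetical illustration values.
    import numpy as np

    f = lambda x: np.sin(2 * np.pi * x) * np.exp(-x**2)   # smooth, essentially bandlimited target
    Delta, sigma, l = 0.1, 0.2, 3                          # mesh, gaussian width, spillover (mesh units)
    A = (-1.0, 1.0)                                        # set on which uniform accuracy is wanted

    # lattice points xi_I covering Gamma = [A_min - l*Delta, A_max + l*Delta]
    xi = np.arange(A[0] - l * Delta, A[1] + l * Delta + 1e-9, Delta)
    g = lambda x: np.exp(-((x[:, None] - xi[None, :])**2) / sigma**2)   # node outputs g_sigma(x; xi_I)

    x_fit = np.linspace(*A, 400)
    c = np.linalg.lstsq(g(x_fit), f(x_fit), rcond=None)[0]  # output weights c_I (batch fit, illustration only)
    x_test = np.linspace(*A, 1000)
    print("max |f - f_A| on A:", np.max(np.abs(g(x_test) @ c - f(x_test))))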
Starting from this "cardinal" series (2.2), the development of [14] exploited the invariance (modulo scale factors) of the radial gaussian under Fourier transformation, independent of the dimension of the underlying space, together with the spatial low-pass structure of this function, to approximately recover f, uniformly on compact sets, with a truncated expansion in radial gaussians. That is, provided that f is continuous and sufficiently smooth (in a sense to be made precise below), by defining

    G_σ(ω) = exp(−σ² ||ω||² / 4)   and   g_σ(x; ξ_I) = (σ√π)^n (F^{−1} G_σ)(x − ξ_I) = exp(−||x − ξ_I||² / σ²),

a representation of the form

    f_A(x) = Σ_{I ∈ I_o} c(ξ_I) g_σ(x; ξ_I)    (2.3)

has the property that |f(x) − f_A(x)| ≤ ε_f at every point in a chosen set A, and ε_f can be made as small as required by the appropriate prior choice of network parameters, as detailed below. The summation in (2.3) is over the index set I_o = {I ∈ Z^n | ξ_I ∈ Γ}, where Γ = {x | ∃ y ∈ A, ||x − y||_∞ ≤ lΔ}; that is, Γ is the rectangle containing all lattice points within a distance ρ = lΔ of A, for some positive integer l. Note that since reconstruction of f is required only on the compact set A, many of the technical difficulties associated with the existence and invertibility of the Fourier transform can be avoided [15].

This radial gaussian expansion is clearly of the form (2.1), and hence also maps directly onto a class of three layer networks. There is one hidden layer node for every term in the series, and thus the number of nodes in the network is equal to the number of elements in the index set I_o. The current point, x, at which the approximation f_A(x) is required, forms the input to the network, and the weights connecting these inputs to each gaussian node encode the uniform rectangular lattice ξ_I. Each node computes a quadratic "activation energy" from its inputs and feed-in weights, given by r_I² = ||x − ξ_I||² = (x − ξ_I)^T (x − ξ_I), and outputs the gaussian, with variance σ², of this activation, exp(−r_I²/σ²). The network output is formed by weighting each nodal output by the corresponding c_I = c(ξ_I) and summing together all the weighted outputs; the result represents the approximation f_A(x). Note that each node has the same variance in this design. While gradient methods are usually employed for determining the centers (ξ_I) and variances (σ²) of the gaussians, the one-to-one correspondence between (2.3) and the network representation permits a precise identification of the contribution of each network component to the approximation, allowing specification of a constructive network design procedure. In particular, given an estimate of the spectral support of f and the size of the set A, each of the free parameters in the design can be independently selected to reduce the sources of approximation error and so achieve a prespecified ε_f.

Given a required ε_f, and the set, A, on which this approximation accuracy is required, the construction procedure starts by specifying the degree of smoothness which the unknown function f is assumed to exhibit. This is expressed both through an upper bound on the magnitude of the spectrum |F(ω)|, and through the parameter β, representing the size of the "essential support" of the spectrum of this function: components of F(ω) for ||ω||_∞ > β are assumed to contribute to f negligibly, in a sense to be quantified below, for all x ∈ A. The uniform approximation bound can be decomposed as ε_f = ε_1 + ε_2 + ε_3, where ε_1 represents the error introduced by approximating f using only frequencies of absolute value less than β, ε_2 is the error introduced into the cardinal series by the fact that the radial gaussian is not an ideal spatial low-pass filter, and ε_3 is the error introduced by truncating the cardinal series to a finite number of terms.

Each of these can be bounded in terms of the prior information and the network design parameters (σ, Δ, l):

    ε_1 ≥ (2π)^{−n} ∫_{B^c} |F(ω)| dω    (2.4)

    ε_2 ≥ α_2 (2π)^{−n} ∫_{C^c} exp(−σ²||ω||²/4) dω    (2.5)

    ε_3 ≥ α_3 Σ_{k∈K} 2^{k+1} (n choose k) ∫_{l}^{∞} λ^k exp(−λ²Δ²/σ²) dλ    (2.6)

where the index set K is K = {0 ≤ k ≤ n−1 | n−k is odd}. The first of these bounds is immediate from the definition of the Fourier transform; the second and third are derived in the appendix. Here B^c = R^n − B, C = [−π/Δ, π/Δ]^n, and the constants α_2 and α_3 are given by

    α_2 = Δ^n exp(nσ²β²/4) sup_{ω∈B} |F(ω)|   and   α_3 = (β/π)^n α_2.    (2.7)

The essential support radius, β, is assumed known sufficiently well that ε_1 is as small as required; ε_2 and ε_3 can then be upper bounded in terms of the remaining parameters in the network design. The mesh size Δ (≤ π/β) and the variance σ² can be chosen so as to reduce ε_2 as much as required. Clearly these values are coupled: 1/σ describes (roughly) the radius of the essential support of each gaussian in frequency, and to be an effective low-pass filter this support must be contained in C. A rule of thumb which yields good results for low dimensional networks is to couple the two choices directly, taking σ proportional to Δ, so that the frequency-domain width of each gaussian scales with the width of C. Finally, l can be chosen to minimize the truncation error, ε_3. Since l and A define the set Γ, this choice plus the choice of Δ together determine the number of nodes in the network and the distribution of their centers. For example, if A = [−m, m]², then Γ = [−(m + lΔ), (m + lΔ)]², and there would be a total of N = (2(m + lΔ)/Δ + 1)² nodes in the network, centered at ξ_{i1,i2} = [i_1 Δ, i_2 Δ]^T, for i_1, i_2 = −(m + lΔ)/Δ, ..., (m + lΔ)/Δ.

The required output-layer weights c_I could be found explicitly by evaluating at the lattice points ξ_I the function c(x) obtained (up to the constant absorbed from the inverse transform of G_σ) by inverse transforming C(ω) = Δ^n F(ω) G_σ^{−1}(ω) 1_B(ω), where 1_B is the characteristic function of the set B. However, by assumption only the smoothness constraint, β, and an upper bound on the magnitude of the spectrum are known about f; no prior information about the exact values of f or of its spectrum is assumed. Hence, the goal of the training algorithms examined below will be to recursively adjust estimates, ĉ_I, of the output weights in an attempt to match their correct values c_I.

Any additional information known about f can be used to sharpen the above bounds or to reduce the size of the required representation. If information about the shape and orientation of the spectrum is known, the sampling lattice and mesh size can be adjusted accordingly. For example, if f is known to have a spectral support which is much smaller in one direction in R^n, the mesh size for the corresponding direction in the sampling grid on R^n can be taken proportionally larger, reducing the number of nodes required. If B is a ball in R^n instead of a cube, the required sampling density (and hence the number of lattice points contained in any compact Γ) can similarly be reduced by employing a hexagonal, as opposed to rectangular, sampling scheme [11, 7].
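For the two-dimensional design example above, the node bookkeeping can be sketched as follows; the values of m, Δ, and l are illustrative placeholders, not design values from the paper.

    # Node bookkeeping for A = [-m, m]^2 with spillover l*Delta (illustrative numbers).
    import numpy as np

    m, Delta, l = 1.0, 0.25, 2
    half = int(round((m + l * Delta) / Delta))          # lattice index range per axis
    idx = np.arange(-half, half + 1)
    centers = np.array([[i1 * Delta, i2 * Delta] for i1 in idx for i2 in idx])
    print("nodes per axis:", 2 * half + 1)
    print("total nodes N :", len(centers))              # equals (2(m + l*Delta)/Delta + 1)^2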
3 Adaptive Control

Consider developing an adaptive control algorithm for a class of dynamic systems whose equations of motion can be expressed in the canonical form:

    x^(n)(t) + f(x(t), ẋ(t), ..., x^(n−1)(t)) = b u(t)    (3.1)

where u(t) is the control input, f is an unknown linear or nonlinear function, and, for simplicity in this paper, we take b = 1; [15] extends the development to the general case, including state-dependent control gains. The control objective is to force the plant state vector, x = [x, ẋ, ..., x^(n−1)]^T, assumed available for measurement, to follow a specified desired trajectory, x_d = [x_d, ẋ_d, ..., x_d^(n−1)]^T. Defining the tracking error vector, x̃(t) = x(t) − x_d(t), the problem is thus to design a control law u(t) which ensures that x̃(t) → 0 as t → ∞.

One approach to this problem is to take as the control input

    u(t) = u_p(t) + u_a(t) + x_d^(n)(t)

where u_p is a "PD" type control component consisting of a linear combination of tracking error states, u_a(t) is an adaptive control law which will attempt to recover, and cancel, the unknown function f(x), and x_d^(n) is a feedforward of the nth derivative of the desired trajectory. With use of this control law, (3.1) becomes:

    x̃^(n)(t) = −k^T x̃(t) + f̃(t)

where k is a vector of gains chosen to correspond to the coefficients of a Hurwitz polynomial (that is, a polynomial whose roots lie strictly in the open left half of the complex plane), and f̃(t) is the mistuning of the adaptive control component, f̃(t) = u_a(t) − f(x(t)). If it is known a priori (as, e.g., in [17]) that f(x) lies in the span of a set of known (linear or nonlinear) basis functions Y_i(x), i.e.:

    f(x) = Σ_{i=1}^{N} a_i Y_i(x)    (3.2)

then the adaptive controller can be chosen as:

    u_a(t) = Σ_{i=1}^{N} â_i(t) Y_i(x(t))

where â_i(t) is an approximation to the ith coefficient in the expansion of f. The controller mistuning can then be expressed as:

    f̃(t) = Σ_{i=1}^{N} ã_i(t) Y_i(x(t))

where ã_i(t) = â_i(t) − a_i, and the tracking problem can be solved if a law for stably adjusting the â_i(t) can be created which guarantees x̃(t) → 0. The design of such a law is possible for this system if a measure of the tracking error can be found which is sufficiently correlated, in some appropriate sense, with the approximation errors ã_i(t). Given the structure of the error dynamics, and the linear dependence of the control mistuning f̃(t) on the parameter mistuning ã_i(t), this correlation condition is satisfied by any linear combination of the error states, s(t) = Λ^T x̃(t), provided that the transfer function relating f̃(t) to s(t) is strictly positive real [10, 18]. Of course, this analysis holds unchanged if each Y_i is the ith coordinate projection function, i.e. Y_i(x) = x_i, in which case these equations form a linear, full state feedback adaptive control algorithm.

Note that (3.2) has the same structure as the expansion (2.3), provided the input weights (lattice points) are held fixed, reflecting the confidence in the smoothness estimate, β. Further, if the desired trajectories are contained in a compact subset, A, of the state space, in principle the tracking problem posed by system (3.1) could be solved by a control law capable of reconstructing the unknown function f only on A. Thus a gaussian network with fixed centers and adjustable output weights could in principle be used as the adaptive component in the above adaptive tracking architecture. The small approximation error inevitably introduced could be viewed as a uniformly bounded disturbance driving the plant dynamics, and standard results of deadzone adaptation [10, 12] used to derive stable adaptive laws.
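A minimal sketch of this classical, linearly parameterized case may help fix the structure u = u_p + u_a + x_d^(n). The plant order, basis functions, and gains below are hypothetical, and the admissibility of such a gradient law rests on the positive-real condition just discussed.

    # Sketch of the linearly parameterized adaptive component (3.2) for n = 2.
    # Basis functions, gains, and trajectory are hypothetical illustration values.
    import numpy as np

    Y = lambda x: np.array([x[0], x[1], np.sin(x[0])])    # assumed known basis functions Y_i(x)
    a_hat = np.zeros(3)                                   # coefficient estimates a_hat_i
    lam, k_p, k_a = 2.0, 4.0, 10.0

    def control(x, xd, xd_dot, xd_ddot):
        x_tilde = x - np.array([xd, xd_dot])              # tracking error vector
        s = lam * x_tilde[0] + x_tilde[1]                 # s = Lambda^T x_tilde, Lambda = [lam, 1]
        u_p = -k_p * s                                    # "PD"-type negative feedback
        u_a = a_hat @ Y(x)                                # adaptive component
        return u_p + u_a + xd_ddot, s                     # u = u_p + u_a + x_d^(n)

    def adapt(x, s, dt):
        a_hat[:] += dt * (-k_a * s * Y(x))                # gradient adjustment of the a_hat_i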

However, it is possible that, using such a scheme, the tracking errors would become sufficiently large during the initial stages of adaptation that the plant state would leave any prespecified A. The network design procedure outlined in the previous section provides no information about the approximation errors outside the chosen set. The possibly rapid degradation of this approximation outside a given subset of the state space thus presents an additional complexity in the design of a globally stabilizing controller. Simple deadzone adaptation will not suffice to ensure stability of the system under these conditions.

This problem can easily be overcome, however, by including in the control law an additional component which takes over from the adaptive component as its approximation ability begins to degrade, and forces the plant state back into A. This component takes the form of a sliding controller with a boundary layer [16], and its design is straightforward given upper bounds on the magnitudes of the functions being approximated. The action of this sliding term will be shown to be sufficient to reduce the tracking error during the times when the plant state lies outside of A, while the boundary layer prevents control chattering in these regions. Previous algorithms [14] designed this sliding component to provide guarantees that the state remain within a predesignated subset of R^n. This guarantee, however, also required constraints on the desired trajectories which could be commanded. This approach can be avoided by instead introducing a mechanism for taking the network off-line whenever the state moves outside the region on which the network approximation is valid. Similarly, it is beneficial to turn off the sliding controller when the approximation implemented by the adaptive controller is accurate, since the sliding controller relies on crude upper bounds of the plant nonlinearities to reduce tracking errors and is hence likely to require large amounts of control authority when active.

The complete control law thus has a dual character, acting as either a sliding or as an adaptive controller depending upon the instantaneous location of the plant state vector. However, discontinuously switching between adaptive and sliding components creates the possibility that the controller might chatter along the boundary between the two types of operation. To prevent this, pure sliding operation is restricted to the exterior of a slightly larger set A_δ, containing A, while pure adaptive operation is restricted to the interior of the set A; in between, in the region A_δ − A, the two modes are effectively blended using a continuous modulation function, which controls the degree to which each component contributes to the complete control law. The resulting controller smoothly transitions between adaptive and nonadaptive control strategies and, as will be shown below, is capable of globally stabilizing the systems under consideration.

3.1 Controller Overview

The structure of the conventional adaptive solutions to the tracking problem, combined with the above observations, suggests a control law with the general structure

    u(t) = u_p(t) + (1 − m(t)) u_a(t) + m(t) u_sl(t).

Here u_p(t) is a negative feedback term consisting of a weighted combination of both the measured tracking error states and a tracking metric s(t), to be defined below. The term u_sl(t) represents the sliding component of the control law, and similarly the adaptive component is represented by u_a(t).
The function m(t) = m(x(t)) is a continuous, state dependent modulation which allows the controller to smoothly transition between sliding and adaptive modes of operation, chosen so that m(x) = 0 on A, m(x) = 1 on A_δ^c, and 0 < m(x) < 1 on A_δ − A. Without loss of generality, A can be chosen so as to correspond to a unit ball with respect to an appropriate weighted norm function. Thus, for example,

    A = {x | ||x − x_0||_{p,w} ≤ 1}   and   A_δ = {x | ||x − x_0||_{p,w} ≤ 1 + δ}.

Here δ is a positive constant representing the width of the transition region, x_0 fixes the absolute location of the sets in the state space of the plant, and ||x||_{p,w} is a weighted p-norm of the form

    ||x||_{p,w} = ( Σ_{i=1}^{n} (|x_i| / w_i)^p )^{1/p},

or, in the limiting case p = ∞, ||x||_{∞,w} = max_{i=1,...,n} (|x_i| / w_i), for a set of strictly positive weights {w_i}_{i=1}^{n}. In R², for example, with p = 2 the sets A and A_δ are ellipses, and with p = ∞ these sets are rectangles. With these definitions, the modulation function can be taken as

    m(x(t)) = max(0, sat((r(t) − 1) / δ)),    (3.3)

where r(t) = ||x(t) − x_0||_{p,w}, and sat is the saturation function (sat(y) = y if |y| < 1, and sat(y) = sign(y) otherwise). When r(t) ≤ 1, meaning that x ∈ A, the output of the saturation function is nonpositive, hence the maximum which defines m(t) is zero, as desired. When r(t) ≥ 1 + δ, corresponding to x ∈ A_δ^c, the saturation function is unity, hence m(t) = 1, again as desired. In between, for x ∈ A_δ − A, it is easy to check that 0 < m(x) < 1.

The adaptive control component, u_a(t), for this system implements an approximation, f̂_A(x), to the plant nonlinearity f(x), which is realized as the output of the radial gaussian network described in Section 2, whose inputs are the measured values of the plant states. The variance and mesh size of this network are designed considering assumed upper bounds on the spectral properties of the plant nonlinearity f(x) and the required uniform approximation error (whose relation to the steady state tracking errors will be made explicit below), bounded by expressions (2.4)-(2.6). Hence:

    u_a(t) = f̂_A(x(t)) = Σ_{I ∈ I_o} ĉ_I(t) g_σ(x(t); ξ_I).    (3.4)

The input weights, ξ_I, which encode the sampling mesh, are fixed in this architecture, while the output weights, ĉ_I(t), are to be adjusted to attempt to match their tuned values, c_I.

A useful tracking error metric for both the sliding and adaptive control subsystems is defined by s(t) = (d/dt + λ)^{n−1} x̃(t) with λ > 0, which can be rewritten as s(t) = Λ^T x̃(t) with Λ^T = [λ^{n−1}, (n−1)λ^{n−2}, ..., 1]. The equation s(t) = 0 defines a time-varying hyperplane in R^n on which the tracking error vector decays exponentially to zero, so that perfect tracking can be asymptotically obtained by maintaining this condition [20, 16]. Further, if the magnitude of s can be shown to be bounded by a constant Φ, the actual tracking errors can be shown [16] to be asymptotically bounded by:

    |x̃^(i)(t)| ≤ 2^i λ^{i−n+1} Φ,   i = 0, ..., n − 1.    (3.5)

Using the metric s(t) and the saturation function defined above, the sliding control component can be represented as

    u_sl(t) = −k_sl(x(t)) sat(s(t)/Φ)    (3.6)

where k_sl(x) is the gain of the controller, and Φ is the boundary layer width; the exact values required for each of these parameters will be specified in the specific design which follows.
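Collecting the pieces just defined, a direct transcription of the modulation function (3.3), the metric s(t), and the sliding term (3.6) might look as follows; the weights, gains, and set sizes are illustrative assumptions.

    # Sketch of the modulation (3.3), metric s(t), and sliding term (3.6); n = 2 here.
    import numpy as np

    w = np.array([1.0, 2.0])          # weights of the weighted p-norm
    x0 = np.zeros(2)                  # center of the sets A and A_delta
    delta = 0.2                       # width of the transition region
    lam, Phi = 2.0, 0.05              # s(t) pole and boundary layer width

    sat = lambda y: np.clip(y, -1.0, 1.0)

    def m_fun(x, p=np.inf):
        r = np.linalg.norm((x - x0) / w, ord=p)          # r(t) = ||x - x0||_{p,w}
        return max(0.0, sat((r - 1.0) / delta))          # modulation (3.3)

    def s_metric(x_tilde):
        return lam * x_tilde[0] + x_tilde[1]             # s = Lambda^T x_tilde, Lambda = [lam, 1]

    def u_sliding(s, k_sl):
        return -k_sl * sat(s / Phi)                      # sliding component (3.6)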
The deadzones required in the algorithm for updating the output weights of the gaussian network reflect the fact that no useful information can be gained about the quality of the current approximations when the tracking errors are less than a threshold determined by the bounds on the approximation errors, e.g. ε_f. However, as originally proposed in [12], deadzone adaptation requires discontinuously starting and stopping the parametric adjustment mechanism according to the magnitude of the error signal. These discontinuities can be eliminated using the metric s(t), as shown in [17], by introducing the continuous function s_Δ, defined as

    s_Δ(t) = s(t) − Φ sat(s(t)/Φ).    (3.7)

3.2 Controller Design and Stability

Assume that a prior upper bound M_0(x) is known on the magnitude of f for points outside of the set A, i.e. |f(x)| ≤ M_0(x) when x ∈ A^c.

Use of a given control law u(t) in the system (3.1) results in the error metric having the time derivative:

    ṡ(t) = a_r(t) − f(x) + u(t),    (3.8)

where a_r(t) = Λ_v^T x̃(t) − x_d^(n)(t), with Λ_v^T = [0, λ^{n−1}, (n−1)λ^{n−2}, ...], and x_d^(n) is the nth derivative of the desired trajectory. Having chosen a set A_δ containing A as outlined above, let f_A(x) be a radial gaussian approximation to f(x) designed such that |f − f_A| ≤ ε_f uniformly on the set A_δ. In terms of this approximation, (3.8) can be rewritten as:

    ṡ(t) = a_r(t) − f_A(x) + δ(x) + u(t),

where δ(x) = f_A(x) − f(x) satisfies |δ| ≤ ε_f on A_δ. This expression, and the considerations of the previous section, suggest use of the control law:

    u(t) = −k_D s(t) − a_r(t) + (1 − m(t)) f̂_A(x(t)) + m(t) u_sl(t)    (3.9)

where k_D is a constant feedback gain. The sliding component of this control law is given by (3.6) with state dependent gain k_sl(x(t)) = M_0(x(t)) + ε_f, and the adaptive component f̂_A(x(t)) is realized as the gaussian network described by equation (3.4) above, whose tuned output implements the approximation f_A. The modulation function which blends the two controller modes is specified by equation (3.3). If the parameters in f̂_A(x(t)) are updated as:

    dĉ_I(t)/dt = −k_a (1 − m(t)) s_Δ(t) g_σ(x(t); ξ_I),    (3.10)

where k_a is a positive constant determining the adaptation rate, and if Φ is chosen so that Φ ≥ ε_f / k_D in both the sliding controller and the calculation (3.7) of s_Δ, then all states in the adaptive system will remain bounded, and moreover the tracking errors will asymptotically converge to the neighborhood of zero given by (3.5).

To prove this assertion consider, similarly to [17], the Lyapunov function candidate:

    V = (1/2) ( s_Δ² + (1/k_a) Σ_{I ∈ I_o} c̃_I² )    (3.11)

where c̃_I(t) = ĉ_I(t) − c_I. While ṡ_Δ is not defined for |s| = Φ, (d/dt)s_Δ² is well defined and continuous everywhere and can be written (d/dt)s_Δ² = 2 s_Δ ṡ. Hence, using (3.10), V̇(t) = 0 when |s| ≤ Φ. When |s| > Φ, since s_Δ sat(s/Φ) = |s_Δ|, one has

    V̇ = −(k_D s_Δ² + |s_Δ| Φ k_D) − s_Δ f + s_Δ (1 − m) f̂_A − |s_Δ| m k_sl + k_a^{−1} Σ_{I ∈ I_o} c̃_I dĉ_I/dt.

There are now two possibilities:

(i.) x ∈ A_δ^c: Here m = 1, so that the output of the adaptive controller, (1 − m) f̂_A, is identically zero, and from (3.10) the time derivatives of the parameter estimates similarly vanish, hence:

    V̇ = −(k_D s_Δ² + |s_Δ| Φ k_D) − s_Δ f − |s_Δ| k_sl ≤ −k_D s_Δ² + |s_Δ| (|f| − k_sl).

Since the sliding controller gains have been chosen so that k_sl ≥ |f| for all x ∈ A_δ^c, one obtains V̇ ≤ −k_D s_Δ² when x ∈ A_δ^c.

(ii.) x ∈ A_δ: Here 0 ≤ m ≤ 1, and rewriting f as f = (1 − m) f_A + m f_A − δ and using (3.10), together with the fact that |f_A| ≤ |f| + |δ|, yields:

    V̇ ≤ −k_D s_Δ² + |s_Δ| (|δ| − Φ k_D) + |s_Δ| m (|f| + |δ| − k_sl).    (3.12)

Since |δ| ≤ ε_f by construction when x ∈ A_δ, and Φ has been chosen so that Φ ≥ ε_f / k_D, the first term in parentheses is less than or equal to zero. Similarly, the choice of sliding controller gains shows that, with this bound on |δ|, k_sl ≥ |f| + |δ|, so the second term in parentheses is also nonpositive. Thus, V̇ ≤ −k_D s_Δ² when x ∈ A_δ.

In summary, the above considerations prove that V̇ ≤ −k_D s_Δ² for all t ≥ 0, and hence from (3.11), if s_Δ and all the c̃_I are bounded at time t = 0, they remain bounded for all t ≥ 0. It remains to show that s_Δ → 0 as t → ∞. One can easily establish the uniform boundedness of ṡ_Δ(t); an application of Barbalat's lemma then establishes convergence of s_Δ to 0 [15]. Hence the inequality |s(t)| ≤ Φ holds asymptotically, and the asymptotic bounds on the individual tracking errors follow using (3.5).
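As an illustration of how the pieces fit together, the following sketch simulates the complete scheme for the simplest case n = 1, so that s = x − x_d and a_r = −ẋ_d. The plant nonlinearity, gains, lattice, and set sizes are hypothetical placeholders; the simulations reported in [15] remain the reference results for this design.

    # Closed-loop sketch of (3.3), (3.6), (3.9), (3.10) for n = 1: x' = -f(x) + u.
    # All numerical values are illustrative assumptions, not taken from the paper.
    import numpy as np

    f_true = lambda x: 2.0 * np.sin(3.0 * x) * np.exp(-0.1 * x**2)  # unknown to the controller
    M0 = lambda x: 3.0                       # assumed prior bound |f| <= M0 outside A

    Delta, sigma = 0.25, 0.5                 # mesh and gaussian width
    xi = np.arange(-2.0, 2.0 + 1e-9, Delta)  # fixed centers covering A_delta plus spillover
    c_hat = np.zeros_like(xi)                # adjustable output weights, c_hat_I(0) = 0
    g = lambda x: np.exp(-(x - xi)**2 / sigma**2)

    kD, ka, Phi = 5.0, 20.0, 0.02            # feedback gain, adaptation gain, boundary layer
    x0, w, delta = 0.0, 1.5, 0.2             # A = {|x - x0|/w <= 1}, transition width delta
    sat = lambda y: np.clip(y, -1.0, 1.0)
    m_fun = lambda x: max(0.0, sat((abs(x - x0) / w - 1.0) / delta))   # modulation (3.3)

    dt, T = 1e-3, 20.0
    x = 1.0                                  # initial plant state
    for k in range(int(T / dt)):
        t = k * dt
        xd, xd_dot = np.sin(t), np.cos(t)    # desired trajectory contained in A
        s = x - xd                           # tracking metric (n = 1)
        s_Delta = s - Phi * sat(s / Phi)     # deadzone-modified metric (3.7)
        m = m_fun(x)
        f_hat = c_hat @ g(x)                 # adaptive component (3.4)
        u_sl = -(M0(x) + 0.05) * sat(s / Phi)            # sliding component (3.6), k_sl = M0 + eps_f
        u = -kD * s + xd_dot + (1.0 - m) * f_hat + m * u_sl    # control law (3.9)
        c_hat += dt * (-ka * (1.0 - m) * s_Delta * g(x))       # adaptation law (3.10)
        x += dt * (-f_true(x) + u)           # plant (3.1) with n = 1, b = 1
    print("final |s|:", abs(x - np.sin(T)))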
Since the desired trajectories are contained completely within A, and since the tracking errors converge to a neighborhood of zero, the above proof also shows that eventually all the plant trajectories will converge to a set which is either within a small neighborhood of the set A, or else is completely contained inside it. In particular, the sliding subsystem will asymptotically be used less and less, and, since the negative feedback terms decay to zero as the tracking error does, the control input will eventually be dominated by the outputs of the neural network, regardless of the initial conditions on the plant or network output weights. Simulation studies have been carried out for systems of the form (3.1) with a variety of f(x) and are reported in [15]; all show the predicted stability and convergence properties. [15] also extends the above derivation to include systems with state-dependent control gains.

3.3 Implementation Considerations

The above developments by no means suggest that more conventional adaptive architectures should be discarded. Every piece of information available to the designer should be employed in constructing a control system, and in most cases good estimates of basis functions driving the plant dynamics may be available. A neural controller can then be employed in parallel with these to account for uncertainties which cannot be so easily parameterized over a particular operating range. In fact, there may be several different regions where the dynamic structure is more complicated than the elementary model captured by the assumed basis functions would predict, and the network can be designed to approximate the difference over each of these regions; that is, the regions described above as A need not be topologically connected in the state space describing the motion.

Adding a set of known basis functions in parallel with the network produces a controller which still has an adaptive component mathematically equivalent to (3.2), but now with N = N_nn + N_bf, where N_nn is the number of gaussian nodes in the network approximation and N_bf is the number of basis functions in the assumed dynamic structure. Even in the absence of knowledge of a particular set of basis functions, rather than have the network attempt to synthesize affine components of f directly from gaussians, the terms Y_i(x) = x_i, and a bias term Y_0(x) = 1, can be directly added to the adaptive component of the control law in practical applications. The relative weights of these known basis functions are recursively adjusted by (3.10), replacing g_σ with the corresponding Y_i. However, since these additional basis functions are generally assumed to be globally exact, the modulation is omitted in computing their contribution to the control law and when adjusting their relative weights. Outside the set A, the sliding component of the controller then needs only to offset the incremental difference between the output of the known basis functions and the effects of the unmodeled dynamics which the network attempts to eliminate when the state is within A. Adding affine terms to the representation can be effected by adding direct connections in the original network between the input layer and the output node, plus a constant bias term on the output. Incorporating a set of known basis functions can be pictured as adding a second three-layered network in parallel with the original gaussian network; each hidden node of this new network is connected to each input with unity weight and computes the nonlinear transform Y_i(x) of the incoming signals.
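A sketch of the resulting augmented adaptive component, with modulated gaussian nodes operating in parallel with unmodulated affine and bias terms, is given below; the helper names and dimensions are hypothetical.

    # Sketch of the augmented regressor of Section 3.3 (hypothetical helper names).
    import numpy as np

    def adaptive_component(x, m, centers, sigma, c_hat, a_hat):
        """x: state vector; m: modulation m(x); c_hat, a_hat: adjustable weights."""
        g = np.exp(-np.sum((x - centers)**2, axis=1) / sigma**2)   # gaussian regressors
        Y = np.concatenate(([1.0], x))                              # bias + affine terms
        return (1.0 - m) * (c_hat @ g) + (a_hat @ Y)

    def update_weights(x, m, s_Delta, centers, sigma, c_hat, a_hat, ka, dt):
        g = np.exp(-np.sum((x - centers)**2, axis=1) / sigma**2)
        Y = np.concatenate(([1.0], x))
        c_hat += dt * (-ka * (1.0 - m) * s_Delta * g)   # modulated update (3.10)
        a_hat += dt * (-ka * s_Delta * Y)               # unmodulated update for global terms
        return c_hat, a_hat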
The advantages of incorporating prior known basis functions are clear: the residual (unmodeled) components of the "true" nonlinear function influencing the dynamics are likely to be either very small in magnitude, or very restricted in their domain of influence. In either case, the gaussian network required to approximate these unmodeled components should be very much smaller than the network required if no prior information were available.

Gaussian network approximation, or more general basis function approximation techniques, are thus seen as methods for augmenting existing adaptive control schemes. The networks can be used to characterize elements of the plant dynamics for which explicit models do not exist, while the known dynamic structure can be directly exploited. A good example of this idea can be found in robotic applications, for which the basis functions governing the motion of the joints as a function of the applied torques are known with great accuracy, and for which stable adaptive schemes exploiting this knowledge are well known. What is much more difficult to accurately characterize in these applications is the impact of Coulomb friction forces, which are most noticeable only at very low velocities. A neural network of the type proposed could be designed to estimate these friction forces in a very small neighborhood of the zero joint velocity region and operate alongside a conventional adaptive controller. Once again, the sliding subsystem in this case need offset only the incremental difference between the output of the known basis functions and the neural approximation.

The above algorithm lends itself immediately to implementation on parallel electronic or optical hardware; however, until such devices are readily available, it is useful to consider modifications which render the algorithm practical if implemented in software on a conventional serial microprocessor. In order to reduce the number of computations required at each time step, the output of each gaussian may be truncated to zero when the distance r_I is greater than a few multiples of the width σ. This will, of course, introduce another source of approximation error, but this too can be uniformly bounded and related to the (gaussian) truncation radius using frequency domain arguments. Similarly, to avoid the need to reserve huge amounts of storage at the outset, each node can be instantiated only when the state enters the support of the truncated gaussian. The set Γ is thus constructed on-line, and would resemble an n dimensional "tube" surrounding the trajectory of the plant through its state space.

4 Discrete Recursive Identification

The design procedure outlined in the previous sections produces networks which form a linear regression model in the parameters ĉ_I, implying that standard system identification techniques can be used to develop models of processes whose input-output time history is available. This section provides a brief sketch and an example of this idea. While the above ideas can be directly applied to produce recursive identification algorithms for physical, continuous time, nonlinear dynamic systems, traditionally identification algorithms have considered discrete time processes, and this section conforms to that convention; the discrete nature of the parameter t is emphasized in the development by using a square bracket notation, e.g. y[t] instead of y(t). As opposed to iterating over, or "batch processing", a complete input-output time history, the form of the identifier considered here is recursive; the network operates alongside the process as it evolves, continuously updating its model in an attempt to drive the prediction error to zero. To this end, assume that the process model has the form:

    y[t] = f(y[t−1], ..., y[t−N], u[t], ..., u[t−M+1])

where y[t] is the process output at time t, u[t] is the process input at time t, and upper bounds on N and M are known. It is assumed also that both y[t] and u[t] are available for measurement for every t.
As above, given an estimate of the spatial bandwidth of f in terms of its N + M arguments, a gaussian network can be constructed sufficient to uniformly approximate f on some chosen set A. For simplicity, assume that prior bounds on the range of the signals y[t] and u[t] are known, so that an A can be chosen which will encompass the entire range of signals encountered during operation of the identifier. Note that this can be a reasonable assumption in an identification model, where the estimates ŷ[t] do not influence the actual process y[t]. In the adaptive control situation considered above, however, the estimates of f directly influence the evolution of the states x, necessitating the more complex algorithm and analysis.

For consistency with the notation of the previous section, collect the network inputs into the single vector x[t] = [y[t−1], ..., y[t−N], u[t], ..., u[t−M+1]]; the network prediction of the output at time t can then be written as

    ŷ[t] = Σ_{I ∈ I_o} ĉ_I[t−1] g_σ(x[t]; ξ_I).

Defining the prediction error as e[t] = ŷ[t] − y[t], one has

    e[t] = Σ_{I ∈ I_o} c̃_I[t−1] g_σ(x[t]; ξ_I) + δ[t]

where δ[t] = f_A(x[t]) − f(x[t]) satisfies |δ[t]| ≤ ε_f everywhere on A, and hence for all t. Similarly to (3.7), define the adaptation signal to be e_d[t] = e[t] − d sat(e[t]/d), where d > 0 is the deadzone width, and take as the adaptation law for the output weights

    ĉ_I[t] = ĉ_I[t−1] − k_a e_d[t] g_σ(x[t]; ξ_I),    (4.1)

where k_a is a strictly positive adaptation gain. If the deadzone is then chosen so that d > ε_f, and the adaptation gain is taken such that

    0 < 2 − k_a sup_{x ∈ A} Σ_{I ∈ I_o} g_σ²(x; ξ_I),    (4.2)

the parameter estimates will remain bounded and the prediction error will converge asymptotically to |e[t]| ≤ d as t → ∞.

To demonstrate the convergence of this algorithm, take as a Lyapunov function candidate V[t] = Σ_{I ∈ I_o} c̃_I²[t], so that, using the adaptation law (4.1), ΔV[t] = V[t] − V[t−1] can be expanded as

    ΔV[t] = k_a² e_d²[t] Σ_I g_σ²(x[t]; ξ_I) − 2 k_a e_d[t] Σ_I c̃_I[t−1] g_σ(x[t]; ξ_I).

But since Σ_I c̃_I[t−1] g_σ(x[t]; ξ_I) = e_d[t] + d sat(e[t]/d) − δ[t], substituting and re-arranging reveals

    ΔV[t] ≤ −k_a e_d²[t] ( 2 − k_a Σ_I g_σ²(x[t]; ξ_I) ) − 2 k_a |e_d[t]| (d − ε_f) ≤ −k_a e_d²[t] ( 2 − k_a Σ_I g_σ²(x[t]; ξ_I) ).

The parameter estimates are thus bounded; since by construction the regressors g_σ(x[t]; ξ_I) are uniformly bounded, one can also conclude that e_d[t] → 0 as t → ∞ (c.f. [2]). Note that the required bound on k_a depends only upon the number and centers of the nodes in the gaussian network and can easily be computed off-line. Alternately, the error signal can be normalized by the total magnitude of the regressors at each time step, in which case one requires only 2 − k_a > 0 [2].

If prior bounds are not known for the process inputs and outputs, the modulation function of Section 3 can be used to halt estimation and adaptation when the process leaves the set for which the network was designed. The parameter estimates will remain bounded in this case, but it is not possible to guarantee convergence of the prediction error.
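The identifier is summarized in the following sketch (hypothetical helper names); the gain check implements the off-line evaluation of the bound (4.2) over a grid of sample points in A.

    # Sketch of the recursive identifier (4.1)-(4.2) with a deadzone.
    import numpy as np

    class GaussianIdentifier:
        def __init__(self, centers, sigma, ka, deadzone):
            self.centers = np.asarray(centers, float).reshape(len(centers), -1)
            self.sigma, self.ka, self.d = sigma, ka, deadzone
            self.c_hat = np.zeros(self.centers.shape[0])

        def regressors(self, x):
            diff = self.centers - np.asarray(x, float).reshape(1, -1)
            return np.exp(-np.sum(diff**2, axis=1) / self.sigma**2)

        def gain_ok(self, grid):
            # check 0 < 2 - ka * sup_x sum_I g^2 over sample points in A, per (4.2)
            worst = max(np.sum(self.regressors(x)**2) for x in grid)
            return self.ka * worst < 2.0

        def step(self, x, y):
            g = self.regressors(x)
            e = self.c_hat @ g - y                          # prediction error e[t]
            e_d = e - self.d * np.clip(e / self.d, -1, 1)   # deadzone-modified error
            self.c_hat -= self.ka * e_d * g                 # update (4.1)
            return e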

Prediction of Chaotic Time Series

As an example of the above ideas, consider for simplicity a one dimensional autonomous process model, y[t] = f(y[t−1]). The identifier in this case has just the scalar input x[t] = y[t−1], leading to the gaussian network identification model

    ŷ[t] = f̂(y[t−1]) = Σ_{I ∈ I_o} ĉ_I[t−1] g_σ(x[t]; ξ_I).

To make the example concrete, assume the actual transition map f is spatially bandlimited with β = 100, and that the observations satisfy y[t] ∈ [0, 1] for all t. The above design procedure suggests an array of gaussian nodes with variance σ² = 10⁻⁴, with the centers arranged on a uniform lattice of mesh Δ = 10⁻³. Choosing a truncation radius of ρ = 0.5, the network consists of the nodes corresponding to the lattice points contained in Γ = [−0.5, 1.5], or a total of 2001 nodes. Using the upper bounds (2.4)-(2.6), these parameters conservatively yield ε_f < 10⁻³. Accordingly, the deadzone is taken as d = 10⁻³, and the adaptation gain is k_a = 0.1. The actual process was taken as the quadratic map

    y[t] = 3.7513 y[t−1] (1 − y[t−1]),

which is known to be chaotic for this choice of gain [8]. Note that if y[0] ∈ [0, 1], so also is y[t] for all t, agreeing with the choice of A. Figure 1 shows a typical time series for this system and the behavior of the prediction error, e[t], using this gaussian network. As expected, the prediction error eventually converges to the small deadzone.

Figure 1: Recursive identification. Top: chaotic time series. Bottom: gaussian network prediction errors.
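A self-contained sketch of this experiment is given below; to keep it small it uses a much coarser (hypothetical) lattice and deadzone than the 2001-node design described above, so only the qualitative behavior, the prediction error settling into the deadzone, should be expected to match.

    # Standalone sketch of the chaotic-series identification example (coarse lattice,
    # illustrative parameters; not the 2001-node design described in the text).
    import numpy as np

    xi = np.arange(-0.5, 1.5 + 1e-9, 0.02)        # centers on Gamma = [-0.5, 1.5], mesh 0.02
    sigma, ka, d = 0.04, 0.1, 1e-2                # width, adaptation gain, deadzone
    c_hat = np.zeros_like(xi)

    g = lambda x: np.exp(-(x - xi)**2 / sigma**2)
    y = 0.3                                        # initial condition of the chaotic map
    errors = []
    for t in range(20000):
        y_next = 3.7513 * y * (1.0 - y)            # actual (unknown) quadratic map
        gt = g(y)
        e = c_hat @ gt - y_next                    # prediction error e[t]
        e_d = e - d * np.clip(e / d, -1.0, 1.0)    # deadzone-modified error
        c_hat -= ka * e_d * gt                     # recursive update (4.1)
        errors.append(abs(e))
        y = y_next
    print("max |e| over final 1000 steps:", max(errors[-1000:]))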
Further Applications

The above general model applies directly to the general problem of learning mappings from examples by taking N = 0 and M = 1, so that y[t] = f(u[t]). The stability properties of the constant gain recursive identifier are in this case exactly the stability and convergence properties of the corresponding gradient learning algorithm for these gaussian networks (again, leaving the input weights fixed). Note that an arbitrary k_a cannot be chosen, but must be selected to satisfy (4.2), while the convergence guarantee depends upon stopping adaptation inside an appropriately defined deadzone in the prediction error.

The recursive identification paradigm, coupled with these constructive uniform approximation designs for gaussian networks, forms a powerful framework in which to evaluate many of the theoretical aspects of the general neural network learning problem. By casting the training algorithm as a dynamic system, one can take advantage of the many existing stability and convergence analysis techniques. In particular, this viewpoint gives insight into the ability of a network to "generalize" beyond the examples in a given training set. Even given the uniform approximation capabilities of a particular network, good generalization for the pattern learning problem cannot be guaranteed unless the examples in the training set form a persistently exciting sequence. Roughly, persistency of excitation is a measure of the extent to which matching the training examples requires the full span of all the basis functions, and not just a linear subspace [10, 2, 18]. Since most learning-from-examples algorithms iterate to convergence over the training set, persistency can be evaluated by re-indexing time in the above models modulo the number of examples in the training set, and evaluating the excitation properties of the resulting infinite sequence of regressors as t → ∞.

Finally, note that a great number of very complicated phenomena may be accurately characterized by the above model. Stock prices, sunspot cycles, and weather patterns all may fit these assumptions, and hence the neural identification technique suggested here can be considered another tool for attempting to predict these processes. As in the previous section, the network model can be readily employed in parallel with other, especially linear, basis functions.

5 Concluding Remarks

Under the assumption that the nonlinear functions driving the plant dynamics are sufficiently smooth, we have shown that a network of neurons possessing radial gaussian input-output characteristics can be designed which is capable of uniformly approximating these functions on a compact set. By placing the elements of a gaussian network in one-to-one correspondence with the elements of a modified cardinal series inspired by sampling theory, we have been able to assign a precise interpretation to each of the network parameters, and to demonstrate how these parameters are related to the properties of the function the network is required to approximate. By considering the feed-in weights to each node of this network as representing a fixed sampling mesh on R^n, and adjusting only the feed-out weights during operation, we have demonstrated that, for both the control and the identification algorithms, a suitable choice of adaptation law results in a globally stable closed loop system and tracking or prediction errors which converge to a neighborhood of zero. A unique feature of the control law is its ability to smoothly transition from adaptive to nonadaptive modes of operation, becoming a sliding controller in the regions of the state space where the network has poor approximating capability, a purely adaptive controller where the network approximating power is good, or a stabilizing blend of the two modes in an intermediate transition region. Further, prior knowledge about the structure of the plant dynamics can be directly incorporated into the algorithm, allowing the network to operate in parallel with conventional fixed or adaptive controllers, perhaps serving only to characterize those elements of the system dynamics for which no adequate physical model is known.

6 Appendix: Bounds on Error Sources

In this appendix are derived crude upper bounds for the components in the uniform approximation error to f on A: the error ε_2, arising from the departure of the radial gaussian from the ideal low pass filter, and the error ε_3, introduced by truncating the infinite series expansion (2.2). In practice, of course, only order of magnitude estimates of these bounds are needed; the required deadzones can then be taken a factor of five or ten greater.

6.1 Gaussian Low-pass Filtering

In [15], this contribution is shown to be bounded by

    ε_2 ≤ (2π)^{−n} ∫_{C^c} G_σ(ω) | Σ_{I ∈ Z^n} C(ω − ω_I) | dω,

where C(ω) = Δ^n F(ω) G_σ^{−1}(ω) 1_B(ω) and the sum runs over the points ω_I of the reciprocal lattice. Since Δ has been chosen so that Δ ≤ π/β, the translated copies C(ω − ω_I) have disjoint supports, and hence

    | Σ_{I ∈ Z^n} C(ω − ω_I) | ≤ sup_{ω ∈ B} Δ^n |F(ω) G_σ^{−1}(ω)| ≤ Δ^n exp(nσ²β²/4) sup_{ω ∈ B} |F(ω)| = α_2.

Inequality (2.5) follows directly.

6.2 Truncation Error ε_3

Consider a point x ∈ A taken to coincide with one of the points in the sampling lattice (the general case follows with only slight modifications [14]); i.e. x = ξ_J for some J ∈ I_o. Assume for simplicity that the truncation radius is taken as a multiple of the lattice mesh size, so that ρ = lΔ for some positive integer l. To begin the analysis, picture the sampling lattice as consisting of a series of nested hypercubes surrounding the point ξ_J. Each point in the lattice thus lies on the face of exactly one of these nested hypercubes. Introducing the notation

    R_m(y) = {x | ||x − y||_∞ = mΔ}

to denote the faces of the hypercube of radius mΔ centered at a point y ∈ R^n, and the index set

    I_m = {I ∈ Z^n | ξ_I ∈ R_m(ξ_J)}

to denote the indices of the nodes which lie on the hypercube faces a distance mΔ from the selected point ξ_J, allows the truncation error to be expressed as:

    ε_3 = | Σ_{I ∈ Z^n} c_I g_σ(ξ_J; ξ_I) − Σ_{I ∈ I_o} c_I g_σ(ξ_J; ξ_I) | = | Σ_{m=1}^{∞} Σ_{I ∈ I'_m} c_I g_σ(ξ_J; ξ_I) |    (6.3)
        ≤ Σ_{m=1}^{∞} Σ_{I ∈ I'_m} |c_I| g_σ(ξ_J; ξ_I),    (6.4)

where I'_m = I_m − I_o. By definition of the set Γ, I'_m = ∅ so long as m ≤ l. This simply states that the nodes in the sampling grid which lie on cubes of radius up to lΔ centered on the point ξ_J are by construction included in the network design and hence contribute to f_A. Thus for m ≤ l the inner summation vanishes identically. From its definition, C(ω) is bounded with compact support, hence |c(x)| is bounded for all x ∈ R^n by a finite constant:

    |c(x)| ≤ (2π)^{−n} Vol(B) sup_{ω ∈ B} |C(ω)| ≤ (β/π)^n α_2 = α_3,

where α_3 is as defined in (2.7) (the constant arising from the inverse transform of G_σ has been absorbed into the definition of c(x) in expansion (2.3)). Substituting,

    ε_3 ≤ α_3 Σ_{m=l+1}^{∞} Σ_{I ∈ I'_m} g_σ(ξ_J; ξ_I).

For m > l, the inner summation no longer vanishes identically for every J ∈ I_o, reflecting the outlying nodes omitted in the network construction. Since g_σ(ξ_J; ξ_I) is always positive, it is certainly true, although somewhat conservative, to write

    ε_3 ≤ α_3 Σ_{m=l+1}^{∞} Σ_{I ∈ I_m} g_σ(ξ_J; ξ_I).

The conservatism here arises since, for l + 1 ≤ m ≤ m_0, I'_m ⊂ I_m, where m_0 depends upon the diameter of A and the location of ξ_J within A; hence this inequality counts as omitted some of the nodes actually present in the network design. Noting that each hypercube of radius mΔ aligned with the sampling lattice includes (2m + 1)^n lattice points, where n is the dimension of x, it is easy to confirm that

    p(m) = Card(I_m) = (2m + 1)^n − (2m − 1)^n

is the number of terms in the inner summation for each m. Since min_{I ∈ I_m} ||ξ_J − ξ_I|| = mΔ, and g_σ is a monotone decreasing function of this norm, the error bound can be rewritten as

    ε_3 ≤ α_3 Σ_{m=l+1}^{∞} p(m) exp(−m²Δ²/σ²).

The gaussian decays to zero faster than any power of m, hence for l sufficiently large the general term of this series is positive and monotone decreasing. With a shift of coordinates, the series represents a Riemannian lower sum bounded above by the integral:

    ε_3 ≤ α_3 ∫_{l}^{∞} p(λ) exp(−λ²Δ²/σ²) dλ.

To evaluate p(λ) explicitly, use the binomial theorem to compute

    p(λ) = Σ_{k ∈ K} 2^{k+1} (n choose k) λ^k,

where the index set K is as defined in Section 2. Hence,

    ε_3 ≤ α_3 Σ_{k ∈ K} 2^{k+1} (n choose k) ∫_{l}^{∞} λ^k exp(−λ²Δ²/σ²) dλ,    (6.5)
and the convergent integrals can be evaluated by elementary methods.

To determine how large l must be taken for the transition to the integral representation to be valid, note that for σ positive, direct calculation indicates that the product λ^k exp(−λ²Δ²/σ²) becomes decreasing at λ = (σ/Δ)√(k/2). Thus the integrand is certainly monotone decreasing as long as l is taken larger than (σ/Δ)√((n−1)/2). In practice, this constraint is likely to be much smaller than the value of l required to achieve the desired minimization of the integral in (2.6).

References

[1] Aizerman, M. A., Braverman, E. M., and Rozonoer, L. I., "Potential functions technique and extrapolation in learning systems theory", Proc. 3rd IFAC Congress, 14G.1, London, 1966.
[2] Astrom, K. J., and Wittenmark, B., Adaptive Control, Addison-Wesley, Reading, MA, 1989.
[3] Broomhead, D. S., and Lowe, D., "Multivariable functional interpolation and adaptive networks", Complex Systems, 2, 321-355, 1988.
[4] Funahashi, K., "On the approximate realization of continuous mappings by neural networks", Neural Networks, 2, 183-192, 1989.

[5] Girosi, F., and Poggio, T., "Networks and the best approximation property", Artificial Intelligence Lab. Memo, No. 1164, MIT, Cambridge, MA, October 1989.
[6] Hornik, K., Stinchcombe, M., and White, H., "Multilayer feedforward networks are universal approximators", Neural Networks, 2, 359-366, 1989.
[7] Marks, R. J., Introduction to Shannon Sampling and Interpolation Theory, Springer-Verlag, New York, 1991.
[8] May, R. M., "Simple mathematical models with very complicated dynamics", Nature, 261, 459-467, 1976.
[9] Messner, W., Horowitz, R., Kao, W.-W., and Boals, M., "A New Adaptive Learning Rule", IEEE Trans. Autom. Control, 36, 2, 188-197, 1991.
[10] Narendra, K. S., and Annaswamy, A., Stable Adaptive Systems, Prentice Hall, Englewood Cliffs, NJ, 1989.
[11] Petersen, D. P., and Middleton, D., "Sampling and reconstruction of wave-number-limited functions in n-dimensional euclidean spaces", Information and Control, 5, 279-323, 1962.
[12] Peterson, B. B., and Narendra, K. S., "Bounded Error Adaptive Control", IEEE Trans. Autom. Control, 27, 6, 1161-1168, 1982.
[13] Poggio, T., and Girosi, F., "A theory of networks for approximation and learning", Artificial Intelligence Lab. Memo, No. 1140, MIT, Cambridge, MA, July 1989.
[14] Sanner, R., and Slotine, J.-J. E., "Gaussian networks for direct adaptive control", Proc. American Control Conference, Boston, vol. 3, June 1991.
[15] Sanner, R., and Slotine, J.-J. E., "Gaussian networks for direct adaptive control", NSL Report, MIT, Cambridge, MA, May 1991; submitted to IEEE Trans. Neural Networks.
[16] Slotine, J.-J. E., "Sliding controller design for nonlinear systems", Int. J. Control, 40, 2, 421-434, 1984.
[17] Slotine, J.-J. E., and Coetsee, J. A., "Adaptive sliding controller synthesis for nonlinear systems", Int. J. Control, 43, 6, 1631-1651, 1986.
[18] Slotine, J.-J. E., and Li, W., Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[19] Tsypkin, Y. A., Adaptation and Learning in Automatic Systems, Academic Press, New York, 1971.
[20] Utkin, V. I., "Variable structure systems with sliding mode: a survey", IEEE Trans. Autom. Control, 22, 2, 212-222, 1977.


More information

Calculus of Variations

Calculus of Variations 16.323 Lecture 5 Calculus of Variations Calculus of Variations Most books cover this material well, but Kirk Chapter 4 oes a particularly nice job. x(t) x* x*+ αδx (1) x*- αδx (1) αδx (1) αδx (1) t f t

More information

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013 Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing

More information

Optimal Variable-Structure Control Tracking of Spacecraft Maneuvers

Optimal Variable-Structure Control Tracking of Spacecraft Maneuvers Optimal Variable-Structure Control racking of Spacecraft Maneuvers John L. Crassiis 1 Srinivas R. Vaali F. Lanis Markley 3 Introuction In recent years, much effort has been evote to the close-loop esign

More information

Switching Time Optimization in Discretized Hybrid Dynamical Systems

Switching Time Optimization in Discretized Hybrid Dynamical Systems Switching Time Optimization in Discretize Hybri Dynamical Systems Kathrin Flaßkamp, To Murphey, an Sina Ober-Blöbaum Abstract Switching time optimization (STO) arises in systems that have a finite set

More information

Error Floors in LDPC Codes: Fast Simulation, Bounds and Hardware Emulation

Error Floors in LDPC Codes: Fast Simulation, Bounds and Hardware Emulation Error Floors in LDPC Coes: Fast Simulation, Bouns an Harware Emulation Pamela Lee, Lara Dolecek, Zhengya Zhang, Venkat Anantharam, Borivoje Nikolic, an Martin J. Wainwright EECS Department University of

More information

Least-Squares Regression on Sparse Spaces

Least-Squares Regression on Sparse Spaces Least-Squares Regression on Sparse Spaces Yuri Grinberg, Mahi Milani Far, Joelle Pineau School of Computer Science McGill University Montreal, Canaa {ygrinb,mmilan1,jpineau}@cs.mcgill.ca 1 Introuction

More information

Linear First-Order Equations

Linear First-Order Equations 5 Linear First-Orer Equations Linear first-orer ifferential equations make up another important class of ifferential equations that commonly arise in applications an are relatively easy to solve (in theory)

More information

Nonlinear Adaptive Ship Course Tracking Control Based on Backstepping and Nussbaum Gain

Nonlinear Adaptive Ship Course Tracking Control Based on Backstepping and Nussbaum Gain Nonlinear Aaptive Ship Course Tracking Control Base on Backstepping an Nussbaum Gain Jialu Du, Chen Guo Abstract A nonlinear aaptive controller combining aaptive Backstepping algorithm with Nussbaum gain

More information

Schrödinger s equation.

Schrödinger s equation. Physics 342 Lecture 5 Schröinger s Equation Lecture 5 Physics 342 Quantum Mechanics I Wenesay, February 3r, 2010 Toay we iscuss Schröinger s equation an show that it supports the basic interpretation of

More information

Rank, Trace, Determinant, Transpose an Inverse of a Matrix Let A be an n n square matrix: A = a11 a1 a1n a1 a an a n1 a n a nn nn where is the jth col

Rank, Trace, Determinant, Transpose an Inverse of a Matrix Let A be an n n square matrix: A = a11 a1 a1n a1 a an a n1 a n a nn nn where is the jth col Review of Linear Algebra { E18 Hanout Vectors an Their Inner Proucts Let X an Y be two vectors: an Their inner prouct is ene as X =[x1; ;x n ] T Y =[y1; ;y n ] T (X; Y ) = X T Y = x k y k k=1 where T an

More information

Calculus and optimization

Calculus and optimization Calculus an optimization These notes essentially correspon to mathematical appenix 2 in the text. 1 Functions of a single variable Now that we have e ne functions we turn our attention to calculus. A function

More information

Optimal Multingered Grasp Synthesis. J. A. Coelho Jr. R. A. Grupen. Department of Computer Science. approach. 2 Grasp controllers

Optimal Multingered Grasp Synthesis. J. A. Coelho Jr. R. A. Grupen. Department of Computer Science. approach. 2 Grasp controllers Optimal Multingere Grasp Synthesis J. A. Coelho Jr. R. A. Grupen Laboratory for Perceptual Robotics Department of Computer Science University of Massachusetts, Amherst, 3 Abstract This paper iscusses how

More information

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes

Leaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes Leaving Ranomness to Nature: -Dimensional Prouct Coes through the lens of Generalize-LDPC coes Tavor Baharav, Kannan Ramchanran Dept. of Electrical Engineering an Computer Sciences, U.C. Berkeley {tavorb,

More information

Time-of-Arrival Estimation in Non-Line-Of-Sight Environments

Time-of-Arrival Estimation in Non-Line-Of-Sight Environments 2 Conference on Information Sciences an Systems, The Johns Hopkins University, March 2, 2 Time-of-Arrival Estimation in Non-Line-Of-Sight Environments Sinan Gezici, Hisashi Kobayashi an H. Vincent Poor

More information

Math 342 Partial Differential Equations «Viktor Grigoryan

Math 342 Partial Differential Equations «Viktor Grigoryan Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite

More information

Math 1B, lecture 8: Integration by parts

Math 1B, lecture 8: Integration by parts Math B, lecture 8: Integration by parts Nathan Pflueger 23 September 2 Introuction Integration by parts, similarly to integration by substitution, reverses a well-known technique of ifferentiation an explores

More information

Conservation Laws. Chapter Conservation of Energy

Conservation Laws. Chapter Conservation of Energy 20 Chapter 3 Conservation Laws In orer to check the physical consistency of the above set of equations governing Maxwell-Lorentz electroynamics [(2.10) an (2.12) or (1.65) an (1.68)], we examine the action

More information

DAMTP 000/NA04 On the semi-norm of raial basis function interpolants H.-M. Gutmann Abstract: Raial basis function interpolation has attracte a lot of

DAMTP 000/NA04 On the semi-norm of raial basis function interpolants H.-M. Gutmann Abstract: Raial basis function interpolation has attracte a lot of UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports On the semi-norm of raial basis function interpolants H.-M. Gutmann DAMTP 000/NA04 May, 000 Department of Applie Mathematics an Theoretical Physics Silver

More information

Agmon Kolmogorov Inequalities on l 2 (Z d )

Agmon Kolmogorov Inequalities on l 2 (Z d ) Journal of Mathematics Research; Vol. 6, No. ; 04 ISSN 96-9795 E-ISSN 96-9809 Publishe by Canaian Center of Science an Eucation Agmon Kolmogorov Inequalities on l (Z ) Arman Sahovic Mathematics Department,

More information

the solution of ()-(), an ecient numerical treatment requires variable steps. An alternative approach is to apply a time transformation of the form t

the solution of ()-(), an ecient numerical treatment requires variable steps. An alternative approach is to apply a time transformation of the form t Asymptotic Error Analysis of the Aaptive Verlet Metho Stephane Cirilli, Ernst Hairer Beneict Leimkuhler y May 3, 999 Abstract The Aaptive Verlet metho [7] an variants [6] are time-reversible schemes for

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

Introduction to the Vlasov-Poisson system

Introduction to the Vlasov-Poisson system Introuction to the Vlasov-Poisson system Simone Calogero 1 The Vlasov equation Consier a particle with mass m > 0. Let x(t) R 3 enote the position of the particle at time t R an v(t) = ẋ(t) = x(t)/t its

More information

2 Viktor G. Kurotschka, Rainer Schwabe. 1) In the case of a small experimental region, mathematically described by

2 Viktor G. Kurotschka, Rainer Schwabe. 1) In the case of a small experimental region, mathematically described by HE REDUION OF DESIGN PROLEMS FOR MULIVARIAE EXPERIMENS O UNIVARIAE POSSIILIIES AND HEIR LIMIAIONS Viktor G Kurotschka, Rainer Schwabe Freie Universitat erlin, Mathematisches Institut, Arnimallee 2{6, D-4

More information

Function Spaces. 1 Hilbert Spaces

Function Spaces. 1 Hilbert Spaces Function Spaces A function space is a set of functions F that has some structure. Often a nonparametric regression function or classifier is chosen to lie in some function space, where the assume structure

More information

A Class of Robust Adaptive Controllers for Innite. Dimensional Dynamical Systems. M. A. Demetriou K. Ito. Center for Research in Scientic Computation

A Class of Robust Adaptive Controllers for Innite. Dimensional Dynamical Systems. M. A. Demetriou K. Ito. Center for Research in Scientic Computation A Class of Robust Aaptive Controllers for Innite Dimensional Dynamical Systems M. A. Demetriou K. Ito Center for Research in Scientic Computation Department of Mathematics North Carolina State University

More information

A New Backstepping Sliding Mode Guidance Law Considering Control Loop Dynamics

A New Backstepping Sliding Mode Guidance Law Considering Control Loop Dynamics pp. 9-6 A New Backstepping liing Moe Guiance Law Consiering Control Loop Dynamics V. Behnamgol *, A. Vali an A. Mohammai 3, an 3. Department of Control Engineering, Malek Ashtar University of Technology

More information

Acute sets in Euclidean spaces

Acute sets in Euclidean spaces Acute sets in Eucliean spaces Viktor Harangi April, 011 Abstract A finite set H in R is calle an acute set if any angle etermine by three points of H is acute. We examine the maximal carinality α() of

More information

THE ACCURATE ELEMENT METHOD: A NEW PARADIGM FOR NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS

THE ACCURATE ELEMENT METHOD: A NEW PARADIGM FOR NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS THE PUBISHING HOUSE PROCEEDINGS O THE ROMANIAN ACADEMY, Series A, O THE ROMANIAN ACADEMY Volume, Number /, pp. 6 THE ACCURATE EEMENT METHOD: A NEW PARADIGM OR NUMERICA SOUTION O ORDINARY DIERENTIA EQUATIONS

More information

An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback

An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback Journal of Machine Learning Research 8 07) - Submitte /6; Publishe 5/7 An Optimal Algorithm for Banit an Zero-Orer Convex Optimization with wo-point Feeback Oha Shamir Department of Computer Science an

More information

and from it produce the action integral whose variation we set to zero:

and from it produce the action integral whose variation we set to zero: Lagrange Multipliers Monay, 6 September 01 Sometimes it is convenient to use reunant coorinates, an to effect the variation of the action consistent with the constraints via the metho of Lagrange unetermine

More information

'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21

'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21 Large amping in a structural material may be either esirable or unesirable, epening on the engineering application at han. For example, amping is a esirable property to the esigner concerne with limiting

More information

EVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION OF UNIVARIATE TAYLOR SERIES

EVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION OF UNIVARIATE TAYLOR SERIES MATHEMATICS OF COMPUTATION Volume 69, Number 231, Pages 1117 1130 S 0025-5718(00)01120-0 Article electronically publishe on February 17, 2000 EVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION

More information

Nested Saturation with Guaranteed Real Poles 1

Nested Saturation with Guaranteed Real Poles 1 Neste Saturation with Guarantee Real Poles Eric N Johnson 2 an Suresh K Kannan 3 School of Aerospace Engineering Georgia Institute of Technology, Atlanta, GA 3332 Abstract The global stabilization of asymptotically

More information

Sliding mode approach to congestion control in connection-oriented communication networks

Sliding mode approach to congestion control in connection-oriented communication networks JOURNAL OF APPLIED COMPUTER SCIENCE Vol. xx. No xx (200x), pp. xx-xx Sliing moe approach to congestion control in connection-oriente communication networks Anrzej Bartoszewicz, Justyna Żuk Technical University

More information

7.1 Support Vector Machine

7.1 Support Vector Machine 67577 Intro. to Machine Learning Fall semester, 006/7 Lecture 7: Support Vector Machines an Kernel Functions II Lecturer: Amnon Shashua Scribe: Amnon Shashua 7. Support Vector Machine We return now to

More information

Table of Common Derivatives By David Abraham

Table of Common Derivatives By David Abraham Prouct an Quotient Rules: Table of Common Derivatives By Davi Abraham [ f ( g( ] = [ f ( ] g( + f ( [ g( ] f ( = g( [ f ( ] g( g( f ( [ g( ] Trigonometric Functions: sin( = cos( cos( = sin( tan( = sec

More information

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische

More information

Why Bernstein Polynomials Are Better: Fuzzy-Inspired Justification

Why Bernstein Polynomials Are Better: Fuzzy-Inspired Justification Why Bernstein Polynomials Are Better: Fuzzy-Inspire Justification Jaime Nava 1, Olga Kosheleva 2, an Vlaik Kreinovich 3 1,3 Department of Computer Science 2 Department of Teacher Eucation University of

More information

Robust Adaptive Control for a Class of Systems with Deadzone Nonlinearity

Robust Adaptive Control for a Class of Systems with Deadzone Nonlinearity Intelligent Control an Automation, 5, 6, -9 Publishe Online February 5 in SciRes. http://www.scirp.org/journal/ica http://x.oi.org/.436/ica.5.6 Robust Aaptive Control for a Class of Systems with Deazone

More information

A Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation

A Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation A Novel ecouple Iterative Metho for eep-submicron MOSFET RF Circuit Simulation CHUAN-SHENG WANG an YIMING LI epartment of Mathematics, National Tsing Hua University, National Nano evice Laboratories, an

More information

Distributed coordination control for multi-robot networks using Lyapunov-like barrier functions

Distributed coordination control for multi-robot networks using Lyapunov-like barrier functions IEEE TRANSACTIONS ON 1 Distribute coorination control for multi-robot networks using Lyapunov-like barrier functions Dimitra Panagou, Dušan M. Stipanović an Petros G. Voulgaris Abstract This paper aresses

More information

Laplacian Cooperative Attitude Control of Multiple Rigid Bodies

Laplacian Cooperative Attitude Control of Multiple Rigid Bodies Laplacian Cooperative Attitue Control of Multiple Rigi Boies Dimos V. Dimarogonas, Panagiotis Tsiotras an Kostas J. Kyriakopoulos Abstract Motivate by the fact that linear controllers can stabilize the

More information

ELEC3114 Control Systems 1

ELEC3114 Control Systems 1 ELEC34 Control Systems Linear Systems - Moelling - Some Issues Session 2, 2007 Introuction Linear systems may be represente in a number of ifferent ways. Figure shows the relationship between various representations.

More information

Adaptive Robust Control: A Piecewise Lyapunov Function Approach

Adaptive Robust Control: A Piecewise Lyapunov Function Approach Aaptive Robust Control: A Piecewise Lyapunov Function Approach Jianming Lian, Jianghai Hu an Stanislaw H. Żak Abstract The problem of output tracking control for a class of multi-input multi-output uncertain

More information

Optimal Control of Spatially Distributed Systems

Optimal Control of Spatially Distributed Systems Optimal Control of Spatially Distribute Systems Naer Motee an Ali Jababaie Abstract In this paper, we stuy the structural properties of optimal control of spatially istribute systems. Such systems consist

More information

TIME-DELAY ESTIMATION USING FARROW-BASED FRACTIONAL-DELAY FIR FILTERS: FILTER APPROXIMATION VS. ESTIMATION ERRORS

TIME-DELAY ESTIMATION USING FARROW-BASED FRACTIONAL-DELAY FIR FILTERS: FILTER APPROXIMATION VS. ESTIMATION ERRORS TIME-DEAY ESTIMATION USING FARROW-BASED FRACTIONA-DEAY FIR FITERS: FITER APPROXIMATION VS. ESTIMATION ERRORS Mattias Olsson, Håkan Johansson, an Per öwenborg Div. of Electronic Systems, Dept. of Electrical

More information

Make graph of g by adding c to the y-values. on the graph of f by c. multiplying the y-values. even-degree polynomial. graph goes up on both sides

Make graph of g by adding c to the y-values. on the graph of f by c. multiplying the y-values. even-degree polynomial. graph goes up on both sides Reference 1: Transformations of Graphs an En Behavior of Polynomial Graphs Transformations of graphs aitive constant constant on the outsie g(x) = + c Make graph of g by aing c to the y-values on the graph

More information

Lower Bounds for the Smoothed Number of Pareto optimal Solutions

Lower Bounds for the Smoothed Number of Pareto optimal Solutions Lower Bouns for the Smoothe Number of Pareto optimal Solutions Tobias Brunsch an Heiko Röglin Department of Computer Science, University of Bonn, Germany brunsch@cs.uni-bonn.e, heiko@roeglin.org Abstract.

More information

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1 Assignment 1 Golstein 1.4 The equations of motion for the rolling isk are special cases of general linear ifferential equations of constraint of the form g i (x 1,..., x n x i = 0. i=1 A constraint conition

More information

Separation of Variables

Separation of Variables Physics 342 Lecture 1 Separation of Variables Lecture 1 Physics 342 Quantum Mechanics I Monay, January 25th, 2010 There are three basic mathematical tools we nee, an then we can begin working on the physical

More information

PDE Notes, Lecture #11

PDE Notes, Lecture #11 PDE Notes, Lecture # from Professor Jalal Shatah s Lectures Febuary 9th, 2009 Sobolev Spaces Recall that for u L loc we can efine the weak erivative Du by Du, φ := udφ φ C0 If v L loc such that Du, φ =

More information

How to Minimize Maximum Regret in Repeated Decision-Making

How to Minimize Maximum Regret in Repeated Decision-Making How to Minimize Maximum Regret in Repeate Decision-Making Karl H. Schlag July 3 2003 Economics Department, European University Institute, Via ella Piazzuola 43, 033 Florence, Italy, Tel: 0039-0-4689, email:

More information

Optimal Control of Spatially Distributed Systems

Optimal Control of Spatially Distributed Systems Optimal Control of Spatially Distribute Systems Naer Motee an Ali Jababaie Abstract In this paper, we stuy the structural properties of optimal control of spatially istribute systems. Such systems consist

More information

IPA Derivatives for Make-to-Stock Production-Inventory Systems With Backorders Under the (R,r) Policy

IPA Derivatives for Make-to-Stock Production-Inventory Systems With Backorders Under the (R,r) Policy IPA Derivatives for Make-to-Stock Prouction-Inventory Systems With Backorers Uner the (Rr) Policy Yihong Fan a Benamin Melame b Yao Zhao c Yorai Wari Abstract This paper aresses Infinitesimal Perturbation

More information

Transmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency

Transmission Line Matrix (TLM) network analogues of reversible trapping processes Part B: scaling and consistency Transmission Line Matrix (TLM network analogues of reversible trapping processes Part B: scaling an consistency Donar e Cogan * ANC Eucation, 308-310.A. De Mel Mawatha, Colombo 3, Sri Lanka * onarecogan@gmail.com

More information

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7.

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7. Lectures Nine an Ten The WKB Approximation The WKB metho is a powerful tool to obtain solutions for many physical problems It is generally applicable to problems of wave propagation in which the frequency

More information

G j dq i + G j. q i. = a jt. and

G j dq i + G j. q i. = a jt. and Lagrange Multipliers Wenesay, 8 September 011 Sometimes it is convenient to use reunant coorinates, an to effect the variation of the action consistent with the constraints via the metho of Lagrange unetermine

More information

An inductance lookup table application for analysis of reluctance stepper motor model

An inductance lookup table application for analysis of reluctance stepper motor model ARCHIVES OF ELECTRICAL ENGINEERING VOL. 60(), pp. 5- (0) DOI 0.478/ v07-0-000-y An inuctance lookup table application for analysis of reluctance stepper motor moel JAKUB BERNAT, JAKUB KOŁOTA, SŁAWOMIR

More information

Optimized Schwarz Methods with the Yin-Yang Grid for Shallow Water Equations

Optimized Schwarz Methods with the Yin-Yang Grid for Shallow Water Equations Optimize Schwarz Methos with the Yin-Yang Gri for Shallow Water Equations Abessama Qaouri Recherche en prévision numérique, Atmospheric Science an Technology Directorate, Environment Canaa, Dorval, Québec,

More information

TRAJECTORY TRACKING FOR FULLY ACTUATED MECHANICAL SYSTEMS

TRAJECTORY TRACKING FOR FULLY ACTUATED MECHANICAL SYSTEMS TRAJECTORY TRACKING FOR FULLY ACTUATED MECHANICAL SYSTEMS Francesco Bullo Richar M. Murray Control an Dynamical Systems California Institute of Technology Pasaena, CA 91125 Fax : + 1-818-796-8914 email

More information

APPPHYS 217 Thursday 8 April 2010

APPPHYS 217 Thursday 8 April 2010 APPPHYS 7 Thursay 8 April A&M example 6: The ouble integrator Consier the motion of a point particle in D with the applie force as a control input This is simply Newton s equation F ma with F u : t q q

More information

sampling, resulting in iscrete-time, iscrete-frequency functions, before they can be implemente in any igital system an be of practical use. A consier

sampling, resulting in iscrete-time, iscrete-frequency functions, before they can be implemente in any igital system an be of practical use. A consier DISTANCE METRICS FOR DISCRETE TIME-FREQUENCY REPRESENTATIONS James G. Droppo an Les E. Atlas Department of Electrical Engineering University of Washington Seattle, WA fjroppo atlasg@u.washington.eu ABSTRACT

More information

Pure Further Mathematics 1. Revision Notes

Pure Further Mathematics 1. Revision Notes Pure Further Mathematics Revision Notes June 20 2 FP JUNE 20 SDB Further Pure Complex Numbers... 3 Definitions an arithmetical operations... 3 Complex conjugate... 3 Properties... 3 Complex number plane,

More information

State observers and recursive filters in classical feedback control theory

State observers and recursive filters in classical feedback control theory State observers an recursive filters in classical feeback control theory State-feeback control example: secon-orer system Consier the riven secon-orer system q q q u x q x q x x x x Here u coul represent

More information

Charge { Vortex Duality. in Double-Layered Josephson Junction Arrays

Charge { Vortex Duality. in Double-Layered Josephson Junction Arrays Charge { Vortex Duality in Double-Layere Josephson Junction Arrays Ya. M. Blanter a;b an Ger Schon c a Institut fur Theorie er Konensierten Materie, Universitat Karlsruhe, 76 Karlsruhe, Germany b Department

More information

ELECTRON DIFFRACTION

ELECTRON DIFFRACTION ELECTRON DIFFRACTION Electrons : wave or quanta? Measurement of wavelength an momentum of electrons. Introuction Electrons isplay both wave an particle properties. What is the relationship between the

More information

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x) Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)

More information

Connections Between Duality in Control Theory and

Connections Between Duality in Control Theory and Connections Between Duality in Control heory an Convex Optimization V. Balakrishnan 1 an L. Vanenberghe 2 Abstract Several important problems in control theory can be reformulate as convex optimization

More information

VIRTUAL STRUCTURE BASED SPACECRAFT FORMATION CONTROL WITH FORMATION FEEDBACK

VIRTUAL STRUCTURE BASED SPACECRAFT FORMATION CONTROL WITH FORMATION FEEDBACK AIAA Guiance, Navigation, an Control Conference an Exhibit 5-8 August, Monterey, California AIAA -9 VIRTUAL STRUCTURE BASED SPACECRAT ORMATION CONTROL WITH ORMATION EEDBACK Wei Ren Ranal W. Bear Department

More information

A Sketch of Menshikov s Theorem

A Sketch of Menshikov s Theorem A Sketch of Menshikov s Theorem Thomas Bao March 14, 2010 Abstract Let Λ be an infinite, locally finite oriente multi-graph with C Λ finite an strongly connecte, an let p

More information

Proof of SPNs as Mixture of Trees

Proof of SPNs as Mixture of Trees A Proof of SPNs as Mixture of Trees Theorem 1. If T is an inuce SPN from a complete an ecomposable SPN S, then T is a tree that is complete an ecomposable. Proof. Argue by contraiction that T is not a

More information

IERCU. Institute of Economic Research, Chuo University 50th Anniversary Special Issues. Discussion Paper No.210

IERCU. Institute of Economic Research, Chuo University 50th Anniversary Special Issues. Discussion Paper No.210 IERCU Institute of Economic Research, Chuo University 50th Anniversary Special Issues Discussion Paper No.210 Discrete an Continuous Dynamics in Nonlinear Monopolies Akio Matsumoto Chuo University Ferenc

More information

The Press-Schechter mass function

The Press-Schechter mass function The Press-Schechter mass function To state the obvious: It is important to relate our theories to what we can observe. We have looke at linear perturbation theory, an we have consiere a simple moel for

More information

Multi-agent Systems Reaching Optimal Consensus with Time-varying Communication Graphs

Multi-agent Systems Reaching Optimal Consensus with Time-varying Communication Graphs Preprints of the 8th IFAC Worl Congress Multi-agent Systems Reaching Optimal Consensus with Time-varying Communication Graphs Guoong Shi ACCESS Linnaeus Centre, School of Electrical Engineering, Royal

More information

Generalizing Kronecker Graphs in order to Model Searchable Networks

Generalizing Kronecker Graphs in order to Model Searchable Networks Generalizing Kronecker Graphs in orer to Moel Searchable Networks Elizabeth Boine, Babak Hassibi, Aam Wierman California Institute of Technology Pasaena, CA 925 Email: {eaboine, hassibi, aamw}@caltecheu

More information

Discrete Mathematics

Discrete Mathematics Discrete Mathematics 309 (009) 86 869 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/isc Profile vectors in the lattice of subspaces Dániel Gerbner

More information

Adaptive Predictive Control with Controllers of Restricted Structure

Adaptive Predictive Control with Controllers of Restricted Structure Aaptive Preictive Control with Controllers of Restricte Structure Michael J Grimble an Peter Martin Inustrial Control Centre University of Strathclye 5 George Street Glasgow, G1 1QE Scotlan, UK Abstract

More information

arxiv: v2 [cond-mat.stat-mech] 11 Nov 2016

arxiv: v2 [cond-mat.stat-mech] 11 Nov 2016 Noname manuscript No. (will be inserte by the eitor) Scaling properties of the number of ranom sequential asorption iterations neee to generate saturate ranom packing arxiv:607.06668v2 [con-mat.stat-mech]

More information