Levenberg-Marquardt-based OBS Algorithm using Adaptive Pruning Interval for System Identification with Dynamic Neural Networks


Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics, San Antonio, TX, USA, October 2009

Christian Endisch, Peter Stolze, Peter Endisch, Christoph Hackl and Ralph Kennel
Institute for Electrical Drive Systems and Power Electronics, Technische Universität München, 80333 München, Germany

Abstract: This paper presents a pruning algorithm using an adaptive pruning interval for system identification with general dynamic neural networks (GDNN). GDNNs are artificial neural networks with internal dynamics: all layers have feedback connections with time delays to the same and to all other layers. The parameters are trained with the Levenberg-Marquardt (LM) optimization algorithm, which requires the Jacobian matrix. The Jacobian is calculated by real time recurrent learning (RTRL). As both LM and OBS need Hessian information, computing time can be saved if OBS uses the scaled inverse Hessian already calculated for the LM algorithm. This paper discusses the effect of using the scaled Hessian instead of the real Hessian in the OBS pruning approach. In addition, an adaptive pruning interval is introduced. Due to pruning, the structure of the identification model changes drastically, so the parameter optimization task between the pruning steps becomes more or less complex. To guarantee that the parameter optimization algorithm has enough time to cope with the structural changes in the GDNN model, it is suggested to adapt the pruning interval during the identification process. The proposed algorithm is verified simulatively for two standard identification examples.

Index Terms: System identification, dynamic neural network, recurrent neural network, GDNN, optimization, Levenberg-Marquardt, real time recurrent learning, network pruning, OBS.

I. INTRODUCTION

A crucial task in system identification with recurrent neural networks is the selection of an appropriate network architecture. It is very difficult to decide a priori which architecture and size are adequate for an identification task, and there are no general rules for choosing the structure before identification. A common approach is to try different configurations until one is found that works well. This approach can be very time consuming if many topologies have to be tested. In this paper we start with an oversized general dynamic neural network (GDNN, see Fig. 1) and remove unnecessary parts. In a GDNN all layers have feedback connections with many time delays, so the output depends not only on the current input but also on previous inputs and previous states of the network. During the identification process the network architecture is reduced to find a model of the plant that is as simple as possible.

For architecture reduction in static neural networks several pruning algorithms are known [1],[6],[11],[16]. Two well-known methods for feedforward neural networks are optimal brain damage (OBD) [11] and optimal brain surgeon (OBS) [6]. Both methods are based on ranking the weights by their saliency, which is defined as the change in the output error and is computed from Hessian information. The OBD method calculates the saliency using only the pivot elements of the Hessian, without retraining after the pruning step. OBS, which can be regarded as a continuation of the OBD method, uses the complete Hessian information to calculate the saliency. Contrary to OBD, the OBS algorithm includes a retraining step for the remaining weights. In [3] it is shown that network pruning with OBS can also be applied to dynamic neural networks.
In OBS pruning the calculation of the inverse Hessian causes a great amount of computation. To overcome this disadvantage of OBS it is suggested to use the OBS pruning algorithm in conjunction with the LM training algorithm [3],[12]; thus OBS gets the inverse Hessian for free. The structure of the GDNN is changed drastically during the pruning and identification process. Because of this, the influence of one pruning step on the identification system can be quite different. The deletion of one weight in a very large neural network usually does not influence the identification system very much, and the error after pruning can be reduced quite fast, whereas the deletion of one weight in a small neural network can influence the system strongly, leading to huge model errors. In the latter case the identification system needs more time to cope with the structural changes in the GDNN. In this paper the time between two pruning steps, the so-called pruning interval, is adapted online. The interval adaptation makes use of a scaling algorithm that evaluates the mean error between successive pruning steps.

The next section presents the recurrent neural network used in this paper; administration matrices are introduced to manage the pruning process. Section III deals with the parameter optimization method used throughout this paper. In Section IV the LM-based OBS algorithm is discussed and the adaptive pruning interval approach is presented. Identification examples are shown in Section V. Finally, in Section VI we summarize the results.

II. GENERAL DYNAMIC NEURAL NETWORK (GDNN)

De Jesús described in his doctoral thesis [9] a broad class of dynamic networks, which he called the framework layered digital dynamic network (LDDN). The sophisticated formulations and notations of the LDDN allow an efficient computation of the Jacobian matrix using RTRL [4]; therefore we follow the conventions suggested by De Jesús. In [7]-[10] the optimal network topology is assumed to be known. In this paper the network topology is unknown, so we choose an oversized network for identification. In a GDNN all feedback connections exist with a complete tapped delay line (from a first-order time delay element $z^{-1}$ up to the maximum-order time delay element $z^{-d_{max}}$). The output of a tapped delay line (TDL) is a vector containing delayed values of its input. The network inputs also have a TDL. Fig. 1 shows a three-layer GDNN. The simulation equation for layer $m$ is

$$n^m(t) = \sum_{l \in L^f_m} \sum_{d \in DL_{m,l}} LW^{m,l}(d)\, a^l(t-d) + \sum_{l \in I_m} \sum_{d \in DI_{m,l}} IW^{m,l}(d)\, p^l(t-d) + b^m \qquad (1)$$

where $n^m(t)$ is the summation output of layer $m$, $p^l(t)$ is the $l$-th input to the network, $IW^{m,l}$ is the input weight matrix between input $l$ and layer $m$, $LW^{m,l}$ is the layer weight matrix between layer $l$ and layer $m$, $b^m$ is the bias vector of layer $m$, $DL_{m,l}$ is the set of all delays in the tapped delay line between layer $l$ and layer $m$, $DI_{m,l}$ is the set of all input delays in the tapped delay line between input $l$ and layer $m$, $I_m$ is the set of indices of input vectors that connect to layer $m$, and $L^f_m$ is the set of indices of layers that directly connect forward to layer $m$. The output of layer $m$ is

$$a^m(t) = f^m(n^m(t)) \qquad (2)$$

where $f^m(\cdot)$ are the activation functions. In this paper we use tanh functions in the hidden layers and linear activation functions in the output layer. At each point of time, equations (1) and (2) are iterated forward through the layers. Time is incremented from $t = 1$ to $t = Q$ (see [9] for a full description of the notation used here). In Fig. 1 the dimensions are shown below the matrix boxes and below the arrows: $R^m$ and $S^m$ respectively indicate the dimension of the input and the number of neurons in layer $m$. $\hat{y}$ is the output of the GDNN. During the identification process the optimal network architecture should be found. Administration matrices show which weights are valid and which are not.

A. Administration Matrices

The layer weight administration matrices $AL^{m,l}(d)$ have the same dimensions as the layer weight matrices $LW^{m,l}(d)$ of the GDNN, the input weight administration matrices $AI^{m,l}(d)$ have the same dimensions as the input weight matrices $IW^{m,l}(d)$, and the bias weight administration vectors $Ab^m$ have the same dimensions as the bias weight vectors $b^m$. The elements of the administration matrices have the boolean values 0 or 1, indicating whether a weight is valid or not. If, for example, the layer weight $lw^{m,l}_{k,i}(d) = [LW^{m,l}(d)]_{k,i}$ from neuron $i$ of layer $l$ to neuron $k$ of layer $m$ with a $d$-th-order time delay is valid, then $[AL^{m,l}(d)]_{k,i} = \alpha l^{m,l}_{k,i}(d) = 1$. If the element in the administration matrix equals zero, the corresponding weight has no influence on the GDNN. With these definitions the $k$-th output of layer $m$ can be computed by

$$n^m_k(t) = \sum_{l \in L^f_m} \sum_{d \in DL_{m,l}} \sum_{i=1}^{S^l} lw^{m,l}_{k,i}(d)\, \alpha l^{m,l}_{k,i}(d)\, a^l_i(t-d) + \sum_{l \in I_m} \sum_{d \in DI_{m,l}} \sum_{i=1}^{R^l} iw^{m,l}_{k,i}(d)\, \alpha i^{m,l}_{k,i}(d)\, p^l_i(t-d) + b^m_k\, \alpha b^m_k$$
$$a^m_k(t) = f^m(n^m_k(t)) \qquad (3)$$

where $S^l$ is the number of neurons in layer $l$ and $R^l$ is the dimension of the $l$-th input. Table I shows an example of the administration matrices for a three-layer GDNN with 3 neurons in each hidden layer and $d_{max} = 2$ at the beginning of a pruning process. Only the weight $lw^{2,2}_{1,3}(1)$ from neuron 3 of layer 2 to neuron 1 of the same layer with first-order time delay is deleted, because $\alpha l^{2,2}_{1,3}(1) = 0$. All other weights are valid.
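The masked summation (3) is straightforward to implement. The following sketch (Python/NumPy; the dictionary layout and all names are our assumptions, not the paper's) computes the net input of one layer with the administration matrices acting as boolean masks:

```python
import numpy as np

# LW[(m, l, d)] : layer weight matrix from layer l (delayed by d) into layer m
# AL[(m, l, d)] : administration matrix of the same shape (1 = weight valid)
# IW / AI       : analogous input weight and administration matrices
# b[m], Ab[m]   : bias vector of layer m and its administration vector
def net_input(m, t, LW, AL, IW, AI, b, Ab, a_hist, p_hist):
    """Summation output n^m(t); a_hist[l][t-d] and p_hist[l][t-d] hold past signals."""
    n = b[m] * Ab[m]                              # pruned biases contribute nothing
    for (mm, l, d), W in LW.items():
        if mm == m and t - d >= 0:
            n = n + (W * AL[(mm, l, d)]) @ a_hist[l][t - d]
    for (mm, l, d), W in IW.items():
        if mm == m and t - d >= 0:
            n = n + (W * AI[(mm, l, d)]) @ p_hist[l][t - d]
    return n

def layer_output(n, output_layer=False):
    # tanh in the hidden layers, linear activation in the output layer
    return n if output_layer else np.tanh(n)
```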
TABLE I
EXAMPLE OF THE ADMINISTRATION MATRICES FOR A THREE-LAYER GDNN, 3 NEURONS IN THE HIDDEN LAYERS AND $d_{max} = 2$

Layer | Administration matrices
1 | $AI^{1,1}(1)$, $AI^{1,1}(2)$, $AL^{1,1}(1)$, $AL^{1,1}(2)$, $AL^{1,2}(1)$, $AL^{1,2}(2)$, $AL^{1,3}(1)$, $AL^{1,3}(2)$, $Ab^1$
2 | $AL^{2,1}(0)$, $AL^{2,2}(1)$, $AL^{2,2}(2)$, $AL^{2,3}(1)$, $AL^{2,3}(2)$, $Ab^2$
3 | $AL^{3,2}(0)$, $AL^{3,3}(1)$, $AL^{3,3}(2)$, $Ab^3$

(At the beginning of the pruning process all entries of these matrices are 1, except $\alpha l^{2,2}_{1,3}(1) = 0$.)

B. Implementation

For the simulations throughout this paper the graphical programming language Simulink (Matlab) was used. GDNN, Jacobian calculation, optimization algorithm and pruning were implemented as S-functions in C.

III. PARAMETER OPTIMIZATION

First of all, a quantitative measure of the network performance has to be defined. In the following we use the squared error

$$E(w_k) = \frac{1}{2} \sum_{q=1}^{Q} \left( y_q - \hat{y}_q(w_k) \right)^T \left( y_q - \hat{y}_q(w_k) \right) = \frac{1}{2} \sum_{q=1}^{Q} e_q^T(w_k)\, e_q(w_k) \qquad (4)$$

where $q$ denotes one pattern in the training set and $y_q$ and $\hat{y}_q(w_k)$ are, respectively, the desired target and the actual model output of the $q$-th pattern. The vector $w_k$ is composed of all weights in the GDNN.
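As a minimal sketch (our variable names), the cost (4) over a FIFO buffer of the last Q error vectors, anticipating the sliding-window scheme described in Section III-A:

```python
import numpy as np
from collections import deque

Q = 500
window = deque(maxlen=Q)   # oldest error vector drops out automatically (FIFO)

def cost(window):
    """E(w_k) = 1/2 * sum_q e_q^T e_q over the last Q patterns."""
    return 0.5 * sum(float(np.dot(e, e)) for e in window)
```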

[Fig. 1. Three-layer GDNN (two hidden layers). The signal-flow graph shows the input $p(t)$ with its TDL, the weight matrices $IW^{1,1}$ and $LW^{m,l}(d)$ with delay elements $z^{-1} \ldots z^{-d_{max}}$, the bias vectors $b^1, b^2, b^3$, the layer outputs $a^1(t), a^2(t), a^3(t) = \hat{y}(t)$, and the layer dimensions $S^1, S^2, S^3$.]

The cost function $E(w_k)$ is small if the training (and pruning) process performs well and large if it performs poorly. The cost function forms an error surface in an $(n+1)$-dimensional space, where $n$ is equal to the number of weights in the GDNN. In the next step this space has to be searched in order to reduce the cost function.

A. Levenberg-Marquardt Algorithm

All Newton methods are based on the second-order Taylor series expansion about the old weight vector $w_k$:

$$E(w_{k+1}) = E(w_k + \Delta w_k) = E(w_k) + g_k^T \Delta w_k + \frac{1}{2} \Delta w_k^T H_k \Delta w_k \qquad (5)$$

If a minimum on the error surface is found, the gradient of the expansion (5) with respect to $\Delta w_k$ is zero:

$$\nabla E(w_{k+1}) = g_k + H_k \Delta w_k = 0 \qquad (6)$$

Solving (6) for $\Delta w_k$ gives the Newton method:

$$\Delta w_k = -H_k^{-1} g_k, \qquad w_{k+1} = w_k - H_k^{-1} g_k \qquad (7)$$

The vector $-H_k^{-1} g_k$ is known as the Newton direction, which is a descent direction if the Hessian matrix $H_k$ is positive definite. There are several difficulties with the direct application of Newton's method. One problem is that the optimization step may move to a maximum or saddle point if the Hessian is not positive definite, and the algorithm could become unstable. There are two possibilities to solve this problem: either the algorithm uses a line search routine (e.g. Quasi-Newton) or it uses a scaling factor (e.g. LM). The direct evaluation of the Hessian matrix is computationally demanding; hence the Quasi-Newton approach (e.g. the BFGS formula) builds up an increasingly accurate estimate of the inverse Hessian matrix iteratively, using first derivatives of the cost function only. The Gauss-Newton and LM approaches approximate the Hessian matrix by [5]

$$H_k \approx J^T(w_k)\, J(w_k) \qquad (8)$$

and it can be shown that

$$g_k = J^T(w_k)\, e(w_k) \qquad (9)$$

where $J(w_k)$ is the Jacobian matrix

$$J(w_k) = \begin{bmatrix} \frac{\partial e_1(w_k)}{\partial w_1} & \frac{\partial e_1(w_k)}{\partial w_2} & \cdots & \frac{\partial e_1(w_k)}{\partial w_n} \\ \frac{\partial e_2(w_k)}{\partial w_1} & \frac{\partial e_2(w_k)}{\partial w_2} & \cdots & \frac{\partial e_2(w_k)}{\partial w_n} \\ \vdots & \vdots & & \vdots \\ \frac{\partial e_Q(w_k)}{\partial w_1} & \frac{\partial e_Q(w_k)}{\partial w_2} & \cdots & \frac{\partial e_Q(w_k)}{\partial w_n} \end{bmatrix} \qquad (10)$$

which includes first derivatives only. Here $n$ is the number of all weights in the neural network and $Q$ is the number of time steps evaluated. With (7), (8) and (9) the Gauss-Newton method can be written as

$$w_{k+1} = w_k - \left[ J^T(w_k)\, J(w_k) \right]^{-1} J^T(w_k)\, e(w_k) \qquad (11)$$

Now the LM method can be expressed with the scaling factor $\mu_k$:

$$w_{k+1} = w_k - \left[ J^T(w_k)\, J(w_k) + \mu_k \tilde{I} \right]^{-1} J^T(w_k)\, e(w_k) \qquad (12)$$

where $\tilde{I}$ is the identity matrix. As the LM algorithm is the best optimization method for small and moderate networks (up to a few hundred weights), this algorithm is used for all simulations in this paper.

LM optimization is normally performed offline; this means that all training patterns have to be available before training is started. In order to get an online training algorithm we use a sliding time window that includes the information of the last Q time steps. With the last Q errors the Jacobian matrix $J(w_k)$ from equation (10) is calculated quasi-online. In every time step the oldest training pattern drops out of the time window and a new one (from the current time step) is added, just like a first-in-first-out (FIFO) buffer. If the time window is large enough, it can be assumed that the information content of the training data is constant. With this simple method we are able to implement the LM algorithm online.
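A single LM update (12) then takes the following form (a sketch with assumed names; the μ schedule shown is a common heuristic and is not specified in the paper):

```python
import numpy as np

def lm_step(w, J, e, mu):
    """One LM parameter update (12); J is the Q x n Jacobian (10), e the error vector."""
    A = J.T @ J + mu * np.eye(J.shape[1])   # scaled Hessian approximation, cf. (8)
    g = J.T @ e                             # gradient approximation (9)
    return w - np.linalg.solve(A, g)        # solve() is cheaper and stabler than inv()

def update_mu(mu, cost_new, cost_old, factor=10.0):
    # shrink mu (more Newton-like) on success, grow it (more gradient-descent-like)
    # when the cost increased
    return mu / factor if cost_new < cost_old else mu * factor
```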

As this quasi-online optimization method works surprisingly well, it is not necessary to use a recursive approach like [15]. For the simulations in this paper the window size is set to Q = 500 (using a sampling time of 0.01 sec).

B. Jacobian Calculations

To create the Jacobian matrix, the derivatives of the errors have to be computed, see (10). The GDNN has feedback elements and internal delays, so the Jacobian cannot be calculated by the standard backpropagation algorithm. There are two general approaches to calculate the Jacobian matrix for dynamic systems: by backpropagation through time (BPTT) [18] or by real time recurrent learning (RTRL) [19]. For Jacobian calculations the RTRL algorithm is more efficient than the BPTT algorithm [10]. Accordingly, the RTRL algorithm is used in this paper, and we make use of the formulas developed for the layered digital dynamic network. The interested reader is referred to [7]-[10],[3] for further details.

IV. OBS PRUNING IN GDNNS

The goal of OBS pruning is to set one of the GDNN weights to zero while the increase of the cost function given in (4) is minimized. This particular weight is denoted as $w_{k,z}$. In the original OBS algorithm proposed by Hassibi [6] all weights in the model are evaluated by the saliency

$$S_{L,z} = \frac{w_{k,z}^2}{2 \left[ H^{-1} \right]_{z,z}} \qquad (13)$$

The weight with the smallest saliency is deleted. The corresponding weight update can be calculated by

$$\Delta w_k = -\frac{w_{k,z}}{\left[ H^{-1} \right]_{z,z}}\, H^{-1} i_z \qquad (14)$$

where $i_z$ is the unit vector with only the $z$-th element equal to 1.

A. LM-based OBS

The OBS equations (13) and (14) require the complex calculation of the inverse Hessian. In subsection III-A the LM optimization method provided an approximation of this matrix, extended with the LM scaling factor $\mu_k$:

$$H^{-1} \approx \left( J^T(w_k)\, J(w_k) + \mu_k \tilde{I} \right)^{-1} \qquad (15)$$

Using this scaled and approximated inverse Hessian matrix, the OBS algorithm according to (13) and (14) can be rewritten as

$$S_{L,z} = \frac{w_{k,z}^2}{2 \left[ \left( J^T(w_k)\, J(w_k) + \mu_k \tilde{I} \right)^{-1} \right]_{z,z}} \qquad (16)$$

with the optimum change of the weights

$$\Delta w_k = -\frac{w_{k,z} \left( J^T(w_k)\, J(w_k) + \mu_k \tilde{I} \right)^{-1} i_z}{\left[ \left( J^T(w_k)\, J(w_k) + \mu_k \tilde{I} \right)^{-1} \right]_{z,z}} \qquad (17)$$

Thus the OBS approach is obtained at low computational cost.

B. Effect of the Scaled Hessian in OBS

Now the question is: how does the LM scaling factor $\mu_k$ in (16) and (17) influence the OBS algorithm? First of all, let us look at the trust-region method governed by the LM scaling factor $\mu_k$. For small values of $\mu_k$ the LM optimization (12) works as the Newton method (7). In this case the second-order Taylor series expansion (5) (with (8) and (9)) gives a good approximation of the real cost function (4). By contrast, large values of $\mu_k$ lead to simple gradient descent (GD) optimization with step length $1/\mu_k$, see (12). If so, the second-order series expansion (with (8) and (9)) is not suitable to approximate the real cost function. In LM, the Taylor series expansion provides a model of the real cost function, but the model is only trusted within some region around the current search point $w_k$, governed by the LM scaling factor $\mu_k$.

Next we consider the LM-based pruning approach defined by (16) and (17). For small values of $\mu_k$ the expression $J^T(w_k)\, J(w_k) + \mu_k \tilde{I}$ gives a good approximation of the real Hessian matrix and we recover the original OBS formulas:

$$\mu_k \to 0: \quad S_{L,z} \to S_{L,z}^{OBS}, \quad \Delta w_k \to \Delta w_k^{OBS} \qquad (18)$$

More interesting is the case of large values of $\mu_k$. Then the second-order series expansion (5) (with (8) and (9)) is not suitable to approximate the real cost function (4); thus the expression $J^T(w_k)\, J(w_k) + \mu_k \tilde{I}$ does not represent the real Hessian and is not suitable for the OBS calculation.
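Pulling (15)-(17) together, one LM-based OBS pruning step can be sketched as follows (Python/NumPy; names are ours). Note that $H^{-1} i_z$ is simply the $z$-th column of the scaled inverse Hessian:

```python
import numpy as np

def lm_obs_step(w, J, mu):
    """One LM-based OBS pruning step: saliency (16) and weight update (17)."""
    H_inv = np.linalg.inv(J.T @ J + mu * np.eye(w.size))  # scaled inverse Hessian (15)
    saliency = w**2 / (2.0 * np.diag(H_inv))              # eq. (16)
    z = int(np.argmin(saliency))                          # weight with smallest saliency
    w_new = w - (w[z] / H_inv[z, z]) * H_inv[:, z]        # eq. (17); H_inv[:, z] = H^-1 i_z
    w_new[z] = 0.0                                        # pruned weight is exactly zero
    return w_new, z
```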
For large values of $\mu_k$ the LM-based OBS given by (16) and (17) reduces to the form

$$\mu_k \to \infty: \quad S_{L,z} \to \frac{\mu_k}{2}\, w_{k,z}^2 \qquad (19)$$
$$\Delta w_k \to -w_{k,z}\, i_z \qquad (20)$$

Equations (19) and (20) comply with a very simple pruning concept (in the following referred to as Magnitude Pruning) [2]: it is supposed that small weights in the neural network are less important than large weights. The squared magnitude of the weight value is used as the saliency, see (19). This pruning approach is not the best (even small weights can play an important role in a neural network). However, it is a good alternative pruning method, because for large values of $\mu_k$ we do not have Hessian information with which to apply the much more efficient OBS pruning. With (20) only the particular weight with the smallest magnitude is set to zero; no additional weight update is done, and the remaining weights are kept constant. This feature is also desirable, since we do not have Hessian information for adequate weight adaptation.

The effect of using the scaled Hessian in OBS can be summarized as follows: for small values of $\mu_k$ the Hessian information is accurate and the standard OBS is applied. For large LM scaling factors no suitable Hessian information is available and Magnitude Pruning is obtained as an alternative to OBS. The LM-based OBS switches smoothly between the two pruning methods OBS and Magnitude Pruning.
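A small numerical illustration of this limit (our construction, not from the paper): for a large $\mu$ the scaled saliency (16) selects the same weight as plain magnitude ranking.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((200, 10))
w = rng.standard_normal(10)
H_inv = np.linalg.inv(J.T @ J + 1e6 * np.eye(10))  # (J^T J + mu*I)^-1 with mu large
s = w**2 / (2.0 * np.diag(H_inv))                  # saliency (16) ~ (mu/2) * w_z^2
print(np.argmin(s) == np.argmin(w**2))             # True: Magnitude Pruning behaviour
```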

C. Control of Pruning Success

Every $\Delta_p$ time steps one pruning step is executed. The time constant $\Delta_p$ is called the pruning interval. Every pruning step starts with an evaluation of the last pruning step. Therefore the cost function value $E_p$ before pruning is stored in order to judge the success of the pruning operation. The parameter $\Delta E_{max}$ defines the maximum relative error increase; this user-defined parameter limits the error increase during the pruning process. The last pruning step is canceled if the current cost function value $E(w_k)$ is too high compared to the value before pruning (see Fig. 2):

$$\text{if } E(w_k) > \Delta E_{max} \cdot E_p = \Delta E_{max} \cdot E(w_{k-\Delta_p}): \quad \text{undo last pruning step} \qquad (21)$$

Therefore the GDNN weights also have to be stored before pruning.

D. Adaptive Pruning Interval

Usually the cost function increases after a pruning step, and the parameter optimization procedure between the pruning steps tries to reduce the error. It can be observed that the deletion of one weight in an oversized GDNN does not have much influence on the identification process, since there are many redundant weights available. By contrast, the smaller the network, the more complex the retraining process after a pruning step. For this reason the pruning operation can be improved if the pruning interval $\Delta_p$ is adapted online. It must be assured that the optimization algorithm is able to cope with the structural changes in the GDNN model within the pruning interval $\Delta_p$. First of all we need a measure of whether the pruning interval $\Delta_p$ is adequately chosen. Therefore we recommend averaging over the last $\Delta_p$ cost function values between the pruning steps (see Fig. 2 and Fig. 3):

$$E_{mean} = \frac{1}{\Delta_p} \left( E(w_k) + E(w_{k-1}) + \ldots + E(w_{k-\Delta_p+1}) \right) \qquad (22)$$

[Fig. 2. Example of an adequate pruning interval: the cost function E over the iterations, with pruning steps $\Delta_p$ apart and $E_{mean} < \Delta E_{max} \cdot E_p$.]

This mean cost function indicates whether the current pruning interval $\Delta_p$ is suitable. If the mean error $E_{mean}$ is high, i.e. the optimization algorithm cannot cope with the new optimization task within the time $\Delta_p$, the pruning interval should be increased, see Fig. 3. If the mean error $E_{mean}$ is small, the pruning process can be sped up by decreasing the pruning interval. Fig. 2 depicts an example of an adequately chosen pruning interval. The pruning interval $\Delta_p$ is adapted by the following scaling algorithm:

$$\text{if } E_{mean} > \Delta E_{max} \cdot E_p: \quad \Delta_p = \vartheta \cdot \Delta_p, \qquad \text{else: } \Delta_p = \Delta_p / \vartheta \qquad (23)$$

with the user-defined scaling factor $\vartheta > 1$. The identification examples in Section V show the adaptation of the pruning interval $\Delta_p$ during the pruning and parameter optimization process.

[Fig. 3. Pruning interval is too short: $E_{mean} > \Delta E_{max} \cdot E_p$.]
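Both mechanisms amount to a few lines of control logic; a sketch under assumed variable names:

```python
def pruning_succeeded(E_now, E_p, dE_max):
    # (21): if False, the last pruning step is undone (stored weights restored)
    return E_now <= dE_max * E_p

def adapt_interval(E_hist, E_p, dE_max, delta_p, theta):
    # E_hist: cost values of the last delta_p iterations since the pruning step
    E_mean = sum(E_hist) / len(E_hist)   # eq. (22)
    if E_mean > dE_max * E_p:
        return delta_p * theta           # (23): optimizer needs more time
    return delta_p / theta               # (23): pruning can be sped up
```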
V. IDENTIFICATION

In this section two identification examples are presented, using the LM-based pruning algorithm suggested in (16) and (17) with the adaptive pruning interval proposed in (22) and (23), Jacobian calculation with RTRL, and the LM algorithm (12). The first example to be identified is a simple linear PT2 system. This example is chosen to test the architecture selection ability of the suggested pruning approach. This test is possible for linear systems because we know the required difference equation: if the pruning algorithm with adaptive pruning interval succeeds in deleting the right weights, the final GDNN model has to consist of the parameters of the difference equation. The second example is more complex: a nonlinear dynamic system has to be identified.

A. Excitation Signal

In system identification it is a very important task to use an appropriate excitation signal. A nonlinear dynamic system requires that all combinations of frequencies and amplitudes (in the system's operating range) are represented in the signal. In this paper an APRBS signal (Amplitude modulated Pseudo Random Binary Sequence) is applied, uniformly distributed from -1 to +1 in amplitude and from 100 ms to 2500 ms in hold time. For more information the reader is referred to [14].
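A minimal APRBS generator consistent with the description above (Python/NumPy; the function name, the seed handling and the 10 ms sample time are our assumptions):

```python
import numpy as np

def aprbs(n_steps, dt=0.01, amp=(-1.0, 1.0), hold=(0.1, 2.5), seed=0):
    """Amplitude modulated PRBS: random levels held for random durations."""
    rng = np.random.default_rng(seed)
    u = np.empty(n_steps)
    i = 0
    while i < n_steps:
        n_hold = max(1, int(rng.uniform(*hold) / dt))   # hold time in samples
        u[i:i + n_hold] = rng.uniform(*amp)             # uniform amplitude level
        i += n_hold
    return u
```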

B. Identification of the PT2 Plant

The simple PT2 plant $\frac{1}{0.1\,s^2 + s + 1}$ is identified by a three-layer GDNN (with three neurons in the hidden layers and $d_{max} = 2$, so n = 93). This network architecture is obviously larger than necessary. If the continuous-time PT2 plant is discretized using zero-order hold [17] with a sample time of 10 ms, the discrete model can be expressed as

$$y(k) = 0.00048\, u(k-1) + 0.00046\, u(k-2) + 1.9039\, y(k-1) - 0.9048\, y(k-2) \qquad (24)$$

The initial pruning interval is set to $\Delta_p = 5$, the maximum error increase is $\Delta E_{max} = 10$ and the scaling factor $\vartheta$ is set to 1.1. The pruning process starts after iteration Q = 500, when the sliding time window is filled up.

[Fig. 4. Squared error $E(w_k)$ and number of weights n for the PT2 identification example (at the beginning: n = 93, at the end: n = 7).]

Fig. 4 shows the weight reduction from n = 93 to n = 7 and the identification error, which is also reduced although the model is becoming smaller. After about 120 seconds a lot of pruning steps are canceled due to (21). Table II summarizes the administration matrices after the identification and pruning process. Most of the weights are deleted.

TABLE II
ADMINISTRATION MATRICES OF THE THREE-LAYER GDNN (3 NEURONS IN THE HIDDEN LAYERS, $d_{max} = 2$) AFTER THE PRUNING PROCESS

Layer | Number of weights n | Administration matrices
1 | ... | $AI^{1,1}(1)$, $AI^{1,1}(2)$, $AL^{1,1}(1)$, $AL^{1,1}(2)$, $AL^{1,2}(1)$, $AL^{1,2}(2)$, $AL^{1,3}(1)$, $AL^{1,3}(2)$, $Ab^1$
2 | ... | $AL^{2,1}(0)$, $AL^{2,2}(1)$, $AL^{2,2}(2)$, $AL^{2,3}(1)$, $AL^{2,3}(2)$, $Ab^2$
3 | ... | $AL^{3,2}(0)$, $AL^{3,3}(1)$, $AL^{3,3}(2)$, $Ab^3$

Along with the remaining weight vector

$$w = \left[\, w_1 \; w_2 \; \cdots \; w_7 \,\right]^T \qquad (25)$$

we are able to draw the final GDNN model, see Fig. 5. Each circle in the signal flow graph includes a summing junction and an activation function. The input and the output of the model have two time delays.

[Fig. 5. Identification result: GDNN model of the PT2 example, with delay elements $z^{-1}$ and the remaining weights.]

Due to the very small input weight 0.27, all tanh activation functions can be neglected (only the linear region near the origin with slope 1 is used) and we get the following difference equation for the final GDNN model:

$$\hat{y}(k) = 0.00048\, u(k-1) + 0.00046\, u(k-2) + 1.9039\, y(k-1) - 0.9048\, y(k-2) \qquad (26)$$

This equation looks similar to the real difference equation of the system (24). Fig. 6 depicts another PT2 identification using the same initial pruning interval $\Delta_p = 5$, but without adaptation: the optimization algorithm cannot cope with the structural changes and the identification error increases.

[Fig. 6. A constant pruning interval is too small: the squared error $E(w_k)$ increases.]

Fig. 7 shows the output of the PT2 plant y (solid gray line) and the output of the GDNN model $\hat{y}$ (thick dashed black line) for the first 30 seconds (sample time: 10 ms).

[Fig. 7. Outputs of the PT2 plant y (solid gray line) and of the GDNN model $\hat{y}$ (thick dashed black line) for the first 30 seconds.]
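The discretization can be reproduced with SciPy's zero-order-hold conversion; a sketch assuming the plant and sample time given above:

```python
import numpy as np
from scipy.signal import cont2discrete

# Zero-order-hold discretization of the PT2 plant as a cross-check of (24).
num, den = [1.0], [0.1, 1.0, 1.0]          # G(s) = 1 / (0.1 s^2 + s + 1)
numd, dend, dt = cont2discrete((num, den), dt=0.01, method='zoh')[:3]
print("numerator  b:", np.ravel(numd))     # ~ [0, b1, b2] for u(k-1), u(k-2)
print("denominator a:", dend)              # ~ [1, -1.9039, 0.9048]
```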

C. Identification of a Nonlinear Plant

The first example was chosen to underpin the weight reduction ability of the proposed pruning algorithm. In this subsection a complex nonlinear dynamic system presented by Narendra [13] is considered:

$$y(k) = \frac{y(k-1)\, y(k-2)\, y(k-3)\, u(k-2) \left[ y(k-3) - 1 \right] + u(k-1)}{1 + y^2(k-2) + y^2(k-3)}$$

For this identification example a three-layer GDNN (with four neurons in the hidden layers and $d_{max} = 3$, so n = 212) is used. The initial pruning interval is $\Delta_p = 10$, the maximum error increase is set to $\Delta E_{max} = 2$ and the scaling factor $\vartheta$ is set to 1.1. Fig. 8 depicts the weight reduction from n = 212 to n = 57 and the identification error for 200 seconds. The pruning process is stopped after 82 seconds due to an abrupt rise of the cost function; the last pruning step is undone.

[Fig. 8. Squared error $E(w_k)$ and number of weights n for the nonlinear identification example (at the beginning: n = 212, at the end: n = 57).]

Fig. 9 shows the output of the nonlinear plant y (solid gray line), the output of the GDNN model $\hat{y}$ (thick dashed black line) and the APRBS excitation signal p (thin dashed black line) during the first 30 seconds (sample time: 10 ms).

[Fig. 9. Output of the nonlinear plant y (solid gray line), output of the GDNN model $\hat{y}$ (thick dashed black line) and APRBS excitation signal p (thin dashed black line) for the first 30 seconds.]
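For reference, the benchmark plant can be simulated directly from the difference equation above (a sketch; function and variable names are ours):

```python
import numpy as np

def narendra_plant(u):
    """Simulate the nonlinear benchmark plant [13] for an input sequence u."""
    y = np.zeros(len(u))
    for k in range(3, len(u)):
        num = y[k-1] * y[k-2] * y[k-3] * u[k-2] * (y[k-3] - 1.0) + u[k-1]
        y[k] = num / (1.0 + y[k-2]**2 + y[k-3]**2)
    return y
```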
VI. CONCLUSION

This paper uses the LM-based OBS approach for system identification with GDNNs. The advantage of this pruning method is that OBS exploits the Hessian information already calculated for the LM parameter optimization. The LM-based OBS switches smoothly between the two pruning methods OBS and Magnitude Pruning, governed by the LM scaling factor. To manage the pruning process in the GDNN, administration matrices are introduced; these matrices indicate which weights still exist and which are already deleted. In every pruning step the success of the last pruning step is checked. If the increase in the error is too high, the last pruning step is canceled; for this revision the weights have to be stored. The structure of the GDNN is changed drastically during the pruning and identification process, so the influence of one pruning step on the identification system can be quite different. To guarantee that the parameter optimization algorithm has enough time to cope with the structural changes in the GDNN model, this paper suggests an adaptive pruning interval. We defined the mean cost function value to measure the success of the current pruning interval; depending on this value the pruning interval is adapted by a scaling algorithm. The proposed pruning algorithm is verified by one simple linear and one complex nonlinear dynamic system identification example. Network pruning does not only work for static neural networks: these structure-finding algorithms offer a very interesting application in system identification with dynamic neural networks.

REFERENCES

[1] M. Attik, L. Bougrain, F. Alexandre, "Optimal Brain Surgeon Variants For Feature Selection," in Proc. IEEE Int. Joint Conf. on Neural Networks, 2004.
[2] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[3] C. Endisch, C. Hackl and D. Schröder, "Optimal Brain Surgeon For General Dynamic Neural Networks," in Lecture Notes in Artificial Intelligence (LNAI) 4874, Springer-Verlag Berlin, pp. 15-28, 2007.
[4] C. Endisch, P. Stolze, C. Hackl and D. Schröder, "Comments on Backpropagation Algorithms for a Broad Class of Dynamic Networks," IEEE Trans. on Neural Networks, vol. 20, no. 3, 2009.
[5] M. T. Hagan, M. B. Menhaj, "Training Feedforward Networks with the Marquardt Algorithm," IEEE Trans. on Neural Networks, vol. 5, no. 6, pp. 989-993, 1994.
[6] B. Hassibi, D. G. Stork, G. J. Wolff, "Optimal Brain Surgeon and General Network Pruning," IEEE Int. Conf. on Neural Networks, vol. 1, pp. 293-299, 1993.
[7] O. De Jesús, M. Hagan, "Backpropagation Algorithms Through Time for a General Class of Recurrent Network," IEEE Int. Joint Conf. on Neural Networks, Washington, 2001.
[8] O. De Jesús, M. Hagan, "Forward Perturbation Algorithm For a General Class of Recurrent Network," IEEE Int. Joint Conf. on Neural Networks, Washington, 2001.
[9] O. De Jesús, "Training General Dynamic Neural Networks," Ph.D. dissertation, Oklahoma State University, Stillwater, OK, 2002.
[10] O. De Jesús, M. Hagan, "Backpropagation Algorithms for a Broad Class of Dynamic Networks," IEEE Trans. on Neural Networks, vol. 18, no. 1, pp. 14-27, 2007.
[11] Y. Le Cun, J. S. Denker, S. A. Solla, "Optimal Brain Damage," in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems, Morgan Kaufmann, pp. 598-605, 1990.
[12] J. Meng, "Penalty OBS Scheme for Feedforward Neural Network," in Proc. 17th IEEE Int. Conf. on Tools with Artificial Intelligence (ICTAI'05), 2005.
[13] K. S. Narendra, K. Parthasarathy, "Identification and Control of Dynamical Systems Using Neural Networks," IEEE Trans. on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[14] O. Nelles, Nonlinear System Identification. Berlin Heidelberg New York: Springer-Verlag, 2001.
[15] L. S. H. Ngia, J. Sjöberg, "Efficient Training of Neural Nets for Nonlinear Adaptive Filtering Using a Recursive Levenberg-Marquardt Algorithm," IEEE Trans. on Signal Processing, vol. 48, no. 7, pp. 1915-1927, 2000.
[16] R. Reed, "Pruning Algorithms - A Survey," IEEE Trans. on Neural Networks, vol. 4, no. 5, pp. 740-747, 1993.
[17] D. Schröder, Elektrische Antriebe - Regelung von Antriebssystemen, 2nd edn., Berlin Heidelberg New York: Springer-Verlag, 2001.
[18] P. J. Werbos, "Backpropagation Through Time: What it is and how to do it," Proc. IEEE, vol. 78, no. 10, pp. 1550-1560, 1990.
[19] R. J. Williams, D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Computation, vol. 1, pp. 270-280, 1989.


More information

RESISTANCE STRAIN GAGES FILLAMENTS EFFECT

RESISTANCE STRAIN GAGES FILLAMENTS EFFECT RESISTANCE STRAIN GAGES FILLAMENTS EFFECT Nashwan T. Younis, Younis@enr.ipfw.edu Department of Mechanical Enineerin, Indiana University-Purdue University Fort Wayne, USA Bonsu Kan, kan@enr.ipfw.edu Department

More information

Phase Diagrams: construction and comparative statics

Phase Diagrams: construction and comparative statics 1 / 11 Phase Diarams: construction and comparative statics November 13, 215 Alecos Papadopoulos PhD Candidate Department of Economics, Athens University of Economics and Business papadopalex@aueb.r, https://alecospapadopoulos.wordpress.com

More information

Multilayer Perceptron Learning Utilizing Singular Regions and Search Pruning

Multilayer Perceptron Learning Utilizing Singular Regions and Search Pruning Multilayer Perceptron Learning Utilizing Singular Regions and Search Pruning Seiya Satoh and Ryohei Nakano Abstract In a search space of a multilayer perceptron having hidden units, MLP(), there exist

More information

Nonlinear System Identification Using MLP Dr.-Ing. Sudchai Boonto

Nonlinear System Identification Using MLP Dr.-Ing. Sudchai Boonto Dr-Ing Sudchai Boonto Department of Control System and Instrumentation Engineering King Mongkut s Unniversity of Technology Thonburi Thailand Nonlinear System Identification Given a data set Z N = {y(k),

More information

Health Monitoring of a Truss Bridge using Adaptive Identification

Health Monitoring of a Truss Bridge using Adaptive Identification Proceedins of the 27 IEEE Intellient Transportation Systems Conference Seattle, WA, USA, Sept. 3 - Oct. 3, 27 WeC.4 Health Monitorin of a Truss Bride usin Adaptive Identification James W. Fonda, Student

More information

CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning

CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Learning Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains an classification (class) attribute.

More information

SCALED CONJUGATE GRADIENT TYPE METHOD WITH IT`S CONVERGENCE FOR BACK PROPAGATION NEURAL NETWORK

SCALED CONJUGATE GRADIENT TYPE METHOD WITH IT`S CONVERGENCE FOR BACK PROPAGATION NEURAL NETWORK International Journal of Information echnoloy an Business Manaement 0-05 JIBM & ARF. All rihts reserve SCALED CONJUGAE GRADIEN YPE MEHOD WIH I`S CONVERGENCE FOR BACK PROPAGAION NEURAL NEWORK, Collee of

More information

Geodesics as gravity

Geodesics as gravity Geodesics as ravity February 8, 05 It is not obvious that curvature can account for ravity. The orbitin path of a planet, for example, does not immediately seem to be the shortest path between points.

More information

Iterative Learning Control for Tailor Rolled Blanks Zhengyang Lu DCT

Iterative Learning Control for Tailor Rolled Blanks Zhengyang Lu DCT Iterative Learning Control for Tailor Rolled Blans Zhengyang Lu DCT 2007.066 Master Internship Report Supervisors: Prof. Maarten Steinbuch (TU/e) Camile Hol (Corus) Eindhoven University of Technology Department

More information

Maximum Likelihood Diffusive Source Localization Based on Binary Observations

Maximum Likelihood Diffusive Source Localization Based on Binary Observations Maximum Lielihood Diffusive Source Localization Based on Binary Observations Yoav Levinboo and an F. Wong Wireless Information Networing Group, University of Florida Gainesville, Florida 32611-6130, USA

More information

V DD. M 1 M 2 V i2. V o2 R 1 R 2 C C

V DD. M 1 M 2 V i2. V o2 R 1 R 2 C C UNVERSTY OF CALFORNA Collee of Enineerin Department of Electrical Enineerin and Computer Sciences E. Alon Homework #3 Solutions EECS 40 P. Nuzzo Use the EECS40 90nm CMOS process in all home works and projects

More information

Artificial Neural Network : Training

Artificial Neural Network : Training Artificial Neural Networ : Training Debasis Samanta IIT Kharagpur debasis.samanta.iitgp@gmail.com 06.04.2018 Debasis Samanta (IIT Kharagpur) Soft Computing Applications 06.04.2018 1 / 49 Learning of neural

More information

arxiv: v2 [math.oc] 11 Jun 2018

arxiv: v2 [math.oc] 11 Jun 2018 : Tiht Automated Converence Guarantees Adrien Taylor * Bryan Van Scoy * 2 Laurent Lessard * 2 3 arxiv:83.673v2 [math.oc] Jun 28 Abstract We present a novel way of eneratin Lyapunov functions for provin

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks 鮑興國 Ph.D. National Taiwan University of Science and Technology Outline Perceptrons Gradient descent Multi-layer networks Backpropagation Hidden layer representations Examples

More information

USE OF FILTERED SMITH PREDICTOR IN DMC

USE OF FILTERED SMITH PREDICTOR IN DMC Proceedins of the th Mediterranean Conference on Control and Automation - MED22 Lisbon, Portual, July 9-2, 22. USE OF FILTERED SMITH PREDICTOR IN DMC C. Ramos, M. Martínez, X. Blasco, J.M. Herrero Predictive

More information

A Compressive Sensing Based Compressed Neural Network for Sound Source Localization

A Compressive Sensing Based Compressed Neural Network for Sound Source Localization A Compressive Sensing Based Compressed Neural Network for Sound Source Localization Mehdi Banitalebi Dehkordi Speech Processing Research Lab Yazd University Yazd, Iran mahdi_banitalebi@stu.yazduni.ac.ir

More information

Three Spreadsheet Models Of A Simple Pendulum

Three Spreadsheet Models Of A Simple Pendulum Spreadsheets in Education (ejsie) Volume 3 Issue 1 Article 5 10-31-008 Three Spreadsheet Models Of A Simple Pendulum Jan Benacka Constantine the Philosopher University, Nitra, Slovakia, jbenacka@ukf.sk

More information

Digital Electronics Paper-EE-204-F SECTION-A

Digital Electronics Paper-EE-204-F SECTION-A B.Tech 4 th Semester (AEIE) F Scheme, May 24 Diital Electronics Paper-EE-24-F Note : Attempt five questions. Question is compulsory and one question from each of the four sections.. What is a loic ate?

More information

Deep Neural Networks (1) Hidden layers; Back-propagation

Deep Neural Networks (1) Hidden layers; Back-propagation Deep Neural Networs (1) Hidden layers; Bac-propagation Steve Renals Machine Learning Practical MLP Lecture 3 2 October 2018 http://www.inf.ed.ac.u/teaching/courses/mlp/ MLP Lecture 3 / 2 October 2018 Deep

More information