Sensitivity of Nonlinear Network Training to Affine Transformed Inputs
Changhua Yu, Michael T. Manry, Pramod Lakshmi Narasimha
Department of Electrical Engineering, University of Texas at Arlington, TX
changhua_yu@uta.edu, manry@uta.edu, l_pramod@uta.edu

Abstract

In this paper, the effects of nonsingular affine transforms on various nonlinear network training algorithms are analyzed. It is shown that gradient related methods are quite sensitive to an input affine transform, while Newton related methods are invariant to it. These results give a connection between pre-processing techniques and weight initialization methods. They also explain the advantages of Newton related methods over other algorithms. Numerical results validate the theoretical analyses.

Introduction

Nonlinear networks, such as multi-layer perceptron (MLP) and radial basis function (RBF) networks, are popular in the signal processing and pattern recognition areas. The network parameters are adjusted to best approximate the underlying relationship between the input and output training data (Vapnik 1995). Neural net training algorithms are time consuming, in part due to ill-conditioning problems (Saarinen, Bramley, & Cybenko 1993). Pre-processing techniques, such as input feature decorrelation and the whitening transform (LeCun et al. 1998), (Brause & Rippl 1998), are suggested to alleviate ill-conditioning and thus accelerate the network training procedure (Š. Raudys 2001). Other input transforms, including input re-scaling (Rigler, Irvine, & Vogl 1991), unbiasing and normalization, are also widely used to equalize the influence of the input features. In addition, some researchers think of the MLP as a nonlinear adaptive filter (Haykin 1996). Linear pre-processing techniques, such as noise cancellation, can improve the performance of this nonlinear filter. Although these linear transforms are widely used, their effects on MLP training algorithms haven't been analyzed in detail. Interesting issues, such as (1) whether these pre-processing techniques have the same effects on different training
algorithms, and (2) whether the benefits of this pre-processing can be duplicated or cancelled out by other strategies, i.e., advanced weight initialization methods, still need further research. (This work was supported by the Advanced Technology Program of the state of Texas. Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.) In a previous paper (Yu, Manry, & Li 2004), the effect of an input orthogonal transform on the conjugate gradient algorithm was analyzed using the concept of equivalent states. We showed that the effect of an input orthogonal transform can be absorbed by a proper weight initialization strategy. Because all linear transforms can be expressed as affine transforms, in this paper we analyze the influence of the more general affine transform on typical training algorithms. First, the conventional affine transform is described. Second, typical training algorithms are briefly reviewed. Then, the sensitivity of various training algorithms to input affine transforms is analyzed. Finally, numerical simulations are presented to verify the theoretical results.

General Affine Transform

Suggested pre-processing techniques, such as feature decorrelation, whitening, input unbiasing and normalization, can all be put into the form of a nonsingular affine transform:

    z_p = A x_p + b    (1)

where x_p is the p-th original input feature vector, z_p is the affine transformed input, and b = [b_1, b_2, ..., b_N]^T. For example, unbiasing the input features is modelled as

    z_p = x_p - m_x    (2)

where m_x is the mean input vector. An affine transform of particular interest in this paper can be expressed as a linear transform by using extended input vectors:

    Z_p = A_e X_p    (3)

where Z_p = [z_p1, ..., z_pN, 1]^T, X_p = [x_p1, ..., x_pN, 1]^T, and

    A_e = [ a_11 ... a_1N  b_1
            ...
            a_N1 ... a_NN  b_N
            0    ... 0     1  ]    (4)

Conventional Training Algorithms

Generally, training algorithms for nonlinear networks can be classified into three categories: gradient descent methods, conjugate gradient methods, and Newton related methods (Haykin 1999), (Møller 1997). In the following, we give a brief review of each method.
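Before reviewing the algorithms, the extended-vector identity of Eq. (3) is easy to check numerically. The sketch below is illustrative only: the dimension and the random A, b, and x are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                             # input dimension (illustrative)
A = rng.standard_normal((N, N))   # nonsingular linear part of Eq. (1)
b = rng.standard_normal(N)        # offset vector of Eq. (1)

# Extended transform matrix A_e of Eq. (4): [[A, b], [0, 1]]
A_e = np.block([[A, b[:, None]],
                [np.zeros((1, N)), np.ones((1, 1))]])

x = rng.standard_normal(N)        # one input pattern x_p
X = np.append(x, 1.0)             # extended input X_p = [x_p; 1]

z = A @ x + b                     # affine transform, Eq. (1)
Z = A_e @ X                       # linear form on extended vectors, Eq. (3)

assert np.allclose(Z[:N], z) and np.isclose(Z[-1], 1.0)
```

The last row of A_e simply preserves the appended constant 1, which is what lets the affine map act as a single matrix product.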
Figure 1: Topology of a three-layer MLP

Notation

In this paper, we investigate a three-layer fully connected MLP, as shown in figure 1. The training data consists of N_v training patterns {(x_p, t_p)}, where the p-th input vector x_p and the p-th desired output vector t_p have dimensions N and M, respectively. Thresholds in the hidden and output layers are handled by letting x_{p,(N+1)} = 1. For the j-th hidden unit, the net function net_{pj} and the output activation O_{pj} for the p-th training pattern are

    net_{pj} = Σ_{n=1}^{N+1} w_hi(j, n) x_{pn},    O_{pj} = f(net_{pj}),  1 ≤ j ≤ N_h    (5)

where x_{pn} denotes the n-th element of x_p, w_hi(j, n) denotes the weight connecting the n-th input to the j-th hidden unit, and N_h denotes the number of hidden units. The activation function f is sigmoidal:

    f(net_{pj}) = 1 / (1 + e^{-net_{pj}})    (6)

The k-th output y_{pk} for the p-th training pattern is

    y_{pk} = Σ_{n=1}^{N+1} w_oi(k, n) x_{pn} + Σ_{j=1}^{N_h} w_oh(k, j) O_{pj},  k = 1, ..., M    (7)

where w_oi(k, n) denotes the weight connecting the n-th input node to the k-th output unit. The network's training procedure minimizes the following global Mean Square Error (MSE):

    E = (1/N_v) Σ_{p=1}^{N_v} Σ_{m=1}^{M} [t_pm - y_pm]² = (1/N_v) Σ_{p=1}^{N_v} (t_p - y_p)^T (t_p - y_p)    (8)

where t_pm denotes the m-th element of t_p. Many methods have been proposed to solve this optimization problem.

Gradient Descent Method

The idea of the gradient descent method is to minimize the linear approximation of the MSE in (8):

    E(w + Δw) ≈ E(w) + Δw^T g    (9)

where g is the gradient vector of E(w) evaluated at the current w. The weight updating strategy is

    Δw = -Z g    (10)

where the learning factor Z is usually set to a small positive constant or determined by an adaptive or optimal method (Magoulas, Vrahatis, & Androulakis 1999). The back propagation (BP) algorithm is actually a gradient descent method (Haykin 1999), (Rumelhart, Hinton, & Williams 1986). When an optimal learning factor is used, the gradient descent method is also known as steepest descent (Fletcher 1987).

Conjugate Gradient Method

People have looked for a compromise between slowly converging gradient methods and computationally expensive Newton's methods (Møller 1997). Conjugate gradient (CG)
is such an intermediate method. The basic idea of CG is to search for the minimum of the error function along conjugate directions (Fletcher 1987). In CG, the direction P(i) for the i-th search iteration and the direction P(j) for the j-th iteration have to be conjugate with respect to the Hessian matrix:

    P^T(i) H P(j) = 0  for i ≠ j    (11)

The conjugate directions can be constructed by (Fletcher 1987):

    P(k) = -g(k) + B_1 P(k-1)    (12)

Here, g(k) represents the gradient of E with respect to the corresponding weights and thresholds at the k-th search iteration. The initial vectors are g(0) = 0 and P(0) = 0. B_1 in Eq. (12) is the ratio of the gradient energies of the current iteration and the previous iteration. The corresponding weight changes in CG are

    Δw = B_2 P(k)    (13)

where B_2 is an optimal learning factor.

Newton's Method

The goal in Newton's method is to minimize the quadratic approximation of the error function E(w) around the current weights w:

    E(w + Δw) ≈ E(w) + Δw^T g + (1/2) Δw^T H Δw    (14)

Here, the matrix H is called the Hessian matrix, which contains the second order partial derivatives of E with respect to w. Eq. (14) is minimized when

    Δw = -H^{-1} g    (15)

For Newton's method to work, the Hessian matrix H has to be positive definite, which is not guaranteed in most situations. Many modified Newton algorithms have been developed to ensure a positive definite Hessian matrix (Battiti 1992).

Sensitivity of Training to Affine Transforms

For affine transforms, the requirements that ensure equivalent states (Yu, Manry, & Li 2004) in the network with original inputs and the network with transformed inputs are:

    w_hi = u_hi A_e    (16)
    w_oh = u_oh,  w_oi = u_oi A_e    (17)

where the u's denote the corresponding weights and thresholds in the transformed network.
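As a concrete illustration of Eqs. (12)-(13), the sketch below runs CG on a small positive definite quadratic standing in for the network error surface; the quadratic, its size, and the random data are assumptions for illustration only. B_1 is computed as the ratio of gradient energies and B_2 as the exact optimal learning factor along the current direction.

```python
import numpy as np

rng = np.random.default_rng(2)
Q = rng.standard_normal((6, 6))
H = Q @ Q.T + 6 * np.eye(6)       # positive definite Hessian of a toy quadratic
c = rng.standard_normal(6)
grad = lambda w: H @ w - c        # gradient of E(w) = 0.5 w'Hw - c'w

w = np.zeros(6)
g = grad(w)
P = -g                            # with P(0) = 0, the first direction is -g, Eq. (12)
for k in range(6):
    B2 = -(g @ P) / (P @ H @ P)   # optimal learning factor along P, Eq. (13)
    w = w + B2 * P
    g_new = grad(w)
    B1 = (g_new @ g_new) / (g @ g)  # ratio of gradient energies, Eq. (12)
    P = -g_new + B1 * P
    g = g_new

# On an n-dimensional quadratic, CG with exact line search converges in at most n steps.
assert np.allclose(w, np.linalg.solve(H, c))
```

For the MLP, E is not quadratic and H changes at every point, which is why the conjugacy of the directions, and hence the behavior of B_1 and B_2, depends on the input coordinates.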
Sensitivity of CG to Affine Transforms

In CG, at the k-th training iteration, the weight updating law in the transformed network is:

    u_hi ← u_hi + B_2 P_dhi(k),  u_oi ← u_oi + B_2 P_doi(k),  u_oh ← u_oh + B_2 P_doh(k)    (18)

The weights and thresholds in the original network are updated in the same way. If the two networks have the same learning factor B_2, the conjugate directions P must satisfy the following conditions to ensure equivalent states after the weight modification:

    P_hi = P_dhi A_e,  P_oh = P_doh,  P_oi = P_doi A_e    (19)

The gradients in the transformed network can be found as:

    g_doi = ∂E/∂u_oi = -(2/N_v) Σ_p e_dp Z_p^T = -(2/N_v) Σ_p e_dp X_p^T A_e^T    (20)

where e_dp = [e_dp1, ..., e_dpM]^T with e_dpm = t_pm - y_dpm, m = 1, ..., M,

    g_dhi = ∂E/∂u_hi = -(2/N_v) Σ_p δ_dp Z_p^T = -(2/N_v) Σ_p δ_dp X_p^T A_e^T    (21)

where δ_dp = [δ_dp1, ..., δ_dpNh]^T collects the backpropagated deltas of the hidden units, and

    g_doh = -(2/N_v) Σ_p e_dp O_dp^T    (22)

In the original network, the corresponding matrices are:

    g_oi = -(2/N_v) Σ_p e_p X_p^T,  g_oh = -(2/N_v) Σ_p e_p O_p^T,  g_hi = -(2/N_v) Σ_p δ_p X_p^T    (23)

From Eq. (12), after the first training iteration in CG, the conjugate directions are

    P(1) = -g(1)    (24)

Assuming the two networks start from the same initial states, for the first training iteration we have from (20) that

    P_doi = -g_doi = -g_oi A_e^T = P_oi A_e^T    (25)

As A_e is not orthogonal, condition (19) can't be satisfied, so the training curves of the two networks diverge from the very beginning. B_1 and B_2 in the two networks won't be the same either. Therefore the CG algorithm is sensitive to input affine transforms.

Affine Transforms and OWO-BP

In output weight optimization (OWO), which is a component of several training algorithms (Chen, Manry, & Chandrasekaran 1999), we solve linear equations for the MLP output weights. In the following, we will use W_o = [w_oi : w_oh] and an augmented input vector

    Ô_p = [x_p1, ..., x_pN, 1, O_p1, ..., O_pNh]^T    (26)

For the three-layer network in figure 1, the output weights can be found by solving the linear equations which result when the gradients of E with respect to the output weights are set to zero. Using equations (7) and (8), we find the gradient of E with respect to the output weights as
    ĝ_o(m, j) = -(2/N_v) Σ_p [ t_pm - Σ_{i=1}^{L} w_o(m, i) Ô_pi ] Ô_pj = -2 [ C(m, j) - Σ_{i=1}^{L} w_o(m, i) R(i, j) ]    (27)

where L = N + 1 + N_h, the autocorrelation matrix R has the elements

    R(i, j) = (1/N_v) Σ_p Ô_pi Ô_pj    (28)

and the cross-correlation matrix C has the elements

    C(m, j) = (1/N_v) Σ_p t_pm Ô_pj    (29)

Setting ĝ_o(m, j) to zero, the j-th equation in the m-th set of equations is

    Σ_{i=1}^{L} w_o(m, i) R(i, j) = C(m, j)    (30)

So the OWO procedure in the two networks means solving

    W_o R_x = C_x,  U_o R_z = C_z    (31)

If the two networks have equivalent states before OWO, we have

    Ô_dp = [ A_e 0 ; 0 I_{N_h} ] Ô_p    (32)

In the original network,

    R_x = (1/N_v) Σ_p Ô_p Ô_p^T,  C_x = (1/N_v) Σ_p t_p Ô_p^T    (33)

For the network with transformed inputs, these matrices are:

    R_z = (1/N_v) Σ_p Ô_dp Ô_dp^T = [ A_e 0 ; 0 I ] R_x [ A_e 0 ; 0 I ]^T    (34)

    C_z = (1/N_v) Σ_p t_p Ô_dp^T = C_x [ A_e 0 ; 0 I ]^T    (35)
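The OWO step of Eqs. (27)-(31) is just a linear solve once R and C are formed. A minimal sketch under assumed sizes and random data (the hidden-unit activations are generated directly here rather than by a trained hidden layer):

```python
import numpy as np

rng = np.random.default_rng(3)
N, N_h, M, N_v = 4, 3, 2, 50      # illustrative sizes
X = np.hstack([rng.standard_normal((N_v, N)), np.ones((N_v, 1))])  # extended inputs
O = 1.0 / (1.0 + np.exp(-rng.standard_normal((N_v, N_h))))         # hidden outputs (taken as given)
t = rng.standard_normal((N_v, M))                                  # desired outputs

Ohat = np.hstack([X, O])          # augmented input vector of Eq. (26), one row per pattern
R = Ohat.T @ Ohat / N_v           # autocorrelation matrix, Eq. (28)
C = t.T @ Ohat / N_v              # cross-correlation matrix, Eq. (29)

W_o = np.linalg.solve(R, C.T).T   # solve W_o R = C, Eq. (31)

# The gradient of E with respect to the output weights is now zero:
y = Ohat @ W_o.T
g_o = -2 * (t - y).T @ Ohat / N_v
assert np.allclose(g_o, 0.0)
```

Because the output weights enter E quadratically, this single solve is exact; no iteration is needed for the output layer.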
Invariance of OWO to Nonsingular Affine Transforms

In the following lemma, we describe the effect of affine transforms on the OWO procedure.

Lemma 1. For any nonsingular matrix A_e, if the output weights satisfy the conditions of Eq. (17), then Eq. (17) is still valid after the OWO procedure.

Proof. As the two networks start from equivalent states, U_o R_z = C_z in Eq. (31) can be rewritten as

    U_o [ A_e 0 ; 0 I ] R_x [ A_e 0 ; 0 I ]^T = C_x [ A_e 0 ; 0 I ]^T    (36)

It can be simplified further as

    U_o [ A_e 0 ; 0 I ] R_x = C_x    (37)

Comparing this with W_o R_x = C_x, we have

    W_o = U_o [ A_e 0 ; 0 I ]    (38)

which means condition (17) is still valid.

The Sensitivity of BP to Affine Transforms

For the BP procedure, in order to keep equivalent states in the two networks, the gradients must satisfy

    g_hi = g_dhi A_e    (39)

As we know, when B_1 in (12) is set to zero, CG degrades into the BP algorithm. Thus the analysis for CG is also valid for BP. Condition (39) won't be satisfied after BP, so BP is also sensitive to input affine transforms. In OWO-BP training (Chen, Manry, & Chandrasekaran 1999), we alternately use OWO to find the output weights and BP to modify the hidden weights. Because of the sensitivity of BP to affine transforms, OWO-BP has a similar sensitivity.

Effects on Newton Related Methods

Generally, it is much more difficult to find the hidden weights in neural networks. Due to its high convergence rate, Newton's method is preferred for small scale problems (Battiti 1992). The hidden weight optimization (HWO) proposed in (Chen, Manry, & Chandrasekaran 1999) can be considered a Newton related method. To ensure equivalent states in the two networks, the hidden weights and thresholds must satisfy (16) and (17), and correspondingly the hidden weight changes in the two networks must satisfy

    Δw_hi = Δu_hi A_e    (40)

Rearranging Δw_hi in the form of a column vector

    ΔW = [Δw_hi(1, 1), ..., Δw_hi(N_h, N+1)]^T    (41)

and denoting the corresponding column vector in the transformed network by ΔU, condition (40) is equivalent to

    ΔW = Ã^T ΔU    (42)

where

    Ã = diag{A_e, A_e, ..., A_e}    (43)

is the q × q block-diagonal matrix with N_h copies of A_e on its diagonal, and q = N_h(N + 1). The hidden
weights are updated by the law of (15). The Hessian matrix of the original network is the q × q matrix of second order partial derivatives,

    H = [ ∂²E / ∂w_hi(j, n) ∂w_hi(i, m) ]    (44)

with the weights ordered as in (41). The gradient vector used in the Newton method is

    g = [∂E/∂w_hi(1,1), ..., ∂E/∂w_hi(N_h, N+1)]^T = -(2/N_v) Σ_p [δ_p1 x_p1, ..., δ_p1 x_p(N+1), ..., δ_pNh x_p1, ..., δ_pNh x_p(N+1)]^T = -(2/N_v) Σ_p Λ_xp δ_p    (45)

where Λ_xp = diag{x_p1, ..., x_p(N+1), ..., x_p1, ..., x_p(N+1)} is a q × q matrix containing N_h copies of X_p on its diagonal, and (with a slight abuse of notation, δ_p here denoting a q × 1 vector)

    δ_p = [δ_p1, ..., δ_p1, ..., δ_pNh, ..., δ_pNh]^T    (46)

in which each δ_pj is repeated (N + 1) times. Similarly, in the transformed network,

    g_d = [∂E/∂u_hi(1,1), ..., ∂E/∂u_hi(N_h, N+1)]^T = -(2/N_v) Σ_p [δ_dp1 z_p1, ..., δ_dpNh z_p(N+1)]^T = -(2/N_v) Σ_p Ã Λ_xp δ_dp    (47)

with

    δ_dp = [δ_dp1, ..., δ_dp1, ..., δ_dpNh, ..., δ_dpNh]^T    (48)

From Eqs. (44) and (45), the q × q Hessian matrix in the original network is

    H = -(2/N_v) Σ_p Λ_xp [ ∂δ_p / ∂W^T ]    (49)

where ∂δ_p/∂W^T is the q × q Jacobian of the vector δ_p with respect to the hidden weights and thresholds.
In the rightmost matrix of (49), every (N + 1) rows, from the ((i-1)(N+1)+1)-th row to the i(N+1)-th row, are exactly the same, for i = 1, ..., N_h. The elements are

    ∂δ_pi / ∂w_hi(j, n) = -γ_pij x_pn    (50)

where i, j = 1, ..., N_h, i ≠ j, n = 1, ..., N + 1, and

    γ_pij = Σ_{m=1}^{M} w_oh(m, j) w_oh(m, i) f'_j f'_i    (51)

It is easy to verify that γ_pij = γ_pji. For the case i = j,

    ∂δ_pi / ∂w_hi(i, n) = -γ_pii x_pn    (52)

with

    γ_pii = Σ_{m=1}^{M} [ w²_oh(m, i)(f'_i)² - (t_pm - y_pm) w_oh(m, i) f''_i ]    (53)

Here, f'_i and f''_i are shorthand for the first and second order derivatives of the sigmoid function f(net_pi). So we have

    H = (2/N_v) Σ_p Λ_xp Ψ_p    (54)

where

    Ψ_p = [ γ_p11 X̄_p ... γ_p1Nh X̄_p ; ... ; γ_pNh1 X̄_p ... γ_pNhNh X̄_p ]    (55)

and every row of the (N+1) × (N+1) matrix X̄_p is just the vector X_p^T. Similarly, for the transformed network, the corresponding Hessian matrix is

    H_d = (2/N_v) Σ_p Λ_zp [ γ_d11 Z̄_p ... γ_d1Nh Z̄_p ; ... ; γ_dNh1 Z̄_p ... γ_dNhNh Z̄_p ]    (56)

where every row of the (N+1) × (N+1) matrix Z̄_p is just the vector Z_p^T. Using the fact that

    Z̄_p = X̄_p A_e^T    (57)

Eq. (56) becomes

    H_d = (2/N_v) Ã { Σ_p Λ_xp Ψ_dp } Ã^T    (58)

with

    Ψ_dp = [ γ_d11 X̄_p ... γ_d1Nh X̄_p ; ... ; γ_dNh1 X̄_p ... γ_dNhNh X̄_p ]    (59)

Theorem 1. If two networks are initially equivalent, and their input vectors are related via a nonsingular affine transform, then the networks are still equivalent after Newton training.

Proof. When the two networks are in equivalent states, condition (16) is satisfied and we have

    δ_p = δ_dp,  Ψ_p = Ψ_dp    (60)

so the change of the hidden weights and thresholds in the transformed network is

    ΔU = -H_d^{-1} g_d
       = { (2/N_v) Ã (Σ_p Λ_xp Ψ_dp) Ã^T }^{-1} { (2/N_v) Σ_p Ã Λ_xp δ_dp }
       = (Ã^T)^{-1} { (2/N_v) Σ_p Λ_xp Ψ_p }^{-1} { (2/N_v) Σ_p Λ_xp δ_p }
       = -(Ã^T)^{-1} H^{-1} g = (Ã^T)^{-1} ΔW    (61)

So the required condition (42) is satisfied after the Newton training procedure, which means that condition (16) is valid again.

Numerical Results

In the experiments, we use the unbiasing procedure as an example of the affine transform, and verify the theoretical results on two remote sensing data sets. Inputs for training data set Oh7.tra are VV and HH polarization at L 30, 40 deg, C 10, 30, 40, 50, 60 deg, and X 30, 40, 50 deg (Oh, Sarabandi, & Ulaby 1992). The corresponding desired outputs are Θ =
{σ, l, m_v}, where σ is the rms surface height, l is the surface correlation length, and m_v is the volumetric soil moisture content in g/cm³. There are 20 inputs and 3 outputs, and we use 20 hidden units. Training data set Twod.tra contains simulated data based on models from backscattering measurements (Dawson, Fung, & Manry 1993). This training file is used in the task of inverting the surface scattering parameters of an inhomogeneous layer above a homogeneous half space, where both interfaces are randomly rough. The parameters to be inverted are the effective permittivity of the surface, the normalized rms height, the normalized surface correlation length, the optical depth, and the single scattering albedo of the inhomogeneous irregular layer, from the backscattering measurements. The data set has 8 inputs, 7 outputs, and 1768 patterns. There are 10 hidden units.

Figure 2: The sensitivity of CG to affine transforms ((a) Oh7 data set; (b) Twod data set)
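The qualitative contrast the figures show can be reproduced in miniature. The sketch below is an illustrative stand-in, not the paper's experiment: it replaces the MLP with a linear least-squares model, for which the gradient and Hessian are available in closed form, and checks that one gradient step breaks the equivalence condition of Eq. (16) (here w = A_e^T u, in column-vector form) while one Newton step preserves it.

```python
import numpy as np

rng = np.random.default_rng(4)
N, N_v = 3, 40
X = np.hstack([rng.standard_normal((N_v, N)), np.ones((N_v, 1))])  # extended inputs
t = rng.standard_normal(N_v)                                       # targets

# Nonsingular extended affine transform A_e of Eq. (4); Z holds the transformed inputs.
A = rng.standard_normal((N, N)) + 3 * np.eye(N)
b = rng.standard_normal(N)
A_e = np.block([[A, b[:, None]], [np.zeros((1, N)), np.ones((1, 1))]])
Z = X @ A_e.T

def grad(w, inp):                 # gradient of E(w) = (1/N_v) ||t - inp @ w||^2
    return -2 * inp.T @ (t - inp @ w) / N_v

def hess(inp):                    # Hessian of the same E(w)
    return 2 * inp.T @ inp / N_v

u = rng.standard_normal(N + 1)    # weights of the transformed model
w = A_e.T @ u                     # equivalent original weights: X @ w == Z @ u

# One gradient step with the same learning factor breaks equivalence ...
lr = 0.1
w_gd, u_gd = w - lr * grad(w, X), u - lr * grad(u, Z)
assert not np.allclose(w_gd, A_e.T @ u_gd)

# ... while one Newton step preserves it.
w_nt = w - np.linalg.solve(hess(X), grad(w, X))
u_nt = u - np.linalg.solve(hess(Z), grad(u, Z))
assert np.allclose(w_nt, A_e.T @ u_nt)
```

The mechanism is the same one the analysis identifies: the transformed gradient picks up a factor of A_e, which the Newton step cancels through the inverse Hessian but a fixed learning factor does not.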
Figure 3: The effect of affine transforms on OWO-BP ((a) Oh7 data set; (b) Twod data set)

Figure 4: The invariance of OWO-HWO to affine transforms ((a) Oh7 data set; (b) Twod data set)

For both data sets, the original and transformed networks start from equivalent states and are trained for 200 iterations. From figures 2 and 3, we can see that CG and BP are quite sensitive to the affine transform: even though the two networks start from the same initial states, the difference between their training curves is quite large. For the Newton related OWO-HWO method, the training curves of the two networks in figure 4 are almost the same; the slight differences are caused by finite numerical precision.

Summary and Discussion

In this paper, we analyze the effects of affine transforms on different training algorithms. Using input unbiasing as an example, we show that CG and BP are quite sensitive to input affine transforms. As a result, the input unbiasing strategy helps to accelerate the training of these gradient algorithms. As for Newton methods, the network with original inputs and the network with transformed inputs have the same states throughout the training procedure as long as they start from equivalent states. This implies that OWO-HWO is more powerful than non-Newton algorithms when dealing with correlated or biased data. That is, in Newton methods, the benefits of feature de-correlation or unbiasing have already been realized by proper initial weights. This gives a connection between pre-processing and the weight initialization procedure. However, it also indicates a trade-off: sometimes the effects of pre-processing may be cancelled out by the initial weights. Naturally, further investigation can be done on whether there is a similar connection between pre-processing and other techniques, for example the setting of the learning factor. With theoretical analyses of these techniques, better neural network training strategies can be developed.

References

Battiti, R. 1992. First- and second-order methods for learning: between steepest descent and Newton's
method. Neural Computation 4(2).
Brause, R. W., and Rippl, M. 1998. Noise suppressing sensor encoding and neural signal orthonormalization. IEEE Trans. Neural Networks 9(4).
Raudys, Š. 2001. Statistical and Neural Classifiers: An Integrated Approach to Design. Berlin, Germany: Springer-Verlag.
Chen, H. H.; Manry, M. T.; and Chandrasekaran, H. 1999. A neural network training algorithm utilizing multiple sets of linear equations. Neurocomputing 25(1-3):55-72.
Dawson, M. S.; Fung, A. K.; and Manry, M. T. 1993. Surface parameter retrieval using fast learning neural networks. Remote Sensing Reviews 7(1):1-18.
Fletcher, R. 1987. Practical Methods of Optimization. Chichester, NY: John Wiley & Sons, second edition.
Haykin, S. 1996. Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice Hall, third edition.
Haykin, S. 1999. Neural Networks: A Comprehensive Foundation. Englewood Cliffs, NJ: Prentice Hall, second edition.
LeCun, Y.; Bottou, L.; Orr, G. B.; and Muller, K. R. 1998. Efficient backprop. In Orr, G. B., and Muller, K. R., eds., Neural Networks: Tricks of the Trade. Berlin, Germany: Springer-Verlag.
Magoulas, G. D.; Vrahatis, M. N.; and Androulakis, G. S. 1999. Improving the convergence of the backpropagation algorithm using learning adaptation methods. Neural Computation 11(8).
Møller, M. 1997. Efficient Training of Feed-forward Neural Networks. PhD Dissertation, Aarhus University, Denmark.
Oh, Y.; Sarabandi, K.; and Ulaby, F. 1992. An empirical model and an inversion technique for radar scattering from bare soil surfaces. IEEE Trans. Geosci. Remote Sensing 30.
Rigler, A. K.; Irvine, J. M.; and Vogl, T. P. 1991. Rescaling of variables in back propagation learning. Neural Networks 4.
Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1986. Learning internal representations by error propagation. In Parallel Distributed Processing, volume 1. Cambridge, MA: MIT Press.
Saarinen, S.; Bramley, R.; and Cybenko, G. 1993. Ill-conditioning in neural network training problems. SIAM Journal on Scientific Computing 14.
Vapnik, V. N. 1995. The Nature of Statistical Learning Theory. New York: Springer-Verlag.
Yu, C.; Manry, M. T.; and Li, J.
2004. Invariance of MLP training to input feature de-correlation. In Proceedings of the 17th International Conference of the Florida AI Research Society.
More informationGeneralized Coiflets: A New Family of Orthonormal Wavelets
Generalized Coiflets A New Family of Orthonormal Wavelets Dong Wei, Alan C Bovik, and Brian L Evans Laboratory for Image and Video Engineering Deartment of Electrical and Comuter Engineering The University
More informationParametrization of All Stabilizing Controllers Subject to Any Structural Constraint
49th IEEE Conference on Decision and Control December 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Parametrization of All Stabilizing Controllers Subject to Any Structural Constraint Michael C Rotkowitz
More informationComparison of the Complex Valued and Real Valued Neural Networks Trained with Gradient Descent and Random Search Algorithms
Comparison of the Complex Valued and Real Valued Neural Networks rained with Gradient Descent and Random Search Algorithms Hans Georg Zimmermann, Alexey Minin 2,3 and Victoria Kusherbaeva 3 - Siemens AG
More information216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are
Numer. Math. 68: 215{223 (1994) Numerische Mathemati c Sringer-Verlag 1994 Electronic Edition Bacward errors for eigenvalue and singular value decomositions? S. Chandrasearan??, I.C.F. Isen??? Deartment
More informationCOMPARISON OF VARIOUS OPTIMIZATION TECHNIQUES FOR DESIGN FIR DIGITAL FILTERS
NCCI 1 -National Conference on Comutational Instrumentation CSIO Chandigarh, INDIA, 19- March 1 COMPARISON OF VARIOUS OPIMIZAION ECHNIQUES FOR DESIGN FIR DIGIAL FILERS Amanjeet Panghal 1, Nitin Mittal,Devender
More informationApproach to canonical pressure profiles in stellarators
1 TH/P8-4 Aroach to canonical ressure rofiles in stellarators Yu.N. Dnestroskij 1), A.. Melniko 1), L.G. Elisee 1), S.E. Lysenko 1), K.A. Razumoa 1),.D. Pustoito 1), A. Fujisawa ), T. Minami ), and J.H.
More informationPCA fused NN approach for drill wear prediction in drilling mild steel specimen
PCA fused NN aroach for drill wear rediction in drilling mild steel secimen S.S. Panda Deartment of Mechanical Engineering, Indian Institute of Technology Patna, Bihar-8000, India, Tel No.: 9-99056477(M);
More informationOptical Fibres - Dispersion Part 1
ECE 455 Lecture 05 1 Otical Fibres - Disersion Part 1 Stavros Iezekiel Deartment of Electrical and Comuter Engineering University of Cyrus HMY 445 Lecture 05 Fall Semester 016 ECE 455 Lecture 05 Otical
More informationOn Line Parameter Estimation of Electric Systems using the Bacterial Foraging Algorithm
On Line Parameter Estimation of Electric Systems using the Bacterial Foraging Algorithm Gabriel Noriega, José Restreo, Víctor Guzmán, Maribel Giménez and José Aller Universidad Simón Bolívar Valle de Sartenejas,
More informationA Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with Consequences for the Training-Test Split
A Bound on the Error of Cross Validation Using the Aroximation and Estimation Rates, with Consequences for the Training-Test Slit Michael Kearns AT&T Bell Laboratories Murray Hill, NJ 7974 mkearns@research.att.com
More informationOpen-air traveling-wave thermoacoustic generator
rticle Engineering Thermohysics July 11 Vol.56 No.: 167 173 doi: 1.17/s11434-11-447-x SPECIL TOPICS: Oen-air traeling-wae thermoacoustic generator XIE XiuJuan 1*, GO Gang 1,, ZHOU Gang 1 & LI Qing 1 1
More informationHomogeneous and Inhomogeneous Model for Flow and Heat Transfer in Porous Materials as High Temperature Solar Air Receivers
Excert from the roceedings of the COMSOL Conference 1 aris Homogeneous and Inhomogeneous Model for Flow and Heat ransfer in orous Materials as High emerature Solar Air Receivers Olena Smirnova 1 *, homas
More informationA Comparison between Biased and Unbiased Estimators in Ordinary Least Squares Regression
Journal of Modern Alied Statistical Methods Volume Issue Article 7 --03 A Comarison between Biased and Unbiased Estimators in Ordinary Least Squares Regression Ghadban Khalaf King Khalid University, Saudi
More informationVerifying Two Conjectures on Generalized Elite Primes
1 2 3 47 6 23 11 Journal of Integer Sequences, Vol. 12 (2009), Article 09.4.7 Verifying Two Conjectures on Generalized Elite Primes Xiaoqin Li 1 Mathematics Deartment Anhui Normal University Wuhu 241000,
More informationGeneral Linear Model Introduction, Classes of Linear models and Estimation
Stat 740 General Linear Model Introduction, Classes of Linear models and Estimation An aim of scientific enquiry: To describe or to discover relationshis among events (variables) in the controlled (laboratory)
More informationRANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES
RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES AARON ZWIEBACH Abstract. In this aer we will analyze research that has been recently done in the field of discrete
More informationFuzzy Automata Induction using Construction Method
Journal of Mathematics and Statistics 2 (2): 395-4, 26 ISSN 1549-3644 26 Science Publications Fuzzy Automata Induction using Construction Method 1,2 Mo Zhi Wen and 2 Wan Min 1 Center of Intelligent Control
More informationChristopher Godwin Udomboso [a],*
Progress in Alied Mathematics Vol. 5, No., 03,. 40 47 DOI: 0.3968/j.am.955803050.994 ISSN 95-5X Print ISSN 95-58 Online www.cscanada.net www.cscanada.org On Some Proerties of a Hetereogeneous Transfer
More informationBayesian Model Averaging Kriging Jize Zhang and Alexandros Taflanidis
HIPAD LAB: HIGH PERFORMANCE SYSTEMS LABORATORY DEPARTMENT OF CIVIL AND ENVIRONMENTAL ENGINEERING AND EARTH SCIENCES Bayesian Model Averaging Kriging Jize Zhang and Alexandros Taflanidis Why use metamodeling
More informationWhy do non-gauge invariant terms appear in the vacuum polarization tensor?
Why do non-gauge inariant terms aear in the acuum olarization tensor? Author: Dan Solomon. Email: dsolom@uic.edu Abstract. It is well known that quantum field theory at the formal leel is gauge inariant.
More information4. Multilayer Perceptrons
4. Multilayer Perceptrons This is a supervised error-correction learning algorithm. 1 4.1 Introduction A multilayer feedforward network consists of an input layer, one or more hidden layers, and an output
More informationScaling Multiple Point Statistics for Non-Stationary Geostatistical Modeling
Scaling Multile Point Statistics or Non-Stationary Geostatistical Modeling Julián M. Ortiz, Steven Lyster and Clayton V. Deutsch Centre or Comutational Geostatistics Deartment o Civil & Environmental Engineering
More informationFactor Analysis of Convective Heat Transfer for a Horizontal Tube in the Turbulent Flow Region Using Artificial Neural Network
COMPUTATIONAL METHODS IN ENGINEERING AND SCIENCE EPMESC X, Aug. -3, 6, Sanya, Hainan,China 6 Tsinghua University ess & Sringer-Verlag Factor Analysis of Convective Heat Transfer for a Horizontal Tube in
More informationAnalysis of Group Coding of Multiple Amino Acids in Artificial Neural Network Applied to the Prediction of Protein Secondary Structure
Analysis of Grou Coding of Multile Amino Acids in Artificial Neural Networ Alied to the Prediction of Protein Secondary Structure Zhu Hong-ie 1, Dai Bin 2, Zhang Ya-feng 1, Bao Jia-li 3,* 1 College of
More informationAn artificial neural networks (ANNs) model is a functional abstraction of the
CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly
More informationProgress In Electromagnetics Research Symposium 2005, Hangzhou, China, August
Progress In Electromagnetics Research Symposium 2005, Hangzhou, China, August 22-26 115 A Noel Technique for Localizing the Scatterer in Inerse Profiling of Two Dimensional Circularly Symmetric Dielectric
More informationMODELING OF UNSTEADY AERODYNAMIC CHARACTERISTCS OF DELTA WINGS.
IAS00 ONGRESS MODEING OF UNSTEADY AERODYNAMI HARATERISTS OF DETA WINGS. Jouannet hristoher, rus Petter inköings Uniersity eywords: Delta wings, Unsteady, Modeling, Preliminary design, Aerodynamic coefficient.
More informationAI*IA 2003 Fusion of Multiple Pattern Classifiers PART III
AI*IA 23 Fusion of Multile Pattern Classifiers PART III AI*IA 23 Tutorial on Fusion of Multile Pattern Classifiers by F. Roli 49 Methods for fusing multile classifiers Methods for fusing multile classifiers
More informationMetrics Performance Evaluation: Application to Face Recognition
Metrics Performance Evaluation: Alication to Face Recognition Naser Zaeri, Abeer AlSadeq, and Abdallah Cherri Electrical Engineering Det., Kuwait University, P.O. Box 5969, Safat 6, Kuwait {zaery, abeer,
More informationCalculation of eigenvalue and eigenvector derivatives with the improved Kron s substructuring method
Structural Engineering and Mechanics, Vol. 36, No. 1 (21) 37-55 37 Calculation of eigenvalue and eigenvector derivatives with the imroved Kron s substructuring method *Yong Xia 1, Shun Weng 1a, You-Lin
More informationBiomechanical Analysis of Contemporary Throwing Technique Theory
MAEC Web of Conferences, 0 5 04 ( 05 DOI: 0.05/ matec conf / 0 5 0 5 04 C Owned by the authors, ublished by EDP Sciences, 05 Biomechanical Analysis of Contemorary hrowing echnique heory Jian Chen Deartment
More informationHANDWRITTEN SIGNATURE VERIFICATIONS USING ADAPTIVE RESONANCE THEORY TYPE-2 (ART-2) NET
Volume 3, No. 8, August 2012 Journal of Global Research in Comuter Science RESEARCH PAPER Aailable Online at.jgrcs.info HANDWRITTEN SIGNATURE VERIFICATIONS USING ADAPTIVE RESONANCE THEORY TYPE-2 (ART-2)
More informationNeural Networks and the Back-propagation Algorithm
Neural Networks and the Back-propagation Algorithm Francisco S. Melo In these notes, we provide a brief overview of the main concepts concerning neural networks and the back-propagation algorithm. We closely
More informationDIFFERENTIAL evolution (DE) [3] has become a popular
Self-adative Differential Evolution with Neighborhood Search Zhenyu Yang, Ke Tang and Xin Yao Abstract In this aer we investigate several self-adative mechanisms to imrove our revious work on [], which
More informationImproved Capacity Bounds for the Binary Energy Harvesting Channel
Imroved Caacity Bounds for the Binary Energy Harvesting Channel Kaya Tutuncuoglu 1, Omur Ozel 2, Aylin Yener 1, and Sennur Ulukus 2 1 Deartment of Electrical Engineering, The Pennsylvania State University,
More informationRobust Predictive Control of Input Constraints and Interference Suppression for Semi-Trailer System
Vol.7, No.7 (4),.37-38 htt://dx.doi.org/.457/ica.4.7.7.3 Robust Predictive Control of Inut Constraints and Interference Suression for Semi-Trailer System Zhao, Yang Electronic and Information Technology
More informationProximal methods for the latent group lasso penalty
Proximal methods for the latent grou lasso enalty The MIT Faculty has made this article oenly available. Please share how this access benefits you. Your story matters. Citation As Published Publisher Villa,
More informationDesign of NARMA L-2 Control of Nonlinear Inverted Pendulum
International Research Journal of Alied and Basic Sciences 016 Available online at www.irjabs.com ISSN 51-838X / Vol, 10 (6): 679-684 Science Exlorer Publications Design of NARMA L- Control of Nonlinear
More informationApproximating min-max k-clustering
Aroximating min-max k-clustering Asaf Levin July 24, 2007 Abstract We consider the roblems of set artitioning into k clusters with minimum total cost and minimum of the maximum cost of a cluster. The cost
More informationNeural network models for river flow forecasting
Neural network models for river flow forecasting Nguyen Tan Danh Huynh Ngoc Phien* and Ashim Das Guta Asian Institute of Technology PO Box 4 KlongLuang 220 Pathumthani Thailand Abstract In this study back
More informationBack-Propagation Algorithm. Perceptron Gradient Descent Multilayered neural network Back-Propagation More on Back-Propagation Examples
Back-Propagation Algorithm Perceptron Gradient Descent Multilayered neural network Back-Propagation More on Back-Propagation Examples 1 Inner-product net =< w, x >= w x cos(θ) net = n i=1 w i x i A measure
More informationArtificial Neural Network
Artificial Neural Network Contents 2 What is ANN? Biological Neuron Structure of Neuron Types of Neuron Models of Neuron Analogy with human NN Perceptron OCR Multilayer Neural Network Back propagation
More informationForced Oscillation Source Location via Multivariate Time Series Classification
Forced Oscillation Source Location via ultivariate ime Series Classification Yao eng 1,2, Zhe Yu 1, Di Shi 1, Desong Bian 1, Zhiwei Wang 1 1 GEIRI North America, San Jose, CA 95134 2 North Carolina State
More informationA Novel 2-D Model Approach for the Prediction of Hourly Solar Radiation
A Novel 2-D Model Approach for the Prediction of Hourly Solar Radiation F Onur Hocao glu, Ö Nezih Gerek, and Mehmet Kurban Anadolu University, Dept of Electrical and Electronics Eng, Eskisehir, Turkey
More informationEvaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models
Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models Ketan N. Patel, Igor L. Markov and John P. Hayes University of Michigan, Ann Arbor 48109-2122 {knatel,imarkov,jhayes}@eecs.umich.edu
More informationDETC2003/DAC AN EFFICIENT ALGORITHM FOR CONSTRUCTING OPTIMAL DESIGN OF COMPUTER EXPERIMENTS
Proceedings of DETC 03 ASME 003 Design Engineering Technical Conferences and Comuters and Information in Engineering Conference Chicago, Illinois USA, Setember -6, 003 DETC003/DAC-48760 AN EFFICIENT ALGORITHM
More informationA Derivation of Free-rotation in the Three-dimensional Space
International Conference on Adanced Information and Communication echnology for Education (ICAICE 013) A Deriation of Free-rotation in the hree-dimensional Space Legend Chen 1 1 Uniersity of Shanghai for
More informationMeshless Methods for Scientific Computing Final Project
Meshless Methods for Scientific Comuting Final Project D0051008 洪啟耀 Introduction Floating island becomes an imortant study in recent years, because the lands we can use are limit, so eole start thinking
More informationRotations in Curved Trajectories for Unconstrained Minimization
Rotations in Curved rajectories for Unconstrained Minimization Alberto J Jimenez Mathematics Deartment, California Polytechnic University, San Luis Obiso, CA, USA 9407 Abstract Curved rajectories Algorithm
More informationAsymptotic Normality of an Entropy Estimator with Exponentially Decaying Bias
Asymptotic Normality of an Entropy Estimator with Exponentially Decaying Bias Zhiyi Zhang Department of Mathematics and Statistics Uniersity of North Carolina at Charlotte Charlotte, NC 28223 Abstract
More informationImproved Identification of Nonlinear Dynamic Systems using Artificial Immune System
Imroved Identification of Nonlinear Dnamic Sstems using Artificial Immune Sstem Satasai Jagannath Nanda, Ganaati Panda, Senior Member IEEE and Babita Majhi Deartment of Electronics and Communication Engineering,
More informationUpper Bound on Pattern Storage in Feedforward Networks
Proceedings of International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007 Upper Bound on Pattern Storage in Feedforward Networks Pramod L Narasimha, Michael T Manry and
More informationResearch Article Controllability of Linear Discrete-Time Systems with Both Delayed States and Delayed Inputs
Abstract and Alied Analysis Volume 203 Article ID 97546 5 ages htt://dxdoiorg/055/203/97546 Research Article Controllability of Linear Discrete-Time Systems with Both Delayed States and Delayed Inuts Hong
More information