
NBN Algorithm

Bogdan M. Wilamowski, Auburn University
Hao Yu, Auburn University
Nicholas Cotton, Auburn University

1 Introduction
2 Computational Fundamentals
  2.1 Definition of Basic Concepts in Neural Network Training
  2.2 Jacobian Matrix Computation
3 Training Arbitrarily Connected Neural Networks
  3.1 Importance of Training Arbitrarily Connected Neural Networks
  3.2 Creation of Jacobian Matrix for Arbitrarily Connected Neural Networks
  3.3 Solving Problems with Arbitrarily Connected Neural Networks
4 Forward-Only Computation
  4.1 Derivation
  4.2 Calculation of the δ Matrix for FCC Architectures
  4.3 Training Arbitrarily Connected Neural Networks
  4.4 Experimental Results
5 Direct Computation of Quasi-Hessian Matrix and Gradient Vector
  5.1 Memory Limitation in the Levenberg-Marquardt Algorithm
  5.2 Review of Matrix Algebra
  5.3 Quasi-Hessian Matrix Computation
  5.4 Gradient Vector Computation
  5.5 Jacobian Row Computation
  5.6 Comparison of Memory and Time Consumption
6 Conclusion
References

1 Introduction

Since the development of the EBP (error backpropagation) algorithm for training neural networks, many attempts have been made to improve the learning process. There are some well-known methods, such as momentum or a variable learning rate, and there are less known methods which significantly accelerate the learning rate [WT9,AW95,WCM99,W9,WH]. The recently developed NBN (neuron-by-neuron) algorithm [WCHK7,WCKD8,YW9] is very efficient for neural network training. Compared with the well-known Levenberg-Marquardt algorithm (introduced in an earlier chapter) [L44,M6], the NBN algorithm has several advantages: (1) the ability to handle arbitrarily connected neural networks; (2) forward-only computation (without the backpropagation process); and (3) direct computation of the quasi-Hessian matrix (no need to compute and store the Jacobian matrix). This chapter is organized around these three advantages of the NBN algorithm.

2 Computational Fundamentals

Before the derivation, let us introduce some commonly used indices in this chapter: p is the index of patterns, from 1 to np, where np is the number of patterns; m is the index of outputs, from 1 to no, where no is the number of outputs;

j and k are the indices of neurons, from 1 to nn, where nn is the number of neurons; i is the index of neuron inputs, from 1 to ni, where ni is the number of inputs, which may vary for different neurons. Other indices will be explained where they are used.

The sum square error (SSE) E is defined to evaluate the training process. For all patterns and outputs, it is calculated as

    E = \sum_{p=1}^{np} \sum_{m=1}^{no} e_{p,m}^2    (1)

where e_{p,m} is the error at output m, defined as

    e_{p,m} = o_{p,m} - d_{p,m}    (2)

where d_{p,m} and o_{p,m} are the desired output and the actual output, respectively, at network output m for training pattern p. In all algorithms besides the NBN algorithm, the same computations are repeated for one pattern at a time. Therefore, in order to simplify notation, the index p for patterns will be skipped in the following derivations, unless it is essential.

2.1 Definition of Basic Concepts in Neural Network Training

Let us consider neuron j with ni inputs, as shown in Figure 1. If neuron j is in the first layer, all its inputs are connected to the inputs of the network; otherwise, its inputs can be connected to outputs of other neurons or to network inputs if connections across layers are allowed.

Node y is an important and flexible concept. It can be y_{j,i}, meaning the ith input of neuron j; it can also be used as y_j to denote the output of neuron j. In this chapter, if node y has one index (neuron), it is a neuron output node; if it has two indices (neuron and input), it is a neuron input node.

Figure 1 Connection of a neuron j with the rest of the network. Nodes y_{j,i} could represent network inputs or outputs of other neurons. F_{m,j}(y_j) is the nonlinear relationship between the neuron output node y_j and the network output o_m.
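As a small illustration of Equations (1) and (2), the sketch below (in Python, with assumed array names that are not part of the original text) computes the error matrix and the SSE for a batch of patterns.

    import numpy as np

    def sse(outputs, desired):
        """Sum square error over all patterns and outputs, Eq. (1).

        outputs, desired: arrays of shape (np, no), where np is the number
        of patterns and no is the number of outputs.
        """
        e = outputs - desired          # e_{p,m} = o_{p,m} - d_{p,m}, Eq. (2)
        return np.sum(e ** 2), e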

The output node of neuron j is calculated using

    y_j = f_j(net_j)    (3)

where f_j is the activation function of neuron j, and the net value net_j is the sum of the weighted input nodes of neuron j:

    net_j = \sum_{i=1}^{ni} w_{j,i} y_{j,i} + w_{j,0}    (4)

where y_{j,i} is the ith input node of neuron j, weighted by w_{j,i}, and w_{j,0} is the bias weight of neuron j. Using (4), one may notice that the derivative of net_j is

    \partial net_j / \partial w_{j,i} = y_{j,i}    (5)

and the slope s_j of the activation function f_j is

    s_j = \partial y_j / \partial net_j = \partial f_j(net_j) / \partial net_j    (6)

Between the output node y_j of a hidden neuron and the network output o_m there is a complex nonlinear relationship (Figure 1):

    o_m = F_{m,j}(y_j)    (7)

where o_m is the mth output of the network. The complexity of this nonlinear function F_{m,j}(y_j) depends on how many other neurons are between neuron j and network output m. If neuron j is at network output m, then o_m = y_j and F'_{m,j}(y_j) = 1, where F'_{m,j} is the derivative of the nonlinear relationship between neuron j and output m.

2.2 Jacobian Matrix Computation

The update rule of the Levenberg-Marquardt algorithm is [TM94]

    w_{n+1} = w_n - (J_n^T J_n + \mu I)^{-1} J_n^T e_n    (8)

where n is the index of iterations, μ is the combination coefficient, I is the identity matrix, and J is the Jacobian matrix (Figure 2).

From Figure 2, one may notice that, for every pattern p, there are no rows of the Jacobian matrix, where no is the number of network outputs. The number of columns is equal to the number of weights in the network, and the number of rows is equal to np × no.
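A minimal sketch of the update rule (8), assuming the Jacobian J and the error vector e for the current iteration are already available; the function name and signature are illustrative, not part of the original text.

    import numpy as np

    def lm_update(w, J, e, mu):
        """One Levenberg-Marquardt step, Eq. (8): w <- w - (J'J + mu*I)^-1 J'e."""
        n = w.size
        A = J.T @ J + mu * np.eye(n)     # quasi-Hessian plus damping term
        g = J.T @ e                      # gradient vector
        return w - np.linalg.solve(A, g)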

Figure 2 Structure of the Jacobian matrix: (1) the number of columns is equal to the number of weights, and (2) each row corresponds to a specified training pattern p and output m.

The elements of the Jacobian matrix can be calculated by

    \partial e_m / \partial w_{j,i} = (\partial e_m / \partial y_j)(\partial y_j / \partial net_j)(\partial net_j / \partial w_{j,i})    (9)

By combining (9) with (2), (5), (6), and (7), Equation (9) can be written as

    \partial e_m / \partial w_{j,i} = y_{j,i} s_j F'_{m,j}    (10)

In second-order algorithms, the parameter δ [N89,TM94] is defined to measure the EBP process as

    \delta_{m,j} = s_j F'_{m,j}    (11)

By combining (10) and (11), the elements of the Jacobian matrix can be calculated by

    \partial e_m / \partial w_{j,i} = y_{j,i} \delta_{m,j}    (12)

Using (12), in the backpropagation process, the error can be replaced by a unit value.

3 Training Arbitrarily Connected Neural Networks

The NBN algorithm introduced in this chapter is developed for training arbitrarily connected neural networks using the Levenberg-Marquardt update rule. Instead of layer-by-layer computation (introduced in an earlier chapter), the NBN algorithm performs the forward and backward computation based on NBN routings [WCHK7], which makes it suitable for arbitrarily connected neural networks.

3.1 Importance of Training Arbitrarily Connected Neural Networks

The traditional implementation of the Levenberg-Marquardt algorithm [TM94], such as the MATLAB neural network toolbox (MNNT), was adapted only for standard MLP (multilayer perceptron) networks, and it turns out that MLP networks are not efficient. Figure 3 shows the smallest structures that solve the parity-7 problem. The standard MLP network with one hidden layer (Figure 3a) needs at least eight neurons to find the solution. The BMLP (bridged multilayer perceptron) network (Figure 3b) can solve the problem with four neurons. The FCC (fully connected cascade) network (Figure 3c) is the most powerful one, and it requires only three neurons to get the solution. One may notice that the last two types of networks are better choices for efficient training, but they also require more challenging computation.

Figure 3 Smallest structures for solving the parity-7 problem: (a) standard MLP network (64 weights), (b) BMLP network (35 weights), and (c) FCC network (27 weights).

Figure 4 Five neurons in an arbitrarily connected network with two inputs, x1 and x2.

3.2 Creation of Jacobian Matrix for Arbitrarily Connected Neural Networks

In this section, the NBN algorithm for calculating the Jacobian matrix for arbitrarily connected feedforward neural networks is presented. The rest of the computations for weight updating follow the LM algorithm, as shown in Equation (8). In the forward computation, neurons are organized according to the direction of signal propagation, while in the backward computation, the analysis follows the backpropagation procedures.

Let us consider the arbitrarily connected network with one output shown in Figure 4. Using the NBN algorithm, the network topology can be described in a SPICE-like manner, one line per neuron; with the network inputs as nodes 1 and 2 and the neuron outputs as nodes 3 through 7, the description of Figure 4 is

    n1 [model] 3 1 2
    n2 [model] 4 1 2
    n3 [model] 5 3 4
    n4 [model] 6 4 5
    n5 [model] 7 3 5 6

Notice that each line corresponds to one neuron. The first part (n1 to n5) is the neuron name (Figure 4). The second part, [model], is the neuron model, such as bipolar, unipolar, or linear. Models are declared in separate lines, where the types of activation functions and the neuron gains are specified. The first node number in each line after the neuron model indicates the output node of the neuron, followed by its input nodes. Please notice that neurons must be ordered from inputs to outputs: for each given neuron, the neuron inputs must have smaller indices than its output.

The row elements of the Jacobian matrix for a given pattern are computed in the following three steps [WCKD8]:

1. Forward computation
2. Backward computation
3. Jacobian element computation

3.2.1 Forward Computation

In the forward computation, the neurons connected to the network inputs are processed first, so that their outputs can be used as inputs to the subsequent neurons. The following neurons are then processed as their input values become available. In other words, the selected computing sequence has to follow the concept of feedforward signal propagation. If a signal reaches the inputs of several neurons at the same time, then these neurons can be processed in any sequence.

In the example in Figure 4, there are two possible ways in which neurons can be processed in the forward direction: n1 n2 n3 n4 n5 or n2 n1 n3 n4 n5. The two procedures lead to different computing processes but exactly the same results. When the forward pass is concluded, two temporary vectors are stored: the vector y with the values of the signals on the neuron output nodes, and the vector s with the values of the slopes of the neuron activation functions, which are signal dependent.

3.2.2 Backward Computation

The sequence of the backward computation is opposite to the forward computation sequence. The process starts with the last neuron and continues toward the input. In the case of the network in Figure 4, there are two possible sequences (backpropagation paths): n5 n4 n3 n2 n1 or n5 n4 n3 n1 n2, and they also give the same results. To demonstrate the case, let us use the n5 n4 n3 n2 n1 sequence.

The vector δ represents signal propagation from a network output to the inputs of all other neurons. The size of this vector is equal to the number of neurons. For the output neuron n5, its sensitivity is initialized with its slope, δ_{1,5} = s_5. For neuron n4, the delta at n5 is propagated by w_{45}, the weight between n4 and n5, and then by the slope of neuron n4, so the delta parameter of n4 is δ_{1,4} = δ_{1,5} w_{45} s_4. For neuron n3, the delta parameters of n4 and n5 are propagated to the output of neuron n3, summed, and then multiplied by the slope of neuron n3, giving δ_{1,3} = (δ_{1,5} w_{35} + δ_{1,4} w_{34}) s_3. By the same procedure, δ_{1,2} = (δ_{1,3} w_{23} + δ_{1,4} w_{24}) s_2 and δ_{1,1} = (δ_{1,3} w_{13} + δ_{1,5} w_{15}) s_1. After the backpropagation process reaches neuron n1, all the elements of the array δ are obtained.

3.2.3 Jacobian Element Computation

After the forward and backward computations, all the neuron outputs y and the vector δ are known. Then, using Equation (12), the Jacobian row for a given pattern can be obtained. By applying all training patterns, the whole Jacobian matrix can be calculated and stored.

For arbitrarily connected neural networks, the NBN algorithm for Jacobian matrix computation can be organized as shown in Figure 5.

    for all patterns (np)
      % Forward computation
      for all neurons (nn)
        for all weights of the neuron (nx)
          calculate net;                      % Eq. (4)
        calculate neuron output;              % Eq. (3)
        calculate neuron slope;               % Eq. (6)
      for all outputs (no)
        calculate error;                      % Eq. (2)
        % Backward computation
        initialize delta as slope;
        for all neurons starting from output neurons (nn)
          for the weights connected to other neurons (ny)
            multiply delta through weights
            sum the backpropagated delta at proper nodes
          multiply delta by slope (for hidden neurons);
        related Jacobian row computation;     % Eq. (12)

Figure 5 Pseudocode of the NBN algorithm for Jacobian matrix computation.
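The sketch below implements the three steps for the network of Figure 4, assuming the connectivity used in the update equations above (n1 and n2 fed by the network inputs; n3 fed by n1 and n2; n4 by n2 and n3; n5 by n1, n3, and n4); the data layout, function name, and tanh activation are assumptions, not part of the original text.

    import numpy as np

    # Hypothetical description of the Figure 4 network: for each neuron, the
    # list of signal sources (network inputs 'x1', 'x2' or earlier neurons).
    topology = {1: ['x1', 'x2'], 2: ['x1', 'x2'],
                3: [1, 2], 4: [2, 3], 5: [1, 3, 4]}

    def nbn_jacobian_row(weights, x):
        """Forward and backward NBN pass for the single-output network above.

        weights[j] = [bias, w_1, w_2, ...] for neuron j, ordered as topology[j].
        Returns neuron outputs y, slopes s, the delta vector, and the Jacobian
        row (one derivative per weight, bias first for each neuron), Eq. (12).
        """
        y, s, inputs = {}, {}, {}
        for j in sorted(topology):                       # forward computation
            vals = [x[src] if isinstance(src, str) else y[src]
                    for src in topology[j]]
            net = weights[j][0] + np.dot(weights[j][1:], vals)   # Eq. (4)
            y[j] = np.tanh(net)                                  # Eq. (3)
            s[j] = 1.0 - y[j] ** 2                               # Eq. (6), tanh slope
            inputs[j] = vals
        delta = {5: s[5]}                                # backward computation
        for j in sorted(topology, reverse=True):
            if j == 5:
                continue
            back = sum(delta[k] * weights[k][1 + topology[k].index(j)]
                       for k in topology if j in topology[k])
            delta[j] = back * s[j]
        row = []                                         # Eq. (12): y_{j,i} * delta_j
        for j in sorted(topology):
            row.extend([delta[j]] + [delta[j] * v for v in inputs[j]])
        return y, s, delta, np.array(row)

Called once per training pattern, the returned rows stacked on top of each other form the Jacobian matrix used in Equation (8).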

3.3 Solving Problems with Arbitrarily Connected Neural Networks

3.3.1 Function Approximation Problem

Function approximation is often used in the nonlinear control realm of neural networks, for control surface prediction. In order to approximate an exponential peak surface z(x, y), training points are selected from the interval 0 to 4 as the training patterns. With only four neurons in an FCC network (as shown in Figure 6), the training result is presented in Figure 7.

3.3.2 Two-Spiral Problem

The two-spiral problem is considered a good benchmark for both training algorithms and training architectures [AS99]. Depending on the neural network architecture, different numbers of neurons are required for successful training. For example, using standard MLP networks with one hidden layer, 34 neurons are required for the two-spiral problem [PLI8], while with the FCC architecture it can be solved with only eight neurons using the NBN algorithm. NBN algorithms are not only much faster but can also train reduced-size networks which cannot be handled by the traditional EBP algorithm (see Table 1). For the EBP algorithm, the learning constant is 0.005 (the largest possible to avoid oscillation) and the momentum is 0.5; the maximum number of iterations is 1,000,000 for the EBP algorithm and 1,000 for the LM algorithm; the desired error is 0.01; all neurons are connected in FCC networks; there are 100 trials for each case.

Figure 6 Network used for training the function approximation problem; notice that the output neuron is a linear neuron with gain = 1.

Figure 7 Desired surface (a) and neural prediction (b) after training.

Table 1 Training results for the two-spiral problem, comparing EBP and NBN for several network sizes: success rate (%), average number of iterations, and average training time (s). The EBP algorithm fails on the smallest networks, which the NBN algorithm trains successfully.

4 Forward-Only Computation

The NBN procedure introduced in Section 3 requires both forward and backward computation. In particular, as shown in Figure 5, one may notice that for networks with multiple outputs the backpropagation process has to be repeated for each output. In this section, an improved NBN computation is introduced to overcome this problem by removing the backpropagation process from the computation of the Jacobian matrix.

4.1 Derivation

The concept of δ_{m,j} was described in Section 2.2. One may notice that δ_{m,j} can also be interpreted as a signal gain between the net input of neuron j and the network output m. Let us extend this concept to gain coefficients between all neurons in the network (Figures 8 and 10). The notation δ_{k,j} is an extension of Equation (11) and can be interpreted as the signal gain between neurons j and k; it is given by

    \delta_{k,j} = \partial F_{k,j}(y_j) / \partial net_j = (\partial F_{k,j}(y_j) / \partial y_j)(\partial y_j / \partial net_j) = F'_{k,j} s_j    (13)

where j and k are indices of neurons and F_{k,j}(y_j) is the nonlinear relationship between the output node of neuron k and the output node of neuron j.

Figure 8 Interpretation of δ_{k,j} = F'_{k,j} s_j as a signal gain; in a feedforward network, neuron j must be located before neuron k.

Naturally, in feedforward networks, k ≥ j. If k = j, then δ_{k,k} = s_k, where s_k is the slope of the activation function calculated by Equation (6). Figure 8 illustrates this extended concept of the δ_{k,j} parameter as a signal gain. The matrix δ has a triangular shape, and its elements can be calculated in a forward-only process. Later, the elements of the Jacobian can be obtained using Equation (12), where only the last rows of the matrix δ, associated with the network outputs, are used. The key issue of the proposed algorithm is the method of calculating the δ_{k,j} parameters in the forward calculation process; it is described in the remainder of this section.

4.2 Calculation of the δ Matrix for FCC Architectures

Let us start the analysis with fully connected neural networks (Figure 9). Any other architecture can be considered a simplification of a fully connected neural network obtained by eliminating connections (setting weights to zero). If the feedforward principle is enforced (no feedback), fully connected neural networks must have cascade architectures.

Slopes of neuron activation functions s_j can also be written in the form of the δ parameter as δ_{j,j} = s_j. By inspecting Figure 10, the δ parameters can be written as follows. For the first neuron, there is only one δ parameter:

    \delta_{1,1} = s_1    (14)

Figure 9 Four neurons in a fully connected cascade network, with five inputs and three outputs.

Figure 10 The δ_{k,j} parameters for the neural network of Figure 9. Input and bias weights are not used in the calculation of the gain parameters.

For the second neuron, there are two δ parameters:

    \delta_{2,2} = s_2
    \delta_{2,1} = s_2 w_{1,2} s_1    (15)

For the third neuron, there are three δ parameters:

    \delta_{3,3} = s_3
    \delta_{3,2} = s_3 w_{2,3} s_2    (16)
    \delta_{3,1} = s_3 w_{1,3} s_1 + s_3 w_{2,3} s_2 w_{1,2} s_1

One may notice that all δ parameters for the third neuron can also be expressed as a function of the δ parameters calculated for the previous neurons. Equations (16) can be rewritten as

    \delta_{3,3} = s_3
    \delta_{3,2} = \delta_{3,3} w_{2,3} \delta_{2,2}    (17)
    \delta_{3,1} = \delta_{3,3} w_{1,3} \delta_{1,1} + \delta_{3,3} w_{2,3} \delta_{2,1}

For the fourth neuron, there are four δ parameters:

    \delta_{4,4} = s_4
    \delta_{4,3} = \delta_{4,4} w_{3,4} \delta_{3,3}
    \delta_{4,2} = \delta_{4,4} w_{2,4} \delta_{2,2} + \delta_{4,4} w_{3,4} \delta_{3,2}    (18)
    \delta_{4,1} = \delta_{4,4} w_{1,4} \delta_{1,1} + \delta_{4,4} w_{2,4} \delta_{2,1} + \delta_{4,4} w_{3,4} \delta_{3,1}

The last parameter, δ_{4,1}, can also be expressed in a compact form by summing all terms connected to other neurons (from 1 to 3):

    \delta_{4,1} = \delta_{4,4} \sum_{i=1}^{3} w_{i,4} \delta_{i,1}    (19)

The universal formula for calculating the δ_{k,j} parameters using already calculated data for previous neurons is

    \delta_{k,j} = \delta_{k,k} \sum_{i=j}^{k-1} w_{i,k} \delta_{i,j}    (20)

where, in a feedforward network, neuron j must be located before neuron k, so k ≥ j; δ_{k,k} = s_k is the slope of the activation function of neuron k; w_{j,k} is the weight between neuron j and neuron k; and δ_{k,j} is the signal gain through weight w_{j,k} and through the other part of the network connected to w_{j,k}.

In order to organize the process, an nn × nn computation table is used for calculating the signal gains between neurons, where nn is the number of neurons (Figure 11). Natural indices (from 1 to nn) are given to the neurons according to the direction of signal propagation. For the signal gain computation, only connections between neurons need to be considered, while the weights connected to the network inputs and the biasing weights of all neurons are used only at the end of the process. For a given pattern, a sample of the nn × nn computation table is shown in Figure 11. One may notice that the indices of rows and columns are the same as the indices of neurons.

Figure 11 The nn × nn computation table; the gain matrix δ contains all the signal gains between neurons; the weight array w contains only the connections between neurons, while network input weights and biasing weights are not included.
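A compact sketch of Equation (20): given the slopes on the main diagonal and the inter-neuron weights in the upper triangle, the gain matrix is filled row by row in a single forward pass. The 0-based indexing and the function name are assumptions.

    import numpy as np

    def delta_matrix(s, W):
        """Forward-only computation of the lower-triangular gain matrix, Eq. (20).

        s: slopes of the activation functions, s[k] for neurons k = 0..nn-1.
        W: weights between neurons, W[i, k] = weight from neuron i to neuron k
           (upper triangle only); input and bias weights are not needed here.
        """
        nn = len(s)
        delta = np.zeros((nn, nn))
        for k in range(nn):
            delta[k, k] = s[k]                     # main diagonal: slopes
            for j in range(k):                     # Eq. (20), rows filled top-down
                delta[k, j] = s[k] * sum(W[i, k] * delta[i, j]
                                         for i in range(j, k))
        return delta

Only the last no rows of the returned matrix (those associated with the network outputs) are needed to build the Jacobian with Equation (12).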

In the following derivation, let us use j and k as neuron indices to specify the rows and columns in the computation table. In a feedforward network, k ≥ j, and the matrix δ has a triangular shape. The computation table consists of three parts: the weights between neurons in the upper triangle, the vector of slopes of the activation functions on the main diagonal, and the signal gain matrix δ in the lower triangle. Only the main diagonal and the lower triangular elements are computed for each pattern. Initially, the elements on the main diagonal, δ_{k,k} = s_k, are known as the slopes of the activation functions, and the values of the signal gains δ_{k,j} are computed subsequently using Equation (20). The computation is processed neuron by neuron, starting with the neuron closest to the network inputs. First, row number one is calculated, and then the elements of the subsequent rows. The calculation of each row uses elements from the rows above it, through Equation (20). After completion of the forward computation process, all elements of the δ matrix in the form of the lower triangle are obtained.

In the next step, the elements of the Jacobian matrix are calculated using Equation (12). In the case of neural networks with one output, only the last row of the δ matrix is needed for the gradient vector and Jacobian matrix computation. If a network has no outputs, then the last no rows of the δ matrix are used. For example, if the network shown in Figure 9 has three outputs, the following elements of the δ matrix are used:

    \begin{bmatrix} \delta_{2,1} & \delta_{2,2}=s_2 & \delta_{2,3}=0 & \delta_{2,4}=0 \\ \delta_{3,1} & \delta_{3,2} & \delta_{3,3}=s_3 & \delta_{3,4}=0 \\ \delta_{4,1} & \delta_{4,2} & \delta_{4,3} & \delta_{4,4}=s_4 \end{bmatrix}    (21)

and then, for each pattern, the three rows of the Jacobian matrix corresponding to the three outputs are calculated in one step using Equation (12), without additional propagation of δ:

    \begin{bmatrix} \delta_{2,1}\{y_1\} & s_2\{y_2\} & 0\{y_3\} & 0\{y_4\} \\ \delta_{3,1}\{y_1\} & \delta_{3,2}\{y_2\} & s_3\{y_3\} & 0\{y_4\} \\ \delta_{4,1}\{y_1\} & \delta_{4,2}\{y_2\} & \delta_{4,3}\{y_3\} & s_4\{y_4\} \end{bmatrix}    (22)

where the columns correspond to neurons 1 through 4 and the neuron input vectors y_1 through y_4 have 6, 7, 8, and 9 elements, respectively (Figure 9), corresponding to the number of weights connected to each neuron. Therefore, each row of the Jacobian matrix has 6 + 7 + 8 + 9 = 30 elements. If the network has three outputs, then from the six off-diagonal elements of the δ matrix and the slopes, 90 elements of the Jacobian matrix are calculated.
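Continuing the earlier sketch, the Jacobian rows of Equation (22) can be assembled from the gain matrix and the stored neuron input vectors; the function and argument names are again illustrative.

    import numpy as np

    def jacobian_rows(delta, neuron_inputs, output_neurons):
        """Assemble the Jacobian rows for one pattern, Eq. (22) / Eq. (12).

        delta:          nn x nn gain matrix from the forward-only pass.
        neuron_inputs:  list of arrays, neuron_inputs[j] = [1, y_{j,1}, y_{j,2}, ...]
                        (a bias input of 1 followed by the neuron's input signals).
        output_neurons: indices of the neurons that drive the network outputs.
        """
        rows = []
        for m in output_neurons:                     # one Jacobian row per output
            row = [delta[m, j] * neuron_inputs[j]    # y_{j,i} * delta_{m,j}
                   for j in range(len(neuron_inputs))]
            rows.append(np.concatenate(row))
        return np.vstack(rows)

For the network of Figure 9, output_neurons would select the last three neurons, and each of the three returned rows has 6 + 7 + 8 + 9 = 30 elements; the zeros of Equation (22) appear automatically because the gain matrix is lower triangular.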

One may notice that the size of the newly introduced δ matrix is relatively small, negligible in comparison with the other matrices used in the calculation. The improved NBN procedure gives all the information needed to calculate the Jacobian matrix (22) without a backpropagation process; instead, the δ parameters are obtained by a relatively simple forward computation (see Equation (20)).

4.3 Training Arbitrarily Connected Neural Networks

The computation proposed above was derived for fully connected neural networks. If a network is not fully connected, then some elements of the computation table are zero. Figure 12 shows computation tables for different neural network topologies with six neurons each.

Figure 12 Computation tables for three different architectures with six neurons: (a) FCC network, (b) MLP network, and (c) arbitrarily connected neural network.

Please notice that the zero elements correspond to unconnected neurons (neurons in the same layer). This can further simplify the computation process for popular MLP topologies (Figure 12b). Most practical neural networks have many zero elements in the computation table (Figure 12). In order to reduce the storage requirements (weights with zero values are not stored) and to reduce the computation (operations on zero elements are not performed), part of the NBN algorithm of Section 3 was adopted for the forward computation. To further simplify the computation process, Equation (20) is completed in two steps:

    x_{k,j} = \sum_{i=j}^{k-1} w_{i,k} \delta_{i,j}    (23)

and

    \delta_{k,j} = \delta_{k,k} x_{k,j} = s_k x_{k,j}    (24)

The complete algorithm with forward-only computation is shown in Figure 13. By adding the two additional steps using Equations (23) and (24) (highlighted in Figure 13), all computation can be completed in the forward-only computing process.

    for all patterns (np)
      % Forward computation
      for all neurons (nn)
        for all weights of the neuron (nx)
          calculate net;                                   % Eq. (4)
        calculate neuron output;                           % Eq. (3)
        calculate neuron slope;                            % Eq. (6)
        set current slope as delta;
        for weights connected to previous neurons (ny)
          for previous neurons (nz)
            multiply delta through weights, then sum;      % Eq. (23)
          multiply the sum by the slope;                   % Eq. (24)
        related Jacobian elements computation;             % Eq. (12)
      for all outputs (no)
        calculate error;                                   % Eq. (2)

Figure 13 Pseudocode of the forward-only computation in second-order algorithms.

4.4 Experimental Results

Several problems are presented to test the computing speed of the two NBN variants, with and without the backpropagation process. The time costs of the backpropagation computation and the forward-only computation are measured separately for the forward part and the backward part.

4.4.1 ASCII Codes to Image Conversion

This problem is to associate 256 ASCII codes with 256 character images, each of which is made up of 7 × 8 pixels (Figure 14).

So there are 8-bit inputs (the inputs of the parity-8 problem), 256 patterns, and 56 outputs. In order to solve the problem, an MLP network with one hidden layer is used to train these patterns with the NBN algorithms. The computation time is presented in Table 2.

Figure 14 Sample images of the ASCII characters.

Table 2 Comparison for the ASCII character recognition problem: time cost per iteration (s) of the forward and backward parts, and relative time (%), for the backpropagation computation and the forward-only computation.

4.4.2 Parity-7 Problem

Parity-N problems aim to associate N-bit binary input data with their parity bits. The parity problem is also considered one of the most difficult problems in neural network training, although it has been solved analytically [BDA]. The parity-7 problem is trained with the NBN algorithms, using the forward-only computation and the traditional computation separately. Two different network structures are used for training: eight neurons in a 7-7-1 MLP network (64 weights) and three neurons in an FCC network (27 weights). The time cost comparison is shown in Table 3.

4.4.3 Error Correction Problems

Error correction is an extension of parity-N problems to multiple parity bits. In Figure 15, the left side is the input data, made up of signal bits and their parity bits, while the right side is the related corrected signal bits and parity bits as outputs, so the number of inputs is equal to the number of outputs.

Table 3 Comparison for the parity-7 problem: time cost per iteration (μs) of the forward and backward parts, and relative time (%), for the backpropagation and forward-only computations on the MLP and FCC networks.

Figure 15 Using neural networks to solve the error correction problem; errors in the input data can be corrected by well-trained neural networks.

Two error correction experiments are presented. One has a 4-bit signal with its 3 parity bits as inputs, 7 outputs, and 128 patterns (16 correct patterns and 112 patterns with errors), using an MLP network with one hidden layer; the other has an 8-bit signal with its 4 parity bits as inputs, 12 outputs, and 3,328 patterns (256 correct patterns and 3,072 patterns with errors), using a larger MLP network with one hidden layer. Error patterns with one incorrect bit must be corrected. Both the backpropagation computation and the forward-only computation were performed with the NBN algorithms. The testing results are presented in Table 4.

4.4.4 Encoders and Decoders

Experimental results on the 3-to-8 decoder, 8-to-3 encoder, 4-to-16 decoder, and 16-to-4 encoder, using the NBN algorithms, are presented in Table 5. For the 3-to-8 decoder and 8-to-3 encoder, 11 neurons are used, in a 3-3-8 MLP network (44 weights) and an 8-8-3 MLP network (99 weights), respectively; for the 4-to-16 decoder and 16-to-4 encoder, correspondingly larger MLP networks are used. In the encoder and decoder problems, one may notice that, for the same number of neurons, the more outputs the networks have, the more efficiently the forward-only computation works.

From the presented experimental results, one may notice that, for networks with multiple outputs, the forward-only computation is more efficient than the backpropagation computation, while for the single-output case the forward-only computation is slightly worse.

Table 4 Comparison for the error correction problems: time cost per iteration (s) of the forward and backward parts, and relative time (%), for the backpropagation and forward-only computations on the 4-bit and 8-bit signal problems.

Table 5 Comparison for the encoder and decoder problems: time cost per iteration (μs) of the forward and backward parts, and relative time (%), for the traditional (backpropagation) and forward-only computations on the 3-to-8 decoder, 8-to-3 encoder, 4-to-16 decoder, and 16-to-4 encoder.

5 Direct Computation of Quasi-Hessian Matrix and Gradient Vector

Using Equation (8) for weight updating, one may notice that the matrix products J^T J and J^T e have to be calculated:

    H \approx Q = J^T J    (25)

    g = J^T e    (26)

where the matrix Q is the quasi-Hessian matrix and g is the gradient vector [YW9]. Traditionally, the whole Jacobian matrix J is calculated and stored [TM94] for the subsequent multiplications in Equations (25) and (26). The memory limitation may be caused by Jacobian matrix storage, as described below. In the NBN algorithm, the quasi-Hessian matrix Q and the gradient vector g are calculated directly, without Jacobian matrix computation and storage. Therefore, the NBN algorithm can be used to train problems with an essentially unlimited number of training patterns.

5.1 Memory Limitation in the Levenberg-Marquardt Algorithm

In the Levenberg-Marquardt algorithm, the Jacobian matrix J has to be calculated and stored for the Hessian matrix computation [TM94]. In this procedure, as shown in Figure 2, at least np × no × nn elements (the Jacobian matrix) have to be stored. For small and medium-sized pattern sets, this method may work smoothly. However, it incurs a huge memory cost when training with large pattern sets, since the number of elements of the Jacobian matrix J is proportional to the number of patterns. For example, the pattern recognition problem on the MNIST handwritten digit database [CKOZ6] consists of 60,000 training patterns, 784 inputs, and 10 outputs. Using only the simplest possible neural network with 10 neurons (one neuron per output), the memory cost for storing the entire Jacobian matrix is roughly 35 GB. This huge memory requirement cannot be satisfied by typical Windows compilers, which limit a single array to a few gigabytes. Therefore, the Levenberg-Marquardt algorithm cannot be used for problems with a large number of patterns.
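The size estimate follows directly from the Jacobian dimensions; a short check of the arithmetic, assuming double-precision storage:

    # Jacobian memory estimate for the MNIST example (double precision assumed)
    np_patterns = 60_000           # training patterns
    no_outputs  = 10               # one output per digit
    nn_weights  = 10 * (784 + 1)   # 10 neurons, each with 784 inputs and a bias
    elements = np_patterns * no_outputs * nn_weights   # rows x columns of J
    print(elements)                        # 4,710,000,000 elements
    print(elements * 8 / 2**30, "GiB")     # about 35 GiB at 8 bytes per element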

5.2 Review of Matrix Algebra

There are two ways to multiply the rows and columns of two matrices. If a row of the first matrix is multiplied by a column of the second matrix, the result is a scalar, as shown in Figure 16a. When a column of the first matrix is multiplied by a row of the second matrix, the result is a partial matrix q (Figure 16b) [L5]. The number of scalars is nn × nn, while the number of partial matrices q, which later have to be summed, is np × no.

When J^T is multiplied by J using the routine of Figure 16b, partial matrices q (of size nn × nn) need to be calculated np × no times, and then all np × no matrices q must be summed together. The routine of Figure 16b seems complicated; therefore, almost all matrix multiplication processes use the routine of Figure 16a, where only one element of the resulting matrix is calculated and stored at a time. Even though the routine of Figure 16b seems more complicated, a detailed analysis (see Table 6) shows that the computation times of the two ways of multiplying matrices are basically the same.

In the specific case of neural network training, only one row of the Jacobian matrix J (one column of J^T) is known for each training pattern, so if the routine of Figure 16b is used, the process of creating the quasi-Hessian matrix can start sooner, without the necessity of computing and storing the entire Jacobian matrix for all patterns and all outputs. Table 7 gives a rough estimate of the memory cost of the two multiplication methods. The analytical results in Table 7 show that the column-row multiplication (Figure 16b) can save a great deal of memory.

Figure 16 Two ways of multiplying matrices: (a) row-column multiplication results in a scalar, and (b) column-row multiplication results in a partial matrix q.

Table 6 Computation analysis

    Multiplication Methods    Addition                  Multiplication
    Row-column                (np × no) × nn × nn       (np × no) × nn × nn
    Column-row                nn × nn × (np × no)       nn × nn × (np × no)

    np is the number of training patterns, no is the number of outputs, and nn is the number of weights.
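A small numerical check of the two orderings (the sizes and names are arbitrary): accumulating the column-row partial matrices of Figure 16b gives the same result as forming J^T J at once, while needing only one row of J at a time.

    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.standard_normal((6, 4))        # np*no = 6 Jacobian rows, nn = 4 weights

    Q_rowcol = J.T @ J                     # Figure 16a: whole Jacobian needed at once

    Q_colrow = np.zeros((4, 4))            # Figure 16b: accumulate partial matrices q
    for j_row in J:                        # one row of J per pattern/output
        Q_colrow += np.outer(j_row, j_row)

    print(np.allclose(Q_rowcol, Q_colrow))   # True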

Table 7 Memory cost analysis

    Multiplication Methods    Elements for Storage
    Row-column                (np × no) × nn + nn × nn + nn
    Column-row                nn × nn + nn
    Difference                (np × no) × nn

5.3 Quasi-Hessian Matrix Computation

Let us introduce the quasi-Hessian submatrix q_{p,m} (size nn × nn):

    q_{p,m} = \begin{bmatrix}
    \frac{\partial e_{p,m}}{\partial w_1}\frac{\partial e_{p,m}}{\partial w_1} & \frac{\partial e_{p,m}}{\partial w_1}\frac{\partial e_{p,m}}{\partial w_2} & \cdots & \frac{\partial e_{p,m}}{\partial w_1}\frac{\partial e_{p,m}}{\partial w_{nn}} \\
    \frac{\partial e_{p,m}}{\partial w_2}\frac{\partial e_{p,m}}{\partial w_1} & \frac{\partial e_{p,m}}{\partial w_2}\frac{\partial e_{p,m}}{\partial w_2} & \cdots & \frac{\partial e_{p,m}}{\partial w_2}\frac{\partial e_{p,m}}{\partial w_{nn}} \\
    \vdots & \vdots & \ddots & \vdots \\
    \frac{\partial e_{p,m}}{\partial w_{nn}}\frac{\partial e_{p,m}}{\partial w_1} & \frac{\partial e_{p,m}}{\partial w_{nn}}\frac{\partial e_{p,m}}{\partial w_2} & \cdots & \frac{\partial e_{p,m}}{\partial w_{nn}}\frac{\partial e_{p,m}}{\partial w_{nn}}
    \end{bmatrix}    (27)

Using the procedure of Figure 16b, the nn × nn quasi-Hessian matrix Q can be calculated as the sum of the submatrices q_{p,m}:

    Q = \sum_{p=1}^{np} \sum_{m=1}^{no} q_{p,m}    (28)

By introducing the 1 × nn vector j_{p,m},

    j_{p,m} = \begin{bmatrix} \frac{\partial e_{p,m}}{\partial w_1} & \frac{\partial e_{p,m}}{\partial w_2} & \cdots & \frac{\partial e_{p,m}}{\partial w_{nn}} \end{bmatrix}    (29)

the submatrices q_{p,m} in Equation (27) can also be written in vector form (Figure 16b):

    q_{p,m} = j_{p,m}^T j_{p,m}    (30)

One may notice that, for the computation of the submatrix q_{p,m}, only the nn elements of the vector j_{p,m} need to be calculated and stored. All the submatrices can be calculated for each pattern p and output m separately and summed together to obtain the quasi-Hessian matrix Q. Considering the independence among all patterns and outputs, there is no need to store all the quasi-Hessian submatrices q_{p,m}: each submatrix can be added to a temporary matrix right after its computation. Therefore, during the direct computation of the quasi-Hessian matrix Q using (28), memory for only nn elements is required, instead of memory for the whole Jacobian matrix with (np × no) × nn elements (Table 7).

From (30), one may notice that all the submatrices q_{p,m} are symmetrical. With this property, only the upper (or lower) triangular elements of those submatrices need to be calculated. Therefore, during the improved quasi-Hessian matrix computation, both the multiplication operations in (30) and the sum operations in (28) can be reduced approximately by half.
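A sketch of the symmetric accumulation of Equations (28) and (30), updating only the upper triangle of Q; the helper names are assumptions, and the lower triangle is assumed to be left at zero until the end.

    import numpy as np

    def accumulate_quasi_hessian(Q, j_row):
        """Add q = j^T j (Eq. 30) into Q (Eq. 28), touching only the upper triangle."""
        nn = j_row.size
        for a in range(nn):
            # exploit symmetry: q[a, b] == q[b, a], so compute b >= a only
            Q[a, a:] += j_row[a] * j_row[a:]
        return Q

    def symmetrize(Q):
        """Copy the accumulated upper triangle into the (still zero) lower triangle."""
        return np.triu(Q) + np.triu(Q, 1).T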

5.4 Gradient Vector Computation

The gradient subvector η_{p,m} (size nn × 1) is

    \eta_{p,m} = \begin{bmatrix} \frac{\partial e_{p,m}}{\partial w_1} e_{p,m} \\ \frac{\partial e_{p,m}}{\partial w_2} e_{p,m} \\ \vdots \\ \frac{\partial e_{p,m}}{\partial w_{nn}} e_{p,m} \end{bmatrix}    (31)

With the procedure of Figure 16b, the gradient vector g can be calculated as the sum of the gradient subvectors η_{p,m}:

    g = \sum_{p=1}^{np} \sum_{m=1}^{no} \eta_{p,m}    (32)

Using the same vector j_{p,m} defined in (29), the gradient subvector can be calculated as

    \eta_{p,m} = j_{p,m}^T e_{p,m}    (33)

Similarly, the gradient subvectors η_{p,m} can be calculated for each pattern and output separately and summed into a temporary vector. Since the same vector j_{p,m} is already calculated during the quasi-Hessian matrix computation above, only one extra scalar, e_{p,m}, needs to be stored.

With the improved computation, both the quasi-Hessian matrix Q and the gradient vector g can be computed directly, without Jacobian matrix storage and multiplication. During the process, only a temporary vector j_{p,m} with nn elements needs to be stored; in other words, the memory cost for Jacobian matrix storage is reduced by a factor of np × no. In the MNIST problem mentioned in Section 5.1, the memory cost for the storage of Jacobian elements could be reduced from roughly 35 GB to tens of kilobytes.

5.5 Jacobian Row Computation

The key point of the improved computation of the quasi-Hessian matrix Q and gradient vector g is calculating the vector j_{p,m} defined in (29) for each pattern and output. This vector is the equivalent of one row of the Jacobian matrix J. By combining Equations (12) and (29), the elements of the vector j_{p,m} can be calculated as

    j_{p,m} = \begin{bmatrix} \delta_{m,1} y_{p,1,1} & \cdots & \delta_{m,1} y_{p,1,i} & \cdots & \delta_{m,j} y_{p,j,1} & \cdots & \delta_{m,j} y_{p,j,i} & \cdots \end{bmatrix}    (34)

where y_{p,j,i} is the ith input of neuron j when training pattern p. Using the NBN procedure introduced in Section 3.2, all the elements y_{p,j,i} in Equation (34) can be calculated in the forward computation, while the vector δ is obtained in the backward computation; or, using the improved NBN procedure of Section 4, both the vectors y and δ can be obtained in the improved forward computation.
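Putting the pieces together, the direct computation of Q and g can be sketched as below; jacobian_rows_and_errors is a hypothetical stand-in for the NBN forward (or forward-backward) pass that returns the rows j_{p,m} of Equation (34) and the errors e_{p,m} for one pattern.

    import numpy as np

    def compute_Q_and_g(patterns, weights, jacobian_rows_and_errors):
        """Accumulate quasi-Hessian Q and gradient g without storing J.

        jacobian_rows_and_errors(pattern, weights) is assumed to return, for one
        pattern, the rows j_{p,m} (Eq. 34) and errors e_{p,m} for all outputs.
        """
        nn = weights.size
        Q = np.zeros((nn, nn))
        g = np.zeros(nn)
        for pattern in patterns:                     # number of patterns
            rows, errors = jacobian_rows_and_errors(pattern, weights)
            for j_row, e in zip(rows, errors):       # number of outputs
                Q += np.outer(j_row, j_row)          # Eq. (30), summed as in Eq. (28)
                g += j_row * e                       # Eq. (33), summed as in Eq. (32)
        return Q, g

The returned Q and g feed directly into the weight update of Equation (8); at no point is a row of the Jacobian kept beyond the current pattern and output.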

Again, since only one vector j_{p,m} needs to be stored for each pattern and output in the improved computation, the memory cost for all these temporary parameters is reduced by a factor of np × no. All matrix operations are simplified to vector operations. Generally, for a problem with np patterns and no outputs, the NBN algorithm without Jacobian matrix storage can be organized as the pseudocode shown in Figure 17.

    % Initialization
    Q = 0; g = 0
    % Improved computation
    for p = 1:np                          % Number of patterns
      % Forward computation
      for m = 1:no                        % Number of outputs
        % Backward computation
        calculate vector j_{p,m};         % Eq. (34)
        calculate submatrix q_{p,m};      % Eq. (30)
        calculate subvector eta_{p,m};    % Eq. (33)
        Q = Q + q_{p,m};                  % Eq. (28)
        g = g + eta_{p,m};                % Eq. (32)

Figure 17 Pseudocode of the improved computation of the quasi-Hessian matrix and gradient vector in the NBN algorithm.

5.6 Comparison of Memory and Time Consumption

Several experiments are designed to test the memory and time efficiency of the NBN algorithm in comparison with the traditional LM algorithm. They are divided into two parts: (1) memory comparison and (2) time comparison.

5.6.1 Memory Comparison

Three problems, each with a huge number of patterns, are selected to test the memory cost of both the traditional computation and the improved computation. The LM algorithm and the NBN algorithm are used for training, and the test results are shown in Tables 8 and 9. In order to make the comparison more precise, the memory cost of program code and input files is not included.

Table 8 Memory comparison for parity problems (all neurons in FCC networks)

    Parity-N problems        N = 14        N = 16
    Patterns                 16,384        65,536
    Structures               15 neurons    17 neurons
    Jacobian matrix sizes    5,406,720     27,852,800
    Weight vector sizes      330           425

    The table also reports the average number of iterations, the success rate (%), and the actual memory cost (MB) of the LM algorithm and of the NBN algorithm.

Table 9 Memory comparison for the MNIST problem: 60,000 patterns, a 784=1 single-layer network (in order to perform efficient matrix inversion during training, only one of the ten digits is classified at a time), a Jacobian matrix of 47,100,000 elements, and a weight vector of 785 elements; the actual memory cost of the NBN algorithm is 5.67 MB.

Table 10 Time comparison for parity problems: parity-9, parity-11, parity-13, and parity-15 (512; 2,048; 8,192; and 32,768 patterns, respectively) trained with FCC networks; the table lists the number of neurons and weights, the average number of iterations, the success rate, and the averaged training time of the traditional LM computation and the improved computation.

From the test results in Tables 8 and 9, it is clear that the memory cost for training is significantly reduced in the improved computation.

5.6.2 Time Comparison

Parity-N problems are used to test the training time of both the traditional computation and the improved computation of the LM algorithm. The structures used for testing are all FCC networks. For each problem, the initial weights and training parameters are the same. From Table 10, one may notice that the NBN computation can not only handle much larger problems, but also computes much faster than the traditional LM algorithm, especially for large-sized pattern training. The larger the pattern size, the more time-efficient the improved computation is.

6 Conclusion

In this chapter, the NBN algorithm is introduced to address the structure and memory limitations of the Levenberg-Marquardt algorithm. Based on the specially designed NBN routings, the NBN algorithm can be used not only for traditional MLP networks but also for other arbitrarily connected neural networks. The NBN algorithm can be organized in two procedures, with the backpropagation process and without the backpropagation process. Experimental results show that the former is suitable for networks with a single output, while the latter is more efficient for networks with multiple outputs.

The NBN algorithm does not require storing and multiplying a large Jacobian matrix. As a consequence, the memory requirement for the quasi-Hessian matrix and gradient vector computation is decreased by a factor of P × M, where P is the number of patterns and M is the number of outputs.

An additional benefit of the memory reduction is a significant reduction in computation time. Therefore, the training speed of the NBN algorithm becomes much faster than that of the traditional Levenberg-Marquardt algorithm.

In the NBN algorithm, the quasi-Hessian matrix can be computed on the fly as training patterns are applied. Moreover, this gives a special advantage for applications which require dynamically changing the number of training patterns. There is no need to repeat the entire multiplication J^T J; terms are simply added to or subtracted from the quasi-Hessian matrix, which can be modified as patterns are applied or removed.

There are two implementations of the NBN algorithm on the website www.eng.auburn.edu/~wilambm/nnt/index.htm. The MATLAB version can handle arbitrarily connected networks, but the Jacobian matrix is computed and stored [WCHK7]. In the C++ version [YW9], all new features of the NBN algorithm mentioned in this chapter are implemented.

References

[AS99] J. R. Alvarez-Sanchez, Injecting knowledge into the solution of the two-spiral problem. Neural Computing & Applications, 8, 1999.
[AW95] T. J. Andersen and B. M. Wilamowski, A modified regression algorithm for fast one layer neural network training, World Congress of Neural Networks, Washington, DC, July 1995.
[BDA] B. M. Wilamowski, D. Hunter, and A. Malinowski, Solving parity-N problems with feedforward neural networks. Proceedings of the IEEE IJCNN, IEEE Press, 2003.
[CKOZ6] L. J. Cao, S. S. Keerthi, C.-J. Ong, J. Q. Zhang, U. Periyathamby, X. J. Fu, and H. P. Lee, Parallel sequential minimal optimization for the training of support vector machines, IEEE Transactions on Neural Networks, 17(4), April 2006.
[L5] D. C. Lay, Linear Algebra and Its Applications, 3rd edn., Addison-Wesley, 2005.
[L44] K. Levenberg, A method for the solution of certain problems in least squares. Quarterly of Applied Mathematics, 5, 164-168, 1944.
[M6] D. Marquardt, An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11(2), 431-441, June 1963.
[N89] R. Hecht-Nielsen, Theory of the backpropagation neural network. Proceedings of the 1989 IEEE IJCNN, IEEE Press, New York, 1989.
[PLI8] J.-X. Peng, K. Li, and G. W. Irwin, A new Jacobian matrix for optimal learning of single-layer neural networks, IEEE Transactions on Neural Networks, 19(1), January 2008.
[TM94] M. T. Hagan and M. Menhaj, Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6), 989-993, 1994.
[W9] B. M. Wilamowski, Neural network architectures and learning algorithms, IEEE Industrial Electronics Magazine, 3(4), 2009.
[WB] B. M. Wilamowski and J. Binfet, Microprocessor implementation of fuzzy systems and neural networks, International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, July 2001.
[WCHK7] B. M. Wilamowski, N. Cotton, J. Hewlett, and O. Kaynak, Neural network trainer with second order learning algorithms, Proceedings of the International Conference on Intelligent Engineering Systems, June 29-July 1, 2007.
[WCKD8] B. M. Wilamowski, N. J. Cotton, O. Kaynak, and G. Dundar, Computing gradient vector and Jacobian matrix in arbitrarily connected neural networks, IEEE Transactions on Industrial Electronics, 55(10), 3784-3790, October 2008.

[WCM99] B. M. Wilamowski, Y. Chen, and A. Malinowski, Efficient algorithm for training neural networks with one hidden layer, presented at the 1999 International Joint Conference on Neural Networks (IJCNN'99), Washington, DC, July 1999.
[WH] B. M. Wilamowski and H. Yu, Improved computation for Levenberg-Marquardt training, IEEE Transactions on Neural Networks, 21(6), 930-937, 2010.
[WT9] B. M. Wilamowski and L. Torvik, Modification of gradient computation in the back-propagation algorithm, presented at Artificial Neural Networks in Engineering (ANNIE'93), St. Louis, MO, November 1993.
[YW9] H. Yu and B. M. Wilamowski, C++ implementation of neural networks trainer, 13th International Conference on Intelligent Engineering Systems (INES-09), Barbados, April 16-18, 2009.


More information

2. Image processing. Mahmoud Mohamed Hamoud Kaid, IJECS Volume 6 Issue 2 Feb., 2017 Page No Page 20254

2. Image processing. Mahmoud Mohamed Hamoud Kaid, IJECS Volume 6 Issue 2 Feb., 2017 Page No Page 20254 www.iecs.in International Journal Of Engineering And Coputer Science ISSN: 319-74 Volue 6 Issue Feb. 17, Page No. 54-6 Index Copernicus Value (15): 58., DOI:.18535/iecs/v6i.17 Increase accuracy the recognition

More information

ANALYTICAL INVESTIGATION AND PARAMETRIC STUDY OF LATERAL IMPACT BEHAVIOR OF PRESSURIZED PIPELINES AND INFLUENCE OF INTERNAL PRESSURE

ANALYTICAL INVESTIGATION AND PARAMETRIC STUDY OF LATERAL IMPACT BEHAVIOR OF PRESSURIZED PIPELINES AND INFLUENCE OF INTERNAL PRESSURE DRAFT Proceedings of the ASME 014 International Mechanical Engineering Congress & Exposition IMECE014 Noveber 14-0, 014, Montreal, Quebec, Canada IMECE014-36371 ANALYTICAL INVESTIGATION AND PARAMETRIC

More information

Nonmonotonic Networks. a. IRST, I Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I Povo (Trento) Italy

Nonmonotonic Networks. a. IRST, I Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I Povo (Trento) Italy Storage Capacity and Dynaics of Nononotonic Networks Bruno Crespi a and Ignazio Lazzizzera b a. IRST, I-38050 Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I-38050 Povo (Trento) Italy INFN Gruppo

More information

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost

More information

Local Maximum Ozone Concentration Prediction Using LSTM Recurrent Neural Networks

Local Maximum Ozone Concentration Prediction Using LSTM Recurrent Neural Networks Local Maxiu Ozone Concentration Prediction Using LSTM Recurrent Neural Networks Sabrine Ribeiro René Alquézar Dept. Llenguatges i Sistees Inforàtics Universitat Politècnica de Catalunya Capus Nord, Mòdul

More information

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number Research Journal of Applied Sciences, Engineering and Technology 4(23): 5206-52, 202 ISSN: 2040-7467 Maxwell Scientific Organization, 202 Subitted: April 25, 202 Accepted: May 3, 202 Published: Deceber

More information

Combining Classifiers

Combining Classifiers Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/

More information

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas

More information

Feedforward Networks

Feedforward Networks Feedforward Neural Networks - Backpropagation Feedforward Networks Gradient Descent Learning and Backpropagation CPSC 533 Fall 2003 Christian Jacob Dept.of Coputer Science,University of Calgary Feedforward

More information

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product. Coding Theory Massoud Malek Reed-Muller Codes An iportant class of linear block codes rich in algebraic and geoetric structure is the class of Reed-Muller codes, which includes the Extended Haing code.

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

A Theoretical Analysis of a Warm Start Technique

A Theoretical Analysis of a Warm Start Technique A Theoretical Analysis of a War Start Technique Martin A. Zinkevich Yahoo! Labs 701 First Avenue Sunnyvale, CA Abstract Batch gradient descent looks at every data point for every step, which is wasteful

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT

NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT PACS REFERENCE: 43.5.LJ Krister Larsson Departent of Applied Acoustics Chalers University of Technology SE-412 96 Sweden Tel: +46 ()31 772 22 Fax: +46 ()31

More information

Support Vector Machines. Goals for the lecture

Support Vector Machines. Goals for the lecture Support Vector Machines Mark Craven and David Page Coputer Sciences 760 Spring 2018 www.biostat.wisc.edu/~craven/cs760/ Soe of the slides in these lectures have been adapted/borrowed fro aterials developed

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

The Algorithms Optimization of Artificial Neural Network Based on Particle Swarm

The Algorithms Optimization of Artificial Neural Network Based on Particle Swarm Send Orders for Reprints to reprints@benthascience.ae The Open Cybernetics & Systeics Journal, 04, 8, 59-54 59 Open Access The Algoriths Optiization of Artificial Neural Network Based on Particle Swar

More information

A new type of lower bound for the largest eigenvalue of a symmetric matrix

A new type of lower bound for the largest eigenvalue of a symmetric matrix Linear Algebra and its Applications 47 7 9 9 www.elsevier.co/locate/laa A new type of lower bound for the largest eigenvalue of a syetric atrix Piet Van Mieghe Delft University of Technology, P.O. Box

More information

Randomized Recovery for Boolean Compressed Sensing

Randomized Recovery for Boolean Compressed Sensing Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch

More information

Elliptic Curve Scalar Point Multiplication Algorithm Using Radix-4 Booth s Algorithm

Elliptic Curve Scalar Point Multiplication Algorithm Using Radix-4 Booth s Algorithm Elliptic Curve Scalar Multiplication Algorith Using Radix-4 Booth s Algorith Elliptic Curve Scalar Multiplication Algorith Using Radix-4 Booth s Algorith Sangook Moon, Non-eber ABSTRACT The ain back-bone

More information

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes Explicit solution of the polynoial least-squares approxiation proble on Chebyshev extrea nodes Alfredo Eisinberg, Giuseppe Fedele Dipartiento di Elettronica Inforatica e Sisteistica, Università degli Studi

More information

Defect-Aware SOC Test Scheduling

Defect-Aware SOC Test Scheduling Defect-Aware SOC Test Scheduling Erik Larsson +, Julien Pouget*, and Zebo Peng + Ebedded Systes Laboratory + LIRMM* Departent of Coputer Science Montpellier 2 University Linköpings universitet CNRS Sweden

More information

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED PAPER) 1 Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding Lai Wei, Student Meber, IEEE, David G. M. Mitchell, Meber, IEEE, Thoas

More information

Reducing Vibration and Providing Robustness with Multi-Input Shapers

Reducing Vibration and Providing Robustness with Multi-Input Shapers 29 Aerican Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June -2, 29 WeA6.4 Reducing Vibration and Providing Robustness with Multi-Input Shapers Joshua Vaughan and Willia Singhose Abstract

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

Neural Network-Aided Extended Kalman Filter for SLAM Problem

Neural Network-Aided Extended Kalman Filter for SLAM Problem 7 IEEE International Conference on Robotics and Autoation Roa, Italy, -4 April 7 ThA.5 Neural Network-Aided Extended Kalan Filter for SLAM Proble Minyong Choi, R. Sakthivel, and Wan Kyun Chung Abstract

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

(t, m, s)-nets and Maximized Minimum Distance, Part II

(t, m, s)-nets and Maximized Minimum Distance, Part II (t,, s)-nets and Maxiized Miniu Distance, Part II Leonhard Grünschloß and Alexander Keller Abstract The quality paraeter t of (t,, s)-nets controls extensive stratification properties of the generated

More information

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and

More information

Ufuk Demirci* and Feza Kerestecioglu**

Ufuk Demirci* and Feza Kerestecioglu** 1 INDIRECT ADAPTIVE CONTROL OF MISSILES Ufuk Deirci* and Feza Kerestecioglu** *Turkish Navy Guided Missile Test Station, Beykoz, Istanbul, TURKEY **Departent of Electrical and Electronics Engineering,

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

arxiv: v1 [cs.ds] 29 Jan 2012

arxiv: v1 [cs.ds] 29 Jan 2012 A parallel approxiation algorith for ixed packing covering seidefinite progras arxiv:1201.6090v1 [cs.ds] 29 Jan 2012 Rahul Jain National U. Singapore January 28, 2012 Abstract Penghui Yao National U. Singapore

More information

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011 Page Lab Eleentary Matri and Linear Algebra Spring 0 Nae Due /03/0 Score /5 Probles through 4 are each worth 4 points.. Go to the Linear Algebra oolkit site ransforing a atri to reduced row echelon for

More information

E. Alpaydın AERFAISS

E. Alpaydın AERFAISS E. Alpaydın AERFAISS 00 Introduction Questions: Is the error rate of y classifier less than %? Is k-nn ore accurate than MLP? Does having PCA before iprove accuracy? Which kernel leads to highest accuracy

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

Support Vector Machines MIT Course Notes Cynthia Rudin

Support Vector Machines MIT Course Notes Cynthia Rudin Support Vector Machines MIT 5.097 Course Notes Cynthia Rudin Credit: Ng, Hastie, Tibshirani, Friedan Thanks: Şeyda Ertekin Let s start with soe intuition about argins. The argin of an exaple x i = distance

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

An Improved Particle Filter with Applications in Ballistic Target Tracking

An Improved Particle Filter with Applications in Ballistic Target Tracking Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing

More information

Low-complexity, Low-memory EMS algorithm for non-binary LDPC codes

Low-complexity, Low-memory EMS algorithm for non-binary LDPC codes Low-coplexity, Low-eory EMS algorith for non-binary LDPC codes Adrian Voicila,David Declercq, François Verdier ETIS ENSEA/CP/CNRS MR-85 954 Cergy-Pontoise, (France) Marc Fossorier Dept. Electrical Engineering

More information

Estimation of ADC Nonlinearities from the Measurement in Input Voltage Intervals

Estimation of ADC Nonlinearities from the Measurement in Input Voltage Intervals Estiation of ADC Nonlinearities fro the Measureent in Input Voltage Intervals M. Godla, L. Michaeli, 3 J. Šaliga, 4 R. Palenčár,,3 Deptartent of Electronics and Multiedia Counications, FEI TU of Košice,

More information

Topic 5a Introduction to Curve Fitting & Linear Regression

Topic 5a Introduction to Curve Fitting & Linear Regression /7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline

More information

MSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE

MSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE Proceeding of the ASME 9 International Manufacturing Science and Engineering Conference MSEC9 October 4-7, 9, West Lafayette, Indiana, USA MSEC9-8466 MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL

More information

INTELLECTUAL DATA ANALYSIS IN AIRCRAFT DESIGN

INTELLECTUAL DATA ANALYSIS IN AIRCRAFT DESIGN INTELLECTUAL DATA ANALYSIS IN AIRCRAFT DESIGN V.A. Koarov 1, S.A. Piyavskiy 2 1 Saara National Research University, Saara, Russia 2 Saara State Architectural University, Saara, Russia Abstract. This article

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization Use of PSO in Paraeter Estiation of Robot Dynaics; Part One: No Need for Paraeterization Hossein Jahandideh, Mehrzad Navar Abstract Offline procedures for estiating paraeters of robot dynaics are practically

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

Kinematics and dynamics, a computational approach

Kinematics and dynamics, a computational approach Kineatics and dynaics, a coputational approach We begin the discussion of nuerical approaches to echanics with the definition for the velocity r r ( t t) r ( t) v( t) li li or r( t t) r( t) v( t) t for

More information

2.9 Feedback and Feedforward Control

2.9 Feedback and Feedforward Control 2.9 Feedback and Feedforward Control M. F. HORDESKI (985) B. G. LIPTÁK (995) F. G. SHINSKEY (970, 2005) Feedback control is the action of oving a anipulated variable in response to a deviation or error

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science

A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science A Better Algorith For an Ancient Scheduling Proble David R. Karger Steven J. Phillips Eric Torng Departent of Coputer Science Stanford University Stanford, CA 9435-4 Abstract One of the oldest and siplest

More information

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

Numerical Solution of the MRLW Equation Using Finite Difference Method. 1 Introduction

Numerical Solution of the MRLW Equation Using Finite Difference Method. 1 Introduction ISSN 1749-3889 print, 1749-3897 online International Journal of Nonlinear Science Vol.1401 No.3,pp.355-361 Nuerical Solution of the MRLW Equation Using Finite Difference Method Pınar Keskin, Dursun Irk

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information