Mixed Transfer Function Neural Networks for Knowledge Acquisition
M. Imad Khan, Yakov Frayman and Saeid Nahavandi

Abstract: Modeling helps to understand and predict the outcome of complex systems. Inductive modeling methodologies are beneficial for modeling systems where the uncertainties involved do not permit obtaining an accurate physical model. However, inductive models, such as Artificial Neural Networks (ANNs), may suffer from a few drawbacks, including over-fitting and the difficulty of easily understanding the model itself. This can result in user reluctance to accept the model or even complete rejection of the modeling results. Thus, it becomes highly desirable to make such inductive models more comprehensible and to automatically determine the model complexity to avoid over-fitting. In this paper, we propose a novel type of ANN, a Mixed Transfer Function Artificial Neural Network (MTFANN), which aims to improve the complexity fitting and comprehensibility of the most popular type of ANN (MLP, a Multilayer Perceptron).

Index Terms: inductive modeling, neural networks, mixed transfer functions, over-fitting, model complexity

I. INTRODUCTION

Inductive modeling methodologies are useful where the uncertainties involved in the system do not permit obtaining an accurate physical model. Artificial Neural Networks (ANNs) are one of these methods, successfully applied to model a wide variety of problems by exploiting their universal approximation property [1, 2, 3]. The most commonly used type of ANN is the Multi-Layer Perceptron (MLP), which, although powerful, can make it difficult to answer basic questions such as how the model was learnt and what it has learnt, due to the usage of complex transfer functions such as sigmoids, which can also lead to over-fitting. This can result (and has been observed in practice) in user reluctance to accept the model or even a complete rejection of the modeling results.
The lack of ease of understanding of what and how the MLP model has learnt about the problem stems from the fact that the knowledge represented by the MLP model is concentrated in the weights and the transfer functions (TFs) of its neurons.

This work was supported by the Co-operative Research Centre for Alloy and Solidification Technology (CAST). M. I. Khan is with the Institute of Technology Research and Innovation (ITRI), Deakin University, Waurn Ponds Campus, Geelong 3217, Australia (e-mail: ik@deakin.edu.au). Y. Frayman is with the Institute of Technology Research and Innovation (ITRI), Deakin University, Burwood Campus, Elgar Road, Burwood, VIC 3125, Australia (e-mail: fra@deakin.edu.au). S. Nahavandi is with the Institute of Technology Research and Innovation (ITRI), Deakin University, Waurn Ponds Campus, Geelong 3217, Australia (e-mail: nahavand@deakin.edu.au).

However, weights, being numbers, are not easily interpretable. The TFs used in current MLP networks, being complex functions, while capable of approximating complex problems with a fair number of neurons and layers, are also not easily interpretable. Normally, all neurons in an MLP network use the same transfer function (e.g. sigmoid or hyperbolic tangent), which also limits the model flexibility and can lead to over-fitting. It is highly desirable to make such neural network models more comprehensible and to automatically determine the appropriate complexity of the model to avoid over-fitting. An improvement in comprehensibility has the potential to help understand the underlying relationships between the inputs and the outputs of the system, which can improve existing knowledge about the modeled system. In the remainder of this paper we describe the current mono-transfer function MLP and the proposed MTFANN. We also present an overview of the existing knowledge extraction methods for MLP networks and demonstrate the expected benefits of using MTFANN on a numerical example from DELVE.

II.
MONO-TRANSFER FUNCTION MULTI-LAYER PERCEPTRON

The Multi-Layer Perceptron, the most common type of Artificial Neural Network, is a numeric data processing structure consisting of connections and nodes that uses a divide-and-conquer strategy to process the information it receives from its environment. Mathematically this data structure can be represented as:

y_pm = Φ( Σ_{i=1..n} θ_ij x_i ),  applied layer by layer for γ = 1, …, L layers and η = 1, …, N_L neurons per layer   (2.1)

Here y_pm is an output, p is the pattern number, r represents noise, and n is the number of output components (neurons in the output layer); θ is the parameter of the model, Φ is the transfer function, i and j are the indexes representing neurons from the current layer to the next layer under consideration, and θ_ij is the parameter associated with the i-th and j-th neurons. Equation 2.1 represents a connection from i to j with the parameter value (weight), which represents the strength of the connection between the two neurons.

Authorized licensed use limited to: DEAKIN UNIVERSITY LIBRARY. Downloaded on June 07, 2010 at 05:04:31 UTC from IEEE Xplore. Restrictions apply.
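As a concrete illustration, the layer-by-layer computation of equation 2.1 with a fixed sigmoid TF can be sketched as follows; the weights and layer sizes are illustrative assumptions, not values from the paper:

```python
import math

def sigmoid(x):
    """The fixed TF used by a mono-transfer function MLP."""
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(x, layers, tf=sigmoid):
    """Apply the same TF to weighted sums, layer by layer.

    layers: list of weight matrices theta[j][i] (next_size x prev_size).
    """
    out = x
    for theta in layers:
        out = [tf(sum(t_ij * o_i for t_ij, o_i in zip(row, out)))
               for row in theta]
    return out

# Two inputs -> two hidden sigmoid neurons -> one output neuron.
layers = [[[0.5, -0.2], [0.1, 0.4]],   # input -> hidden weights
          [[0.7, -0.3]]]               # hidden -> output weights
y = mlp_forward([1.0, 2.0], layers)
assert 0.0 < y[0] < 1.0  # sigmoid outputs stay in (0, 1)
```

Because every neuron applies the same opaque squashing function, the trained weights admit no direct reading, which is the black-box behavior discussed below.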
The notation can be explained further: y_pm is a combination of the expectation and the noise,

y_pm = E(y_n) + r   (2.2)
y_pm = R(x) + r,  r_n = y_pm − E(y_n)   (2.3)

The mathematical expectation is regressed [4] as a regression function R(x), while the noise r is added as it is to each modeled output in both the expectation and regression equations (equations 2.2 and 2.3). Ideally, the problem is modeled as the regression expression R in equation 2.3, and the noise generally results in a prediction error in an MLP model of the problem. The MLP uses a divide-and-conquer strategy and applies TFs to the weighted sum from one layer to the next to reach the solution after being trained. Several examples of data elements, or patterns, are shown to the network during the training session until the network reaches a desired level of accuracy, determined by the energy function, by applying a training/learning algorithm. In the case of the most commonly used feed-forward MLP, the back-propagation training algorithm [5] is normally used. The weights are modified during learning as determined by the training algorithm. The energy functions commonly used are the Sum of Squared Error (SSE), the Mean Squared Error (MSE) and the Root Mean Squared Error (RMSE):

RMSE = sqrt( Σ_{i=1..N} (o_i − y_i)² / N )   (2.4)

Here N is the total number of data elements passed through the network. The current types of MLP normally have a fixed TF in any one layer; Φ in equation 2.1 is the fixed TF. The fixed TFs generally used are either the sigmoid:

f(x) = 1 / (1 + e^(−x))   (2.5)

or the hyperbolic tangent function:

f(x) = tanh(x)   (2.6)

The major problem with these kinds of networks is their black-box behavior. These MLPs have the capability to model a problem quite well, but their acceptability is often questioned by domain experts due to the difficulty of comprehending the network.

III. EXISTING KNOWLEDGE EXTRACTION METHODS FROM NEURAL NETWORKS

To address the lack of comprehensibility of MLPs, several knowledge extraction methods have been developed to convert mono-transfer function MLP models into a more user-friendly format.
Most of such work is based on the acquisition of symbolic IF-THEN and/or some kind of predicate logic rules, although attempts have also been made to extract more equation-like rules [6, 7]. For example, attempts have been made to extract the knowledge from MLPs [7] and to redesign MLPs to simplify them [8]. Setiono et al. [6], for example, have extracted linear regression rules by linear approximation of the sigmoid TF. The obtained rules consisted of an antecedent in the data domain and a consequent in the form of a linear regression related to the cluster of data represented by the antecedent. Towell and Shavlik's [9] subset algorithm attempted to find all subsets of weights that are more than the bias of a given neuron. However, this algorithm faces a computational complexity problem, which essentially means that an increase in the size of the network is accompanied by an increase in the number of weights and hence an increase in the search space. To tackle the search space complexity problem, heuristics were applied that, while able to substantially reduce the algorithmic search complexity, can result in incomplete and unsound rules. Additionally, a good heuristic requires sound knowledge of the problem domain, which is not always available and is a primary motivator for the application of inductive modeling approaches. Garcez, Broda and Gabbay [10] have applied partial ordering to the input vector set and then used pedagogical extraction with pruning of the search space to reduce the search space complexity. A pedagogical approach to rule extraction still treats the network as a black box and extracts rules by querying the network. The authors support their approach with strong theorems that prove that the extraction algorithm is sound and complete for regular networks, defined as networks whose entire set of weights from the hidden to the output layer is all positive or all negative, which is not always the case in reality. The authors further devise a methodology to extend their algorithm to non-regular networks.
They define Basic Neural Structures (BNS) and then prove a series of theorems to conclude that their method is sound and complete. A BNS is a simple form of MLP with no hidden neurons and only one output neuron. A BNS is basically defined so that it can be extracted as a subpart of an MLP. Mathematically it is defined as follows: let N be a neural network with p input neurons, r hidden neurons and q output neurons. A sub-network N0 of N is a Basic Neural Structure (BNS) if and only if either N0 contains exactly p input neurons, 1 hidden neuron and 0 output neurons of N, or N0 contains exactly 0 input neurons, r hidden neurons and 1 output neuron of N, depending on which layer of N is under consideration to be modelled as a BNS. Note that a BNS itself is considered not to have any hidden layers of its own, but it may contain hidden layer neurons from the parent network. The number of BNSs in a network is the sum of the number of hidden
layer and output layer neurons, r + q. The work by Setiono and Azcarraga [11], Setiono, Leow and Zurada [6] and Saito and Nakano [12] is interesting from the function approximation point of view. The algorithm of Setiono et al. [6, 11] consists of pruning the final network to obtain a simplified model, clustering the data, and then approximating the transfer function with piecewise linear functions, with IF-THEN rules capturing the non-monotonicity of the network and the problem domain. A finer division of the input space results in more accurate rules but significantly increases the number of rules. Normally, a balance should be maintained between the number of rules and their complexity. The authors assumed that the comprehensibility of the rules lies in their simplicity, measured by the number of rules versus their accuracy; however, simplicity can be difficult to define in the context of rule extraction. For example, the degree of simplicity depends on the target audience of the extracted rules. The comprehensibility of the rules may also depend on the rule form, such as, for example, crisp propositional/predicate logic rules [7], fuzzy rules [13, 14], regression equation based rules [6], Mealy machine automata [15], finite state automata [16] and decision trees [17]. Hence it can be concluded that the comprehensibility of the rules depends on their form as well as on the audience. Local function networks such as Radial Basis Function (RBF) networks have a suitable architecture for rule extraction, but the task of extraction is not trivial because of nonlinear functions or high input dimensionality [18], and because hidden units are shared across several output classes or may not contribute to any of them. The REX and hREX rule extraction algorithms presented by the authors in [18] extract rules from hidden neurons associated with a single output class and from shared hidden neurons, respectively.
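To make the weighted-sum rule form used by hREX-style extraction concrete, the following sketch builds a threshold rule from hypothetical hidden-to-output weights, and shows how a negative weight can be rewritten as a positive coefficient acting on a negated activation instead of being dropped:

```python
# Hypothetical weights from shared hidden units H1..H3 to the class-A
# output neuron, and a hypothetical threshold T_A.
weights = [0.8, -0.5, 0.3]   # note: w2 is negative
threshold = 0.4              # T_A

def fires(activations):
    """Extracted rule: w1*h1 + ... + wn*hn >= T_A  =>  Class A."""
    s = sum(w * h for w, h in zip(weights, activations))
    return s >= threshold

def fires_rewritten(activations):
    """Same rule with every weight made positive: a negative weight w
    becomes abs(w) applied to the negated activation, w*h == abs(w)*(-h),
    so a negative activation can be excitatory for class A."""
    s = sum(abs(w) * (h if w >= 0 else -h)
            for w, h in zip(weights, activations))
    return s >= threshold

h = [0.9, -0.2, 0.5]  # negative h2 excites class A because w2 < 0
assert fires(h) == fires_rewritten(h)
```

The rewritten form preserves exactly the knowledge that dropping negative weights would lose.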
It is possible to improve these algorithms by negating the sign of a negative weight (by taking its absolute value) and applying a negation to the activation value of the hidden neuron from which the negative weight connection originated, which is similar to the regularization of network weights [10]. The idea of dropping a negative weight altogether is not very elegant, as it can result in the loss of some valuable knowledge represented by the MLP model. In the case of RBF networks with shared hidden units, this can have an effect on all the formats of the rules. For example, consider the case where hidden units H1, …, Hn are associated with class A, viz., we know that a pattern belongs to class A if unit N is activated. The current hREX algorithm [18], if used for the extraction of inequations, will result in inequations of the form:

If (H1 AND H2 AND … AND Hn) then Class A   (3.1)
If (w1·h1 + w2·h2 + … + wn·hn ≥ TA) then Class A   (3.2)

Here hn represents the activations of the respective neurons Hn, wn are the weights connecting the shared neurons Hn with the class A output neuron, and TA is the threshold required to classify a pattern as class A. Now, we argue, contrary to [18], that negative weights are important because they can lead to the preservation of some important knowledge. To show our argument, let us suppose that w2 is negative; then the last rule becomes:

Let V = −w2   (3.3)
If (w1·h1 − V·h2 + … + wn·hn ≥ TA) then Class A   (3.4)
If (w1·h1 + V·(−h2) + … + wn·hn ≥ TA) then Class A   (3.5)

Here it can be seen that a negative value in the activation (or in the input data itself) can be excitatory, provided that there are negative weight values connecting it with the next layer. This is an important discovery in the knowledge about the network's inputs and negative activation values that has been disregarded by the current hREX extraction algorithm. We can further simplify the inequality, writing it in a more formal and simpler form, if we substitute variables x, y, z in place of the activation values (or inputs) and replace the weight values with constants a, b, c.
If (a·x + b·y + c·z ≥ TA) then Class A   (3.6)

It can be noticed that the knowledge that a negative input (or activation) can be excitatory is of significant importance, because it could be the case that the network has never seen a negative value during training. If we can extract this information from a trained network, it can serve as a warning that an incoming negative value can result in an unreliable model prediction. It can be seen from the above discussion that the area of extracting knowledge from MLP networks uses advanced algorithms which are computationally intensive, which can lead to sacrificing some modelling accuracy for computational feasibility. It can be argued that there is a need for a scheme that trivializes knowledge extraction from the network, so that there is minimal or no need for a special knowledge extraction algorithm. It is also necessary to use an automatic network construction mechanism, since it is important to achieve a network model of the lowest complexity, which contributes to high comprehensibility of the model and also helps to avoid over-fitting. The proposed neural network and the construction algorithm to address the above aims are presented in the next section.

IV. MIXED TRANSFER FUNCTION ARTIFICIAL NEURAL NETWORK

If we remove the limitation of using only a mono-transfer function in an MLP network and instead use a number of transfer functions of various complexity within the same network, the knowledge extraction algorithm could become minimal and has the potential to be more comprehensible than the methods discussed in the previous section. Consequently, we propose a novel type of MLP, a Mixed Transfer Function Artificial Neural Network (MTFANN), which aims to
improve the complexity fitting and comprehensibility of the most popular type of MLP, the mono-transfer function feedforward network (FFN) described previously. The main motivation for MTFANN is to create a neural network which is comprehensible, capable of modeling a wide range of problems, and at least comparable to the current MLP in terms of accuracy and generalization. One important goal is to maintain the accuracy of the model, as opposed to existing knowledge extraction methods for neural networks, which generally compromise accuracy for higher comprehensibility. The nature of the problem is that it is generally very difficult to obtain greater comprehensibility of a neural network model without losing some accuracy. Thus it is very important that MTFANNs are supplemented with an automatic network construction algorithm to ensure a network architecture of minimal complexity (number of nodes and layers) that can solve the problem with a high degree of accuracy [19] (referred to as the optimal architecture in most of the existing literature). The optimal network architecture ensures good generalization [19] and a minimal number of components of acquired knowledge (equations, rules, etc.) in the knowledge representation, which in fact gives a boost to comprehensibility. MTFANNs essentially have the same properties as described in Section II, with the use of TFs of different approximation complexities in a hidden layer:

y_pm = Φ_η( Σ_{i=1..n} θ_ij x_i ) + r,  applied for γ = 1, …, L layers and η = 1, …, N_L neurons per layer   (4.1)

Φ_η in the above equation represents the TFs of various approximation complexities, while all the other symbols of the equation are essentially the same as in Section II. The TFs that can be used are, for example, linear, polynomial, logarithmic, exponential, sigmoid, hyperbolic tangent and so on. Simply speaking, any computable mathematical function can be used as a TF. The proposed network construction algorithm is a Transfer Function Selection and Allocation Algorithm.
The algorithm begins with a dataset D defined as

D = {(x_p, y_p)},  p = 1, …, N,  N = Size(D)   (4.2)

The domain and range sets can be separated from D:

for inputs: I = {x_p | p = 1, 2, 3, …, l}   (4.3)
for outputs: Y = {y_pn | p = 1, 2, 3, …, o}   (4.4)

The output set is an expectation of the output Y and the noise:

y_p = E(y_n) + r   (4.5)
y_p = R(x) + r   (4.6)

The problem can be formulated as a search problem over the set of available transfer functions, which in general is infinite:

T = {φ_1, φ_2, φ_3, …}   (4.7)

We can take a partially ordered finite subset of T over the complexity domain, which means that the set is ordered according to the complexity of the TFs, which can be mathematical complexity or comprehensibility. Generally, comprehensibility and mathematical complexity go together.

T_q = {φ_1, φ_2, φ_3, …, φ_q}   (4.8)

We define the TF selection operator Γ, which operates on its four parameters and results in a TF; in the general case it is defined as:

Γ(I, Y, T, ε) = φ_k,  φ_k ∈ T_q,  0 ≤ l ≤ q   (4.9)

Γ_1(x, y, r, n, T_1, S(ε)) = φ_a   (4.10)
Γ_k(x, y, T_k, S(ε)) = φ_k   (4.11)
Γ_{k+1}(x, y, r, n, T_{k+1}, S(ε)) = φ_{k+1},  1 ≤ k + 1 ≤ q   (4.12)

If required, a network simplification can be done on the basis of the following criteria:

if φ_{k+1} − φ_k ≈ 0 then N(φ_{k+1}) = N(φ_k),  φ_{k+1} − φ_k ≥ 0   (4.13)
φ_{k+1} − φ_k ≥ 0,  φ_k − φ_z ≥ 0,  1 ≤ k ≤ q,  φ_z ≥ 0   (4.14)

Thus, an MTFANN contains neurons with transfer functions (TFs) of different approximation complexities, in contrast to the TFs of the same approximation complexity widely used in sigmoid and hyperbolic tangent based FFNs. MTFANN uses an add-node based algorithm that automatically constructs the network, starting with simple transfer functions such as linear ones, and iteratively adding nodes with increasing TF complexity as required. Due to this kind of add-node strategy, MTFANNs are less likely to over-fit the data in comparison with the current sigmoid and hyperbolic tangent FFNs. The usage of mixed TFs permits obtaining input-output relationships in the form of regression equations without the need for any specific knowledge extraction algorithm.
It also enables comparing the influence of the inputs on the outputs by degree of transfer complexity, which cannot be done if all the neurons in the network have the same TFs, as is the case with current FFNs. Thus, MTFANN provides us with a framework to (i) fit the complexity of the system being modeled from lower to higher, (ii) perform straightforward knowledge acquisition to obtain relationships between inputs and outputs in the problem domain, and (iii) achieve increased comprehensibility and thus increased user acceptability of inductive modeling results.
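A minimal sketch of a hidden layer with mixed TFs in the spirit of equation 4.1, and of reading the trained network off directly as a regression equation; the weights and the three-function TF pool are illustrative assumptions:

```python
import math

# An assumed pool of TFs, ordered from simple to complex.
tf_pool = {
    "linear": lambda x: x,
    "sin":    math.sin,
    "exp":    math.exp,
}

# Each hidden neuron carries its own TF name and input weights;
# the network output is the sum of the hidden neuron outputs.
hidden = [("linear", [0.5, -0.2]),
          ("linear", [0.1, 0.3]),
          ("sin",    [0.4, 0.6])]

def mtfann_forward(x):
    """Mixed-TF forward pass: each neuron applies its own TF."""
    return sum(tf_pool[name](sum(w * xi for w, xi in zip(ws, x)))
               for name, ws in hidden)

def as_regression_equation():
    """Mixed TFs make the model directly readable as an equation,
    with no separate knowledge extraction algorithm."""
    terms = []
    for name, ws in hidden:
        s = " + ".join(f"{w}*x{i + 1}" for i, w in enumerate(ws))
        terms.append(s if name == "linear" else f"{name}({s})")
    return "y = " + " + ".join(f"({t})" for t in terms)

print(as_regression_equation())
# y = (0.5*x1 + -0.2*x2) + (0.1*x1 + 0.3*x2) + (sin(0.4*x1 + 0.6*x2))
```

Note that the printed equation is the model: each additive term comes from one hidden neuron, so simpler TFs yield simpler terms.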
V. A NUMERICAL EXAMPLE

To evaluate both the performance of MTFANN and the quality of the extracted knowledge, the Pumadyn family of data sets from DELVE [20] was used. The Pumadyn family of datasets is a realistic simulation of the dynamics of a Puma 560 robot arm. The task is to predict the angular acceleration of one of the robot arm's links. The inputs include angular positions, velocities and torques of the robot arm. These data sets have been specifically generated for the DELVE environment, so the individual data sets span the corners of a cube whose dimensions represent: (a) the number of inputs (8 or 32); (b) the degree of non-linearity (fairly linear or non-linear); (c) the amount of noise in the output (moderate or high). DELVE defines the amount of noise in the output as the fraction of the variance that would remain unexplained if the universal approximator were used on an infinite training set. If this residual variance exceeds 25%, the noise is considered high. A task is defined by DELVE as highly non-linear if a linear method would leave more than 40% residual variance unexplained on an infinite training set. In this work, we have used the Pumadyn-8nh dataset, where 8 stands for 8 inputs and nh stands for high non-linearity and high noise. The MTFANN construction algorithm of the previous section was applied, beginning with a simple linear node. Then another linear node was added. The addition of more linear nodes did not result in further error reduction. Then a linear node with bias was added. The addition of the linear-with-bias node did not result in a significant reduction of error, so it was removed. Next, the first non-linear Sin node was added, which resulted in a significant decrease in error. After the addition of two further Sin nodes, a fourth Sin node did not result in any significant reduction in error and was removed. The addition of an Exponential node also did not result in any significant reduction in error. Thus the final model contained two linear and three Sin neurons.
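The add-node construction just described can be sketched as follows; `train_and_score` is a stand-in for actually training and validating each candidate network, and the TF ordering, tolerance and toy error model are assumptions for illustration:

```python
import random

random.seed(0)

def train_and_score(architecture):
    """Stand-in for training: returns a validation error for an
    architecture given as a list of (tf_name, node_count) pairs.
    Toy error model: more (and more complex) nodes reduce error noisily."""
    base = 1.0 / (1.0 + sum(c * (i + 1)
                            for i, (_, c) in enumerate(architecture)))
    return base + random.uniform(0.0, 0.01)

def construct(tf_order=("linear", "linear_bias", "sin", "exp"),
              tolerance=0.02, max_nodes_per_tf=4):
    """Try TFs in order of increasing complexity; keep a new node only
    if it reduces the error by a meaningful margin, else remove it."""
    arch, best = [], train_and_score([])
    for tf in tf_order:
        arch.append((tf, 0))
        for _ in range(max_nodes_per_tf):
            trial = arch[:-1] + [(tf, arch[-1][1] + 1)]
            err = train_and_score(trial)
            if best - err > tolerance:   # keep node: significant gain
                arch, best = trial, err
            else:                        # remove node, move to next TF
                break
        if arch[-1][1] == 0:
            arch.pop()
    return arch, best

arch, err = construct()
```

With real training in place of the stand-in, the same loop reproduces the keep-or-remove decisions described above (e.g. discarding the linear-with-bias and fourth Sin nodes).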
The performance of MTFANN was also compared with nine other machine learning methods from DELVE. The methods used are me-ese-1 (mixtures of experts trained with early stopping), hme-ese-1 (hierarchical mixtures of experts trained with early stopping), lin-1 (linear regression with least squares), me-el-1 (mixtures of experts trained with Bayesian methods, ensemble learning), mlp-ese-1 (ensembles of multilayer perceptrons using early stopping, with a single hidden layer, hyperbolic tangent transfer functions, linear output neurons and a conjugate gradient learning mechanism), mars3.6-bag-1 (Multivariate Adaptive Regression Splines (MARS) version 3.6 with bagging), hme-grow-1 (hierarchical mixtures of experts trained using growing and early stopping) and hme-el-1 (hierarchical mixtures of experts trained with Bayesian methods, ensemble learning). Table 1 shows that MTFANN performs better than almost all the other machine learning methods on the Puma dataset with high non-linearity and high noise. The only exception is the MLP ensemble. The MTFANN residual error is still lower than that of the MLP ensemble; however, the t-test does not demonstrate a significant difference between the two methods. However, an MLP ensemble consists of multiple neural networks of highly complex neurons (hyperbolic tangents) and thus is very difficult to comprehend, in contrast to the proposed MTFANN, as we demonstrate below.

Method          SSE    t-test
MTFANN          -      N/A
me-ese-1        -      -
hme-ese-1       -      -
lin-1           -      -
me-el-1         -      -
mlp-ese-1       -      -
mars3.6-bag-1   -      -
hme-grow-1      -      -
hme-el-1        -      -

Table 1: Performance of MTFANN against other methods on the Pumadyn-8nh dataset from DELVE

To convert the obtained network for the Pumadyn-8nh problem into the comprehensible regression equation format, it is useful first to divide the network to obtain a regression equation from each regressor (a hidden neuron), and then to add those equations together to obtain the complete representation of the acquired model in regression equation format. For example, to obtain the equation from the 1st linear regressor (the 1st hidden neuron), the network can be represented as in Figure 1.
Figure 1: The network view used to obtain the regression equation for the output of the first hidden neuron.

From Figure 1, the following expression (equation) for the first neuron can be extracted:

y_1 = c_1·x_1 + c_2·x_2 + c_3·x_3 + c_4·x_4 + c_5·x_5 + c_6·x_6 + c_7·x_7 + c_8·x_8

where c_1, …, c_8 are the trained weights of the first hidden neuron. Thus we can obtain a regression equation that links the first component of the angular acceleration of the robot arm with the angular positions of the respective links, x_1, x_2, x_3, and the angular velocities for links 1-3, x_4, x_5, x_6,
respectively, and the torques at joints 1 and 2, x_7 and x_8, respectively. The equation for the second linear hidden neuron can be obtained in the same fashion. The regression equation obtained from the third hidden neuron is a sine-based equation, since the neurons starting from this position contain sine transfer functions:

y_3 = 1.357·SIN(0.1·(c_1·x_1 + c_2·x_2 + c_3·x_3 − 0.24·x_4 + c_5·x_5 + c_6·x_6 + c_7·x_7 + c_8·x_8))

where the c_i are the trained weights of this neuron. Here y_3 represents a third component of the angular acceleration of the robot arm. The factor of 0.1 is a sine factor in the implementation of the sine function in the Stuttgart Neural Network Simulator (SNNS) used in the experiments. Two further sine-based equations can be obtained from the other Sin-based neurons in a similar fashion. The final equation, which is an additive combination of the equations for each hidden neuron, while being long, consists of simple, easily comprehensible elements. Additionally, we have compared MTFANN with a sigmoid MLP of the same complexity (5 hidden neurons), as Table 1 apparently shows that MTFANN is similar in modeling accuracy to an ensemble of MLPs, which has a much lower comprehensibility than MTFANN. The results of these experiments are summarized in Table 2.

Method        SSE    t-test
MTFANN        -      -
Sigmoid MLP   -      -

Table 2: Comparison between MTFANN and an MLP of the same complexity (5 hidden nodes) on the Pumadyn-8nh dataset from DELVE

VI. CONCLUSION

In this paper, we have proposed a novel type of ANN, a Mixed Transfer Function Artificial Neural Network (MTFANN), which aims to improve the complexity fitting and comprehensibility of the most popular type of ANN (MLP, a Multilayer Perceptron). The results of the application of MTFANN to a realistic example of the dynamics of a Puma 560 robot arm with a high degree of non-linearity and high noise have demonstrated its ability to obtain a highly accurate model which is in most cases much more accurate than the more complex machine learning methods used. At the same time, the simple regression equations obtained from the MTFANN model have also shown the effectiveness of the simple knowledge extraction approach presented, which is able to convert the model into a simple and comprehensible format without any losses in modeling accuracy. Finally, it was shown that MTFANN is able to obtain a model with much better accuracy than a comparable MLP model of the same complexity.

VII. REFERENCES

[1] K. Hornik, M. Stinchcombe and H. White, "Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks," Neural Networks, vol. 3, 1990.
[2] M. I. Khan, Y. Frayman and S. Nahavandi, "High-Pressure Die-Casting Process Modeling Using Neural Networks," in J. Kamruzzaman, R. Begg and R. Sarker (eds.), Artificial Neural Networks in Finance and Manufacturing.
[3] I. Gabrijel and A. Dobnikar, "On-line identification and reconstruction of finite automata with generalized neural networks," Neural Networks, vol. 16, pp. 101-120.
[4] I. Rivals and L. Personnaz, "Neural-Network Construction and Selection in Nonlinear Modeling," IEEE Transactions on Neural Networks, vol. 14, no. 4.
[5] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning Internal Representations by Error Propagation," MIT Press.
[6] R. Setiono, W. K. Leow and J. M. Zurada, "Extraction of Rules from Artificial Neural Networks for Nonlinear Regression," IEEE Transactions on Neural Networks, vol. 13, no. 3.
[7] R. Andrews, J. Diederich and A. Tickle, "Survey and critique of techniques for extracting rules from trained artificial neural networks," Knowledge-Based Systems, vol. 8, no. 6, 1995.
[8] B. J. Park, S. K. Oh and W. Pedrycz, "The Hybrid Multi-layer Inference Architecture and Algorithm of FPNN Based on FNN and PNN," 9th IFSA World Congress, 2001.
[9] G. Towell and J. Shavlik, "The extraction of refined rules from knowledge-based neural networks," Machine Learning, vol. 13, no. 1, pp. 71-101, 1993.
[10] A. S. d'Avila Garcez, K. Broda and D. M. Gabbay, "Symbolic knowledge extraction from trained neural networks: A sound approach," Artificial Intelligence, vol. 125, 2001.
[11] R. Setiono and A. Azcarraga, "Generating concise sets of linear regression rules from Artificial Neural Networks," International Journal of Artificial Intelligence Tools, vol. 11, no. 2.
[12] K. Saito and R. Nakano, "Extracting regression rules from neural networks," Neural Networks, vol. 15, 2002.
[13] W. Duch, R. Adamczak and K. Grabczewski, "A new methodology of extraction, optimization and application of crisp and fuzzy logical rules," IEEE Transactions on Neural Networks, vol. 11, no. 2, 2000.
[14] J. M. Besada-Juez and M. A. Sanz-Bobi, "Extraction of Fuzzy Rules Using Sensibility Analysis in a Neural Network," ICANN 2002.
[15] P. Tino and J. Sajda, "Learning and extracting initial Mealy automata with a modular neural network model," Neural Computation, vol. 7, no. 4, 1995.
[16] C. L. Giles, C. B. Miller, D. Chen, H. H. Chen, G. Z. Sun and Y. C. Lee, "Learning and extracting finite state automata with second-order recurrent neural networks," Neural Computation, vol. 4, no. 3, 1992.
[17] M. W. Craven and J. W. Shavlik, "Extracting Tree-Structured Representations of Trained Networks," Advances in Neural Information Processing Systems, vol. 8, MIT Press, 1996.
[18] K. McGarry, S. Wermter and J. MacIntyre, "The Extraction and Comparison of Knowledge from Local Function Networks," International Journal of Computational Intelligence and Applications, vol. 8, no. 10, 2001.
[19] N. K. Treadgold and T. D. Gedeon, "Exploring Constructive Cascade Networks," IEEE Transactions on Neural Networks, vol. 10, no. 6, 1999.
[20] DELVE, http://
More informationCALCULATION of CORONA INCEPTION VOLTAGES in N 2 +SF 6 MIXTURES via GENETIC ALGORITHM
CALCULATION of COONA INCPTION VOLTAGS in N +SF 6 MIXTUS via GNTIC ALGOITHM. Onal G. Kourgoz e-ail: onal@elk.itu.edu.tr e-ail: guven@itu.edu..edu.tr Istanbul Technical University, Faculty of lectric and
More informationEXACT BOUNDS FOR JUDICIOUS PARTITIONS OF GRAPHS
EXACT BOUNDS FOR JUDICIOUS PARTITIONS OF GRAPHS B. BOLLOBÁS1,3 AND A.D. SCOTT,3 Abstract. Edwards showed that every grah of size 1 has a biartite subgrah of size at least / + /8 + 1/64 1/8. We show that
More informationDesign of Linear-Phase Two-Channel FIR Filter Banks with Rational Sampling Factors
R. Bregović and. Saraäi, Design of linear hase two-channel FIR filter bans with rational saling factors, Proc. 3 rd Int. Sy. on Iage and Signal Processing and Analysis, Roe, Italy, Set. 3,. 749 754. Design
More informationSUPPORTING INFORMATION FOR. Mass Spectrometrically-Detected Statistical Aspects of Ligand Populations in Mixed Monolayer Au 25 L 18 Nanoparticles
SUPPORTIG IFORMATIO FOR Mass Sectroetrically-Detected Statistical Asects of Lig Poulations in Mixed Monolayer Au 25 L 8 anoarticles Aala Dass,,a Kennedy Holt, Joseh F. Parer, Stehen W. Feldberg, Royce
More informationThe CIA (consistency in aggregation) approach A new economic approach to elementary indices
The CIA (consistency in aggregation) aroach A new econoic aroach to eleentary indices Dr Jens ehrhoff*, Head of Section Business Cycle and Structural Econoic Statistics * Jens This ehrhoff, resentation
More informationBroadband Synthetic Aperture Matched Field Geoacoustic Inversion
Broadband Snthetic Aerture Matched Field Geoacoustic Inversion PhD Candidate: Bien Aik Tan htt://www.l.ucsd.edu/eole/btan/ PhD Coittee: Prof. Willia Hodgkiss Chair Prof. Peter Gerstoft Co-chair Prof. Willia
More informationDesign and Dynamic Analysis of Drill Pipe Car Lifting Mechanism. Di Wu
th International Conference on Sensors, Measureent and Intelligent Materials (ICSMIM 0) Design and Dnaic Analsis of Drill Pie Car Lifting Mechanis Di Wu Xi an Research Institute,CCTEG,Xi an70077,china
More informationVACUUM chambers have wide applications for a variety of
JOURNAL OF THERMOPHYSICS AND HEAT TRANSFER Vol. 2, No., January March 27 Free Molecular Flows Between Two Plates Equied with Pus Chunei Cai ZONA Technology, Inc., Scottsdale, Arizona 85258 Iain D. Boyd
More informationPattern Recognition and Machine Learning. Artificial Neural networks
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial
More informationPattern Recognition and Machine Learning. Artificial Neural networks
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial
More informationINTERIOR BALLISTIC PRINCIPLE OF HIGH/LOW PRESSURE CHAMBERS IN AUTOMATIC GRENADE LAUNCHERS
XXXX IB08 19th International Syosiu of Ballistics, 7 11 May 001, Interlaken, Switzerland INTERIOR BALLISTIC PRINCIPLE OF HIGH/LOW PRESSURE CHAMBERS IN AUTOMATIC GRENADE LAUNCHERS S. Jaraaz1, D. Micković1,
More informationChange-point detection for recursive Bayesian geoacoustic inversion
Change-oint detection for recursive Baesian geoacoustic inversion Bien Aik Tan Peter Gerstoft Caglar Yardi and Willia S. Hodgkiss Universit of California San Diego htt://www.l.ucsd.edu/eole/btan/ Overview
More informationModi ed Local Whittle Estimator for Long Memory Processes in the Presence of Low Frequency (and Other) Contaminations
Modi ed Local Whittle Estiator for Long Meory Processes in the Presence of Low Frequency (and Other Containations Jie Hou y Boston University Pierre Perron z Boston University March 5, 203; Revised: January
More informationRadial Basis Function Networks: Algorithms
Radial Basis Function Networks: Algorithms Introduction to Neural Networks : Lecture 13 John A. Bullinaria, 2004 1. The RBF Maing 2. The RBF Network Architecture 3. Comutational Power of RBF Networks 4.
More informationNonlinear Active Noise Control Using NARX Model Structure Selection
2009 Aerican Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 10-12, 2009 FrC13.6 Nonlinear Active Noise Control Using NARX Model Structure Selection R. Naoli and L. Piroddi, Meber,
More informationParallelizing Spectrally Regularized Kernel Algorithms
Journal of Machine Learning Research 19 (2018) 1-29 Subitted 11/16; Revised 8/18; Published 8/18 Parallelizing Sectrally Regularized Kernel Algoriths Nicole Mücke nicole.uecke@atheatik.uni-stuttgart.de
More informationarxiv: v4 [math.st] 9 Aug 2017
PARALLELIZING SPECTRAL ALGORITHMS FOR KERNEL LEARNING GILLES BLANCHARD AND NICOLE MÜCKE arxiv:161007487v4 [athst] 9 Aug 2017 Abstract We consider a distributed learning aroach in suervised learning for
More informationLIMITATIONS OF RECEPTRON. XOR Problem The failure of the perceptron to successfully simple problem such as XOR (Minsky and Papert).
LIMITATIONS OF RECEPTRON XOR Problem The failure of the ercetron to successfully simle roblem such as XOR (Minsky and Paert). x y z x y z 0 0 0 0 0 0 Fig. 4. The exclusive-or logic symbol and function
More informationMistiming Performance Analysis of the Energy Detection Based ToA Estimator for MB-OFDM
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS Mistiing Perforance Analysis of the Energy Detection Based ToA Estiator for MB-OFDM Huilin Xu, Liuqing Yang contact author, Y T Jade Morton and Mikel M Miller
More informationApproximation by Piecewise Constants on Convex Partitions
Aroxiation by Piecewise Constants on Convex Partitions Oleg Davydov Noveber 4, 2011 Abstract We show that the saturation order of iecewise constant aroxiation in L nor on convex artitions with N cells
More informationIntelligent Systems: Reasoning and Recognition. Artificial Neural Networks
Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial
More informationAlgorithm Design and Implementation for a Mathematical Model of Factoring Integers
IOSR Journal of Matheatics (IOSR-JM e-iss: 78-578, -ISS: 39-765X. Volue 3, Issue I Ver. VI (Jan. - Feb. 07, PP 37-4 www.iosrjournals.org Algorith Design leentation for a Matheatical Model of Factoring
More informationNew Set of Rotationally Legendre Moment Invariants
New Set of Rotationally Legendre Moent Invariants Khalid M. Hosny Abstract Orthogonal Legendre oents are used in several attern recognition and iage rocessing alications. Translation and scale Legendre
More informationMinimizing Machinery Vibration Transmission in a Lightweight Building using Topology Optimization
1 th World Congress on Structural and Multidiscilinary Otiization May 19-4, 13, Orlando, Florida, USA Miniizing Machinery Vibration ransission in a Lightweight Building using oology Otiization Niels Olhoff,
More informationEnsemble Based on Data Envelopment Analysis
Enseble Based on Data Envelopent Analysis So Young Sohn & Hong Choi Departent of Coputer Science & Industrial Systes Engineering, Yonsei University, Seoul, Korea Tel) 82-2-223-404, Fax) 82-2- 364-7807
More informationExploiting Matrix Symmetries and Physical Symmetries in Matrix Product States and Tensor Trains
Exloiting Matrix Syetries and Physical Syetries in Matrix Product States and Tensor Trains Thoas K Huckle a and Konrad Waldherr a and Thoas Schulte-Herbrüggen b a Technische Universität München, Boltzannstr
More informationOn spinors and their transformation
AMERICAN JOURNAL OF SCIENTIFIC AND INDUSTRIAL RESEARCH, Science Huβ, htt:www.scihub.orgajsir ISSN: 5-69X On sinors and their transforation Anaitra Palit AuthorTeacher, P5 Motijheel Avenue, Flat C,Kolkata
More informationFRESNEL FORMULAE FOR SCATTERING OPERATORS
elecounications and Radio Engineering, 70(9):749-758 (011) MAHEMAICAL MEHODS IN ELECROMAGNEIC HEORY FRESNEL FORMULAE FOR SCAERING OPERAORS I.V. Petrusenko & Yu.K. Sirenko A. Usikov Institute of Radio Physics
More informationBinomial and Poisson Probability Distributions
Binoial and Poisson Probability Distributions There are a few discrete robability distributions that cro u any ties in hysics alications, e.g. QM, SM. Here we consider TWO iortant and related cases, the
More informationOne- and multidimensional Fibonacci search very easy!
One and ultidiensional ibonacci search One and ultidiensional ibonacci search very easy!. Content. Introduction / Preliinary rearks...page. Short descrition of the ibonacci nubers...page 3. Descrition
More informationOn Maximizing the Convergence Rate for Linear Systems With Input Saturation
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 48, NO. 7, JULY 2003 1249 On Maxiizing the Convergence Rate for Linear Systes With Inut Saturation Tingshu Hu, Zongli Lin, Yacov Shaash Abstract In this note,
More informationA Constraint View of IBD Graphs
A Constraint View of IBD Grahs Rina Dechter, Dan Geiger and Elizabeth Thoson Donald Bren School of Inforation and Couter Science University of California, Irvine, CA 92697 1 Introduction The reort rovides
More informationDolph-Chebyshev Pattern Synthesis for Uniform Circular Arrays
1 Dolh-Chebyshev Pattern Synthesis for Unifor Circular Arrays Tin-Ei Wang, Russell Brinkan, and Kenneth R. Baker, Sr. Meber, IEEE Interdiscilinary Telecounications Progra UCB 530, University of Colorado,
More informationAn Investigation into the Effects of Roll Gyradius on Experimental Testing and Numerical Simulation: Troubleshooting Emergent Issues
An Investigation into the Effects of Roll Gyradius on Exeriental esting and Nuerical Siulation: roubleshooting Eergent Issues Edward Dawson Maritie Division Defence Science and echnology Organisation DSO-N-140
More informationCHAPTER 2 THERMODYNAMICS
CHAPER 2 HERMODYNAMICS 2.1 INRODUCION herodynaics is the study of the behavior of systes of atter under the action of external fields such as teerature and ressure. It is used in articular to describe
More informationDISCRETE DUALITY FINITE VOLUME SCHEMES FOR LERAY-LIONS TYPE ELLIPTIC PROBLEMS ON GENERAL 2D MESHES
ISCRETE UALITY FINITE VOLUME SCHEMES FOR LERAY-LIONS TYPE ELLIPTIC PROBLEMS ON GENERAL 2 MESHES BORIS ANREIANOV, FRANCK BOYER AN FLORENCE HUBERT Abstract. iscrete duality finite volue schees on general
More informationSecurity Transaction Differential Equation
Security Transaction Differential Equation A Transaction Volue/Price Probability Wave Model Shi, Leilei This draft: June 1, 4 Abstract Financial arket is a tyical colex syste because it is an oen trading
More informationNumerical Model of the Human Head under Side Impact
J. Basic. Al. Sci. Res., 3(3)47-474, 3 3, TextRoad Publication ISSN 9-434 Journal of Basic and Alied Scientific Research www.textroad.co Nuerical Model of the Huan Head under Side Iact Behrooz Seehri (PHD),
More informationACCURACY OF THE DISCRETE FOURIER TRANSFORM AND THE FAST FOURIER TRANSFORM
SIAM J. SCI. COMPUT. c 1996 Society for Industrial and Alied Matheatics Vol. 17, o. 5,. 1150 1166, Seteber 1996 008 ACCURACY OF THE DISCRETE FOURIER TRASFORM AD THE FAST FOURIER TRASFORM JAMES C. SCHATZMA
More informationPhase field modelling of microstructural evolution using the Cahn-Hilliard equation: A report to accompany CH-muSE
Phase field odelling of icrostructural evolution using the Cahn-Hilliard equation: A reort to accoany CH-uSE 1 The Cahn-Hilliard equation Let us consider a binary alloy of average coosition c 0 occuying
More informationLecture 3: October 2, 2017
Inforation and Coding Theory Autun 2017 Lecturer: Madhur Tulsiani Lecture 3: October 2, 2017 1 Shearer s lea and alications In the revious lecture, we saw the following stateent of Shearer s lea. Lea 1.1
More informationCALIFORNIA INSTITUTE OF TECHNOLOGY
CALIFORNIA INSIUE OF ECHNOLOGY Control and Dynaical Systes Course Project CDS 270 Instructor: Eugene Lavretsky, eugene.lavretsky@boeing.co Sring 2007 Project Outline: his roject consists of two flight
More informationModeling soft Scandinavian clay behavior using the asymptotic state
NGM 216 Reyjavi Proceedings of the 17 th Nordic Geotechnical Meeting Challenges in Nordic Geotechnic 25 th 28 th of May Modeling soft Scandinavian clay behavior using the asytotic state Jon A. Rønningen
More informationInput-Output (I/O) Stability. -Stability of a System
Inut-Outut (I/O) Stability -Stability of a Syste Outline: Introduction White Boxes and Black Boxes Inut-Outut Descrition Foralization of the Inut-Outut View Signals and Signal Saces he Notions of Gain
More informationMesopic Visual Performance of Cockpit s Interior based on Artificial Neural Network
Mesoic Visual Perforance of Cockit s Interior based on Artificial Neural Network Dongdong WEI Fudan University Det of Mechanics & Science Engineering Shanghai, China Abstract The abient light of cockit
More informationFeedback-error control
Chater 4 Feedback-error control 4.1 Introduction This chater exlains the feedback-error (FBE) control scheme originally described by Kawato [, 87, 8]. FBE is a widely used neural network based controller
More informationQuadratic Reciprocity. As in the previous notes, we consider the Legendre Symbol, defined by
Math 0 Sring 01 Quadratic Recirocity As in the revious notes we consider the Legendre Sybol defined by $ ˆa & 0 if a 1 if a is a quadratic residue odulo. % 1 if a is a quadratic non residue We also had
More informationA GENERAL THEORY OF PARTICLE FILTERS IN HIDDEN MARKOV MODELS AND SOME APPLICATIONS. By Hock Peng Chan National University of Singapore and
Subitted to the Annals of Statistics A GENERAL THEORY OF PARTICLE FILTERS IN HIDDEN MARKOV MODELS AND SOME APPLICATIONS By Hock Peng Chan National University of Singaore and By Tze Leung Lai Stanford University
More informationPattern Classification using Simplified Neural Networks with Pruning Algorithm
Pattern Classification using Siplified Neural Networks with Pruning Algorith S. M. Karuzzaan 1 Ahed Ryadh Hasan 2 Abstract: In recent years, any neural network odels have been proposed for pattern classification,
More informationThe Number of Information Bits Related to the Minimum Quantum and Gravitational Masses in a Vacuum Dominated Universe
Wilfrid Laurier University Scholars Coons @ Laurier Physics and Couter Science Faculty Publications Physics and Couter Science 01 The uber of Inforation Bits Related to the Miniu Quantu and Gravitational
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volue 19, 2013 htt://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Physical Acoustics Session 1PAb: Acoustics in Microfluidics and for Particle
More information#A62 INTEGERS 16 (2016) REPRESENTATION OF INTEGERS BY TERNARY QUADRATIC FORMS: A GEOMETRIC APPROACH
#A6 INTEGERS 16 (016) REPRESENTATION OF INTEGERS BY TERNARY QUADRATIC FORMS: A GEOMETRIC APPROACH Gabriel Durha Deartent of Matheatics, University of Georgia, Athens, Georgia gjdurha@ugaedu Received: 9/11/15,
More informationSome simple continued fraction expansions for an in nite product Part 1. Peter Bala, January ax 4n+3 1 ax 4n+1. (a; x) =
Soe sile continued fraction exansions for an in nite roduct Part. Introduction The in nite roduct Peter Bala, January 3 (a; x) = Y ax 4n+3 ax 4n+ converges for arbitrary colex a rovided jxj
More informationThe Generalized Integer Gamma DistributionA Basis for Distributions in Multivariate Statistics
Journal of Multivariate Analysis 64, 8610 (1998) Article No. MV971710 The Generalized Inteer Gaa DistributionA Basis for Distributions in Multivariate Statistics Carlos A. Coelho Universidade Te cnica
More informationUniform Deviation Bounds for k-means Clustering
Unifor Deviation Bounds for k-means Clustering Olivier Bache Mario Lucic S Haed Hassani Andreas Krause Abstract Unifor deviation bounds liit the difference between a odel s exected loss and its loss on
More informationEdinburgh Research Explorer
Edinburgh Research Exlorer ALMOST-ORTHOGONALITY IN THE SCHATTEN-VON NEUMANN CLASSES Citation for ublished version: Carbery, A 2009, 'ALMOST-ORTHOGONALITY IN THE SCHATTEN-VON NEUMANN CLASSES' Journal of
More informationSHOUYU DU AND ZHANLE DU
THERE ARE INFINITELY MANY COUSIN PRIMES arxiv:ath/009v athgm 4 Oct 00 SHOUYU DU AND ZHANLE DU Abstract We roved that there are infinitely any cousin ries Introduction If c and c + 4 are both ries, then
More informationOptimal Adaptive Computations in the Jaffard Algebra and Localized Frames
www.oeaw.ac.at Otial Adative Coutations in the Jaffard Algebra and Localized Fraes M. Fornasier, K. Gröchenig RICAM-Reort 2006-28 www.rica.oeaw.ac.at Otial Adative Coutations in the Jaffard Algebra and
More informationJ.B. LASSERRE AND E.S. ZERON
L -NORMS, LOG-BARRIERS AND CRAMER TRANSFORM IN OPTIMIZATION J.B. LASSERRE AND E.S. ZERON Abstract. We show that the Lalace aroxiation of a sureu by L -nors has interesting consequences in otiization. For
More information1. (2.5.1) So, the number of moles, n, contained in a sample of any substance is equal N n, (2.5.2)
Lecture.5. Ideal gas law We have already discussed general rinciles of classical therodynaics. Classical therodynaics is a acroscoic science which describes hysical systes by eans of acroscoic variables,
More informationAyşe Alaca, Şaban Alaca and Kenneth S. Williams School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada. Abstract.
Journal of Cobinatorics and Nuber Theory Volue 6, Nuber,. 17 15 ISSN: 194-5600 c Nova Science Publishers, Inc. DOUBLE GAUSS SUMS Ayşe Alaca, Şaban Alaca and Kenneth S. Willias School of Matheatics and
More informationMultilayer Perceptron Neural Network (MLPs) For Analyzing the Properties of Jordan Oil Shale
World Alied Sciences Journal 5 (5): 546-552, 2008 ISSN 1818-4952 IDOSI Publications, 2008 Multilayer Percetron Neural Network (MLPs) For Analyzing the Proerties of Jordan Oil Shale 1 Jamal M. Nazzal, 2
More informationFundamentals of Astrodynamics and Applications 3 rd Ed
Fundaentals of Astrodynaics and Alications 3 rd Ed Errata June 0, 0 This listing is an on-going docuent of corrections and clarifications encountered in the book. I areciate any coents and questions you
More informationarxiv: v2 [math.st] 13 Feb 2018
A data-deendent weighted LASSO under Poisson noise arxiv:1509.08892v2 [ath.st] 13 Feb 2018 Xin J. Hunt, SAS Institute Inc., Cary, NC USA Patricia Reynaud-Bouret University of Côte d Azur, CNRS, LJAD, Nice,
More informationDesign of Robust Reference Input Tracker via Delayed Feedback Control Method
Design of Robust Reference Inut Tracker via Delayed Feedback Control Method Zahed Dastan, Mahsan Tavakoli-Kakhki * Faculty of Electrical Engineering, KN Toosi University of Technology, Tehran, Iran * Eail:
More informationRecent Developments in Multilayer Perceptron Neural Networks
Recent Develoments in Multilayer Percetron eural etworks Walter H. Delashmit Lockheed Martin Missiles and Fire Control Dallas, Texas 75265 walter.delashmit@lmco.com walter.delashmit@verizon.net Michael
More informationQualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science
Proceedings of the 6th WSEAS International Conference on Applied Coputer Science, Tenerife, Canary Islands, Spain, Deceber 16-18, 2006 183 Qualitative Modelling of Tie Series Using Self-Organizing Maps:
More informationSupport Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization
Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering
More informationInternational Journal of Industrial Engineering Computations
International Journal of Industrial Engineering Coutations 3 (0 695 70 Contents lists available at GrowingScience International Journal of Industrial Engineering Coutations oeage: wwwgrowingscienceco/ijiec
More informationReview from last time Time Series Analysis, Fall 2007 Professor Anna Mikusheva Paul Schrimpf, scribe October 23, 2007.
Review fro last tie 4384 ie Series Analsis, Fall 007 Professor Anna Mikusheva Paul Schrif, scribe October 3, 007 Lecture 3 Unit Roots Review fro last tie Let t be a rando walk t = ρ t + ɛ t, ρ = where
More informationOptimization of Dynamic Reactive Power Sources Using Mesh Adaptive Direct Search
Acceted by IET Generation, Transission & Distribution on 6/2/207 Otiization of Dynaic Reactive Power Sources Using Mesh Adative Direct Search Weihong Huang, Kai Sun,*, Junjian Qi 2, Jiaxin Ning 3 Electrical
More informationAnalysis of low rank matrix recovery via Mendelson s small ball method
Analysis of low rank atrix recovery via Mendelson s sall ball ethod Maryia Kabanava Chair for Matheatics C (Analysis) ontdriesch 0 kabanava@athc.rwth-aachen.de Holger Rauhut Chair for Matheatics C (Analysis)
More informationComparative Design of Radial and Transverse Flux PM Generators for Direct-Drive Wind Turbines
Paer ID 1325 Coarative Design of Radial and Transverse Flux PM Generators for Direct-Drive Wind Turbines Deok-je Bang, Henk Polinder, Ghanshya Shrestha and Jan Abraha Ferreira Electrical Power Processing
More informationPattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition
More informationE0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis
E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds
More informationA Subspace Iteration for Calculating a Cluster of Exterior Eigenvalues
Advances in Linear Algebra & Matrix heory 05 5 76-89 Published Online Seteber 05 in SciRes htt://wwwscirorg/ournal/alat htt://dxdoiorg/0436/alat0553008 A Subsace Iteration for Calculating a Cluster of
More informationUniform Deviation Bounds for k-means Clustering
Unifor Deviation Bounds for k-means Clustering Olivier Bache Mario Lucic S. Haed Hassani Andreas Krause Abstract Unifor deviation bounds liit the difference between a odel s exected loss and its loss on
More informationAn Iterative Substructuring Approach to the Calculation of. Eigensolution and Eigensensitivity
his is the Pre-Published Version. An Iterative Substructuring Aroach to the Calculation of Eigensolution and Eigensensitivity Shun Weng PhD student, Deartent of Civil and Structural Engineering, he Hong
More informationAdaptive Super Twisting Controller for a Quadrotor UAV
Prerint - final, definitive version available at htt://www.ieeelore.co/ acceted for ICRA26 Ma 26 Adative Suer Twisting Controller for a Quadrotor UAV Sujit Rajaa, Carlo Masone, Heinrich H. Bülthoff,2 and
More informationPROCEEDINGS OF THE YEREVAN STATE UNIVERSITY
PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 13,,. 8 14 M a t h e a t i c s ON BOUNDEDNESS OF A CLASS OF FIRST ORDER LINEAR DIFFERENTIAL OPERATORS IN THE SPACE OF n 1)-DIMENSIONALLY
More informationSPEED CONTROL OF PERMANENT MAGNET SYNCHRONOUS MOTOR USING FEEDBACK LINEARIZATION METHOD
Inian ournal of Funaental an Alie Life Sciences ISSN: 645 (Online) An Oen Access, Online International ournal Available at www.cibtech.org/s.e/jls/05/0/jls.ht 05 Vol.5 (S),. 9-98/Iza an Ghanbari SPEED
More informationBallistic Pendulum. Introduction
Ballistic Pendulu Introduction The revious two activities in this odule have shown us the iortance of conservation laws. These laws rovide extra tools that allow us to analyze certain asects of hysical
More informationSolving Poisson equations by boundary knot method
International Worksho on MeshFree Methos 23 1 Solving Poisson equations by bounary knot etho W. Chen 1 Abstract: The bounary knot etho (BKM) is a recent eshfree bounary-tye raial basis function (RBF) collocation
More informationCAUCHY PROBLEM FOR TECHNOLOGICAL CUMULATIVE CHARGE DESIGN. Christo Christov, Svetozar Botev
SENS'6 Second Scientific Conference with International Particiation SPACE, ECOLOGY, NANOTECHNOLOGY, SAFETY 4 6 June 6, Varna, Bulgaria ----------------------------------------------------------------------------------------------------------------------------------
More informationAnomalous heat capacity for nematic MBBA near clearing point
Journal of Physics: Conference Series Anoalous heat caacity for neatic MA near clearing oint To cite this article: D A Lukashenko and M Khasanov J. Phys.: Conf. Ser. 394 View the article online for udates
More information