FEEDFORWARD NETWORKS WITH FUZZY SIGNALS


R. Bělohlávek
Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, Bráfova, Ostrava, Czech Republic, belohlav@osu.cz
and
Department of Computer Science, Technical University of Ostrava, tř. 17. listopadu, Ostrava-Poruba, Czech Republic

Supported by grant No. 201/96/0985 of the GAČR and partially by IGS of the University of Ostrava No. 03/97 029/98.

Abstract. The paper discusses feedforward neural networks with fuzzy signals. We analyze the feedforward phase and show some properties of the output function. Then we present a backpropagation-like adaptation algorithm for the crisp weights, thresholds and neuron slopes of the multilayer network with sigmoidal transfer functions. We provide theoretical justification for the adaptation formulas. The results are of a general nature and, together with the presented approach, can be used for other types of feedforward networks. Applications of the presented feedforward networks are also proposed and discussed.

Key words: feedforward neural networks, fuzzy sets, extension principle, adaptation, backpropagation.

1 Introduction

Fuzzy methods and neural networks constitute an essential part of soft computing, a discipline aiming at methods capable of dealing with complex humanistic systems. Among the most important features of neural networks we recognize, first of all, the massively parallel architecture and dynamics, which is inspired by the structure of the human brain, the adaptation capabilities, and the fault tolerance. On the other hand, fuzzy sets represent a useful model of vagueness, a phenomenon playing a crucial role in the way humans regard the world. The significant properties of fuzzy methods are the efficient processing of the modeled indeterminacy, especially the capability of modeling and further processing the semantics of natural language. This results in human-friendly models: the models' inputs and outputs can be expressed or interpreted on the level of natural language or have other clearly understandable meaning.

Both neural networks and fuzzy methods are closely related to human cognition, however on different levels. Neural networks concern the microlevel, the level of the brain. Fuzzy methods concern the macrolevel, the level of mental phenomena. The levels (micro and macro) are strongly connected: the macrolevel seems to be implemented in the microlevel. The analysis of this combination of the levels is fundamental for our understanding of human cognition. The study of the analogous combination of the soft-computing counterparts of the levels, i.e. neural networks and fuzzy methods, is therefore worth pursuing.

Not surprisingly, neural and fuzzy models can be combined in many ways. The resulting models can be built up of separate neural and fuzzy modules performing relatively independent tasks; they can be based on neural models supplied with some features of fuzzy models (esp. the processing of indeterminate information) or, conversely, based on fuzzy models supplied with features of neural

networks (esp. adaptability). An overview of various kinds of these models, called neuro-fuzzy or fuzzy-neuro models, can be found e.g. in [4, 7, 16]. The combinations of neuro and fuzzy models presented in the literature are often ad hoc approaches without a correct mathematical treatment. To bring some order into the development it is necessary to delineate and to pursue some paradigms of the combination. A useful paradigm is represented by Zadeh's extension principle (EP) [9, 12, 14]. The combination based on the EP was probably first presented in [4]. Our paper deals with this approach and is organized as follows. In Section 2 we propose the architecture and deal with the feedforward phase of our model. In Section 3 we propose an adaptation procedure and derive the adaptation formulas. Possible applications are discussed in Section 4. An example demonstrating the learning and interpolation capabilities is presented in Section 5. Section 6 briefly summarizes the results.

2 Architecture and feedforward phase

By a feedforward (neural) network [1] one usually understands a networked computational scheme composed of several simple computational units which are connected in such a way that the graph of information flow contains no loops (the information is fed forward). The simple units process information in a way analogous to that of biological neurons; the adjective "neural" is due to the analogy with biological neural networks. Feedforward networks take their inputs, process them in the feedforward phase, and return the outputs. The inputs to the network are usually real numbers. In what follows we discuss the case where the inputs and the outputs are not real numbers but fuzzy sets in the real line. The reasons for and advantages of having fuzzy sets instead of crisp real numbers are discussed in Section 4.

There are several particular types of feedforward networks, see e.g. [1]. Probably the most popular one is the multilayer feedforward network with sigmoidal transfer functions, also called the backpropagation net because of its adaptation procedure [17]. This network will be the subject of our investigation. However, our results, which concern the general case of feedforward networks, will be formulated as generally as necessary for application to other types of feedforward networks.

We now briefly describe the multilayer feedforward neural net. We suppose the $j$-th neuron of the $l$-th layer to have the sigmoidal transfer function

$y_j^{(l)} = \frac{1}{1 + e^{-\lambda_j^{(l)} z_j^{(l)}}}$,

where $\lambda_j^{(l)}$ is the slope of $y_j^{(l)}$. The input $z_j^{(l)}$ to the neuron is the thresholded weighted sum of the outputs of the preceding layer, i.e.

$z_j^{(l)} = \sum_{i=1}^{m_{l-1}} w_{ji}^{(l)} y_i^{(l-1)} - \theta_j^{(l)}$.

Here $w_{ji}^{(l)}$ denotes the weight of the connection leading from the $i$-th neuron of the $(l-1)$-th layer to the $j$-th neuron of the $l$-th layer, and $\theta_j^{(l)}$ is the threshold. A part of the network is depicted in Fig. 1. Let $m_l$ and $r$ denote the number of neurons in the $l$-th layer and the number of layers in the network, respectively. The inputs to the network are formally the outputs of the $0$-th layer. Such a network performs a mapping $F: \mathbb{R}^{m_0} \to (0,1)^{m_r}$. A suitable transformation can guarantee that the outputs are from $\mathbb{R}^{m_r}$.

From now on we suppose the inputs to the neural network to be fuzzy sets in $\mathbb{R}$ instead of real numbers. One can then use the EP to extend the operations performed by the neural network to fuzzy sets. We apply the EP to each operation performed by the net. To be able to cope effectively with the EP we restrict ourselves to the class $F_{CI}(\mathbb{R})$ of normal convex fuzzy sets in $\mathbb{R}$ with compact $\alpha$-cuts for $\alpha \in (0,1]$.
Recall that a fuzzy set $A$ is normal if there is an $x$ with $A(x) = 1$; the $\alpha$-cut of $A$ is the set ${}^{\alpha}A = \{x;\ A(x) \geq \alpha\}$; $A$ is convex if each of its $\alpha$-cuts is convex, i.e. an interval in our case. Each fuzzy set $x$ may thus be represented by the collection $\{{}^{\alpha}x;\ \alpha \in (0,1]\}$ of its $\alpha$-cuts.
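To make this representation concrete, the following small Python sketch (not part of the original paper; the triangular shape and the $\alpha$-grid are illustrative assumptions) handles a fuzzy number from $F_{CI}(\mathbb{R})$ as a finite family of nested $\alpha$-cut intervals. The later sketches operate on such (left, right) endpoint pairs, one pair per $\alpha$-level.

```python
# A fuzzy number from F_CI(R) handled as a finite family of nested
# alpha-cut intervals.  The triangular shape and the alpha grid below are
# illustrative choices, not prescribed by the paper.

def alpha_cuts_triangular(a, b, c, levels):
    """Alpha-cuts of the triangular fuzzy number with support [a, c] and
    kernel {b}: for each alpha the cut is the closed interval
    [a + alpha*(b - a), c - alpha*(c - b)]."""
    return {alpha: (a + alpha * (b - a), c - alpha * (c - b)) for alpha in levels}

if __name__ == "__main__":
    levels = [0.25, 0.5, 0.75, 1.0]                      # discretisation of (0, 1]
    cuts = alpha_cuts_triangular(4.0, 5.0, 6.0, levels)  # "about 5"
    for alpha, (lo, hi) in sorted(cuts.items()):
        print(f"alpha = {alpha:4.2f}:  [{lo:.2f}, {hi:.2f}]")
```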

Figure 1: Part of the underlying neural network (neuron $j$ of layer $l$ with slope $\lambda_j^{(l)}$, threshold $\theta_j^{(l)}$, net input $z_j^{(l)}$, and weights $w_{ji}^{(l)}$ from the outputs $y_i^{(l-1)}$).

Under our conditions, for a continuous $f: \mathbb{R}^n \to \mathbb{R}$ we have for the extension $\tilde{f}$ obtained by the EP

${}^{\alpha}\tilde{f}(x_1, \ldots, x_n) = f({}^{\alpha}x_1, \ldots, {}^{\alpha}x_n)$ for $x_i \in F_{CI}(\mathbb{R})$,

i.e. the operations on fuzzy sets can be performed by interval operations, see e.g. [2]. Furthermore, if $f$ is monotone, the interval operations are easy to compute.

We now describe the feedforward phase. The network dynamics is the classical one of the feedforward neural network: after a tuple of signals is presented, the network transfers the signals layer by layer. For $x_i \in F_{CI}(\mathbb{R})$ we denote ${}^{\alpha}x_i = [{}^{\alpha}x_{Li}, {}^{\alpha}x_{Ri}]$. Suppose we have the output signals $y_i^{(l-1)} \in F_{CI}(\mathbb{R})$ of all the neurons of the $(l-1)$-th layer at our disposal. As mentioned above, we may represent $y_i^{(l-1)}$ by a collection of intervals $[{}^{\alpha}y_{Li}^{(l-1)}, {}^{\alpha}y_{Ri}^{(l-1)}]$. The total input to the $j$-th neuron in the $l$-th layer is then given by

$[{}^{\alpha}z_{Lj}^{(l)}, {}^{\alpha}z_{Rj}^{(l)}] = \sum_{i=1}^{m_{l-1}} w_{ji}^{(l)} [{}^{\alpha}y_{Li}^{(l-1)}, {}^{\alpha}y_{Ri}^{(l-1)}] - \theta_j^{(l)}$,

which leads by interval arithmetic to

${}^{\alpha}z_{Lj}^{(l)} = \sum_{i:\ w_{ji}^{(l)} \geq 0} w_{ji}^{(l)}\, {}^{\alpha}y_{Li}^{(l-1)} + \sum_{i:\ w_{ji}^{(l)} < 0} w_{ji}^{(l)}\, {}^{\alpha}y_{Ri}^{(l-1)} - \theta_j^{(l)} \quad (1)$

${}^{\alpha}z_{Rj}^{(l)} = \sum_{i:\ w_{ji}^{(l)} \geq 0} w_{ji}^{(l)}\, {}^{\alpha}y_{Ri}^{(l-1)} + \sum_{i:\ w_{ji}^{(l)} < 0} w_{ji}^{(l)}\, {}^{\alpha}y_{Li}^{(l-1)} - \theta_j^{(l)}. \quad (2)$

As the transfer function $y(z) = \frac{1}{1 + e^{-\lambda_j^{(l)} z}}$ is increasing for $\lambda_j^{(l)} > 0$ and decreasing for $\lambda_j^{(l)} < 0$ (note that for $\lambda_j^{(l)} = 0$ we have $y(z) = \frac{1}{2}$), we obtain

$[{}^{\alpha}y_{Lj}^{(l)}, {}^{\alpha}y_{Rj}^{(l)}] = \left[\frac{1}{1 + e^{-\lambda_j^{(l)}\, {}^{\alpha}z_{Lj}^{(l)}}},\ \frac{1}{1 + e^{-\lambda_j^{(l)}\, {}^{\alpha}z_{Rj}^{(l)}}}\right]$ for $\lambda_j^{(l)} \geq 0$

and

$[{}^{\alpha}y_{Lj}^{(l)}, {}^{\alpha}y_{Rj}^{(l)}] = \left[\frac{1}{1 + e^{-\lambda_j^{(l)}\, {}^{\alpha}z_{Rj}^{(l)}}},\ \frac{1}{1 + e^{-\lambda_j^{(l)}\, {}^{\alpha}z_{Lj}^{(l)}}}\right]$ for $\lambda_j^{(l)} < 0$.
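A minimal sketch of one layer of this feedforward phase for a single $\alpha$-cut, implementing equations (1) and (2) and the monotone sigmoid step; the function and variable names, as well as the example values, are chosen here for illustration and do not come from the paper.

```python
import math

def layer_forward_interval(y_prev, W, theta, lam):
    """One layer of the feedforward phase for a single alpha-cut.

    y_prev : list of (yL, yR) output intervals of the previous layer
    W      : W[j][i] = crisp weight from neuron i to neuron j
    theta  : theta[j] = crisp threshold of neuron j
    lam    : lam[j]   = crisp slope of neuron j
    Returns the list of (yL, yR) output intervals of this layer.
    """
    sig = lambda l, z: 1.0 / (1.0 + math.exp(-l * z))
    out = []
    for w_row, th, l in zip(W, theta, lam):
        # Equations (1) and (2): a positive weight pairs with the same endpoint,
        # a negative weight swaps the endpoints.
        zL = sum(w * (yL if w >= 0 else yR) for w, (yL, yR) in zip(w_row, y_prev)) - th
        zR = sum(w * (yR if w >= 0 else yL) for w, (yL, yR) in zip(w_row, y_prev)) - th
        # The sigmoid is increasing for lambda > 0 and decreasing for lambda < 0,
        # so the images of the endpoints are ordered accordingly.
        out.append((sig(l, zL), sig(l, zR)) if l >= 0 else (sig(l, zR), sig(l, zL)))
    return out

# Example: two interval inputs fed through a layer of two neurons.
y0 = [(0.2, 0.4), (0.5, 0.9)]
W = [[1.0, -2.0], [0.5, 0.3]]
print(layer_forward_interval(y0, W, theta=[0.1, -0.2], lam=[1.0, 2.0]))
```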

The described signal transfer is performed successively for the layers $l = 1, \ldots, r$.

Let us now return to our problem from the functional point of view. For a given (regular) neural network, which represents a mapping $F: \mathbb{R}^{m_0} \to (0,1)^{m_r}$, we have obtained a network with fuzzy signals which represents a mapping $\hat{F}: F_{CI}(\mathbb{R})^{m_0} \to F_{CI}((0,1))^{m_r}$. A natural question at this point is what is the relationship of $\hat{F}$ to the function $\tilde{F}$ obtained from $F$ by the EP. We will answer this question in general. We face the task of extending the function $F$, which is in a certain way composed of several simple functions. Extension by the EP can be done in two ways: extend the whole composite function (in our case to $\tilde{F}$), or extend the simple constituent functions and compose the extensions (in our case to $\hat{F}$). In the two following statements we suppose that the structure of truth values forms a complete lattice, of which the usual interval $[0,1]$ is a special case [14]. We need the following lemma.

Lemma 1. Let $L = \langle L, \leq \rangle$ be a complete lattice, $K_i \subseteq L$ for $i \in I$, and $K \subseteq L$ such that $\bigcup\{K_i;\ i \in I\} = K$. Then it holds that $\bigvee\{\bigvee K_i;\ i \in I\} = \bigvee K$.

Proof. Denote $a = \bigvee K$, $a_i = \bigvee K_i$. We have to prove $\bigvee\{a_i;\ i \in I\} = a$. $K_i \subseteq K$ implies $a_i \leq a$ for all $i \in I$, hence $\bigvee\{a_i;\ i \in I\} \leq a$. Conversely, for each $b \in K$ there is an $i$ such that $b \in K_i$ and thus $b \leq a_i \leq \bigvee\{a_i;\ i \in I\}$. Hence $a = \bigvee\{b;\ b \in K\} \leq \bigvee\{a_i;\ i \in I\}$, and we obtain $\bigvee\{a_i;\ i \in I\} = a$.

The following theorem shows the relationship between the two ways of extending a composite function.

Theorem 2. Let $g: Y^m \to Z$ and $f_i: X^k \to Y$, $i = 1, \ldots, m$, be functions and let $A_1, \ldots, A_k$ be fuzzy sets in $X$. Then for the corresponding extensions it holds that

$\widetilde{g(f_1, \ldots, f_m)}(A_1, \ldots, A_k) \subseteq \tilde{g}(\tilde{f}_1, \ldots, \tilde{f}_m)(A_1, \ldots, A_k)$.

If, moreover, $m = 1$, then even equality holds.
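The inclusion of Theorem 2 can be seen already on intervals, i.e. on single $\alpha$-cuts. The following sketch uses $f_1(x) = f_2(x) = x$ and $g(y_1, y_2) = y_1 - y_2$, chosen purely as an example: composing the extensions treats the two occurrences of the argument as independent and yields a wider interval than extending the whole composite function.

```python
# Illustration of Theorem 2 on interval (alpha-cut) arguments, with
# f1(x) = x, f2(x) = x and g(y1, y2) = y1 - y2 chosen purely as an example.

def extend_g_on_intervals(y1, y2):
    """Interval extension of g(y1, y2) = y1 - y2."""
    (aL, aR), (bL, bR) = y1, y2
    return (aL - bR, aR - bL)

A = (0.0, 1.0)

# Composing the extensions: g~(f1~(A), f2~(A)) -- y1 and y2 are treated as
# independent arguments, which widens the result.
composed = extend_g_on_intervals(A, A)

# Extending the whole composite function h(x) = g(f1(x), f2(x)) = x - x = 0:
whole = (0.0, 0.0)

print("composed extensions:", composed)   # (-1.0, 1.0)
print("whole composite:    ", whole)      # (0.0, 0.0) -- contained in the former
```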

Proof. By the extension principle we have

$\tilde{g}(\tilde{f}_1, \ldots, \tilde{f}_m)(A_1, \ldots, A_k)(z) = \tilde{g}(\tilde{f}_1(A_1, \ldots, A_k), \ldots, \tilde{f}_m(A_1, \ldots, A_k))(z)$
$= \bigvee\{(\tilde{f}_1(A_1, \ldots, A_k) \wedge \cdots \wedge \tilde{f}_m(A_1, \ldots, A_k))(y_1, \ldots, y_m);\ g(y_1, \ldots, y_m) = z\}$
$= \bigvee\{\bigwedge_{i=1}^{m} \bigvee\{A_1(x_1) \wedge \cdots \wedge A_k(x_k);\ f_i(x_1, \ldots, x_k) = y_i\};\ g(y_1, \ldots, y_m) = z\} =: a$

and

$\widetilde{g(f_1, \ldots, f_m)}(A_1, \ldots, A_k)(z) = \bigvee\{A_1(x_1) \wedge \cdots \wedge A_k(x_k);\ g(f_1, \ldots, f_m)(x_1, \ldots, x_k) = z\} =: b$.

Denote

$K = \{A_1(x_1) \wedge \cdots \wedge A_k(x_k);\ g(f_1, \ldots, f_m)(x_1, \ldots, x_k) = z\}$,
$K^i_y = \{A_1(x_1) \wedge \cdots \wedge A_k(x_k);\ y = (y_1, \ldots, y_m) \in g^{-1}(z),\ f_i(x_1, \ldots, x_k) = y_i\}$.

We have $a = \bigvee_{y \in g^{-1}(z)} \bigwedge_{i=1}^{m} \bigvee K^i_y$ and $b = \bigvee K$. Each element of $K$, say $A_1(x_1) \wedge \cdots \wedge A_k(x_k)$ with $g(f_1, \ldots, f_m)(x_1, \ldots, x_k) = z$, belongs to $K^i_y$ for every $i$ when we take $y_i = f_i(x_1, \ldots, x_k)$; by well-known lattice properties it is therefore bounded above by $\bigwedge_{i=1}^{m} \bigvee K^i_y$ for this $y$. Hence

$b = \bigvee K \leq \bigvee_{y \in g^{-1}(z)} \bigwedge_{i=1}^{m} \bigvee K^i_y = a$,

which proves the first part. For $m = 1$ the assertion follows immediately from Lemma 1, putting $K_i = K^1_y$ (note that in this case $K = \bigcup\{K^1_y;\ y \in g^{-1}(z)\}$).

From Theorem 2 we get, by induction over the number of layers, the following statement.

Corollary 3. Let $x_1, \ldots, x_{m_0}$ be fuzzy sets in $\mathbb{R}$ and $\tilde{F}(x_1, \ldots, x_{m_0}) = (y_1, \ldots, y_{m_r})$, $\hat{F}(x_1, \ldots, x_{m_0}) = (\hat{y}_1, \ldots, \hat{y}_{m_r})$, where $\tilde{F}$ and $\hat{F}$ are the functions described above. Then for $i = 1, \ldots, m_r$ it holds that $y_i \subseteq \hat{y}_i$.

It can easily be demonstrated [3] that equality of the output fuzzy sets does not hold in general, even in the case of our networks. In other words, the output fuzzy sets of $\hat{F}$ are wider than the output fuzzy sets of $\tilde{F}$. However, for crisp numbers taken as singletons the functions $F$, $\tilde{F}$ and $\hat{F}$ coincide, which has a positive consequence: if the inputs have one-element kernels (which is often the case), then the kernels of the outputs of $\tilde{F}$ and $\hat{F}$ coincide and are also one-element.

We now proceed to the adaptation of our network. Given a training set

$T = \{(X_p, O_p) = ((x_1, \ldots, x_n), (o_1, \ldots, o_m)) \in F_{CI}(\mathbb{R})^n \times F_{CI}((0,1))^m;\ p \in P\}$,

we want to design a network with $m_0 = n$, $m_r = m$ and adapt it to $T$. Adaptation in the case of feedforward networks means the design of a network which approximately represents the mapping partially prescribed by $T$. Hence we want $\hat{F}$ to satisfy $\hat{F}(X_p) \approx O_p$ for all $p \in P$, where $\approx$ is some further unspecified relation "approximately equals". We omit the phase of determining the number of layers and of neurons in the layers and suppose this has been done. Our task is to find appropriate parameters. We search not only for the weights but also for the thresholds and slopes, see [13]. The adaptation leads to the minimization of an appropriate error function, for which we choose

$E = \sum_{p \in P} \sum_{k=1}^{C} \beta(\alpha_k)\, {}^{\alpha_k}E_p$,

where

${}^{\alpha}E_p = {}^{\alpha}E_p^L + {}^{\alpha}E_p^R = \frac{1}{2} \sum_{i=1}^{m_r} ({}^{\alpha}y_{Li}^{(r)} - {}^{\alpha}o_{Li}^p)^2 + \frac{1}{2} \sum_{i=1}^{m_r} ({}^{\alpha}y_{Ri}^{(r)} - {}^{\alpha}o_{Ri}^p)^2$,

$\alpha_1, \ldots, \alpha_C \in (0,1]$ are the chosen $\alpha$-levels, $\beta(\alpha_k)$ is the weight of importance of the $\alpha_k$-cut, and $C$ represents the discretization.

In order to minimize the error function one usually uses the steepest descent approach. Recall that the direction of steepest descent is the direction in which the function falls most rapidly under infinitesimal changes in that direction. We search in $\mathbb{R}^{W + 2N}$ ($W$ is the number of weights, $N$ is the number of neurons in the net) for the point $p = (\ldots, w_{ji}^{(l)}, \ldots, \theta_j^{(l)}, \ldots, \lambda_j^{(l)}, \ldots)$ in which $E$ takes its minimum.
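A sketch of the error function $E$ as a weighted sum of $\alpha$-cut endpoint errors, following the definition above; the data layout (dictionaries keyed by $\alpha$-level) and the example numbers are illustrative assumptions, not taken from the paper.

```python
def alpha_cut_error(y_out, o_target):
    """E^alpha_p = E_L + E_R: squared errors of the left and right endpoints
    of the network outputs against the target alpha-cut intervals."""
    eL = 0.5 * sum((yL - oL) ** 2 for (yL, _), (oL, _) in zip(y_out, o_target))
    eR = 0.5 * sum((yR - oR) ** 2 for (_, yR), (_, oR) in zip(y_out, o_target))
    return eL + eR

def total_error(outputs, targets, beta):
    """E = sum over patterns p and alpha-levels k of beta(alpha_k) * E^{alpha_k}_p.

    outputs[p][alpha], targets[p][alpha]: lists of (L, R) interval endpoints;
    beta[alpha]: weight of importance of that alpha-cut.
    """
    return sum(beta[a] * alpha_cut_error(outputs[p][a], targets[p][a])
               for p in range(len(outputs)) for a in beta)

# Example with one pattern, two alpha-cuts and one output neuron.
outputs = [{0.5: [(0.2, 0.8)], 1.0: [(0.45, 0.55)]}]
targets = [{0.5: [(0.3, 0.7)], 1.0: [(0.50, 0.50)]}]
print(total_error(outputs, targets, beta={0.5: 0.5, 1.0: 1.0}))
```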

By the standard method of steepest descent one chooses an initial $p(0)$, e.g. at random, and then at any time $t$ determines the change

$\Delta p(t) = \eta\, d(p(t)) + \mu\, \Delta p(t-1)$,

where $d(p(t))$ is the direction of the steepest descent of $E$ at the point $p(t)$, $\eta > 0$ is the length of the step (learning rate), and $\mu$ is a momentum constant.

We have naturally arrived at the point where interval networks, i.e. networks that work with real intervals instead of real numbers (see [2, 18]), can be exploited. From (1) and (2) it is clear that the functions ${}^{\alpha}E_p$ (and hence $E$) are not differentiable in general. The author in [18] uses a little trick to obtain a differentiable error function. Formulas (1) and (2) can be rewritten as

${}^{\alpha}z_{Lj}^{(l)} = \sum_{i=1}^{m_{l-1}} \left(\mathrm{sgn}^+(w_{ji}^{(l)})\, w_{ji}^{(l)}\, {}^{\alpha}y_{Li}^{(l-1)} + \mathrm{sgn}^-(w_{ji}^{(l)})\, w_{ji}^{(l)}\, {}^{\alpha}y_{Ri}^{(l-1)}\right) - \theta_j^{(l)}$

and

${}^{\alpha}z_{Rj}^{(l)} = \sum_{i=1}^{m_{l-1}} \left(\mathrm{sgn}^-(w_{ji}^{(l)})\, w_{ji}^{(l)}\, {}^{\alpha}y_{Li}^{(l-1)} + \mathrm{sgn}^+(w_{ji}^{(l)})\, w_{ji}^{(l)}\, {}^{\alpha}y_{Ri}^{(l-1)}\right) - \theta_j^{(l)}$,

where $\mathrm{sgn}^+(x)$ is the signum function (equal to $1$ for $x \geq 0$ and to $0$ otherwise) and $\mathrm{sgn}^- = 1 - \mathrm{sgn}^+$. The non-differentiable functions $\mathrm{sgn}^+$ and $\mathrm{sgn}^-$ are in [18] substituted by their differentiable analogies $s^+$ and $s^-$, respectively, where $s^+(x) = \frac{1}{1 + e^{-x}}$ and $s^-(x) = \frac{1}{1 + e^{x}}$. The function $E$ is then differentiable; however, this approximation leads to an inaccurate interval arithmetic. The main disadvantage of this modified architecture, which invalidates it for use in networks with fuzzy signals, is the subsethood non-monotonicity, see [2]. We say that a function mapping $n$-tuples of intervals (or fuzzy sets) to $m$-tuples of intervals (or fuzzy sets) is subsethood monotonic if for all interval inputs $x_i \subseteq x_i'$, $i = 1, \ldots, n$, we have $y_j \subseteq y_j'$, $j = 1, \ldots, m$, for the corresponding outputs. The following proposition can be easily proved.

Proposition 4. Let $\tilde{f}$ be the function obtained from $f: \mathbb{R}^n \to \mathbb{R}$ by the extension principle. Then for all fuzzy sets $x_i \subseteq x_i'$ in $\mathbb{R}$, $i = 1, \ldots, n$, we have $\tilde{f}(x_1, \ldots, x_n) \subseteq \tilde{f}(x_1', \ldots, x_n')$.

By induction it follows that the mapping $\hat{F}$ is subsethood monotonic. As a special case we obtain the subsethood monotonicity for intervals. It follows that a feedforward network with fuzzy signals has to be subsethood monotonic, i.e. the network with the approximations of $\mathrm{sgn}^+$, $\mathrm{sgn}^-$ is not satisfactory. The practical result of using a non-monotonic network could be that from an $\alpha$-cut representation of the input fuzzy sets one gets a system of $\alpha$-cuts which do not represent any output fuzzy sets at all (it could happen that ${}^{\alpha}\hat{F}(x) \not\subseteq {}^{\beta}\hat{F}(x)$ for $\beta < \alpha$ even with $m_0 = m_r = 1$).
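To see why the smoothed $s^+$, $s^-$ substitution conflicts with subsethood monotonicity, the following check (an illustration, not taken from the paper; the weight and inputs are arbitrary) computes the net input of a single neuron once with the exact formulas (1)-(2) and once with $\mathrm{sgn}^{\pm}$ replaced by $s^{\pm}$, for a crisp input and for a wider input containing it. With the exact arithmetic the output interval of the wider input contains that of the crisp input; with the smoothed arithmetic it does not.

```python
import math

def z_interval_exact(w, yL, yR, theta=0.0):
    """Exact interval net input, eqs. (1)-(2): endpoints paired by sign of w."""
    zL = sum(wi * (l if wi >= 0 else r) for wi, l, r in zip(w, yL, yR)) - theta
    zR = sum(wi * (r if wi >= 0 else l) for wi, l, r in zip(w, yL, yR)) - theta
    return zL, zR

def z_interval_smoothed(w, yL, yR, theta=0.0):
    """The same computation with sgn+/sgn- replaced by s+(x) = 1/(1+e^-x),
    s-(x) = 1/(1+e^x), as in the differentiable approximation discussed above."""
    sp = lambda x: 1.0 / (1.0 + math.exp(-x))
    sm = lambda x: 1.0 / (1.0 + math.exp(x))
    zL = sum(sp(wi) * wi * l + sm(wi) * wi * r for wi, l, r in zip(w, yL, yR)) - theta
    zR = sum(sm(wi) * wi * l + sp(wi) * wi * r for wi, l, r in zip(w, yL, yR)) - theta
    return zL, zR

w = [1.0]
narrow = ([0.5], [0.5])            # crisp input 0.5 as a degenerate interval
wide   = ([0.5], [0.9])            # a wider input containing the crisp one

print("exact   :", z_interval_exact(w, *narrow), z_interval_exact(w, *wide))
# exact arithmetic is subsethood monotonic: [0.5, 0.5] lies inside [0.5, 0.9]
print("smoothed:", z_interval_smoothed(w, *narrow), z_interval_smoothed(w, *wide))
# the smoothed variant yields roughly [0.61, 0.79] for the wider input,
# which no longer contains [0.5, 0.5] -- subsethood monotonicity is lost
```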

3 Adaptation

We now show that a backpropagation-like steepest descent algorithm is possible and well justified. We need the following concepts. By a signature we mean a tuple $\Sigma = (\Sigma_1, \ldots, \Sigma_n)$, where $\Sigma_i \in \{+, -, \pm\}$ for $i = 1, \ldots, n$. If $\Sigma$ is a signature, then by $\mathbb{R}^{\Sigma}$ we denote the set

$\mathbb{R}^{\Sigma} = \{(x_1, \ldots, x_n) \in \mathbb{R}^n;\ x_i \geq 0 \text{ for } \Sigma_i = +,\ x_i \leq 0 \text{ for } \Sigma_i = -,\ x_i \in \mathbb{R} \text{ for } \Sigma_i = \pm,\ i = 1, \ldots, n\}$.

A function $f: \mathbb{R}^n \to \mathbb{R}$ is called $\Sigma$-differentiable at $a \in \mathbb{R}^n$ if there are $c_1, \ldots, c_n \in \mathbb{R}$ such that

$\lim_{u \to 0,\ u \in \mathbb{R}^{\Sigma},\ u \neq 0} \frac{f(a + u) - f(a) - (c_1 u_1 + \cdots + c_n u_n)}{\|u\|} = 0$.

Let $\Sigma$ be a signature and $a \in \mathbb{R}^n$. A $\Sigma$-neighborhood of $a$ is a subset $U_{\Sigma}(a) \subseteq \mathbb{R}^n$ such that there is a (regular) neighborhood $U(a)$ with $U_{\Sigma}(a) = U(a) \cap (a + \mathbb{R}^{\Sigma})$, where $a + \mathbb{R}^{\Sigma} = \{a + u;\ u \in \mathbb{R}^{\Sigma}\}$. We say that $f: \mathbb{R}^n \to \mathbb{R}$ has partial derivatives in $U_{\Sigma}(a)$ if for each $b \in U_{\Sigma}(a)$ it holds that: if $b_i \neq a_i$ for some $i$, then $\frac{\partial f(b)}{\partial x_i}$ exists, and if $b_i = a_i$ for some $i$, then the one-sided derivative $\frac{\partial^{\Sigma_i} f(b)}{\partial x_i}$ exists.

Let $f(x_1, \ldots, x_n): \mathbb{R}^n \to \mathbb{R}$ have the one-sided directional derivatives $f'_u(a)$ at $a \in \mathbb{R}^n$ for every $u \in U \subseteq \mathbb{R}^n$, where $f'_u(a) = \lim_{t \to 0^+} \frac{f(a + tu) - f(a)}{t}$. A vector $d \in U$ such that $f'_d(a) = \min\{f'_u(a);\ u \in U,\ \|u\| = \|d\|\}$ is (if it exists) called the direction of the steepest descent at $a$ with respect to $U$. It holds that if $f$ is $\Sigma$-differentiable at $a$, then $f$ has partial derivatives in $U_{\Sigma}(a)$.

The properties necessary for the backpropagation-like adaptation algorithm to be well justified are the following ones. The proofs follow from [2], from our form of the error function, and from the fact that a sum of $\Sigma$-differentiable functions is again a $\Sigma$-differentiable function. Complete proofs can be found in [3].

Theorem 5 ([3]). $E(\ldots, w_{ji}^{(l)}, \ldots, \theta_j^{(l)}, \ldots, \lambda_j^{(l)}, \ldots)$ is continuous.

Theorem 6 ([3]). For a given $p = (\ldots, w_{ji}^{(l)}, \ldots, \theta_j^{(l)}, \ldots, \lambda_j^{(l)}, \ldots)$, let $\Sigma$ be a signature determined as follows: $\Sigma^{w_{ji}^{(l)}} = \pm$ for $w_{ji}^{(l)} \neq 0$ and $\Sigma^{w_{ji}^{(l)}} \in \{+, -\}$ for $w_{ji}^{(l)} = 0$; $\Sigma^{\lambda_j^{(l)}} = \pm$ for $\lambda_j^{(l)} \neq 0$ and $\Sigma^{\lambda_j^{(l)}} \in \{+, -\}$ for $\lambda_j^{(l)} = 0$; $\Sigma^{\theta_j^{(l)}} = \pm$. Then $E(\ldots, w_{ji}^{(l)}, \ldots, \theta_j^{(l)}, \ldots, \lambda_j^{(l)}, \ldots)$ is $\Sigma$-differentiable at $p$.

Theorem 7 ([2]). Let $I \subseteq \{1, \ldots, n\}$ and let $f(x_1, \ldots, x_n): \mathbb{R}^n \to \mathbb{R}$ be $\Sigma$-differentiable at $a \in \mathbb{R}^n$ for each $\Sigma$ with $\Sigma_i = \pm$ for $i \in I$, $\Sigma_i \in \{+, -\}$ for $i \notin I$. Let $d = (d_1, \ldots, d_n)$ be the vector determined as follows. For $i \in I$, $d_i = -\frac{\partial f(a)}{\partial x_i}$. For $i \notin I$, writing $\partial^- f(a)$ and $\partial^+ f(a)$ for the left and right one-sided partial derivatives with respect to $x_i$, let

$d_i = 0$ for $\partial^- f(a) \leq 0$ and $\partial^+ f(a) \geq 0$,
$d_i = -\partial^- f(a)$ for $\partial^- f(a) > 0$ and $\partial^+ f(a) \geq 0$,
$d_i = -\partial^+ f(a)$ for $\partial^- f(a) \leq 0$ and $\partial^+ f(a) < 0$,
$d_i = -\partial^- f(a)$ for $\partial^- f(a) > 0$, $\partial^+ f(a) < 0$ and $\partial^- f(a) \geq -\partial^+ f(a)$,
$d_i = -\partial^+ f(a)$ for $\partial^- f(a) > 0$, $\partial^+ f(a) < 0$ and $\partial^- f(a) < -\partial^+ f(a)$.

Then $d$ is the direction of the steepest descent at $a$ with respect to $\mathbb{R}^n$.

By the previous statements, a steepest descent adaptation algorithm is in our case as well justified as the classical backpropagation.
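The following sketch computes a descent direction coordinate-wise in the spirit of Theorem 7, under the sign conventions as reconstructed above: ordinary partial derivatives are used where they exist, and at the remaining coordinates (e.g. $w_{ji}^{(l)} = 0$ or $\lambda_j^{(l)} = 0$) the one-sided derivatives decide whether and how to move. The function and the example values are illustrative assumptions, not code from the paper.

```python
def descent_direction(grad, one_sided, differentiable):
    """Coordinate-wise descent direction in the spirit of Theorem 7.

    grad[i]           : ordinary partial derivative of E where it exists
    one_sided[i]      : (left, right) one-sided derivatives at the remaining
                        coordinates (e.g. at w = 0 or lambda = 0)
    differentiable[i] : True where the ordinary partial derivative exists
    """
    d = []
    for i, diff in enumerate(differentiable):
        if diff:
            d.append(-grad[i])
            continue
        left, right = one_sided[i]
        if left <= 0.0 <= right:
            d.append(0.0)                # no coordinate move decreases E
        elif right < 0.0 and (left <= 0.0 or -right >= left):
            d.append(-right)             # moving right decreases E fastest
        else:
            d.append(-left)              # moving left decreases E fastest
    return d

# Example: coordinate 0 is differentiable, coordinate 1 sits at a kink
# (e.g. a weight equal to 0) where no coordinate move decreases E.
print(descent_direction(grad=[0.3, 0.0],
                        one_sided=[(None, None), (-0.2, 0.5)],
                        differentiable=[True, False]))
# -> [-0.3, 0.0]: the kinked coordinate is left unchanged.
```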

It follows from the previous considerations and theorems that the only thing remaining in order to use the steepest descent approach is to obtain the partial derivatives of $E$. Since $E$ is a sum of the functions ${}^{\alpha}E_p$, it is enough to derive the formulas for the derivatives of ${}^{\alpha}E_p$: the derivative of $E$ then equals the sum of the derivatives of ${}^{\alpha}E_p$. In what follows we omit, for simplicity, the indices denoting the particular $\alpha$-cuts and patterns, i.e. we write $E$ instead of ${}^{\alpha}E_p$, and similarly $E_L$, $E_R$, $y_{Lj}^{(l)}$, $y_{Rj}^{(l)}$, $z_{Lj}^{(l)}$, $z_{Rj}^{(l)}$, etc.

First we derive the formulas for computing $\frac{\partial E}{\partial w_{ji}^{(l)}}$. Suppose $w_{ji}^{(l)} \neq 0$. Then we have

$\frac{\partial E}{\partial w_{ji}^{(l)}} = \frac{\partial E_L}{\partial w_{ji}^{(l)}} + \frac{\partial E_R}{\partial w_{ji}^{(l)}}$,

where, for $\lambda_j^{(l)} \geq 0$,

$\frac{\partial E_L}{\partial w_{ji}^{(l)}} = \frac{\partial E_L}{\partial y_{Lj}^{(l)}} \frac{\partial y_{Lj}^{(l)}}{\partial z_{Lj}^{(l)}} \frac{\partial z_{Lj}^{(l)}}{\partial w_{ji}^{(l)}} + \frac{\partial E_L}{\partial y_{Rj}^{(l)}} \frac{\partial y_{Rj}^{(l)}}{\partial z_{Rj}^{(l)}} \frac{\partial z_{Rj}^{(l)}}{\partial w_{ji}^{(l)}}$, $\quad (3)$

while for $\lambda_j^{(l)} < 0$ the roles of $z_{Lj}^{(l)}$ and $z_{Rj}^{(l)}$ are interchanged; analogously for $\frac{\partial E_R}{\partial w_{ji}^{(l)}}$. Here

$\frac{\partial y_{Xj}^{(l)}}{\partial z} = \lambda_j^{(l)}\, y_{Xj}^{(l)} (1 - y_{Xj}^{(l)})$ for $X \in \{L, R\}$

(the derivative taken with respect to the $z$-endpoint on which $y_{Xj}^{(l)}$ depends), and

$\frac{\partial z_{Lj}^{(l)}}{\partial w_{ji}^{(l)}} = \begin{cases} y_{Li}^{(l-1)} & \text{for } w_{ji}^{(l)} > 0 \\ y_{Ri}^{(l-1)} & \text{for } w_{ji}^{(l)} < 0 \end{cases}$, $\qquad \frac{\partial z_{Rj}^{(l)}}{\partial w_{ji}^{(l)}} = \begin{cases} y_{Ri}^{(l-1)} & \text{for } w_{ji}^{(l)} > 0 \\ y_{Li}^{(l-1)} & \text{for } w_{ji}^{(l)} < 0 \end{cases}$.

Now for the output layer ($l = r$) we have

$\frac{\partial E_L}{\partial y_{Lj}^{(l)}} = y_{Lj}^{(l)} - o_{Lj}$, $\qquad \frac{\partial E_L}{\partial y_{Rj}^{(l)}} = 0$,

whereas for an inner layer ($l < r$)

$\frac{\partial E_L}{\partial y_{Lj}^{(l)}} = \sum_{k=1}^{m_{l+1}} \left( \frac{\partial E_L}{\partial y_{Lk}^{(l+1)}} \frac{\partial y_{Lk}^{(l+1)}}{\partial y_{Lj}^{(l)}} + \frac{\partial E_L}{\partial y_{Rk}^{(l+1)}} \frac{\partial y_{Rk}^{(l+1)}}{\partial y_{Lj}^{(l)}} \right)$

holds, where the factors $\frac{\partial y_{Lk}^{(l+1)}}{\partial y_{Lj}^{(l)}}$, $\frac{\partial y_{Rk}^{(l+1)}}{\partial y_{Lj}^{(l)}}$ are determined from (1) and (2) according to the signs of $w_{kj}^{(l+1)}$ and $\lambda_k^{(l+1)}$ (they equal $\lambda_k^{(l+1)} y_{Xk}^{(l+1)}(1 - y_{Xk}^{(l+1)})\, w_{kj}^{(l+1)}$ whenever $y_{Lj}^{(l)}$ actually enters the corresponding net input, and $0$ otherwise). Similarly for $\frac{\partial E_R}{\partial y_{Lj}^{(l)}}$, $\frac{\partial E_L}{\partial y_{Rj}^{(l)}}$ and $\frac{\partial E_R}{\partial y_{Rj}^{(l)}}$, with the roles of the endpoints interchanged accordingly.

To sum up, denote

$\delta_{Lj}^{L(l)} = \frac{\partial E_L}{\partial y_{Lj}^{(l)}}\, \lambda_j^{(l)}\, y_{Lj}^{(l)} (1 - y_{Lj}^{(l)})$, $\qquad \delta_{Rj}^{L(l)} = \frac{\partial E_L}{\partial y_{Rj}^{(l)}}\, \lambda_j^{(l)}\, y_{Rj}^{(l)} (1 - y_{Rj}^{(l)})$.

Then we have

$\frac{\partial E_L}{\partial w_{ji}^{(l)}} = \delta_{Lj}^{L(l)}\, y_{Xi}^{(l-1)} + \delta_{Rj}^{L(l)}\, y_{\bar{X}i}^{(l-1)}$, $\quad (4)$

where

$X = L$, $\bar{X} = R$ for $w_{ji}^{(l)} > 0, \lambda_j^{(l)} \geq 0$ or $w_{ji}^{(l)} < 0, \lambda_j^{(l)} < 0$, $\quad (5)$
$X = R$, $\bar{X} = L$ for $w_{ji}^{(l)} > 0, \lambda_j^{(l)} < 0$ or $w_{ji}^{(l)} < 0, \lambda_j^{(l)} \geq 0$. $\quad (6)$

Here, for the output layer ($l = r$),

$\delta_{Lj}^{L(l)} = (y_{Lj}^{(l)} - o_{Lj})\, \lambda_j^{(l)}\, y_{Lj}^{(l)} (1 - y_{Lj}^{(l)})$, $\qquad \delta_{Rj}^{L(l)} = 0$,

and for an inner layer ($l < r$)

$\delta_{Lj}^{L(l)} = \sum_{k=1}^{m_{l+1}} \left( \delta_{Lk}^{L(l+1)}\, \tilde{w}_{kj}^{(l+1)} + \delta_{Rk}^{L(l+1)}\, \bar{w}_{kj}^{(l+1)} \right) \lambda_j^{(l)}\, y_{Lj}^{(l)} (1 - y_{Lj}^{(l)})$,

$\delta_{Rj}^{L(l)} = \sum_{k=1}^{m_{l+1}} \left( \delta_{Lk}^{L(l+1)}\, \bar{w}_{kj}^{(l+1)} + \delta_{Rk}^{L(l+1)}\, \tilde{w}_{kj}^{(l+1)} \right) \lambda_j^{(l)}\, y_{Rj}^{(l)} (1 - y_{Rj}^{(l)})$,

where

$\tilde{w}_{kj}^{(l+1)} = w_{kj}^{(l+1)}$ for $w_{kj}^{(l+1)} \lambda_k^{(l+1)} \geq 0$ and $\tilde{w}_{kj}^{(l+1)} = 0$ otherwise,
$\bar{w}_{kj}^{(l+1)} = w_{kj}^{(l+1)}$ for $w_{kj}^{(l+1)} \lambda_k^{(l+1)} < 0$ and $\bar{w}_{kj}^{(l+1)} = 0$ otherwise.

Similar formulas can also be derived for $\frac{\partial E_R}{\partial w_{ji}^{(l)}}$:

$\frac{\partial E_R}{\partial w_{ji}^{(l)}} = \delta_{Lj}^{R(l)}\, y_{Xi}^{(l-1)} + \delta_{Rj}^{R(l)}\, y_{\bar{X}i}^{(l-1)}$, $\quad (7)$

with $X$, $\bar{X}$ as in (5), (6). For the output layer ($l = r$) we have

$\delta_{Lj}^{R(l)} = 0$, $\qquad \delta_{Rj}^{R(l)} = (y_{Rj}^{(l)} - o_{Rj})\, \lambda_j^{(l)}\, y_{Rj}^{(l)} (1 - y_{Rj}^{(l)})$

and for an inner layer ($l < r$)

$\delta_{Lj}^{R(l)} = \sum_{k=1}^{m_{l+1}} \left( \delta_{Lk}^{R(l+1)}\, \tilde{w}_{kj}^{(l+1)} + \delta_{Rk}^{R(l+1)}\, \bar{w}_{kj}^{(l+1)} \right) \lambda_j^{(l)}\, y_{Lj}^{(l)} (1 - y_{Lj}^{(l)})$,

$\delta_{Rj}^{R(l)} = \sum_{k=1}^{m_{l+1}} \left( \delta_{Lk}^{R(l+1)}\, \bar{w}_{kj}^{(l+1)} + \delta_{Rk}^{R(l+1)}\, \tilde{w}_{kj}^{(l+1)} \right) \lambda_j^{(l)}\, y_{Rj}^{(l)} (1 - y_{Rj}^{(l)})$.

If $w_{ji}^{(l)} = 0$, then for the one-sided derivatives $\frac{\partial^+ E_L}{\partial w_{ji}^{(l)}}$, $\frac{\partial^- E_L}{\partial w_{ji}^{(l)}}$ (and $\frac{\partial^+ E_R}{\partial w_{ji}^{(l)}}$, $\frac{\partial^- E_R}{\partial w_{ji}^{(l)}}$) the same formulas (4)-(7) can be used, except that the endpoint assignment of (5), (6) is taken as for $w_{ji}^{(l)} > 0$ in the case of the right derivatives and as for $w_{ji}^{(l)} < 0$ in the case of the left derivatives.

In an analogous way we derive the formulas for $\frac{\partial E}{\partial \theta_j^{(l)}}$:

$\frac{\partial E}{\partial \theta_j^{(l)}} = \frac{\partial E_L}{\partial \theta_j^{(l)}} + \frac{\partial E_R}{\partial \theta_j^{(l)}}$, $\qquad \frac{\partial E_L}{\partial \theta_j^{(l)}} = -\delta_{Lj}^{L(l)} - \delta_{Rj}^{L(l)}$, $\qquad \frac{\partial E_R}{\partial \theta_j^{(l)}} = -\delta_{Lj}^{R(l)} - \delta_{Rj}^{R(l)}$.

For $\lambda_j^{(l)} \neq 0$ we have $\frac{\partial E}{\partial \lambda_j^{(l)}} = \frac{\partial E_L}{\partial \lambda_j^{(l)}} + \frac{\partial E_R}{\partial \lambda_j^{(l)}}$. For the output layer ($l = r$) we have

$\frac{\partial E_L}{\partial \lambda_j^{(l)}} = (y_{Lj}^{(l)} - o_{Lj})\, z_{Lj}^{(l)}\, y_{Lj}^{(l)} (1 - y_{Lj}^{(l)})$, $\qquad \frac{\partial E_R}{\partial \lambda_j^{(l)}} = (y_{Rj}^{(l)} - o_{Rj})\, z_{Rj}^{(l)}\, y_{Rj}^{(l)} (1 - y_{Rj}^{(l)})$

(written for $\lambda_j^{(l)} > 0$; for $\lambda_j^{(l)} < 0$ the endpoints $z_{Lj}^{(l)}$ and $z_{Rj}^{(l)}$ interchange). For an inner layer ($l < r$)

$\frac{\partial E_L}{\partial \lambda_j^{(l)}} = \sum_{k=1}^{m_{l+1}} \left( \delta_{Lk}^{L(l+1)}\, \tilde{w}_{kj}^{(l+1)} + \delta_{Rk}^{L(l+1)}\, \bar{w}_{kj}^{(l+1)} \right) z_{Lj}^{(l)}\, y_{Lj}^{(l)} (1 - y_{Lj}^{(l)}) + \sum_{k=1}^{m_{l+1}} \left( \delta_{Lk}^{L(l+1)}\, \bar{w}_{kj}^{(l+1)} + \delta_{Rk}^{L(l+1)}\, \tilde{w}_{kj}^{(l+1)} \right) z_{Rj}^{(l)}\, y_{Rj}^{(l)} (1 - y_{Rj}^{(l)})$,

and analogously for $\frac{\partial E_R}{\partial \lambda_j^{(l)}}$ with the superscript $L$ replaced by $R$.

Similarly as in the case of the weights, if $\lambda_j^{(l)} = 0$ then for the one-sided derivatives $\frac{\partial^+ E_L}{\partial \lambda_j^{(l)}}$, $\frac{\partial^- E_L}{\partial \lambda_j^{(l)}}$ ($\frac{\partial^+ E_R}{\partial \lambda_j^{(l)}}$, $\frac{\partial^- E_R}{\partial \lambda_j^{(l)}}$) the same formulas as above can be used, with the endpoint assignment of (3) taken as for $\lambda_j^{(l)} > 0$ in the case of the right derivatives and as for $\lambda_j^{(l)} < 0$ in the case of the left derivatives.

Note that the derived formulas can be somewhat simplified; this is omitted here because of the limited scope of the paper.
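As an illustration of how the formulas of this section combine, here is a sketch of the output-layer gradients for one $\alpha$-cut, under the simplifying assumptions $\lambda_j^{(r)} > 0$ and $w_{ji}^{(r)} > 0$ (so that the endpoint pairing of (5)-(6) is the identity); other sign combinations swap the endpoints as described above. The function name and the example values are illustrative, not taken from the paper.

```python
def output_layer_gradients(y_out, z_out, y_prev, o, lam):
    """Gradients of E = E_L + E_R w.r.t. weights, thresholds and slopes of the
    output layer, for one alpha-cut, assuming lambda_j > 0 and w_ji > 0 so the
    endpoint pairing of (5)-(6) is the identity (other sign combinations swap
    the endpoints).

    y_out, z_out : (L, R) activations / net inputs of the output neurons
    y_prev       : (L, R) outputs of the last hidden layer
    o            : (L, R) target intervals
    lam          : slopes of the output neurons
    """
    dE_dw, dE_dth, dE_dlam = [], [], []
    for (yL, yR), (zL, zR), (oL, oR), l in zip(y_out, z_out, o, lam):
        # Output-layer deltas: only delta^L_L and delta^R_R are nonzero.
        dLL = (yL - oL) * l * yL * (1.0 - yL)
        dRR = (yR - oR) * l * yR * (1.0 - yR)
        dE_dw.append([dLL * yLp + dRR * yRp for (yLp, yRp) in y_prev])
        dE_dth.append(-(dLL + dRR))                        # dz/dtheta = -1
        dE_dlam.append((yL - oL) * zL * yL * (1.0 - yL)    # dy/dlambda = z*y*(1-y)
                       + (yR - oR) * zR * yR * (1.0 - yR))
    return dE_dw, dE_dth, dE_dlam

# Tiny example: one output neuron fed by two hidden neurons.
print(output_layer_gradients(y_out=[(0.6, 0.8)], z_out=[(0.4, 1.4)],
                             y_prev=[(0.3, 0.5), (0.2, 0.7)],
                             o=[(0.5, 0.9)], lam=[1.0]))
```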

4 Possibilities of applications

The presented fuzzy neural network is an adaptive model mapping tuples of fuzzy sets to tuples of fuzzy sets. It can be applied in every situation where the input-output behaviour of a system, partially described by input-output pairs of fuzzy sets, is to be approximated. In the following we concentrate on systems described by so-called linguistic IF-THEN rules.

Complex systems which are hard to describe by traditional techniques can often be effectively described by means of fuzzy mathematics. Especially if there is only a linguistic description of the problem concerned, expressed in the form of IF-THEN rules, methods of fuzzy logic have proved to be very powerful. The expert knowledge of a system's behaviour can often be represented by a set of IF-THEN rules of the form "IF X is A THEN Y is B". Both the antecedent and the succedent can be of a more complex form. The linguistic expressions A and B (so-called evaluating expressions) can be modeled by appropriate fuzzy sets $A$, $B$ from $F_{CI}(\mathbb{R})$. The whole knowledge about such a rule base is then aggregated into a suitable model $R$. For an input represented by the fuzzy set $A$, an output $B = A \circ R$ is then obtained by some method. Nowadays there are several methods developed in the frame of so-called approximate reasoning [9]. Usually $R$ is a fuzzy relation and $\circ$ is some projection operation. For a detailed discussion and formal analysis see e.g. [15].

Our model offers an alternative method: design a network and adapt it to the training set $T$ consisting of the associations $(A, B)$. Then, for a given input represented by $A$, we get an output $B$ from the network.

The great advantage of fuzzy logic systems is their interpretability, just the point where neural networks fail. On the other hand, the advantageous properties of neural networks are their learning and generalization capabilities. It is very often the case that the linguistic description does not describe the problem completely. Rather, both a partial linguistic description and a measured data set describing the system behaviour are at our disposal. In such a case we can transform the knowledge (linguistically expressed and crisp) into the training set $T$, design a network, and adapt it to $T$; a small illustration of assembling such a training set is given at the end of this section. Our model can be adapted to both fuzzy and crisp data simultaneously. The net can then be used to perform the approximate reasoning task or, due to the evidenced good generalization property, it should be able to generate a new rule base or to complete the old one.

Concerning further the approximate reasoning task, of considerable importance is the computational complexity of the procedure performing the operation $\circ$; see [10, 6] for the analysis of some cases. In today's methods of approximate reasoning, the complexity of $\circ$ in general increases with the number of rules in the rule base. On the other hand, the computational complexity of $\circ$ in the case of our network does not depend on the number of rules. It depends only on the architecture of the network, i.e. on the number of layers and the numbers of neurons in the respective layers. We omit the particular formulas for the computational complexity, which are easy to derive.
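A small sketch (the rules, spreads and $\alpha$-grid are illustrative assumptions, not taken from the paper) of turning rules "IF X is about a THEN Y is about b" into $\alpha$-cut training pairs, mirroring the triangular "about r" representation used earlier.

```python
# Turning linguistic rules "IF X is about a THEN Y is about b" into
# alpha-cut training pairs for the network; the rules, spreads and alpha
# grid below are illustrative, not taken from the paper.

def about(r, spread, levels):
    """Alpha-cuts of a triangular fuzzy number 'about r' with the given spread."""
    return {a: (r - (1 - a) * spread, r + (1 - a) * spread) for a in levels}

levels = [0.25, 0.5, 0.75, 1.0]
rules = [(1.0, 2.0), (3.0, 6.0), (5.0, 10.0)]   # (antecedent centre, succedent centre)

training_set = [(about(x, 0.5, levels), about(y, 1.0, levels)) for x, y in rules]

for X, O in training_set:
    print("input cuts: ", X)
    print("target cuts:", O)
```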

Figure 2: Inputs and the corresponding output generated by the adapted network.

As a further advantageous property of our model can be considered the fact that, due to the cut-by-cut processing of fuzzy sets, it makes it possible to obtain only a certain horizontal part of the output fuzzy set, e.g. the top part, which is of considerable informational value. This is not possible with the traditional methods discussed above.

5 Example

In this section we present an example demonstrating the interpolation capabilities of our model. We trained a 2-layer network with two inputs, three neurons in the first layer and one neuron in the second layer. The training set $T$ contained patterns whose components are fuzzy numbers of the form "about r". After the network was adapted, the inputs "about 5" and "about 0" were presented to the network. The output $o$ of the network is depicted in Fig. 2. (A linear transformation has been used so as not to be restricted to the output interval $(0,1)$.)

6 Conclusions

We have presented an adaptive feedforward neural network model which maps tuples of fuzzy sets to tuples of fuzzy sets. Our network is fuzzified by the extension principle in the way proposed in [4]; however, the weights, thresholds and parameters of the neuron transfer functions are crisp. As shown, this leads to a computationally simple model with theoretically well justified and relatively simple adaptation formulas based on the steepest descent approach. The steepest descent adaptation eliminates the need for the heuristic adaptation which was proposed e.g. in [8]. We have proposed applications in approximate reasoning tasks and discussed the advantages and disadvantages of our model.

From the point of view of further theoretical investigation, a very interesting property, establishing both neural networks and fuzzy systems as universal tools for various kinds of engineering applications, is the universal approximation property. The authors in [5] negatively answered the question whether neural networks with both fuzzy signals and fuzzy weights can be universal approximators. The question of the approximation capabilities of the presented networks with fuzzy signals is still not definitely answered and is worth further research. Both theoretical and application-oriented investigations should show further possibilities in the systematic combination of fuzzy systems and neural networks. The aim of this paper was to contribute to this endeavour.

References

[1] Arbib M (Ed.) (1995) The Handbook of Brain Theory and Neural Networks. Cambridge, MA; London: MIT Press
[2] Bělohlávek R (1997) Backpropagation for interval patterns. Neural Network World. Int. J. on Neural and Mass Parallel Computing and Information Systems 7(3)
[3] Bělohlávek R (1998) Networks Processing Indeterminacy. PhD dissertation, Ostrava (available on request)
[4] Buckley J J, Hayashi Y (1992) Fuzzy Neural Nets and Applications. Fuzzy Sets and Artificial Intelligence 1: 11-41
[5] Buckley J J, Hayashi Y (1994) Can fuzzy neural nets approximate continuous fuzzy functions? Fuzzy Sets and Systems 61: 43-51
[6] Dvořák A (1997) Computational Properties of Fuzzy Logic Deduction. In: Reusch B (Ed.): Computational Intelligence. Theory and Applications. Proceedings of the 5th Fuzzy Days, Dortmund. Berlin: Springer
[7] Gupta M M, Rao D H (1994) On the principles of fuzzy neural networks. Fuzzy Sets and Systems 61: 1-18
[8] Hayashi Y, Buckley J J, Czogala E (1992) Direct Fuzzification of Neural Network and Fuzzified Delta Rule. Proc. of the 2nd Int. Conf. on Fuzzy Logic & Neural Networks, Iizuka, Japan
[9] Klir G J, Yuan B (1995) Fuzzy Sets and Fuzzy Logic. Theory and Applications. Upper Saddle River, NJ: Prentice Hall
[10] Kóczy L T (1995) Algorithmic aspects of fuzzy control. Int. J. of Approximate Reasoning 12
[11] Kosko B (1992) Neural Networks and Fuzzy Systems. Upper Saddle River, NJ: Prentice Hall
[12] Kruse R, Gebhardt J, Klawonn F (1995) Fuzzy Systeme. Stuttgart: B. G. Teubner
[13] Kufudaki O, Hořejš J (1991) PAB: Parameters adapting back propagation. Neural Network World. Int. J. on Neural and Mass Parallel Computing and Information Systems 1
[14] Novák V (1989) Fuzzy Sets and Their Applications. Bristol: Adam Hilger
[15] Novák V (1996) Paradigm, formal properties and limits of fuzzy logic. Int. J. of General Systems 24
[16] Pedrycz W (1993) Fuzzy neural networks and neurocomputations. Fuzzy Sets and Systems 56: 1-28
[17] Rumelhart D E, Hinton G E, Williams R J (1986) Learning representations by back-propagating errors. Nature 323
[18] Šíma J (1992) Generalized back propagation for interval training patterns. Neural Network World. Int. J. on Neural and Mass Parallel Computing and Information Systems 2

[19] Šíma J (1995) Neural expert systems. Neural Networks 8
