Complete Recall on Alpha-Beta Heteroassociative Memory
Israel Román-Godínez and Cornelio Yáñez-Márquez
Centro de Investigación en Computación, Juan de Dios Bátiz s/n esq. Miguel Othón de Mendizábal, Unidad Profesional Adolfo López Mateos, Del. Gustavo A. Madero, México, D.F., México

Abstract. Most heteroassociative memory models aim to recall the entire trained pattern. The Alpha-Beta associative memories guarantee correct recall of the trained patterns only in the autoassociative case, not in the heteroassociative one. In this work we present a new algorithm, based on the Alpha-Beta heteroassociative memories, that achieves perfect recall of all the trained patterns without ambiguity, besides correct recall of some altered patterns. The theoretical support and some experimental results are presented.

1 Introduction

Associative memories have been an active research area in computer science. Computer scientists are interested in developing mathematical models that behave as similarly as possible to associative memories and, based on those models, in creating, designing and operating systems that are able to learn and recall patterns [1-3]. The ultimate goal of an associative memory is to correctly recall complete patterns from input patterns, which may be altered versions of the ones used to create the memory. The first known mathematical model of an associative memory is Steinbuch's Lernmatrix, developed in 1961 [4]. In the following years many efforts were made. In 1982, Hopfield created a model that works simultaneously as an associative memory and a neural network [5]. In the late 1990s, morphological associative memories were developed by Ritter et al. [6]. In 2002, a more efficient model of associative memories arose: the Alpha-Beta associative memories, inspired by the morphological associative memories [1].
To this day, the Alpha-Beta model has been applied to several noteworthy problems, such as automatic color matching [7] and language translators [8]. In this paper we propose an improvement to the Alpha-Beta associative memories, particularly the heteroassociative memory, to ensure correct recall of the fundamental set, a characteristic the original model lacks. The mathematical support is presented.

A. Gelbukh and A.F. Kuri Morales (Eds.): MICAI 2007, LNAI 4827, © Springer-Verlag Berlin Heidelberg 2007
This paper is organized as follows. Section 2 explains the Alpha-Beta heteroassociative memory model. Section 3 contains the core proposal and its theoretical support. Section 4 is devoted to the experimental results, and finally Section 5 presents conclusions and future research.

2 Alpha-Beta Associative Memories

Here we use basic concepts about associative memories presented in [1]. An associative memory M is a system that relates input patterns and output patterns as follows: x → M → y. Each input vector x forms an association with a corresponding output vector y. The k-th association is denoted (x^k, y^k). The associative memory M is represented by a matrix whose ij-th component is m_ij, and it is generated from an a priori finite set of known associations, called the fundamental set of associations. If μ is an index, the fundamental set is represented as {(x^μ, y^μ) | μ = 1, 2, ..., p}, with p the cardinality of the set. The patterns that form the fundamental set are called fundamental patterns. If it holds that x^μ = y^μ for all μ ∈ {1, 2, ..., p}, then M is autoassociative; otherwise it is heteroassociative. In the latter case there exists some μ ∈ {1, 2, ..., p} for which x^μ ≠ y^μ. If, when feeding a fundamental pattern x^ω with ω ∈ {1, 2, ..., p} to an associative memory M, the output corresponds exactly to the associated pattern y^ω, we say that recall is correct.

The heart of the mathematical tools used in the Alpha-Beta model are two binary operators designed specifically for these memories. These operators are defined in [1] as follows: first, we define the sets A = {0, 1} and B = {0, 1, 2}; then the operators α : A × A → B and β : B × A → A are defined in tabular form:

Table 1. Alpha and Beta operators

x y α(x, y)      x y β(x, y)
0 0    1         0 0    0
0 1    0         0 1    0
1 0    2         1 0    0
1 1    1         1 1    1
                 2 0    1
                 2 1    1

Two types of heteroassociative Alpha-Beta memories are proposed: type Max (⋁) and type Min (⋀).
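For concreteness, the two operators of Table 1 can be written as small lookup tables. The following Python sketch is ours (not part of the paper), using the standard Alpha-Beta operator values:

```python
# Lookup tables for the alpha and beta operators of Table 1,
# with A = {0, 1} and B = {0, 1, 2}.
ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}   # alpha: A x A -> B
BETA = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1,
        (2, 0): 1, (2, 1): 1}                          # beta: B x A -> A

def alpha(x, y):
    """Alpha operator; both arguments belong to A = {0, 1}."""
    return ALPHA[(x, y)]

def beta(x, y):
    """Beta operator; x belongs to B = {0, 1, 2}, y to A = {0, 1}."""
    return BETA[(x, y)]
```

Note that β(2, y) = 1 regardless of y, while β(1, 0) = 0; these two facts drive the recall-phase analysis in Section 3.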
For the generation of both types we will use the ⊠ operator, which has the following form for indices μ ∈ {1, 2, ..., p}, i ∈ {1, 2, ..., m} and j ∈ {1, 2, ..., n}: [y^μ ⊠ (x^μ)^t]_ij = α(y_i^μ, x_j^μ).

Alpha-Beta Heteroassociative Memories Type Max

Learning Phase. For every μ = 1, 2, ..., p, from the pair (x^μ, y^μ) build the m × n matrix [y^μ ⊠ (x^μ)^t]. Applying the Max binary operator ⋁, the V matrix
is obtained as V = ⋁_{μ=1}^p [y^μ ⊠ (x^μ)^t], and its ij-th entry is given by v_ij = ⋁_{μ=1}^p α(y_i^μ, x_j^μ). We can observe that v_ij ∈ B for all i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n}.

Recall Phase. A pattern x^ω, which may or may not belong to the fundamental set, is presented to the heteroassociative Alpha-Beta memory of type Max and we perform the operation Δ_β: V Δ_β x^ω. The result is a column vector of dimension m whose i-th component is (V Δ_β x^ω)_i = ⋀_{j=1}^n β(v_ij, x_j^ω).

Remark 1. The Alpha-Beta heteroassociative memories type Min are developed by duality, based on the learning and recall phases of the type Max memories: wherever the maximum operator ⋁ appears, change it for the minimum operator ⋀ and vice versa, and wherever the operator Δ_β is used, change it for ∇_β.

3 Alpha-Beta Heteroassociative Memories with Complete Recall

The Alpha-Beta autoassociative memories guarantee the complete recall of the fundamental set [1], but in the case of the heteroassociative memories it is not possible to ensure this behavior. In this section we propose a new algorithm, modifying the original one, with which the complete recall of the fundamental set is guaranteed.

Definition 1. Let h, n ∈ Z+, A = {0, 1} and let x^h ∈ A^n be a binary pattern. We denote the sum of the positive components of x^h by U_h = Σ_{j=1}^n x_j^h.

Definition 2. Let V be an Alpha-Beta heteroassociative memory type Max and {(x^μ, y^μ) | μ = 1, 2, ..., p} its fundamental set, with x^μ ∈ A^n and y^μ ∈ A^p, A = {0, 1}, B = {0, 1, 2}, n ∈ Z+. The number of components with value equal to one in the i-th row of V is given by s_i = Σ_{j=1}^n T_j, where T ∈ B^n and its components are defined as T_j = 1 if v_ij = 1 and T_j = 0 if v_ij ≠ 1, for j ∈ {1, 2, ..., n}. The s_i components form the max sum vector s ∈ Z^p.

Definition 3. Let α, β, n ∈ Z+, A = {0, 1} and let x^α, x^β ∈ A^n be two vectors; then x^α ≤ x^β ⇔ (x_i^α = 1 ⇒ x_i^β = 1, for all i ∈ {1, 2, ..., n}), and x^α < x^β ⇔ (x_i^α ≤ x_i^β for all i, and there exists j such that x_j^α < x_j^β).

Definition 4.
Let x^α ∈ A^n with α, n ∈ Z+, A = {0, 1}; each component of the negated vector of x^α, denoted x̄^α, is given by x̄_i^α = 1 if x_i^α = 0 and x̄_i^α = 0 if x_i^α = 1, for i ∈ {1, 2, ..., n}.

Definition 5. Let h, n ∈ Z+, A = {0, 1} and let x^h ∈ A^n. We denote the number of components of x^h equal to 0 by C_h = Σ_{i=1}^n x̄_i^h.
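These counting definitions (the number of 1-components U_h, the component-wise negation, and the number of 0-components C_h) translate directly into code. A minimal illustrative sketch, ours rather than the paper's:

```python
# Definitions 1, 4 and 5 for binary patterns over A = {0, 1},
# represented as Python lists of 0/1 integers.

def U(x):
    """Definition 1: number of components of x equal to 1."""
    return sum(x)

def negate(x):
    """Definition 4: component-wise negation of x."""
    return [1 - xi for xi in x]

def C(x):
    """Definition 5: number of components of x equal to 0 (= n - U(x))."""
    return sum(negate(x))
```

For any pattern of dimension n, U and C are complementary: U(x) + C(x) = n.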
Definition 6. Let Λ be an Alpha-Beta heteroassociative memory type Min and {(x^μ, y^μ) | μ = 1, 2, ..., p} its fundamental set, with x^μ ∈ A^n and y^μ ∈ A^p, A = {0, 1}, B = {0, 1, 2}, n ∈ Z+. The number of components with value equal to one in the i-th row of Λ is given by r_i = Σ_{j=1}^n T_j, where T ∈ B^n and its components are defined as T_j = 1 if λ_ij = 1 and T_j = 0 if λ_ij ≠ 1, for j ∈ {1, 2, ..., n}. The r_i components form the min sum vector r ∈ Z^p.

Alpha-Beta Heteroassociative Memory Type Max

Learning Phase. Let x ∈ A^n and y ∈ A^p be an input and an output vector, respectively. The corresponding fundamental set is denoted by {(x^μ, y^μ) | μ = 1, 2, ..., p} and is built according to the following conditions: the y vectors use the one-hot codification, assigning to y^μ the values y_k^μ = 1 and y_j^μ = 0 for j = 1, 2, ..., k−1, k+1, ..., m, where k ∈ {1, 2, ..., m}; and to each y^μ vector corresponds one and only one x^μ vector.

Step 1. For each μ ∈ {1, 2, ..., p}, from the couple (x^μ, y^μ) build the m × n matrix [y^μ ⊠ (x^μ)^t]; then the Max binary operator is applied to the matrices. Therefore, the V matrix is obtained as V = ⋁_{μ=1}^p [y^μ ⊠ (x^μ)^t], where the ij-th component is given by v_ij = ⋁_{μ=1}^p α(y_i^μ, x_j^μ).

Recalling Phase

Step 1. A pattern x^ϖ is presented to V, the Δ_β operation is performed, and the resulting vector is assigned to a vector called z^ϖ: z^ϖ = V Δ_β x^ϖ. The i-th component of z^ϖ is z_i^ϖ = ⋀_{j=1}^n β(v_ij, x_j^ϖ).

Step 2. Once we have the V matrix, it is necessary to build the max sum vector s according to Definition 2; then the corresponding y^ϖ is given by: y_i^ϖ = 1 if z_i^ϖ = 1 and s_i = max_{k∈θ} s_k, and y_i^ϖ = 0 otherwise, where θ = {i | z_i^ϖ = 1}.

Below we present the lemmas and a theorem that support the Alpha-Beta heteroassociative memory type Max presented above.

Lemma 1. Let x^i ∈ A^n be a pattern randomly chosen from the fundamental set.
In the new Alpha-Beta heteroassociative memory type Max learning phase, x^i contributes only to the i-th row of V, with U_i times the value 1 and (n − U_i) times the value 2.
Proof. Let x^k ∈ A^n and y^k ∈ A^p, with A = {0, 1} and k, n, p ∈ Z+, be two fundamental patterns, randomly chosen, that form the k-th association (x^k, y^k) of V. According to the learning phase, the matrix V is given by V = ⋁_{μ=1}^p [y^μ ⊠ (x^μ)^t]; in particular, the k-th association yields [y^k ⊠ (x^k)^t]_ij = α(y_i^k, x_j^k). Now, by the way the vector y^k has been built, it happens that, for all i ∈ {1, 2, ..., k−1, k+1, ..., p}, j ∈ {1, 2, ..., n}, k ∈ {1, 2, ..., p}:

y_k^k = 1 ⇒ α(y_k^k, x_j^k) = 1 ∨ α(y_k^k, x_j^k) = 2
y_i^k = 0 ⇒ α(y_i^k, x_j^k) = 1 ∨ α(y_i^k, x_j^k) = 0    (1)

According to expression (1), it is evident that the maximum values of the k-th matrix are stored in its k-th row, depending exclusively on the values of x^k; in other words, x_j^k = 1 ⇒ α(y_k^k, x_j^k) = 1 and x_j^k = 0 ⇒ α(y_k^k, x_j^k) = 2. Therefore, considering that in every fundamental association each input pattern corresponds to one and only one output pattern, and that k ∈ {1, 2, ..., p} was randomly chosen, we can ensure that V is affected in its i-th row only by x^i, with U_i times the value 1 and (n − U_i) times the value 2. Thus, the components of the V matrix contain only the values 1 or 2. Finally, we can rewrite the learning phase as follows:

for all i ∈ {1, 2, ..., p}, j ∈ {1, 2, ..., n}:  v_ij = α(y_i^i, x_j^i)    (2)

Lemma 2. Let s be the max sum vector of the matrix V; then s_i = U_i for all i ∈ {1, 2, ..., p}.

Proof. Let s be the max sum vector of the matrix V. By Definition 2, its i-th component is

s_i = Σ_{j=1}^n T_j    (3)

On the other hand, we know by Definition 1 that U_h = Σ_{j=1}^n x_j^h; in particular, for i ∈ {1, 2, ..., p} the expression can be written as

U_i = Σ_{j=1}^n x_j^i    (4)

Moreover, we know by Lemma 1 and expression (2) that x^i affects the matrix V only in its i-th row, with v_ij = α(y_i^i, x_j^i). Given that y_i^i = 1 for all i ∈ {1, 2, ..., p}, α(y_i^i, x_j^i) = 1 exactly when x_j^i = 1; then, according to Lemma 1,

s_i = Σ_{j=1}^n x_j^i    (5)
Finally, by transitivity of equations (4) and (5), we can conclude that s_i = Σ_{j=1}^n x_j^i = U_i.

Lemma 3. Let V be a heteroassociative memory type Max whose fundamental set is {(x^μ, y^μ) | μ = 1, 2, ..., p}, and let x^ϖ ∈ A^n be a pattern presented to V, with A = {0, 1}, ϖ ∈ {1, 2, ..., p}, n, p ∈ Z+. The vector z^ϖ ∈ A^p obtained from the original Alpha-Beta heteroassociative recall phase type Max contains the value 1 in its i-th component exactly when the i-th row of V corresponds to a fundamental pattern lower than or equal to x^ϖ; put differently: for all i, z_i^ϖ = 1 ⇔ x^i ≤ x^ϖ, with x^i a pattern of the fundamental set.

Proof. According to the original Alpha-Beta heteroassociative memory recall phase type Max, we know that

z_i^ϖ = 1 ⇔ ⋀_{j=1}^n β(v_ij, x_j^ϖ) = 1    (6)

Since β only produces the values 0 or 1, for ⋀_{j=1}^n β(v_ij, x_j^ϖ) = 1 it is necessary that β(v_ij, x_j^ϖ) = 1 for all j ∈ {1, 2, ..., n}. Therefore, considering Lemma 1, for each j only the following cases are possible:

β(v_ij, x_j^ϖ) = 1 ⇔ (v_ij = 1 ∧ x_j^ϖ = 1) ∨ (v_ij = 2 ∧ (x_j^ϖ = 1 ∨ x_j^ϖ = 0))    (7)

Now, as Lemma 1 says, each x^i pattern affects only the i-th row of V, and it does so according to the learning phase; from expression (7) we can infer that x^i = x^ϖ if for every j it holds that (v_ij = 1 ∧ x_j^ϖ = 1) or (v_ij = 2 ∧ x_j^ϖ = 0), and x^i < x^ϖ if for every j it holds that (v_ij = 1 ∧ x_j^ϖ = 1) or v_ij = 2, with some j such that v_ij = 2 ∧ x_j^ϖ = 1. This means

x^i ≤ x^ϖ    (8)

Therefore, by transitivity of (6), (7) and (8), we can conclude that for all i, z_i^ϖ = 1 ⇔ x^i ≤ x^ϖ, with x^i a pattern of the fundamental set.

Theorem 1. Let V be a heteroassociative memory type Max whose fundamental set is {(x^μ, y^μ) | μ = 1, 2, ..., p}, without any repeated pair. Let x^ϖ ∈ A^n be an input pattern presented to V and z^ϖ ∈ A^p the resulting class vector from the original Alpha-Beta heteroassociative memory recall phase type Max. The proposed algorithm always obtains complete recall; in other words, we always obtain the corresponding y^ϖ without ambiguity.

Proof.
To prove the complete recall of the proposed algorithm it is necessary to ensure that, among all components where z_i^ϖ = 1, there is just one maximum value among the s_i components and it corresponds to the correct pattern. This can
be demonstrated by contradiction. Let x^α ∈ A^n and x^β ∈ A^n be the patterns corresponding to z_α^ϖ = 1 and z_β^ϖ = 1 when x^ϖ is presented to V, with x^α, x^β, x^ϖ fundamental patterns. First, we assume that x^α is the correct pattern and x^β an arbitrary spurious recalled pattern, with corresponding max sum values s_α and s_β, respectively; we claim that s_α > s_β. Now, we assume the negation of what we want to prove:

s_α ≤ s_β    (9)

By Lemma 2, expression (9) can be written as

U_α ≤ U_β    (10)

By Lemma 3, for each spurious pattern x^i recalled when x^ϖ is the correct one, z_i^ϖ = 1 implies x^i ≤ x^ϖ, hence x^i < x^ϖ. Therefore we can take as a hypothesis

x^β < x^α    (11)

According to Definition 1, inequality (11) can be expressed as U_β < U_α, which contradicts expression (10); then U_α ≤ U_β is false. Therefore U_α > U_β, put differently s_α > s_β, is true for every spurious recalled pattern, since x^β was chosen arbitrarily.

Alpha-Beta Heteroassociative Memory Type Min

Learning Phase. Let x ∈ A^n and y ∈ A^p be input and output vectors, respectively. The corresponding fundamental set is denoted by {(x^μ, y^μ) | μ = 1, 2, ..., p} and is built according to the following conditions: the y vectors use the zero-hot codification, assigning to the output binary pattern y^μ the values y_k^μ = 0 and y_j^μ = 1 for j = 1, 2, ..., k−1, k+1, ..., m, where k ∈ {1, 2, ..., m}; and to each y^μ vector corresponds one and only one x^μ vector.

Step 1. For each μ ∈ {1, 2, ..., p}, from the couple (x^μ, y^μ) build the m × n matrix [y^μ ⊠ (x^μ)^t]; then the Min binary operator (⋀) is applied to the matrices obtained. Therefore, the Λ matrix is obtained as Λ = ⋀_{μ=1}^p [y^μ ⊠ (x^μ)^t], where the ij-th component is given by λ_ij = ⋀_{μ=1}^p α(y_i^μ, x_j^μ).

Recalling Phase

Step 1.
A pattern x^ϖ is presented to Λ, the ∇_β operation is performed, and the resulting vector is assigned to a vector called z^ϖ: z^ϖ = Λ ∇_β x^ϖ. The i-th component of the resulting column vector is z_i^ϖ = ⋁_{j=1}^n β(λ_ij, x_j^ϖ).
Step 2. It is necessary to build the min sum vector r according to Definition 6; then the corresponding y^ϖ is given by: y_i^ϖ = 0 if z_i^ϖ = 0 and r_i = max_{k∈θ} r_k, and y_i^ϖ = 1 otherwise, where θ = {i | z_i^ϖ = 0}.

Below we present the lemmas and a theorem that support the Alpha-Beta heteroassociative memory type Min presented above. For reasons of space, the proofs of Lemmas 4, 5 and 6 are not developed here; they are obtained by duality.

Lemma 4. Let x^i ∈ A^n be a pattern randomly chosen from the fundamental set. In the Alpha-Beta heteroassociative memory type Min learning phase, x^i contributes only to the i-th row of Λ, with U_i times the value 0 and (n − U_i) times the value 1.

Proof. This proof is similar to the one presented for Lemma 1, taking into account the conditions expressed in Remark 1.

Lemma 5. Let r be the min sum vector of the matrix Λ; then r_i = C_i for all i ∈ {1, 2, ..., p}.

Proof. This proof is similar to the one presented for Lemma 2, taking into account the conditions expressed in Remark 1.

Lemma 6. Let Λ be a heteroassociative memory type Min whose fundamental set is {(x^μ, y^μ) | μ = 1, 2, ..., p}, and let x^ϖ ∈ A^n be an input pattern presented to Λ, with A = {0, 1}, ϖ ∈ {1, 2, ..., p}, n, p ∈ Z+. The vector z^ϖ ∈ A^p obtained from the original Alpha-Beta heteroassociative recall phase type Min contains the value 0 in its i-th component exactly when the i-th row of Λ corresponds to a fundamental pattern greater than or equal to x^ϖ; put differently: for all i, z_i^ϖ = 0 ⇔ x^i ≥ x^ϖ, with x^i a pattern of the fundamental set.

Proof. This proof is similar to the one presented for Lemma 3, taking into account the conditions expressed in Remark 1.

Theorem 2. Let Λ be a heteroassociative memory type Min whose fundamental set, without any repeated pair, is {(x^μ, y^μ) | μ = 1, 2, ..., p}. Let x^ϖ ∈ A^n be an input pattern presented to Λ and z^ϖ ∈ A^p the resulting class vector from the original Alpha-Beta heteroassociative memory recall phase type Min.
The proposed algorithm always obtains complete recall; in other words, we always obtain the corresponding y^ϖ without ambiguity.

Proof. To prove the complete recall of the proposed algorithm it is necessary to ensure that, among all components where z_i^ϖ = 0, there is just one maximum value among the r_i components and it corresponds to the correct pattern. This can
be demonstrated by contradiction. Let x^α ∈ A^n and x^β ∈ A^n be the patterns corresponding to z_α^ϖ = 0 and z_β^ϖ = 0 when x^ϖ is presented to Λ, with x^α, x^β, x^ϖ fundamental patterns. First, we assume that x^α is the correct pattern and x^β an arbitrary spurious recalled pattern, with corresponding min sum values r_α and r_β, respectively; we claim that r_α > r_β. Now, we assume the negation of what we want to prove:

r_α ≤ r_β    (12)

By Lemma 5, expression (12) can be written as

C_α ≤ C_β    (13)

By Lemma 6, for each spurious pattern x^i recalled when x^ϖ is the correct one, z_i^ϖ = 0 implies x^i ≥ x^ϖ, hence x^i > x^ϖ. Therefore we can take as a hypothesis

x^β > x^α    (14)

According to Definition 5, inequality (14) can be expressed as C_β < C_α, which contradicts expression (13); then C_α ≤ C_β is false. Therefore C_α > C_β, put differently r_α > r_β, is true for every spurious recalled pattern, since x^β was chosen arbitrarily.

4 Experimental Results

In addition to the theoretical support presented in the last section, a series of experiments was performed to illustrate the efficiency of our proposal. With n the dimension of the input vectors and p the number of input patterns, three different finite samples were randomly and automatically generated. Each of them was used to build six different fundamental sets according to the specifications of our proposal. After that, the six different memories (three Max and three Min) were built and the recall phase was applied, each one with its corresponding fundamental patterns. The results are presented in Table 2. It is evident that the original algorithm presents more errors than the one proposed in this paper.

[Table 2. Experimental Results — for each experiment: experiment number, n, p, and the recall error (%) of the original algorithm (Max and Min) versus the modified algorithm (Max and Min); the numerical entries were not recovered from the transcription.]
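The flavor of these experiments can be reproduced in miniature. The sketch below is our illustration (not the authors' code), assuming the standard Table 1 operator values, one-hot outputs, and the max sum vector of Definition 2; it compares the original type-Max recall against the modified recall on a random fundamental set:

```python
import random

def alpha(x, y):
    # Alpha operator of Table 1: A x A -> B.
    return {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}[(x, y)]

def beta(x, y):
    # Beta operator of Table 1: B x A -> A.
    return {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1,
            (2, 0): 1, (2, 1): 1}[(x, y)]

def learn_max(patterns):
    # One-hot outputs: by expression (2), row mu of V depends only on x^mu,
    # so v_{mu,j} = alpha(1, x_j^mu) (1 where x_j = 1, 2 where x_j = 0).
    return [[alpha(1, xj) for xj in x] for x in patterns]

def recall_original(V, x):
    # Original recall: z_i = min over j of beta(v_ij, x_j).
    return [min(beta(v, xj) for v, xj in zip(row, x)) for row in V]

def recall_modified(V, x):
    # Modified recall: disambiguate the 1-components of z with the
    # max sum vector s (Definition 2); the max is unique by Theorem 1.
    z = recall_original(V, x)
    s = [sum(1 for v in row if v == 1) for row in V]
    theta = [i for i, zi in enumerate(z) if zi == 1]
    best = max(theta, key=lambda i: s[i])
    return [1 if i == best else 0 for i in range(len(V))]

random.seed(0)
n, p = 10, 8
patterns = []                      # p distinct random binary patterns
while len(patterns) < p:
    x = [random.randint(0, 1) for _ in range(n)]
    if x not in patterns:
        patterns.append(x)
V = learn_max(patterns)
one_hot = [[1 if i == mu else 0 for i in range(p)] for mu in range(p)]
orig_errors = sum(recall_original(V, x) != one_hot[mu]
                  for mu, x in enumerate(patterns))
mod_errors = sum(recall_modified(V, x) != one_hot[mu]
                 for mu, x in enumerate(patterns))
```

On the fundamental set the modified recall gives zero errors, as Theorem 1 guarantees, while the original recall fails whenever one fundamental pattern is dominated (component-wise) by another.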
5 Conclusion and Future Work

In this work we proposed a new algorithm for the Alpha-Beta heteroassociative memories that lets us recall the fundamental patterns without ambiguity. Therefore, the model of Alpha-Beta associative memories now ensures complete recall of the fundamental set in both the autoassociative and the heteroassociative cases. However, the conditions for correct recall of non-fundamental patterns have not yet been characterized. The theoretical support for this proposal is presented here, along with some experimental tests. Currently we are working on applications of the new Alpha-Beta heteroassociative algorithm; as future work, we will investigate the conditions that allow our algorithm to show correct recall on non-fundamental patterns.

Acknowledgements. The authors would like to thank the Instituto Politécnico Nacional (Secretaría Académica, COFAA, SIP, and CIC), the CONACyT, and SNI for their financial support of this work.

References

1. Yáñez-Márquez, C.: Associative Memories Based on Order Relations and Binary Operators (in Spanish). PhD Thesis, Center for Computing Research, México (2002)
2. Kohonen, T.: Self-Organization and Associative Memory. Springer, Heidelberg (1989)
3. Hassoun, M.H.: Associative Neural Memories. Oxford University Press, New York (1993)
4. Steinbuch, K.: Die Lernmatrix. Kybernetik 1(1) (1961)
5. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. In: Proceedings of the National Academy of Sciences, vol. 79 (1982)
6. Ritter, G.X., Sussner, P., Diaz-de-Leon, J.L.: Morphological Associative Memories. IEEE Transactions on Neural Networks 9 (1998)
7. Yáñez-Márquez, C., Felipe-Riverón, E.M., López-Yáñez, I., Flores-Carapia, R.: A Novel Approach to Automatic Color Matching. In: Martínez-Trinidad, J.F., Carrasco Ochoa, J.A., Kittler, J. (eds.) CIARP 2006. LNCS, vol. 4225. Springer, Heidelberg (2006)
8.
Acevedo-Mosqueda, M.E., Yáñez-Márquez, C., López-Yáñez, I.: Alpha-Beta Bidirectional Associative Memories Based Translator. IJCSNS International Journal of Computer Science and Network Security 6(5A) (2006)
Strictly Positive Definite Functions on a Real Inner Product Space Allan Pinkus Abstract. If ft) = a kt k converges for all t IR with all coefficients a k 0, then the function f< x, y >) is positive definite
More informationDepartment of Computer Science University at Albany, State University of New York Solutions to Sample Discrete Mathematics Examination I (Spring 2008)
Department of Computer Science University at Albany, State University of New York Solutions to Sample Discrete Mathematics Examination I (Spring 2008) Problem 1: Suppose A, B, C and D are arbitrary sets.
More informationIn biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required.
In biological terms, memory refers to the ability of neural systems to store activity patterns and later recall them when required. In humans, association is known to be a prominent feature of memory.
More informationA Picture for Complex Stochastic Boolean Systems: The Intrinsic Order Graph
A Picture for Complex Stochastic Boolean Systems: The Intrinsic Order Graph Luis González University of Las Palmas de Gran Canaria, Department of Mathematics, Research Institute IUSIANI, 357 Las Palmas
More informationElectronic version of an article published as [Automatica, 2004, vol. 40, No. 8, p ] [DOI:
Electronic version of an article published as [Automatica, 2004, vol. 40, No. 8, p. 1423-1428] [DOI: http://dx.doi.org/10.1016/j.automatica.2004.03.009] [copyright Elsevier] Feedback passivity of nonlinear
More informationLearning and Memory in Neural Networks
Learning and Memory in Neural Networks Guy Billings, Neuroinformatics Doctoral Training Centre, The School of Informatics, The University of Edinburgh, UK. Neural networks consist of computational units
More informationFriedman s test with missing observations
Friedman s test with missing observations Edyta Mrówka and Przemys law Grzegorzewski Systems Research Institute, Polish Academy of Sciences Newelska 6, 01-447 Warsaw, Poland e-mail: mrowka@ibspan.waw.pl,
More informationApproximation Bound for Fuzzy-Neural Networks with Bell Membership Function
Approximation Bound for Fuzzy-Neural Networks with Bell Membership Function Weimin Ma, and Guoqing Chen School of Economics and Management, Tsinghua University, Beijing, 00084, P.R. China {mawm, chengq}@em.tsinghua.edu.cn
More informationLecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector
More informationNeural Networks Lecture 6: Associative Memory II
Neural Networks Lecture 6: Associative Memory II H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi Neural
More informationContraction Methods for Convex Optimization and monotone variational inequalities No.12
XII - 1 Contraction Methods for Convex Optimization and monotone variational inequalities No.12 Linearized alternating direction methods of multipliers for separable convex programming Bingsheng He Department
More informationStochastic Design Criteria in Linear Models
AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework
More informationCase study: stochastic simulation via Rademacher bootstrap
Case study: stochastic simulation via Rademacher bootstrap Maxim Raginsky December 4, 2013 In this lecture, we will look at an application of statistical learning theory to the problem of efficient stochastic
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationContent-Addressable Memory Associative Memory Lernmatrix Association Heteroassociation Learning Retrieval Reliability of the answer
Associative Memory Content-Addressable Memory Associative Memory Lernmatrix Association Heteroassociation Learning Retrieval Reliability of the answer Storage Analysis Sparse Coding Implementation on a
More informationThe definitions and notation are those introduced in the lectures slides. R Ex D [h
Mehryar Mohri Foundations of Machine Learning Courant Institute of Mathematical Sciences Homework assignment 2 October 04, 2016 Due: October 18, 2016 A. Rademacher complexity The definitions and notation
More informationλ-universe: Introduction and Preliminary Study
λ-universe: Introduction and Preliminary Study ABDOLREZA JOGHATAIE CE College Sharif University of Technology Azadi Avenue, Tehran IRAN Abstract: - Interactions between the members of an imaginary universe,
More informationProof Terminology. Technique #1: Direct Proof. Learning objectives. Proof Techniques (Rosen, Sections ) Direct Proof:
Proof Terminology Proof Techniques (Rosen, Sections 1.7 1.8) TOPICS Direct Proofs Proof by Contrapositive Proof by Contradiction Proof by Cases Theorem: statement that can be shown to be true Proof: a
More information12. Perturbed Matrices
MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,
More informationMultilayer Perceptron
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Single Perceptron 3 Boolean Function Learning 4
More informationNon-Hermitian inverted Harmonic Oscillator-Type Hamiltonians Generated from Supersymmetry with Reflections arxiv: v2 [hep-th] 28 Mar 2018
Non-Hermitian inverted Harmonic Oscillator-Type Hamiltonians Generated from Supersymmetry with Reflections arxiv:1707.0908v [hep-th] 8 Mar 018 R. D. Mota a, D. Ojeda-Guillén b, M. Salazar-Ramírez b and
More informationPOLARS AND DUAL CONES
POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.
More informationLecture Learning infinite hypothesis class via VC-dimension and Rademacher complexity;
CSCI699: Topics in Learning and Game Theory Lecture 2 Lecturer: Ilias Diakonikolas Scribes: Li Han Today we will cover the following 2 topics: 1. Learning infinite hypothesis class via VC-dimension and
More informationarxiv: v1 [math.co] 3 Nov 2014
SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed
More informationLecture 7: More Arithmetic and Fun With Primes
IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Advanced Course on Computational Complexity Lecture 7: More Arithmetic and Fun With Primes David Mix Barrington and Alexis Maciel July
More informationIntroduction to Decision Sciences Lecture 10
Introduction to Decision Sciences Lecture 10 Andrew Nobel October 17, 2017 Mathematical Induction Given: Propositional function P (n) with domain N + Basis step: Show that P (1) is true Inductive step:
More informationarxiv: v1 [cs.sy] 25 Oct 2017
Reconstruct the Logical Network from the Transition Matrix Cailu Wang, Yuegang Tao School of Control Science and Engineering, Hebei University of Technology, Tianjin, 300130, P. R. China arxiv:1710.09681v1
More informationNeural Networks Introduction CIS 32
Neural Networks Introduction CIS 32 Functionalia Office Hours (Last Change!) - Location Moved to 0317 N (Bridges Room) Today: Alpha-Beta Example Neural Networks Learning with T-R Agent (from before) direction
More informationApproximation Properties of Positive Boolean Functions
Approximation Properties of Positive Boolean Functions Marco Muselli Istituto di Elettronica e di Ingegneria dell Informazione e delle Telecomunicazioni, Consiglio Nazionale delle Ricerche, via De Marini,
More informationWeek 4: Hopfield Network
Week 4: Hopfield Network Phong Le, Willem Zuidema November 20, 2013 Last week we studied multi-layer perceptron, a neural network in which information is only allowed to transmit in one direction (from
More informationMATH FINAL EXAM REVIEW HINTS
MATH 109 - FINAL EXAM REVIEW HINTS Answer: Answer: 1. Cardinality (1) Let a < b be two real numbers and define f : (0, 1) (a, b) by f(t) = (1 t)a + tb. (a) Prove that f is a bijection. (b) Prove that any
More informationNeural Network Analysis of Russian Parliament Voting Patterns
eural etwork Analysis of Russian Parliament Voting Patterns Dusan Husek Acad. of Sci. of the Czech Republic, Institute of Computer Science, the Czech Republic Email: dusan@cs.cas.cz Alexander A. Frolov
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 12 Luca Trevisan October 3, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analysis Handout 1 Luca Trevisan October 3, 017 Scribed by Maxim Rabinovich Lecture 1 In which we begin to prove that the SDP relaxation exactly recovers communities
More informationMINIMAL GENERATING SETS OF GROUPS, RINGS, AND FIELDS
MINIMAL GENERATING SETS OF GROUPS, RINGS, AND FIELDS LORENZ HALBEISEN, MARTIN HAMILTON, AND PAVEL RŮŽIČKA Abstract. A subset X of a group (or a ring, or a field) is called generating, if the smallest subgroup
More informationThis is a repository copy of Improving the associative rule chaining architecture.
This is a repository copy of Improving the associative rule chaining architecture. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/75674/ Version: Accepted Version Book Section:
More informationTesting a Normal Covariance Matrix for Small Samples with Monotone Missing Data
Applied Mathematical Sciences, Vol 3, 009, no 54, 695-70 Testing a Normal Covariance Matrix for Small Samples with Monotone Missing Data Evelina Veleva Rousse University A Kanchev Department of Numerical
More informationLinear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,
Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,
More informationHopfield Neural Network
Lecture 4 Hopfield Neural Network Hopfield Neural Network A Hopfield net is a form of recurrent artificial neural network invented by John Hopfield. Hopfield nets serve as content-addressable memory systems
More informationWavelets and Linear Algebra
Wavelets and Linear Algebra 1 (2014) 43-50 Wavelets and Linear Algebra http://wala.vru.ac.ir Vali-e-Asr University Linear preservers of two-sided matrix majorization Fatemeh Khalooeia, a Department of
More informationECON 7335 INFORMATION, LEARNING AND EXPECTATIONS IN MACRO LECTURE 1: BASICS. 1. Bayes Rule. p(b j A)p(A) p(b)
ECON 7335 INFORMATION, LEARNING AND EXPECTATIONS IN MACRO LECTURE : BASICS KRISTOFFER P. NIMARK. Bayes Rule De nition. Bayes Rule. The probability of event A occurring conditional on the event B having
More informationResilient Distributed Optimization Algorithm against Adversary Attacks
207 3th IEEE International Conference on Control & Automation (ICCA) July 3-6, 207. Ohrid, Macedonia Resilient Distributed Optimization Algorithm against Adversary Attacks Chengcheng Zhao, Jianping He
More informationSupervised (BPL) verses Hybrid (RBF) Learning. By: Shahed Shahir
Supervised (BPL) verses Hybrid (RBF) Learning By: Shahed Shahir 1 Outline I. Introduction II. Supervised Learning III. Hybrid Learning IV. BPL Verses RBF V. Supervised verses Hybrid learning VI. Conclusion
More informationSpectral Graph Theory Lecture 2. The Laplacian. Daniel A. Spielman September 4, x T M x. ψ i = arg min
Spectral Graph Theory Lecture 2 The Laplacian Daniel A. Spielman September 4, 2015 Disclaimer These notes are not necessarily an accurate representation of what happened in class. The notes written before
More informationI. INTRODUCTION BINARY SEQUENCE GENERATOR THE NUIBER OF OUTPUT SEQUENCES OF A. (BSG) for cryptographic or spread-spectrum applications is the number
THE NUIBER OF OUTPUT SEQUENCES OF A BINARY SEQUENCE GENERATOR Jovan Dj. GoliC Institute of Applied Hathematics and Electronics. Belgrade School of Electrical Engineering, University of Belgrade. Yugoslavia
More informationCS Foundations of Communication Complexity
CS 2429 - Foundations of Communication Complexity Lecturer: Sergey Gorbunov 1 Introduction In this lecture we will see how to use methods of (conditional) information complexity to prove lower bounds for
More informationMark Gales October y (x) x 1. x 2 y (x) Inputs. Outputs. x d. y (x) Second Output layer layer. layer.
University of Cambridge Engineering Part IIB & EIST Part II Paper I0: Advanced Pattern Processing Handouts 4 & 5: Multi-Layer Perceptron: Introduction and Training x y (x) Inputs x 2 y (x) 2 Outputs x
More informationA Do It Yourself Guide to Linear Algebra
A Do It Yourself Guide to Linear Algebra Lecture Notes based on REUs, 2001-2010 Instructor: László Babai Notes compiled by Howard Liu 6-30-2010 1 Vector Spaces 1.1 Basics Definition 1.1.1. A vector space
More informationUniqueness of Generalized Equilibrium for Box Constrained Problems and Applications
Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Alp Simsek Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Asuman E.
More informationCE213 Artificial Intelligence Lecture 14
CE213 Artificial Intelligence Lecture 14 Neural Networks: Part 2 Learning Rules -Hebb Rule - Perceptron Rule -Delta Rule Neural Networks Using Linear Units [ Difficulty warning: equations! ] 1 Learning
More information1 Introduction The study of the existence of solutions of Variational Inequalities on unbounded domains usually involves the same sufficient assumptio
Coercivity Conditions and Variational Inequalities Aris Daniilidis Λ and Nicolas Hadjisavvas y Abstract Various coercivity conditions appear in the literature in order to guarantee solutions for the Variational
More information(x k ) sequence in F, lim x k = x x F. If F : R n R is a function, level sets and sublevel sets of F are any sets of the form (respectively);
STABILITY OF EQUILIBRIA AND LIAPUNOV FUNCTIONS. By topological properties in general we mean qualitative geometric properties (of subsets of R n or of functions in R n ), that is, those that don t depend
More informationTensor Method for Constructing 3D Moment Invariants
Tensor Method for Constructing 3D Moment Invariants Tomáš Suk and Jan Flusser Institute of Information Theory and Automation of the ASCR {suk,flusser}@utia.cas.cz Abstract. A generalization from 2D to
More informationOn the number of ways of writing t as a product of factorials
On the number of ways of writing t as a product of factorials Daniel M. Kane December 3, 005 Abstract Let N 0 denote the set of non-negative integers. In this paper we prove that lim sup n, m N 0 : n!m!
More informationExtending the Associative Rule Chaining Architecture for Multiple Arity Rules
Extending the Associative Rule Chaining Architecture for Multiple Arity Rules Nathan Burles, James Austin, and Simon O Keefe Advanced Computer Architectures Group Department of Computer Science University
More informationA Statistical Genetic Algorithm
A Statistical Genetic Algorithm Angel Kuri M. akm@pollux.cic.ipn.mx Centro de Investigación en Computación Instituto Politécnico Nacional Zacatenco México 07738, D.F. Abstract A Genetic Algorithm which
More informationLecture 9 and 10: Malicious Security - GMW Compiler and Cut and Choose, OT Extension
CS 294 Secure Computation February 16 and 18, 2016 Lecture 9 and 10: Malicious Security - GMW Compiler and Cut and Choose, OT Extension Instructor: Sanjam Garg Scribe: Alex Irpan 1 Overview Garbled circuits
More informationLecture 7: Schwartz-Zippel Lemma, Perfect Matching. 1.1 Polynomial Identity Testing and Schwartz-Zippel Lemma
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 7: Schwartz-Zippel Lemma, Perfect Matching Lecturer: Shayan Oveis Gharan 01/30/2017 Scribe: Philip Cho Disclaimer: These notes have not
More informationFuzzy Cognitive Maps Learning through Swarm Intelligence
Fuzzy Cognitive Maps Learning through Swarm Intelligence E.I. Papageorgiou,3, K.E. Parsopoulos 2,3, P.P. Groumpos,3, and M.N. Vrahatis 2,3 Department of Electrical and Computer Engineering, University
More informationAssociative Neural Networks using Matlab
Associative Neural Networks using Matlab Example 1: Write a matlab program to find the weight matrix of an auto associative net to store the vector (1 1-1 -1). Test the response of the network by presenting
More information