Alpha Beta bidirectional associative memories: theory and applications


Neural Processing Letters (2007)

María Elena Acevedo-Mosqueda · Cornelio Yáñez-Márquez · Itzamá López-Yáñez

Received: October 2006 / Accepted: April 2007 / Published online: June 2007
© Springer Science+Business Media B.V. 2007

Abstract  In this work a new bidirectional associative memory model, surpassing every past and current model, is presented. This new model is based on the Alpha Beta associative memories, from which it inherits its name. The main and most important characteristic of the Alpha Beta bidirectional associative memories is that they exhibit perfect recall of all the patterns in the fundamental set, without requiring the fulfillment of any condition. The capacity they show is 2^min(n,m), n and m being the dimensions of the input and output patterns, respectively. The design and functioning of this model are mathematically founded, thus demonstrating that pattern recall is always perfect, regardless of the characteristics of the trained patterns, such as linear independence, orthogonality, or Hamming distance. Two applications illustrating the optimal functioning of the model are shown: a translator and a fingerprint identifier.

Keywords  Bidirectional associative memories · Alpha Beta associative memories · Perfect recall · Fingerprint identifier

1 Introduction

The area of associative memories, as a relevant part of Computer Science, has achieved ample importance and dynamism in the activities developed by numerous research groups around the globe, specifically among those working on topics related to the theory and applications of pattern recognition and classification.

M. E. Acevedo-Mosqueda · C. Yáñez-Márquez · I. López-Yáñez
Laboratorio de Inteligencia Artificial, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz s/n, México, DF 07738, México
e-mail: eacevedo@ipn.mx; cyanez@cic.ipn.mx; ilopezb5@sagitario.cic.ipn.mx

The fundamental purpose of an associative memory is to correctly recall complete patterns from input patterns, which may be altered with additive, subtractive, or mixed noise [1]. In the design of an associative memory there are two phases: before the pattern recalling phase comes the learning phase, during which the associative memory is built. In order to do so, associations are made; these associations are pairs of patterns, one input pattern and one output pattern. If for every association the input pattern is equal to the output pattern, the resulting memory is autoassociative; otherwise, the memory is heteroassociative. The latter means that an autoassociative memory can be considered a particular case of a heteroassociative memory [2].

Throughout time, associative memories have been developed in parallel with neural networks, from the conception of the first model of the artificial neuron [3] to neural network models based on modern concepts, such as mathematical morphology [4], passing through such important works as those by the pioneers of perceptron-like neural network models [5–7]. In 1982 John J. Hopfield presented to the world his associative memory, a model inspired by physics concepts which also has the particularity of an iterative algorithm [8]. This work incited a renewal of researchers' interest in topics regarding both associative memories and neural networks, which had been dormant for some years. In this sense, neural networks had a recess of some 13 years, starting from the publication in 1969 of the book Perceptrons [9] by Minsky and Papert, in which the authors demonstrated that the perceptron had severe limitations. On the side of associative memories, it was Karl Steinbuch who, in 1961, developed a heteroassociative memory able to work as a binary pattern classifier; he called this first model of associative memory the Lernmatrix [10]. Eight years later, the Scottish researchers Willshaw et al. [11] presented the Correlograph, an elementary optical device capable of working as an associative memory. There is another important antecedent to the Hopfield memory as well: two classical models of associative memory were independently presented in 1972 by Anderson [12] and Kohonen [13]. Due to their importance and the similarity of the concepts involved, both models are generically named the Linear Associator.

The work presented by Hopfield has enormous relevance, since his neural network model demonstrates that the interaction of simple processing elements, similar to neurons, gives rise to collective computational properties, such as the stability of memories. However, the Hopfield model of associative memory has two disadvantages: first, its pattern recalling capacity is notably small, only 0.15n, where n is the dimension of the stored patterns; second, the Hopfield memory is only autoassociative, that is, it is incapable of associating patterns which are different. With the intention of solving the second disadvantage of the Hopfield model, in 1988 Kosko [14] created a heteroassociative memory model starting from the Hopfield memory: the Bidirectional Associative Memory (BAM), which is based on an iterative algorithm, the same as the Hopfield memory. The main characteristic of the Kosko BAM is its heteroassociativity, made possible on the basis of the Hopfield model.
Kosko proposed a solution based on the same matrix that represents the Hopfield model, with which he was able to carry out the learning phase in both directions: to obtain an output pattern from an input pattern and, vice versa, to obtain an input pattern from an output pattern. From this comes the name bidirectional. Even though the Kosko model was successful in obtaining a heteroassociative memory, the other aforementioned disadvantage of the Hopfield memory was not solved by the BAM: the Kosko bidirectional associative memory has a very low pattern learning and recalling capacity, bounded by the minimum of the dimensions of the input and output patterns.

In the same year the BAM was published, 1988, the first attempts of scientific research to improve the BAM pattern learning and recalling capacity appeared [15]. Since then, diverse research groups from around the globe have tried, through the years, to minimize this disadvantage of the BAM. In order to do so, many scientific concepts and mathematical techniques have been used, from multiple training to Householder coding [16] and genetic algorithms [17], and in some cases linear programming. However, the achievements have been modest and scarce. Actually, the vast majority of the proposals of new bidirectional associative memory models presented in scientific journals and high-level symposiums do not even guarantee complete recall of the fundamental set. In other words, these models are not capable of recalling all of the learned patterns, and they fail in one or more of them [14–43]. In the present work, we present the theoretical foundation on which the design and functioning of the Alpha Beta bidirectional associative memories are based; these memories exhibit perfect recall of all the patterns in the fundamental set.

2 Alpha Beta associative memories

In this section, basic concepts about associative memories are presented. Also, since the Alpha Beta associative memories are the basis for the Alpha Beta bidirectional associative memories, the theoretical foundation of the Alpha Beta associative memories is presented, as described in [44].

2.1 Basic concepts

The fundamental purpose of an associative memory is to correctly recall complete patterns from input patterns, which may be altered with additive, subtractive, or mixed noise. The concepts used in this section are presented in [1,2,44].

An associative memory can be formulated as an input-output system, an idea that is schematized as follows:

x → M → y

In this diagram, the input and output patterns are represented by column vectors, denoted by x and y, respectively. Every input pattern forms an association with its corresponding output pattern, similarly to an ordered pair. For instance, the patterns x and y in the former diagram form the association (x, y). Input and output patterns will be denoted by bold letters, x and y, adding natural numbers as superscripts for symbolic discrimination. For instance, to an input pattern x^1 corresponds the output pattern y^1, and together they form the association (x^1, y^1). In the same manner, for a specific positive integer k, the corresponding association will be (x^k, y^k).

The associative memory M is represented by a matrix generated from a finite set of associations, known beforehand: this is the fundamental set of associations, or simply the fundamental set. The fundamental set is represented as follows:

{(x^µ, y^µ) | µ = 1, 2, ..., p}

where p is a positive integer representing the cardinality of the fundamental set.

The patterns that form the associations of the fundamental set are called fundamental patterns. The nature of the fundamental set gives us an important criterion by which it is possible to classify associative memories:

A memory is autoassociative if it holds that x^µ = y^µ ∀µ ∈ {1, 2, ..., p}; in this case one of the requisites is that n = m.
A memory is heteroassociative when ∃µ ∈ {1, 2, ..., p} for which x^µ ≠ y^µ. Notice that there can be heteroassociative memories with n = m.

In problems where associative memories intervene, two important phases are considered: the learning phase, where the associative memory is generated from the p associations of the fundamental set, and the recalling phase, where the associative memory operates on an input pattern, as shown in the diagram presented at the beginning of this section.

In order to specify the pattern components, the notation of two sets, which we will arbitrarily call A and B, is required. The components of the column vectors representing patterns, both input and output, will be elements of the set A, while the components of the matrix M will be elements of the set B. There are no prerequisites or limitations with respect to the choice of these two sets, and therefore they do not need to be different or have special characteristics; this makes the number of possible choices for A and B infinite. By convention, every column vector representing an input pattern has n components whose values belong to the set A, and every column vector representing an output pattern has m components whose values belong to the set A. In other words:

x^µ ∈ A^n and y^µ ∈ A^m, ∀µ ∈ {1, 2, ..., p}

The jth component of a column vector is indicated by the same letter of the vector, not in bold, with j as a subscript (j ∈ {1, 2, ..., n} or j ∈ {1, 2, ..., m}, accordingly). The jth component of a column vector x^µ is represented by x^µ_j.

With the described basic concepts and the former notation, it is possible to express the two phases of an associative memory:

1. Learning phase (generation of the associative memory). Find the adequate operators and a way to generate a matrix M that will store the p associations of the fundamental set {(x^1, y^1), (x^2, y^2), ..., (x^p, y^p)}, where x^µ ∈ A^n and y^µ ∈ A^m ∀µ ∈ {1, 2, ..., p}. If ∃µ ∈ {1, 2, ..., p} such that x^µ ≠ y^µ, the memory will be heteroassociative; if m = n and x^µ = y^µ ∀µ ∈ {1, 2, ..., p}, the memory will be autoassociative.
2. Recalling phase (operation of the associative memory). Find the adequate operators and sufficient conditions to obtain the output fundamental pattern y^µ when the memory M is operated with the input fundamental pattern x^µ. The latter must hold for every element of the fundamental set and for both modes: autoassociative and heteroassociative.

It is said that an associative memory M exhibits perfect recall if, during the recalling phase, when presented with a pattern x^ω as input (ω ∈ {1, 2, ..., p}), M answers with the corresponding output fundamental pattern y^ω.

A bidirectional associative memory is also an input-output system, only the process is bidirectional. The forward direction is described in the same manner as a common associative memory: when presented with an input x, the system delivers an output y. The backward direction takes place when presenting to the system an input y, in order to receive an output x.
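As a purely illustrative aid (not part of the original formulation), the following minimal sketch expresses the fundamental set, the two phases, and the perfect-recall criterion in Python/NumPy; the class and function names are assumptions chosen for this sketch.

```python
# Minimal sketch: an associative memory as an input-output system with a
# learning phase and a recalling phase.  Patterns are binary NumPy vectors.
import numpy as np

class AssociativeMemory:
    def learn(self, fundamental_set):
        """Learning phase: build the matrix M from the p associations (x^mu, y^mu)."""
        raise NotImplementedError

    def recall(self, x):
        """Recalling phase: operate the memory on an input pattern, return the output."""
        raise NotImplementedError

def exhibits_perfect_recall(memory, fundamental_set):
    """True if the memory answers y^omega for every fundamental input x^omega."""
    return all(np.array_equal(memory.recall(x), y) for x, y in fundamental_set)
```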

2.2 Alpha Beta associative memories

In this section the definitions of the α and β operations, and the matrix operations that make use of these original operations, are presented. Emphasis is put on the learning and recalling phases of the Alpha Beta associative memories, both for the max and min kinds, since they are fundamental to the Alpha Beta BAM design. Also, we present two theorems that set the theoretical basis for the Alpha Beta bidirectional associative memories. The theorems giving the theoretical foundation of the Alpha Beta autoassociative memories are numbered according to the original numeration appearing in [44].

The Alpha Beta associative memories are of two kinds and are able to operate in two different modes. The operator α is useful at the learning phase, while the operator β is the basis for the pattern recalling phase. The heart of the mathematical tools used in the Alpha Beta model are two binary operators designed specifically for these memories. These operators are defined as follows: first, we define the sets A = {0, 1} and B = {0, 1, 2}; then the operators α and β are defined in tabular form:

α : A × A → B
  x  y  α(x, y)
  0  0  1
  0  1  0
  1  0  2
  1  1  1

β : B × A → A
  x  y  β(x, y)
  0  0  0
  0  1  0
  1  0  0
  1  1  1
  2  0  1
  2  1  1

The sets A and B, the α and β operators, along with the usual ∧ (minimum) and ∨ (maximum) operators, form the algebraic system (A, B, α, β, ∧, ∨), which is the mathematical basis for the Alpha Beta associative memories.

The definition of four matrix operations is required; of these operations, only the following particular cases will be used:

αmax operation: P_{m×r} ∇_α Q_{r×n} = [f^α_ij]_{m×n}, where f^α_ij = ⋁_{k=1}^{r} α(p_ik, q_kj)

βmax operation: P_{m×r} ∇_β Q_{r×n} = [f^β_ij]_{m×n}, where f^β_ij = ⋁_{k=1}^{r} β(p_ik, q_kj)

αmin operation: P_{m×r} Δ_α Q_{r×n} = [h^α_ij]_{m×n}, where h^α_ij = ⋀_{k=1}^{r} α(p_ik, q_kj)

βmin operation: P_{m×r} Δ_β Q_{r×n} = [h^β_ij]_{m×n}, where h^β_ij = ⋀_{k=1}^{r} β(p_ik, q_kj)
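To make the operators concrete, here is a minimal sketch in Python/NumPy of α, β and the four matrix operations defined above; the closed forms for α and β reproduce the tables, and all function names (alpha_max, beta_min, and so on) are illustrative assumptions, not notation from the paper.

```python
# Sketch of the alpha and beta operators over A = {0, 1}, B = {0, 1, 2},
# and of the four matrix operations (max/min reductions of alpha or beta).
import numpy as np

def alpha(x: int, y: int) -> int:
    """alpha : A x A -> B.  alpha(0,0)=1, alpha(0,1)=0, alpha(1,0)=2, alpha(1,1)=1."""
    return 1 + x - y

def beta(x: int, y: int) -> int:
    """beta : B x A -> A.  beta is 1 exactly for (1,1), (2,0) and (2,1)."""
    return 1 if x + y >= 2 else 0

def _reduce_op(P, Q, op, reduce_fn):
    """Generic (P op Q)_ij = reduce_fn over k of op(P_ik, Q_kj)."""
    m, r = P.shape
    r2, n = Q.shape
    assert r == r2, "inner dimensions must agree"
    return np.array([[reduce_fn(op(P[i, k], Q[k, j]) for k in range(r))
                      for j in range(n)] for i in range(m)])

def alpha_max(P, Q):  # nabla_alpha: maximum over k of alpha
    return _reduce_op(P, Q, alpha, max)

def beta_max(P, Q):   # nabla_beta: maximum over k of beta
    return _reduce_op(P, Q, beta, max)

def alpha_min(P, Q):  # delta_alpha: minimum over k of alpha
    return _reduce_op(P, Q, alpha, min)

def beta_min(P, Q):   # delta_beta: minimum over k of beta
    return _reduce_op(P, Q, beta, min)
```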

Below are shown some simplifications obtained when these four operations are applied to vectors.

Let x ∈ A^n and y ∈ A^m; then y ∇_α x^t is a matrix of dimensions m × n. Since the inner dimension is 1, the maximum and the minimum over k coincide, and the ijth entry is simply α(y_i, x_j):

y ∇_α x^t = y Δ_α x^t = [α(y_i, x_j)]_{m×n}

The symbol ⊠ represents both operations ∇_α and Δ_α when a column vector of dimension m is operated with a row vector of dimension n:

y ∇_α x^t = y ⊠ x^t = y Δ_α x^t

Let x ∈ A^n and let P be a matrix of dimension m × n.

Operation P_{m×n} ∇_β x yields as a result a column vector of dimension m, whose ith component has the form (P_{m×n} ∇_β x)_i = ⋁_{j=1}^{n} β(p_ij, x_j).

Operation P_{m×n} Δ_β x yields as a result a column vector of dimension m, whose ith component has the form (P_{m×n} Δ_β x)_i = ⋀_{j=1}^{n} β(p_ij, x_j).

2.3 Alpha Beta autoassociative memories

Below are shown some characteristics of the Alpha Beta autoassociative memories:

1. The fundamental set takes the form {(x^µ, x^µ) | µ = 1, 2, ..., p}.
2. Both input and output fundamental patterns are of the same dimension, denoted by n.
3. The memory is a square matrix, for both kinds, V (max) and Λ (min). If x^µ ∈ A^n then V = [v_ij]_{n×n} and Λ = [λ_ij]_{n×n}.

Next, the learning and recalling phases of the Alpha Beta autoassociative memories of type max are presented.

Learning Phase

Step 1. For every µ = 1, 2, ..., p, from the pair (x^µ, x^µ) the following matrix is built: [x^µ ⊠ (x^µ)^t]_{n×n}

Step 2. The binary max operator ∨ is applied to the matrices obtained in Step 1:

V = ⋁_{µ=1}^{p} [x^µ ⊠ (x^µ)^t]

The ijth entry is given by the following expression: v_ij = ⋁_{µ=1}^{p} α(x^µ_i, x^µ_j), and since α : A × A → B, we have that v_ij ∈ B, ∀i ∈ {1, 2, ..., n}, ∀j ∈ {1, 2, ..., n}.
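A minimal sketch of this learning phase, assuming the α operator of the previous sketch: v_ij is computed as the maximum over µ of α(x^µ_i, x^µ_j). The two patterns at the end are arbitrary illustrative data, not taken from the paper's examples.

```python
# Sketch of the max-type learning phase: V = max over mu of x^mu (outer-alpha) (x^mu)^t.
import numpy as np

def alpha(x, y):            # alpha : A x A -> B
    return 1 + x - y

def learn_max(patterns):
    """Build V with v_ij = max_mu alpha(x^mu_i, x^mu_j) for column patterns in A^n."""
    n = len(patterns[0])
    V = np.zeros((n, n), dtype=int)
    for x in patterns:
        outer = np.array([[alpha(x[i], x[j]) for j in range(n)] for i in range(n)])
        V = np.maximum(V, outer)        # component-wise maximum over the fundamental set
    return V

X = [np.array([1, 0, 0, 1]), np.array([0, 1, 1, 0])]   # illustrative patterns only
print(learn_max(X))
```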

Recalling Phase

The recalling phase of the Alpha Beta autoassociative memories of type max has two possible cases. In the first case, the input pattern is a fundamental pattern; that is, the input is a pattern x^ω, with ω ∈ {1, 2, ..., p}. In the second case, the input pattern is NOT a fundamental pattern, but a distorted version of at least one of the fundamental patterns. This means that if the input pattern is x, there must exist at least one index value ω ∈ {1, 2, ..., p} corresponding to the fundamental pattern with respect to which x is a version altered by one of the three types of noise: additive, subtractive or mixed.

Case 1. Fundamental pattern. A pattern x^ω, with ω ∈ {1, 2, ..., p}, is presented to the autoassociative Alpha Beta memory of type max and the operation Δ_β is done: V Δ_β x^ω. The result of the former operation is a column vector of dimension n, whose ith component is:

(V Δ_β x^ω)_i = ⋀_{j=1}^{n} β(v_ij, x^ω_j) = ⋀_{j=1}^{n} β( [⋁_{µ=1}^{p} α(x^µ_i, x^µ_j)], x^ω_j )

Case 2. Altered pattern. A binary pattern x (an altered version of some fundamental pattern x^ω), which is a column vector of dimension n, is presented to the autoassociative Alpha Beta memory of type max and the operation Δ_β is done: V Δ_β x. As in Case 1, the result of the former operation is a column vector of dimension n, whose ith component is expressed in the following manner:

(V Δ_β x)_i = ⋀_{j=1}^{n} β(v_ij, x_j) = ⋀_{j=1}^{n} β( [⋁_{µ=1}^{p} α(x^µ_i, x^µ_j)], x_j )

Theorem 4.3 Let {(x^µ, x^µ) | µ = 1, 2, ..., p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, and let x ∈ A^n be a pattern altered with additive noise with respect to some fundamental pattern x^ω, with ω ∈ {1, 2, ..., p}. If x is presented to V as input, and also for every i ∈ {1, ..., n} it holds that ∃j = j_0 ∈ {1, ..., n}, which depends on ω and i, such that v_{ij_0} ≤ α(x^ω_i, x_{j_0}), then recall V Δ_β x is perfect; that is to say, V Δ_β x = x^ω.

Theorem 4.3 shows us that the autoassociative Alpha Beta memories of type max are immune to a certain amount of additive noise.

Next, the learning and recalling phases of the Alpha Beta autoassociative memories of type min are presented.

Learning Phase

Step 1. For every µ = 1, 2, ..., p, from the pair (x^µ, x^µ) the following matrix is built: [x^µ ⊠ (x^µ)^t]_{n×n}

Step 2. The binary min operator ∧ is applied to the matrices obtained in Step 1:

Λ = ⋀_{µ=1}^{p} [x^µ ⊠ (x^µ)^t]

The ijth entry is given by the following expression: λ_ij = ⋀_{µ=1}^{p} α(x^µ_i, x^µ_j), and since α : A × A → B, we have that λ_ij ∈ B, ∀i ∈ {1, 2, ..., n}, ∀j ∈ {1, 2, ..., n}.
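The following sketch, under the same assumptions as the previous ones, implements the Δ_β recall of the max memory just described and the ∧-based learning phase of the min memory; function names and any data are illustrative only.

```python
# Sketch: recall with the max memory V uses the beta-min reduction
# (minimum over j of beta(v_ij, x_j)); the min memory Lambda takes the
# component-wise minimum of the alpha outer products.
import numpy as np

def alpha(x, y): return 1 + x - y
def beta(x, y):  return 1 if x + y >= 2 else 0

def recall_max(V, x):
    """(V delta_beta x)_i = min_j beta(v_ij, x_j)."""
    n = len(x)
    return np.array([min(beta(V[i, j], x[j]) for j in range(n))
                     for i in range(V.shape[0])])

def learn_min(patterns):
    """Build Lambda with lambda_ij = min_mu alpha(x^mu_i, x^mu_j)."""
    n = len(patterns[0])
    L = np.full((n, n), 2, dtype=int)   # 2 is the largest value in B
    for x in patterns:
        outer = np.array([[alpha(x[i], x[j]) for j in range(n)] for i in range(n)])
        L = np.minimum(L, outer)        # component-wise minimum over the fundamental set
    return L
```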

Recalling Phase

The recalling phase of the Alpha Beta autoassociative memories of type min has two possible cases. In the first case, the input pattern is a fundamental pattern; that is, the input is a pattern x^ω, with ω ∈ {1, 2, ..., p}. In the second case, the input pattern is NOT a fundamental pattern, but a distorted version of at least one of the fundamental patterns. This means that if the input pattern is x, there must exist at least one index value ω ∈ {1, 2, ..., p} corresponding to the fundamental pattern with respect to which x is a version altered by one of the three types of noise: additive, subtractive or mixed.

Case 1. Fundamental pattern. A pattern x^ω, with ω ∈ {1, 2, ..., p}, is presented to the autoassociative Alpha Beta memory of type min and the operation ∇_β is done: Λ ∇_β x^ω. The result of the former operation is a column vector of dimension n, whose ith component is:

(Λ ∇_β x^ω)_i = ⋁_{j=1}^{n} β(λ_ij, x^ω_j) = ⋁_{j=1}^{n} β( [⋀_{µ=1}^{p} α(x^µ_i, x^µ_j)], x^ω_j )

Case 2. Altered pattern. A binary pattern x (an altered version of some fundamental pattern x^ω), which is a column vector of dimension n, is presented to the autoassociative Alpha Beta memory of type min and the operation ∇_β is done: Λ ∇_β x. As in Case 1, the result of the former operation is a column vector of dimension n, whose ith component is expressed in the following manner:

(Λ ∇_β x)_i = ⋁_{j=1}^{n} β(λ_ij, x_j) = ⋁_{j=1}^{n} β( [⋀_{µ=1}^{p} α(x^µ_i, x^µ_j)], x_j )

Theorem 4.33 Let {(x^µ, x^µ) | µ = 1, 2, ..., p} be the fundamental set of an autoassociative Alpha Beta memory of type min represented by Λ, and let x ∈ A^n be a pattern altered with subtractive noise with respect to some fundamental pattern x^ω, with ω ∈ {1, 2, ..., p}. If x is presented to memory Λ as input, and also for every i ∈ {1, ..., n} it holds that ∃j = j_0 ∈ {1, ..., n}, which depends on ω and i, such that λ_{ij_0} ≥ α(x^ω_i, x_{j_0}), then recall Λ ∇_β x is perfect; that is to say, Λ ∇_β x = x^ω.

Theorem 4.33 confirms that the autoassociative Alpha Beta memories of type min are immune to a certain amount of subtractive noise.

3 Alpha Beta Bidirectional Associative Memory

Before going into detail about the processing done by an Alpha Beta BAM, we will define the following. In this work we will assume that the Alpha Beta bidirectional associative memories have a fundamental set denoted by {(x^µ, y^µ) | µ = 1, 2, ..., p}, with x^µ ∈ A^n and y^µ ∈ A^m, where A = {0, 1}, n ∈ Z^+, p ∈ Z^+, m ∈ Z^+ and 1 < p ≤ min(2^n, 2^m). Also, it holds that all input patterns are different; that is, x^µ = x^ξ if and only if µ = ξ. If ∀µ ∈ {1, 2, ..., p} it holds that x^µ = y^µ, the Alpha Beta memory will be autoassociative; if, on the contrary, ∃µ ∈ {1, 2, ..., p} for which x^µ ≠ y^µ, then the Alpha Beta memory will be heteroassociative.

Definition 1 (One-hot) Let the set A be A = {0, 1}, and let p ∈ Z^+, p > 1, and k ∈ Z^+ be such that 1 ≤ k ≤ p. The kth one-hot vector of p bits is defined as the vector h^k ∈ A^p for which the kth component is h^k_k = 1 and the rest of the components are h^k_j = 0, for all j ≠ k, 1 ≤ j ≤ p.
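A companion sketch for the min-type recall (the ∇_β operation) and for Definition 1, again with illustrative names only and assuming the β operator of the earlier sketches.

```python
# Sketch: recall with the min memory Lambda uses the beta-max reduction,
# and one_hot builds the k-th one-hot vector of p bits from Definition 1.
import numpy as np

def beta(x, y): return 1 if x + y >= 2 else 0

def recall_min(L, x):
    """(Lambda nabla_beta x)_i = max_j beta(lambda_ij, x_j)."""
    n = len(x)
    return np.array([max(beta(L[i, j], x[j]) for j in range(n))
                     for i in range(L.shape[0])])

def one_hot(k, p):
    """k-th one-hot vector of p bits (1-based k): component k is 1, the rest are 0."""
    h = np.zeros(p, dtype=int)
    h[k - 1] = 1
    return h
```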

Fig. 1 Graphical schematics of the Alpha Beta bidirectional associative memory: in the forward direction x passes through Stage 1 and Stage 2 to yield y, and in the backward direction y passes through Stage 3 and Stage 4 to yield x.

Fig. 2 Schematics of the process done in the direction from x to y; only Stage 1 and Stage 2 (the modified Linear Associator) are shown. Stage 1 turns x^k into the one-hot vector v^k, for which v^k_k = 1 and v^k_i = 0 ∀i ≠ k, 1 ≤ i ≤ p, 1 ≤ k ≤ p.

Definition 2 (Zero-hot) Let the set A be A = {0, 1}, and let p ∈ Z^+, p > 1, and k ∈ Z^+ be such that 1 ≤ k ≤ p. The kth zero-hot vector of p bits is defined as the vector h̄^k ∈ A^p for which the kth component is h̄^k_k = 0 and the rest of the components are h̄^k_j = 1, for all j ≠ k, 1 ≤ j ≤ p.

Definition 3 (Expansion vectorial transform) Let the set A be A = {0, 1}, and let n ∈ Z^+ and m ∈ Z^+. Given two arbitrary vectors x ∈ A^n and e ∈ A^m, the expansion vectorial transform of order m, τ_e : A^n → A^{n+m}, is defined as τ_e(x, e) = X ∈ A^{n+m}, a vector whose components are: X_i = x_i for 1 ≤ i ≤ n, and X_i = e_{i−n} for n + 1 ≤ i ≤ n + m.

Definition 4 (Contraction vectorial transform) Let the set A be A = {0, 1}, and let n ∈ Z^+ and m ∈ Z^+ be such that m < n. Given an arbitrary vector X ∈ A^{n+m}, the contraction vectorial transform of order m, τ_c : A^{n+m} → A^m, is defined as τ_c(X, n) = c ∈ A^m, a vector whose components are: c_i = X_{i+n} for 1 ≤ i ≤ m.

In both directions, the model is made up of two stages, as shown in Fig. 1. For simplicity, the process needed in one direction will be described first, in order to later present the complementary direction, which gives bidirectionality to the model (see Fig. 2). The function of Stage 1 together with Stage 2 is to offer a y^k as output (k = 1, ..., p) given an x^k as input. Now we assume that as input to Stage 2 we have one element of a set of p orthonormal vectors. Recall that the Linear Associator has perfect recall when it works with orthonormal vectors. In this work we use a variation of the Linear Associator in order to obtain y^k, starting from a one-hot vector v^k set in its kth coordinate. For the construction of the modified Linear Associator, its learning phase is skipped and the matrix M representing the memory is built directly: each column of this matrix corresponds to an output pattern y^µ. In this way, when matrix M is operated with a one-hot vector v^k, the corresponding y^k will always be recalled. The task of Stage 1 is: given an x^k, or a noisy version of it, the one-hot vector v^k must be obtained without ambiguity and with no conditions.
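As a rough illustration of the forward direction (Stage 1 followed by Stage 2), the sketch below, under the same assumptions as the previous sketches, trains the max memory V on the expanded patterns X^k = τ_e(x^k, h^k), presents the additive-noise expansion F = τ_e(x^k, u) with u the all-ones filler, contracts the recalled vector to obtain (ideally) the one-hot vector v^k, and feeds it to a modified Linear Associator M whose kth column is y^k. The complete algorithm of Sect. 3.2 also uses the min memory Λ to resolve the cases in which the contraction is not a one-hot vector; the tiny fundamental set and all names below are illustrative assumptions, not the paper's examples.

```python
# Sketch of the forward direction: Stage 1 (expanded max memory V) and
# Stage 2 (modified Linear Associator whose k-th column is y^k).
import numpy as np

def alpha(x, y): return 1 + x - y
def beta(x, y):  return 1 if x + y >= 2 else 0

def one_hot(k, p):
    h = np.zeros(p, dtype=int); h[k - 1] = 1
    return h

def expand(x, e):        # tau_e: append the components of e after those of x
    return np.concatenate([x, e])

def contract(X, n):      # tau_c: keep the components that follow position n
    return X[n:]

def learn_max(patterns):
    n = len(patterns[0])
    V = np.zeros((n, n), dtype=int)
    for x in patterns:
        V = np.maximum(V, [[alpha(x[i], x[j]) for j in range(n)] for i in range(n)])
    return V

def recall_max(V, x):
    return np.array([min(beta(V[i, j], x[j]) for j in range(len(x)))
                     for i in range(V.shape[0])])

# Illustrative fundamental set (not from the paper).
xs = [np.array([1, 0, 1, 0]), np.array([0, 1, 1, 0]), np.array([1, 1, 0, 0])]
ys = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]
p, n = len(xs), len(xs[0])

# Stage 1: max memory trained on the expanded patterns X^k = tau_e(x^k, h^k).
V = learn_max([expand(x, one_hot(k + 1, p)) for k, x in enumerate(xs)])
# Stage 2: modified Linear Associator, column k is y^k, so M @ h^k = y^k.
M = np.column_stack(ys)

u = np.ones(p, dtype=int)               # u = sum of all one-hot vectors
for k, x in enumerate(xs):
    F = expand(x, u)                    # noisy expanded input presented to V
    v = contract(recall_max(V, F), n)   # ideally the k-th one-hot vector
    print(k + 1, v, M @ v)
```

For this toy fundamental set each contraction is exactly the kth one-hot vector, so the modified Linear Associator recovers the corresponding y^k; in general the complete algorithm combines V and Λ to guarantee this outcome.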

10 M. E. Acevedo-Mosqueda et al. The process in the contrary direction, which is presenting pattern y k (k =,...,p) as input to the Alpha Beta BAM and obtaining its corresponding x k, is very similar to the one described above. The task of Stage 3 is to obtain a one-hot vector v k given a y k.stage4isa modified Linear Associator built in similar fashion to the one in Stage. 3. Theoretical foundation of Stages and 3 Below are presented five Theorems and nine Lemmas with their respective proofs, as well as an illustrative example of each one. This mathematical foundation is the basis for the steps required by the complete algorithm, which is presented in Sect. 3.. These Theorems and Lemmas numbering corresponds to the numeration used in [45]. By convention, the symbol will be used to indicate the end of a proof. Theorem 4. Let {(x µ, x µ ) µ =,,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, and let x A n be a pattern altered with additive noise with respect to some fundamental pattern x ω with ω {,,...,p}. Lets assume that during the recalling phase, x is presented to memory V as input, and lets consider an index k {,...,n}. The kth component recalled ( V β x ) k is precisely xω k if and only if it holds that r {,...,n}, dependant on ω and k, such that ν kr α(x ω k, x r ). Proof ) By hypothesis we assume that ( V β x ) k = xω k. By contradiction, now suppose false that r {,...,n} such that ν kr α(xk ω, x r ). The former is equivalent to stating that r {,...,n}ν kr >α(xk ω, x r ),whichisthesametosayingthat r {,...,n}β (ν kr, x r ) > β [ α(xk ω, x ] r ), x r = x ω k. When we take minimums at both sides of the inequality with respect to index r, wehave: n r= β (ν kr, x r ) > n r= x ω k = xω k and this means that ( V β x ) k = n β (ν kr, x r ) > x ω r= k, which contradicts the hypothesis. ) Since the conditions of Theorem 4.3 hold for every i {,...,n}, wehavethat V β x = x ω ; that is, it holds that ( V β x ) = x ω i i, i {,...,n}. When we fix indexes i ( and j such that i = k y j = r (which depends on ω and k) we obtain the desired result: V β x ) k = xω k. Example 3. Let p = 4andn = 4. The fundamental set for an associative memory contains tour pairs of patterns {(x µ, x µ ) µ =,, 3, 4}. Each vector x µ is a column vector with values in the set A 4 and the values for each vector components are the following: x =, x =, x3 =, x4 = The matrix of the autoassociative Alpha Beta memory of type max for this fundamental set is: V = 3

11 Alpha Beta bidirectional associative memories Now, lets suppose we present to matrix V a noisy version x of vector x ω with ω =, with additive noise, whose components are: x = then the recalled value will be : V β x = We can see that vector x was recalled perfectly. However, our interest lies in the recall of components and whether the condition of theorem 4. holds. The first component recalled by the autoassociative Alpha Beta memory of type max is equal to zero, which is precisely the value held by the first component of the second pattern. This is, ( V β x ) = = x. Now lets see whether the condition of r {,...,n}, such that ν kr α(xk ω, x r ), holds. For our example k = andω =. For r =, ν = andα(x, x ) = α(, ) =, that is ν >α(x, x ), does not hold. For r =, ν = andα(x, x ) = α(, ) =, that is ν >α(x, x ), does hold. For r = 3, ν 3 = andα(x, x 3) = α(, ) =, that is ν 3 >α(x, x 3), does not hold. For r = 4, ν 4 = andα(x, x 4) = α(, ) =, that is ν 4 >α(x, x 4), does not hold. Therefore, exists r = such that ν α(x, x ). Lemma 4. Let {(X k, X k ) k =,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) for k =,...,p, and let F = τ e (x k, u) A n+p be a version of a specific pattern X k, altered with additive noise, being u A p the vector defined as u = p i= hi. If during the recalling phase F is presented to memory V, then component Xn+k k will be recalled in a perfect manner; that is ( V β F ) n+k = X n+k k =. Proof This proof will be done for two mutually exclusive cases. Case Pattern F has one component with value. This means that j {,...,n + p} such that F j = ; also, due to the way vector X k is built, it is clear that Xn+k k =. Then α(xn+k k, F j ) = α(, ) = and since the maximum allowed value for a component of memory V is we have ν (n+k) j α(xn+k k, F j ). According to Theorem 4., Xn+k k is perfectly recalled. Example 3. Taking the fundamental set of example 3., patterns X k for k =,, 3, 4are built, using the expansion vectorial transform from Definition 3: X =, X =, X 3 =, X 4 = 3

12 M. E. Acevedo-Mosqueda et al. The matrix of the autoassociative Alpha Beta memory of type max is: V = Now, having k = 3 we will use vector X 3 and obtain its noisy version F, with additive noise: X 3 =, the noisy vector F, with additive noise, is: F = When F is presented to memory V, the recalled vector is: = V β F = 3 X Seventh component of both vectors Recalling that for this example n = 4andk = 3, then ( V β F ) 4+3 = X =. Therefore, the seventh component of third vector is perfectly recalled. Case Pattern F does not contain a component with value. That is F j = j {,..., n+ p}. This means that it is not possible to guarantee the existence of a value j {,...,n + p} such that ν (n+k) j α(x k n+k, F j ), and therefore Theorem 4. cannot be applied. However, we will show the impossibility of ( V β F ) n+k =. The recalling phase of the autoassociative Alpha Beta memory of type max V, when having vector F as input, takes the following form for the n + kth recalled component: ( V β F ) n+k = n j= β(ν (n+k) j, F j ) = n j= β {[ p µ= α(x µ n+k, X µ j ) ], F j } DuetothewayvectorX k is built, besides X k n+k =, it is important to notice that X µ n+k =, µ = k, and from here we can establish the following: p µ= α(x µ n+k, X µ j ) = α(x k n+k, X k j ) = α(, X k j ) 3

13 Alpha Beta bidirectional associative memories 3 is different from zero regardless of the value of X k j. According to F j = j {,...,n+ p}, we can conclude the impossibility of ( V β F ) n+k = n β(α(, X k j ), ) j= being zero. That is ( V β F ) n+k = = X k n+k. Example 3.3 Using the matrix V, obtained in Example 3., and with k =, we get vector F, which is a noisy version by additive noise of X, whose component values are: X =, the noisy vector F, with additive noise, is: F = When F is presented to memory V, the recalled vector is: V β F = Fifth component of both vectors X = For this example n = 4yk =, then ( V β F ) 4+ = X 4+ =. Therefore, the fifth component of the first pattern is perfectly recalled. Theorem 4. Let {(X k, X k ) k =,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) for k =,...,p, and let F = τ e (x k, u) A n+p be a pattern altered with additive noise with respect to some specific pattern X k,beingu A p the vector defined as u = p i= hi. Lets assume that during the recalling phase, F is presented to memory V as input, and the pattern R = V β F A n+p is obtained. If when taking vector R as argument, the contraction vectorial transform r = τ c (R, n) A p is done, the resulting vector r has two mutually exclusive possibilities: k {,...,p} such that r = h k,orr is not a one-hot vector. Proof From the definition of contraction vectorial transform we have that r i = R i+n = (V β F) i+n for i p, and in particular, by making i = k we have r k = R k+n = (V β F) k+n.however,bylemma4. ( V β F ) n+k = X n+k k, and since Xk = τ e (x k, h k ),the value Xn+k k is equal to the value of component hk k =. That is, r k =. When considering that r k =, vector r has two mutually exclusive possibilities: it can be that r j = j = k in which case r = h k ; or happens that j {,...,p}, j = k for which r j =, in which case it is not possible that r is a one-hot vector, given Definition. 3

14 4 M. E. Acevedo-Mosqueda et al. Example 3.4 Using the matrix V obtained in Example 3., and with k =, the component values of X and its respective noisy pattern F, by additive noise, are: X = and F = When F is presented to memory V, the recalled vector R is: R = V β F = When this vector is taken as argument and the contraction vectorial transform is done we obtain vector r, r = and according to Definition, we can see that r is the second one-hot vector of 4 bits. Now lets follow the same process for k = 4. Then, X 4 and its respective noisy pattern F, with additive noise, are: X 4 = and F = When F is presented to memory V we obtain vector R: R = V β F = 3

15 Alpha Beta bidirectional associative memories 5 When this vector is taken as argument and the contraction vectorial transform is done we obtain vector r, r = and according to Definition, r is not a one-hot vector. Theorem 4.3 Let {(x µ, x µ ) µ =,,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, and let x A n be a pattern altered with substractive noise with respect to some fundamental pattern x ω with ω {,,...,p}.lets assume that during the recalling phase, x ω is presented to memory as input, and consider an index k {,...,n}. The kth recalled component ( β x ) k is precisely xω k if and only if it holds that r {,...,n}, dependant on ω and k, such that λ kr α ( xk ω, x ) r. Proof ) By hypothesis it is assumed that ( β x ) k = xω k. By contradiction, now lets suppose it is false that r {,...,n} such that λ kr α ( xk ω, x ) r. That is to say that r {,...,n}λ kr <α ( xk ω, x ) r, which is in turn equivalent to r {,...,n}β (λkr, x r ) < β [ α ( xk ω, x ) ] r, xr = x ω k. When taking the maximums at both sides of the inequality, with respect to index r, wehave n r= β (λ kr, x r ) < n r= x ω k = xω k and this means that ( β x ) k = n r= β (λ kr, x r ) < xk ω, affirmation which contradicts the hypothesis. ) When conditions for Theorem 4.33 [9] are met for every i {,...,n}, wehave β x = x ω. That is, it holds that ( f β x ) = x ω i i i {,...,n}. When indexes i and ( j are fixed such that i = k and j = r, depending on ω and k, we obtain the desired result β x ) k = xω k. Example 3.5 The fundamental set components values from example 3. are shown below, with p = 4andn = 4, x =, x =, x3 =, x4 = The matrix of the autoassociative Alpha Beta memory of type min for this fundamental set is: = Now lets assume we present matrix with a noisy version x of vector x ω with ω = 4, altered by substractive noise, whose component values are: x = then the recalled pattern will be β x = 3

16 6 M. E. Acevedo-Mosqueda et al. As can be seen, vector x 4 was perfectly recalled. However, we are interested in corroborating component recall, and also that the condition for Theorem 4.5 is met. The second component recalled by the autoassociative Alpha Beta memory min is equal to one which is exactly the second component value of the fourth pattern. That is, ( β x ) = = x4.now let us see whether the condition of r {,...,n}, such that λ kr α(xk ω, x r ), holds. For our example, k = andω = 4. For r =, λ = andα(x 4, x ) = α(, ) =, this is λ α(x 4, x ), does hold. For r =, λ = andα(x 4, x ) = α(, ) =, that is λ <α(x 4, x ), does not hold. For r = 3, λ 3 = andα(x 4, x 3) = α(, ) =, that is λ 3 <α(x 4, x 3), does not hold. For r = 4, λ 4 = andα(x 4, x 4) = α(, ) =, that is λ 4 <α(x 4, x 4), does not hold. Therefore, exists r = suchthatα(x 4, x ). Lemma 4. Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ) for k =,...,p, and let G = τ e (x k, w) A n+p be a pattern altered with substractive noise with respect to some specific pattern X k,beingw A p a vector whose components have values w i = u i, and u A p the vector defined as u = p i= hi. If during the recalling phase, G is presented to memory, then component X n+k k is recalled in a perfect manner. That is, ( β G ) n+k = X n+k k =. Proof This proof will be done for two mutually exclusive cases. Case Pattern G has one component with value. This means that j {,...,n + p} such that G j =. Also, due to the way vector X k is built, it is clear that X n+k k =. Because of this, α( X n+k k, G j ) = α(, ) = and, since the minimum allowed value for a component of memory is, we have λ (n+k) j α( X n+k k, G j ). According to Theorem 4.3, X n+k k is perfectly recalled. Example 3.6 Taking the fundamental set of Example 3., the patterns X k for k =,, 3, 4 are built, by using the expansion vectorial transform of Definition 3. 3 X =, X =, X 3 =, X 4 =

17 Alpha Beta bidirectional associative memories 7 The matrix of the autoassociative Alpha Beta memory of type min is: = Now, taking k = 3 we use vector X 3 and we obtain its noisy version, with substractive noise: X 3 =, the noisy vector G, with substractive noise, is: G = When presenting G to matrix, the recalled pattern is: = β G = 3 X Seventh component of both vectors Λ Lets recall that, for this example, n = 4andk = 3; then ( β G ) 4+3 = X =. Therefore, the seventh component of the third vector is perfectly recalled. Case Pattern G has no component with value ; that is, G j = j {,...,n + p}.this means that it is not possible to guarantee the existence of a value j {,...,n + p} such that λ (n+k) j α( X k n+k, G j ), and therefore Theorem 4.3 cannot be applied. However, lets show the impossibility of ( β G) n+k =. Recalling phase of the autoassociative Alpha Beta memory of type min with vector G as input, takes the following form for the n + kth recalled component: ( β G ) n+k = n j= β(λ (n+k) j, G j ) = n j= β {[ p µ= α( X µ n+k, X µ j ) ], G j } Duetothewayvector X k is built, besides that X k n+k =, it is important to notice that X µ n+k =, µ = k, and from here we can state that p µ= α( X µ n+k, X µ j ) = α( X k n+k, X k j ) = α(, X k j ) 3

18 8 M. E. Acevedo-Mosqueda et al. is different from regardless of the value of X k j. Taking into account that G j = j {,...,n + p}, we can conclude that it is impossible for ( β G ) n+k = n ( ( ) ) β α, X k j, j= to be equal to. That is, ( β G ) n+k = = X k n+k. Example 3.7 By using matrix obtained in Example 3.6 and with k =, we obtain vector G, which is the noisy version, with substractive noise, of X, whose component values are: X =, the noisy vector G, with substractive noise, is: G = When presenting G to matrix, the recalled pattern is: Λ β G = Sixth component of both vectors X = For this example, n = 4andk =, then ( β G ) 4+ = X 4+ =. Therefore, the sixth component of the second pattern is perfectly recalled. Theorem 4.4 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ) for k =,...,p, and let G = τ e (x k, w) A n+p be a pattern altered with substractive noise with respect to some specific pattern X k,beingw A p a vector whose components have values w i = u i, and u A p the vector defined as u = p i= hi. Lets assume that during the recalling phase, G is presented to memory as input, and the pattern S = β G A n+p is obtained as output. If when taking vectors as argument, the contraction vectorial transform s = τ c (S, n) A p is done, the resulting vector S has two mutually exclusive possibilities: k {,..., p} such that s = h k,ors is not a one-hot vector. Proof From the definition of contraction vectorial transform we have that s i = S i+n = ( β G) i+n for i p, and in particular, by making i = k we have s k = S k+n = ( β G) k+n.however,bylemma4. ( β G ) n+k = X n+k k, and since X k = τ e (x k, h k ), the value X n+k k is equal to the value of component h k k =. That is, s k =. When considering that s k =, vector s has two mutually exclusive possibilities: it can be that s j = j = k in which case s = h k ; or happens that j {,...,p}, j = k for which s j =, in which case it is not possible that s is a zero-hot vector, given Definition. 3

19 Alpha Beta bidirectional associative memories 9 Example 3.8 By taking matrix obtained in Example 3.6 and with k =, the component values of X and its respective noisy pattern G, with substractive noise, are: X = y G = When presenting G to matrix, the recalled pattern is: S = β G = When this vector is taken as argument and the contraction vectorial transform is done we obtain vector s, s = and according to Definition, we can see that s is the first zero-hot vector of 4 bits. Now let us do the same process for k = 4. Then, X 4 and its respective noisy pattern G, with substractive noise, are: X 4 = and G = When presenting G to matrix, the recalled pattern S is: S = β G = 3

20 M. E. Acevedo-Mosqueda et al. When taking this vector as argument and the contraction vectorial transform is done we obtain vector s, S = According to Definition, s is not a zero-hot vector. Lemma 4.3 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) A n+p k {,...,p}. If t is an index such that n + t n + pthenν ij = j {,...,n + p}. Proof In order to establish that ν ij = j {,...,n + p}, given the definition of α, itis enough to find, for each t {n +,...,n + p}, an index µ for which X t µ = in the expression that produces the tjth component of memory V,whichisν tj = p µ=,α(x t µ, X µ j ).Due to the way each vector X µ = τ e (x µ, h µ ) for µ =,...,p is built, and given the domain of index t {n +,...,n + p}, for each t exists s {,...,p} such that t = n + s. Thisis why two useful values to determine the result are µ = s and t = n + s, because Xn+s s =. Then, ν tj = p µ=,α(x t µ, X µ j ) = α(x n+s s, X s j ) = α(, X s j ), value which is different from. That is, ν ij = j {,...,n + p}. Example 3.9 Let us take vectors X k for k =,, 3, 4 from Example 3.: X =, X =, X 3 =, X 4 = The matrix of the autoassociative Alpha Beta memory of type max is: V = For t {5, 6, 7, 8} and j {,,...,8} ν ij = Lemma 4.4 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) for k =,...,p, and let F = τ e (x k, u) A n+p be an altered version, by additive noise, of a specific pattern X k,beingu A p the vector defined as u = p i= hi. Let us assume that during the recalling phase, F is presented to memory as input. Given a fixed index t {n +,...,n + p} such that t = n + k, it holds that(v β F) t = if and only if the following logic proposition is true: j {,...,n + p}(f j = ν tj = ). 3

21 Alpha Beta bidirectional associative memories Proof DuetothewayvectorsX k = τ e (x k, h k ) and F = τ e (x k, u) are built, we have that F t = is the component with additive noise with respect to component X k t =. ) There are two possible cases: Case Pattern F does not contain components with value. That is, F j = j {,...,n + p}. This means that the antecedent of proposition F j = ν tj = is false, and therefore, regardless of the truth value of consequence ν tj =, the expression j {,...,n+p}(f j = ν tj = ) is true. Case Pattern F contains at least one component with value. That is, r {,...,p} such that F r =. By hypothesis (V β F) t =, which means that the condition for perfect recall of X k t = is not met. In other words, according to Theorem 4. expression [ j {,...,n + p} such that ν tj α(x k t, F j )] is true, which is equivalent to j {,...,n + p} it holds that ν tj α(x k t, F j ) In particular, for j = r, and taking into account that X k t =, this inequality ends up like this: ν tr >α(x k t, F r ) = α(, ) =. That is, ν tr =, and therefore the expression j {,...,n + p}(f j = ν tj = ) is true. Example 3. Let us use vector X 3, with k = 3, from Example 3.9, and build its noisy vector F, with additive noise X 3 =, F = When presenting F to matrix V, presented in the aforementioned example, we obtain: = V β F = F = V (V β F) 6 = t = 6 t = 6 3

22 M. E. Acevedo-Mosqueda et al. ) Assuming the following expression is true j {,...,n + p}(f j = ν tj = ), there are two possible cases: Case Pattern F does not contain components with value. That is, F j = j {,...,n + p}. When considering that ( V β F ) = n+p t j= β ( ) ν tj, F j, according to the definition of β, it is enough to show that j {,...,n + p}ν tj =, which is guaranteed by Lemma 4.3. Then, it has been proven that ( V β F ) t = n+p j= β ( ) n+p ν tj, F j = j= β ( ν tj, ) =. Case Pattern F contains at least one component with value. That is, r {,...,p} such that F r =. By hypothesis we have that j {,...,n + p}(f j = ν tj = ) and, in particular, for j = r and ν tr =, which means that ( V β F ) t = n+p j= β ( ) ν tj, F j = β (ν tr, F r ) = β (, ) =. Corollary 4. Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) for k =,...,p, and let F = τ e (x k, u) A n+p be an altered version, by additive noise, of a specific pattern X k,beingu A p the vector defined as u = p i= hi. Let us assume that during the recalling phase, F is presented to memory as input. Given a fixed index t {n +,...,n + p} such that t = n + k, it holds that(v β F) t = if and only if the following logic proposition is true: j {,...,n + p}(f j = AND sν tj = ). Proof In general, given two logical propositions P and Q, the proposition (P if and only if Q) is equivalent to proposition ( P if and only if Q). If P is identified with equality (V β F) t = andq with expression j {,...,n + p}(f j = ν tj = ), by Lemma 4.4 the following proposition is true: { [(V β F) t = ] if and only if [ j {,...,n+ p}(f j = ν tj = )]}. This expression transforms in the following equivalent propositions: {(V β F) t = if and only if j {,...,n + p} such that (F j = ν tj = )} {(V β F) t = if and only if j {,...,n + p} such that [ (F j = ) OR ν tj = ]} {(V β F) t = if and only if j {,...,n+p} such that [ [ (F j =)] AND (ν tj =)]} {(V β F) t = if and only if j {,...,n + p} such that [(F j = ) AND ν tj = ]} Example 3. Taking X 3 and F from Example 3., when presenting F to V we have: 3 V β F = F = (V β F) 5 =, t = 5 (V β F) 8 =, t = 8 = V t = 5

23 Alpha Beta bidirectional associative memories 3 Lemma 4.5 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ) A n+p k {,...,p}. If t is an index such that n + t n + pthenλ tj = j {,...,n + p}. Proof In order to establish that λ tj = j {,...,n + p}, given the definition of α, itis enough to find, for each t {n +,...,n + p}, an index µ for which X t µ = in the expression leading to obtaining the tjth component of memory,whichisλ tj = p µ= α( X t µ, X µ j ). In fact, die to the way each vector X k = τ e (x k, h k ) for µ =,...,p is built, and given the domain of index t {n +,...,n + p}, for each t exists s {,...,p} such that t = n + s; therefore two values useful to determine the result are µ = s and t = n + s, because X n+s s =, then λ tj = p µ= α( X t µ, X µ j ) = α( X n+s s, X s j ) = α(, X µ j ), value different from. That is, λ tj = j {,...,n + p}. Example 3. Let us take vectors X k for k =,, 3, 4 from Example 3.6 X =, X =, X 3 =, X 4 = The matrix of the autoassociative Alpha Beta memory of type min is: = For t {5, 6, 7, 8} and j {,,...,8} λ ij = Lemma 4.6 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ) for k =,...,p, and let G = τ e (x k, w) A n+p be an altered version, by substractive noise, of a specific pattern X k,beingw A p a vector whose components have values w i = u i, and u A p the vector defined as u = p i= hi. Let us assume that during the recalling phase, G is presented to memory as input. Given a fixed index t {n +,...,n + p} such that t = n + k, it holds that ( β G) t =, if and only if the following logical proposition is true j {,...,n + p}(g j = λ tj = ). Proof Duetothewayvectors X k = τ e (x k, barh k ) and G = τ e (x k, w) are built, we have that G t = is the component with substractive noise with respect to component X k t =. ) There are two possible cases: Case Pattern G does not contain components with value. That is, G j = j {,..., n + p}. This means that the antecedent of logical proposition G j = λ tj = is 3

24 4 M. E. Acevedo-Mosqueda et al. false and therefore, regardless of the truth value of consequent λ tj =, the expression j {,...,n + p}(g j = λ tj = ) is true. Case Pattern G contains at least one component with value. That is, r {,...,n + p} such that G r =. By hypothesis ( β G) t =, which means that the perfect recall condition of X k t = is not met. In other words, according to Theorem 4.3 expression [ j {,...,n + p} such that λ tj α( X k t, G j )] is true, which in turn is equivalent to j {,...,n + p} it holds that λ tj <α(x k t, G j ) In particular, for j = r and considering that X k t =, this inequality yields: λ tr <α( X k t, G r )= α(, ) =. That is, λ tr =, and therefore the expressión j {,...,n + p}(g j = λ tj = ) is true. Example 3.3 Let us use vector X 4, with k = 4, from Example 3., and build its noisy vector G, with substractive noise. X 4 =, G = When presenting G to, shown n the mentioned example, we have = Λ β G = G = Λ t = 5 (Λ β G) 5 = t = 5 ) Assuming the following expression to be true, j {,...,n+ p}(g j = λ tj = ), there are two possible cases: Case Pattern G does not contain components with value. That is, G j = j {,..., n + p}. When considering that ( β G ) t = n+p j= β ( λ tj, G j ), according to the β definition, 3

25 Alpha Beta bidirectional associative memories 5 it is enough to show that j {,...,n + p}λ tj =, which is guaranteed by Lemma 4.5. Then, it is proven that ( β G ) t = n+p j= β ( ) λ tj, G j. Case Pattern G contains at least one component with value. That is, r {,...,n + p} such that G r =. By hypothesis we have that j {,...,n + p}(g j = λ tj = ) and, in particular, for j = r and λ tr =, which means that ( β G ) t = n+p j= β ( ) λ tj, G j = β (λ tr, G r ) = β (, ) =. Corollary 4. Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ) for k =,...,p, and let G = τ e (x k, w) A n+p be an altered version, by substractive noise, of a specific pattern X k,beingw A p a vector whose components have values w i = u i, and u the vector defined as u = p i= hi. Let us assume that during the recalling phase, G is presented to memory as input. Given a fixed index t {n +,...,n + p} such that t = n + k, it holds that ( β G) t = if and only if the following logic proposition is true: j {,...,n + p}(g j = AND λ tj = ). Proof In general, given two logical propositions P and Q, the proposition (P if and only if Q) is equivalent to proposition ( P if and only if Q). IfP is identified with equality ( β G) t = andq with expression j {,...,n + p}(g j = λ tj = ), by Lemma 4.6 the following proposition is true: { [( β G) t = ] if and only if [ j {,...,n + p}(g j = λ tj = )]}. This expression transforms into the following equivalent propositions: {( β G) t = if and only if j {,...,n + p} such that (G j = λ tj = )]} {( β G) t = if and only if j {,...,n + p} such that [ (G j = ) OR λ tj = ]} {( β G) t = if and only if j {,...,n+ p} such that [ [ (G j =)] AND (λ tj =)]} {( β G) t = if and only if j {,...,n + p} such that [G j = AND λ tj = ]} Example 3.4 Taking X 4 and G from Example 3.3, when presenting G to Λ β G = (Λ β G) 7 = t = 7 (Λ β G) 6 = t = 6 G = = Λ t = 6 3

26 6 M. E. Acevedo-Mosqueda et al. Lemma 4.7 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) for k =,...,p, and let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ), k {,...,p}. Then, for each i {n +,...,n + p} such that i = n + r, with r i {,...,p}, it holds that: ν ij = α(, X r i j ) and λ ij = α(, X r i j ) j {,...,n + p}. Proof DuetothewayvectorsX k = τ e (x k, h k ) and X k = τ e (x k, h k ) are built, we have that X r i i = and X r i i =, besides X µ i = and X µ i = µ = r i such that µ {,...,p}. Because of this, and using the definition of α, α(x r i i, X r i j ) = α(, Xr i j ) and α(x µ i, X µ j ) = α(, X µ j ), which implies that, regardless of the values of Xr i j and X µ j, it holds that α(x r i i, X r i j ) α(x µ i, X µ j ), from whence ν ij = p α(x µ µ= i, X µ j ) = α(xr i i, X r i j ) = α(, Xr i j ) We also have α( X r i i, X r i j ) = α(, X r i j ) and α( X µ i, X µ j ) = α(, X µ j ), which implies that, regardless of the values of X r i j and X µ j, it holds that α( X r i i, X r i j ) α( X µ i, X µ j ), from whence λ ij = p α( X µ µ= i, X µ j ) = α( X r i i, X r i j ) = α(, X r i j ) µ {,...,p}, j {,...,n + p}. Example 3.5 Let us take vectors X k for k =,, 3, 4 from Example 3. X =, X =, X 3 =, X 4 =, V = In this example i {5, 6, 7, 8} and j {,,...,8}. Since we are using the autoassociative Alpha Beta memory of type max, during the learning phase the maximum value must be taken. Using the definition of α, the maximum allowed value is, which can be reached through α(, ) =. With a value the maximum that can be obtained is α(, ) =. Therefore, the component yielding a maximum value will be that whose value is. The only component, out of the four vectors for i = 5, having a value of, is the one in X.Thatis, X5 = will determine a maximum value, then ν 5 j = α(x5, Xr i j ) = α(, Xr i j ). 3 For i = 6, X6 =, then ν 6 j = α(x6, Xr i j ) = α(, Xr i j ). For i = 7, X7 3 =, then ν 7 j = α(x7 3, Xr i j ) = α(, Xr i j ). For i = 8, X8 4 =, then ν 8 j = α(x8 4, Xr i j ) = α(, Xr i j ).

27 Alpha Beta bidirectional associative memories 7 Now let us take vectors X k for k =,, 3, 4 from Example 3.6 X =, X =, X 3 =, X 4 =, = In the case of the autoassociative Alpha Beta memory of type min, during the learning phase the minimum value must be taken. The minimum allowed value is and appears when α(, ) =. Therefore, the component yielding a minimum value will be that whose value is. The only component, out of the four vectors for i = 5, having a value of, is the one in X.Thatis,X5 = will determine a minimum value, then λ 5 j = α( X 5, Xr i j ) = α(, X r i j ). For i = 6, X 6 =, then λ 6 j = α( X 6, Xr i j ) = α(, X r i j ). For i = 7, X 7 3 =, then λ 7 j = α( X 7 3, X r i j ) = α(, X r i j ). For i = 8, X 8 4 =, then λ 8 j = α( X 8 4, X r i j ) = α(, X r i j ). Corollary 4.3 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) k {,...,p}, and let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ), k {,...,p}. Then, ν ij = λ ij +, i {n +,...,n + p}, i = n +r i, with r i {,...,p} and j {,...,n}. Proof Let i {n +,...,n + p} and j {,...,n} be two indexes arbitrarily selected. By Lemma 4.7, the expressions used to calculate the ijth components of memories V y take the following values: ν ij = α(, X r i j )yλ ij = α(, X r i j ) Considering that for j {,...,n}x r i j = X r i j, there are two possible cases: Case X r i j = = X r i j. We have the following values: ν ij = α(, ) = andλ ij = α(, ) =, therefore ν ij = λ ij +. Case X r i j = = X r i j. We have the following values: ν ij = α(, ) = yλ ij = α(, ) =, therefore ν ij = λ ij +. Since both indexes i and j were arbitrarily chosen inside their respective domains, the result ν ij = λ ij + is valid i {n +,...,n + p} and j {,...,n}. Example 3.6 For this example we shall use memories V y, presented in Example 3.5. V =, = 3

28 8 M. E. Acevedo-Mosqueda et al. With n = 4andp = 4, i {5, 6, 7, 8} and j {,, 3, 4}. ν 5 = λ 5 + = + = ν 6 = λ 6 + = + = ν 7 = λ 7 + = + = ν 8 = λ 8 + = + = ν 5 = λ 5 + = + = ν 6 = λ 6 + = + = ν 7 = λ 7 + = + = ν 8 = λ 8 + = + = ν 53 = λ 53 + = + = ν 63 = λ 63 + = + = ν 73 = λ 73 + = + = ν 83 = λ 83 + = + = ν 54 = λ 54 + = + = ν 64 = λ 64 + = + = ν 74 = λ 74 + = + = ν 84 = λ 84 + = + = Lemma 4.8 Let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X k = τ e (x k, h k ) k {,...,p}, and let {( X k, X k) k =,...,p } be the fundamental set of an autoassociative Alpha Beta memory of type min represented by, with X k = τ e (x k, h k ), k {,...,p}. Also, if we define vector u A p as u = p i= hi, and take a fixed index r {,...,p}, let us consider two noisy versions of pattern X r A n+p : vector F = τ e (x r, u) A n+p which is an additive noise altered version of pattern X r, and vector G = τ e (x r, w) A n+p, which is a substractive noise altered version of pattern X r,beingw A p a vector whose components take the values w i = u i i {,...,p}. If during the recalling phase, G is presented as input to memory and F is presented as input to memory V, and if also it holds that ( β G) t = for an index t {n +,...,n + p}, fixed such that t = n + r, then (V β F) t =. Proof Due to the way vectors X r, F y G are built, we have that F t = is the component in the vector with additive noise corresponding to component Xt r,andg t = is the component in the vector with substractive noise corresponding to component X t r. Also, since t = n + r, we can see that Xt r =, that is Xt r =, and X t r =. There are two possible cases: Case Pattern F does not contain any component with value. That is, F j = j {,...,n + p}. By Lemma 4.3 ν tj = j {,...,n + p},thenβ(ν tj, F j ) j {,...,n + p}, which means that ( V β F ) = n+p t j= β(ν tj, F j ) =. In other words, expression (V β F) t = is false. The only possibility for the theorem to hold, is for expression ( β G) t = to be false too. That is, we need to show that ( β G) t =. According to Corollary 4., the latter is true if for every t {n +,...,n + p} with t = n + r, exists j {,...,n + p} such that (G j = AND λ tj = ). Now,t = n + r indicates that s {,...,p}, s = r such that t = n+s, and by Lemma 4.7 α( X t s, X s j ) α( X t µ, X µ j ) µ {,...,p}, j {,...,n + p}, from where we have λ tj = p j= α( X t µ, X µ j ) = α( X t s, X s j ), and by noting the equality X t s = X n+s s =, it holds that: λ tj = α(, X s j ) j {,...,n + p} On the other side, i {,...,n} the following equalities hold: X i r = xi r = and X i s = xi s and also, taking into account that x r = x s, it is clear that h {,...,p} such that xh s = xr h ; meaning xh s = = X h s and therefore 3 λ th = α(, ) =

Lemma 4.8 Let {(X^k, X^k) | k = 1,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X^k = τ^e(x^k, h^k) ∀k ∈ {1,...,p}, and let {(X̃^k, X̃^k) | k = 1,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type min represented by Λ, with X̃^k = τ^e(x^k, h̄^k) ∀k ∈ {1,...,p}. Also, define vector u ∈ A^p as u = ⋁_{i=1}^{p} h^i, take a fixed index r ∈ {1,...,p}, and consider two noisy vectors: F = τ^e(x^r, u) ∈ A^{n+p}, which is an additive-noise-altered version of pattern X^r, and G = τ^e(x^r, w) ∈ A^{n+p}, which is a subtractive-noise-altered version of pattern X̃^r, with w ∈ A^p a vector whose components take the values w_i = ū_i ∀i ∈ {1,...,p}. If during the recalling phase G is presented as input to memory Λ and F is presented as input to memory V, and if it also holds that (Λ ∨_β G)_t = 0 for a fixed index t ∈ {n+1,...,n+p} such that t ≠ n + r, then (V ∧_β F)_t = 0.

Proof Due to the way vectors X^r, F and G are built, we have that F_t = 1 is the component in the vector with additive noise corresponding to component X^r_t = 0, and G_t = 0 is the component in the vector with subtractive noise corresponding to component X̃^r_t = 1. Also, since t ≠ n + r, we can see that X^r_t = 0 and X̃^r_t = 1. There are two possible cases.

Case 1: Pattern F does not contain any component with value 0; that is, F_j = 1 ∀j ∈ {1,...,n+p}. By Lemma 4.3, ν_tj ≠ 0 ∀j ∈ {1,...,n+p}; then β(ν_tj, F_j) = 1 ∀j ∈ {1,...,n+p}, which means that (V ∧_β F)_t = ⋀_{j=1}^{n+p} β(ν_tj, F_j) = 1. In other words, expression (V ∧_β F)_t = 0 is false. The only possibility for the lemma to hold is for expression (Λ ∨_β G)_t = 0 to be false too; that is, we need to show that (Λ ∨_β G)_t = 1. According to Corollary 4., the latter is true if for every t ∈ {n+1,...,n+p} with t ≠ n + r there exists j ∈ {1,...,n+p} such that (G_j = 1 AND λ_tj = 1). Now, t ≠ n + r indicates that there exists s ∈ {1,...,p}, s ≠ r, such that t = n + s, and by Lemma 4.7, α(X̃^s_t, X̃^s_j) ≤ α(X̃^µ_t, X̃^µ_j) ∀µ ∈ {1,...,p}, ∀j ∈ {1,...,n+p}, from where we have λ_tj = ⋀_{µ=1}^{p} α(X̃^µ_t, X̃^µ_j) = α(X̃^s_t, X̃^s_j), and by noting the equality X̃^s_t = X̃^s_{n+s} = 0, it holds that λ_tj = α(0, X̃^s_j) ∀j ∈ {1,...,n+p}. On the other side, ∀i ∈ {1,...,n} the following equalities hold: X̃^r_i = x^r_i = 1 and X̃^s_i = x^s_i; also, taking into account that x^r ≠ x^s, it is clear that there exists h ∈ {1,...,n} such that x^s_h ≠ x^r_h, meaning x^s_h = 0 = X̃^s_h, and therefore λ_th = α(0, X̃^s_h) = α(0, 0) = 1. Finally, since ∀i ∈ {1,...,n} it holds that G_i = X̃^r_i = x^r_i = 1, in particular G_h = 1. Then we have proven that for every t ∈ {n+1,...,n+p} with t ≠ n + r there exists j ∈ {1,...,n+p} such that (G_j = 1 AND λ_tj = 1), and by Corollary 4. it holds that (Λ ∨_β G)_t = 1, thus making expression (Λ ∨_β G)_t = 0 false.

Case 2: Pattern F contains, besides the components with value 1, at least one component with value 0; that is, there exists h ∈ {1,...,n+p} such that F_h = 0. Due to the way vectors G and F are built, ∀i ∈ {1,...,n}, G_i = F_i; also, necessarily h ≤ n, and thus F_h = G_h = 0. By hypothesis there is a fixed t ∈ {n+1,...,n+p} such that t ≠ n + r and (Λ ∨_β G)_t = 0, and by Lemma 4.6, ∀j ∈ {1,...,n+p} (G_j = 1 ⟹ λ_tj = 0). Given the way vector G is built, we have that ∀j ∈ {n+1,...,n+p}, G_j = 0, so the former expression becomes ∀j ∈ {1,...,n} (G_j = 1 ⟹ λ_tj = 0). Let J be a set, proper subset of {1,...,n}, defined as J = {j ∈ {1,...,n} | G_j = 1}; the fact that J is a proper subset of {1,...,n} is guaranteed by the existence of G_h = 0. Now, t ≠ n + r indicates that there exists s ∈ {1,...,p}, s ≠ r, such that t = n + s, and by Lemma 4.7, ν_tj = α(1, X^s_j) and λ_tj = α(0, X̃^s_j) ∀j ∈ {1,...,n+p}, from where we have that ∀j ∈ J, X̃^s_j = 1, because if this were not the case, λ_tj = 1. This means that for each j ∈ J, X̃^s_j = 1 = G_j, which in turn means that patterns X̃^r and X̃^s coincide with value 1 in all components with index j ∈ J. Let us now consider the complement of set J, defined as J^c = {j ∈ {1,...,n} | G_j = 0}. The existence of at least one value j ∈ J^c for which G_j = 0 and X̃^s_j = 1 is guaranteed by the known fact that x^r ≠ x^s: indeed, if X̃^s_j = 0 for every j ∈ J^c, then ∀j ∈ {1,...,n} it would hold that X̃^s_j = G_j, which would mean that x^r = x^s. Since there exists j ∈ J^c for which G_j = 0 and X̃^s_j = 1, there exists j ∈ J^c for which F_j = 0 and X^s_j = 1. Now, β(ν_tj, F_j) = β(α(1, X^s_j), 0) = β(α(1, 1), 0) = β(1, 0) = 0, and finally (V ∧_β F)_t = ⋀_{j=1}^{n+p} β(ν_tj, F_j) = 0.

Example 3.7 From Example 3. let us take X^2 and build the noisy vector F, with additive noise; from Example 3.6 let us take X̃^2 and build the noisy vector G, with subtractive noise; and let us use the memories V and Λ shown in Example 3.5.

Now we present F to V and G to Λ in order to obtain the vectors V ∧_β F and Λ ∨_β G. With n = 4 and p = 4, t ∈ {5, 6, 7, 8}, and for this example r = 2, thus n + r = 4 + 2 = 6. Then, for each of t = 5, t = 7 and t = 8 it is verified that whenever (Λ ∨_β G)_t = 0 we also have (V ∧_β F)_t = 0, in agreement with Lemma 4.8.

Lemma 4.9 Let {(X^k, X^k) | k = 1,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X^k = τ^e(x^k, h^k) ∀k ∈ {1,...,p}, and let {(X̃^k, X̃^k) | k = 1,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type min represented by Λ, with X̃^k = τ^e(x^k, h̄^k) ∀k ∈ {1,...,p}. Also, define vector u ∈ A^p as u = ⋁_{i=1}^{p} h^i, take a fixed index r ∈ {1,...,p}, and consider two noisy vectors: F = τ^e(x^r, u) ∈ A^{n+p}, which is an additive-noise-altered version of pattern X^r, and G = τ^e(x^r, w) ∈ A^{n+p}, which is a subtractive-noise-altered version of pattern X̃^r, with w ∈ A^p a vector whose components take the values w_i = ū_i ∀i ∈ {1,...,p}. If during the recalling phase G is presented as input to memory Λ and F is presented as input to memory V, and if it also holds that (V ∧_β F)_t = 1 for a fixed index t ∈ {n+1,...,n+p} such that t ≠ n + r, then (Λ ∨_β G)_t = 1.

Proof Due to the way vectors X^r, F and G are built, we have that F_t = 1 is the component in the vector with additive noise corresponding to component X^r_t = 0, and G_t = 0 is the component in the vector with subtractive noise corresponding to component X̃^r_t = 1. Also, since t ≠ n + r, we can see that X^r_t = 0 and X̃^r_t = 1. There are two possible cases.

Case 1: Pattern G does not contain any component with value 1; that is, G_j = 0 ∀j ∈ {1,...,n+p}. By Lemma 4.5, λ_tj ≠ 2 ∀j ∈ {1,...,n+p}; thus β(λ_tj, G_j) = 0 ∀j ∈ {1,...,n+p}, which means that (Λ ∨_β G)_t = ⋁_{j=1}^{n+p} β(λ_tj, G_j) = 0. In other words, expression (Λ ∨_β G)_t = 1 is false. The only possibility for the lemma to hold is for expression (V ∧_β F)_t = 1 to be false too; that is, we need to show that (V ∧_β F)_t = 0. According to Corollary 4., the latter is true if for every t ∈ {n+1,...,n+p} with t ≠ n + r there exists j ∈ {1,...,n+p} such that (F_j = 0 AND ν_tj = 1). Now, t ≠ n + r indicates that there exists s ∈ {1,...,p}, s ≠ r, such that t = n + s, and by Lemma 4.6, α(X^s_t, X^s_j) ≥ α(X^µ_t, X^µ_j) ∀µ ∈ {1,...,p}, ∀j ∈ {1,...,n+p}, from where we have ν_tj = ⋁_{µ=1}^{p} α(X^µ_t, X^µ_j) = α(X^s_t, X^s_j),

and by noting the equality X^s_t = X^s_{n+s} = 1, it holds that ν_tj = α(1, X^s_j) ∀j ∈ {1,...,n+p}. On the other side, ∀i ∈ {1,...,n} the following equalities hold: X^r_i = x^r_i = 0 and X^s_i = x^s_i; also, taking into account that x^r ≠ x^s, it is clear that there exists h ∈ {1,...,n} such that x^s_h ≠ x^r_h, meaning x^s_h = 1 = X^s_h, and therefore ν_th = α(1, X^s_h) = α(1, 1) = 1. Finally, since ∀i ∈ {1,...,n} it holds that F_i = X^r_i = x^r_i = 0, in particular F_h = 0. Then we have proven that for every t ∈ {n+1,...,n+p} with t ≠ n + r there exists j ∈ {1,...,n+p} such that (F_j = 0 AND ν_tj = 1), and by Corollary 4. it holds that (V ∧_β F)_t = 0, thus making expression (V ∧_β F)_t = 1 false.

Case 2: Pattern G contains, besides the components with value 0, at least one component with value 1; that is, there exists h ∈ {1,...,n+p} such that G_h = 1. Due to the way vectors G and F are built, ∀i ∈ {1,...,n}, G_i = F_i; also, necessarily h ≤ n, and thus F_h = G_h = 1. By hypothesis there is a fixed t ∈ {n+1,...,n+p} such that t ≠ n + r and (V ∧_β F)_t = 1, and by Lemma 4.4, ∀j ∈ {1,...,n+p} (F_j = 0 ⟹ ν_tj = 2). Given the way vector F is built, we have that ∀j ∈ {n+1,...,n+p}, F_j = 1, so the former expression becomes ∀j ∈ {1,...,n} (F_j = 0 ⟹ ν_tj = 2). Let J be a set, proper subset of {1,...,n}, defined as J = {j ∈ {1,...,n} | F_j = 0}; the fact that J is a proper subset of {1,...,n} is guaranteed by the existence of F_h = G_h = 1. Now, t ≠ n + r indicates that there exists s ∈ {1,...,p}, s ≠ r, such that t = n + s, and by Lemma 4.7, ν_tj = α(1, X^s_j) and λ_tj = α(0, X̃^s_j) ∀j ∈ {1,...,n+p}, from where we have that ∀j ∈ J, X^s_j = 0, because if this were not the case, ν_tj = 1. This means that for each j ∈ J, X^s_j = 0 = F_j, which in turn means that patterns X^r and X^s coincide with value 0 in all components with index j ∈ J. Let us now consider the complement of set J, defined as J^c = {j ∈ {1,...,n} | F_j = 1}. The existence of at least one value j ∈ J^c for which F_j = 1 and X^s_j = 0 is guaranteed by the known fact that x^r ≠ x^s: indeed, if X^s_j = 1 for every j ∈ J^c, then ∀j ∈ {1,...,n} it would hold that X^s_j = F_j, which would mean that x^r = x^s. Since there exists j ∈ J^c for which F_j = 1 and X^s_j = 0, there exists j ∈ J^c for which G_j = 1 and X̃^s_j = 0. Now, β(λ_tj, G_j) = β(α(0, X̃^s_j), 1) = β(α(0, 0), 1) = β(1, 1) = 1, and finally (Λ ∨_β G)_t = ⋁_{j=1}^{n+p} β(λ_tj, G_j) = 1.

Example 3.8 From Example 3., let us take X^1 and build the noisy vector F, with additive noise; from Example 3.6, let us take X̃^1 and build the noisy vector G, with subtractive noise; and let us use the memories V and Λ shown in Example 3.5.

Now we present F to V and G to Λ in order to obtain the vectors V ∧_β F and Λ ∨_β G. With n = 4 and p = 4, t ∈ {5, 6, 7, 8}, and for this example r = 1, thus n + r = 4 + 1 = 5. Then, for each of t = 6, t = 7 and t = 8 it is verified that whenever (V ∧_β F)_t = 1 we also have (Λ ∨_β G)_t = 1, in agreement with Lemma 4.9.
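Before stating the main theorem, it may help to see its final combination step in executable form. The sketch below assumes τ^c(·, n) simply keeps the components beyond position n, and uses made-up vectors R and S whose expansion parts are merely consistent with the conclusions of Lemmas 4.8 and 4.9 (they are not the vectors of the examples above):

def tau_c(vec, n):
    # Contraction transform: keep the p components that follow position n.
    return vec[n:]

n, p = 4, 4
R = [1, 0, 1, 1, 0, 1, 1, 0]   # hypothetical R = V operated with F (dimension n + p)
S = [1, 0, 1, 1, 1, 0, 1, 1]   # hypothetical S = Lambda operated with G

r = tau_c(R, n)                                 # r_i = R_{i+n}
s = tau_c(S, n)                                 # s_i = S_{i+n}
H = [ri & (1 - si) for ri, si in zip(r, s)]     # H = r AND (NOT s)
print(H)   # [0, 1, 0, 0] -- the 2nd one-hot vector of p bits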

Theorem 4.5 (Main Theorem) Let {(X^k, X^k) | k = 1,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type max represented by V, with X^k = τ^e(x^k, h^k) ∀k ∈ {1,...,p}, and let {(X̃^k, X̃^k) | k = 1,...,p} be the fundamental set of an autoassociative Alpha Beta memory of type min represented by Λ, with X̃^k = τ^e(x^k, h̄^k) ∀k ∈ {1,...,p}. Also, define vector u ∈ A^p as u = ⋁_{i=1}^{p} h^i, take a fixed index k ∈ {1,...,p}, and consider two noisy vectors: F = τ^e(x^k, u) ∈ A^{n+p}, which is an additive-noise-altered version of pattern X^k, and G = τ^e(x^k, w) ∈ A^{n+p}, which is a subtractive-noise-altered version of pattern X̃^k, with w ∈ A^p a vector whose components take the values w_i = ū_i ∀i ∈ {1,...,p}. Now, let us assume that during the recalling phase G is presented as input to memory Λ and F is presented as input to memory V, and patterns S = Λ ∨_β G ∈ A^{n+p} and R = V ∧_β F ∈ A^{n+p} are obtained. If, taking vector R as argument, the contraction vectorial transform r = τ^c(R, n) ∈ A^p is done, and, taking vector S as argument, the contraction vectorial transform s = τ^c(S, n) ∈ A^p is done, then H = (r AND s̄) will be the kth one-hot vector of p bits, where s̄ is the negation of s.

Proof From the definition of the contraction vectorial transform we have that r_i = R_{i+n} = (V ∧_β F)_{i+n} and s_i = S_{i+n} = (Λ ∨_β G)_{i+n} for 1 ≤ i ≤ p, and in particular, by making i = k, we have r_k = R_{k+n} = (V ∧_β F)_{k+n} and s_k = S_{k+n} = (Λ ∨_β G)_{k+n}. By Lemmas 4.1 and 4.2 we have (V ∧_β F)_{n+k} = X^k_{n+k} = 1 and (Λ ∨_β G)_{n+k} = X̃^k_{n+k} = 0, and thus H_k = r_k AND s̄_k = 1 AND ¬0 = 1 AND 1 = 1. Now, by Lemma 4.8 we know that if (Λ ∨_β G)_t = 0, where t = i + n is a fixed index with t ≠ n + k, then (V ∧_β F)_t = 0; thus H_i = r_i AND s̄_i = (V ∧_β F)_t AND ¬(Λ ∨_β G)_t = 0 AND ¬0 = 0 AND 1 = 0. On the other side, by Lemma 4.9 it is known that if (V ∧_β F)_q = 1 for a fixed index q = i + n such that q ≠ n + k, then (Λ ∨_β G)_q = 1. According to the latter, H_i = r_i AND s̄_i = (V ∧_β F)_q AND ¬(Λ ∨_β G)_q = 1 AND ¬1 = 1 AND 0 = 0. Then H_i = 1 for i = k and H_i = 0 for i ≠ k. Therefore, according to the definition of the one-hot vector, H will be the kth one-hot vector of p bits.

3.2 Theoretical Foundation of Stages 2 and 4

In this section is presented the theoretical foundation which serves as the basis for the design and operation of Stages 2 and 4, whose main element is an original variation of the Linear Associator. Let {(x^µ, y^µ) | µ = 1, 2,...,p}, with A = {0, 1}, x^µ ∈ A^n and y^µ ∈ A^m, be the fundamental set of the Linear Associator. The Learning Phase consists of two stages: (1) for each of the p associations (x^µ, y^µ), find the matrix y^µ (x^µ)^t of dimensions m × n; (2) the p matrices are added together to obtain the memory

M = Σ_{µ=1}^{p} y^µ (x^µ)^t = [m_ij]_{m×n},

in such a way that the ijth component of memory M is expressed as m_ij = Σ_{µ=1}^{p} y^µ_i x^µ_j. The Recalling Phase consists of presenting an input pattern x^ω to the memory, where ω ∈ {1, 2,...,p}, and doing the operation

M x^ω = Σ_{µ=1}^{p} y^µ (x^µ)^t x^ω.

The following form of the expression allows us to investigate the conditions that must be met in order for the proposed recalling method to give perfect outputs as results:

M x^ω = y^ω [(x^ω)^t x^ω] + Σ_{µ≠ω} y^µ [(x^µ)^t x^ω].

For the latter expression to give pattern y^ω as result, it is necessary that two equalities hold: [(x^ω)^t x^ω] = 1 and [(x^µ)^t x^ω] = 0 as long as µ ≠ ω.

This means that, in order to have perfect recall, the vectors x^µ must be orthonormal. If that happens then, for µ = 1, 2,...,p, each product y^µ (x^µ)^t is a matrix whose only nonzero column is a copy of y^µ, placed in the position where x^µ has its single 1; therefore, when the input patterns are the one-hot vectors, the memory M = Σ_{µ=1}^{p} y^µ (x^µ)^t is simply the matrix whose µth column is y^µ, that is, M = [y^1 y^2 ... y^p].

Taking advantage of the characteristic shown by the Linear Associator when the input patterns are orthonormal, and given that, by the definition of one-hot vectors, the vectors v^k with k = 1,...,p are orthonormal, we can obviate the learning phase by avoiding the vectorial operations done by the Linear Associator, and simply put the vectors in order, as columns, to form the modified Linear Associator. Stages 2 and 4 correspond to two modified Linear Associators, built with vectors y and x, respectively, of the fundamental set.
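As a small numerical illustration of this shortcut (a sketch only, with made-up y^k values; NumPy is used for brevity), the code below builds M = Σ y^µ (x^µ)^t with one-hot input patterns and checks that it coincides with the matrix obtained by simply stacking the y^k as columns, so that presenting the kth one-hot vector recalls y^k exactly:

import numpy as np

p, m = 4, 3
V_onehot = np.eye(p, dtype=int)            # rows are the one-hot vectors v^k
Y = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])               # made-up outputs: column k is y^k (dimension m = 3)

# Classical Linear Associator learning: M = sum_k y^k (v^k)^t
M = sum(np.outer(Y[:, k], V_onehot[k]) for k in range(p))
assert np.array_equal(M, Y)                # identical to stacking the y^k as columns

k = 2
print(M @ V_onehot[k])                     # recalls the third column of Y exactly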

3.3 Algorithm

In this section we describe, step by step, the processes required by the Alpha Beta BAM in the Learning Phase as well as in the Recalling Phase, by convention only in the direction x → y; that is, the algorithm for Stages 1 and 2. The following algorithm describes the steps needed by the Alpha Beta bidirectional associative memory in order to realize the learning and recalling phases in the direction x → y.

Learning phase

1. For each index k ∈ {1,...,p}, do the expansion X^k = τ^e(x^k, h^k).
2. Create an Alpha Beta autoassociative memory of type max, V, with the fundamental set {(X^k, X^k) | k = 1,...,p}.
3. For each index k ∈ {1,...,p}, do the expansion X̃^k = τ^e(x^k, h̄^k).
4. Create an Alpha Beta autoassociative memory of type min, Λ, with the fundamental set {(X̃^k, X̃^k) | k = 1,...,p}.
5. Create a matrix consisting of a modified Linear Associator with the patterns y^k: LAy = [y^1 y^2 ... y^p], the m × p matrix whose kth column is y^k.

Recalling phase

1. Present, as input to Stage 1, a vector of the fundamental set x^µ ∈ A^n for some index µ ∈ {1,...,p}.
2. Build the vector u ∈ A^p in the following manner: u = ⋁_{i=1}^{p} h^i.
3. Do the expansion F = τ^e(x^µ, u) ∈ A^{n+p}.
4. Operate the Alpha Beta autoassociative memory of type max V with F, in order to obtain a vector R of dimension n + p: R = V ∧_β F ∈ A^{n+p}.
5. Do the contraction r = τ^c(R, n) ∈ A^p.
6. If (∃k ∈ {1,...,p} such that h^k = r), it is assured that k = µ (based on Theorem 4.), and the result is h^µ. Thus, the operation LAy r is done, resulting in the corresponding y^µ. STOP. Else:
7. Build the vector w ∈ A^p in such a way that w_i = ū_i, ∀i ∈ {1,...,p}.
8. Do the expansion G = τ^e(x^µ, w) ∈ A^{n+p}.
9. Operate the Alpha Beta autoassociative memory of type min Λ with G, in order to obtain a vector S of dimension n + p: S = Λ ∨_β G ∈ A^{n+p}.
10. Do the contraction s = τ^c(S, n) ∈ A^p.
11. If (∃k ∈ {1,...,p} such that h^k = s̄), it is assured that k = µ (based on Theorem 4.4), and the result is h^µ. Thus, the operation LAy s̄ is done, resulting in the corresponding y^µ. STOP. Else:
12. Do the operation t = r ∧ s̄, where ∧ is the symbol of the logical AND and s̄ is the negation of s. The result of this operation is h^µ (based on Theorem 4.5). The operation LAy t is done, in order to obtain the corresponding y^µ. STOP.
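A compact end-to-end sketch of this x → y direction is given below. It is illustrative only (the authors' implementation was written in Visual C++, and the fundamental set here is made up); it assumes the Alpha and Beta operator tables quoted earlier and follows the numbered learning steps 1–5 and recalling steps 1–12 above:

import numpy as np

ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}                            # alpha: A x A -> B
BETA  = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1, (2, 0): 1, (2, 1): 1}      # beta:  B x A -> A

def learn(xs, ys):
    p, n = len(xs), len(xs[0])
    I = np.eye(p, dtype=int)
    X  = [list(xs[k]) + list(I[k])     for k in range(p)]   # step 1: X^k = tau_e(x^k, h^k)
    Xn = [list(xs[k]) + list(1 - I[k]) for k in range(p)]   # step 3: expansion with the negated one-hot
    d = n + p
    # step 2: type-max memory V, nu_ij = max_k alpha(X^k_i, X^k_j)
    V = [[max(ALPHA[(X[k][i], X[k][j])] for k in range(p)) for j in range(d)] for i in range(d)]
    # step 4: type-min memory Lambda, lambda_ij = min_k alpha(Xn^k_i, Xn^k_j)
    L = [[min(ALPHA[(Xn[k][i], Xn[k][j])] for k in range(p)) for j in range(d)] for i in range(d)]
    LAy = np.array(ys).T                                    # step 5: modified Linear Associator
    return V, L, LAy, n, p

def recall_xy(x, V, L, LAy, n, p):
    F = list(x) + [1] * p                                   # steps 2-3: u all ones, F = tau_e(x, u)
    R = [min(BETA[(V[i][j], F[j])] for j in range(n + p)) for i in range(n + p)]   # step 4
    r = np.array(R[n:])                                     # step 5: contraction tau_c(R, n)
    if r.sum() == 1:                                        # step 6: r is a one-hot vector
        return LAy @ r
    G = list(x) + [0] * p                                   # steps 7-8: w all zeros, G = tau_e(x, w)
    S = [max(BETA[(L[i][j], G[j])] for j in range(n + p)) for i in range(n + p)]   # step 9
    s = np.array(S[n:])                                     # step 10: contraction tau_c(S, n)
    if (1 - s).sum() == 1:                                  # step 11: the negation of s is a one-hot vector
        return LAy @ (1 - s)
    return LAy @ (r & (1 - s))                              # step 12: t = r AND (NOT s)

# Tiny made-up fundamental set: p = 3 associations, n = 4, m = 2.
xs = [(1, 0, 0, 1), (0, 1, 1, 0), (1, 1, 0, 0)]
ys = [(1, 0), (0, 1), (1, 1)]
V, L, LAy, n, p = learn(xs, ys)
for x, y in zip(xs, ys):
    assert tuple(recall_xy(x, V, L, LAy, n, p)) == y        # perfect recall of the fundamental set

For this particular made-up set, recall already succeeds at step 6; steps 7–12 are exercised only when some input pattern is contained, componentwise, in another.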

4 Results

In this section, two applications that make use of the algorithm described in the former section are presented, illustrating the optimal functioning which the Alpha Beta bidirectional associative memories exhibit.

Fig. 3 (a) The corresponding translation of departure is partida. (b) With just a part of the word departure, the correct result is still given. (c) The translator works in an optimal manner even if the input word is misspelled

The first application is a Spanish–English/English–Spanish translator. As a second application, a fingerprint verifier is presented. For the implementation of both applications the programming language Visual C++ 6.0 was used, and the programs were executed on a Sony VAIO laptop computer with a Pentium 4 processor.

4.1 Spanish–English/English–Spanish translator

This program is able to translate words from English to Spanish and vice versa. For the Learning Phase, two text files were used; these files contain the same set of words, in English and in Spanish, respectively. With both files the Alpha Beta bidirectional associative memory is created. After the Alpha Beta BAM is built, during the Recalling Phase, a word is written as input, either in English or Spanish, and the translation mode is chosen. The word in the corresponding language appears immediately. An example can be seen in Fig. 3a: the word to be translated is departure and its corresponding translation in Spanish is partida.

The translator presents other advantages as well. For instance, let us suppose that only a part of the word departure is written as input, say departu. The program will give as output the word partida (see Fig. 3b). Now, let us assume that instead of writing the last e, a w is written as a typo. The result can be seen in Fig. 3c. In this example we can see that a misspelling, which at the pattern level means that a noisy pattern has been given as input, does not limit the translator's performance. The advantages shown by the translator stress the advantages presented by the Alpha Beta bidirectional associative memories: these memories are immune to certain amounts and kinds of noise, properties which have not yet been characterized.
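The paper does not detail how words are turned into the binary column vectors that the Alpha Beta BAM requires; one plausible, purely illustrative choice is to encode each character by its ASCII bits and pad every word to a fixed length, as in the following hypothetical sketch:

def word_to_pattern(word, max_len=12):
    # Hypothetical encoding: 8 ASCII bits per character, zero-padded to a
    # fixed number of characters, so every word maps to a binary vector of
    # dimension 8 * max_len.
    padded = word.lower().ljust(max_len, "\0")[:max_len]
    bits = []
    for ch in padded:
        bits.extend(int(b) for b in format(ord(ch), "08b"))
    return bits

x_in  = word_to_pattern("departure")
y_out = word_to_pattern("partida")
print(len(x_in), len(y_out))   # 96 96: fixed-dimension patterns, one per language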

Fig. 4 The bidirectional process of the model is shown in both screens. (a) The input pattern is a fingerprint; its corresponding output pattern is a number. (b) A number is chosen as input pattern and its corresponding fingerprint is perfectly recalled

4.2 Fingerprint verifier

The first part of the identification consists in building the Alpha Beta BAM; that is, the learning phase is realized. The input patterns set is made up of 4 fingerprints, which were obtained from the Fingerprint Verification Competition (FVC) [46]. Originally, these images have larger dimensions; the Advanced Batch Converter image editor was used to modify the fingerprints' dimensions, such that it was possible to present the 4 images on one screen. The output patterns set contains 4 images, each representing a number between 1 and 4, inclusive.

For the verifier implementation it will be assumed that x represents the input patterns set. Given that fingerprints are represented with bidimensional images, and Alpha Beta BAMs work with column vectors, it becomes necessary to convert the images to a vectorial form; the dimension n of the input vectors is therefore the number of pixels of a resized fingerprint image. In a similar way, the output patterns set is represented by y, and its images are converted to vectors whose dimension m equals their number of pixels. In this case, the number of trained patterns is p = 4. The learning phase is carried out with the algorithm described in the former section.

Once the Alpha Beta BAM has been built, the program allows choosing any one of the learned fingerprints, or any one of the numbers associated to the fingerprints. If the chosen pattern is a fingerprint, the recalled pattern is the number associated to that specific fingerprint (see Fig. 4a). On the other hand, if a number is selected, the result of the recall is its corresponding fingerprint (see Fig. 4b). For the recalling of a fingerprint starting from a number, the same algorithm that was described in the prior section is used, only this time the pattern presented to the verifier is a member of the pattern set y.

In order to test the efficacy of the verifier, each of the 4 images depicting fingerprints was picked, obtaining in each case the perfect recall of its corresponding number. In the same way, the 4 numbers were individually presented as input; the recall phase delivered, in a perfect manner, each of the corresponding fingerprints. From this we can observe that the verifier exhibits perfect recall of all learned patterns. Actually, in this example only 4 pairs of patterns were used.
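The image-to-vector conversion described above can be sketched as follows (illustrative only; the actual image dimensions and binarization used by the authors are not reproduced here), assuming a fingerprint image stored as a 2-D array:

import numpy as np

def image_to_column_vector(img):
    # Binarize (any nonzero pixel becomes 1) and flatten row by row,
    # so an h x w image becomes a column vector of dimension h * w.
    binary = (np.asarray(img) > 0).astype(int)
    return binary.reshape(-1, 1)

# Hypothetical tiny "fingerprint" of 3 x 4 pixels.
img = [[0, 1, 1, 0],
       [1, 0, 0, 1],
       [0, 1, 1, 0]]
x = image_to_column_vector(img)
print(x.shape)   # (12, 1): ready to be used as an input pattern of the Alpha Beta BAM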
