Optimization of Quadratic Forms: Unified Minimum/Maximum Cut Computation in Directed/Undirected Graphs


Optimization of Quadratic Forms: Unified Minimum/Maximum Cut Computation in Directed/Undirected Graphs
by Garimella Ramamurthy
Report No: IIIT/TR/2015/-1
Centre for Security, Theory and Algorithms
International Institute of Information Technology, Hyderabad, INDIA
November 2015

OPTIMIZATION OF QUADRATIC FORMS: UNIFIED MINIMUM / MAXIMUM CUT COMPUTATION IN DIRECTED / UNDIRECTED GRAPHS

Garimella Rama Murthy, International Institute of Information Technology, Hyderabad, India. rammurthy@iiit.ac.in

Abstract

In this research paper, the problem of optimizing a quadratic form over the corners of the hypercube is reviewed. Results related to the computation of the global optimum stable / anti-stable state of a Hopfield Neural Network (HNN) are discussed. Using these results, an efficient algorithm for computing the minimum cut as well as the maximum cut in undirected as well as directed graphs is discussed. Also, POTENTIALLY, a deterministic exact polynomial-time algorithm for the NP-hard problem of maximum cut in an undirected graph is discussed. Effectively, a unified approach to minimum cut as well as maximum cut computation in arbitrary directed and undirected graphs is presented.

I. INTRODUCTION

In computer science, graphs (directed as well as undirected) arise naturally in many applied problems. For instance, computing the minimum cut in a directed graph is an interesting research problem (a special case of the transportation problem). The Ford-Fulkerson algorithm was the first polynomial-time algorithm to compute the minimum cut in a directed graph, and several efficient polynomial-time algorithms have since been designed for this problem. In contrast, it has been realized that computing the maximum cut in an undirected graph is an NP-hard problem: so far, no polynomial-time algorithm has been designed for it. This research paper is an effort in that direction.

This research paper is organized as follows. In Section 2, the relevant research literature is reviewed. The author, in his research efforts, obtained general results for the problem of optimizing a quadratic form over the corners of the unit hypercube. These results and the related ideas are documented in Section 3.
In Section 4, the problem of computing the largest stable state of a Hopfield neural network is solved. The results are utilized to design an algorithm for computing the minimum cut in an arbitrary undirected graph, and are then generalized to min cut computation in arbitrary directed graphs. Also in Section 4, an exact deterministic polynomial-time algorithm for the NP-hard problem of maximum cut computation in undirected graphs is discussed.

Main Contributions:

(1) An efficient algorithm for minimum cut / maximum cut computation when the matrix of weights (synaptic weight matrix) is structured with respect to its spectral representation, i.e. when eigenvectors corresponding to positive / negative eigenvalues are corners of the unit hypercube.

(2) A unified approach (efficient algorithms) for minimum cut / maximum cut computation in undirected / directed graphs (effectively 4 algorithms for 4 problems).

(3) A POTENTIALLY polynomial-time algorithm for maximum cut computation in undirected / directed graphs.

II. REVIEW OF RESEARCH LITERATURE: MIN / MAX CUT COMPUTATION: HOPFIELD NEURAL NETWORK

Contribution of Hopfield et al.: A Hopfield neural network constitutes a discrete-time nonlinear dynamical system. It is naturally associated with a weighted undirected graph G = (V,E), where V is the set of vertices and E is the set of edges. A weight value is attached to each edge, and a threshold value is attached to each vertex / node / artificial neuron of the graph. The order of the network is the number of nodes / vertices in the associated graph. Thus a discrete-time Hopfield neural network of order N is uniquely specified by (A) an N x N symmetric synaptic weight matrix M, where M_ij denotes the weight attached to the edge from node i to node j (equivalently, from node j to node i), and (B) an N x 1 threshold vector T, where T_i denotes the threshold attached to node i. Each neuron is in one of two states, +1 or -1. Thus, the state space of such a nonlinear dynamical system is the N-dimensional unit hypercube. For notational purposes, let V_i(t) denote the state of node / neuron i at the discrete time index t, and let the state of the Hopfield neural network at time t be denoted by the N x 1 vector V(t). The state at node i is updated in the following manner (i.e. computation of the next state of node i):

V_i(t+1) = Sign( Sum_{j=1}^{N} M_ij V_j(t) - T_i )....(2.1)

i.e. the next state at node i is +1 if the term in the bracket is non-negative and -1 if it is negative. Depending on the set of nodes at which the state updation given in equation (2.1) is performed, the neural network operation is classified into the following modes:

Serial Mode: The state updation as in (2.1) is performed at exactly one node, i.e. at time t the above state updation is performed at only one of the nodes / neurons.

Fully Parallel Mode: At time t, the state updation as in (2.1) is performed simultaneously at all the N nodes.

In the state space of a discrete-time Hopfield neural network, there are certain distinguished states, called the STABLE STATES.
Definition: A state V(t) is called a stable state if and only if

V(t) = Sign( M V(t) - T )....(2.2)

Thus, if the state dynamics of the network reaches a stable state at some time t, it will remain there forever, i.e. no change in the state of the network occurs regardless of the mode of operation of the network (it is a fixed point of the state dynamics of the discrete-time Hopfield neural network). The following convergence theorem summarizes the dynamics of the discrete-time Hopfield neural network in the serial and parallel modes of operation. It characterizes the operation of the neural network as an associative memory.

Theorem 1: Let the pair Z = (M,T) specify a Hopfield neural network. Then the following hold true:

[1] Hopfield: If Z is operating in the serial mode and the elements of the diagonal of M are non-negative, the network will always converge to a stable state (i.e. there are no cycles in the state space).

[2] Goles: If Z is operating in the fully parallel mode, the network will always converge to a stable state or to a cycle of length 2 (i.e. the cycles in the state space are of length at most 2).

The proof of the above theorem is based on associating an energy function with the state dynamics of the Hopfield Neural Network (HNN). It is reasoned that the energy function is non-decreasing when the state of the network is updated (at successive time instants). Since the energy function is bounded from above, the energy converges to some value. The next step in the proof is to show that constant energy implies that a stable state is reached in the first case, and at most a cycle of length 2 is reached in the second case. The energy function utilized to prove the above convergence theorem is the following one:

E(t) = V^T(t) M V(t) - 2 V^T(t) T....(2.3)

Thus, an HNN operating in the serial mode will always reach a stable state that corresponds to a local maximum of the energy function. Hence the theorem suggests that a Hopfield Associative Memory (HAM) could be utilized as a device for performing a local / global search to compute the maximum value of the energy function.

Contribution of Bruck et al.: The above theorem implies that all optimization problems which involve optimization of a quadratic form over the unit hypercube (the constraint / feasible set) can be mapped to an HNN which performs a search for its optimum. One such problem is the computation of a minimum cut in an undirected graph. For the sake of brevity, we skip the definition of a cut in a graph. It can be noted that specifying a min cut is equivalent to specifying the subset of vertices on one side of the cut.

Minimum and Maximum Cut in Undirected Graphs: Stable / Anti-Stable States: In the following theorem, proved in [BrB], the equivalence between the minimum cut and the computation of the global optimum of the energy function of an HNN is summarized.
Theorem 2: Consider a Hopfield Neural Network (HNN) Z = (M,T) with the thresholds at all nodes being zero, i.e. T = 0. The problem of finding the global optimum stable state (for which the energy is maximum) is equivalent to finding a minimum cut in the graph corresponding to Z.

Corollary: Consider a Hopfield neural network Z = (M,T) with the thresholds at all neurons being zero, i.e. T = 0. The problem of finding a state V for which the energy is the global minimum is equivalent to finding a maximum cut in the graph corresponding to Z.

Proof: Follows from the argument in [BrB]. We repeat the argument for clarity (it is required for understanding the algorithms for MIN / MAX cut computation in DIRECTED / UNDIRECTED graphs). From Lemma 1 discussed below [RaN], there is no loss of generality in assuming that the threshold at all the nodes / vertices of the Hopfield neural network is zero. Thus, the energy function is a pure quadratic form, i.e. E = V^T M V. Let W++ denote the sum of weights of edges with both endpoints in the set of vertices with state +1, and let W--, W+- denote the corresponding sums for the other two cases. We readily have that

E = 2 ( W++ + W-- - W+- ) = 2 ( W++ + W-- + W+- ) - 4 W+-.

Since the first term in the last expression is constant (the sum W++ + W-- + W+- is the total weight of all edges), it follows that minimization of E is equivalent to maximization of W+-, and maximization of E is equivalent to minimization of W+-. It is clear that W+- is the weight of the cut, with the nodes in state +1 forming one side of the cut. Q.E.D.

Thus, the operation of the Hopfield neural network in the serial mode is equivalent to conducting a local search for a minimum cut in the associated graph. State updation at a node of the network is equivalent to moving that node from one side of the cut to the other in the local search algorithm. As in the case of stable states, we have the following definition.

Definition: The local minimum vectors of a quadratic form on the hypercube are called anti-stable states. This concept was first introduced in [3, Rama1].

Remark 1: In view of Theorem 2, minimum cut computation in an undirected graph reduces to determining the global optimum stable state S of the associated Hopfield neural network Z = (M,T) with T = 0. The +1 components of S determine the subset of vertices on one side of the cut (and the -1 components determine the vertices on the other side of the cut). Similarly, maximum cut computation requires determination of the global optimum anti-stable state.

Minimum and Maximum Cut in Directed Graphs: Stable / Anti-Stable States: In view of the above theorem, a natural question is whether a Hopfield neural network can be designed to perform a local search for a minimum cut in a directed graph. In that effort we utilize the standard definition of a minimum cut in a directed graph. The following theorem is in the same spirit as Theorem 2, but for directed graphs.

Theorem 3: Let M be the matrix of edge weights (M is not necessarily symmetric) in a weighted directed graph G = (V,E). The network Z' = (W,T) performs a local search for a directed minimum cut (DMC) of G, where

W = ( M + M^T ) / 2 and T_i = (1/2) Sum_j ( M_ij - M_ji ).

Proof: Refer [9], i.e. [BrS].

Thus, this theorem shows that the computation of a minimum cut in a directed graph is equivalent to determining the global optimum stable state of a suitably chosen Hopfield neural network.

Note: As in Theorem 2, the computation of a directed maximum cut (i.e. a maximum cut in a directed graph) reduces to the computation of the global minimum anti-stable state (the global minimum vector of the quadratic energy function) of a suitably chosen Hopfield neural network. Based on Theorem 2 and Theorem 3, we design polynomial-time algorithms for the computation of minimum as well as maximum cuts in directed and undirected graphs. The key idea is a polynomial-time algorithm for the computation of the global optimum stable state and anti-stable state.
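The objects introduced above can be sketched in code. The following Python illustration (ours, not from the paper) implements the serial-mode state updation (2.1), the energy function (2.3), and the symmetrization of Theorem 3. The threshold formula in `directed_to_hopfield` is our reading of the garbled statement of Theorem 3 and should be treated as an assumption.

```python
import numpy as np

def serial_mode_converge(M, T, v, max_sweeps=1000):
    """Run the Hopfield network Z = (M, T) in serial mode from initial
    state v until a stable state v = Sign(M v - T) is reached."""
    v = np.array(v, dtype=float)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(v)):                            # one node at a time: serial mode
            s = 1.0 if M[i] @ v - T[i] >= 0 else -1.0      # Sign(0) taken as +1
            if s != v[i]:
                v[i], changed = s, True
        if not changed:
            return v                                       # fixed point, as in (2.2)
    raise RuntimeError("no convergence within max_sweeps")

def energy(M, T, v):
    """Energy function (2.3): non-decreasing under serial updates."""
    return v @ M @ v - 2 * v @ T

def directed_to_hopfield(M):
    """Theorem 3 construction for a directed weight matrix M:
    W = (M + M^T)/2.  The threshold formula is an ASSUMPTION, read off
    the garbled source as T_i = (1/2) * sum_j (M_ij - M_ji)."""
    W = (M + M.T) / 2.0
    T = 0.5 * (M.sum(axis=1) - M.sum(axis=0))
    return W, T
```

Note that for any symmetric M the mapping returns T = 0, so the directed construction degenerates to the undirected network, as one would expect.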

Claim 1: In summary, MIN / MAX cut computation in directed / undirected graphs is equivalent to the determination of the global optimum stable / anti-stable state of the associated Hopfield neural network.

The problem of optimizing a quadratic form over the hypercube arises in many research problems in computer science. The author, after mastering the results in [Hop], [BrB], contemplated removing the conditions required in Theorems 1 and 2. The fruits of that effort are documented in the following section.

III. OPTIMIZATION OF QUADRATIC FORMS OVER THE HYPERCUBE

The energy function associated with an HNN, considered in (2.3), is not exactly a quadratic form. The author questioned whether the threshold vector associated with an HNN can always be assumed to be zero (for instance, by introducing a dummy node and suitably choosing the synaptic weights from it to the other nodes). The result of that effort is the following lemma.

Lemma 1: There is no loss of generality in assuming that the threshold vector associated with a Hopfield Neural Network (HNN) is the all-zero vector.

Proof: Refer to the argument provided in [RaN].

Thus, it is clear that a properly chosen HNN acts as a local / global optimization device for an arbitrary quadratic form as the objective function, with the constraint set being the unit hypercube. In the following discussion we therefore consider only a pure quadratic form as the energy function. Also, part 1 of Theorem 1 requires the condition that the diagonal elements of the symmetric synaptic weight matrix are all non-negative. We show in the following theorem that this condition can be removed.

In this section, we consider the problem of maximizing a quadratic form (associated with a symmetric matrix) over the corners of the binary, symmetric hypercube. Mathematically, this set is specified precisely as follows:

S = { x = ( x_1, x_2, x_3, ..., x_N ) : x_i = +1 or -1 for 1 <= i <= N }....(3.1)

From now onwards, we call the above set simply the hypercube.
This optimization problem arises in a rich class of applications. It is the analogue, over the hypercube, of the maximization over the hypersphere of the quadratic form associated with a symmetric matrix; Rayleigh provided the solution to the optimization problem on the unit hypersphere. A necessary condition on the optimum vector lying on the unit hypercube is now provided. The following theorem and other associated results were first documented in [Rama1].

Theorem 4: Let B be an arbitrary N x N real matrix. From the standpoint of maximization of the quadratic form u^T B u on the hypercube, there is no loss of generality in assuming that B is a symmetric matrix with zero diagonal elements. If u maximizes the quadratic form u^T B u subject to the constraint that u_i = +1 or -1 for 1 <= i <= N (i.e. u lies on the corners of the hypercube), then

u = Sign( C u )....(3.2)

where C = ( B + B^T ) / 2 with all the diagonal elements set to zero. In equation (3.2), Sign(0) is interpreted as +1 or -1 based on the requirement.

Proof: Refer [Rama1], [Rama3]. Also refer [RaN] for an elementary proof.

Corollary: Let E be an arbitrary N x N real matrix. If u minimizes the quadratic form u^T E u subject to the constraint u_i = +1 or -1 for 1 <= i <= N, then

u = Sign( C u )....(3.3)

where C is the symmetric matrix with zero diagonal elements obtained from E.

Note: It is immediate that if u is a stable state (or anti-stable state), then -u (minus u) is also a stable state (or anti-stable state, respectively).

Remark 2: In view of Theorem 4, there is no loss of generality in assuming that the TRACE of the matrix is zero for determining the stable / anti-stable states (i.e. for optimizing the quadratic form). Since the trace of a matrix is the sum of its eigenvalues, the sum of the positive eigenvalues equals the magnitude of the sum of the negative eigenvalues. Hence, a nonzero symmetric matrix with zero diagonal elements can be neither positive definite nor negative definite; it can be assumed to be indefinite, with the largest eigenvalue being a positive real number. Thus the location of the stable states (vectors) is invariant under variation of Trace(M).

IV. GLOBAL OPTIMUM STABLE / ANTI-STABLE STATE COMPUTATION: COMPUTATION OF MINIMUM AND MAXIMUM CUT IN DIRECTED AND UNDIRECTED GRAPHS

As discussed in Section II, Bruck et al. [BrB] showed that the problem of computing the maximum stable state is equivalent to that of computing a minimum cut in the associated undirected graph. In [BrB] this is claimed to be an NP-hard problem. But theoretical computer scientists Rao Kosaraju and Sartaj Sahni informed the author that the minimum cut in an undirected graph is known to be in P (i.e. polynomial-time algorithms exist). They also informed the author that MAX CUT in an undirected graph is NP-complete. From the corollary of Theorem 2, it follows that computing the MAXIMUM CUT is equivalent to the problem of determining the global minimum anti-stable state (i.e.
determining the corner of the unit hypercube where the global minimum of the quadratic form is attained).

Goals: To see whether an efficient polynomial-time algorithm can be discovered for the problem of computing the minimum cut in an undirected graph; also, to find a polynomial-time algorithm for the NP-complete problem of computing the MAXIMUM CUT in an undirected graph. Thus we are interested in knowing whether P = NP.

In the following discussion, we consider the quadratic form associated with the matrix M (which can also be treated as the synaptic weight matrix of a Hopfield Neural Network).

Lemma 2: Suppose the synaptic weight matrix (connection matrix) of the Hopfield neural network is a non-negative matrix (i.e. all the components are non-negative real numbers). Then the global optimum stable state is the all-ones vector, i.e. [ 1 1 ... 1 ]^T.

Proof: Since the energy function is a quadratic form and the variables are only allowed to assume the values { +1, -1 }, the global optimum is achieved by taking the sum of all the components of the symmetric synaptic weight matrix. Hence the vector of all ones, i.e. [ 1 1 ... 1 ]^T, is the global optimum stable state. Q.E.D.

Corollary: The global optimum stable state, i.e. the all-ones vector, corresponds to the EMPTY minimum cut in the associated undirected weighted graph.
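For small N, Lemma 2 and the energy/cut relationship from the proof of Theorem 2 can be checked by exhaustive search over the hypercube corners. The following sketch (ours, for illustration only) does exactly that.

```python
import numpy as np
from itertools import product

def brute_force_optimum(M):
    """Exhaustively search the hypercube corners for a maximizer of v^T M v."""
    n = M.shape[0]
    best_v, best_e = None, -np.inf
    for bits in product([-1.0, 1.0], repeat=n):
        v = np.array(bits)
        e = v @ M @ v
        if e > best_e:
            best_e, best_v = e, v
    return best_v, best_e

def cut_weight(M, v):
    """Weight of the cut induced by the +1/-1 labeling v (edges whose
    endpoints lie on opposite sides), for symmetric M with zero diagonal."""
    n = M.shape[0]
    return sum(M[i, j] for i in range(n) for j in range(i + 1, n) if v[i] != v[j])
```

For symmetric M with zero diagonal, the identity from Theorem 2's proof reads v^T M v = 2*(total edge weight) - 4*(cut weight); for a non-negative M the brute-force optimum is the all-ones (or all-minus-ones) corner, i.e. the empty cut of Lemma 2.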

Note: If u is a stable state, it is easy to see that -u is also a stable state. Thus, when the synaptic weight matrix is non-negative, the vector of all MINUS ONES is also a global optimum stable state. This, once again, corresponds to the EMPTY MINIMUM CUT.

Structured Synaptic Weight Matrices: Computation of the Global Optimum Stable / Anti-Stable State: The following lemma relates the eigenvectors of a symmetric matrix M and the stable / anti-stable states associated with the synaptic weight matrix M.

Lemma 3: If a corner of the unit hypercube is an eigenvector of M corresponding to a positive / negative eigenvalue, then it is also a stable / anti-stable state.

Proof: Follows from the definitions of eigenvector and stable state. Details avoided for brevity.

We now reason how the computation of the global optimum stable state is related to the computation of the global optimum anti-stable state.

Claim 2: Suppose x0 is an anti-stable state of the matrix M; then x0 is a stable state of the matrix -M (minus M).

Proof: Follows from the definitions of stable and anti-stable states. Q.E.D.

The following remark follows from the above lemma on invoking Rayleigh's Theorem, which we now briefly state. The proof can be found in a textbook of linear algebra.

Theorem 5 (Rayleigh's Theorem): The local optima of the quadratic form x^T M x associated with a symmetric matrix M on the unit Euclidean hypersphere { x : x^T x = 1 } occur at the eigenvectors, with the corresponding value of the quadratic form being the eigenvalue.

Remark 3: Suppose we consider a vector on the hypercube (a corner of the hypercube), say x_hat, which is also an eigenvector of the matrix M corresponding to the largest eigenvalue mu_max, i.e. M x_hat = mu_max x_hat. Then, since mu_max is positive, we have that Sign( M x_hat ) = Sign( mu_max x_hat ) = x_hat. Thus, in view of Rayleigh's Theorem, such a corner of the hypercube is also the global optimum stable state.
This inference follows from the fact that points on the hypercube can be projected onto the unit hypersphere. Let X be a vector lying on the unit hypercube. Project such a vector onto the unit hypersphere using the transformation Y = X / sqrt(N). It is clear that the Euclidean norm of Y is equal to one. We now briefly consider such a case.

CASE A: The case where a corner of the hypercube is also the eigenvector corresponding to the maximum positive eigenvalue is very special. More generally, a corner of the hypercube may be an eigenvector corresponding to some positive eigenvalue. It should be realized that we are only interested in the SIGN STRUCTURE of the eigenvector corresponding to the largest positive eigenvalue (which is also a corner of the hypercube). It is well known that the computation of the maximum eigenvector of a symmetric matrix can be carried out using a polynomial-time algorithm (Elsner's / Lanczos' algorithm).
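The sign structure of the eigenvector for the largest eigenvalue can be sketched with plain power iteration. This is our illustration, not Elsner's or Lanczos' method; the diagonal shift is a standard device (an assumption of this sketch) that makes the largest algebraic eigenvalue also the largest in magnitude, and it leaves the eigenvectors, hence their signs, unchanged.

```python
import numpy as np

def sign_of_leading_eigenvector(M, iters=500, seed=0):
    """Approximate the eigenvector of symmetric M for the largest eigenvalue
    by power iteration on a shifted matrix, and return its sign structure
    (a corner of the hypercube)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    shift = np.abs(M).sum()            # crude bound >= spectral radius of M
    A = M + shift * np.eye(n)          # shift: same eigenvectors, eigenvalues mu + shift
    x = rng.standard_normal(n)
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return np.where(x >= 0, 1.0, -1.0)
```

For a non-negative irreducible M, the Perron-Frobenius theorem makes the leading eigenvector one-signed, so the returned corner is the all-ones vector up to global sign.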

Suppose we use such an algorithm to compute the global minimum anti-stable state (refer to Claim 2 above Rayleigh's Theorem). We have then found a polynomial-time algorithm for an NP-hard problem; thus, in such a case, P = NP. In the case of a 2 x 2 synaptic weight matrix, it can easily be shown that the global optimum stable state happens to be the eigenvector corresponding to the largest eigenvalue. Details are avoided for brevity.

Note: Using Rayleigh's Theorem, under the hypothesis of Lemma 3, the eigenvector of M corresponding to the largest positive eigenvalue is the global optimum stable state. Also, the eigenvector corresponding to the smallest negative eigenvalue of M is the global optimum anti-stable state. More generally, an eigenvector that is a corner of the hypercube corresponding to a larger positive eigenvalue is a better (larger-energy) stable state. Similarly, an eigenvector that is a corner of the hypercube corresponding to a smaller negative eigenvalue is a better (smaller-energy) anti-stable state. By computing the spectral representation of M, if the hypotheses of Lemma 3 are satisfied, the global optimum stable state as well as the global optimum anti-stable state can be computed by a polynomial-time algorithm.

CASE B: Now consider the arbitrary case where the eigenvector corresponding to the largest eigenvalue is NOT a stable state; more generally, none of the corners of the hypercube is an eigenvector.

ALGORITHM A: Efficient Algorithm for Minimum / Maximum Cut Computation in Directed / Undirected Graphs: Global Optimum Stable / Anti-Stable State Computation of a Hopfield Neural Network:

(I) By Claim 2, the computation of the global optimum (minimum) anti-stable state is equivalent to the computation of the global optimum stable state of a suitably chosen Hopfield neural network.

(II) By Theorem 2, the computation of the global optimum stable state is equivalent to computing a minimum cut in the graph corresponding to the Hopfield neural network.
There are several efficient polynomial-time algorithms for computing the minimum cut in a directed / undirected graph. Thus, in view of the corollary of Theorem 2, Claim 2, and Theorem 3, efficient POLYNOMIAL TIME algorithms exist for computing the MAXIMUM CUT in an undirected / directed graph.

Now we discuss other possible algorithms for computing the minimum / maximum cut in directed / undirected graphs, based on the connection of such algorithms to the Hopfield neural network.

Lemma 4: If y is an arbitrary vector on the unit hypersphere obtained by projecting a vector on the unit hypercube (onto the unit hypersphere), and x0 is the eigenvector of the symmetric matrix M (of Euclidean norm one) corresponding to the maximum eigenvalue mu_max, then we have that

y^T M y = mu_max + 2 mu_max ( y - x0 )^T x0 + ( y - x0 )^T M ( y - x0 ).

Proof: Follows from a standard argument (expand y = x0 + ( y - x0 ) and use M x0 = mu_max x0 together with x0^T x0 = 1). Q.E.D.
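The identity in Lemma 4 is easy to verify numerically; the following short script (ours, for illustration) checks it for a random symmetric matrix and a random unit vector.

```python
import numpy as np

# Numerical check of Lemma 4: for symmetric M with unit-norm top eigenvector x0
# and eigenvalue mu_max, and any unit vector y,
#   y^T M y = mu_max + 2*mu_max*(y - x0)^T x0 + (y - x0)^T M (y - x0).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
M = (A + A.T) / 2.0                       # random symmetric matrix
vals, vecs = np.linalg.eigh(M)
mu, x0 = vals[-1], vecs[:, -1]            # largest eigenvalue and its eigenvector
y = rng.standard_normal(4)
y /= np.linalg.norm(y)                    # stands in for a projected hypercube corner
d = y - x0
lhs = y @ M @ y
rhs = mu + 2 * mu * (d @ x0) + d @ M @ d
assert np.isclose(lhs, rhs)
```

The expansion shows the identity actually holds for any vector y, not only unit vectors; only x0 needs unit norm.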

Remark 4: Since, by Rayleigh's Theorem, the global optimum value of the quadratic form on the unit hypersphere is the maximum eigenvalue mu_max, it is clear that for all corners of the hypercube projected onto the unit hypersphere, we must necessarily have that

2 mu_max ( y - x0 )^T x0 + ( y - x0 )^T M ( y - x0 ) <= 0.

The goal is to choose a y such that the above quantity is as close to zero (as little negative) as possible, so that the value of the quadratic form is as close to mu_max as possible.

Now suppose that L = Sign( x0 ). A natural question is whether L can somehow be utilized for arriving at the global optimum stable state. This question was the starting point for the following algorithm to compute the global optimum stable state.

ALGORITHM B: ALGORITHM FOR COMPUTATION OF THE GLOBAL OPTIMUM STABLE STATE OF A HOPFIELD NEURAL NETWORK: ALGORITHM FOR COMPUTATION OF MINIMUM CUT IN AN UNDIRECTED / DIRECTED GRAPH:

Step 1: Suppose the eigenvector corresponding to the largest eigenvalue of M is real (i.e. has real-valued components). Let such an eigenvector be x0 (i.e. the largest eigenvector of the synaptic weight matrix of the undirected graph associated with the Hopfield neural network). Compute the SIGN STRUCTURE of this eigenvector, L = Sign( x0 ).

Step 2: Using L as the initial condition (vector), run the Hopfield neural network in the serial mode of operation until a stable state is reached. Such a state is the global optimum stable state. By Theorem 2, it corresponds (leads) to the minimum cut in the associated graph.

In view of the following lemma, the eigenvector corresponding to the largest eigenvalue can always be assumed to contain real-valued components.

Lemma 5: If A is a symmetric, real-valued matrix, then every eigenvector can be CHOSEN to contain real-valued components.

Proof: Follows from a standard argument in the linear algebra of symmetric matrices. Q.E.D.
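The two steps of Algorithm B can be sketched as follows. This is our illustration only: `numpy.linalg.eigh` stands in for the Lanczos-type computation of the largest eigenvector, and zeroing the diagonal relies on the trace-invariance observation of Remark 2.

```python
import numpy as np

def algorithm_b_min_cut(M, max_sweeps=1000):
    """Sketch of Algorithm B for a Hopfield network with zero thresholds:
    start from the sign structure of the leading eigenvector of
    W = (M + M^T)/2 and run serial-mode updates to a stable state.
    The +1/-1 components of the stable state give the two sides of the
    candidate minimum cut."""
    W = (M + M.T) / 2.0
    np.fill_diagonal(W, 0.0)                     # Remark 2: stable states unaffected
    vals, vecs = np.linalg.eigh(W)
    v = np.where(vecs[:, -1] >= 0, 1.0, -1.0)    # Step 1: L = Sign(x0)
    n = len(v)
    for _ in range(max_sweeps):                  # Step 2: serial mode to a fixed point
        changed = False
        for i in range(n):
            s = 1.0 if W[i] @ v >= 0 else -1.0
            if s != v[i]:
                v[i], changed = s, True
        if not changed:
            break
    side_plus = [i for i in range(n) if v[i] > 0]
    return v, side_plus
```

On a weight matrix with two clusters joined by negative cross-weights, the returned stable state separates the clusters; on a non-negative matrix it returns the all-ones (or all-minus-ones) state, i.e. the empty cut of the corollary to Lemma 2.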
Note: In view of Theorem 3, the same algorithm can be applied for the computation of a minimum cut in a directed graph (with the properly chosen synaptic weight matrix defined in Theorem 3). In view of Lemma 4 and Remark 4 above, the claim is that the global optimum stable state is reached through the above procedure. The formal proof that the global optimum stable state is reached is provided in the Appendix.

Theorem 6: The above algorithm converges to the global optimum stable state using the vector L as the initial condition (on running the Hopfield neural network in serial mode).

Proof: Refer to the Appendix.

Note: In view of Lemma 2 and Rayleigh's Theorem, it can be reasoned that (when the synaptic weight matrix is non-negative) a non-empty min cut can be computed by the following procedure. Let y0 be the eigenvector corresponding to the second largest eigenvalue (computed by removing the mode corresponding to the largest eigenvalue). Let L~ = Sign( y0 ).

To compute the non-empty minimum cut, run the Hopfield neural network in serial mode with L~ as the initial condition.

We now determine the computational complexity of the above algorithm, which involves the following computations. It is POTENTIALLY a polynomial-time algorithm.

(A) Signed Largest Eigenvector Computation: Computation of the eigenvector x0 corresponding to the largest eigenvalue of the symmetric matrix (the connection matrix of the Hopfield neural network). This is a standard topic in computational linear algebra; one can use the power method, the Lanczos method, etc. By a suitable modification of the Lanczos algorithm, a polynomial-time algorithm for computing the signed largest eigenvector L = Sign( x0 ) is designed. Details are avoided for brevity.

(B) State Updation in Serial Mode with Proper Initial Condition Vector: Using L as the initial condition, we run the Hopfield neural network in serial mode until the global optimum stable state is reached. It is possible to bound the number of computations for this task. Under the assumption that the number of { +1, -1 } vectors in the domain of attraction of the largest stable state is bounded by a polynomial of constant degree (as a function of the number of vertices), a polynomial-time algorithm is designed for this task.

Maximum Cut Computation in Undirected and Directed Graphs: Global Optimum Anti-Stable State Computation of a Hopfield Neural Network:

Claim: Suppose x0 is an anti-stable state of the matrix M; then x0 is a stable state of the matrix -M (minus M).

Proof: Follows from the definitions of stable and anti-stable states. Q.E.D.

Thus, by the above discussion, the NP-complete problem of maximum cut computation in undirected as well as directed graphs reduces to the computation of the global optimum anti-stable state.

Remark 5: A theorem similar to Theorem 6 is proved for computing the global optimum anti-stable state (the global minimum of the associated quadratic form) of the Hopfield neural network.
The associated algorithm is POTENTIALLY a POLYNOMIAL time algorithm for the NP-complete problem of MAXIMUM CUT computation in an undirected graph. Based on Theorem 3, an interesting algorithm for the computation of a maximum cut in a directed graph is discussed.

Hopfield Neural Network: Associated One-Step Associative Memory: POLYNOMIAL TIME Algorithm for Minimum / Maximum Cut Computation in Undirected / Directed Graphs:

Now we investigate the possibility of arriving at a more efficient algorithm for computing the global optimum (maximum / minimum) stable / anti-stable state of a Hopfield neural network. Specifically, we propose a POLYNOMIAL TIME algorithm for computing the global optimum stable state as well as the global optimum anti-stable state. As per the above discussion (Theorem 6, Remark 5), we restrict the discussion to the computation of the global optimum / maximum stable state (it applies equally to the global minimum anti-stable state). In view of the following lemma, it has been shown in [BrB] that a graph-theoretic code is naturally associated with a Hopfield network (with the associated quadratic energy function). The local and global optima of the energy function are the codewords.

Lemma 6: Given a linear block code, a neural network can be constructed in such a way that every local maximum of the energy function corresponds to a codeword and every codeword corresponds to a local maximum.

Proof: Refer to the paper by Bruck et al. [BrB].

The following fact enables bounding the minimum distance.

Note: Graph-theoretic error-correcting codes are limited in the sense that the following upper bound on the minimum distance holds true: d <= 2E / N, where d is the minimum distance, and E and N are the cardinalities of the edge set and the vertex set of the associated graph, respectively.

Goal: To compute the global optimum stable state (i.e. the global optimum of the energy function) using the associated graph-theoretic encoder. We now propose a POTENTIALLY polynomial-time algorithm to achieve this goal.

ALGORITHM C:

Step 1: Compute the real eigenvector x0 (i.e. with real-valued components) of the symmetric matrix M corresponding to the largest eigenvalue. Compute the corner L of the hypercube from x0 in the following manner: L = Sign( x0 ).

Complexity of this step: A polynomial-time algorithm for this step is well known in the linear algebra literature.

Step 2: As discussed in [BrB], the { +1, -1 } valued vector L partitions the vertex set into two groups, with the +1 valued vertices on one side of the cut and the -1 valued vertices on the other side. This also enables determining the characteristic vector of the cut edges. We thus arrive at a row vector B of dimension E, where E can be at most N(N-1)/2. By Theorem 6 and the results in [BrB], such a vector B lies in the coding sphere corresponding to the global optimum stable state, i.e. the codeword corresponding to the global optimum of the quadratic energy function.

Complexity of this step: The determination of the characteristic vector of the cut edges (which need not be the global minimum cut; it will be a global minimum cut if L is an eigenvector of M corresponding to a positive eigenvalue) requires labeling at most E edges (E can be at most N(N-1)/2).
Step 3: Compute the generator matrix of the graph-theoretic code associated with the Hopfield network. The characteristic vector computed in Step 2 (by Theorem 6) lies in the coding sphere corresponding to the global optimum codeword (the global maximum of the associated quadratic energy function). Using the associated information word (and the generator matrix), the global optimum codeword is computed in one step. This corresponds to the global optimum stable state. Thus, this provides an efficient algorithm to compute the minimum cut / maximum cut in an undirected / directed graph.
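Steps 1 and 2 can be sketched in code as follows. This is an illustrative sketch only: it takes the energy matrix to be the negated adjacency matrix (the standard correspondence under which maximizing the quadratic form over hypercube corners maximizes the cut of the graph), and it stops at the sign-pattern cut of Step 2; the graph-theoretic decoding of Step 3 is not implemented. The function name is ours.

```python
import numpy as np

def spectral_cut(A):
    """Steps 1-2 sketch: with M = -A (so maximizing x^T M x over corners
    corresponds to maximizing the cut of A), compute the eigenvector of M
    for its largest eigenvalue and read off the corner u = Sign(e0) as a
    vertex bipartition."""
    M = -A
    vals, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    e0 = vecs[:, -1]                      # eigenvector for largest eigenvalue
    u = np.where(e0 >= 0, 1, -1)          # corner of the hypercube: Sign(e0)
    n = len(u)
    # characteristic vector of cut edges: edge (i, j) is cut iff u_i != u_j
    cut_edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if A[i, j] != 0 and u[i] != u[j]]
    cut_weight = sum(A[i, j] for i, j in cut_edges)
    return u, cut_edges, cut_weight

# 4-cycle: its maximum cut contains all 4 edges (bipartition {0,2} vs {1,3}).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
u, edges, wt = spectral_cut(A)
assert wt == 4.0 and len(edges) == 4
```

On this bipartite example the top eigenvector of $-A$ already has the alternating sign pattern, so the sign rounding alone recovers the maximum cut; in general Step 3 (the one-step decoding) is the part that would move from the rounded corner to the optimum codeword.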

REFERENCES
[1] [BrB] J. Bruck and M. Blaum, Neural Networks, Error Correcting Codes and Polynomials over the Binary Cube, IEEE Transactions on Information Theory, Vol. 35, No. 5, September 1989.
[2] [Hop] J. J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proceedings of the National Academy of Sciences, USA, Vol. 79, 1982.
[3] [Rama1] G. Rama Murthy, Optimal Signal Design for Magnetic and Optical Recording Channels, Bellcore Technical Memorandum, TM-NWT, April 1st, 1991.
[4] [Rama2] G. Rama Murthy, Efficient Algorithms for Computation of Minimum Cut and Maximum Cut in an Undirected Graph, manuscript in preparation.
[5] [Rama3] G. Rama Murthy, Multi-dimensional Neural Networks: Unified Theory, research monograph, New Age International Publishers, New Delhi.
[6] [RaN] G. Rama Murthy and B. Nischal, Hopfield-Amari Neural Network: Minimization of Quadratic Forms, The 6th International Conference on Soft Computing and Intelligent Systems, Kobe Convention Center (Kobe Portopia Hotel), November 20-24, 2012, Kobe, Japan.
[7] [Rama4] G. Rama Murthy, Multi/Infinite Dimensional Neural Networks: Multi/Infinite Dimensional Logic Theory, International Journal of Neural Networks, Volume 15, No. 3, June 2005.
[8] [Rama6] G. Rama Murthy, Optimization of Quadratic Forms: NP Hard Problems: Neural Networks, 2013 International Symposium on Computational and Business Intelligence (ISCBI 2013), August 24-26, 2013, New Delhi, India. Available on IEEE Xplore.
[9] [BrS] J. Bruck and J. Sanz, A Study on Neural Networks, International Journal of Intelligent Systems, Vol. 3.
[10] G. Rama Murthy, Towards a Resolution of the P = NP Conjecture, Cornell University archive.
[11] G. Rama Murthy, Towards a Resolution of the P = NP Conjecture, IIIT Hyderabad archive.
[12] S. L. Hakimi and H. Frank, Cut-set Matrices and Linear Codes, IEEE Transactions on Information Theory, vol.
IT-11, July 1965.

APPENDIX

Proof of Theorem 6: In view of the results in [1] ([BrB]), the idea is to reason that the vector $u = \mathrm{Sign}(e_0)$ (where $e_0$ is the eigenvector of $M$ corresponding to the largest eigenvalue) is in the domain of attraction of the global optimum stable state. Equivalently, using the results in [1], we want to show that this initial vector is in the coding sphere of the codeword corresponding to the global optimum stable state. Let $y_0$ be the global optimum stable state / vector on the unit hypercube. Let $z_0$ be one among the other stable states (i.e. the second largest stable state). Thus, the idea is to reason that the Hamming distance between $u$ and $y_0$, i.e. $d_H(u, y_0)$, is smaller than the Hamming distance between $u$ and $z_0$, i.e. $d_H(u, z_0)$:

We want to reason that $d_H(u, y_0) < d_H(u, z_0)$. The proof is by contradiction, i.e. suppose that
$d_H(u, z_0) + 1 = d_H(u, y_0)$. .....(4.1)
We know that the sign structure of the vectors $u$ and $e_0$ is exactly the same; more explicitly, all corresponding components of $u$ and $e_0$ have the same sign (positive or negative). Since the three vectors $\{u, y_0, z_0\}$ lie on the unit hypercube, we consider the various possibilities with respect to the sign structure of those vectors. Thus, we define the following sets:
A: the set of components of the vectors $\{y_0, z_0\}$ that (both of them) agree in sign with those of the vector $e_0$ (and hence $u$).
B: the set of components of the vectors $\{y_0, z_0\}$ that (both of them) DO NOT agree in sign with those of the vector $e_0$ (and hence $u$).
C: the set of components where only $y_0$ differs in sign from the corresponding components of the vector $e_0$ (and hence $u$).
D: the set of components where only $z_0$ differs in sign from the corresponding components of the vector $e_0$ (and hence $u$).
Since the components of $\{u, y_0, z_0\}$ assume the values $+1$ or $-1$ (in sign), there are eight possibilities. The following table provides a tabular summary of the sign structure of the components of $\{e_0, y_0, z_0\}$ and the set among $\{A, B, C, D\}$ the components belong to.

  e_0   y_0   z_0   Set
   +     +     +     A
   +     +     -     D
   +     -     +     C
   +     -     -     B
   -     +     +     B
   -     +     -     C
   -     -     +     D
   -     -     -     A

We make the following inferences: components of $\{e_0, y_0, z_0\}$ belonging to set A contribute zero value to the distances $d_H(u, y_0)$, $d_H(u, z_0)$; components belonging to set B contribute the same constant value to both distances $d_H(u, y_0)$, $d_H(u, z_0)$. Let $|B| = j$. It can be noted that, for the contradiction hypothesis, $|B|$ as well as $|A|$ can be set to zero. By the hypothesis, the cardinality of set C, i.e. $|C|$, is at least one larger than the cardinality of set D, i.e. $|D|$. For concreteness, we first consider the case where $|C| = |D| + 1$.
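The bookkeeping behind these inferences can be checked on a small example. The sketch below uses arbitrary sign patterns (not actual stable states of any network) and our own variable names:

```python
import numpy as np

# Arbitrary sign patterns standing in for u (= Sign(e0)), y0, z0.
u  = np.array([ 1, -1,  1,  1, -1,  1])
y0 = np.array([ 1,  1,  1, -1, -1,  1])   # differs from u in components 1, 3
z0 = np.array([-1, -1,  1,  1, -1,  1])   # differs from u in component 0

A = np.sum((y0 == u) & (z0 == u))   # both agree with u
B = np.sum((y0 != u) & (z0 != u))   # both disagree with u
C = np.sum((y0 != u) & (z0 == u))   # only y0 disagrees
D = np.sum((y0 == u) & (z0 != u))   # only z0 disagrees

d_uy = np.sum(u != y0)              # Hamming distance d_H(u, y0)
d_uz = np.sum(u != z0)              # Hamming distance d_H(u, z0)

# Sets A and B drop out of the comparison: only C and D matter.
assert d_uy == B + C and d_uz == B + D
assert d_uy - d_uz == C - D
```

Since $d_H(u, y_0) = |B| + |C|$ and $d_H(u, z_0) = |B| + |D|$, the contradiction hypothesis (4.1) is exactly the statement $|C| = |D| + 1$, which is why $|A|$ and $|B|$ may be set to zero without loss of generality.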

To illustrate the argument, we first consider the case where only the last component of $y_0$ differs from that of $e_0$ in sign (but not that of $z_0$), and all other components (of both $y_0$ and $z_0$) either agree or disagree in sign with those of $e_0$. To proceed with the proof argument, the vectors $u, y_0, z_0$ (lying on the unit hypercube) are projected onto the unit hypersphere through the following transformation. Let the projected vectors be $q, r$, i.e.
$q = \frac{y_0}{\sqrt{N}}, \qquad r = \frac{z_0}{\sqrt{N}},$
where $N$ is the dimension of the symmetric matrix $M$. Thus, we want to reason that if the Hamming distance condition specified above, i.e. equation (4.1) (our hypothesis), is satisfied, then the values of the quadratic form associated with the vectors $q, r$ satisfy the following inequality:
$r^T M r > q^T M q$.
The idea is to thus arrive at a contradiction to the fact that $y_0$ is the global optimum stable state.
Note: From Theorem 4, it is clear that by a variation of the diagonal elements of $M$, a constant value will be added to both sides of the above inequality. Thus, it is sufficient (to arrive at a contradiction) to exhibit one possible choice of diagonal elements of $M$ for which the above inequality holds.
In view of Lemma 4, we have the following expressions for $q^T M q$ and $r^T M r$:
$q^T M q = (2q - e_0)^T M e_0 + (q - e_0)^T M (q - e_0)$
$r^T M r = (2r - e_0)^T M e_0 + (r - e_0)^T M (r - e_0)$.
Equivalently, we effectively want to show that
$2(r - q)^T M e_0 + (r - e_0)^T M (r - e_0) - (q - e_0)^T M (q - e_0) > 0$.
Let us label the terms in the above expression as follows:
$(I) = 2(r - q)^T M e_0$
$(II) = (r - e_0)^T M (r - e_0) - (q - e_0)^T M (q - e_0)$.
We want to show that $(I) + (II) > 0$. To prove such an inequality, we first partition the vectors $q, e_0, r$ (lying on the unit hypersphere) into two parts:
Part (A): the first $N - 1$ components, where the components of $q, r$ simultaneously agree or disagree in sign with those of $e_0$.
Part (B): the last component of $q$, which disagrees in sign with the last component of $e_0$; the last component of $r$ agrees in sign with that of $e_0$.
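The decomposition of the quadratic form used at this point is an algebraic identity valid for any symmetric matrix $M$ and any vectors: $x^T M x = (2x - e_0)^T M e_0 + (x - e_0)^T M (x - e_0)$. It can be checked numerically as follows (illustrative sketch with random data; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
M = rng.standard_normal((N, N)); M = (M + M.T) / 2       # random symmetric M
e0 = np.linalg.eigh(M)[1][:, -1]                         # top eigenvector
q = rng.choice([-1.0, 1.0], N) / np.sqrt(N)              # hypersphere points
r = rng.choice([-1.0, 1.0], N) / np.sqrt(N)

# Identity: x^T M x = (2x - e0)^T M e0 + (x - e0)^T M (x - e0)
for x in (q, r):
    lhs = x @ M @ x
    rhs = (2 * x - e0) @ M @ e0 + (x - e0) @ M @ (x - e0)
    assert abs(lhs - rhs) < 1e-10

# Consequently r^T M r - q^T M q = (I) + (II), with the terms as in the text.
I  = 2 * (r - q) @ M @ e0
II = (r - e0) @ M @ (r - e0) - (q - e0) @ M @ (q - e0)
assert abs((r @ M @ r - q @ M @ q) - (I + II)) < 1e-10
```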
Thus, the vectors $e_0, q, r$ are given by
$e_0 = \begin{pmatrix} \tilde{e}_0 \\ e_N \end{pmatrix}, \qquad q = \begin{pmatrix} \tilde{q} \\ q_N \end{pmatrix}, \qquad r = \begin{pmatrix} \tilde{r} \\ r_N \end{pmatrix},$
where $q_N, r_N$ are scalars. Also, the components of $\{\tilde{q}, \tilde{r}\}$

are simultaneously of the same sign (either both $+$ or both $-$). Thus, except for the last component, all other components of the vector $r - q$ are zero. Further, suppose that the last component of $e_0$, i.e. $e_N$, is $-\beta$, with $\beta > 0$. Then $q_N = +\frac{1}{\sqrt{N}}$ and $r_N = -\frac{1}{\sqrt{N}}$, and it is easy to see that $(r_N - q_N) = -\frac{2}{\sqrt{N}}$. Summarizing,
$(r - q) = \left( 0, 0, \ldots, 0, -\frac{2}{\sqrt{N}} \right)^T$.
Hence, we have that $(r - q)^T e_0 = \frac{2\beta}{\sqrt{N}}$. Thus, since $M e_0 = \lambda_{max} e_0$,
$(I) = 2(r - q)^T M e_0 = 2\lambda_{max}(r - q)^T e_0 = \frac{4\lambda_{max}\beta}{\sqrt{N}},$
which is strictly greater than zero. Similarly, even when the last component of $e_0$ is $+\beta$ with $\beta > 0$, it is easy to reason that $2(r - q)^T M e_0$ is strictly greater than zero.
Now we consider the other term, i.e. Term (II). We first partition $M$ into a block-structured matrix, i.e.
$M = \begin{pmatrix} \tilde{M} & \tilde{m} \\ \tilde{m}^T & m_{NN} \end{pmatrix},$
where $m_{NN}$ is a scalar and $\tilde{m}$ is an $(N-1) \times 1$ column vector. We also partition the vectors $(q - e_0), (r - e_0)$ in the following manner:
$(q - e_0) = \begin{pmatrix} f^{(1)} \\ g^{(1)} \end{pmatrix}; \qquad (r - e_0) = \begin{pmatrix} f^{(2)} \\ g^{(2)} \end{pmatrix},$
where $g^{(1)}, g^{(2)}$ are scalars. As per the partitioning procedure, it is clear that $f^{(1)} = f^{(2)}$. Also, let us consider the case where the last component of $e_0$ is $-\beta$, with $\beta > 0$. In such a case,
$g^{(1)} = \frac{1}{\sqrt{N}} + \beta; \qquad g^{(2)} = -\frac{1}{\sqrt{N}} + \beta$.
Note: For the case where the last component of $e_0$ is $+\beta$ with $\beta > 0$, all the following equations are suitably modified. Details are avoided for brevity.
In term (II), the following definition is utilized: $H = (r - e_0)^T M (r - e_0)$ and $J = (q - e_0)^T M (q - e_0)$. Thus $(II) = H - J$. In view of the partitioning of the matrix $M$ and the vectors $(q - e_0), (r - e_0)$, we have that
$H = f^{(2)T} \tilde{M} f^{(2)} + 2 f^{(2)T} \tilde{m}\, g^{(2)} + m_{NN} (g^{(2)})^2$
$J = f^{(1)T} \tilde{M} f^{(1)} + 2 f^{(1)T} \tilde{m}\, g^{(1)} + m_{NN} (g^{(1)})^2$.
Using the fact that $f^{(1)} = f^{(2)}$, we have that
$H - J = 2 f^{(1)T} \tilde{m} \left( g^{(2)} - g^{(1)} \right) + m_{NN} \left( (g^{(2)})^2 - (g^{(1)})^2 \right)$.
Let $f^{(1)T} \tilde{m} = \tilde{m}^T f^{(1)} = \kappa$. Thus, we have that

$H - J = 2\kappa \left( -\frac{2}{\sqrt{N}} \right) + m_{NN} \left( -\frac{4\beta}{\sqrt{N}} \right) = -\frac{4\kappa}{\sqrt{N}} - \frac{4 m_{NN}\beta}{\sqrt{N}}$,
since $g^{(2)} - g^{(1)} = -\frac{2}{\sqrt{N}}$ and $(g^{(2)})^2 - (g^{(1)})^2 = (g^{(2)} - g^{(1)})(g^{(2)} + g^{(1)}) = -\frac{4\beta}{\sqrt{N}}$.
Hence, we have the following expression for $(I) + (II)$:
$(I) + (II) = \frac{4\lambda_{max}\beta}{\sqrt{N}} - \frac{4\kappa}{\sqrt{N}} - \frac{4 m_{NN}\beta}{\sqrt{N}} = -\frac{4}{\sqrt{N}} \left( \kappa + \left( m_{NN}\beta - \lambda_{max}\beta \right) \right)$.
But since $e_0$ is the eigenvector of $M$ corresponding to the largest eigenvalue $\lambda_{max}$, the last row of the relation $M e_0 = \lambda_{max} e_0$ gives
$\tilde{m}^T \tilde{e}_0 - m_{NN}\beta = -\lambda_{max}\beta$, i.e. $\tilde{m}^T \tilde{e}_0 = m_{NN}\beta - \lambda_{max}\beta$.
Hence, we necessarily have that
$(I) + (II) = -\frac{4}{\sqrt{N}} \left( \kappa + \tilde{m}^T \tilde{e}_0 \right) = -\frac{4}{\sqrt{N}} \left( \tilde{m}^T f^{(1)} + \tilde{m}^T \tilde{e}_0 \right) = -\frac{4}{\sqrt{N}}\, \tilde{m}^T \left( (\tilde{q} - \tilde{e}_0) + \tilde{e}_0 \right) = -\frac{4}{\sqrt{N}}\, \tilde{m}^T \tilde{q}$,
where $\tilde{e}_0, \tilde{q}$ denote the first $N - 1$ components of $e_0, q$ respectively (so that $f^{(1)} = \tilde{q} - \tilde{e}_0$).
We first note that $\tilde{m}$ is constrained by the fact that $y_0$ is a stable state. Thus $\mathrm{Sign}(M y_0) = y_0$, or equivalently (since $y_0 = \sqrt{N} q$) $\mathrm{Sign}(M q) = y_0$. Thus, we necessarily have
$\mathrm{Sign}\left( \begin{pmatrix} \tilde{M} & \tilde{m} \\ \tilde{m}^T & m_{NN} \end{pmatrix} \begin{pmatrix} \tilde{q} \\ q_N \end{pmatrix} \right) = \begin{pmatrix} \tilde{y}_0 \\ +1 \end{pmatrix}$.
In view of Theorem 4, this equation must hold true irrespective of the choice of diagonal elements of the matrix $M$ (i.e. the diagonal elements of $\tilde{M}$ and the scalar $m_{NN}$). The above equation can equivalently be specified in the following manner:
$\mathrm{Sign}\left( \tilde{M}\tilde{q} + \tilde{m}\, q_N \right) = \tilde{y}_0$ and $\mathrm{Sign}\left( \tilde{m}^T \tilde{q} + m_{NN}\, q_N \right) = +1$.
Let $\tilde{M}\tilde{q} + \tilde{m}\, q_N = h$, with $\mathrm{Sign}(h) = \tilde{y}_0$. Since $q_N = \frac{1}{\sqrt{N}}$, we equivalently have that
$\tilde{M}\tilde{q} + \frac{1}{\sqrt{N}}\, \tilde{m} = h$.

This equation can equivalently be specified in the following form:
$\tilde{m}^T \tilde{q} = \sqrt{N} \left( h^T \tilde{q} - \tilde{q}^T \tilde{M} \tilde{q} \right)$.
From Theorem (4), we have freedom in choosing the diagonal elements of $M$ (since $\mathrm{Trace}(M)$ contributes a constant value to the value of the quadratic form at all the corners of the hypercube, and at the corners of the hypercube projected onto the unit hypersphere; the location of the stable / anti-stable states is invariant under an arbitrary choice of the diagonal elements of the matrix $\tilde{M}$ and the scalar $m_{NN}$). Please refer to the note at the beginning of the proof (by contradiction). Thus, by a suitable choice of the diagonal elements of $\tilde{M}$, we can ensure that $\tilde{m}^T \tilde{q} < 0$, so that $(I) + (II) = -\frac{4}{\sqrt{N}}\, \tilde{m}^T \tilde{q} > 0$. Also, by a proper choice of $m_{NN}$, the condition $\mathrm{Sign}\left( \tilde{m}^T \tilde{q} + m_{NN}\, q_N \right) = +1$ is also satisfied. Thus, we arrive at the desired contradiction to the fact that $y_0$ is a global optimum stable state. Thus, the vector $u$ is in the domain of attraction of the global optimum stable state. Hence, with this choice of initial condition, when the Hopfield neural network is run in the serial mode, the global optimum stable state is reached.
Note: To arrive at the contradiction, the case where $|C| = |D| + 1$ is sufficient. For concreteness, we consider the more general case in the following. Thus, we now consider the case where $|C| \geq |D| + 2$. We generalize the above proof for this arbitrary case (using block matrices). Even in this case, we want to show that $(I) + (II) > 0$, where $(I) = 2(r - q)^T M e_0$ and $(II) = (r - e_0)^T M (r - e_0) - (q - e_0)^T M (q - e_0)$.
Let us first consider the term (I). Partition the vectors $\{r, q, e_0\}$ into FOUR parts (as per the sets $A, B, C, D$ considered in the above discussion), i.e.
$e_0 = \begin{pmatrix} e_A \\ e_B \\ e_C \\ e_D \end{pmatrix}; \qquad q = \begin{pmatrix} q_A \\ q_B \\ q_C \\ q_D \end{pmatrix}; \qquad r = \begin{pmatrix} r_A \\ r_B \\ r_C \\ r_D \end{pmatrix}.$
It is clear from the description of the sets $A, B, C, D$ that the following equations follow: $r_A = q_A$, $r_B = q_B$.

On the components in set $C$, only $q$ disagrees in sign with $e_0$, and on the components in set $D$, only $r$ disagrees in sign with $e_0$. Hence the vector $(r - q)$ is supported only on the components in $C \cup D$, where each of its entries is $\pm\frac{2}{\sqrt{N}}$:
$(r - q) = \begin{pmatrix} 0 \\ 0 \\ \pm\frac{2}{\sqrt{N}} \\ \pm\frac{2}{\sqrt{N}} \end{pmatrix}$.
Let $\gamma_i$ denote the magnitudes of the components of $e_0$ on the set $C$, and $\beta_j$ the magnitudes of the components of $e_0$ on the set $D$; these are all non-negative real numbers. Let the vector of all ones be denoted by $\bar{e}$. Hence, using $M e_0 = \lambda_{max} e_0$, we have that
$(r - q)^T M e_0 = \lambda_{max}\, \frac{2}{\sqrt{N}} \left( \gamma + \beta \right)^T \bar{e}$.
Thus the term (I) becomes
$(I) = 2(r - q)^T M e_0 = \frac{4\lambda_{max}}{\sqrt{N}} \left( \gamma + \beta \right)^T \bar{e} > 0$.
Using reasoning similar to the above, it is shown that $(I) + (II) > 0$. Q.E.D.
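The block computation of the difference $H - J$ used in the proof above (for the case where $q$ and $r$ differ from each other only in the last component) can be sanity-checked numerically. The sketch below draws a random symmetric matrix and a random common block $f$, subject only to the constraint $f^{(1)} = f^{(2)}$; variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
M = rng.standard_normal((N, N)); M = (M + M.T) / 2
Mt, mt, mNN = M[:-1, :-1], M[:-1, -1], M[-1, -1]     # block partition of M

f = rng.standard_normal(N - 1)                       # f = f^(1) = f^(2)
beta = 0.7                                           # |last component of e0|
g1 = 1 / np.sqrt(N) + beta                           # g^(1)
g2 = -1 / np.sqrt(N) + beta                          # g^(2)

d1 = np.concatenate([f, [g1]])                       # plays the role of (q - e0)
d2 = np.concatenate([f, [g2]])                       # plays the role of (r - e0)
H, J = d2 @ M @ d2, d1 @ M @ d1
kappa = mt @ f

# H - J = -4*kappa/sqrt(N) - 4*mNN*beta/sqrt(N), as derived in the text.
assert abs((H - J) - (-4 * kappa / np.sqrt(N) - 4 * mNN * beta / np.sqrt(N))) < 1e-10
```

The identity holds because the $f^T \tilde{M} f$ blocks cancel, leaving only the cross term $2\kappa(g^{(2)} - g^{(1)})$ and the scalar term $m_{NN}((g^{(2)})^2 - (g^{(1)})^2)$.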


More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs

More information

Deterministic Approximation Algorithms for the Nearest Codeword Problem

Deterministic Approximation Algorithms for the Nearest Codeword Problem Deterministic Approximation Algorithms for the Nearest Codeword Problem Noga Alon 1,, Rina Panigrahy 2, and Sergey Yekhanin 3 1 Tel Aviv University, Institute for Advanced Study, Microsoft Israel nogaa@tau.ac.il

More information

1 Some loose ends from last time

1 Some loose ends from last time Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Kruskal s and Borůvka s MST algorithms September 20, 2010 1 Some loose ends from last time 1.1 A lemma concerning greedy algorithms and

More information

U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018

U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018 U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018 Lecture 3 In which we show how to find a planted clique in a random graph. 1 Finding a Planted Clique We will analyze

More information

On Locating-Dominating Codes in Binary Hamming Spaces

On Locating-Dominating Codes in Binary Hamming Spaces Discrete Mathematics and Theoretical Computer Science 6, 2004, 265 282 On Locating-Dominating Codes in Binary Hamming Spaces Iiro Honkala and Tero Laihonen and Sanna Ranto Department of Mathematics and

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

CS 173: Induction. Madhusudan Parthasarathy University of Illinois at Urbana-Champaign. February 7, 2016

CS 173: Induction. Madhusudan Parthasarathy University of Illinois at Urbana-Champaign. February 7, 2016 CS 173: Induction Madhusudan Parthasarathy University of Illinois at Urbana-Champaign 1 Induction February 7, 016 This chapter covers mathematical induction, and is an alternative resource to the one in

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Shortest paths with negative lengths

Shortest paths with negative lengths Chapter 8 Shortest paths with negative lengths In this chapter we give a linear-space, nearly linear-time algorithm that, given a directed planar graph G with real positive and negative lengths, but no

More information

CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory

CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory Tim Roughgarden & Gregory Valiant May 2, 2016 Spectral graph theory is the powerful and beautiful theory that arises from

More information

Reconstruction and Higher Dimensional Geometry

Reconstruction and Higher Dimensional Geometry Reconstruction and Higher Dimensional Geometry Hongyu He Department of Mathematics Louisiana State University email: hongyu@math.lsu.edu Abstract Tutte proved that, if two graphs, both with more than two

More information

The Structure of Trivalent Graphs with Minimal Eigenvalue Gap *

The Structure of Trivalent Graphs with Minimal Eigenvalue Gap * Journal of Algebraic Combinatorics, 6 (1997), 321 329 c 1997 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. The Structure of Trivalent Graphs with Minimal Eigenvalue Gap * BARRY GUIDULI

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

2. Intersection Multiplicities

2. Intersection Multiplicities 2. Intersection Multiplicities 11 2. Intersection Multiplicities Let us start our study of curves by introducing the concept of intersection multiplicity, which will be central throughout these notes.

More information

Notes for Lecture Notes 2

Notes for Lecture Notes 2 Stanford University CS254: Computational Complexity Notes 2 Luca Trevisan January 11, 2012 Notes for Lecture Notes 2 In this lecture we define NP, we state the P versus NP problem, we prove that its formulation

More information

A Combinatorial Bound on the List Size

A Combinatorial Bound on the List Size 1 A Combinatorial Bound on the List Size Yuval Cassuto and Jehoshua Bruck California Institute of Technology Electrical Engineering Department MC 136-93 Pasadena, CA 9115, U.S.A. E-mail: {ycassuto,bruck}@paradise.caltech.edu

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

P is the class of problems for which there are algorithms that solve the problem in time O(n k ) for some constant k.

P is the class of problems for which there are algorithms that solve the problem in time O(n k ) for some constant k. Complexity Theory Problems are divided into complexity classes. Informally: So far in this course, almost all algorithms had polynomial running time, i.e., on inputs of size n, worst-case running time

More information

Strongly chordal and chordal bipartite graphs are sandwich monotone

Strongly chordal and chordal bipartite graphs are sandwich monotone Strongly chordal and chordal bipartite graphs are sandwich monotone Pinar Heggernes Federico Mancini Charis Papadopoulos R. Sritharan Abstract A graph class is sandwich monotone if, for every pair of its

More information

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some

More information

Weakly Secure Data Exchange with Generalized Reed Solomon Codes

Weakly Secure Data Exchange with Generalized Reed Solomon Codes Weakly Secure Data Exchange with Generalized Reed Solomon Codes Muxi Yan, Alex Sprintson, and Igor Zelenko Department of Electrical and Computer Engineering, Texas A&M University Department of Mathematics,

More information

On improving matchings in trees, via bounded-length augmentations 1

On improving matchings in trees, via bounded-length augmentations 1 On improving matchings in trees, via bounded-length augmentations 1 Julien Bensmail a, Valentin Garnero a, Nicolas Nisse a a Université Côte d Azur, CNRS, Inria, I3S, France Abstract Due to a classical

More information

Simultaneous Diagonalization of Positive Semi-definite Matrices

Simultaneous Diagonalization of Positive Semi-definite Matrices Simultaneous Diagonalization of Positive Semi-definite Matrices Jan de Leeuw Version 21, May 21, 2017 Abstract We give necessary and sufficient conditions for solvability of A j = XW j X, with the A j

More information

Small Label Classes in 2-Distinguishing Labelings

Small Label Classes in 2-Distinguishing Labelings Also available at http://amc.imfm.si ISSN 1855-3966 (printed ed.), ISSN 1855-3974 (electronic ed.) ARS MATHEMATICA CONTEMPORANEA 1 (2008) 154 164 Small Label Classes in 2-Distinguishing Labelings Debra

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria 12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos

More information

Introduction to Group Theory

Introduction to Group Theory Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

On shredders and vertex connectivity augmentation

On shredders and vertex connectivity augmentation On shredders and vertex connectivity augmentation Gilad Liberman The Open University of Israel giladliberman@gmail.com Zeev Nutov The Open University of Israel nutov@openu.ac.il Abstract We consider the

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

Total Dominator Colorings in Paths

Total Dominator Colorings in Paths International J.Math. Combin. Vol.2(2012), 89-95 Total Dominator Colorings in Paths A.Vijayalekshmi (S.T.Hindu College, Nagercoil, Tamil Nadu, India) E-mail: vijimath.a@gmail.com Abstract: Let G be a graph

More information

A Remark on Alan H. Kawamoto: Nonlinear Dynamics in the Resolution of Lexical Ambiguity: A Parallel Distributed Processing Account

A Remark on Alan H. Kawamoto: Nonlinear Dynamics in the Resolution of Lexical Ambiguity: A Parallel Distributed Processing Account A Remark on Alan H. Kawamoto: Nonlinear Dynamics in the Resolution of Lexical Ambiguity: A Parallel Distributed Processing Account REGINALD FERBER Fachbereich 2, Universität Paderborn, D-33095 Paderborn,

More information

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3 MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications

More information

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Jan van den Heuvel and Snežana Pejić Department of Mathematics London School of Economics Houghton Street,

More information

Lecture 26: April 22nd

Lecture 26: April 22nd 10-725/36-725: Conve Optimization Spring 2015 Lecture 26: April 22nd Lecturer: Ryan Tibshirani Scribes: Eric Wong, Jerzy Wieczorek, Pengcheng Zhou Note: LaTeX template courtesy of UC Berkeley EECS dept.

More information

Minimal basis for connected Markov chain over 3 3 K contingency tables with fixed two-dimensional marginals. Satoshi AOKI and Akimichi TAKEMURA

Minimal basis for connected Markov chain over 3 3 K contingency tables with fixed two-dimensional marginals. Satoshi AOKI and Akimichi TAKEMURA Minimal basis for connected Markov chain over 3 3 K contingency tables with fixed two-dimensional marginals Satoshi AOKI and Akimichi TAKEMURA Graduate School of Information Science and Technology University

More information

Exercises * on Linear Algebra

Exercises * on Linear Algebra Exercises * on Linear Algebra Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 7 Contents Vector spaces 4. Definition...............................................

More information