Farrukh Jabeen
COM 58 Mathematical Methods In AI
Assignment # 4

This problem asks you to analyze a K-WTA network (K-winner-take-all). First read the K-WTA reference article, then answer the questions below. Read the Hopfield reference for a more general introduction to the topic. The PowerPoint presentation is also helpful as an introduction.

1. K-WTA neural networks are relatively simple: N neurons, with all the connection strengths equal to -1 (and no self-connections). Define the connection strength matrix W.

Answer: Let w_ij be the connection strength between neuron i and neuron j. Connection strengths are symmetric, w_ij = w_ji, with

    w_ij = -1 if i is not equal to j
    w_ii = 0  (no self-connections)

so W is an N x N symmetric matrix. In Java the initialization can be written as:

    for (int i = 0; i < this.m_numberofneurons; i++) {
        for (int j = 0; j < this.m_numberofneurons; j++) {
            // Set the initial weights: no self-connection, -1 everywhere else
            if (i == j) {
                this.m_weights[i][j] = 0;
            } else {
                this.m_weights[i][j] = -1;
            }
        }
    }
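For reference, here is a self-contained sketch of the same initialization that compiles on its own. The field names follow the fragment above; the class name and constructor are illustrative additions, not the assignment's actual network class.

    public class KwtaNetwork {
        private final int m_numberofneurons;   // N
        private final double[][] m_weights;    // W

        public KwtaNetwork(int numberOfNeurons) {
            this.m_numberofneurons = numberOfNeurons;
            this.m_weights = new double[numberOfNeurons][numberOfNeurons];
            for (int i = 0; i < this.m_numberofneurons; i++) {
                for (int j = 0; j < this.m_numberofneurons; j++) {
                    // No self-connections; every other connection strength is -1.
                    this.m_weights[i][j] = (i == j) ? 0.0 : -1.0;
                }
            }
        }
    }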
2. What are the eigenvalues and eigenvectors of W? (Work out the cases for N=2 and N=3, then try to generalize the result to arbitrary N.)

Eigenvectors exist only for square matrices. An eigenvector of a square matrix has the defining property that multiplication by the matrix is equivalent to multiplication by a scalar (i.e. the length of the vector may change, but its direction does not). If v is an eigenvector of W, then

    Wv = λv

where λ is a scalar called the eigenvalue associated with the eigenvector v.

For N = 2,

    W = [  0  -1 ]
        [ -1   0 ]

Computing eigenvalues and eigenvectors: we can rewrite the condition Wv = λv as (W - λI)v = 0, where I is the n x n identity matrix. For a nonzero solution v to exist, the determinant of W - λI must equal 0. We call p(λ) = det(W - λI) the characteristic polynomial of W; the eigenvalues of W are simply the roots of the characteristic polynomial of W.

    p(λ) = det [ 0-λ   -1  ]  =  λ² - 1 = 0,   so   λ² = 1   and   λ = ±1.
               [ -1   0-λ  ]

Thus λ1 = 1 and λ2 = -1 are the eigenvalues of W.

Now we find the eigenvectors corresponding to λ1 = 1. Let v = [v1, v2]^T and solve (W - λI)v = 0 (here λ = 1):

    [ -1  -1 ] [ v1 ]   [ 0 ]
    [ -1  -1 ] [ v2 ] = [ 0 ]

from which we obtain the equations

    -v1 - v2 = 0
    -v1 - v2 = 0

If we let v1 = t, then v2 = -t. All eigenvectors corresponding to λ1 = 1 are multiples of [1, -1]^T, and thus the eigenspace corresponding to λ1 = 1 is given by the span of [1, -1]^T. That is, { [1, -1]^T } is a basis of the eigenspace corresponding to λ1 = 1.

Similarly we can find the eigenvectors corresponding to λ2 = -1.
All eigenvectors corresponding to λ2 = -1 are multiples of [1, 1]^T, and thus the eigenspace corresponding to λ2 = -1 is given by the span of [1, 1]^T. That is, { [1, 1]^T } is a basis of the eigenspace corresponding to λ2 = -1.

For N = 3,

    W = [  0  -1  -1 ]
        [ -1   0  -1 ]
        [ -1  -1   0 ]

Similarly we can find the characteristic polynomial p(λ) = det(W - λI); the eigenvalues of W are simply its roots:

    p(λ) = det [ 0-λ   -1   -1  ]
               [ -1   0-λ   -1  ]
               [ -1   -1   0-λ  ]

Expanding the determinant gives p(λ) = -λ³ + 3λ - 2 = -(λ + 2)(λ - 1)², so λ1 = -2, λ2 = 1 and λ3 = 1 are the eigenvalues of W.

Now we find the eigenvectors corresponding to λ1 = -2. Let v = [v1, v2, v3]^T; then (W - (-2)I)v = 0, i.e. (W + 2I)v = 0, with

    W + 2I = [  2  -1  -1 ]
             [ -1   2  -1 ]
             [ -1  -1   2 ]

Now (W + 2I)v = 0 gives
    2v1 - v2 - v3 = 0
    -v1 + 2v2 - v3 = 0
    -v1 - v2 + 2v3 = 0

If we let v3 = t, then v1 = t and v2 = t. All eigenvectors corresponding to λ1 = -2 are multiples of [1, 1, 1]^T, and thus the eigenspace corresponding to λ1 = -2 is given by the span of [1, 1, 1]^T. That is, { [1, 1, 1]^T } is a basis of the eigenspace corresponding to λ1 = -2.

Similarly we can find the eigenvectors corresponding to λ2 = 1. Solving (W - I)v = 0 gives the single independent equation v1 + v2 + v3 = 0, so the eigenvectors corresponding to λ2 = 1 are the nonzero vectors whose components sum to zero. For example, [1, -1, 0]^T and [1, 0, -1]^T form a basis of the eigenspace corresponding to λ2 = 1. Since λ3 = 1 is the same repeated eigenvalue, its eigenvectors are the same as those for λ2: the eigenspace of the eigenvalue 1 is two-dimensional.

Eigenvalues and eigenvectors can be complex-valued as well as real-valued. The dimension of the eigenspace corresponding to an eigenvalue is less than or equal to the multiplicity of that eigenvalue. The techniques used here are practical for 2 x 2 and 3 x 3 matrices; eigenvalues and eigenvectors of larger matrices are often found using other techniques, such as iterative methods.

In general, let W be an n x n matrix. The eigenvalues of W are the roots of the characteristic polynomial p(λ) = det(W - λI). For each eigenvalue λ, we find the eigenvectors v = [v1, v2, ..., vn]^T by solving the linear system (W - λI)v = 0. The set of all vectors v satisfying Wv = λv is called the eigenspace of W corresponding to λ.
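The question also asks for a generalization to arbitrary N. The following short derivation is an added sketch (it is not in the original write-up); it uses the observation that W has 0 on the diagonal and -1 everywhere else, so W = I_N - J_N, where J_N is the N x N all-ones matrix:

\begin{align*}
W\,\mathbf{1} &= (I_N - J_N)\,\mathbf{1} = \mathbf{1} - N\,\mathbf{1} = (1-N)\,\mathbf{1},
  &&\text{so } \lambda = 1-N \text{ with eigenvector } \mathbf{1} = [1,\dots,1]^{T},\\
W v &= (I_N - J_N)\,v = v - \mathbf{0} = v
  \quad\text{whenever } \textstyle\sum_i v_i = 0,
  &&\text{so } \lambda = 1 \text{ with multiplicity } N-1 .
\end{align*}

For N = 2 this reproduces the eigenvalues 1 and -1 found above, and for N = 3 it gives -2 together with the repeated eigenvalue 1.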
3. Show that all K-WTA hyperplanes are parallel.

Consider a symmetric network with bounded continuous activation values (minimum m, maximum M). We will call it a K-winner network if it satisfies the following conditions: it always converges to a final state consisting of K units at the maximum activation value and N - K units at the minimum activation value, and the units at the maximum value are those units that had the larger initial states (the permissible exceptions are when there are ties between the winners). The initial state of a unit is defined to be the value of its external input ext_i; here we assume that external inputs are constant with respect to time but can be different for each unit.

The K-winner states are the K-winner corners of the N-dimensional hypercube formed by the m and M boundaries. Now consider the set of K-winner corners for a given K and N. If c = [c1, c2, ..., cN]^T is such a corner, then exactly K of the c_i are equal to M and the rest are equal to m, so

    c1 + c2 + ... + cN = KM + (N - K)m,

which is the same constant for every K-winner corner. So for a given choice of K, all the K-winner corners lie on a common (N-1)-dimensional hyperplane perpendicular to the vector [1, 1, ..., 1]^T. Since this normal vector does not depend on K, all K-winner hyperplanes are parallel.

4. Show that the k-winner states are stable equilibriums when the external input satisfies (assume m = 0 and M = 1): k - 1 < ext < k.

Hint: K-winner states are the corners of the hypercube corresponding to k neurons at maximum activation and the rest at minimum activation. To show that these states are stable equilibriums you must imagine the activation vector as being very close to, but not on, the hypercube corner. Then show that the dynamics of the network will "drive" the activation vector into the corner, as opposed to away from the corner. That is, if an activation is near the maximum, the update will add a positive amount, and if an activation is near the minimum, the update will add a negative amount.

Here we take the maximum neuron activation M = 1 and the minimum neuron activation m = 0. K-winner states are the corners of the hypercube corresponding to k neurons at maximum activation and the rest at minimum activation. To show that these states are stable equilibriums, we imagine the activation vector as being very close to, but not on, the hypercube corner, and then show that the dynamics of the network "drive" the activation vector into the corner rather than away from it: if an activation is near the maximum, the update adds a positive amount, and if an activation is near the minimum, the update adds a negative amount.

k-winner states are at a stable equilibrium when the neuron activations no longer change from one iteration to the next. Neuron activations are updated as follows:
(1) a_i(t+1) = a_i(t) + step * (M - a_i(t)) * (a_i(t) - m) * net_i(t)

where M is the maximum activation, m is the minimum activation, step is a constant that determines how fast the system converges to a stable equilibrium, and net_i is the net input to neuron i. The system enters a stable equilibrium when a_i(t+1) = a_i(t) for all neurons. In our system this occurs when every neuron has an activation of either M or m; when this occurs, (M - a_i(t)) * (a_i(t) - m) = 0 and therefore a_i(t+1) = a_i(t) for all neurons.

As the activation vector approaches a hypercube corner, (M - a_i(t)) * (a_i(t) - m) approaches its minimum (0 in our case), causing the neurons to get closer to either m or M by increasingly smaller increments. However, in order for the activations to keep moving toward the hypercube corner, the net_i(t) factor must be positive for the neurons approaching M (the winners) and negative for the neurons approaching m (the losers):

(2a) for the k winners:    net_i(t) > 0
(2b) for the N - k losers: net_i(t) < 0

net_i(t) is calculated as

(3) net_i(t) = [W a(t) + e(t)]_i

where W is the N x N matrix of connection strengths (w_ij), with 0 where i = j and -1 everywhere else, and e is the external input. In the hypercube corner where k neurons have activation M (1 in our case) and N - k neurons have activation m (0 in our case), the following holds:

(4a) for the k winners:    net_i(t) = -(k - 1) + e_i
(4b) for the N - k losers: net_i(t) = -k + e_i

However, as the system approaches the hypercube corner, the winners are not quite all at M and the losers are not quite all at m, so

(5a) for the k winners:    net_i(t) < -(k - 1) + e_i
(5b) for the N - k losers: net_i(t) > -k + e_i

With our assumptions regarding the sign of net_i(t) from (2), this becomes:

(6a) for the k winners:    0 < -(k - 1) + e
(6b) for the N - k losers: 0 > -k + e

Solving for e yields:

(7) k - 1 < e < k

5. Show that the dynamics defined by da/dt = net is a gradient descent on the energy function.

Gradient descent is a first-order optimization algorithm: to find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or the approximate gradient) of the function at the current point. Suppose that in the K-winner network we have number of neurons = 20, maximum neuron activation = 1, minimum neuron activation = 0, and number of iterations = 500. The resulting graphs show that the energy decreases at each iteration; Octave was used to plot the results.
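The original plots were produced in Octave and are not reproduced here. The following Java sketch runs the same kind of experiment (20 neurons, M = 1, m = 0, 500 iterations). The step size, the choice of external inputs, and the energy expression E = -1/2 a^T W a - e^T a are assumptions consistent with equations (1) and (3) above, not values taken from the assignment; printing the energy lets one check that it decreases from one iteration to the next.

    import java.util.Random;

    public class KwtaSimulation {
        public static void main(String[] args) {
            final int n = 20;           // number of neurons
            final double M = 1.0;       // maximum activation
            final double m = 0.0;       // minimum activation
            final double step = 0.1;    // assumed step size (not given in the assignment)
            final int iterations = 500;
            final int k = 5;            // expected number of winners

            // Weight matrix: 0 on the diagonal, -1 everywhere else.
            double[][] w = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    w[i][j] = (i == j) ? 0.0 : -1.0;

            // External inputs chosen inside (k - 1, k), so k-winner corners are stable.
            double[] e = new double[n];
            for (int i = 0; i < n; i++) e[i] = k - 0.5;

            // Initial activations: random values strictly inside (m, M);
            // the k units with the largest initial activations should win.
            Random rng = new Random(42);
            double[] a = new double[n];
            for (int i = 0; i < n; i++) a[i] = 0.4 + 0.2 * rng.nextDouble();

            for (int t = 0; t < iterations; t++) {
                // net_i(t) = [W a(t) + e]_i, equation (3)
                double[] net = new double[n];
                for (int i = 0; i < n; i++) {
                    net[i] = e[i];
                    for (int j = 0; j < n; j++) net[i] += w[i][j] * a[j];
                }
                // Update rule, equation (1)
                for (int i = 0; i < n; i++)
                    a[i] += step * (M - a[i]) * (a[i] - m) * net[i];

                // Assumed Hopfield-style energy: E = -1/2 a^T W a - e^T a
                double energy = 0.0;
                for (int i = 0; i < n; i++) {
                    energy -= e[i] * a[i];
                    for (int j = 0; j < n; j++) energy -= 0.5 * w[i][j] * a[i] * a[j];
                }
                if (t % 100 == 0) System.out.printf("iteration %3d: energy = %.4f%n", t, energy);
            }

            // After convergence, roughly k activations should be near M and the rest near m.
            int winners = 0;
            for (double ai : a) if (ai > 0.5) winners++;
            System.out.println("units above 0.5 after " + iterations + " iterations: " + winners);
        }
    }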
Gradient descent with respect to the energy function is implemented by the net-input calculations: the energy of the network is E(a) = -(1/2) a^T W a - e^T a, so that net_i = -∂E/∂a_i, and the corner-seeking part is implemented by the factor (M - a_i)(a_i - m) in the update. So interactive activation can be viewed geometrically as a two-step process.

Step 1: For each unit calculate the negative gradient via net_i = -∂E/∂a_i. The net vector points into a particular orthant of N-dimensional space. That orthant is consistent with a particular corner of the N-dimensional hypercube; call it corner(t). Form the vector difference between the current activations a(t) and this corner vector: corner(t) - a(t).
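The gradient-descent claim in question 5 can be made explicit using the energy function just stated. The following short derivation is an added sketch, not part of the original write-up; in discrete time the same conclusion holds for a sufficiently small step size, which is the statement made after Step 2 below.

\begin{align*}
\frac{dE}{dt} &= \sum_i \frac{\partial E}{\partial a_i}\,\frac{da_i}{dt}
   \;=\; -\sum_i \mathrm{net}_i\,\frac{da_i}{dt},\\
\frac{da_i}{dt} = \mathrm{net}_i
  &\;\Longrightarrow\; \frac{dE}{dt} = -\sum_i \mathrm{net}_i^{\,2} \;\le\; 0,\\
\frac{da_i}{dt} = (M-a_i)(a_i-m)\,\mathrm{net}_i
  &\;\Longrightarrow\; \frac{dE}{dt} = -\sum_i (M-a_i)(a_i-m)\,\mathrm{net}_i^{\,2} \;\le\; 0
  \quad\text{for } m \le a_i \le M .
\end{align*}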
Step 2: Multiplying by the corner-seeking factor (M - a_i(t))(a_i(t) - m) gives the components of the vector that the dynamics will actually follow. That is, the update is Δa_i(t) = step * (M - a_i(t)) * (a_i(t) - m) * net_i(t), where net_i(t) = -∂E/∂a_i, and each component of this update has the same sign as the corresponding component of corner(t) - a(t).

From this geometric point of view we can say that the dynamics follow a vector that points into an orthant on the descent side of the gradient; thus, for sufficiently small step size, the energy of the network decreases at each iteration.

References

Dr. Wolfe, class notes.
PA4&dq=eigenvalues+neural+network&source=bl&ots=svMyC_UB_N&sig=OCRpobxfriavqpLB5hG_5FdhqU&hl=en&ei=RLZSv3DKoP8sQPSrbGSBg&sa=X&oi=book_result&ct=result&resnum=5&ved=0CB4Q6AEwBDgU#v=onepage&q=eigenvalues%20neural%20network&f=false