Best Experienced Payoff Dynamics and Cooperation in the Centipede Game: Online Appendix


William H. Sandholm, Segismundo S. Izquierdo, and Luis R. Izquierdo

February 8, 2018

Contents

I     Exact and numerical calculation in Mathematica
      I.1   Algebraic numbers and solutions to polynomial equations
      I.2   Algorithms from computational algebra
      I.3   Numerical evaluation and precision tracking
II    The BEP_Centipede.nb notebook
      II.1  Exact analysis
      II.2  Numerical analysis
      II.3  More on computation of approximate rest points and eigenvalues
III   Dimension reduction for local stability analysis
IV    Time until convergence under different test-set rules
V     Analyses of repulsion from the backward induction state
      V.1   Test-two, stick-if-tie
      V.2   Test-adjacent, stick-if-tie
VI    Formulas for BEP(τ, 1, β) dynamics in Centipede
      VI.1  Test-all
      VI.2  Test-two
      VI.3  Test-adjacent
VII   Multinomial formulas for BEP(τ, κ, β) dynamics
VIII  Approximate components of interior rest points
      VIII.1  Test-all
      VIII.2  Test-two
      VIII.3  Test-adjacent
IX    Approximate eigenvalues of DV(ξ∗)
      IX.1  Test-all
      IX.2  Test-two
      IX.3  Test-adjacent

Affiliations: Department of Economics, University of Wisconsin; Department of Industrial Organization, Universidad de Valladolid; Department of Civil Engineering, Universidad de Burgos.

I Exact and numerical calculation in Mathematica

In this section we describe the built-in Mathematica functions we use to prove exact (analytical) results and to obtain numerical evaluations of exact expressions.

I.1 Algebraic numbers and solutions to polynomial equations

To obtain our analytical results, we take advantage of Mathematica's ability to perform exact computations using algebraic numbers. As described in Strzeboński (1996, 1997), Mathematica represents algebraic numbers using Root objects, with Root[poly, k] designating one of the roots of the minimal polynomial poly. The index k is used to single out a particular root of poly, with the lowest indices referring to the real roots of poly in increasing order, and the higher indices referring to the complex roots in a more complicated way. Root objects also contain a hidden third element that specifies an isolating set for the root, meaning a set containing the root of poly in question and no others.

The forms of isolating sets depend on whether roots are isolated using arbitrary-precision floating point methods or exact methods. If Mathematica's default settings are used, then roots are isolated using arbitrary-precision floating point methods based on the Jenkins-Traub algorithm (Jenkins (1969), Jenkins and Traub (1970a,b)), the workhorse numerical algorithm for this purpose. While in theory this algorithm always isolates all real and complex roots of poly in disjoint disks in the complex plane, flawless implementation of the algorithm is difficult; see Strzeboński (1997, p. 649). If we instead use the setting

    SetOptions[Root, ExactRootIsolation -> True]

then Mathematica isolates roots using exact methods, that is, methods that only use rational number calculations. Real roots of polynomials are isolated in disjoint intervals using the Vincent-Akritas-Strzeboński method, which is based on Descartes' rule of signs and a classic theorem of Vincent; see Akritas et al. (1994) and Akritas (2010). Complex roots are isolated in rectangles using the Collins and Krandick (1992) method.

Exact roots of univariate polynomials (and much else) can be computed using the Mathematica function Reduce. When computing the exact rest points of BEP dynamics, we apply Reduce to the output of the function GroebnerBasis, described next.
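To make the preceding description concrete, here is a small, self-contained Mathematica example (not taken from the notebook; the polynomial is an arbitrary stand-in) illustrating Root objects, exact root isolation, and Reduce:

    (* Use exact (rational-arithmetic) root isolation instead of the default
       arbitrary-precision isolation based on the Jenkins-Traub algorithm. *)
    SetOptions[Root, ExactRootIsolation -> True];

    (* Root[poly, k] designates the k-th root; here the unique real root of
       x^3 - x - 1, represented as an exact algebraic number. *)
    r = Root[#^3 - # - 1 &, 1];

    Precision[r]          (* Infinity: r is an exact number *)
    N[r, 30]              (* numerical value with 30 guaranteed digits *)
    RootReduce[r^3 - r]   (* evaluates to 1, confirming that r solves the polynomial *)

    (* Reduce computes exact roots of polynomial equations. *)
    Reduce[x^3 - x - 1 == 0 && Element[x, Reals], x]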

I.2 Algorithms from computational algebra

The Mathematica function GroebnerBasis is an implementation of a proprietary variation of the algorithm of Buchberger (1965, 1970).¹ Choosing the option Method -> "Buchberger" causes Mathematica to use the original Buchberger algorithm, which runs considerably more slowly than the default algorithm; however, there was only one case in which the default algorithm produced a Gröbner basis and the Buchberger algorithm failed to terminate.

The Mathematica function CylindricalDecomposition implements the Collins (1975) cylindrical algebraic decomposition algorithm with various improvements.² If this function is run in its default mode, it makes use of arbitrary-precision arithmetic. To force Mathematica to work with algebraic numbers, one uses the following settings:

    SetOptions[Root, ExactRootIsolation -> True]
    SetSystemOptions["InequalitySolvingOptions" -> "CADDefaultPrecision" -> Infinity]

Unfortunately, these settings cause CylindricalDecomposition to run extremely slowly, and in the case of BEP dynamics in Centipede it only generates a result in two-dimensional cases. Even if arbitrary-precision arithmetic is permitted, the function only generates a result when the dimension is 2 or 3.

¹ An up-to-date presentation of Gröbner basis algorithms, including many improvements on Buchberger's algorithm, can be found in Cox et al. (2015).
² See reference.wolfram.com/language/tutorial/complexpolynomialsystems.html for details.

I.3 Numerical evaluation and precision tracking

When Mathematica performs calculations using arbitrary-precision numbers x, it keeps track of the digits whose correctness it views as guaranteed. Precision[x] reports the number of correct base 10 significant digits of x: for instance, if x = d_0.d_1d_2d_3d_4 × 10^k, the precision is the number of correct digits in d_0d_1d_2d_3d_4. Accuracy[x] is the number of correct base 10 digits of x to the right of the decimal point. Exact numbers in Mathematica (e.g., integers, rational numbers, and algebraic numbers) have Precision equal to ∞.

To perform certain parts of our analysis (in particular, checking that an eigenvalue of a derivative matrix has negative real part), we need to numerically evaluate exact numbers and expressions. We do so using the Mathematica function N: N[expr, n] evaluates expr as an arbitrary-precision number at guaranteed precision n. When Mathematica performs computations using arbitrary-precision numbers, it maintains precision and accuracy guarantees, the values of which can be accessed using the Precision and Accuracy functions.

While in principle Mathematica's precision tracking should not make mistakes, there are at least two reasons for exercising caution when using it in proofs. First, Mathematica's precision tracking is not based on interval arithmetic, which represents real and complex numbers using exact intervals (in R) and rectangles (in C) that contain the numbers in question, and which relies on theorems that define rules for performing arithmetic and other mathematical operations on these intervals and rectangles that maintain containment guarantees (Alefeld and Herzberger (1983), Tucker (2011)).
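As a toy illustration of the workflow just described (the polynomial system below is a stand-in, not the Centipede rest-point system), GroebnerBasis, Reduce, and precision-tracked numerical evaluation can be combined as follows:

    (* A toy polynomial system standing in for the rest-point equations. *)
    polys = {x^2 + y^2 - 1, x - y};

    (* Compute a Groebner basis with the original Buchberger algorithm. *)
    gb = GroebnerBasis[polys, {x, y}, Method -> "Buchberger"];

    (* Solve the resulting system exactly; the solutions are algebraic numbers. *)
    sols = Reduce[And @@ Thread[gb == 0], {x, y}]

    (* Numerical evaluation with guaranteed precision tracking. *)
    val = N[Sqrt[2]/2, 50];
    {Precision[val], Accuracy[val]}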

Instead, Mathematica's precision bounds are sometimes obtained using faster methods of the Jenkins-Traub variety (see Section I.1), which work correctly in theory but which are difficult to implement perfectly. Second, Mathematica's precision tracking is a black box: the specific algorithms it employs are proprietary.

We contend with these issues by restricting our use of Mathematica's numerical evaluation and precision tracking to a few clearly delineated cases: the evaluation of algebraic numbers, and the basic arithmetic operations of addition, subtraction, multiplication, and division. In particular, we do not use Mathematica for precision tracking in the computation of matrix inverses or the solution of linear systems, operations for which interval arithmetic does not generally provide clean answers (Alefeld and Herzberger (1983)). While one could insist that interval arithmetic be used for all non-exact calculations, we chose not to do so.

II The BEP_Centipede.nb notebook

In this section we describe the main functions from the BEP_Centipede.nb notebook, which contains all of the procedures we use to analyze BEP dynamics. Section II.1 describes functions used to prove analytical results, and Section II.2 describes the functions used in numerical analyses and in approximations with error bounds (cf. Proposition III.4). More details about the use of these functions are provided in the BEP_Centipede.nb notebook itself. Section II.3 explains the algorithms used to compute numerical values of rest points of the dynamics and eigenvalues of their derivative matrices.

Unless stated otherwise, the functions described below take a test-set rule τ ∈ {τ_all, τ_two, τ_adj}, a tie-breaking rule β ∈ {β_min, β_stick, β_unif}, and a length d of the Centipede game as parameters. All functions besides the last two are for BEP dynamics with number of trials κ = 1. The BEP_Centipede.nb notebook includes examples of the use of each of the functions.

II.1 Exact analysis

The functions for exact analysis of BEP dynamics in Centipede are as follows:

ExactRestPoints: Uses GroebnerBasis and Reduce to compute the exact rest points of the dynamic.

InstabilityOfVertexRestPoint: Conducts an analysis of the local stability of the vertex rest point ξ†. To do this, the function computes the derivative matrix DV̄(ξ†) of the dynamic and the eigenvalues and eigenvectors of DV(ξ†), where V: aff(Ξ) → TΞ (see Appendix A). Finally, the function reports whether one can conclude that ξ† is unstable. The function was not used explicitly in our analysis. Instead, we used it to determine the form of the derivative matrix, eigenvalues, and eigenvectors for arbitrary values of d; see Appendix A and Section V below.

LocalStabilityOfInteriorRestPoint: Conducts an analysis of the local stability of the interior rest point ξ∗. To do this, the function computes a rational approximation of the exact interior rest point ξ∗, evaluates the eigenvalues of DV at this approximation, evaluates the perturbation bound from Proposition III.4 (which combines arguments from Appendix B and Section III), and reports whether one can conclude that ξ∗ is asymptotically stable. See Section III for further details.

GlobalStabilityOfInteriorRestPoint: Conducts an analysis of the global stability of the interior rest point ξ∗. To do this, the function uses CylindricalDecomposition to determine whether the relevant Lyapunov function (see Sections 5 and 6.1) is a strict Lyapunov function for the interior rest point ξ∗ on the domain Ξ ∖ {ξ∗}. We did not use this function in our analysis because it fails to terminate under the settings for exact computation described in Section I.3.

II.2 Numerical analysis

The following functions from BEP_Centipede.nb are used for numerical analysis and as subroutines for LocalStabilityOfInteriorRestPoint.

FloatingPointApproximateRestPoint: Computes a floating point approximation of the stable interior rest point of the BEP dynamic. See Section II.3 for details.

RationalApproximateRestPoint: Computes a rational approximation of the stable interior rest point of the BEP dynamic. See Section II.3 for details.

EigenvaluesAtRationalApproximateRestPoint: Computes the exact eigenvalues of DV evaluated at the rational approximation to the interior rest point obtained from a call to RationalApproximateRestPoint. See Section II.3 for details.

NEigenvaluesAtRationalApproximateRestPoint: Computes the eigenvalues of DV, using arbitrary-precision arithmetic, at a 16-digit precision approximation to the rational point computed using RationalApproximateRestPoint. See Section II.3 for details.

NumericalGlobalStabilityOfInteriorRestPointLyapunov: Evaluates a floating-point approximation of the time derivative L̇(ξ) = ∇L(ξ)·V(ξ) of the appropriate candidate Lyapunov function L for the interior rest point ξ∗, reporting instances in which the time derivative is not negative should any exist. The (presumably large number of) states ξ at which to evaluate L̇(ξ) is chosen by the user.

NumericalGlobalStabilityOfInteriorRestPointNDSolve: Computes numerical solutions to the BEP dynamic from initial conditions provided by the user, and reports whether any of these numerical solutions fails to converge to a neighborhood of the interior rest point ξ∗.

NDSolveMeanDynamics: Uses Mathematica's NDSolve function to compute a numerical solution to the BEP dynamic from an initial condition provided by the user. The solution is computed until the time at which the norm of the law of motion is sufficiently small, where what constitutes "sufficiently small" can be chosen by the user. The function also graphs the components of the state as a function of time, and reports the terminal point and the time at which this point is reached.

FloatingPointApproximateRestPointTestAllMinIfTieWithBIAgents: Computes a floating point approximation of the stable interior rest point of the dynamics in a population consisting of mass b of backward induction agents and mass 1 − b of BEP(τ_all, 1, β_min) agents. The value of b is specified by the user. This function was used to produce Figures 5 and 6. See Section II.3 for details.

FloatingPointApproximateRestPointTestTwoMinIfTieWithBIAgents: Computes a floating point approximation of the stable interior rest point of the dynamics in a population consisting of mass b of backward induction agents and mass 1 − b of BEP(τ_two, 1, β_min) agents. The value of b is specified by the user. This function was used to produce Figure 7. See Section II.3 for details.

FloatingPointApproximateRestPointTestAllMinIfTieManyTrials: Uses Mathematica's FindRoot function to compute a floating point approximation of the stable interior rest point of the BEP(τ_all, κ, β_min) dynamic, where the number of trials κ is specified by the user. This function was used in producing Figures 8 and 9.

NDSolveMeanDynamicsTestAllMinIfTieManyTrials: Uses Mathematica's NDSolve function to compute a numerical solution of the BEP(τ_all, κ, β_min) dynamic, where the number of trials κ and the initial condition of the solution are specified by the user. The solution is computed until the time at which the norm of the law of motion is sufficiently small, where what constitutes "sufficiently small" can be chosen by the user. The function also graphs the components of the state as a function of time, and reports the terminal point and the time at which this point is reached. The function was used in producing Figures 8, 9, and 10.

II.3 More on computation of approximate rest points and eigenvalues

The BEP_Centipede.nb notebook computes approximate rest points of BEP(τ, 1, β) dynamics using the Euler method: a sequence {ξ_t}, t = 0, …, T, is computed starting from an initial condition ξ_0 by iteratively applying

(1)    ξ_{t+1} = ξ_t + h V̄(ξ_t),

where V̄: R^s → R^s is the (extended) law of motion of the dynamics and h is the step size of the algorithm. This algorithm is run in two sequential stages, to be described next.

When one of the first three FloatingPointApproximateRestPoint functions from Section II.2 is called, algorithm (1) is run using IEEE 754 standard double-precision floating-point arithmetic. The step size of the algorithm is set to h = 4, and the initial condition is ξ_0 = (x_0, y_0) ∈ Ξ = X × Y, where x_0 and y_0 are the barycenters of the simplices X and Y. Several thousand iterations of (1) are run, and the output of each iteration is projected onto Ξ to minimize the accumulation of roundoff errors from the floating-point calculation. The floating-point numbers obtained in this way are very close to the exact quantities they approximate, but their digits (i.e., the values of the d_i in x = d_0.d_1d_2d_3d_4 × 10^k) may all be wrong, especially in small numbers, since many of the exact numbers we aim to approximate lie outside the range of IEEE 754 double precision.³

³ For example, the IEEE 754 double-precision representation of some of the smallest numbers appearing in Table 1 below is 0, since those numbers are well below the smallest positive IEEE 754 double-precision number.

To address this issue, the function RationalApproximateRestPoint begins with a call to FloatingPointApproximateRestPoint, and then uses the output of this procedure to create the initial condition for a second stage that employs rational arithmetic. This initial condition is the rational point in Ξ that lies closest to the floating-point output of the first stage. The step size h is set to 1 in the second stage, since overshooting is no longer a problem in the neighborhood of the exact rest point. Increment (1) is executed repeatedly using rational arithmetic until it locates a rational point ξ_T that is an approximate fixed point of (1), in the sense that ξ_T and ξ_{T+1} = ξ_T + V̄(ξ_T) agree to 6 digits of precision for numbers greater than or equal to 10^(-4), and to 3 digits of precision for smaller numbers. This agrees with the format we use to report rest points in the tables in Section VIII.

NEigenvaluesAtRationalApproximateRestPoint computes the eigenvalues of DV, using arbitrary-precision arithmetic, at a 16-digit precision approximation to the rational point computed by calling RationalApproximateRestPoint. The use of arbitrary precision allows us to keep track of the precision of the computed eigenvalues. Proposition III.4 provides a bound on the distances between the eigenvalues of DV at the rational approximation and the eigenvalues of DV(ξ∗) at the exact rest point. In the tables in Section IX, the reported eigenvalues, which are arbitrary-precision approximations to the (algebraic-valued) eigenvalues of DV at the rational approximation, are shown with 5 digits of precision for numbers greater than or equal to 1, 4 digits of precision for numbers greater than or equal to 10, and 3 digits of precision for smaller numbers.
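The following minimal Mathematica sketch mimics the two-stage scheme just described for a generic law of motion; the function V, the projection step, and the tolerance are hypothetical stand-ins rather than the notebook's implementation:

    (* Euler step (1) followed by a crude projection onto Ξ that simply
       renormalizes each population's weights; V is a stand-in for the
       extended law of motion, returning a pair of vectors {xdot, ydot}. *)
    project[xi_] := Map[#/Total[#] &, xi];
    eulerStep[V_, xi_, h_] := project[xi + h V[xi]];

    (* Stage 1: machine double-precision arithmetic from an initial state xi0. *)
    stage1[V_, xi0_, h_, iters_] := Nest[eulerStep[V, #, h] &, N[xi0], iters];

    (* Stage 2: rational arithmetic from a rational point near the stage-1
       output, iterated until successive iterates are within tol of each other. *)
    stage2[V_, xi0_, h_, tol_] :=
      NestWhile[eulerStep[V, #, h] &, Rationalize[xi0, 10^-12],
        Max[Abs[Flatten[eulerStep[V, #, h] - #]]] > tol &];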

III Dimension reduction for local stability analysis

This section presents the dimension reduction step used to reduce the computational demands of computing eigenvalue perturbation bounds, and presents a version of this bound (Proposition III.4) that incorporates all of the simplifications introduced here and in Appendix B.

Write a = s_1, b = s_2, and s = s_1 + s_2, and recall that d = s − 2. The computations we use to prove local stability of the interior rest point require calculations involving the derivative matrices DV̄(ξ) ∈ R^{s×s}, which quickly become very computationally demanding as the size of the matrix grows. Since we are only interested in the action of DV̄(ξ) on the d-dimensional subspace TΞ, it should be possible to perform the desired calculations using matrices in R^{d×d}. We now show explicitly how this is done. The analysis is a simple extension of arguments from Sandholm (2007, p. 661).

Define the orthonormal matrix R ∈ R^{s×s} to be block diagonal, with an a × a upper block and a b × b lower block; its explicit entries are not reproduced here, but the defining properties of the blocks are described below. Define J ∈ R^{d×s} to be the matrix that deletes the last coordinate of each block, and define R̄ ∈ R^{d×s} by R̄ = JR. In words, R̄ is R with the last row in each block removed.

Let e_a denote the last standard basis vector in R^a. The upper diagonal block of R rotates span{1, e_a} ⊂ R^a about its orthogonal complement by an angle of cos^(-1)(1/√a). It is easy to verify that this block maps 1 to √a e_a, and so, by virtue of being orthonormal, maps TΞ_1 isometrically to {x ∈ R^a : x_a = 0}. Likewise, the lower diagonal block of R maps 1 to √b e_b and maps TΞ_2 isometrically to {y ∈ R^b : y_b = 0}. Altogether, premultiplying z ∈ R^s by R double-rotates z so that its TΞ component lies in {x ∈ R^a : x_a = 0} × {y ∈ R^b : y_b = 0}. Then premultiplying the result by J removes the now-superfluous final coordinates of each block.
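The explicit entries of R are not reproduced above, but its defining properties can be illustrated with a hypothetical construction built from Mathematica's RotationMatrix; this is a sketch of some matrix with the stated properties, not necessarily the R used by the notebook:

    (* Small example: a = 4, b = 3, so s = 7 and d = 5. Each block rotates the
       all-ones vector onto Sqrt[a] e_a (resp. Sqrt[b] e_b). *)
    a = 4; b = 3; s = a + b; d = s - 2;
    blockA = RotationMatrix[{ConstantArray[1, a], UnitVector[a, a]}];
    blockB = RotationMatrix[{ConstantArray[1, b], UnitVector[b, b]}];
    R = ArrayFlatten[{{blockA, 0}, {0, blockB}}];

    (* J deletes the last coordinate of each block (rows a and s). *)
    J = IdentityMatrix[s][[Join[Range[a - 1], Range[a + 1, s - 1]]]];
    Rbar = J.R;

    (* Checks: R is orthonormal, the upper block maps 1 to Sqrt[a] e_a,
       and Rbar Rbar' is the d x d identity. *)
    {Simplify[Transpose[R].R] == IdentityMatrix[s],
     Simplify[blockA.ConstantArray[1, a]] == Sqrt[a] UnitVector[a, a],
     Simplify[Rbar.Transpose[Rbar]] == IdentityMatrix[d]}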

Recall from Appendix A that the vector field V maps aff(Ξ) to TΞ, so that DV(ξ)z ∈ TΞ for all ξ ∈ Ξ and z ∈ TΞ, and that the extension V̄ of V maps R^s to itself, so that DV̄(ξ) also maps R^s to itself.

Proposition III.1. If ξ ∈ aff(Ξ), then DV(ξ) and R̄ DV̄(ξ) R̄′ have the same eigenvalues, including multiplicities.

Proposition III.1 is an immediate corollary of the following lemma:

Lemma III.2. Suppose that M ∈ R^{s×s} maps TΞ into itself, and let z ∈ C^s be an element of the complexification of TΞ. Then (λ, z) is an eigenvalue/eigenvector pair of M if and only if (λ, R̄z) is an eigenvalue/eigenvector pair of R̄ M R̄′.

Proof. The proof follows Sandholm (2007). To start, recall from Appendix A that Φ ∈ R^{s×s} is the orthogonal projection of R^s onto TΞ, and note the following geometrically obvious facts, each of which can be verified by direct computation:

(2)    R̄′R̄ = R′J′JR = Φ ∈ R^{s×s},
(3)    R̄R̄′ = JRR′J′ = JJ′ = I ∈ R^{d×d}.

If Mz = λz, then since Mz and z are in the complexification of TΞ, we have R̄MΦz = λR̄Φz; thus (2) and (3) imply that R̄MR̄′R̄z = λR̄R̄′R̄z = λR̄z. Conversely, if R̄MR̄′R̄z = λR̄z, then (2) implies that R̄MΦz = R̄Mz = λR̄z, and so Mz = λz. ∎

The following result is also needed to obtain the eigenvalue bound.

Proposition III.3. For M, M̂ ∈ R^{s×s}, ‖R̄MR̄′ − R̄M̂R̄′‖ ≤ 4‖M − M̂‖.

Proof. Using the submultiplicativity of matrix norms and the orthonormality of R,

    ‖R̄MR̄′ − R̄M̂R̄′‖ ≤ ‖RMR′ − RM̂R′‖ = ‖R(M − M̂)R′‖ ≤ ‖R‖ ‖M − M̂‖ ‖R′‖ ≤ 4‖M − M̂‖. ∎

Using the results above and the arguments from Appendix B, including the definition

    Δ = max_{i∈S} max_{k∈S} Σ_{j∈S} ∂²V̄_i/∂ξ_j∂ξ_k (1, …, 1, 1, …, 1),

we obtain the following result:

Proposition III.4. Suppose that R̄DV̄(ξ)R̄′ is complex diagonalizable with R̄DV̄(ξ)R̄′ = Q̄ diag(λ) Q̄^(-1), and let λ∗ be an eigenvalue of DV(ξ∗). Then there is an eigenvalue λ_i of DV(ξ) such that

(4)    |λ∗ − λ_i| < (8Δ / d^{d/2−1}) · (tr(Q̄∗Q̄)^{d/2} / |det(Q̄)|) · Σ_{k∈S} |ξ_k − ξ∗_k|.

When the function InstabilityOfVertexRestPoint from the BEP_Centipede.nb notebook is called, the eigenvectors in the matrix Q̄ are chosen to have Euclidean norm 1, as this tends to lower the bound on the condition number of Q̄ (see Guggenheimer et al. (1995)). If this normalization were performed exactly, then we would have tr(Q̄∗Q̄) = d, allowing us to simplify inequality (4). However, because InstabilityOfVertexRestPoint performs the normalization after converting the entries of Q̄ to arbitrary-precision numbers, it uses the original inequality (4). Of course, the effect of this choice on the bound we obtain is essentially nil.

IV Time until convergence under different test-set rules

Figures 1, 2, and 3 present numerical solutions to the BEP(τ, 1, β_min) dynamics with τ = τ_all, τ_two, and τ_adj in a Centipede game of length d = 10. All solutions have initial condition ξ = (x, y) = ((.99, .01, 0, …), (.99, .01, 0, …)).⁴ The computation of the solution is cut off when it enters a Euclidean ball of radius 10^(-3) centered at the stable rest point ξ∗ of each dynamic. The numbers of time units required until the ball is reached are 1011 under τ_all, 3981 under τ_two, and 1507 under τ_adj, as suggested in Section 6.1.

Figures 4, 5, and 6 present solutions as above, but for a Centipede game of length d = 20. In this case the numbers of time units required until the ball of radius 10^(-3) around ξ∗ is reached are 98 under τ_all, 7640 under τ_two, and 4165 under τ_adj, again agreeing with the claim in Section 6.1.

It is noteworthy that the time until convergence under BEP(τ_all, 1, β_min) is shorter in the game of length 20 than in the game of length 10. Judging from Figures 1 and 4, it appears that when revising agents test all strategies, having more strategies to test makes them abandon strategy 1 more quickly in favor of more cooperative strategies, which in turn increases the chances that still more cooperative strategies will be chosen during subsequent revisions.

⁴ The results are similar for other choices of ξ with x_1 = y_1 = .99.
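A minimal sketch of how such cut-off numerical solutions can be computed with NDSolve (in the spirit of NDSolveMeanDynamics, but with a hypothetical law of motion V and a user-chosen tolerance, and stopping on the norm of the law of motion rather than on the distance to the rest point) is:

    (* Integrate xi'(t) == V[xi(t)] until the norm of the law of motion falls
       below tol, then return the stopping time and terminal state. *)
    solveUntilRest[V_, xi0_, tol_ : 10^-3, tmax_ : 10^4] :=
      Module[{xi, t, sol, tstop = tmax},
        sol = First @ NDSolve[
           {xi'[t] == V[xi[t]], xi[0] == xi0,
            WhenEvent[Norm[V[xi[t]]] < tol, {tstop = t, "StopIntegration"}]},
           xi, {t, 0, tmax}];
        {tstop, (xi /. sol)[tstop]}];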

Figure 1: Solution to the BEP(τ_all, 1, β_min) dynamic from initial condition ξ in Centipede of length d = 10. Panel (i): population 1; panel (ii): population 2.

Figure 2: Solution to the BEP(τ_two, 1, β_min) dynamic from initial condition ξ in Centipede of length d = 10. Panel (i): population 1; panel (ii): population 2.

Figure 3: Solution to the BEP(τ_adj, 1, β_min) dynamic from initial condition ξ in Centipede of length d = 10. Panel (i): population 1; panel (ii): population 2.

Figure 4: Solution to the BEP(τ_all, 1, β_min) dynamic from initial condition ξ in Centipede of length d = 20. Panel (i): population 1; panel (ii): population 2.

Figure 5: Solution to the BEP(τ_two, 1, β_min) dynamic from initial condition ξ in Centipede of length d = 20. Panel (i): population 1; panel (ii): population 2.

Figure 6: Solution to the BEP(τ_adj, 1, β_min) dynamic from initial condition ξ in Centipede of length d = 20. Panel (i): population 1; panel (ii): population 2.

V Analyses of repulsion from the backward induction state

V.1 Test-two, stick-if-tie

Under the BEP(τ_two, 1, β_stick) dynamic, the derivative matrix DV̄(ξ†) at the backward induction state ξ† has entries that can be written in terms of two constants, denoted m and n below; its explicit form is not reproduced here.

For d ≥ 3, the eigenvalues of DV(ξ†) with respect to TΞ and bases for their eigenspaces are:

(5)    0, with eigenspace spanned by { ε_2 − ε_j : j ∈ {3, …, s_2} }, if d ≥ 4;
(6)    −1, with eigenspace spanned by { δ_2 − δ_i : i ∈ {3, …, s_1} };
(7)    a negative eigenvalue λ₋ determined by m and n, with eigenspace spanned by a vector whose population 1 components are proportional to λ₋ and whose population 2 components are proportional to 1; and
(8)    a positive eigenvalue λ₊ determined by m and n, with eigenspace spanned by a vector whose population 1 components are proportional to λ₊ and whose population 2 components are proportional to 1.

The eigenvectors in (5) span the center subspace E^c of the linear equation ż = DV(ξ†)z, while the eigenvectors in (6) and (7) span the stable subspace E^s. The normal vector to the hyperplane E^c ⊕ E^s is the orthogonal projection z∗ onto TΞ of an auxiliary vector z_aux, a linear combination of δ_1 and ε_1 whose coefficients depend on λ₋, which satisfies

    (z∗)′(δ_i − δ_1) > 0 for i ∈ S_1 ∖ {1}, and
    (z∗)′(ε_j − ε_1) = 1 > 0 for j ∈ S_2 ∖ {1}.

Since the remaining eigenvalue, from (8), is positive, the arguments used in the appendix for BEP(τ_all, 1, β_stick) imply that ξ† is a repellor.

V.2 Test-adjacent, stick-if-tie

Under the BEP(τ_adj, 1, β_stick) dynamic, the derivative matrix DV̄(x†, y†) at the backward induction state takes a form whose explicit entries are not reproduced here.

For d ≥ 3, the eigenvalues of DV(ξ†) with respect to TΞ and bases for their eigenspaces are:

(9)    0, with eigenspace spanned by { (δ_2 − δ_3) − ε_1 + ε_2 }, together with { ε_2 − ε_j : j ∈ {3, …, s_2} } if d ≥ 4 and { δ_3 − δ_i : i ∈ {4, …, s_1} } if d ≥ 5;
(10)   a negative eigenvalue λ₋, with eigenvector of the form (λ₋, λ₋, 0, …, 0 | 1, 1, 0, …, 0); and
(11)   a positive eigenvalue λ₊, with eigenvector of the form (λ₊, λ₊, 0, …, 0 | 1, 1, 0, …, 0).

The eigenvectors in (9) span the center subspace E^c of the linear equation ż = DV(ξ†)z, while the eigenvector in (10) spans the stable subspace E^s. The normal vector to the hyperplane E^c ⊕ E^s is the orthogonal projection z∗ onto TΞ of an auxiliary vector z_aux, a linear combination of δ_1, δ_2, and ε_1 whose coefficients depend on λ₋, which satisfies

    (z∗)′(δ_2 − δ_1) = (z_aux)′(δ_2 − δ_1) > 0,
    (z∗)′(δ_i − δ_1) = (z_aux)′(δ_i − δ_1) > 0 for i ∈ S_1 ∖ {1, 2}, and
    (z∗)′(ε_j − ε_1) = (z_aux)′(ε_j − ε_1) = 1 > 0 for j ∈ S_2 ∖ {1}.

Since the remaining eigenvalue, from (11), is positive, the arguments used in the appendix for BEP(τ_all, 1, β_stick) imply that ξ† is a repellor.

VI Formulas for BEP(τ, 1, β) dynamics in Centipede

This section provides explicit formulas for BEP(τ, 1, β) dynamics in the Centipede game for the cases considered in the paper, that is, for τ ∈ {τ_all, τ_two, τ_adj} and β ∈ {β_min, β_stick, β_unif}. These are the formulas implemented in the BEP_Centipede.nb notebook.

VI.1 Test-all

[Explicit expressions for ẋ_i and ẏ_j under the BEP(τ_all, 1, β_min), BEP(τ_all, 1, β_stick), and BEP(τ_all, 1, β_unif) dynamics.]

VI.2 Test-two

[Explicit expressions for ẋ_i and ẏ_j under the BEP(τ_two, 1, β_min), BEP(τ_two, 1, β_stick), and BEP(τ_two, 1, β_unif) dynamics.]

VI.3 Test-adjacent

Let c^p_i = 1 + 1[i ∈ {1, s_p}]; that is, c^p_i equals 2 if i ∈ {1, s_p} and equals 1 otherwise.

[Explicit expressions for ẋ_i and ẏ_j under the BEP(τ_adj, 1, β_min), BEP(τ_adj, 1, β_stick), and BEP(τ_adj, 1, β_unif) dynamics, written in terms of the coefficients c^p_i.]

VII Multinomial formulas for BEP(τ, κ, β) dynamics

The general formula (B) for the BEP(τ, κ, β) dynamic explicitly lists each of the κ strategies played by an agent's opponents when the agent tests a strategy i in his test set. We can obtain a formula with far fewer terms by instead working with the distribution of opponents' strategies when the agent tests strategy i. Using such formulas is essential for numerical computations when κ is not small. To express (B) in this form we introduce a number of definitions. Let

    Z^{s_q, κ}_+ = { z ∈ Z^{s_q}_+ : Σ_{j∈S^q} z_j = κ }

denote the set of possible (unnormalized) empirical distributions of opponents' strategies when a population p agent tests one of his own strategies κ times.

When the state of population q is ξ^q, the probability that the empirical distribution z occurs is the multinomial probability

    M^{p,κ}(z, ξ^q) = (κ choose z_1, …, z_{s_q}) (ξ^q_1)^{z_1} ⋯ (ξ^q_{s_q})^{z_{s_q}}.

And if a population p agent faces empirical distribution z when testing strategy i ∈ S^p, his total payoff is

    π^p_i(z) = Σ_{j∈S^q} U^p_{ij} z_j.

Therefore, if we let Π^{p,κ}_i(ξ^q) be a random variable representing the total payoff obtained if strategy i ∈ S^p is tested κ times when the state of the opposing population is ξ^q, then the distribution of Π^{p,κ}_i(ξ^q) is

    P(Π^{p,κ}_i(ξ^q) = w^p_i) = Σ_{z ∈ Z^{s_q,κ}_+ : π^p_i(z) = w^p_i} M^{p,κ}(z, ξ^q).

We use the notation above to obtain our new expression for BEP dynamics. Let W^{p,κ}_i = {π^p_i(z) : z ∈ Z^{s_q,κ}_+} denote the set of possible test results for strategy i ∈ S^p in κ trials. Also, for R^p ⊆ S^p, write W^{p,κ}_{R^p} = ∏_{k∈R^p} W^{p,κ}_k. Then we can express the BEP(τ, κ, β) dynamic as

(12)    ξ̇^p_i = Σ_{j∈S^p} ξ^p_j Σ_{R^p⊆S^p} τ^p_j(R^p) Σ_{w^p ∈ W^{p,κ}_{R^p}} ( ∏_{k∈R^p} P(Π^{p,κ}_k(ξ^q) = w^p_k) ) β^p_{ji}(w^p, R^p) − ξ^p_i.

The general formula (12) becomes simpler if particular test-set and tie-breaking rules are chosen. For instance, under the convention that an empty product evaluates to 1, the BEP(τ_all, κ, β_min) dynamic can be expressed as

    ẋ_i = Σ_{w^1_i ∈ W^{1,κ}_i} P(Π^{1,κ}_i(y) = w^1_i) ∏_{k=1}^{i−1} P(Π^{1,κ}_k(y) < w^1_i) ∏_{l=i+1}^{s_1} P(Π^{1,κ}_l(y) ≤ w^1_i) − x_i,

    ẏ_j = Σ_{w^2_j ∈ W^{2,κ}_j} P(Π^{2,κ}_j(x) = w^2_j) ∏_{k=1}^{j−1} P(Π^{2,κ}_k(x) < w^2_j) ∏_{l=j+1}^{s_2} P(Π^{2,κ}_l(x) ≤ w^2_j) − y_j.
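As an illustration of how the multinomial representation keeps the number of terms manageable, the following Mathematica sketch computes the distribution of the total test payoff Π for a single strategy; the payoff row and opponent state are hypothetical stand-ins:

    (* Distribution of the total payoff from testing one strategy kappa times:
       enumerate empirical distributions z of the opponents' strategies, attach
       multinomial probabilities M^{p,kappa}(z, xiq), and tally total payoffs. *)
    payoffDistribution[Urow_, xiq_, kappa_] := Module[{zs, probs, pays},
      zs = Flatten[Permutations /@
             IntegerPartitions[kappa, {Length[xiq]}, Range[0, kappa]], 1];
      probs = (Multinomial @@@ zs) * (Times @@@ (xiq^# & /@ zs));
      pays = zs . Urow;                    (* pi^p_i(z) = Sum_j U_ij z_j *)
      Normal @ GroupBy[Transpose[{pays, probs}], First -> Last, Total]];

    (* Example: kappa = 3 trials against the opponent state (.5, .3, .2),
       with a hypothetical payoff row {2, 0, 5}. *)
    payoffDistribution[{2, 0, 5}, {0.5, 0.3, 0.2}, 3]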

VIII Approximate components of interior rest points

The tables in this section present approximate components of the unique interior rest points of BEP(τ, 1, β) dynamics in Centipede games of lengths up to d = 20. Dashed lines in the tables separate cases in which the values were originally computed exactly from those in which the values were computed numerically.

Tables 1, 2, and 3 present the approximate rest points of the BEP(τ_all, 1, β) dynamics with β = β_min, β_stick, and β_unif. The approximate rest points of the BEP(τ_two, 1, β) dynamics are presented in Tables 4-6, and those of the BEP(τ_adj, 1, β) dynamics are presented in Tables 7-9. The main text contains graphs of the rest points as a function of d for the three cases with tie-breaking rule β_min. Graphs for the remaining six cases appear here as Figures 7-12.

VIII.1 Test-all

Table 1: The interior rest point of the BEP(τ_all, 1, β_min) dynamic for Centipede of lengths d ∈ {3, …, 20}. p denotes the penultimate player, q the last player; rows are indexed by the strategies [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 6) from numerical (d ≥ 7) results.

Table 2: The interior rest point of the BEP(τ_all, 1, β_stick) dynamic for Centipede of lengths up to d = 20. p denotes the penultimate player, q the last player; rows are indexed by the strategies [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 5) from numerical (d ≥ 6) results.

Table 3: The interior rest point of the BEP(τ_all, 1, β_unif) dynamic for Centipede of lengths up to d = 20. p denotes the penultimate player, q the last player; rows are indexed by the strategies [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 6) from numerical (d ≥ 7) results.

Figure 7: The interior rest point of Centipede under the BEP(τ_all, 1, β_min) dynamic for game lengths d = 3, …, 20. Panel (i): penultimate mover; panel (ii): last mover. Markers represent weights on strategies [0], [1], [2], and [3] (continue at all decision nodes; stop at the last, second-to-last, or third-to-last decision node). Other weights are less than 10^(-8). The dashed line separates exact (d ≤ 6) and numerical (d ≥ 7) results.

Figure 8: The interior rest point of Centipede under the BEP(τ_all, 1, β_min) dynamic for game lengths d = 3, …, 20. Panel (i): penultimate mover; panel (ii): last mover. Markers represent weights on strategies [0], [1], [2], and [3] (continue at all decision nodes; stop at the last, second-to-last, or third-to-last decision node). Other weights are less than 10^(-8). The dashed line separates exact (d ≤ 6) and numerical (d ≥ 7) results.

VIII.2 Test-two

Table 4: The interior rest point of the BEP(τ_two, 1, β_min) dynamic for Centipede of lengths d ∈ {3, …, 20}. p denotes the penultimate player, q the last player; rows are indexed by the strategies [7], [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 8) from numerical (d ≥ 9) results.

Table 5: The interior rest point of the BEP(τ_two, 1, β_stick) dynamic for Centipede of lengths up to d = 20. p denotes the penultimate player, q the last player; rows are indexed by the strategies [7], [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 8) from numerical (d ≥ 9) results.

Table 6: The interior rest point of the BEP(τ_two, 1, β_unif) dynamic for Centipede of lengths up to d = 20. p denotes the penultimate player, q the last player; rows are indexed by the strategies [7], [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 8) from numerical (d ≥ 9) results.

Figure 9: The stable rest point of Centipede under the BEP(τ_two, 1, β_min) dynamic for game lengths d = 3, …, 20. Panel (i): penultimate mover; panel (ii): last mover. Markers represent weights on strategies [0], [1], [2], [3], [4], and [5]. Other weights are less than 10^(-4). The dashed line separates exact (d ≤ 8) and numerical (d ≥ 9) results.

Figure 10: The stable rest point of Centipede under the BEP(τ_two, 1, β_min) dynamic for game lengths d = 3, …, 20. Panel (i): penultimate mover; panel (ii): last mover. Markers represent weights on strategies [0], [1], [2], [3], [4], and [5]. Other weights are less than 10^(-4). The dashed line separates exact (d ≤ 8) and numerical (d ≥ 9) results.

VIII.3 Test-adjacent

Table 7: The interior rest point of the BEP(τ_adj, 1, β_min) dynamic for Centipede of lengths d ∈ {3, …, 20}. p denotes the penultimate player, q the last player; rows are indexed by the strategies [7], [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 7) from numerical (d ≥ 8) results.

Table 8: The interior rest point of the BEP(τ_adj, 1, β_stick) dynamic for Centipede of lengths up to d = 20. p denotes the penultimate player, q the last player; rows are indexed by the strategies [7], [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 6) from numerical (d ≥ 7) results.

Table 9: The interior rest point of the BEP(τ_adj, 1, β_unif) dynamic for Centipede of lengths up to d = 20. p denotes the penultimate player, q the last player; rows are indexed by the strategies [7], [6], [5], [4], [3], [2], [1], [0]. The dashed lines separate exact (d ≤ 7) from numerical (d ≥ 8) results.

Figure 11: The stable rest point of Centipede under the BEP(τ_adj, 1, β_min) dynamic for game lengths d = 3, …, 20. Panel (i): penultimate mover; panel (ii): last mover. Markers represent weights on strategies [0], [1], [2], [3], [4], and [5]. Other weights are less than 10^(-4). The dashed line separates exact (d ≤ 6) and numerical (d ≥ 7) results.

Figure 12: The stable rest point of Centipede under the BEP(τ_adj, 1, β_min) dynamic for game lengths d = 3, …, 20. Panel (i): penultimate mover; panel (ii): last mover. Markers represent weights on strategies [0], [1], [2], [3], [4], and [5]. Other weights are less than 10^(-4). The dashed line separates exact (d ≤ 7) and numerical (d ≥ 8) results.

IX Approximate eigenvalues of DV(ξ∗)

The tables in this section show approximate eigenvalues of the derivative matrix DV(ξ∗) at the interior rest point ξ∗ of BEP(τ, 1, β) dynamics in Centipede games of lengths up to d = 20. Tables 10, 11, and 12 present the approximate eigenvalues of DV(ξ∗) for the BEP(τ_all, 1, β) dynamics with β = β_min, β_stick, and β_unif. The approximate eigenvalues for BEP(τ_two, 1, β) dynamics are in Tables 13-15, and those for BEP(τ_adj, 1, β) dynamics are in Tables 16-18.

IX.1 Test-all

Table 10: Approximate eigenvalues of DV(ξ∗) for the BEP(τ_all, 1, β_min) dynamic.

Table 11: Approximate eigenvalues of DV(ξ∗) for the BEP(τ_all, 1, β_stick) dynamic.


More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

CS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs

CS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs CS26: A Second Course in Algorithms Lecture #2: Applications of Multiplicative Weights to Games and Linear Programs Tim Roughgarden February, 206 Extensions of the Multiplicative Weights Guarantee Last

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Throughout these notes, F denotes a field (often called the scalars in this context). 1 Definition of a vector space Definition 1.1. A F -vector space or simply a vector space

More information

9.1 Eigenvectors and Eigenvalues of a Linear Map

9.1 Eigenvectors and Eigenvalues of a Linear Map Chapter 9 Eigenvectors and Eigenvalues 9.1 Eigenvectors and Eigenvalues of a Linear Map Given a finite-dimensional vector space E, letf : E! E be any linear map. If, by luck, there is a basis (e 1,...,e

More information

Lebesgue Measure on R n

Lebesgue Measure on R n CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith

Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith Math 217: Eigenspaces and Characteristic Polynomials Professor Karen Smith (c)2015 UM Math Dept licensed under a Creative Commons By-NC-SA 4.0 International License. Definition: Let V T V be a linear transformation.

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

The Cayley-Hamilton Theorem and the Jordan Decomposition

The Cayley-Hamilton Theorem and the Jordan Decomposition LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

Gaussian Elimination for Linear Systems

Gaussian Elimination for Linear Systems Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination

More information

Methods for sparse analysis of high-dimensional data, II

Methods for sparse analysis of high-dimensional data, II Methods for sparse analysis of high-dimensional data, II Rachel Ward May 23, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 47 High dimensional

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Errors. Intensive Computation. Annalisa Massini 2017/2018

Errors. Intensive Computation. Annalisa Massini 2017/2018 Errors Intensive Computation Annalisa Massini 2017/2018 Intensive Computation - 2017/2018 2 References Scientific Computing: An Introductory Survey - Chapter 1 M.T. Heath http://heath.cs.illinois.edu/scicomp/notes/index.html

More information

Linear Algebraic Equations

Linear Algebraic Equations Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff

More information

MATH 223A NOTES 2011 LIE ALGEBRAS 35

MATH 223A NOTES 2011 LIE ALGEBRAS 35 MATH 3A NOTES 011 LIE ALGEBRAS 35 9. Abstract root systems We now attempt to reconstruct the Lie algebra based only on the information given by the set of roots Φ which is embedded in Euclidean space E.

More information

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues

More information

Rendezvous On A Discrete Line

Rendezvous On A Discrete Line Rendezvous On A Discrete Line William H. Ruckle Abstract In a rendezvous problem on a discrete line two players are placed at points on the line. At each moment of time each player can move to an adjacent

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

. = V c = V [x]v (5.1) c 1. c k

. = V c = V [x]v (5.1) c 1. c k Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,

More information

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ.

Linear Algebra 1. M.T.Nair Department of Mathematics, IIT Madras. and in that case x is called an eigenvector of T corresponding to the eigenvalue λ. Linear Algebra 1 M.T.Nair Department of Mathematics, IIT Madras 1 Eigenvalues and Eigenvectors 1.1 Definition and Examples Definition 1.1. Let V be a vector space (over a field F) and T : V V be a linear

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

Random matrices: Distribution of the least singular value (via Property Testing)

Random matrices: Distribution of the least singular value (via Property Testing) Random matrices: Distribution of the least singular value (via Property Testing) Van H. Vu Department of Mathematics Rutgers vanvu@math.rutgers.edu (joint work with T. Tao, UCLA) 1 Let ξ be a real or complex-valued

More information

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS FLOATING POINT ARITHMETHIC - ERROR ANALYSIS Brief review of floating point arithmetic Model of floating point arithmetic Notation, backward and forward errors 3-1 Roundoff errors and floating-point arithmetic

More information

Notes on basis changes and matrix diagonalization

Notes on basis changes and matrix diagonalization Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets

FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES CHRISTOPHER HEIL 1. Compact Sets Definition 1.1 (Compact and Totally Bounded Sets). Let X be a metric space, and let E X be

More information

Final Exam Practice Problems Answers Math 24 Winter 2012

Final Exam Practice Problems Answers Math 24 Winter 2012 Final Exam Practice Problems Answers Math 4 Winter 0 () The Jordan product of two n n matrices is defined as A B = (AB + BA), where the products inside the parentheses are standard matrix product. Is the

More information

The SVD-Fundamental Theorem of Linear Algebra

The SVD-Fundamental Theorem of Linear Algebra Nonlinear Analysis: Modelling and Control, 2006, Vol. 11, No. 2, 123 136 The SVD-Fundamental Theorem of Linear Algebra A. G. Akritas 1, G. I. Malaschonok 2, P. S. Vigklas 1 1 Department of Computer and

More information

LECTURE OCTOBER, 2016

LECTURE OCTOBER, 2016 18.155 LECTURE 11 18 OCTOBER, 2016 RICHARD MELROSE Abstract. Notes before and after lecture if you have questions, ask! Read: Notes Chapter 2. Unfortunately my proof of the Closed Graph Theorem in lecture

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

Lecture 8 : Eigenvalues and Eigenvectors

Lecture 8 : Eigenvalues and Eigenvectors CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with

More information

Evolutionary Dynamics and Extensive Form Games by Ross Cressman. Reviewed by William H. Sandholm *

Evolutionary Dynamics and Extensive Form Games by Ross Cressman. Reviewed by William H. Sandholm * Evolutionary Dynamics and Extensive Form Games by Ross Cressman Reviewed by William H. Sandholm * Noncooperative game theory is one of a handful of fundamental frameworks used for economic modeling. It

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information