A proof of Pohlke's theorem with an analytic determination of the reference trihedron


A proof of Pohlke's theorem with an analytic determination of the reference trihedron

Renato Manfrin

March 3, 2018

Abstract

By elementary arguments of linear algebra and vector algebra we give here a proof of Pohlke's fundamental theorem of oblique axonometry. We also give simple explicit formulae for the reference trihedrons (Pohlke matrices) and the corresponding directions of projection onto the drawing plane.

Mathematics Subject Classification: Primary 51N10; Secondary 51N05.

Keywords: Pohlke's theorem, oblique axonometry.

1 Introduction

The famous Pohlke's fundamental theorem of oblique axonometry asserts that: three arbitrary straight line segments $OP_1$, $OP_2$, $OP_3$ in a plane, originating from a point $O$ and not all contained in a single line, can be considered as the parallel projection of three edges $OQ_1$, $OQ_2$, $OQ_3$ of a cube. Or, in the words of H. Steinhaus ([11], p. 70), one can draw any three (not all parallel) segments from one point, complete the figure with parallel segments, and consider it as a (generally oblique) projection of a cube.

K.W. Pohlke formulated this theorem in 1853 and published it in 1860, without proof, in the first part of his textbook on descriptive geometry [9]. The first elementary rigorous proof was given by H.A. Schwarz [10] in 1864, at that time a student of Pohlke. Subsequently, as remarked by D.J. Struik ([12], p. 40), several proofs have been given, synthetic and analytic, none of which is simple, because they also give the method by which one can construct the direction of projection. See, among others, [3, 4, 7, 12, 6, 8, 2].

Here we prove Pohlke's theorem by reducing it to a particular case in which the direction of projection coincides with that of one of the edges of the cube. To achieve this simplification we restate Pohlke's theorem as a theorem of linear algebra and then make a simple observation based on the fact that for a matrix row-rank and column-rank are equal. More precisely, if we introduce a cartesian system of coordinates such that $O = (0,0,0)$ and
$$P_i = \begin{pmatrix} x_i \\ y_i \\ 0 \end{pmatrix} \quad (1 \le i \le 3) \tag{1.1}$$
are points of the plane $\{z = 0\}$, then Pohlke's theorem can be reformulated as follows:

2 Theorem. If the matrix A = x x x 3 y y y = (A, A, A 3 ) (.) has rank, then there exist a matrix B with orthogonal columns of equal norm, B = x x x 3 y y y 3 z z z 3 = (B, B, B 3 ), (.3) and a parallel projection Π onto the plane {z = 0} such that Π(B i ) = A i ( i 3 ). Now, see Lemma., the columns of A can be obtained by a parallel projection of the columns of the matrix B if and only if the rows of A can be obtained by a parallel projection of the rows of the matrix B. But, since the third row of A has zero entries, the direction of this last projection is the same of the third row of B. As a by-product, we finally obtain explicit formulae for the reference trihedron and the direction of projection. For instance, for B, Π (which are not unique) we may take: B = + α + β + β αβ α αβ + α β α β x x x 3 y y y 3 x y 3 y x 3 ϱ y x 3 x y 3 ϱ x y y x ϱ (.4) and Π x y z = x + αz y + βz 0, (.5) with α, β, ϱ defined by (3.6), (3.0), (3.), (3.) below from the rows A, A of A. See also (4.6), (4.8) and the simple examples at the end of Chapter 4. The fact that the proof becomes easier if one has some information about the direction of projection is particularly evident when Π is, a priori, the orthogonal projection onto the plane {z = 0}. More precisely, denoting with A, A, A 3 the rows of the matrix A given by (.), we can reformulate the Gauss theorem of orthogonal axonometry as follows: Proposition. A, A are nonzero orthogonal rows of equal norm, that is A = A = 0 with A A, (.6) if and only if there exists a matrix B with nonzero orthogonal columns of equal norm such that Π (B i ) = A i ( i 3) (.7) where Π : R 3 {z = 0} is the orthogonal projection.

Proof: In fact, assuming (1.6) and setting $\rho = \|A^1\| = \|A^2\|$, we may define the row vector $B^3 = (z_1, z_2, z_3)$ as
$$B^3 = \rho^{-1}\, A^1 \wedge A^2 \quad \text{or} \quad B^3 = -\rho^{-1}\, A^1 \wedge A^2. \tag{1.8}$$
Namely, we choose $B^3$ equal to $(z_1, z_2, z_3)$ or $(-z_1, -z_2, -z_3)$, where
$$z_1 = \rho^{-1}\begin{vmatrix} x_2 & x_3 \\ y_2 & y_3 \end{vmatrix}, \qquad z_2 = -\rho^{-1}\begin{vmatrix} x_1 & x_3 \\ y_1 & y_3 \end{vmatrix}, \qquad z_3 = \rho^{-1}\begin{vmatrix} x_1 & x_2 \\ y_1 & y_2 \end{vmatrix}. \tag{1.9}$$
Then $B^3 \perp A^1, A^2$ and $\|B^3\| = \rho$. Hence, setting $B^1 = A^1$, $B^2 = A^2$, the matrix
$$B = \begin{pmatrix} B^1 \\ B^2 \\ B^3 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{pmatrix} \tag{1.10}$$
has orthogonal rows of norm $\rho$. This means that $B$ is a multiple of an orthogonal matrix, and thus it follows that the columns $B_i$ ($1 \le i \le 3$) are orthogonal and of norm $\rho$. Finally, if $\Pi_0 : \mathbb{R}^3 \to \{z=0\}$ is the orthogonal projection onto the plane $\{z=0\}$, we clearly have (1.7).

Conversely, if there exists a matrix $B = (B_1, B_2, B_3)$ such that (1.7) holds, then $B^1 = A^1$ and $B^2 = A^2$. If, in addition, the columns $B_1$, $B_2$, $B_3$ are nonzero, orthogonal and of equal norm, then $B$ is a multiple of an orthogonal matrix. Thus the rows $A^1$ and $A^2$ are nonzero, orthogonal and of equal norm. □

Remark 1.3 In the proof of Prop. 1.2 the row $B^3$ is necessarily given by one of the expressions in (1.8), since (1.7) implies that $B^1 = A^1$, $B^2 = A^2$. Hence, there are exactly two distinct possibilities for $B$: one with $B^3 = \rho^{-1} A^1\wedge A^2$ and the other with $B^3 = -\rho^{-1} A^1\wedge A^2$. We may say that in the case of orthogonal projection Pohlke's problem (namely, that of finding a matrix $B$ and a projection $\Pi$ with the required properties) has exactly two distinct solutions.

More generally, to consider the question of the multiplicity of solutions of Pohlke's problem, we give the following definition:

Definition 1.4 (Pohlke matrix) Let $A$ be a matrix of rank 2 as in (1.2). We say that a matrix $B$ is a Pohlke matrix for $A$ if $B$ has orthogonal columns of equal norm and there exists a parallel projection $\Pi : \mathbb{R}^3 \to \{z=0\}$ such that $\Pi(B_i) = A_i$, $1 \le i \le 3$. Besides, we say that two Pohlke matrices $B$, $\tilde B$ are conjugate if they correspond to the same parallel projection $\Pi : \mathbb{R}^3 \to \{z=0\}$, that is $\Pi(B_i) = \Pi(\tilde B_i) = A_i$, $1 \le i \le 3$.

Then we have:

Corollary 1.5 Under the assumption of Theorem 1.1, in the case of oblique (i.e., nonorthogonal) projection there are exactly two couples of conjugate Pohlke matrices, which correspond to two distinct, oblique, directions of projection. See the explicit formulae (4.6) and (4.8) below.
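This construction is easy to check numerically. The following minimal sketch (Python with NumPy; the rows $A^1$, $A^2$ below are a sample of ours, chosen to satisfy (1.6)) builds the two matrices $B$ of Remark 1.3 and verifies their properties:

    import numpy as np

    # Sample rows satisfying (1.6): A1 . A2 = 0 and |A1| = |A2| = 3.
    A1 = np.array([1.0, 2.0, 2.0])
    A2 = np.array([2.0, 1.0, -2.0])
    rho = np.linalg.norm(A1)

    for sign in (+1, -1):                       # the two choices in (1.8)
        B3 = sign * np.cross(A1, A2) / rho      # B^3 = +- rho^{-1} A^1 ^ A^2
        B = np.vstack([A1, A2, B3])
        # Orthogonal rows of norm rho, i.e. B is rho times an orthogonal
        # matrix; hence the columns are also orthogonal of norm rho:
        assert np.allclose(B @ B.T, rho**2 * np.eye(3))
        assert np.allclose(B.T @ B, rho**2 * np.eye(3))
        # The orthogonal projection onto {z=0} drops the z-coordinate, so
        # Pi_0(B_i) = A_i reduces to the first two rows of B being A1, A2:
        assert np.allclose(B[:2], np.vstack([A1, A2]))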

2 Some facts of linear algebra

We give here two simple results of linear algebra. The first is the key lemma that we need in the proof of Pohlke's theorem. The second is a standard fact on orthogonal transition matrices; see [5]. We need it to compute the direction of the projection $\Pi : \mathbb{R}^3 \to \{z=0\}$ and the mutually orthogonal column vectors $B_i$ such that $\Pi(B_i) = A_i$.

Let $A$ be the real $3\times 3$ matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \equiv \begin{pmatrix} A^1 \\ A^2 \\ A^3 \end{pmatrix} \equiv (A_1, A_2, A_3), \tag{2.1}$$
where $A^i$, $A_i$ are, respectively, the row and column vectors of $A$:
$$A^i = (a_{i1}, a_{i2}, a_{i3}), \qquad A_i = \begin{pmatrix} a_{1i} \\ a_{2i} \\ a_{3i} \end{pmatrix} \quad (1 \le i \le 3). \tag{2.2}$$
We indicate with $V^A$ and $V_A$ the row and column spaces of $A$, namely
$$V^A = \operatorname{span}\{A^1, A^2, A^3\}, \qquad V_A = \operatorname{span}\{A_1, A_2, A_3\}. \tag{2.3}$$
If $\operatorname{rank}(A) = 2$, then $V^A$ and $V_A$ are 2-dimensional subspaces (i.e., planes through the origin) of $\mathbb{R}^3$. Given a column vector $U$, $U \notin V_A$, we may consider the parallel projection $\Pi_U : \mathbb{R}^3 \to V_A$ in the direction of $U$, by setting
$$\Pi_U(V) = V_A \cap \{V + tU : t \in \mathbb{R}\} \tag{2.4}$$
for any column vector $V \in \mathbb{R}^3$. Since $V_A$ is a plane through the origin, $\Pi_U$ is a linear map. In the same way we can define the parallel projection $\Pi_W : \mathbb{R}^3 \to V^A$ in the direction of a given row vector $W$, if $W \notin V^A$. When the direction of projection is not specified, we write $\Pi$ (resp. $\Pi'$) for column (resp. row) projections onto $V_A$ (resp. $V^A$).

Lemma 2.1 Let $A$ and $B$ be $3\times 3$ matrices such that $\operatorname{rank}(A) = 2$ and $\operatorname{rank}(B) = 3$. Then the rows of $A$ can be obtained by a parallel projection $\Pi' : \mathbb{R}^3 \to V^A$ of the rows of the matrix $B$ if and only if the columns of $A$ can be obtained by a parallel projection $\Pi : \mathbb{R}^3 \to V_A$ of the columns of the matrix $B$.

Proof: Let $\Pi'$ be a parallel projection onto $V^A$ such that
$$\Pi'(B^i) = A^i \quad \text{for } 1 \le i \le 3. \tag{2.5}$$
Then we have
$$\operatorname{row-rank}(B-A) = \operatorname{row-rank}\begin{pmatrix} B^1-A^1 \\ B^2-A^2 \\ B^3-A^3 \end{pmatrix} = 1, \tag{2.6}$$
because the row vectors $B^i - A^i$ are parallel to the direction of the projection $\Pi'$. Since the row-rank and the column-rank of a matrix are equal ([5], p. 8), this implies that
$$\operatorname{column-rank}(B-A) = \operatorname{column-rank}(B_1-A_1,\ B_2-A_2,\ B_3-A_3) = 1. \tag{2.7}$$

Thus there exists a column vector $U \in \mathbb{R}^3$ such that
$$(B_i - A_i) \parallel U \quad \text{for } 1 \le i \le 3. \tag{2.8}$$
Moreover, observe that
$$U \notin V_A \quad (\text{that is, } U \text{ is not parallel to the plane } V_A), \tag{2.9}$$
because $\operatorname{rank}(B) = 3$, while $\operatorname{rank}(A) = 2$. Hence, for every column vector $V \in \mathbb{R}^3$ the line $\{V + Ut : t \in \mathbb{R}\}$ has one and only one intersection with the plane $V_A$. This permits us to define the projection $\Pi : \mathbb{R}^3 \to V_A$ in the direction of the column vector $U$ by setting
$$\Pi(V) = V_A \cap \{V + Ut : t \in \mathbb{R}\} \quad \text{for } V \in \mathbb{R}^3. \tag{2.10}$$
Since we clearly have $\Pi(B_i) = A_i$ ($1 \le i \le 3$), this proves the first part of the lemma. The converse can be proved similarly. □

Remark 2.2 The common value of the row-rank and the column-rank is obvious in the case of the previous proof. In fact, it is evident that if $B^i - A^i = \lambda_i W$ for $1 \le i \le 3$ ($W$ a given row vector, $\lambda_i \in \mathbb{R}$), then $B - A$ is the outer product of $(\lambda_1, \lambda_2, \lambda_3)^T$ and $W$; hence $(B_i - A_i) \parallel (\lambda_1, \lambda_2, \lambda_3)^T$ for $1 \le i \le 3$, and we can choose $U = (\lambda_1, \lambda_2, \lambda_3)^T$.

Having proved Lemma 2.1, and taking into account that, for a square matrix $B$,

$B$ has nonzero orthogonal columns of equal norm $\iff$ $B$ is a nonzero multiple of an orthogonal matrix $\iff$ $B$ has nonzero orthogonal rows of equal norm,

we can immediately restate Definition 1.4 in the following equivalent form:

Definition 2.3 Let $A$ be a $3\times 3$ matrix of rank 2. We say that a $3\times 3$ matrix $B$ is a Pohlke matrix for $A$ if $B$ is a multiple of an orthogonal matrix and there exist parallel projections $\Pi : \mathbb{R}^3 \to V_A$ and $\Pi' : \mathbb{R}^3 \to V^A$ such that $\Pi(B_i) = A_i$ and $\Pi'(B^i) = A^i$ for $1 \le i \le 3$.
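In coordinates, the projections appearing above are easy to implement: if $n$ is a normal vector of the plane and $n \cdot U \neq 0$, then solving $n \cdot (V + tU) = 0$ in (2.4) gives $\Pi_U(V) = V - \frac{n\cdot V}{n\cdot U}\,U$. A minimal sketch (Python/NumPy; the plane, direction and test vectors are our own sample):

    import numpy as np

    def parallel_projection(V, U, n):
        """Projection (2.4) of V onto the plane {x : n.x = 0} along U.
        Requires n.U != 0, i.e. U does not lie in the plane."""
        t = -np.dot(n, V) / np.dot(n, U)
        return V + t * U

    n = np.array([0.0, 0.0, 1.0])       # the plane {z = 0}
    U = np.array([1.0, -2.0, 3.0])      # an oblique direction of projection
    V = np.array([4.0, 5.0, 6.0])
    W = np.array([-1.0, 0.5, 2.0])

    P = parallel_projection(V, U, n)
    assert np.isclose(P[2], 0.0)        # the image lies in the plane
    # Pi_U is linear, since the plane passes through the origin:
    assert np.allclose(parallel_projection(V + W, U, n),
                       parallel_projection(V, U, n) + parallel_projection(W, U, n))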

Next, we recall a standard result concerning the existence of orthogonal transition matrices for two given bases of $\mathbb{R}^3$. We state Lemma 2.5 below in terms of row vectors but, by transposition, a similar result holds for column vectors.

Definition 2.4 We denote by $E^i$, $1 \le i \le 3$, the standard basis of row vectors:
$$E^1 = (1, 0, 0), \quad E^2 = (0, 1, 0), \quad E^3 = (0, 0, 1). \tag{2.11}$$

Lemma 2.5 Let $\{A^1, A^2, A^3\}$ and $\{\tilde A^1, \tilde A^2, \tilde A^3\}$ be two sets of linearly independent row vectors (i.e., two bases of $\mathbb{R}^3$) such that
$$A^i \cdot A^j = \tilde A^i \cdot \tilde A^j \quad (1 \le i, j \le 3). \tag{2.12}$$
Then there exists a unique orthogonal transition matrix $T$ such that $A^i = \tilde A^i\, T$ ($1 \le i \le 3$).

Proof: Uniqueness is quite obvious. To determine $T$, let us define the nonsingular matrices
$$G = \begin{pmatrix} A^1 \\ A^2 \\ A^3 \end{pmatrix}, \qquad \tilde G = \begin{pmatrix} \tilde A^1 \\ \tilde A^2 \\ \tilde A^3 \end{pmatrix}. \tag{2.13}$$
We have
$$E^i G = A^i, \qquad E^i \tilde G = \tilde A^i \qquad (1 \le i \le 3). \tag{2.14}$$
Then, setting $T = \tilde G^{-1} G$, it is clear that
$$\tilde A^i\, T = \tilde A^i\, \tilde G^{-1} G = E^i G = A^i. \tag{2.15}$$
On the other hand, using (2.12) and the linearity of the map $X \mapsto XT$, it is easy to see that for all row vectors $X \in \mathbb{R}^3$ one has $\|XT\| = \|X\|$. In fact, since $\{\tilde A^1, \tilde A^2, \tilde A^3\}$ is a basis of $\mathbb{R}^3$, there exist unique scalars $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}$ such that $X = \lambda_1\tilde A^1 + \lambda_2\tilde A^2 + \lambda_3\tilde A^3$. Then, by (2.12), we have
$$\|XT\|^2 = \Big\|\sum_i \lambda_i A^i\Big\|^2 = \sum_{i,j}\lambda_i\lambda_j\,(A^i\cdot A^j) = \sum_{i,j}\lambda_i\lambda_j\,(\tilde A^i\cdot\tilde A^j) = \Big\|\sum_i \lambda_i \tilde A^i\Big\|^2 = \|X\|^2, \tag{2.16}$$
and it is well known that this is equivalent to the orthogonality of the matrix $T$. See [5]. □
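The proof is constructive: $T = \tilde G^{-1} G$. In the sketch below (Python/NumPy) we fabricate two bases with equal pairwise scalar products by rotating a sample basis, and check that the resulting $T$ is orthogonal; the basis and the rotation are our own choices:

    import numpy as np

    # A sample basis {A^1, A^2, A^3}, stored as the rows of G.
    G = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 3.0]])

    # Rotating the rows preserves all scalar products, so condition (2.12)
    # holds between the rows of G and those of Gt.
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    Gt = G @ R                                  # rows: tilde A^i = A^i R
    assert np.allclose(G @ G.T, Gt @ Gt.T)      # condition (2.12)

    T = np.linalg.inv(Gt) @ G                   # the transition matrix (2.15)
    assert np.allclose(Gt @ T, G)               # A^i = tilde A^i T
    assert np.allclose(T.T @ T, np.eye(3))      # T is orthogonal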

3 Proof of Pohlke's theorem

We prove here Pohlke's theorem as stated in Theorem 1.1. With the notation of the previous sections, in the following we suppose
$$A = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{with } \operatorname{rank}(A) = 2, \tag{3.1}$$
i.e., the rows $A^1$ and $A^2$ are linearly independent and $V^A$ is the plane $\{z=0\}$. By Lemma 2.1, it is sufficient to find a matrix
$$B = \begin{pmatrix} x_1' & x_2' & x_3' \\ y_1' & y_2' & y_3' \\ z_1 & z_2 & z_3 \end{pmatrix} = \begin{pmatrix} B^1 \\ B^2 \\ B^3 \end{pmatrix}, \tag{3.2}$$
with orthogonal rows $B^1$, $B^2$, $B^3$ of equal norm, and a parallel projection $\Pi_W : \mathbb{R}^3 \to V^A$, with $W \notin V^A$, such that $\Pi_W(B^i) = A^i$ for $1 \le i \le 3$. This means that
$$A^i = B^i + \lambda_i W \quad (1 \le i \le 3), \tag{3.3}$$
where the $\lambda_i$ are suitable real coefficients. Since $A^3 = (0,0,0)$, it is clear that the projection $\Pi_W$ must be parallel to $B^3$. Hence we may assume $W = B^3$. Then it is enough to prove that there exist $B^1$, $B^2$, $B^3$ orthogonal and of equal norm such that
$$\begin{cases} A^1 = B^1 + \lambda_1 B^3 \\ A^2 = B^2 + \lambda_2 B^3 \end{cases} \tag{3.4}$$
for suitable scalars $\lambda_1, \lambda_2 \in \mathbb{R}$. To this aim we first consider the following:

Auxiliary problem: Let $\{E^1, E^2, E^3\}$ be the standard basis of row vectors of Definition 2.4, and let $\gamma \in (0,\pi)$, $\lambda > 0$ be given parameters. We look for $\alpha, \beta \in \mathbb{R}$ such that:
$$\begin{cases} \dfrac{\|E^1 + \alpha E^3\|}{\|E^2 + \beta E^3\|} = \lambda, \\[2mm] \dfrac{(E^1 + \alpha E^3)\cdot(E^2 + \beta E^3)}{\|E^1 + \alpha E^3\|\,\|E^2 + \beta E^3\|} = \cos\gamma. \end{cases} \tag{3.5}$$

Before proving the solvability of (3.5) it is worthwhile to define the following quantity:

Definition 3.1 For $(\gamma, \lambda) \in (0,\pi)\times(0,+\infty)$ we set
$$\eta = \eta(\lambda, \gamma) \equiv \frac{\lambda^2 + 1 + \sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}}{2\lambda^2\sin^2\gamma}. \tag{3.6}$$

Then, for $(\gamma, \lambda) \in (0,\pi)\times(0,+\infty)$, we have
$$\eta(\lambda, \gamma) \ \ge\ \eta\Big(\lambda, \frac{\pi}{2}\Big) = \frac{\lambda^2 + 1 + |\lambda^2 - 1|}{2\lambda^2} = \begin{cases} 1/\lambda^2 & \text{if } 0 < \lambda \le 1, \\ 1 & \text{if } \lambda \ge 1, \end{cases} \tag{3.7}$$
with strict inequality if $\gamma \neq \pi/2$. In particular, $\eta(\gamma, \lambda)$ satisfies the following:

(i) $\eta \ge 1$ and $\eta\lambda^2 \ge 1$ for all $(\gamma,\lambda) \in (0,\pi)\times(0,+\infty)$;
(ii) $\eta = 1$ if and only if $(\gamma,\lambda) \in \{\pi/2\}\times[1,+\infty)$;
(iii) $\eta\lambda^2 = 1$ if and only if $(\gamma,\lambda) \in \{\pi/2\}\times(0,1]$.   (3.8)

Finally, for simplicity of writing, we also introduce a signum function:
$$\operatorname{sgn}(t) \equiv \begin{cases} 1 & \text{if } t \ge 0, \\ -1 & \text{if } t < 0. \end{cases} \tag{3.9}$$

Lemma 3.2 Assume that $(\gamma,\lambda) \in (0,\pi)\times(0,+\infty)$. Then the real solutions of (3.5) are
$$(\alpha, \beta) = \pm\Big(\sqrt{\eta\lambda^2 - 1},\ \operatorname{sgn}(\cos\gamma)\,\sqrt{\eta - 1}\Big). \tag{3.10}$$
Thus, for $(\gamma,\lambda) \neq (\pi/2, 1)$ system (3.5) has two real, distinct solutions; for $(\gamma,\lambda) = (\pi/2, 1)$ the only solution is $(\alpha,\beta) = (0,0)$.

Proof: It is clear that system (3.5) is equivalent to the following:
$$\begin{cases} 1 + \alpha^2 = \lambda^2(1 + \beta^2), \\ \alpha\beta = \sqrt{1+\alpha^2}\,\sqrt{1+\beta^2}\,\cos\gamma. \end{cases} \tag{3.11}$$

Multiplying the first equation of (3.11) by $(1+\beta^2)$, one easily sees that
$$\alpha^2\beta^2 = \big(\lambda^2(1+\beta^2) - 1\big)\beta^2 = \lambda^2(1+\beta^2)^2 - (\lambda^2+1)(1+\beta^2) + 1. \tag{3.12}$$
On the other hand, squaring the second equation of (3.11), we have
$$\alpha^2\beta^2 = (1+\alpha^2)(1+\beta^2)\cos^2\gamma = \lambda^2(1+\beta^2)^2\cos^2\gamma. \tag{3.13}$$
Hence, from (3.12) and (3.13), we deduce that $1+\beta^2$ satisfies the second order equation
$$\lambda^2\sin^2\gamma\,(1+\beta^2)^2 - (\lambda^2+1)(1+\beta^2) + 1 = 0. \tag{3.14}$$
Solving (3.14), we obtain
$$1+\beta^2 = \frac{\lambda^2+1 \pm \sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}}{2\lambda^2\sin^2\gamma}. \tag{3.15}$$
But, for all $\gamma \in (0,\pi)$, $\lambda > 0$, we have
$$\frac{\lambda^2+1 - \sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}}{2\lambda^2\sin^2\gamma} = \frac{2}{\lambda^2+1+\sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}} \le \frac{2}{\lambda^2+1+|\lambda^2-1|} \le 1, \tag{3.16}$$
with equality only when $(\gamma,\lambda) \in \{\pi/2\}\times(0,1]$. Thus, taking into account (i), (iii) of (3.8), for $(\gamma,\lambda) \notin \{\pi/2\}\times(0,1]$ we have
$$1+\beta^2 = \frac{\lambda^2+1+\sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}}{2\lambda^2\sin^2\gamma} = \eta, \tag{3.17}$$
$$1+\alpha^2 = \frac{\lambda^2+1+\sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}}{2\sin^2\gamma} = \eta\lambda^2 > 1. \tag{3.18}$$
By the second equation of (3.11), $\alpha\beta > 0$ if $\gamma \in (0,\pi/2)$, while $\alpha\beta < 0$ if $\gamma \in (\pi/2,\pi)$; furthermore, by (ii), (iii) of (3.8) and by (3.17), (3.18), we have $\alpha^2 = \eta\lambda^2 - 1 > 0$ and $\beta^2 = \eta - 1 = 0$ for $\gamma = \pi/2$, $\lambda > 1$. It follows that for $(\gamma,\lambda) \notin \{\pi/2\}\times(0,1]$ there are exactly two distinct solutions:
$$(\alpha,\beta) = \begin{cases} \pm\big(\sqrt{\eta\lambda^2-1},\ \sqrt{\eta-1}\big) & \text{if } 0 < \gamma < \pi/2,\ \lambda > 0, \\ \pm\big(\sqrt{\eta\lambda^2-1},\ 0\big) & \text{if } \gamma = \pi/2,\ \lambda > 1, \\ \pm\big(\sqrt{\eta\lambda^2-1},\ -\sqrt{\eta-1}\big) & \text{if } \pi/2 < \gamma < \pi,\ \lambda > 0. \end{cases} \tag{3.19}$$
On the other hand, for $(\gamma,\lambda) \in \{\pi/2\}\times(0,1]$ it is immediate to verify that (3.11) is satisfied if and only if $\alpha = 0$ and $\beta^2 = \frac{1}{\lambda^2} - 1$. Thus, by (iii) of (3.8), we can write again
$$(\alpha,\beta) = \pm\big(\sqrt{\eta\lambda^2-1},\ \sqrt{\eta-1}\big), \tag{3.20}$$

and (by (ii) of (3.8)) we find two distinct solutions unless $(\gamma,\lambda) = (\pi/2, 1)$. It is now clear that (3.19), (3.20) can be summarized by formula (3.10) for all $(\gamma,\lambda) \in (0,\pi)\times(0,+\infty)$.

To conclude, it remains to verify that (3.10) gives, effectively, solutions of both equations of (3.5). The first one, $1+\alpha^2 = \lambda^2(1+\beta^2)$, is obviously satisfied by (3.10). The second one, $\alpha\beta = \sqrt{1+\alpha^2}\sqrt{1+\beta^2}\cos\gamma$, is certainly satisfied for $\gamma = \pi/2$, because then $\alpha = 0$ or $\beta = 0$. For $\gamma \neq \pi/2$ the sign of $\alpha\beta$ is the same as that of $\cos\gamma$; thus it is enough to show that $\alpha^2\beta^2 = (1+\alpha^2)(1+\beta^2)\cos^2\gamma$, which is equivalent to
$$(1+\alpha^2)(1+\beta^2)\sin^2\gamma = (1+\alpha^2) + (1+\beta^2) - 1.$$
This can be easily verified by substituting the expressions (3.17), (3.18) for $1+\beta^2$, $1+\alpha^2$. □

Now, to prove the solvability of (3.4), we apply Lemma 2.5 and Lemma 3.2. To begin with, we set the values of the parameters $\gamma \in (0,\pi)$, $\lambda > 0$ of system (3.5):
$$\gamma \equiv \arccos\left(\frac{A^1\cdot A^2}{\|A^1\|\,\|A^2\|}\right), \qquad \lambda \equiv \frac{\|A^1\|}{\|A^2\|}. \tag{3.21}$$
Applying Lemma 3.2, we choose a real solution $(\alpha,\beta)$ of system (3.5), and then we introduce the scale factor $\varrho \in (0,+\infty)$:

Definition 3.3
$$\varrho \equiv \frac{\|A^1\|}{\sqrt{1+\alpha^2}} = \frac{\|A^1\|}{\lambda\sqrt{\eta}} = \frac{\|A^2\|}{\sqrt{1+\beta^2}} = \frac{\|A^2\|}{\sqrt{\eta}}. \tag{3.22}$$

Next, we look for an isometry which transforms the couple of vectors $\varrho(E^1+\alpha E^3)$, $\varrho(E^2+\beta E^3)$ into the couple $A^1$, $A^2$. To this aim, it is clear that there are only two possibilities: we consider the sets of linearly independent row vectors $\{A^1, A^2, A^3\}$ and $\{\tilde A^1, \tilde A^2, \tilde A^3\}$ defined by
$$\begin{aligned} A^1 &= A^1, & \tilde A^1 &= \varrho(E^1+\alpha E^3) = (\varrho,\ 0,\ \varrho\alpha), \\ A^2 &= A^2, & \tilde A^2 &= \varrho(E^2+\beta E^3) = (0,\ \varrho,\ \varrho\beta), \\ A^3 &= \pm\varrho^{-1}\, A^1\wedge A^2, & \tilde A^3 &= \varrho^{-1}\,\tilde A^1\wedge\tilde A^2 = (-\varrho\alpha,\ -\varrho\beta,\ \varrho), \end{aligned} \tag{3.23}$$
where for $A^3$ we may take, indifferently, the sign $+$ or $-$. From (3.21), (3.22) we have
$$A^i\cdot A^j = \tilde A^i\cdot\tilde A^j \quad (1 \le i, j \le 3). \tag{3.24}$$
Hence, by Lemma 2.5, there exists a unique orthogonal transition matrix $T$ such that
$$A^i = \tilde A^i\, T \quad (1 \le i \le 3). \tag{3.25}$$
In particular, since $T$ is orthogonal, setting
$$B^i \equiv \varrho\, E^i T \quad (1 \le i \le 3), \tag{3.26}$$
we have $\|B^i\| = \varrho$ and $B^i \perp B^j$ for $i \neq j$. Besides, (3.23), (3.25) give
$$\begin{aligned} A^1 &= \tilde A^1 T = \varrho E^1 T + \alpha(\varrho E^3 T) = B^1 + \alpha B^3, \\ A^2 &= \tilde A^2 T = \varrho E^2 T + \beta(\varrho E^3 T) = B^2 + \beta B^3. \end{aligned} \tag{3.27}$$
Thus we have proved that (3.4) is solvable. □
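Definition 3.1 and formula (3.10) can be tested numerically. A small sketch (Python/NumPy; the parameters $\gamma$, $\lambda$ are arbitrary choices of ours) verifying that (3.10) solves both equations of (3.5):

    import numpy as np

    def eta(lam, gamma):
        """The quantity (3.6)."""
        s2 = np.sin(gamma) ** 2
        disc = (lam**2 + 1) ** 2 - 4 * lam**2 * s2
        return (lam**2 + 1 + np.sqrt(disc)) / (2 * lam**2 * s2)

    def alpha_beta(lam, gamma):
        """One of the two solutions (3.10); the other is its opposite."""
        e = eta(lam, gamma)
        sgn = 1.0 if np.cos(gamma) >= 0 else -1.0      # the signum (3.9)
        return np.sqrt(e * lam**2 - 1), sgn * np.sqrt(e - 1)

    lam, gamma = 0.8, 2.0          # arbitrary point of (0,+inf) x (0,pi)
    a, b = alpha_beta(lam, gamma)
    E1, E2, E3 = np.eye(3)
    v1, v2 = E1 + a * E3, E2 + b * E3
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    assert np.isclose(n1 / n2, lam)                               # 1st eq. of (3.5)
    assert np.isclose(np.dot(v1, v2) / (n1 * n2), np.cos(gamma))  # 2nd eq. of (3.5)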

4 Determination of Pohlke matrices

It is clear that the Pohlke matrix $B$, with the rows $B^1$, $B^2$, $B^3$ defined in (3.26), coincides with $\varrho\, T$, where $T$ is the transition matrix given by Lemma 2.5. Namely, we have
$$B = \varrho\, T = \varrho\, \tilde G^{-1} G, \tag{4.1}$$
where $G$, $\tilde G$ are the $3\times 3$ matrices defined as in (2.13). More precisely, we have
$$\tilde G = \varrho\begin{pmatrix} 1 & 0 & \alpha \\ 0 & 1 & \beta \\ -\alpha & -\beta & 1 \end{pmatrix}, \tag{4.2}$$
$$\tilde G^{-1} = \frac{1}{\varrho\,(1+\alpha^2+\beta^2)}\begin{pmatrix} 1+\beta^2 & -\alpha\beta & -\alpha \\ -\alpha\beta & 1+\alpha^2 & -\beta \\ \alpha & \beta & 1 \end{pmatrix}, \tag{4.3}$$
where $(\alpha,\beta)$ is any real solution of (3.5). For $G$ there are two possibilities:
$$G = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ \dfrac{x_2y_3-y_2x_3}{\varrho} & \dfrac{y_1x_3-x_1y_3}{\varrho} & \dfrac{x_1y_2-y_1x_2}{\varrho} \end{pmatrix} \tag{4.4}$$
or
$$G = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ -\dfrac{x_2y_3-y_2x_3}{\varrho} & -\dfrac{y_1x_3-x_1y_3}{\varrho} & -\dfrac{x_1y_2-y_1x_2}{\varrho} \end{pmatrix}. \tag{4.5}$$
In conclusion, we finally obtain
$$B = \frac{1}{1+\alpha^2+\beta^2}\begin{pmatrix} 1+\beta^2 & -\alpha\beta & -\alpha \\ -\alpha\beta & 1+\alpha^2 & -\beta \\ \alpha & \beta & 1 \end{pmatrix}\begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ \pm\dfrac{x_2y_3-y_2x_3}{\varrho} & \pm\dfrac{y_1x_3-x_1y_3}{\varrho} & \pm\dfrac{x_1y_2-y_1x_2}{\varrho} \end{pmatrix}, \tag{4.6}$$
where $(\alpha,\beta)$ is any solution of (3.5).

Remark 4.1 Note that $\|B_1\| = \|B_2\| = \|B_3\| = \varrho$, because $\tilde G^{-1}G$ is an orthogonal matrix.
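Formula (4.6), together with the direction of projection (4.8) found below, can be checked end to end. A sketch (Python/NumPy; the rows $A^1$, $A^2$ and the helper name `pohlke` are our own, not from the paper):

    import numpy as np

    def pohlke(A1, A2, sign_ab=+1, sign_cross=+1):
        """One of the (at most four) Pohlke matrices of A = [A1; A2; 0] by (4.6),
        with the corresponding direction of projection U of (4.8)."""
        n1, n2 = np.linalg.norm(A1), np.linalg.norm(A2)
        gamma = np.arccos(np.dot(A1, A2) / (n1 * n2))       # (3.21)
        lam = n1 / n2                                       # (3.21)
        s2 = np.sin(gamma) ** 2
        eta = (lam**2 + 1 + np.sqrt((lam**2 + 1)**2 - 4 * lam**2 * s2)) / (2 * lam**2 * s2)
        sgn = 1.0 if np.cos(gamma) >= 0 else -1.0           # the signum (3.9)
        a = sign_ab * np.sqrt(max(eta * lam**2 - 1, 0.0))   # (3.10)
        b = sign_ab * sgn * np.sqrt(max(eta - 1, 0.0))
        rho = n2 / np.sqrt(eta)                             # (3.22)
        M = np.array([[1 + b*b, -a*b,    -a],
                      [-a*b,    1 + a*a, -b],
                      [a,       b,       1.0]]) / (1 + a*a + b*b)
        G = np.vstack([A1, A2, sign_cross * np.cross(A1, A2) / rho])
        return M @ G, np.array([-a, -b, 1.0]), rho          # B of (4.6), U of (4.8)

    A1 = np.array([2.0, 0.0, 1.0])      # sample rows of A (rank 2, oblique case)
    A2 = np.array([0.0, 1.0, 1.0])
    B, U, rho = pohlke(A1, A2)
    assert np.allclose(B.T @ B, rho**2 * np.eye(3))   # orthogonal columns, norm rho
    A = np.vstack([A1, A2, np.zeros(3)])
    for i in range(3):                                # each B_i - A_i parallel to U
        assert np.allclose(np.cross(B[:, i] - A[:, i], U), 0.0)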

The direction of projection. It is now easy to find the direction of the projection corresponding to the two Pohlke matrices (one with $+\varrho$ and the other with $-\varrho$) given by formula (4.6) for a fixed solution $(\alpha,\beta)$ of system (3.5). Indeed, it is sufficient to write explicitly a column of $B - A$. For simplicity, we compute $(1+\alpha^2+\beta^2)(B_3 - A_3)$ for the choice $+\varrho$:
$$\begin{pmatrix} (1+\beta^2)\,x_3 - \alpha\beta\, y_3 - \alpha\,\dfrac{x_1y_2-y_1x_2}{\varrho} \\[1mm] -\alpha\beta\, x_3 + (1+\alpha^2)\,y_3 - \beta\,\dfrac{x_1y_2-y_1x_2}{\varrho} \\[1mm] \alpha x_3 + \beta y_3 + \dfrac{x_1y_2-y_1x_2}{\varrho} \end{pmatrix} - \begin{pmatrix} (1+\alpha^2+\beta^2)\,x_3 \\ (1+\alpha^2+\beta^2)\,y_3 \\ 0 \end{pmatrix} = \nu\begin{pmatrix} -\alpha \\ -\beta \\ 1 \end{pmatrix}, \tag{4.7}$$
with $\nu = \alpha x_3 + \beta y_3 + \varrho^{-1}(x_1y_2 - y_1x_2)$; the computation for the choice $-\varrho$ is analogous. This means that the direction of projection is given by the column vector
$$U = \begin{pmatrix} -\alpha \\ -\beta \\ 1 \end{pmatrix}, \tag{4.8}$$
and thus the two Pohlke matrices given by (4.6) are conjugate.

Remark 4.2 Let $B$ be the Pohlke matrix given by (4.6). Then, by (4.8), $B$ corresponds to the orthogonal projection onto the plane $\{z=0\}$ iff $(\alpha,\beta) = (0,0)$ iff $(\gamma,\lambda) = (\pi/2, 1)$ iff $A^1$, $A^2$ are nonzero, orthogonal rows of equal norm, as we have already seen in Prop. 1.2.

Multiplicity of Pohlke matrices. Having the expression (4.6), it is clear that there are at most four different Pohlke matrices. On the other hand, still from (4.6) and taking into account Lemma 3.2, we see that for $B^3$ we have the following possibilities:
$$B^3 = \pm\frac{\alpha A^1 + \beta A^2}{1+\alpha^2+\beta^2} \pm \frac{A^1\wedge A^2}{\varrho\,(1+\alpha^2+\beta^2)}, \tag{4.9}$$
where now $(\alpha,\beta)$ is a fixed solution of system (3.5). Thus, since $A^1$, $A^2$ are linearly independent, if $(\alpha,\beta) \neq (0,0)$ we find that
$$B^3 = \pm C \pm D, \tag{4.10}$$
with $C$, $D$ two linearly independent rows. Hence, taking into account Prop. 1.2 and Lemma 3.2, in the case of non-orthogonal projection there are exactly four distinct possibilities for $B^3$, i.e., four different Pohlke matrices. Besides, by (4.8), for every solution $(\alpha,\beta) \neq (0,0)$ of system (3.5) there are two distinct, conjugate Pohlke matrices.

Examples of Pohlke matrices.

(1) Let us consider a matrix
$$A = \begin{pmatrix} A^1 \\ A^2 \\ 0 \end{pmatrix} \quad \text{with} \quad \|A^1\| = \|A^2\| = 4 \quad \text{and} \quad A^1 \perp A^2. \tag{4.11}$$
Thus, by Proposition 1.2, there are only two Pohlke matrices. Namely, by (1.8) and (1.10),
$$B = \begin{pmatrix} A^1 \\ A^2 \\ \tfrac{1}{4}\, A^1\wedge A^2 \end{pmatrix} \tag{4.12}$$
or
$$B = \begin{pmatrix} A^1 \\ A^2 \\ -\tfrac{1}{4}\, A^1\wedge A^2 \end{pmatrix}. \tag{4.13}$$

(2) Let us consider now a matrix
$$A = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 0 & 0 & 0 \end{pmatrix} \tag{4.14}$$
for which the hypotheses of Proposition 1.2 are not verified. We have
$$\lambda = \frac{\|A^1\|}{\|A^2\|}, \qquad \cos\gamma = \frac{A^1\cdot A^2}{\|A^1\|\,\|A^2\|}, \tag{4.15}$$
and then
$$\eta = \frac{\lambda^2+1+\sqrt{(\lambda^2+1)^2 - 4\lambda^2\sin^2\gamma}}{2\lambda^2\sin^2\gamma}, \qquad \varrho = \frac{\|A^2\|}{\sqrt{\eta}}. \tag{4.16}$$
Thus, applying Lemma 3.2, we find the two solutions
$$(\alpha,\beta) = \pm\Big(\sqrt{\eta\lambda^2-1},\ \operatorname{sgn}(\cos\gamma)\,\sqrt{\eta-1}\Big). \tag{4.17}$$
With these two solutions for $(\alpha,\beta)$ we can determine $\tilde G^{-1}$ from (4.3). We have:
$$\tilde G^{-1} = \frac{1}{\varrho\,(1+\alpha^2+\beta^2)}\begin{pmatrix} 1+\beta^2 & -\alpha\beta & -\alpha \\ -\alpha\beta & 1+\alpha^2 & -\beta \\ \alpha & \beta & 1 \end{pmatrix} \quad \text{(i)} \tag{4.18}$$
or, replacing $(\alpha,\beta)$ with $(-\alpha,-\beta)$,
$$\tilde G^{-1} = \frac{1}{\varrho\,(1+\alpha^2+\beta^2)}\begin{pmatrix} 1+\beta^2 & -\alpha\beta & \alpha \\ -\alpha\beta & 1+\alpha^2 & \beta \\ -\alpha & -\beta & 1 \end{pmatrix} \quad \text{(ii)} \tag{4.19}$$
Having $A^1\wedge A^2$ and $\varrho$, for the matrix $G$ we find:
$$G = \begin{pmatrix} A^1 \\ A^2 \\ \varrho^{-1}\, A^1\wedge A^2 \end{pmatrix} \quad \text{(I)} \tag{4.20}$$

or
$$G = \begin{pmatrix} A^1 \\ A^2 \\ -\varrho^{-1}\, A^1\wedge A^2 \end{pmatrix} \quad \text{(II)} \tag{4.21}$$
Combining the matrices (4.18), (4.19), (4.20), (4.21) we have four possibilities, say $B^{(i,I)}$, $B^{(i,II)}$, $B^{(ii,I)}$, $B^{(ii,II)}$, for the Pohlke matrix $B = \varrho\,\tilde G^{-1} G$; computing the product for each of the four combinations gives the four matrices (4.22)-(4.25). Note that the couples of conjugate Pohlke matrices are $B^{(i,I)}, B^{(i,II)}$ and $B^{(ii,I)}, B^{(ii,II)}$, and the corresponding directions of projection are
$$U^{(i)} = \begin{pmatrix} -\alpha \\ -\beta \\ 1 \end{pmatrix} \quad \text{and} \quad U^{(ii)} = \begin{pmatrix} \alpha \\ \beta \\ 1 \end{pmatrix}. \tag{4.26}$$
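The enumeration of example (2) can be repeated for any rank-2 input using the hypothetical helper `pohlke` from the sketch after (4.6); the sample rows below are again our own:

    import numpy as np
    # Uses the function `pohlke` from the sketch following formula (4.6).

    A1 = np.array([1.0, 2.0, 0.0])      # sample rows: neither orthogonal
    A2 = np.array([0.0, 1.0, 3.0])      # nor of equal norm (oblique case)

    results = {}
    for s_ab in (+1, -1):               # choice (i)/(ii): the sign of (alpha, beta)
        for s_cr in (+1, -1):           # choice (I)/(II): the sign in (4.20)/(4.21)
            B, U, rho = pohlke(A1, A2, s_ab, s_cr)
            results[(s_ab, s_cr)] = (B, U)

    # Four distinct Pohlke matrices, conjugate in pairs: U depends only on
    # the choice of (alpha, beta), in accordance with (4.26).
    assert np.allclose(results[(+1, +1)][1], results[(+1, -1)][1])
    assert np.allclose(results[(-1, +1)][1], results[(-1, -1)][1])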

References

[1] Beskin, N.: An analogue of Pohlke-Schwarz's theorem in central axonometry. Recueil Mathématique [Mat. Sbornik] N.S., 19(61) (1946).
[2] Campedelli, L.: Lezioni di Geometria, Vol. II, Parte I. Cedam, Padova, 1972.
[3] Cayley, A.: On a problem of projection. Quarterly Journal of Pure and Applied Mathematics, XIII (1875).
[4] Emch, A.: Proof of Pohlke's theorem and its generalizations by affinity. American Journal of Mathematics, 40 (1918).
[5] Eves, H.: Elementary Matrix Theory. Dover Publications, New York.
[6] Klein, F.: Elementary Mathematics from an Advanced Standpoint: Geometry. Dover Publications, 1949.
[7] Loria, G.: Storia della geometria descrittiva. Hoepli, Milano, 1921.
[8] Manara, C.F.: L'aspetto algebrico di un fondamentale teorema di geometria descrittiva. Periodico di Matematiche, Serie IV, XXXII (1954).
[9] Pohlke, K.W.: Lehrbuch der Darstellenden Geometrie, Part I. Berlin, 1860.
[10] Schwarz, H.A.: Elementarer Beweis des Pohlkeschen Fundamentalsatzes der Axonometrie. Crelle's Journal, LXIII (1864).
[11] Steinhaus, H.: Mathematical Snapshots. Oxford University Press, 1969.
[12] Struik, D.J.: Lectures on Analytic and Projective Geometry. Addison-Wesley, 1953.

Dipartimento di Culture del Progetto
Università IUAV di Venezia
Dorsoduro 2196, Cotonificio Veneziano
30123 Venezia, ITALY
E-mail address: manfrin@iuav.it
