OPTIMAL ALGORITHMS OF GRAM-SCHMIDT TYPE


JAMES B. WILSON

Abstract. Three algorithms of Gram-Schmidt type are given that produce orthogonal decompositions of finite $d$-dimensional symmetric, alternating, or Hermitian forms over division rings. The first uses $d^3/3 + O(d^2)$ products in the division ring. Next, that algorithm is adapted in two new directions. One is a nearly optimal sequential algorithm whose complexity matches the complexity of matrix multiplication. The other is a parallel algorithm.

Date: May 8.
Key words and phrases. bilinear form, Hermitian form, polynomial-time algorithm, parallel NC algorithm.

1. Introduction

The classic Gram-Schmidt orthogonalization process returns an orthonormal basis of an inner product space. Here we generalize that process in the appropriate fashion to general symmetric, alternating, and other classical forms over division rings $K$; cf. [1, Section III.2; 6, Section V.7]. We group all classical forms under the title of Hermitian. So a Hermitian $K$-form is a function $b : V \times V \to K$ on a finite-dimensional left $K$-vector space $V$ where $b$ is $K$-linear in the first variable and, for some $s \in K$ and some unital anti-isomorphism $\sigma$ of $K$, for all $u, v \in V$,

$b(u, v) = s\,b(v, u)^{\sigma}$.

Hermitian $K$-forms capture the usual symmetric forms (where $s = 1$ and $\sigma = 1$), skew-symmetric forms (where $s = -1$ and $\sigma = 1$), as well as traditional Hermitian forms.

Call a basis $X$ for $V$ standard if every $x \in X$ has at most one $y_x \in X$ with $b(x, y_x) \neq 0$. We develop three algorithms to find standard bases of finite-dimensional Hermitian forms. The first is largely instructional (though it has practical uses in small dimensions). Indeed, this first algorithm also serves as proof that standard bases exist. Existence of standard bases over fields is well-known, cf. [1, Theorem 3.7], but many proofs begin by classifying forms into subclasses, e.g. symmetric, skew-symmetric, etc., which depend on properties of $(K, s, \sigma)$. Consequently, algorithms to find standard bases also divide into cases and so they are not easily adapted to general inputs. Our uniform method is as efficient as case-by-case methods and in our judgment is straightforward to implement. The second and third algorithms adapt the first method to meet practical needs, first with an asymptotically nearly optimal sequential method and second with a parallel variant. This ensures that Gram-Schmidt type algorithms are not the bottleneck in optimizing and parallelizing complex algorithms that depend internally on finding standard bases. Applications that depend on standard bases are quite varied, e.g. identifying classical groups [5, Section 3], computing with algebras with involutions [2, Section 4.2], and identifying central products of nilpotent groups and Lie algebras [16].
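To make the defining identity concrete, here is a small numerical check (our illustration, not part of the paper) for the familiar case $K = \mathbb{C}$ with $\sigma$ complex conjugation and $s = 1$; the matrix $B$ and vectors $u, v$ are arbitrary examples.

```python
import numpy as np

# Illustration (ours): K = C, sigma = complex conjugation, s = 1.
# The Hermitian condition b(u, v) = s * b(v, u)^sigma, read on the
# Gram matrix B (see Section 2), becomes B = s * B^{sigma t}.
B = np.array([[2.0, 1 + 1j],
              [1 - 1j, 3.0]])
s = 1.0
assert np.allclose(B, s * B.conj().T)        # B is (1, conj)-Hermitian

# b(u, v) = u B v^{sigma t}, linear in the first argument:
u = np.array([1.0, 2.0])
v = np.array([0.0, 1 + 2j])
b_uv = u @ B @ v.conj()
b_vu = v @ B @ u.conj()
assert np.isclose(b_uv, s * np.conj(b_vu))   # b(u,v) = s * b(v,u)^sigma
```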

We prove:

Theorem 1. There are deterministic algorithms that, given a $d$-dimensional Hermitian $K$-form $b : V \times V \to K$, return a standard basis for $b$.

(i) If $r$ is the dimension of the radical of $b$, then the first algorithm uses $d - r$ inversions, $\binom{d+1}{2}$ equality tests, $d^3/3 - r^3/3 + O(d^2)$ additions, $d^3/3 - r^3/3 + O(d^2)$ multiplications in $K$, and $d^3/6 - r^3/6 + O(d^2)$ applications of $s$ and $\sigma$.

(ii) The second algorithm uses $O(d^{\omega})$-time in an arithmetic model (i.e. operations in $K$ are constant time), where $\omega$ is an upper bound on the exponent of matrix operations over $K$.

(iii) The third algorithm assumes $K$ is a field, uses $d^{O(1)}$ threads of parallel execution, and returns in time $(\log_2 d)^{O(1)}$; so it is in NC.

The idea behind Theorem 1(i) is shared by many generalizations of Gram-Schmidt. For symmetric forms it goes back at least to Smiley's Algebra of Matrices [12, Section 12.2] and is adequately described as symmetric elimination. Dax and Kaniel [4] give a detailed analysis of such an algorithm for symmetric forms (i.e. $\sigma = 1$, $s = 1$, and consequently $K$ is commutative). The algorithm of Theorem 1(i) has the same complexity as that of Dax and Kaniel (compare [4, p. 226]), but it is likely that their carefully considered techniques for the symmetric case improve upon the constants hidden by the asymptotic measure. It may be possible to adapt some of their methods to our more general situation. Holt and Roney-Dougal [5] use a similar idea in case-by-case algorithms for Hermitian forms over finite fields. A predecessor to Theorem 1(ii) that applies to symmetric forms over fields was given in [3].

Theorem 1(i) is suitable for small dimensions, and so the algorithms for Theorem 1 parts (ii) and (iii) focus on the asymptotic complexity of finding a standard basis for general Hermitian forms. Our proof of Theorem 1(ii) depends on the bound $O(d^{\omega})$ on the complexity of matrix multiplication, matrix inversion, and the cost to solve systems of linear equations of dimension $d$ over $K$. Asymptotic improvements over the natural Gaussian methods (where $\omega = 3$) are known. E.g. Strassen [14] invented routines to multiply, invert, and find null-spaces of matrices in $M_2(R)$ ($R$ an arbitrary ring) which involve 7 products in $R$ rather than the usual 8 of Gaussian methods. By embedding $M_d(R)$ into $M_{2^k}(R) = M_2(M_{2^{k-1}}(R))$, one can apply Strassen's $(2 \times 2)$-strategies recursively and so produce algorithms to multiply, invert, and compute nullspaces in arbitrary dimension faster than the typical Gaussian methods, specifically $\omega \leq \log_2 7$ over arbitrary rings [14, Facts 2, 3, & 4]. There are many known improvements over Strassen's original methods and it may be the case that fast matrix operations are influenced by the properties of $K$. Indeed, improvements along the lines of Coppersmith-Winograd, Cohn-Umans, and others increasingly depend on rich properties of algebraic geometry and tensor products which may not be immediately applicable to noncommutative rings; see [8, Chapter 11]. The final concern is that improved complexity for matrix operations often comes at the cost of involved data structures and pre-processing steps which can impose undesirable overhead in small dimensions. The exact cross-over dimension is an issue of ongoing research; at present the Strassen-type methods have been used to make practical improvements for dimensions as low as 12 [9, p. 5].

After a few preliminaries in Section 2, we prove Theorem 1(i) in Section 3, Theorem 1(ii) in Section 4, and Theorem 1(iii) in Section 5.
The proof that the three algorithms each return standard bases is largely unchanged from the proof for part (i). Hence, Sections 4 and 5 focus on the performance of the respective algorithms.
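Since Strassen's $(2 \times 2)$-strategy recurs throughout the discussion above, here is a sketch of the single-level step (our illustration; the function name is ours). Note that each product $M_i$ keeps $A$-factors on the left and $B$-factors on the right, which is why the scheme works over arbitrary, possibly noncommutative, rings.

```python
def strassen_2x2(A, B):
    """One level of Strassen's (2x2)-strategy: 7 block products instead
    of the 8 used by Gaussian methods.  Entries may come from any ring
    (including blocks of matrices), since every M_i multiplies a sum of
    A-entries by a sum of B-entries, in that order.  A sketch, not a
    tuned implementation; A and B are 2x2 nested lists."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

from fractions import Fraction as F
A = [[F(1), F(2)], [F(3), F(4)]]
B = [[F(5), F(6)], [F(7), F(8)]]
assert strassen_2x2(A, B) == [[F(19), F(22)], [F(43), F(50)]]
```

Applying the step recursively through $M_{2^k}(R) = M_2(M_{2^{k-1}}(R))$ yields the $\omega \leq \log_2 7$ bound quoted above.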

The algorithms for parts (ii) and (iii) appear identical, except that the algorithm for part (iii) takes advantage of a different computational platform, a parallel computer. This subtle but important difference is the main topic of Section 5. Finally, Section 6 deals with some common post-processing steps that allow for canonical returns in certain special cases.

2. Preliminaries

Throughout, $K$ is a division ring, $V$ is a finite-dimensional left $K$-vector space, $M_d(K)$ denotes the $(d \times d)$-matrices over $K$, and $b : V \times V \to K$ is a function where, for all $u, v, w \in V$ and all $\alpha \in K$,

$b(u + \alpha w, v) = b(u, v) + \alpha b(w, v)$  &  $b(v, u) = s\,b(u, v)^{\sigma}$.

These assumptions imply that $b(u, v + \alpha w) = b(u, v) + b(u, w)\alpha^{\sigma}$. If for some $u, v \in V$, $b(u, v) = \alpha \neq 0$, then $1 = b(\alpha^{-1}u, v) = s\,b(\alpha^{-1}u, v)^{\sigma^2} s^{\sigma} = ss^{\sigma}$. If $b(V, V) = 0$, we are free to choose $s = 1$. In any case, $ss^{\sigma} = 1$.

We presume to know an ordered basis $X = \{x_1, \dots, x_d\}$ for $V$ and so we define the usual Gram matrix $B \in M_d(K)$ of $b : V \times V \to K$ by setting $B_{ij} = b(x_i, x_j)$. If we identify $V$ with $K^d$ using this fixed basis $X$, then $b(u, v) = uBv^{\sigma t}$ (apply $\sigma$ entrywise, then transpose). The radical of $b$ is the nullspace of $B$. The Hermitian assumption on $b$ equates to $B = sB^{\sigma t}$; we call such a $B$ $(s, \sigma)$-Hermitian. The set of $(s, \sigma)$-Hermitian matrices will be denoted $H_d(K, s, \sigma)$, and $H_d^{\times}(K, s, \sigma)$ denotes the nonsingular $(s, \sigma)$-Hermitian matrices. Given an invertible matrix $A$, $b(uA, vA) = u(ABA^{\sigma t})v^{\sigma t}$, and we say $B$ is isometric to $ABA^{\sigma t}$. To write $B$ in a standard basis is therefore to produce an invertible matrix $A$ where $ABA^{\sigma t}$ has at most one nonzero entry in every row and column (the rows of $A$ are a standard basis). It is not always possible to diagonalize $B$ by isometries, e.g. $B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, but we can permute a standard basis so that $B$ is

(2.1)  $B_1 \oplus \cdots \oplus B_m$, where each $B_i$ is either $(\alpha_i)$ or $\begin{pmatrix} 0 & \alpha_i \\ s\alpha_i^{\sigma} & 0 \end{pmatrix}$, for some $\alpha_i \in K$.

The value of each $\alpha_i$ can often be adjusted, and indeed we can also swap $(1 \times 1)$-blocks for $(2 \times 2)$-blocks and vice-versa to arrive at specific canonical forms (e.g. to exhibit the signature or Witt index of the form); details in Section 6.

Our algorithms assume exact operations in $K$, such as in algebraic number fields, rational quaternion division rings, or finite fields. We make no claims about their numerical stability in fields with floating-point approximations; for that see [10] instead. Each of our algorithms allows the user to choose a computational encoding for $K$, such as by polynomials or matrices over a field or by structure constants; see [11]. Also, $b$ can be encoded as a black-box; however, even to find a nonzero value $b(u, v)$ can require $\binom{d+1}{2}$ evaluations of $b$, and so it simplifies our description to assume that $b$ is input by a $(d \times d)$-matrix $B = sB^{\sigma t}$. If $s$ and $\sigma$ are not specified with $B$, then suitable values can be detected during the execution of the algorithms for Theorem 1. We either prove that $B = 0$ (so $s = 1$ and $\sigma = 1$ suffice) or we find $u, v \in V$ such that $b(u, v) = uBv^{\sigma t} \neq 0$. The first such pair $u, v \in V$ determines $\sigma$ by $\alpha^{\sigma} = b(u, v)^{-1} b(u, \alpha v)$, and $s = b(v, u)\,b(u, v)^{-\sigma}$. We write the algorithms as though $s$ and $\sigma$ are known. In many situations multiplying by $s$ is done quite efficiently, e.g. if $s = \pm 1$; likewise, $\sigma$ might be implemented as a sign-change of some constituent of $\alpha$, e.g. $a + bi \mapsto a - bi$. Therefore, we treat the application of $s$ and $\sigma$ as separate from other operations in $K$.
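The congruence action and the "standard" condition are easy to see in code. A small sketch (ours, specialized to $\sigma = 1$ over $\mathbb{R}$; the helper name is ours) using the alternating example above:

```python
import numpy as np

# Sketch (ours): an invertible A replaces the Gram matrix B by
# A B A^{sigma t}; over R with sigma = 1 this is the congruence A B A^T.
# "Standard" means every row and column of the Gram matrix has at most
# one nonzero entry.
def is_standard(D, tol=1e-12):
    nonzero = np.abs(D) > tol
    return (nonzero.sum(axis=0) <= 1).all() and (nonzero.sum(axis=1) <= 1).all()

B = np.array([[0.0, 1.0], [-1.0, 0.0]])   # alternating: not diagonalizable
A = np.array([[2.0, 0.0], [0.0, 1.0]])    # rescale the first basis vector
D = A @ B @ A.T                           # isometric copy [[0, 2], [-2, 0]]
assert is_standard(B) and is_standard(D)
```

Both Gram matrices are standard, illustrating that a $(2 \times 2)$-block as in (2.1) is the best one can do for alternating forms.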

3. Hermitian Elimination

We start with a simple method which is not asymptotically optimal, but captures all Hermitian forms at once. As with Gaussian elimination, the idea is to understand the effect of elementary matrices $A$ applied to $B$ as $ABA^{\sigma t}$ and then select a series of steps that offers few redundancies.

HermElim( $B \in H_d(K, s, \sigma)$ )

(I) Base case. If $d \leq 1$ return $(B, I_d)$.

(II) Anisotropic case. If $B_{11} \neq 0$, then create $C = sC^{\sigma t} \in M_{d-1}(K)$ using the following straight-line programs:

$V_{i-1} = -B_{i1}B_{11}^{-1}$,  $(2 \leq i \leq d)$;
$C_{(i-1)(j-1)} = V_{i-1}B_{1j} + B_{ij}$,  $(2 \leq i \leq j \leq d)$;
$C_{(j-1)(i-1)} = sC_{(i-1)(j-1)}^{\sigma}$,  $(2 \leq i < j \leq d)$.

$(C', A') = \text{HermElim}(C)$. Return

$\left( (B_{11}) \oplus C',\ \begin{pmatrix} 1 & 0 \\ A'V & A' \end{pmatrix} \right)$.

(III) Isotropic case. Else, if $B_{j1} \neq 0$ for some $j > 1$, then use a permutation matrix $P_{2j}$ to swap rows 2 and $j$ and columns 2 and $j$, i.e. $C = P_{2j}BP_{2j}^{\sigma t}$ has $C_{11} = 0$ and $C_{21} \neq 0$. Now produce $D = sD^{\sigma t} \in M_{d-2}(K)$ using the following straight-line programs:

$X_{i-2} = -C_{i1}C_{21}^{-1}$,  $(3 \leq i \leq d)$;
$Y_{i-2} = -(C_{i2} + X_{i-2}C_{22})(C_{21}^{-1})^{\sigma}s^{\sigma}$,  $(3 \leq i \leq d)$;
$D_{(i-2)(j-2)} = Y_{i-2}C_{1j} + X_{i-2}C_{2j} + C_{ij}$,  $(3 \leq i \leq j \leq d)$;
$D_{(j-2)(i-2)} = sD_{(i-2)(j-2)}^{\sigma}$,  $(3 \leq i < j \leq d)$.

$(D', A') = \text{HermElim}(D)$. If $C_{22} = 0$, then return

$\left( \begin{pmatrix} 0 & 1 \\ s & 0 \end{pmatrix} \oplus D',\ \begin{pmatrix} C_{12}^{-1} & 0 & 0 \\ 0 & 1 & 0 \\ A'Y & A'X & A' \end{pmatrix} P_{2j} \right)$;

else $C_{22} \neq 0$, return

$\left( (-C_{12}C_{22}^{-1}C_{21}) \oplus (C_{22}) \oplus D',\ \begin{pmatrix} 1 & -C_{12}C_{22}^{-1} & 0 \\ 0 & 1 & 0 \\ A'Y & A'X & A' \end{pmatrix} P_{2j} \right)$.

(IV) Radical case. Else, $B = (0) \oplus C$, so let $(C', A') = \text{HermElim}(C)$ and return $\left( (0) \oplus C',\ \begin{pmatrix} 1 & 0 \\ 0 & A' \end{pmatrix} \right)$.

Proof of Theorem 1(i). The algorithm is HermElim. Given $B = sB^{\sigma t} \in M_d(K)$, HermElim($B$) returns a block-diagonal matrix $D$ whose blocks are as in (2.1), along with an invertible matrix $A$ such that $ABA^{\sigma t} = D$. In particular, the rows of $A$ are a standard basis for $b(u, v) = uBv^{\sigma t}$, where $u, v \in K^d$.

To see that the return is correct, if we enter case II then $B = \begin{pmatrix} B_{11} & sU^{\sigma t} \\ U & * \end{pmatrix}$ with $B_{11} = sB_{11}^{\sigma} \neq 0$. So the straight-line programs of case II implement the matrix products $V = -UB_{11}^{-1}$ and$^1$

(3.1)  $\begin{pmatrix} 1 & 0 \\ V & I_{d-1} \end{pmatrix} \begin{pmatrix} B_{11} & sU^{\sigma t} \\ U & * \end{pmatrix} \begin{pmatrix} 1 & 0 \\ V & I_{d-1} \end{pmatrix}^{\sigma t} = \begin{pmatrix} B_{11} & 0 \\ 0 & C \end{pmatrix}$.

In case III, there is a permutation matrix $P_{2j}$ where $C = P_{2j}BP_{2j}^{\sigma t}$ and $C_{11} = 0$ and $C_{12} = sC_{21}^{\sigma} \neq 0$. Thus the straight-line programs of case III implement the

matrix products $X = -UC_{21}^{-1}$, $Y = -(V + XC_{22})C_{12}^{-1}$, and

(3.2)  $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ Y & X & I_{d-2} \end{pmatrix} \begin{pmatrix} 0 & C_{12} & sU^{\sigma t} \\ C_{21} & C_{22} & sV^{\sigma t} \\ U & V & W \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ Y & X & I_{d-2} \end{pmatrix}^{\sigma t} = \begin{pmatrix} 0 & C_{12} & 0 \\ C_{21} & C_{22} & 0 \\ 0 & 0 & D \end{pmatrix}$.

The final steps in case III are to recursively modify $D$ and then either diagonalize $\begin{pmatrix} 0 & C_{12} \\ C_{21} & C_{22} \end{pmatrix}$ when $C_{22} \neq 0$, or else scale the first row and column to produce the standard form $\begin{pmatrix} 0 & 1 \\ s & 0 \end{pmatrix}$. As each recursive step generates an isometric copy of the given matrix, the final result is isometric to the original and also in the required standard form.

If the lone objective is to rewrite $B$ in a standard form $D$ as in (2.1), then the algorithm may omit the construction of the matrix $A$ with $ABA^{\sigma t} = D$. Otherwise, the desired matrix $A$ is produced recursively, specifically, in cases II and III respectively:

(3.3)  $A = \begin{pmatrix} 1 & 0 \\ A'V & A' \end{pmatrix}$,  $A = S \begin{pmatrix} 1 & -C_{12}T & 0 \\ 0 & 1 & 0 \\ A'Y & A'X & A' \end{pmatrix} P_{2j}$,

where $T = 0$ and $S = \operatorname{diag}(C_{12}^{-1}, 1, I_{d-2})$ if $C_{22} = 0$, and otherwise $T = C_{22}^{-1}$ and $S = I_d$.

Now we consider the time complexity. There are at most $d$ equality tests to decide on the correct case to enter. The straight-line programs of case II amount to $\binom{d}{2}$ additions or subtractions, $(d-1) + \binom{d}{2}$ multiplications, $1 + \binom{d-1}{2}$ applications of $s$ and $\sigma$, and one inversion in $K$. The straight-line programs of case III involve at most 2 inversions, at most $3(d-2) + 2\binom{d-1}{2}$ products, at most $(d-2) + \binom{d-1}{2}$ additions or subtractions, and $\binom{d-1}{2}$ applications of $s$ and $\sigma$. There are more operations of each kind used by two sequential applications of case II than by one application of case III, so in the worst instances only cases I, II, and IV are used. Case IV uses only equality tests, and so in the worst instances case IV is entered only at the tail end of the algorithm.

Let $r$ be the dimension of the radical (the nullspace of $B$). The total number of equality tests is at most $\binom{d+1}{2}$. There are at most $d - r$ inversions, $\sum_{i=r}^{d} \left(1 + \binom{i-1}{2}\right) = d^3/6 - r^3/6 + O(d^2)$ applications of $s$ and $\sigma$, and at most $\sum_{i=r}^{d} \binom{i}{2} + O(i) = d^3/6 - r^3/6 + O(d^2)$ additions, likewise with multiplications.

If in addition a standard basis is desired, then the algorithm must also compute the matrix $A$ such that $ABA^{\sigma t} = D$ (the rows of $A$ are a standard basis for $B$). If performed naïvely this could amount to $d$ products of $(d \times d)$-matrices. Instead, observe in (3.3) that $A$ can be produced by a vector-matrix product $A'V$ of dimension $d - 1$ in case II, and two vector-matrix products $A'X$ and $A'Y$ of dimension $d - 2$ and a permutation in case III. Furthermore, at each iteration the matrix $A'$ has at most $\binom{d}{2}$ nonzero, non-identity entries, so the number of multiplications required to perform a vector-matrix product is at most $\binom{d}{2}$. The total number of multiplications to produce the final $A$ is $\sum_{i=r}^{d} \binom{i}{2} = d^3/6 - r^3/6 + O(d^2)$. Likewise with additions. So the total number of products to produce $(D, A)$ is $d^3/3 - r^3/3 + O(d^2)$. $\square$

$^1$ Though we have indicated $ABA^{\sigma t} = \operatorname{diag}(B_{11}, C)$ in (3.1), the matrix $C$ is determined already by $AB = \begin{pmatrix} B_{11} & sU^{\sigma t} \\ 0 & C \end{pmatrix}$. Furthermore, $C = sC^{\sigma t}$. This makes for shorter and fewer straight-line programs, compared to a direct encoding of the matrix products. A similar situation applies to (3.2).
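To make the elimination tangible, here is a runnable sketch (ours, and deliberately simplified: symmetric forms over $\mathbb{Q}$, so $s = 1$, $\sigma = 1$, and every standard basis may be taken orthogonal, making $(2 \times 2)$-blocks unnecessary). It is iterative rather than recursive, but the pivoting mirrors cases II-IV.

```python
from fractions import Fraction

def symmetric_elim(B):
    """Simplified sketch of HermElim for symmetric forms over Q
    (s = 1, sigma = 1).  Returns (D, A) with A invertible and
    A * B * A^T = D diagonal; the rows of A are a standard basis."""
    d = len(B)
    B = [[Fraction(x) for x in row] for row in B]          # work on a copy
    A = [[Fraction(int(i == j)) for j in range(d)] for i in range(d)]

    def add_row(k, i, c):            # basis change x_k <- x_k + c*x_i:
        for j in range(d):           # row k += c * row i (of B and A), then
            B[k][j] += c * B[i][j]   # column k += c * column i of B, which
            A[k][j] += c * A[i][j]   # realizes B <- E B E^T, E = I + c e_k e_i^T
        for j in range(d):
            B[j][k] += c * B[j][i]

    for k in range(d):
        if B[k][k] == 0:
            i = next((i for i in range(k + 1, d) if B[i][k] != 0), None)
            if i is None:
                continue             # radical direction: row/column k is zero
            if B[i][i] != 0:         # an anisotropic vector exists: swap it in
                B[k], B[i] = B[i], B[k]
                A[k], A[i] = A[i], A[k]
                for row in B:
                    row[k], row[i] = row[i], row[k]
            else:                    # isotropic pair: x_k <- x_k + x_i gives
                add_row(k, i, Fraction(1))   # B[k][k] = 2*B[i][k] != 0 (char 0)
        for i in range(k + 1, d):    # clear column k below the pivot
            if B[i][k] != 0:
                add_row(i, k, -B[i][k] / B[k][k])
    return B, A

B = [[0, 1], [1, 0]]
D, A = symmetric_elim(B)   # D = diag(2, -1/2); rows of A give the basis
```

The char-$\neq 2$ shortcut in the isotropic branch is exactly where the general algorithm must instead keep a $(2 \times 2)$-block, e.g. for alternating forms.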

4. Nearly optimal sequential methods

The traditional algorithms to multiply or invert matrices, or to solve linear systems of equations of dimension $d$, are not the most efficient methods for large dimensions. The various newer methods use $O(d^{\omega})$ operations in $K$ for some $2 \leq \omega \leq \log_2 7$ [13, Theorem 2; 15]. Here we prove the same complexity for finding a decomposition as in (2.1). Bürgisser et al. [3] show the cost of finding an orthogonal basis of a symmetric $K$-form of dimension $d$, $K$ a field, is $\Omega(d^{\omega})$, provided that $\omega > 2$. Hence, under the assumption that $\omega > 2$ (at present the best known bounds give $2 < \omega < 2.38$), the algorithm of Theorem 1(ii) is best possible in general.

Our algorithm is similar to Strassen's matrix multiplication algorithm in that we work with block elimination inspired by the case of dimension $d = 2$. First we can compute the radical of the form in one initial pass using the algorithm FindRadical.

FindRadical( $B \in H_d(K, s, \sigma)$ )
begin
  Compute an invertible $A$ such that $AB = \begin{pmatrix} B_1 \\ 0 \end{pmatrix}$, where $B_1$ has full rank;
  return $\left( ABA^{\sigma t} = \begin{pmatrix} B_1' & 0 \\ 0 & 0 \end{pmatrix},\ A \right)$;
end.

If $B$ is nondegenerate then we treat $B = \begin{pmatrix} B_0 & sU^{\sigma t} \\ U & V \end{pmatrix}$, where $B_0 \in H_{d/2}(K, s, \sigma)$. Similar to our Hermitian elimination method we consider two cases. The block anisotropic case assumes $B_0 \neq 0$, and consequently $B_0$ has positive rank. In the block isotropic case $B_0 = 0$, but since $B$ is nondegenerate we know $U$ has full rank. In either case the existence of positive rank matrices permits us to eliminate values in rows and columns elsewhere in $B$.

BlockAnisotropicElim( $B \in H_d(K, s, \sigma)$ : $B = \begin{pmatrix} B_0 & sU^{\sigma t} \\ U & V \end{pmatrix}$, $B_0 \neq 0$ )
begin
  $\left( \begin{pmatrix} B_1 & 0 \\ 0 & 0 \end{pmatrix},\ A_0 \right) = \text{FindRadical}(B_0)$;
  $(U_1\ U_2) = UA_0^{\sigma t}$, where $U_1 \in M_{d/2 \times e}(K)$, $e = \dim B_1$;  /* See (4.1) */
  $Z = V - U_1B_1^{-1}sU_1^{\sigma t}$;  /* See (4.2) */
  $(D_1, A_1) = \text{BlockAnisotropicElim}(B_1)$;
  $(D_2, A_2) = \text{BlockIsotropicElim}\left( \begin{pmatrix} 0 & sU_2^{\sigma t} \\ U_2 & Z \end{pmatrix} \right)$;  /* See (4.3) */
  return $\left( \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix},\ \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} \begin{pmatrix} I_e & 0 & 0 \\ 0 & I_{d/2-e} & 0 \\ -U_1B_1^{-1} & 0 & I_{d/2} \end{pmatrix} \begin{pmatrix} A_0 & 0 \\ 0 & I_{d/2} \end{pmatrix} \right)$;
end.

To understand that BlockAnisotropicElim returns correctly, we note that the steps in the algorithm implement the following matrix products, as indicated in the comments. First, an invertible matrix is extracted from the block $B_0$ as follows.

(4.1)  $\begin{pmatrix} A_0 & 0 \\ 0 & I_{d/2} \end{pmatrix} \begin{pmatrix} B_0 & sU^{\sigma t} \\ U & V \end{pmatrix} \begin{pmatrix} A_0 & 0 \\ 0 & I_{d/2} \end{pmatrix}^{\sigma t} = \begin{pmatrix} A_0B_0A_0^{\sigma t} & s(UA_0^{\sigma t})^{\sigma t} \\ UA_0^{\sigma t} & V \end{pmatrix}$.
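The extraction step (4.1) is exactly a call to FindRadical. Here is a short sketch of that subroutine (ours, specialized to symmetric forms over $\mathbb{Q}$ with $\sigma = 1$; the function name is ours), using the standard trick of row-reducing the augmented matrix $[B \mid I]$ to record the row operations:

```python
import sympy as sp

def find_radical(B):
    """Sketch (ours) of FindRadical over Q with s = 1, sigma = 1.
    Row-reduce [B | I] to recover an invertible A with A*B in echelon
    form (zero rows at the bottom); then A*B*A^T = diag(B1, 0) with
    B1 nonsingular of size rank(B)."""
    d = B.rows
    R = B.row_join(sp.eye(d)).rref()[0]   # [rref(B) | A], with A*B = rref(B)
    A = R[:, d:]                          # the recorded row operations
    return A * B * A.T, A

B = sp.Matrix([[1, 1], [1, 1]])           # rank 1: radical of dimension 1
D, A = find_radical(B)
assert D == sp.Matrix([[1, 0], [0, 0]])
```

Since $ABA^{T}$ is again symmetric and its last $d - r$ rows are zero, its last $d - r$ columns are zero too, which is why one echelonization suffices.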

Next, $B_1$ is used to eliminate all the rows and columns on which it is incident.

(4.2)  $\begin{pmatrix} I_e & 0 & 0 \\ 0 & I_{d/2-e} & 0 \\ -U_1B_1^{-1} & 0 & I_{d/2} \end{pmatrix} \begin{pmatrix} B_1 & 0 & sU_1^{\sigma t} \\ 0 & 0 & sU_2^{\sigma t} \\ U_1 & U_2 & V \end{pmatrix} \begin{pmatrix} I_e & 0 & 0 \\ 0 & I_{d/2-e} & 0 \\ -U_1B_1^{-1} & 0 & I_{d/2} \end{pmatrix}^{\sigma t} = \begin{pmatrix} B_1 & 0 & 0 \\ 0 & 0 & sU_2^{\sigma t} \\ 0 & U_2 & Z \end{pmatrix}$

Finally, block anisotropic elimination is applied recursively to $B_1$ and block isotropic elimination is applied to $\begin{pmatrix} 0 & sU_2^{\sigma t} \\ U_2 & Z \end{pmatrix}$:

(4.3)  $\begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} \begin{pmatrix} B_1 & 0 \\ 0 & \begin{pmatrix} 0 & sU_2^{\sigma t} \\ U_2 & Z \end{pmatrix} \end{pmatrix} \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}^{\sigma t} = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}$.

Block isotropic elimination is handled by BlockIsotropicElim.

BlockIsotropicElim( $B \in H_d(K, s, \sigma)$ : $B = \begin{pmatrix} 0_f & U \\ sU^{\sigma t} & V \end{pmatrix}$, $U$ of full rank $f$, $0 < 2f \leq d$ )
begin
  Compute invertible $A$ and $C$ where $UA = (C\ 0)$, $C$ invertible;
  $V' = A^{\sigma t}VA$; partition $V' = \begin{pmatrix} Z & sY^{\sigma t} \\ Y & B' \end{pmatrix}$, $Z \in M_f(K)$;  /* See (4.4) */
  Decompose $Z = sZ^{\sigma t} = T + D + sT^{\sigma t}$, where $T$ is upper triangular with 0 entries on the diagonal and $D = sD^{\sigma t}$ is diagonal with diagonal entries $\alpha_1, \dots, \alpha_f$;  /* See (4.5) */
  Let $P$ be the permutation where $e_{2n-1}P = e_n$, $e_{2n}P = e_{f+n}$; then

  $\tilde{D} = P \begin{pmatrix} 0_f & I_f \\ sI_f & D \end{pmatrix} P^t = \begin{pmatrix} 0 & 1 \\ s & \alpha_1 \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} 0 & 1 \\ s & \alpha_f \end{pmatrix}$;

  for each $i \in \{1, \dots, f\}$: if $\alpha_i \neq 0$ then
    $(B_i, A_i) = \left( \begin{pmatrix} -\alpha_i^{-\sigma} & 0 \\ 0 & \alpha_i \end{pmatrix},\ \begin{pmatrix} 1 & -\alpha_i^{-1} \\ 0 & 1 \end{pmatrix} \right)$;
  else
    $(B_i, A_i) = \left( \begin{pmatrix} 0 & 1 \\ s & 0 \end{pmatrix},\ I_2 \right)$;
  $(B_0, A_0) = \text{BlockAnisotropicElim}(B')$;
  return $\left( B_1 \oplus \cdots \oplus B_f \oplus B_0,\ (A_1 \oplus \cdots \oplus A_f \oplus A_0)\, P \begin{pmatrix} I_f & 0 & 0 \\ -T & I_f & 0 \\ -Y & 0 & I_{d-2f} \end{pmatrix} \begin{pmatrix} C^{-1} & 0 \\ 0 & A^{\sigma t} \end{pmatrix} \right)$;
end.

The first step of BlockIsotropicElim modifies $B$ as follows.

(4.4)  $\begin{pmatrix} C^{-1} & 0 \\ 0 & A^{\sigma t} \end{pmatrix} \begin{pmatrix} 0_f & U \\ sU^{\sigma t} & V \end{pmatrix} \begin{pmatrix} C^{-1} & 0 \\ 0 & A^{\sigma t} \end{pmatrix}^{\sigma t} = \begin{pmatrix} 0_f & I_f & 0 \\ sI_f & Z & sY^{\sigma t} \\ 0 & Y & B' \end{pmatrix}$.

As $Z = sZ^{\sigma t} \in M_f(K)$, we know $Z = sT^{\sigma t} + D + T$, where $D$ is diagonal and $T$ is upper triangular with 0's on the diagonal. So we use the $I_f$ block to clear both $Y$ and $T$ as follows.

(4.5)  $\begin{pmatrix} I_f & 0 & 0 \\ -T & I_f & 0 \\ -Y & 0 & I_{d-2f} \end{pmatrix} \begin{pmatrix} 0_f & I_f & 0 \\ sI_f & Z & sY^{\sigma t} \\ 0 & Y & B' \end{pmatrix} \begin{pmatrix} I_f & 0 & 0 \\ -T & I_f & 0 \\ -Y & 0 & I_{d-2f} \end{pmatrix}^{\sigma t} = \begin{pmatrix} 0_f & I_f & 0 \\ sI_f & D & 0 \\ 0 & 0 & B' \end{pmatrix}$.

Finally we apply the permutation $P$ described in the algorithm BlockIsotropicElim and complete the reduction of the resulting $(2 \times 2)$-blocks. We also make a recursive call to the block anisotropic case to arrange for a standard basis for $B'$.
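The splitting $Z = T + D + sT^{\sigma t}$ used in (4.5) is a constant-cost bookkeeping step. A two-line sketch (ours, for $s = 1$, $\sigma = 1$ over $\mathbb{R}$; the matrix is an arbitrary example):

```python
import numpy as np

# Sketch (ours): split the Hermitian block Z = s Z^{sigma t} as
# Z = T + D + s T^{sigma t}, with T strictly upper triangular and D
# diagonal (entries alpha_1, ..., alpha_f), as in BlockIsotropicElim.
Z = np.array([[2.0, 5.0, 1.0],
              [5.0, -3.0, 4.0],
              [1.0, 4.0, 7.0]])
T = np.triu(Z, k=1)             # strictly upper triangular part
D = np.diag(np.diag(Z))         # diagonal part
assert np.allclose(Z, T + D + T.T)
```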

The complete algorithm is BlockHermElim.

BlockHermElim( $B \in H_d(K, s, \sigma)$ )
begin
  if $d \leq 1$ then return $(B, I)$;
  $\left( \begin{pmatrix} B_1 & 0 \\ 0 & 0 \end{pmatrix},\ A_0 \right) = \text{FindRadical}(B)$;
  Partition $B_1 = \begin{pmatrix} B_2 & * \\ * & * \end{pmatrix}$ with $\dim B_2 = \lceil (\dim B_1)/2 \rceil$;
  if $B_2 \neq 0$ then $(D, A_1) = \text{BlockAnisotropicElim}(B_1)$;
  else $(D, A_1) = \text{BlockIsotropicElim}(B_1)$;
  return $\left( D \oplus 0,\ \begin{pmatrix} A_1 & 0 \\ 0 & I \end{pmatrix} A_0 \right)$;
end.

Proof of Theorem 1(ii). The algorithm BlockHermElim suffices as described, so it remains to analyze its time complexity. To find the radical and adjust the basis for $B$ accordingly involves solving a system of linear equations and performing 2 matrix products, at a cost of $O(d^{\omega})$ in total. Next we cost BlockAnisotropicElim and BlockIsotropicElim. In the block anisotropic case we again solve a system of equations, but now in a $d/2$-dimensional setting. We also multiply 5 matrices of dimension at most $d/2$, make a recursive call to BlockAnisotropicElim on a square matrix of dimension $1 \leq e \leq d/2$, and a call to BlockIsotropicElim on a square matrix of dimension $d - e$ with the first $(d/2 - e)$-block all zero. The algorithm ends with a product of highly structured $(d \times d)$-matrices. So the time $T_{\mathrm{ani}}(d)$ satisfies:

$T_{\mathrm{ani}}(d) \leq T_{\mathrm{ani}}(e) + T_{\mathrm{iso}}(d - e,\ d/2 - e) + O(d^{\omega})$,

where $T_{\mathrm{iso}}(a, b)$ is the time spent in BlockIsotropicElim on an $(a \times a)$-matrix with the first $(b \times b)$-block all zero. For such an input, BlockIsotropicElim uses a bounded number of matrix operations and one call to BlockAnisotropicElim on a matrix of dimension $(a - 2b) \times (a - 2b)$. As a result, $T_{\mathrm{iso}}(a, b) \leq T_{\mathrm{ani}}(a - 2b) + O(a^{\omega})$. Thus, the total time $T(d)$ spent by BlockHermElim on an input of dimension $d$ is:

$T(d) \leq T_{\mathrm{ani}}(e) + T_{\mathrm{iso}}(d - e,\ d/2 - e) + O(d^{\omega}) \leq 2T_{\mathrm{ani}}(d/2) + O(d^{\omega}) = O(d^{\omega})$. $\square$
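For completeness, the last equality follows by unrolling the recurrence; a standard computation (ours), valid whenever $\omega > 2$:

```latex
% Unrolling T(d) <= 2 T(d/2) + c d^omega for some constant c:
T(d) \;\le\; \sum_{k \ge 0} 2^{k}\, c \left( \frac{d}{2^{k}} \right)^{\omega}
     \;=\; c\, d^{\omega} \sum_{k \ge 0} \left( 2^{\,1-\omega} \right)^{k}
     \;=\; \frac{c\, d^{\omega}}{1 - 2^{\,1-\omega}}
     \;=\; O(d^{\omega}).
```

Since $\omega > 2$ gives $2^{1-\omega} < 1/2$, the geometric series converges and the top level of the recursion dominates.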

5. Parallel variations

We now consider how to distribute the work of our algorithms on parallel computers. Hermitian elimination, as presented in HermElim, leads to some naïve parallelism, which here means that threads of parallel execution do not share memory or pass messages, and that these threads do not duplicate the operations of others. Suppose there are threads $1, 2, \dots, c$ of execution, each able to store at least 3 values from $K$ and to query the values of $B$. Thread $t$ is responsible for evaluating, for each $(t-1)c + 2 \leq i \leq \min\{d, tc + 1\}$, the straight-line programs of case II and case III with $i$ fixed. E.g. in case II evaluate:

$V_{i-1} = -B_{i1}B_{11}^{-1}$,
$C_{(i-1)(j-1)} = V_{i-1}B_{1j} + B_{ij}$,  $(i \leq j \leq d)$,
$C_{(j-1)(i-1)} = sC_{(i-1)(j-1)}^{\sigma}$,  $(i < j \leq d)$.

Thus, each thread evaluates $O(d)$ straight-line programs, each involving at most 4 operations. If the number of threads $c$ is small, then for most inputs $c < d$, and this naïve parallelization runs on $c$ threads and cuts the running time down by a factor of $c$. This is the best we can expect from a small number of processors $c$.

There are natural situations where naïve parallelism becomes inadequate. For instance, in the algorithms of [2] the task is to recognize the structure of a $k$-algebra $A$ with an involution. This involves finding standard bases for Hermitian $K$-forms $b : V \times V \to K$, but where $K$ is often a large extension of $k$. Hence, operations in $K$ are much more costly than in $k$, and so it is prudent to minimize duplicate steps. Also, $d = \dim_K V$ can be much smaller than $\dim_k A$, even to the point where $d$ is far less than the number of available threads on the intended computer cluster. However, $d^2$ can still be large, and this is the effective running time of the naïve parallel method just described. The defect in such an algorithm is that many of the threads are idle during this time and could instead be used to help cut down the execution time. Here an alternative parallelization scheme, such as the NC-algorithm of Theorem 1(iii), is necessary.

The NC-complexity class is an abstract model for highly distributed computation. An NC-algorithm can use a polynomial number of threads of parallel execution and, unlike the naïve parallel model, these threads are permitted to pass messages and share memory. The trade-off for such a generous work-force is that all computations must be completed in time polynomial in the logarithm of the input size. Thus, for an input of size $d$, an NC-algorithm may use $O(d^{c_1})$ threads of execution and complete in time $O((\log d)^{c_2})$, for two constants $c_1$ and $c_2$. A computer cluster that runs an NC-algorithm will have a bounded number of threads of parallel execution, and so it will be forced to schedule threads to run on the same physical processor in some order so that it can simulate having $O(d^{c_1})$ threads. Scheduling may be a difficult step, but the advantage of the NC-complexity class is that asymptotically the speed of an NC-algorithm improves in proportion to the number of available processors, and so the algorithm design is independent of a specific computer system. For details see [7].

Proof of Theorem 1(iii). The algorithm BlockHermElim can be modified into a parallel NC-algorithm. First, the recursive calls in BlockAnisotropicElim should be made by independent parallel threads, and also the loop in BlockIsotropicElim should be executed by parallel threads. As we saw in the proof of Theorem 1(ii), the recursive depth of BlockHermElim is at most $\log_2 d$. Therefore the number of parallel branchings in these recursive calls is at most $2^{\log_2 d} = d$. So the algorithm needs $O(\log d)$-time and $d$ threads to reach the basic linear algebra problems. The linear algebra is then done by NC-algorithms; cf. [7, Sections 3.8, 4.5; 13, Section 2.3]. The resulting modifications make BlockHermElim into an NC-algorithm. $\square$
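Returning to the naïve scheme described at the start of this section, the following sketch (ours, restricted to symmetric forms with $s = 1$, $\sigma = 1$; all names are ours) shows the division of labour for case II. Each thread owns a band of rows $i$ and writes a disjoint set of entries of $C$, so no locking is needed. Under CPython's global interpreter lock this only illustrates the work division, not a genuine speedup.

```python
from concurrent.futures import ThreadPoolExecutor

def case_II_parallel(B, c=4):
    """Naive work division for case II of HermElim, symmetric case
    (s = 1, sigma = 1).  Thread t evaluates the straight-line programs
    for its band of rows i; distinct threads write disjoint entries."""
    d = len(B)
    inv = 1 / B[0][0]                     # the lone inversion B_11^{-1}
    C = [[None] * (d - 1) for _ in range(d - 1)]

    def worker(rows):
        for i in rows:                    # straight-line programs, i fixed
            v = -B[i][0] * inv            # V_{i-1} = -B_{i1} B_{11}^{-1}
            for j in range(i, d):         # C_{(i-1)(j-1)} = V B_{1j} + B_{ij}
                C[i - 1][j - 1] = v * B[0][j] + B[i][j]
                C[j - 1][i - 1] = C[i - 1][j - 1]   # s, sigma trivial here

    bands = [range(1 + t, d, c) for t in range(c)]  # one band per thread
    with ThreadPoolExecutor(max_workers=c) as pool:
        list(pool.map(worker, bands))
    return C
```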

Remark 5.1. As with optimal sequential algorithms, the availability of NC-algorithms for linear algebra problems may depend on $K$. Indeed, much of the literature concentrates on the case of commutative coefficients. However, commutativity is not necessary in many algorithms. For example, the method of [7, Section 3.8] for NC matrix multiplication works along the lines of Strassen's methods for matrix multiplication, as briefly described in the introduction. First one chooses 4 independent straight-line programs that implement the multiplication in $M_2(R)$ (where $R$ is any ring), one for each entry in the $(2 \times 2)$-matrix. Evaluating the independent straight-line programs in parallel threads leads to an NC-algorithm (in an arithmetic model). For general $d$, we embed $M_d(K)$ into $M_{2^k}(K)$ and proceed recursively using $M_{2^k}(K) = M_2(M_{2^{k-1}}(K))$ and $R = M_{2^{k-1}}(K)$. The depth of the recursion is $O(\log d)$ and so the number of threads used is at most $4^{\log_2 d} \in O(d^2)$. Likewise the time spent is polynomial in $\log_2 d$, as each of the branches is computed in parallel. A similar adaptation applies to compute inverses of matrices along the lines of [14, Fact 4]. Unfortunately, we have not seen a method for solving systems of linear equations in NC that does not require determinants (most invoke Cramer's rule), and determinants are not defined over non-commutative division rings in a manner that implies Cramer's rule. So further work is needed.

6. Post-processing adjustments

In our algorithms we opted for a decomposition of $B$ which returns a block-diagonal matrix with arbitrary numbers of $(1 \times 1)$- and $(2 \times 2)$-blocks. Since the $(2 \times 2)$-blocks are of a unique type, a canonical return can often be created by converting $(1 \times 1)$-blocks into $(2 \times 2)$-blocks. E.g. if $\alpha = s\alpha^{\sigma} \neq 0$ and $2 \neq 0$, use:

$\begin{pmatrix} \alpha^{-1} & \alpha^{-1} \\ 1/2 & -1/2 \end{pmatrix} \begin{pmatrix} \alpha & 0 \\ 0 & -\alpha \end{pmatrix} \begin{pmatrix} \alpha^{-1} & \alpha^{-1} \\ 1/2 & -1/2 \end{pmatrix}^{\sigma t} = \begin{pmatrix} 0 & 1 \\ s & 0 \end{pmatrix}$.

If instead $2 = 0$, then use

$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{\sigma t} = \begin{pmatrix} 0 & \alpha \\ \alpha & \alpha \end{pmatrix}$.

In some cases a canonical return for $(1 \times 1)$-blocks is possible with a few adjustments. The block $(\alpha)$ is isometric to $(\gamma\alpha\gamma^{\sigma})$ for $0 \neq \gamma \in K$. Hence, if $\alpha = \gamma^{-1}\gamma^{-\sigma}$ for some $\gamma \in K - \{0\}$, then we may replace $(\alpha)$ with $(1)$. Computationally, finding $\gamma$ to perform this adjustment can be involved. Already when $\sigma = 1$ and $K$ a field, this amounts to finding a square-root of $\alpha$. If the classes in $K$ of the form $\{\gamma\alpha\gamma^{\sigma}\}$ are linearly ordered, it is possible to sort the $(1 \times 1)$-blocks accordingly.

Another situation for modification of $(1 \times 1)$-blocks involves multiple blocks, as follows. If $K$ is a field and $0 \neq \alpha = \gamma\gamma^{\sigma} + \delta\delta^{\sigma}$ for some $\gamma, \delta \in K$ (for example, if $\sigma = 1$ and $\alpha$ is a sum of squares), then

$\begin{pmatrix} \gamma & \delta \\ -\delta^{\sigma} & \gamma^{\sigma} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \gamma & \delta \\ -\delta^{\sigma} & \gamma^{\sigma} \end{pmatrix}^{\sigma t} = \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix}$.

Acknowledgments. Thanks to Peter Brooksbank for suggesting this note and offering comments, and to the referee for suggestions and corrections.
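Identities of this kind are quick to verify symbolically. A sketch (ours, specialized to $s = 1$, $\sigma = 1$, so $\alpha$, $\gamma$, $\delta$ are plain field elements) checking the first and last displayed identities:

```python
import sympy as sp

# Check (ours, s = 1, sigma = 1, 2 != 0): two (1x1)-blocks (alpha), (-alpha)
# combine into the (2x2)-block [[0, 1], [1, 0]].
a = sp.symbols('alpha', nonzero=True)
A = sp.Matrix([[1 / a, 1 / a], [sp.Rational(1, 2), -sp.Rational(1, 2)]])
M = sp.diag(a, -a)
assert sp.simplify(A * M * A.T - sp.Matrix([[0, 1], [1, 0]])) == sp.zeros(2)

# Check the multiple-blocks identity: if alpha = gamma^2 + delta^2, then
# the identity form diag(1, 1) rescales to diag(alpha, alpha).
g, dl = sp.symbols('gamma delta')
A2 = sp.Matrix([[g, dl], [-dl, g]])
alpha = g**2 + dl**2
assert sp.simplify(A2 * sp.eye(2) * A2.T - alpha * sp.eye(2)) == sp.zeros(2)
```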

References

[1] E. Artin, Geometric algebra, Wiley Classics Library, John Wiley & Sons Inc., New York, 1988. Reprint of the 1957 original; A Wiley-Interscience Publication.
[2] P. A. Brooksbank and J. B. Wilson, Computing isometry groups of Hermitian maps, Trans. Amer. Math. Soc. 364 (2012), 1975-1996.
[3] P. Bürgisser, M. Clausen, and M. A. Shokrollahi, Algebraic complexity theory, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 315, Springer-Verlag, Berlin, 1997. With the collaboration of Thomas Lickteig.
[4] A. Dax and S. Kaniel, Pivoting techniques for symmetric Gaussian elimination, Numer. Math. 28 (1977), no. 2.
[5] D. F. Holt and C. M. Roney-Dougal, Constructing maximal subgroups of classical groups, LMS J. Comput. Math. 8 (2005), 46-79 (electronic).
[6] N. Jacobson, Lectures in abstract algebra. Volume II: Linear algebra, Graduate Texts in Mathematics, No. 31, Springer-Verlag, New York, 1975. Reprint of the 1953 edition [Van Nostrand, Toronto, Ont.].
[7] R. M. Karp and V. Ramachandran, Parallel algorithms for shared-memory machines, Handbook of theoretical computer science, Vol. A, Elsevier, Amsterdam, 1990, pp. 869-941.
[8] J. M. Landsberg, Tensors: geometry and applications, Graduate Studies in Mathematics, vol. 128, American Mathematical Society, Providence, RI, 2012.
[9] S. Huss-Lederman, E. Jacobson, A. Tsao, T. Turnbull, and J. R. Johnson, Implementation of Strassen's algorithm for matrix multiplication, Proceedings of the 1996 ACM/IEEE Conference on Supercomputing, 1996.
[10] F. J. Lingen, Efficient Gram-Schmidt orthonormalisation on parallel computers, Comm. Numer. Meth. Engng. 16 (2000).
[11] L. Rónyai, Computations in associative algebras, Groups and computation (New Brunswick, NJ, 1991), DIMACS Ser. Discrete Math. Theoret. Comput. Sci., vol. 11, Amer. Math. Soc., Providence, RI, 1993.
[12] M. F. Smiley, Algebra of matrices, Allyn and Bacon, Inc., Boston.
[13] V. I. Solodovnikov, Upper bounds of complexity of the solution of systems of linear equations, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 118 (1982) (Russian, with English summary). The theory of the complexity of computations, I.
[14] V. Strassen, Gaussian elimination is not optimal, Numer. Math. 13 (1969), 354-356.
[15] J. von zur Gathen and J. Gerhard, Modern computer algebra, 2nd ed., Cambridge University Press, Cambridge, 2003.
[16] J. B. Wilson, Finding central decompositions of p-groups, J. Group Theory 12 (2009), no. 6, 813-830.

Department of Mathematics, Colorado State University, Fort Collins, CO 80523, USA
E-mail address: jwilson@math.colostate.edu
