Chapter 1. Introduction

1.9 Answers

Exercise 1.3 Work out the probabilities that it will be cloudy/rainy the day after tomorrow.

Exercise 1.5 Follow the instructions for this problem given on the class wiki. For the example described in this section,

1. Recreate the above table by programming it up with Matlab or Octave, starting with the assumption that today is cloudy.
2. Create two similar tables starting with the assumption that today is sunny and rainy, respectively.
3. Compare how x^(7) differs depending on today's weather.
4. What do you notice if you compute x^(k) starting with today being sunny/cloudy/rainy and you let k get large?
5. What does x^(infinity) represent?

Exercise 1.6 Given Table 1.1, create the following table, which predicts the weather the day after tomorrow given the weather today:

                                 Today
                        sunny   cloudy   rainy
  Day after    sunny
  Tomorrow     cloudy
               rainy

This then tells us the entries in Q in (1.7).

Exercise 1.8 Let x, y in R^n. Show that vector addition commutes: x + y = y + x.

Exercise 1.9 Let x, y in R^n. Show that x^T y = y^T x.
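The tables asked for in Exercises 1.5 and 1.6 come from repeatedly applying the transition matrix: x^(k+1) = P x^(k). A minimal Python sketch (the notes use Matlab/Octave; Python is a stand-in here, and the transition probabilities below are illustrative assumed values, not the entries from the text's table — each column must sum to 1):

```python
# Weather prediction as a Markov chain: x^(k+1) = P x^(k).
# Column order for "today": 0 = sunny, 1 = cloudy, 2 = rainy.
# These probabilities are assumed for illustration only.
P = [[0.4, 0.3, 0.1],   # probability tomorrow is sunny
     [0.4, 0.3, 0.6],   # probability tomorrow is cloudy
     [0.2, 0.4, 0.3]]   # probability tomorrow is rainy

def matvec(A, x):
    """Compute A x for a dense row-major matrix A."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def forecast(today, k):
    """Probabilities k days out, given today's weather index."""
    x = [0.0, 0.0, 0.0]
    x[today] = 1.0            # certainty about today's weather
    for _ in range(k):
        x = matvec(P, x)      # advance the prediction one day
    return x

# Day after tomorrow, starting from a cloudy day:
print(forecast(1, 2))
# For large k the prediction no longer depends on today's weather:
print(forecast(0, 50))
print(forecast(2, 50))
```

The last two vectors agree to many digits: x^(k) converges to a stationary vector x^(infinity) that is independent of the starting weather, which is what Exercise 1.5, parts 4 and 5, is after.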
Exercise 1.10 Start building your library by implementing the functions in Figure 1.6. See directions on the class wiki page.

Exercise 1.18 Use mathematical induction to prove that, for n >= 1,

  sum_{i=0}^{n-1} i^2 = (n-1) n (2n-1) / 6.

Proof by induction:

Base case: n = 1. For this case, we must show that sum_{i=0}^{0} i^2 = (1-1)(1)(2(1)-1)/6:

  sum_{i=0}^{0} i^2
    = 0                       (definition of summation)
    = (1-1)(1)(2(1)-1)/6      (arithmetic).

This proves the base case.

Inductive step: Inductive Hypothesis (IH): Assume that the result is true for n = k where k >= 1:

  sum_{i=0}^{k-1} i^2 = (k-1) k (2k-1) / 6.

We will show that the result is then also true for n = k + 1:

  sum_{i=0}^{(k+1)-1} i^2 = ((k+1)-1) (k+1) (2(k+1)-1) / 6.

Assume that k >= 1. Then
  sum_{i=0}^{(k+1)-1} i^2
    = sum_{i=0}^{k} i^2                       (arithmetic)
    = sum_{i=0}^{k-1} i^2 + k^2               (split off the last term)
    = (k-1) k (2k-1)/6 + k^2                  (Inductive Hypothesis)
    = (2k^3 - 3k^2 + k + 6k^2)/6              (algebra)
    = (2k^3 + 3k^2 + k)/6                     (algebra)
    = k (k+1) (2k+1)/6                        (algebra)
    = ((k+1)-1) (k+1) (2(k+1)-1)/6            (arithmetic).

This proves the inductive step. By the Principle of Mathematical Induction the result holds for all n >= 1.

Exercise 1.24 For each of the following, determine whether it is a linear transformation or not.

For the first transformation, F : R^3 -> R^3: first check whether F(0) = 0. Since F(0) = 0, it COULD be a linear transformation.
Next, check whether F(alpha x) = alpha F(x). Let x = (chi0, chi1, chi2)^T be an arbitrary vector and alpha be an arbitrary scalar. Evaluating F at alpha x = (alpha chi0, alpha chi1, alpha chi2)^T, the scalar alpha factors out of every component, so that F(alpha x) = alpha F(x).

Next, check whether F(x + y) = F(x) + F(y). Let x = (chi0, chi1, chi2)^T and y = (psi0, psi1, psi2)^T be arbitrary vectors. Each component of F(x + y) separates into the sum of the corresponding components of F(x) and F(y), so that F(x + y) = F(x) + F(y). Thus it is a linear transformation.

For the second transformation, F((chi0, chi1)^T) = (chi0, chi1^2)^T: first check whether F(0) = 0. Since F((0, 0)^T) = (0, 0)^T, it COULD be a linear transformation.

Now, looking at it, I suspect this is not a linear transformation because of the chi1^2. So, I will try to construct an example where F(alpha x) != alpha F(x) or F(x + y) != F(x) + F(y). Let x = (1, 1)^T and alpha = 2. Then

  F(alpha x) = F((2, 2)^T) = (2, 4)^T.

Also

  2 F(x) = 2 F((1, 1)^T) = 2 (1, 1)^T = (2, 2)^T.

Thus, for this choice of alpha and x we find that F(alpha x) != alpha F(x). Thus it is not a linear transformation.
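The two checks used above can be probed numerically: a single failed check proves non-linearity, while passed checks only suggest linearity (a proof still requires the algebra). A sketch, where G squares its second component, reproducing the counterexample F(2x) = (2, 4) versus 2 F(x) = (2, 2) worked out above (the specific maps F and G are illustrative assumptions):

```python
# Probe F(a*x) == a*F(x) and F(x+y) == F(x)+F(y) on sample vectors.
def is_linear_on_samples(F, samples, alphas):
    for x in samples:
        for a in alphas:
            ax = [a * xi for xi in x]
            if F(ax) != [a * fi for fi in F(x)]:      # scaling check
                return False
        for y in samples:
            xy = [xi + yi for xi, yi in zip(x, y)]
            if F(xy) != [fx + fy for fx, fy in zip(F(x), F(y))]:  # additivity
                return False
    return True

F = lambda v: [v[0] + v[1], v[0]]     # linear
G = lambda v: [v[0], v[1] ** 2]       # squaring breaks the scaling check

samples = [[1.0, 1.0], [2.0, -3.0], [0.0, 5.0]]
print(is_linear_on_samples(F, samples, [2.0, -1.0]))   # True
print(is_linear_on_samples(G, samples, [2.0, -1.0]))   # False
print(G([2.0, 2.0]), [2 * gi for gi in G([1.0, 1.0])]) # (2, 4) vs (2, 2)
```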
Exercise 1.26 Let x, e_i in R^n. Show that e_i^T x = x^T e_i = chi_i, the ith element of x.

Recall that e_i = (0, ..., 0, 1, 0, ..., 0)^T, where the 1 occurs in the ith entry, and x = (chi_0, ..., chi_{i-1}, chi_i, chi_{i+1}, ..., chi_{n-1})^T. Then

  e_i^T x = 0 chi_0 + ... + 0 chi_{i-1} + 1 chi_i + 0 chi_{i+1} + ... + 0 chi_{n-1} = chi_i.

The computation for x^T e_i is identical.

Exercise 1.34 Show that the transformation in Example 1.2 is not a linear transformation by computing a possible matrix that represents it, and then showing that it does not represent it.

Let F((chi, psi)^T) = (chi + psi, chi + 1)^T. If F were a linear transformation, then there would be a corresponding matrix, A = ( alpha beta ; gamma delta ), such that

  F((chi, psi)^T) = ( alpha beta ; gamma delta ) (chi, psi)^T.

This matrix would be computed by computing its columns:

  (alpha, gamma)^T = F((1, 0)^T) = (1, 2)^T   and   (beta, delta)^T = F((0, 1)^T) = (1, 1)^T,

so that A = ( 1 1 ; 2 1 ). But

  ( 1 1 ; 2 1 ) (chi, psi)^T = (chi + psi, 2 chi + psi)^T != (chi + psi, chi + 1)^T.

Thus F cannot be a linear transformation. There is no matrix that has the same action as F.
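The test used in Exercises 1.34 and 1.35 can be mechanized: build the only candidate matrix from the columns F(e_j), then check whether A x = F(x) on a sample input. A sketch, using the map (chi, psi) -> (chi + psi, chi + 1) read from the worked answer above (treated here as an assumption):

```python
def candidate_matrix(F, n):
    """Row-major matrix whose jth column is F(e_j)."""
    cols = []
    for j in range(n):
        e = [0.0] * n
        e[j] = 1.0
        cols.append(F(e))
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

F = lambda v: [v[0] + v[1], v[0] + 1.0]

A = candidate_matrix(F, 2)
print(A)                    # [[1.0, 1.0], [2.0, 1.0]]
x = [1.0, 1.0]
print(matvec(A, x))         # [2.0, 3.0]
print(F(x))                 # [2.0, 2.0] -- differs, so F is not linear
```

The mismatch at a single vector is enough: no matrix reproduces F, so F is not a linear transformation.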
Exercise 1.35 Show that the transformation in Example 1.23 is not a linear transformation by computing a possible matrix that represents it, and then showing that it does not represent it.

Let F((chi, psi)^T) = (chi psi, chi)^T. If F were a linear transformation, then there would be a corresponding matrix, A = ( alpha beta ; gamma delta ), such that

  F((chi, psi)^T) = ( alpha beta ; gamma delta ) (chi, psi)^T.

This matrix would be computed by computing its columns:

  (alpha, gamma)^T = F((1, 0)^T) = (0, 1)^T   and   (beta, delta)^T = F((0, 1)^T) = (0, 0)^T,

so that A = ( 0 0 ; 1 0 ). But

  ( 0 0 ; 1 0 ) (chi, psi)^T = (0, chi)^T != (chi psi, chi)^T.

Thus F cannot be a linear transformation. There is no matrix that has the same action as F.

Exercise 1.36 For each of the transformations in Exercise 1.24 compute a possible matrix that represents it and use it to show whether the transformation is linear.

For the first transformation, F : R^3 -> R^3: if F were a linear transformation, then there would be a corresponding matrix,

  A = ( alpha_00 alpha_01 alpha_02 ; alpha_10 alpha_11 alpha_12 ; alpha_20 alpha_21 alpha_22 ),

such that F(x) = A x.
This matrix would be computed by computing its columns: the jth column equals F(e_j). Checking now confirms that A x = F(x) for all x. Thus there is a matrix that corresponds to F, which is therefore a linear transformation.

For the second transformation, let F((chi0, chi1)^T) = (chi0, chi1^2)^T. If F were a linear transformation, then there would be a corresponding matrix, A = ( alpha_00 alpha_01 ; alpha_10 alpha_11 ), such that F((chi0, chi1)^T) = A (chi0, chi1)^T. This matrix would be computed by computing its columns:

  (alpha_00, alpha_10)^T = F((1, 0)^T) = (1, 0)^T   and   (alpha_01, alpha_11)^T = F((0, 1)^T) = (0, 1)^T,

so that A = ( 1 0 ; 0 1 ). Checking now,

  ( 1 0 ; 0 1 ) (chi0, chi1)^T = (chi0, chi1)^T != (chi0, chi1^2)^T.

Thus it is not a linear transformation.
Exercise 1.39 Let

  D = ( 2 0 0 ; 0 1 0 ; 0 0 3 ).

What linear transformation, L, does this matrix represent? In particular, answer the following questions: L : R^? -> R^??. Give ? and ??.

L : R^3 -> R^3. A linear transformation can be described by how it transforms the unit basis vectors:

  L(e0) = 2 e0,   L(e1) = e1,   L(e2) = 3 e2.

Hence, for x = (chi0, chi1, chi2)^T,

  L(x) = L(chi0 e0 + chi1 e1 + chi2 e2)
       = chi0 L(e0) + chi1 L(e1) + chi2 L(e2)
       = (2 chi0, chi1, 3 chi2)^T.

Thus, the elements of the input vector are scaled by the corresponding elements of the diagonal of the matrix.
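Because a diagonal matrix only scales each element, D x can be computed with an elementwise product rather than a full matrix-vector multiply. A sketch checking this against the dense product (the diagonal entries (2, 1, 3) follow the reading of D used above and are otherwise illustrative):

```python
# D x for a diagonal matrix equals elementwise scaling by the diagonal.
d = [2.0, 1.0, 3.0]
D = [[d[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

x = [4.0, 5.0, 6.0]
dense = matvec(D, x)                         # full matrix-vector product
scaled = [di * xi for di, xi in zip(d, x)]   # elementwise product
print(dense)    # [8.0, 5.0, 18.0]
print(scaled)   # [8.0, 5.0, 18.0]
```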
Exercise 1.42 Give examples for each of the triangular matrices in Definition 1.4:

  lower triangular            alpha_ij = 0 if i < j                          e.g. ( 1 0 ; 2 3 )
  strictly lower triangular   alpha_ij = 0 if i <= j                         e.g. ( 0 0 ; 2 0 )
  unit lower triangular       alpha_ij = 0 if i < j and alpha_ij = 1 if i = j    e.g. ( 1 0 ; 2 1 )
  upper triangular            alpha_ij = 0 if i > j                          e.g. ( 1 2 ; 0 3 )
  strictly upper triangular   alpha_ij = 0 if i >= j                         e.g. ( 0 2 ; 0 0 )
  unit upper triangular       alpha_ij = 0 if i > j and alpha_ij = 1 if i = j    e.g. ( 1 2 ; 0 1 )

Exercise 1.43 Show that a matrix that is both lower and upper triangular is in fact a diagonal matrix.

If a matrix is both upper and lower triangular, then alpha_ij = 0 if i != j. Thus the matrix is diagonal.

Exercise 1.44 Add the functions trilu( A ) and triuu( A ) to your SLAP library. These functions return the lower and upper triangular part of A, respectively, with the diagonal set to ones. Thus,

  > A = [ ... ];
  > trilu( A )
  ans = ...

Hint: use the tril and eye functions. You will also want to use the size function to extract the dimensions of A, to pass in to eye.
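A minimal sketch of the trilu operation described above, in Python as a stand-in for the Matlab/Octave version (which would be tril(A,-1) + eye(size(A))); the function name mirrors the exercise:

```python
def trilu(A):
    """Lower triangular part of A with the diagonal set to ones."""
    m = len(A)
    n = len(A[0])
    return [[1.0 if i == j else (A[i][j] if i > j else 0.0)
             for j in range(n)] for i in range(m)]

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
print(trilu(A))   # [[1.0, 0.0, 0.0], [4.0, 1.0, 0.0], [7.0, 8.0, 1.0]]
```

triuu is the mirror image: keep entries with i < j and set the diagonal to ones.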
See the base directory of the SLAP library.

Exercise 1.49 Show that a triangular matrix that is also symmetric is in fact a diagonal matrix.

Let us focus on a lower triangular matrix. Then alpha_ij = 0 if i < j. But if the matrix is also symmetric, then alpha_ij = alpha_ji = 0 if i > j as well. Thus alpha_ij = 0 if i != j, and the matrix is diagonal.

Exercise 1.56 Prove Theorem 1.22. (The theorem states that L_C defined by L_C(x) = L_A(x) + L_B(x), with L_A and L_B linear transformations, is itself a linear transformation.)

We need to show that L_C(alpha x) = alpha L_C(x):

  L_C(alpha x) = L_A(alpha x) + L_B(alpha x)
              = alpha L_A(x) + alpha L_B(x)
              = alpha (L_A(x) + L_B(x))
              = alpha L_C(x),

and that L_C(x + y) = L_C(x) + L_C(y):

  L_C(x + y) = L_A(x + y) + L_B(x + y)
            = L_A(x) + L_A(y) + L_B(x) + L_B(y)
            = (L_A(x) + L_B(x)) + (L_A(y) + L_B(y))
            = L_C(x) + L_C(y).

Exercise 1.58 (Note: I have changed this question from before.) Give a motivation for matrix addition by considering a linear transformation L_C(x) = L_A(x) + L_B(x).

Given A, B, C in R^{m x n}, let L_A, L_B, and L_C be the corresponding linear transformations, and define L_C(x) = L_A(x) + L_B(x). Let a_j, b_j, and c_j be the jth columns of A, B, and C, respectively. Then

  c_j = L_C(e_j) = L_A(e_j) + L_B(e_j) = a_j + b_j.

Thus the elements of each of the columns of C equal the sum of the corresponding elements of the columns of A and B.

Exercise 1.67 Modify the algorithms in Figure 1.9 to compute y := L x, where L is a lower triangular matrix.

The answer is in Figure 1.23. It computes y := L x + y, since the given operation can then be obtained by starting with y the vector of zeroes.
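The two updates from Figure 1.23 translate into plain loops: variant 1 accumulates dot products with the rows of L, variant 2 accumulates axpy updates with the columns of L, and both skip the zero (strictly upper) part. A sketch of y := L x + y, with L stored as a full row-major array:

```python
def ltrmv_var1(L, x, y):
    """y := L x + y, dot-product based: psi1 := a10^T x0 + alpha11 chi1 + psi1."""
    n = len(x)
    for i in range(n):
        y[i] += sum(L[i][j] * x[j] for j in range(i + 1))  # only j <= i
    return y

def ltrmv_var2(L, x, y):
    """y := L x + y, axpy based: psi1 := chi1 alpha11 + psi1; y2 := chi1 a21 + y2."""
    n = len(x)
    for j in range(n):
        for i in range(j, n):            # only i >= j
            y[i] += L[i][j] * x[j]
    return y

L = [[2.0, 0.0, 0.0],
     [3.0, 4.0, 0.0],
     [5.0, 6.0, 7.0]]
x = [1.0, 1.0, 1.0]
print(ltrmv_var1(L, x, [0.0] * 3))   # [2.0, 7.0, 18.0]
print(ltrmv_var2(L, x, [0.0] * 3))   # [2.0, 7.0, 18.0]
```

Starting with y equal to the zero vector gives y := L x, as the answer above notes.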
Algorithm: [y] := Ltrmvmult_unb_var1B(A, x, y)

  Partition A -> ( ATL ATR ; ABL ABR ), x -> ( xT ; xB ), y -> ( yT ; yB ),
    where ATL is 0 x 0 and xT, yT have 0 elements
  while m(ATL) < m(A) do
    Repartition
      ( ATL ATR ; ABL ABR ) -> ( A00 a01 A02 ; a10^T alpha11 a12^T ; A20 a21 A22 ),
      ( xT ; xB ) -> ( x0 ; chi1 ; x2 ),  ( yT ; yB ) -> ( y0 ; psi1 ; y2 ),
      where alpha11, chi1, and psi1 are scalars
    psi1 := a10^T x0 + alpha11 chi1 + a12^T x2 + psi1    (a12^T x2 = 0 and is skipped)
    Continue with the repartitioned quadrants
  endwhile

Algorithm: [y] := Ltrmvmult_unb_var2B(A, x, y)

  Partition and repartition as above
  while m(ATL) < m(A) do
    y0 := chi1 a01 + y0      (a01 = 0, so this update is skipped)
    psi1 := chi1 alpha11 + psi1
    y2 := chi1 a21 + y2
    Continue with the repartitioned quadrants
  endwhile

Figure 1.23: Answer to Exercise 1.67. Compare and contrast to Figure 1.9. Executing either of these algorithms with A = L will compute y := L x + y, where L is lower triangular. Notice that the algorithm would become even clearer if {A, a, alpha} were replaced by {L, l, lambda}.

Exercise 1.68 Modify the algorithms in Figure 1.20 so that x is overwritten with x := U x, without using the vector y.

The answer is in Figure 1.24. It computes x := U x. Some explanation: Let us first consider y := U x + y. If we partition the matrix and vectors we get

  ( y0 )      ( U00 u01 U02  ) ( x0 )   ( y0 )     ( U00 x0 + chi1 u01 + U02 x2 + y0 )
  ( psi1 ) := (  0  nu11 u12^T) ( chi1 ) + ( psi1 ) = (     nu11 chi1 + u12^T x2 + psi1   )
  ( y2 )      (  0   0   U22  ) ( x2 )   ( y2 )     (           U22 x2 + y2            )
Notice that the algorithm in Figure 1.20(left) has the property that, when the current iteration starts, everything in red currently exists in y:

  ( y0 )      ( U00 x0 + chi1 u01 + U02 x2 + y0 )
  ( psi1 ) := (     nu11 chi1 + u12^T x2 + psi1   )
  ( y2 )      (           U22 x2 + y2            )

The update psi1 := nu11 chi1 + u12^T x2 + psi1 then makes it so that one more element of y has been updated. Now, turn to the computation x := U x:

  ( x0 )      ( U00 x0 + chi1 u01 + U02 x2 )
  ( chi1 ) := (     nu11 chi1 + u12^T x2    )
  ( x2 )      (           U22 x2            )

Notice that to update one more element of x we must compute chi1 := nu11 chi1 + u12^T x2, which justifies the algorithm in Figure 1.24(left).

Now, let's turn to the algorithm in Figure 1.20(right). That algorithm has the property that, when the current iteration starts, everything in red currently exists in y. The updates

  y0 := chi1 u01 + y0
  psi1 := chi1 nu11 + psi1

then make it so that the vector y contains everything in blue. Now, turn to the computation x := U x as partitioned above. Notice that the computations

  x0 := chi1 u01 + x0
  chi1 := nu11 chi1
Algorithm: [x] := Trmv_un_unb_var1(U, x)

  Partition U -> ( UTL UTR ; 0 UBR ), x -> ( xT ; xB ),
    where UTL is 0 x 0 and xT has 0 elements
  while m(UTL) < m(U) do
    Repartition
      ( UTL UTR ; 0 UBR ) -> ( U00 u01 U02 ; 0 nu11 u12^T ; 0 0 U22 ),
      ( xT ; xB ) -> ( x0 ; chi1 ; x2 ),
      where nu11 and chi1 are scalars
    chi1 := nu11 chi1 + u12^T x2
    Continue with the repartitioned quadrants
  endwhile

Algorithm: [x] := Trmv_un_unb_var2(U, x)

  Partition and repartition as above
  while m(UTL) < m(U) do
    x0 := chi1 u01 + x0
    chi1 := nu11 chi1
    Continue with the repartitioned quadrants
  endwhile

Figure 1.24: Answer to Exercise 1.68.

make it so that everything in blue is in x:

  ( x0 )      ( U00 x0 + chi1 u01 + U02 x2 )
  ( chi1 ) := (     nu11 chi1 + u12^T x2    )
  ( x2 )      (           U22 x2            )

which justifies the algorithm in Figure 1.24(right).

Exercise 1.69 Reason why the algorithm you developed for Exercise 1.67 cannot be trivially changed so that it computes x := L x without requiring y. What is the solution?

Let us focus on the algorithm on the left in Figure 1.23. Consider the update

  psi1 := a10^T x0 + alpha11 chi1 + psi1
and reason what would happen if we blindly changed this to

  chi1 := a10^T x0 + alpha11 chi1.

Then x0 on the right of the := refers to the original contents of that subvector of x, but those have been overwritten by previous iterations. The solution is to make this change, but to then move through the matrix backwards. This is not an issue for the case where we are working with an upper triangular matrix, because there the computation accidentally moves in just the right direction. You may want to clarify this to yourself by working through a small 3 x 3 example.

Exercise 1.70 Develop algorithms for computing x := U^T x and x := L^T x, where U and L are respectively upper triangular and lower triangular, without explicitly transposing matrices U and L.

Let us focus on computing x := U^T x. Compare

  ( U00 u01 U02  )^T   ( U00^T   0     0   )
  (  0  nu11 u12^T)  = ( u01^T  nu11   0   )
  (  0   0   U22  )    ( U02^T  u12  U22^T )

to

  ( L00    0    0  )
  ( l10^T lambda11 0 )
  ( L20   l21  L22 ).

Notice that computing x := U^T x is the same as computing x := L x except that you use U00^T for L00, u01^T for l10^T, and so forth. Thus, you need to make the obvious changes to the algorithms you developed for x := L x. Ditto for x := L^T x, except that you modify the algorithms you developed for x := U x.

Exercise 1.71 Compute the cost, in flops, of the algorithm in Figure 1.20(right).

Let us analyze the algorithm in Figure 1.20(right). The cost is in the updates

  y0 := chi1 u01 + y0
  psi1 := nu11 chi1 + psi1.

Now, during the first iteration, u01 and y0 are of length 0, so that iteration requires 2 flops, for the second step only. During the ith iteration (starting with i = 0), u01 and y0 are
of length i, so that the cost of that iteration is 2i flops for the first step (an axpy operation) and 2 flops for the second. Thus, if U is an n x n matrix, then the total cost is given by

  sum_{i=0}^{n-1} (2 + 2i) = 2n + 2 sum_{i=0}^{n-1} i = 2n + 2 (n-1)n/2 = 2n + n^2 - n = n^2 + n flops.

Thus the cost of the two different algorithms is the same! This is not surprising: to get from one algorithm to the other, all you are doing is reordering the same operations.

Exercise 1.74 Modify the algorithms in Figure 1.9 to compute y := A x + y, where A is symmetric and stored in the lower triangular part of the matrix.

Notice that if A is symmetric, then

  ( A00   a01   A02  )         ( A00^T  a10  A20^T )
  ( a10^T alpha11 a12^T ) = A^T = ( a01^T alpha11 a21^T )
  ( A20   a21   A22  )         ( A02^T  a12  A22^T )

which means that a01 = a10, A20 = A02^T, and a12 = a21. Thus, if only the lower triangular part is stored, we can take advantage of these equalities in Figure 1.9, replacing in the left algorithm

  psi1 := a10^T x0 + alpha11 chi1 + a12^T x2 + psi1

by

  psi1 := a10^T x0 + alpha11 chi1 + a21^T x2 + psi1,

and in the right algorithm

  y0 := chi1 a01 + y0 ;  psi1 := chi1 alpha11 + psi1 ;  y2 := chi1 a21 + y2

by

  y0 := chi1 a10 + y0 ;  psi1 := chi1 alpha11 + psi1 ;  y2 := chi1 a21 + y2.
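The substitutions above boil down to one rule: any entry a_ij with i < j is read from its mirror a_ji. A sketch of y := A x + y that touches only the lower triangular part of a row-major array (entries strictly above the diagonal are never referenced, so their values do not matter):

```python
def symv_lower(A, x, y):
    """y := A x + y for symmetric A stored in its lower triangular part."""
    n = len(x)
    for i in range(n):
        for j in range(n):
            aij = A[i][j] if i >= j else A[j][i]   # mirror across the diagonal
            y[i] += aij * x[j]
    return y

# Only the lower triangle of this array is meaningful; -9.0 marks unused storage.
A = [[2.0, -9.0, -9.0],
     [3.0,  4.0, -9.0],
     [5.0,  6.0,  7.0]]
x = [1.0, 2.0, 3.0]
print(symv_lower(A, x, [0.0] * 3))   # [23.0, 29.0, 38.0]
```

The result matches the full symmetric matrix ( 2 3 5 ; 3 4 6 ; 5 6 7 ) applied to x.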
Exercise 1.77 Follow the directions on the wiki to implement the above routines for computing the general matrix-vector multiply.

Exercise 1.78 Follow the directions on the wiki to implement the above routines for computing the triangular matrix-vector multiply.

Exercise 1.79 Follow the directions on the wiki to implement the above routines for computing the symmetric matrix-vector multiply.

Exercise 1.80 When you try to pick from the algorithms in Figure 1.20, you will notice that neither algorithm has the property that almost all computations involve data that is stored consecutively in memory. There are actually two more algorithms for computing this operation. The observation is that y := A x + y, where A is symmetric and stored in the upper triangular part of the array, can be computed via y := U x + y followed by y := Uhat^T x + y, where U and Uhat are the upper triangular and strictly upper triangular parts of A. Now, for the first of these there are two algorithms given in the text, and the other one is an exercise. This gives us 2 x 2 combinations. By merging the two loops that you get (one for each of the two operations) you can find all four algorithms for the symmetric matrix-vector multiply. From these, you can pick the algorithm that strides through memory in the most favorable way.
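The splitting behind Exercise 1.80 can be sketched directly: with A symmetric and only its upper triangle stored, A = U + Uhat^T, so y := A x + y is y := U x + y followed by y := Uhat^T x + y. A minimal unmerged version (a real implementation would fuse the two loops and choose the stride, as the text describes):

```python
def symv_upper(A, x, y):
    """y := A x + y for symmetric A stored in its upper triangular part."""
    n = len(x)
    for i in range(n):                         # y := U x + y
        y[i] += sum(A[i][j] * x[j] for j in range(i, n))
    for i in range(n):                         # y := Uhat^T x + y
        y[i] += sum(A[j][i] * x[j] for j in range(i))
    return y

# Only the upper triangle of this array is meaningful; -9.0 marks unused storage.
A = [[ 2.0,  3.0, 5.0],
     [-9.0,  4.0, 6.0],
     [-9.0, -9.0, 7.0]]
x = [1.0, 2.0, 3.0]
print(symv_upper(A, x, [0.0] * 3))   # [23.0, 29.0, 38.0]
```

The result matches the full symmetric matrix ( 2 3 5 ; 3 4 6 ; 5 6 7 ) applied to x, confirming that the two triangular sweeps together reproduce the symmetric multiply.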
Practical Linear Algebra: Class Notes, Spring 2009
Robert van de Geijn and Maggie Myers
Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712
Draft, January 8. Copyright 2009 Robert van de Geijn.
More informationMatrix Algebra. Matrix Algebra. Chapter 8 - S&B
Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationLinear Systems of n equations for n unknowns
Linear Systems of n equations for n unknowns In many application problems we want to find n unknowns, and we have n linear equations Example: Find x,x,x such that the following three equations hold: x
More informationMathematics 13: Lecture 10
Mathematics 13: Lecture 10 Matrices Dan Sloughter Furman University January 25, 2008 Dan Sloughter (Furman University) Mathematics 13: Lecture 10 January 25, 2008 1 / 19 Matrices Recall: A matrix is a
More information4. Determinants.
4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.
More informationSystems of Linear Equations and Matrices
Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first
More informationLinear Algebra Review
Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite
More informationMath/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018
Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 28. (Vector and Matrix Norms) Homework 3 Due: Tuesday, July 3, 28 Show that the l vector norm satisfies the three properties (a) x for x
More information6. Iterative Methods for Linear Systems. The stepwise approach to the solution...
6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse
More informationChapter 2:Determinants. Section 2.1: Determinants by cofactor expansion
Chapter 2:Determinants Section 2.1: Determinants by cofactor expansion [ ] a b Recall: The 2 2 matrix is invertible if ad bc 0. The c d ([ ]) a b function f = ad bc is called the determinant and it associates
More informationA matrix over a field F is a rectangular array of elements from F. The symbol
Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted
More informationA FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic
A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Matrix Arithmetic Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)
More informationApplied Numerical Linear Algebra. Lecture 8
Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ
More informationLecture 2: Linear Algebra
Lecture 2: Linear Algebra Rajat Mittal IIT Kanpur We will start with the basics of linear algebra that will be needed throughout this course That means, we will learn about vector spaces, linear independence,
More informationSystems of Linear Equations and Matrices
Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)
More information1 Counting spanning trees: A determinantal formula
Math 374 Matrix Tree Theorem Counting spanning trees: A determinantal formula Recall that a spanning tree of a graph G is a subgraph T so that T is a tree and V (G) = V (T ) Question How many distinct
More informationCS100: DISCRETE STRUCTURES. Lecture 3 Matrices Ch 3 Pages:
CS100: DISCRETE STRUCTURES Lecture 3 Matrices Ch 3 Pages: 246-262 Matrices 2 Introduction DEFINITION 1: A matrix is a rectangular array of numbers. A matrix with m rows and n columns is called an m x n
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationa 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real
More informationSparse Linear Systems. Iterative Methods for Sparse Linear Systems. Motivation for Studying Sparse Linear Systems. Partial Differential Equations
Sparse Linear Systems Iterative Methods for Sparse Linear Systems Matrix Computations and Applications, Lecture C11 Fredrik Bengzon, Robert Söderlund We consider the problem of solving the linear system
More informationAnnouncements Wednesday, October 10
Announcements Wednesday, October 10 The second midterm is on Friday, October 19 That is one week from this Friday The exam covers 35, 36, 37, 39, 41, 42, 43, 44 (through today s material) WeBWorK 42, 43
More informationLinear Equations in Linear Algebra
1 Linear Equations in Linear Algebra 1.7 LINEAR INDEPENDENCE LINEAR INDEPENDENCE Definition: An indexed set of vectors {v 1,, v p } in n is said to be linearly independent if the vector equation x x x
More informationSection 9.2: Matrices. Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns.
Section 9.2: Matrices Definition: A matrix A consists of a rectangular array of numbers, or elements, arranged in m rows and n columns. That is, a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn A
More informationLU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark
DM559 Linear and Integer Programming LU Factorization Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark [Based on slides by Lieven Vandenberghe, UCLA] Outline
More informationAnnouncements Monday, October 02
Announcements Monday, October 02 Please fill out the mid-semester survey under Quizzes on Canvas WeBWorK 18, 19 are due Wednesday at 11:59pm The quiz on Friday covers 17, 18, and 19 My office is Skiles
More informationElementary Linear Algebra
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationSpecial types of matrices
Roberto s Notes on Linear Algebra Chapter 4: Matrix Algebra Section 2 Special types of matrices What you need to know already: What a matrix is. The basic terminology and notation used for matrices. What
More informationAlgorithms for Reducing a Matrix to Condensed Form
lgorithms for Reducing a Matrix to Condensed Form FLME Working Note #53 Field G. Van Zee Robert. van de Geijn Gregorio Quintana-Ortí G. Joseph Elizondo October 3, 2 Revised January 3, 22 bstract In a recent
More informationSection 1.6. M N = [a ij b ij ], (1.6.2)
The Calculus of Functions of Several Variables Section 16 Operations with Matrices In the previous section we saw the important connection between linear functions and matrices In this section we will
More informationBASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =
CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with
More information