MODULE 13. Topics: Linear systems
- Geoffrey Jefferson
We shall consider linear operators and the associated linear differential equations. Specifically, we shall have operators of the form

i) $Lu \equiv u' - A(t)u$, where $A(t)$ is an $n \times n$ matrix with continuous functions of $t$ in its entries and $u = (u_1, \dots, u_n)$;

ii) $Lu \equiv \sum_{j=0}^{n} a_j(t)\,u^{(j)}(t)$. In most applications this $L$ will operate on a single function $u(t)$. We assume that the coefficient of the $j$th derivative is continuous in $t$, and that $a_n(t) \neq 0$ for any $t$ in the interval of interest. Hence without loss of generality we may assume that $a_n(t) \equiv 1$.

Both operators have in common that they are linear, i.e.,
$$L(u + \alpha v) = Lu + \alpha Lv.$$
In either case we shall consider the differential equation
$$Lu = 0.$$
In other words, we are going to examine the null space of $L$ in the domain of $L$ consisting of all sufficiently smooth functions.

As we noted before, the $n$th order equation can be transformed into a first order system: if we set
$$v_1 = u,\quad v_2 = u',\quad \dots,\quad v_n = u^{(n-1)},$$
then the first order system equivalent to the single $n$th order scalar equation is
$$Lv \equiv v' - A(t)v = 0, \qquad\text{where}\qquad
A(t) = \begin{pmatrix}
0 & 1 & & \\
 & \ddots & \ddots & \\
 & & 0 & 1 \\
-a_0 & -a_1 & \cdots & -a_{n-1}
\end{pmatrix}.$$
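The passage from the scalar equation to the system can be sketched in code. The helper name `companion_matrix` is my own; it assumes the normalization $a_n \equiv 1$ made above.

```python
import numpy as np

def companion_matrix(coeffs):
    """Companion matrix A for u^(n) + a_{n-1} u^(n-1) + ... + a_0 u = 0,
    given coeffs = [a_0, a_1, ..., a_{n-1}] (a_n normalized to 1)."""
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)       # superdiagonal ones: v_k' = v_{k+1}
    A[-1, :] = [-a for a in coeffs]  # last row: u^(n) = -a_0 u - ... - a_{n-1} u^(n-1)
    return A

# For u'' - u = 0 we have a_0 = -1, a_1 = 0:
A = companion_matrix([-1.0, 0.0])
print(A)  # [[0. 1.]
          #  [1. 0.]]
```

Note that the constant-coefficient equation $u'' - u = 0$ produces exactly the matrix used in the example that follows.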
The equivalence between the $n$th order scalar equation and the first order system will be exploited time and again. We also note that the equation $Lu = 0$ for a first order system will usually be written as $u' = A(t)u$, so that we can apply the above existence and uniqueness theorem.

Theorem: $\dim N(L) = n$.

Proof. Consider the $n$ initial value problems
$$u_i' = A(t)u_i, \qquad u_i(0) = \hat e_i, \quad i = 1, \dots, n.$$
It follows from our existence and uniqueness theorem that the functions $\{u_i(t)\}$ exist, at least in a neighborhood of $t = 0$. Without proof we shall accept that if the coefficients are bounded for all $t$ then these solutions in fact exist for all $t$. These solutions are linearly independent, for if
$$\sum_{j=1}^n \alpha_j u_j(t) = U(t)\vec\alpha = 0,$$
where $U$ is the matrix whose $i$th column is $u_i(t)$, then also $U(0)\vec\alpha = 0$. But $U(0) = I$, hence $\vec\alpha = (\alpha_1, \dots, \alpha_n) = 0$.

Now suppose that $w(t)$ is any vector in $N(L)$. Then by linearity the function $v(t) = U(t)w(0)$ satisfies
$$v'(t) = A(t)v(t), \qquad v(0) = w(0).$$
Uniqueness guarantees that $v(t) \equiv w(t)$. Hence any function in the null space of $L$ is representable as a linear combination of the solutions $\{u_i\}$, and these $n$ functions form a basis of $N(L)$.
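The proof's construction, integrating from each unit vector and assembling the fundamental matrix $U(t)$, can be mimicked numerically. A minimal sketch with a hand-rolled classical RK4 step (the function name and the integrator choice are my own, not from the notes):

```python
import numpy as np

def fundamental_matrix(A, t_end, n_steps=2000):
    """Integrate the matrix IVP U' = A(s) U, U(0) = I, with classical RK4.
    A is a callable returning the n x n coefficient matrix at time s.
    (A teaching sketch, not a production integrator.)"""
    n = A(0.0).shape[0]
    U = np.eye(n)          # columns start at the unit vectors e_i
    h = t_end / n_steps
    s = 0.0
    for _ in range(n_steps):
        k1 = A(s) @ U
        k2 = A(s + h / 2) @ (U + h / 2 * k1)
        k3 = A(s + h / 2) @ (U + h / 2 * k2)
        k4 = A(s + h) @ (U + h * k3)
        U = U + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return U

# Constant-coefficient case u' = [[0,1],[1,0]] u:
A = lambda s: np.array([[0.0, 1.0], [1.0, 0.0]])
U1 = fundamental_matrix(A, 1.0)
# Columns are the solutions with u_i(0) = e_i:
# (cosh t, sinh t) and (sinh t, cosh t), so det U(1) = cosh^2 - sinh^2 = 1.
print(np.round(U1, 6))
```

The determinant of $U(t)$ staying non-zero is the numerical counterpart of the linear-independence argument in the proof.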
Example: If
$$u' = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} u,$$
then, by inspection,
$$u_1(t) = \begin{pmatrix} \cosh t \\ \sinh t \end{pmatrix}, \qquad u_2(t) = \begin{pmatrix} \sinh t \\ \cosh t \end{pmatrix}.$$
The function
$$w(t) = \begin{pmatrix} e^{-t} \\ -e^{-t} \end{pmatrix} \in N(L), \qquad\text{and}\qquad w(t) = u_1(t) - u_2(t).$$

What does this theorem tell us about
$$Lu \equiv \sum_{j=0}^n a_j(t)\,u^{(j)}(t) = 0\,?$$
The corresponding vector system $v' = A(t)v$ has $n$ linearly independent solutions $v_i(t)$, $i = 1, \dots, n$. The first component of each vector $v_i(t)$ is a solution of $Lu = 0$; the remaining components are its derivatives. If we set $u_i(t) = {}$ first component of $v_i(t)$, then $Lu_i(t) = 0$ and $u_i^{(k)}(0) = \delta_{i,k+1}$. In other words, $u_1(0) = 1$ and all derivatives up to order $(n-1)$ are zero at $t = 0$; $u_2(0) = 0$, $u_2'(0) = 1$, and all other derivatives of $u_2$ are zero at $t = 0$. If we look at the Wronskian of these $n$ functions we see from
$$W(t) = \det\begin{pmatrix}
u_1(t) & u_2(t) & \cdots & u_n(t) \\
u_1'(t) & u_2'(t) & \cdots & u_n'(t) \\
\vdots & \vdots & & \vdots \\
u_1^{(n-1)}(t) & u_2^{(n-1)}(t) & \cdots & u_n^{(n-1)}(t)
\end{pmatrix} = \det(v_1\ v_2\ \cdots\ v_n)$$
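The claims in this example are easy to spot-check numerically. A minimal sketch using only the standard library, verifying that the difference $u_1 - u_2$ is the decaying pure exponential $(e^{-t}, -e^{-t})$:

```python
import math

def u1(t):
    # first basis solution of u' = [[0,1],[1,0]] u
    return (math.cosh(t), math.sinh(t))

def u2(t):
    # second basis solution
    return (math.sinh(t), math.cosh(t))

for t in [0.0, 0.5, 1.3]:
    x1, x2 = u1(t)
    y1, y2 = u2(t)
    # u1 - u2 = (e^{-t}, -e^{-t}), itself a solution by linearity
    assert abs((x1 - y1) - math.exp(-t)) < 1e-12
    assert abs((x2 - y2) + math.exp(-t)) < 1e-12
print("decaying combination verified")
```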
that $W(0) = 1$, so that the functions $\{u_i(t)\}$ are linearly independent. The uniqueness of solutions for an initial value problem can then be used to demonstrate that these functions are a basis of $N(L)$. We summarize:

i) The equation $u' = A(t)u$ has $n$ linearly independent solutions $\{u_i\}$, and any solution of this system can be written as a linear combination of the $\{u_i\}$.

ii) The equation
$$Lu = \sum_{j=0}^n a_j(t)\,u^{(j)}(t) = 0$$
has $n$ linearly independent solutions $\{u_i\}$, and any solution of $Lu = 0$ can be written as a linear combination of the $\{u_i\}$.

Example: Consider the boundary value problem
$$Lu \equiv u'' + u = 0, \qquad u(0) + u(1) = 1, \quad u'(2) = 2.$$
We verify that two linearly independent solutions of $Lu = 0$ are $u_1(t) = \cos t$ and $u_2(t) = \sin t$. A solution $u(t)$ of our boundary value problem, if it exists, must belong to $\operatorname{span}\{u_1(t), u_2(t)\}$, i.e.,
$$u(t) = \alpha_1 u_1(t) + \alpha_2 u_2(t).$$
We satisfy the boundary conditions if
$$\begin{pmatrix} 1 + \cos 1 & \sin 1 \\ -\sin 2 & \cos 2 \end{pmatrix}
\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} =
\begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
The system is not singular, so $\alpha_1$ and $\alpha_2$ can be found; hence a solution $u(t)$ exists. Moreover, this solution is unique, because if there were two solutions $v(t)$ and $w(t)$, then the function $u(t) = v(t) - w(t)$
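The $2 \times 2$ system for $\alpha_1, \alpha_2$ can be solved explicitly. A sketch using Cramer's rule; the matrix entries and right-hand side $(1, 2)$ come from the two boundary conditions $u(0) + u(1) = 1$ and $u'(2) = 2$:

```python
import math

# u(t) = a1*cos t + a2*sin t; impose u(0)+u(1) = 1 and u'(2) = 2.
M = [[1 + math.cos(1.0), math.sin(1.0)],
     [-math.sin(2.0),    math.cos(2.0)]]
b = [1.0, 2.0]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # non-zero, so a solution exists
a1 = (b[0] * M[1][1] - M[0][1] * b[1]) / det  # Cramer's rule
a2 = (M[0][0] * b[1] - b[0] * M[1][0]) / det

u  = lambda t: a1 * math.cos(t) + a2 * math.sin(t)
du = lambda t: -a1 * math.sin(t) + a2 * math.cos(t)

# Both boundary conditions are recovered:
print(u(0) + u(1), du(2))
```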
would have to satisfy
$$Lu = 0, \qquad u(0) + u(1) = 0, \quad u'(2) = 0.$$
But the only function in the span of $u_1(t)$ and $u_2(t)$ which satisfies these boundary conditions is the zero function.

The dominant practical difficulty in solving linear equations is the calculation of the basis functions. Only for constant coefficient systems is there a consistent way of generating the basis; this topic will be studied at length a little later. However, given one element in the null space of an $n$th order linear operator, one can often find a second linearly independent element from a related simpler problem. We shall discuss only the case of a second order equation of the form
$$Lu \equiv a_2(t)u'' + a_1(t)u' + a_0(t)u = 0,$$
where the coefficients are continuous in $t$ on some interval. Here the computation proceeds as follows: let $u_1(t)$ be an element of $N(L)$, i.e., $Lu_1(t) = 0$. Then one can find a second element of $N(L)$ of the form
$$u_2(t) = \phi(t)u_1(t),$$
where $\phi$ is found from (essentially) a first order separable equation. Since we want $Lu_2 = 0$, we find that $\phi$ must be chosen so that
$$a_2(t)\left[\phi'' u_1 + 2\phi' u_1' + \phi u_1''\right] + a_1(t)\left[\phi' u_1 + \phi u_1'\right] + a_0(t)\phi u_1 = 0.$$
Since $Lu_1 = 0$, this equation simplifies to
$$a_2(t)u_1\phi'' + \left[2a_2(t)u_1' + a_1(t)u_1\right]\phi' = 0.$$
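The simplification above can be checked symbolically. A sketch using SymPy, with all symbols generic: it expands $L(\phi u_1)$ and confirms it equals $a_2 u_1 \phi'' + (2a_2 u_1' + a_1 u_1)\phi' + \phi\, Lu_1$, so the last term drops when $Lu_1 = 0$:

```python
import sympy as sp

t = sp.symbols('t')
a0, a1, a2, u1, phi = [sp.Function(name)(t)
                       for name in ('a0', 'a1', 'a2', 'u1', 'phi')]

# The second order operator L from the notes
L = lambda u: a2 * sp.diff(u, t, 2) + a1 * sp.diff(u, t) + a0 * u

grouped = (a2 * u1 * sp.diff(phi, t, 2)
           + (2 * a2 * sp.diff(u1, t) + a1 * u1) * sp.diff(phi, t)
           + phi * L(u1))

residual = sp.expand(L(phi * u1) - grouped)
print(residual)  # 0 -- the identity holds for arbitrary coefficients
```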
We now have a first order separable equation in $\psi \equiv \phi'$, which we can write as
$$\frac{\psi'}{\psi} = -\frac{2a_2(t)u_1'(t) + a_1(t)u_1(t)}{a_2(t)u_1(t)},$$
and which has a non-zero exponential solution. The function $\phi$ is then obtained by integrating $\psi$.

Example: Consider
$$Lu \equiv t^2u'' + 3tu' + u = 0.$$
It is known that one can find a solution of the form $u_1(t) = t^\alpha$. If we substitute this function into $Lu = 0$, we find that $\alpha$ must be chosen such that
$$\left[\alpha(\alpha - 1) + 3\alpha + 1\right]t^\alpha = 0,$$
i.e., $\alpha^2 + 2\alpha + 1 = 0$, which yields the solution $\alpha = -1$ and
$$u_1(t) = \frac{1}{t}$$
in any interval not including the origin. Since the above quadratic in $\alpha$ has only one repeated root, we do not obtain two different elements of $N(L)$. Let us then find a second element of the form
$$u_2(t) = \phi(t)\,\frac{1}{t}.$$
If we substitute $u_2$ into the differential equation we obtain
$$t^2\left[\phi''\frac{1}{t} - 2\phi'\frac{1}{t^2} + 2\phi\frac{1}{t^3}\right] + 3t\left[\phi'\frac{1}{t} - \phi\frac{1}{t^2}\right] + \phi\frac{1}{t} = 0,$$
so that
$$t\phi'' + \phi' = 0.$$
It follows that
$$\phi'(t) = \frac{K}{t}$$
and hence that
$$\phi(t) = K\ln t + c.$$
Hence a second element of $N(L)$ is
$$u_2(t) = \frac{1}{t}\ln t.$$
By inspection $u_2$ is not a scalar multiple of $u_1$, so we have a basis for $N(L)$.

Finally, we observe that the element $\phi u_1$ generated in this way is always linearly independent of $u_1$. We simply look at the Wronskian:
$$\det\begin{pmatrix} u_1(t) & \phi(t)u_1(t) \\ u_1'(t) & \phi'(t)u_1(t) + \phi(t)u_1'(t) \end{pmatrix} = u_1^2(t)\,\phi'(t),$$
which is non-zero at some point because $\phi'$ is a non-zero exponential function.
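The example's conclusions can be spot-checked numerically. A sketch using only the standard library (derivatives of $u_1 = 1/t$ and $u_2 = (\ln t)/t$ are computed by hand; the tolerances are my own):

```python
import math

def L(u, du, d2u, t):
    """Evaluate t^2 u'' + 3t u' + u given the values of u, u', u'' at t."""
    return t * t * d2u + 3 * t * du + u

for t in [0.5, 1.0, 2.0, 5.0]:
    # u1 = 1/t:  u1' = -1/t^2,  u1'' = 2/t^3
    assert abs(L(1 / t, -1 / t**2, 2 / t**3, t)) < 1e-12

    # u2 = ln(t)/t:  u2' = (1 - ln t)/t^2,  u2'' = (2 ln t - 3)/t^3
    lt = math.log(t)
    assert abs(L(lt / t, (1 - lt) / t**2, (2 * lt - 3) / t**3, t)) < 1e-9

    # Wronskian u1*u2' - u1'*u2 should equal u1^2 * phi' = (1/t^2)(1/t)
    W = (1 / t) * ((1 - lt) / t**2) - (-1 / t**2) * (lt / t)
    assert abs(W - 1 / t**3) < 1e-12

print("basis and Wronskian identity verified")
```

Since the Wronskian $1/t^3$ never vanishes away from the origin, the two solutions are independent on any interval excluding $t = 0$, matching the argument above.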
Homework Module 13

1) Consider
$$Lu \equiv u'' + 3tu' - 4\sin u = f(t).$$
i) Prove or disprove: $L$ is linear on $C^2[0, 1]$.
ii) Determine $f(t)$ such that $u(t) = 5t$ is a solution of $Lu = f(t)$.

2) Consider
$$Lu = u'' - 2u' + u = te^t.$$
i) Show that $\{e^t, te^t\}$ is a basis of $N(L)$.
ii) Determine a particular integral of the form
$$u_p(t) = (a_0 + a_1t + a_2t^2 + a_3t^3)e^t.$$
iii) Find a solution of $Lu = te^t$, $u(1) = 1$, $u'(2) = 2$.
iv) Write a first order system equivalent to $Lu = te^t$, $u(1) = 1$, $u'(2) = 2$ and give its solution.

3) Find a basis of $N(L)$ when $Lu = u'' + 4u$. Solve $Lu = 0$ subject to
$$\int_0^\pi u(t)\,dt = 1,$$
or explain why no such solution exists.

4) Find a solution of the form $u(t) = t^\alpha$ for
$$Lu \equiv t^2u'' + 5tu' + 4u = 0,$$
then find a basis for $N(L)$.