Math 61CM - Solutions to homework 2


Cédric De Groote
October 5th, 2018

Problem 1: Let V be the vector space of polynomials of degree at most 5, with coefficients in a field F. Let U be the subspace of V consisting of polynomials of the form az^5 + bz + c with a, b, c ∈ F. Find a subspace W such that every element v ∈ V can be written in one and only one way as the sum of an element in U and an element in W.

Solution: We have

    V = { a_5 z^5 + a_4 z^4 + a_3 z^3 + a_2 z^2 + a_1 z + a_0 : a_0, ..., a_5 ∈ F },
    U = { az^5 + bz + c : a, b, c ∈ F }.

Since every polynomial can be written in a unique way as a sum of monomials, a good guess for W is

    W = { dz^4 + ez^3 + fz^2 : d, e, f ∈ F }.

We have several things to check.

W is a subspace: it is clear that 0 ∈ W (taking d = e = f = 0), and for λ, µ ∈ F we have

    λ(dz^4 + ez^3 + fz^2) + µ(d'z^4 + e'z^3 + f'z^2) = (λd + µd')z^4 + (λe + µe')z^3 + (λf + µf')z^2 ∈ W

for any d, d', e, e', f, f' ∈ F.

Any element v ∈ V can be written in some way as the sum of an element in U and an element in W: for any a_5, a_4, a_3, a_2, a_1, a_0 ∈ F, we have

    a_5 z^5 + a_4 z^4 + a_3 z^3 + a_2 z^2 + a_1 z + a_0 = (a_5 z^5 + a_1 z + a_0) + (a_4 z^4 + a_3 z^3 + a_2 z^2),

where the first summand is in U and the second is in W.

This decomposition is unique: if (a_5 z^5 + a_1 z + a_0) + (a_4 z^4 + a_3 z^3 + a_2 z^2) = (a'_5 z^5 + a'_1 z + a'_0) + (a'_4 z^4 + a'_3 z^3 + a'_2 z^2), then a_i = a'_i for every 0 ≤ i ≤ 5, by the definition of equality of polynomials. Hence the decomposition is unique.
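The splitting can be made concrete. The sketch below (my own illustration, not part of the solution) represents an element of V by its coefficient list [a_0, ..., a_5] and splits it into its U-part and W-part:

```python
def decompose(coeffs):
    """Split [a0, a1, a2, a3, a4, a5] into (u, w) with u in U (the z^5, z and
    constant terms) and w in W (the z^4, z^3, z^2 terms), so u + w = coeffs."""
    u = [coeffs[0], coeffs[1], 0, 0, 0, coeffs[5]]
    w = [0, 0, coeffs[2], coeffs[3], coeffs[4], 0]
    return u, w

# 3z^5 + 2z^4 + 4z^2 - z + 7, as a coefficient list [a0, ..., a5]
v = [7, -1, 4, 0, 2, 3]
u, w = decompose(v)
assert [x + y for x, y in zip(u, w)] == v  # the two parts sum back to v
```

Since each coefficient of v lands in exactly one of the two parts, the decomposition is forced, which mirrors the uniqueness argument above.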

Problem 2: Prove that there exists a quadratic polynomial ax^2 + bx + c whose graph passes through the points (0, 1), (1, 0), (2, 3). Is such a polynomial unique?

Solution: Let p(x) = ax^2 + bx + c. We need to find a, b, c such that p(0) = 1, p(1) = 0 and p(2) = 3. Plugging in the definition of p(x), this yields the system

    c = 1
    a + b + c = 0
    4a + 2b + c = 3.

Solving it, we find the unique solution a = 2, b = -3, c = 1. Therefore such a polynomial exists and is unique.

If you are interested: it is true in general that there exists a unique polynomial of degree at most n whose graph passes through n + 1 given points, no two of which have the same x-coordinate. This is because the associated system of n + 1 equations has a non-zero determinant (see https://en.wikipedia.org/wiki/Vandermonde_matrix). As we will see later in the class, this implies that the system has a unique solution.

Problem 3: Find a single homogeneous linear equation with unknowns x_1, x_2, x_3 such that the solution set is the span of the two vectors (1, 1, 1), (1, -2, 0).

Solution: To answer this question, it would be sufficient to guess an equation and then explain why its solution set is the span of these two vectors. However, I will show how to solve this problem without guessing. We want to find a, b, c ∈ R such that (1, 1, 1) and (1, -2, 0) are solutions of ax_1 + bx_2 + cx_3 = 0. In other words, we want

    a + b + c = 0
    a - 2b = 0.

After a few manipulations, we see that this is equivalent to

    a = 2b, c = -3b.

Taking b = 1 for example, this gives the equation

    2x_1 + x_2 - 3x_3 = 0.

Now we need to check that this is indeed the right answer. Call S the solution set of 2x_1 + x_2 - 3x_3 = 0, and W the span of (1, 1, 1) and (1, -2, 0).

W ⊆ S: the above shows that the two vectors (1, 1, 1) and (1, -2, 0) are in S. Since we know that S is a subspace of R^3 (Q6(b), HW1), it follows that W ⊆ S. In other words, any linear combination of solutions is a solution, and W consists exactly of all the linear combinations of (1, 1, 1) and (1, -2, 0).
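Both computations so far can be sanity-checked in a few lines of plain Python (my own check, using the numbers found above):

```python
# Problem 2: p(x) = 2x^2 - 3x + 1 should pass through (0, 1), (1, 0), (2, 3).
p = lambda x: 2 * x**2 - 3 * x + 1
assert (p(0), p(1), p(2)) == (1, 0, 3)

# Problem 3: both spanning vectors should satisfy 2x_1 + x_2 - 3x_3 = 0.
eq = lambda v: 2 * v[0] + v[1] - 3 * v[2]
assert eq((1, 1, 1)) == 0 and eq((1, -2, 0)) == 0
```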

S ⊆ W: any solution (x_1, x_2, x_3) of 2x_1 + x_2 - 3x_3 = 0 must satisfy x_2 = 3x_3 - 2x_1. So we can write the general solution as

    (x_1, 3x_3 - 2x_1, x_3) = x_3 (1, 1, 1) + (x_1 - x_3)(1, -2, 0),

which is a linear combination of (1, 1, 1) and (1, -2, 0). Hence S ⊆ W.

Problem 4: Let V be a vector space and suppose S = {v_1, ..., v_k} is a finite set of linearly dependent vectors in V. Prove that there is a proper subset of S whose span is equal to the span of S. By a proper subset, we mean a subset T ⊆ S such that T ≠ S.

Solution: By Lemma 2.10 in the notes, there exist j ∈ {1, ..., k} and c_1, ..., c_{j-1}, c_{j+1}, ..., c_k ∈ F such that

    v_j = c_1 v_1 + ... + c_{j-1} v_{j-1} + c_{j+1} v_{j+1} + ... + c_k v_k.

Let T = S \ {v_j}. Clearly, T is a proper subset of S. Now we show that span(T) = span(S).

span(T) ⊆ span(S): this is immediate, since T ⊆ S.

span(S) ⊆ span(T): let v = a_1 v_1 + ... + a_k v_k ∈ span(S). Then, replacing v_j by the formula above and regrouping the terms, we can write

    v = (a_1 + a_j c_1) v_1 + ... + (a_{j-1} + a_j c_{j-1}) v_{j-1} + (a_{j+1} + a_j c_{j+1}) v_{j+1} + ... + (a_k + a_j c_k) v_k,

which is in span(T). Hence span(S) ⊆ span(T).

Problem 5: Use Gaussian elimination in R to show that the solution set of the homogeneous system

    x_1 + 2x_2 + 3x_3 + 4x_4 = 0
    x_1 + 4x_2 + 3x_3 + 2x_4 = 0
    2x_1 + 5x_2 + 6x_3 + 7x_4 = 0
    x_1 + 3x_3 + 6x_4 = 0

is a plane through 0 (that is, it is the span of two linearly independent vectors), and find two linearly independent vectors whose span is the solution space.

Solution: We apply Gaussian elimination. Recall that it consists in modifying the system of equations in order to find a set of variables, called the pivot variables, such that each of them appears exactly once in the system, with coefficient 1, and no two of them appear in the same equation. Furthermore, this set should be maximal, i.e. there is no larger set of variables satisfying the same condition (this is equivalent to asking that the number of pivot variables equals the number of non-zero equations).
To do this, you are allowed to use the three operations described at the beginning of Section 4.1 in the notes; the precise algorithm to follow in order to achieve this is also described there. The remaining variables are called the free variables. The reason for that name, and the reason this whole Gaussian elimination procedure is useful, is that once we have a system of equations of the form just described, we can isolate each pivot variable and express it purely in terms of the free variables. The free variables are "free" because any choice of values for them gives a unique solution for the corresponding pivot variables.

We can then write down the general solution of the system and express it as a linear combination of vectors with coefficients given by the free variables. This exhibits an explicit set of vectors that spans the solution set of the system. The matrix of our system is

    [ 1  2  3  4 ]
    [ 1  4  3  2 ]
    [ 2  5  6  7 ]
    [ 1  0  3  6 ]

Subtracting the appropriate multiple of the first row from the second, third and fourth ones, in such a way that the coefficient of x_1 becomes 0 there, we get

    [ 1  2  3  4 ]
    [ 0  2  0 -2 ]
    [ 0  1  0 -1 ]
    [ 0 -2  0  2 ]

Subtracting 2 times the third row from the second one, and (-2) times the third row from the fourth one, we get

    [ 1  2  3  4 ]
    [ 0  0  0  0 ]
    [ 0  1  0 -1 ]
    [ 0  0  0  0 ]

Finally, we want to get rid of the 2 in the first row, so that the variable x_2 appears only in the third row. To achieve this, we subtract 2 times the third row from the first one, to get

    [ 1  0  3  6 ]
    [ 0  0  0  0 ]
    [ 0  1  0 -1 ]
    [ 0  0  0  0 ]

The second and fourth rows read 0 = 0 (which is always true), so we can drop them. The associated system is therefore

    x_1 + 3x_3 + 6x_4 = 0
    x_2 - x_4 = 0,

which we rewrite as

    x_1 = -3x_3 - 6x_4
    x_2 = x_4

in such a way that the pivot variables x_1 and x_2 are isolated. Now it is clear that we can choose x_3 and x_4 in any way we want, and that x_1 and x_2 are then entirely determined by that choice. So we can write the general solution as

    (-3x_3 - 6x_4, x_4, x_3, x_4) = x_3 (-3, 0, 1, 0) + x_4 (-6, 1, 0, 1),

hence every solution lies in the span of (-3, 0, 1, 0) and (-6, 1, 0, 1), and conversely every such combination is a solution. Furthermore, since (-3, 0, 1, 0) and (-6, 1, 0, 1) are not collinear, they are linearly independent. It follows that (-3, 0, 1, 0) and (-6, 1, 0, 1) are two linearly independent vectors whose span is the solution space.
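As a quick sanity check (mine, not part of the notes), one can plug both spanning vectors back into the four original equations:

```python
# Coefficient matrix of the homogeneous system from Problem 5.
A = [
    [1, 2, 3, 4],
    [1, 4, 3, 2],
    [2, 5, 6, 7],
    [1, 0, 3, 6],
]
v1 = (-3, 0, 1, 0)
v2 = (-6, 1, 0, 1)

# Every row of A should annihilate both spanning vectors.
for v in (v1, v2):
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```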

Problem 6: In parts (a) and (b) we will show that multiplicative inverses exist in Z/pZ for p prime. So, you should not use this fact in your answers to (a) and (b). However, feel free to use other basic facts about arithmetic in Z/pZ.

(a) Suppose a ∈ Z/pZ, with a ≠ 0. For any x, y ∈ Z/pZ, show that if ax = ay (with multiplication in Z/pZ, i.e. modulo p) then x = y. [Hint: you will need to unwrap the definition of mod p multiplication, and make use of the hypotheses that p is prime and a ≠ 0.]

(b) By considering the set {a·0, a·1, ..., a·(p-1)} over Z/pZ, or otherwise, show that there exists b ∈ Z/pZ such that ab = 1 (again, with multiplication mod p).

(c) In the set of integers Z, solve the system of equations

    2x + y = 2 mod 5
    3x - 2y = 0 mod 5

by using Gaussian elimination in Z/5Z.

Solution:

(a) Suppose that ax = ay. This means that ax - ay = 0 in Z/pZ, hence p divides ax - ay = a(x - y), now seen as an element of Z. Since p is prime, p divides a or p divides x - y. But we assumed that a ≠ 0 in Z/pZ, hence p does not divide a. It follows that p divides x - y, hence x = y in Z/pZ.

(b) Assume that there does not exist such an element b. Then the elements a·0, ..., a·(p-1) take at most p - 1 different values. It follows by the pigeonhole principle that two of them must be equal, i.e. ak = al for some k ≠ l in Z/pZ. By part (a), this implies that k = l, a contradiction. Another way of phrasing this: the map from Z/pZ to Z/pZ given by multiplication by a is one-to-one, by (a). Since any one-to-one map from a finite set to itself also has to be onto, there must exist b ∈ Z/pZ such that ab = 1.

(c) The matrix associated to that system (including a last column for the right-hand side) is

    [ 2  1 | 2 ]
    [ 3 -2 | 0 ]

We multiply the first row by 3 (which is the inverse of 2 in Z/5Z), and the second row by 2 (which is the inverse of 3 in Z/5Z). Remember that 2·(-2) = -4 = 1:

    [ 1  3 | 1 ]
    [ 1  1 | 0 ]

We subtract the first row from the second one. Remember that 1 - 3 = -2 = 3 and that 0 - 1 = -1 = 4:

    [ 1  3 | 1 ]
    [ 0  3 | 4 ]

We want to get rid of the 3 in the first row. For this, we subtract the second row from the first one. Remember that 1 - 4 = -3 = 2:

    [ 1  0 | 2 ]
    [ 0  3 | 4 ]

Finally, we want to transform the 3 in the second row into a 1. To do this, we multiply it by 2, which is the inverse of 3. Keep in mind that 2·4 = 8 = 3:

    [ 1  0 | 2 ]
    [ 0  1 | 3 ]

The corresponding system of equations is

    x = 2 mod 5
    y = 3 mod 5.

This is our solution.

Problem 7: If a, b ∈ R with a < b, prove:

(a) There is a rational r ∈ (a, b);
(b) There is an irrational c ∈ (a, b) (Hint: use part (a) and Q2 of HW1);
(c) (a, b) contains infinitely many rationals and infinitely many irrationals.

Solution:

(a) The idea is as follows: using Archimedes' principle, we will produce a rational number which is smaller than b - a. We will then use Archimedes' principle again to produce a multiple of it that is between a and b. Here is how it works: with x = 1 and h = b - a, Archimedes' principle produces k_1 ∈ Z such that

    (k_1 - 1)(b - a) ≤ 1 < k_1 (b - a).

It is clear that we must have k_1 > 0, since b - a > 0. In particular, we get 1/k_1 < b - a. Now, apply Archimedes' principle again with x = a and h = 1/k_1 to get k_2 ∈ Z such that

    (k_2 - 1) · 1/k_1 ≤ a < k_2 · 1/k_1.

Then r = k_2/k_1 is rational, and larger than a by the line above. We check that r < b:

    r = k_2/k_1 = (k_2 - 1) · 1/k_1 + 1/k_1 ≤ a + 1/k_1 < a + (b - a) = b.
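The two-step construction in (a) can be mirrored in code. The sketch below is my own illustration, with math.floor standing in for Archimedes' principle; it works for generic floating-point endpoints (exact boundary cases would need rational arithmetic):

```python
import math

def rational_between(a, b):
    """Return (k2, k1) so that the rational k2/k1 lies in (a, b),
    following the two Archimedes steps from the proof."""
    assert a < b
    k1 = math.floor(1 / (b - a)) + 1  # so that 1/k1 < b - a
    k2 = math.floor(a * k1) + 1       # so that (k2 - 1)/k1 <= a < k2/k1
    return k2, k1

k2, k1 = rational_between(math.sqrt(2), math.sqrt(3))
assert math.sqrt(2) < k2 / k1 < math.sqrt(3)
```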

(b) By part (a), there exists a rational number c such that a - √2 < c < b - √2. Adding √2, we get a < c + √2 < b. The number c + √2 is irrational, by Q2 of HW1.

(c) By (a), there exists a rational number x_1 between a and b. Apply (a) again to find another rational number x_2 between a and x_1. Apply (a) again to find another rational number x_3 between a and x_2. In this way, we construct infinitely many distinct rational numbers between a and b, as asked. For irrational numbers, do the same as above, but using (b) to construct irrational numbers.

Problem 8: Let a_n, b_n and c_n be three sequences such that a_n ≤ b_n ≤ c_n for all n ∈ N, and the limits of a_n and c_n exist and coincide:

    lim_{n→∞} a_n = lim_{n→∞} c_n.

Show that then the sequence b_n also converges, and

    lim_{n→∞} a_n = lim_{n→∞} b_n = lim_{n→∞} c_n.

Solution: Let us call s = lim_{n→∞} a_n = lim_{n→∞} c_n. Let ε > 0. Since a_n converges to s, there exists N_1 ∈ N such that for every n ≥ N_1 we have |a_n - s| < ε/3 (we took ε/3 in the definition of convergence). Similarly, since c_n converges to s, there exists N_2 ∈ N such that for every n ≥ N_2 we have |c_n - s| < ε/3. Let N_3 = max{N_1, N_2}. Since N_3 is greater than or equal to both N_1 and N_2, we have that for all n ≥ N_3, both |a_n - s| < ε/3 and |c_n - s| < ε/3.

By the triangle inequality, we have

    |a_n - c_n| = |a_n - s + s - c_n| ≤ |a_n - s| + |s - c_n|.

For n ≥ N_3, this is smaller than ε/3 + ε/3 = 2ε/3. Since a_n ≤ c_n, we have |a_n - c_n| = c_n - a_n. We therefore get c_n - a_n < 2ε/3, which is c_n < a_n + 2ε/3. Since a_n ≤ b_n ≤ c_n, we get a_n ≤ b_n ≤ c_n < a_n + 2ε/3. Therefore a_n ≤ b_n < a_n + 2ε/3, which gives 0 ≤ b_n - a_n < 2ε/3. Hence |b_n - a_n| < 2ε/3 for n ≥ N_3.

Now, for n ≥ N_3, we have by the triangle inequality

    |b_n - s| = |b_n - a_n + a_n - s| ≤ |b_n - a_n| + |a_n - s| < 2ε/3 + ε/3 = ε.

This proves that b_n converges to s.

Other method: subtracting s, we get a_n - s ≤ b_n - s ≤ c_n - s. It is then easy to see that we must have |b_n - s| ≤ max{|a_n - s|, |c_n - s|}. Now, as in the solution before, given ε > 0, there exists N_3 ∈ N such that |a_n - s| < ε and |c_n - s| < ε for n ≥ N_3. Then we also have |b_n - s| < ε for n ≥ N_3, proving that b_n converges to s.
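The key inequality of the second method can be illustrated numerically (my own example, not from the solution): for any b_n squeezed between a_n = 1 - 1/n and c_n = 1 + 1/n, the distance |b_n - s| never exceeds the larger of the two outer distances.

```python
s = 1.0
for n in range(1, 1000):
    a = 1 - 1 / n
    c = 1 + 1 / n
    b = 1 + ((-1) ** n) / (2 * n)  # some sequence with a <= b <= c
    assert a <= b <= c
    assert abs(b - s) <= max(abs(a - s), abs(c - s))
```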

Problem 9: Show that lim_{n→∞} n^{1/n} = 1. Hint: show that for any δ > 0 there exists N (that depends on δ) such that n ≤ (1 + δ)^n for all n ≥ N.

Solution: Let us prove the hint first. Let δ > 0. Note that, for n ≥ 2, we have

    -n + (1 + δ)^n = -n + Σ_{j=0}^{n} C(n, j) δ^j ≥ -n + 1 + nδ + ((n^2 - n)/2) δ^2,

where C(n, j) denotes the binomial coefficient (we kept only the j = 0, 1, 2 terms of the binomial expansion). This last expression is positive for n large: indeed, it is a polynomial in n of degree 2 whose coefficient of n^2 (namely δ^2/2) is positive, hence it tends to +∞ as n → +∞. Therefore there exists N ∈ N such that -n + (1 + δ)^n ≥ 0 for n ≥ N, i.e. n ≤ (1 + δ)^n for all n ≥ N. This is equivalent to: for any δ > 0, there exists N ∈ N such that for all n ≥ N we have n^{1/n} ≤ 1 + δ. Since n^{1/n} ≥ 1 for all n ≥ 1, this means that for every δ > 0, there exists N ∈ N such that |n^{1/n} - 1| ≤ δ for n ≥ N. It follows by the definition of the limit that lim_{n→∞} n^{1/n} = 1.
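A quick numerical look (my own illustration, not part of the proof) shows the sequence approaching 1 from above, as the proof predicts:

```python
# n^(1/n) for a few rapidly growing values of n.
values = {n: n ** (1 / n) for n in (10, 100, 10_000, 1_000_000)}
for n, v in values.items():
    print(n, v)

# The terms stay >= 1 and get arbitrarily close to 1.
assert all(v >= 1 for v in values.values())
assert abs(values[1_000_000] - 1) < 1e-3
```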