MTH 5102 Linear Algebra Practice Exam 1 - Solutions Feb. 9, 2016

Name (Last name, First name): ___________

MTH 5102 Linear Algebra Practice Exam 1 - Solutions, Feb. 9, 2016

Exam Instructions: You have 1 hour & 10 minutes to complete the exam. There are a total of 6 problems. You must show your work. Partial credit may be given even for incomplete problems as long as you show your work.

1. (a) Give the definition of a vector space V over a field F. (b) Write out explicitly the definitions of addition and scalar multiplication for the vector spaces: (i) {0}; (ii) F^n; (iii) M_{m×n}(F); (iv) F(S, F), where S is any nonempty set; (v) P(F) and P_n(F); (vi) C(R) and C^n(R); (vii) the set of symmetric n×n matrices over a field F; (viii) C^n as a vector space over the complex numbers C; (ix) C^n as a vector space over the real numbers R. (c) Prove that P_n(F) is a subspace of P(F) and both are subspaces of F(F, F). (d) Prove that C^n(R) is a subspace of C(R) and both are subspaces of F(R, R). (e) Prove that the set in (vii) is a subspace of M_{n×n}(F).

Solution. (a) See the book.

(b) We denote c ∈ F. Then: (i) 0 + 0 = 0 and c0 = 0; (ii) (a_1, ..., a_n) + (b_1, ..., b_n) = (a_1 + b_1, ..., a_n + b_n) and c(a_1, ..., a_n) = (ca_1, ..., ca_n); (iii) addition and scalar multiplication are entrywise: for A = (a_{ij}), B = (b_{ij}) in M_{m×n}(F), (A + B)_{ij} = a_{ij} + b_{ij} and (cA)_{ij} = c a_{ij}; (iv) f + g and cf for f, g ∈ F(S, F) are the functions (f + g)(s) = f(s) + g(s) and (cf)(s) = c[f(s)] for all s ∈ S; (v) same definitions as in F(F, F); (vi) same definitions as in F(R, R); (vii) same definitions as in M_{n×n}(F); (viii) same definitions as F^n with F = C; (ix) same definitions as F^n with F = C, but scalar multiplication is restricted to the subfield R of the field C.

(c) It is not true that P_n(F) and P(F) can always be treated as subsets of F(F, F) by regarding f(x) = a_n x^n + ... + a_0, with x an independent variable in the domain F, as a function from F into F. In fact, consider the field F = {0, 1} of characteristic 2; then x = x^2 in F(F, F), but 1, x, x^2 are linearly independent in P_2(F) = span{1, x, x^2}. The correct identification of P_n(F) and P(F) is with the subspace of F(N, F) in Example 5, Sec. 1.2 of the book, by identifying f(x) = a_m x^m + ... + a_0 with the sequence {a_n} where a_n = 0 if n > m.

(d) C^n(R) is the set of all functions from R into R whose derivatives up to the nth order (with n ≥ 0) are continuous, with C^0(R) = C(R) the set of all continuous functions; in particular, this implies C^n(R) ⊆ F(R, R). Moreover, if f, g ∈ C^n(R) and a, b ∈ R, then af + bg has all its derivatives up to the nth order (with n ≥ 0) continuous, by elementary results from single-variable calculus (i.e., a sum of continuous functions is continuous, the sum of the derivatives is the derivative of the sum, etc.), so af + bg ∈ C^n(R). Hence C^n(R) is a subspace of C(R), which is a subspace of F(R, R), for all n ≥ 0.

(e) If A and B are symmetric n×n matrices over F and c ∈ F, then in particular A^t = A and B^t = B in M_{n×n}(F), and (cA + B)^t = cA^t + B^t = cA + B (since the transpose operation from M_{n×n}(F) into M_{n×n}(F) is a linear transformation), implying cA + B is a symmetric n×n matrix over F. Hence the set in (vii) is a subspace of M_{n×n}(F), as desired. This completes the proof.
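
To see the point of (c) concretely, here is a minimal computational sketch (assuming plain Python; the helper poly_func is a hypothetical illustration, not part of the exam solution): over the two-element field F = {0, 1}, the polynomials x and x^2 induce the same function on F even though they are distinct polynomials.

    # Over F = {0, 1} (arithmetic mod 2), the polynomial functions induced by
    # x and x^2 coincide, although x and x^2 are distinct polynomials
    # (distinct coefficient sequences).
    F = [0, 1]

    def poly_func(coeffs):
        """Return the function F -> F induced by the coefficient list [a_0, a_1, ...]."""
        return lambda t: sum(a * t**k for k, a in enumerate(coeffs)) % 2

    p_x  = [0, 1]      # the polynomial x
    p_x2 = [0, 0, 1]   # the polynomial x^2

    same_as_functions = all(poly_func(p_x)(t) == poly_func(p_x2)(t) for t in F)
    same_as_polynomials = (p_x == p_x2)

    print(same_as_functions)    # True:  x and x^2 give the same function on F
    print(same_as_polynomials)  # False: they are different polynomials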

2. (a) Prove the intersection W_1 ∩ W_2 of any two subspaces W_1 and W_2 of a vector space V over a field F is a subspace of V; (b) Find necessary and sufficient conditions for the union W_1 ∪ W_2 to be a subspace of V; (c) Provide a counterexample to the statement: If W_1 and W_2 are subspaces of a vector space V over a field F, then W_1 ∪ W_2 is a subspace of V over F.

Proof. (a) First, W_1 ∩ W_2 ⊆ V. Next, if a, b ∈ F and u, v ∈ W_1 ∩ W_2 then, since W_1 and W_2 are vector spaces over F containing both u and v and hence are closed under linear combinations of u and v, we must have au + bv ∈ W_1 ∩ W_2. Therefore, this proves W_1 ∩ W_2 is a subspace of V.

(b) Claim: if W_1 and W_2 are two subspaces of a vector space V over a field F, then W_1 ∪ W_2 is a subspace of V if and only if W_i = W_1 ∪ W_2 for i = 1 or i = 2.

Proof. Suppose that W_1 and W_2 are two subspaces of a vector space V over a field F. If W_i = W_1 ∪ W_2 for i = 1 or i = 2, then it is a subspace of V by the hypothesis that W_i is a subspace. Let us now prove the converse. Suppose W_1 ∪ W_2 is a subspace of V. If W_1 = W_1 ∪ W_2 or W_2 = W_1 ∪ W_2, then we are done. Thus, suppose that W_1, W_2 ≠ W_1 ∪ W_2. Then we can find w_1 ∈ W_1 \ W_2 and w_2 ∈ W_2 \ W_1. Since by hypothesis W_1 ∪ W_2 is a subspace of V, we must have w_1 + w_2 ∈ W_1 ∪ W_2, implying that w_1 + w_2 is in W_1 or W_2. But then either w_2 = (w_1 + w_2) − w_1 is in W_1 or w_1 = (w_1 + w_2) − w_2 is in W_2, contradicting that w_1 ∈ W_1 \ W_2 and w_2 ∈ W_2 \ W_1. Therefore, this contradiction proves that W_1 = W_1 ∪ W_2 or W_2 = W_1 ∪ W_2. This completes the proof.

Counterexample. (c) Let W_1 = {(a, 0) : a ∈ R} and W_2 = {(0, b) : b ∈ R}. Then these are subspaces of R^2, but W_1 ∪ W_2 = {(a, b) : a, b ∈ R and either a = 0 or b = 0} is not a subspace, since if a ∈ R and a ≠ 0 then (a, 0), (0, a) ∈ W_1 ∪ W_2 but (a, 0) + (0, a) = (a, a) ∉ W_1 ∪ W_2. Thus W_1 ∪ W_2 is not closed under the operation of addition in R^2 and therefore cannot be a subspace of R^2.
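
The counterexample in (c) can be checked mechanically. The following minimal sketch (assuming plain Python; in_union is a hypothetical helper) tests membership in W_1 ∪ W_2 by asking whether at least one coordinate vanishes, and shows the union fails closure under addition.

    # Check of 2(c): the union of the two coordinate axes in R^2
    # is not closed under addition.
    def in_union(v):
        """True iff v = (x, y) lies in W_1 ∪ W_2, i.e., x == 0 or y == 0."""
        x, y = v
        return x == 0 or y == 0

    w1 = (1.0, 0.0)          # in W_1
    w2 = (0.0, 1.0)          # in W_2
    s = (w1[0] + w2[0], w1[1] + w2[1])

    print(in_union(w1), in_union(w2))  # True True
    print(in_union(s))                 # False: (1, 1) is not in the union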

3. (a) Suppose V is a vector space over a field F and let S be a subset of V. Give the definition of: (i) the span of S; (ii) S is linearly dependent/independent; (iii) S generates V; (iv) S is a basis for V; (v) V is finite-dimensional; (vi) V is infinite-dimensional.

Solution. See the book.

4. (a) Prove, for the following examples of vector spaces, that they are finite-dimensional by finding a basis for them, and find their dimensions: (i) {0}; (ii) F^n; (iii) M_{m×n}(F); (iv) F(S, F), where S is any set with n elements; (v) P_n(F); (vi) C^n over C; (vii) C^n over R; (viii) the set of symmetric n×n matrices over a field F. (b) Prove, for the following examples of vector spaces, that they are infinite-dimensional: (iv) F(S, F), where S = {v_1, v_2, ...} is an infinite set with distinct elements v_1, v_2, ...; (v) P(F); (vi) C(R) and C^n(R).

Proof. (a): (i) A basis for {0} is the empty set ∅, and so dim({0}) = 0. (ii) A basis for F^n is the standard basis {(1, 0, ..., 0), ..., (0, 0, ..., 1)}, and since there are n vectors in this basis, dim(F^n) = n. (iii) A basis for M_{m×n}(F) is {E^{ij} : 1 ≤ i ≤ m, 1 ≤ j ≤ n}, where E^{ij} is the m×n matrix with zeros in all entries except for a 1 in the ith row, jth column entry, and so dim(M_{m×n}(F)) = mn. (iv) Let S = {x_1, ..., x_n} be a set of n distinct elements x_1, ..., x_n. Define the functions δ_i : S → F by δ_i(x) = 0 (the additive identity element of the field F) if x ≠ x_i and δ_i(x_i) = 1 (the multiplicative identity element of F). It follows that {δ_i : 1 ≤ i ≤ n} is a basis for F(S, F), and so dim(F(S, F)) = n. (v) The standard basis is {1, x, ..., x^n}, and so dim(P_n(F)) = n + 1. (vi) Same as in (ii) but with F = C. (vii) A basis is {(1, 0, ..., 0), ..., (0, 0, ..., 1)} ∪ {(i, 0, ..., 0), ..., (0, 0, ..., i)}, where i = √(−1), so dim(C^n) = 2n when C^n is regarded as a vector space over the field R. (viii) Using (iii), a basis is {E^{ij} + E^{ji} : 1 ≤ i < j ≤ n} ∪ {E^{ii} : 1 ≤ i ≤ n}, which has n(n+1)/2 elements, and this is therefore the dimension of the set of symmetric n×n matrices over a field F.

Proof (b): (iv) Use part (a)(iv) with δ_y : S → F defined for each y ∈ S by δ_y(x) = 0 if x ≠ y and δ_y(y) = 1. The set {δ_y : y ∈ S} is linearly independent in F(S, F) and is not finite since S is not. (v) P_n(F) ⊆ P(F) for all n with dim(P_n(F)) = n + 1, which implies P(F) is infinite-dimensional. (vi) The functions f_m(x) = x^m, m ≥ 1, are linearly independent in C^n(R) (for any n ≥ 0). For if Σ_{i=1}^{N} a_i f_i = 0 for some a_i ∈ R, i = 1, ..., N, then 0 = Σ_{i=1}^{N} a_i f_i(x) = Σ_{i=1}^{N} a_i x^i for all x ∈ R, implying the polynomial p(x) = Σ_{i=1}^{N} a_i x^i with real coefficients has infinitely many zeros in R. But the fundamental theorem of algebra over the field C tells us that a polynomial with coefficients in C of degree N ≥ 0 has exactly N zeros in C (counting multiplicities), hence only finitely many. So p(x) cannot have a degree N ≥ 0; p(x) must be the zero polynomial, implying a_i = 0 for i = 1, ..., N. This proves that the set of functions {f_m : m ≥ 1} is linearly independent in C^n(R), and hence C^n(R) must be infinite-dimensional. This completes the proof.

One may wish to consider problem 1(c) in conjunction with the proof of 4(b)(vi) to understand it better.
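
For (a)(viii), the count n(n+1)/2 can be sanity-checked numerically. The sketch below (assuming Python with numpy; an illustration, not part of the exam solution) builds the proposed basis for n = 4, flattens each matrix into a vector, and verifies the family is linearly independent with exactly n(n+1)/2 members.

    import numpy as np

    n = 4
    basis = []

    # Diagonal matrices E^{ii}
    for i in range(n):
        E = np.zeros((n, n))
        E[i, i] = 1.0
        basis.append(E)

    # Off-diagonal symmetric matrices E^{ij} + E^{ji}, i < j
    for i in range(n):
        for j in range(i + 1, n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            E[j, i] = 1.0
            basis.append(E)

    # Flatten each basis matrix into a row and compute the rank of the family.
    M = np.array([E.flatten() for E in basis])
    print(len(basis), n * (n + 1) // 2)            # 10 10
    print(np.linalg.matrix_rank(M) == len(basis))  # True: linearly independent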

5. Using the method of the proof of Theorem 1.9 (p. 44): Prove that the set S = {(2, −3, 5), (8, −12, 20), (1, 0, −2), (0, 2, −1), (7, 2, 0)} generates R^3. Also, find a basis for R^3 which is a subset of S.

Proof. See Example 6 in Sec. 1.6 of the book.
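
Assuming the five vectors as written above, the generating claim and the extraction of a basis subset of S can be cross-checked numerically. The sketch below (assuming Python with numpy; an illustration, not part of the exam solution) computes the rank and greedily keeps each vector that enlarges the span, mirroring the method of Theorem 1.9.

    import numpy as np

    S = [np.array(v, dtype=float) for v in
         [(2, -3, 5), (8, -12, 20), (1, 0, -2), (0, 2, -1), (7, 2, 0)]]

    # S generates R^3 iff the matrix with these rows has rank 3.
    print(np.linalg.matrix_rank(np.vstack(S)))  # 3

    # Greedily keep each vector that increases the rank (method of Theorem 1.9).
    basis = []
    for v in S:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.vstack(candidate)) == len(candidate):
            basis = candidate

    print([tuple(v) for v in basis])  # a 3-element basis of R^3 contained in S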

6. (a) State and prove the Dimension Theorem. (b) Let T_1 : P_3(R) → P_2(R) and T_2 : P_2(R) → R be the functions T_1(f(x)) = f'(x) and T_2(f(x)) = ∫_a^b f(x) dx (for b > a). Prove that these functions are: (i) well-defined; (ii) linear transformations. (c) Find a basis for R(T_j) and N(T_j), for j = 1, 2. (d) Is the map T_3 : P_2(R) → P_3(R) defined by T_3(f)(x) = ∫_a^x f(t) dt, x ∈ R, a linear transformation? (e) Is the map T_4 : P_2(R) → C(R) defined by T_4(f)(x) = ∫_a^x f(t) dt, x ∈ R, a linear transformation? If so, find a basis for R(T_4) and N(T_4).

Proof. (a) See the book.

(b): (i) Let p(x) ∈ P_3(R); then p(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0 for some a_i ∈ R, i = 0, 1, 2, 3. Then p'(x) = 3a_3 x^2 + 2a_2 x + a_1 ∈ P_2(R). Thus T_1 : P_3(R) → P_2(R) is a well-defined function. Similarly, since all the elements of P(R), treated as functions, are continuous, ∫_a^b q(x) dx is a well-defined real number for any q(x) ∈ P_2(R). Thus T_2 : P_2(R) → R is a well-defined function. (ii) That the derivative and the integral are linear is obvious from results in single-variable calculus for functions that are differentiable and continuous, respectively, such as all the elements of P(R).

(c) A basis for P_3(R) and P_2(R) is {1, x, x^2, x^3} and {1, x, x^2}, respectively. And T_1(1) = 0, T_1(x) = 1, T_1(x^2) = 2x, T_1(x^3) = 3x^2, and hence {T_1(x), T_1(x^2), T_1(x^3)} is a basis for R(T_1) and {1} is a basis for N(T_1). Similarly, T_2(1) = b − a > 0 and R is one-dimensional, so a basis for R(T_2) is {T_2(1)}, and N(T_2) = {a_2 x^2 + a_1 x + a_0 ∈ P_2(R) : (a_2/3)(b^3 − a^3) + (a_1/2)(b^2 − a^2) + a_0 (b − a) = 0}. By the Dimension Theorem we must have 3 = dim(P_2(R)) = nullity(T_2) + rank(T_2) = nullity(T_2) + 1, implying dim(N(T_2)) = nullity(T_2) = 2. Now p_1(x) = x + b_1, where b_1 = −(b^2 − a^2)/(2(b − a)), is in N(T_2), as is p_2(x) = x^2 + c_1, where c_1 = −(b^3 − a^3)/(3(b − a)). Thus, since {p_1(x), p_2(x)} is a linearly independent subset of N(T_2) and dim(N(T_2)) = 2, this implies that {p_1(x), p_2(x)} is a basis for N(T_2).

(d) Yes. (e) Yes. Now {1, x, x^2} is a basis for P_2(R) and T_4(1) = x − a, T_4(x) = (x^2 − a^2)/2, T_4(x^2) = (x^3 − a^3)/3, which are linearly independent in C(R), so {T_4(1), T_4(x), T_4(x^2)} is a basis for R(T_4), and N(T_4) = {0} has basis ∅. This completes the proof/solution.
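
The null-space basis found in (c) can be verified symbolically. The sketch below (assuming Python with sympy and generic limits a < b; an illustration, not part of the exam solution) confirms that p_1 and p_2 integrate to zero over [a, b].

    import sympy as sp

    x, a, b = sp.symbols('x a b', real=True)

    # Candidate null-space elements of T_2(f) = integral of f from a to b.
    p1 = x - (b**2 - a**2) / (2 * (b - a))
    p2 = x**2 - (b**3 - a**3) / (3 * (b - a))

    print(sp.simplify(sp.integrate(p1, (x, a, b))))  # 0
    print(sp.simplify(sp.integrate(p2, (x, a, b))))  # 0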