Honours Algebra 2, Assignment 8


Jamie Klassen and Michael Snarski, April 10, 2012

Question 1. Let $V$ be the vector space over the reals consisting of polynomials of degree at most $n-1$, and let $x_1, \dots, x_n$ be $n$ distinct real numbers.

a) Show that the functionals $\ell_1, \dots, \ell_n$ on $V$ defined by $\ell_j(p(x)) = p(x_j)$, $1 \le j \le n$, form a basis for the dual space $V^*$.

b) (Polynomial interpolation.) Given real numbers $y_1, \dots, y_n$, show that there is a unique polynomial $p$ of degree at most $n-1$ satisfying $p(x_j) = y_j$ for all $1 \le j \le n$, and that it is given by the formula

$$p(x) = p(x_1)p_1(x) + \dots + p(x_n)p_n(x),$$

where $(p_1(x), \dots, p_n(x))$ is the basis of $V$ which is dual to $(\ell_1, \dots, \ell_n)$, i.e., satisfies $(p_1^*, \dots, p_n^*) = (\ell_1, \dots, \ell_n)$.

c) Write down a formula for the basis $(p_1, \dots, p_n)$.

Solution. (a) Since we are dealing with a finite-dimensional space, $\dim V^* = \dim V = n$, and so it suffices to show that the $n$ vectors $\ell_1, \dots, \ell_n$ are linearly independent. Suppose that $\ell = \sum_{i=1}^n \alpha_i \ell_i = 0$; we need to show $\alpha_i = 0$ for all $i$, so pick any $\alpha_j$. Since $\ell$ is the zero operator, $\ell(p(x)) = 0$ for all $p(x) \in V$. Consider the polynomial $p_j(x) = \prod_{i \ne j} (x - x_i)$. We have

$$\ell(p_j(x)) = \sum_{i=1}^n \alpha_i \ell_i\Big(\prod_{k \ne j}(x - x_k)\Big) = \alpha_1 p_j(x_1) + \dots + \alpha_j p_j(x_j) + \dots + \alpha_n p_j(x_n) = \alpha_j p_j(x_j) = 0,$$

since for $i \ne j$ the product $p_j(x_i)$ contains the factor $(x_i - x_i) = 0$. Since all the numbers $x_1, \dots, x_n$ are distinct, $p_j(x_j) \ne 0$ by construction, whence it follows that $\alpha_j = 0$. Since $\alpha_j$ was arbitrary, we conclude that $\alpha_i = 0$ for $i = 1, \dots, n$, and we are done.

(b) The polynomial is given, so there is no need to prove existence as many of you did. To prove uniqueness, suppose there are two polynomials $p(x)$ and $q(x)$ of degree at most $n-1$ satisfying $p(x_j) = q(x_j) = y_j$ for $j = 1, \dots, n$. The polynomial $p(x) - q(x)$ has degree at most $n-1$ as well and has $n$ roots, so because we are over a field, $p(x) - q(x) \equiv 0$. To see that the given polynomial

$$p(x) = \sum_{i=1}^n p_i(x)\,p(x_i)$$

works, simply plug in $x_j$ and find

$$p(x_j) = \sum_{i=1}^n p_i(x_j)\,p(x_i) = p_j(x_j)\,p(x_j) = p(x_j).$$

(c) We need only write down the formula:

$$p_j(x) = \prod_{i \ne j} \frac{x - x_i}{x_j - x_i}.$$

(This is a little confusing, so write it out and you'll see why it works.)

Question 2. Proposition 7.2.1 of Eyal Goren's notes shows that there is a natural injective linear transformation from $V$ to $V^{**}$, the dual of the dual of $V$. It then shows furthermore that this map is an isomorphism when $V$ is finite-dimensional. Read the proof of Proposition 7.2.1 carefully and note the crucial role played by the finite-dimensionality assumption. (You do not need to write anything, yet...) Now, give an example to show that the natural map $V \to V^{**}$ need not be surjective in general, if $V$ is not assumed to be finite-dimensional.

Solution. Some of you discussed some facts of functional analysis, sequence spaces and continuous duals, which are cool; however, it is important to know that the duals you discuss in functional analysis are proper subspaces of the algebraic dual, consisting only of those functionals which are continuous, or have finite operator norm. On the other hand, several of you exhibited the example of $V = \mathbb{Z}_2[x]$, the vector space of polynomials with coefficients in $\mathbb{Z}_2$, proving that the natural map is not surjective by giving a countable basis for $V$ and showing that $V^*$ must have an uncountable basis. I will generalize the above approaches and show you the cool-but-not-very-often-proven fact that in infinite dimensions, the natural injection is never surjective.

The proof: the axiom of choice will rear its ugly head a few times, most notably as Hamel's basis theorem, that every vector space has a basis, or more generally that any linearly independent set can be extended to a basis. So let $\{v_i\}_{i \in I}$ be a basis for $V$ (of course $I$ is infinite by assumption), and let $B = \{v_i^*\}_{i \in I}$ be its dual set ($v_i^*(v_j) = \delta_{ij}$, extended by linearity). Extend $B$ to a basis $D$ for $V^*$.
Now define a linear functional $\psi : V^* \to F$ by $\psi(f) = 1$ for all $f \in D$, extended by linearity. Let $\varphi : V \to V^{**}$ be the natural injection, $\varphi(v)(f) = f(v)$. We will show that

$$\psi \notin \operatorname{image}(\varphi) = \Big\{ \sum_{k \in K} a_k \varphi(v_k) : a_k \in F,\ K \subseteq I \text{ a finite subset} \Big\}.$$

To see this, suppose to the contrary that $\psi = \sum_{k \in K} a_k \varphi(v_k) \in \operatorname{image}(\varphi)$ for some finite $K \subseteq I$. Since $I$ is infinite, $I \setminus K \ne \emptyset$, so let $j \in I \setminus K$ be arbitrary. Then $v_j^* \in D$ is an element of the dual set, so $\psi(v_j^*) = 1$ and $v_j^*(v_k) = 0$ for all $k \in K$, so

$$1 = \psi(v_j^*) = \sum_{k \in K} a_k \varphi(v_k)(v_j^*) = \sum_{k \in K} a_k v_j^*(v_k) = 0,$$

a contradiction. Thus $\psi \in V^{**} \setminus \operatorname{image}(\varphi)$, so $\varphi$ is not surjective.

Question 3. Let $\mathbb{H} = \mathbb{R}\langle i, j, k \rangle$ be the usual ring of quaternions. (Recall that a typical quaternion is an element of the form $z = a + bi + cj + dk$, $a, b, c, d \in \mathbb{R}$.)

Recall the trace and norm functions on $\mathbb{H}$ defined by

$$\operatorname{trace}(z) = 2a = z + \bar z, \qquad \operatorname{norm}(z) = a^2 + b^2 + c^2 + d^2 = z\bar z,$$

where $\bar z = a - bi - cj - dk$. Show that $\mathbb{H}$, equipped with the rule $\langle x, y \rangle := \operatorname{trace}(x\bar y)$, is a real inner product space.

Solution. The fact that $\mathbb{H}$ is a real vector space is essentially trivial; if $\{e_1, \dots, e_4\}$ is the standard basis for $\mathbb{R}^4$, then we can biject $\mathbb{H}$ with $\mathbb{R}^4$ by $1 \mapsto e_1$, $i \mapsto e_2$, $j \mapsto e_3$, $k \mapsto e_4$, and the vector space axioms follow immediately (strictly speaking, one should state that we are taking the formal addition and scalar multiplication given by the ring structure as our operations). To see that the given rule is an inner product, we first establish the identity $\overline{xy} = \bar y\,\bar x$. Letting $x = a + bi + cj + dk$ and $y = a' + b'i + c'j + d'k$, we have, using the identities $i^2 = j^2 = k^2 = ijk = -1$:

$$xy = aa' - bb' - cc' - dd' + (ab' + ba' + cd' - dc')i + (ac' - bd' + ca' + db')j + (ad' + bc' - cb' + da')k,$$

so

$$\overline{xy} = aa' - bb' - cc' - dd' - (ab' + ba' + cd' - dc')i - (ac' - bd' + ca' + db')j - (ad' + bc' - cb' + da')k = (a' - b'i - c'j - d'k)(a - bi - cj - dk) = \bar y\,\bar x$$

(note the reversal of the factors: expanding $\bar y\,\bar x$, not $\bar x\,\bar y$, reproduces the signs above). Furthermore,

$$\operatorname{trace}(xy) = 2(aa' - bb' - cc' - dd') = \operatorname{trace}(yx),$$

and

$$\operatorname{trace}(x + y) = 2(a + a') = 2a + 2a' = \operatorname{trace}(x) + \operatorname{trace}(y),$$

and for $\alpha \in \mathbb{R}$,

$$\operatorname{trace}(\alpha x) = 2(\alpha a) = \alpha(2a) = \alpha\operatorname{trace}(x).$$

Now we check the axioms by hand:

i. Well-definedness: just for thoroughness, the trace of a quaternion is always a real number.

ii. Symmetry: We have in general that $\bar{\bar x} = \overline{a - bi - cj - dk} = a + bi + cj + dk = x$, so

$$\langle x, y \rangle = \operatorname{trace}(x\bar y) = x\bar y + \overline{x\bar y} = x\bar y + y\bar x = \operatorname{trace}(y\bar x) = \langle y, x \rangle.$$

iii. Positive definiteness: We have $\langle x, x \rangle = \operatorname{trace}(x\bar x) = 2(a^2 + b^2 + c^2 + d^2) \ge 0$ since squares are nonnegative, with equality if and only if $a = b = c = d = 0$, i.e. if and only if $x = 0$.

iv. Bilinearity: It suffices to check linearity in the first argument since we have already established

symmetry. We have, for $\alpha, \beta \in \mathbb{R}$ and $x, y, z \in \mathbb{H}$,

$$\langle \alpha x + \beta y, z \rangle = \operatorname{trace}((\alpha x + \beta y)\bar z) = \operatorname{trace}(\alpha x\bar z + \beta y\bar z) = \alpha\operatorname{trace}(x\bar z) + \beta\operatorname{trace}(y\bar z) = \alpha\langle x, z \rangle + \beta\langle y, z \rangle,$$

as required.

Question 4. Note that $\mathbb{H}$ can also be viewed as a complex vector space by viewing the elements of the form $a + bi$ as complex numbers and using the multiplication in $\mathbb{H}$ to define the scalar multiplication. Is $\mathbb{H}$ equipped with the pairing above a Hermitian space? Explain.

Solution. Nope. In a Hermitian inner product space we require $\langle \alpha x, y \rangle = \alpha\langle x, y \rangle$ (physicists use linearity in the second argument, $\langle x, \alpha y \rangle = \alpha\langle x, y \rangle$; I've come to trust physics on all things notation). Let $x, y \in \mathbb{H}$ and $\alpha \in \mathbb{C}$. We have

$$\alpha\langle x, y \rangle = \alpha(x\bar y + y\bar x) = \alpha x\bar y + \alpha y\bar x, \qquad \langle \alpha x, y \rangle = \alpha x\bar y + \overline{\alpha x\bar y} = \alpha x\bar y + y\bar x\,\bar\alpha,$$

so unless $\bar\alpha = \alpha$ (i.e. $\alpha \in \mathbb{R}$), we will not have equality for all $x, y \in \mathbb{H}$, and so this is not a Hermitian space.

Question 5. Let $V$ be the vector space of real-valued functions on the interval $[-\pi, \pi]$ equipped with the inner product

$$\langle f, g \rangle := \int_{-\pi}^{\pi} f(t)g(t)\,dt.$$

(Marker's note: we must assume that this inner product is always well-defined, perhaps by restricting our attention to those functions which are Riemann-integrable on $[-\pi, \pi]$.) Let $W$ be the subspace of functions spanned by $f_0 = 1/\sqrt{2\pi}$, $f_j(t) := (1/\sqrt{\pi})\cos(jt)$ with $1 \le j \le N$, and $g_j(t) := (1/\sqrt{\pi})\sin(jt)$ with $1 \le j \le N$. Show that $f_0, f_1, \dots, f_N, g_1, \dots, g_N$ is an orthonormal basis for $W$. Given $f \in V$, give a formula for the coefficients of the function

$$a_0 f_0 + a_1 f_1 + \dots + a_N f_N + b_1 g_1 + \dots + b_N g_N$$

in $W$ which best approximates $f$. Compute these coefficients in the case of the function $f(x) = x$.

Solution. $f_0, f_1, \dots, f_N, g_1, \dots, g_N$ is a basis for $W$ by construction; more precisely, it spans by definition, and the functions involved will be seen to be linearly independent by showing they are mutually orthogonal. The proof that this set is orthonormal is a bit long and tedious, involving a bunch of integration tricks; it was marked somewhat generously.
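As an aside not in the original solutions, the orthonormality claim (and the coefficients for $f(x) = x$ worked out in the rest of this solution) can be sanity-checked numerically. This is a sketch using NumPy, with a midpoint Riemann sum standing in for the integral; the grid size, the choice $N = 3$, and the tolerances are arbitrary choices of mine.

```python
import numpy as np

# Midpoint Riemann sum standing in for the inner product on [-pi, pi].
M = 200_000
h = 2 * np.pi / M
t = -np.pi + (np.arange(M) + 0.5) * h

def ip(u, v):
    return np.sum(u * v) * h  # approximates the integral of u*v over [-pi, pi]

# The claimed orthonormal set f_0, f_1..f_N, g_1..g_N, here with N = 3.
N = 3
basis = [np.full_like(t, 1 / np.sqrt(2 * np.pi))]
basis += [np.cos(j * t) / np.sqrt(np.pi) for j in range(1, N + 1)]
basis += [np.sin(j * t) / np.sqrt(np.pi) for j in range(1, N + 1)]

# The Gram matrix of an orthonormal set is the identity.
gram = np.array([[ip(u, v) for v in basis] for u in basis])
print(np.allclose(gram, np.eye(len(basis)), atol=1e-6))  # True

# For f(x) = x the cosine coefficients vanish and b_j = 2 sqrt(pi) (-1)^(j+1) / j.
b = [ip(t, np.sin(j * t)) / np.sqrt(np.pi) for j in range(1, N + 1)]
expected = [2 * np.sqrt(np.pi) * (-1) ** (j + 1) / j for j in range(1, N + 1)]
print(np.allclose(b, expected, atol=1e-6))  # True
```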
If you had trouble with it (most of you did not), use the identities

$$\sin(a)\cos(b) = \tfrac12\big(\sin(a + b) + \sin(a - b)\big), \qquad \sin(a)\sin(b) = \tfrac12\big(\cos(a - b) - \cos(a + b)\big), \qquad \cos(a)\cos(b) = \tfrac12\big(\cos(a + b) + \cos(a - b)\big)$$

to ease the process. Of course, given an arbitrary $f \in V$, the orthogonal projection onto $W$ yields the vector in $W$ closest (with respect to the metric induced by our inner product norm) to $f$. This means

$$a_0 = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} f(t)\,dt, \qquad a_i = \frac{1}{\sqrt{\pi}} \int_{-\pi}^{\pi} f(t)\cos(it)\,dt, \qquad b_i = \frac{1}{\sqrt{\pi}} \int_{-\pi}^{\pi} f(t)\sin(it)\,dt, \qquad 1 \le i \le N.$$

Computing in the particular case of $f(x) = x$ gives $a_0 = a_i = 0$ for all $i$, since $x$ and $x\cos(ix)$ are odd functions, and

$$b_j = \frac{1}{\sqrt{\pi}} \int_{-\pi}^{\pi} t\sin(jt)\,dt = \frac{1}{\sqrt{\pi}} \left[\frac{\sin(jt) - jt\cos(jt)}{j^2}\right]_{t=-\pi}^{\pi} = \frac{1}{\sqrt{\pi}} \cdot \frac{2\sin(j\pi) - 2j\pi\cos(j\pi)}{j^2} = \frac{2\sqrt{\pi}\,(-1)^{j+1}}{j}.$$

Question 6. Let $V$ be the space of polynomials of degree $\le 2$ equipped with the inner product

$$\langle f, g \rangle = \int_0^1 f(t)g(t)\,dt.$$

Find an orthonormal basis for $V$ by applying the Gram-Schmidt procedure to the ordered basis $(1, x, x^2)$.

Solution. An exercise in computation, something the entire honours math stream needs more of. Let $v_1 = 1$, and note that $\langle v_1, v_1 \rangle = \int_0^1 dt = 1$, so our first vector is already normalized. Next,

$$v_2 = x - \frac{\langle x, 1 \rangle}{\langle 1, 1 \rangle}\,1 = x - \int_0^1 t\,dt = x - \tfrac12.$$

We will have to normalize $v_2$, but we do that last, as scaling it now would simply make the computation more difficult, and $\langle v_2, v_2 \rangle = \|v_2\|^2$ will come out in the next computation anyway:

$$v_3 = x^2 - \frac{\langle x^2, 1 \rangle}{\langle 1, 1 \rangle}\,1 - \frac{\langle x^2, x - \tfrac12 \rangle}{\langle x - \tfrac12, x - \tfrac12 \rangle}\,(x - \tfrac12) = x^2 - \int_0^1 t^2\,dt - \frac{\int_0^1 (t^3 - \tfrac12 t^2)\,dt}{\int_0^1 (t^2 - t + \tfrac14)\,dt}\,(x - \tfrac12) = x^2 - \tfrac13 - \frac{1/12}{1/12}\,(x - \tfrac12) = x^2 - x + \tfrac16.$$

Normalizing,

$$\frac{v_2}{\|v_2\|} = \sqrt{12}\,(x - \tfrac12) = 2\sqrt3\,x - \sqrt3, \qquad \frac{v_3}{\|v_3\|} = \sqrt{180}\,(x^2 - x + \tfrac16) = 6\sqrt5\,x^2 - 6\sqrt5\,x + \sqrt5,$$

so our orthonormal basis is $\{1,\ 2\sqrt3\,x - \sqrt3,\ 6\sqrt5\,x^2 - 6\sqrt5\,x + \sqrt5\}$.

Question 7. Let $(x_1, x_2, \dots, x_N)$ and $(y_1, \dots, y_N)$ be sequences of real numbers. Fix an integer $k$ less than $N$. In general, there need not be a polynomial $p$ of degree $\le k$ such that $p(x_j) = y_j$ for $j = 1, \dots, N$. The next-best thing one might ask for is a polynomial $p$ of degree $\le k$ for which the quantity

$$(p(x_1) - y_1)^2 + (p(x_2) - y_2)^2 + \dots + (p(x_N) - y_N)^2$$

is minimised. Describe an approach that would produce such a $p$. In the case $k = 1$, give a formula for the coefficients $a, b$ of $p(x) = ax + b$ in terms of $x_1, \dots, x_N, y_1, \dots, y_N$.

Solution. NB: many of you solved this problem with calculus methods, obtaining the correct solution; I gave full marks for this when it satisfactorily solved the problem. It's debatable whether this method is easier, but it is certainly less motivated, given the inner-product-space solution, which I present here.

Let $V$ be the vector space of polynomials of degree $\le N - 1$, and let $W$ be the subspace of polynomials of degree $\le k$. From Question 1 there is a polynomial $q \in V$ satisfying $q(x_j) = y_j$ for $j = 1, \dots, N$ (assuming, as in Question 1, that the $x_j$ are distinct). Define an inner product on $V$ by

$$\langle f, g \rangle := f(x_1)g(x_1) + f(x_2)g(x_2) + \dots + f(x_N)g(x_N).$$

Then the quantity in the question is $\|p - q\|^2 = \langle p - q, p - q \rangle$, and the $p$ minimizing this quantity can be found by using the Gram-Schmidt process to produce an orthonormal basis for $W$ and computing the orthogonal projection of $q$ onto $W$.

For $k = 1$ we start with the basis $\{1, x\}$ for $W$, and Gram-Schmidt gives

$$e_1 = \frac{1}{\sqrt N}, \qquad e_2 = \frac{x - \frac1N \sum_k x_k}{\sqrt{\sum_k x_k^2 - \frac1N \big(\sum_k x_k\big)^2}}$$

as an orthonormal basis. Projecting gives $p = \langle e_1, q \rangle e_1 + \langle e_2, q \rangle e_2$; since $q(x_j) = y_j$, collecting the coefficients of $x$ and of $1$ yields

$$a = \frac{\sum_k x_k y_k - \frac1N \big(\sum_k x_k\big)\big(\sum_k y_k\big)}{\sum_k x_k^2 - \frac1N \big(\sum_k x_k\big)^2} \qquad \text{and} \qquad b = \frac1N \sum_k y_k - a \cdot \frac1N \sum_k x_k.$$

Another nice solution to this problem comes from considering a polynomial as the vector of its coefficients, letting $A$ be the transformation which evaluates a polynomial at $x_1, \dots, x_N$, and minimizing the Euclidean distance between $(y_1, \dots, y_N)$ and $A(p)$, which is really just a reformulation of the solution above.
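The closed-form $k = 1$ answer can be checked against a generic least-squares routine. This is a sketch with made-up sample data (not from the assignment), using NumPy's `polyfit`, which minimises the same sum of squared residuals, as the reference.

```python
import numpy as np

# Made-up sample points; any distinct x_k and arbitrary y_k would do.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
N = len(x)

# The formulas derived above for p(x) = a x + b minimising sum_k (p(x_k) - y_k)^2.
a = (np.sum(x * y) - np.sum(x) * np.sum(y) / N) / (np.sum(x ** 2) - np.sum(x) ** 2 / N)
b = np.mean(y) - a * np.mean(x)

# numpy.polyfit with degree 1 solves the same least-squares problem,
# returning coefficients from highest degree down: (slope, intercept).
a_ref, b_ref = np.polyfit(x, y, 1)
print(np.allclose([a, b], [a_ref, b_ref]))  # True
```

Since the minimised quantity is a strictly convex quadratic in $(a, b)$, both routes must land on the same unique minimiser.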