Best approximation in the 2-norm. Department of Mathematical Sciences, NTNU. September 26th, 2012.
Vector space
A real vector space V is a set with a zero element 0 and three operations:
Addition: if x, y ∈ V then x + y ∈ V;
Inverse: for all x ∈ V there is −x ∈ V such that x + (−x) = 0;
Scalar multiplication: if λ ∈ R and x ∈ V then λx ∈ V;
satisfying the axioms
1. x + y = y + x
2. (x + y) + z = x + (y + z)
3. 0 + x = x + 0 = x
4. 0 · x = 0
5. 1 · x = x
6. (λµ)x = λ(µx)
7. λ(x + y) = λx + λy
8. (λ + µ)x = λx + µx
Inner product spaces: 9.2
Definition of inner product space
Let V be a linear space over R. A function ⟨·, ·⟩ : V × V → R is called an inner product on V if:
1. ⟨f + g, h⟩ = ⟨f, h⟩ + ⟨g, h⟩ for all f, g, h in V;
2. ⟨λf, g⟩ = λ⟨f, g⟩ for all f, g in V and λ ∈ R;
3. ⟨f, g⟩ = ⟨g, f⟩ for all f, g in V;
4. ⟨f, f⟩ > 0 if f ≠ 0, f ∈ V.
A linear space with an inner product is an inner product space.
Example
The n-dimensional Euclidean space Rⁿ is an inner product space with
⟨u, v⟩ = Σᵢ₌₁ⁿ uᵢvᵢ, u, v ∈ Rⁿ,
where u = [u₁, ..., uₙ]ᵀ and v = [v₁, ..., vₙ]ᵀ.
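As a quick numerical illustration of the Euclidean inner product and its induced norm, a minimal NumPy sketch (the vectors u, v are arbitrary choices, not from the slides):

```python
import numpy as np

# Euclidean inner product on R^n: <u, v> = sum_i u_i v_i
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

ip = float(np.dot(u, v))          # 1*4 + 2*(-1) + 3*2 = 8
norm_u = np.sqrt(np.dot(u, u))    # induced norm ||u|| = <u, u>^(1/2) = sqrt(14)

print(ip, norm_u)
```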
Inner product spaces: orthogonality and norm
Orthogonality
If in an inner product space V we have ⟨f, g⟩ = 0, then we say that f is orthogonal to g.
Induced norm
In an inner product space we can define the norm ‖f‖ := ⟨f, f⟩^{1/2}; this is called the norm induced by the inner product.
Theorem
An inner product space V over R with the induced norm defined above is a normed space.
Proof. We need to prove that the induced norm satisfies the axioms of a norm; the proof makes use of the Cauchy–Schwarz inequality.
Lemma: Cauchy–Schwarz inequality
|⟨f, g⟩| ≤ ‖f‖ ‖g‖ for all f, g ∈ V.
Proof
Similar to the one seen in Chapter 2 for the inner product in Rⁿ.
Using the lemma we prove the triangle inequality for the induced norm:
‖f + g‖² = ⟨f + g, f + g⟩ = ‖f‖² + 2⟨f, g⟩ + ‖g‖²;
from Cauchy–Schwarz we get
‖f + g‖² ≤ ‖f‖² + 2‖f‖ ‖g‖ + ‖g‖² = (‖f‖ + ‖g‖)²,
so ‖f + g‖ ≤ ‖f‖ + ‖g‖.
The other two axioms of a norm are proved directly from the definition of the induced norm.
Space of continuous functions
Example (9.3)
C[a, b] (continuous functions defined on the closed interval [a, b]) is an inner product space with the inner product
⟨f, g⟩ = ∫ₐᵇ w(x) f(x) g(x) dx,
where w is a weight function defined, positive, continuous and integrable on (a, b). The induced norm is
‖f‖₂ = ( ∫ₐᵇ w(x) |f(x)|² dx )^{1/2}.
This norm is called the 2-norm on C[a, b].
Space of square integrable functions: L²
We do not need f to be continuous for ‖f‖₂ to be finite; it is sufficient that f²w is integrable on [a, b].
Example
L²_w(a, b) := {f : (a, b) → R | w(x)|f(x)|² is integrable on (a, b)}.
This is an inner product space with the inner product
⟨f, g⟩ = ∫ₐᵇ w(x) f(x) g(x) dx,
where w is a weight function defined, positive, continuous and integrable on (a, b). The induced norm is
‖f‖₂ = ( ∫ₐᵇ w(x) |f(x)|² dx )^{1/2}.
This norm is called the L²-norm.
Note: C[a, b] ⊂ L²_w(a, b). In the case w ≡ 1 we write L²(a, b).
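The weighted inner product can be approximated numerically; a minimal sketch with w ≡ 1 on (0, 1), using Gauss–Legendre quadrature (the choice of 20 nodes is arbitrary and exact for polynomial integrands of degree up to 39):

```python
import numpy as np

# <f, g> = integral over (0,1) of w(x) f(x) g(x) dx with w == 1,
# approximated by Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(20)
x = 0.5 * (nodes + 1.0)   # quadrature nodes on [0, 1]
w = 0.5 * weights         # rescaled quadrature weights

def inner(f, g):
    return np.sum(w * f(x) * g(x))

# ||x||_2 = (integral of x^2 over (0,1))^(1/2) = 1/sqrt(3)
print(np.sqrt(inner(lambda t: t, lambda t: t)))
```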
Best approximation in the 2-norm: ch 9.3
The problem of best approximation in the 2-norm can be formulated as follows:
Best approximation problem in the 2-norm
Given f ∈ L²_w(a, b), find pₙ ∈ Πₙ such that
‖f − pₙ‖₂ = inf_{q ∈ Πₙ} ‖f − q‖₂;
such a pₙ is called a polynomial of best approximation of degree n to the function f in the 2-norm on (a, b).
Theorem 9.2
Let f ∈ L²_w(a, b); there exists a unique polynomial pₙ ∈ Πₙ such that
‖f − pₙ‖₂ = min_{q ∈ Πₙ} ‖f − q‖₂.
Example: best approximation in the 2-norm vs the ∞-norm
Let ε > 0 be fixed and
f(x) = 1 − e^{−x/ε}, x ∈ [0, 1];
find p₀ of best approximation in the 2-norm and p₀ of best approximation in the ∞-norm.
In general, to find pₙ(x) = c₀ + c₁x + ··· + cₙxⁿ of best approximation in the 2-norm on [0, 1]:
Solve M c = b, where M is the Hilbert matrix:
M_{j,k} = ∫₀¹ x^{k+j} dx = 1/(k + j + 1), j, k = 0, ..., n,
and b = [b₀, ..., bₙ]ᵀ with
b_j = ∫₀¹ f(x) x^j dx.
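This linear system is easy to set up and solve numerically; a sketch for the boundary-layer function f(x) = 1 − e^{−x/ε} from the earlier example (the values n = 2 and ε = 0.1 are hypothetical choices, not fixed by the slides):

```python
import numpy as np

n = 2        # degree of the approximating polynomial (example choice)
eps = 0.1    # example value of eps in f(x) = 1 - exp(-x/eps)

# Hilbert matrix M[j, k] = 1 / (j + k + 1)
j, k = np.indices((n + 1, n + 1))
M = 1.0 / (j + k + 1)

# b_j = integral over (0,1) of f(x) x^j dx, via Gauss-Legendre quadrature
nodes, weights = np.polynomial.legendre.leggauss(30)
x = 0.5 * (nodes + 1.0)
w = 0.5 * weights
f = 1.0 - np.exp(-x / eps)
b = np.array([np.sum(w * f * x**jj) for jj in range(n + 1)])

c = np.linalg.solve(M, b)   # coefficients of p_n(x) = c_0 + c_1 x + ... + c_n x^n
print(c)
```

Note that the Hilbert matrix is notoriously ill-conditioned as n grows, which is exactly the motivation for switching to an orthogonal basis below.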
Alternatively, write pₙ(x) = γ₀ϕ₀(x) + γ₁ϕ₁(x) + ··· + γₙϕₙ(x) for the best approximation in the 2-norm on [a, b], with
Πₙ = span{1, x, ..., xⁿ} = span{ϕ₀, ϕ₁, ..., ϕₙ},
and find γ₀, ..., γₙ minimizing
E(γ₀, ..., γₙ) := ‖f − Σⱼ₌₀ⁿ γⱼϕⱼ‖₂².
This leads to M γ = b, where γ = [γ₀, ..., γₙ]ᵀ, M is the matrix
M_{j,k} = ⟨ϕₖ, ϕⱼ⟩, j, k = 0, ..., n,
and b = [b₀, ..., bₙ]ᵀ with bⱼ = ⟨f, ϕⱼ⟩. We'd like M to be diagonal (i.e. ⟨ϕₖ, ϕⱼ⟩ = 0 for k ≠ j).
Def 9.4: Orthogonal polynomials
Given a weight function w on (a, b), we say that the sequence of polynomials ϕⱼ, j = 0, 1, ..., is a system of orthogonal polynomials on (a, b) with respect to w if each ϕⱼ is of exact degree j and ⟨ϕⱼ, ϕₖ⟩ = 0 whenever j ≠ k.
To construct a system of orthogonal polynomials we can use a Gram–Schmidt process.
Gram–Schmidt orthogonalization
Let ϕ₀ ≡ 1 and assume ϕⱼ, j = 0, ..., n (n ≥ 0), are already orthogonal, so
⟨ϕₖ, ϕⱼ⟩ = ∫ₐᵇ w(x)ϕₖ(x)ϕⱼ(x) dx = 0, k, j ∈ {0, ..., n}, k ≠ j.
Consider
q(x) = x^{n+1} − a₀ϕ₀(x) − ··· − aₙϕₙ(x) ∈ Π_{n+1},
and take
aⱼ = ⟨x^{n+1}, ϕⱼ⟩ / ⟨ϕⱼ, ϕⱼ⟩, j = 0, ..., n.
THEN
⟨q, ϕⱼ⟩ = ⟨x^{n+1}, ϕⱼ⟩ − aⱼ⟨ϕⱼ, ϕⱼ⟩ = 0,
and we can take
ϕ_{n+1} = α q,
with α a scaling constant.
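The process above can be sketched in a few lines for (a, b) = (0, 1) and w ≡ 1, working with exact polynomial coefficients so the inner products are computed without quadrature error (the helper names `inner` and `gram_schmidt` are my own):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Polynomials are coefficient arrays [c_0, c_1, ...] for c_0 + c_1 x + ...
def inner(p, q):
    # <p, q> = integral over (0,1) of p(x) q(x) dx, computed exactly
    antider = P.polyint(P.polymul(p, q))
    return P.polyval(1.0, antider) - P.polyval(0.0, antider)

def gram_schmidt(n):
    phis = [np.array([1.0])]                 # phi_0 = 1
    for m in range(1, n + 1):
        q = np.zeros(m + 1)
        q[m] = 1.0                           # start from x^m
        for phi in phis:                     # subtract projections onto earlier phis
            a = inner(q, phi) / inner(phi, phi)
            q = P.polysub(q, a * phi)
        phis.append(q)
    return phis

phis = gram_schmidt(2)
print(phis[1], phis[2])   # x - 1/2  and  x^2 - x + 1/6
```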
Example (9.6 Legendre polynomials)
(a, b) = (−1, 1), w(x) ≡ 1:
ϕ₀(x) ≡ 1, ϕ₁(x) = x, ϕ₂(x) = (3/2)x² − 1/2, ϕ₃(x) = (5/2)x³ − (3/2)x.
Example (Chebyshev polynomials)
(a, b) = (−1, 1), w(x) = (1 − x²)^{−1/2}:
Tₙ(x) = cos(n arccos(x)), n = 0, 1, ....
Characterization of the best approximation in the 2-norm
Theorem 9.3
pₙ ∈ Πₙ is such that ‖f − pₙ‖₂ = min_{q ∈ Πₙ} ‖f − q‖₂, f ∈ L²_w(a, b), if and only if
⟨f − pₙ, q⟩ = 0 for all q ∈ Πₙ.
Remark: this theorem relates best approximation to orthogonality.
Example
On (0, 1) with w(x) ≡ 1, Gram–Schmidt orthogonalization gives
ϕ₀ = 1, ϕ₁(x) = x − 1/2, ϕ₂(x) = x² − x + 1/6.
So for
f(x) = 1 − e^{−x/ε}, x ∈ [0, 1],
the best approximation p₂ is
p₂(x) = γ₀ + γ₁ϕ₁(x) + γ₂ϕ₂(x),
with
γ₀ = ∫₀¹ (1 − e^{−x/ε}) dx,
γ₁ = ∫₀¹ (1 − e^{−x/ε})(x − 1/2) dx ⁄ ∫₀¹ (x − 1/2)² dx,
γ₂ = ∫₀¹ (1 − e^{−x/ε})(x² − x + 1/6) dx ⁄ ∫₀¹ (x² − x + 1/6)² dx.
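With the orthogonal basis the coefficients decouple, so each γⱼ is a single quotient of integrals; a numerical sketch (ε = 0.1 is a hypothetical value, since the slides keep ε symbolic):

```python
import numpy as np

eps = 0.1   # example value; the slides keep eps > 0 symbolic

# Gauss-Legendre quadrature on [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(40)
x = 0.5 * (nodes + 1.0)
w = 0.5 * weights

f = 1.0 - np.exp(-x / eps)
phis = [np.ones_like(x), x - 0.5, x**2 - x + 1.0 / 6.0]

# gamma_j = <f, phi_j> / <phi_j, phi_j>: the system M gamma = b is diagonal
gammas = [np.sum(w * f * p) / np.sum(w * p * p) for p in phis]
p2 = sum(g * p for g, p in zip(gammas, phis))   # values of p_2 at the nodes

# Theorem 9.3: the residual f - p_2 is orthogonal to each basis polynomial
resid = max(abs(np.sum(w * (f - p2) * p)) for p in phis)
print(gammas, resid)
```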