Transpose & Dot Product

Transpose & Dot Product

Def: The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^T$ whose columns are the rows of $A$. So: the columns of $A^T$ are the rows of $A$, and the rows of $A^T$ are the columns of $A$.

Example: If $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$, then $A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$.

Convention: From now on, vectors $v \in \mathbb{R}^n$ will be regarded as columns (i.e., $n \times 1$ matrices). Therefore, $v^T$ is a row vector (a $1 \times n$ matrix).

Observation: Let $v, w \in \mathbb{R}^n$. Then $v^T w = v \cdot w$. This is because:
$$v^T w = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} = v_1 w_1 + \cdots + v_n w_n = v \cdot w.$$

Where theory is concerned, the key property of transposes is the following:

Prop 18.2: Let $A$ be an $m \times n$ matrix. Then for $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$:
$$(Ax) \cdot y = x \cdot (A^T y).$$
Here, $\cdot$ is the dot product of vectors.

Extended Example: Let $A$ be a $5 \times 3$ matrix, so $A \colon \mathbb{R}^3 \to \mathbb{R}^5$.
$N(A)$ is a subspace of $\mathbb{R}^3$; $C(A)$ is a subspace of $\mathbb{R}^5$.
The transpose $A^T$ is a $3 \times 5$ matrix, so $A^T \colon \mathbb{R}^5 \to \mathbb{R}^3$.
$C(A^T)$ is a subspace of $\mathbb{R}^3$; $N(A^T)$ is a subspace of $\mathbb{R}^5$.

Observation: Both $C(A^T)$ and $N(A)$ are subspaces of $\mathbb{R}^3$. Might there be a geometric relationship between the two? (No, they're not equal.) Hm...

Also: Both $N(A^T)$ and $C(A)$ are subspaces of $\mathbb{R}^5$. Might there be a geometric relationship between the two? (Again, they're not equal.) Hm...
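
As a quick sanity check (not part of the original notes), a few lines of NumPy confirm Prop 18.2 numerically for a randomly chosen $A$, $x$, $y$; the shapes and seed below are illustrative choices of mine.

```python
# Numerical check of Prop 18.2: (Ax) . y == x . (A^T y).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # A : R^3 -> R^5
x = rng.standard_normal(3)
y = rng.standard_normal(5)

lhs = np.dot(A @ x, y)            # (Ax) . y
rhs = np.dot(x, A.T @ y)          # x . (A^T y)
print(lhs, rhs, np.isclose(lhs, rhs))
```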

Orthogonal Complements

Def: Let $V \subseteq \mathbb{R}^n$ be a subspace. The orthogonal complement of $V$ is the set
$$V^\perp = \{x \in \mathbb{R}^n : x \cdot v = 0 \text{ for every } v \in V\}.$$
So, $V^\perp$ consists of the vectors which are orthogonal to every vector in $V$.

Fact: If $V \subseteq \mathbb{R}^n$ is a subspace, then $V^\perp \subseteq \mathbb{R}^n$ is a subspace.

Examples in $\mathbb{R}^3$:
The orthogonal complement of $V = \{0\}$ is $V^\perp = \mathbb{R}^3$.
The orthogonal complement of $V = \{z\text{-axis}\}$ is $V^\perp = \{xy\text{-plane}\}$.
The orthogonal complement of $V = \{xy\text{-plane}\}$ is $V^\perp = \{z\text{-axis}\}$.
The orthogonal complement of $V = \mathbb{R}^3$ is $V^\perp = \{0\}$.

Examples in $\mathbb{R}^4$:
The orthogonal complement of $V = \{0\}$ is $V^\perp = \mathbb{R}^4$.
The orthogonal complement of $V = \{w\text{-axis}\}$ is $V^\perp = \{xyz\text{-space}\}$.
The orthogonal complement of $V = \{zw\text{-plane}\}$ is $V^\perp = \{xy\text{-plane}\}$.
The orthogonal complement of $V = \{xyz\text{-space}\}$ is $V^\perp = \{w\text{-axis}\}$.
The orthogonal complement of $V = \mathbb{R}^4$ is $V^\perp = \{0\}$.

Prop 19.3-19.4-19.5: Let $V \subseteq \mathbb{R}^n$ be a subspace. Then:
(a) $\dim(V) + \dim(V^\perp) = n$
(b) $(V^\perp)^\perp = V$
(c) $V \cap V^\perp = \{0\}$
(d) $V + V^\perp = \mathbb{R}^n$.

Part (d) means: every vector $x \in \mathbb{R}^n$ can be written as a sum $x = v + w$ where $v \in V$ and $w \in V^\perp$. Also, it turns out that the expression $x = v + w$ is unique: that is, there is only one way to write $x$ as a sum of a vector in $V$ and a vector in $V^\perp$.
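
These propositions can be checked numerically. The sketch below is my own illustration (the helper name `orthogonal_complement` and the SVD-based approach are not from the notes): it computes an orthonormal basis of $V^\perp$ from a spanning set of $V$ and verifies $\dim(V) + \dim(V^\perp) = n$ for the z-axis example.

```python
# Compute a basis of V-perp as the null space of a matrix whose rows span V.
import numpy as np

def orthogonal_complement(rows_spanning_V, tol=1e-10):
    M = np.atleast_2d(np.asarray(rows_spanning_V, dtype=float))
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:]              # rows form an orthonormal basis of V-perp

# Example: V = z-axis in R^3, so V-perp should be the xy-plane.
W = orthogonal_complement([[0, 0, 1]])
print(W)                          # two orthonormal vectors with zero z-component
print(1 + W.shape[0] == 3)        # dim(V) + dim(V-perp) = n
```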

Meaning of $C(A^T)$ and $N(A^T)$

Q: What does $C(A^T)$ mean? Well, the columns of $A^T$ are the rows of $A$. So:
$$C(A^T) = \text{column space of } A^T = \text{span of columns of } A^T = \text{span of rows of } A.$$
For this reason: we call $C(A^T)$ the row space of $A$.

Q: What does $N(A^T)$ mean? Well:
$$x \in N(A^T) \iff A^T x = 0 \iff (A^T x)^T = 0^T \iff x^T A = 0^T.$$
So, for an $m \times n$ matrix $A$, we see that:
$$N(A^T) = \{x \in \mathbb{R}^m : x^T A = 0^T\}.$$
For this reason: we call $N(A^T)$ the left null space of $A$.

Relationships among the Subspaces

Theorem: Let $A$ be an $m \times n$ matrix. Then:
$$C(A^T)^\perp = N(A) \qquad\qquad N(A^T)^\perp = C(A)$$

Corollary: Let $A$ be an $m \times n$ matrix. Then:
$$C(A)^\perp = N(A^T) \qquad\qquad N(A)^\perp = C(A^T)$$

Prop 18.3: Let $A$ be an $m \times n$ matrix. Then $\operatorname{rank}(A) = \operatorname{rank}(A^T)$.

Motivating Questions for Reading

Problem 1: Let $b \in C(A)$. So, the system of equations $Ax = b$ does have solutions, possibly infinitely many. Q: What is the solution $x$ of $Ax = b$ with $\|x\|$ the smallest?

Problem 2: Let $b \notin C(A)$. So, the system of equations $Ax = b$ does not have any solutions. In other words, $Ax - b \neq 0$. Q: What is the vector $x$ that minimizes the error $\|Ax - b\|$? That is, what is the vector $x$ that comes closest to being a solution to $Ax = b$?
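
A small numerical illustration (mine, not from the notes; the `null_space` helper and the rank-2 example matrix are my own choices): every null-space vector of $A$ is orthogonal to every row of $A$, and $\operatorname{rank}(A) = \operatorname{rank}(A^T)$ as in Prop 18.3.

```python
# Null-space vectors are orthogonal to the rows of A, i.e. N(A) is inside
# C(A^T)-perp, and rank(A) == rank(A^T).
import numpy as np

def null_space(A, tol=1e-10):
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T                 # columns span N(A)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # 5x3, rank 2

N = null_space(A)
print(np.allclose(A @ N, 0))           # each row of A is orthogonal to N(A)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))    # equal ranks
```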

Orthogonal Projection

Def: Let $V \subseteq \mathbb{R}^n$ be a subspace. Then every vector $x \in \mathbb{R}^n$ can be written uniquely as $x = v + w$, where $v \in V$ and $w \in V^\perp$. The orthogonal projection onto $V$ is the function $\operatorname{Proj}_V \colon \mathbb{R}^n \to \mathbb{R}^n$ given by: $\operatorname{Proj}_V(x) = v$. (Note that $\operatorname{Proj}_{V^\perp}(x) = w$.)

Prop 20.1: Let $V \subseteq \mathbb{R}^n$ be a subspace. Then:
$$\operatorname{Proj}_V + \operatorname{Proj}_{V^\perp} = I_n.$$
Of course, we already knew this: we have $x = v + w = \operatorname{Proj}_V(x) + \operatorname{Proj}_{V^\perp}(x)$.

Formula: Let $\{v_1, \ldots, v_k\}$ be a basis of $V \subseteq \mathbb{R}^n$. Let $A$ be the $n \times k$ matrix $A = \begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix}$. Then:
$$\operatorname{Proj}_V = A(A^T A)^{-1} A^T. \qquad (*)$$

Geometry Observations: Let $V \subseteq \mathbb{R}^n$ be a subspace, and $x \in \mathbb{R}^n$ a vector.
(1) The distance from $x$ to $V$ is: $\|\operatorname{Proj}_{V^\perp}(x)\| = \|x - \operatorname{Proj}_V(x)\|$.
(2) The vector in $V$ that is closest to $x$ is: $\operatorname{Proj}_V(x)$.

Derivation of $(*)$: Notice $\operatorname{Proj}_V(x)$ is a vector in $V = \operatorname{span}(v_1, \ldots, v_k) = C(A) = \operatorname{Range}(A)$, and therefore $\operatorname{Proj}_V(x) = Ay$ for some vector $y \in \mathbb{R}^k$. Now notice that $x - \operatorname{Proj}_V(x) = x - Ay$ is a vector in $V^\perp = C(A)^\perp = N(A^T)$, which means that $A^T(x - Ay) = 0$, which means $A^T x = A^T A y$. Now, it turns out that the matrix $A^T A$ is invertible (proof in L20), so we get $y = (A^T A)^{-1} A^T x$. Thus, $\operatorname{Proj}_V(x) = Ay = A(A^T A)^{-1} A^T x$.
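
The formula $(*)$ is easy to code directly. The sketch below (my own example vectors, not from the notes) builds $P = A(A^TA)^{-1}A^T$ for a 2-dimensional subspace of $\mathbb{R}^3$ and checks the two geometry observations.

```python
# Projection onto V = span(v1, v2) in R^3 using P = A (A^T A)^{-1} A^T.
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])                 # 3x2, columns are a basis of V

P = A @ np.linalg.inv(A.T @ A) @ A.T          # the matrix of Proj_V

x = np.array([1.0, 2.0, 3.0])
proj = P @ x                                  # closest vector in V to x
residual = x - proj                           # this is Proj_{V-perp}(x)
print(np.allclose(A.T @ residual, 0))         # residual is orthogonal to V
print(np.linalg.norm(residual))               # distance from x to V
```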

Minimum Magnitude Solution

Prop 19.6: Let $b \in C(A)$ (so $Ax = b$ has solutions). Then there exists exactly one vector $x_0 \in C(A^T)$ with $Ax_0 = b$. And: among all solutions of $Ax = b$, the vector $x_0$ has the smallest length.

In other words: there is exactly one vector $x_0$ in the row space of $A$ which solves $Ax = b$, and this vector is the solution of smallest length.

To find $x_0$: start with any solution $x$ of $Ax = b$. Then $x_0 = \operatorname{Proj}_{C(A^T)}(x)$.

Least Squares Approximation

Idea: Suppose $b \notin C(A)$. So, $Ax = b$ has no solutions, so $Ax - b \neq 0$. We want to find the vector $x$ which minimizes the error $\|Ax - b\|$. That is, we want the vector $x$ for which $Ax$ is the closest vector in $C(A)$ to $b$. In other words, we want the vector $x$ for which $Ax - b$ is orthogonal to $C(A)$. So, $Ax - b \in C(A)^\perp = N(A^T)$, meaning that $A^T(Ax - b) = 0$, i.e.:
$$A^T A x = A^T b.$$
In terms of projections, this means solving $Ax = \operatorname{Proj}_{C(A)}(b)$.
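
Both motivating problems have short numerical answers; the sketch below (my own example data) solves the normal equations $A^TAx = A^Tb$ for a least-squares problem and, for an underdetermined consistent system, uses the pseudoinverse, which is one standard way to obtain the minimum-norm (row-space) solution.

```python
import numpy as np

rng = np.random.default_rng(2)

# (1) Least squares: A is 5x3, b is (almost surely) not in C(A).
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x_ls = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations A^T A x = A^T b
print(np.allclose(A.T @ (A @ x_ls - b), 0))    # residual lies in N(A^T)

# (2) Minimum-magnitude solution: B is 3x5, so Bx = c has infinitely many
# solutions; the pseudoinverse returns the one in the row space C(B^T).
B = rng.standard_normal((3, 5))
c = rng.standard_normal(3)
x0 = np.linalg.pinv(B) @ c
print(np.allclose(B @ x0, c))                  # x0 really solves Bx = c
```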

Orthonormal Bases

Def: A basis $\{w_1, \ldots, w_k\}$ for a subspace $V$ is an orthonormal basis if:
(1) The basis vectors are mutually orthogonal: $w_i \cdot w_j = 0$ (for $i \neq j$);
(2) The basis vectors are unit vectors: $w_i \cdot w_i = 1$ (i.e., $\|w_i\| = 1$).

Orthonormal bases are nice for (at least) two reasons:
(a) It is much easier to find the $B$-coordinates $[v]_B$ of a vector when the basis $B$ is orthonormal;
(b) It is much easier to find the projection matrix onto a subspace $V$ when we have an orthonormal basis for $V$.

Prop: Let $\{w_1, \ldots, w_k\}$ be an orthonormal basis for a subspace $V \subseteq \mathbb{R}^n$.
(a) Every vector $v \in V$ can be written
$$v = (v \cdot w_1)w_1 + \cdots + (v \cdot w_k)w_k.$$
(b) For all $x \in \mathbb{R}^n$:
$$\operatorname{Proj}_V(x) = (x \cdot w_1)w_1 + \cdots + (x \cdot w_k)w_k.$$
(c) Let $A$ be the matrix with columns $\{w_1, \ldots, w_k\}$. Then $A^T A = I_k$, so:
$$\operatorname{Proj}_V = A(A^T A)^{-1} A^T = AA^T.$$

Orthogonal Matrices

Def: An orthogonal matrix is an invertible matrix $C$ such that $C^{-1} = C^T$.

Example: Let $\{v_1, \ldots, v_n\}$ be an orthonormal basis for $\mathbb{R}^n$. Then the matrix $C = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}$ is an orthogonal matrix. In fact, every orthogonal matrix $C$ looks like this: the columns of any orthogonal matrix form an orthonormal basis of $\mathbb{R}^n$.

Where theory is concerned, the key property of orthogonal matrices is:

Prop 22.4: Let $C$ be an orthogonal matrix. Then for $v, w \in \mathbb{R}^n$:
$$(Cv) \cdot (Cw) = v \cdot w.$$
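
A short check of parts (b)-(c) and of Prop 22.4 (my own illustration; the choice of plane and the QR-based construction of a random orthogonal matrix are assumptions, not part of the notes):

```python
import numpy as np

# Orthonormal basis of the xy-plane in R^3.
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])
A = np.column_stack([w1, w2])
x = np.array([3.0, 4.0, 5.0])

proj1 = (x @ w1) * w1 + (x @ w2) * w2          # (x.w1)w1 + (x.w2)w2
proj2 = A @ A.T @ x                            # A A^T works since A^T A = I_k
print(np.allclose(proj1, proj2))

# An orthogonal matrix preserves dot products (Prop 22.4).
rng = np.random.default_rng(3)
C, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # C has orthonormal columns
v, w = rng.standard_normal(4), rng.standard_normal(4)
print(np.isclose((C @ v) @ (C @ w), v @ w))
```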

Gram-Schmidt Process

Since orthonormal bases have so many nice properties, it would be great if we had a way of actually manufacturing orthonormal bases. That is:

Goal: We are given a basis $\{v_1, \ldots, v_k\}$ for a subspace $V \subseteq \mathbb{R}^n$. We would like an orthonormal basis $\{w_1, \ldots, w_k\}$ for our subspace $V$.

Notation: We will let
$$V_1 = \operatorname{span}(v_1), \quad V_2 = \operatorname{span}(v_1, v_2), \quad \ldots, \quad V_k = \operatorname{span}(v_1, \ldots, v_k) = V.$$

Idea: Build an orthonormal basis for $V_1$, then for $V_2$, ..., up to $V_k = V$.

Gram-Schmidt Algorithm: Let $\{v_1, \ldots, v_k\}$ be a basis for $V \subseteq \mathbb{R}^n$.
(1) Define $w_1 = \dfrac{v_1}{\|v_1\|}$.
(2) Having defined $\{w_1, \ldots, w_j\}$, let
$$y_{j+1} = v_{j+1} - \operatorname{Proj}_{V_j}(v_{j+1}) = v_{j+1} - (v_{j+1} \cdot w_1)w_1 - (v_{j+1} \cdot w_2)w_2 - \cdots - (v_{j+1} \cdot w_j)w_j,$$
and define $w_{j+1} = \dfrac{y_{j+1}}{\|y_{j+1}\|}$.
Then $\{w_1, \ldots, w_k\}$ is an orthonormal basis for $V$.

Quadratic Forms (Intro)

Given an $m \times n$ matrix $A$, we can regard it as a linear transformation $T \colon \mathbb{R}^n \to \mathbb{R}^m$. In the special case where the matrix $A$ is a symmetric matrix, we can also regard $A$ as defining a "quadratic form":

Def: Let $A$ be a symmetric $n \times n$ matrix. The quadratic form associated to $A$ is the function $Q_A \colon \mathbb{R}^n \to \mathbb{R}$ given by:
$$Q_A(x) = x \cdot Ax = x^T A x = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} A \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$
Notice that quadratic forms are not linear transformations!
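
The algorithm translates nearly line-for-line into code. Below is a minimal sketch of classical Gram-Schmidt in NumPy (the function name and the example basis are my own; it assumes the input vectors really are linearly independent, as the notes require).

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: return an orthonormal basis for span(vectors)."""
    ws = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        y = v.copy()
        for w in ws:
            y -= (v @ w) * w              # subtract (v_{j+1} . w_i) w_i
        ws.append(y / np.linalg.norm(y))  # w_{j+1} = y_{j+1} / ||y_{j+1}||
    return ws

basis = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
W = np.column_stack(gram_schmidt(basis))
print(np.allclose(W.T @ W, np.eye(2)))    # the output columns are orthonormal
```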

Definiteness

Def: Let $Q \colon \mathbb{R}^n \to \mathbb{R}$ be a quadratic form.
We say $Q$ is positive definite if $Q(x) > 0$ for all $x \neq 0$.
We say $Q$ is positive semi-definite if $Q(x) \geq 0$ for all $x \neq 0$.
We say $Q$ is negative definite if $Q(x) < 0$ for all $x \neq 0$.
We say $Q$ is negative semi-definite if $Q(x) \leq 0$ for all $x \neq 0$.
We say $Q$ is indefinite if there are vectors $x$ for which $Q(x) > 0$, and also vectors $x$ for which $Q(x) < 0$.

Def: Let $A$ be a symmetric matrix.
We say $A$ is positive definite if $Q_A(x) = x^T A x > 0$ for all $x \neq 0$.
We say $A$ is negative definite if $Q_A(x) = x^T A x < 0$ for all $x \neq 0$.
We say $A$ is indefinite if there are vectors $x$ for which $x^T A x > 0$, and also vectors $x$ for which $x^T A x < 0$.
(Similarly for positive semi-definite and negative semi-definite.)

In other words:
$A$ is positive definite $\iff$ $Q_A$ is positive definite.
$A$ is negative definite $\iff$ $Q_A$ is negative definite.
$A$ is indefinite $\iff$ $Q_A$ is indefinite.

The Hessian

Def: Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be a function. Its Hessian at $a \in \mathbb{R}^n$ is the symmetric matrix:
$$Hf(a) = \begin{bmatrix} f_{x_1 x_1}(a) & \cdots & f_{x_1 x_n}(a) \\ \vdots & \ddots & \vdots \\ f_{x_n x_1}(a) & \cdots & f_{x_n x_n}(a) \end{bmatrix}.$$
Note that the Hessian is a symmetric matrix. Therefore, we can also regard $Hf(a)$ as a quadratic form:
$$Q_{Hf(a)}(x) = x^T\, Hf(a)\, x = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} f_{x_1 x_1}(a) & \cdots & f_{x_1 x_n}(a) \\ \vdots & \ddots & \vdots \\ f_{x_n x_1}(a) & \cdots & f_{x_n x_n}(a) \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$$
In particular, it makes sense to ask whether the Hessian is positive definite, negative definite, or indefinite.
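
One way to get a feel for these definitions is to evaluate $Q_A(x) = x^TAx$ on many sample vectors. The sketch below is my own illustration (the three example matrices and the random-sampling check are assumptions, not from the notes); the eigenvalue criterion that makes this test exact is stated later in these notes.

```python
import numpy as np

def Q(A, x):
    return x @ A @ x                              # the quadratic form x^T A x

A_pos = np.array([[2.0, 0.0], [0.0, 3.0]])        # positive definite
A_neg = -A_pos                                    # negative definite
A_ind = np.array([[1.0, 0.0], [0.0, -1.0]])       # indefinite

rng = np.random.default_rng(4)
xs = rng.standard_normal((1000, 2))               # random nonzero test vectors
for name, A in [("pos def", A_pos), ("neg def", A_neg), ("indefinite", A_ind)]:
    vals = np.array([Q(A, x) for x in xs])
    print(name, "all > 0:", vals.min() > 0,
          "all < 0:", vals.max() < 0,
          "both signs:", vals.min() < 0 < vals.max())
```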

Single-Variable Calculus Review

Recall: In calculus, you learned:
For a function $f \colon \mathbb{R} \to \mathbb{R}$, a critical point is a point $a \in \mathbb{R}$ where $f'(a) = 0$ or $f'(a)$ does not exist.
If $f(x)$ has a local min/max at $x = a$, then $x = a$ is a critical point. The converse is false: critical points don't have to be local minima or local maxima (e.g., they could be inflection points).
The "second derivative test": if $x = a$ is a critical point for $f(x)$, then $f''(a) > 0$ tells us that $x = a$ is a local min, whereas $f''(a) < 0$ tells us that $x = a$ is a local max.

It would be nice to have similar statements in higher dimensions:

Critical Points & Second Derivative Test

Def: A critical point of $f \colon \mathbb{R}^n \to \mathbb{R}$ is a point $a \in \mathbb{R}^n$ at which $Df(a) = 0^T$ or $Df(a)$ is undefined. In other words, each partial derivative $f_{x_i}(a)$ is zero or undefined.

Theorem: If $f \colon \mathbb{R}^n \to \mathbb{R}$ has a local max / local min at $a \in \mathbb{R}^n$, then $a$ is a critical point of $f$.

N.B.: The converse of this theorem is false! Critical points do not have to be a local max or local min (e.g., they could be saddle points).

Def: A saddle point of $f \colon \mathbb{R}^n \to \mathbb{R}$ is a critical point of $f$ that is not a local max or local min.

Second Derivative Test: Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be a function, and $a \in \mathbb{R}^n$ be a critical point of $f$.
(a) If $Hf(a)$ is positive definite, then $a$ is a local min of $f$.
(a') If $Hf(a)$ is positive semi-definite, then $a$ is a local min or saddle point.
(b) If $Hf(a)$ is negative definite, then $a$ is a local max of $f$.
(b') If $Hf(a)$ is negative semi-definite, then $a$ is a local max or saddle point.
(c) If $Hf(a)$ is indefinite, then $a$ is a saddle point of $f$.
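
As a concrete instance of the test (a worked example of mine, not from the notes): $f(x,y) = x^2 - y^2$ and $g(x,y) = x^2 + y^2$ both have a critical point at the origin, and the sign pattern of the Hessian's eigenvalues classifies it. The eigenvalue characterization of definiteness used here is stated later in these notes; the `classify` helper and its tolerance are my own choices.

```python
import numpy as np

def classify(H, tol=1e-10):
    """Classify a critical point from its (symmetric) Hessian H."""
    ev = np.linalg.eigvalsh(H)                 # eigenvalues, ascending
    if np.all(ev > tol):
        return "local min"                     # positive definite
    if np.all(ev < -tol):
        return "local max"                     # negative definite
    if ev.min() < -tol < tol < ev.max():
        return "saddle point"                  # indefinite
    return "semi-definite: test inconclusive on its own"

H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]])   # Hessian of x^2 - y^2 at 0
H_min    = np.array([[2.0, 0.0], [0.0,  2.0]])   # Hessian of x^2 + y^2 at 0
print(classify(H_saddle), "/", classify(H_min))  # saddle point / local min
```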

Local Extrema vs Global Extrema

Finding Local Extrema: We want to find the local extrema of a function $f \colon \mathbb{R}^n \to \mathbb{R}$.
(i) Find the critical points of $f$.
(ii) Use the Second Derivative Test to decide whether the critical points are local maxima / minima / saddle points.

Theorem: Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be a continuous function. If $R \subseteq \mathbb{R}^n$ is a closed and bounded region, then $f$ has a global max and a global min on $R$.

Finding Global Extrema: We want to find the global extrema of a function $f \colon \mathbb{R}^n \to \mathbb{R}$ on a region $R \subseteq \mathbb{R}^n$.
(1) Find the critical points of $f$ on the interior of $R$.
(2) Find the extreme values of $f$ on the boundary of $R$ (Lagrange multipliers).
Then: the largest value from Steps (1)-(2) is the global max value, and the smallest value from Steps (1)-(2) is the global min value.

Lagrange Multipliers (Constrained Optimization)

Notation: Let $f \colon \mathbb{R}^n \to \mathbb{R}^m$ be a function, and $S \subseteq \mathbb{R}^n$ be a subset. The restricted function $f|_S \colon S \to \mathbb{R}^m$ is the same exact function as $f$, but where the domain is restricted to $S$.

Theorem: Suppose we want to optimize a function $f(x_1, \ldots, x_n)$ constrained to a level set $S = \{g(x_1, \ldots, x_n) = c\}$. If $f|_S$ attains an extreme value at $a \in S$, and if $\nabla g(a) \neq 0$, then
$$\nabla f(a) = \lambda\, \nabla g(a)$$
for some constant $\lambda$.

Reason: If $f|_S$ attains an extreme value at $a$ on the level set $S$, then $D_v f(a) = 0$ for all vectors $v$ that are tangent to the level set $S$. Therefore, $\nabla f(a) \cdot v = 0$ for all vectors $v$ that are tangent to $S$. This means that $\nabla f(a)$ is orthogonal to the level set $S$, so $\nabla f(a)$ must be a scalar multiple of the normal vector $\nabla g(a)$. That is, $\nabla f(a) = \lambda\, \nabla g(a)$.
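
A tiny numerical illustration of the Lagrange condition (my own example, not from the notes): maximize $f(x,y) = x + y$ on the circle $g(x,y) = x^2 + y^2 = 1$. Sampling the circle locates the maximizer near $(1/\sqrt{2}, 1/\sqrt{2})$, and at that point $\nabla f$ is a scalar multiple of $\nabla g$.

```python
import numpy as np

grad_f = lambda p: np.array([1.0, 1.0])                 # gradient of f(x,y) = x + y
grad_g = lambda p: np.array([2.0 * p[0], 2.0 * p[1]])   # gradient of g(x,y) = x^2 + y^2

# Parametrize the constraint circle and maximize f over a fine grid of angles.
t = np.linspace(0.0, 2.0 * np.pi, 100_000)
pts = np.column_stack([np.cos(t), np.sin(t)])
a = pts[np.argmax(pts @ np.array([1.0, 1.0]))]          # approximate maximizer

lam = grad_f(a)[0] / grad_g(a)[0]                       # lambda from the first component
print(a)                                                # ~ (0.7071, 0.7071)
print(np.allclose(grad_f(a), lam * grad_g(a), atol=1e-3))   # grad f = lambda * grad g
```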

Motivation for Eigenvalues & Eigenvectors

We want to understand a quadratic form $Q_A(x)$, which might be ugly and complicated.

Idea: Maybe there's an orthonormal basis $B = \{w_1, \ldots, w_n\}$ of $\mathbb{R}^n$ that is somehow "best suited" to $A$, so that with respect to the basis $B$, the quadratic form $Q_A$ looks simple.

What do we mean by "basis suited to $A$"? And does such a basis always exist? Well:

Spectral Theorem: Let $A$ be a symmetric $n \times n$ matrix. Then there exists an orthonormal basis $B = \{w_1, \ldots, w_n\}$ of $\mathbb{R}^n$ such that each of $w_1, \ldots, w_n$ is an eigenvector of $A$. I.e.: there is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$.

Why is this good? Well, since $B$ is a basis, every $w \in \mathbb{R}^n$ can be written
$$w = u_1 w_1 + \cdots + u_n w_n.$$
(That is: the $B$-coordinates of $w$ are $(u_1, \ldots, u_n)$.) It then turns out that:
$$Q_A(w) = Q_A(u_1 w_1 + \cdots + u_n w_n) = (u_1 w_1 + \cdots + u_n w_n) \cdot A(u_1 w_1 + \cdots + u_n w_n) = \lambda_1 (u_1)^2 + \lambda_2 (u_2)^2 + \cdots + \lambda_n (u_n)^2. \quad \text{(yay!)}$$
In other words: the quadratic form $Q_A$ is in "diagonal form" with respect to the basis $B$. We have made $Q_A$ look as simple as possible! Also: the coefficients $\lambda_1, \ldots, \lambda_n$ are exactly the eigenvalues of $A$.

Corollary: Let $A$ be a symmetric $n \times n$ matrix, with eigenvalues $\lambda_1, \ldots, \lambda_n$.
(a) $A$ is positive-definite $\iff$ all of $\lambda_1, \ldots, \lambda_n$ are positive.
(b) $A$ is negative-definite $\iff$ all of $\lambda_1, \ldots, \lambda_n$ are negative.
(c) $A$ is indefinite $\iff$ there is a positive eigenvalue $\lambda_i > 0$ and a negative eigenvalue $\lambda_j < 0$.

Useful Fact: Let $A$ be any $n \times n$ matrix, with eigenvalues $\lambda_1, \ldots, \lambda_n$. Then:
$$\operatorname{tr}(A) = \lambda_1 + \cdots + \lambda_n \qquad\qquad \det(A) = \lambda_1 \lambda_2 \cdots \lambda_n.$$
Cor: If any one of the eigenvalues is zero ($\lambda_j = 0$), then $\det(A) = 0$.
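
NumPy's `eigh` realizes exactly this picture for symmetric matrices. The check below is my own illustration (the example matrix and test vector are assumptions): `eigh` returns the eigenvalues and an orthogonal matrix of eigenvectors, in those coordinates $Q_A$ becomes $\sum_i \lambda_i u_i^2$, and the trace/determinant facts can be verified at the same time.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                        # symmetric
lams, W = np.linalg.eigh(A)                       # columns of W: orthonormal eigenvectors

print(np.allclose(W.T @ W, np.eye(2)))            # W is an orthogonal matrix
print(np.allclose(A @ W, W * lams))               # A w_i = lambda_i w_i

x = np.array([0.3, -1.2])
u = W.T @ x                                       # B-coordinates of x
print(np.isclose(x @ A @ x, np.sum(lams * u**2))) # Q_A(x) = sum lambda_i u_i^2

print(np.isclose(np.trace(A), lams.sum()))        # tr(A) = sum of eigenvalues
print(np.isclose(np.linalg.det(A), lams.prod()))  # det(A) = product of eigenvalues
```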

For Fun: What is a Closed Ball? What is a Sphere?

The closed 1-ball (the "interval") is $B^1 = \{x \in \mathbb{R} : x^2 \leq 1\} = [-1, 1] \subseteq \mathbb{R}$.
The closed 2-ball (the "disk") is $B^2 = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 \leq 1\} \subseteq \mathbb{R}^2$.
The closed 3-ball (the "ball") is $B^3 = \{(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 \leq 1\}$.
The 1-sphere (the "circle") is $S^1 = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1\} \subseteq \mathbb{R}^2$.
The 2-sphere (the "sphere") is $S^2 = \{(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1\} \subseteq \mathbb{R}^3$.
The 3-sphere is $S^3 = \{(x, y, z, w) \in \mathbb{R}^4 : x^2 + y^2 + z^2 + w^2 = 1\} \subseteq \mathbb{R}^4$.

The closed $n$-ball $B^n$ is the set
$$B^n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : (x_1)^2 + \cdots + (x_n)^2 \leq 1\} = \{x \in \mathbb{R}^n : \|x\| \leq 1\} \subseteq \mathbb{R}^n.$$
The $(n-1)$-sphere $S^{n-1}$ is the boundary of $B^n$: it is the set
$$S^{n-1} = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : (x_1)^2 + \cdots + (x_n)^2 = 1\} = \{x \in \mathbb{R}^n : \|x\| = 1\} \subseteq \mathbb{R}^n.$$
In other words, $S^{n-1}$ consists of the unit vectors in $\mathbb{R}^n$.

Optimizing Quadratic Forms on Spheres

Problem: Optimize a quadratic form $Q_A \colon \mathbb{R}^n \to \mathbb{R}$ on the sphere $S^{n-1} \subseteq \mathbb{R}^n$. That is, what are the maxima and minima of $Q_A(w)$ subject to the constraint that $\|w\| = 1$?

Solution: Let $\lambda_{\max}$ and $\lambda_{\min}$ be the largest and smallest eigenvalues of $A$.
The maximum value of $Q_A$ for unit vectors is $\lambda_{\max}$. Any unit vector $w_{\max}$ which attains this maximum is an eigenvector of $A$ with eigenvalue $\lambda_{\max}$.
The minimum value of $Q_A$ for unit vectors is $\lambda_{\min}$. Any unit vector $w_{\min}$ which attains this minimum is an eigenvector of $A$ with eigenvalue $\lambda_{\min}$.

Corollary: Let $A$ be a symmetric $n \times n$ matrix.
(a) $A$ is positive-definite $\iff$ the minimum value of $Q_A$ restricted to unit vector inputs is positive (i.e., iff $\lambda_{\min} > 0$).
(b) $A$ is negative-definite $\iff$ the maximum value of $Q_A$ restricted to unit vector inputs is negative (i.e., iff $\lambda_{\max} < 0$).
(c) $A$ is indefinite $\iff$ $\lambda_{\max} > 0$ and $\lambda_{\min} < 0$.
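
A quick empirical confirmation (my own sketch; the example matrix and sample size are assumptions): sampling many random unit vectors, the largest and smallest observed values of $Q_A$ hug $\lambda_{\max}$ and $\lambda_{\min}$, and `eigh` gives an optimizing direction exactly.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, -1.0]])                     # symmetric, indefinite
lams, W = np.linalg.eigh(A)                     # eigenvalues in ascending order

rng = np.random.default_rng(5)
u = rng.standard_normal((200_000, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # random points on the circle S^1
vals = np.einsum("ij,jk,ik->i", u, A, u)        # Q_A(u) for each sample

print(vals.min(), lams[0])                      # sampled min  ~  lambda_min
print(vals.max(), lams[-1])                     # sampled max  ~  lambda_max
print(W[:, -1])                                 # a unit vector attaining lambda_max
```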

Directional First & Second Derivatives

Def: Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be a function, $a \in \mathbb{R}^n$ be a point. The directional derivative of $f$ at $a$ in the direction $v$ is:
$$D_v f(a) = \nabla f(a) \cdot v.$$
The directional second derivative of $f$ at $a$ in the direction $v$ is:
$$Q_{Hf(a)}(v) = v^T\, Hf(a)\, v.$$

Q: What direction $v$ increases the directional derivative the most? What direction $v$ decreases the directional derivative the most?
A: We've learned this: the gradient $\nabla f(a)$ is the direction of greatest increase, whereas $-\nabla f(a)$ is the direction of greatest decrease.

New Questions: What direction $v$ increases the directional second derivative the most? What direction $v$ decreases the directional second derivative the most?
Answer: The (unit) directions of minimum and maximum second derivative are (unitized) eigenvectors of $Hf(a)$, and so they are mutually orthogonal. The max/min values of the directional second derivative are the min/max eigenvalues of $Hf(a)$.
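
Tying the last two pages together, here is a small example of mine (the function $f(x,y) = x^2 + 3y^2$ and its constant Hessian are assumptions, not from the notes): the unit directions of smallest and largest second derivative are eigenvectors of the Hessian, with the eigenvalues as the extreme values.

```python
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, 6.0]])                      # Hessian of f(x,y) = x^2 + 3y^2
lams, W = np.linalg.eigh(H)

d2 = lambda v: v @ H @ v                        # directional second derivative Q_H(v)
print(d2(W[:, 0]), lams[0])                     # smallest value 2, along (1, 0)
print(d2(W[:, 1]), lams[-1])                    # largest value 6, along (0, 1)
print(np.isclose(W[:, 0] @ W[:, 1], 0.0))       # the two directions are orthogonal
```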