EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications, Lecture 2

Andre Tkacenko
Signal Processing Research Group, Jet Propulsion Laboratory
April 5, 2012

Outline
1. Linear Algebra / Matrix Analysis Notation & Definitions
2. Basic Vector Space Results
3. Overview of Matrix Analysis Concepts
4. Special Types of Matrices
5. Inner Products & Norms

Linear Algebra / Matrix Analysis Notation & Definitions
Terminology List

Common fields and sets:
- R: field of real scalars
- C: field of complex scalars
- F: general field (will be either R or C here)
- R^n, C^n, F^n: set of n × 1 vectors over R, C, or F, respectively
- R^{m×n}, C^{m×n}, F^{m×n}: set of m × n matrices over R, C, or F, respectively

Special vector/matrix sets:
- R^n_+, R^n_{++}: set of n × 1 real vectors whose components are nonnegative or positive, respectively
- S^n: set of n × n real symmetric matrices
- H^n: set of n × n Hermitian matrices
- S^n_+, S^n_{++}: set of n × n real symmetric positive semidefinite or positive definite matrices, respectively
- H^n_+, H^n_{++}: set of n × n Hermitian positive semidefinite or positive definite matrices, respectively

Linear Algebra / Matrix Analysis Notation & Definitions
Terminology List (Continued)

Vector space quantities:
- dim(V): dimension of vector space V
- R(A): range space of matrix A
- N(A): null space of matrix A
- rank(A): rank of matrix A (i.e., dim(R(A)))
- nullity(A): nullity of matrix A (i.e., dim(N(A)))

Common matrix operators and quantities:
- conj(·): complex conjugate operator (i.e., conj(a) or conj(A))
- (·)^T: transpose operator (i.e., a^T or A^T)
- (·)^*: complex conjugate transpose operator (i.e., a^* or A^*)
- det(A): determinant of square matrix A
- A^{-1}: inverse of square matrix A (if it exists)
- diag(A): column vector formed from the diagonal components of matrix A
- diag(a): diagonal matrix formed from the components of vector a
- tr(A): trace of matrix A (i.e., the sum of its diagonal components)

Linear Algebra / Matrix Analysis Notation & Definitions
Terminology List (Continued)

Inner product / norm quantities:
- ⟨x, y⟩: inner product of x and y (x and y can be either vectors or matrices)
- ‖a‖, ‖A‖: norm of vector a or matrix A, respectively
- ‖a‖_p: l_p-norm of vector a
- ‖A‖_{a,b}: operator norm of A induced by the vector norms ‖·‖_a and ‖·‖_b
- ‖A‖_F: Frobenius norm of matrix A
- ‖z‖_*: dual norm associated with ‖z‖ (z can be either a vector or a matrix)

Useful vectors & matrices and miscellaneous terminology:
- 1: column vector of all ones
- 0_{m×n}: m × n matrix of zeros
- I_n: n × n identity matrix
- [a]_k: k-th element of vector a
- [A]_{k,l}: (k, l)-th element of matrix A

Basic Vector Space Results
Linear Dependence and Independence

A set of n vectors {v_1, v_2, ..., v_n} from a vector space V is said to be linearly dependent if and only if there are n scalars a_1, a_2, ..., a_n, not all zero, such that

  Σ_{k=1}^n a_k v_k = 0.

In other words, there is at least one vector, say v_l, which depends linearly on the other vectors, i.e.,

  v_l = -(1/a_l) Σ_{k≠l} a_k v_k.

The vectors v_1, v_2, ..., v_n are said to be linearly independent if they are not linearly dependent. Equivalently, the set {v_1, v_2, ..., v_n} is linearly independent when

  Σ_{k=1}^n a_k v_k = 0 if and only if a_1 = a_2 = ... = a_n = 0.
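Linear dependence can be checked numerically by stacking the vectors as columns and comparing the rank of the resulting matrix to the number of vectors; the set is dependent exactly when the rank is smaller. A minimal NumPy sketch (the example vectors are chosen arbitrarily):

    import numpy as np

    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([4.0, 5.0, 6.0])
    v3 = v1 + v2                        # deliberately dependent on v1 and v2

    V = np.column_stack([v1, v2, v3])   # vectors as columns
    print(np.linalg.matrix_rank(V))     # 2 < 3, so {v1, v2, v3} is linearly dependent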

Basic Vector Space Results
Span, Basis, & Dimension

Let S = {v_1, ..., v_n} denote a subset of vectors from a vector space V defined over F. The span of S (denoted span(S)) is the set of all linear combinations of elements of S, i.e.,

  span(S) = {c_1 v_1 + ... + c_n v_n : c_1, ..., c_n ∈ F}.

If S is a linearly independent set of vectors which spans V (meaning span(S) = V), then S is said to be a basis for V. Any vector w ∈ V can be represented in terms of a basis S in one and only one way. As such, a basis effectively defines a coordinate system for V.

A vector space V can be characterized by several different bases (bases are nonunique). However, all bases for a given vector space have the same number of elements. This common number is called the dimension of the vector space V and is denoted dim(V).

Basic Vector Space Results
Range & Null Space, Rank & Nullity

Let A ∈ F^{m×n}. Then, the range space R(A) and null space N(A) are defined as follows:

  R(A) = {y = Ax : x ∈ F^n},
  N(A) = {x ∈ F^n : Ax = 0}.

The rank and nullity of A are the dimensions of the range and null spaces, respectively, i.e.,

  rank(A) = dim(R(A)),  nullity(A) = dim(N(A)).

Equality of row and column ranks: rank(A) = rank(A^T).
Rank-nullity theorem: rank(A) + nullity(A) = n.
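Both quantities are straightforward to compute numerically; a small sketch (the example matrix is arbitrary):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])     # second row is twice the first: rank 1
    rank = np.linalg.matrix_rank(A)
    nullity = A.shape[1] - rank         # rank-nullity: rank(A) + nullity(A) = n
    print(rank, nullity)                # 1 2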

Basic Vector Space Results
Systems of Linear Equations

Let A ∈ F^{m×n}, x ∈ F^n, and b ∈ F^m. Consider the following system of linear equations, which we would like to solve for x:

  Ax = b.

This system can behave in any one of three possible ways:
1. It can have a single unique solution. (b ∈ R(A), nullity(A) = 0)
2. It can have infinitely many solutions. (b ∈ R(A), nullity(A) > 0)
3. It can have no solution. (b ∉ R(A))

If a solution exists (i.e., b ∈ R(A)), then the set of solutions can be characterized as follows. Let p be a particular solution to Ax = b, i.e., Ap = b. Then, the solution set is given by

  {p + v : Av = 0},

that is, the solution set is a translation of the solution set of the homogeneous system Ax = 0.
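Numerically, the three cases can be distinguished by comparing rank(A) with the rank of the augmented matrix [A b]: a solution exists exactly when the two ranks agree (i.e., b ∈ R(A)). A sketch (matrices chosen for illustration):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [2.0, 2.0]])
    b = np.array([1.0, 3.0])            # inconsistent: would need b[1] = 2*b[0]

    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

    if rank_Ab > rank_A:
        print("no solution")            # b is outside R(A)
    elif rank_A == A.shape[1]:
        print("unique solution")        # nullity(A) = 0
    else:
        print("infinitely many solutions")   # nullity(A) > 0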

Overview of Matrix Analysis Concepts
Matrix Arithmetic Operations

Addition and scalar multiplication: If α, β ∈ F, A ∈ F^{m×n}, and B ∈ F^{m×n}, then C = αA + βB is such that C ∈ F^{m×n} with

  [C]_{k,l} = α[A]_{k,l} + β[B]_{k,l}.

Matrix multiplication: If A ∈ F^{m×n} and B ∈ F^{n×p}, then C = AB is such that C ∈ F^{m×p} with

  [C]_{k,l} = Σ_{i=1}^n [A]_{k,i} [B]_{i,l},  1 ≤ k ≤ m, 1 ≤ l ≤ p.

Block matrix multiplication: If A ∈ F^{m_1×n_1}, B ∈ F^{m_1×n_2}, C ∈ F^{m_2×n_1}, D ∈ F^{m_2×n_2}, E ∈ F^{n_1×p_1}, F ∈ F^{n_1×p_2}, G ∈ F^{n_2×p_1}, and H ∈ F^{n_2×p_2}, then we have

  [ A  B ] [ E  F ]   [ AE + BG  AF + BH ]
  [ C  D ] [ G  H ] = [ CE + DG  CF + DH ].
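The block rule is easy to verify numerically with np.block; a sketch with arbitrary (but conformable) block sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((2, 3)), rng.standard_normal((2, 4))
    C, D = rng.standard_normal((5, 3)), rng.standard_normal((5, 4))
    E, F = rng.standard_normal((3, 6)), rng.standard_normal((3, 1))
    G, H = rng.standard_normal((4, 6)), rng.standard_normal((4, 1))

    lhs = np.block([[A, B], [C, D]]) @ np.block([[E, F], [G, H]])
    rhs = np.block([[A @ E + B @ G, A @ F + B @ H],
                    [C @ E + D @ G, C @ F + D @ H]])
    print(np.allclose(lhs, rhs))        # True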

Overview of Matrix Analysis Concepts
Determinant of a Square Matrix

The determinant of an n × n square matrix A, denoted det(A), is a scalar quantity used to help construct the inverse of A (if it exists), calculate the eigenvalues of A, and determine the volume of the parallelepiped spanned by the columns of A (via its absolute value). It can be determined recursively as

  det(A) = Σ_{k=1}^n (-1)^{k+l} [A]_{k,l} M_{k,l} = Σ_{k=1}^n (-1)^{k+l} [A]_{l,k} M_{l,k},

where l is a fixed integer with 1 ≤ l ≤ n and M_{k,l} is the minor of [A]_{k,l}, which is the determinant of the (n-1) × (n-1) submatrix formed by deleting the k-th row and l-th column of A. (The determinant of a scalar is the scalar itself.)

Properties:
- If A and B are n × n and C = AB, then det(C) = det(A) det(B).
- If A, B, C, and D are m × m, m × n, n × m, and n × n, respectively, then
  det([A, 0_{m×n}; C, D]) = det([A, B; 0_{n×m}, D]) = det(A) det(D).
- If A is an n × n triangular matrix (either upper or lower), then det(A) = Π_{k=1}^n [A]_{k,k}.
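A direct (though O(n!), so only for small matrices) transcription of the cofactor expansion, cross-checked against np.linalg.det; a sketch:

    import numpy as np

    def det_cofactor(A):
        # Cofactor expansion along the first column (l = 1 in the formula above).
        n = A.shape[0]
        if n == 1:
            return A[0, 0]              # the determinant of a scalar is the scalar
        total = 0.0
        for k in range(n):
            minor = np.delete(np.delete(A, k, axis=0), 0, axis=1)
            total += (-1) ** k * A[k, 0] * det_cofactor(minor)
        return total

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(det_cofactor(A), np.linalg.det(A))   # 8.0 and 8.0 (up to roundoff)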

Overview of Matrix Analysis Concepts
Matrix Inverse

The inverse of an n × n square matrix A, denoted A^{-1}, is one for which

  A A^{-1} = A^{-1} A = I_n.

If such a matrix exists, A is said to be invertible. Otherwise, A is said to be singular. It can be shown that A is invertible if and only if det(A) ≠ 0. In this case,

  [A^{-1}]_{k,l} = (1/det(A)) (-1)^{l+k} M_{l,k},  1 ≤ k ≤ n, 1 ≤ l ≤ n,

where M_{l,k} is the minor of [A]_{l,k}. The quantity C_{l,k} = (-1)^{l+k} M_{l,k} is the cofactor of [A]_{l,k}.

Matrix Inversion Lemma: If A, B, C, and D are m × m, m × n, n × m, and n × n, respectively, and A and D are nonsingular, then

  (A - B D^{-1} C)^{-1} = A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1}.

This lemma can be proved by considering the inverse of the block matrix M given by

  M = [ A  B ]
      [ C  D ].

In this case, the matrix S_{D;M} = A - B D^{-1} C is called the Schur complement of D in M. Similarly, S_{A;M} = D - C A^{-1} B is the Schur complement of A in M.
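The lemma is easy to spot-check numerically; a minimal sketch with random (almost surely nonsingular) blocks:

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 4, 3
    A = rng.standard_normal((m, m)) + m * np.eye(m)   # diagonal loading keeps A well conditioned
    D = rng.standard_normal((n, n)) + n * np.eye(n)
    B = rng.standard_normal((m, n))
    C = rng.standard_normal((n, m))

    Ainv = np.linalg.inv(A)
    lhs = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)
    rhs = Ainv + Ainv @ B @ np.linalg.inv(D - C @ Ainv @ B) @ C @ Ainv
    print(np.allclose(lhs, rhs))                      # True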

Overview of Matrix Analysis Concepts
Eigenvalues & Eigenvectors

For an n × n square matrix A, an n × 1 nonzero vector v such that

  Av = λv

is said to be an eigenvector of A with eigenvalue λ. The eigenvalues of A can be obtained as the roots of its characteristic polynomial p(λ), given by

  p(λ) = det(λI_n - A).

Note that there are exactly n eigenvalues (counting multiplicity).

Properties of eigenvalues & eigenvectors:
- If {λ_1, ..., λ_n} denotes the set of eigenvalues of A, then it can be shown that
  det(A) = Π_{k=1}^n λ_k,  tr(A) = Σ_{k=1}^n λ_k.
- The eigenvalues of a triangular matrix (either upper or lower) are its diagonal elements.
- Suppose A is n × n with n linearly independent eigenvectors v_1, ..., v_n corresponding to eigenvalues λ_1, ..., λ_n. If V = [v_1 ... v_n] and Λ = diag(λ_1, ..., λ_n), then we have A = VΛV^{-1}, and we say that A is diagonalizable (illustrated below).
- If the eigenvalues of a matrix A are distinct, then A is diagonalizable. Otherwise, A may or may not be diagonalizable.
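A numerical illustration of the eigendecomposition and the determinant/trace identities; a sketch with a random (hence, almost surely diagonalizable) matrix:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4))

    lam, V = np.linalg.eig(A)           # columns of V are eigenvectors
    print(np.allclose(A, V @ np.diag(lam) @ np.linalg.inv(V)))   # A = V Λ V^{-1}
    print(np.isclose(np.linalg.det(A), np.prod(lam)))            # det(A) = product of the λ_k
    print(np.isclose(np.trace(A), np.sum(lam)))                  # tr(A) = sum of the λ_k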

Overview of Matrix Analysis Concepts
Useful Miscellaneous Identities

Conjugate, transpose, and conjugate transpose: Suppose that α and β are scalars, that U is k × l, V is k × l, X is m × n, and Y is n × p, and that A is m_1 × n_1, B is m_1 × n_2, C is m_2 × n_1, and D is m_2 × n_2. Then we have

  conj(αU + βV) = conj(α) conj(U) + conj(β) conj(V),  conj(XY) = conj(X) conj(Y),
  conj([A, B; C, D]) = [conj(A), conj(B); conj(C), conj(D)],
  (αU + βV)^T = αU^T + βV^T,  (XY)^T = Y^T X^T,  [A, B; C, D]^T = [A^T, C^T; B^T, D^T],
  (αU + βV)^* = conj(α)U^* + conj(β)V^*,  (XY)^* = Y^* X^*,  [A, B; C, D]^* = [A^*, C^*; B^*, D^*].

Trace: Suppose that α and β are scalars, that U is k × k, V is k × k, X is m × n, and Y is n × m, and that A is m × m, B is m × n, C is n × m, and D is n × n. Then we have

  tr(αU + βV) = α tr(U) + β tr(V),  tr(XY) = tr(YX),  tr([A, B; C, D]) = tr(A) + tr(D).

Inverse: Suppose that c is a nonzero scalar and that A and B are invertible n × n matrices. Then we have

  (cA)^{-1} = (1/c) A^{-1},  (A^{-1})^{-1} = A,  (AB)^{-1} = B^{-1} A^{-1},
  (A^T)^{-1} = (A^{-1})^T,  (A^*)^{-1} = (A^{-1})^*,  det(A^{-1}) = (det(A))^{-1}.
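A few of these identities spot-checked numerically; a sketch:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
    Y = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

    print(np.allclose((X @ Y).conj().T, Y.conj().T @ X.conj().T))   # (XY)^* = Y^* X^*
    print(np.isclose(np.trace(X @ Y), np.trace(Y @ X)))             # tr(XY) = tr(YX)

    A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
    print(np.allclose(np.linalg.inv(A @ B),
                      np.linalg.inv(B) @ np.linalg.inv(A)))         # (AB)^{-1} = B^{-1} A^{-1}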

Special Types of Matrices
Unitary and Normal Matrices

Unitary matrices: A matrix U ∈ C^{m×n} (with m ≥ n) is said to be unitary if U^* U = I_n. Similarly, if U ∈ R^{m×n}, then U is unitary if U^T U = I_n. If m = n, then we have

  U^* U = U U^* = I_m  (U ∈ C^{m×m}),
  U^T U = U U^T = I_m  (U ∈ R^{m×m}).

Normal matrices: A matrix A ∈ C^{n×n} is said to be normal if A^* A = A A^*. Similarly, if A ∈ R^{n×n}, then A is normal if A^T A = A A^T. It can be shown that A is normal if and only if it is diagonalizable by a unitary matrix. More specifically, A is normal if and only if

  A = U Λ U^*  (A ∈ C^{n×n}),
  A = U Λ U^T  (A ∈ R^{n×n} with real eigenvalues).

Here, U is unitary and Λ = diag(λ_1, ..., λ_n) is a diagonal matrix of eigenvalues of A.
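A quick numerical illustration with a circulant matrix, which is normal but not symmetric; a sketch:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [3.0, 1.0, 2.0],
                  [2.0, 3.0, 1.0]])         # circulant, hence normal
    print(np.allclose(A.T @ A, A @ A.T))    # True: A^T A = A A^T

    lam, U = np.linalg.eig(A)               # distinct eigenvalues of a normal matrix
                                            # give orthonormal eigenvectors
    print(np.allclose(U.conj().T @ U, np.eye(3)))           # U is (numerically) unitary
    print(np.allclose(A, U @ np.diag(lam) @ U.conj().T))    # A = U Λ U^*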

Special Types of Matrices
Symmetric and Hermitian Matrices

Symmetric matrices: An n × n matrix A is said to be symmetric if A = A^T. The set of n × n real symmetric matrices is a special one and is denoted S^n. Any real symmetric matrix A is normal and, as such, is unitarily diagonalizable, i.e., admits a decomposition of the form A = UΛU^T as described above.

Hermitian matrices: An n × n matrix A is said to be Hermitian if A = A^*. The set of n × n Hermitian matrices is a special one and is denoted H^n. As any real symmetric matrix is also Hermitian (yet there are Hermitian matrices which are not real symmetric), it follows that S^n ⊂ H^n. Any Hermitian matrix is normal and, as such, is unitarily diagonalizable, i.e., admits a decomposition of the form A = UΛU^* as described above.

Properties of real symmetric and Hermitian matrices:
- The eigenvalues of any real symmetric or Hermitian matrix are always real.
- If A ∈ S^n, v ∈ R^n, B ∈ H^n, and w ∈ C^n, then v^T A v ∈ R and w^* B w ∈ R.
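For symmetric/Hermitian matrices, np.linalg.eigh exploits the structure, returning real eigenvalues and an orthonormal eigenbasis; a sketch:

    import numpy as np

    rng = np.random.default_rng(4)
    M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    B = (M + M.conj().T) / 2                # Hermitian by construction

    lam, U = np.linalg.eigh(B)
    print(lam.dtype)                        # float64: the eigenvalues are real
    print(np.allclose(B, U @ np.diag(lam) @ U.conj().T))   # B = U Λ U^*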

Special Types of Matrices
Positive Semidefinite / Positive Definite Matrices

Positive semidefinite matrices: An n × n real symmetric matrix A is said to be positive semidefinite if v^T A v ≥ 0 for all v ∈ R^n. Similarly, an n × n Hermitian matrix B is said to be positive semidefinite if w^* B w ≥ 0 for all w ∈ C^n. In either case, we write A ⪰ 0 and B ⪰ 0. The sets of positive semidefinite real symmetric and Hermitian matrices are special ones and are denoted S^n_+ and H^n_+, respectively.

Positive definite matrices: An n × n real symmetric matrix A is said to be positive definite if v^T A v > 0 for all v ∈ R^n, v ≠ 0. Similarly, an n × n Hermitian matrix B is said to be positive definite if w^* B w > 0 for all w ∈ C^n, w ≠ 0. In either case, we write A ≻ 0 and B ≻ 0. The sets of positive definite real symmetric and Hermitian matrices are special ones and are denoted S^n_{++} and H^n_{++}, respectively.

Special Types of Matrices
Properties of Positive Semidefinite / Definite Matrices

- Definiteness of diagonal elements and eigenvalues: The diagonal entries of a positive semidefinite (definite) matrix are always nonnegative (positive). A real symmetric or Hermitian matrix is positive semidefinite (definite) if and only if all of its eigenvalues are nonnegative (positive).
- Partial ordering: For arbitrary square matrices A and B, we write A ⪰ B if (A - B) ⪰ 0 and A ≻ B if (A - B) ≻ 0. This defines a partial ordering on the set of all square matrices.
- Matrix square roots: If A ∈ S^n_+ with ρ_A = rank(A), and B ∈ H^n_+ with ρ_B = rank(B), then there exist P ∈ R^{ρ_A×n} and Q ∈ C^{ρ_B×n} such that
  A = P^T P,  B = Q^* Q.
- Cholesky decomposition: If A ∈ S^n_{++} and B ∈ H^n_{++}, then there exist lower triangular matrices L_A ∈ R^{n×n} and L_B ∈ C^{n×n} with strictly positive diagonal entries such that
  A = L_A L_A^T,  B = L_B L_B^*
  (see the sketch after this list).
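Numerically, positive definiteness is usually checked via the eigenvalues or, more cheaply, by attempting a Cholesky factorization; a sketch:

    import numpy as np

    rng = np.random.default_rng(5)
    M = rng.standard_normal((4, 4))
    A = M.T @ M + 1e-3 * np.eye(4)             # Gram matrix plus a small ridge: A is positive definite

    print(np.all(np.linalg.eigvalsh(A) > 0))   # True: all eigenvalues are positive

    L = np.linalg.cholesky(A)                  # lower triangular with positive diagonal
    print(np.allclose(A, L @ L.T))             # A = L L^T
    # np.linalg.cholesky raises LinAlgError if its argument is not positive definite.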

Inner Products & Norms
Inner Products

One way to measure the correlation or coherence between two vectors or matrices is through the use of an inner product. An inner product ⟨x, y⟩ maps two vectors or matrices x and y (defined over a field F) to the underlying field F and satisfies the following properties:
- Conjugate symmetry: ⟨x, y⟩ = conj(⟨y, x⟩).
- Linearity in the first argument: ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩ for all α, β ∈ F.
- Positive definiteness: ⟨x, x⟩ ≥ 0, with equality if and only if x = 0.

Common inner products:
- ⟨x, y⟩ = y^T x for x, y ∈ R^n (standard inner product on R^n)
- ⟨x, y⟩ = y^* x for x, y ∈ C^n (standard inner product on C^n)
- ⟨x, y⟩ = y^* P x for x, y ∈ C^n and P ∈ H^n_{++}
- ⟨X, Y⟩ = tr(Y^T X) for X, Y ∈ R^{m×n} (standard inner product on R^{m×n})
- ⟨X, Y⟩ = tr(Y^* X) for X, Y ∈ C^{m×n} (standard inner product on C^{m×n})
- ⟨X, Y⟩ = tr(Y^* P X) for X, Y ∈ C^{m×n} and P ∈ H^m_{++} (P must be m × m for the product to conform)

Inner Products & Norms
Norms: Definition and Examples of Vector Norms

One way to measure the length of a vector or matrix, in some sense, is through the use of a norm. A norm ‖x‖ maps a vector or matrix x (defined over a field F) to R_+ and satisfies the following:
- Positive definiteness: ‖x‖ ≥ 0, with equality if and only if x = 0.
- Homogeneity: ‖αx‖ = |α| ‖x‖ for all α ∈ F.
- Triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y.

Common vector norms (for all examples considered here, we assume x ∈ F^n and x_k = [x]_k):
- Euclidean norm (l_2-norm): ‖x‖_2 = sqrt(Σ_{k=1}^n |x_k|^2) = sqrt(x^* x).
- Chebyshev norm (l_∞-norm): ‖x‖_∞ = max{|x_1|, ..., |x_n|}.
- l_p-norm: ‖x‖_p = (Σ_{k=1}^n |x_k|^p)^{1/p} (valid only for p ≥ 1).
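All of these are available through np.linalg.norm via its ord argument; a sketch:

    import numpy as np

    x = np.array([3.0, -4.0, 0.0])
    print(np.linalg.norm(x, 2))         # 5.0 (Euclidean)
    print(np.linalg.norm(x, np.inf))    # 4.0 (Chebyshev: max |x_k|)
    print(np.linalg.norm(x, 1))         # 7.0 (l_1)
    print(np.linalg.norm(x, 3))         # 91**(1/3), approximately 4.498 (l_3)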

Inner Products & Norms
Matrix Norms: Operator and Entrywise Norms

There are conventionally three varieties of matrix norms: operator norms, entrywise norms, and Schatten norms (which will be discussed at another time).

Operator norms: If ‖·‖_a and ‖·‖_b are vector norms defined over F^m and F^n, respectively, then the operator norm of X ∈ F^{m×n} induced by ‖·‖_a and ‖·‖_b is defined as

  ‖X‖_{a,b} = sup{‖Xu‖_a : ‖u‖_b ≤ 1}.

Example: When both ‖·‖_a and ‖·‖_b are the Euclidean norm, the resulting operator norm is called the spectral norm or the l_2-norm and is given by

  ‖X‖_2 = sqrt(λ_max(X^* X)),

where λ_max(X^* X) denotes the maximum eigenvalue of X^* X.

Entrywise norms: If we treat the matrix X ∈ F^{m×n} as a vector of size mn and apply a familiar vector norm, we obtain an entrywise norm. Assuming X_{k,l} = [X]_{k,l}, some examples are as follows:
- Frobenius norm: ‖X‖_F = sqrt(Σ_{k=1}^m Σ_{l=1}^n |X_{k,l}|^2) = sqrt(tr(X^* X)).
- l_p-norm: ‖X‖_p = (Σ_{k=1}^m Σ_{l=1}^n |X_{k,l}|^p)^{1/p}.
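Cross-checking the spectral and Frobenius norms against their definitions; a sketch:

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.standard_normal((3, 5))

    spec = np.linalg.norm(X, 2)                  # spectral (operator) norm
    print(np.isclose(spec**2,
                     np.max(np.linalg.eigvalsh(X.T @ X))))   # equals λ_max(X^T X)

    fro = np.linalg.norm(X, 'fro')               # Frobenius (entrywise l_2) norm
    print(np.isclose(fro**2, np.trace(X.T @ X))) # equals tr(X^T X)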

Inner Products & Norms
The Dual Norm

The concept of duality occurs frequently throughout the study of convex optimization. One way in which duality manifests itself is through the dual norm. If ‖z‖ is some norm for either a vector or matrix z, the associated dual norm ‖z‖_* is defined as follows:

  ‖z‖_* = sup{Re[⟨x, z⟩] : ‖x‖ ≤ 1}.

The dual norm can be expressed in the following equivalent forms, which are more convenient for analysis:

  ‖z‖_* = sup{|⟨x, z⟩| : ‖x‖ = 1} = sup{|⟨x, z⟩| / ‖x‖ : x ≠ 0}.

Properties:
- The dual norm ‖z‖_* is indeed a norm.
- The primal norm and dual norm satisfy the inequality |⟨x, y⟩| ≤ ‖x‖ ‖y‖_* for all x, y.
- The dual of the dual norm, denoted ‖z‖_{**}, is the original norm ‖z‖, i.e., ‖z‖_{**} = ‖z‖.
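A numerical illustration of l_∞/l_1 duality: the supremum of ⟨x, z⟩ over the unit l_∞ ball is attained at x = sign(z) and equals ‖z‖_1; a sketch:

    import numpy as np

    z = np.array([2.0, -3.0, 1.0])

    # sup{<x, z> : ||x||_inf <= 1} is attained at x = sign(z) and equals ||z||_1.
    x_star = np.sign(z)
    print(x_star @ z, np.linalg.norm(z, 1))      # 6.0 6.0

    # Generalized Cauchy-Schwarz: |<x, z>| <= ||x||_inf * ||z||_1 for any x.
    rng = np.random.default_rng(7)
    x = rng.standard_normal(3)
    print(abs(x @ z) <= np.linalg.norm(x, np.inf) * np.linalg.norm(z, 1))   # True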

Inner Products & Norms
Examples of Dual Norms

To calculate the dual norm ‖z‖_*, typically an upper bound on ⟨x, z⟩ is computed and x is chosen so as to achieve the upper bound, if possible. Using this approach leads to the following examples of dual norms:
- The Euclidean norm ‖z‖_2 = sqrt(Σ_{k=1}^n |z_k|^2) is its own dual.
- The dual of the l_1-norm ‖z‖_1 = Σ_{k=1}^n |z_k| is the l_∞-norm ‖z‖_∞ = max{|z_1|, ..., |z_n|}.
- The dual of the l_p-norm ‖z‖_p = (Σ_{k=1}^n |z_k|^p)^{1/p} is the l_q-norm ‖z‖_q = (Σ_{k=1}^n |z_k|^q)^{1/q}, where q = p/(p-1).
- The dual of the spectral norm ‖Z‖_2 = sqrt(λ_max(Z^* Z)) is the nuclear norm tr(sqrt(Z^* Z)), i.e., the sum of the singular values of Z.