Vector spaces. EE 387, Notes 8, Handout #12


Vector spaces

A vector space V of vectors over a field F of scalars is a set with a binary operator + on V and a scalar-vector product · satisfying these axioms:
  1. (V, +) is a commutative group
  2. Associative law: (a_1 a_2) · v = a_1 · (a_2 · v)
  3. Distributive laws: a · (v_1 + v_2) = a · v_1 + a · v_2, (a_1 + a_2) · v = a_1 · v + a_2 · v
  4. Unitary law: 1 · v = v for every vector v in V

Recall definition of field: a set F with associative and commutative operators + and · that satisfy the following axioms:
  1. (F, +) is a commutative group
  2. (F − {0}, ·) is a commutative group
  3. Distributive law: a · (b + c) = (a · b) + (a · c)

A field is a vector space over itself of dimension 1. In general, vector spaces do not have vector-vector multiplication. An algebra is a vector space with an associative, distributive multiplication; e.g., the n × n matrices over F.

EE 387, October 7, 2015 Notes 8, Page 1
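
As a concrete check (a sketch added here, not part of the original notes), the axioms can be verified exhaustively for the small space V = GF(2)^3 over F = GF(2), using nothing beyond mod-2 arithmetic:

```python
# Illustrative sketch (not from the notes): exhaustively verify the
# vector-space axioms for V = GF(2)^3 over F = GF(2).
from itertools import product

F = (0, 1)                       # scalars of GF(2)
V = list(product(F, repeat=3))   # all 3-tuples over GF(2)

def vadd(u, v):                  # vector addition, componentwise mod 2
    return tuple((a + b) % 2 for a, b in zip(u, v))

def smul(a, v):                  # scalar-vector product
    return tuple((a * x) % 2 for x in v)

# Axiom 1: (V, +) is a commutative group; in GF(2)^3 each vector is its own inverse.
assert all(vadd(u, v) in V and vadd(u, v) == vadd(v, u) for u in V for v in V)
assert all(vadd(v, v) == (0, 0, 0) for v in V)
# Axiom 2 (associative), axiom 3 (distributive), axiom 4 (unitary).
assert all(smul((a * b) % 2, v) == smul(a, smul(b, v)) for a in F for b in F for v in V)
assert all(smul(a, vadd(u, v)) == vadd(smul(a, u), smul(a, v)) for a in F for u in V for v in V)
assert all(smul(1, v) == v for v in V)
```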

Span, basis, linear independence

Definition: The span of a set of vectors {v_1, ..., v_n} is the set of all linear combinations of those vectors: {a_1 v_1 + ··· + a_n v_n : a_1, ..., a_n ∈ F}.

Definition: A set of vectors {v_1, ..., v_n} is linearly independent (LI) if no vector in the set is a linear combination of the other vectors; equivalently, there does not exist a set of scalars {a_1, ..., a_n}, not all zero, such that a_1 v_1 + a_2 v_2 + ··· + a_n v_n = 0.

Definition: A basis for a vector space is a LI set that spans the vector space.

Definition: A vector space is finite dimensional if it has a finite basis.

Lemma: The number of vectors in a linearly independent set is at most the number of vectors in a spanning set.

Theorem: All bases for a finite-dimensional vector space have the same number of elements. Thus the dimension of the vector space is well defined.
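
The LI definition can be tested directly by brute force over GF(2); this sketch (illustrative, not from the notes; `is_independent_gf2` is a hypothetical helper) tries every nonzero choice of scalars:

```python
# Illustrative sketch (not from the notes): test linear independence over
# GF(2) straight from the definition -- a set is dependent iff some nonzero
# choice of scalars makes the combination sum to the zero vector.
from itertools import product

def is_independent_gf2(vectors):
    n = len(vectors[0])
    for coeffs in product((0, 1), repeat=len(vectors)):
        if not any(coeffs):
            continue                      # skip the trivial all-zero combination
        combo = tuple(sum(a * v[i] for a, v in zip(coeffs, vectors)) % 2
                      for i in range(n))
        if combo == (0,) * n:
            return False                  # nontrivial dependence found
    return True

assert is_independent_gf2([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert not is_independent_gf2([(1, 1, 0), (0, 1, 1), (1, 0, 1)])  # rows sum to 0
```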

Vector coordinates

Theorem: Every vector can be expressed as a linear combination of basis vectors in exactly one way.

Proof: Suppose that {v_1, ..., v_n} is a basis and a_1 v_1 + ··· + a_n v_n = a'_1 v_1 + ··· + a'_n v_n are two possibly different representations of the same vector. Then (a_1 − a'_1) v_1 + ··· + (a_n − a'_n) v_n = 0. By linear independence of the basis, a_1 − a'_1 = 0, ..., a_n − a'_n = 0  ⇒  a_1 = a'_1, ..., a_n = a'_n.

We say that V is isomorphic to the vector space F^n = {(a_1, ..., a_n) : a_i ∈ F} with componentwise addition and scalar multiplication.

Theorem: If V is a vector space of dimension n over a field with q elements, then the number of elements of V is q^n.

Proof: There are q^n n-tuples of scalars from F.
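
The counting argument can be illustrated numerically (a sketch, not from the notes): the span of a k-element basis over GF(q) contains exactly q^k distinct vectors, one per k-tuple of coordinates.

```python
# Illustrative sketch (not from the notes): a space of dimension k over GF(q)
# has exactly q^k vectors. Here q = 3, k = 2, inside GF(3)^3.
from itertools import product

q = 3
basis = [(1, 0, 2), (0, 1, 1)]   # two linearly independent vectors over GF(3)
span = {tuple(sum(a * v[i] for a, v in zip(coeffs, basis)) % q for i in range(3))
        for coeffs in product(range(q), repeat=len(basis))}
assert len(span) == q ** len(basis)   # 3^2 = 9 distinct vectors: coordinates are unique
```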

Inner product and orthogonal complement

Definition: A vector subspace of a vector space V is a subset W that is a vector space, i.e., closed under vector addition and scalar multiplication.

Theorem: If W is a subspace of V then dim W ≤ dim V.

Proof: Any basis {w_1, ..., w_k} for subspace W can be extended to a basis of V by adding n − k vectors {v_{k+1}, ..., v_n}. Therefore dim W ≤ dim V.

Definition: The inner product of two n-tuples over a field F is (a_1, ..., a_n) · (b_1, ..., b_n) = a_1 b_1 + ··· + a_n b_n.

Definition: Two n-tuples are orthogonal if their inner product is zero.

Definition: The orthogonal complement of a subspace W of n-tuples is the set W⊥ of vectors orthogonal to every vector in W; that is, v ∈ W⊥ if and only if v · w = 0 for every w ∈ W.

Theorem: (Dimension Theorem) If dim W = k then dim W⊥ = n − k.

Theorem: If W is a subspace of finite-dimensional V, then (W⊥)⊥ = W. In infinite-dimensional vector spaces, W may be a proper subset of (W⊥)⊥.
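
A brute-force check of the Dimension Theorem over GF(2) (an illustrative sketch, not from the notes). Note the finite-field wrinkle: unlike in real inner-product spaces, a subspace can intersect, or even equal, its orthogonal complement.

```python
# Illustrative sketch (not from the notes): brute-force the orthogonal
# complement of a subspace W of GF(2)^4; check dim = n - k and (W-perp)-perp = W.
from itertools import product

n = 4
gens = [(1, 1, 0, 0), (0, 0, 1, 1)]    # k = 2 generators of W
W = {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2 for i in range(n))
     for coeffs in product((0, 1), repeat=len(gens))}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

W_perp = {v for v in product((0, 1), repeat=n) if all(dot(v, w) == 0 for w in W)}
assert len(W_perp) == 2 ** (n - len(gens))   # |W_perp| = 2^(n-k), so dim W_perp = n - k

W_perp_perp = {v for v in product((0, 1), repeat=n)
               if all(dot(v, u) == 0 for u in W_perp)}
assert W_perp_perp == W                      # (W-perp)-perp = W
# Curiosity: here W_perp equals W itself -- possible over a finite field,
# impossible for a nonzero subspace of R^n.
assert W_perp == W
```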

Linear transformations and matrix multiplication

Definition: A linear transformation on a vector space V is a function T that maps vectors in V to vectors in V such that for all vectors v_1, v_2 and scalars a_1, a_2, T(a_1 v_1 + a_2 v_2) = a_1 T(v_1) + a_2 T(v_2).

Every linear transformation T can be described by an n × n matrix M. The coordinates of T(v_1, ..., v_n) are the components of the vector-matrix product (v_1, ..., v_n) M.

Each component of w = vM is the inner product of v with a column of M:
  w_j = v_1 m_{1j} + ··· + v_n m_{nj} = Σ_{i=1}^{n} v_i m_{ij}.

Here is another way to look at w = vM. If m_i is the i-th row of M, then
  (v_1, ..., v_n) M = v_1 m_1 + ··· + v_n m_n.

Thus vM is a linear combination of the rows of M weighted by the coordinates of v. As v ranges over n-tuples, vM ranges over the subspace spanned by the rows of M.
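
Both views of w = vM can be compared numerically; this small GF(2) sketch (not from the notes) computes w componentwise and as a weighted sum of rows:

```python
# Illustrative sketch (not from the notes): compute w = vM over GF(2) two ways:
# componentwise inner products with columns, and as a weighted sum of rows.
M = [(1, 0, 1),
     (1, 1, 0),
     (0, 1, 1)]
v = (1, 0, 1)

# Column view: w_j = sum_i v_i m_ij
w_cols = tuple(sum(v[i] * M[i][j] for i in range(3)) % 2 for j in range(3))

# Row view: vM = v_1 m_1 + v_2 m_2 + v_3 m_3
w_rows = (0, 0, 0)
for vi, row in zip(v, M):
    scaled = tuple((vi * x) % 2 for x in row)
    w_rows = tuple((a + b) % 2 for a, b in zip(w_rows, scaled))

assert w_cols == w_rows == (1, 1, 0)   # rows 1 and 3 of M, added mod 2
```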

Matrix rank

Definition: A square matrix is nonsingular if it has a multiplicative inverse.

Definition: The determinant of A = (a_ij) is
  det A = Σ_σ (−1)^σ a_{1σ(1)} ··· a_{nσ(n)},
where the sum is over all permutations σ of {1, ..., n} and (−1)^σ is the sign of σ.

Theorem: A matrix is nonsingular if and only if its determinant is nonzero.

Definition: The rank of a matrix is the size of the largest nonsingular square submatrix.

The rank of a matrix can be computed efficiently:
  1. Use Gaussian elimination to transform the matrix to row-reduced echelon form.
  2. The rank is the number of nonzero rows of the row-reduced echelon matrix.

Theorem: The rank of a matrix is the dimension of the row space, which equals the dimension of the column space. Therefore the number of linearly independent rows is the same as the number of linearly independent columns.
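
The two-step rank procedure can be sketched for GF(2) matrices (illustrative Python, not from the notes; `rank_gf2` is a hypothetical helper name):

```python
# Illustrative sketch (not from the notes): rank over GF(2) by Gaussian
# elimination -- count the nonzero rows of the row-reduced echelon form.
def rank_gf2(rows):
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # clear this column in every other row (addition mod 2)
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

assert rank_gf2([(1, 1, 0), (0, 1, 1), (1, 0, 1)]) == 2   # row 3 = row 1 + row 2
assert rank_gf2([(1, 0, 0), (0, 1, 0), (0, 0, 1)]) == 3
```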

Linear block codes and group codes

Definition: A linear block code over a field F of blocklength n is a linear subspace of F^n.

Facts about linear block codes:
- The sum and difference of codewords are codewords. Scalar multiples of codewords are also codewords.
- A binary block code that is closed under addition is linear because the only scalars are 0 and 1.
- Parity-check codes are linear block codes over GF(2). Every PC code is defined by a set of homogeneous binary equations.
- If C is a LBC over GF(Q) of dimension k, then its rate is R = (log_Q Q^k)/n = k/n. Note that the rate is determined by k and n and not by Q.

A group code is a subgroup of the n-tuples over an additive group.

Minimum weight

The Hamming weight w_H(v) is the number of nonzero components of v. Obvious facts:
- w_H(v) = d_H(0, v)
- d_H(v_1, v_2) = w_H(v_1 − v_2) = w_H(v_2 − v_1)
- w_H(v) = 0 if and only if v = 0

Definition: The minimum (Hamming) weight of a block code is the weight of the nonzero codeword with smallest weight:
  w_min = min{w_H(c) : c ∈ C, c ≠ 0}

Examples of minimum weight:
- Simple parity-check codes: w_min = 2.
- Repetition codes: w_min = n.
- (7,4) Hamming code: w_min = 3. (There are 7 codewords of weight 3.) Weight enumerator: A(x) = 1 + 7x^3 + 7x^4 + x^7.
- Simple product code: w_min = 4.
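
The (7,4) Hamming weight enumerator can be confirmed by enumerating all 16 codewords (an illustrative sketch, not from the notes; the generator matrix used is the one for the cyclic (7,4) Hamming code that appears later in these notes):

```python
# Illustrative sketch (not from the notes): tally the weights of all 16
# codewords of the (7,4) cyclic Hamming code.
from itertools import product
from collections import Counter

G = [(1, 1, 0, 1, 0, 0, 0),
     (0, 1, 1, 0, 1, 0, 0),
     (1, 1, 1, 0, 0, 1, 0),
     (1, 0, 1, 0, 0, 0, 1)]

def encode(m):            # codeword = linear combination of the rows of G
    return tuple(sum(mi * g[j] for mi, g in zip(m, G)) % 2 for j in range(7))

weights = Counter(sum(encode(m)) for m in product((0, 1), repeat=4))
assert weights == Counter({0: 1, 3: 7, 4: 7, 7: 1})   # A(x) = 1 + 7x^3 + 7x^4 + x^7
assert min(w for w in weights if w > 0) == 3          # minimum weight
```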

Minimum distance = minimum weight

Theorem: For every linear block code, d_min = w_min.

Proof: We show that w_min ≥ d_min and d_min ≥ w_min.
  (w_min ≥ d_min) Let c_0 be a nonzero minimum-weight codeword. Since 0 is a codeword, w_min = w_H(c_0) = d_H(0, c_0) ≥ d_min.
  (d_min ≥ w_min) Let c_1 ≠ c_2 be closest codewords. Since c_1 − c_2 is a nonzero codeword, d_min = d_H(c_1, c_2) = w_H(c_1 − c_2) ≥ w_min.
Combining these two inequalities, we obtain d_min = w_min.

It is easier to find minimum weight than minimum distance because the weight minimization considers only a single parameter. Computer search: test vectors of weight 1, 2, 3, ... until a codeword is found.

It is also easier to determine the weight distribution of a linear code than the distance distribution of a general code.

The result d_min = w_min holds for group codes, since the proof used only subtraction.
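
The theorem can be confirmed by brute force for a small code (an illustrative sketch, not from the notes), here the (5,4) simple parity-check code:

```python
# Illustrative sketch (not from the notes): check d_min = w_min by exhaustion
# for the (5,4) simple parity-check code (all even-weight 5-tuples).
from itertools import product, combinations

G = [(1, 1, 0, 0, 0),
     (0, 1, 1, 0, 0),
     (0, 0, 1, 1, 0),
     (0, 0, 0, 1, 1)]
code = {tuple(sum(mi * g[j] for mi, g in zip(m, G)) % 2 for j in range(5))
        for m in product((0, 1), repeat=4)}

w_min = min(sum(c) for c in code if any(c))           # scan 15 nonzero codewords
d_min = min(sum(a != b for a, b in zip(c1, c2))
            for c1, c2 in combinations(code, 2))      # scan 120 codeword pairs
assert w_min == d_min == 2
```

Note the asymmetry the slide points out: the weight scan touches each codeword once, while the distance scan touches every pair.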

Generator matrix

Definition: A generator matrix for a linear block code C of blocklength n and dimension k is any k × n matrix G whose rows form a basis for C.

Every codeword is a linear combination of the rows g_0, g_1, ..., g_{k−1} of a generator matrix G:
  c = mG = [m_0 m_1 ... m_{k−1}] G = m_0 g_0 + m_1 g_1 + ··· + m_{k−1} g_{k−1}.
Since G has rank k, the representation of c is unique.

Each component c_j of c is the inner product of m with the j-th column of G:
  c_j = m_0 g_{0,j} + m_1 g_{1,j} + ··· + m_{k−1} g_{k−1,j}.

Both sets of equations can be used for encoding. Each codeword symbol requires k multiplications (by constants) and k − 1 additions.

Parity-check matrix

Definition: The dual code of C is the orthogonal complement C⊥.

Definition: A parity-check matrix for a linear block code C is any r × n matrix H whose rows span C⊥. (Obviously, r ≥ n − k.)

Example: G and H for the (5,4) simple parity-check code.

      1 1 0 0 0
  G = 0 1 1 0 0    ⇒  H = [ 1 1 1 1 1 ]
      0 0 1 1 0
      0 0 0 1 1

(H is the generator matrix for the (5,1) repetition code, the dual code.)

Example: G and H for the (7,4) cyclic Hamming code.

      1 1 0 1 0 0 0            1 0 0 1 0 1 1
  G = 0 1 1 0 1 0 0    ⇒  H =  0 1 0 1 1 1 0
      1 1 1 0 0 1 0            0 0 1 0 1 1 1
      1 0 1 0 0 0 1

A cyclic code is a linear block code such that the cyclic shift of every codeword is also a codeword. It is not obvious that this property holds for the code generated by G.

Codewords of linear code are determined by H

Theorem: If C is an (n,k) linear block code with parity-check matrix H, then an n-tuple c is a codeword if and only if cH^T = 0.

Proof:
(⇒) Suppose c is a codeword. Each component of cH^T is the inner product of c and a column of H^T, which is a row of H. Since every row of H is in C⊥, each row is orthogonal to c. Thus each component of cH^T is c · h_i = 0.
(⇐) Since the rows of H span C⊥, any n-tuple satisfying cH^T = 0 belongs to the orthogonal complement of C⊥. By the Dimension Theorem (Blahut Theorem 2.5.10), (C⊥)⊥ = C. Therefore if cH^T = 0 then c belongs to C.

Thus C consists of the vectors satisfying the check equations cH^T = 0.
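
The membership test cH^T = 0 can be sketched with the (7,4) Hamming parity-check matrix from the example above (illustrative Python, not from the notes; `checks` is a hypothetical helper name):

```python
# Illustrative sketch (not from the notes): c is a codeword iff c H^T = 0,
# using the (7,4) Hamming parity-check matrix from the earlier example.
H = [(1, 0, 0, 1, 0, 1, 1),
     (0, 1, 0, 1, 1, 1, 0),
     (0, 0, 1, 0, 1, 1, 1)]

def checks(c):                    # the (n-k)-tuple c H^T over GF(2)
    return tuple(sum(ci * hi for ci, hi in zip(c, h)) % 2 for h in H)

assert checks((1, 1, 0, 1, 0, 0, 0)) == (0, 0, 0)   # a codeword (a row of G)
assert checks((0, 0, 0, 0, 0, 0, 0)) == (0, 0, 0)   # 0 is always a codeword
assert checks((1, 0, 0, 0, 0, 0, 0)) != (0, 0, 0)   # weight-1 word: not a codeword
```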

Generator vs. parity-check matrices

Usually H consists of n − k independent rows, so H is (n−k) × n. Sometimes it is convenient or elegant to use a parity-check matrix with redundant rows (for example, binary BCH codes, to be discussed later).

Each row of H corresponds to an equation satisfied by all codewords. Since each row of G is a codeword, for any parity-check matrix H,
  G_{k×n} (H^T)_{n×r} = 0_{k×r}   (r ≥ n − k)
Each 0 in 0_{k×r} corresponds to one codeword and one equation.

Conversely, if GH^T = 0 and rank H = n − k, then H is a check matrix.

How do we find H from G? We can find H from G by finding n − k linearly independent solutions of the linear equation GH^T = 0. The equations GH^T = 0 are easy to solve when G is systematic.
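
Finding H from G can also be done by brute force for small codes (an illustrative sketch, not from the notes): every h with Gh^T = 0 lies in the dual code, and any n − k independent such h form the rows of a parity-check matrix.

```python
# Illustrative sketch (not from the notes): solve G h^T = 0 by exhaustion over
# GF(2)^7, then greedily pick n - k independent solutions as the rows of H.
from itertools import product

G = [(1, 1, 0, 1, 0, 0, 0),
     (0, 1, 1, 0, 1, 0, 0),
     (1, 1, 1, 0, 0, 1, 0),
     (1, 0, 1, 0, 0, 0, 1)]
n, k = 7, 4

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

dual = [h for h in product((0, 1), repeat=n) if all(dot(g, h) == 0 for g in G)]
assert len(dual) == 2 ** (n - k)             # the dual code has dimension n - k

H, span = [], {(0,) * n}                     # grow a span; keep independent rows
for h in dual:
    if h not in span:
        H.append(h)
        span |= {tuple((a + b) % 2 for a, b in zip(h, s)) for s in span}
assert len(H) == n - k
assert all(dot(g, h) == 0 for g in G for h in H)   # G H^T = 0
```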

Systematic generator matrices

Definition: A systematic generator matrix is of the form

                  −p_{0,0}     ···  −p_{0,n−k−1}    1 0 ··· 0
  G = [−P I_k] =  −p_{1,0}     ···  −p_{1,n−k−1}    0 1 ··· 0
                      ⋮                  ⋮               ⋱
                  −p_{k−1,0}   ···  −p_{k−1,n−k−1}  0 0 ··· 1

Advantages of systematic generator matrices:
- Message symbols appear unscrambled in each codeword, in the rightmost positions n−k, ..., n−1.
- Encoder complexity is reduced; only the check symbols need be computed:
    c_j = m_0 g_{0,j} + m_1 g_{1,j} + ··· + m_{k−1} g_{k−1,j}   (j = 0, ..., n−k−1)
- Check symbol encoder equations easily yield parity-check equations:
    c_j − c_{n−k} g_{0,j} − c_{n−k+1} g_{1,j} − ··· − c_{n−1} g_{k−1,j} = 0   (m_i = c_{n−k+i})
- The systematic parity-check matrix is easy to find: H = [I P^T].
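
Over GF(2) the minus signs vanish, so H = [I | P^T] can be read directly off a systematic G = [P | I]; a small sketch (not from the notes, with a made-up P) confirms GH^T = 0:

```python
# Illustrative sketch (not from the notes): build H = [I | P^T] from a
# systematic G = [P | I] over GF(2) and verify G H^T = 0.
k, r = 3, 3                       # k message symbols, r = n - k check symbols
P = [(1, 0, 1),
     (1, 1, 0),
     (0, 1, 1)]                   # hypothetical k x r check-symbol coefficients
G = [P[i] + tuple(int(j == i) for j in range(k)) for i in range(k)]
H = [tuple(int(j == i) for j in range(r)) + tuple(P[row][i] for row in range(k))
     for i in range(r)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

assert all(dot(g, h) == 0 for g in G for h in H)   # every row of G passes every check
```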

Systematic parity-check matrix

Let G be a k × n systematic generator matrix:

                  −p_{0,0}     ···  −p_{0,n−k−1}    1 0 ··· 0
  G = [−P I_k] =  −p_{1,0}     ···  −p_{1,n−k−1}    0 1 ··· 0
                      ⋮                  ⋮               ⋱
                  −p_{k−1,0}   ···  −p_{k−1,n−k−1}  0 0 ··· 1

The corresponding (n−k) × n systematic parity-check matrix is

                        1 0 ··· 0   p_{0,0}       ···  p_{k−1,0}
  H = [I_{n−k} P^T] =   0 1 ··· 0   p_{0,1}       ···  p_{k−1,1}
                            ⋱           ⋮                  ⋮
                        0 0 ··· 1   p_{0,n−k−1}   ···  p_{k−1,n−k−1}

(The minus signs are not needed for fields of characteristic 2, i.e., GF(2^m).)

Each row of H corresponds to an equation satisfied by all codewords. These equations tell how to compute the check symbols c_0, ..., c_{n−k−1} in terms of the information symbols c_{n−k}, ..., c_{n−1}.

Minimum weight and columns of H

cH^T = 0 for every codeword c = (c_0, c_1, ..., c_{n−1}). Any nonzero codeword determines a linear dependence among a subset of the rows of H^T. Then
  cH^T = 0  ⇒  0 = (cH^T)^T = Hc^T = c_0 h_0 + c_1 h_1 + ··· + c_{n−1} h_{n−1}
is a linear dependence among a subset of the columns h_0, ..., h_{n−1} of H.

Theorem: The minimum weight of a linear block code is the smallest number of linearly dependent columns of any parity-check matrix.

Proof: Each linearly dependent subset of w columns corresponds to a codeword of weight w.

Recall that a set of columns of H is linearly dependent if one column is a linear combination of the other columns.
- A LBC has w_min ≤ 2 iff one column of H is zero or a multiple of another column.
- For binary Hamming codes, w_min = 3 because the columns of H are nonzero and no two columns are equal.

The Big Question: how to find H such that every set of 2t columns is linearly independent?
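
The column condition can be checked directly for the (7,4) Hamming parity-check matrix (an illustrative sketch, not from the notes): no single column is zero, no two columns are dependent, but triples of dependent columns exist, so w_min = 3.

```python
# Illustrative sketch (not from the notes): for the (7,4) Hamming H, every
# pair of columns is independent, but some triples sum to zero -- w_min = 3.
from itertools import combinations

H = [(1, 0, 0, 1, 0, 1, 1),
     (0, 1, 0, 1, 1, 1, 0),
     (0, 0, 1, 0, 1, 1, 1)]
cols = list(zip(*H))                                    # the 7 columns of H

assert all(any(c) for c in cols)                        # no zero column: w_min > 1
assert all(a != b for a, b in combinations(cols, 2))    # no repeated column: w_min > 2
dep3 = [t for t in combinations(range(7), 3)
        if all(sum(cols[i][row] for i in t) % 2 == 0 for row in range(3))]
assert dep3                     # dependent triples exist, e.g. h_0 + h_1 + h_3 = 0
assert len(dep3) == 7           # one dependent triple per weight-3 codeword
```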