2. Norm, distance, angle


L. Vandenberghe, EE133A (Spring 2017)

2. Norm, distance, angle

Outline: norm, distance, k-means algorithm, angle, hyperplanes, complex vectors

Euclidean norm

the (Euclidean) norm of a vector $a \in \mathbf{R}^n$ is

$\|a\| = \sqrt{a_1^2 + a_2^2 + \cdots + a_n^2} = \sqrt{a^T a}$

- if $n = 1$, $\|a\|$ reduces to the absolute value $|a|$
- $\|a\|$ measures the magnitude of $a$
- sometimes written as $\|a\|_2$ to distinguish it from other norms, e.g., $\|a\|_1 = |a_1| + |a_2| + \cdots + |a_n|$
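A quick numerical sketch of these definitions (using NumPy; the example vector is ours, not from the slides):

```python
import numpy as np

a = np.array([2.0, -1.0, 2.0])

# Euclidean norm: sqrt(a_1^2 + ... + a_n^2) = sqrt(a^T a)
print(np.sqrt(a @ a))        # 3.0
print(np.linalg.norm(a))     # same value, via the built-in

# 1-norm, for comparison: |a_1| + ... + |a_n|
print(np.linalg.norm(a, 1))  # 5.0
```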

Properties

- positive definiteness: $\|a\| \geq 0$ for all $a$, and $\|a\| = 0$ only if $a = 0$
- homogeneity: $\|\beta a\| = |\beta| \, \|a\|$ for all vectors $a$ and scalars $\beta$
- triangle inequality: $\|a + b\| \leq \|a\| + \|b\|$ for all vectors $a$ and $b$ of equal size (proof on the slide "Triangle inequality from Cauchy-Schwarz inequality" below)

Cauchy-Schwarz inequality

$|a^T b| \leq \|a\| \, \|b\|$ for all $a, b \in \mathbf{R}^n$

moreover, equality $|a^T b| = \|a\| \, \|b\|$ holds if:

- $a = 0$ or $b = 0$; in this case $a^T b = 0 = \|a\| \, \|b\|$
- $a \neq 0$ and $b \neq 0$, and $b = \gamma a$ for some $\gamma > 0$; in this case $0 < a^T b = \gamma \|a\|^2 = \|a\| \, \|b\|$
- $a \neq 0$ and $b \neq 0$, and $b = \gamma a$ for some $\gamma < 0$; in this case $0 > a^T b = \gamma \|a\|^2 = -\|a\| \, \|b\|$

Proof of Cauchy-Schwarz inequality

1. trivial if $a = 0$ or $b = 0$

2. assume $\|a\| = \|b\| = 1$; we show that $-1 \leq a^T b \leq 1$:

$0 \leq \|a - b\|^2 = (a - b)^T (a - b) = \|a\|^2 - 2 a^T b + \|b\|^2 = 2 (1 - a^T b)$

with equality only if $a = b$;

$0 \leq \|a + b\|^2 = (a + b)^T (a + b) = \|a\|^2 + 2 a^T b + \|b\|^2 = 2 (1 + a^T b)$

with equality only if $a = -b$

3. for general nonzero $a$, $b$, apply case 2 to the unit-norm vectors $\frac{1}{\|a\|} a$ and $\frac{1}{\|b\|} b$

Average and RMS value

let $a$ be a real $n$-vector

- the average of the elements of $a$ is

$\mathbf{avg}(a) = \frac{a_1 + a_2 + \cdots + a_n}{n} = \frac{\mathbf{1}^T a}{n}$

- the root-mean-square value is the square root of the average squared entry:

$\mathbf{rms}(a) = \sqrt{\frac{a_1^2 + a_2^2 + \cdots + a_n^2}{n}} = \frac{\|a\|}{\sqrt{n}}$

Exercises

- show that $|\mathbf{avg}(a)| \leq \mathbf{rms}(a)$
- show that the average of $b = (|a_1|, |a_2|, \ldots, |a_n|)$ satisfies $\mathbf{avg}(b) \leq \mathbf{rms}(a)$
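A small sketch that computes both quantities and spot-checks the first exercise inequality on random data (NumPy; not part of the slides):

```python
import numpy as np

a = np.random.randn(100)
n = len(a)

avg = np.sum(a) / n                    # avg(a) = 1^T a / n
rms = np.linalg.norm(a) / np.sqrt(n)   # rms(a) = ||a|| / sqrt(n)

# first exercise: |avg(a)| <= rms(a)
assert abs(avg) <= rms + 1e-12
```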

Triangle inequality from Cauchy-Schwarz inequality

for vectors $a$, $b$ of equal size,

$\|a + b\|^2 = (a + b)^T (a + b) = a^T a + b^T a + a^T b + b^T b = \|a\|^2 + 2 a^T b + \|b\|^2 \leq \|a\|^2 + 2 \|a\| \|b\| + \|b\|^2 = (\|a\| + \|b\|)^2$

where the inequality is Cauchy-Schwarz; taking square roots gives the triangle inequality

- the triangle inequality is an equality if and only if $a^T b = \|a\| \|b\|$ (see the Cauchy-Schwarz slide above)
- also note from the third step that $\|a + b\|^2 = \|a\|^2 + \|b\|^2$ if $a^T b = 0$

Outline: norm, distance, k-means algorithm, angle, hyperplanes, complex vectors

Distance

the (Euclidean) distance between vectors $a$ and $b$ is defined as $\|a - b\|$

- $\|a - b\| \geq 0$ for all $a$, $b$, and $\|a - b\| = 0$ only if $a = b$
- triangle inequality: $\|a - c\| \leq \|a - b\| + \|b - c\|$ for all $a$, $b$, $c$

[figure: triangle with vertices $a$, $b$, $c$ and side lengths $\|a - b\|$, $\|b - c\|$, $\|a - c\|$]

- the RMS deviation between $n$-vectors $a$ and $b$ is $\mathbf{rms}(a - b) = \|a - b\| / \sqrt{n}$
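A short numerical illustration of distance, RMS deviation, and the triangle inequality (NumPy; the example vectors are ours):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])

dist = np.linalg.norm(a - b)       # Euclidean distance, here 5.0
rms_dev = dist / np.sqrt(len(a))   # RMS deviation between a and b

# triangle inequality: ||a - c|| <= ||a - b|| + ||b - c||
c = np.array([-2.0, 0.5])
assert np.linalg.norm(a - c) <= np.linalg.norm(a - b) + np.linalg.norm(b - c)
```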

Standard deviation

let $a$ be a real $n$-vector

- the de-meaned vector is the vector of deviations from the average:

$a - \mathbf{avg}(a) \mathbf{1} = \begin{bmatrix} a_1 - (\mathbf{1}^T a)/n \\ a_2 - (\mathbf{1}^T a)/n \\ \vdots \\ a_n - (\mathbf{1}^T a)/n \end{bmatrix}$

- the standard deviation is the RMS deviation from the average:

$\mathbf{std}(a) = \mathbf{rms}(a - \mathbf{avg}(a) \mathbf{1}) = \frac{\|a - ((\mathbf{1}^T a)/n) \mathbf{1}\|}{\sqrt{n}}$

- the de-meaned vector in standard units is $\frac{1}{\mathbf{std}(a)} (a - \mathbf{avg}(a) \mathbf{1})$
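A sketch computing the de-meaned vector, the standard deviation, and standard units, and checking the identity proved two slides below (NumPy; the data are ours):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0, 2.0])
n = len(a)

demeaned = a - np.mean(a) * np.ones(n)        # a - avg(a) 1
std = np.linalg.norm(demeaned) / np.sqrt(n)   # matches np.std(a)
z = demeaned / std                            # de-meaned, in standard units

# identity from the exercise below: avg(a)^2 + std(a)^2 = rms(a)^2
rms = np.linalg.norm(a) / np.sqrt(n)
assert np.isclose(np.mean(a)**2 + std**2, rms**2)
```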

Mean return and risk of investment

- vectors represent time series of returns on an investment (as a percentage)
- the average value is the (mean) return of the investment
- the standard deviation measures variation around the mean, i.e., risk

[figure: four return time series $a_k$, $b_k$, $c_k$, $d_k$, and a scatter plot of (mean) return versus risk for the investments $a$, $b$, $c$, $d$]

Exercise

show that $\mathbf{avg}(a)^2 + \mathbf{std}(a)^2 = \mathbf{rms}(a)^2$

Solution

$\mathbf{std}(a)^2 = \frac{\|a - \mathbf{avg}(a) \mathbf{1}\|^2}{n} = \frac{1}{n} \left( a - \frac{\mathbf{1}^T a}{n} \mathbf{1} \right)^T \left( a - \frac{\mathbf{1}^T a}{n} \mathbf{1} \right)$

$= \frac{1}{n} \left( a^T a - 2 \frac{(\mathbf{1}^T a)^2}{n} + n \left( \frac{\mathbf{1}^T a}{n} \right)^2 \right) = \frac{1}{n} \left( a^T a - \frac{(\mathbf{1}^T a)^2}{n} \right) = \mathbf{rms}(a)^2 - \mathbf{avg}(a)^2$

Exercise: nearest scalar multiple

given two vectors $a, b \in \mathbf{R}^n$, with $a \neq 0$, find the scalar multiple $ta$ closest to $b$

[figure: point $b$ and its projection $\hat{t} a$ on the line $\{ ta \mid t \in \mathbf{R} \}$]

Solution

the squared distance between $ta$ and $b$ is

$\|ta - b\|^2 = (ta - b)^T (ta - b) = t^2 a^T a - 2t \, a^T b + b^T b$,

a quadratic function of $t$ with positive leading coefficient $a^T a$; its derivative with respect to $t$ is zero for

$\hat{t} = \frac{a^T b}{a^T a} = \frac{a^T b}{\|a\|^2}$
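A numerical sketch of this projection onto a line, verifying that the residual is orthogonal to $a$ (NumPy; the vectors are ours):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 1.0])

t_hat = (a @ b) / (a @ a)   # minimizer t = a^T b / ||a||^2
proj = t_hat * a            # closest point on the line {ta}

# the residual b - t_hat * a is orthogonal to a
assert np.isclose(a @ (b - proj), 0.0)
```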

Exercise: average of a collection of vectors

given $N$ vectors $x_1, \ldots, x_N \in \mathbf{R}^n$, find the $n$-vector $z$ that minimizes

$\|z - x_1\|^2 + \|z - x_2\|^2 + \cdots + \|z - x_N\|^2$

[figure: points $x_1, \ldots, x_5$ and the minimizer $z$]

$z$ is also known as the centroid of the points $x_1, \ldots, x_N$

Solution

the sum of squared distances is

$\|z - x_1\|^2 + \cdots + \|z - x_N\|^2 = \sum_{i=1}^n \left( (z_i - (x_1)_i)^2 + (z_i - (x_2)_i)^2 + \cdots + (z_i - (x_N)_i)^2 \right)$

$= \sum_{i=1}^n \left( N z_i^2 - 2 z_i \left( (x_1)_i + (x_2)_i + \cdots + (x_N)_i \right) + (x_1)_i^2 + \cdots + (x_N)_i^2 \right)$

here $(x_j)_i$ is the $i$th element of the vector $x_j$

- term $i$ in the sum is minimized by $z_i = \frac{1}{N} ((x_1)_i + (x_2)_i + \cdots + (x_N)_i)$
- the solution $z$ is the component-wise average of the points $x_1, \ldots, x_N$:

$z = \frac{1}{N} (x_1 + x_2 + \cdots + x_N)$
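A sketch that computes the centroid and checks that it does at least as well as a nearby perturbed point (NumPy; the data are made up):

```python
import numpy as np

# N = 5 points in R^2, stacked as the rows of a matrix
X = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 2.0], [3.0, 4.0]])

z = X.mean(axis=0)   # component-wise average: the centroid

def sum_sq_dist(z, X):
    return sum(np.linalg.norm(z - x)**2 for x in X)

# the centroid beats a perturbed point
assert sum_sq_dist(z, X) <= sum_sq_dist(z + 0.1, X)
```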

Outline: norm, distance, k-means algorithm, angle, hyperplanes, complex vectors

k-means clustering

a popular iterative algorithm for partitioning $N$ vectors $x_1, \ldots, x_N$ into $k$ clusters

Algorithm

choose initial representatives $z_1, \ldots, z_k$ for the $k$ groups and repeat:

1. assign each vector $x_i$ to the nearest group representative $z_j$
2. set each representative $z_j$ to the mean of the vectors assigned to it

- as a variation, choose a random initial partition and start with step 2
- initial representatives are often chosen randomly
- the solution depends on the choice of initial representatives or partition
- can be shown to converge in a finite number of iterations
- in practice, often restarted a few times, with different starting points
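A minimal NumPy sketch of the algorithm as stated above; it assumes every group stays nonempty after each assignment step, which a production implementation would have to guard:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """X: N x n array of vectors; returns representatives Z and assignments."""
    rng = np.random.default_rng(seed)
    # initial representatives: k randomly chosen data points
    Z = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # step 1: assign each x_i to the nearest representative
        dist = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        # step 2: set each representative to the mean of its group
        Z_new = np.array([X[assign == j].mean(axis=0) for j in range(k)])
        if np.allclose(Z_new, Z):   # representatives stopped moving
            break
        Z = Z_new
    return Z, assign

# example: two well-separated point clouds in R^2
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
Z, assign = kmeans(X, k=2)
```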

Example

[figures for iterations 1, 2, 3, 9, 10, 11, and 12 of the algorithm: each shows the assignment of the points to groups, followed by the updated representatives]

Image clustering

- MNIST dataset of handwritten digits
- $N = 60{,}000$ grayscale images of size $28 \times 28$ (vectors $x_i$ of size $28^2 = 784$)

[figure: 25 example images]

Group representatives (k = 20)

k-means algorithm, with $k = 20$ and a randomly chosen initial partition

[figure: 20 group representatives]

result for another initial partition:

[figure: 20 group representatives for the second run]

Document topic discovery

- $N = 500$ Wikipedia articles, from weekly most-popular lists (9/2015 - 6/2016)
- dictionary of 4423 words
- each article represented by a word histogram vector of size 4423
- result of the k-means algorithm with $k = 9$ and a randomly chosen initial partition

Cluster 1

- words with the largest coefficients in cluster representative $z_1$: fight, win, event, champion, fighter, ...
- documents in cluster 1 closest to the representative: Floyd Mayweather, Jr., Kimbo Slice, Ronda Rousey, José Aldo, Joe Frazier, ...

Cluster 2

- words with the largest coefficients in cluster representative $z_2$: holiday, celebrate, festival, celebration, calendar, ...
- documents in cluster 2 closest to the representative: Halloween, Guy Fawkes Night, Diwali, Hanukkah, Groundhog Day, ...

Cluster 3

- words with the largest coefficients in cluster representative $z_3$: united, family, party, president, government, ...
- documents in cluster 3 closest to the representative: Mahatma Gandhi, Sigmund Freud, Carly Fiorina, Frederick Douglass, Marco Rubio, ...

Cluster 4

- words with the largest coefficients in cluster representative $z_4$: album, release, song, music, single, ...
- documents in cluster 4 closest to the representative: David Bowie, Kanye West, Celine Dion, Kesha, Ariana Grande, ...

Cluster 5

- words with the largest coefficients in cluster representative $z_5$: game, season, team, win, player, ...
- documents in cluster 5 closest to the representative: Kobe Bryant, Lamar Odom, Johan Cruyff, Yogi Berra, José Mourinho, ...

Cluster 6

- words with the largest coefficients in representative $z_6$: series, season, episode, character, film, ...
- documents in cluster 6 closest to the cluster representative: The X-Files, Game of Thrones, House of Cards, Daredevil, Supergirl, ...

Cluster 7

- words with the largest coefficients in representative $z_7$: match, win, championship, team, event, ...
- documents in cluster 7 closest to the cluster representative: Wrestlemania 32, Payback (2016), Survivor Series (2015), Royal Rumble (2016), Night of Champions (2015), ...

Cluster 8

- words with the largest coefficients in representative $z_8$: film, star, role, play, series, ...
- documents in cluster 8 closest to the cluster representative: Ben Affleck, Johnny Depp, Maureen O'Hara, Kate Beckinsale, Leonardo DiCaprio, ...

Cluster 9

- words with the largest coefficients in representative $z_9$: film, million, release, star, character, ...
- documents in cluster 9 closest to the cluster representative: Star Wars: The Force Awakens, Star Wars Episode I: The Phantom Menace, The Martian (film), The Revenant (2015 film), The Hateful Eight, ...

Outline: norm, distance, k-means algorithm, angle, hyperplanes, complex vectors

Angle between vectors

the angle between nonzero real vectors $a$, $b$ is defined as

$\theta = \arccos \left( \frac{a^T b}{\|a\| \, \|b\|} \right)$

this is the unique value of $\theta \in [0, \pi]$ that satisfies $a^T b = \|a\| \, \|b\| \cos \theta$

[figure: vectors $a$ and $b$ with angle $\theta$ between them]

the Cauchy-Schwarz inequality guarantees that $-1 \leq \frac{a^T b}{\|a\| \, \|b\|} \leq 1$
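A small sketch of this formula; the clip guards against cosine values falling just outside $[-1, 1]$ from floating-point roundoff (NumPy; the helper name is ours):

```python
import numpy as np

def angle(a, b):
    """Angle in [0, pi] between nonzero vectors a and b."""
    c = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

print(angle(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # pi/4 ~ 0.785
```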

Terminology

- $\theta = 0$: $a^T b = \|a\| \, \|b\|$; the vectors are aligned or parallel
- $0 \leq \theta < \pi/2$: $a^T b > 0$; the vectors make an acute angle
- $\theta = \pi/2$: $a^T b = 0$; the vectors are orthogonal ($a \perp b$)
- $\pi/2 < \theta \leq \pi$: $a^T b < 0$; the vectors make an obtuse angle
- $\theta = \pi$: $a^T b = -\|a\| \, \|b\|$; the vectors are anti-aligned or opposed

Orthogonal decomposition

given a nonzero $a \in \mathbf{R}^n$, every $n$-vector $x$ can be decomposed as

$x = ta + y$ with $y \perp a$

[figure: $x$ decomposed into $ta$ along $a$ and $y$ orthogonal to $a$]

$t = \frac{a^T x}{\|a\|^2}, \qquad y = x - \frac{a^T x}{\|a\|^2} a$

- proof is by inspection
- the decomposition (i.e., $t$ and $y$) exists and is unique for every $x$
- $ta$ is the projection of $x$ on the line through $a$ and the origin (see the nearest-scalar-multiple exercise above)
- since $y \perp a$, we have $\|x\|^2 = \|ta\|^2 + \|y\|^2$
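A sketch of the decomposition, checking orthogonality and the Pythagorean identity from the last bullet (NumPy; the vectors are ours):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
x = np.array([4.0, -1.0, 3.0])

t = (a @ x) / (a @ a)   # t = a^T x / ||a||^2
y = x - t * a           # component of x orthogonal to a

assert np.isclose(a @ y, 0.0)   # y is orthogonal to a
assert np.isclose(np.linalg.norm(x)**2,
                  np.linalg.norm(t * a)**2 + np.linalg.norm(y)**2)
```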

Correlation coefficient

the correlation coefficient between non-constant vectors $a$, $b$ is

$\rho_{ab} = \frac{\tilde{a}^T \tilde{b}}{\|\tilde{a}\| \, \|\tilde{b}\|}$

where $\tilde{a} = a - \mathbf{avg}(a) \mathbf{1}$ and $\tilde{b} = b - \mathbf{avg}(b) \mathbf{1}$ are the de-meaned vectors

- only defined when $a$ and $b$ are not constant ($\tilde{a} \neq 0$ and $\tilde{b} \neq 0$)
- $\rho_{ab}$ is the cosine of the angle between the de-meaned vectors
- a number between $-1$ and $1$
- $\rho_{ab}$ is the average product of the deviations from the mean in standard units:

$\rho_{ab} = \frac{1}{n} \sum_{i=1}^n \frac{a_i - \mathbf{avg}(a)}{\mathbf{std}(a)} \cdot \frac{b_i - \mathbf{avg}(b)}{\mathbf{std}(b)}$
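A direct transcription of the definition (NumPy; the function name and data are ours):

```python
import numpy as np

def corr_coef(a, b):
    """Correlation coefficient of non-constant vectors a and b."""
    at = a - a.mean()   # de-meaned vectors
    bt = b - b.mean()
    return (at @ bt) / (np.linalg.norm(at) * np.linalg.norm(bt))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.1, 3.9, 6.2, 7.8])
print(corr_coef(a, b))   # close to 1: strongly positively correlated
```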

Examples

[figures: three pairs of time series $a_k$, $b_k$ with their values of $\rho_{ab}$, illustrating different degrees of correlation]

Regression line

- a scatter plot shows two $n$-vectors $a$, $b$ as $n$ points $(a_k, b_k)$
- the straight line shows an affine function $f(x) = c_1 + c_2 x$ with $f(a_k) \approx b_k$, $k = 1, \ldots, n$

[figure: scatter plot with the regression line $f(x)$]

Least squares regression

use coefficients $c_1$, $c_2$ that minimize

$J = \frac{1}{n} \sum_{k=1}^n (f(a_k) - b_k)^2$

$J$ is a quadratic function of $c_1$ and $c_2$:

$J = \frac{1}{n} \sum_{k=1}^n (c_1 + c_2 a_k - b_k)^2 = \left( n c_1^2 + 2n \, \mathbf{avg}(a) c_1 c_2 + \|a\|^2 c_2^2 - 2n \, \mathbf{avg}(b) c_1 - 2 a^T b \, c_2 + \|b\|^2 \right) / n$

to minimize $J$, set the derivatives with respect to $c_1$, $c_2$ to zero:

$c_1 + \mathbf{avg}(a) c_2 = \mathbf{avg}(b), \qquad n \, \mathbf{avg}(a) c_1 + \|a\|^2 c_2 = a^T b$

the solution is

$c_2 = \frac{a^T b - n \, \mathbf{avg}(a) \mathbf{avg}(b)}{\|a\|^2 - n \, \mathbf{avg}(a)^2}, \qquad c_1 = \mathbf{avg}(b) - \mathbf{avg}(a) c_2$
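A sketch implementing the closed-form solution above (NumPy; the function name and data are ours):

```python
import numpy as np

def regression_line(a, b):
    """Least squares coefficients c1, c2 of the line f(x) = c1 + c2*x."""
    n = len(a)
    c2 = (a @ b - n * a.mean() * b.mean()) / (a @ a - n * a.mean()**2)
    c1 = b.mean() - a.mean() * c2
    return c1, c2

a = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 2.9, 5.2, 6.8])
print(regression_line(a, b))   # roughly intercept 1 and slope 2
```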

Interpretation

the slope $c_2$ can be written in terms of the correlation coefficient of $a$ and $b$:

$c_2 = \frac{(a - \mathbf{avg}(a) \mathbf{1})^T (b - \mathbf{avg}(b) \mathbf{1})}{\|a - \mathbf{avg}(a) \mathbf{1}\|^2} = \rho_{ab} \frac{\mathbf{std}(b)}{\mathbf{std}(a)}$

hence, the expression for the regression line can be written as

$f(x) = \mathbf{avg}(b) + \rho_{ab} \frac{\mathbf{std}(b)}{\mathbf{std}(a)} (x - \mathbf{avg}(a))$

the correlation coefficient $\rho_{ab}$ is the slope after converting to standard units:

$\frac{f(x) - \mathbf{avg}(b)}{\mathbf{std}(b)} = \rho_{ab} \frac{x - \mathbf{avg}(a)}{\mathbf{std}(a)}$

Examples

[figures: three scatter plots with regression lines, with $\rho_{ab} = 0.91$, $\rho_{ab} = 0.89$, and $\rho_{ab} = 0.25$]

- dashed lines in the top row show average $\pm$ standard deviation
- the bottom row shows the scatter plots of the top row in standard units

Outline: norm, distance, k-means algorithm, angle, hyperplanes, complex vectors

Hyperplane

one linear equation in $n$ variables $x_1, x_2, \ldots, x_n$:

$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$

in vector notation: $a^T x = b$

let $H$ be the set of solutions: $H = \{ x \in \mathbf{R}^n \mid a^T x = b \}$

- $H$ is empty if $a_1 = a_2 = \cdots = a_n = 0$ and $b \neq 0$
- $H = \mathbf{R}^n$ if $a_1 = a_2 = \cdots = a_n = 0$ and $b = 0$
- $H$ is called a hyperplane if $a = (a_1, a_2, \ldots, a_n) \neq 0$
- for $n = 2$, a hyperplane is a straight line in a plane; for $n = 3$, a plane in 3-D space, ...

Example

[figure: lines $a^T x = b$ in the $(x_1, x_2)$-plane for $a = (2, 1)$ and several values of $b$]

Geometric interpretation of hyperplane

recall the formula for the orthogonal decomposition of $x$ with respect to $a$ (orthogonal decomposition slide above):

$x = \frac{a^T x}{\|a\|^2} a + y$ with $y \perp a$

$x$ satisfies $a^T x = b$ if and only if

$x = \frac{b}{\|a\|^2} a + y$ with $y \perp a$

[figure: hyperplane $H$, the point $(b / \|a\|^2) a$ on the line through $a$, and a point $x \in H$ with its component $y \perp a$]

- the point $(b / \|a\|^2) a$ is the intersection of the hyperplane with the line through $a$ and the origin
- add arbitrary vectors $y \perp a$ to get all other points in the hyperplane

Exercise: projection on hyperplane

- show that the point in $H = \{ x \mid a^T x = b \}$ closest to $c \in \mathbf{R}^n$ is

$\hat{x} = c + \frac{b - a^T c}{\|a\|^2} a$

- show that the distance of $c$ to the hyperplane $H = \{ x \mid a^T x = b \}$ is

$\frac{|a^T c - b|}{\|a\|}$

Solution

[figure: point $c$, its projection $(a^T c / \|a\|^2) a$ on the line through $a$ and the origin, the point $(b / \|a\|^2) a$, the vector $y$, and the projection $\hat{x} = c + \frac{b - a^T c}{\|a\|^2} a$ on $H = \{ x \mid a^T x = b \}$]

Solution

a general point $x$ in $H$ is

$x = \frac{b}{\|a\|^2} a + y, \qquad y \perp a$

the decomposition of $c$ with respect to $a$ is

$c = \frac{a^T c}{\|a\|^2} a + d \quad \text{with} \quad d = c - \frac{a^T c}{\|a\|^2} a$

the squared distance between $x$ and $c$ is

$\|c - x\|^2 = \left\| \frac{a^T c - b}{a^T a} a + d - y \right\|^2 = \frac{(a^T c - b)^2}{\|a\|^2} + \|d - y\|^2$

(the second step holds because $d - y \perp a$); the distance is minimized by choosing $y = d$
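A sketch of the projection and distance formulas from the exercise (NumPy; the vectors are ours):

```python
import numpy as np

def project_on_hyperplane(c, a, b):
    """Point in H = {x | a^T x = b} closest to c, for nonzero a."""
    return c + (b - a @ c) / (a @ a) * a

a = np.array([2.0, 1.0])
b = 5.0
c = np.array([4.0, 3.0])

xhat = project_on_hyperplane(c, a, b)
assert np.isclose(a @ xhat, b)              # xhat lies in the hyperplane
dist = abs(a @ c - b) / np.linalg.norm(a)   # distance of c to H
assert np.isclose(np.linalg.norm(c - xhat), dist)
```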

Kaczmarz algorithm

Problem: find (one) solution of a set of linear equations

$a_1^T x = b_1, \quad a_2^T x = b_2, \quad \ldots, \quad a_m^T x = b_m$

- here $a_1, a_2, \ldots, a_m$ are nonzero $n$-vectors
- we assume the equations are solvable (have at least one solution)
- $n$ is huge, so we can only use simple vector operations

Algorithm: start at some initial $x$ and repeat the following steps:

- pick an index $i \in \{1, \ldots, m\}$, for example, cyclically or randomly
- replace $x$ with its projection on the hyperplane $H_i = \{ x \mid a_i^T x = b_i \}$:

$x := x + \frac{b_i - a_i^T x}{\|a_i\|^2} a_i$
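A minimal sketch of the cyclic variant on a small solvable system (NumPy; the fixed iteration count and the example data are ours):

```python
import numpy as np

def kaczmarz(A, b, iters=1000):
    """Cyclic Kaczmarz iteration for a solvable system a_i^T x = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    for k in range(iters):
        i = k % m                                 # cyclic index selection
        ai = A[i]
        # project x onto the hyperplane H_i = {x | a_i^T x = b_i}
        x = x + (b[i] - ai @ x) / (ai @ ai) * ai
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # rows are a_1^T, a_2^T
b = np.array([5.0, 10.0])
print(kaczmarz(A, b))   # converges to the solution (1, 3)
```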

Tomography

reconstruct an unknown image from line integrals

[figure: ray $i$ crossing pixel $j$, with intersection length $a_{ij}$]

- $x$ represents the unknown image with $n$ pixels
- $a_{ij}$ is the length of the intersection of ray $i$ and pixel $j$
- $b_i$ is a measurement of the line integral $\sum_{j=1}^n a_{ij} x_j$ along ray $i$
- the Kaczmarz algorithm is also known as the Algebraic Reconstruction Technique (ART)

Outline: norm, distance, k-means algorithm, angle, hyperplanes, complex vectors

Norm

norm of a vector $a \in \mathbf{C}^n$:

$\|a\| = \sqrt{|a_1|^2 + |a_2|^2 + \cdots + |a_n|^2} = \sqrt{a^H a}$

- positive definite: $\|a\| \geq 0$ for all $a$, and $\|a\| = 0$ only if $a = 0$
- homogeneous: $\|\beta a\| = |\beta| \, \|a\|$ for all vectors $a$ and complex scalars $\beta$
- triangle inequality: $\|a + b\| \leq \|a\| + \|b\|$ for all vectors $a$, $b$ of equal size

Cauchy-Schwarz inequality for complex vectors

$|a^H b| \leq \|a\| \, \|b\|$ for all $a, b \in \mathbf{C}^n$

moreover, equality $|a^H b| = \|a\| \, \|b\|$ holds if:

- $a = 0$ or $b = 0$
- $a \neq 0$ and $b \neq 0$, and $b = \gamma a$ for some (complex) scalar $\gamma$

exercise: generalize the proof for real vectors (Cauchy-Schwarz slide above)

- we say $a$ and $b$ are orthogonal if $a^H b = 0$
- we will not need the definition of angle, correlation coefficient, ... in $\mathbf{C}^n$
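A quick check of the complex inner product and norm conventions (NumPy; the vectors are ours):

```python
import numpy as np

a = np.array([1 + 1j, 2 + 0j])
b = np.array([0 + 1j, 1 - 1j])

norm_a = np.sqrt((np.conj(a) @ a).real)   # ||a|| = sqrt(a^H a)
inner = np.conj(a) @ b                    # a^H b, a complex number

# Cauchy-Schwarz for complex vectors: |a^H b| <= ||a|| ||b||
assert abs(inner) <= norm_a * np.linalg.norm(b) + 1e-12
```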
