Lect. 3. Tensor-product interpolation. Introduction to MLA. B. Khoromskij, Leipzig 2007 (L3)

Contents of Lecture 3:
1. Best polynomial approximation.
2. Error bound for tensor-product interpolants.
   - Polynomial interpolation.
   - Sinc interpolation.
3. Data-sparse formats to represent high-order tensors.
   - Tucker model.
   - Canonical (PARAFAC) model.
   - Two-level and mixed models.
4. Multi-linear algebra (MLA) with Kronecker-product data.
Chebyshev polynomials

By $E_\rho = E_\rho(B)$ with the reference interval $B := [-1,1]$ we denote Bernstein's regularity ellipse (with foci at $w = \pm 1$ and the sum of semi-axes equal to $\rho > 1$),
$$E_\rho := \{\, w \in \mathbb{C} : |w - 1| + |w + 1| \le \rho + \rho^{-1} \,\}.$$
The Chebyshev polynomials $T_n(w)$ are defined recursively:
$$T_0(w) = 1, \qquad T_1(w) = w, \qquad T_{n+1}(w) = 2wT_n(w) - T_{n-1}(w), \quad n = 1, 2, \ldots$$
The representation $T_n(x) = \cos(n \arccos x)$, $x \in [-1,1]$, implies $T_n(1) = 1$, $T_n(-1) = (-1)^n$. There holds $T_n(w) = \frac{1}{2}(z^n + z^{-n})$ with $w = \frac{1}{2}\big(z + \frac{1}{z}\big)$.
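The three-term recurrence and the trigonometric representation above can be checked against each other numerically. The following numpy snippet is an illustration, not part of the original lecture:

```python
import numpy as np

def chebyshev_T(n, w):
    """Evaluate T_n(w) via the recurrence T_0 = 1, T_1 = w,
    T_{n+1} = 2 w T_n - T_{n-1}."""
    t_prev, t_curr = np.ones_like(w), w
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * w * t_curr - t_prev
    return t_curr

# On [-1, 1] the recurrence agrees with cos(n arccos x).
x = np.linspace(-1.0, 1.0, 201)
for n in range(6):
    assert np.allclose(chebyshev_T(n, x), np.cos(n * np.arccos(x)))

# Endpoint values: T_n(1) = 1, T_n(-1) = (-1)^n.
print(chebyshev_T(5, np.array([1.0, -1.0])))  # -> [ 1. -1.]
```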
Best polynomial approximation by Chebyshev series

Thm. Let $F$ be analytic and bounded by $M$ in $E_\rho$ (with $\rho > 1$). Then the expansion (Chebyshev series)
$$F(w) = \frac{C_0}{2} + \sum_{n=1}^{\infty} C_n T_n(w) \qquad (1)$$
holds for all $w \in E_\rho$, with
$$C_n = \frac{2}{\pi} \int_{-1}^{1} \frac{F(w)\,T_n(w)}{\sqrt{1 - w^2}}\, dw.$$
Moreover, $|C_n| \le 2M/\rho^n$, and for $w \in B$ and for $m = 1, 2, 3, \ldots$,
$$\Big| F(w) - \frac{C_0}{2} - \sum_{n=1}^{m} C_n T_n(w) \Big| \le \frac{2M}{\rho - 1}\,\rho^{-m}, \qquad w \in B, \qquad (2)$$
where (2) follows from the coefficient bound by summing the geometric tail $\sum_{n>m} 2M\rho^{-n}$.
Lagrangian polynomial interpolation

Let $P_N(B)$ be the set of polynomials of degree $\le N$ on $B$. Define by $[I_N F](x) \in P_N(B)$ the interpolation polynomial of $F$ w.r.t. the Chebyshev-Gauss-Lobatto (CGL) nodes
$$\xi_j = \cos\frac{\pi j}{N} \in B, \qquad j = 0, 1, \ldots, N,$$
with $\xi_0 = 1$, $\xi_N = -1$; the $\xi_j$ are the zeroes of the polynomial $(1 - x^2)\,T_N'(x)$, $x \in B$. The Lagrangian interpolant $I_N$ of $F$ has the form
$$I_N F := \sum_{j=0}^{N} F(\xi_j)\, l_j(x) \in P_N(B) \qquad (3)$$
with $l_j(x)$ being the set of interpolation polynomials
$$l_j := \prod_{k=0,\ k \ne j}^{N} \frac{x - \xi_k}{\xi_j - \xi_k} \in P_N(B), \qquad j = 0, \ldots, N.$$
Clearly $[I_N F](\xi_j) = F(\xi_j)$, since $l_j(\xi_j) = 1$ and $l_j(\xi_k) = 0$ for all $k \ne j$.
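Formula (3) can be implemented directly. The following numpy sketch (illustrative only; a production code would use the barycentric form for stability) builds the Lagrange interpolant at the CGL nodes:

```python
import numpy as np

def cgl_nodes(N):
    """Chebyshev-Gauss-Lobatto nodes xi_j = cos(pi j / N), j = 0..N."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def lagrange_interpolant(F, N):
    """Return x -> (I_N F)(x), the sum of F(xi_j) l_j(x) over the CGL nodes."""
    xi = cgl_nodes(N)
    fvals = F(xi)

    def I_N(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        result = np.zeros_like(x)
        for j in range(N + 1):
            # l_j(x) = prod_{k != j} (x - xi_k) / (xi_j - xi_k)
            lj = np.ones_like(x)
            for k in range(N + 1):
                if k != j:
                    lj *= (x - xi[k]) / (xi[j] - xi[k])
            result += fvals[j] * lj
        return result

    return I_N

F = np.exp                       # analytic on [-1, 1], so rho > 1
I10 = lagrange_interpolant(F, 10)
x = np.linspace(-1, 1, 101)
err = np.max(np.abs(F(x) - I10(x)))
assert err < 1e-9                # near-exponential accuracy for analytic F
```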
Lebesgue constant for Chebyshev interpolation

Given the set $\{\xi_j\}_{j=0}^{N}$ of interpolation points on $[-1,1]$ and the associated Lagrangian interpolation operator $I_N$, the approximation theory for polynomial interpolation includes the so-called Lebesgue constant $\Lambda_N \in \mathbb{R}_{>1}$:
$$\| I_N u \|_{\infty,B} \le \Lambda_N\, \| u \|_{\infty,B} \qquad \forall\, u \in C(B). \qquad (4)$$
In the case of Chebyshev interpolation it can be shown that $\Lambda_N$ grows at most logarithmically in $N$,
$$\Lambda_N \le \frac{2}{\pi} \log N + 1.$$
The interpolation points which produce the smallest value $\Lambda_N^\ast$ of all $\Lambda_N$ are not known, but Bernstein '54 proved that $\Lambda_N^\ast = \frac{2}{\pi}\log N + O(1)$.
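The logarithmic growth of $\Lambda_N$ is easy to observe numerically. The sketch below (illustrative, not from the lecture) estimates $\Lambda_N = \max_x \sum_j |l_j(x)|$ by dense sampling and compares it with the bound $\frac{2}{\pi}\log N + 1$ for the CGL nodes:

```python
import numpy as np

def lebesgue_constant(nodes, num_samples=20001):
    """Estimate Lambda_N = max_x sum_j |l_j(x)| on [-1, 1] by dense sampling."""
    x = np.linspace(-1.0, 1.0, num_samples)
    total = np.zeros_like(x)
    n = len(nodes)
    for j in range(n):
        lj = np.ones_like(x)
        for k in range(n):
            if k != j:
                lj *= (x - nodes[k]) / (nodes[j] - nodes[k])
        total += np.abs(lj)
    return total.max()

for N in (4, 8, 16, 32):
    cgl = np.cos(np.pi * np.arange(N + 1) / N)
    lam = lebesgue_constant(cgl)
    print(N, lam)
    assert lam <= 2 / np.pi * np.log(N) + 1   # logarithmic growth, as stated above
```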
Error bound for polynomial interpolation

Thm. Let $u \in C[-1,1]$ have an analytic extension to $E_\rho$ bounded by $M > 0$ in $E_\rho$ (with $\rho > 1$). Then we have
$$\| u - I_N u \|_{\infty,B} \le (1 + \Lambda_N)\, \frac{2M}{\rho - 1}\, \rho^{-N}, \qquad N \ge 1. \qquad (5)$$
Proof. Due to (2) one obtains for the best polynomial approximation to $u$ on $[-1,1]$
$$\min_{v \in P_N} \| u - v \|_{\infty,B} \le \frac{2M}{\rho - 1}\, \rho^{-N}.$$
The interpolation operator $I_N$ is a projection, that is, $I_N v = v$ for all $v \in P_N$. Now apply the triangle inequality with the best approximation $v$:
$$\| u - I_N u \|_{\infty,B} = \| u - v - I_N(u - v) \|_{\infty,B} \le (1 + \Lambda_N)\, \| u - v \|_{\infty,B}.$$
Tensor-product polynomial interpolation

Consider a multivariate function $f = f(x_1, \ldots, x_d) : B^d \to \mathbb{R}$, $d \ge 2$, defined on a box $B^d = B_1 \times B_2 \times \cdots \times B_d$ with $B_k = B = [-1,1]$. Define the $N$-th order tensor-product interpolation operator
$$I_N f = I_N^1 \otimes I_N^2 \otimes \cdots \otimes I_N^d f \in P_N[B^d],$$
where $I_N^k f$ denotes the interpolation polynomial w.r.t. $x_k$ at nodes $\{\xi_k\} \subset B_k$, $k = 1, \ldots, d$. We choose the CGL nodes; hence the interpolation points $\xi_\alpha \in B^d$, $\alpha = (i_1, \ldots, i_d) \in \mathbb{N}_0^d$, are obtained by the Cartesian product of the 1D nodes,
$$\xi_\alpha := \Big( \cos\frac{\pi i_1}{N}, \ldots, \cos\frac{\pi i_d}{N} \Big).$$
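For $d = 2$ the tensor-product interpolant is just a double sum of products of 1D Lagrange basis functions on the Cartesian CGL grid. A small numpy sketch (illustrative, not from the lecture):

```python
import numpy as np

def cgl_nodes(N):
    """CGL nodes xi_j = cos(pi j / N), j = 0..N."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def tensor_product_interpolate(f, N, x, y):
    """Evaluate the 2D tensor-product Chebyshev interpolant (I_N^1 I_N^2 f)(x, y)
    from samples of f on the Cartesian CGL grid."""
    xi = cgl_nodes(N)
    F = f(xi[:, None], xi[None, :])           # samples on the CGL product grid

    def lag_basis(t):
        l = np.ones(N + 1)
        for j in range(N + 1):
            for k in range(N + 1):
                if k != j:
                    l[j] *= (t - xi[k]) / (xi[j] - xi[k])
        return l

    lx, ly = lag_basis(x), lag_basis(y)
    return lx @ F @ ly                        # sum_{i,j} F_ij l_i(x) l_j(y)

f = lambda x, y: np.exp(x + 0.5 * y)          # analytic in each variable
err = abs(tensor_product_interpolate(f, 12, 0.3, -0.7) - f(0.3, -0.7))
assert err < 1e-10
```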
Tensor-product polynomial interpolation (cont.)

Again, $I_N$ is a projection map,
$$I_N : C(B^d) \to P_N := \{\, p_1 \otimes \cdots \otimes p_d : p_i \in P_N,\ i = 1, \ldots, d \,\},$$
implying stability of $I_N$ in the multidimensional case, cf. (4):
$$\| I_N f \|_{\infty,B^d} \le \Lambda_N^d\, \| f \|_{\infty,B^d} \qquad \forall\, f \in C(B^d). \qquad (6)$$
To derive an analogue of Thm. 3.2, introduce the product domain
$$E_\rho^{(j)} := B_1 \times \cdots \times B_{j-1} \times E_\rho(I_j) \times B_{j+1} \times \cdots \times B_d,$$
and denote by $X_j$ the $(d-1)$-dimensional subset of variables $\{x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_d\}$ with $x_j \in B_j$, $j = 1, \ldots, d$.
Tensor-product polynomial interpolation (cont.)

Assump. Given $f \in C(B^d)$, assume there is $\rho > 1$ s.t. for all $j = 1, \ldots, d$ and each fixed $\xi \in X_j$ there exists an analytic extension $\hat f_j(x_j, \xi)$ of $f(x_j, \xi)$ to $E_\rho(B_j) \subset \mathbb{C}$ w.r.t. $x_j$, bounded in $E_\rho(B_j)$ by a certain $M_j > 0$ independent of $\xi$.

Thm. For $f \in C(B^d)$, let Assump. 3.1 be satisfied. Then the interpolation error can be estimated by
$$\| f - I_N f \|_{\infty,B^d} \le \Lambda_N^d\, \frac{2 M_\rho(f)}{\rho - 1}\, \rho^{-N}, \qquad (7)$$
where $\Lambda_N$ is the Lebesgue constant for the 1D interpolants $I_N^k$ and
$$M_\rho(f) := \max_{1 \le j \le d}\ \max_{x \in E_\rho^{(j)}} \big| \hat f_j(x, \xi) \big|.$$
Tensor-product polynomial interpolation (cont.)

Proof. Multiple use of (4), (5) and the triangle inequality leads to
$$\| f - I_N f \| \le \| f - I_N^1 f \| + \| I_N^1 (f - I_N^2 \cdots I_N^d f) \|$$
$$\le \| f - I_N^1 f \| + \| I_N^1 (f - I_N^2 f) \| + \| I_N^1 I_N^2 (f - I_N^3 f) \| + \cdots + \| I_N^1 \cdots I_N^{d-1} (f - I_N^d f) \|$$
$$\le \Big[ (1 + \Lambda_N) \max_{x \in E_\rho^{(1)}} |\hat f_1(x,\xi)| + \Lambda_N (1 + \Lambda_N) \max_{x \in E_\rho^{(2)}} |\hat f_2(x,\xi)| + \cdots + \Lambda_N^{d-1} (1 + \Lambda_N) \max_{x \in E_\rho^{(d)}} |\hat f_d(x,\xi)| \Big] \frac{2}{\rho - 1}\,\rho^{-N}$$
$$\le \frac{(1 + \Lambda_N)(\Lambda_N^d - 1)}{\Lambda_N - 1}\, \frac{2 M_\rho}{\rho - 1}\, \rho^{-N}.$$
Hence (7) follows, since for $x > 1$ we have $\frac{(1+x)(x^n - 1)}{x - 1} \lesssim x^n$.
Sinc-approximation of multi-variate functions

Consider the separable approximation in the case $\Omega = \mathbb{R}$; the extension to the case $\Omega = \mathbb{R}_+$ or $\Omega = (a, b)$ is possible. The tensor-product Sinc interpolant $C_M$ w.r.t. the first $d - 1$ variables reads
$$C_M f := C_M^1 \cdots C_M^{d-1} f, \qquad f : \mathbb{R}^d \to \mathbb{R},$$
where $C_M^l f = C_M^l(f, h)$, $1 \le l \le d - 1$, is the univariate Sinc interpolant in $x_l \in I_l = \mathbb{R}$, with $\mathbb{R}^d = I_1 \times \cdots \times I_d$:
$$C_M(f, h) = \sum_{k=-M}^{M} f(kh)\, S_{k,h}(x).$$
Ex. Examples of approximated functions ($x, y \in \mathbb{R}^d$):
$$f(x) = \|x\|^{-\alpha}, \qquad f(x) = \frac{e^{-\kappa \|x\|}}{\|x\|}, \qquad f(x, y) = \operatorname{sinc}(\|x - y\|).$$
Sinc-approximation of multi-variate functions (cont.)

Error bound for the tensor-product Sinc interpolant. The estimation of the error $f - C_M f$ requires the Lebesgue constant $\Lambda_M \ge 1$ defined by
$$\| C_M(f, h) \|_\infty \le \Lambda_M\, \| f \|_\infty \qquad \text{for all } f \in C(\mathbb{R}). \qquad (8)$$
Stenger '93 proves the inequality
$$\Lambda_M = \max_{x \in \mathbb{R}} \sum_{k=-M}^{M} | S_{k,h}(x) | \le \frac{2}{\pi}\,\big(3 + \log M\big). \qquad (9)$$
For each fixed $l \in \{1, \ldots, d-1\}$, choose $\zeta_l \in I_l$ and define the remaining parameter set by
$$Y_l := I_1 \times \cdots \times I_{l-1} \times I_{l+1} \times \cdots \times I_d \subset \mathbb{R}^{d-1}.$$
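A univariate Sinc interpolant is a few lines of numpy. The sketch below is illustrative and not from the lecture; the test function and the step size $h = 1/\sqrt{M}$ are chosen only for demonstration (the theorem that follows uses a different, decay-dependent step):

```python
import numpy as np

def sinc_interpolant(f, M, h):
    """C_M(f, h)(x) = sum_{k=-M..M} f(kh) S_{k,h}(x), where
    S_{k,h}(x) = sin(pi (x - kh)/h) / (pi (x - kh)/h); np.sinc supplies the pi."""
    k = np.arange(-M, M + 1)
    fk = f(k * h)

    def C_M(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        S = np.sinc((x[:, None] - k[None, :] * h) / h)   # S_{k,h}(x_i)
        return S @ fk

    return C_M

f = lambda x: 1.0 / np.cosh(x)      # analytic in a strip, exponential decay
M = 32
h = 1.0 / np.sqrt(M)                # illustrative step choice
CM = sinc_interpolant(f, M, h)

nodes = np.arange(-M, M + 1) * h
assert np.allclose(CM(nodes), f(nodes))   # cardinal property: S_{k,h}(jh) = delta_kj
x = np.linspace(-2.0, 2.0, 41)
err = np.max(np.abs(CM(x) - f(x)))
assert err < 0.05
```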
Sinc-approximation of multi-variate functions (cont.)

Introduce the univariate (parameter-dependent) function $F_l(\cdot, y) : I_l \to \mathbb{R}$, $y \in Y_l$, which is the restriction of $f$ onto $I_l$.

Thm (Hackbusch, Khoromskij). For each $l = 1, \ldots, d-1$ assume that for any fixed $y \in Y_l$, $F_l(\cdot, y)$ satisfies:
(a) $F_l(\cdot, y) \in H^1(D_\delta)$ with $N(F_l, D_\delta) \le N_l < \infty$ uniformly in $y$;
(b) $F_l(\cdot, y)$ has hyper-exponential decay with $a = 1$ and some $C, b > 0$.
Then, for all $y \in Y_l$, the optimal choice $h := \frac{\log M}{M}$ yields
$$\| f - C_M(f, h) \| \le \frac{C}{2\pi\delta}\, \Lambda_M^{d-2} \max_{l=1,\ldots,d-1} N_l\; e^{-\pi\delta M / \log M}, \qquad (10)$$
with $\Lambda_M$ defined by (9).
Proof of the Sinc-interpolation error

The multiple use of (8) and the triangle inequality leads to
$$\| f - C_M f \| \le \| f - C_M^1 f \| + \| C_M^1 (f - C_M^2 \cdots C_M^{d-1} f) \|$$
$$\le \| f - C_M^1 f \| + \| C_M^1 (f - C_M^2 f) \| + \| C_M^1 C_M^2 (f - C_M^3 f) \| + \cdots + \| C_M^1 \cdots C_M^{d-2} (f - C_M^{d-1} f) \|$$
$$\le \big[ N_1 + \Lambda_M N_2 + \cdots + \Lambda_M^{d-2} N_{d-1} \big]\, \frac{1}{2\pi\delta}\, e^{-\pi\delta M / \log M}$$
$$\le \frac{\Lambda_M^{d-1} - 1}{\Lambda_M - 1}\, \max_{l=1,\ldots,d-1} N_l\; \frac{1}{2\pi\delta}\, e^{-\pi\delta M / \log M}.$$
Note that $1 + \Lambda_M + \cdots + \Lambda_M^{d-2} = \frac{\Lambda_M^{d-1} - 1}{\Lambda_M - 1} \lesssim \Lambda_M^{d-2}$, hence (10) follows.
Data-sparse representation of high-order tensors

Def. A $d$-th order tensor on $\mathcal{I}^d = I_1 \times \cdots \times I_d$ is an array
$$A := [a_{i_1 \ldots i_d}] \in \mathbb{R}^{\mathcal{I}^d}, \qquad p, d, n \in \mathbb{N},$$
with multi-indices $i_l = (i_{l,1}, \ldots, i_{l,p}) \in I_l = I^1 \times \cdots \times I^p$ ($l = 1, \ldots, d$) and $i_{l,m} \in \{1, \ldots, n\}$ for $m = 1, \ldots, p$ (typically $p = 1, 2, 3$). The $L^2$ inner product of tensors induces the Frobenius norm:
$$\langle A, B \rangle := \sum_{(i_1 \ldots i_d) \in \mathcal{I}^d} a_{i_1 \ldots i_d}\, b_{i_1 \ldots i_d}, \qquad \| A \|_F := \sqrt{\langle A, A \rangle}.$$
$A \in \mathbb{R}^{\mathcal{I}^d}$ has $|\mathcal{I}^d| = n^{dp}$ entries. How to remove $d$ from the exponential?
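In numpy these definitions are one-liners; the check below (illustrative, not from the lecture) confirms that the entrywise inner product and the induced Frobenius norm agree with the 2-norm of the flattened tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))     # a 3rd-order tensor (d = 3, p = 1)
B = rng.standard_normal((3, 4, 5))

inner = np.sum(A * B)                  # <A, B>: sum over all multi-indices
fro = np.sqrt(np.sum(A * A))           # ||A||_F = sqrt(<A, A>)

assert np.isclose(fro, np.linalg.norm(A.ravel()))   # same as the flattened 2-norm
assert A.size == 3 * 4 * 5                          # entry count grows as n^d
```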
Data-sparse representation of high-order tensors (cont.)

Key ingredient: representation by (sums of) rank-1 tensors,
$$A = V^{(1)} \otimes \cdots \otimes V^{(d)}, \qquad a_{i_1 \ldots i_d} = v^{(1)}_{i_1} \cdots v^{(d)}_{i_d},$$
with low-dimensional (canonical) components $V^{(l)} = \{ v^{(l)}_{i_l} \} \in \mathbb{R}^{n^p}$. Complexity: $d n^p$. Standard MLA has linear scaling in $d$.

Ex. Let $A = a_1 \otimes a_2$, $B = b_1 \otimes b_2$ with $a_i, b_i \in \mathbb{R}^n$ ($d = 2$, $p = 1$). Then
$$\langle A, B \rangle = (a_1, b_1)(a_2, b_2), \qquad \| A \|_F = \sqrt{(a_1, a_1)(a_2, a_2)} = \| a_1 \|\, \| a_2 \|,$$
where the latter corresponds to the Frobenius norm of a rank-1 matrix.
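The factorization of the inner product is exactly what makes rank-1 formats cheap: a $d$-dimensional sum collapses into $d$ one-dimensional sums. A numpy check of the $d = 2$ example (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a1, a2, b1, b2 = rng.standard_normal((4, n))   # four random vectors in R^n

# Rank-1 tensors A = a1 (x) a2, B = b1 (x) b2: for d = 2 these are outer products.
A = np.outer(a1, a2)
B = np.outer(b1, b2)

# <A, B> = (a1, b1)(a2, b2): the 2D sum factorizes into two 1D sums.
assert np.isclose(np.sum(A * B), np.dot(a1, b1) * np.dot(a2, b2))

# ||A||_F = ||a1|| * ||a2||: the Frobenius norm of a rank-1 matrix.
assert np.isclose(np.linalg.norm(A), np.linalg.norm(a1) * np.linalg.norm(a2))
```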
Rank-$(r_1, \ldots, r_d)$ Tucker model

Tucker model ($\mathcal{T}_r$), with orthonormalised sets $V^{(l)}_{k_l} \in \mathbb{R}^{I_l}$:
$$A_{(r)} = \sum_{k_1=1}^{r_1} \cdots \sum_{k_d=1}^{r_d} b_{k_1 \ldots k_d}\; V^{(1)}_{k_1} \otimes \cdots \otimes V^{(d)}_{k_d} \in \mathbb{R}^{I_1 \times \cdots \times I_d}. \qquad (11)$$
The core tensor $B = \{ b_k \} \in \mathbb{R}^{r_1 \times \cdots \times r_d}$ is not unique (only defined up to rotations). Complexity ($p = 1$): $r^d + rdn \ll n^d$ with $r = \max r_l \ll n$.

[Figure: visualization of the Tucker model for $d = 3$ — the $I_1 \times I_2 \times I_3$ tensor $A$ is obtained from the $r_1 \times r_2 \times r_3$ core $B$ via the factor matrices $V^{(1)}, V^{(2)}, V^{(3)}$.]
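Formula (11) is a multi-way contraction of the core with the factor matrices, which `np.einsum` expresses directly. An illustrative numpy sketch (not part of the lecture) for $d = 3$, also showing that orthonormal factors let one project the core back out:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, d = 20, 3, 3                      # mode size n, Tucker ranks r_1 = r_2 = r_3 = r

# Orthonormal factor columns V^(l) in R^{n x r} and a core tensor B in R^{r x r x r}.
V = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(d)]
B = rng.standard_normal((r, r, r))

# A_(r) = sum_{k1,k2,k3} b_{k1 k2 k3}  V1_{k1} (x) V2_{k2} (x) V3_{k3}
A = np.einsum('abc,ia,jb,kc->ijk', B, V[0], V[1], V[2])
assert A.shape == (n, n, n)

# Orthonormality makes the core recoverable by projecting A back onto the factors.
B_rec = np.einsum('ijk,ia,jb,kc->abc', A, V[0], V[1], V[2])
assert np.allclose(B_rec, B)

# Storage: r^d + d*r*n numbers instead of n^d.
assert r**d + d * r * n < n**d
```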
CANDECOMP/PARAFAC (CP) tensor format

CP model ($\mathcal{C}_r$). Approximate $A$ by a sum of rank-1 tensors,
$$A_{(r)} = \sum_{k=1}^{r} b_k\; V^{(1)}_k \otimes \cdots \otimes V^{(d)}_k \approx A, \qquad b_k \in \mathbb{R},$$
with normalised $V^{(l)}_k \in \mathbb{R}^{n^p}$. Uniqueness is due to J. Kruskal '77. Complexity: $r + rdn$. The minimal such number $r$ is called the tensor rank of $A_{(r)}$.

[Figure 1: visualization of the CP model for $d = 3$ — $A \approx \sum_{k=1}^{r} b_k\, V^{(1)}_k \otimes V^{(2)}_k \otimes V^{(3)}_k$.]
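The CP sum shares one summation index $k$ across all modes, in contrast to the Tucker model's $d$ independent indices. An illustrative numpy sketch (not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, d = 10, 4, 3

# Normalised CP factors V^(l)_k in R^n (columns) and weights b_k.
V = [rng.standard_normal((n, r)) for _ in range(d)]
V = [v / np.linalg.norm(v, axis=0) for v in V]
b = rng.standard_normal(r)

# A_(r) = sum_k b_k  V1_k (x) V2_k (x) V3_k : a single shared index k.
A = np.einsum('k,ik,jk,lk->ijl', b, V[0], V[1], V[2])

assert A.shape == (n, n, n)
# Storage: r + d*r*n numbers instead of n^d.
assert r + d * r * n < n**d
```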
Two-level and mixed models

Two-level Tucker model $\mathcal{T}_{(U,r,q)}$:
$$A_{(r,q)} = B \times_1 V^{(1)} \times_2 V^{(2)} \cdots \times_d V^{(d)} \in \mathcal{T}_{(U,r,q)},$$
where
1. $B \in \mathbb{R}^{r_1 \times \cdots \times r_d}$ is retrieved by the rank-$q$ CP model $\mathcal{C}_{(r,q)}$;
2. $V^{(l)} = [V^{(l)}_1 V^{(l)}_2 \cdots V^{(l)}_{r_l}] \subset \{U\}$, $l = 1, \ldots, d$, where $\{U\}$ spans a fixed (uniform/adaptive) basis.
The core storage drops from $O(r^d)$, $r = \max_{l \le d} r_l$, to $O(dqr)$ (independent of $n$!).

Mixed model $\mathcal{M}_{C,T}$: $A = A_1 + A_2$ with $A_1 \in \mathcal{C}_{r_1}$, $A_2 \in \mathcal{T}_{r_2}$. Applies to ill-conditioned tensors.
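The two-level idea, replacing the explicit $r^d$ core by its own rank-$q$ CP representation, can be sketched in numpy as follows (illustrative only; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, q, d = 50, 5, 3, 3

# Mode factors V^(l) (n x r) plus a rank-q CP representation of the r x r x r core.
V = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(d)]
W = [rng.standard_normal((r, q)) for _ in range(d)]    # CP factors of the core
c = rng.standard_normal(q)

B = np.einsum('k,ak,bk,ck->abc', c, W[0], W[1], W[2])  # core assembled from its CP form
A = np.einsum('abc,ia,jb,kc->ijk', B, V[0], V[1], V[2])
assert A.shape == (n, n, n)

full_tucker_mem = r**d + d * r * n            # explicit core + factors
two_level_mem = d * r * q + q + d * r * n     # CP core + factors
assert two_level_mem < full_tucker_mem < n**d
```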
Challenge of multi-factor analysis

There is little analogy between the cases $d = 2$ and $d \ge 3$. Paradigm: linear algebra vs. multi-linear algebra (MLA). CP/Tucker tensor-product models have plenty of merits:
1. $A_{(r)}$ is represented with low cost $drn$ (resp. $drn + r^d$) $\ll n^d$.
2. The $V^{(l)}_k$ can be represented in data-sparse form: H-matrix (HKT), wavelet-based (WKT), uniform basis.
3. The core tensor $B = \{ b_k \}$ can be sparsified as well.
4. Efficient numerical MLA (practical experience).

Remark. The CP decomposition (unique!) cannot be retrieved by rotation and truncation of the Tucker model: $\mathcal{C}_r = \mathcal{T}_r$ if $r = 1$, but $\mathcal{C}_r \ne \mathcal{T}_r$ if $r \ge 2$.
Examples of $\mathcal{T}_{(U,r,q)}$-models

(I) Tensor-product sinc-interpolation: analytic functions with point singularities, $r = (r, \ldots, r)$, $r = q = O(\log n \cdot |\log \varepsilon|)$; cost $O(dqr)$.
(II) Sparse grids: regularity of mixed derivatives, $r = (n_1, \ldots, n_d)$, hyperbolic cross, $q = n \log^d n$; cost $O(n \log^d n)$.
(III) Adaptive two-level approximation: Tucker + CP decomposition of $B$ with $q \le r$; cost $O(dqn)$.

Structured Kronecker-product models ($d$-th order tensors of size $n^d$):

Model             Notation          Memory / A*x          A*B               Comp. tools
Canonical CP      C_r               drn                   drn^2             ALS/Newton
HKT-CP            C_{H,r}           drn log^q n           drn log^q n       Analytic (quadr.)
Nested CP         C_{T(I),L}        dr log^d n + rd       dr log^d n        SVD/QR/orthog. iter.
Tucker            T_r               r^d + drn             --                Orthogonal ALS
Two-level Tucker  T_{(U,r,q)}       drq / drr_0 qn^2      dr^2 q^2 (mem.)   Analyt. (interp.) + CP
Properties of the Kronecker product

The Kronecker product (KP) $A \otimes B$ of two matrices $A = [a_{ij}] \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{h \times g}$ is the $mh \times ng$ matrix with block representation $[a_{ij} B]$ (corresponding to $p = 2$).

1. Let $C \in \mathbb{R}^{s \times t}$; then the KP satisfies the associative law
$$(A \otimes B) \otimes C = A \otimes (B \otimes C),$$
and therefore we do not use brackets. The matrix $A \otimes B \otimes C := (A \otimes B) \otimes C$ has $mhs$ rows and $ngt$ columns.

2. Let $C \in \mathbb{R}^{n \times r}$ and $D \in \mathbb{R}^{g \times s}$; then the standard matrix-matrix product in the Kronecker format takes the form
$$(A \otimes B)(C \otimes D) = (AC) \otimes (BD).$$
The corresponding extension to $q$-th order tensors is
$$(A_1 \otimes \cdots \otimes A_q)(B_1 \otimes \cdots \otimes B_q) = (A_1 B_1) \otimes \cdots \otimes (A_q B_q).$$
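The associative law and the mixed-product rule are both directly checkable with `np.kron`. An illustrative numpy check (not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((3, 2))
D = rng.standard_normal((5, 4))
E = rng.standard_normal((2, 2))

# Associative law: (A (x) B) (x) E = A (x) (B (x) E)
assert np.allclose(np.kron(np.kron(A, B), E), np.kron(A, np.kron(B, E)))

# Mixed-product rule: (A (x) B)(C (x) D) = (AC) (x) (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)
assert lhs.shape == (2 * 4, 2 * 4)   # (mh) x (rs)
```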
Properties of the Kronecker product (cont.)

3. We have the distributive law
$$(A + B) \otimes (C + D) = A \otimes C + A \otimes D + B \otimes C + B \otimes D.$$
4. Rank relation: $\operatorname{rank}(A \otimes B) = \operatorname{rank}(A)\operatorname{rank}(B)$.

Ex. In general $A \otimes B \ne B \otimes A$. What is the condition on $A$ and $B$ that provides $A \otimes B = B \otimes A$?

Invariance of some matrix properties:
(1) If $A$ and $B$ are diagonal, then $A \otimes B$ is also diagonal, and conversely (if $A \otimes B \ne 0$).
(2) Let $A$ and $B$ be Hermitian resp. unitary matrices ($A^\ast = A$ resp. $A^{-1} = A^\ast$). Then $A \otimes B$ is of the corresponding type.
(3) For $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$:
$$\det(A \otimes B) = (\det A)^m\, (\det B)^n.$$
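The determinant and rank relations can be verified on random matrices; note the cross-over of exponents in the determinant formula ($m$ on $\det A$, $n$ on $\det B$). Illustrative numpy check (not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

# det(A (x) B) = det(A)^m * det(B)^n for A n-by-n and B m-by-m.
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** m * np.linalg.det(B) ** n)

# rank(A (x) B) = rank(A) * rank(B): force A to be rank-deficient and check.
A_sing = A.copy()
A_sing[2] = A_sing[0] + A_sing[1]     # third row dependent -> rank(A_sing) = 2
assert (np.linalg.matrix_rank(np.kron(A_sing, B))
        == np.linalg.matrix_rank(A_sing) * np.linalg.matrix_rank(B))
```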
Kronecker product: matrix operations

Thm. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$ be invertible matrices. Then $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$.
Proof. Since $\det(A) \ne 0$ and $\det(B) \ne 0$, property (3) above gives $\det(A \otimes B) \ne 0$. Thus $(A \otimes B)^{-1}$ exists, and
$$(A^{-1} \otimes B^{-1})(A \otimes B) = (A^{-1} A) \otimes (B^{-1} B) = I_{nm}.$$
Lem. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$ be unitary matrices. Then $A \otimes B$ is a unitary matrix.
Proof. Since $A^\ast = A^{-1}$ and $B^\ast = B^{-1}$, we have
$$(A \otimes B)^\ast = A^\ast \otimes B^\ast = A^{-1} \otimes B^{-1} = (A \otimes B)^{-1}.$$
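Both statements are easy to confirm numerically. The sketch below is illustrative (the diagonal shift is only a cheap way to make the random matrices safely invertible):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # shifted: comfortably invertible
B = rng.standard_normal((2, 2)) + 3.0 * np.eye(2)

# (A (x) B)^{-1} = A^{-1} (x) B^{-1}
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))

# Orthogonality is inherited: Q1 (x) Q2 is orthogonal when Q1 and Q2 are.
Q1 = np.linalg.qr(rng.standard_normal((3, 3)))[0]
Q2 = np.linalg.qr(rng.standard_normal((2, 2)))[0]
Q = np.kron(Q1, Q2)
assert np.allclose(Q.T @ Q, np.eye(6))
```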
Kronecker product: matrix operations (cont.)

Define the commutator $[A, B] := AB - BA$.
Lem. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$. Then $[A \otimes I_m,\ I_n \otimes B] = 0 \in \mathbb{R}^{nm \times nm}$.
Proof.
$$[A \otimes I_m,\ I_n \otimes B] = (A \otimes I_m)(I_n \otimes B) - (I_n \otimes B)(A \otimes I_m) = A \otimes B - A \otimes B = 0.$$
Rem. Let $A, B \in \mathbb{R}^{n \times n}$ and $C, D \in \mathbb{R}^{m \times m}$ with $[A, B] = 0$ and $[C, D] = 0$. Then $[A \otimes C, B \otimes D] = 0$.
Proof. Apply the identity $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.
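The commutation of $A \otimes I_m$ and $I_n \otimes B$ reflects that the two factors act on different Kronecker modes; both orderings produce $A \otimes B$ by the mixed-product rule. An illustrative numpy check (not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

# [A (x) I_m, I_n (x) B] = 0: the two factors act on different modes.
P = np.kron(A, np.eye(m))
Q = np.kron(np.eye(n), B)
assert np.allclose(P @ Q, Q @ P)

# Both orderings equal A (x) B, by the mixed-product rule.
assert np.allclose(P @ Q, np.kron(A, B))
```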
From Matrix to Tensor Charles F. Van Loan Department of Computer Science January 28, 2016 From Matrix to Tensor From Tensor To Matrix 1 / 68 What is a Tensor? Instead of just A(i, j) it s A(i, j, k) or
More informationLinear Algebra Practice Final
. Let (a) First, Linear Algebra Practice Final Summer 3 3 A = 5 3 3 rref([a ) = 5 so if we let x 5 = t, then x 4 = t, x 3 =, x = t, and x = t, so that t t x = t = t t whence ker A = span(,,,, ) and a basis
More informationLinear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.
Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:
More information(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =
. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)
More informationMATH 1210 Assignment 4 Solutions 16R-T1
MATH 1210 Assignment 4 Solutions 16R-T1 Attempt all questions and show all your work. Due November 13, 2015. 1. Prove using mathematical induction that for any n 2, and collection of n m m matrices A 1,
More informationClass notes: Approximation
Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R
More informationCS 143 Linear Algebra Review
CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationLinear Algebra and Dirac Notation, Pt. 3
Linear Algebra and Dirac Notation, Pt. 3 PHYS 500 - Southern Illinois University February 1, 2017 PHYS 500 - Southern Illinois University Linear Algebra and Dirac Notation, Pt. 3 February 1, 2017 1 / 16
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i
More informationMATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS
MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS There will be eight problems on the final. The following are sample problems. Problem 1. Let F be the vector space of all real valued functions on
More information[ Here 21 is the dot product of (3, 1, 2, 5) with (2, 3, 1, 2), and 31 is the dot product of
. Matrices A matrix is any rectangular array of numbers. For example 3 5 6 4 8 3 3 is 3 4 matrix, i.e. a rectangular array of numbers with three rows four columns. We usually use capital letters for matrices,
More informationNon commutative Khintchine inequalities and Grothendieck s theo
Non commutative Khintchine inequalities and Grothendieck s theorem Nankai, 2007 Plan Non-commutative Khintchine inequalities 1 Non-commutative Khintchine inequalities 2 µ = Uniform probability on the set
More informationChapter 2 Notes, Linear Algebra 5e Lay
Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationMATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.
MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:
More informationHOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)
HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe
More informationSTA141C: Big Data & High Performance Statistical Computing
STA141C: Big Data & High Performance Statistical Computing Lecture 5: Numerical Linear Algebra Cho-Jui Hsieh UC Davis April 20, 2017 Linear Algebra Background Vectors A vector has a direction and a magnitude
More informationConcentration inequalities for non-lipschitz functions
Concentration inequalities for non-lipschitz functions University of Warsaw Berkeley, October 1, 2013 joint work with Radosław Adamczak (University of Warsaw) Gaussian concentration (Sudakov-Tsirelson,
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationChapter SSM: Linear Algebra Section Fails to be invertible; since det = 6 6 = Invertible; since det = = 2.
SSM: Linear Algebra Section 61 61 Chapter 6 1 2 1 Fails to be invertible; since det = 6 6 = 0 3 6 3 5 3 Invertible; since det = 33 35 = 2 7 11 5 Invertible; since det 2 5 7 0 11 7 = 2 11 5 + 0 + 0 0 0
More informationMATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL
MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left
More informationElementary maths for GMT
Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1
More informationMATH 167: APPLIED LINEAR ALGEBRA Least-Squares
MATH 167: APPLIED LINEAR ALGEBRA Least-Squares October 30, 2014 Least Squares We do a series of experiments, collecting data. We wish to see patterns!! We expect the output b to be a linear function of
More informationPart III Symmetries, Fields and Particles
Part III Symmetries, Fields and Particles Theorems Based on lectures by N. Dorey Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often
More informationMatrices Gaussian elimination Determinants. Graphics 2009/2010, period 1. Lecture 4: matrices
Graphics 2009/2010, period 1 Lecture 4 Matrices m n matrices Matrices Definitions Diagonal, Identity, and zero matrices Addition Multiplication Transpose and inverse The system of m linear equations in
More informationBasic Concepts in Matrix Algebra
Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More information