Most tensor problems are NP hard
1 Most tensor problems are NP hard (preliminary report)
Lek-Heng Lim, University of California, Berkeley
August 25, 2009 (Joint work with: Chris Hillar, MSRI)
L.H. Lim (Berkeley) Most tensor problems are NP hard August 25, 2009
2 Innocent looking problem

Problem (Minimal rank-1 matrix subspace). Let A_1, ..., A_l ∈ R^{m×n}. Find the smallest r such that there exist rank-1 matrices u_1 v_1^T, ..., u_r v_r^T with A_1, ..., A_l ∈ span{u_1 v_1^T, ..., u_r v_r^T}.

NP-complete over F_q, NP-hard over Q [Håstad; 90]. Slight extension: NP-hard over R and C [L & Hillar; 09].

Convex relaxation along the lines of compressive sensing? As ‖·‖_1 is to ‖·‖_0, the Ky Fan/nuclear/Schatten/trace norm is to rank:
‖A‖_* = Σ_{i=1}^{rank(A)} σ_i(A).

[Fazel, Hindi, Boyd; 01], [Recht, Fazel, Parrilo; 09], [Candès, Recht; 09], [Zhu, So, Ye; 09], [Toh, Yun; 09].
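Not on the slide: the rank/nuclear-norm pairing can be sketched in a few lines of NumPy (the matrix below is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-2 matrix: the sum of two rank-1 outer products u v^T.
u1, v1 = rng.standard_normal(6), rng.standard_normal(5)
u2, v2 = rng.standard_normal(6), rng.standard_normal(5)
A = np.outer(u1, v1) + np.outer(u2, v2)

# rank counts the nonzero singular values; the Ky Fan / nuclear /
# Schatten / trace norm sums them, giving a convex surrogate for rank.
sigma = np.linalg.svd(A, compute_uv=False)
rank = np.linalg.matrix_rank(A)
nuclear_norm = sigma.sum()
```

The nuclear norm is a norm (hence convex), which is what makes it usable in the relaxations cited above, whereas rank itself is not.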
3 Tensors as hypermatrices

Up to a choice of bases on U, V, W, a 3-tensor A ∈ U ⊗ V ⊗ W may be represented as a hypermatrix A = [a_ijk]_{i,j,k=1}^{l,m,n} ∈ R^{l×m×n}, where dim(U) = l, dim(V) = m, dim(W) = n, if (1) we give it coordinates, and (2) we ignore covariance and contravariance.

Henceforth, tensor = hypermatrix. Scalar = 0-tensor, vector = 1-tensor, matrix = 2-tensor.

A cubical 3-tensor [a_ijk] ∈ R^{n×n×n} is symmetric if a_ijk = a_ikj = a_jik = a_jki = a_kij = a_kji. The set of symmetric 3-tensors is denoted S^3(R^n).
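Not on the slide: a quick NumPy check of the symmetry condition. Symmetrizing a cubical 3-tensor over all six index permutations lands it in S^3(R^n):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# Symmetrize a random cubical 3-tensor by averaging over all six
# permutations of its three indices.
T = rng.standard_normal((4, 4, 4))
S = sum(np.transpose(T, p) for p in permutations(range(3))) / 6

# S is symmetric: invariant under every axis permutation.
is_symmetric = all(np.allclose(S, np.transpose(S, p))
                   for p in permutations(range(3)))
```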
4 Examples

Higher order derivatives of real-valued multivariate functions:
D^(p) f(x) = [∂^p f / (∂x_{j_1} ∂x_{j_2} ⋯ ∂x_{j_p})]_{j_1,...,j_p=1}^n.
grad f(x) ∈ R^n, Hess f(x) ∈ R^{n×n}, ..., D^(p) f(x) ∈ R^{n×⋯×n}.

Moments of a vector-valued random variable x = (x_1, ..., x_n):
S_p(x) = [E(x_{j_1} x_{j_2} ⋯ x_{j_p})]_{j_1,...,j_p=1}^n.

Cumulants of a vector-valued random variable x = (x_1, ..., x_n):
K_p(x) = [Σ_{A_1 ⊔ ⋯ ⊔ A_q = {j_1,...,j_p}} (−1)^{q−1} (q−1)! E(Π_{j∈A_1} x_j) ⋯ E(Π_{j∈A_q} x_j)]_{j_1,...,j_p=1}^n.
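Not on the slide: the p = 3 moment tensor can be estimated from samples by averaging outer products; the skewed first coordinate below is an illustrative choice (an Exp(1) variable recentered to mean zero has third moment 2):

```python
import numpy as np

rng = np.random.default_rng(2)

# N samples of a 3-dimensional random vector with one skewed marginal.
N = 10000
x = rng.standard_normal((N, 3))
x[:, 0] = rng.exponential(size=N) - 1.0  # mean zero, but skewed

# Empirical third moment tensor S_3(x)[i,j,k] = E(x_i x_j x_k),
# estimated by averaging outer products over the sample.
S3 = np.einsum('ni,nj,nk->ijk', x, x, x) / N
```

By construction S3 is a symmetric 3-tensor, and its (0,0,0) entry estimates the third moment of the skewed coordinate.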
5 Blaming the math

Wired: Gaussian copulas for CDOs. NYT: normal market in VaR.

January 4, 2009. Risk Mismanagement. By JOE NOCERA. "THERE AREN'T MANY widely told anecdotes about the current financial crisis, at least not yet, but there's one that made the rounds in 2007, back when the big investment banks were first starting to write down billions of dollars in mortgage-backed derivatives and other so-called toxic securities. This was well before..."
6 Why not Gaussian

Log characteristic function:
log E(exp(i⟨t, x⟩)) = Σ_{|α|≥1} i^{|α|} κ_α(x) t^α / α!.

Gaussian assumption equivalent to quadratic approximation: if x is multivariate Gaussian, then the series terminates at |α| = 2:
log E(exp(i⟨t, x⟩)) = i⟨E(x), t⟩ − (1/2) t^T Cov(x) t.

K_1(x) mean (vector), K_2(x) variance (matrix), K_3(x) skewness (3-tensor), K_4(x) kurtosis (4-tensor), ....

Non-Gaussian data: not enough to look at just mean and covariance.
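Not on the slide: a minimal numerical illustration of why mean and covariance are not enough. Both samples below have mean 0 and variance 1, but the third cumulant (for a mean-zero scalar, κ_3 = E(x³)) separates them:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000

g = rng.standard_normal(n)            # Gaussian
e = rng.exponential(size=n) - 1.0     # mean 0, variance 1, but skewed

# Third cumulant of a mean-zero variable: kappa_3 = E(x^3).
# It vanishes for Gaussians but equals 2 for the recentered Exp(1),
# so mean and covariance alone cannot distinguish these samples.
k3_gauss = np.mean(g**3)
k3_exp = np.mean(e**3)
```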
7 Why not copulas

Nassim Taleb: "Anything that relies on correlation is charlatanism."

Even if the marginals are normal, the dependence might not be.

[Figure: 1000 simulated Clayton(3)-dependent N(0,1) values; axes X1 ~ N(0,1) and X2 ~ N(0,1).]

For heavy tail phenomena, important to examine tensor-valued quantities like kurtosis.
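Not on the slide: a sketch of the figure's setup, assuming the standard conditional-inversion sampler for the bivariate Clayton copula (the exact simulation behind the plot is not given); `statistics.NormalDist` supplies the inverse normal CDF:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(10)
inv_phi = np.vectorize(NormalDist().inv_cdf)

# Sample a bivariate Clayton(theta) copula by conditional inversion,
# then apply the inverse normal CDF so that each marginal is N(0,1)
# while the joint dependence stays Clayton, i.e. non-Gaussian.
theta, m = 3.0, 1000
u1 = rng.uniform(size=m)
t = rng.uniform(size=m)
u2 = ((t ** (-theta / (theta + 1)) - 1) * u1 ** (-theta) + 1) ** (-1 / theta)

z1, z2 = inv_phi(u1), inv_phi(u2)
```

Both z1 and z2 are exactly standard normal in distribution, yet the pair clusters sharply in the lower tail, which no correlation coefficient captures.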
8 Why not VaR

Paul Wilmott: "The relationship between two assets can never be captured by a single scalar quantity."

Multivariate f : R^n → R:
f(x) = a_0 + a_1^T x + x^T A_2 x + A_3(x, x, x) + ⋯ + A_k(x, ..., x) + ⋯.

Hooke's law in 1D: x extension, F force, k spring constant, F = kx.

Hooke's law in 3D: x = (x_1, x_2, x_3), elasticity tensor C ∈ R^{3×3×3×3}, stress Σ ∈ R^{3×3}, strain Γ ∈ R^{3×3}:
σ_ij = Σ_{k,l=1}^3 c_ijkl γ_kl.
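Not on the slide: the 3D Hooke's law contraction σ_ij = Σ_kl c_ijkl γ_kl is a one-line `einsum`; the isotropic elasticity tensor with Lamé parameters is an illustrative choice, and the closed form it reduces to serves as a check:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative isotropic elasticity tensor (Lamé parameters lam, mu):
# c_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk).
lam, mu = 1.0, 0.5
d = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d)
             + np.einsum('il,jk->ijkl', d, d)))

# Symmetric strain Gamma; stress via sigma_ij = sum_kl c_ijkl gamma_kl.
G = rng.standard_normal((3, 3))
Gamma = (G + G.T) / 2
Sigma = np.einsum('ijkl,kl->ij', C, Gamma)

# The isotropic case reduces to sigma = lam*tr(Gamma)*I + 2*mu*Gamma.
Sigma_closed = lam * np.trace(Gamma) * d + 2 * mu * Gamma
```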
9 Numerical linear algebra

F = R or C. A ∈ F^{m×n}, b ∈ F^m, and r ≤ min{m, n}.

Rank and numerical rank: determine rank(A).
Linear system of equations: determine if Ax = b has a solution and, if so, determine a solution x ∈ F^n.
Linear least squares problem: determine an x ∈ F^n that minimizes ‖Ax − b‖_2.
Spectral norm: determine the value of ‖A‖_{2,2} := max_{‖x‖_2=1} ‖Ax‖_2 = σ_max(A).
Eigenvalue problem: if m = n, determine λ ∈ F and nonzero x ∈ F^n with Ax = λx.
Singular value problem: determine σ ∈ F and nonzero x ∈ F^n, y ∈ F^m with Ax = σy, A^* y = σx.
Low rank approximation: determine A_r ∈ F^{m×n} with rank(A_r) ≤ r and ‖A − A_r‖_F = min_{rank(B)≤r} ‖A − B‖_F.
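Not on the slide: every problem in this list is a standard library call for matrices (the contrast with the tensor case on the next slides is the point); a NumPy roll call on an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)

rank = np.linalg.matrix_rank(A)              # rank
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]  # linear least squares
spec = np.linalg.norm(A, 2)                  # spectral norm = sigma_max(A)
U, s, Vt = np.linalg.svd(A)                  # singular value problem
evals = np.linalg.eigvals(A.T @ A)           # eigenvalue problem (square)

# Low rank approximation by truncating the SVD.
r = 2
A_r = U[:, :r] * s[:r] @ Vt[:r]
```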
10 Numerical multilinear algebra?

Computing the spectral norm of a 3-tensor:
sup_{x,y,z≠0} |A(x, y, z)| / (‖x‖_2 ‖y‖_2 ‖z‖_2).

Computing the spectral norm of a symmetric 3-tensor:
sup_{x≠0} |S(x, x, x)| / ‖x‖_2^3.

Computing a best rank-1 approximation to a tensor:
min_{x,y,z} ‖A − x ⊗ y ⊗ z‖_F.

Computing a best rank-1 approximation to a symmetric tensor:
min_x ‖S − x ⊗ x ⊗ x‖_F.
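Not on the slide: a common heuristic (not guaranteed to find the global optimum, which is the NP-hard part) for the rank-1 approximation problem is alternating least squares: with two factors fixed, the optimal third is a closed-form contraction, so the residual never increases. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 4, 5))

# Alternating least squares for min ||A - x ⊗ y ⊗ z||_F: each update
# solves an exact least squares problem in one factor, so the recorded
# residuals are monotone nonincreasing.
x, y, z = (rng.standard_normal(3), rng.standard_normal(4),
           rng.standard_normal(5))
resids = []
for _ in range(100):
    x = np.einsum('ijk,j,k->i', A, y, z) / ((y @ y) * (z @ z))
    y = np.einsum('ijk,i,k->j', A, x, z) / ((x @ x) * (z @ z))
    z = np.einsum('ijk,i,j->k', A, x, y) / ((x @ x) * (y @ y))
    resids.append(np.linalg.norm(A - np.einsum('i,j,k->ijk', x, y, z)))
```

Monotone convergence of the residual is all this buys; certifying global optimality is exactly what the hardness results rule out in general.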
11 Numerical multilinear algebra?

Determine if a given value is a singular value of a 3-tensor:
crit_{x,y,z≠0} A(x, y, z) / (‖x‖_3 ‖y‖_3 ‖z‖_3).

Determine if a given value is an eigenvalue of a symmetric 3-tensor:
crit_{x≠0} S(x, x, x) / ‖x‖_3^3.

Solving a system of bilinear equations in the exact sense, A(x, y, I) = b, or approximating it in the least-squares sense, min_{x,y} ‖A(x, y, I) − b‖_2.

Shorthand: A(x, y, z) = Σ_{i,j,k=1}^{l,m,n} a_ijk x_i y_j z_k; A(x, y, I) = [Σ_{i,j=1}^{l,m} a_ijk x_i y_j]_{k=1}^n.
12 Tensor rank = innocent looking problem

Segre outer product: u ⊗ v ⊗ w := [u_i v_j w_k]_{i,j,k=1}^{l,m,n}. A decomposable tensor is one that can be expressed as u ⊗ v ⊗ w.

Definition (Hitchcock, 1927). Let A ∈ R^{l×m×n}. Tensor rank is defined as
rank(A) := min{ r : A = Σ_{i=1}^r σ_i u_i ⊗ v_i ⊗ w_i }.

Under the identification U ⊗ V ⊗ W ≅ Hom(U*, V ⊗ W), write A = [A_1, ..., A_l] where A_1, ..., A_l ∈ R^{m×n}. Then
rank(A) = min{ r : A_1, ..., A_l ∈ span{u_1 v_1^T, ..., u_r v_r^T} }.
13 Best low rank approximation of a matrix

Given A ∈ R^{m×n}, want argmin_{rank(B)≤r} ‖A − B‖. More precisely, find σ_i, u_i, v_i, i = 1, ..., r, that minimize
‖A − σ_1 u_1 v_1^T − σ_2 u_2 v_2^T − ⋯ − σ_r u_r v_r^T‖.

Theorem (Eckart–Young). Let A = UΣV^T = Σ_{i=1}^{rank(A)} σ_i u_i v_i^T be the singular value decomposition. For r ≤ rank(A), let A_r := Σ_{i=1}^r σ_i u_i v_i^T. Then
‖A − A_r‖_F = min_{rank(B)≤r} ‖A − B‖_F.

No such thing for tensors of order 3 or higher.
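Not on the slide: an illustrative spot-check of Eckart–Young. The truncated SVD attains error √(Σ_{i>r} σ_i²), and no rank-r candidate beats it; random rank-r matrices make that concrete:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A)

# Truncated SVD: keep the r largest singular triplets.
r = 2
A_r = U[:, :r] * s[:r] @ Vt[:r]
err_svd = np.linalg.norm(A - A_r)

# Eckart-Young: no rank-r matrix does better. Spot-check against
# random rank-r candidates B = X Y^T.
errs = [np.linalg.norm(A - rng.standard_normal((8, r))
                         @ rng.standard_normal((r, 6)))
        for _ in range(100)]
```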
14 Best low-rank approximation of a tensor

Given A ∈ R^{l×m×n}, find σ_i, u_i, v_i, w_i minimizing
‖A − σ_1 u_1 ⊗ v_1 ⊗ w_1 − σ_2 u_2 ⊗ v_2 ⊗ w_2 − ⋯ − σ_r u_r ⊗ v_r ⊗ w_r‖.

Surprise: in general, existence of a solution is guaranteed only if r = 1.

Theorem (de Silva–L). Let k ≥ 3 and d_1, ..., d_k ≥ 2. For any s such that 2 ≤ s ≤ min{d_1, ..., d_k}, there exists A ∈ R^{d_1×⋯×d_k} with rank(A) = s such that A has no best rank-r approximation for some r < s. The result is independent of the choice of norms.

Symmetric variant: given A ∈ S^3(R^n), find λ_i, v_i minimizing
‖A − λ_1 v_1 ⊗ v_1 ⊗ v_1 − λ_2 v_2 ⊗ v_2 ⊗ v_2 − ⋯ − λ_r v_r ⊗ v_r ⊗ v_r‖.
Also ill-behaved [Comon, Golub, L, Mourrain; 08].
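Not on the slide, but the standard example behind the theorem makes the failure concrete: the rank-3 tensor A below is a limit of rank-2 tensors A_n, so the infimum over rank ≤ 2 is 0 yet never attained:

```python
import numpy as np

def outer3(a, b, c):
    return np.einsum('i,j,k->ijk', a, b, c)

e1, e2 = np.eye(2)

# de Silva-Lim's standard example: this A has rank 3, but the rank-2
# tensors A_n below approximate it arbitrarily well, so no best rank-2
# approximation exists (rank-<=2 tensors do not form a closed set).
A = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

def rank2_approx(n):
    # A_n = n (e1 + e2/n)^{⊗3} - n e1^{⊗3}: a sum of two decomposables.
    u = e1 + e2 / n
    return n * outer3(u, u, u) - n * outer3(e1, e1, e1)

errs = [np.linalg.norm(A - rank2_approx(n)) for n in (1, 10, 100, 1000)]
```

Note how the two rank-1 terms in A_n blow up in norm while their sum converges; this cancellation is exactly what the non-closedness exploits.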
15 Best low-rank approximation of a tensor

Explanation: the set of tensors (resp. symmetric tensors) of rank (resp. symmetric rank) ≤ r is not closed unless r = 1.

Another well behaved case: if σ_i, u_i, v_i, w_i are constrained to the nonnegative orthant, then a solution always exists [L & Comon; 09]. Nonnegative tensor approximations: general non-linear programming algorithms, cf. [Friedlander & Hatz; 08] and many others.

On-going work with Kojima and Toh: SDP algorithm for convex relaxation of well-behaved cases.

Best rank-1 approximation of a symmetric tensor, min_x ‖S − x ⊗ x ⊗ x‖_F, is also NP-hard [L & Hillar; 09].
16 Variational approach to eigenvalues/vectors

S ∈ R^{n×n} symmetric. Eigenvalues and eigenvectors are critical values and critical points of x^T S x / ‖x‖_2^2; equivalently, critical values/points of x^T S x constrained to the unit sphere.

Lagrangian: L(x, λ) = x^T S x − λ(‖x‖_2^2 − 1).

Vanishing of ∇L at a critical point (x_c, λ_c) ∈ R^n × R yields the familiar S x_c = λ_c x_c.
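Not on the slide: a NumPy check of the variational characterization on an arbitrary symmetric matrix — at each eigenvector the Rayleigh quotient equals the eigenvalue, and the Lagrangian condition Sx = λx holds:

```python
import numpy as np

rng = np.random.default_rng(9)
M = rng.standard_normal((5, 5))
S = (M + M.T) / 2

# Eigenvectors of symmetric S are the critical points of the Rayleigh
# quotient x -> x^T S x / ||x||_2^2, and its value there is the
# eigenvalue (the Lagrangian condition S x = lambda x).
evals, evecs = np.linalg.eigh(S)
rayleigh = np.array([v @ S @ v / (v @ v) for v in evecs.T])
```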
17 Eigenvalues/vectors of a tensor

Extends to tensors [L; 05]. For x = [x_1, ..., x_n]^T ∈ R^n, write x^[p] := [x_1^p, ..., x_n^p]^T. Define the l^p-norm ‖x‖_p = (x_1^p + ⋯ + x_n^p)^{1/p}.

Define eigenvalues/vectors of S ∈ S^p(R^n) as critical values/points of the multilinear Rayleigh quotient S(x, ..., x) / ‖x‖_p^p.

Lagrangian: L(x, λ) := S(x, ..., x) − λ(‖x‖_p^p − 1). At a critical point,
S(I_n, x, ..., x) = λ x^[p−1].
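Not on the slide: the Lagrangian condition rests on the identity that, for symmetric S of order 3, the gradient of x ↦ S(x, x, x) is 3 S(I, x, x). An illustrative finite-difference check:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(6)
n = 4
T = rng.standard_normal((n, n, n))
S = sum(np.transpose(T, p) for p in permutations(range(3))) / 6  # symmetric

def f(x):            # the cubic form S(x, x, x)
    return np.einsum('ijk,i,j,k->', S, x, x, x)

def S_contract(x):   # the contraction S(I, x, x)
    return np.einsum('ijk,j,k->i', S, x, x)

# For symmetric S, grad of x -> S(x,x,x) is 3 S(I,x,x); verify by
# central finite differences at a random point.
x = rng.standard_normal(n)
h = 1e-6
grad_fd = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(n)])
```

Setting this gradient against the gradient of λ‖x‖_p^p is what produces S(I_n, x, ..., x) = λ x^[p−1] at a critical point.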
18 Some observations

If S is symmetric, then S(I_n, x, x, ..., x) = S(x, I_n, x, ..., x) = ⋯ = S(x, x, ..., x, I_n).

Defined in [Qi; 05] and [L; 05] independently.

Related to rank-1 approximation: min_{‖x‖=1, λ} ‖S − λ x ⊗ x ⊗ ⋯ ⊗ x‖_F is attained when λ = max_{‖x‖=1} ⟨S, x ⊗ x ⊗ ⋯ ⊗ x⟩.

For p = 3, λ is an eigenvalue of S iff x^T(S_i − λE_i)x = 0, i = 1, ..., n, for some nonzero x. Here S_i = [s_ijk]_{j,k=1}^n and E_i = e_i e_i^T.
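Not on the slide: an illustrative check of the p = 3 criterion on a hypothetical diagonal symmetric tensor (s_iii = d_i, all other entries zero), where the eigenpairs are known: S(I, x, x) = (d_1 x_1², ..., d_n x_n²), so each e_i is an eigenvector with eigenvalue d_i:

```python
import numpy as np

# Diagonal symmetric 3-tensor: s_iii = d_i, all other entries zero.
n = 3
d = np.array([2.0, -1.0, 5.0])
S = np.zeros((n, n, n))
for i in range(n):
    S[i, i, i] = d[i]

I = np.eye(n)
lam, x = d[0], I[0]   # known eigenpair: lambda = d_1, x = e_1

# The slide's quadratic-form criterion: x^T (S_i - lam E_i) x = 0 for
# all i, with slice S_i = S[i] and E_i = e_i e_i^T.
checks = [x @ (S[i] - lam * np.outer(I[i], I[i])) @ x for i in range(n)]
```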
19 Graph coloring as polynomial system

Found in various forms [Bayer; 82], [Lovász; 94], [de Loera; 95]. Form below from [L & Hillar; 09].

Lemma. Let G be a graph with n vertices and edge set E. The 3n + |E| polynomials in 2n + 1 indeterminates,
{ x_i y_i − z², y_i z − x_i², x_i z − y_i², i = 1, ..., n } ∪ { x_i² + x_i x_j + x_j², {i, j} ∈ E },
have a common nontrivial complex solution if and only if the graph G is 3-colorable.
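Not on the slide: a numerical sanity check of the lemma on the triangle K_3, which is 3-colorable. Coloring the vertices with the three distinct cube roots of unity (with z = 1 and y_i = x_i²) zeroes every polynomial in the system:

```python
import numpy as np
from itertools import combinations

# Triangle K3: 3-colorable, so the system must have a nontrivial zero.
# Take z = 1, color vertex i with a cube root of unity x_i = w^i, and
# set y_i = x_i^2; the edge polynomial x_i^2 + x_i x_j + x_j^2 equals
# (x_i^3 - x_j^3)/(x_i - x_j), which vanishes for distinct cube roots.
w = np.exp(2j * np.pi / 3)
n, E = 3, list(combinations(range(3), 2))
z = 1.0
x = np.array([w**c for c in (0, 1, 2)])   # a proper 3-coloring
y = x**2

vertex_polys = ([x[i]*y[i] - z**2 for i in range(n)]
                + [y[i]*z - x[i]**2 for i in range(n)]
                + [x[i]*z - y[i]**2 for i in range(n)])
edge_polys = [x[i]**2 + x[i]*x[j] + x[j]**2 for i, j in E]
```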
20 Tensor eigenvalue is NP-hard

Problem (SymmQuadFeas). Let S = [s_ijk] ∈ S^3(R^n), i.e. s_ijk = s_ikj = s_jik = s_jki = s_kij = s_kji. Let G_i(x) = x^T S_i x for i = 1, ..., n be the associated n homogeneous, real quadratic forms. Determine if the system of equations {G_i(x) = c_i}_{i=1}^n has a nontrivial real solution 0 ≠ x ∈ R^n.

Theorem (L & Hillar; 09). Graph coloring is polynomially reducible to SymmQuadFeas.

Corollary. Given a symmetric tensor S ∈ S^3(R^n):
1. Given λ ∈ R, it is NP-hard to check if λ is an eigenvalue of S.
2. It is NP-hard to find a best rank-1 approximation λ x ⊗ x ⊗ x to S.
21 Caveat on complexity theory

Different notions of NP-hardness: Cook–Karp–Levin (traditional); Blum–Shub–Smale; Valiant.

Real versus bit complexity. Real complexity: the number of field operations required to determine feasibility, as a function of n. Bit complexity: if the tensor A has rational entries, the number of bit operations necessary, as a function of the number of bits required to specify all the a_ijk.

E.g. multiplying two integers of size N, i.e. log N bits: real complexity is O(1), bit complexity is O(log N log log N).
More informationStatistical Pattern Recognition
Statistical Pattern Recognition A Brief Mathematical Review Hamid R. Rabiee Jafar Muhammadi, Ali Jalali, Alireza Ghasemi Spring 2012 http://ce.sharif.edu/courses/90-91/2/ce725-1/ Agenda Probability theory
More informationIntroduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.
Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More informationThe number of singular vector tuples and approximation of symmetric tensors / 13
The number of singular vector tuples and approximation of symmetric tensors Shmuel Friedland Univ. Illinois at Chicago Colloquium NYU Courant May 12, 2014 Joint results with Giorgio Ottaviani and Margaret
More informationCHAPTER 11. A Revision. 1. The Computers and Numbers therein
CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationLinear vector spaces and subspaces.
Math 2051 W2008 Margo Kondratieva Week 1 Linear vector spaces and subspaces. Section 1.1 The notion of a linear vector space. For the purpose of these notes we regard (m 1)-matrices as m-dimensional vectors,
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationLecture 7 Spectral methods
CSE 291: Unsupervised learning Spring 2008 Lecture 7 Spectral methods 7.1 Linear algebra review 7.1.1 Eigenvalues and eigenvectors Definition 1. A d d matrix M has eigenvalue λ if there is a d-dimensional
More information3 (Maths) Linear Algebra
3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra
More informationRanks of Real Symmetric Tensors
Ranks of Real Symmetric Tensors Greg Blekherman SIAM AG 2013 Algebraic Geometry of Tensor Decompositions Real Symmetric Tensor Decompositions Let f be a form of degree d in R[x 1,..., x n ]. We would like
More informationSEMIDEFINITE PROGRAM BASICS. Contents
SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More information