Optimization on the Grassmann manifold: a case study

1 Optimization on the Grassmann manifold: a case study
Konstantin Usevich and Ivan Markovsky
Department ELEC, Vrije Universiteit Brussel
28 March 2013, 32nd Benelux Meeting on Systems and Control, Houffalize, Belgium

2 Introduction: optimization on the Grassmann manifold
Grassmann manifold: Gr(d, m) := { d-dimensional subspaces of R^m }
    minimize over L ∈ Gr(d, m):  f(L)
Example: fitting x_1, ..., x_N ∈ R^m to a d-dimensional subspace L,
    f(L) = Σ_{k=1}^{N} ρ(x_k, L)
Examples: computer vision, machine learning, system identification, ..., matrix/tensor low-rank factorization/approximation, ...

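To make the subspace-fitting objective above concrete, here is a minimal NumPy sketch (an illustration, not code from the talk) that assumes ρ is the squared Euclidean distance and represents L by an orthonormal basis Q, so that ρ(x, L) = ||x − QQ^T x||^2:

```python
import numpy as np

def subspace_fit_cost(Q, X):
    """f(L) = sum_k ||x_k - Q Q^T x_k||^2 for L = range(Q).

    Q : (m, d) orthonormal basis of the subspace L
    X : (N, m) data points x_1, ..., x_N stored as rows
    """
    residuals = X - (X @ Q) @ Q.T          # each x_k minus its projection onto L
    return np.sum(residuals ** 2)

# toy usage: points close to a 2-dimensional subspace of R^5
rng = np.random.default_rng(0)
Q_true = np.linalg.qr(rng.standard_normal((5, 2)))[0]
X = rng.standard_normal((100, 2)) @ Q_true.T + 0.01 * rng.standard_normal((100, 5))
print(subspace_fit_cost(Q_true, X))        # small: the data nearly lies in L
```

For this particular ρ the minimizer is the PCA subspace (the d dominant left singular vectors of the data matrix); the talk is about how to optimize a general f over Gr(d, m).
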
3-4 Introduction: what is this talk about
Grassmann manifold: Gr(d, m) := { d-dimensional subspaces of R^m }
    minimize over L ∈ Gr(d, m):  f(L)
(Absil, Mahony, Sepulchre 2008): how to define derivatives, how to optimize; atlases, tangent bundles, Riemannian metric, ...
This talk: (re)statement of the problem, and reformulations as  min_{X ∈ R^n} f(X)

5-6 Introduction: problem (re)statement
    minimize over R ∈ R^{d×m}:  f(R)  subject to R ∈ R_f,
where R_f := { R ∈ R^{d×m} : rank R = d } and f(R) = f(UR) for every nonsingular U ∈ R^{d×d}.
(For d = 1:  R_f = R^m \ {0}.)
Problems:
- How to impose the constraint R ∈ R_f?
- If R is a minimum, then UR is also a minimum ...
A solution: reduce the search space.

7-8 Introduction: reduction of the search space
    minimize over R ∈ R^{d×m}:  f(R)  subject to R ∈ R_f
How to reduce the search space:
1. RR^T = I_d (orthonormal basis): represents all subspaces, but the constraint is nonlinear;
2. R = [X  I_d] with X ∈ R^{d×(m−d)}: represents almost all subspaces (all R whose last d columns R_{:,(m−d+1):m} are nonsingular).

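Both reductions are easy to compute for a given full-rank R. The sketch below (a hypothetical illustration, not the authors' code) obtains the orthonormal representative from an economy QR factorization and the [X  I_d] representative by inverting the last d columns, which is exactly where the "almost all subspaces" restriction comes from:

```python
import numpy as np

def orthonormal_representative(R):
    """Return R' with the same row span as R and R' R'^T = I_d (reduction 1)."""
    Q, _ = np.linalg.qr(R.T)              # orthonormal basis of the row space of R
    return Q.T

def x_id_representative(R):
    """Return X such that [X  I_d] has the same row span as R (reduction 2).

    Requires the last d columns of R to be nonsingular.
    """
    d, m = R.shape
    U = R[:, m - d:]                      # last d columns
    XI = np.linalg.solve(U, R)            # U^{-1} R = [X  I_d]
    return XI[:, :m - d]

R = np.random.default_rng(1).standard_normal((2, 5))        # random full-rank 2x5 matrix
Rorth = orthonormal_representative(R)
print(np.allclose(Rorth @ Rorth.T, np.eye(2)))              # True
print(x_id_representative(R))                               # the 2x3 block X
```
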
9 Outline
- Introduction: problem (re)statement
- Optimization over [X  I_d]Π
- Optimization with RR^T = I_d
- A case study

10-14 Optimization over [X  I_d]Π: parametrizations with permutation matrices
Matrices [X  I_d] alone do not represent all d-dimensional subspaces.
Solution: consider all permutation matrices Π ∈ R^{m×m} and matrices of the form [X  I_d]Π; this corresponds to fixing R_{:,J} = I_d for some column index set J.
Moreover, for d = 1 we can always select Π such that |X_{1,j}| ≤ 1.
Question: for d > 1, can we restrict X to a bounded subset of R^{d×(m−d)}?

15-16 Optimization over [X  I_d]Π: bounded parametrizations with permutation matrices
Theorem (Knuth, 1980). For any R ∈ R^{d×m} with rank R = d, there exist a nonsingular U and a permutation Π such that
    UR = [X  I_d]Π,   with |X_{ij}| ≤ 1 for all (i, j).
Proof (sketch):
1. Find the d×d submatrix of R with maximal determinant in absolute value (maximal volume):  J_0 = arg max_J |det R_{:,J}|.
2. Take Π_0 that permutes the columns R_{:,J_0} to the last d positions:  R' = RΠ_0.
3. Take [X  I_d] = (R_{:,J_0})^{-1} RΠ_0.  Each X_{i,j} is a ratio of two d×d minors of R, so by Cramer's rule and the maximality of |det R_{:,J_0}| we have |X_{i,j}| ≤ 1.

17-18 Optimization over [X  I_d]Π: optimization over permutations
The problem  min_{R ∈ R_f} f(R)  is equivalent to
    minimize over permutations Π and X ∈ [−1, 1]^{d×(m−d)}:  f([X  I_d]Π).
There are m!/(m−d)! permutations Π ...
... but we can switch the permutation in the course of the optimization whenever |X_{i,j}| > 1 + δ for some (i, j).

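The proof sketch above and the on-the-fly switching rule can both be realized by the same greedy swap. The sketch below (an assumption about a reasonable implementation, not the authors' code) starts from QR with column pivoting and then moves any column whose coefficient exceeds 1 into the identity block; every swap increases |det R_{:,J}|, so the loop terminates:

```python
import numpy as np
import scipy.linalg

def bounded_parametrization(R, tol=1e-12, max_swaps=1000):
    """Return (X, J) with |X_ij| <= 1 + tol and UR = [X  I_d]Pi for U = R_{:,J}^{-1}."""
    d, m = R.shape
    # initial guess for the identity block: QR with column pivoting
    _, _, piv = scipy.linalg.qr(R, pivoting=True)
    J = list(piv[:d])
    for _ in range(max_swaps):
        Z = np.linalg.solve(R[:, J], R)        # Z[:, J] = I_d; the other columns form X
        i, j = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)
        if abs(Z[i, j]) <= 1 + tol:
            break
        J[i] = j                               # swap: |det R_{:,J}| grows by |Z[i, j]| > 1
    not_J = [k for k in range(m) if k not in J]
    return Z[:, not_J], J

R = np.random.default_rng(2).standard_normal((3, 8))
X, J = bounded_parametrization(R)
print(np.max(np.abs(X)))                       # <= 1 (up to tol)
```

During optimization over X, the same single swap is the "switch" mentioned above: whenever some |X_{i,j}| grows beyond 1 + δ, move column j into the identity block and continue with the re-parametrized X.
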
19 Optimization with RR^T = I_d: outline
- Introduction: problem (re)statement
- Optimization over [X  I_d]Π
- Optimization with RR^T = I_d
- A case study

20 Optimization with RR^T = I_d: orthonormal basis
    minimize over R ∈ R^{d×m}:  f(R)  subject to RR^T = I_d
Disadvantage: nonlinear constraint.
Possible solutions:
1. Lagrange multipliers;
2. penalty:  f(R) + γ ||RR^T − I_d||_F^2.  How large does γ have to be?  Not large at all, as the next slide shows.

21-22 Optimization with RR^T = I_d: penalty method for orthonormal bases
    min_{R ∈ R_f} f(R)  subject to RR^T = I_d,   where f(R) = f(UR) for every nonsingular U        (*)
Theorem. For any γ > 0, the local minima of
    min_{R ∈ R_f}  f(R) + γ ||RR^T − I_d||_F^2
coincide with the local minima of (*).
Proof. Let R = UΣV be an SVD of R, with V ∈ R^{d×m}, VV^T = I_d. Then
    f(R) + γ ||RR^T − I_d||_F^2  ≥  f(V) + γ ||VV^T − I_d||_F^2  =  f(V)  =  f(R),
i.e., replacing R by the feasible point V (which spans the same row space) never increases the penalized cost; this is the key step in showing that the two sets of local minima coincide.

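A small numerical check of the theorem on a toy problem (an example not taken from the talk): the cost f(R) = trace((RR^T)^{-1} R A A^T R^T) is invariant under R → UR, and its minimum over rank-d matrices R is attained when the rows of R span the subspace of the d smallest left singular vectors of A. Minimizing the penalized cost with a generic unconstrained solver should therefore reproduce the SVD answer, for a generic starting point and any fixed γ > 0:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, m = 2, 6
A = rng.standard_normal((m, 50))
gamma = 1.0                                  # any gamma > 0 works, per the theorem

def f(R):
    """U-invariant cost: energy of A projected on the row space of R."""
    G = R @ R.T
    return np.trace(np.linalg.solve(G, R @ A @ A.T @ R.T))

def penalized(r):
    R = r.reshape(d, m)
    return f(R) + gamma * np.sum((R @ R.T - np.eye(d)) ** 2)

res = minimize(penalized, rng.standard_normal(d * m), method="BFGS")
R_opt = res.x.reshape(d, m)
print("penalized optimum :", f(R_opt))
# reference: the d smallest left singular vectors of A
U = np.linalg.svd(A, full_matrices=False)[0]
print("SVD optimum       :", f(U[:, -d:].T))
print("orthonormality gap:", np.linalg.norm(R_opt @ R_opt.T - np.eye(d)))
```

At a local minimum of the penalized problem the orthonormality gap is (numerically) zero, consistent with the proof: otherwise the orthonormal representative of the same subspace would have strictly lower cost.
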
23 A case study: outline
- Introduction: problem (re)statement
- Optimization over [X  I_d]Π
- Optimization with RR^T = I_d
- A case study

24-25 A case study: structured low-rank approximation
System identification, model reduction, etc. lead to structured low-rank approximation.
Structured low-rank approximation: given p ∈ R^{n_p}, a matrix structure S(·), and r < m,
    minimize over p̂:  ||p − p̂||_2^2  subject to rank S(p̂) ≤ r.        (SLRA)
Kernel representation + variable projection:
    minimize over R ∈ R^{d×m} with rank R = d:  f(R) := ( min over p̂:  ||p − p̂||_2^2  subject to R S(p̂) = 0 ),
where d = m − r.

26 A case study: structured low-rank approximation, an example
Hankel SLRA: given w = (w_1, ..., w_n),
    f([r_1  r_2  r_3]) := min over ŵ:  ||w − ŵ||_2^2  subject to
    [r_1  r_2  r_3] [ ŵ_1  ŵ_2  ...  ŵ_{n−2} ;  ŵ_2  ŵ_3  ...  ŵ_{n−1} ;  ŵ_3  ŵ_4  ...  ŵ_n ] = 0.
Minimizing f(R) solves the SLRA problem;  f(αR) = f(R) for all α ≠ 0.

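The inner minimization above is a least-norm problem with a linear constraint, so f has a closed form: writing the constraint as G(r) ŵ = 0, where G(r) is banded with rows (0, ..., 0, r_1, r_2, r_3, 0, ..., 0), gives f(r) = w^T G^T (G G^T)^{-1} G w. A self-contained sketch (the authors' slra software implements this far more efficiently; the example data and coefficients below are made up for illustration):

```python
import numpy as np

def f_hankel(r, w):
    """Variable-projection cost for Hankel SLRA with kernel R = [r1 r2 r3].

    The constraint R * Hankel(w_hat) = 0 is linear in w_hat: G(r) w_hat = 0,
    where row j of G is (0, ..., 0, r1, r2, r3, 0, ..., 0).  Hence
        f(r) = min ||w - w_hat||^2  s.t.  G w_hat = 0
             = w^T G^T (G G^T)^{-1} G w.
    """
    n = len(w)
    G = np.zeros((n - 2, n))
    for j in range(n - 2):
        G[j, j:j + 3] = r
    Gw = G @ w
    return Gw @ np.linalg.solve(G @ G.T, Gw)

# data: a noisy trajectory of w_t = 1.9 w_{t-1} - 0.95 w_{t-2},
# so the "true" kernel is r = (0.95, -1.9, 1)
rng = np.random.default_rng(0)
w = np.zeros(20)
w[:2] = [1.0, 0.5]
for t in range(2, 20):
    w[t] = 1.9 * w[t - 1] - 0.95 * w[t - 2]
w += 0.01 * rng.standard_normal(20)
print(f_hankel(np.array([0.95, -1.9, 1.0]), w))   # small: w is close to the model
print(f_hankel(np.array([1.0, 1.0, 1.0]), w))     # much larger
```

Note that scaling r scales G and (GG^T)^{-1} in opposite ways, so f(αr) = f(r), which is the scale invariance stated on the slide.
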
27 A case study: plot of f([x_1  x_2  1]) over (x_1, x_2) for a 3×10 Hankel matrix (figure, not reproduced in the transcription).

28 A case study: some results
Mosaic Hankel low-rank approximation (LTI system identification on the DAISY datasets); approximation errors are reported as the percentage fit  100 (1 − ||p − p̂||_2^2 / ||p||_2^2).
Table: percentage fits per dataset for the methods [X  I_d], [X  I_d]Π, genrtr, RR^T = I_d (constrained), and the penalty ||RR^T − I_d||_F^2 (the numerical entries are not preserved in the transcription).

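For reference, the fit measure (assuming the squared-norm convention shown above) is simply:

```python
import numpy as np

def percentage_fit(p, p_hat):
    """Fit measure used in the table above: 100 * (1 - ||p - p_hat||^2 / ||p||^2)."""
    p, p_hat = np.asarray(p, float), np.asarray(p_hat, float)
    return 100.0 * (1.0 - np.sum((p - p_hat) ** 2) / np.sum(p ** 2))

print(percentage_fit([1.0, 2.0, 3.0], [1.1, 1.9, 3.05]))   # close to 100
```
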
29 Conclusions
- Two reformulations of optimization on the Grassmann manifold as optimization over a Euclidean space; freedom in choosing the optimization method.
- Issues: bounding the number of permutation switches, choosing the regularization parameter, ...
References:
- I. Markovsky, K. Usevich (in press). Structured low-rank approximation with missing values. SIAM J. Matrix Anal. Appl.
- K. Usevich, I. Markovsky (2012). Global and local optimization methods for structured low-rank approximation. Submitted.
Software and reproducible experiments are available.
