Algebraic Clustering of Affine Subspaces


Manolis C. Tsakiris and René Vidal, Fellow, IEEE

Abstract: Subspace clustering is an important problem in machine learning with many applications in computer vision and pattern recognition. Prior work has studied this problem using algebraic, iterative, statistical, low-rank and sparse representation techniques. While these methods have been applied to both linear and affine subspaces, theoretical results have only been established in the case of linear subspaces. For example, algebraic subspace clustering (ASC) is guaranteed to provide the correct clustering when the data points are in general position and the union of subspaces is transversal. In this paper we study in a rigorous fashion the properties of ASC in the case of affine subspaces. Using notions from algebraic geometry, we prove that the homogenization trick, which embeds points in a union of affine subspaces into points in a union of linear subspaces, preserves the general position of the points and the transversality of the union of subspaces in the embedded space, thus establishing the correctness of ASC for affine subspaces.

Index Terms: Algebraic Subspace Clustering, Affine Subspaces, Homogeneous Coordinates, Algebraic Geometry.

1 INTRODUCTION

Subspace clustering is the problem of clustering a collection of points drawn approximately from a union of linear or affine subspaces. This is an important problem in machine learning, with many applications in computer vision and pattern recognition, such as clustering faces, digits, images and motions [1]. Over the past 15 years, a variety of subspace clustering methods have appeared in the literature, including iterative [2], [3], probabilistic [4], algebraic [5], spectral [6], [7], and self-expressiveness-based [8], [9], [10], [11], [12] approaches.
Among them, the Algebraic Subspace Clustering (ASC) method of [5], also known as GPCA, establishes an interesting connection between machine learning and algebraic geometry (see also [13] for another such connection). By describing a union of n linear subspaces as the zero set of a system of homogeneous polynomials of degree n, ASC clusters the subspaces in closed form via polynomial fitting and differentiation, or alternatively via polynomial factorization [14].

Merits of algebraic subspace clustering. In addition to providing interesting algebraic-geometric insights into the problem of subspace clustering, ASC is unique among other methods in that it is guaranteed to provide the correct clustering, under the mild hypothesis that the union of subspaces is transversal and the data points are in general position in an algebraic-geometric sense. This entails that ASC can handle subspaces of dimension d comparable to the ambient dimension D (high relative dimension d/D). In contrast, most state-of-the-art methods, such as Sparse Subspace Clustering (SSC) [10], [15], require the subspaces to be of sufficiently small relative dimension. Therefore, instances of applications where ASC is a natural candidate, while SSC is in principle inapplicable, are projective motion segmentation [16], 3D point cloud analysis [17] and hybrid system identification [18]. Moreover, it was recently demonstrated in [19] that, using the idea of filtrations of unions of subspaces [20], [21], ASC not only can be robustified to noise, but also outperforms state-of-the-art methods on the popular benchmark dataset Hopkins155 [22] for real-world motion segmentation.

The authors are with the Center of Imaging Science, Johns Hopkins University, Baltimore, MD, 21218, USA. m.tsakiris@jhu.edu, rvidal@cis.jhu.edu

Dealing with affine subspaces. In several important applications, such as motion segmentation, the underlying subspaces do not pass through the origin, i.e., they are affine.
Methods such as K-subspaces [2], [3] and mixtures of probabilistic PCA [4] can trivially handle this case by explicitly learning models of affine subspaces. Likewise, the spectral clustering method of [23] can handle affine subspaces by constructing an affinity that depends on the distance from a point to a subspace. However, these methods do not come with theoretical conditions under which they are guaranteed to give the correct clustering. On the other hand, when data X = [x_1, ..., x_N] ⊂ R^D lying in a union of n distinct affine subspaces are embedded into homogeneous coordinates

X̃ = [x̃_1, ..., x̃_N],  x̃_j := [1; x_j] ∈ R^{D+1},  j = 1, ..., N,   (1)

they lie in a union of n distinct linear subspaces of R^{D+1}. If the embedded data X̃ satisfy the geometric separation conditions of [15] with respect to the underlying union of linear subspaces, then SSC [10] applied to X̃ is guaranteed to yield a subspace-preserving affinity. However, while the conditions in [15] have a clear geometric interpretation for linear subspaces, it is unclear what these conditions entail when applied to affine subspaces via the embedding in (1). On the other hand, recent work [24], [25] shows that an l_0 version of SSC (l_0-SSC) yields the correct clustering under mild conditions of general position in a linear-algebraic sense, and this analysis can be easily extended to affine subspaces. However, l_0-SSC has exponential complexity, as opposed to the polynomial complexity of l_1-SSC [10]. While the ASC approach discussed here also has exponential complexity in general, under certain conditions it is more efficient than l_0-SSC: the worst-case complexity of ASC is O(N M_n(D+1)^2), with M_n(D+1) the binomial coefficient C(n+D, n), which is linear in the number of data points N and exponential in the number of subspaces n and the dimension of the original data D. In contrast, the worst-case complexity of l_0-SSC is O(N(D+1)d(N−1)^{d+1}), where d is the maximal dimension of the affine subspaces. Hence, when n and D are small and d = O(D), the complexity of l_0-SSC as a function of N dominates that of ASC. A detailed comparison of the complexities of ASC and l_0-SSC is beyond the scope of the paper and is thus omitted.
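The homogeneous-coordinate embedding in (1) is easy to sketch numerically. The following Python/NumPy fragment (our own illustration; the helper name `homogenize` is not from the paper) embeds points of an affine line in R^2 and verifies that the embedded points span a 2-dimensional linear subspace of R^3:

```python
import numpy as np

def homogenize(X):
    """Embed the columns of X (D x N) into R^{D+1} by prepending a 1,
    as in the homogeneous-coordinate embedding (1)."""
    return np.vstack([np.ones((1, X.shape[1])), X])

# Points on the affine line A = {(t, 2t + 1) : t in R} in R^2,
# which does not pass through the origin.
t = np.array([-1.0, 0.0, 1.0, 2.0])
X = np.vstack([t, 2 * t + 1])        # 2 x 4 data matrix
Xh = homogenize(X)                   # 3 x 4 embedded data

# The embedded points lie in a 2-dimensional *linear* subspace of R^3:
# every column x~ = (x0, x1, x2) satisfies -x0 - 2*x1 + x2 = 0.
rank = np.linalg.matrix_rank(Xh)
residual = np.array([-1.0, -2.0, 1.0]) @ Xh
```

Note that the embedded points all lie on the slice x_0 = 1 of the plane they span, which is exactly the non-genericity discussed later in Section 3.2.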

Returning to ASC, the traditional way to handle points from a union of affine subspaces [26] is to use homogeneous coordinates as in (1), and subsequently apply ASC to the embedded data. We refer to this two-step approach as Affine ASC (AASC). Although AASC has been observed to perform well in practice, it lacks a sufficient theoretical justification. On one hand, while it is true that the embedded points live in a union of associated linear subspaces, it is obvious that they have a very particular structure inside these subspaces. Specifically, even if the original points are generic, in the sense that they are sampled uniformly at random from the affine subspaces, the embedded points are clearly non-generic, in the sense that they always lie in the zero-measure intersection of the union of the associated linear subspaces with the hyperplane x_0 = 1.[2] Thus, even in the absence of noise, one may wonder whether this non-genericity of the embedded points will affect the behavior of AASC, and to what extent. On the other hand, even if the affine subspaces are transversal, there is no guarantee that the associated linear subspaces are also transversal. Thus, it is natural to ask for conditions on the affine subspaces and the data points under which AASC is guaranteed to give the correct clustering.

Paper contributions. In this paper we adapt abstract notions from algebraic geometry to the context of unions of affine subspaces in order to rigorously prove the correctness of AASC in the absence of noise. More specifically, we define in a very precise fashion the notion of points being in general position in a union of n linear or affine subspaces. Intuitively, points are in general position if they can be used to uniquely reconstruct the union of subspaces they lie in by means of polynomials of degree n that vanish on the points.
Then we show that the embedding (1) preserves the property of points being in general position, which is one of the two success conditions of ASC. We also show that the second condition, the transversality of the union of linear subspaces in R^{D+1} that is associated to the union of affine subspaces in R^D under the embedding (1), is also satisfied, provided that (i) the union of subspaces formed by the linear parts of the affine subspaces is transversal, and (ii) the translation vectors of the affine subspaces do not lie in the zero-measure set of a certain algebraic variety. Our exposition style is for the benefit of the reader unfamiliar with algebraic geometry: we introduce notions and notation as we proceed, and give as many examples as space allows. We leave the more intricate details to the various proofs.

2 ALGEBRAIC SUBSPACE CLUSTERING REVIEW

This section gives a brief review of the ASC theory [5], [27], [28], [21]. After defining the subspace clustering problem in Section 2.1, we describe unions of linear subspaces as algebraic varieties in Section 2.2, and give the main theorem of ASC (Theorem 1) in terms of vanishing polynomials in Section 2.3. In Section 2.4 we elaborate on the main hypothesis of Theorem 1, the transversality of the union of subspaces. In Section 2.5 we introduce the notion of points in general position (Definition 5) and adapt Theorem 1 to the more practical case of a finite set of points (Theorem 9).

2.1 Subspace Clustering Problem

Let X = {x_1, ..., x_N} be a set of points that lie in an unknown union of n > 1 linear subspaces Φ = ∪_{i=1}^n S_i, where S_i is a linear subspace of R^D of dimension d_i < D. The goal of subspace clustering is to find the number of subspaces, their dimensions, a basis for each subspace, and to cluster the data points based on their subspace membership, i.e., to find a decomposition or clustering of X as X = X_1 ∪ ··· ∪ X_n, where X_i = X ∩ S_i.

2. Here and in the rest of the paper, we consider only the uniform measure.
2.2 Unions of Linear Subspaces as Algebraic Varieties

The key idea behind ASC is that a union of n linear subspaces Φ = ∪_{i=1}^n S_i of R^D is the zero set of a finite set of homogeneous[3] polynomials of degree n with real coefficients in D indeterminates x := [x_1, ..., x_D]^T. Such a set is called an algebraic variety [29], [30]. For example, a union of n hyperplanes Φ = H_1 ∪ ··· ∪ H_n, where the i-th hyperplane H_i = {x : b_i^T x = 0} is defined by its normal vector b_i ∈ R^D, is the zero set of the polynomial

p(x) = (b_1^T x)(b_2^T x) ··· (b_n^T x),   (2)

in the sense that a point x belongs to the union Φ if and only if p(x) = 0. Likewise, the union of a plane with normal b and a line with normals b_1, b_2 in R^3 is the zero set of the two polynomials

p_1(x) = (b^T x)(b_1^T x) and p_2(x) = (b^T x)(b_2^T x).   (3)

More generally, for n subspaces of arbitrary dimensions, these vanishing polynomials are homogeneous of degree n. Moreover, they are factorizable into n linear forms, with each linear form defined by a vector orthogonal to one of the n subspaces.[4]

2.3 Main Theorem of ASC

The set I_Φ of polynomials that vanish at every point of a union of linear subspaces Φ has a special algebraic structure: it is closed under addition, and it is closed under multiplication by any element of the polynomial ring R = R[x_1, ..., x_D]. Such a set of polynomials is called an ideal [29], [30] of R. If we restrict our attention to the subset I_{Φ,n} of I_Φ that consists only of vanishing polynomials of degree n, we notice that I_{Φ,n} is a finite-dimensional real vector space, because it is a subspace of R_n, the latter being the set of all homogeneous polynomials of R of degree n, which is a vector space of dimension M_n(D) := C(n+D−1, n).

Theorem 1 (Main Theorem of ASC, [5]). Let Φ = ∪_{i=1}^n S_i be a transversal union of linear subspaces of R^D. Let p_1, ..., p_s be a basis for I_{Φ,n} and let x_i be a point in S_i such that x_i ∉ ∪_{i'≠i} S_{i'}. Then

S_i^⊥ = Span(∇p_1|_{x_i}, ..., ∇p_s|_{x_i}).
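To make Theorem 1 concrete, the following sketch (our own illustration with a hand-picked pair of subspaces, not the authors' implementation) fits the degree-2 vanishing polynomial of a union of two planes in R^3 from samples, via the null space of a Veronese-embedded data matrix, and checks that its gradient at a point of S_1 recovers the normal b_1:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Union of two planes in R^3 with normals b1, b2 (a transversal union).
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])

def sample_plane(b, n):
    """Sample n points on the plane {x : b.x = 0} by projection."""
    P = rng.standard_normal((n, 3))
    return P - np.outer(P @ b, b) / (b @ b)

X = np.vstack([sample_plane(b1, 4), sample_plane(b2, 4)])

# Degree-2 Veronese embedding: one column per monomial of degree 2
# (M_2(3) = 6 monomials in 3 variables).
expos = [a for a in itertools.product(range(3), repeat=3) if sum(a) == 2]
V = np.array([[np.prod(x ** np.array(a)) for a in expos] for x in X])

# I_{X,2} = null space of V; here it is 1-dimensional, spanned by the
# coefficients of p(x) = (b1.x)(b2.x) = x1 * x2.
sv = np.linalg.svd(V)[1]
c = np.linalg.svd(V)[2][-1]

def grad_p(x):
    """Gradient of the fitted vanishing polynomial at x."""
    g = np.zeros(3)
    for coeff, a in zip(c, expos):
        for k in range(3):
            if a[k] > 0:
                ak = np.array(a)
                ak[k] -= 1
                g[k] += coeff * a[k] * np.prod(x ** ak)
    return g

# Theorem 1: at a point of S1 (and not of S2), the gradient spans S1-perp.
x0 = sample_plane(b1, 1)[0]
g = grad_p(x0)
cosine = abs(g @ b1) / (np.linalg.norm(g) * np.linalg.norm(b1))
```

The four points per plane make the sampled set in general position in the sense of Definition 5 below, so the fitted polynomial coincides (up to scale) with the one vanishing on the whole union.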
In other words, we can estimate the subspace S_i passing through a point x_i as the orthogonal complement of the span of the gradients of all the degree-n vanishing polynomials evaluated at x_i. Observe that the only assumption on the subspaces required by Theorem 1 is that they are transversal, a notion explained next.

2.4 Transversal Unions of Linear Subspaces

Intuitively, transversality is a notion of general position of subspaces, which entails that all intersections among subspaces are as small as possible, as allowed by their dimensions. Formally:

3. A polynomial in many variables is called homogeneous if each of its monomials has the same degree. For example, x_1^2 + x_1 x_2 is homogeneous of degree 2, while x_1 + x_2^2 is non-homogeneous of degree 2.
4. Strictly speaking this is not always true; it is true though in the generic case, for example, if the subspaces are transversal (see Definition 2).

Definition 2 ([27]). A union Φ = ∪_{i=1}^n S_i of linear subspaces of R^D is transversal if, for any subset J of [n] := {1, 2, ..., n},

codim(∩_{i∈J} S_i) = min{ D, Σ_{i∈J} codim(S_i) },   (4)

where codim(S) = D − dim(S) denotes the codimension of S.

To understand Definition 2, let B_i be a D × c_i matrix containing a basis for S_i^⊥, where c_i is the codimension of S_i, and let J be a subset of [n], say J = {1, ..., l}, l ≤ n. Then a point x belongs to ∩_{i∈J} S_i if and only if x^T B_J = 0, where B_J = [B_1, ..., B_l]. Hence, the dimension of ∩_{i∈J} S_i is equal to the dimension of the left nullspace of B_J, or equivalently,

codim(∩_{i∈J} S_i) = rank(B_J).   (5)

Since B_J is a D × (Σ_{i∈J} c_i) matrix, we must have that

rank(B_J) ≤ min{ D, Σ_{i∈J} c_i }.   (6)

Hence, transversality is equivalent to B_J being full rank as J ranges over all subsets of [n], and B_J drops rank if and only if all maximal minors of B_J vanish, in which case there are certain algebraic relations between the basis vectors of S_i^⊥, i ∈ J. Since any set given by algebraic relations has measure zero [31], this shows that a union of subspaces is transversal with probability 1.

Proposition 3. Let Φ = ∪_{i=1}^n S_i be a union of n linear subspaces in R^D of codimensions 0 < c_i < D, i ∈ [n]. Let b_{i1}, ..., b_{ic_i} be a basis for S_i^⊥. If the vectors {b_{ij} : j = 1, ..., c_i, i = 1, ..., n} do not lie in the zero-measure set of a proper algebraic variety of R^{D·Σ_{i∈[n]} c_i}, then Φ is transversal.

Example 4. Consider two planes S_1, S_2 in R^3 with normals b_1 and b_2. Then one expects their intersection S_1 ∩ S_2 to be a line, and hence be of codimension 2 = min{3, 1 + 1}, unless the two planes coincide, which happens only if b_1 is collinear with b_2. Clearly, if one randomly selects two planes in R^3, the probability that they are not transversal is zero. If we consider a third plane S_3 with normal b_3 such that every intersection S_1 ∩ S_2, S_1 ∩ S_3 and S_2 ∩ S_3 is a line, then the three planes fail to be transversal only if S_1 ∩ S_2 ∩ S_3 is a line.
But this can happen only if the three normals b_1, b_2, b_3 are linearly dependent, which again is a probability-zero event if the three planes are randomly selected. This reveals the important fact that the theoretical conditions for success of ASC in the absence of noise are much weaker than those for other methods such as SSC and LRSC, since, as we just pointed out, ASC will succeed almost surely (Theorem 9).[5]

2.5 Points in General Position

In practice, we may not be given the polynomials p_1, ..., p_s that vanish on a union of subspaces Φ = ∪_{i=1}^n S_i, but rather a finite collection of points X = {x_1, ..., x_N} sampled from Φ. If we want to fully characterize Φ from X, the least we can ask is that X uniquely defines Φ as a set; otherwise the problem becomes ill-posed. Since it is known that Φ is the zero set of I_{Φ,n} [5], i.e., Φ = Z(I_{Φ,n}), it is natural to require that Φ can be recovered as the zero set of all homogeneous polynomials of degree n that vanish on X.

Definition 5 (Points in general position). Let Φ be a union of n linear subspaces of R^D, and X a finite set of points in Φ. We will say that X is in general position in Φ if Φ = Z(I_{X,n}).

Recall from Theorem 1 that for ASC to succeed, we need a basis p_1, ..., p_s for I_{Φ,n}. The next result shows that if X is in general position in Φ, then we can compute such a basis from X.

Proposition 6. X is in general position in Φ if and only if I_{X,n} = I_{Φ,n}.

Proof. Suppose X is in general position in Φ, i.e., Φ = Z(I_{X,n}). We will show that I_{X,n} = I_{Φ,n}. The inclusion I_{Φ,n} ⊆ I_{X,n} is immediate, since if p ∈ I_{Φ,n} vanishes on Φ, then it will vanish on the subset X of Φ. Conversely, let p ∈ I_{X,n}. Since by hypothesis Φ = Z(I_{X,n}), we have that p(x) = 0 for every x ∈ Φ, i.e., p vanishes on Φ, i.e., p ∈ I_{Φ,n}; hence I_{X,n} ⊆ I_{Φ,n}. For the converse direction, suppose I_{X,n} = I_{Φ,n}; then Z(I_{X,n}) = Z(I_{Φ,n}). Since Φ = Z(I_{Φ,n}) [5], we have Φ = Z(I_{X,n}).

5. Of course, the main disadvantage of ASC with respect to SSC or LRSC is its combinatorial computational complexity, which remains an open problem.
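Definition 2 reduces transversality to a finite family of rank conditions, which can be checked numerically. A minimal sketch (the function name `is_transversal` is our own, not from the paper):

```python
import itertools
import numpy as np

def is_transversal(B_list, D):
    """Check Definition 2: for every nonempty subset J of subspaces,
    rank(B_J) = min(D, sum of codimensions), where B_J stacks the
    orthogonal-complement bases B_i column-wise."""
    n = len(B_list)
    for r in range(1, n + 1):
        for J in itertools.combinations(range(n), r):
            BJ = np.hstack([B_list[i] for i in J])
            c = sum(B_list[i].shape[1] for i in J)
            if np.linalg.matrix_rank(BJ) != min(D, c):
                return False
    return True

rng = np.random.default_rng(1)

# Three random planes in R^3 (codimension 1 each): transversal a.s.
planes = [rng.standard_normal((3, 1)) for _ in range(3)]
ok = is_transversal(planes, 3)

# Two copies of the same plane: the union is NOT transversal.
bad = is_transversal([planes[0], planes[0]], 3)
```

This brute-force check enumerates all 2^n − 1 subsets, which is only practical for small n; it is meant to illustrate the definition, not to be efficient.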
Next, we show that points in general position always exist.

Proposition 7. Any union Φ of n linear subspaces of R^D admits a finite subset X that lies in general position in Φ.

Proof. This follows from Theorem 2.9 in [28], together with the regularity result of [32], which says that the maximal degree of a generator of I_Φ does not exceed n.

Example 8. Let Φ = S_1 ∪ S_2 be the union of two planes of R^3 with normal vectors b_1, b_2, and let X = {x_1, x_2, x_3, x_4} be four points of Φ such that x_1, x_2 ∈ S_1 \ S_2 and x_3, x_4 ∈ S_2 \ S_1. Let H_13 and H_24 be the planes spanned by x_1, x_3 and x_2, x_4 respectively, and let b_13, b_24 be the normals to these planes. Then the polynomial q(x) = (b_13^T x)(b_24^T x) certainly vanishes on X. But q does not vanish on Φ, because the only (up to a scalar) homogeneous polynomial of degree 2 that vanishes on Φ is p(x) = (b_1^T x)(b_2^T x). Hence X is not in general position in Φ. The geometric reasoning is that two points per plane are not enough to uniquely define the union of the two planes; instead a third point in one of the planes is required.

In terms of a finite set of points X, Theorem 1 becomes:

Theorem 9. Let X be a finite set of points sampled from a union Φ of n linear subspaces of R^D. Let p_1, ..., p_s be a basis for I_{X,n}, the vector space of homogeneous polynomials of degree n that vanish on X. Let x_i be a point in X_i := X ∩ S_i such that x_i ∉ ∪_{i'≠i} S_{i'}. If X is in general position in Φ (Definition 5), and Φ is transversal (Definition 2), then S_i^⊥ = Span(∇p_1|_{x_i}, ..., ∇p_s|_{x_i}).

3 PROBLEM STATEMENT AND CONTRIBUTIONS

In this section we begin by defining the problem of clustering unions of affine subspaces in Section 3.1. In Section 3.2 we analyze the traditional algebraic approach for handling affine subspaces and point out that its correctness is far from obvious. Finally, in Section 3.3 we state the main findings of this paper.
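The failure described in Example 8 can be reproduced numerically: with only two points per plane, the space of degree-2 vanishing polynomials is 2-dimensional, so its zero set is strictly larger than Φ, while a fifth point restores general position. A small sketch with concrete points of our own choosing (S_1: x_1 = 0 and S_2: x_2 = 0):

```python
import numpy as np

# Monomials of degree 2 in (x1, x2, x3): the 6-dimensional space R_2.
monos = [(2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 2, 0), (0, 1, 1), (0, 0, 2)]

def veronese(X):
    """Evaluate every degree-2 monomial at every point of X."""
    return np.array([[np.prod(x ** np.array(a)) for a in monos] for x in X])

def dim_IX2(X):
    """Dimension of the degree-2 homogeneous polynomials vanishing on X."""
    V = veronese(np.asarray(X, float))
    return 6 - np.linalg.matrix_rank(V)

# Phi = S1 ∪ S2 with S1: x1 = 0 and S2: x2 = 0; dim I_{Phi,2} = 1,
# spanned by p(x) = x1 * x2.
four = [(0, 1, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)]   # two points per plane
five = four + [(0, 1, 1)]                             # one extra point on S1

d4 = dim_IX2(four)   # 2: a spurious polynomial (here x2 * x3) also vanishes
d5 = dim_IX2(five)   # 1: X is now in general position in Phi
```

With four points, the spurious degree-2 polynomial plays the role of q in Example 8: it vanishes on the samples but not on the union of planes.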

3.1 Affine Subspace Clustering Problem

Let X = {x_1, ..., x_N} be a finite set of points living in a union Ψ = ∪_{i=1}^n A_i of n affine subspaces of R^D. Each affine subspace A_i is the translation by some vector µ_i ∈ R^D of a d_i-dimensional linear subspace S_i, i.e., A_i = S_i + µ_i. The affine subspace clustering problem involves clustering the points X according to their subspace membership, and finding a parametrization of each affine subspace A_i by finding a translation vector µ_i and a basis for its linear part S_i, for all i = 1, ..., n. Note that there is an inherent ambiguity in determining the translation vectors µ_i, since if A_i = S_i + µ_i, then A_i = S_i + s_i + µ_i for any vector s_i ∈ S_i. Consequently, the best we can hope for is to determine the unique component of µ_i in the orthogonal complement S_i^⊥ of S_i.

3.2 Traditional Algebraic Approach

Since the inception of ASC, the standard algebraic approach to cluster points living in a union of affine subspaces has been to embed the points into R^{D+1} and subsequently apply ASC [26]. The precise embedding φ_0 : R^D → R^{D+1} is given by

α = (α_1, ..., α_D) ↦ φ_0(α) = (1, α_1, ..., α_D).   (7)

To understand the effect of this embedding and why it is meaningful to apply ASC to the embedded points, let A = S + µ be a d-dimensional affine subspace of R^D, with u_1, ..., u_d a basis for its linear part S. As noted in Section 3.1, we can also assume that µ ∈ S^⊥. For x ∈ A, there exists y ∈ R^d such that

x = U y + µ,  U := [u_1, ..., u_d] ∈ R^{D×d}.   (8)

Then the embedded point x̃ := φ_0(x) can be written as

x̃ = [1; x] = Ũ [y; 1],  Ũ := [0 ··· 0 1; u_1 ··· u_d µ] ∈ R^{(D+1)×(d+1)}.   (9)

Equation (9) clearly indicates that the embedded point x̃ lies in the linear (d+1)-dimensional subspace S̃ := Span(Ũ) of R^{D+1}, and the same is true for the entire affine subspace A. From (9) one sees immediately that u_1, ..., u_d, µ can be used to construct a basis of S̃. The converse is also true: given any basis of S̃, one can recover a basis for the linear part S and the translation vector µ of A.
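The correspondence around (9) can be sketched as follows (our own illustration; the variable names are not from the paper): we build Ũ from (U, µ), pass to an arbitrary basis B of S̃, and recover the linear part S and the translation µ ∈ S^⊥ back from B alone:

```python
import numpy as np

rng = np.random.default_rng(2)

# An affine plane A = S + mu in R^3: U spans S, mu is kept in S-perp
# (the Section 3.1 convention).
U = np.linalg.qr(rng.standard_normal((3, 2)))[0]   # orthonormal basis of S
mu = rng.standard_normal(3)
mu -= U @ (U.T @ mu)                               # keep only the S-perp part

# Basis of the associated linear subspace S~ of R^4, as in Eq. (9):
# columns (0, u_j) for the linear part and (1, mu) for the translation.
Utilde = np.block([[np.zeros((1, 2)), np.ones((1, 1))],
                   [U,                mu[:, None]]])

# An *arbitrary* basis B of S~ (random change of basis).
B = Utilde @ rng.standard_normal((3, 3))

# Recovery from B alone: coefficient vectors w with r.w = 0 give points
# (0, s), s in S; a w with r.w = 1 gives (1, point of A).
r = B[0]                                           # first coordinates of B
point = (B @ (r / (r @ r)))[1:]                    # some point of A = S + mu
ns = np.linalg.svd(r[None, :])[2][1:].T            # null space of r (3 x 2)
S_rec = B[1:] @ ns                                 # basis of the linear part S
# Recovered translation: component of `point` orthogonal to S.
mu_rec = point - S_rec @ np.linalg.lstsq(S_rec, point, rcond=None)[0]
```

Since any point of A differs from µ by an element of S, and µ was chosen in S^⊥, the residual of the least-squares projection is exactly µ.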
Hence, the embedding φ_0 takes a union of affine subspaces Ψ = ∪_{i=1}^n A_i to a union of linear subspaces Φ̃ = ∪_{i=1}^n S̃_i of R^{D+1}, in such a way that there is a 1-1 correspondence between the parameters of A_i (a basis for the linear part and the translation vector) and the parameters of S̃_i (a basis), for every i ∈ [n]. To the best of our knowledge, the correspondence between A_i and S̃_i has been the sole theoretical justification so far in the subspace clustering literature for the traditional Affine ASC (AASC) approach for dealing with affine subspaces, which consists of (1) applying the embedding φ_0 to the points X in Ψ, (2) computing a basis p_1, ..., p_s for the vector space I_{X̃,n} of homogeneous polynomials of degree n that vanish on the embedded points X̃ := φ_0(X), (3) for x̃_i ∈ X̃ ∩ (S̃_i \ ∪_{i'≠i} S̃_{i'}), estimating S̃_i via the formula

S̃_i^⊥ = Span(∇p_1|_{x̃_i}, ..., ∇p_s|_{x̃_i}),   (10)

and (4) extracting the translation vector of A_i and a basis for its linear part from a basis of S̃_i. According to Theorem 9, the above process will succeed if (i) the embedded points X̃ are in general position in Φ̃ in the sense of Definition 5, and (ii) the union of linear subspaces Φ̃ is transversal. Note that these conditions need not be satisfied a priori, because of the particular structure of both the embedded data in (1) and the basis in (9). This gives rise to the following reasonable questions:

Question 10. Under what conditions on X and Ψ will X̃ be in general position in Φ̃?

Question 11. Under what conditions on Ψ will Φ̃ be transversal?

3.3 Contributions

The main contribution of this paper is to answer Questions 10 and 11. Regarding Question 10, one may be tempted to conjecture that X̃ is in general position in Φ̃ if the components of the points X along the union Φ := ∪_{i=1}^n S_i of the linear parts of the affine subspaces are in general position inside Φ. However, this conjecture is not true, as illustrated by the next example.

Example 12. Suppose that Ψ = A_1 ∪ A_2 is a union of two affine planes A_i = S_i + µ_i of R^3.
Then Φ = S_1 ∪ S_2 is a union of 2 planes in R^3 and, as argued in Example 8, we can find 5 points in general position in Φ. However, Φ̃ = S̃_1 ∪ S̃_2 is a union of 2 hyperplanes in R^4, and any subset of Φ̃ in general position must consist of at least M = 10 − 1 = 9 points, where 10 is the dimension of the space of degree-2 homogeneous polynomials in 4 variables.[6]

To state the precise necessary and sufficient condition for X̃ to be in general position in Φ̃, we first show that Ψ is the zero set of non-homogeneous polynomials of degree n.

Proposition 13. Let Ψ = ∪_{i=1}^n A_i be a union of affine subspaces of R^D, where each affine subspace A_i is the translation of a linear subspace S_i of codimension c_i by a translation vector µ_i. For each A_i = S_i + µ_i, let b_{i1}, ..., b_{ic_i} be a basis for S_i^⊥. Then Ψ is the zero set of all degree-n polynomials of the form

∏_{i=1}^n (b_{ij_i}^T x − b_{ij_i}^T µ_i),  (j_1, ..., j_n) ∈ [c_1] × ··· × [c_n].   (11)

Thanks to Proposition 13 we can define points X to be in general position in Ψ, in analogy to Definition 5.

Definition 14. Let Ψ be a union of n affine subspaces of R^D and X a finite subset of Ψ. We will say that X is in general position in Ψ if Ψ can be recovered as the zero set of all polynomials of degree n that vanish on X. Equivalently, a polynomial of degree n vanishes on Ψ if and only if it vanishes on X.

We are now ready to answer Question 10.

Theorem 15. Let X be a finite subset of a union of n affine subspaces Ψ = ∪_{i=1}^n A_i of R^D, where A_i = S_i + µ_i, with S_i a linear subspace of R^D of codimension 0 < c_i < D. Let Φ̃ = ∪_{i=1}^n S̃_i be the union of n linear subspaces of R^{D+1} induced by the embedding φ_0 : R^D → R^{D+1} in (7). Denote by X̃ ⊂ Φ̃ the image of X under φ_0. Then X̃ is in general position in Φ̃ if and only if X is in general position in Ψ.

Our second theorem answers Question 11.

Theorem 16. Let Ψ = ∪_{i=1}^n A_i be a union of n affine subspaces of R^D, with A_i = S_i + µ_i and µ_i = B_i a_i, where B_i ∈ R^{D×c_i} is a basis for S_i^⊥ with c_i = codim(S_i). If Φ = ∪_{i=1}^n S_i is transversal

6. Otherwise one can fit a polynomial of degree 2 to the points which does not vanish on Φ̃.

and a_1, ..., a_n do not lie in the zero-measure set of a proper algebraic variety[7] of R^{c_1} × ··· × R^{c_n}, then Φ̃ is transversal.

One may wonder if some of the µ_i can be zero and Φ̃ still be transversal. This depends on the c_i, as the next example shows.

Example 17. Let A_1 = Span(b_11, b_12)^⊥ + µ_1 be an affine line and A_2 = Span(b_2)^⊥ + µ_2 an affine plane of R^3. Suppose that Φ = Span(b_11, b_12)^⊥ ∪ Span(b_2)^⊥ is transversal. Then Φ̃ = S̃_1 ∪ S̃_2 is transversal if and only if the matrix

B̃_[3] = [ −b_11^T µ_1  −b_12^T µ_1  −b_2^T µ_2 ; b_11  b_12  b_2 ] ∈ R^{4×3}   (12)

has rank 3. But rank(B̃_[3]) = 3 irrespective of what the µ_i are, simply because the matrix B_[3] = [b_11, b_12, b_2] is full rank by the transversality assumption on Φ. Now let us replace the affine plane A_2 with a second affine line A_2 = Span(b_21, b_22)^⊥ + µ_2. Then Φ̃ is transversal if and only if

B̃_[3] = [ −b_11^T µ_1  −b_12^T µ_1  −b_21^T µ_2  −b_22^T µ_2 ; b_11  b_12  b_21  b_22 ] ∈ R^{4×4}   (13)

has rank 4, which is impossible if both µ_1, µ_2 are zero.

As a corollary of Theorems 9, 15 and 16, we get the correctness theorem of ASC for the case of affine subspaces.

Theorem 18. Let Ψ = ∪_{i=1}^n A_i be a union of affine subspaces of R^D, with A_i = S_i + µ_i and µ_i = B_i a_i, where B_i ∈ R^{D×c_i} is a basis for S_i^⊥ with c_i = codim(S_i). Let Φ̃ = ∪_{i=1}^n S̃_i be the union of n linear subspaces of R^{D+1} induced by the embedding φ_0 : R^D → R^{D+1} of (7). Let X be a finite subset of Ψ and denote by X̃ ⊂ Φ̃ the image of X under φ_0. Let p_1, ..., p_s be a basis for I_{X̃,n}, the vector space of homogeneous polynomials of degree n that vanish on X̃. Let x ∈ X ∩ (A_1 \ ∪_{i>1} A_i), and denote x̃ = φ_0(x). Define

b̃_k := ∇p_k|_{x̃} ∈ R^{D+1},  k = 1, ..., s,   (14)

and, without loss of generality, let b̃_1, ..., b̃_l be a maximal linearly independent subset of b̃_1, ..., b̃_s. Define further (γ_k, b_k) ∈ R × R^D and (γ_1, B_1) ∈ R^l × R^{D×l} as

b̃_k =: [γ_k; b_k],  k = 1, ..., l,   (15)

γ_1 := [γ_1, ..., γ_l]^T,  B_1 := [b_1, ..., b_l].   (16)
If X is in general position in Ψ, Φ = ∪_{i=1}^n S_i is transversal, and a_1, ..., a_n do not lie in the zero-measure set of a proper algebraic variety of R^{c_1} × ··· × R^{c_n}, then

A_1 = Span(B_1)^⊥ − B_1 (B_1^T B_1)^{−1} γ_1.   (17)

Remark 19. The acute reader may notice that we still need to answer the question of whether Ψ admits a finite subset X in general position to begin with. The answer is affirmative: if Ψ satisfies the hypothesis of Theorem 16, then Φ̃ will be transversal, and so by Proposition 31, I_Ψ is generated in degree n, in which case the existence of X follows from Theorem 2.9 in [28].

The rest of the paper is organized as follows: in Section 4 we establish the fundamental algebraic-geometric properties of a union of affine subspaces. Then, using these tools, we prove in Section 5 Theorems 15 and 16. The proof of Theorem 18 is straightforward and is thus omitted.

7. The precise description of this algebraic variety is given in the proof of the theorem in Section 5.

4 ALGEBRAIC GEOMETRY OF UNIONS OF AFFINE SUBSPACES

In Section 4.1 we describe the basic algebraic geometry of affine subspaces and unions thereof, in analogy to the case of linear subspaces. In particular, we show that a single affine subspace is the zero set of polynomial equations of degree 1, and a union Ψ of n affine subspaces is the zero set of polynomial equations of degree n. In Section 4.2 we study more closely the embedding A ↦ S̃ of an affine subspace A ⊂ R^D into its associated linear subspace S̃ ⊂ R^{D+1} (see Section 3.2), which will lead to a deeper understanding of the embedding Ψ ↦ Φ̃ of a union of affine subspaces Ψ ⊂ R^D into its associated union of linear subspaces Φ̃ ⊂ R^{D+1}. As we will see, the image of Ψ is dense in Φ̃ in a very precise sense, and the algebraic manifestation of this relation (Proposition 31) will be used later in Section 5.1 to prove our Theorem 15.

4.1 Affine Subspaces as Affine Varieties

Let A = S + µ be an affine subspace of R^D and let b_1, ..., b_c be a basis for the orthogonal complement S^⊥ of S.
The first important observation is that a vector x belongs to S if and only if x ⊥ b_k, k = 1, ..., c. In the language of algebraic geometry, this is the same as saying that S is the zero set of c linear polynomials:

S = Z(b_1^T x, ..., b_c^T x),  x := [x_1, ..., x_D]^T.   (18)

Definition 20. Let Y be a subset of R^D. The set I_Y of polynomials p(x_1, ..., x_D) that vanish on Y, i.e., p(y_1, ..., y_D) = 0 for every [y_1, ..., y_D]^T ∈ Y, is called the vanishing ideal of Y.

One may wonder if the linear polynomials b_i^T x, i = 1, ..., c, form some sort of basis for the vanishing ideal I_S of S. This is in fact true (see the appendix in [21] for a proof) and can be formalized by saying that these linear polynomials are generators of I_S over the polynomial ring R = R[x_1, ..., x_D]. This means that every polynomial that belongs to I_S can be written as a linear combination of b_1^T x, ..., b_c^T x with polynomial coefficients, i.e.,

p(x) = p_1(x) b_1^T x + ··· + p_c(x) b_c^T x,   (19)

where p_1, ..., p_c are some polynomials in R. More compactly,

I_S = ⟨b_1^T x, ..., b_c^T x⟩,   (20)

which reads: I_S is the ideal generated by the polynomials b_1^T x, ..., b_c^T x as in (19). The following important fact[8] will be used in Section 5.1 to prove our Theorem 15.

Proposition 21. The vanishing ideal I_S of a linear subspace S is always a prime ideal, i.e., if p, q are polynomials such that pq ∈ I_S, then either p ∈ I_S or q ∈ I_S.

Moving on, the second important observation is that x ∈ A if and only if x − µ ∈ S. Equivalently,

x ∈ A  ⟺  b_k ⊥ (x − µ), k = 1, ..., c,   (21)

or, in algebraic-geometric terms,

A = Z(b_1^T x − b_1^T µ, ..., b_c^T x − b_c^T µ).   (22)

In other words, the affine subspace A is an algebraic variety of R^D. In fact, we say that A is an affine variety, since it is defined by non-homogeneous polynomials. To describe the vanishing ideal

8. For a proof see Appendix C in [21].

I_A of A, note that a polynomial p(x) vanishes on A if and only if p(x + µ) vanishes on S. This, together with (20), gives

I_A = ⟨b_1^T x − b_1^T µ, ..., b_c^T x − b_c^T µ⟩.   (23)

Next, we consider a union Ψ = ∪_{i=1}^n A_i of affine subspaces A_i = S_i + µ_i, i ∈ [n], of R^D. We will prove Proposition 13, which describes Ψ as the zero set of non-homogeneous polynomials of degree n, showing that Ψ is an affine variety of R^D.

Proof. Denote the set of all polynomials of the form (11) by P. First, we show that Ψ ⊆ Z(P). Take x ∈ Ψ; we will show that x ∈ Z(P). Since Ψ = A_1 ∪ ··· ∪ A_n, x belongs to at least one of the affine subspaces, say x ∈ A_i for some i. For every polynomial p of P, there is a linear factor b_{ij_i}^T x − b_{ij_i}^T µ_i of p that vanishes on A_i and thus on x. Hence p itself vanishes on x. Since p was an arbitrary element of P, this shows that every polynomial of P vanishes on x, i.e., x ∈ Z(P).

Next, we show that Z(P) ⊆ Ψ. Let x ∈ Z(P); we will show that x ∈ Ψ. If x is a root of all polynomials p_{1j}(x) = b_{1j}^T x − b_{1j}^T µ_1, then x ∈ A_1 and we are done. Otherwise, one of these linear polynomials does not vanish on x, say p_{11}(x) ≠ 0. Now suppose that x ∉ Ψ. By the above argument, for every affine subspace A_i there must exist some linear polynomial b_{i1}^T x − b_{i1}^T µ_i which does not vanish on x. As a consequence, the polynomial

p(x) = ∏_{i=1}^n (b_{i1}^T x − b_{i1}^T µ_i)   (24)

does not vanish on x, i.e., p(x) ≠ 0. But by the definition of P, we must have p ∈ P. Since x was selected to be an element of Z(P), we must have p(x) = 0, which is a contradiction, as we just saw that p(x) ≠ 0. Consequently, the hypothesis that x ∉ Ψ must be false, i.e., Z(P) ⊆ Ψ, and the proof is concluded.

The reader may wonder what the vanishing ideal I_Ψ of Ψ is, and what its relation is to the linear polynomials whose products generate Ψ as in Proposition 13. In fact, this question is still partially open, even in the simpler case of a union of linear subspaces [33], [32], [27].
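Proposition 13 can be checked numerically on a small instance, say two affine lines in R^2, where the family (11) consists of a single degree-2 polynomial (the configuration below is our own toy example):

```python
import numpy as np

rng = np.random.default_rng(3)

# Psi = A1 ∪ A2: two affine lines in R^2, A_i = S_i + mu_i, codim c_i = 1.
b = [np.array([1.0, -1.0]), np.array([2.0, 1.0])]   # normals of the linear parts
mu = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]   # translation vectors

def p(x):
    """The single polynomial of the family (11) for this configuration:
    (b1.x - b1.mu1)(b2.x - b2.mu2)."""
    return np.prod([bi @ x - bi @ mi for bi, mi in zip(b, mu)])

# Points on A_i: x = mu_i + t * d_i, with d_i orthogonal to b_i.
d = [np.array([1.0, 1.0]), np.array([1.0, -2.0])]
on_psi = [mu[i] + t * d[i] for i in range(2) for t in rng.standard_normal(5)]
vals = [p(x) for x in on_psi]        # all (numerically) zero

off = p(np.array([10.0, 0.0]))       # a point on neither line: nonzero
```

Each sampled point lies on one of the two affine lines, so the corresponding linear factor of p vanishes there, while at the off-union point both factors are nonzero, exactly as in the proof of Proposition 13.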
As it turns out, $\mathcal{I}_\Psi$ is intimately related to $\mathcal{I}_\Phi$, where $\Phi = \bigcup_{i=1}^n \tilde{S}_i$ is the union of linear subspaces associated to $\Psi$ under the embedding $\phi_0$ of (7). It is precisely this relation that will enable us to prove Theorem 15, and to elucidate it we need the notion of projective closure, which we introduce next. (Of course, the notion of projective closure is a well-known concept in algebraic geometry; here we introduce it in a self-contained fashion in the context of unions of affine subspaces, dispensing with unnecessary abstractions.)

The Projective Closure of Affine Subspaces

Let $\phi_0(A)$ be the image of $A = S + \mu$ under the embedding $\phi_0 : \mathbb{R}^D \to \mathbb{R}^{D+1}$ in (7). Let $\tilde{S}$ be the $(d+1)$-dimensional linear subspace of $\mathbb{R}^{D+1}$ spanned by the columns of $\tilde{U}$ (see (9)). A basis for the orthogonal complement of $\tilde{S}$ in $\mathbb{R}^{D+1}$ is
$$\tilde{b}_1 := \begin{bmatrix} -b_1^\top \mu \\ b_1 \end{bmatrix}, \; \dots, \; \tilde{b}_c := \begin{bmatrix} -b_c^\top \mu \\ b_c \end{bmatrix}, \quad (25)$$
since $\operatorname{codim} \tilde{S} = \operatorname{codim} S$, and the $\tilde{b}_i$ are linearly independent because the $b_i$ are. In algebraic geometric terms,
$$\tilde{S} = \mathcal{Z}\big(b_1^\top x - (b_1^\top \mu)\, x_0, \dots, b_c^\top x - (b_c^\top \mu)\, x_0\big) = \mathcal{Z}\big(\tilde{b}_1^\top \tilde{x}, \dots, \tilde{b}_c^\top \tilde{x}\big), \quad \tilde{x} := [x_0, x_1, \dots, x_D]^\top. \quad (26)$$

By inspecting equations (22) and (26), we see that every point of $\phi_0(A)$ satisfies the equations (26) of $\tilde{S}$. Since these equations are homogeneous, it is in fact true that for any point $\tilde{x} \in \phi_0(A)$, the entire line of $\mathbb{R}^{D+1}$ spanned by $\tilde{x}$ lies in $\tilde{S}$. Hence, we may as well think of the embedding $\phi_0$ as mapping a point of $\mathbb{R}^D$ to a line of $\mathbb{R}^{D+1}$. To formalize this concept, we need the notion of projective space [30], [34]:

Definition 22. The real projective space $\mathbb{P}^D$ is defined to be the set of all lines through the origin in $\mathbb{R}^{D+1}$. Each non-zero vector $\alpha$ of $\mathbb{R}^{D+1}$ defines an element $[\alpha]$ of $\mathbb{P}^D$, and two elements $[\alpha], [\beta]$ of $\mathbb{P}^D$ are equal if and only if there exists a non-zero $\lambda \in \mathbb{R}$ such that $\alpha = \lambda \beta$ as vectors of $\mathbb{R}^{D+1}$. For each point $[\alpha] \in \mathbb{P}^D$, we call the vector $\alpha \in \mathbb{R}^{D+1}$ a representative of $[\alpha]$.
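Equations (25) and (26) are easy to verify numerically: with $\tilde{b} := [-b^\top \mu;\, b]$, a point $x$ lies on $A$ exactly when its lift $[1; x]$ is annihilated by $\tilde{b}$, and homogeneity makes the whole line through the lift stay inside $\tilde{S}$. A minimal sketch in plain Python (the particular plane and translation are made up for illustration):

```python
# A = S + mu in R^3, where S is the plane with unit normal b.
# Eq. (25): lift b to b_tilde = [-b.mu, b] in R^4; eq. (26) then says
# b_tilde annihilates the homogeneous coordinates [1, x] of every x in A.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

b = (0.0, 0.0, 1.0)           # normal of S = {x : x3 = 0}
mu = (1.0, -2.0, 4.0)         # translation, so A = {x : x3 = 4}
b_tilde = (-dot(b, mu),) + b  # (-4, 0, 0, 1), per eq. (25)

def lift(x):                  # phi_0: x -> [1, x]
    return (1.0,) + tuple(x)

points_on_A = [(0.0, 0.0, 4.0), (3.0, -7.0, 4.0)]
assert all(dot(b_tilde, lift(x)) == 0 for x in points_on_A)

# Homogeneity: the entire line spanned by a lifted point stays inside S_tilde.
x = points_on_A[0]
assert dot(b_tilde, tuple(2.5 * c for c in lift(x))) == 0
```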
Now we can define a new embedding $\hat{\phi}_0 : \mathbb{R}^D \to \mathbb{P}^D$ that behaves exactly as $\phi_0$ in (7), except that it now takes points of $\mathbb{R}^D$ to lines of $\mathbb{R}^{D+1}$, or more precisely, to elements of $\mathbb{P}^D$:
$$(\alpha_1, \alpha_2, \dots, \alpha_D) \;\overset{\hat{\phi}_0}{\longmapsto}\; [1, \alpha_1, \alpha_2, \dots, \alpha_D]. \quad (27)$$
A point $x$ of $A$ is mapped by $\hat{\phi}_0$ to a line inside $\tilde{S}$, or more specifically, to the point $[\tilde{x}]$ of $\mathbb{P}^D$ whose representative $\tilde{x}$ satisfies the equations (26) of $\tilde{S}$. The set of all lines of $\mathbb{R}^{D+1}$ that live in $\tilde{S}$, viewed as elements of $\mathbb{P}^D$, is denoted by $[\tilde{S}]$, i.e.,
$$[\tilde{S}] = \big\{ [\alpha] \in \mathbb{P}^D : \alpha \in \tilde{S} \big\}. \quad (28)$$
The representative $\alpha$ of every element $[\alpha] \in [\tilde{S}]$ satisfies by definition the equations (26) of $\tilde{S}$, and so $[\tilde{S}]$ naturally has the structure of an algebraic variety of $\mathbb{P}^D$, which is called a projective variety. We emphasize that even though the varieties $\tilde{S}$ and $[\tilde{S}]$ live in different spaces, $\mathbb{R}^{D+1}$ and $\mathbb{P}^D$ respectively, they are defined by the same equations. In fact, every algebraic variety $\mathcal{Y}$ of $\mathbb{R}^{D+1}$ that is a union of lines through the origin (which is the case if and only if $\mathcal{Y}$ is defined by homogeneous equations) gives rise to a projective variety $[\mathcal{Y}]$ of $\mathbb{P}^D$ defined by the same equations.

Example 23. Recall from Section 2.2 that a union of linear subspaces is defined as the zero set of homogeneous polynomials. Thus the union $\Phi \subset \mathbb{R}^{D+1}$ gives rise to a projective variety $[\Phi]$ of $\mathbb{P}^D$, defined by the same equations as $\Phi$, which can be thought of as the set of lines through the origin in $\mathbb{R}^{D+1}$ that live in $\Phi$.

Returning to our embedding $\hat{\phi}_0$: to describe the precise connection between $\hat{\phi}_0(A)$ and $[\tilde{S}]$, we need to resort to the kind of topology that is most suitable for the study of algebraic varieties [30], [34]:

Definition 24 (Zariski Topology). The real vector space $\mathbb{R}^D$ and the projective space $\mathbb{P}^D$ can be made into topological spaces by defining the closed sets of the associated topology to be all the algebraic varieties in $\mathbb{R}^D$ and $\mathbb{P}^D$, respectively.

We are finally ready to state, without proof, the formal algebraic geometric relation between $\hat{\phi}_0(A)$ and $\tilde{S}$:

Proposition 25. In the Zariski topology, the set $\hat{\phi}_0(A)$ is open and dense in $[\tilde{S}]$; in particular, $[\tilde{S}]$ is the closure of $\hat{\phi}_0(A)$ in $\mathbb{P}^D$. (It can further be shown that $[\tilde{S}] = \hat{\phi}_0(A) \cup [S]$: intuitively, the set that must be added to $\hat{\phi}_0(A)$ to obtain a closed set is the slope $[S]$ of $A$.)

The projective variety $[\tilde{S}]$ is called the projective closure of $A$: it is the smallest projective variety that contains $\hat{\phi}_0(A)$. We now characterize the projective closure of a union of affine subspaces.
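The closure statement of Proposition 25 can be made concrete on a line in $\mathbb{R}^2$: the lifts $[1; x]$ of the points $x \in A$ satisfy (26), and so does the single direction at infinity $[0; u]$ with $u$ spanning $S$, which is exactly the extra point that closes up $\hat{\phi}_0(A)$. A small plain-Python sketch with an arbitrarily chosen line:

```python
# A = S + mu in R^2 with S = span{u}, a line through the origin.
# [S_tilde] contains the lifted affine points [1, x], x in A, plus the
# single point at infinity [0, u] -- the "slope" [S] of Proposition 25.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = (1.0, 2.0)                 # direction of S
b = (2.0, -1.0)                # normal to S: b.u = 0
mu = (0.0, 3.0)                # A = S + mu
b_tilde = (-dot(b, mu),) + b   # (3, 2, -1), per eq. (25)

def in_S_tilde(v):             # membership test via equation (26)
    return dot(b_tilde, v) == 0

# Lifted affine points of A: x = mu + t*u.
for t in (-1.0, 0.0, 2.0):
    x = tuple(m + t * c for m, c in zip(mu, u))
    assert in_S_tilde((1.0,) + x)

# The point at infinity [0, u] also satisfies (26) ...
assert in_S_tilde((0.0,) + u)
# ... and projective equality: any non-zero rescaling is the same point.
assert in_S_tilde(tuple(-7.0 * c for c in (0.0,) + u))
```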

Proposition 26. Let $\Psi = \bigcup_{i=1}^n A_i$ be a union of affine subspaces of $\mathbb{R}^D$. Then the projective closure of $\Psi$ in $\mathbb{P}^D$, i.e., the smallest projective variety that contains $\hat{\phi}_0(\Psi)$, is
$$\bigcup_{i=1}^n [\tilde{S}_i] = \Big[ \bigcup_{i=1}^n \tilde{S}_i \Big] = [\Phi], \quad (29)$$
where $\tilde{S}_i$ is the linear subspace of $\mathbb{R}^{D+1}$ corresponding to $A_i$ under the embedding $\phi_0$ of (7).

The geometric fact that $[\Phi] \subset \mathbb{P}^D$ is the smallest projective variety of $\mathbb{P}^D$ containing $\hat{\phi}_0(\Psi)$ manifests itself algebraically in $\mathcal{I}_\Psi$ being uniquely determined by $\mathcal{I}_\Phi$, and vice versa, in a very precise fashion. To describe this relation, we need a definition.

Definition 27 (Homogenization - Dehomogenization). Let $p \in \mathcal{R} = \mathbb{R}[x_1, \dots, x_D]$ be a polynomial of degree $n$. The homogenization of $p$ is the homogeneous polynomial
$$p^h = x_0^n \, p\Big(\frac{x_1}{x_0}, \frac{x_2}{x_0}, \dots, \frac{x_D}{x_0}\Big) \quad (30)$$
of $\tilde{\mathcal{R}} = \mathbb{R}[x_0, x_1, \dots, x_D]$, of degree $n$. Conversely, if $P \in \tilde{\mathcal{R}}$ is homogeneous of degree $n$, its dehomogenization is $P^d = P(1, x_1, \dots, x_D)$, which is a polynomial of $\mathcal{R}$ of degree at most $n$.

Example 28. Let $P = x_0^2 x_1 + x_0 x_2^2 + x_1 x_2 x_3$ be a homogeneous polynomial of degree 3. Its dehomogenization is the degree-3 polynomial $P^d = x_1 + x_2^2 + x_1 x_2 x_3$, and the homogenization of $P^d$ is
$$\big(P^d\big)^h = x_0^3 \Big( \frac{x_1}{x_0} + \frac{x_2^2}{x_0^2} + \frac{x_1 x_2 x_3}{x_0^3} \Big) = P.$$

The next result from algebraic geometry is crucial for our purpose.

Theorem 29 (Chapter 8 in [30]). Let $\mathcal{Y}$ be an affine variety of $\mathbb{R}^D$ and let $\bar{\mathcal{Y}}$ be its projective closure in $\mathbb{P}^D$ with respect to the embedding $\hat{\phi}_0$ of (27). Let $\mathcal{I}_{\mathcal{Y}}$, $\mathcal{I}_{\bar{\mathcal{Y}}}$ be the vanishing ideals of $\mathcal{Y}$, $\bar{\mathcal{Y}}$, respectively. Then $\mathcal{I}_{\bar{\mathcal{Y}}} = \mathcal{I}_{\mathcal{Y}}^h$, i.e., every element of $\mathcal{I}_{\bar{\mathcal{Y}}}$ arises as the homogenization of some element of $\mathcal{I}_{\mathcal{Y}}$, and every element of $\mathcal{I}_{\mathcal{Y}}$ arises as the dehomogenization of some element of $\mathcal{I}_{\bar{\mathcal{Y}}}$.

We have already seen that $\Phi$ and $[\Phi]$ are given as algebraic varieties by identical equations. It is also not hard to see that the vanishing ideals of these varieties are identical as well.

Lemma 30. Let $\Phi = \bigcup_{i=1}^n \tilde{S}_i$ be a union of linear subspaces of $\mathbb{R}^{D+1}$, and let $[\Phi] = \bigcup_{i=1}^n [\tilde{S}_i]$ be the corresponding projective variety of $\mathbb{P}^D$. Then $\mathcal{I}_{\Phi,k} = \mathcal{I}_{[\Phi],k}$, i.e., a degree-$k$ homogeneous polynomial vanishes on $\Phi$ if and only if it vanishes on $[\Phi]$.

As a corollary of Theorem 29 and Lemma 30, we obtain the key result of this section, which we will use in Section 5.1.

Proposition 31. Let $\Psi = \bigcup_{i=1}^n A_i$ be a union of affine subspaces of $\mathbb{R}^D$. Let $\Phi = \bigcup_{i=1}^n \tilde{S}_i$ be the union of linear subspaces of $\mathbb{R}^{D+1}$ associated to $\Psi$ under the embedding $\phi_0$ of (7). Then $\mathcal{I}_\Phi$ is the homogenization of $\mathcal{I}_\Psi$.

5 PROOFS OF MAIN THEOREMS

5.1 Proof of Theorem 15

Suppose that $\mathcal{X}$ is in general position in $\Psi$. We need to show that $\phi_0(\mathcal{X})$ is in general position in $\Phi$. In view of Proposition 6, and since the inclusion $\mathcal{I}_{\Phi,n} \subseteq \mathcal{I}_{\phi_0(\mathcal{X}),n}$ holds automatically, it is sufficient to show that $\mathcal{I}_{\phi_0(\mathcal{X}),n} \subseteq \mathcal{I}_{\Phi,n}$. To that end, let $P$ be a homogeneous polynomial of degree $n$ in $\mathbb{R}[x_0, x_1, \dots, x_D]$ that vanishes on the points $\phi_0(\mathcal{X})$, i.e., $P \in \mathcal{I}_{\phi_0(\mathcal{X}),n}$. Then for every point $\tilde{\alpha} = (1, \alpha_1, \dots, \alpha_D)$ of $\phi_0(\mathcal{X})$, we have
$$P(\tilde{\alpha}) = P(1, \alpha_1, \dots, \alpha_D) = P^d(\alpha_1, \dots, \alpha_D) = 0, \quad (31)$$
that is, the dehomogenization $P^d$ of $P$ vanishes on all points of $\mathcal{X}$, i.e., $P^d \in \mathcal{I}_{\mathcal{X}}$. Now there are two possibilities: either $P^d$ has degree $n$, in which case $P = (P^d)^h$; or $P^d$ has degree strictly less than $n$, say $n - k$ with $k \geq 1$, in which case $P = x_0^k (P^d)^h$.

If $P^d$ has total degree $n$, then by the general position assumption on $\mathcal{X}$, $P^d$ must vanish on $\Psi$. Then, by Proposition 31, $(P^d)^h \in \mathcal{I}_{\Phi,n}$, and so $P \in \mathcal{I}_{\Phi,n}$.

If $\deg P^d = n - k$, $k \geq 1$, suppose for the moment that we can find a linear form $G = \zeta^\top \tilde{x}$ that does not vanish on any of the $\tilde{S}_i$, $i \in [n]$, and is not divisible by $x_0$. Then $G^d$ has degree 1 and does not vanish on any of the $A_i$, $i \in [n]$. Moreover, $(G^d)^k P^d$ has degree $n$ and vanishes on $\mathcal{X}$. Since $\mathcal{X}$ is in general position in $\Psi$, the polynomial $(G^d)^k P^d$ vanishes on $\Psi$. Then, by Proposition 31, $\big((G^d)^k P^d\big)^h = G^k (P^d)^h \in \mathcal{I}_{\Phi,n}$. Since $\mathcal{I}_\Phi = \bigcap_{i=1}^n \mathcal{I}_{\tilde{S}_i}$, we must have $G^k (P^d)^h \in \mathcal{I}_{\tilde{S}_i}$ for every $i \in [n]$. Since $\mathcal{I}_{\tilde{S}_i}$ is a prime ideal (Proposition 21) and $G \notin \mathcal{I}_{\tilde{S}_i}$, it must be the case that $(P^d)^h \in \mathcal{I}_{\tilde{S}_i}$ for every $i \in [n]$, i.e., $(P^d)^h \in \mathcal{I}_\Phi$. But $P = x_0^k (P^d)^h$, which shows that $P \in \mathcal{I}_{\Phi,n}$.
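The homogenization and dehomogenization mechanics just used (Definition 27 and the decomposition $P = x_0^k (P^d)^h$) are simple enough to script. Below is a minimal sketch with a dictionary representation of polynomials (exponent tuple mapped to coefficient); the helper names are ours, not the paper's.

```python
# Polynomials in x1..xD as {exponent_tuple: coeff}; homogenization prepends
# an x0 exponent that pads each monomial up to the total degree n (eq. (30)).

def degree(p):
    return max(sum(e) for e in p)

def homogenize(p):
    """p^h: multiply each monomial by x0^(n - deg(monomial)), n = deg p."""
    n = degree(p)
    return {(n - sum(e),) + e: c for e, c in p.items()}

def dehomogenize(P):
    """P^d = P(1, x1, ..., xD): drop the x0 exponent, merging monomials."""
    q = {}
    for e, c in P.items():
        q[e[1:]] = q.get(e[1:], 0) + c
    return {e: c for e, c in q.items() if c != 0}

# Example 28: P = x0^2*x1 + x0*x2^2 + x1*x2*x3 (homogeneous, degree 3).
P = {(2, 1, 0, 0): 1, (1, 0, 2, 0): 1, (0, 1, 1, 1): 1}
Pd = dehomogenize(P)                  # x1 + x2^2 + x1*x2*x3, still degree 3
assert Pd == {(1, 0, 0): 1, (0, 2, 0): 1, (1, 1, 1): 1}
assert homogenize(Pd) == P            # (P^d)^h = P, as in Example 28

# When deg P^d drops below deg P, a power of x0 is left over: P = x0^k (P^d)^h.
Q = {(2, 1): 1}                       # x0^2 * x1; Q^d = x1 has degree 1, k = 2
assert dehomogenize(Q) == {(1,): 1}
```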
It remains to be shown that there exists a linear form $G$, not divisible by $x_0$, that does not vanish on any of the $\tilde{S}_i$. Suppose this is not true; then every linear form $G = b^\top x + \alpha x_0$ that is not divisible by $x_0$, i.e., with $b \neq 0$, vanishes on some $\tilde{S}_i$. In particular, for any non-zero vector $b$ of $\mathbb{R}^D$, the form $b^\top x = b^\top x + 0 \cdot x_0$ must vanish on some $\tilde{S}_i$. Recall from Section 3.2 that if $u_{i1}, \dots, u_{id_i}$ is a basis for $S_i$, the linear part of $A_i = S_i + \mu_i$, then
$$\begin{bmatrix} 1 \\ \mu_i \end{bmatrix}, \begin{bmatrix} 0 \\ u_{i1} \end{bmatrix}, \dots, \begin{bmatrix} 0 \\ u_{id_i} \end{bmatrix}$$
is a basis for $\tilde{S}_i$. Since $b^\top x$ vanishes on $\tilde{S}_i$, it must vanish on each basis vector of $\tilde{S}_i$. In particular, $b^\top u_{i1} = \cdots = b^\top u_{id_i} = 0$, which implies that the linear form $b^\top x$, now viewed as a function on $\mathbb{R}^D$, vanishes on $S_i$, i.e., $b^\top x \in \mathcal{I}_{S_i}$. To summarize, we have shown that for every $0 \neq b \in \mathbb{R}^D$ there exists an $i \in [n]$ such that $b^\top x \in \mathcal{I}_{S_i}$. Taking $b$ equal to the standard basis vector $e_1$ of $\mathbb{R}^D$, we see that the linear form $x_1$ must vanish on some $S_i$, and similarly for the linear forms $x_2, \dots, x_D$. This in turn means that the ideal $\mathfrak{m} := (x_1, \dots, x_D)$ generated by the linear forms $x_1, \dots, x_D$ must lie in the union $\bigcup_{i=1}^n \mathcal{I}_{S_i}$. But it is known from Proposition 1.11(i) in [29] that if an ideal $\mathfrak{a}$ lies in the union of finitely many prime ideals, then $\mathfrak{a}$ must lie in one of these prime ideals. Applying this result to our case, and since the $\mathcal{I}_{S_i}$ are prime ideals, we get $\mathfrak{m} \subseteq \mathcal{I}_{S_i}$ for some $i \in [n]$. But this says that all coordinates of every vector of $S_i$ must be zero, i.e., $S_i = 0$, which violates the assumption $d_i > 0$, $i \in [n]$. This contradiction proves the existence of our linear form $G$.

Now suppose that $\phi_0(\mathcal{X})$ is in general position in $\Phi$. We need to show that $\mathcal{X}$ is in general position in $\Psi$. To that end, if $p$ is a vanishing polynomial of $\Psi$ of degree $n$, then clearly $p \in \mathcal{I}_{\mathcal{X}}$. Conversely, let $p \in \mathcal{I}_{\mathcal{X}}$ be of degree $n$. Then, for each point $\alpha \in \mathcal{X}$,
$$0 = p(\alpha) = p(\alpha_1, \dots, \alpha_D) = p^h(1, \alpha_1, \dots, \alpha_D) = p^h(\tilde{\alpha}), \quad (33)$$
i.e., the homogenization $p^h$ vanishes on $\phi_0(\mathcal{X})$. By hypothesis, $\phi_0(\mathcal{X})$ is in general position in $\Phi$; hence $p^h \in \mathcal{I}_{\Phi,n}$. Then, by Proposition 31, the dehomogenization of $p^h$ must vanish on $\Psi$. But $(p^h)^d = p$, and so $p$ vanishes on $\Psi$. ∎

5.2 Proof of Theorem 16

Let $b_{i1}, \dots, b_{ic_i}$ be an orthonormal basis for $S_i^\perp$, and write $\mu_i = B_i a_i$ with $B_i := [b_{i1}, \dots, b_{ic_i}]$. Then $\tilde{b}_{i1}, \dots, \tilde{b}_{ic_i}$, where
$$\tilde{b}_{ij_i} := \begin{bmatrix} -b_{ij_i}^\top B_i a_i \\ b_{ij_i} \end{bmatrix}, \quad (34)$$
is a basis for $\tilde{S}_i^\perp$. Suppose that $\Phi$ is not transversal. Then there exists some index set $J \subseteq [n]$, say without loss of generality $J = \{1, \dots, l\}$, $l \leq n$, such that (see also Section 2.4)
$$\operatorname{rank}\big(\tilde{B}_J\big) < \min\Big\{ D + 1, \; \sum_{i \in J} c_i \Big\}, \quad (35)$$
$$\tilde{B}_J := \big[\tilde{B}_1, \dots, \tilde{B}_l\big], \quad \tilde{B}_i := \big[\tilde{b}_{i1}, \dots, \tilde{b}_{ic_i}\big], \quad (36)$$
where we have used the fact that $\operatorname{codim} \tilde{S}_i = \operatorname{codim} S_i = c_i$, $i \in [n]$. By the transversality hypothesis on the union of the linear parts $S_i$, we must have either $\operatorname{rank}(B_J) = D$ or $\operatorname{rank}(B_J) = \sum_{i \in J} c_i$, where $B_J := [B_1, \dots, B_l]$. Suppose the latter condition holds; then $\sum_{i \in J} c_i \leq D$. Then all columns of $B_J$ are linearly independent, which implies that the same is true for the columns of $\tilde{B}_J$, and so $\operatorname{rank}(\tilde{B}_J) = \sum_{i \in J} c_i$. Since by hypothesis $\sum_{i \in J} c_i \leq D$, we must have
$$\operatorname{codim}\Big( \bigcap_{i \in J} \tilde{S}_i \Big) = \operatorname{rank}\big(\tilde{B}_J\big) = \min\Big\{ D + 1, \; \sum_{i \in J} c_i \Big\}, \quad (37)$$
and so the transversality condition is satisfied for $J$, contradicting (35). Consequently, it must be the case that $\operatorname{rank}(B_J) = D < \sum_{i \in J} c_i$. Since $B_J$ is a submatrix of $\tilde{B}_J$, we must have $\operatorname{rank}(\tilde{B}_J) \geq D$. On the other hand, because of (35) we must have $\operatorname{rank}(\tilde{B}_J) \leq D$, i.e., $\operatorname{rank}(\tilde{B}_J) = D$. Now $\tilde{B}_J$ is a $(D+1) \times \sum_{i \in J} c_i$ matrix whose smaller dimension is $D + 1$. Since its rank is $D$, all of its $(D+1) \times (D+1)$ minors must vanish. The vanishing of these minors defines an algebraic variety $\mathcal{W}_J$ of the parameter space $\prod_{i=1}^n \mathbb{R}^{c_i}$, and $\Phi$ is non-transversal if and only if $(a_1, \dots, a_n) \in \mathcal{W} := \bigcup_{J \subseteq [n]} \mathcal{W}_J$. Since $\mathcal{W}$ is a finite union of algebraic varieties, it is an algebraic variety itself, i.e., it is defined by a set of polynomial equations in the variables $a_1, \dots, a_n$. ∎

6 CONCLUSIONS

We established in a rigorous fashion the correctness of ASC in the case of affine subspaces.
Using the technical framework of algebraic geometry, we showed that the homogeneous embedding of points lying in general position inside a union of affine subspaces preserves their general position. Moreover, the embedding of a transversal union of affine subspaces almost surely yields a transversal union of linear subspaces. Future research will aim at optimal realizations of the embedding in the presence of noise, at analyzing SSC for affine subspaces, and at reducing the complexity of ASC.

REFERENCES

[1] R. Vidal, "Subspace clustering," IEEE Signal Processing Magazine, vol. 28, no. 3, March 2011.
[2] P. S. Bradley and O. L. Mangasarian, "k-plane clustering," Journal of Global Optimization, vol. 16, no. 1, 2000.
[3] P. Tseng, "Nearest q-flat to m points," Journal of Optimization Theory and Applications, vol. 105, no. 1, 2000.
[4] M. Tipping and C. Bishop, "Mixtures of probabilistic principal component analyzers," Neural Computation, vol. 11, no. 2, 1999.
[5] R. Vidal, Y. Ma, and S. Sastry, "Generalized Principal Component Analysis (GPCA)," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1-15, 2005.
[6] G. Chen and G. Lerman, "Spectral curvature clustering (SCC)," International Journal of Computer Vision, vol. 81, no. 3, 2009.
[7] R. Heckel and H. Bölcskei, "Robust subspace clustering via thresholding," CoRR.
[8] R. Vidal and P. Favaro, "Low rank subspace clustering (LRSC)," Pattern Recognition Letters, vol. 43, 2014.
[9] G. Liu, Z. Lin, S. Yan, J. Sun, and Y. Ma, "Robust recovery of subspace structures by low-rank representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, January 2013.
[10] E. Elhamifar and R. Vidal, "Sparse subspace clustering: Algorithm, theory, and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, 2013.
[11] C. You, D. Robinson, and R. Vidal, "Scalable sparse subspace clustering by orthogonal matching pursuit," in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[12] C.-Y. Lu, H. Min, Z.-Q. Zhao, L. Zhu, D.-S. Huang, and S. Yan, "Robust and efficient subspace segmentation via least squares regression," in European Conference on Computer Vision, 2012.
[13] R. Livni, D. Lehavi, S. Schein, H. Nachliely, S. Shalev-Shwartz, and A. Globerson, "Vanishing component analysis," in International Conference on Machine Learning, 2013.
[14] R. Vidal, Y. Ma, and S. Sastry, "Generalized Principal Component Analysis (GPCA)," in IEEE Conference on Computer Vision and Pattern Recognition, vol. I, 2003.
[15] M. Soltanolkotabi, E. Elhamifar, and E. J. Candès, "Robust subspace clustering," Annals of Statistics, vol. 42, no. 2, 2014.
[16] R. Vidal, Y. Ma, S. Soatto, and S. Sastry, "Two-view multibody structure from motion," International Journal of Computer Vision, vol. 68, no. 1, pp. 7-25, 2006.
[17] A. Sampath and J. Shan, "Segmentation and reconstruction of polyhedral building roofs from aerial lidar point clouds," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, 2010.
[18] R. Vidal, "Recursive identification of switched ARX systems," Automatica, vol. 44, no. 9, September 2008.
[19] M. C. Tsakiris and R. Vidal, "Filtrated spectral algebraic subspace clustering," in ICCV Workshop on Robust Subspace Learning and Computer Vision, 2015.
[20] M. C. Tsakiris and R. Vidal, "Abstract algebraic-geometric subspace clustering," in Asilomar Conference on Signals, Systems and Computers, 2014.
[21] M. C. Tsakiris and R. Vidal, "Filtrated algebraic subspace clustering," SIAM Journal on Imaging Sciences, 2017, to appear.
[22] R. Tron and R. Vidal, "A benchmark for the comparison of 3-D motion segmentation algorithms," in IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[23] T. Zhang, A. Szlam, Y. Wang, and G. Lerman, "Hybrid linear modeling via local best-fit flats," International Journal of Computer Vision, vol. 100, no. 3, 2012.
[24] Y. Wang, Y.-X. Wang, and A. Singh, "Graph connectivity in noisy sparse subspace clustering," in International Conference on Artificial Intelligence and Statistics, 2016.
[25] Y. Yang, J. Feng, N. Jojic, J. Yang, and T. S. Huang, "l0-sparse subspace clustering," in European Conference on Computer Vision, 2016.
[26] R. Vidal, "Generalized principal component analysis (GPCA): an algebraic geometric approach to subspace clustering and motion segmentation," Ph.D. dissertation, University of California, Berkeley, August 2003.
[27] H. Derksen, "Hilbert series of subspace arrangements," Journal of Pure and Applied Algebra, vol. 209, no. 1, 2007.
[28] Y. Ma, A. Y. Yang, H. Derksen, and R. Fossum, "Estimation of subspace arrangements with applications in modeling and segmenting mixed data," SIAM Review, vol. 50, no. 3, 2008.
[29] M. Atiyah and I. MacDonald, Introduction to Commutative Algebra. Westview Press.
[30] D. Cox, J. Little, and D. O'Shea, Ideals, Varieties, and Algorithms. Springer.
[31] P. Bürgisser and F. Cucker, The Geometry of Numerical Algorithms. Springer Science & Business Media, vol. 349, 2013.
[32] H. Derksen and J. Sidman, "A sharp bound for the Castelnuovo-Mumford regularity of subspace arrangements," Advances in Mathematics, vol. 172, 2002.
[33] A. Conca and J. Herzog, "Castelnuovo-Mumford regularity of products of ideals," Collectanea Mathematica, vol. 54, no. 2, 2003.
[34] R. Hartshorne, Algebraic Geometry. Springer, 1977.


More information

On the Waring problem for polynomial rings

On the Waring problem for polynomial rings On the Waring problem for polynomial rings Boris Shapiro jointly with Ralf Fröberg, Giorgio Ottaviani Université de Genève, March 21, 2016 Introduction In this lecture we discuss an analog of the classical

More information

ON k-subspaces OF L-VECTOR-SPACES. George M. Bergman

ON k-subspaces OF L-VECTOR-SPACES. George M. Bergman ON k-subspaces OF L-VECTOR-SPACES George M. Bergman Department of Mathematics University of California, Berkeley CA 94720-3840, USA gbergman@math.berkeley.edu ABSTRACT. Let k L be division rings, with

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

August 23, 2017 Let us measure everything that is measurable, and make measurable everything that is not yet so. Galileo Galilei. 1.

August 23, 2017 Let us measure everything that is measurable, and make measurable everything that is not yet so. Galileo Galilei. 1. August 23, 2017 Let us measure everything that is measurable, and make measurable everything that is not yet so. Galileo Galilei 1. Vector spaces 1.1. Notations. x S denotes the fact that the element x

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Math 210B. Artin Rees and completions

Math 210B. Artin Rees and completions Math 210B. Artin Rees and completions 1. Definitions and an example Let A be a ring, I an ideal, and M an A-module. In class we defined the I-adic completion of M to be M = lim M/I n M. We will soon show

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Institutionen för matematik, KTH.

Institutionen för matematik, KTH. Institutionen för matematik, KTH. Contents 7 Affine Varieties 1 7.1 The polynomial ring....................... 1 7.2 Hypersurfaces........................... 1 7.3 Ideals...............................

More information

Topological vectorspaces

Topological vectorspaces (July 25, 2011) Topological vectorspaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ Natural non-fréchet spaces Topological vector spaces Quotients and linear maps More topological

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

MATH Linear Algebra

MATH Linear Algebra MATH 304 - Linear Algebra In the previous note we learned an important algorithm to produce orthogonal sequences of vectors called the Gramm-Schmidt orthogonalization process. Gramm-Schmidt orthogonalization

More information

Sparse representation classification and positive L1 minimization

Sparse representation classification and positive L1 minimization Sparse representation classification and positive L1 minimization Cencheng Shen Joint Work with Li Chen, Carey E. Priebe Applied Mathematics and Statistics Johns Hopkins University, August 5, 2014 Cencheng

More information

12. Hilbert Polynomials and Bézout s Theorem

12. Hilbert Polynomials and Bézout s Theorem 12. Hilbert Polynomials and Bézout s Theorem 95 12. Hilbert Polynomials and Bézout s Theorem After our study of smooth cubic surfaces in the last chapter, let us now come back to the general theory of

More information

2. Prime and Maximal Ideals

2. Prime and Maximal Ideals 18 Andreas Gathmann 2. Prime and Maximal Ideals There are two special kinds of ideals that are of particular importance, both algebraically and geometrically: the so-called prime and maximal ideals. Let

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

8. Prime Factorization and Primary Decompositions

8. Prime Factorization and Primary Decompositions 70 Andreas Gathmann 8. Prime Factorization and Primary Decompositions 13 When it comes to actual computations, Euclidean domains (or more generally principal ideal domains) are probably the nicest rings

More information

Codewords of small weight in the (dual) code of points and k-spaces of P G(n, q)

Codewords of small weight in the (dual) code of points and k-spaces of P G(n, q) Codewords of small weight in the (dual) code of points and k-spaces of P G(n, q) M. Lavrauw L. Storme G. Van de Voorde October 4, 2007 Abstract In this paper, we study the p-ary linear code C k (n, q),

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

AN EXPOSITION OF THE RIEMANN ROCH THEOREM FOR CURVES

AN EXPOSITION OF THE RIEMANN ROCH THEOREM FOR CURVES AN EXPOSITION OF THE RIEMANN ROCH THEOREM FOR CURVES DOMINIC L. WYNTER Abstract. We introduce the concepts of divisors on nonsingular irreducible projective algebraic curves, the genus of such a curve,

More information

MATH 8253 ALGEBRAIC GEOMETRY WEEK 12

MATH 8253 ALGEBRAIC GEOMETRY WEEK 12 MATH 8253 ALGEBRAIC GEOMETRY WEEK 2 CİHAN BAHRAN 3.2.. Let Y be a Noetherian scheme. Show that any Y -scheme X of finite type is Noetherian. Moreover, if Y is of finite dimension, then so is X. Write f

More information

arxiv: v2 [math.ag] 24 Jun 2015

arxiv: v2 [math.ag] 24 Jun 2015 TRIANGULATIONS OF MONOTONE FAMILIES I: TWO-DIMENSIONAL FAMILIES arxiv:1402.0460v2 [math.ag] 24 Jun 2015 SAUGATA BASU, ANDREI GABRIELOV, AND NICOLAI VOROBJOV Abstract. Let K R n be a compact definable set

More information

LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday

LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD JAN-HENDRIK EVERTSE AND UMBERTO ZANNIER To Professor Wolfgang Schmidt on his 75th birthday 1. Introduction Let K be a field

More information

Combinatorics for algebraic geometers

Combinatorics for algebraic geometers Combinatorics for algebraic geometers Calculations in enumerative geometry Maria Monks March 17, 214 Motivation Enumerative geometry In the late 18 s, Hermann Schubert investigated problems in what is

More information

Math Linear Algebra

Math Linear Algebra Math 220 - Linear Algebra (Summer 208) Solutions to Homework #7 Exercise 6..20 (a) TRUE. u v v u = 0 is equivalent to u v = v u. The latter identity is true due to the commutative property of the inner

More information

SUBSPACE CLUSTERING WITH DENSE REPRESENTATIONS. Eva L. Dyer, Christoph Studer, Richard G. Baraniuk

SUBSPACE CLUSTERING WITH DENSE REPRESENTATIONS. Eva L. Dyer, Christoph Studer, Richard G. Baraniuk SUBSPACE CLUSTERING WITH DENSE REPRESENTATIONS Eva L. Dyer, Christoph Studer, Richard G. Baraniuk Rice University; e-mail: {e.dyer, studer, richb@rice.edu} ABSTRACT Unions of subspaces have recently been

More information

div(f ) = D and deg(d) = deg(f ) = d i deg(f i ) (compare this with the definitions for smooth curves). Let:

div(f ) = D and deg(d) = deg(f ) = d i deg(f i ) (compare this with the definitions for smooth curves). Let: Algebraic Curves/Fall 015 Aaron Bertram 4. Projective Plane Curves are hypersurfaces in the plane CP. When nonsingular, they are Riemann surfaces, but we will also consider plane curves with singularities.

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

ABSTRACT NONSINGULAR CURVES

ABSTRACT NONSINGULAR CURVES ABSTRACT NONSINGULAR CURVES Affine Varieties Notation. Let k be a field, such as the rational numbers Q or the complex numbers C. We call affine n-space the collection A n k of points P = a 1, a,..., a

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Decomposing Bent Functions

Decomposing Bent Functions 2004 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 Decomposing Bent Functions Anne Canteaut and Pascale Charpin Abstract In a recent paper [1], it is shown that the restrictions

More information

ADVANCED TOPICS IN ALGEBRAIC GEOMETRY

ADVANCED TOPICS IN ALGEBRAIC GEOMETRY ADVANCED TOPICS IN ALGEBRAIC GEOMETRY DAVID WHITE Outline of talk: My goal is to introduce a few more advanced topics in algebraic geometry but not to go into too much detail. This will be a survey of

More information

Rings With Topologies Induced by Spaces of Functions

Rings With Topologies Induced by Spaces of Functions Rings With Topologies Induced by Spaces of Functions Răzvan Gelca April 7, 2006 Abstract: By considering topologies on Noetherian rings that carry the properties of those induced by spaces of functions,

More information

LECTURE 3 Matroids and geometric lattices

LECTURE 3 Matroids and geometric lattices LECTURE 3 Matroids and geometric lattices 3.1. Matroids A matroid is an abstraction of a set of vectors in a vector space (for us, the normals to the hyperplanes in an arrangement). Many basic facts about

More information

On Linear Subspace Codes Closed under Intersection

On Linear Subspace Codes Closed under Intersection On Linear Subspace Codes Closed under Intersection Pranab Basu Navin Kashyap Abstract Subspace codes are subsets of the projective space P q(n), which is the set of all subspaces of the vector space F

More information

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS J. WARNER SUMMARY OF A PAPER BY J. CARLSON, E. FRIEDLANDER, AND J. PEVTSOVA, AND FURTHER OBSERVATIONS 1. The Nullcone and Restricted Nullcone We will need

More information

CHAPTER 1. AFFINE ALGEBRAIC VARIETIES

CHAPTER 1. AFFINE ALGEBRAIC VARIETIES CHAPTER 1. AFFINE ALGEBRAIC VARIETIES During this first part of the course, we will establish a correspondence between various geometric notions and algebraic ones. Some references for this part of the

More information

Regression for sets of polynomial equations

Regression for sets of polynomial equations Franz J. Király, Paul von Bünau, Jan S. Müller, Duncan A. J. Blythe, Frank C. Meinecke, Klaus-Robert Müller Berlin Institute of Technology (TU Berlin), Machine Learning dept., Franklinstr. 28/29, 10587

More information

A Version of the Grothendieck Conjecture for p-adic Local Fields

A Version of the Grothendieck Conjecture for p-adic Local Fields A Version of the Grothendieck Conjecture for p-adic Local Fields by Shinichi MOCHIZUKI* Section 0: Introduction The purpose of this paper is to prove an absolute version of the Grothendieck Conjecture

More information

π X : X Y X and π Y : X Y Y

π X : X Y X and π Y : X Y Y Math 6130 Notes. Fall 2002. 6. Hausdorffness and Compactness. We would like to be able to say that all quasi-projective varieties are Hausdorff and that projective varieties are the only compact varieties.

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Rank Tests for the Observability of Discrete-Time Jump Linear Systems with Inputs

Rank Tests for the Observability of Discrete-Time Jump Linear Systems with Inputs Rank Tests for the Observability of Discrete-Time Jump Linear Systems with Inputs Ehsan Elhamifar Mihály Petreczky René Vidal Center for Imaging Science, Johns Hopkins University, Baltimore MD 21218, USA

More information

COUNTING NUMERICAL SEMIGROUPS BY GENUS AND SOME CASES OF A QUESTION OF WILF

COUNTING NUMERICAL SEMIGROUPS BY GENUS AND SOME CASES OF A QUESTION OF WILF COUNTING NUMERICAL SEMIGROUPS BY GENUS AND SOME CASES OF A QUESTION OF WILF NATHAN KAPLAN Abstract. The genus of a numerical semigroup is the size of its complement. In this paper we will prove some results

More information

The Multibody Trifocal Tensor: Motion Segmentation from 3 Perspective Views

The Multibody Trifocal Tensor: Motion Segmentation from 3 Perspective Views The Multibody Trifocal Tensor: Motion Segmentation from 3 Perspective Views Richard Hartley 1,2 and RenéVidal 2,3 1 Dept. of Systems Engineering 3 Center for Imaging Science Australian National University

More information

APPENDIX 3: AN OVERVIEW OF CHOW GROUPS

APPENDIX 3: AN OVERVIEW OF CHOW GROUPS APPENDIX 3: AN OVERVIEW OF CHOW GROUPS We review in this appendix some basic definitions and results that we need about Chow groups. For details and proofs we refer to [Ful98]. In particular, we discuss

More information

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d The Algebraic Method 0.1. Integral Domains. Emmy Noether and others quickly realized that the classical algebraic number theory of Dedekind could be abstracted completely. In particular, rings of integers

More information

FILTERED RINGS AND MODULES. GRADINGS AND COMPLETIONS.

FILTERED RINGS AND MODULES. GRADINGS AND COMPLETIONS. FILTERED RINGS AND MODULES. GRADINGS AND COMPLETIONS. Let A be a ring, for simplicity assumed commutative. A filtering, or filtration, of an A module M means a descending sequence of submodules M = M 0

More information

Combining Memory and Landmarks with Predictive State Representations

Combining Memory and Landmarks with Predictive State Representations Combining Memory and Landmarks with Predictive State Representations Michael R. James and Britton Wolfe and Satinder Singh Computer Science and Engineering University of Michigan {mrjames, bdwolfe, baveja}@umich.edu

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information