Sparse Learning Approach to the Problem of Robust Estimation of Camera Locations
Arnak Dalalyan and Renaud Keriven, Université Paris Est, CERTIS, Ecole des Ponts ParisTech, Marne-la-Vallée, France

Abstract

In this paper, we propose a new approach, inspired by recent advances in the theory of sparse learning, to the problem of estimating camera locations when the internal parameters and the orientations of the cameras are known. Our estimator is defined as a Bayesian maximum a posteriori with a multivariate Laplace prior on the vector describing the outliers. This leads to an estimator in which the fidelity to the data is measured by the L∞-norm, while the regularization is done by the L1-norm. Building on the papers [11, 15, 16, 14, 21, 22, 24, 18, 23] on L∞-norm minimization in multiview geometry and, on the other hand, on the papers [8, 4, 7, 2, 1, 3] on sparse recovery in the statistical framework, we propose a two-step procedure which, at the first step, identifies and removes the outliers and, at the second step, estimates the unknown parameters by minimizing the L∞ cost function. Both steps are fairly fast: the outlier removal is done by solving one linear program (LP), while the final estimation is performed by a sequence of LPs. An important difference compared to many existing algorithms is that our estimator requires specifying neither the number nor the proportion of the outliers.

1. Introduction

In the present paper, we are concerned with the structure and motion problem of multiview geometry. This problem, which has received a great deal of attention from the computer vision community in the last decade, consists in recovering a set of 3D points (structure) and a set of camera matrices (motion) when only 2D images of the aforementioned 3D points by some cameras are available. Throughout this work we assume that the internal parameters of the cameras as well as their orientations are known.
Thus, only the locations of the camera centers and of the 3D points are to be estimated. In solving the structure and motion problem by state-of-the-art methods, it is customary to start by establishing correspondences between pairs of 2D data points. We will assume in the present study that these point correspondences have already been established. From a heuristic standpoint, it is useful to think of the structure and motion problem as an inverse problem. In fact, if O denotes the operator that takes as input the set of 3D points and the set of cameras, and produces as output the 2D images of the 3D points by the cameras, then the task of the structure and motion problem is to invert the operator O. It is clear that the operator O is not injective: two different inputs may result in the same output. Fortunately, in many situations (for example, when for each pair of cameras there are at least five 3D points in general position that are seen by both cameras [20]), there is only a small number of inputs, up to an overall similarity transform, having the same image by O. In such cases, the solutions to the structure and motion problem can be found using algebraic arguments. The solutions obtained by the algebraic approach are in general very sensitive to noise in the data. Thus, very often, there is no input that could have generated the observed output, because of the noise in the measurements. A natural approach to cope with such situations consists in searching for the input providing the output closest to the observed data. In problems where the output space of the operator O is not one-dimensional, as is the case in the structure and motion problem, the choice of the metric used to measure closeness is of high importance. A standard approach [12] consists in measuring the distance between two elements of the output space by the Euclidean L2-norm.
In the structure and motion problem with more than two cameras, this leads to a hard non-convex optimization problem. A particularly elegant way of circumventing the non-convexity issues inherent in the use of the L2-norm consists in replacing it by the L∞-norm. This approach has been developed in a series of recent papers (cf. [11, 14, 21, 22, 24] and the references therein). It has been shown that, for some problems of structure and motion estimation, the use of the L∞-norm results in a pseudoconvex minimization problem, which can be solved very efficiently using, for example, the iterative bisection method that solves a convex program at each iteration.

The purpose of the present work is to introduce a new estimation procedure for the setting with noise and outliers. Our procedure is derived as a maximum a posteriori (MAP) estimator under uniformly distributed random noise and a sparsity-favoring prior on the vector of outliers. Interestingly, this study bridges the work on robust estimation in multiview geometry and the well-developed theory of sparse recovery in statistical learning [4, 7, 5, 17]. Related work on outlier removal using L∞-techniques can be found in [9, 24, 15, 18]. We refer the interested reader to the paper [24] for a comprehensive, precise mathematical background on the outlier identification problem in multiview geometry. The rest of the paper is organized as follows. The next section gives the precise formulation of the translation estimation problem, which constitutes an example of a problem to which the presented methodology can be applied. A brief review of the L∞-norm minimization algorithm, consisting in sequential optimization, is presented in Section 3. In Section 4, we introduce the statistical framework and derive a new estimation procedure as a MAP estimator. The main result on the accuracy of this procedure is stated and proved in Section 5, while Section 6 contains some numerical experiments. The conclusions of our study are summarized in Section 7.

2. Translation estimation and triangulation

Let us start by presenting a problem of multiview geometry to which our approach can be successfully applied: the problem of translation estimation and triangulation in the case of known rotations. For rotation estimation algorithms, we refer the interested reader to [19, 10] and the references therein. Let P_i, i = 1, ..., m, be a sequence of m cameras that are known up to a translation.
Recall that a camera is characterized by a 3×4 matrix P with real entries that can be written as P = K[R t], where K is an invertible 3×3 camera calibration matrix, R is a 3×3 rotation matrix and t ∈ R^3. We will refer to t as the translation of the camera P. We can thus write P_i = K_i [R_i t_i], i = 1, ..., m. For a set of unknown 3D points U_j ∈ P^3, j = 1, ..., n, we are given noise-contaminated images of each U_j by some of the cameras P_i. Thus, we have at our disposal the measurements

x_ij = (1 / e_3^T P_i U_j) [e_1^T P_i U_j ; e_2^T P_i U_j]^T + ξ_ij,   j = 1, ..., n, i ∈ I_j,   (1)

where e_l, l = 1, 2, 3, stands for the unit vector of R^3 having one as its l-th coordinate, and I_j is the set of indices of the cameras for which the point U_j is visible. We will assume that the set {U_j} does not contain points at infinity. Therefore, we can write U_j = [X_j^T 1]^T for some X_j ∈ R^3 and for every j = 1, ..., n. We are now in a position to state the problem of translation estimation and triangulation in the context of multiview geometry. It consists in recovering the 3-vectors {t_i} (translation estimation) and the 3D points {X_j} (triangulation) from the noisy measurements {x_ij ; j = 1, ..., n; i ∈ I_j} ⊂ R^2. We use the notation θ = (t_1^T, ..., t_m^T, X_1^T, ..., X_n^T)^T ∈ R^{3(m+n)} for the vector of interest.

Remark 1 (Cheirality). If the point U_j is in front of the camera P_i, then e_3^T P_i U_j ≥ 0. Furthermore, we will assume that none of the true 3D points U_j lies on the principal plane of a camera P_i. This assumption implies that e_3^T P_i U_j > 0, so that the quotients e_l^T P_i U_j / e_3^T P_i U_j, l = 1, 2, are well defined.

Remark 2 (Identifiability). The parameter θ is, in general, not identifiable from the measurements {x_ij}. In fact, for every α > 0 and for every t ∈ R^3, the parameter sets {t_i, X_j} and {α(t_i − R_i t), α(X_j + t)} generate the same measurements. To cope with this issue, we assume that t_1 = 0 and that min_{i,j} e_3^T P_i U_j = 1.
Thus, in what follows, we assume that t_1 is removed from θ, so that θ ∈ R^{3(m+n−1)}. Further assumptions ensuring identifiability are given below.

3. Estimation by L∞-minimization

This section presents results on the estimation of θ based on reprojection-error minimization. This material is essential for understanding the results that are at the core of the present work. In what follows, for every s ≥ 1, we denote by ||x||_s the L_s-norm of a vector x, i.e., ||x||_s^s = Σ_j |x_j|^s if x = (x_1, ..., x_d)^T for some d > 0. As usual, we extend this to s = +∞ by setting ||x||_∞ = max_j |x_j|.

3.1. Estimation by L∞ cost minimization

A classical method [12] for estimating the parameter θ is based on minimizing the sum of squared reprojection errors. This amounts to defining an estimator of θ as a minimizer of the cost function C_{2,2}(θ) = Σ_{i,j} ||x_ij − x_ij(θ)||_2^2, where x_ij(θ) := [e_1^T P_i U_j ; e_2^T P_i U_j]^T / e_3^T P_i U_j is the 2-vector that we would obtain if θ were the true parameter. It can also be written as

x_ij(θ) = [e_1^T K_i(R_i X_j + t_i) / e_3^T K_i(R_i X_j + t_i) ; e_2^T K_i(R_i X_j + t_i) / e_3^T K_i(R_i X_j + t_i)]^T.   (2)

The minimization of C_{2,2} is a hard nonconvex problem. In general, it does not admit a closed-form solution, and the existing iterative algorithms may often get stuck in local minima. An ingenious idea to overcome this difficulty has been proposed in recent years [11, 13]. It is based on the minimization of the L∞ cost function

C_{∞,2}(θ) = max_{j=1,...,n} max_{i ∈ I_j} ||x_ij − x_ij(θ)||_2.   (3)
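As a concrete illustration, the cost (3) is simple to evaluate for a candidate set of cameras and points. The sketch below is in Python with NumPy rather than the authors' MatLab code, and the function names are ours; it assumes the cheirality condition e_3^T P_i U_j > 0 holds for every measurement.

```python
import numpy as np

def reprojection(P, U):
    """Project a homogeneous 3D point U (4-vector) with a 3x4 camera P."""
    v = P @ U
    return v[:2] / v[2]          # well defined since e_3^T P U > 0 (cheirality)

def linf_cost(cams, points, obs, s=2):
    """C_{inf,s}: the maximum over all measurements of the s-norm residual.

    cams, points, obs are parallel lists holding, for each measurement
    (i, j), the camera P_i, the point U_j and the observed 2D point x_ij."""
    return max(np.linalg.norm(x - reprojection(P, U), ord=s)
               for P, U, x in zip(cams, points, obs))
```

For s = 2 this is exactly the cost (3); other values of s give the family (4) introduced next.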
Note that substituting the L∞-cost function for the L2-cost function has been proved to lead to improved algorithms in other estimation problems as well, cf., e.g., [6]. This cost function has the advantage of having all its sublevel sets convex. This property ensures that all minima of C_{∞,2} form a convex set and that an element of this set can be computed by solving a sequence of SOCPs [14], e.g., by the bisection algorithm presented below. This result readily generalizes to the family of cost functions

C_{∞,s}(θ) = max_{j=1,...,n} max_{i ∈ I_j} ||x_ij − x_ij(θ)||_s,   s ≥ 1.   (4)

For s = 1 and s = +∞, the minimization of C_{∞,s} can be recast as a sequence of LPs.

3.2. The bisection algorithm

We briefly describe an algorithm computing θ_s ∈ arg min_θ C_{∞,s}(θ) for any prespecified s ≥ 1. The minimization is carried out over the set of all vectors θ satisfying the cheirality condition. Let us introduce the residuals r_ij(θ) = x_ij − x_ij(θ), which can be represented as

r_ij(θ) = [a_{ij1}^T θ / c_ij^T θ ; a_{ij2}^T θ / c_ij^T θ]^T,   (5)

for some vectors a_{ijl}, c_ij of the same dimension as θ. Furthermore, as explained in Remark 2, the cheirality conditions imply the set of linear constraints c_ij^T θ ≥ 1. Thus, the problem of computing θ_s can be rewritten as

P_s :  min_{θ, γ} γ   s.t.  ||r_ij(θ)||_s ≤ γ,  c_ij^T θ ≥ 1.   (6)

Note that the inequality ||r_ij(θ)||_s ≤ γ can be replaced by ||A_ij^T θ||_s ≤ γ c_ij^T θ with A_ij = [a_{ij1} ; a_{ij2}]. Although P_s is not a convex problem, its solution can be well approximated by solving a sequence of convex feasibility problems, with fixed γ, of the form

P_{s,γ} :  find θ   s.t.  ||A_ij^T θ||_s ≤ γ c_ij^T θ,  c_ij^T θ ≥ 1.   (7)

Remark 3. Any solution of (7) is defined up to a scaling factor larger than one, i.e., if θ solves (7) then so does αθ for any α ≥ 1. In order to fix this scaling factor, one can redefine θ by setting it equal to θ/(min_ij c_ij^T θ).
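For s = ∞, each feasibility problem (7) is a plain LP. The following minimal sketch, in Python with `scipy.optimize.linprog` rather than the SeDuMi/MatLab setup used later in the paper, implements this feasibility test together with the γ-bisection described next; the flat calling convention (one a-vector per constraint row) is our simplification of the paired a_{ij1}, a_{ij2} structure.

```python
import numpy as np
from scipy.optimize import linprog

def feasible(a_rows, c_rows, gamma):
    """Feasibility test for P_{s,gamma} with s = infinity:
    |a^T th| <= gamma * c^T th  and  c^T th >= 1 for every row.
    Returns a feasible theta, or None."""
    A_ub, b_ub = [], []
    for a, c in zip(a_rows, c_rows):
        A_ub.append(a - gamma * c);  b_ub.append(0.0)   #  a^T th - g c^T th <= 0
        A_ub.append(-a - gamma * c); b_ub.append(0.0)   # -a^T th - g c^T th <= 0
        A_ub.append(-c);             b_ub.append(-1.0)  #  c^T th >= 1
    M = len(a_rows[0])
    res = linprog(np.zeros(M), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * M)            # zero objective: pure feasibility
    return res.x if res.success else None

def bisection(a_rows, c_rows, lo=0.0, hi=1e3, eps=1e-6):
    """Bisection on gamma (Steps 3-5 of the algorithm described below)."""
    theta = None
    while hi - lo > eps:
        g = 0.5 * (lo + hi)
        x = feasible(a_rows, c_rows, g)
        if x is None:
            lo = g                  # gamma too small: tighten from below
        else:
            theta, hi = x, g        # feasible: record theta, tighten from above
    return theta, hi
```

On a toy one-dimensional instance with a = c = (1), the constraint |θ| ≤ γθ with θ ≥ 1 is feasible exactly when γ ≥ 1, so the bisection converges to γ ≈ 1.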
Given a small positive number ε controlling the accuracy of approximation, the bisection algorithm reads as follows:

Step 1: Compute a θ satisfying the cheirality conditions, e.g., by solving a linear feasibility problem.
Step 2: Set γ_l = 0 and γ_u = C_{∞,s}(θ).
Step 3: Set γ = (γ_l + γ_u)/2.
Step 4: If P_{s,γ} has no solution, set γ_l = γ. Otherwise, replace the current value of θ by a solution to P_{s,γ} and set γ_u = C_{∞,s}(θ).
Step 5: If γ_u − γ_l < ε, then assign to θ_s the current value of θ and terminate. Otherwise, go to Step 3.

4. Robust estimation by linear programming

This and the next sections contain the main theoretical contribution of the present work. We start with the precise formulation of the statistical model and define a maximum a posteriori (MAP) estimator.

4.1. The statistical model

Let us first observe that, in view of (1) and (5), the model we are considering can be rewritten as

[a_{ij1}^T θ* / c_ij^T θ* ; a_{ij2}^T θ* / c_ij^T θ*]^T = ξ_ij,   j = 1, ..., n; i ∈ I_j.   (8)

Let N = 2 Σ_{j=1}^n |I_j| be the total number of measurements and let M = 3(n + m − 1) be the size of the vector θ*. Let us denote by A (resp. C) the M × N matrix formed by the concatenation of the column vectors a_{ijl} (resp. c_ij; see footnote 1). Similarly, let us denote by ξ the N-vector formed by concatenating the vectors ξ_ij. In this notation, Eq. (8) is equivalent to

a_p^T θ* = (c_p^T θ*) ξ_p,   p = 1, ..., N.   (9)

This equation defines the statistical model in the case where there are no outliers or, equivalently, all the measurements are inliers. In order to extend this model to the situation where some outliers are present in the measurements, we assume that there is another vector, ω* ∈ R^N, such that ω*_p = 0 if the p-th measurement is an inlier, and

a_p^T θ* = ω*_p + (c_p^T θ*) ξ_p,   p = 1, ..., N.   (10)

It is convenient to write these equations in matrix form:

A^T θ* = ω* + diag(C^T θ*) ξ,   (11)

where, as usual, for every vector v, diag(v) is the diagonal matrix having the components of v as diagonal entries.
Statement of the problem: Given the matrices A and C, estimate the parameter vector β* = [θ*^T ; ω*^T]^T based on the following prior information:

Footnote 1: To get a matrix of the same size as A, each column of the matrix C is duplicated.
C1: Eq. (11) holds with some small noise vector ξ;
C2: min_p c_p^T θ* = 1 (cf. the discussion preceding Eq. (6));
C3: ω* is sparse, i.e., only a small number of coordinates of the vector ω* are different from zero.

4.2. Sparsity prior and MAP estimator

To derive an estimator of the parameter β*, we place ourselves in the Bayesian framework. To this end, we impose a probabilistic structure on the noise vector ξ and introduce a prior distribution on the unknown vector β*. Since the noise ξ represents the difference (in pixels) between the measurements and the true image points, it is naturally bounded and generally does not exceed the level of a few pixels. In view of this boundedness, it is reasonable to assume that the components of ξ are uniformly distributed in some compact set of R^2, centered at the origin. We assume in what follows that the subvectors ξ_ij of ξ are uniformly distributed in the square [−σ, σ]^2 and are mutually independent. This implies that all the coordinates of ξ are independent. Now we turn to the choice of the prior. Since the only information on θ* is that the cheirality and identifiability constraints should be satisfied, we define the prior on θ as the uniform distribution on the polytope P = {θ ∈ R^M : C^T θ ≥ 1}, where the inequality is understood componentwise. (For simplicity, we assume in this discussion that P is bounded.) The density of this distribution is p_1(θ) ∝ 1_P(θ), where ∝ stands for proportionality and 1_P(θ) = 1 if θ ∈ P and 0 otherwise. The task of choosing a prior on ω is more delicate, in that it should reflect the information that ω is sparse. The most natural prior would be one having a density which is a decreasing function of the L0-norm of ω, i.e., of the number of its nonzero coefficients. It is, however, well known that the computation of estimators based on this type of prior is NP-hard.
A very powerful approach for overcoming this difficulty, extensively studied in the statistical literature during the past decade ([4, 7, 1] and the references therein), relies on using the L1-norm instead of the L0-norm. Following this idea, we define the prior on ω by the density p_2(ω) ∝ f(||ω||_1), where f is some decreasing function (see footnote 2). Assuming in addition that θ and ω are independent, we get the following prior on β:

π(β) = π(θ; ω) ∝ 1_P(θ) f(||ω||_1).   (12)

Theorem 1. Assume that the noise ξ has independent entries which are uniformly distributed in [−σ, σ] for some σ > 0. Then the MAP estimator β̂ = [θ̂^T ; ω̂^T]^T based on the prior π defined by Eq. (12) is the solution of the optimization problem

LP_σ :  min ||ω||_1   s.t.  |a_p^T θ − ω_p| ≤ σ c_p^T θ for all p,  c_p^T θ ≥ 1 for all p.   (13)

Footnote 2: The most common choice is f(x) = e^{−x}, corresponding to the multivariate Laplace density.

Proof. Under the probabilistic assumption made on the vector ξ, the conditional probability density f_β of the data X = {x_ij, j = 1, ..., n; i ∈ I_j} given the parameter vector β is f_β(X) ∝ Π_{p=1}^N 1_{|a_p^T θ − ω_p| ≤ σ c_p^T θ}. Consequently, the posterior probability density is given by

f_{β|X}(β) ∝ Π_{p=1}^N 1_{|a_p^T θ − ω_p| ≤ σ c_p^T θ} · 1_P(θ) f(||ω||_1).   (14)

Since f is a decreasing function, it is obvious that the set of most likely values w.r.t. this posterior distribution coincides with the set of solutions to the problem LP_σ.

Remark 4 (Condition C2). One easily checks that any solution of LP_σ satisfies condition C2. Indeed, if for some solution β̂ this were not the case, then min_p c_p^T θ̂ > 1. Therefore, β̂ / min_p c_p^T θ̂ would satisfy the constraints of LP_σ and its ω-part would have a smaller L1-norm than ω̂, which contradicts the fact that β̂ solves LP_σ.

Remark 5 (The role of σ). In the definition of β̂, σ is a free parameter that can be interpreted as the level of separation of inliers from outliers.
In fact, the proposed algorithm implicitly assumes that all the measurements x_ij for which ||ξ_ij||_∞ > σ are outliers, while all the others are treated as inliers. In the case where σ is unknown, a reasonable way of proceeding is to impose a prior distribution on the possible values of σ and to define the estimator β̂ as a MAP estimator based on the prior incorporating the uncertainty on σ. When there are no outliers and the prior on σ is decreasing, this approach leads to the estimator minimizing the L∞ cost function (see Section 3). In the presence of outliers, the role of the shape of the prior on σ, as well as the shape of the function f (see Eq. (12)), becomes more important for the definition of the estimator. This is an interesting point for future investigation.

4.3. Two-step procedure

Building on the arguments presented in the previous subsection, we introduce the following two-step algorithm.

Input: {a_p, c_p ; p = 1, ..., N} and σ.
Step 1: Compute [θ̂^T ; ω̂^T]^T as a solution to LP_σ, cf. (13).
Step 2: Define J = {p : ω̂_p = 0} and apply the bisection algorithm to the reduced data set {x_p ; p ∈ J}.
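Step 1 reduces to a standard LP once the L1 objective is linearized with slack variables u ≥ |ω|. A minimal sketch of LP_σ (13) in this form, again in Python/scipy as a stand-in for the authors' SeDuMi implementation (the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def lp_sigma(A, C, sigma):
    """Solve LP_sigma (13): min ||omega||_1 subject to
    |a_p^T th - omega_p| <= sigma c_p^T th  and  c_p^T th >= 1.
    Here A and C are (N, M) arrays whose rows are a_p and c_p;
    the variable vector is z = [theta (M), omega (N), u (N)] with u >= |omega|."""
    N, M = A.shape
    cost = np.concatenate([np.zeros(M + N), np.ones(N)])  # minimize sum(u)
    I, Z, ZM = np.eye(N), np.zeros((N, N)), np.zeros((N, M))
    rows = [np.hstack([A - sigma * C, -I, Z]),    #  a^T th - w - s c^T th <= 0
            np.hstack([-A - sigma * C, I, Z]),    # -a^T th + w - s c^T th <= 0
            np.hstack([ZM, I, -I]),               #  w - u <= 0
            np.hstack([ZM, -I, -I]),              # -w - u <= 0
            np.hstack([-C, Z, Z])]                #  c^T th >= 1
    rhs = np.concatenate([np.zeros(4 * N), -np.ones(N)])
    res = linprog(cost, A_ub=np.vstack(rows), b_ub=rhs,
                  bounds=[(None, None)] * (M + N) + [(0, None)] * N)
    return res.x[:M], res.x[M:M + N]              # (theta_hat, omega_hat)
```

On a toy instance with one inlier-consistent direction and one gross measurement, the recovered ω̂ is zero on the inliers and nonzero only on the outlier, which is exactly the support used in Step 2.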
Two observations are in order. First, when applying the bisection algorithm at Step 2, we can use C_{∞,s}(θ̂) as the initial value of γ_u. The second observation is that a better way of proceeding would be to minimize a weighted L1-norm of ω, where the weight assigned to ω_p is inversely proportional to the depth c_p^T θ*. Since θ* is unknown, a reasonable strategy consists in adding a step between Step 1 and Step 2 which performs the weighted minimization with the weights {(c_p^T θ̂)^{−1} ; p = 1, ..., N}.

5. Accuracy of estimation

Let us introduce some additional notation. Recall the definition of P and set P̄ = {θ ∈ P : min_p c_p^T θ = 1} and D = {θ − θ' : θ, θ' ∈ P̄}. For every subset of indices J ⊂ {1, ..., N}, we denote by A_J the M × N matrix obtained from A by replacing the columns with indices outside J by zero. Furthermore, let us define

δ_S = max_{J : |J| ≤ S} sup_{g ∈ D} ||A_J^T g||_2 / ||A^T g||_2,   S ≤ N,   (15)

where |J| is the cardinality of J and the convention 0/0 = 0 is used. One easily checks that δ_S is well defined and is less than or equal to 1 for every S. Moreover, the mapping S ↦ δ_S is nondecreasing.

Assumption A: The real number λ defined by λ = min_{g ∈ D} ||A^T g||_2 / ||g||_2 is strictly positive.

It is worth mentioning that Assumption A is necessary for identifying the parameter vector θ*. In fact, if we consider the case without outliers, ω* = 0, and if Assumption A is not fulfilled, then there is a vector g ∈ D such that ||A^T g||_2 is much smaller than ||g||_2. That is, given the matrices A and C, there are two vectors θ_1 and θ_2 satisfying min_p c_p^T θ_j = 1, j = 1, 2, and such that ||A^T(θ_1 − θ_2)||_2 is small while ||θ_1 − θ_2||_2 is not. Therefore, if θ_1 happens to be the true parameter vector satisfying C1 and C3, then θ_2 satisfies these conditions as well but is substantially different from θ_1. As a consequence, the true vector cannot be accurately estimated from the matrices A and C.
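Since ||A^T g||_2 ≥ s_min(A) ||g||_2 for every g, where s_min denotes the smallest singular value, λ is bounded below by s_min(A); this quantity is positive exactly when AA^T is nonsingular, which is the simple sufficient condition for Assumption A mentioned just below. A one-line numerical probe (our helper name, as a sketch):

```python
import numpy as np

def assumption_a_lower_bound(A):
    """Lower bound on lambda = min_g ||A^T g||_2 / ||g||_2: the smallest
    singular value of A. It is positive iff AA^T is nonsingular, i.e. the
    sufficient condition for Assumption A holds."""
    return np.linalg.svd(A, compute_uv=False).min()
```

The bound is conservative (λ is a minimum over the restricted set D, not over all of R^M), so a positive value certifies Assumption A while a zero value is merely inconclusive for the restricted minimum.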
Note also that a simple sufficient condition for Assumption A to be fulfilled is that the smallest eigenvalue of AA^T is strictly positive.

5.1. The noise-free case

To evaluate the quality of estimation, we first place ourselves in the case where σ = 0. The estimator β̂ of β* is then given as a solution to the optimization problem

min ||ω||_1 over β = [θ^T ; ω^T]^T   s.t.  A^T θ = ω,  C^T θ ≥ 1.   (16)

Theorem 2. Let Assumption A be fulfilled and let δ_S + δ_{2S} < 1 for some S ∈ {1, ..., N}. Then, for some constant C, it holds that

||β̂ − β*||_2 ≤ C ||ω* − ω*_S||_1,   (17)

where ω*_S stands for the vector ω* with all but the S largest entries set to zero. In particular, if ω* has no more than S nonzero entries, then the estimation is exact: β̂ = β*.

Proof. We set h = ω̂ − ω* and g = θ̂ − θ*. It follows from Remark 4 that g ∈ D. From now on, h_T stands for the vector equal to h on an index set T and zero elsewhere. Let T (resp. T_1) denote the index set corresponding to the locations of the S largest entries (in absolute value) of ω* (resp. of h_{T^c}, where T^c is the complement of T). To proceed with the proof, we need the following auxiliary result.

Lemma 1. Let v ∈ R^d be some vector and let S ≤ d be a positive integer. If we denote by T the indices of the S largest entries of v (absolute values being understood componentwise), then ||v_{T^c}||_2 ≤ S^{−1/2} ||v||_1.

Proof of Lemma 1. Let us denote by T_1 the index set of the S largest entries of v_{T^c}, by T_2 the index set of the next S largest entries of v_{T^c}, and so on. By the triangle inequality, ||v_{T^c}||_2 ≤ Σ_{j ≥ 1} ||v_{T_j}||_2. On the other hand, one easily checks that |v_l|^2 ≤ |v_l| ||v_{T_{j−1}}||_1 / S for every l ∈ T_j, with the convention T_0 = T. This implies that ||v_{T_j}||_2^2 ≤ ||v_{T_j}||_1 ||v_{T_{j−1}}||_1 / S for every j ≥ 1. Taking the square root of these inequalities and summing over j, we get the desired result in view of the obvious inequality ||v_{T_j}||_1 ≤ ||v_{T_{j−1}}||_1.

Applying Lemma 1 to the vector v = h_{T^c} and to the index set T_1, we get

||h_{(T ∪ T_1)^c}||_2 ≤ S^{−1/2} ||h_{T^c}||_1.   (18)

On the other hand, summing up the inequalities ||h_{T^c}||_1 ≤ ||(ω* − h)_{T^c}||_1 + ||ω*_{T^c}||_1 and ||ω*_T||_1 ≤ ||(ω* − h)_T||_1 + ||h_T||_1, and using the relation ||(ω* − h)_T||_1 + ||(ω* − h)_{T^c}||_1 = ||ω* − h||_1 = ||ω̂||_1, we get

||h_{T^c}||_1 + ||ω*_T||_1 ≤ ||ω̂||_1 + ||ω*_{T^c}||_1 + ||h_T||_1.   (19)

Since β* satisfies the constraints of the optimization problem (16), a solution of which is β̂, we have ||ω̂||_1 ≤ ||ω*||_1. This inequality, in conjunction with (18) and (19), implies

||h_{(T ∪ T_1)^c}||_2 ≤ S^{−1/2} ||h_T||_1 + 2 S^{−1/2} ||ω*_{T^c}||_1 ≤ ||h_T||_2 + 2 S^{−1/2} ||ω*_{T^c}||_1,   (20)
where the last line follows from the Cauchy-Schwarz inequality. Once again using the fact that both β̂ and β* satisfy the constraints of (16), we get h = A^T g. Therefore,

||h||_2 ≤ ||h_{T ∪ T_1}||_2 + ||h_{(T ∪ T_1)^c}||_2 ≤ ||h_{T ∪ T_1}||_2 + ||h_T||_2 + 2 S^{−1/2} ||ω*_{T^c}||_1 = ||A_{T ∪ T_1}^T g||_2 + ||A_T^T g||_2 + 2 S^{−1/2} ||ω*_{T^c}||_1 ≤ (δ_{2S} + δ_S) ||A^T g||_2 + 2 S^{−1/2} ||ω*_{T^c}||_1 = (δ_{2S} + δ_S) ||h||_2 + 2 S^{−1/2} ||ω*_{T^c}||_1.   (21)

Since ω*_{T^c} = ω* − ω*_S, the last inequality yields

||h||_2 ≤ ( 2 S^{−1/2} / (1 − δ_S − δ_{2S}) ) ||ω* − ω*_S||_1.   (22)

To complete the proof, it suffices to observe that

||β̂ − β*||_2 ≤ ||g||_2 + ||h||_2 ≤ λ^{−1} ||A^T g||_2 + ||h||_2 = (λ^{−1} + 1) ||h||_2 ≤ C ||ω* − ω*_S||_1,   (23)

by virtue of inequality (22).

We emphasize that the constant C is rather small. For example, if δ_S + δ_{2S} = 0.5, then it can be deduced from the proof of Theorem 2 that max( ||ω̂ − ω*||_2, ||A^T(θ̂ − θ*)||_2 ) ≤ (4/√S) ||ω* − ω*_S||_1.

5.2. The noisy case

The assumption σ = 0 is an idealization of reality that has the advantage of simplifying the mathematical derivations. While such a simplified setting is useful for conveying the main ideas behind the proposed methodology, it is of major practical importance to discuss extensions to the more realistic noisy model. Because of space limitations, we only state the result which is the analogue of Theorem 2 in the case σ > 0.

Theorem 3. Let us denote by ξ̂ the vector of estimated residuals satisfying A^T θ̂ = ω̂ + diag(C^T θ̂) ξ̂. If for some ε > 0 we have max( ||diag(C^T θ̂) ξ̂||_2 ; ||diag(C^T θ*) ξ||_2 ) ≤ ε, then

||β̂ − β*||_2 ≤ C ||ω* − ω*_S||_1 + C_1 ε,   (24)

where C and C_1 are numerical constants. The proof of this result is very similar to that of Theorem 2 and is therefore left to the interested reader.

6. Numerical illustration

We implemented the algorithm in MatLab, using the SeDuMi package for solving linear programs [26]. Since in the present setting the matrices A and C are sparse (each column of these matrices contains at most 6 non-zero entries), the execution times are reasonably small even for large data sets.

Figure 1. Left: the first image of the database. Right: the positions of the cameras and the scene points estimated by L∞-norm minimization.

To demonstrate the proposed methodology, we applied our algorithm of robust estimation to the well-known dinosaur sequence (footnote 4). This sequence consists of 36 images of a dinosaur on a turntable; see Fig. 1 for the first image. The 2D image points tracked across the image sequence and the projection matrices of the 36 cameras are provided as well. For solving the problem of translation estimation and triangulation, we make use only of the first three columns of the projection matrices. There are at least two factors making the analysis of the dinosaur data difficult. The first one is the size of the data set: the number of tracked image points and of corresponding real-world points is such that there are more than 15,000 unknown parameters. The second factor is the presence of outliers, which causes the failure of the original bisection algorithm. As shown in Fig. 1, the estimated camera centers are not on the same plane and it is difficult to recognize the dinosaur in the scatter plot of the estimated 3D points. Furthermore, the maximal reprojection error in this example is equal to 63 pixels. We ran our procedure of robust estimation on this data set with σ = 0.5 pixel. We applied the following rule for detecting outliers: if |ω̂_p| / c_p^T θ̂ is larger than σ/4, then the p-th measurement is considered an outlier and is removed. The corresponding 3D scene point is also removed if, after the step of outlier removal, it is seen by only one camera. This resulted in the removal of more than 1,000 image points and of 297 scene points. The plots of Fig. 2 show the estimated camera centers and estimated scene points. We see, in particular, that the camera centers are almost coplanar. Note that in this example the second step of the procedure described in Section 4.3 does not improve on the estimator computed at the first step. Thus, an accurate estimate is obtained by solving only one linear program.
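The detection rule above is a one-line thresholding of the LP_σ solution. A sketch (our naming), given estimates θ̂ and ω̂:

```python
import numpy as np

def flag_outliers(C, theta_hat, omega_hat, sigma):
    """Rule used above: measurement p is declared an outlier when
    |omega_p| / (c_p^T theta) exceeds sigma / 4. C holds the c_p as rows;
    the depths c_p^T theta are positive by cheirality."""
    depth = C @ theta_hat
    return np.abs(omega_hat) / depth > sigma / 4.0
```

The returned boolean mask selects the measurements to discard before the second (bisection) step.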
An additional difficulty of this sequence of images is that there are some wrong scene points which have small reprojection error. It is noteworthy that the number of this 4 vgg/data1.html
7 Figure 2. Left: the positions of cameras and the scene points estimated by our method. Right: zoom on the scene points. Figure 4. Upper view of the scene points estimated by the method from [15] (left panel), by our method (central panel) and after removing the points seen by only two cameras (right panel). Figure 3. The scene points estimated by our method after removing the points seen by only two cameras. kind of outliers can be drastically reduced by considering only those 3D scene points for which we have at least three 2D image points. For the dinosaur sequence, this reduces the number of scene points from to These points are plotted in Fig. 3, which shows that there is no flagrant outlier in this reduced data set. For comparison, we ran on the same data the procedures proposed by [24, 15] that we will refer to as SH-procedure and KK-procedure, respectively. To remove 1,5 outliers, the SH-procedure required more than 5 cycles, each cycles consisting of a bisection algorithm containing between 5 and 1 LPs. The resulting estimator had a maximal reprojection error equal to 1.33 pixel. The boxplots of the errors for estimated camera locations are shown in Fig. 5. Concerning the KK-procedure [15], it produces an estimator which is nearly as accurate as the one presented in this paper, but requires a good estimate of the number N O of outliers, does not contain a step of outlier removal and needs solving more LPs to find the estimator. In this example, we ran the KK-procedure with m = N N O = 15., which is approximately the number of inliers detected by our method. The results presented in Fig. 4 and 5 show that our procedure compares favorably to that of [15]. This experiment clearly demonstrated the competitivity of the proposed methodology with the approaches proposed in [24] and [15] both in terms of the accuracy and the execution time. Similar behavior have been observed on other datasets, see Fig Figure 5. 
Boxplots of the errors of the estimated camera locations. Right: our procedure vs KK-procedure [15]. Left: our procedure vs SH-procedure. 7. Discussion In this paper, we have shown that the methodology developped for learning sparse representations can be successfully applied to the estimation problems of multiview geometry that are affected by outliers. A rigorous Bayesian framework for the problem of translation estimation and triangulation have been proposed, that have leaded to a new robust estimation procedure. The proposed estimator exploits the sparse nature of the vector of outliers through L 1 - norm minimization. We have given the mathematical proof of the result demonstrating the efficiency of the proposed estimator under some assumptions. The relaxation of these assumptions is an interesting theoretical problem that is the object of an ongoing work. Real data analysis conducted on the dinosaur sequence supports our theoretical results and show that our procedure is competitive with the most recent robust estimation procedures. References [1] P. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector, 28. Annals of Statistics, to appear. [2] E. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Ann. Statist.,
8 Figure 6. The Fountain-P11 database [25]. Top: two images out of 11. Bottom left: estimated (by our method) cameras and scene points. Bottom right: true cameras and estimated scene points. Figure 7. The Herz-Jesu-P25 database [25]. Top: two images out of 25. Bottom left: estimated (by our method) cameras and scene points. Bottom right: true cameras and estimated scene points. 35(6): , 27. [3] E. J. Candès and P. A. Randall. Highly robust error correction by convex programming. IEEE Trans. Inform. Theory, 54(7): , 28. [4] E. J. Candès, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8): , 26. [5] A. Dalalyan and A. Tsybakov. Sparse regression learning by aggregation and langevin monte-carlo. 22th Annual Conference on Learning Theory, 29. [6] A. S. Dalalyan, A. Juditsky, and V. Spokoiny. A new algorithm for estimating the effective dimension-reduction subspace. Journal of Machine Learning Research, 9: , Aug. 28. [7] D. Donoho, M. Elad, and V. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52(1):6 18, 26. [8] D. L. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory, 47(7): , 21. [9] O. Enqvist and F. Kahl. Robust optimal pose estimation. In ECCV, pages I: , 28. [1] R. Hartley and F. Kahl. Global optimization through rotation space search. IJCV, 29. [11] R. I. Hartley and F. Schaffalitzky. L minimization in geometric reconstruction problems. In CVPR (1), pages 54 59, 24. [12] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, June 24. [13] F. Kahl. Multiple view geometry and the L -norm. In ICCV, pages IEEE Computer Society, 25. [14] F. Kahl and R. I. Hartley. Multiple-view geometry under the L norm. IEEE Trans. Pattern Analysis and Machine Intelligence, 3(9): , sep 28. [15] T. Kanade and Q. Ke. 
Quasiconvex optimization for robust geometric reconstruction. In ICCV, 2005.
[16] Q. Ke and T. Kanade. Uncertainty models in quasiconvex optimization for geometric reconstruction. In CVPR, 2006.
[17] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10:777–801, 2009.
[18] H. D. Li. A practical algorithm for L∞ triangulation with outliers. In CVPR, pages 1–8, 2007.
[19] D. Martinec and T. Pajdla. Robust rotation and translation estimation in multiview reconstruction. In CVPR, pages 1–8, 2007.
[20] D. Nistér. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell., 26(6), 2004.
[21] C. Olsson, A. P. Eriksson, and F. Kahl. Efficient optimization for L∞ problems using pseudoconvexity. In ICCV, pages 1–8, 2007.
[22] Y. D. Seo and R. I. Hartley. A fast method to minimize L∞ error norm for geometric vision problems. In ICCV, pages 1–8, 2007.
[23] Y. D. Seo, H. J. Lee, and S. W. Lee. Sparse structures in L-infinity norm minimization for structure and motion reconstruction. In ECCV, 2008.
[24] K. Sim and R. Hartley. Removing outliers using the L∞ norm. In CVPR, 2006.
[25] C. Strecha, W. von Hansen, L. V. Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In CVPR, pages 1–8, 2008.
[26] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw., 11/12(1–4), 1999.
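The two-step procedure summarized in the Discussion (L1-based outlier identification by a single LP, followed by L∞-norm estimation on the retained measurements) can be sketched on a simplified linear analogue. The sketch below is illustrative only: in the paper the residuals are quasiconvex reprojection errors and the L∞ step requires a sequence of LPs (solved there with SeDuMi), whereas in a linear model each step reduces to one LP; the function name `robust_two_step`, the linear model y = Xθ + ω + ξ with sparse outlier vector ω, and the zero-threshold on ω are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def robust_two_step(X, y, sigma):
    """Simplified linear analogue of the two-step robust estimator.

    Step 1 (outlier removal, one LP):
        min ||w||_1  subject to  ||y - X@theta - w||_inf <= sigma,
    where the sparse vector w absorbs gross outliers.
    Step 2 (final estimation, one LP in this linear setting):
        min_theta ||y_in - X_in@theta||_inf  over the retained inliers.
    """
    n, p = X.shape
    # Step 1: variables z = [theta (p), w (n), t (n)]; minimize sum(t),
    # where t bounds |w| componentwise.
    c = np.concatenate([np.zeros(p + n), np.ones(n)])
    I, Z = np.eye(n), np.zeros((n, p))
    A_ub = np.vstack([
        np.hstack([Z,  I, -I]),                 #  w - t <= 0
        np.hstack([Z, -I, -I]),                 # -w - t <= 0
        np.hstack([-X, -I, np.zeros((n, n))]),  #  y - X theta - w <= sigma
        np.hstack([ X,  I, np.zeros((n, n))]),  # -(y - X theta - w) <= sigma
    ])
    b_ub = np.concatenate([np.zeros(2 * n), sigma - y, sigma + y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (p + 2 * n))
    w = res.x[p:p + n]
    inliers = np.abs(w) <= 1e-6  # measurements needing nonzero w are outliers

    # Step 2: variables [theta (p), s]; minimize the max residual s.
    Xi, yi = X[inliers], y[inliers]
    m = Xi.shape[0]
    c2 = np.concatenate([np.zeros(p), [1.0]])
    A2 = np.vstack([
        np.hstack([ Xi, -np.ones((m, 1))]),  #  Xi theta - yi <= s
        np.hstack([-Xi, -np.ones((m, 1))]),  # -(Xi theta - yi) <= s
    ])
    b2 = np.concatenate([yi, -yi])
    res2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=[(None, None)] * (p + 1))
    return res2.x[:p], np.flatnonzero(~inliers)
```

Note that, as in the paper's procedure, neither the number nor the proportion of outliers is specified: the L1 step itself decides which components of ω are nonzero.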