
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 34, NO. 7, JULY 2012

Polynomial Eigenvalue Solutions to Minimal Problems in Computer Vision

Zuzana Kukelova, Member, IEEE, Martin Bujnak, Member, IEEE, and Tomas Pajdla, Member, IEEE

Abstract—We present a method for solving systems of polynomial equations appearing in computer vision. This method is based on polynomial eigenvalue solvers and is more straightforward and easier to implement than the state-of-the-art Gröbner basis method, since eigenvalue problems are well studied, easy to understand, and efficient and robust algorithms for solving these problems are available. We provide a characterization of problems that can be efficiently solved as polynomial eigenvalue problems (PEPs) and present a resultant-based method for transforming a system of polynomial equations to a polynomial eigenvalue problem. We propose techniques that can be used to reduce the size of the computed polynomial eigenvalue problems. To show the applicability of the proposed polynomial eigenvalue method, we present polynomial eigenvalue solutions to several important minimal relative pose problems.

Index Terms—Structure from motion, relative camera pose, minimal problems, polynomial eigenvalue problems.

1 INTRODUCTION

Problems of estimating relative or absolute camera pose [12] from image correspondences can be formulated as minimal problems and solved from a minimal number of image points [9]. These minimal solutions [29], [38], [36], [9], [2], [3] are used in many applications such as 3D reconstruction [34], [33] and structure from motion, since they are very effective as hypothesis generators in the RANSAC paradigm [9] or can be used for initializing bundle adjustment [12]. Many new minimal problems [36], [37], [38], [11], [15], [2], [3], [13], [18], [4] which have been solved recently lead to nontrivial systems of polynomial equations. A popular method for solving such systems is based on polynomial ideal theory and Gröbner bases [6]. The Gröbner basis method was used to solve almost all previously mentioned minimal problems, including the well-known 5-pt relative pose problem [38], the 6-pt equal focal length problem [36], the 6-pt problem for one fully calibrated and one up to focal length calibrated camera [3], and the problem of estimating relative pose and one-parameter radial distortion from 8-pt correspondences [15].

The Gröbner basis approach is general but not always straightforward, and often cannot be easily used to create new solvers or to reimplement existing ones. This is mainly because the existing general Gröbner basis algorithms [6] cannot be directly used to create efficient solvers for computer vision problems, and therefore special algorithms for concrete problems have to be designed to achieve numerical robustness and computational efficiency.
An automatic generator of Gröbner basis solvers proposed in [16] can be used as a black box and helps considerably in constructing new solvers. However, additional expert knowledge is often necessary when dealing with more difficult problems to generate a useful solution.

In this paper, we present an alternative method for solving systems of polynomial equations appearing in computer vision, based on polynomial eigenvalue solvers [1]. This method is in some sense more straightforward and easier to implement than the Gröbner basis method, since eigenvalue problems are well studied, easy to understand, and efficient and robust algorithms for solving these problems [1] can be directly used to solve concrete computer vision problems. The polynomial eigenvalue method was previously used to solve several problems in computer vision, like the problem of autocalibration of one-parameter radial distortion from 9 point correspondences [10], or to estimate the paracatadioptric camera model from image matches [28]. Motivated by these examples, we provide here a characterization of problems that can be efficiently solved as polynomial eigenvalue problems (PEPs) and present a resultant-based method for transforming a system of polynomial equations to a polynomial eigenvalue problem. The resultant-based method presented in this paper is not as general as the Gröbner basis method, but it is very simple and straightforward and can be applied to many systems of polynomial equations. We also propose techniques for reducing the size of polynomial eigenvalue problems and suggest how transforming systems of polynomial equations to polynomial eigenvalue problems can be done in general. To show the applicability of our approach, we present polynomial eigenvalue solutions to several minimal relative pose problems.

We show that the 5-pt relative pose problem, the 6-pt equal focal length problem, the 6-pt problem for one calibrated and one up to focal length calibrated camera, and the problem of estimating relative pose and one-parameter radial distortion from 8 point correspondences can be solved robustly and efficiently as polynomial eigenvalue problems. These solutions are fast and more stable than previous solutions [29], [38], [36], [15]. They are in some sense also more straightforward and easier to implement, since these solvers only require collecting some coefficient matrices and then calling an existing efficient eigenvalue solver.

This paper extends [17], [3], [15], [19]. The main contribution is in proposing a resultant-based method for transforming a large class of systems of polynomial equations to a polynomial eigenvalue problem, presenting two techniques for reducing the size of this polynomial eigenvalue problem, and providing more efficient polynomial eigenvalue solutions to the problems presented in [17], [3], and [15].

The paper is organized as follows: First, we introduce polynomial eigenvalue problems and show how they can be transformed to generalized eigenvalue problems (GEPs) and solved. Then, we provide the resultant-based method for transforming systems of polynomial equations to polynomial eigenvalue problems. Methods for reducing the size of the polynomial eigenvalue problems are described in Section 2.3. In Sections 3.1 and 3.2, we formulate the relative pose problems and summarize previous solutions to these problems. In Section 4, we provide polynomial eigenvalue solutions to these problems. The final section is dedicated to experiments. In Appendix A, which can be found in the Computer Society Digital Library, we present an example of a system of polynomial equations and how it can be transformed to a polynomial eigenvalue problem.

2 POLYNOMIAL EIGENVALUE PROBLEMS

Polynomial eigenvalue problems are problems of the form C(λ)v = 0 (1), where v is a vector of monomials in all variables except λ and C(λ) is a matrix polynomial in the variable λ defined as C(λ) = λ^l C_l + λ^(l-1) C_(l-1) + ... + λ C_1 + C_0 (2), with n×n coefficient matrices C_j [1]. We next describe how these problems can be solved by transforming them to generalized eigenvalue problems.

2.1 Transformation to the Standard Generalized Eigenvalue Problem

Polynomial eigenvalue problems (1) can be transformed to standard generalized eigenvalue problems A y = λ B y (3). GEPs (3) are well-studied problems and there are many efficient numerical algorithms for solving them [1]. A very useful method for dense and small or moderate-size GEPs is the QZ algorithm [1], with time complexity O(n^3) and memory complexity O(n^2). Different algorithms are usually used for large-scale GEPs. A common approach for large-scale GEPs is to reduce them to standard eigenvalue problems and then apply iterative methods.
Algorithms for solving GEPs and standard eigenvalue problems are available in almost all mathematical software and libraries, for example in LAPACK, ARPACK, or MATLAB, which provides the polyeig function for solving polynomial eigenvalue problems (1) of arbitrary degree (including their transformation to the generalized eigenvalue problems (3)).

2.1.1 Quadratic Eigenvalue Problems (QEPs)

To see how a PEP (1) can be transformed to a GEP (3), let us first consider an important class of polynomial eigenvalue problems, the quadratic eigenvalue problems of the form (λ^2 C_2 + λ C_1 + C_0) v = 0 (4), where C_2, C_1, and C_0 are coefficient matrices of size n×n, λ is a variable called an eigenvalue, and v is a vector of monomials in all variables except λ, called an eigenvector. The QEP (4) can be transformed to the generalized eigenvalue problem (3) with

A = [[0, I], [-C_0, -C_1]],   B = [[I, 0], [0, C_2]],   y = (v, λv).   (5)

Here, 0 and I are n×n null and identity matrices, respectively. GEP (3) with the matrices (5) gives the equations λv = λv and -C_0 v - λ C_1 v = λ^2 C_2 v, which is equivalent to (4). Note that this GEP (5) has 2n eigenvalues, and therefore by solving it we obtain 2n solutions to the QEP (4).
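As a concrete illustration of the linearization (5), the following small Python/NumPy sketch builds the GEP matrices from given C_2, C_1, C_0 and solves the problem with a QZ-based routine. It is only an illustration of the construction above (the function name and the use of SciPy are our choices here, not part of any particular solver); MATLAB's polyeig performs the corresponding steps internally.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(C2, C1, C0):
    """Solve (lam^2*C2 + lam*C1 + C0) v = 0 via the linearization (5)."""
    n = C0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-C0, -C1]])      # A y = lam B y with y = [v; lam*v]
    B = np.block([[I, Z], [Z, C2]])
    lam, Y = eig(A, B)                      # QZ-based generalized eigensolver
    V = Y[:n, :]                            # top block of each y contains v
    return lam, V                           # 2n eigenvalues and monomial vectors
```

Higher order PEPs are handled analogously with the block matrices (7) and (8) described next.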

2.1.2 Higher Order Eigenvalue Problems

Higher order PEPs of degree l, (λ^l C_l + λ^(l-1) C_(l-1) + ... + λ C_1 + C_0) v = 0 (6), can also be transformed to the generalized eigenvalue problem (3). Here, A is the block matrix with identity blocks I on its first block superdiagonal and the last block row (-C_0, -C_1, -C_2, ..., -C_(l-1)) (7), B = diag(I, ..., I, C_l), and y = (v, λv, ..., λ^(l-1) v) (8). For higher order PEPs, one has to work with larger matrices with nl eigenvalues. Therefore, for larger values of n and l, convergence problems may appear when solving these problems [1].

Note that if the leading matrix C_l is nonsingular and well conditioned, then we can consider a monic matrix polynomial C̄(λ) = C_l^(-1) C(λ) with coefficient matrices C̄_i = C_l^(-1) C_i, i = 0, ..., l-1. Then, the PEP (6) can be transformed directly to the eigenvalue problem A y = λ y (9), where A has identity blocks on its first block superdiagonal and the last block row (-C̄_0, -C̄_1, -C̄_2, ..., -C̄_(l-1)) (10). In this case, the matrix A (10) is sometimes called the block companion matrix.

It happens quite often that the leading matrix C_l is singular, but the last matrix C_0 is regular and well conditioned. Then, either the described method which transforms the PEP (6) to the GEP (7), or the transformation γ = 1/λ can be used. The transformation γ = 1/λ reduces the problem to finding the eigenvalues of the matrix A with identity blocks on its first block superdiagonal and the last block row (-C_0^(-1) C_l, -C_0^(-1) C_(l-1), -C_0^(-1) C_(l-2), ..., -C_0^(-1) C_1) (11). In some cases, other linear rational transformations which improve the conditioning of the leading matrix can be used [1].

Now we know how polynomial eigenvalue problems (1) can be transformed to generalized (3) and classical eigenvalue problems (9) and solved using standard numerical methods. We next describe a resultant-based method which can be used to transform a system of polynomial equations to a polynomial eigenvalue problem.

2.2 Transformation of Systems of Polynomial Equations to a PEP

Consider a system of equations f_1(x) = ... = f_m(x) = 0 (12), given by a set of m polynomials F = {f_1, ..., f_m | f_i ∈ C[x_1, ..., x_n]} in n variables x = (x_1, ..., x_n) over the field of complex numbers, and let this system have a finite number of solutions. If we are lucky, as in the case of three of the four minimal relative pose problems considered in this paper, then for some x_j, let us say x_1, the system (12) can be directly rewritten as a polynomial eigenvalue problem C(x_1) v = 0 (13), where C(x_1) is a matrix polynomial with square m×m coefficient matrices and v is a vector of s monomials in the variables x_2, ..., x_n, i.e., monomials of the form x^α = x_2^α2 x_3^α3 ... x_n^αn. In this case, the number of monomials s is equal to the number of equations m, i.e., s = m.

Unfortunately, not all systems of polynomial equations (12) can be directly transformed to a PEP (13) for some x_j. This happens when, after rewriting the system to the form (13), we remain with fewer equations than monomials in these equations, i.e., s > m, and therefore we do not have square coefficient matrices C_i. In such a case, we have to generate new equations as polynomial combinations of the initial equations (12). This has to be done in a special way to get as many equations as monomials after rewriting the system to the form (13). Then, we can treat each monomial as a new variable and look at this system as a linear one. Therefore, we sometimes say that we linearize our system of equations.

Here, we present one method which can be used to linearize a system of polynomial equations (12) and is based on multipolynomial resultants [6], [25], [26]. This method constructs resultant matrices, whose determinants express nontrivial multiples of the resultant polynomial [6] and which linearize a nonlinear polynomial system (12) in terms of matrix polynomials. The method takes a system of nonlinear polynomial equations (12) and reduces it to a system of the form (13). After constructing resultant matrices, we directly obtain a polynomial eigenvalue formulation of our problem and we can solve it using the methods described in Section 2.1. Next, we describe the resultant-based method [6] for transforming a system of polynomial equations to a PEP (1). It is based on the Macaulay formulation of the resultant, which does not work for all systems of polynomial equations. Therefore, we then describe a modification of this method which enlarges its applicability and improves its numerical stability.
Finally, we suggest a generalization of this method which can be applied to all systems of polynomial equations.

2.2.1 Resultant-Based Method

The resultant-based method for solving systems of polynomial equations, which reduces the problem to an eigenvalue problem, is well studied [6], [25], [26], [40]. The method was originally developed for a system of n polynomial equations in n unknowns, respectively n homogeneous polynomial equations in n+1 unknowns. However, it can also be applied to m ≥ n general polynomial equations (12) in n unknowns.

First consider a system of n polynomial equations in n unknowns: f_1(x_1, ..., x_n) = ... = f_n(x_1, ..., x_n) = 0 (14). Assume that we want to formulate these equations as a PEP (1) for x_1. Hide x_1 in the coefficient field, i.e., consider these equations as n equations in n-1 variables x_2, ..., x_n with coefficients from C[x_1]: f_1, ..., f_n ∈ (C[x_1])[x_2, ..., x_n] (15). Let the degrees of these equations in the variables x_2, ..., x_n be d_1, d_2, ..., d_n, respectively. Now consider a system of n homogeneous polynomial equations in n unknowns: F_1(x_2, ..., x_{n+1}) = ... = F_n(x_2, ..., x_{n+1}) = 0, with F_1, ..., F_n ∈ (C[x_1])[x_2, ..., x_n, x_{n+1}] (16), which we obtain from the system (15) by homogenizing it using a new variable x_{n+1}, i.e., F_i = x_{n+1}^{d_i} f_i(x_2/x_{n+1}, ..., x_n/x_{n+1}). Set

d = Σ_{i=1}^{n} (d_i - 1) + 1 = Σ_{i=1}^{n} d_i - n + 1.   (17)

For instance, when (d_1, d_2, d_3) = (2, 2, 1), then d = 3. Now take the set of all monomials x^α = x_2^{α_2} x_3^{α_3} ... x_n^{α_n} x_{n+1}^{α_{n+1}} in the variables x_2, ..., x_{n+1} of total degree d, i.e., |α| = Σ_{i=2}^{n+1} α_i = d, and partition it into n subsets:

S_1 = {x^α : |α| = d, x_2^{d_1} | x^α},
S_2 = {x^α : |α| = d, x_2^{d_1} ∤ x^α but x_3^{d_2} | x^α},
...
S_n = {x^α : |α| = d, x_2^{d_1}, ..., x_n^{d_{n-1}} ∤ x^α but x_{n+1}^{d_n} | x^α},   (18)

where x_j^{d_i} | x^α means that x_j^{d_i} divides the monomial x^α and x_j^{d_i} ∤ x^α means that it does not. Note that every monomial of total degree d in the variables x_2, ..., x_{n+1} lies in one of these sets and these sets are mutually disjoint. Moreover, if x^α ∈ S_i, then x_{i+1}^{d_i} | x^α and x^α / x_{i+1}^{d_i} is a monomial of total degree d - d_i. We can now create a system of equations that generalizes the system (16):

(x^α / x_2^{d_1}) F_1 = 0 for all x^α ∈ S_1,
...
(x^α / x_{n+1}^{d_n}) F_n = 0 for all x^α ∈ S_n.   (19)

This system has some special properties. Since the F_i are homogeneous polynomials of total degree d_i, the polynomials (x^α / x_{i+1}^{d_i}) F_i are of total degree d for all i = 1, ..., n and can therefore be written as linear combinations of monomials of degree d in the n variables x_2, ..., x_{n+1}. There exist (d+n-1 choose n-1) monomials of degree d in n variables. The total number of equations in the system (19) is equal to the number of elements in the sets S_1, ..., S_n, which is also (d+n-1 choose n-1), because these sets contain all monomials of degree d in the variables x_2, ..., x_{n+1}. Thus, the system of polynomial equations (19) consists of (d+n-1 choose n-1) homogeneous equations in s = (d+n-1 choose n-1) monomials (all of degree d) in the variables x_2, ..., x_{n+1}. Note that the coefficients of these equations are polynomials in x_1.

We can next dehomogenize the equations (19) by setting x_{n+1} = 1. This dehomogenization does not reduce the number of monomials in these equations, because among the monomials x^α = x_2^{α_2} x_3^{α_3} ... x_n^{α_n} x_{n+1}^{α_{n+1}} of total degree d there are no monomials which differ only in the power α_{n+1}. Therefore, we obtain a system of (d+n-1 choose n-1) equations in s = (d+n-1 choose n-1) monomials up to degree d in the n-1 variables x_2, ..., x_n. Note that the number of monomials up to degree d in n-1 variables is Σ_{k=0}^{d} (n+k-2 choose k) = (d+n-1 choose d), which is exactly the number of monomials of degree d in n variables. The system obtained after dehomogenization is equivalent to the initial system (14), i.e., it has the same solutions, and can be written as C(x_1) v = 0 (20), where C(x_1) is a matrix polynomial and v is a vector of s = (d+n-1 choose n-1) monomials in the variables x_2, ..., x_n.

For many systems of polynomial equations, we are able to choose s = (d+n-1 choose n-1) linearly independent polynomials (including all initial polynomials (14)) from the (d+n-1 choose n-1) generated polynomials (19). In that case the coefficient matrices C_j in the matrix polynomial C(x_1) are square and the formulation (20) is directly a polynomial eigenvalue formulation of our system of polynomial equations (14). Unfortunately, it may sometimes happen that among the polynomials (19) there are fewer than s linearly independent polynomials, or that, after rewriting these polynomials into the form (20), all coefficient matrices C_j are close to singular. This may happen because the presented Macaulay resultant-based method is not designed for general polynomials but for dense homogeneous ones, i.e., for polynomials whose coefficients are all nonzero and generic, and it constructs a minimal set of equations that is necessary to obtain square matrices for these specific polynomials [6]. Therefore, we will use here a small modification of this Macaulay resultant-based method.
This modified method is very simple and differs from the standard method only in the form of the sets S_i (18); it produces a higher number of polynomial equations.

2.2.2 Modified Resultant-Based Method

We again consider all monomials x^α = x_2^{α_2} ... x_{n+1}^{α_{n+1}} in the variables x_2, ..., x_{n+1} of total degree d; however, here the sets S_i have the form

S_1 = {x^α : |α| = d, x_2^{d_1} | x^α},
S_2 = {x^α : |α| = d, x_3^{d_2} | x^α},
...
S_n = {x^α : |α| = d, x_{n+1}^{d_n} | x^α}.   (21)

This means that in the extended set of polynomial equations

(x^α / x_2^{d_1}) F_1 = 0 for all x^α ∈ S_1,
...
(x^α / x_{n+1}^{d_n}) F_n = 0 for all x^α ∈ S_n,   (22)

we multiply each homogeneous polynomial F_i of degree d_i, i = 1, ..., n, with all monomials of degree d - d_i. This is because the set of monomials {x^α / x_{i+1}^{d_i} : x^α ∈ S_i} (23) contains all monomials of degree d - d_i in the variables x_2, ..., x_{n+1}. Therefore, the extended system of polynomial equations (22) consists of all possible homogeneous polynomials of degree d which can be obtained from the initial homogeneous polynomials F_i (16) by multiplying them by monomials.

After creating this extended system of polynomial equations, the method continues as the previously described standard resultant-based method, i.e., we dehomogenize all equations from (22) and rewrite them in the form C(x_1) v = 0 (24), where C(x_1) is a matrix polynomial and v is a vector of s = (d+n-1 choose n-1) monomials up to degree d in the variables x_2, ..., x_n. Since in this case we generate more polynomial equations, in fact all possible polynomials of degree d which can be generated in the presented way, we are more frequently able to choose s = (d+n-1 choose n-1) linearly independent polynomials (including all initial polynomials (14)) from them. Moreover, when we have more than s linearly independent polynomials, we can select polynomials from them to obtain well-conditioned coefficient matrices C_j and in this way improve the numerical stability of the polynomial eigenvalue formulation (24).
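To make the extension step (22) concrete, the following Python/SymPy sketch multiplies each homogenized polynomial by all monomials of degree d - d_i. It is only an illustration of the construction (the helper names are ours); a practical solver would additionally rewrite the result into the matrix form (24).

```python
from itertools import product
from sympy import expand

def monomials(vars_, deg):
    """All monomials of total degree deg in the given variables."""
    mons = []
    for exps in product(range(deg + 1), repeat=len(vars_)):
        if sum(exps) == deg:
            m = 1
            for v, e in zip(vars_, exps):
                m *= v**e
            mons.append(m)
    return mons

def extended_system(F, degrees, vars_):
    """Extended system (22): multiply each homogenized F[i] of degree degrees[i]
    by all monomials of degree d - degrees[i], with d as in (17)."""
    d = sum(degrees) - len(F) + 1
    return [expand(m * f)
            for f, di in zip(F, degrees)
            for m in monomials(vars_, d - di)]
```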

Of course, this modified resultant-based method is still not completely general and cannot be used to transform all systems of polynomial equations to a PEP (1), but it is conceptually very simple and sufficient for many problems.

In general, it can be proven that any system of polynomial equations with a finite number of solutions can be transformed to a PEP (1). One way to prove this is using the Shape Lemma [6]. According to this lemma, in the ideal generated by the initial polynomials [6] there exists a basis of the form g_i = x_i + h_i(x_n), i = 1, ..., n, where the h_i are polynomials in x_n. After hiding x_n, this basis gives a polynomial eigenvalue formulation of the initial problem. Using this formulation would, however, be very inefficient, since obtaining the polynomials of this basis usually requires generating a huge number of new polynomials. Nevertheless, the existence of this basis shows that there exists at least one polynomial eigenvalue formulation of any system of polynomial equations with a finite number of solutions.

To obtain a more efficient polynomial eigenvalue formulation of a given system of polynomial equations, it is sufficient to systematically generate polynomials from the ideal, not only monomial multiples of the initial polynomials as in the presented method, and stop when we already have sufficiently many polynomials for the formulation. We can systematically generate new polynomials from the ideal, for example, by multiplying all newly generated polynomials or, in the first step, all initial polynomials of degree < d by all individual variables and reducing them each time by Gauss-Jordan (G-J) elimination. After each G-J elimination we can check whether we already have a polynomial eigenvalue formulation. If no new polynomials of degree < d were generated and no polynomial eigenvalue formulation was obtained, then we increase the degree d. By checking whether we already have a polynomial eigenvalue formulation we mean checking whether there exist i polynomials containing j ≤ i monomials in the set of already generated polynomials.

In the presented approach, we were considering the system (14) of n equations in n unknowns. However, the method can easily be extended to a system (12) of m ≥ n equations in n unknowns. In the case of the modified resultant-based method, it is sufficient to multiply each homogeneous polynomial F_i of degree d_i with all monomials of degree d - d_i, which is independent of the number of polynomials. For the standard resultant-based method presented in Section 2.2.1, all we need to do is to select the n equations with the largest degrees d_i from the initial m equations. Then, we can apply the presented method to these n equations in n unknowns. In this way, we generate (d+n-1 choose n-1) equations in monomials up to degree d in the n-1 variables x_2, ..., x_n, with polynomial coefficients in x_1. Adding the remaining m-n original equations may increase the number of monomials, because some monomials contained in these equations do not have to be contained in the generated and selected equations. However, this number will not be greater than (d+n-1 choose n-1), because the degree of these m-n equations is smaller than or equal to the degree of the selected equations and therefore does not exceed d.
Moreover, we can also add all multiples of these remaining equations up to degree d. In both cases, the resulting system of equations will contain only monomials up to degree d in the variables x_2, ..., x_n, and there are at most (d+n-1 choose n-1) such monomials.

2.2.3 Problem Relaxation

Note that the polynomial eigenvalue formulation (13) is a relaxed formulation of the original problem of finding all solutions to the system of polynomial equations (12). This is because the polynomial eigenvalue formulation (13) does not consider potential dependencies in the monomial vector v and solves for general eigenvalues x_1 and general eigenvectors v. However, after transforming the system of polynomial equations (12) to the PEP (13), the coordinates of the vector v are in general dependent, e.g., for v = (x_3^2, x_2 x_3, x_2^2, x_3, x_2, 1) we have v(1) = v(4)^2, v(2) = v(4)v(5), etc. This implies that after solving the PEP (13) one has to check which of the computed eigenpairs (x_1, v) satisfy the original polynomial equations (12). This can be done either by testing all monomial dependencies in v or by substituting the solutions into the original equations and checking whether they are satisfied.

2.3 Reducing the Size of the Polynomial Eigenvalue Problem

2.3.1 Removing Unnecessary Polynomials

The modified resultant method presented in Section 2.2.2 usually does not lead to the smallest polynomial eigenvalue formulation of the initial system of polynomial equations (12) and, for larger systems with larger degrees, it may generate large polynomial eigenvalue problems which are not practical. Therefore, we present here a method for removing unnecessary polynomials from the generated polynomial eigenvalue formulation. Assume that we have the polynomial eigenvalue formulation (24) of the system (14) of n polynomial equations in n unknowns x_1, ..., x_n, that the vector v in this formulation consists of s monomials, and that the coefficient matrices C_j in the matrix polynomial C(x_1) have size s×s. Then, we can remove unnecessary polynomials using the following procedure:

  i ← 1
  while i ≤ s - n do
    if there exist i monomials x^α among the monomials of the vector v which are contained in only k ≤ i polynomials of (24), excluding the initial polynomials, from our s polynomials, then
      remove these k polynomials
      s ← s - k
      i ← 1
    else
      i ← i + 1
    end if
  end while

In each cycle of this algorithm, we remove at least as many monomials as polynomials from (24), or we remove nothing. Therefore, after each cycle the coefficient matrices C_j are square or contain more rows than columns and can therefore be transformed to square matrices by omitting some rows.
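The following Python sketch implements the procedure above naively, representing the generated system only by a boolean occurrence matrix (rows = polynomials, columns = monomials of v). The exhaustive search over monomial subsets is exponential and is meant purely as an illustration of the rule; the names and the data layout are our own choices.

```python
import itertools
import numpy as np

def prune_polynomials(occ, n_initial):
    """occ[p, m] == True if polynomial p contains monomial m; the first
    n_initial rows are the initial equations (14) and are never removed.
    Returns the indices of the polynomials that are kept."""
    keep = list(range(occ.shape[0]))
    i = 1
    while i <= len(keep) - n_initial:
        found = False
        for mons in itertools.combinations(range(occ.shape[1]), i):
            owners = [p for p in keep if occ[p, list(mons)].any()]
            if owners and len(owners) <= i and all(p >= n_initial for p in owners):
                keep = [p for p in keep if p not in owners]   # drop k <= i polynomials
                found, i = True, 1
                break
        if not found:
            i += 1
    return keep
```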

Moreover, at least the n initial polynomial equations remain at the end of this algorithm. Therefore, using this procedure, we obtain a polynomial eigenvalue formulation of the initial system of polynomial equations (14) which is not larger than the starting formulation (24).

This algorithm may have a higher impact when it is performed on an eliminated system of polynomial equations, i.e., a system of polynomial equations reduced by G-J elimination. For example, the G-J elimination may help in cases when the system contains some monomials which, after hiding a variable, e.g., x_1, in the coefficient field, have only numerical (constant) coefficients, i.e., coefficients not containing x_1. Such monomials may then be directly eliminated, regardless of how many polynomials they appear in. This is because applying G-J elimination to all polynomials with suitably ordered monomials, before hiding x_1, may eliminate these monomials from all equations except one. Such monomials will therefore be contained in only one polynomial and will be eliminated using the presented algorithm.

2.3.2 Removing Zero Eigenvalues

It often happens that the matrices C_j in the polynomial eigenvalue formulation (6) contain zero columns. A zero column in a matrix C_i means that the monomial corresponding to this column and to λ^i does not appear in the considered system of equations. Moreover, it also happens quite often that this monomial does not appear in the system for any λ^j with j > i. Then, the column corresponding to this monomial is zero in all matrices C_j for j > i, including the highest order matrix C_l, which is in this case singular. We will call such monomials, which produce zero columns in the matrices C_j, zero monomials.

In the case of a singular matrix C_l, the transformation of the PEP to the eigenvalue problem (9) will produce the matrix A in the form (11). This matrix will contain, for each such zero monomial, a zero column: the column in the block corresponding to -C_0^(-1) C_l. This zero column will result in a zero eigenvalue which is, in this case, not a solution to our original problem. Therefore, we can remove this column together with the corresponding row from the matrix A and in this way remove this zero eigenvalue. Removing the row removes the 1 from the column corresponding to the same monomial, this time in the block corresponding to -C_0^(-1) C_(l-1). This means that if this zero monomial multiplied by λ^(l-1) does not appear in the system of polynomial equations, we again have a zero column resulting in a zero eigenvalue. Therefore, we can also remove this zero column and the corresponding row of the matrix A. This can be repeated until the first matrix C_i which contains, for this zero monomial, a nonzero column. In this way, we can often remove most of the parasitic zero eigenvalues and therefore solve a considerably smaller eigenvalue problem, which may significantly improve the computational efficiency and the stability of the solution. This is because the eigenvalue computation is the most time-consuming part of the final solvers.
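A simple way to carry out this removal in practice is to delete zero columns of A together with the corresponding rows until no zero column remains; each deleted pair splits off one parasitic zero eigenvalue. The sketch below (an illustration only; the names are ours) does exactly that and also returns the indices of the retained rows/columns, so that eigenvector entries can still be mapped back to monomials.

```python
import numpy as np

def remove_zero_eigenvalues(A, tol=1e-12):
    """Iteratively delete (numerically) zero columns of A and the corresponding
    rows; the spectrum of the reduced matrix is that of A minus zero eigenvalues."""
    keep = np.arange(A.shape[0])
    while True:
        zero_cols = np.where(np.all(np.abs(A) <= tol, axis=0))[0]
        if zero_cols.size == 0:
            break
        mask = np.ones(A.shape[0], dtype=bool)
        mask[zero_cols] = False
        A = A[np.ix_(mask, mask)]
        keep = keep[mask]
    return A, keep
```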
3 RELATIVE POSE PROBLEMS

To show the applicability of the previously described polynomial eigenvalue method, we present here polynomial eigenvalue solutions to four important minimal relative pose problems.

3.1 Problem Formulations

Consider a pair of cameras P and P'. It is known [12] that in the case of fully calibrated cameras, points x_j and x'_j, which are projections of a 3D point X_j, are geometrically constrained by the epipolar geometry constraint x'_j^T E x_j = 0 (25), where E is a 3×3 rank-2 essential matrix with two equal singular values. These constraints on E can be written as

det(E) = 0,   (26)
2 E E^T E - trace(E E^T) E = 0.   (27)

Equation (26) is called the rank constraint and (27) the trace constraint. The usual way [29], [38] to compute the essential matrix is to linearize the relation (25) into the form M X = 0, where the vector X contains the nine elements of the matrix E and M contains products of image measurements. The essential matrix E is then constructed as a linear combination of the conveniently reshaped null space vectors of the matrix M. The dimension of the null space depends on the number of point correspondences used. The additional constraints (26) and (27) are used to determine the coefficients in the linear combination of the null space vectors or to project an approximate solution onto the space of correct essential matrices.

The fundamental matrix F describes uncalibrated cameras similarly as the essential matrix E does for calibrated cameras (25), i.e., x'_j^T F x_j = 0 (28), and can be computed in an analogous way.

In this paper, we also consider a camera pair with unknown but equal focal length f and a camera pair with one fully calibrated and one up to focal length calibrated camera. In both cases, all other calibration parameters are assumed to be known. In such a case the calibration matrix K [12] is a diagonal matrix diag([f f 1]). Therefore, for two cameras with unknown but equal focal length, the essential matrix is E = K^T F K = K F K, since K is diagonal. Since K is regular, we have

det(F) = 0,   (29)
2 F Q F^T Q F - trace(F Q F^T Q) F = 0.   (30)

Equation (30) is obtained by substituting the expression for the essential matrix into the trace constraint (27), applying the substitution Q = K K, and multiplying (27) by K^(-1) from the left and the right. Note that the calibration matrix can be written as K ≃ diag([1 1 1/f]) to simplify the equations.
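For illustration, the algebraic constraints (25)-(27) on a candidate essential matrix are easy to evaluate numerically; the short NumPy sketch below (our own illustration, not part of any solver described here) returns their residuals, which should vanish for a true essential matrix and exact correspondences.

```python
import numpy as np

def epipolar_residual(E, x, xp):
    """Residual of the epipolar constraint (25): xp^T E x, with x, xp homogeneous."""
    return xp @ E @ x

def essential_residuals(E):
    """Rank constraint (26) and trace constraint (27) for a 3x3 matrix E."""
    rank_res = np.linalg.det(E)
    trace_res = 2.0 * E @ E.T @ E - np.trace(E @ E.T) * E
    return rank_res, trace_res   # scalar residual and 3x3 residual matrix
```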

For the case with one fully calibrated and one up to focal length calibrated camera, we have E = K F and we obtain the constraints

det(F) = 0,   (31)
2 F F^T Q F - trace(F F^T Q) F = 0.   (32)

Equation (32) is obtained similarly as (30).

Since almost all consumer camera lenses, in particular wide angle lenses, suffer from some radial distortion, we also consider here the problem of estimating the relative pose of two uncalibrated cameras with one-parameter radial distortion. Here, we assume the one-parameter division model proposed by Fitzgibbon [10], which is given by the formula p_u ≃ p_d / (1 + k r_d^2) (33), where k is the distortion parameter, p_u = [x_u, y_u, 1]^T, respectively p_d = [x_d, y_d, 1]^T, are the corresponding undistorted, respectively distorted, image points, and r_d is the radius of p_d w.r.t. the distortion center. We assume that the distortion center is in the center of the image, i.e., r_d^2 = x_d^2 + y_d^2.

Since the epipolar constraint (28) contains undistorted image points x_i, but we measure distorted ones, we need to put the relation (33) into (28). Let x_di = [x_di, y_di, 1] be the i-th measured distorted image point and let r_di^2 = x_di^2 + y_di^2. Then, x_i ≃ [x_di, y_di, 1 + k r_di^2] and the epipolar constraint for the uncalibrated cameras has the form

[x'_di, y'_di, 1 + k r'_di^2] F [x_di, y_di, 1 + k r_di^2]^T = 0.   (34)

In this problem, we also have the singularity constraint on the fundamental matrix, i.e., the equation det(F) = 0 (35).

These are the basic equations which define all four relative pose problems considered in this paper and which we will use to formulate these problems as polynomial eigenvalue problems and solve them using the method presented in Section 2.
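As a small illustration of the division model (33) and the lifted epipolar constraint (34), the following NumPy sketch maps a measured (distorted) point to the homogeneous vector used in (34) and evaluates the constraint residual; the function names are ours and the points are assumed to be expressed w.r.t. the distortion center.

```python
import numpy as np

def lift_division(p_d, k):
    """Division model (33): distorted point (x_d, y_d) -> [x_d, y_d, 1 + k*r_d^2]."""
    x_d, y_d = p_d
    return np.array([x_d, y_d, 1.0 + k * (x_d**2 + y_d**2)])

def distorted_epipolar_residual(F, p_d, p_d_prime, k):
    """Residual of the epipolar constraint (34) for one correspondence."""
    return lift_division(p_d_prime, k) @ F @ lift_division(p_d, k)
```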
Let us first review previous solutions to all the considered problems.

3.2 Previous Solutions

3.2.1 Five-Point Problem

The 5-pt calibrated relative pose problem was already studied by Kruppa [14], who showed that it has at most 11 solutions. Maybank and Faugeras [8] then sharpened Kruppa's result by showing that there are at most 10 solutions. Recently, Nister et al. [30] have shown that the problem really requires solving a polynomial of degree 10. The state-of-the-art methods of Nister [29] and Stewenius et al. [38], which obtain the solutions as the roots of a 10th-degree polynomial, are currently the most efficient and robust implementations for solving the 5-pt relative pose problem.

In both methods [29], [38], the five linear epipolar constraints were used to parameterize the essential matrix as a linear combination of a basis E_1, E_2, E_3, E_4 of the space of all compatible essential matrices: E = x E_1 + y E_2 + z E_3 + E_4 (36). Then, the rank constraint (26) and the trace constraint (27) were used to build 10 third-order polynomial equations in three unknowns and 20 monomials. These equations can be written in the matrix form M X = 0 (37), with a coefficient matrix M reduced by G-J elimination and the vector of all monomials X. The method [29] used relations between the polynomials (37) to derive three new equations. The new equations were arranged into a 3×3 matrix equation A(z) Z = 0, with the matrix A(z) containing polynomial coefficients in z and Z containing the monomials in x and y. The solutions were obtained using the hidden variable resultant method [6] by solving the 10th-degree polynomial det(A(z)) = 0, finding Z as a solution to a homogeneous linear system, and constructing E from (36). Note that this hidden variable formulation is, in fact, a polynomial eigenvalue formulation, solved, however, using polynomial determinants.

The method [38] follows another classical approach to solving systems of polynomial equations. First, a Gröbner basis [6] of the ideal generated by the equations (37) is found. Then, a multiplication matrix [35] is constructed. Finally, the solutions are obtained by computing the eigenvalues and the eigenvectors of the multiplication matrix [6]. This approach turned out to lead to a particularly simple procedure for the 5-pt problem, since the particular Gröbner basis used and the multiplication matrix can be constructed directly from the reduced coefficient matrix M.

Another technique based on the hidden variable resultant for solving the 5-pt relative pose problem was proposed in [22]. This technique is somewhat easier to understand than [38], but it is far less efficient, and Maple was used in [22] to evaluate large determinants. The first polynomial eigenvalue solution to this problem was proposed in [17]. This solution is similar to the solution presented in this paper; however, since the reduction of the size of the PEP was not used in [17], the solution [17] was rather inefficient and resulted in the eigenvalue computation of a 30×30 matrix.

3.2.2 Six-Point Equal Focal Length Problem

The problem of estimating relative camera pose for two cameras with an unknown but equal focal length from the minimal number of 6 point correspondences has 15 solutions [36]. The first minimal solution to this problem, proposed by Stewenius et al. [36], is based on Gröbner basis techniques and is similar to the Stewenius solution to the 5-pt problem [38]. Using the linear epipolar constraints, the fundamental matrix is parameterized by two unknowns as F = x F_1 + y F_2 + F_3 (38). Using the rank constraint for the fundamental matrix (29) and the trace constraint for the essential matrix (30) then brings 10 third- and fifth-order polynomial equations in three unknowns x, y, and w = f^2, where f is the unknown focal length.

The Gröbner basis solver [36] starts with these 10 polynomial equations, which can be represented by a matrix M. Since this matrix does not contain all necessary polynomials for creating a multiplication matrix, several new polynomials, monomial multiples of the initial polynomials, are added. The resulting solver therefore consists of three G-J eliminations of three matrices (the first two of size 12×33 and 16×33). The eigenvectors of the multiplication matrix provide the solutions to the three unknowns x, y, and w = f^2.

Another Gröbner basis solver for this problem was proposed in [5]. This solver uses only one G-J elimination of a single matrix and uses a special technique for improving the numerical stability of Gröbner basis solvers by selecting a suitable basis of the quotient ring modulo the ideal generated by the Gröbner basis. In that paper, it was shown that this solver gives more accurate results than the original solver [36]. A Gröbner basis solver with a single G-J elimination, generated using the automatic generator, was presented in [16]. This solver is also more accurate than the original solver proposed by Stewenius [36]. Yet another solution, based on the hidden variable resultant method, was proposed in [21], but it has similar problems with efficiency as the hidden variable solution to the 5-pt problem [22]. As in the 5-pt relative pose problem, the first polynomial eigenvalue solution to this problem was proposed in [17]; we discuss it in more detail in Section 4.2.

3.2.3 Six-Point One Calibrated Camera Problem

The problem of estimating relative camera pose for one fully calibrated and one up to focal length calibrated camera from the minimal number of 6 point correspondences was solved only recently in [3]. This problem was previously studied in [39], where a nonminimal solution was proposed: first, the 7-point algorithm [12] was used for computing the fundamental matrix, and then the focal length was estimated in a closed-form solution using the Kruppa equations. In [3], two new minimal solutions to this problem were proposed. The first solution is based on Gröbner basis techniques and is similar to the Stewenius solution to the 6-pt equal focal length problem [36]. The fundamental matrix is again parameterized by two unknowns, as in (38). Then, the rank constraint for the fundamental matrix (31) and the trace constraint for the essential matrix (32) are used. This gives 10 third- and fourth-order polynomial equations in three unknowns x, y, and w = f^2 and 20 monomials. The final solver in this case consists of one G-J elimination of a coefficient matrix M, which contains coefficients arising from concrete image measurements, and the eigenvalue and eigenvector computation of a 9×9 multiplication matrix. The second solution to this problem presented in [3] was based on the polynomial eigenvalue formulation; we discuss it in more detail in Section 4.3.

3.2.4 Eight-Point Radial Distortion Problem

The first nonminimal solution to the problem of simultaneous estimation of the epipolar geometry and the one-parameter radial distortion division model was proposed by Fitzgibbon [10]. In this solution, the algebraic constraint det(F) = 0 on the fundamental matrix was not used, and therefore 9 point correspondences were necessary to solve the problem.
Thanks to neglecting this algebraic constraint, the resulting nine equations from the epipolar constraint (34) could be directly formulated as a quadratic eigenvalue problem and quite easily solved using standard numerical methods. Li and Hartley [20] solved the same problem from 9 point correspondences using the hidden variable resultant technique. The quality of the result was comparable to [10]; however, the hidden variable technique was considerably slower than the polynomial eigenvalue technique used in [10], since in [20] Maple was used to evaluate large determinants.

The first minimal solution to the problem of estimating the epipolar geometry and the one-parameter division model was proposed in [15]. This solution used det(F) = 0, and therefore the minimal number of 8 point correspondences was sufficient to solve it. In this solution, the initial nine polynomial equations (28) and (35) in nine unknowns were first transformed to three polynomial equations in three unknowns. Then, these equations were solved using the Gröbner basis method [6]. The final solver consists of three G-J eliminations of matrices, the first two of size 8×22 and 11×30, and of the eigenvalue computation of a multiplication matrix. An improved version of the solver [15], generated using the automatic generator of Gröbner basis solvers and consisting of only one elimination, was proposed in [16]. The polynomial eigenvalue solution to this problem was proposed in [19]. This solution is in fact equivalent to the solution presented in this paper. In Section 4.4, we will show how this solution can be easily obtained using the modified resultant-based method presented in Section 2.2.2 and in this way illustrate the usefulness of our method.

4 POLYNOMIAL EIGENVALUE SOLUTIONS TO THE RELATIVE POSE PROBLEMS

In this section, we describe our solutions to the four relative pose problems. We show that the 5-pt relative pose problem, the 6-pt equal focal length problem, the 6-pt problem for one fully and one up to focal length calibrated camera, and the 8-pt problem for estimating relative pose and one-parameter radial distortion can be formulated as polynomial eigenvalue problems (1) of degree three, two, one, and four, respectively.

4.1 Five-Point Problem

To obtain the polynomial eigenvalue solution to the 5-pt relative pose problem, we use the same formulation as was used in [29] and [38]. In this formulation, we first use the linear equations from the epipolar constraint (25) for 5 point correspondences to parametrize the essential matrix with three unknowns x, y, and z (36).

Using this parameterization, the rank constraint (26) and the trace constraint (27) lead to 10 third-order polynomial equations in three unknowns and 20 monomials, which can be written in the matrix form M X = 0 (39), where M is a coefficient matrix and X = (x^3, yx^2, y^2x, y^3, zx^2, zyx, zy^2, z^2x, z^2y, z^3, x^2, yx, y^2, zx, zy, z^2, x, y, z, 1)^T is the vector of all monomials. These are all monomials in all three unknowns up to degree three, so any unknown can play the role of λ in (1). For example, taking λ = z, these 10 equations (39) can be rewritten as (z^3 C_3 + z^2 C_2 + z C_1 + C_0) v = 0 (40), where v is a 10×1 vector of monomials, v = (x^3, x^2y, xy^2, y^3, x^2, xy, y^2, x, y, 1)^T, and C_3, C_2, C_1, and C_0 are the coefficient matrices

C_3 = (0 0 0 0 0 0 0 0 0 m_10),
C_2 = (0 0 0 0 0 0 0 m_8 m_9 m_16),
C_1 = (0 0 0 0 m_5 m_6 m_7 m_14 m_15 m_19),
C_0 = (m_1 m_2 m_3 m_4 m_11 m_12 m_13 m_17 m_18 m_20),

where m_j is the j-th column of the coefficient matrix M. Since C_3, C_2, C_1, and C_0 are known square matrices, the formulation (40) is a cubic PEP and can be solved using the standard efficient algorithms presented in Section 2.

In this case, the rank of the matrix C_3 is only one, while the matrix C_0 is regular. Therefore, we use the transformation γ = 1/z and reduce the cubic PEP (40) to the problem of finding the eigenvalues of the matrix

A = [[0, I, 0], [0, 0, I], [-C_0^(-1) C_3, -C_0^(-1) C_2, -C_0^(-1) C_1]].   (41)

After solving this eigenvalue problem we obtain 30 eigenvalues, solutions for γ = 1/z, and 30 corresponding eigenvectors v, from which we extract solutions for x and y. However, there are 20 zero eigenvalues among these 30 eigenvalues. These zero eigenvalues can be easily eliminated, since they correspond to the 20 zero columns of the matrices -C_0^(-1) C_3, -C_0^(-1) C_2, and -C_0^(-1) C_1, as described in Section 2.3.2. Therefore, to solve the 5-pt relative pose problem, it is sufficient to find the eigenvalues and the eigenvectors of the 10×10 matrix which we obtain from the matrix (41) by removing the columns corresponding to the zero columns of the matrices -C_0^(-1) C_3, -C_0^(-1) C_2, and -C_0^(-1) C_1 and removing the corresponding rows. Note that this matrix is in fact the multiplication matrix used in the Gröbner basis solver [38], which here was obtained directly from the polynomial eigenvalue formulation without any knowledge about Gröbner bases or the properties of multiplication matrices. The size of this matrix equals the dimension of the problem, which was proven to be 10 [30].

4.2 Six-Point Equal Focal Length Problem

Our polynomial eigenvalue solution to the 6-pt equal focal length problem starts with the parameterization of the fundamental matrix with two unknowns x and y (38), which is obtained from the epipolar constraint (28) for 6 point correspondences. Substituting this parameterization into the rank constraint for the fundamental matrix (29) and the trace constraint for the essential matrix (30) gives 10 third- and fifth-order polynomial equations in three unknowns x, y, and w = f^2, where f is the unknown focal length. This formulation is the same as the one used in [36], [21] and can again be written in the matrix form M X = 0 (42), where M is a coefficient matrix and X = (w^2x^3, w^2yx^2, w^2y^2x, w^2y^3, wx^3, wyx^2, wy^2x, wy^3, w^2x^2, w^2yx, w^2y^2, x^3, yx^2, y^2x, y^3, wx^2, wyx, wy^2, w^2x, w^2y, x^2, yx, y^2, wx, wy, w^2, x, y, w, 1)^T is a vector of 30 monomials. The variables x and y appear in degree three and w only in degree two.
Therefore, we select λ = w in the PEP formulation (1). Then, these 10 equations can be rewritten as (w^2 C_2 + w C_1 + C_0) v = 0 (43), where v is a 10×1 vector of monomials, v = (x^3, x^2y, xy^2, y^3, x^2, xy, y^2, x, y, 1)^T, and C_2, C_1, and C_0 are the coefficient matrices

C_2 = (m_1 m_2 m_3 m_4 m_9 m_10 m_11 m_19 m_20 m_26),
C_1 = (m_5 m_6 m_7 m_8 m_16 m_17 m_18 m_24 m_25 m_29),
C_0 = (m_12 m_13 m_14 m_15 m_21 m_22 m_23 m_27 m_28 m_30),

where m_j is the j-th column of the coefficient matrix M (42). The formulation (43) is directly a QEP, which can again be solved using the methods presented in Section 2. After solving this QEP (43) by transforming it to a GEP, we obtain 20 eigenvalues, solutions for w = f^2, and 20 corresponding eigenvectors v, from which we extract solutions for x and y. To do this, we normalize the solutions for v to have the last coordinate 1 and extract the values of v that correspond to x and y, in this case v(8) and v(9). Then, we use (38) to compute F.

Again, there are zero eigenvalues, solutions for 1/w, as in the 5-pt case. However, in this case these eigenvalues cannot be so easily eliminated, since here we do not have zero columns corresponding to them. Therefore, this polynomial eigenvalue solver delivers 20 solutions, which is more than the number of solutions of the original problem. This is caused by the fact that we are solving a relaxed version of the original problem, as described in Section 2.2.3. Therefore, the solution contains not only all vectors v that (within the limits of numerical accuracy) satisfy the constraints induced by the problem, i.e., v(1) = v(8)^3, but also additional vectors v that do not satisfy them. Such vectors v need to be eliminated, e.g., by verifying the monomial dependencies as described in Section 2.2.3.
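For illustration, the construction of (43) and its solution can be prototyped directly from a 10×30 coefficient matrix M whose columns follow the monomial order of (42). The sketch below (NumPy/SciPy; an illustration only, not our optimized C++ solver) picks the columns listed above, solves the QEP through the linearization (5), and reads x, y, w off the normalized eigenvectors; the parasitic solutions still have to be filtered by the monomial dependencies.

```python
import numpy as np
from scipy.linalg import eig

def solve_6pt_equal_focal(M):
    """M: 10x30 coefficient matrix of (42). Returns candidate (x, y, w) triples."""
    pick = lambda idx: M[:, [i - 1 for i in idx]]            # 1-based column indices
    C2 = pick([1, 2, 3, 4, 9, 10, 11, 19, 20, 26])
    C1 = pick([5, 6, 7, 8, 16, 17, 18, 24, 25, 29])
    C0 = pick([12, 13, 14, 15, 21, 22, 23, 27, 28, 30])
    I, Z = np.eye(10), np.zeros((10, 10))
    w, Y = eig(np.block([[Z, I], [-C0, -C1]]),               # linearization (5)
               np.block([[I, Z], [Z, C2]]))
    sols = []
    for j in range(Y.shape[1]):
        v = Y[:10, j]
        if not np.isfinite(w[j]) or abs(v[9]) < 1e-12:
            continue
        v = v / v[9]                                         # last monomial is 1
        sols.append((v[7], v[8], w[j]))                      # x = v(8), y = v(9), w = f^2
    return sols
```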

4.3 Six-Point One Calibrated Camera Problem

The polynomial eigenvalue solver for the third problem, the 6-pt relative pose problem for one fully and one up to focal length calibrated camera, uses the same parameterization of the fundamental matrix as the previous 6-pt problem. In this case, this parameterization is substituted into (31) and (32), as described in Section 3.1, and results in 10 equations in three unknowns x, y, and w = f^2. These equations can be rewritten in the matrix form M X = 0 (44), where M is a coefficient matrix and X = (wx^3, wyx^2, wy^2x, wy^3, x^3, yx^2, y^2x, y^3, wx^2, wyx, wy^2, x^2, yx, y^2, wx, wy, x, y, w, 1)^T is a vector of 20 monomials. The unknowns x and y appear in degree three, but w appears only in degree one. Therefore, we can select λ = w and rewrite these 10 equations as (w C_1 + C_0) v = 0 (45), where v = (x^3, yx^2, y^2x, y^3, x^2, yx, y^2, x, y, 1)^T is a 10×1 vector of monomials and C_1, C_0 are coefficient matrices such that

C_1 = (m_1 m_2 m_3 m_4 m_9 m_10 m_11 m_15 m_16 m_19),
C_0 = (m_5 m_6 m_7 m_8 m_12 m_13 m_14 m_17 m_18 m_20),

where m_j is the j-th column of the coefficient matrix M. The formulation (45) is directly the generalized eigenvalue problem (3), respectively a polynomial eigenvalue problem of degree one, with the regular matrix C_0, and can be easily solved by finding the eigenvalues of the matrix -C_0^(-1) C_1. After solving (45), we obtain 10 eigenvalues, which give the solutions for w = f^2, and 10 corresponding eigenvectors v, from which we extract the solutions for x and y. Then, we use (38) to get solutions for F.

4.4 Eight-Point Radial Distortion Problem

The solver for the problem of estimating the relative pose of two uncalibrated cameras together with the one-parameter division model from 8 point correspondences starts with nine equations in nine variables, i.e., eight equations from the epipolar constraint (34) and one from the singularity constraint on F (35). In the first step we simplify these equations by eliminating some variables, using the same elimination method as was used in [15]. Let the elements of the fundamental matrix be f_{i,j}. Assuming f_{3,3} = 1, the epipolar constraint (34) gives eight equations in the 15 monomials (f_{1,3}k, f_{2,3}k, f_{3,1}k, f_{3,2}k, k^2, f_{1,1}, f_{1,2}, f_{1,3}, f_{2,1}, f_{2,2}, f_{2,3}, f_{3,1}, f_{3,2}, k, 1) and 9 variables (f_{1,1}, f_{1,2}, f_{1,3}, f_{2,1}, f_{2,2}, f_{2,3}, f_{3,1}, f_{3,2}, k). These monomials can be reordered such that the monomials containing f_{1,1}, f_{1,2}, f_{2,1}, f_{2,2}, f_{1,3}, and f_{2,3} are at the beginning; the reordered monomial vector is X = (f_{1,1}, f_{1,2}, f_{2,1}, f_{2,2}, f_{1,3}k, f_{1,3}, f_{2,3}k, f_{2,3}, f_{3,1}k, f_{3,2}k, k^2, f_{3,1}, f_{3,2}, k, 1)^T.
Then, the eight equations from the epipolar constraint can be written in the matrix form M X = 0, where M is the coefficient matrix and X is this reordered monomial vector. After computing the G-J elimination M' of M, we obtain eight equations M' X = 0 of the form

p_i = LT(p_i) + g_i(f_{3,1}, f_{3,2}, k) = 0,   (46)

where LT(p_i) is the leading term of the polynomial p_i [6], which is in our case LT(p_i) = f_{1,1}, f_{1,2}, f_{2,1}, f_{2,2}, f_{1,3}k, f_{1,3}, f_{2,3}k, and f_{2,3} for i = 1, 2, 3, 4, 5, 6, 7, 8, and the g_i(f_{3,1}, f_{3,2}, k) are second-order polynomials in the three variables f_{3,1}, f_{3,2}, k. Next, we can express the six variables f_{1,1}, f_{1,2}, f_{1,3}, f_{2,1}, f_{2,2}, f_{2,3} as the following functions of the remaining three variables f_{3,1}, f_{3,2}, k:

f_{1,1} = -g_1(f_{3,1}, f_{3,2}, k),   (47)
f_{1,2} = -g_2(f_{3,1}, f_{3,2}, k),   (48)
f_{1,3} = -g_6(f_{3,1}, f_{3,2}, k),   (49)
f_{2,1} = -g_3(f_{3,1}, f_{3,2}, k),   (50)
f_{2,2} = -g_4(f_{3,1}, f_{3,2}, k),   (51)
f_{2,3} = -g_8(f_{3,1}, f_{3,2}, k).   (52)

We can substitute the expressions (47)-(52) into the remaining two equations p_5 and p_7 from the epipolar constraint (46) and also into the singularity constraint (35) for F. In this way, we obtain three polynomial equations in three unknowns (two third-order polynomials and one fifth-order polynomial):

k(-g_6(f_{3,1}, f_{3,2}, k)) + g_5(f_{3,1}, f_{3,2}, k) = 0,   (53)
k(-g_8(f_{3,1}, f_{3,2}, k)) + g_7(f_{3,1}, f_{3,2}, k) = 0,   (54)
det([[-g_1, -g_2, -g_6], [-g_3, -g_4, -g_8], [f_{3,1}, f_{3,2}, 1]]) = 0.   (55)

If we take the unknown radial distortion parameter k to play the role of λ in the PEP (1), we can rewrite these three equations as (k^4 C_4 + k^3 C_3 + k^2 C_2 + k C_1 + C_0) v = 0 (56), where v = (f_{3,1}^3, f_{3,1}^2 f_{3,2}, f_{3,1} f_{3,2}^2, f_{3,2}^3, f_{3,1}^2, f_{3,1} f_{3,2}, f_{3,2}^2, f_{3,1}, f_{3,2}, 1)^T is a 10×1 vector of monomials and C_4, C_3, C_2, C_1, and C_0 are 3×10 coefficient matrices.

The polynomial eigenvalue formulation (1) requires square coefficient matrices C_j. Unfortunately, in this case we do not have square coefficient matrices: we only have three equations and 10 monomials in the vector v. Therefore, we use the method for transforming a system of polynomial equations to a PEP described in Section 2.2. The standard resultant-based method from Section 2.2.1 also produces a polynomial eigenvalue formulation of this problem, but the modified method of Section 2.2.2 improves the numerical stability of the final solver a little.

Equations (53) and (54) are equations of degree one when considered as polynomials from (C[k])[f_{3,1}, f_{3,2}], while (55) is an equation of degree three. Therefore, the degree (17), defined in this method as d = Σ_{i=1}^{n} (d_i - 1) + 1, is equal to three. This means that, after the dehomogenization (see Section 2.2.2), the sets S_i (21) consist of the monomials

S_1 = {f_{3,1}^3, f_{3,1}^2 f_{3,2}, f_{3,1} f_{3,2}^2, f_{3,1}^2, f_{3,1} f_{3,2}, f_{3,1}},
S_2 = {f_{3,2}^3, f_{3,1}^2 f_{3,2}, f_{3,1} f_{3,2}^2, f_{3,2}^2, f_{3,1} f_{3,2}, f_{3,2}},
S_3 = {1}.

The extended system (22), which is equivalent to the initial system (53)-(55), therefore consists of 13 equations, i.e., (53) and (54) each multiplied by the monomials f_{3,1}^2, f_{3,1} f_{3,2}, f_{3,2}^2, f_{3,1}, f_{3,2}, 1, together with (55), and it contains only 10 monomials. Therefore, this system can be rewritten in the polynomial eigenvalue form (56) with coefficient matrices C_j of size 13×10. Since we have more equations than monomials, we can select 10 equations from them such that the resulting system has a small condition number. The resulting formulation is a PEP of degree four.

After solving this PEP (56), we obtain 40 solutions for k and 40 corresponding eigenvectors v, from which we extract solutions for f_{3,1} and f_{3,2}. Also in this case, the eigenvalue problem results in several zero eigenvalues. Eleven of these zero eigenvalues correspond to zero columns of the coefficient matrices C_j and can be removed using the method described in Section 2.3.2, which reduces the original 40×40 eigenvalue problem to a 29×29 eigenvalue problem. We still obtain more solutions than the number of solutions of the original system of polynomial equations, which is in this case 16. Therefore, we need to eliminate these parasitic solutions, e.g., by verifying the monomial dependencies, see Section 2.2.3. After finding the solutions for k, f_{3,1}, and f_{3,2}, we can use (47)-(52) to get the solutions for the fundamental matrix F.

5 EXPERIMENTS

In this section, we evaluate our solutions and compare them with the existing state-of-the-art methods. Since all methods are algebraically equivalent and the solvers differ only in the way of solving the problems, we have evaluated them on synthetic noise-free data only. We aimed at studying the numerical stability and speed of the algorithms. The properties of the individual problems in different configurations and under different noise contaminations are studied in [29], [38], [36], [3], [15], and [19].

5.1 Numerical Stability

In all our experiments, the scenes were generated using 3D points randomly distributed in a 3D cube. Each 3D point was projected by a camera with random feasible orientation and position and random or fixed focal length. For the radial distortion problem, radial distortion following the division model [10] was added to all image points. For the calibrated 5-pt problem, we extracted camera relative rotations and translations from the estimated essential matrices. From the four possible choices of rotations and translations, we selected the one for which all 3D points were in front of the canonical camera pair [12]. Let R be an estimated camera relative rotation and R_gt the corresponding ground-truth rotation. The rotation error is measured as the angle in the angle-axis representation of the relative rotation R R_gt^(-1), and the translation error as the angle between the ground-truth and estimated translation vectors.
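The two error measures just described can be computed as follows (a small NumPy illustration; the function names are ours):

```python
import numpy as np

def rotation_error_deg(R, R_gt):
    """Angle (degrees) of the relative rotation R * R_gt^(-1) = R * R_gt^T."""
    c = (np.trace(R @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error_deg(t, t_gt):
    """Angle (degrees) between the estimated and ground-truth translation vectors."""
    c = np.dot(t, t_gt) / (np.linalg.norm(t) * np.linalg.norm(t_gt))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```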
Fig. 1. Numerical stability of the algorithms considered.

Fig. 1 top-left compares the results of different 5-pt solvers: gb5pt denotes the Stewenius Gröbner basis solver [38], nongb5pt Nister's solver [29], peig5pt the polyeig solver presented in [17], and fpeig5pt denotes the fast polyeig solver with removed zero eigenvalues proposed in this work. The rotation error is displayed with a solid line and the translation error with a dashed line. The numerical stability of all solvers is very good.

In the evaluations of both 6-pt problems, we focused on the value of the estimated focal length. We measure the relative focal length error (f - f_gt)/f_gt, where f is the estimated focal length and f_gt denotes the ground truth. Fig. 1 top-right compares the 6-pt solvers for a pair of cameras with a constant focal length. In this figure, gb6pt denotes the Stewenius Gröbner basis solver [36], 1elim6pt the Gröbner basis solution with a single elimination created using the automatic generator [16], and peig6pt denotes the polyeig solver presented in this paper. The polynomial eigenvalue solver outperforms both Gröbner basis solvers.

Solvers for a pair consisting of an internally calibrated camera and a camera with unknown focal length are compared in Fig. 1 bottom-left. Again, the polynomial eigenvalue solver (peig6pt on-off) is a little more stable than the Gröbner basis solver (6pt on-off) described in [3]. Finally, the bottom-right plot in Fig. 1 compares the 8-pt solvers for two cameras with unknown constant radial distortion. Here, the Gröbner basis solver [15] (r8pt) provides less stable solutions than both polyeig solvers. Moreover, the polyeig solver with removed zero eigenvalues (fpr8pt) outperforms the unreduced solver both in precision and speed.

All presented solvers are easy to implement since in each solver only a few matrices are filled with appropriate coefficients extracted from the input equations. Then matrix (11), which is the input to a standard numerical eigenvalue algorithm, is constructed from these matrices. We have implemented our own eigenvalue solver according to [32]; however, the LAPACK eigenvalue routines or other well-known algorithms can also be used [1].

We next compare the computational complexity of the solvers presented in this paper with the state-of-the-art methods. In the case of the 5-pt relative pose problem, the presented polynomial eigenvalue method requires computing the inverse of one 10 × 10 matrix and then computing the eigenvalues of a matrix of the same size. This is equivalent to the number and type of operations used in the Gröbner basis solver [38]. The difference is only in the method used for obtaining the resulting matrix. For this 5-pt problem, we have found that it is more efficient to directly compute the characteristic polynomial of the matrix, using the Faddeev-LeVerrier method [7], and to find the solutions by computing the roots of this 10th-order characteristic polynomial using Sturm sequences [23] than to use standard eigenvalue algorithms.

Compared to the state-of-the-art solver from [29], the polyeig solver presented in this paper is a little less efficient. This is because the solver from [29] requires computing the inverse of one matrix and then creates the 10th-order polynomial by computing one 3 × 3 polynomial determinant. This is more efficient than the computation of this polynomial in our case, because the characteristic polynomial formula [7] requires computing the traces of powers of the matrix up to the 10th power. Comparing the speed of the solvers, our solver runs on an Intel i7 Q720 notebook in about 31 μs, while the only available implementation of the solver from [38] runs in about 244 μs. However, since the Gröbner basis solver [38] consists of the same operations as our proposed solver, the same running time as ours can be achieved. The optimized version of the solver from [29] should be even faster; however, it is not publicly available.
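For completeness, the following sketch shows the Faddeev-LeVerrier recursion [7] for the coefficients of the characteristic polynomial; here A is a placeholder for the 10 × 10 matrix arising in the 5-pt solver, and this is an illustration rather than the optimized C++ implementation.

n = size(A, 1);
c = zeros(1, n);                  % c(k) is the coefficient of lam^(n-k)
M = zeros(n);                     % M_0 = 0
prev = 1;                         % the characteristic polynomial is monic
for k = 1:n
    M = A * M + prev * eye(n);    % M_k = A*M_(k-1) + c_(k-1)*I
    c(k) = -trace(A * M) / k;     % c_k = -trace(A*M_k)/k
    prev = c(k);
end
coeffs = [1, c];                  % det(lam*I - A) = lam^n + c(1)*lam^(n-1) + ... + c(n)
% The real roots of coeffs can then be bracketed with Sturm sequences [23],
% or checked against MATLAB's roots(coeffs) for debugging.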
In the case of the 6-pt equal focal length problem, the polynomial eigenvalue method requires computing the inverse of one matrix and then computing the eigenvalues of another matrix. This solver runs, on the same hardware as the 5-pt solver, in about 182 μs. The best available implementation of the Gröbner basis solver from [16] requires performing a G-J elimination of a matrix and computing the eigenvalues of a matrix, and runs in about 650 μs. The polynomial eigenvalue solver for the 6-pt problem for one calibrated and one up to focal length calibrated camera requires computing the inverse of one matrix and then the eigenvalues of another matrix. This solver runs in about 30 μs. The Gröbner basis solver from [3] requires performing a G-J elimination of a matrix and computing the eigenvalues of a 9 × 9 matrix. Its implementation from [3] runs in about 259 μs. Finally, the polynomial eigenvalue solution to the 8-pt radial distortion problem requires computing the inverse of one matrix and the eigenvalues of another matrix. This solver runs in about 685 μs. The best available implementation of the Gröbner basis solver from [16] requires performing a G-J elimination of a matrix and computing the eigenvalues of a matrix, and runs in about 640 μs; however, this time can still be reduced by a better implementation of this solver.

Note that all the mentioned Gröbner basis solvers [38], [16], [3] are partially implemented in MATLAB, and therefore their running times can be further improved. However, the problem is that most of these solvers are quite complicated to understand, and without some knowledge of algebraic geometry they cannot be easily reimplemented. About 90 percent of the time of all presented polynomial eigenvalue solvers is spent in the eigenvalue and eigenvector computation. This will also be the most time-consuming part in cleverly reimplemented Gröbner basis solvers [38], [16], [3].

6 CONCLUSION

In this paper, we have presented the polynomial eigenvalue method for solving systems of polynomial equations appearing in computer vision. Compared to the state-of-the-art Gröbner basis method, the presented method is more straightforward and easier to implement, since eigenvalue problems are well studied, easy to understand, and efficient and robust algorithms for solving them are available. We have shown this by presenting very simple, fast, stable, and easy to implement solutions to four important minimal relative pose problems. Moreover, we have characterized the problems that can be efficiently solved as polynomial eigenvalue problems and presented a resultant-based method for transforming a system of polynomial equations to a polynomial eigenvalue problem. Finally, we have proposed some useful techniques for reducing the size of the computed polynomial eigenvalue problems.

ACKNOWLEDGMENTS

This work has been supported by EC project FP7-SPACE PRoViScout and by the Czech Government under the research program MSM.

REFERENCES

[1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, Templates for the Solution of Algebraic Eigenvalue Problems. SIAM.
[2] M. Bujnak, Z. Kukelova, and T. Pajdla, A General Solution to the P4P Problem for Camera with Unknown Focal Length, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[3] M. Bujnak, Z. Kukelova, and T. Pajdla, 3D Reconstruction from Image Collections with a Single Known Focal Length, Proc. 12th IEEE Int'l Conf. Computer Vision.
[4] M. Bujnak, Z. Kukelova, and T. Pajdla, New Efficient Solution to the Absolute Pose Problem for Camera with Unknown Focal Length and Radial Distortion, Proc. 10th Asian Conf. Computer Vision, 2010.

[5] M. Byröd, K. Josephson, and K. Åström, Improving Numerical Accuracy of Gröbner Basis Polynomial Equation Solvers, Proc. 11th IEEE Int'l Conf. Computer Vision.
[6] D. Cox, J. Little, and D. O'Shea, Using Algebraic Geometry. Springer-Verlag.
[7] D.K. Faddeev and W.N. Faddeeva, Computational Methods of Linear Algebra. Freeman.
[8] O. Faugeras and S. Maybank, Motion from Point Matches: Multiplicity of Solutions, Int'l J. Computer Vision, vol. 4, no. 3.
[9] M.A. Fischler and R.C. Bolles, Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography, Comm. ACM, vol. 24, no. 6.
[10] A. Fitzgibbon, Simultaneous Linear Estimation of Multiple View Geometry and Lens Distortion, Proc. IEEE CS Conf. Computer Vision and Pattern Recognition.
[11] C. Geyer and H. Stewenius, A Nine-Point Algorithm for Estimating Para-Catadioptric Fundamental Matrices, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[12] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge Univ. Press.
[13] K. Josephson, M. Byröd, and K. Åström, Pose Estimation with Radial Distortion and Unknown Focal Length, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[14] E. Kruppa, Zur Ermittlung eines Objektes aus Zwei Perspektiven mit Innerer Orientierung, Sitz.-Ber. Akad. Wiss., Wien, Math. Naturw. Kl., Abt. IIa, vol. 122.
[15] Z. Kukelova and T. Pajdla, A Minimal Solution to the Autocalibration of Radial Distortion, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[16] Z. Kukelova, M. Bujnak, and T. Pajdla, Automatic Generator of Minimal Problem Solvers, Proc. 10th European Conf. Computer Vision.
[17] Z. Kukelova, M. Bujnak, and T. Pajdla, Polynomial Eigenvalue Solutions to the 5-Pt and 6-Pt Relative Pose Problems, Proc. 19th British Machine Vision Conf.
[18] Z. Kukelova, M. Byröd, K. Josephson, T. Pajdla, and K. Åström, Fast and Robust Numerical Solutions to Minimal Problems for Cameras with Radial Distortion, Computer Vision and Image Understanding, vol. 114, no. 2.
[19] Z. Kukelova and T. Pajdla, A Minimal Solution to Radial Distortion Autocalibration, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, no. 12.
[20] H. Li and R. Hartley, A Non-Iterative Method for Correcting Lens Distortion from Nine-Point Correspondences, Proc. OmniVision.
[21] H. Li, A Simple Solution to the Six-Point Two-View Focal-Length Problem, Proc. European Conf. Computer Vision.
[22] H. Li and R. Hartley, Five-Point Motion Estimation Made Easy, Proc. 18th Int'l Conf. Pattern Recognition.
[23] D. Hook and P. McAree, Using Sturm Sequences to Bracket Real Roots of Polynomial Equations, Graphics Gems I, Academic Press.
[24] X. Li, C. Wu, C. Zach, S. Lazebnik, and J. Frahm, Modeling and Recognition of Landmark Image Collections Using Iconic Scene Graphs, Proc. 10th European Conf. Computer Vision.
[25] D. Manocha and J.F. Canny, Multipolynomial Resultant Algorithms, J. Symbolic Computation, vol. 15, no. 2.
[26] D. Manocha, Solving Systems of Polynomial Equations, IEEE Computer Graphics and Applications, vol. 14, no. 2.
[27] D. Martinec and T. Pajdla, Robust Rotation and Translation Estimation in Multiview Reconstruction, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[28] B. Micusik and T. Pajdla, Estimation of Omnidirectional Camera Model from Epipolar Geometry, Proc. IEEE CS Conf. Computer Vision and Pattern Recognition.
[29] D. Nister, An Efficient Solution to the Five-Point Relative Pose Problem, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 6.
[30] D. Nister, R. Hartley, and H. Stewenius, Using Galois Theory to Prove that Structure from Motion Algorithms Are Optimal, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[31] J. Philip, A Non-Iterative Algorithm for Determining All Essential Matrices Corresponding to Five Point Pairs, Photogrammetric Record, vol. 15, no. 88.
[32] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical Recipes in C++: The Art of Scientific Computing. Cambridge Univ. Press.
[33] N. Snavely, S.M. Seitz, and R.S. Szeliski, Photo Tourism: Exploring Image Collections in 3D, Proc. SIGGRAPH.
[34] N. Snavely, S. Seitz, and R. Szeliski, Skeletal Graphs for Efficient Structure from Motion, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[35] H.J. Stetter, Numerical Polynomial Algebra. SIAM.
[36] H. Stewénius, D. Nister, F. Kahl, and F. Schaffalitzky, A Minimal Solution for Relative Pose with Unknown Focal Length, Proc. IEEE Conf. Computer Vision and Pattern Recognition.
[37] H. Stewenius, D. Nister, M. Oskarsson, and K. Åström, Solutions to Minimal Generalized Relative Pose Problems, Proc. Workshop Omnidirectional Vision.
[38] H. Stewenius, C. Engels, and D. Nister, Recent Developments on Direct Relative Orientation, ISPRS J. Photogrammetry and Remote Sensing, vol. 60.
[39] M. Urbanek, R. Horaud, and P. Sturm, Combining Off- and On-Line Calibration of a Digital Camera, Proc. Third Int'l Conf. 3-D Digital Imaging and Modeling.
[40] A. Wallack, I.Z. Emiris, and D. Manocha, MARS: A Maple/Matlab/C Resultant-Based Solver, Proc. Int'l Symp. Symbolic and Algebraic Computation.

Zuzana Kukelova received the master's degree in mathematics from the Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava. Currently, she is working toward the PhD degree at the Center for Machine Perception, which is a part of the Department of Cybernetics at the Czech Technical University in Prague. She is interested in algebraic geometry and computer vision, where she focuses on minimal problems and methods for solving systems of polynomial equations. She is a member of the IEEE.

Martin Bujnak received the master's degree in computer graphics and parallel processing from the Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava. Currently, he is working toward the PhD degree at the Center for Machine Perception, which is a part of the Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Prague. In his research, he focuses on camera calibration, pose estimation, and structure from motion problems. He is a member of the IEEE.

Tomas Pajdla received the MSc and PhD degrees from the Czech Technical University in Prague. He works on the geometry and algebra of computer vision and robotics, with emphasis on nonclassical cameras, 3D reconstruction, and industrial vision. He contributed to introducing the epipolar geometry of panoramic cameras, noncentral camera models generated by linear mappings, and generalized epipolar geometries, and to developing solvers for minimal problems in structure from motion. He is a member of the IEEE.
