Solving the 100 Swiss Francs Problem
Mingfu Zhu, Guangran Jiang and Shuhong Gao

Abstract. Sturmfels offered 100 Swiss francs in 2003 for a proof of a conjecture that deals with a special case of maximum likelihood estimation for a latent class model. This paper gives a positive answer to the conjecture.

Mathematics Subject Classification (2000). Primary 62Hxx; Secondary 13P10, 62F10.

Keywords. Maximum likelihood estimation, latent class model, solving polynomial equations, algebraic statistics.

1. The conjecture and its statistical background

Sturmfels [11] proposed the following problem: Maximize the likelihood function

    L(P) = prod_{i=1}^{4} p_ii^4 * prod_{i != j} p_ij^2                    (1)

over the set of all 4 x 4 matrices P = (p_ij) whose entries are nonnegative and sum to 1 and whose rank is at most two. Based on numerical experiments employing an expectation-maximization (EM) algorithm, Sturmfels [10, 11] conjectured that the matrix

    P = (1/40) [3 3 2 2; 3 3 2 2; 2 2 3 3; 2 2 3 3]

is a global maximum of L(P). He offered 100 Swiss francs for a rigorous proof in a postgraduate course held at ETH Zurich in 2003. Partial results were given in [5], where the general statistical background for this problem is also presented.

This problem is a special case of maximum likelihood estimation for a latent class model. More precisely, following [5], let (X_1, ..., X_d) be a discrete multivariate random vector where each X_j takes values in a finite state set S_j = {1, ..., s_j}. Let Omega = S_1 x ... x S_d be
the sample space. For each (x_1, ..., x_d) in Omega, the joint probability mass function of (X_1, ..., X_d) is denoted

    p(x_1, ..., x_d) = P{(X_1, ..., X_d) = (x_1, ..., x_d)}.

In general the variables X_1, ..., X_d need not be mutually independent. By introducing an unobservable variable H defined on the set [r] = {1, ..., r}, the variables X_1, ..., X_d become conditionally independent given H. The joint probability mass function in the newly formed model is

    p(x_1, ..., x_d, h) = P{(X_1, ..., X_d, H) = (x_1, ..., x_d, h)} = p(x_1 | h) ... p(x_d | h) lambda_h,

where lambda_h is the marginal probability P{H = h} and p(x_j | h) is the conditional probability P{X_j = x_j | H = h}. We denote this new r-class mixture model by H. The marginal distribution of (X_1, ..., X_d) in H is given by the probability mass function (which is also called the accounting equations [8])

    p(x_1, ..., x_d) = sum_{h in [r]} p(x_1, ..., x_d, h) = sum_{h in [r]} p(x_1 | h) ... p(x_d | h) lambda_h.

In practice, a collection of samples from Omega is observed. For each (x_1, ..., x_d), let n(x_1, ..., x_d) in N be the number of observed occurrences of (x_1, ..., x_d) in the samples; the parameters p(x_1 | h), ..., p(x_d | h), lambda_h, and hence p(x_1, ..., x_d), are unknown. The maximum likelihood estimation problem is to find the model parameters that best explain the observed data, that is, to determine the global maxima of the likelihood function

    L(H) = prod_{(x_1,...,x_d) in Omega} p(x_1, ..., x_d)^{n(x_1,...,x_d)}.

Since each p(x_1, ..., x_d) is nonnegative, it is equivalent but more convenient to use the log-likelihood function

    l(H) = sum_{(x_1,...,x_d) in Omega} n(x_1, ..., x_d) ln p(x_1, ..., x_d),                    (2)

where we define ln(0) = -infinity. Finding the maxima of (2) is difficult and remains infeasible for current symbolic software [2, 9]. We can only handle some special cases: small models or highly symmetric tables. The 100 Swiss francs problem is the special case of H in which d = 2, S_1 = S_2 = {A, C, G, T}, s_1 = s_2 = 4 and r = 2. It is related to a DNA sequence alignment problem, as described in [10].
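The accounting equations above are easy to exercise numerically. The following sketch (ours, not from the paper; the parameter names lam, alpha, beta are our own) builds the marginal table p(x_1, x_2) for d = 2 from latent-class parameters and shows that it is automatically a nonnegative matrix of rank at most r that sums to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
r, k = 2, 4                                  # r latent classes, k states per variable (d = 2)

lam = rng.dirichlet(np.ones(r))              # marginal distribution of H
alpha = rng.dirichlet(np.ones(k), size=r)    # conditional distributions p(x1 | h)
beta = rng.dirichlet(np.ones(k), size=r)     # conditional distributions p(x2 | h)

# Accounting equations for d = 2: p(x1, x2) = sum_h p(x1|h) p(x2|h) lam_h
P = sum(lam[h] * np.outer(alpha[h], beta[h]) for h in range(r))
```

Each summand is a rank-one matrix, so the mixture always has rank at most r; this is why the d = 2, r = 2 problem becomes an optimization over rank-two matrices.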
In that example, the contingency table for the observed counts of ordered pairs of nucleotides (i.e. AA, AC, AG, AT, CA, CC, ...) is

        A  C  G  T
    A   4  2  2  2
    C   2  4  2  2
    G   2  2  4  2
    T   2  2  2  4
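The EM iteration mentioned above can be sketched as follows (our illustration, not the authors' code; the names P_hat, P_em and the parameter names are ours). It alternates a posterior (E) step and a re-estimation (M) step on this table, and its limit can be compared against the conjectured optimum:

```python
import numpy as np

N = np.full((4, 4), 2.0)
np.fill_diagonal(N, 4.0)                       # the contingency table above

P_hat = np.array([[3, 3, 2, 2], [3, 3, 2, 2],
                  [2, 2, 3, 3], [2, 2, 3, 3]]) / 40.0   # conjectured optimum

def loglik(P):
    # log of prod p_ij^{n_ij} for the table N
    return (N * np.log(P)).sum()

rng = np.random.default_rng(0)
lam = np.array([0.5, 0.5])                     # mixing weights for H
alpha = rng.dirichlet(np.ones(4), size=2)      # p(x1 | h)
beta = rng.dirichlet(np.ones(4), size=2)       # p(x2 | h)

for _ in range(2000):
    # E-step: posterior responsibility of class h for cell (i, j), shape (2, 4, 4)
    joint = lam[:, None, None] * alpha[:, :, None] * beta[:, None, :]
    resp = joint / joint.sum(axis=0, keepdims=True)
    # M-step: re-estimate parameters from expected class counts
    w = resp * N
    lam = w.sum(axis=(1, 2)) / N.sum()
    alpha = w.sum(axis=2)
    alpha /= alpha.sum(axis=1, keepdims=True)
    beta = w.sum(axis=1)
    beta /= beta.sum(axis=1, keepdims=True)

P_em = (lam[:, None, None] * alpha[:, :, None] * beta[:, None, :]).sum(axis=0)
```

EM only guarantees convergence to a local maximum, which is exactly the difficulty the paper addresses: the limit's log-likelihood never exceeds that of the conjectured matrix, but nothing in the iteration certifies global optimality.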
So the likelihood function (2) in this example is exactly (1). Even for this simple case, the problem is surprisingly difficult. We know that a global maximum must exist, as the region of the parameters is compact. By using an EM algorithm or the Newton-Raphson method and starting from suitable initial points, one can find local maxima of the likelihood function; however, the global maximum property is not guaranteed. We prove that Sturmfels' conjectured solution is indeed a global maximum.

Our paper is organized as follows. We first derive some general properties of optimal solutions in Section 2.1, then provide a theoretical solution to the conjecture in Section 2.2. In Section 2.3, we make some comments about using Groebner basis techniques for this problem and provide a computational solution. Lastly, we suggest several new conjectures for more general cases.

2. Proof of the conjecture

2.1. General properties

We focus on general n x n matrices P = (p_ij) in this section. For convenience we scale each entry of P by n^2, so that the entries sum to n^2, and we take the square root of the original likelihood function. So we may assume that

    L(P) = prod_{i=1}^{n} p_ii^2 * prod_{i != j} p_ij.                    (3)

The problem is

    Maximize:   L(P)
    Subject to: sum_{1 <= i,j <= n} p_ij = n^2,  rank(P) <= 2,  and  p_ij >= 0 for 1 <= i, j <= n.

Suppose P = (p_ij) is a global maximum of L(P). It is easy to see that P cannot be the all-ones n x n matrix J, for which L(J) = 1, a smaller value than the one attained by the scaled conjectured matrix. Since the function (3) is a continuous function of the p_ij, if one of the entries of P approaches 0, the product has to approach 0 too, as the other entries are bounded. Hence the optimal solutions must occur at interior points, and we do not need to worry about the boundary where some p_ij = 0. Therefore, in the subsequent discussion, we may assume that P != J and all its entries are positive.

We show that P must have certain symmetry properties.

Lemma 1. For an optimal solution P, its row sums and column sums must all equal n.
Proof. Let s_i = sum_j p_ij be the i-th row sum. Then sum_i s_i = n^2, and prod_i s_i <= n^n with equality if and only if s_i = n for all i. Let p~_ij = (n / s_i) p_ij and P~ = (p~_ij). Then rank(P~) = rank(P) and sum_{i,j} p~_ij = n^2. However, since each entry of row i occurs in (3) with total exponent n + 1,

    L(P~) = L(P) * prod_{i=1}^{n} (n / s_i)^{n+1} >= L(P),

with equality if and only if s_i = n for all i. Since P is a global maximum, L(P~) <= L(P). Therefore each row sum equals n. Similarly, each column sum equals n as well.

We shall express P in a form that involves fewer variables and has no rank constraint. Since P has rank at most two, by the singular value decomposition theorem there are column vectors u_1, u_2, v_1 and v_2 of length n such that P = sigma_1 u_1 v_1^t + sigma_2 u_2 v_2^t for some nonnegative numbers sigma_1 and sigma_2. By Lemma 1, P has equal row and column sums, so P has the vectors (1, ..., 1) and (1, ..., 1)^t as left and right eigenvectors, both with eigenvalue n. Hence we may assume that sigma_1 u_1 v_1^t = J and write

    P = J + (b_1, ..., b_n)^t (a_1, ..., a_n),    i.e.    p_ij = 1 + a_i b_j,

where (a_1, ..., a_n) = v_2^t and (b_1, ..., b_n)^t = sigma_2 u_2. In this form, P automatically has rank at most two. Also, the condition sum_{i,j} p_ij = n^2 becomes

    (sum_{i=1}^n a_i) * (sum_{i=1}^n b_i) = 0.                    (4)

We have transformed the original problem into the following optimization problem:

    Maximize:   l(P) = 2 sum_{i=1}^n ln(1 + a_i b_i) + sum_{i != j} ln(1 + a_i b_j)
    Subject to: equation (4) and 1 + a_i b_j > 0 for 1 <= i, j <= n.

The Lagrangian function is Lambda(P, lambda) = l(P) - lambda (sum_i a_i)(sum_i b_i), where lambda in R. Any local extremum must satisfy

    dLambda/da_i = 2 b_i / (1 + a_i b_i) + sum_{j != i} b_j / (1 + a_i b_j) - lambda sum_{j=1}^n b_j = 0,    1 <= i <= n,    (5)
and

    dLambda/db_j = 2 a_j / (1 + a_j b_j) + sum_{i != j} a_i / (1 + a_i b_j) - lambda sum_{i=1}^n a_i = 0,    1 <= j <= n.    (6)

By Lemma 1, for an optimal solution P, the row sums and column sums must all equal n. Since P != J, neither all the a_i nor all the b_i vanish, so this means that

    sum_{i=1}^n a_i = 0,    (7)        and        sum_{i=1}^n b_i = 0.    (8)

Plugging (7) and (8) into (5) and (6) respectively, we obtain the following lemma.

Lemma 2. A global maximum P must satisfy

    sum_{j=1}^n b_j / (1 + a_i b_j) + b_i / (1 + a_i b_i) = 0,    1 <= i <= n,    (9)

and

    sum_{i=1}^n a_i / (1 + a_i b_j) + a_j / (1 + a_j b_j) = 0,    1 <= j <= n.    (10)

Some simple algebra yields:

Corollary 3. An optimal solution must satisfy

    sum_{j=1}^n 1 / (1 + a_i b_j) + 1 / (1 + a_i b_i) = n + 1,    1 <= i <= n,    (11)

and

    sum_{i=1}^n 1 / (1 + a_i b_j) + 1 / (1 + a_j b_j) = n + 1,    1 <= j <= n.    (12)

Proof. Multiply (9) by a_i and use a_i b_j / (1 + a_i b_j) = 1 - 1 / (1 + a_i b_j) in each term; rearranging gives (11). Equation (12) follows from (10) in the same way.

The 2n equations obtained by clearing denominators in the equations of Lemma 2 (or Corollary 3), together with equations (7) and (8), form a system of polynomial equations in the 2n unknowns a_1, ..., a_n, b_1, ..., b_n whose solution set contains all global maxima. From a computational point of view, we may find all the solutions of this system, say by the Groebner basis method, and then pick out a global maximum. At the time we submitted this paper (in 2008), we could not solve the system for n = 4 using Maple on a computer with moderate computation power. With both the advances in computer hardware and efficient implementations of algorithms for
computing Groebner bases, the system for n = 4 has now become solvable. A computational solution for this problem is given in Section 2.3. However, the system for n = 5 remains unsolvable using our computers.

Our strategy below is to prove that P must have high symmetry. First, the a_i's and b_i's are in the same order: if a_i > a_j, then b_i > b_j correspondingly (Lemmas 4 and 5). For the case n = 4, once we force a_1 = b_1 by scaling, we can eventually prove a_i = b_i for all other i (Lemmas 7 and 9). With four a_i's remaining, we prove that the a_i's with the same sign must be identical. Finally, one can solve the system by hand. Note that Fienberg et al. [5] derived results similar to Lemmas 4 and 5, but our approach is simpler and completely different.

Lemma 4. For every i,
1. a_i = 0 if and only if b_i = 0, and
2. a_i > 0 if and only if b_i > 0.

Proof. For the first part, setting a_i = 0 in equation (9) gives sum_{j=1}^n b_j + b_i = 0; the sum vanishes by (8), so b_i = 0. Similarly, if b_i = 0 then a_i = 0, using (10) and (7). For the second part, note that g(x) = 1/(1 + x) is convex for x > -1. By Jensen's inequality and (8),

    (1/n) sum_{j=1}^n 1 / (1 + a_i b_j) >= 1 / (1 + (1/n) sum_{j=1}^n a_i b_j) = 1,

that is, sum_{j=1}^n 1 / (1 + a_i b_j) >= n. Comparing with equation (11), we get 1 / (1 + a_i b_i) <= 1, so a_i b_i >= 0. Together with the first part, we conclude that a_i > 0 if and only if b_i > 0.

Lemma 5. For all i and j,
1. a_i = a_j if and only if b_i = b_j, and
2. a_i > a_j if and only if b_i > b_j.

Proof. For the first part, suppose b_i = b_j. Then, by (10),

    sum_{k=1}^n a_k / (1 + a_k b_i) + a_i / (1 + a_i b_i) = 0    and    sum_{k=1}^n a_k / (1 + a_k b_j) + a_j / (1 + a_j b_j) = 0.

Subtracting the two equations gives a_i / (1 + a_i b_i) = a_j / (1 + a_j b_j), and since x / (1 + x b_i) is strictly increasing in x, we get a_i = a_j. Conversely, if a_i = a_j then, using (9), we have b_i = b_j.
For the second part, switch b_i and b_j in P to form a new matrix P~. Then we must have L(P) >= L(P~), by our assumption that P is a global maximum. Note that

    L(P) - L(P~)
      = C_1 ((1 + a_i b_i)^2 (1 + a_i b_j)(1 + a_j b_i)(1 + a_j b_j)^2 - (1 + a_i b_j)^2 (1 + a_i b_i)(1 + a_j b_j)(1 + a_j b_i)^2)
      = C_2 ((1 + a_i b_i)(1 + a_j b_j) - (1 + a_i b_j)(1 + a_j b_i))
      = C_2 (a_i b_i + a_j b_j - a_i b_j - a_j b_i)
      = C_2 (a_i - a_j)(b_i - b_j),

where C_1, C_2 are products of some entries of P, so C_1, C_2 are positive. Thus (a_i - a_j)(b_i - b_j) >= 0. Since a_i = a_j if and only if b_i = b_j by part 1, we conclude that a_i > a_j if and only if b_i > b_j.

2.2. Theoretical solution

We complete the theoretical proof for the conjecture in this section. From now on we focus on the case n = 4. By Lemma 5, we can always assume a_1 >= a_2 >= a_3 >= a_4 and b_1 >= b_2 >= b_3 >= b_4. We know a_1 != 0: otherwise b_1 = 0 by Lemma 4, and since a_1, b_1 are the largest entries and the sums (7), (8) are zero, all a_i = b_j = 0, which would result in P = J. We also have a_1 b_1 > 0, so we can replace (a_1, a_2, a_3, a_4) in P by sqrt(b_1/a_1) (a_1, a_2, a_3, a_4) and (b_1, b_2, b_3, b_4)^t by sqrt(a_1/b_1) (b_1, b_2, b_3, b_4)^t. This leaves every product a_i b_j unchanged, so we may always assume a_1 = b_1. Thus P can be expressed in the form

    P = [ 1 + a_1 b_1   1 + a_1 b_2   1 + a_1 b_3   1 + a_1 b_4 ]
        [ 1 + a_2 b_1   1 + a_2 b_2   1 + a_2 b_3   1 + a_2 b_4 ]
        [ 1 + a_3 b_1   1 + a_3 b_2   1 + a_3 b_3   1 + a_3 b_4 ]
        [ 1 + a_4 b_1   1 + a_4 b_2   1 + a_4 b_3   1 + a_4 b_4 ]        with b_1 = a_1.        (13)

If a_2 + a_3 < 0, we then replace (a_1, a_2, a_3, a_4) in P by (-a_4, -a_3, -a_2, -a_1) and (b_1, b_2, b_3, b_4)^t by (-b_4, -b_3, -b_2, -b_1)^t. The new matrix, which has a_2 + a_3 >= 0, has the same likelihood function value as P. Thus we may assume a_2 + a_3 >= 0. Without loss of generality, we may make the following assumption.

Assumption 6. We can always assume the following:
1. a_1 >= a_2 >= a_3 >= a_4 and b_1 >= b_2 >= b_3 >= b_4,
2. a_1 = b_1 > 0, and
3. a_2 + a_3 >= 0.

The results in the rest of this section are all based on Assumption 6. Our first goal is to prove a_2 = b_2.

Lemma 7. a_2 = b_2.

Proof. If one of a_2, b_2 is 0, then a_2 = b_2 = 0 by Lemma 4. We assume that both a_2 and b_2 are nonzero.
Apply Corollary 3 to the first row of matrix (13):

    2 / (1 + a_1^2) + 1 / (1 + a_1 b_2) + 1 / (1 + a_1 b_3) + 1 / (1 + a_1 b_4) = 5.

Also, by (8) and b_1 = a_1,

    a_1^2 + a_1 b_2 + a_1 b_3 + a_1 b_4 = 0.

From the two equations above, eliminating a_1^2, one can solve for

    a_1 b_4 = f_1(a_1 b_2, a_1 b_3),    (14)

where f_1 is an explicit bivariate algebraic function in x, y. Similarly, apply Corollary 3 and (7) to the first column of matrix (13), whose entries are 1 + a_i a_1; the two resulting equations have exactly the same shape, so

    a_1 a_4 = f_1(a_1 a_2, a_1 a_3).    (15)

Since a_4 and b_4 are nonzero (a_1 > 0 and the sums (7), (8) vanish, so a_4 < 0 and b_4 < 0), dividing (14) by b_4 and (15) by a_4 makes both sides equal to a_1, and we get

    f_1(a_1 b_2, a_1 b_3) / b_4 = f_1(a_1 a_2, a_1 a_3) / a_4.    (16)

Normalizing (16), and eliminating a_3, a_4, b_3 and b_4 by means of (7), (8) and the analogous relations obtained from the second row and the second column, we derive a trivariate polynomial equation, say

    f_2(a_1, a_2, b_2) = 0.    (17)

The whole derivation can be repeated with the roles of rows and columns interchanged; this switches the a's with the b's throughout (recall a_1 = b_1) and yields the same polynomial with a_2 and b_2 interchanged:

    f_2(a_1, b_2, a_2) = 0.    (19)

Subtracting (19) from (17) yields f_2(a_1, a_2, b_2) - f_2(a_1, b_2, a_2) = 0. Since only a_2 and b_2 were switched in the polynomial f_2, the difference is antisymmetric in a_2 and b_2 and therefore has a_2 - b_2 as a factor, say

    (a_2 - b_2) f_3(a_1, a_2, b_2) = 0,    (20)
where f_3(a_1, a_2, b_2) is an explicit trivariate polynomial. Thus a_2 = b_2 provided f_3(a_1, a_2, b_2) != 0. This is indeed the case: the bounds for a_1^2, a_1 a_2 and a_1 b_2 presented in Lemma 8 below can be applied to estimate f_3 term by term and show that f_3(a_1, a_2, b_2) < 0. Therefore f_3(a_1, a_2, b_2) != 0 and a_2 = b_2, just as needed.

Lemma 8.
1. a_1^2 <= 3/5,
2. 0 <= a_1 a_2 <= 3/5, and
3. 0 <= a_1 b_2 <= 3/5.

Proof. (1) Let A_i = a_1 a_i for i = 1, ..., 4. Then A_1 = a_1^2 >= A_2 >= A_3 >= A_4 and, by (7),

    A_1 + A_2 + A_3 + A_4 = 0,

while applying Corollary 3 to the first column of (13) gives

    2 / (1 + A_1) + 1 / (1 + A_2) + 1 / (1 + A_3) + 1 / (1 + A_4) = 5.

Since g(x) = 1/(1 + x) is convex for x > -1, Jensen's inequality yields

    1 / (1 + A_2) + 1 / (1 + A_3) + 1 / (1 + A_4) >= 3 / (1 + (A_2 + A_3 + A_4)/3) = 9 / (3 - A_1),

hence 2 / (1 + A_1) + 9 / (3 - A_1) <= 5. Since 0 < A_1 < 3, clearing denominators gives 5 A_1^2 <= 3 A_1, i.e. A_1 <= 3/5. Thus a_1^2 <= 3/5.

(2) By Assumption 6, a_2 + a_3 >= 0; if a_2 < 0 then a_3 <= a_2 < 0 would force a_2 + a_3 < 0, so a_2 >= 0. Hence 0 <= a_1 a_2 = A_2 <= A_1 <= 3/5.

(3) Since a_2 >= 0, Lemma 4 gives b_2 >= 0 as well. Hence 0 <= a_1 b_2 <= a_1 b_1 = a_1^2 <= 3/5.
Lemma 9. a_i = b_i for i = 3, 4.

Proof. Let A_i = a_i b_1 for i = 1, ..., 4. Then, by (7),

    A_1 + A_2 + A_3 + A_4 = 0,

and applying Corollary 3 to the first column of (13),

    2 / (1 + A_1) + 1 / (1 + A_2) + 1 / (1 + A_3) + 1 / (1 + A_4) = 5.

By the two equations above, since A_3 >= A_4, we can derive explicit expressions for A_3 and A_4 in the variables A_1, A_2, say A_3 = h_1(A_1, A_2) and A_4 = h_2(A_1, A_2). If we let B_i = a_1 b_i, the first row of (13) satisfies the same two equations, so we get B_3 = h_1(B_1, B_2) and B_4 = h_2(B_1, B_2) in a similar manner. Note that A_1 = a_1 b_1 = B_1 and, by Lemma 7, A_2 = a_2 b_1 = a_2 a_1 = b_2 a_1 = B_2. We deduce that A_i = B_i for i = 3, 4. Since a_1 = b_1 > 0, it follows that a_i = b_i for i = 3, 4.

By Lemmas 7 and 9, we have a_i = b_i for all i. Hence P can be expressed as

    P = [ 1 + a_1^2     1 + a_1 a_2   1 + a_1 a_3   1 + a_1 a_4 ]
        [ 1 + a_2 a_1   1 + a_2^2     1 + a_2 a_3   1 + a_2 a_4 ]
        [ 1 + a_3 a_1   1 + a_3 a_2   1 + a_3^2     1 + a_3 a_4 ]
        [ 1 + a_4 a_1   1 + a_4 a_2   1 + a_4 a_3   1 + a_4^2   ]

with

    a_1 + a_2 + a_3 + a_4 = 0.    (21)
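At this point it is easy to check numerically (our own sanity check, not part of the proof) that the vector a = b = (1, 1, -1, -1)/sqrt(5), which will emerge as the optimal solution on the next page, satisfies both the stationarity identity sum_j b_j/(1 + a_i b_j) + b_i/(1 + a_i b_i) = 0 and the identity sum_j 1/(1 + a_i b_j) + 1/(1 + a_i b_i) = n + 1 for every i:

```python
import numpy as np

a = np.array([1.0, 1.0, -1.0, -1.0]) / np.sqrt(5.0)
b = a.copy()
n = 4

# Lemma-2-type identity: should be 0 for each i
g = [sum(b[j] / (1 + a[i] * b[j]) for j in range(n)) + b[i] / (1 + a[i] * b[i])
     for i in range(n)]

# Corollary-type identity: should be n + 1 = 5 for each i
c = [sum(1 / (1 + a[i] * b[j]) for j in range(n)) + 1 / (1 + a[i] * b[i])
     for i in range(n)]
```

Both identities hold exactly here because a_i b_j takes only the two values +1/5 and -1/5, so each sum telescopes.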
By Corollary 3 we have the following system of equations:

    2 / (1 + a_i^2) + sum_{j != i} 1 / (1 + a_i a_j) = 5,    i = 1, 2, 3, 4.    (22)

With (21) and (22), we claim:

Lemma 10. a_i = a_j if a_i a_j > 0.

Proof. Let

    F(x) = a_1 / (1 + a_1 x) + a_2 / (1 + a_2 x) + a_3 / (1 + a_3 x) + a_4 / (1 + a_4 x) + x / (1 + x^2).

By (9) and (10) with b = a, F(a_i) = 0 for i = 1, ..., 4, and F(0) = sum_i a_i = 0 by (21). Normalizing F(x) yields a polynomial (the numerator) of low degree in x whose constant term is sum_i a_i = 0; one checks that a_1, a_2, a_3, a_4 and 0 account for all the zeros of F. Suppose there exist consecutive i, j such that a_i > a_j > 0 (or a_j < a_i < 0, respectively). Then F(x) is continuous on the interval (-1/a_j, -1/a_i), and

    lim_{x -> (-1/a_j)+} F(x) = +infinity,    lim_{x -> (-1/a_i)-} F(x) = -infinity.

There must be a zero of F lying in (-1/a_j, -1/a_i), say x_0. Since x_0 != 0, x_0 must be one of the a_k, k = 1, ..., 4. But x_0 < -1/a_i gives 1 + a_i x_0 < 0, so the entry 1 + a_i a_k of the matrix P would be negative, contradicting the fact that each entry of P is positive. Therefore, if i, j are consecutive and a_i a_j > 0, we must have a_i = a_j. Hence a_i a_j > 0 implies a_i = a_j for any i, j.

With Lemma 10 it is handy to solve the system (22). Under Assumption 6 there are only 4 possible patterns of signs for (a_1, a_2, a_3, a_4). If the signs are (+, +, -, -), then a_1 = a_2 = -a_3 = -a_4 = a. Substituting this into any equation in (22) yields 3/(1 + a^2) + 2/(1 - a^2) = 5, so a^2 = 1/5, and the matrix would be

    P_1 = (1/5) [6 6 4 4; 6 6 4 4; 4 4 6 6; 4 4 6 6].
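This first case can be reproduced mechanically (a sketch of ours): with x = a^2, the equation 3/(1 + x) + 2/(1 - x) = 5 clears denominators to 5x^2 - x = 0, whose nonzero root gives the matrix above and, after undoing the n^2-scaling, the conjectured optimum:

```python
import numpy as np

# Sign pattern (+, +, -, -): a = (a, a, -a, -a), x = a^2.
# 3/(1+x) + 2/(1-x) = 5 clears to 5*x**2 - x = 0.
roots = np.roots([5.0, -1.0, 0.0])
x = roots[np.argmax(roots)]              # the nonzero root, x = 1/5

a = np.sqrt(x) * np.array([1, 1, -1, -1])
P_scaled = 1.0 + np.outer(a, a)          # entries sum to 16
P = P_scaled / 16.0                      # back to entries summing to 1
```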
For the case when the signs are (+, +, +, -), we have a_1 = a_2 = a_3 = a and a_4 = -3a; substituting into (22) gives a^2 = 1/15, and the matrix would be

    P_2 = (1/15) [16 16 16 12; 16 16 16 12; 16 16 16 12; 12 12 12 24].

When the signs are (+, +, 0, -), we have a_1 = a_2 = a, a_3 = 0 and a_4 = -2a; then a^2 = 1/8, and the matrix would be

    P_3 = (1/8) [9 9 8 6; 9 9 8 6; 8 8 8 8; 6 6 8 12].

And when the signs are (+, 0, 0, -), we have a_1 = a = -a_4 and a_2 = a_3 = 0; then a^2 = 1/3, and the matrix would be

    P_4 = (1/3) [4 3 3 2; 3 3 3 3; 3 3 3 3; 2 3 3 4].

The matrices attaining a local maximum of the likelihood function must be among the matrices above. Comparing the four values, we conclude that the matrix P_1 attains the global maximum. Finally, multiplying the matrix P_1 by 1/16, which undoes the scaling, yields

    P = (1/40) [3 3 2 2; 3 3 2 2; 2 2 3 3; 2 2 3 3].

2.3. Approach via Groebner bases

The Groebner basis technique is a general approach for solving systems of polynomial equations. Buchberger introduced the first algorithm for computing Groebner bases in 1965 (see [1]), and subsequently there have been extensive efforts to improve its efficiency. It is not our purpose here to give a detailed survey of all the algorithms in the literature, but we mention two important algorithms, F4 (Faugere 1999, [3]) and F5 (Faugere 2002, [4]), where signatures are introduced to detect useless S-pairs without performing reductions. F5 is believed to be the fastest algorithm of the last decade. Most recently, Gao, Guan and Volny (2010, [6]) introduced an incremental algorithm (G2V) that is simpler and several times faster than F5, and Gao, Volny and Wang (2010, [7]) developed a more general algorithm that avoids the incremental nature of F5 and G2V and is flexible in the choice of signature orders.
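The comparison of the four candidates is easy to verify numerically (our check, using the scaled objective prod p_ii^2 * prod_{i != j} p_ij): the (+, +, -, -) pattern wins, and all four values exceed L(J) = 1.

```python
import numpy as np

def L(P):
    # scaled objective: prod of all entries, times prod of diagonal entries
    return np.prod(P) * np.prod(np.diag(P))

def candidate(a):
    return 1.0 + np.outer(a, a)

P1 = candidate(np.sqrt(1 / 5.0) * np.array([1, 1, -1, -1]))
P2 = candidate(np.sqrt(1 / 15.0) * np.array([1, 1, 1, -3]))
P3 = candidate(np.sqrt(1 / 8.0) * np.array([1, 1, 0, -2]))
P4 = candidate(np.sqrt(1 / 3.0) * np.array([1, 0, 0, -1]))

values = [L(P) for P in (P1, P2, P3, P4)]
```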
All these algorithms are for general polynomial systems; if a large system of polynomials has special structure, it is not known how to use these algorithms to take advantage of that structure. After we submitted our paper (in 2008), one of the referees pointed out that it is possible to compute a Groebner basis for our polynomial system with n = 4. We give more details on this computation.

The solution starts from Equations (7)-(10), using the scaling from the beginning of Section 2.2; without the scaling the solution set is infinite. For this one needs to assume a_1 = b_1 != 0; note that this assumption relies on Lemmas 4 and 5, which we proved above. The whole computation takes about ten minutes in Maple on a moderate computer. Precisely, one can construct the ideal

    J_0 = < a_1 - b_1, sum_{i=1}^4 a_i, sum_{i=1}^4 b_i, h_1, ..., h_8 >  in C[X],

where each h_i is a numerator of a left-hand side of Equations (9) and (10), C is the complex field and X represents the list of unknowns a_1, ..., a_4, b_1, ..., b_4. Let J = J_0 + < u a_1 - 1 > in C[X, u], where u is a new variable; then a_1 != 0 for any solution of J. We compute a Groebner basis G of J in an elimination term order with u > X. Let G' = G intersect C[X]. Then G' is a Groebner basis of J intersect C[X], the ideal it generates is zero-dimensional, and its rational univariate representation can be computed. In this step, a univariate polynomial r(v) in a new variable v is computed whose roots can represent all the solutions. It has degree 98. By substituting each real root into the representation, the roots that make some entries of P equal to 0 give L(P) = 0; each of the remaining solutions gives one of the matrices

    P_1 = (1/5)  [6 6 4 4; 6 6 4 4; 4 4 6 6; 4 4 6 6],
    P_2 = (1/15) [16 16 16 12; 16 16 16 12; 16 16 16 12; 12 12 12 24],
    P_3 = (1/8)  [9 9 8 6; 9 9 8 6; 8 8 8 8; 6 6 8 12],
    P_4 = (1/3)  [4 3 3 2; 3 3 3 3; 3 3 3 3; 2 3 3 4],

up to a permutation of the variables a_i and b_i. It is straightforward to check that P_1 is the optimal solution. We also tried the case n = 5, but our computation did not finish after more than one day, mainly because the computation of the first Groebner basis G did not finish.
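The auxiliary-variable trick used above (adding u*a_1 - 1 to force a_1 != 0, then eliminating u) can be demonstrated with SymPy on a toy ideal; this is our illustration, the paper's actual computation was done in Maple:

```python
from sympy import symbols, groebner

# x*(x - 1) = 0 has the solutions x = 0 and x = 1; to discard x = 0,
# add u*x - 1 for a fresh variable u and eliminate u.
u, x = symbols('u x')
G = groebner([x * (x - 1), u * x - 1], u, x, order='lex')

# Basis elements not involving u generate the eliminated ideal; here the
# computation leaves exactly the constraint x - 1, i.e. only x = 1 survives.
elim = [g for g in G.exprs if u not in g.free_symbols]
```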
A Groebner basis encodes both the real and the complex solutions. For our system with n = 4, there are far more complex solutions than real solutions. For a system of polynomials with finitely many complex solutions, it is expected that, in general, the more solutions the system has, the harder it is to compute a Groebner basis (for any term order). Also, even if the final Groebner basis is small, the intermediate polynomials may be large (in the number of nonzero terms as well as in the size of the coefficients), hence the algorithms may not finish in reasonable time in practice; in fact, it is more likely that the computer quickly runs out of memory.

For our theoretical approach (by hand), we were able to exploit some partial structure in our polynomial system. For example, we have a polynomial of the form (a_2 - b_2) f_3(a_1, a_2, b_2) in the proof of Lemma 7. Our approach is to show that the factor f_3(a_1, a_2, b_2), a trivariate polynomial with only a handful of terms, is nonzero by applying the bounds from Lemma 8, so that we can derive the much simpler equation a_2 - b_2 = 0. In the proof we used the fact that we are looking only for real solutions. However, it is possible that f_3 is zero at some complex solutions. The locus of all complex solutions may be much more complicated than that of the real solutions, hence the Groebner basis is much more time-consuming to compute.

3. Some comments on more general likelihood functions

In this section, we consider a generalization of the likelihood problem. We let the exponents in the likelihood function (1) be symbolic, and consider the function

    L(P) = prod_{i=1}^{n} p_ii^s * prod_{i != j} p_ij^t,    (23)

where P = (p_ij) is still an n x n matrix as before. The question is how the optimal solution depends on (s, t). Even for the case n = 4, it seems hard to find the optimal solutions. In the following, we describe some possible solutions in the form of conjectures.

Conjecture 11.
For given integers 0 < t < s, among the set of all nonnegative 4 x 4 matrices whose rank is at most 2 and whose entries sum to 1, the matrix

    P_1 = (1 / (8(s + 3t))) [s+t s+t 2t 2t; s+t s+t 2t 2t; 2t 2t s+t s+t; 2t 2t s+t s+t]

is a global maximum for the likelihood function L(P) in (23) when n = 4.

The results in Section 2.1 remain valid for this likelihood function. Equation (9) becomes

    sum_{j=1}^4 b_j / (1 + a_i b_j) + ((s - t)/t) * b_i / (1 + a_i b_i) = 0.
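The sign-pattern computations below can be checked numerically (a sketch of ours, working in the n^2-scaled coordinates of Section 2.1, where the two-block candidate has entries 1 + x on the diagonal blocks and 1 - x off them, x = a^2). A grid search confirms the stationary value x = (s - t)/(s + 3t) for the (+, +, -, -) pattern, and that this pattern beats the (+, +, +, -) pattern for a range of integer pairs 0 < t < s:

```python
import numpy as np

def logL_blocks(x, s, t):
    # pattern a = (c, c, -c, -c), x = c^2: entries 1+x carry total exponent
    # 4s + 4t (diagonal blocks), entries 1-x carry exponent 8t (cross blocks)
    return (4 * s + 4 * t) * np.log(1 + x) + 8 * t * np.log(1 - x)

def logL_three_one(x, s, t):
    # pattern a = (c, c, c, -3c), x = c^2: entries 1+x, 1+9x and 1-3x
    return (s * (3 * np.log(1 + x) + np.log(1 + 9 * x))
            + t * (6 * np.log(1 + x) + 6 * np.log(1 - 3 * x)))

xs = np.linspace(0.0, 0.99, 200_001)
errors, gaps = [], []
for s in range(2, 10):
    for t in range(1, s):
        best = xs[np.argmax(logL_blocks(xs, s, t))]
        errors.append(abs(best - (s - t) / (s + 3 * t)))
        gaps.append(logL_blocks((s - t) / (s + 3 * t), s, t)
                    - logL_three_one((s - t) / (3 * s + 9 * t), s, t))
```

This supports, but of course does not prove, the conjecture: it only compares the symmetric candidate families against each other at their stationary points.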
But the bounds in Lemma 8 would involve the fraction s/t and become complicated. A similar equation to (20) can be derived, but showing that the second factor is nonzero is difficult. Hopefully one may still prove a_2 = b_2; then a_3 = b_3 and a_4 = b_4 can be derived in a manner similar to Lemma 9, and so can Lemma 10. Finally, one can find the local extrema and need only compare them to obtain the global maximum.

In the case when the signs of (a_1, a_2, a_3, a_4) are (+, +, +, -), we have the equation

    (3a^2 + 1)((3s + 9t) a^2 - (s - t)) = 0.

Thus a^2 = (s - t)/(3s + 9t), and the (scaled) matrix would be

    P_2 = (1/(3s + 9t)) [4s+8t 4s+8t 4s+8t 12t; 4s+8t 4s+8t 4s+8t 12t; 4s+8t 4s+8t 4s+8t 12t; 12t 12t 12t 12s].

In the case when the signs are (+, +, -, -), we get a^2 = (s - t)/(s + 3t), and the (scaled) matrix would be

    P_1 = (1/(s + 3t)) [2s+2t 2s+2t 4t 4t; 2s+2t 2s+2t 4t 4t; 4t 4t 2s+2t 2s+2t; 4t 4t 2s+2t 2s+2t].    (24)

One can prove that L(P_2) < L(P_1) by some calculus technique, for example by taking the partial derivative of L(P_1)/L(P_2) with respect to s. In similar ways one can also show that L(P_3) < L(P_1) and L(P_4) < L(P_1), where P_3, P_4 are the corresponding matrices for the cases when the signs are (+, +, 0, -) and (+, 0, 0, -) respectively. Thus the matrix in (24) is a global maximum; rescaled so that its entries sum to 1, it is the matrix of Conjecture 11.

More generally, let (u)_{l1 x l2} be the l1 x l2 block with every entry equal to u, where l1, l2 in N and u > 0.

Conjecture 12. Let n >= 2 be even and 0 < t < s. Then the matrix

    P = (1 / (ns + (n-1)nt)) [ ((2s + (n-2)t)/n)_{n/2 x n/2}   (t)_{n/2 x n/2}             ]
                             [ (t)_{n/2 x n/2}                 ((2s + (n-2)t)/n)_{n/2 x n/2} ]

is a global maximum for L(P) in (23).
Conjecture 13. Let n >= 2 and 0 < s <= t. Then the matrix

    P = [ 2s/(n^2(s+t))   1/n^2   ...   1/n^2   2t/(n^2(s+t)) ]
        [ 1/n^2           1/n^2   ...   1/n^2   1/n^2         ]
        [ ...             ...     ...   ...     ...           ]
        [ 1/n^2           1/n^2   ...   1/n^2   1/n^2         ]
        [ 2t/(n^2(s+t))   1/n^2   ...   1/n^2   2s/(n^2(s+t)) ]

is a global maximum for L(P) in (23).

Acknowledgment

The authors were partially supported by the National Science Foundation and the National Security Agency. We would like to thank Bernd Sturmfels for his encouragement, and the anonymous referees for their helpful comments; in particular, one of them provided Maple code to us.

References

[1] Buchberger, B.: Groebner bases: an algorithmic method in polynomial ideal theory. In: Multidimensional Systems Theory, Reidel Publishing Company, Dordrecht-Boston-Lancaster (1985)

[2] Catanese, F., Hosten, S., Khetan, A., Sturmfels, B.: The maximum likelihood degree. Am. J. Math. 128, 671-697 (2006)

[3] Faugere, J.C.: A new efficient algorithm for computing Groebner bases (F4). J. Pure Appl. Algebra 139(1-3), 61-88 (1999)

[4] Faugere, J.C.: A new efficient algorithm for computing Groebner bases without reduction to zero (F5). In: ISSAC '02: Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation, New York, NY, USA, ACM, pp. 75-83 (2002)

[5] Fienberg, S., Hersh, P., Rinaldo, A., Zhou, Y.: Maximum likelihood estimation in latent class models for contingency table data. In: Algebraic and Geometric Methods in Statistics (eds. Gibilisco, P., et al.), Cambridge University Press (2009)

[6] Gao, S., Guan, Y., Volny IV, F.: A new incremental algorithm for computing Groebner bases. In: ISSAC '10: Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, Munich, Germany, ACM, pp. 13-19 (2010)

[7] Gao, S., Volny IV, F., Wang, S.: A new algorithm for computing Groebner bases. Submitted (2010)

[8] Henry, N.W., Lazarsfeld, P.F.: Latent Structure Analysis. Houghton Mifflin Company (1968)

[9] Hosten, S., Khetan, A., Sturmfels, B.: Solving the likelihood equations. Found. Comput. Math. 5, 389-407 (2005)
[10] Pachter, L., Sturmfels, B.: Algebraic Statistics for Computational Biology. Cambridge University Press (2005)

[11] Sturmfels, B.: Open problems in algebraic statistics. In: Emerging Applications of Algebraic Geometry (eds. Putinar, M. and Sullivant, S.), I.M.A. Volumes in Mathematics and its Applications, 149, Springer, New York (2008)

Mingfu Zhu
Center for Human Genome Variation
Duke University
Durham, NC 27708, USA
mingfu.zhu@duke.edu

Guangran Jiang
Department of Computer Science
Zhejiang University
Hangzhou, Zhejiang, 310027, China
rpggpr@zju.edu.cn

Shuhong Gao
Department of Mathematical Sciences
Clemson University
Clemson, SC 29634-0975, USA
sgao@clemson.edu
More informationThis article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution
More informationA Potpourri of Nonlinear Algebra
a a 2 a 3 + c c 2 c 3 =2, a a 3 b 2 + c c 3 d 2 =0, a 2 a 3 b + c 2 c 3 d =0, a 3 b b 2 + c 3 d d 2 = 4, a a 2 b 3 + c c 2 d 3 =0, a b 2 b 3 + c d 2 d 3 = 4, a 2 b b 3 + c 2 d 3 d =4, b b 2 b 3 + d d 2
More informationComputational Complexity of Tensor Problems
Computational Complexity of Tensor Problems Chris Hillar Redwood Center for Theoretical Neuroscience (UC Berkeley) Joint work with Lek-Heng Lim (with L-H Lim) Most tensor problems are NP-hard, Journal
More informationMinimizing Cubic and Homogeneous Polynomials over Integers in the Plane
Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane Alberto Del Pia Department of Industrial and Systems Engineering & Wisconsin Institutes for Discovery, University of Wisconsin-Madison
More informationThe decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t
The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu
More informationSome bounds for the spectral radius of the Hadamard product of matrices
Some bounds for the spectral radius of the Hadamard product of matrices Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam. June 1, 2004 Abstract Some bounds for the spectral radius of the Hadamard
More informationAlgebra II. A2.1.1 Recognize and graph various types of functions, including polynomial, rational, and algebraic functions.
Standard 1: Relations and Functions Students graph relations and functions and find zeros. They use function notation and combine functions by composition. They interpret functions in given situations.
More informationSOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS
SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS HAIBIN CHEN, GUOYIN LI, AND LIQUN QI Abstract. In this paper, we examine structured tensors which have sum-of-squares (SOS) tensor decomposition, and study
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationJónsson posets and unary Jónsson algebras
Jónsson posets and unary Jónsson algebras Keith A. Kearnes and Greg Oman Abstract. We show that if P is an infinite poset whose proper order ideals have cardinality strictly less than P, and κ is a cardinal
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationRATIONAL REALIZATION OF MAXIMUM EIGENVALUE MULTIPLICITY OF SYMMETRIC TREE SIGN PATTERNS. February 6, 2006
RATIONAL REALIZATION OF MAXIMUM EIGENVALUE MULTIPLICITY OF SYMMETRIC TREE SIGN PATTERNS ATOSHI CHOWDHURY, LESLIE HOGBEN, JUDE MELANCON, AND RANA MIKKELSON February 6, 006 Abstract. A sign pattern is a
More informationIntroduction to Algebraic Statistics
Introduction to Algebraic Statistics Seth Sullivant North Carolina State University January 5, 2017 Seth Sullivant (NCSU) Algebraic Statistics January 5, 2017 1 / 28 What is Algebraic Statistics? Observation
More informationAlgebra 2 (2006) Correlation of the ALEKS Course Algebra 2 to the California Content Standards for Algebra 2
Algebra 2 (2006) Correlation of the ALEKS Course Algebra 2 to the California Content Standards for Algebra 2 Algebra II - This discipline complements and expands the mathematical content and concepts of
More informationGRÖBNER BASES AND POLYNOMIAL EQUATIONS. 1. Introduction and preliminaries on Gróbner bases
GRÖBNER BASES AND POLYNOMIAL EQUATIONS J. K. VERMA 1. Introduction and preliminaries on Gróbner bases Let S = k[x 1, x 2,..., x n ] denote a polynomial ring over a field k where x 1, x 2,..., x n are indeterminates.
More informationA New Method to Find the Eigenvalues of Convex. Matrices with Application in Web Page Rating
Applied Mathematical Sciences, Vol. 4, 200, no. 9, 905-9 A New Method to Find the Eigenvalues of Convex Matrices with Application in Web Page Rating F. Soleymani Department of Mathematics, Islamic Azad
More informationWhat is Singular Learning Theory?
What is Singular Learning Theory? Shaowei Lin (UC Berkeley) shaowei@math.berkeley.edu 23 Sep 2011 McGill University Singular Learning Theory A statistical model is regular if it is identifiable and its
More informationMATH 117 LECTURE NOTES
MATH 117 LECTURE NOTES XIN ZHOU Abstract. This is the set of lecture notes for Math 117 during Fall quarter of 2017 at UC Santa Barbara. The lectures follow closely the textbook [1]. Contents 1. The set
More informationIntrinsic products and factorizations of matrices
Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences
More informationarxiv: v2 [math.st] 22 Oct 2016
Vol. 0 (0000) arxiv:1608.08789v2 [math.st] 22 Oct 2016 On the maximum likelihood degree of linear mixed models with two variance components Mariusz Grządziel Department of Mathematics, Wrocław University
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationOptimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications
Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,
More informationCS 6820 Fall 2014 Lectures, October 3-20, 2014
Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given
More informationMINIMAL NORMAL AND COMMUTING COMPLETIONS
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND
More informationEXAMPLES OF PROOFS BY INDUCTION
EXAMPLES OF PROOFS BY INDUCTION KEITH CONRAD 1. Introduction In this handout we illustrate proofs by induction from several areas of mathematics: linear algebra, polynomial algebra, and calculus. Becoming
More informationNew Directions in Multivariate Public Key Cryptography
New Directions in Shuhong Gao Joint with Ray Heindl Clemson University The 4th International Workshop on Finite Fields and Applications Beijing University, May 28-30, 2010. 1 Public Key Cryptography in
More informationChange of Ordering for Regular Chains in Positive Dimension
Change of Ordering for Regular Chains in Positive Dimension X. Dahan, X. Jin, M. Moreno Maza, É. Schost University of Western Ontario, London, Ontario, Canada. École polytechnique, 91128 Palaiseau, France.
More information8.6 Partial Fraction Decomposition
628 Systems of Equations and Matrices 8.6 Partial Fraction Decomposition This section uses systems of linear equations to rewrite rational functions in a form more palatable to Calculus students. In College
More informationTOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology
TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology DONALD M. DAVIS Abstract. We use ku-cohomology to determine lower bounds for the topological complexity of mod-2 e lens spaces. In the
More informationMatrix l-algebras over l-fields
PURE MATHEMATICS RESEARCH ARTICLE Matrix l-algebras over l-fields Jingjing M * Received: 05 January 2015 Accepted: 11 May 2015 Published: 15 June 2015 *Corresponding author: Jingjing Ma, Department of
More information8.3 Partial Fraction Decomposition
8.3 partial fraction decomposition 575 8.3 Partial Fraction Decomposition Rational functions (polynomials divided by polynomials) and their integrals play important roles in mathematics and applications,
More informationEigenvalues and Eigenvectors
Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors
More informationSolving Homogeneous Systems with Sub-matrices
Pure Mathematical Sciences, Vol 7, 218, no 1, 11-18 HIKARI Ltd, wwwm-hikaricom https://doiorg/112988/pms218843 Solving Homogeneous Systems with Sub-matrices Massoud Malek Mathematics, California State
More informationComputer Vision Group Prof. Daniel Cremers. 6. Mixture Models and Expectation-Maximization
Prof. Daniel Cremers 6. Mixture Models and Expectation-Maximization Motivation Often the introduction of latent (unobserved) random variables into a model can help to express complex (marginal) distributions
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More information2. Matrix Algebra and Random Vectors
2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns
More informationComputing a Lower Bound for the Canonical Height on Elliptic Curves over Q
Computing a Lower Bound for the Canonical Height on Elliptic Curves over Q John Cremona 1 and Samir Siksek 2 1 School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7
More informationON EXPLICIT FORMULAE AND LINEAR RECURRENT SEQUENCES. 1. Introduction
ON EXPLICIT FORMULAE AND LINEAR RECURRENT SEQUENCES R. EULER and L. H. GALLARDO Abstract. We notice that some recent explicit results about linear recurrent sequences over a ring R with 1 were already
More informationPERIODIC POINTS OF THE FAMILY OF TENT MAPS
PERIODIC POINTS OF THE FAMILY OF TENT MAPS ROBERTO HASFURA-B. AND PHILLIP LYNCH 1. INTRODUCTION. Of interest in this article is the dynamical behavior of the one-parameter family of maps T (x) = (1/2 x
More informationRational Univariate Reduction via Toric Resultants
Rational Univariate Reduction via Toric Resultants Koji Ouchi 1,2 John Keyser 1 Department of Computer Science, 3112 Texas A&M University, College Station, TX 77843-3112, USA Abstract We describe algorithms
More informationThe initial involution patterns of permutations
The initial involution patterns of permutations Dongsu Kim Department of Mathematics Korea Advanced Institute of Science and Technology Daejeon 305-701, Korea dskim@math.kaist.ac.kr and Jang Soo Kim Department
More informationJUST THE MATHS UNIT NUMBER DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) A.J.Hobson
JUST THE MATHS UNIT NUMBER.5 DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) by A.J.Hobson.5. Maclaurin s series.5. Standard series.5.3 Taylor s series.5.4 Exercises.5.5 Answers to exercises
More information8. Limit Laws. lim(f g)(x) = lim f(x) lim g(x), (x) = lim x a f(x) g lim x a g(x)
8. Limit Laws 8.1. Basic Limit Laws. If f and g are two functions and we know the it of each of them at a given point a, then we can easily compute the it at a of their sum, difference, product, constant
More informationThe following two problems were posed by de Caen [4] (see also [6]):
BINARY RANKS AND BINARY FACTORIZATIONS OF NONNEGATIVE INTEGER MATRICES JIN ZHONG Abstract A matrix is binary if each of its entries is either or The binary rank of a nonnegative integer matrix A is the
More informationPermutation transformations of tensors with an application
DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn
More informationA Questionable Distance-Regular Graph
A Questionable Distance-Regular Graph Rebecca Ross Abstract In this paper, we introduce distance-regular graphs and develop the intersection algebra for these graphs which is based upon its intersection
More informationMath 164-1: Optimization Instructor: Alpár R. Mészáros
Math 164-1: Optimization Instructor: Alpár R. Mészáros First Midterm, April 20, 2016 Name (use a pen): Student ID (use a pen): Signature (use a pen): Rules: Duration of the exam: 50 minutes. By writing
More informationOn the mean connected induced subgraph order of cographs
AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 71(1) (018), Pages 161 183 On the mean connected induced subgraph order of cographs Matthew E Kroeker Lucas Mol Ortrud R Oellermann University of Winnipeg Winnipeg,
More informationChapter 2 Formulas and Definitions:
Chapter 2 Formulas and Definitions: (from 2.1) Definition of Polynomial Function: Let n be a nonnegative integer and let a n,a n 1,...,a 2,a 1,a 0 be real numbers with a n 0. The function given by f (x)
More information1 Data Arrays and Decompositions
1 Data Arrays and Decompositions 1.1 Variance Matrices and Eigenstructure Consider a p p positive definite and symmetric matrix V - a model parameter or a sample variance matrix. The eigenstructure is
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to
More informationarxiv: v1 [math.ra] 11 Aug 2014
Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091
More informationPrincipal Component Analysis
Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used
More informationAlgebraic Equations for Blocking Probabilities in Asymmetric Networks
Algebraic Equations for Blocking Probabilities in Asymmetric Networks Ian Dinwoodie, Emily Gamundi and Ed Mosteig Technical Report #2004-15 April 6, 2004 This material was based upon work supported by
More informationIntroduction to Series and Sequences Math 121 Calculus II Spring 2015
Introduction to Series and Sequences Math Calculus II Spring 05 The goal. The main purpose of our study of series and sequences is to understand power series. A power series is like a polynomial of infinite
More informationLecture 5. 1 Goermans-Williamson Algorithm for the maxcut problem
Math 280 Geometric and Algebraic Ideas in Optimization April 26, 2010 Lecture 5 Lecturer: Jesús A De Loera Scribe: Huy-Dung Han, Fabio Lapiccirella 1 Goermans-Williamson Algorithm for the maxcut problem
More informationSome Results Concerning Uniqueness of Triangle Sequences
Some Results Concerning Uniqueness of Triangle Sequences T. Cheslack-Postava A. Diesl M. Lepinski A. Schuyler August 12 1999 Abstract In this paper we will begin by reviewing the triangle iteration. We
More informationThe F 4 Algorithm. Dylan Peifer. 9 May Cornell University
The F 4 Algorithm Dylan Peifer Cornell University 9 May 2017 Gröbner Bases History Gröbner bases were introduced in 1965 in the PhD thesis of Bruno Buchberger under Wolfgang Gröbner. Buchberger s algorithm
More informationx 9 or x > 10 Name: Class: Date: 1 How many natural numbers are between 1.5 and 4.5 on the number line?
1 How many natural numbers are between 1.5 and 4.5 on the number line? 2 How many composite numbers are between 7 and 13 on the number line? 3 How many prime numbers are between 7 and 20 on the number
More informationConflict-Free Colorings of Rectangles Ranges
Conflict-Free Colorings of Rectangles Ranges Khaled Elbassioni Nabil H. Mustafa Max-Planck-Institut für Informatik, Saarbrücken, Germany felbassio, nmustafag@mpi-sb.mpg.de Abstract. Given the range space
More informationOn Boolean functions which are bent and negabent
On Boolean functions which are bent and negabent Matthew G. Parker 1 and Alexander Pott 2 1 The Selmer Center, Department of Informatics, University of Bergen, N-5020 Bergen, Norway 2 Institute for Algebra
More informationChapter 9: Systems of Equations and Inequalities
Chapter 9: Systems of Equations and Inequalities 9. Systems of Equations Solve the system of equations below. By this we mean, find pair(s) of numbers (x, y) (if possible) that satisfy both equations.
More informationMATH 118, LECTURES 27 & 28: TAYLOR SERIES
MATH 8, LECTURES 7 & 8: TAYLOR SERIES Taylor Series Suppose we know that the power series a n (x c) n converges on some interval c R < x < c + R to the function f(x). That is to say, we have f(x) = a 0
More informationarxiv:math/ v5 [math.ac] 17 Sep 2009
On the elementary symmetric functions of a sum of matrices R. S. Costas-Santos arxiv:math/0612464v5 [math.ac] 17 Sep 2009 September 17, 2009 Abstract Often in mathematics it is useful to summarize a multivariate
More informationCHINO VALLEY UNIFIED SCHOOL DISTRICT INSTRUCTIONAL GUIDE ALGEBRA II
CHINO VALLEY UNIFIED SCHOOL DISTRICT INSTRUCTIONAL GUIDE ALGEBRA II Course Number 5116 Department Mathematics Qualification Guidelines Successful completion of both semesters of Algebra 1 or Algebra 1
More informationLinear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4
Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix
More informationExecutive Assessment. Executive Assessment Math Review. Section 1.0, Arithmetic, includes the following topics:
Executive Assessment Math Review Although the following provides a review of some of the mathematical concepts of arithmetic and algebra, it is not intended to be a textbook. You should use this chapter
More informationHIGHER-ORDER DIFFERENCES AND HIGHER-ORDER PARTIAL SUMS OF EULER S PARTITION FUNCTION
ISSN 2066-6594 Ann Acad Rom Sci Ser Math Appl Vol 10, No 1/2018 HIGHER-ORDER DIFFERENCES AND HIGHER-ORDER PARTIAL SUMS OF EULER S PARTITION FUNCTION Mircea Merca Dedicated to Professor Mihail Megan on
More informationNotes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.
Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where
More information