Exponential families of mixed Poisson distributions
A. Ferrari (LUAN, Université de Nice Sophia-Antipolis, Nice, France), G. Letac (LSP, Université Paul Sabatier, Toulouse, France), J.-Y. Tourneret (IRIT/ENSEEIHT/TESA, Toulouse, France)

Abstract. If $I = (I_1, \dots, I_d)$ is a random variable on $[0,\infty)^d$ with distribution $\mu(d\lambda_1, \dots, d\lambda_d)$, the mixed Poisson distribution $\mathrm{MP}(\mu)$ on $\mathbb{N}^d$ is the distribution of $(N_1(I_1), \dots, N_d(I_d))$, where $N_1, \dots, N_d$ are ordinary independent Poisson processes which are also independent of $I$. The paper proves that if $F$ is a natural exponential family on $[0,\infty)^d$, then $\mathrm{MP}(F)$ is also a natural exponential family if and only if a generating probability of $F$ is the distribution of $v_0 + v_1 Y_1 + \dots + v_q Y_q$ for some $q \le d$, for some vectors $v_0, \dots, v_q$ of $[0,\infty)^d$ with disjoint supports, and for independent standard real gamma random variables $Y_1, \dots, Y_q$.

Keywords: Natural exponential families; Multivariate gamma; Overdispersion

1. Introduction

Consider the Poisson distribution with parameter $\lambda > 0$ defined by
$$P(\lambda)(dx) = \sum_{n=0}^{\infty} e^{-\lambda}\,\frac{\lambda^n}{n!}\,\delta_n(dx).$$
If we randomize the parameter $\lambda$ by some probability $\mu(d\lambda)$ on $(0,\infty)$, we get a new probability $\mathrm{MP}(\mu)$ on the set $\mathbb{N}$ of nonnegative integers, defined by $\mathrm{MP}(\mu) = \int_0^\infty P(\lambda)\,\mu(d\lambda)$.

(Corresponding author: G. Letac, gerard.letac@tele2.fr.)

We have the
identifiability property: $\mathrm{MP}(\mu) = \mathrm{MP}(\mu')$ if and only if $\mu = \mu'$: just compute the generating function
$$f_{\mathrm{MP}(\mu)}(z) = \sum_{n=0}^{\infty} z^n\,\mathrm{MP}(\mu)(\{n\})$$
of $\mathrm{MP}(\mu)$ and link it to the Laplace transform $L_\mu(\theta)$ of $\mu$ by $f_{\mathrm{MP}(\mu)}(z) = L_\mu(z-1)$. One sees $\mathrm{MP}(\mu)$ as the distribution of $N(I)$, where $t \mapsto N(t)$ is a standard Poisson process on $\mathbb{N}$ which is independent of the random variable $I$ with distribution $\mu$. One reason for the interest in these mixed Poisson distributions lies in the fact that they are overdispersed, in the sense that their variance is bigger than their mean. However, it should be pointed out that one easily constructs an overdispersed distribution $\nu$ concentrated on $\mathbb{N}$ such that no $\mu$ with $\nu = \mathrm{MP}(\mu)$ can possibly exist. An example is $\sum_{n=0}^{\infty} \nu_n z^n = (1 + z + z^2)/(6 - 3z)$. There is an abundant literature on the topic, for which Grandell [5] offers a good synthesis and references.

Suppose now that $\mu$ belongs to a natural exponential family (NEF) $F$ concentrated on $[0,\infty)$. Denote by $V_F(m)$ its variance function, defined on the domain of the means $M_F \subset (0,\infty)$ in the sense initiated by Morris [10]. We consider the model $\mathrm{MP}(F) = \{\mathrm{MP}(\mu);\ \mu \in F\}$. Let us start with the following simple observation: if $\mu \in F$ has mean $m$, then the variance of $\mathrm{MP}(\mu)$ is $m + V_F(m)$. It is tantalizing to think that we have created a new natural exponential family $G$ with variance function $V_G$ such that $M_G \supset M_F$ and such that $V_G(m) = m + V_F(m)$ on $M_F$. This is false: if $p > 1$ is not an integer, consider the NEF $F$ such that $M_F = (1,\infty)$ and $V_F(m) = (m-1)^p$. Such a NEF does exist, from Bar-Lev and Enis [1] or Jorgensen [7]. Then one sees that $V_G(m) = m + (m-1)^p$ is not a variance function: from Theorems 3.1 and 3.2 of Letac and Mora [8] one should have $M_G = (1,\infty)$, and thus for $m_0 > 1$ we would have the contradiction
$$\int_{m_0}^{\infty} \frac{dm}{m + (m-1)^p} = \infty.$$
Furthermore, for some NEFs $F$ the function $m + V_F(m)$ can actually be the variance function of some NEF $G$ with no relation either with $\mathrm{MP}(F)$ or with $F$.
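The overdispersion property mentioned above is the law of total variance in disguise: if $N \mid I \sim$ Poisson$(I)$, then $\mathrm{Var}\,N = E(I) + \mathrm{Var}(I) > E(N)$. This can be checked by simulation; the following sketch uses NumPy with an arbitrary gamma mixing distribution chosen purely for illustration (not a parameter choice from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixing distribution: gamma with shape 2 and scale 1.5 (illustrative),
# so E[I] = 3 and Var[I] = 4.5.
shape, scale, n = 2.0, 1.5, 500_000
I = rng.gamma(shape, scale, size=n)

# Mixed Poisson draw: N | I ~ Poisson(I).
N = rng.poisson(I)

# Law of total variance: Var N = E[I] + Var[I] = 7.5 > E[N] = 3.
print(N.mean(), N.var())
```

With these illustrative parameters the sample mean should be close to 3 and the sample variance close to 7.5, exhibiting the overdispersion.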
A provocative example is $V_F(m) = m^p$, where $p > 1$ is not an integer and $M_F = (0,\infty)$. In this case $V_G(m) = m + m^p$ is the variance function of a NEF such that $M_G = (0,\infty)$, but $G$ is concentrated on the additive semigroup $\mathbb{N} + p\mathbb{N}$. For checking this it is enough to compute the corresponding cumulant transform and to observe that the elements of $G$ must be infinitely divisible with a discrete Lévy measure concentrated on $\mathbb{N} + p\mathbb{N}$. Finally, Bent Jorgensen [7] offers a different construction of mixed Poisson distributions from a NEF (see the Remark in Section 3 below for a description of Jorgensen's construction).

Thus, a natural question is: if the NEF $G$ exists, do we have $G = \mathrm{MP}(F)$? In Section 3, Theorem 1 says: yes, if and only if $F$ is a gamma family, i.e. when there exists a number $p > 0$ such that $V_F(m) = m^2/p$. In this case $G$ is a negative binomial NEF. Section 4 extends the question to $\mathbb{N}^d$. We randomize $(\lambda_1, \dots, \lambda_d)$ in the product $P(\lambda_1)(dx_1) \otimes \dots \otimes P(\lambda_d)(dx_d)$ by the probability $\mu(d\lambda)$ on $[0,\infty)^d$ and consider the probability on $\mathbb{N}^d$ defined by
$$\mathrm{MP}(\mu) = \int \dots \int P(\lambda_1) \otimes \dots \otimes P(\lambda_d)\,\mu(d\lambda_1, \dots, d\lambda_d).$$
We get a similar characterization (Theorem 2), which is described in the abstract above. The multivariate distributions which occur in Theorem 2 have been recently isolated and characterized
by Konstancja Bobecka and Jacek Wesołowski [2] (details are given in Section 4). The proof of Theorem 2 needs some care, and the particular case $d = 1$ of Theorem 1 is a preparation for $d \ge 2$. Section 2 recalls some facts about NEFs.

This study was motivated by statistical optics. Mixed Poisson distributions are commonly used to model data recorded from low-flux objects, or with short exposure times, using photocounting cameras. This physical model arises from the semiclassical theory of statistical optics described in Goodman [4]. In this theory, the classical theory of propagation is used up to the camera, leading to a high-flux image. Conditionally on this image, the number of photons counted at the pixels is distributed according to a Poisson distribution whose mean is the high-flux intensity. A common problem, for example in astrophysics, is to estimate parameters of the wavefront (the mixing distribution) from photocounts recorded on a set of pixels. A description is found in Ferrari et al. [3]. A general assumption is that the vector of complex amplitudes associated with adjacent pixels of the image is a zero-mean Gaussian vector, which implies that the vector of the corresponding intensities is distributed according to a multivariate gamma distribution. An important question is to derive conditions ensuring that the associated mixed Poisson distribution belongs to a NEF. This result is important, since the computational complexity of most estimation or detection methods is usually reduced when applied to distributions belonging to a NEF.

2. NEF on $\mathbb{R}$ and $\mathbb{R}^d$

This section describes the notation and classical facts about natural exponential families, mainly taken from Morris [10] and Letac and Mora [8]. Denote by
$$L_\nu(\theta) = \int_{\mathbb{R}^d} e^{\langle \theta, \lambda \rangle}\,\nu(d\lambda)$$
the Laplace transform, defined for $\theta \in \mathbb{R}^d$, of a positive measure $\nu$ not concentrated on any affine hyperplane.
The Hölder inequality proves that the set $D(\nu)$ of $\theta \in \mathbb{R}^d$ such that $L_\nu(\theta) < \infty$ is a convex set, and that the cumulant function $k_\nu = \log L_\nu$ is a strictly convex function on this set. Denote by $\Theta(\nu)$ the interior of $D(\nu)$ and assume that $\Theta(\nu)$ is not empty. Then $k_\nu$ is real analytic on $\Theta(\nu)$. The set $F(\nu)$ of probabilities
$$\nu_\theta(d\lambda) = e^{\langle \theta, \lambda \rangle - k_\nu(\theta)}\,\nu(d\lambda),$$
where $\theta$ runs through $\Theta(\nu)$, is called the NEF with generating measure $\nu$. Note that $F(\nu) = F(\nu')$ does not imply $\nu = \nu'$, but implies only the existence of some $a \in \mathbb{R}^d$ and $b \in \mathbb{R}$ such that $\nu(d\lambda) = e^{\langle a, \lambda \rangle + b}\,\nu'(d\lambda)$. Thus, a member $\mu$ of the NEF $F(\nu)$ can always be taken as a generating measure. However, some generating measures are not necessarily probabilities, and can even be unbounded. We mention also that $\theta \mapsto \nu_\theta(d\lambda)$ is called a canonical parametrization of the NEF. Other parametrizations, of the type $t \mapsto \nu_{\alpha(t)}(d\lambda) = e^{\langle \alpha(t), \lambda \rangle + \beta(t)}\,\nu(d\lambda)$ with $\beta(t) = -k_\nu(\alpha(t))$, could be considered. Since $\theta \mapsto k_\nu(\theta)$ is a strictly convex function on the open set $\Theta(\nu)$, the map
$$\theta \mapsto m = k_\nu'(\theta) = \int_{\mathbb{R}^d} \lambda\,\nu_\theta(d\lambda)$$
from $\Theta(\nu)$ to $\mathbb{R}^d$ is one to one. The open set $M_F = k_\nu'(\Theta(\nu))$ is called the domain of the means of $F$. Denote by $m \mapsto \theta = \psi_\nu(m)$ the inverse function of $k_\nu'$ from $M_F$ onto $\Theta(\nu)$. The Hessian
matrix $k_\nu''(\theta)$ is the covariance matrix of the probability $\nu_\theta(d\lambda)$. Denoting $V_F(m) = k_\nu''(\psi_\nu(m))$, the map $m \mapsto V_F(m)$ from $M_F$ to the positive definite symmetric matrices of order $d$ is called the variance function of $F$. It characterizes $F$, since its knowledge leads, by integration of a differential equation, to the knowledge of the cumulant function of a generating measure of $F$.

3. The case of real exponential families

For $d = 1$, $p > 0$ and $a > 0$, the gamma distribution with shape parameter $p$ and scale parameter $a$ is
$$\gamma_{p,a}(d\lambda) = e^{-\lambda/a}\,\frac{\lambda^{p-1}}{a^p\,\Gamma(p)}\,\mathbf{1}_{(0,\infty)}(\lambda)\,d\lambda. \qquad (1)$$
This is a member of the NEF $F$ generated by $\nu(d\lambda) = \frac{\lambda^{p-1}}{\Gamma(p)}\,\mathbf{1}_{(0,\infty)}(\lambda)\,d\lambda$. The domain of the means of $F$ is $M_F = (0,\infty)$ and its variance function is $V_F(m) = m^2/p$. The Laplace transform of $\gamma_{p,a}$ is $L(z) = (1 - az)^{-p}$, with a suitable definition of this analytic function in $\{z \in \mathbb{C};\ \mathrm{Re}\,z < 1/a\}$: we have simply to impose that it is real on the real axis. We see that the generating function of $\mathrm{MP}(\gamma_{p,a})$ is
$$(1 + a - az)^{-p} = \Big(\frac{1-q}{1-qz}\Big)^{p}$$
with the notation $q = a/(1+a) \in (0,1)$. Thus, if $N \sim \mathrm{MP}(\gamma_{p,a})$, then with the Pochhammer notation $(p)_0 = 1$ and $(p)_{k+1} = (p+k)(p)_k$ we have
$$\Pr(N = k) = \frac{1}{k!}\,(p)_k\,(1-q)^p\,q^k = \frac{1}{k!}\,(p)_k\,\frac{a^k}{(1+a)^{p+k}}, \qquad (2)$$
a negative binomial distribution. This is a member of the NEF $G$ of negative binomial distributions generated by $\sum_{k=0}^{\infty} \frac{1}{k!}(p)_k\,\delta_k$. The domain of the means of $G$ is $M_G = (0,\infty)$ and its variance function is $V_G(m) = m + m^2/p$. Thus, both $F$ and $G = \mathrm{MP}(F)$ are NEFs in this particular example. We show that this is the only case:

Theorem 1. If the image of the NEF $F(\nu)$ on $[0,\infty)$ by $\mu \mapsto \mathrm{MP}(\mu)$ is still a NEF, then there exists $p > 0$ such that $F(\nu)$ is the family of gamma distributions with fixed shape parameter $p$.

Proof. Denote $L_\nu(\theta) = \int e^{\lambda\theta}\,\nu(d\lambda)$ for $\theta \in \Theta$, where $\Theta$ is the interior of the convergence domain of $L_\nu$. Note that $\Theta$ is either $\mathbb{R}$ or some half line $(-\infty, a)$. Suppose that the image of $F(\nu)$ by $\mu \mapsto \mathrm{MP}(\mu)$ is a NEF on $\mathbb{N}$ generated by some measure $\sum_{n=0}^{\infty} p_n\,\delta_n$.
Consequently, there exist two functions $\alpha$ and $\beta$ defined on $\Theta + 1$ such that, for all $n$,
$$\int_0^\infty e^{\lambda(\theta-1)}\,\frac{\lambda^n}{n!}\,\nu(d\lambda) = p_n\,e^{n\alpha(\theta)+\beta(\theta)}, \qquad (3)$$
which can be rewritten
$$L_\nu^{(n)}(\theta-1) = n!\,p_n\,e^{n\alpha(\theta)+\beta(\theta)}. \qquad (4)$$
Recall that $\nu$ is concentrated on $(0,\infty)$, and thus that $L_\nu$ is not a constant. Being a Laplace transform, the function $L_\nu$ cannot be a polynomial, and $L_\nu^{(n)}$ cannot be identically $0$. This implies $p_n \neq 0$ for
all $n$. Eq. (4) shows that $\alpha(\theta)$ and $\beta(\theta)$ are real-analytic functions on the interval $\Theta + 1$. Indeed, $\theta \mapsto L_\nu(\theta-1)$ is analytic in the half complex plane $\Theta + 1 + i\mathbb{R}$, as is its $n$th derivative $L_\nu^{(n)}(\theta-1)$. Furthermore, since $L_\nu^{(n)}(\theta-1)$ is positive on $\Theta + 1$ (because $p_n > 0$), its logarithm is real-analytic. Consequently, $n\alpha(\theta) + \beta(\theta)$ and $(n+1)\alpha(\theta) + \beta(\theta)$ are real-analytic on $\Theta + 1$, which implies by linear combination that $\alpha(\theta)$ and $\beta(\theta)$ are real-analytic on $\Theta + 1$. This proves the existence of $\alpha'(\theta)$ and $\beta'(\theta)$. By taking the logarithms of both sides of (4) and differentiating with respect to $\theta$, we get
$$\frac{L_\nu^{(n+1)}(\theta-1)}{L_\nu^{(n)}(\theta-1)} = n\alpha'(\theta) + \beta'(\theta). \qquad (5)$$
We now fix $\theta_0$. Assume first that $a = \alpha'(\theta_0) \neq 0$ and denote $p = \beta'(\theta_0)/\alpha'(\theta_0)$. Eq. (5) can be written $L_\nu^{(n+1)}(\theta_0-1) = a(p+n)\,L_\nu^{(n)}(\theta_0-1)$, hence $L_\nu^{(n)}(\theta_0-1) = L_\nu(\theta_0-1)\,(p)_n\,a^n$. Since $\nu$ is concentrated on $(0,\infty)$, we have $L_\nu'(\theta_0-1) = pa\,L_\nu(\theta_0-1) > 0$; since the variance $k_\nu''(\theta_0-1) = pa^2$ is positive, we have $p > 0$ and thus $a > 0$. The Taylor formula applied to the analytic function $L_\nu$ for small values of $h$ can be written as follows:
$$L_\nu(\theta_0-1+h) = L_\nu(\theta_0-1)\sum_{n=0}^{\infty} (p)_n\,\frac{(ah)^n}{n!} = L_\nu(\theta_0-1)\,(1-ah)^{-p}.$$
The result $L_\nu(\theta_0-1+h) = L_\nu(\theta_0-1)(1-ah)^{-p}$ is valid for any $h \in (-\infty, 1/a)$, since the Laplace transform is an analytic function. The right-hand side of this expression is, up to the factor $L_\nu(\theta_0-1)$, the Laplace transform of the gamma distribution $\gamma_{p,a}$. Moreover, the Laplace transform of $\nu_{\theta_0-1}$ is
$$\int e^{\lambda h}\,\nu_{\theta_0-1}(d\lambda) = \int e^{\lambda(h+\theta_0-1)}\,e^{-k_\nu(\theta_0-1)}\,\nu(d\lambda) = \frac{L_\nu(\theta_0-1+h)}{L_\nu(\theta_0-1)},$$
which shows that $\nu_{\theta_0-1} = \gamma_{p,a}$. In other words, the exponential family generating $\{\mathrm{MP}(\nu_\theta);\ \theta \in \Theta\}$ is the family of gamma distributions with fixed shape parameter $p$.

If $\alpha'(\theta_0) = 0$, (5) yields $L_\nu^{(n+1)}(\theta_0-1)/L_\nu^{(n)}(\theta_0-1) = \beta'(\theta_0)$, which leads to $L_\nu(\theta_0-1+h)/L_\nu(\theta_0-1) = e^{\beta'(\theta_0)h}$. This is the noninteresting case where $\nu$ is a Dirac measure concentrated at the point $\beta'(\theta_0)$. Our definition of NEF excludes this, and the proof of Theorem 1 is complete.

Remark. For clarification it should be pointed out that Jorgensen [7] mentions a different object.
Given a NEF $F = F(\nu)$ on $(0,\infty)$ and taking the number $\psi$ in a suitable interval, Bent Jorgensen considers a NEF $H_\psi$ on $\mathbb{N}$ with a cumulant function of the form $\theta \mapsto k_\nu(\psi + e^\theta)$. In this case, introducing the reciprocal function $h_\psi(m)$ of the map $z \mapsto z\,k_\nu'(\psi+z)$, he proves that
$$V_{H_\psi}(m) = m + V_F\!\Big(\frac{m}{h_\psi(m)}\Big)\,h_\psi(m)^2.$$
The Jorgensen construction seems motivated by the particular case $V_F(m) = m^p$ considered by Hougaard et al. [6]. This family $H_\psi$ is obtained from $F$ by a Poisson mixing
process, but in a slightly complicated way. For describing it, adopt the following notation: if $\mu$ is a measure on $\mathbb{R}$ and $c > 0$, denote by $d_c\mu$ the image of the measure $\mu$ by the dilation $x \mapsto cx$. Then $H_\psi$ is the set of all $\mathrm{MP}(d_c\,\nu_{\psi+c})$ such that $\psi + c$ is in $\Theta(\nu)$. For instance, if $F$ is a gamma family generated by $\nu(d\lambda) = \frac{\lambda^{p-1}}{\Gamma(p)}\,\mathbf{1}_{(0,\infty)}(\lambda)\,d\lambda$, a simple calculation gives that $H_\psi$ is a negative binomial NEF with variance function $V_{H_\psi}(m) = m + \frac{1}{p}\,m^2$.

4. The case of multivariate exponential families

A line multivariate gamma distribution governed by a nonzero vector $(a_1, \dots, a_d)$ in $[0,\infty)^d$ and the parameter $p > 0$ is the distribution of the random variable $X = (a_1 Y, \dots, a_d Y)$, where $Y$ is a real random variable with distribution $\gamma_{p,1}$. Its Laplace transform is
$$L_\mu(\theta) = E(e^{\langle \theta, X \rangle}) = (1 - a_1\theta_1 - \dots - a_d\theta_d)^{-p}.$$
Its image by $\mu \mapsto \mathrm{MP}(\mu)$ is the negative multinomial distribution on $\mathbb{N}^d$ with generating function
$$E(z_1^{N_1} \cdots z_d^{N_d}) = c^p\,\big(1 - c\,(a_1 z_1 + \dots + a_d z_d)\big)^{-p}, \qquad c = (1 + a_1 + \dots + a_d)^{-1}.$$
If some of the $a_i$'s are zero (say $a_i > 0$ if and only if $1 \le i \le m$), then the image of the half line $[0,\infty)$ by $y \mapsto (a_1 y, \dots, a_m y, 0, \dots, 0)$ is concentrated on $[0,\infty)^m \times \{0\}^{d-m}$, and the corresponding distribution is concentrated on $\mathbb{N}^m \times \{0\}^{d-m}$.

For an integer $q \ge 1$, consider $q+1$ subsets of $\{1, \dots, d\}$ denoted $T_0, T_1, \dots, T_q$ and such that $\{1, \dots, d\} = \bigcup_{m=0}^{q} T_m$, with $T_m \neq \emptyset$ for $m \ge 1$ and $T_i \cap T_j = \emptyset$ for $i \neq j$. Consider a product of line multivariate gamma distributions concentrated on $[0,\infty)^{T_m}$ for $m = 1, \dots, q$, and a Dirac mass on $[0,\infty)^{T_0}$. More specifically, consider a distribution $\nu$ on $[0,\infty)^d$ such that there exist nonnegative numbers $a_1, \dots, a_d$ and positive numbers $p_1, \dots, p_q$ such that the Laplace transform of $\nu$ is
$$L_\nu(\theta) = e^{\sum_{k \in T_0} a_k\theta_k}\ \prod_{m=1}^{q} \Big(1 - \sum_{k \in T_m} a_k\theta_k\Big)^{-p_m}. \qquad (6)$$
Another presentation of these distributions can be helpful. If $v = (v^{(1)}, \dots, v^{(d)})$ is in $[0,\infty)^d$, let us call support of $v$ the set of $i \in \{1, \dots, d\}$ such that $v^{(i)} > 0$. If we now define $v_m^{(i)} = a_i\,\mathbf{1}_{T_m}(i)$, then the $q+1$ vectors $v_0, \dots, v_q$ of $\mathbb{R}^d$ have disjoint supports.
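The moment structure of a line multivariate gamma and of its mixed Poisson image can be checked by simulation: since $X = (a_1 Y, \dots, a_d Y)$ with $Y \sim \gamma_{p,1}$, one gets $E(N_i) = p\,a_i$ and, for $i \neq j$, $\mathrm{Cov}(N_i, N_j) = \mathrm{Cov}(X_i, X_j) = p\,a_i a_j$. A sketch with NumPy; the vector $a$ and the shape $p$ below are illustrative choices, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Line multivariate gamma: X = (a_1 Y, ..., a_d Y) with Y ~ gamma(p, 1).
a = np.array([0.5, 1.0, 2.0])      # illustrative direction vector
p, n = 3.0, 400_000
Y = rng.gamma(p, 1.0, size=n)
X = np.outer(Y, a)                 # each row is one draw of X

# Mixed Poisson image: N_i | X ~ Poisson(X_i), componentwise.
N = rng.poisson(X)

# Negative multinomial moments:
#   E[N_i] = p a_i  and  Cov(N_i, N_j) = p a_i a_j for i != j,
# since the shared Y correlates the components.
print(N.mean(axis=0))              # close to p * a
print(np.cov(N.T)[0, 1])           # close to p * a[0] * a[1]
```

The positive off-diagonal covariance reflects the common gamma factor $Y$; with disjoint blocks as in (6), components in different blocks would be uncorrelated.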
Introduce the random variables $Y_m$ with distribution $\gamma_{p_m,1}$ such that $(Y_1, \dots, Y_q)$ are independent. Then $\nu$ defined by (6) is the distribution of $v_0 + v_1 Y_1 + \dots + v_q Y_q$. Conversely, if $v_0, \dots, v_q$ have disjoint supports, the distribution of $v_0 + v_1 Y_1 + \dots + v_q Y_q$ has the form (6). These distributions (at least for $v_0 = 0$) have been characterized in Bobecka and Wesołowski [2] as follows: suppose that $X$ and $X'$ are independent random variables on $(0,\infty)^d$, and write
$$\frac{X}{X+X'} = \Big(\frac{X_1}{X_1+X_1'}, \dots, \frac{X_d}{X_d+X_d'}\Big).$$
Then they obtain an elegant multivariate version of a theorem due to Lukacs [9]: the random variables $X + X'$ and $\frac{X}{X+X'}$ are independent if and only if there exists a sequence of nonzero
vectors $(v_1, \dots, v_q)$ in $[0,\infty)^d$ with disjoint supports, and independent standard gamma variables $(Y_1, \dots, Y_q, Y_1', \dots, Y_q')$, such that $X \sim v_1 Y_1 + \dots + v_q Y_q$ and $X' \sim v_1 Y_1' + \dots + v_q Y_q'$.

The real domain $D(\nu)$ of existence of the Laplace transform (6) is open, and is the set $\Theta$ of $\theta$ such that $1 - \sum_{k \in T_m} a_k\theta_k > 0$ for all $m = 1, \dots, q$. For $\theta \in \Theta$, the element $\nu_\theta$ of the natural exponential family $F$ generated by $\nu$ has the following Laplace transform:
$$h \mapsto \frac{L_\nu(\theta+h)}{L_\nu(\theta)} = e^{\sum_{k \in T_0} a_k h_k}\ \prod_{m=1}^{q} \Big(1 - r_m \sum_{k \in T_m} a_k h_k\Big)^{-p_m}, \quad \text{where } r_m = r_m(\theta) = \Big(1 - \sum_{k \in T_m} a_k\theta_k\Big)^{-1}.$$
Note that $\nu_\theta$ is also a product of line multivariate gamma distributions. The reader can verify that the family $\mathrm{MP}(F) = \{\mathrm{MP}(\nu_\theta);\ \theta \in \Theta\}$ is indeed a natural exponential family generated by $\mathrm{MP}(\nu)$ (warning: the parametrization $\theta \mapsto \mathrm{MP}(\nu_\theta)$ of $\mathrm{MP}(F)$ is not the canonical one). The next theorem shows that we have obtained in this way all natural exponential families $F$ such that $\mathrm{MP}(F)$ is also an exponential family. It is an extension of the above Theorem 1.

Theorem 2. If the image of the NEF $F$ on $[0,\infty)^d$ by $\mu \mapsto \mathrm{MP}(\mu)$ is still a natural exponential family, then there exist $q+1$ disjoint subsets of $\{1, \dots, d\}$ denoted $T_0, T_1, \dots, T_q$, nonnegative numbers $a_1, \dots, a_d$, and positive numbers $p_1, \dots, p_q$, such that $F$ has a generating measure $\nu$ with Laplace transform (6).

Proof. As in the proof of Theorem 1, the case where $F$ is concentrated on a subspace of $\mathbb{R}^d$ such as $\{(0, \dots, 0)\} \times \mathbb{R}^{q}$ with $q < d$ is discarded; this case leads to mixed Poisson distributions concentrated on $\{(0, \dots, 0)\} \times \mathbb{N}^{q}$. Denote by $\nu$ an arbitrary generating measure of $F$ and by $L(\theta)$ its Laplace transform, defined as
$$L(\theta) = \int_{[0,\infty)^d} e^{\langle \lambda, \theta \rangle}\,\nu(d\lambda)$$
for $\theta \in \Theta$, where $\Theta$ is the interior of the domain of convergence of $L$. Note that the fact that $\nu$ is concentrated on $[0,\infty)^d$ implies that $\Theta - a \subset \Theta$ for any $a = (a_1, \dots, a_d)$ such that $a_i \ge 0$. Suppose that the image of $F(\nu)$ by $\mu \mapsto \mathrm{MP}(\mu)$ is a NEF on $\mathbb{N}^d$ generated by some measure $\sum_{n \in \mathbb{N}^d} p_n\,\delta_n$. We write $\mathbf{1} \in \mathbb{R}^d$ for the vector with all components equal to $1$.
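The displayed formula for the Laplace transform of $\nu_\theta$ is an algebraic identity that is easy to check numerically from (6). In the sketch below the partition $T_0, T_1, T_2$, the coefficients $a_k$, the shapes $p_m$, and the points $\theta$, $h$ are all arbitrary illustrative choices (0-based indices), constrained only so that $\theta$ and $\theta + h$ stay in $\Theta$.

```python
import math

# Illustrative data: d = 4, T0 = {0}, T1 = {1, 2}, T2 = {3} (0-based).
a = [0.7, 0.4, 1.1, 0.9]
T0, classes = [0], [([1, 2], 1.5), ([3], 2.5)]   # (T_m, p_m) pairs

def L(theta):
    """Laplace transform (6) of nu."""
    val = math.exp(sum(a[k] * theta[k] for k in T0))
    for Tm, pm in classes:
        val *= (1.0 - sum(a[k] * theta[k] for k in Tm)) ** (-pm)
    return val

def tilted(theta, h):
    """Claimed form of L(theta+h)/L(theta), with r_m = (1 - sum a_k theta_k)^-1."""
    val = math.exp(sum(a[k] * h[k] for k in T0))
    for Tm, pm in classes:
        r = 1.0 / (1.0 - sum(a[k] * theta[k] for k in Tm))
        val *= (1.0 - r * sum(a[k] * h[k] for k in Tm)) ** (-pm)
    return val

theta = [-0.3, 0.2, -0.5, 0.1]     # keeps each 1 - sum a_k theta_k > 0
h = [0.15, -0.2, 0.1, -0.4]
ratio = L([theta[i] + h[i] for i in range(4)]) / L(theta)
assert abs(ratio - tilted(theta, h)) < 1e-10
```

The agreement is exact up to rounding, since for each block $\frac{(1 - s(\theta) - s(h))^{-p_m}}{(1 - s(\theta))^{-p_m}} = (1 - r_m\,s(h))^{-p_m}$ with $s$ linear.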
We use the standard notations $\lambda^n = \lambda_1^{n_1} \cdots \lambda_d^{n_d}$ and $n! = n_1! \cdots n_d!$, for $n = (n_1, \dots, n_d)$ in $\mathbb{N}^d$ and $\lambda$ in $[0,\infty)^d$. Similarly, $L^{(n)}(\theta)$ means
$$\frac{\partial^{\,n_1 + \dots + n_d}}{\partial\theta_1^{n_1} \cdots \partial\theta_d^{n_d}}\,L(\theta).$$
Thus there exist two functions $\alpha : \Theta + \mathbf{1} \to \mathbb{R}^d$ and $\beta : \Theta + \mathbf{1} \to \mathbb{R}$ such that
$$\int_{[0,\infty)^d} e^{\langle \lambda,\, \theta - \mathbf{1} \rangle}\,\frac{\lambda^n}{n!}\,\nu(d\lambda) = p_n\,e^{\langle n, \alpha(\theta) \rangle + \beta(\theta)},$$
which can be rewritten
$$L^{(n)}(\theta - \mathbf{1}) = n!\,p_n\,e^{\langle n, \alpha(\theta) \rangle + \beta(\theta)}, \qquad n \in \mathbb{N}^d. \qquad (7)$$
A discussion similar to that of Theorem 1 shows that $p_n > 0$ for all $n \in \mathbb{N}^d$, since $\nu$ is not concentrated on some subspace of type $\{(0, \dots, 0)\} \times \mathbb{R}^{q}$. As a consequence, the real-analyticity
of $\alpha$ and $\beta$ on $\Theta + \mathbf{1}$ can be deduced from the analyticity of $L$, by imitating again the proof of Theorem 1. Denote $e_i = (0, \dots, 1, \dots, 0) \in \mathbb{N}^d$, where the unique $1$ is in position $i$. We also write $\alpha(\theta) = (\alpha_1, \dots, \alpha_d)$, $\alpha_{ij} = \frac{\partial \alpha_j}{\partial \theta_i}$ and $\beta_i = \frac{\partial \beta}{\partial \theta_i}$. By taking the logarithms of both sides of (7) and applying $\frac{\partial}{\partial \theta_i}$ we get
$$\frac{L^{(n+e_i)}(\theta - \mathbf{1})}{L^{(n)}(\theta - \mathbf{1})} = \sum_{k=1}^{d} n_k\,\alpha_{ik} + \beta_i. \qquad (8)$$
This last equality implies
$$\alpha_{ij}\,\alpha_{jk} = \alpha_{ji}\,\alpha_{ik}, \qquad (9)$$
$$\alpha_{ij}\,\beta_j = \alpha_{ji}\,\beta_i, \qquad (10)$$
for all $i, j, k$ in $\{1, \dots, d\}$. Indeed, by using (8) and $\frac{\partial^2}{\partial\theta_i\,\partial\theta_j} = \frac{\partial^2}{\partial\theta_j\,\partial\theta_i}$, the quantity $L^{(n+e_i+e_j)}(\theta - \mathbf{1})$ can be written in two ways:
$$L^{(n+e_i+e_j)}(\theta - \mathbf{1}) = L^{(n)}(\theta - \mathbf{1})\Big(\sum_{k=1}^{d} n_k\alpha_{jk} + \beta_j\Big)\Big(\sum_{k=1}^{d} n_k\alpha_{ik} + \beta_i + \alpha_{ij}\Big)$$
$$= L^{(n)}(\theta - \mathbf{1})\Big(\sum_{k=1}^{d} n_k\alpha_{ik} + \beta_i\Big)\Big(\sum_{k=1}^{d} n_k\alpha_{jk} + \beta_j + \alpha_{ji}\Big).$$
Since this equality holds for all $n$, (9) and (10) are easily obtained.

Assume first that $\alpha_{ij} \neq 0$ for all $i$ and $j$ (separating this case is not absolutely necessary, but makes the reading easier) and fix $\theta_0$, as we did in the proof of Theorem 1. In this case, (9) and (10) imply that there exist numbers $a_1, \dots, a_d$ and $p$ such that $a_i = \alpha_{ij}$ for all $j$ and $p = \beta_i/\alpha_{ij}$ for all $i$ and $j$. Equality (8) can then be written
$$\frac{L^{(n+e_i)}(\theta_0 - \mathbf{1})}{L^{(n)}(\theta_0 - \mathbf{1})} = a_i\Big(p + \sum_{k=1}^{d} n_k\Big).$$
As a consequence, the following result can be obtained:
$$\frac{L(\theta_0 - \mathbf{1} + h)}{L(\theta_0 - \mathbf{1})} = \Big(1 - \sum_{k=1}^{d} a_k h_k\Big)^{-p}, \qquad (11)$$
after noting that
$$\Big(1 - \sum_{k=1}^{d} a_k h_k\Big)^{-p} = \sum_{s=0}^{\infty} \frac{(p)_s}{s!}\Big(\sum_{k=1}^{d} a_k h_k\Big)^{s} = \sum_{s=0}^{\infty} (p)_s \sum_{n;\ n_1+\dots+n_d=s} \frac{1}{n!}\,\prod_{k=1}^{d}(a_k h_k)^{n_k}.$$
Consider now the implications of (9) and (10) in the general case where some $\alpha_{ij}$ can be $0$. For this, consider a directed graph $G = (V, E)$ whose set of vertices is $V = \{1, \dots, d\}$, such that $(i,j)$ is an edge if and only if $\alpha_{ij} \neq 0$. We also write $i \to j$ instead of $\alpha_{ij} \neq 0$ or $(i,j) \in E$, and $i \to i$ when the loop $(i,i)$ exists (this loop may or may not exist). Suppose that there
exists $k$ such that $\alpha_{ki} = 0$ for all $i$. Eq. (8) can then be written $L^{(n+e_k)}(\theta - \mathbf{1})/L^{(n)}(\theta - \mathbf{1}) = \beta_k$. Thus, for any integer $n_k$, we have $L^{(n_k e_k)}(\theta - \mathbf{1}) = L(\theta - \mathbf{1})\,(\beta_k)^{n_k}$. After multiplication by $h_k^{n_k}/n_k!$ and summation (with respect to $n_k$) we obtain $L(\theta - \mathbf{1} + h) = L(\theta - \mathbf{1})\,e^{\beta_k h_k}$ for $h = h_k e_k$. More generally, denote by $T_0$ the set of $k$ such that there is no $i$ with $k \to i$. The above reasoning shows that
$$L(\theta - \mathbf{1} + h) = L(\theta - \mathbf{1})\,e^{\sum_{k \in T_0} \beta_k h_k}, \qquad h = \sum_{k \in T_0} h_k e_k.$$
Some definitions about graphs need to be recalled. Consider a directed graph $G = (V, E)$, where $V$ is a finite set and $E \subset V \times V$. The graph $G' = (V', E')$ is called a subgraph of $G$ if $V' \subset V$ and $E' \subset E \cap (V' \times V')$. Furthermore, $G'$ is the induced graph on $V'$ if $E' = E \cap (V' \times V')$. The following result can be easily obtained:

Lemma. Consider the graph $G$ defined as above on $V = \{1, \dots, d\}$ by the matrix $(\alpha_{ij})$ satisfying (9) and (10). Then:
1. Let $i$ and $j$ be distinct in $V$. If the induced subgraph on $V' = \{i, j\}$ contains either the edges $i \to j$, $j \to j$ or the edges $i \to j$, $j \to i$, then the induced graph on $\{i, j\}$ is complete: $i \to i$, $i \to j$, $j \to i$, $j \to j$.
2. If the induced subgraph on $V' = \{i, j, k\}$ contains the edges $i \to j$, $j \to k$, then it contains the edges $i \to k$ and $j \to i$, as well as the loops $i \to i$ and $j \to j$.
3. If the induced subgraph on $V' = \{i, j\}$ consists of the edge $i \to j$, possibly together with the loop $i \to i$, then $\beta_j = 0$.

These results are illustrated in Fig. 1. The proof of the lemma involves the three following cases:
1. If $i \to j$ and $j \to j$: by setting $k = j$ and then $k = i$ in (9), we obtain $\alpha_{ji} \neq 0$ and $\alpha_{ii} \neq 0$. If $i \to j$ and $j \to i$: by setting $k = j$ and then $k = i$ in (9), we obtain $\alpha_{jj} \neq 0$ and $\alpha_{ii} \neq 0$.
2. Eq. (9) gives $\alpha_{ji}\,\alpha_{ik} = \alpha_{ij}\,\alpha_{jk} \neq 0$, so $j \to i$ and $i \to k$; part 1 applied to $i \to j$, $j \to i$ then gives the loops, and (9) yields $\alpha_{ji} = \alpha_{jk} = \alpha_{jj} = a_j$ and $\alpha_{ij} = \alpha_{ii} = \alpha_{ik} = a_i$.
3. Apply (10).

We come back to the proof of Theorem 2. Define the relation $i \sim j$ on $V = \{1, \dots, d\}$ by: either $i = j$, or the induced graph on $\{i, j\}$ is complete ($i \to i$, $i \to j$, $j \to i$, $j \to j$). It is easy to deduce from the Lemma that $\sim$ is an equivalence relation. We remark that this implies that each element of $T_0$ is alone in its equivalence class.
Recall also that the definition of $T_0$ implies that there are no arrows between two elements of $T_0$. Denote the other equivalence classes by $T_1, \dots, T_q$. Suppose now that there exist $i \in \bigcup_{m=1}^{q} T_m$ and $k \in T_0$ such that $i \to k$. Then part 3 of the Lemma implies that $\beta_k = 0$. Eq. (8) can then be used to prove that $L^{(n+e_k)} = 0$ for all $n \in \mathbb{N}^d$. Thus $h \mapsto L(\theta - \mathbf{1} + h)$ does not depend on $h_k$. Since $\beta_k = 0$, this implies that $\mu$ is concentrated on $\{\lambda \in \mathbb{R}^d;\ \lambda_k = 0\}$, a case which has been excluded from the beginning. There are finally no arrows between $\bigcup_{m=1}^{q} T_m$ and $T_0$.
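The combinatorial content of this step can be sketched in code: given a matrix $\alpha$ with the block structure derived in the proof ($\alpha_{ij} = a_i$ when $i$ and $j$ lie in the same class, $0$ otherwise), one can read off $T_0$ and the equivalence classes directly from the sign pattern. The matrix below is an illustrative example, not data from the paper.

```python
# Recover T0 and the classes T_1, ..., T_q from the sign pattern of an
# illustrative alpha matrix with the block structure of the proof.
d = 5
a = [0.8, 1.3, 0.0, 0.6, 0.0]          # a_k = 0 marks T0 vertices
blocks = [[0, 1], [3]]                  # classes T_1, T_2 (0-based); {2, 4} -> T0

alpha = [[0.0] * d for _ in range(d)]
for block in blocks:
    for i in block:
        for j in block:
            alpha[i][j] = a[i]          # all arrows (and loops) inside a class

# Edge i -> j iff alpha[i][j] != 0; T0 = vertices with no outgoing edge.
T0 = [k for k in range(d) if all(alpha[k][j] == 0 for j in range(d))]

# i ~ j iff i == j or the induced graph on {i, j} is complete.
classes = []
seen = set()
for i in range(d):
    if i in seen or i in T0:
        continue
    cls = [j for j in range(d)
           if j == i or (alpha[i][j] != 0 and alpha[j][i] != 0)]
    classes.append(cls)
    seen.update(cls)

print(T0)       # [2, 4]
print(classes)  # [[0, 1], [3]]
```

By the Lemma, any matrix satisfying (9) and (10) that generates no excluded degenerate case has this block form, so the same extraction applies.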
Fig. 1. Illustration of the lemma.

As a summary, the picture of the graph $G$ is:
1. a collection $T_0$ of vertices without any arrow and any loop;
2. $q$ disjoint classes $T_1, \dots, T_q$, without arrows between vertices of different classes, and with all possible arrows (including loops) inside a same class $T_m$.

Consider a fixed $m$ in $\{1, \dots, q\}$. For all $i, j$ in $T_m$, by setting $k = i$ in (9), we obtain $\alpha_{ij}\,\alpha_{ji} = \alpha_{ji}\,\alpha_{ii}$. Thus $\alpha_{ji} \neq 0$ implies $\alpha_{ij} = \alpha_{ii}$. By using (8), for $i \in T_m$ and for any $n \in \mathbb{N}^d$, the following result can be obtained:
$$\frac{L^{(n+e_i)}(\theta_0 - \mathbf{1})}{L^{(n)}(\theta_0 - \mathbf{1})} = \alpha_{ii}\Big(\frac{\beta_i}{\alpha_{ii}} + \sum_{k \in T_m} n_k\Big). \qquad (12)$$
Recall that the number $p_m = \beta_i/\alpha_{ii}$ does not depend on $i$ when $i$ runs through $T_m$ (indeed $\alpha_{ij} = \alpha_{ii}$; use (10)). Denote $a_k = \alpha_{kk}$ as above. The imitation of the proof of (11) and the formula (12) lead to
$$\frac{L(\theta_0 - \mathbf{1} + h)}{L(\theta_0 - \mathbf{1})} = \Big(1 - \sum_{k \in T_m} a_k h_k\Big)^{-p_m}, \qquad (13)$$
for any $h = \sum_{k \in T_m} h_k e_k$. This concludes the proof of Theorem 2.

References

[1] S. Bar-Lev, P. Enis, Reproducibility and natural exponential families with power variance functions, Ann. Statist. 14 (1986).
[2] K. Bobecka, J. Wesołowski, Multivariate Lukacs theorem, J. Multivariate Anal. 91 (2004).
[3] A. Ferrari, G. Letac, J.-Y. Tourneret, Multivariate mixed Poisson distributions, in: Proceedings of the 12th European Signal Processing Conference (EUSIPCO 2004), Vienna, Austria, September 2004.
[4] J. Goodman, Statistical Optics, Wiley, New York, 1985.
[5] J. Grandell, Mixed Poisson Processes, Chapman & Hall, London, 1997.
[6] P. Hougaard, M.-L.T. Lee, G.A. Whitmore, Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes, Biometrics 53 (1997).
[7] B. Jorgensen, The Theory of Dispersion Models, Chapman & Hall, London, 1997.
[8] G. Letac, M. Mora, Natural exponential families with cubic variance functions, Ann. Statist. 18 (1990) 1–37.
[9] E. Lukacs, A characterization of the gamma distribution, Ann. Math. Statist. 26 (1955).
[10] C.N. Morris, Natural exponential families with quadratic variance functions, Ann. Statist. 10 (1982) 65–80.
Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,
More informationAnalysis-3 lecture schemes
Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space
More informationPairwise likelihood estimation for multivariate mixed Poisson models generated by Gamma intensities
Noname manuscript No. (will be inserted by the editor) Pairwise likelihood estimation for multivariate mixed Poisson models generated by Gamma intensities Florent Chatelain Sophie Lambert-Lacroix Jean-Yves
More informationON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES
ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES SANTOSH N. KABADI AND ABRAHAM P. PUNNEN Abstract. Polynomially testable characterization of cost matrices associated
More information(1) Which of the following are propositions? If it is a proposition, determine its truth value: A propositional function, but not a proposition.
Math 231 Exam Practice Problem Solutions WARNING: This is not a sample test. Problems on the exams may or may not be similar to these problems. These problems are just intended to focus your study of the
More informationInfinite series, improper integrals, and Taylor series
Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions
More information2: LINEAR TRANSFORMATIONS AND MATRICES
2: LINEAR TRANSFORMATIONS AND MATRICES STEVEN HEILMAN Contents 1. Review 1 2. Linear Transformations 1 3. Null spaces, range, coordinate bases 2 4. Linear Transformations and Bases 4 5. Matrix Representation,
More informationIntroduction to Applied Bayesian Modeling. ICPSR Day 4
Introduction to Applied Bayesian Modeling ICPSR Day 4 Simple Priors Remember Bayes Law: Where P(A) is the prior probability of A Simple prior Recall the test for disease example where we specified the
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationChapter Generating Functions
Chapter 8.1.1-8.1.2. Generating Functions Prof. Tesler Math 184A Fall 2017 Prof. Tesler Ch. 8. Generating Functions Math 184A / Fall 2017 1 / 63 Ordinary Generating Functions (OGF) Let a n (n = 0, 1,...)
More informationLecture 5. Shearer s Lemma
Stanford University Spring 2016 Math 233: Non-constructive methods in combinatorics Instructor: Jan Vondrák Lecture date: April 6, 2016 Scribe: László Miklós Lovász Lecture 5. Shearer s Lemma 5.1 Introduction
More informationMore Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction
Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order
More informationProblems in Linear Algebra and Representation Theory
Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific
More informationASSIGNMENT 1 SOLUTIONS
MATH 271 ASSIGNMENT 1 SOLUTIONS 1. (a) Let S be the statement For all integers n, if n is even then 3n 11 is odd. Is S true? Give a proof or counterexample. (b) Write out the contrapositive of statement
More informationSECTION 9.2: ARITHMETIC SEQUENCES and PARTIAL SUMS
(Chapter 9: Discrete Math) 9.11 SECTION 9.2: ARITHMETIC SEQUENCES and PARTIAL SUMS PART A: WHAT IS AN ARITHMETIC SEQUENCE? The following appears to be an example of an arithmetic (stress on the me ) sequence:
More informationStat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016
Stat 5421 Lecture Notes Proper Conjugate Priors for Exponential Families Charles J. Geyer March 28, 2016 1 Theory This section explains the theory of conjugate priors for exponential families of distributions,
More information#A69 INTEGERS 13 (2013) OPTIMAL PRIMITIVE SETS WITH RESTRICTED PRIMES
#A69 INTEGERS 3 (203) OPTIMAL PRIMITIVE SETS WITH RESTRICTED PRIMES William D. Banks Department of Mathematics, University of Missouri, Columbia, Missouri bankswd@missouri.edu Greg Martin Department of
More informationPart IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015
Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.
More informationExam C Solutions Spring 2005
Exam C Solutions Spring 005 Question # The CDF is F( x) = 4 ( + x) Observation (x) F(x) compare to: Maximum difference 0. 0.58 0, 0. 0.58 0.7 0.880 0., 0.4 0.680 0.9 0.93 0.4, 0.6 0.53. 0.949 0.6, 0.8
More information8.5 Taylor Polynomials and Taylor Series
8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:
More informationON THE CHEBYSHEV POLYNOMIALS. Contents. 2. A Result on Linear Functionals on P n 4 Acknowledgments 7 References 7
ON THE CHEBYSHEV POLYNOMIALS JOSEPH DICAPUA Abstract. This paper is a short exposition of several magnificent properties of the Chebyshev polynomials. The author illustrates how the Chebyshev polynomials
More informationMihhail Juhkam POPULATIONS WITH LARGE NUMBER OF CLASSES: MODELS AND ESTIMATION OF SAMPLE COVERAGE AND SAMPLE SIZE. Master s thesis (40 CP)
UNIVERSITY OF TARTU Faculty of Mathematics and Computer Science Institute of Mathematical Statistics Mihhail Juhkam POPULATIONS WITH LARGE NUMBER OF CLASSES: MODELS AND ESTIMATION OF SAMPLE COVERAGE AND
More informationFour-coloring P 6 -free graphs. I. Extending an excellent precoloring
Four-coloring P 6 -free graphs. I. Extending an excellent precoloring Maria Chudnovsky Princeton University, Princeton, NJ 08544 Sophie Spirkl Princeton University, Princeton, NJ 08544 Mingxian Zhong Columbia
More informationPractice Final Exam Solutions for Calculus II, Math 1502, December 5, 2013
Practice Final Exam Solutions for Calculus II, Math 5, December 5, 3 Name: Section: Name of TA: This test is to be taken without calculators and notes of any sorts. The allowed time is hours and 5 minutes.
More informationP i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=
2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]
More informationDiscrete Mathematics. Spring 2017
Discrete Mathematics Spring 2017 Previous Lecture Principle of Mathematical Induction Mathematical Induction: rule of inference Mathematical Induction: Conjecturing and Proving Climbing an Infinite Ladder
More informationMath 110: Worksheet 3
Math 110: Worksheet 3 September 13 Thursday Sept. 7: 2.1 1. Fix A M n n (F ) and define T : M n n (F ) M n n (F ) by T (B) = AB BA. (a) Show that T is a linear transformation. Let B, C M n n (F ) and a
More informationAn Inverse Problem for Gibbs Fields with Hard Core Potential
An Inverse Problem for Gibbs Fields with Hard Core Potential Leonid Koralov Department of Mathematics University of Maryland College Park, MD 20742-4015 koralov@math.umd.edu Abstract It is well known that
More informationNotes 6 : First and second moment methods
Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationPRIME NUMBERS YANKI LEKILI
PRIME NUMBERS YANKI LEKILI We denote by N the set of natural numbers: 1,2,..., These are constructed using Peano axioms. We will not get into the philosophical questions related to this and simply assume
More informationApproximation by Conditionally Positive Definite Functions with Finitely Many Centers
Approximation by Conditionally Positive Definite Functions with Finitely Many Centers Jungho Yoon Abstract. The theory of interpolation by using conditionally positive definite function provides optimal
More informationConsidering our result for the sum and product of analytic functions, this means that for (a 0, a 1,..., a N ) C N+1, the polynomial.
Lecture 3 Usual complex functions MATH-GA 245.00 Complex Variables Polynomials. Construction f : z z is analytic on all of C since its real and imaginary parts satisfy the Cauchy-Riemann relations and
More informationFisher Information in Gaussian Graphical Models
Fisher Information in Gaussian Graphical Models Jason K. Johnson September 21, 2006 Abstract This note summarizes various derivations, formulas and computational algorithms relevant to the Fisher information
More informationLines With Many Points On Both Sides
Lines With Many Points On Both Sides Rom Pinchasi Hebrew University of Jerusalem and Massachusetts Institute of Technology September 13, 2002 Abstract Let G be a finite set of points in the plane. A line
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationN E W S A N D L E T T E R S
N E W S A N D L E T T E R S 74th Annual William Lowell Putnam Mathematical Competition Editor s Note: Additional solutions will be printed in the Monthly later in the year. PROBLEMS A1. Recall that a regular
More informationCOMPLETELY INVARIANT JULIA SETS OF POLYNOMIAL SEMIGROUPS
Series Logo Volume 00, Number 00, Xxxx 19xx COMPLETELY INVARIANT JULIA SETS OF POLYNOMIAL SEMIGROUPS RICH STANKEWITZ Abstract. Let G be a semigroup of rational functions of degree at least two, under composition
More informationReal Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi
Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.
More informationA Note on Certain Stability and Limiting Properties of ν-infinitely divisible distributions
Int. J. Contemp. Math. Sci., Vol. 1, 2006, no. 4, 155-161 A Note on Certain Stability and Limiting Properties of ν-infinitely divisible distributions Tomasz J. Kozubowski 1 Department of Mathematics &
More informationSTA216: Generalized Linear Models. Lecture 1. Review and Introduction
STA216: Generalized Linear Models Lecture 1. Review and Introduction Let y 1,..., y n denote n independent observations on a response Treat y i as a realization of a random variable Y i In the general
More informationThe Turán number of sparse spanning graphs
The Turán number of sparse spanning graphs Noga Alon Raphael Yuster Abstract For a graph H, the extremal number ex(n, H) is the maximum number of edges in a graph of order n not containing a subgraph isomorphic
More informationLecture 4 September 15
IFT 6269: Probabilistic Graphical Models Fall 2017 Lecture 4 September 15 Lecturer: Simon Lacoste-Julien Scribe: Philippe Brouillard & Tristan Deleu 4.1 Maximum Likelihood principle Given a parametric
More information18.S34 (FALL 2007) PROBLEMS ON ROOTS OF POLYNOMIALS
18.S34 (FALL 2007) PROBLEMS ON ROOTS OF POLYNOMIALS Note. The terms root and zero of a polynomial are synonyms. Those problems which appeared on the Putnam Exam are stated as they appeared verbatim (except
More informationSolutions to Tutorial for Week 4
The University of Sydney School of Mathematics and Statistics Solutions to Tutorial for Week 4 MATH191/1931: Calculus of One Variable (Advanced) Semester 1, 018 Web Page: sydneyeduau/science/maths/u/ug/jm/math191/
More informationLectures on Elementary Probability. William G. Faris
Lectures on Elementary Probability William G. Faris February 22, 2002 2 Contents 1 Combinatorics 5 1.1 Factorials and binomial coefficients................. 5 1.2 Sampling with replacement.....................
More informationOn Systems of Diagonal Forms II
On Systems of Diagonal Forms II Michael P Knapp 1 Introduction In a recent paper [8], we considered the system F of homogeneous additive forms F 1 (x) = a 11 x k 1 1 + + a 1s x k 1 s F R (x) = a R1 x k
More informationTaylor and Maclaurin Series. Copyright Cengage Learning. All rights reserved.
11.10 Taylor and Maclaurin Series Copyright Cengage Learning. All rights reserved. We start by supposing that f is any function that can be represented by a power series f(x)= c 0 +c 1 (x a)+c 2 (x a)
More informationTwo hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx.
Two hours To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER CONVEX OPTIMIZATION - SOLUTIONS xx xxxx 27 xx:xx xx.xx Answer THREE of the FOUR questions. If
More informationConvergence Rate of Nonlinear Switched Systems
Convergence Rate of Nonlinear Switched Systems Philippe JOUAN and Saïd NACIRI arxiv:1511.01737v1 [math.oc] 5 Nov 2015 January 23, 2018 Abstract This paper is concerned with the convergence rate of the
More informationDisjoint paths in tournaments
Disjoint paths in tournaments Maria Chudnovsky 1 Columbia University, New York, NY 10027, USA Alex Scott Mathematical Institute, University of Oxford, 24-29 St Giles, Oxford OX1 3LB, UK Paul Seymour 2
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationPartition of Integers into Distinct Summands with Upper Bounds. Partition of Integers into Even Summands. An Example
Partition of Integers into Even Summands We ask for the number of partitions of m Z + into positive even integers The desired number is the coefficient of x m in + x + x 4 + ) + x 4 + x 8 + ) + x 6 + x
More information04. Random Variables: Concepts
University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative
More informationPARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION
PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION ALESSIO FIGALLI AND YOUNG-HEON KIM Abstract. Given Ω, Λ R n two bounded open sets, and f and g two probability densities concentrated
More informationPattern Recognition. Parameter Estimation of Probability Density Functions
Pattern Recognition Parameter Estimation of Probability Density Functions Classification Problem (Review) The classification problem is to assign an arbitrary feature vector x F to one of c classes. The
More informationOn the Chi square and higher-order Chi distances for approximating f-divergences
On the Chi square and higher-order Chi distances for approximating f-divergences Frank Nielsen, Senior Member, IEEE and Richard Nock, Nonmember Abstract We report closed-form formula for calculating the
More informationLecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 3: Interpolation and Polynomial Approximation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 10, 2015 2 Contents 1.1 Introduction................................ 3 1.1.1
More informationContinued fractions for complex numbers and values of binary quadratic forms
arxiv:110.3754v1 [math.nt] 18 Feb 011 Continued fractions for complex numbers and values of binary quadratic forms S.G. Dani and Arnaldo Nogueira February 1, 011 Abstract We describe various properties
More information(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define
Homework, Real Analysis I, Fall, 2010. (1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define ρ(f, g) = 1 0 f(x) g(x) dx. Show that
More informationExponential families also behave nicely under conditioning. Specifically, suppose we write η = (η 1, η 2 ) R k R p k so that
1 More examples 1.1 Exponential families under conditioning Exponential families also behave nicely under conditioning. Specifically, suppose we write η = η 1, η 2 R k R p k so that dp η dm 0 = e ηt 1
More informationProbability. Machine Learning and Pattern Recognition. Chris Williams. School of Informatics, University of Edinburgh. August 2014
Probability Machine Learning and Pattern Recognition Chris Williams School of Informatics, University of Edinburgh August 2014 (All of the slides in this course have been adapted from previous versions
More informationMath 0230 Calculus 2 Lectures
Math 00 Calculus Lectures Chapter 8 Series Numeration of sections corresponds to the text James Stewart, Essential Calculus, Early Transcendentals, Second edition. Section 8. Sequences A sequence is a
More information