Issues on Fitting Univariate and Multivariate Hyperbolic Distributions


1 Issues on Fitting Univariate and Multivariate Hyperbolic Distributions
Thomas Tran, PhD student, Department of Statistics, University of Auckland

2 Acknowledgement
I am grateful to my supervisor for his guidance. The financial assistance of my department and the workshop organizers is highly appreciated.

3 Outline
Generalized inverse Gaussian distributions, normal mean-variance mixture distributions, generalized hyperbolic distributions, the hyperbolic distribution
Fitting the univariate hyperbolic distribution using functions in the HyperbolicDist package
EM and MCECM algorithms
Fitting multivariate generalized hyperbolic distributions

4 Grounding work: References
Barndorff-Nielsen, O. E. (1977). Exponentially decreasing distributions for the logarithm of particle size. Proceedings of the Royal Society of London, Series A, 353.
Barndorff-Nielsen, O. E. and Blæsild, P. (1981). Hyperbolic distributions and ramifications: contributions to theory and application. In C. Taillie, G. Patil and B. Baldessari (Eds), Statistical Distributions in Scientific Work, Vol. 4. Dordrecht: Reidel.
Blæsild, P. (1981). The two-dimensional hyperbolic distribution and related distributions, with an application to Johannsen's bean data. Biometrika.

5 Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. Roy. Statist. Soc., Ser. B.
Recently, among others:
Barndorff-Nielsen, O. E. Normal inverse Gaussian distributions and stochastic volatility modeling. Scand. J. Statist., 24, 1-13.
Bibby, B. M. and Sørensen, M. (2003). Hyperbolic processes in finance. In Handbook of Heavy Tailed Distributions in Finance. Elsevier Science B.V.
Eberlein, E. and Keller, U. (1995). Hyperbolic distributions in finance. Bernoulli.
Prause, K. (1999). The Generalized Hyperbolic Model: Estimation, Financial Derivatives, and Risk Measures. Ph.D. Thesis, Universität Freiburg.

6 Practical work
Blæsild, P. and Sørensen, M. (1992). 'hyp': a computer program for analyzing data by means of the hyperbolic distribution. Research Report 248, Department of Theoretical Statistics, University of Aarhus.
McNeil, A. J., Frey, R. and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton.
The R packages HyperbolicDist, QRMlib and others.

7 Generalized Inverse Gaussian Distributions

h(w \mid \lambda, \chi, \psi) = \frac{(\psi/\chi)^{\lambda/2}}{2 K_\lambda(\sqrt{\chi\psi})}\, w^{\lambda-1} \exp\!\left(-\tfrac{1}{2}\left(\chi w^{-1} + \psi w\right)\right)

E(W^\alpha) = \left(\frac{\chi}{\psi}\right)^{\alpha/2} \frac{K_{\lambda+\alpha}(\sqrt{\chi\psi})}{K_\lambda(\sqrt{\chi\psi})}

where w > 0, χ > 0 and ψ > 0, and K_λ denotes the modified Bessel function of the third kind with order λ.
Representable in 4 parameterizations, with λ specifying the order of the modified Bessel function in each; the remaining pair of parameters can be (χ, ψ), (δ, γ), (α, β) or (ω, ζ). Function gigChangePars converts between them.
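As a quick numerical check of the reconstructed formulas above, here is a small R sketch (my own illustration, using only base R's besselK and integrate, not the HyperbolicDist code) that implements the GIG density and the moment formula and verifies that the density integrates to one and matches the Bessel-ratio expression for E(W).

    ## GIG density and moment formula, transcribed directly from the slide (base R only)
    dgig_manual <- function(w, lambda, chi, psi) {
      const <- (psi / chi)^(lambda / 2) / (2 * besselK(sqrt(chi * psi), nu = lambda))
      const * w^(lambda - 1) * exp(-0.5 * (chi / w + psi * w))
    }

    mom_gig <- function(alpha, lambda, chi, psi) {
      (chi / psi)^(alpha / 2) *
        besselK(sqrt(chi * psi), nu = lambda + alpha) /
        besselK(sqrt(chi * psi), nu = lambda)
    }

    lambda <- 1; chi <- 2; psi <- 3

    ## The density should integrate to 1 over (0, Inf)
    integrate(dgig_manual, 0, Inf, lambda = lambda, chi = chi, psi = psi)

    ## First moment: Bessel-ratio formula versus numerical integration
    mom_gig(1, lambda, chi, psi)
    integrate(function(w) w * dgig_manual(w, lambda, chi, psi), 0, Inf)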

8 Normal Mean-Variance Mixture Distributions

Univariate generalized hyperbolic distribution:

f(x) = \int_0^\infty \frac{1}{(2\pi w)^{1/2}} \exp\!\left(-\frac{(x - M)^2}{2w}\right) h(w)\, dw

Multivariate generalized hyperbolic distribution:

g(x) = \int_0^\infty \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2} w^{d/2}} \exp\!\left(-\frac{(x - M)' \Sigma^{-1} (x - M)}{2w}\right) h(w)\, dw

where Σ = AA' and Σ has rank d. In both cases M = µ + wγ and h(w) is the density of the generalized inverse Gaussian distribution.
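To make the mixture representation concrete, the following self-contained R sketch (again my own illustration in base R, repeating the GIG density from the previous block) evaluates the univariate mixture integral numerically for the hyperbolic case λ = 1 and checks that the result is a proper density.

    ## Univariate normal mean-variance mixture evaluated by numerical integration;
    ## h(w) is the GIG density, and lambda = 1 gives the hyperbolic distribution.
    dgig_manual <- function(w, lambda, chi, psi) {
      (psi / chi)^(lambda / 2) / (2 * besselK(sqrt(chi * psi), nu = lambda)) *
        w^(lambda - 1) * exp(-0.5 * (chi / w + psi * w))
    }

    dmix <- function(x, lambda, chi, psi, mu, gamma) {
      integrand <- function(w) {
        M <- mu + w * gamma                      # conditional mean given W = w
        dnorm(x, mean = M, sd = sqrt(w)) * dgig_manual(w, lambda, chi, psi)
      }
      integrate(integrand, 0, Inf)$value
    }

    ## Check that the mixture density integrates to one in x
    f <- Vectorize(dmix, "x")
    integrate(f, -Inf, Inf, lambda = 1, chi = 1, psi = 1, mu = 0, gamma = 0.5)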

9 Multivariate Hyperbolic Distribution

From the mean-variance mixture, the multivariate generalized hyperbolic density can be derived as

g(x) = c \; \frac{K_{\lambda - d/2}\!\left(\sqrt{[\chi + (x-\mu)'\Sigma^{-1}(x-\mu)]\,[\psi + \gamma'\Sigma^{-1}\gamma]}\right)}{\left(\sqrt{[\chi + (x-\mu)'\Sigma^{-1}(x-\mu)]\,[\psi + \gamma'\Sigma^{-1}\gamma]}\right)^{d/2 - \lambda}} \; e^{(x-\mu)'\Sigma^{-1}\gamma}

where the normalizing constant is

c = \frac{(\sqrt{\chi\psi})^{-\lambda}\, \psi^{\lambda}\, (\psi + \gamma'\Sigma^{-1}\gamma)^{d/2 - \lambda}}{(2\pi)^{d/2}\, |\Sigma|^{1/2}\, K_\lambda(\sqrt{\chi\psi})}

If λ = (d+1)/2 we have a d-dimensional hyperbolic distribution. This is the (λ, χ, ψ, Σ, γ, µ) parameterization; the parameterization suggested in Blæsild (1985) is (λ, α, δ, Δ, β, µ).

10 Univariate Generalized Hyperbolic Distributions

f(x \mid \lambda, \alpha, \beta, \delta, \mu) = \frac{(\gamma/\delta)^{\lambda}}{\sqrt{2\pi}\, K_\lambda(\delta\gamma)} \; \frac{K_{\lambda - 1/2}\!\left(\alpha\sqrt{\delta^2 + (x-\mu)^2}\right)}{\left(\sqrt{\delta^2 + (x-\mu)^2}/\alpha\right)^{1/2 - \lambda}} \; e^{\beta(x-\mu)}

where K_λ denotes the modified Bessel function of the third kind with order λ, γ² = α² - β² and -∞ < x < ∞.
Has 5 parameters and can be represented in 4 parameterizations. All have λ, δ and µ as the order of the Bessel function, the scale parameter and the location parameter respectively. The skewness and kurtosis parameters can be: (α, β), (ζ, ρ), (ξ, χ) and (ᾱ, β̄). Function ghypChangePars converts between them.
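The closed form above can be checked against the mean-variance mixture representation from the previous slides. The sketch below (my own illustration in base R) uses the standard mapping W ~ GIG(λ, χ = δ², ψ = α² - β²) with X | W = w ~ N(µ + wβ, w); that mapping is my assumption rather than something stated on the slides, and the code verifies it numerically for one parameter choice.

    ## Closed-form univariate GH density versus its normal mean-variance mixture
    ## representation (base R only).  Assumed mapping:
    ##   W ~ GIG(lambda, chi = delta^2, psi = alpha^2 - beta^2),
    ##   X | W = w ~ N(mu + w*beta, w).
    dgig_manual <- function(w, lambda, chi, psi) {
      (psi / chi)^(lambda / 2) / (2 * besselK(sqrt(chi * psi), nu = lambda)) *
        w^(lambda - 1) * exp(-0.5 * (chi / w + psi * w))
    }

    dghyp_closed <- function(x, lambda, alpha, beta, delta, mu) {
      gam <- sqrt(alpha^2 - beta^2)
      s   <- sqrt(delta^2 + (x - mu)^2)
      (gam / delta)^lambda / (sqrt(2 * pi) * besselK(delta * gam, nu = lambda)) *
        besselK(alpha * s, nu = lambda - 1/2) / (s / alpha)^(1/2 - lambda) *
        exp(beta * (x - mu))
    }

    dghyp_mixture <- function(x, lambda, alpha, beta, delta, mu) {
      chi <- delta^2; psi <- alpha^2 - beta^2
      integrate(function(w) dnorm(x, mu + w * beta, sqrt(w)) *
                  dgig_manual(w, lambda, chi, psi), 0, Inf)$value
    }

    x <- seq(-3, 3, by = 0.5)
    closed  <- dghyp_closed(x, lambda = 1, alpha = 2, beta = 0.5, delta = 1, mu = 0)
    mixture <- sapply(x, dghyp_mixture, lambda = 1, alpha = 2, beta = 0.5, delta = 1, mu = 0)
    max(abs(closed - mixture))   # should be numerically negligible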

11 The Hyperbolic Distribution

For µ = 0 and δ = 1 the density of the hyperbolic distribution is

f(x) = \frac{1}{2\sqrt{1+\pi^2}\, K_1(\zeta)} \exp\!\left(-\zeta\left[\sqrt{1+\pi^2}\,\sqrt{1+x^2} - \pi x\right]\right)

where K_1 denotes the modified Bessel function of the third kind with index 1.
There are four different parameterizations; all have µ as the location and δ as the scale parameter. The skewness and kurtosis parameters can be (π, ζ), (α, β), (φ, γ) or (ξ, χ); each parameterization is useful for different purposes. Function hyperbChangePars converts between them.

12 The Hyperbolic Distribution Shape Triangle
[Figure: the shape triangle in the (χ, ξ) parameterization, with plots of the log densities for different values of ξ and χ; boundary labels include N, E, L and GIG(1).]

13 Fitting the Univariate Hyperbolic Distribution

For the fitting discussed in this talk we use the (π, ζ) parameterization:

f(x) = c(\pi, \zeta, \delta, \mu) \exp\!\left(-\zeta\left[\sqrt{1+\pi^2}\,\sqrt{1+\left(\frac{x-\mu}{\delta}\right)^2} - \pi\,\frac{x-\mu}{\delta}\right]\right)

where

c(\pi, \zeta, \delta, \mu) = \left[2\delta\,\sqrt{1+\pi^2}\, K_1(\zeta)\right]^{-1}

since in the (π, ζ) parameterization the only restriction is ζ > 0.
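The density above translates directly into a negative log-likelihood that a general-purpose optimizer can minimize. The sketch below is my own base-R illustration (it is not the HyperbolicDist code); it works on the log scale with a scaled Bessel function for stability and re-parameterizes through log ζ and log δ, mirroring the remark that ζ > 0 is the only restriction.

    ## Log-density of the hyperbolic distribution in the (pi, zeta) parameterization,
    ## written on the log scale (scaled Bessel function) for numerical stability.
    ldhyperb_pizeta <- function(x, pi., zeta, delta, mu) {
      z <- (x - mu) / delta
      logK1 <- log(besselK(zeta, nu = 1, expon.scaled = TRUE)) - zeta
      -log(2 * delta) - 0.5 * log(1 + pi.^2) - logK1 -
        zeta * (sqrt(1 + pi.^2) * sqrt(1 + z^2) - pi. * z)
    }

    ## Negative log-likelihood over (pi, log zeta, log delta, mu), so zeta, delta > 0
    negloglik <- function(par, x) {
      -sum(ldhyperb_pizeta(x, pi. = par[1], zeta = exp(par[2]),
                           delta = exp(par[3]), mu = par[4]))
    }

    set.seed(1)
    x <- rt(500, df = 5)                 # placeholder data, for illustration only
    start <- c(0, log(1), log(sd(x)), mean(x))
    fit <- optim(start, negloglik, x = x, method = "Nelder-Mead")
    c(pi = fit$par[1], zeta = exp(fit$par[2]), delta = exp(fit$par[3]), mu = fit$par[4])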

14 Fitting the Univariate Hyperbolic Distribution

hyperbFitStart(x, startValues = "BN", startMethodSL = "Nelder-Mead", startMethodMoM = "Nelder-Mead", ...)

This function supplies starting values for the optimization carried out by the hyperbFit function, using one of the following methods:
BN: Barndorff-Nielsen (1977)
SL: fitted skewed Laplace
FN: fitted normal
MoM: method of moments
US: user supplied

15 Fitting the Univariate Hyperbolic Distribution

Fitting function:

hyperbFit(x, startValues = "BN", method = "Nelder-Mead", ...)

Nelder-Mead: a derivative-free optimization method suggested by Nelder and Mead (1965)
BFGS: a derivative-based (quasi-Newton) optimization method pioneered by Broyden, Fletcher, Goldfarb and Shanno
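A short usage sketch of the functions named on these two slides, on simulated data. The function and argument names (rhyperb, the Theta parameter vector in (π, ζ, δ, µ) order, and the summary and plot methods) are as I recall them from the HyperbolicDist package and should be treated as assumptions to check against the package documentation.

    ## Sketch of fitting simulated data with HyperbolicDist, assuming the interface
    ## described on the slides; check names against the package documentation.
    library(HyperbolicDist)

    set.seed(123)
    Theta <- c(0.3, 1, 1, 0)               # assumed order: (pi, zeta, delta, mu)
    x <- rhyperb(1000, Theta = Theta)      # simulate hyperbolic data

    start <- hyperbFitStart(x, startValues = "BN")   # starting values on their own
    fit   <- hyperbFit(x, startValues = "BN", method = "Nelder-Mead")

    summary(fit)   # fitted (pi, zeta, delta, mu), if a summary method is provided
    plot(fit)      # diagnostic plots, if a plot method is provided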

16 Fitting the Univariate Hyperbolic Distribution

Three questions raised in Barndorff-Nielsen and Blæsild (1981): What sample size is needed? Which parameterization? Which fitting method?
Eberlein and Keller (1995): daily returns of 10 of the 30 DAX stocks from 2 October 1989 to 30 September 1992, 745 data points.

17 Fitting the Univariate Hyperbolic Distribution
[Figure: 'Xi fitting' and 'Chi fitting' panels comparing the estimates of ξ and χ with the true values ξ = 0.3, χ = 0.04.]

18 Fitting the Univariate Hyperbolic Distribution
[Figure: 'Xi fitting' and 'Chi fitting' panels comparing the estimates of ξ and χ with the true values ξ = 0.6, χ = 0.04.]

19 Fitting the Univariate Hyperbolic Distribution
[Figure: 'Xi fitting' and 'Chi fitting' panels comparing the estimates of ξ and χ with the true values ξ = 0.98, χ = 0.9.]

20 Fitting the Univariate Hyperbolic Distribution
[Figure: further 'Xi fitting' and 'Chi fitting' panels for the true values ξ = 0.98, χ = 0.9.]

21 Conclusions on Fitting the Univariate Hyperbolic Distribution

None of the methods outperforms the others.
The fit improves as the sample size becomes larger; a fairly large sample is about 2000 or more observations.
Notably, the fitting of ξ appears to be fairly accurate even with small sample sizes when the hyperbolic distribution lies at the left or right boundary of the shape triangle.

22 Expectation-Maximization Algorithm

Proposed by Dempster et al. (1977). If we were able to observe complete data Y we would wish to maximize

L(\theta \mid Y) \equiv \log f(Y \mid \theta), \qquad \theta \in \Theta \subseteq \mathbb{R}^d

We assume Y = (Y_obs, Y_mis). Each iteration of the EM algorithm comprises two steps, E and M. The (t+1)st E-step finds

Q(\theta \mid \theta^{(t)}) = \int L(\theta \mid Y)\, f(y_{mis} \mid Y_{obs}, \theta = \theta^{(t)})\, dy_{mis} = E\!\left[L(\theta \mid Y) \mid Y_{obs}, \theta = \theta^{(t)}\right]

The (t+1)st M-step then finds θ^(t+1) that maximizes Q, i.e.

Q(\theta^{(t+1)} \mid \theta^{(t)}) \ge Q(\theta \mid \theta^{(t)}) \quad \text{for all } \theta \in \Theta
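The slide is necessarily abstract, so here is a compact, self-contained R illustration of the E/M alternation on a much simpler normal variance-mixture model than the one fitted later in the talk: a location-scale Student t with known degrees of freedom, where the E-step reduces to conditional expectations of the latent mixing weights and the M-step has closed-form weighted updates. This is my own toy example, not part of the talk and not the generalized hyperbolic fit.

    ## Toy EM illustration: location-scale Student t with known df (nu), viewed as a
    ## normal variance mixture with latent precision weights u_i.
    ## E-step: tau_i = E[u_i | x_i] = (nu + 1) / (nu + ((x_i - mu)/sigma)^2)
    ## M-step: weighted mean and weighted variance (closed form).
    em_t <- function(x, nu = 5, tol = 1e-8, maxit = 500) {
      mu <- mean(x); sigma2 <- var(x)
      for (it in seq_len(maxit)) {
        ## E-step: conditional expectations of the latent weights
        tau <- (nu + 1) / (nu + (x - mu)^2 / sigma2)
        ## M-step: maximizers of the expected complete-data log-likelihood
        mu_new     <- sum(tau * x) / sum(tau)
        sigma2_new <- mean(tau * (x - mu_new)^2)
        if (abs(mu_new - mu) + abs(sigma2_new - sigma2) < tol) break
        mu <- mu_new; sigma2 <- sigma2_new
      }
      list(mu = mu, sigma = sqrt(sigma2), iterations = it)
    }

    set.seed(42)
    x <- 2 + 0.5 * rt(2000, df = 5)   # data from a t with location 2, scale 0.5
    em_t(x, nu = 5)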

23 Fitting the Multivariate Hyperbolic Distribution

The complete data Y can also be thought of as the augmented data Y_aug. The (t+1)st E-step can be written as

Q(\theta \mid \theta^{(t)}) = E\!\left[L(\theta \mid Y_{aug}) \mid Y_{obs}, \theta = \theta^{(t)}\right]

We augment the data to fit our model, which is f(x | θ) with θ = (λ, χ, ψ, µ, Σ, γ). Using the mean-variance mixture feature we could also alter our model to fit the data by partitioning θ = (θ_1, θ_2), where
θ_1 = (µ, Σ, γ)
θ_2 = (λ, χ, ψ)

24 Fitting the Multivariate Hyperbolic Distribution

We can construct the augmented log-likelihood

\ln L(\theta \mid X_1, \ldots, X_n, W_1, \ldots, W_n) = \sum_{i=1}^{n} \ln f_{X \mid W}(X_i \mid W_i; \theta_1) \;\;(1)\; + \;\sum_{i=1}^{n} \ln h_W(W_i; \theta_2) \;\;(2)

E-step: we calculate

Q(\theta; \theta^{[t]}) = E\!\left[\ln L(\theta; X_1, \ldots, X_n, W_1, \ldots, W_n) \mid X_1, \ldots, X_n; \theta^{[t]}\right]

M-step: find θ^[t+1] such that Q(\theta^{[t+1]} \mid \theta^{[t]}) \ge Q(\theta \mid \theta^{[t]}).

25 Fitting the Multivariate Hyperbolic Distribution

In practice the calculation of the (t+1)st E-step reduces to finding the values of the following quantities:

\delta_i^{[t]} = E\!\left[W_i^{-1} \mid X_i; \theta^{[t]}\right], \qquad \eta_i^{[t]} = E\!\left[W_i \mid X_i; \theta^{[t]}\right], \qquad \zeta_i^{[t]} = E\!\left[\ln(W_i) \mid X_i; \theta^{[t]}\right]

where W_i | X_i follows a GIG distribution with parameters

\lambda^{*} = \lambda - \tfrac{d}{2}, \qquad \chi^{*} = (X_i - \mu)'\Sigma^{-1}(X_i - \mu) + \chi, \qquad \psi^{*} = \psi + \gamma'\Sigma^{-1}\gamma
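Because W_i | X_i is GIG, these conditional expectations follow directly from the moment formula E(W^α) on the GIG slide: E[W | X_i] and E[W^{-1} | X_i] are ratios of Bessel functions, and E[ln W | X_i] is the derivative of E(W^α) with respect to α at α = 0, which the base-R sketch below (my own illustration) approximates by a central difference.

    ## E-step conditional expectations for W ~ GIG(lambda, chi, psi), using
    ## E(W^alpha) = (chi/psi)^(alpha/2) K_{lambda+alpha}(s) / K_lambda(s), s = sqrt(chi*psi).
    ## E[ln W] is dE(W^alpha)/dalpha at alpha = 0, approximated by a central difference.
    gig_estep_quantities <- function(lambda, chi, psi, h = 1e-5) {
      s <- sqrt(chi * psi)
      K <- function(nu) besselK(s, nu = abs(nu))             # K_{-nu} = K_nu
      eta   <- sqrt(chi / psi) * K(lambda + 1) / K(lambda)   # eta_i   = E[W_i   | X_i]
      delta <- sqrt(psi / chi) * K(lambda - 1) / K(lambda)   # delta_i = E[1/W_i | X_i]
      mom   <- function(a) (chi / psi)^(a / 2) * K(lambda + a) / K(lambda)
      zeta  <- (mom(h) - mom(-h)) / (2 * h)                  # zeta_i  = E[ln W_i | X_i]
      c(delta = delta, eta = eta, zeta = zeta)
    }

    ## Example: conditional GIG parameters for one observation in d = 4 dimensions,
    ## with lambda = (d + 1)/2 for the hyperbolic case, so lambda* = lambda - d/2
    gig_estep_quantities(lambda = (4 + 1)/2 - 4/2, chi = 3.2, psi = 2.5)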

26 Fitting the Multivariate Hyperbolic Distribution

In the M-step we maximize the augmented log-likelihood by maximizing each of its two components, Q_1(µ, Σ, γ; θ_1^[t]) and Q_2(λ, χ, ψ; θ_2^[t]), separately.
Q_1 can be maximized analytically, since the equations obtained by setting the first derivative of Q_1 with respect to each parameter to zero can be solved in closed form.
Q_2 must be maximized numerically.
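For concreteness, the closed-form Q_1 maximizers can be written as weighted updates in terms of the E-step weights δ_i and η_i. The sketch below follows the updates given for this EM scheme in McNeil, Frey and Embrechts (2005); the exact algebra is my transcription from memory of that source rather than from the talk, so treat it as an assumption to verify before use.

    ## One Q1 update of (mu, Sigma, gamma) given the E-step weights, following the
    ## weighted closed-form updates in McNeil, Frey & Embrechts (2005); the exact
    ## algebra should be checked against that source.
    ## X: n x d data matrix; delta[i] = E[1/W_i | X_i]; eta[i] = E[W_i | X_i].
    update_theta1 <- function(X, delta, eta) {
      n <- nrow(X); d <- ncol(X)
      delta_bar <- mean(delta)
      eta_bar   <- mean(eta)
      xbar      <- colMeans(X)
      ## gamma: weighted average of (xbar - x_i), scaled by (delta_bar * eta_bar - 1)
      gamma <- colMeans(delta * (matrix(xbar, n, d, byrow = TRUE) - X)) /
               (delta_bar * eta_bar - 1)
      ## mu: weighted mean of the data, corrected for the skewness term
      mu <- (colMeans(delta * X) - gamma) / delta_bar
      ## Sigma: weighted scatter about mu minus the skewness contribution
      centred <- sweep(X, 2, mu)
      Sigma <- crossprod(sqrt(delta) * centred) / n - eta_bar * tcrossprod(gamma)
      list(mu = mu, Sigma = Sigma, gamma = gamma)
    }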

27 MCECM Algorithm

Meng and Rubin (1993) proposed a variant of the EM algorithm called the MCECM algorithm. At the (t)th iteration:
The E-step remains the same as in the EM algorithm.
The maximization method for Q_1 is unchanged and is carried out first, giving θ_1^[t+1] = (µ^[t+1], Σ^[t+1], γ^[t+1]).
Q_2 is then maximized numerically as Q_2(λ, χ, ψ; θ_2^[t], θ_1^[t+1]). Compared with the calculation of Q_2 in the EM algorithm, it now incorporates θ_1^[t+1].

28 Results and Conclusions

Comparison of convergence speed between the EM and MCECM algorithms for a 4-dimensional hyperbolic distribution.
[Table: convergence code, log-likelihood and number of iterations for MCECM.BFGS, MCECM.NM, EM.BFGS and EM.NM.]
The MCECM appears to converge faster than the EM.
The estimation of λ causes both algorithms to become unstable.

29 Results
[Figure: pairwise plots of the generated data.]

30 Results
[Figure: pairwise plots of the fitted data.]
