
Bayesian Inference, Monte Carlo Sampling and Operational Risk

G. W. Peters and S. A. Sisson
School of Mathematics and Statistics, University of New South Wales, Sydney, Australia
Email: peterga@maths.unsw.edu.au. Email: Scott.Sisson@unsw.edu.au (Corresponding Author).

7th October 2006, revised 8th November 2006.

Abstract

Operational risk is an important quantitative topic as a result of the Basel II regulatory requirements. Operational risk models need to incorporate internal and external loss data observations in combination with expert opinion surveyed from business specialists. Following the Loss Distributional Approach, this article considers three aspects of the Bayesian approach to the modeling of operational risk. Firstly we provide an overview of the Bayesian approach to operational risk, before expanding on the current literature through consideration of general families of non-conjugate severity distributions, the g-and-h and GB2 distributions. Bayesian model selection is presented as an alternative to popular frequentist tests, such as Kolmogorov-Smirnov or Anderson-Darling. We present a number of examples and develop techniques for parameter estimation for general severity and frequency distribution models from a Bayesian perspective. Finally we introduce and evaluate recently developed stochastic sampling techniques and highlight their application to operational risk through the models developed.

Keywords: Approximate Bayesian Computation; Basel II Advanced Measurement Approach; Bayesian Inference; Compound Processes; Loss Distributional Approach; Markov Chain Monte Carlo; Operational Risk.

1) Introduction

Operational risk is an important quantitative topic in the banking world as a result of the Basel II regulatory requirements. Through the Advanced Measurement Approach, banks are permitted significant flexibility over the approaches that may be used in the development of operational risk models. Such models incorporate internal and external loss data observations in combination with expert opinion surveyed from business subject matter experts. Accordingly the Bayesian approach provides a natural, probabilistic framework in which to evaluate risk models.

Of the methods developed to model operational risk, the majority follow the Loss Distributional Approach (LDA). The idea of the LDA is to fit severity and frequency distributions over a predetermined time horizon, typically annual. Popular choices include exponential, Weibull, lognormal, generalised Pareto and g-and-h distributions [Dutta et al. 2006]. The best fitting models are then used to produce compound processes for the annual loss distribution, from which VaR and capital estimates may be derived. Under the compound process,

Y = Σ_{i=1}^{N} X_i,   (1)

where the random variable X_i ~ f(x) follows the fitted severity distribution. The random variable N ~ g(n), the fitted frequency distribution, is commonly modeled by Poisson, binomial and negative binomial distributions [Dutta et al. 2006]. The distribution of Y has no general closed form as it involves an infinite sum over all possible values of N, where the n-th term in the sum is weighted by the probability Pr(N = n) and involves an n-fold convolution of the chosen severity distribution, conditional on N = n. Actuarial research has considered the distribution function of Y for insurance purposes through Panjer recursions [Panjer, 2006]. Other approaches utilize inversion techniques such as inverse Fourier transforms to approximate annual loss distributions, although they typically require assumptions such as independence between frequency and severity random variables [Embrechts et al. 2003]. Techniques commonly adopted to fit frequency and severity models include extreme value theory [Cruz, 2002], Bayesian inference [Shevchenko et al. 2006; Cruz, 2002], dynamic Bayesian networks [Ramamurthy et al. 2005], maximum likelihood [Dutta et al. 2006] and EM algorithms [Bee, 2006].

There are a number of pertinent issues in fitting models to operational risk data: the combination of data sources from expert opinions and observed loss data; the elicitation of information from subject matter experts, which incorporates survey design considerations; and sample biases in loss data collection, such as survival bias, censoring, incomplete data sets, truncation and, since rare events are especially important, small data sets. The LDA is a convenient framework that can be enhanced through application of the Bayesian paradigm. [Shevchenko et al. 2006] introduce Bayesian modeling under an LDA framework but restrict consideration to classes of frequency and severity models which admit conjugate forms.

This permits simple posterior simulation, and parameter estimates can often be derived analytically. Modern Bayesian methods have moved away from restrictive conjugate modelling, which has been made possible by the development of sophisticated simulation procedures such as Markov chain Monte Carlo (MCMC), importance sampling (IS) and sequential Monte Carlo (SMC) algorithms [Doucet et al. 2006; Peters, 2005]. As modelling of severity and frequency distributions in operational risk becomes more mature, moving away from conjugate families becomes necessary. For example, [Dutta et al. 2006] recently propose the use of g-and-h distributions for the severity distribution, which only admit conjugacy in special cases, such as when the distribution reduces to a lognormal form. In these special cases, the work of [Shevchenko et al. 2006] could be applied.

In this article we consider three aspects of Bayesian inference as applied to the modelling of operational risk. We firstly present an exposition of why and how Bayesian methods should be considered as the modeling paradigm of choice, and explore how this approach fits into an LDA setting. We examine applications of the Bayesian approach to model selection and introduce to operational risk the notions of Bayes factors, the Bayesian information criterion (BIC) and the deviance information criterion (DIC). Secondly, we extend the current literature on Bayesian modelling as applied to operational risk to incorporate general families of severity and frequency distributions. Finally we investigate modern techniques for sampling and parameter estimation for general severity and frequency distribution models. These include MCMC, simulated annealing and approximate Bayesian computation. Throughout we focus on the development of non-conjugate severity distribution models, initially when the likelihood has an analytic form, and subsequently when the likelihood function may not be written down analytically, such as in the case of the g-and-h distribution. We conclude with some practical examples of the ideas developed in an operational risk modeling situation, followed by a discussion.

2) Bayesian Inference

2.1) Bayesian Inference and Operational Risk

Here we briefly review the fundamental ideas required for Bayesian analysis. There are two well-studied approaches to performing probabilistic inference in the analysis of data: the frequentist and Bayesian frameworks. In the classical frequentist approach one takes the view that probabilities may be seen as relative frequencies of occurrence of random variables. This approach is often associated with the work of Neyman and Pearson, who described the logic of statistical hypothesis testing. In a Bayesian approach the distinction between random variables and model parameters is artificial, and all quantities have an associated probability distribution representing a degree of plausibility. By conditioning on the observed data, a probability distribution is derived

over competing hypotheses. For in-depth discussion on the details of each approach see, for example, [Bernardo et al. 1994].

The Bayesian paradigm is a widely accepted means to implement a modern statistical data analysis involving the distributional estimation of k unknown parameters, θ_{1:k} = [θ_1, ..., θ_k], from a set of n observations y_{1:n} = [y_1, ..., y_n]. In operational risk, the observations could be counts of loss events, loss amounts or annual loss amounts in dollars. Prior knowledge of the system being modeled can be formulated through a prior distribution p(θ_{1:k}). In operational risk this would involve prior elicitation from subject matter experts through surveys and workshops. From the mathematical model approximating the observed physical phenomena, the likelihood p(y_{1:n} | θ_{1:k}) may be derived, thereby relating the parameters to the observed data. In the context of operational risk this reflects the quantitative team's modeling assumptions, such as the class of severity and frequency distributions representing the loss data observations. The prior and likelihood are then combined through Bayes' rule [Bayes, 1763]:

p(θ_{1:k} | y_{1:n}) = p(y_{1:n} | θ_{1:k}) p(θ_{1:k}) / ∫_Θ p(y_{1:n} | θ_{1:k}) p(θ_{1:k}) dθ_{1:k},   (2)

to give the posterior probability of the parameters, having observed the data and elicited expert opinion. The posterior may then be used for purposes of predictive inference. In this manner the Bayesian approach naturally provides a sound and robust means of combining actual loss data observations with subject matter expert opinions and judgments. Much literature has been devoted to understanding how to sensibly assign prior probability distributions and their interpretation in varying contexts. There are many useful texts on Bayesian inference; for further details see e.g. [Box et al. 1992; Gelman et al. 1995; Robert, 2004].

2.2) Bayesian Parameter Estimation and Operational Risk

Maximum likelihood is the most common technique for parameter estimation employed by operational risk practitioners, and this approach has several useful properties including asymptotic efficiency and consistency. This is typically applied to an LDA analysis by using an optimization routine to find the maximum of a truncated likelihood distribution surface. One issue with this approach is that variability inherent in very small and incomplete datasets can produce drastically varying parameter estimates. This is where approaches such as the EM algorithm can be useful. In addition, even given severity and frequency maximum likelihood parameter estimates, the problem of fusing fitted severity and frequency distributions with expert opinion still remains. The Bayesian approach avoids these problems under a single probabilistic framework.

In the Bayesian setting, the full posterior distribution should be used directly for predictive and inferential purposes. However, if the operational risk practitioner wishes to perform estimation of fixed parameter values, a number of sensible options are available. The two most common estimators are the maximum a posteriori (MAP) estimator and the minimum mean square error (MMSE), or minimum variance, estimator [Ruanaidh et al. 1996]. The MAP estimator maximises the likelihood function weighted by the prior probability,

θ_{1:k,MAP} = argmax_{θ_{1:k}} p(θ_{1:k} | y_{1:n}),   (3)

whereas the MMSE estimator is given by

θ_{1:k,MMSE} = ∫ θ_{1:k} p(θ_{1:k} | y_{1:n}) dθ_{1:k}.   (4)

Thus in a Bayesian setting the MAP and MMSE estimators respectively equal the posterior mode and mean. It is routine in the MCMC literature to estimate the MAP or MMSE estimates using samples from the Markov chain Monte Carlo algorithm created to sample from the posterior of interest. These estimates are then used for LDA, through simulation, to obtain empirical estimates of the compound process annual loss distribution characteristics, such as quantiles, mean, variance, skew and kurtosis. This simulation of the compound process could be performed easily through a Monte Carlo sampling approach or alternatively through a fast inverse Fourier transform approximation of the compound process characteristic function.

2.3) Bayesian Model Selection and Operational Risk

Before the quantitative analyst can estimate annual loss distribution characteristics, choices are required regarding the severity and frequency distributions. We propose three popular Bayesian model selection criteria which may be adopted for frequency and severity distribution model selection in the context of operational risk. These approaches provide Bayesian alternatives to standard tools used by operational risk practitioners, who frequently utilize Kolmogorov-Smirnov, Anderson-Darling and chi-squared tests. The task is therefore to implement Bayesian model selection criteria as a means of selecting between different posterior distributions. In an LDA setting, situations which require decisions between different posterior distributions include where different prior distributions are derived from expert opinion, or when fitting competing severity distribution models to observed data. The ideal case for choosing between posterior models occurs when integration of the posterior (with respect to the parameters) is possible. Here one can evaluate the posterior odds ratio, which is the Bayesian alternative to classical hypothesis testing. For two severity distribution models M_1 and M_2, parameterized by θ_{1:k} and α_{1:j} respectively, the posterior odds ratio of M_1 over M_2 is given by

p(M_1 | y_{1:n}) / p(M_2 | y_{1:n}) = [p(y_{1:n} | M_1) p(M_1)] / [p(y_{1:n} | M_2) p(M_2)]
= [p(M_1) ∫ p(y_{1:n} | θ_{1:k}, M_1) p(θ_{1:k} | M_1) dθ_{1:k}] / [p(M_2) ∫ p(y_{1:n} | α_{1:j}, M_2) p(α_{1:j} | M_2) dα_{1:j}]
= [p(M_1) / p(M_2)] BF_{12},   (5)

which is the ratio of posterior model probabilities, having observed the data, where p(M_i) is the prior probability of model M_i. This quantity is the Bayesian version of a likelihood ratio test, and a value greater than 1 indicates that model M_1 is more likely than model M_2 (given the observed data, prior beliefs and choice of the two models). The term BF_{12} is known as the Bayes factor. We note that the application of Bayes factors in operational risk modeling is sensible as the priors elicited from subject matter experts will be proper, rather than vague or improper, which can complicate model comparisons when evaluating (5).

If analytic integration is not possible, either numeric or analytic approximations can be made. The latter approach leads to the Bayesian and deviance information criteria. The Bayesian information criterion (BIC), or the Schwarz criterion [Robert, 2004], is given by

BIC = -2 ln(L) + k ln(n),   (6)

where n is the number of observations, k is the number of free parameters to be estimated and L is the maximized value of the likelihood function for the estimated model. This criterion may be evaluated for each fitted severity or frequency distribution model, with the selected model (i.e. the one with the lowest BIC value) corresponding to a model which both fits the observed data well and is parsimonious. The BIC criterion, first derived in [Schwarz, 1978], is obtained by forming a Gaussian approximation of the log posterior distribution around the posterior mode using Laplace's method. BIC is an asymptotic approximation, so the number of observations should be large.

An alternative model selection criterion for operational risk practitioners, when developing hierarchical Bayesian models, is the deviance information criterion (DIC):

DIC = 2 D̄(ϑ) − D(ϑ̄),   (7)

which is a generalization of BIC for Bayesian hierarchical models. The bar denotes an averaging over ϑ, and the term D(ϑ) is the Bayesian deviance D(ϑ) = −2 log(p(y_{1:n} | ϑ)) + C, where p(y_{1:n} | ϑ) is the likelihood function depending on data y_{1:n} and parameters ϑ, and where C is a constant. (This constant is common to all models, and so may be safely ignored.) See, e.g., [Spiegelhalter et al. 2002]. The DIC is particularly useful in Bayesian model selection problems in which the posterior distributions of the models have been determined through simulation. We shortly detail methods in which this may be achieved as operational risk itself moves away from conjugate families of severity and frequency distributions. For detailed discussion on model selection criteria and comparison of BIC, Bayes factors and the Akaike information criterion (a popular frequentist model selection criterion) see [Wasserman, 1997]. For a detailed exposition on the DIC criterion see [Spiegelhalter et al. 2002]. These model selection techniques form an integral part of model development where posterior distributions are only known up to a normalization constant or when no analytic form for the likelihood is obtainable, as we now consider.
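As a brief illustration of how equation (6) can be used in practice, the following sketch compares lognormal and Weibull severity fits by BIC. This is a minimal sketch under our own assumptions: the simulated data, the two candidate models and the use of scipy are illustrative and do not come from the original study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=1.0, sigma=1.5, size=250)   # illustrative loss sample

def bic(logpdf, data, k):
    # BIC = -2 ln(L) + k ln(n), as in equation (6); lower values are preferred.
    return -2.0 * logpdf(data).sum() + k * np.log(len(data))

# Maximum likelihood fits for two candidate severity models.
mu, sigma = np.log(losses).mean(), np.log(losses).std()
c, loc, scale = stats.weibull_min.fit(losses, floc=0)

bic_lognormal = bic(stats.lognorm(s=sigma, scale=np.exp(mu)).logpdf, losses, k=2)
bic_weibull = bic(stats.weibull_min(c, loc, scale).logpdf, losses, k=2)
print(bic_lognormal, bic_weibull)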

3) Beyond Conjugate Classes of Distributions for Modelling Operational Risk

Hierarchical Bayesian Models for Operational Risk

[Shevchenko et al. 2006] present conjugate Poisson-gamma, lognormal-normal and Pareto-gamma classes of Bayesian frequency and severity models, and discuss how to handle these families in the context of truncated data, which is a common occurrence in operational risk. This derives from policies in which institutions only record losses when they are above a defined threshold level. So what is conjugacy? Prior distributions are conjugate to a likelihood function if the resulting posterior distribution has a standard distributional form. Accordingly, the approach of [Shevchenko et al. 2006] is elegant as the posterior distribution of the parameters is analytically known and easy to sample from. However, conjugacy restricts the classes of distributions which may be used to elicit information from subject matter experts, and is also restrictive in that it does not in general accommodate recently analysed classes of distributions for operational risk such as the g-and-h or the GB2 distributions [Dutta et al. 2006; He et al. 2006]. In this section and those that follow, we demonstrate the application of the Bayesian framework to non-conjugate families and obtain parameter estimates for use in LDA analysis. We note that in allowing for greater modelling flexibility by moving away from conjugate families of distributions, one incurs a greater computational burden to simulate from the posterior distribution. This burden is greatest when the likelihood itself admits no closed form.

The models we consider for the observed loss data are assumed to be from either the g-and-h distribution family or the generalized Beta distribution of the 2nd kind (GB2), as presented in [Dutta et al. 2006]. The methodology we present is not restricted to these choices. Consideration of the GB2 and the g-and-h distribution is believed to be directly beneficial to the modeling of operational risk data. The study performed by [Dutta et al. 2006] demonstrates empirical evidence of why these two types of distribution should be considered in the context of operational risk modeling, using actual loss data collected from a variety of financial institutions. One of the key reasons for considering these models is their flexibility. It is heavily debated in the banking industry which class of distributions, parametric or non-parametric, is most appropriate for the modeling of actual loss data. This issue is compounded by the fact that modeling of operational risk data can be done at various levels of an organization, ranging from the parent level, in which all loss data in the organization is analyzed, down to individual business unit and risk type levels. Hence, designing models with the flexibility to capture salient features of loss data at multiple levels of an organization is of direct benefit.

In addition to these issues, even after developing an appropriate model for loss data observations, a mathematically sound technique is required for combining this loss data model with expert opinion. In this context, this paper provides a robust method in which

these flexible models for the actual observed loss data can be combined with expert opinion at different levels of an organization. The requirement in doing this is the ability to encode in a prior distribution the subject matter experts' judgments, which are typically elicited through surveys. The design of these surveys is critical; an excellent discussion of constructing and performing expert judgment elicitation is provided by [O'Hagan, 2006].

The flexibility of the GB2 and the g-and-h distributions lies in the fact that they encompass families of distributions. That is, depending on the parameter values used in the specification of these distributions, they can recover a range of parametric distributions, including the typical severity distribution models which are routinely applied in practical industry modeling of operational risk data, such as the lognormal, Weibull and Gamma. For a review of which parametric distributions belong to the g-and-h family see [Bookstaber et al. 1987]. In addition, they allow the model to achieve a wide range of location, scale, skewness and kurtosis; the purpose of such flexibility, as mentioned previously, is to allow the model being fitted to the data to capture as much of the salient features as possible. The cost of fitting such flexible models is an increase in the number of parameters to be estimated. This can be problematic if modeling of loss data is being performed at a granular level in which only small data sample sizes are present, and should be considered in practical implementation.

The GB2 distribution has density function given by

f(y | a, b, p, q) = a y^{ap−1} / (b^{ap} B(p,q) [1 + (y/b)^a]^{p+q}) I_{(0,∞)}(y),   (8)

where B(p,q) is the Beta function, the parameters a, p and q control the shape of the distribution, and b is the scale parameter. For discussion of how the parameters are interpreted in terms of location, scale and shape see [Dutta et al. 2006]. This distribution poses a challenge to Bayesian modelling as it does not admit a conjugate family of prior distributions, and accordingly necessitates more advanced posterior simulation techniques.

The g-and-h distributional family is obtained through transformation of a standard normal random variable Z ~ N(0,1) via

Y_{g,h}(Z) = [(exp(gZ) − 1)/g] exp(hZ²/2).   (9)

In this article we consider the g-and-h four parameter family of distributions presented in [Dutta et al. 2006], given by the linear transformation

X_{g,h}(Z) = A + B Y_{g,h}(Z),   (10)

where g and h are real valued functions of Z and where A and B are location and scale parameters. The h parameter is responsible for kurtosis and elongation (how heavy the tails are), and the g parameter controls the degree of skewness. Positive and negative values for g respectively skew the distribution to the right and left. With examples, [Dutta et al. 2006] discuss the shapes of the g-and-h distribution for different settings of parameter values.
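To make the transformations in equations (9) and (10) concrete, the following minimal sketch draws samples from the four-parameter g-and-h family; the parameter values are illustrative and the sketch assumes g ≠ 0.

import numpy as np

rng = np.random.default_rng(2)

def sample_g_and_h(n, A=0.0, B=1.0, g=2.0, h=0.2):
    # Equations (9)-(10): nonlinear transform of Z ~ N(0,1); assumes g != 0.
    # g controls skewness, h controls tail heaviness (kurtosis).
    z = rng.standard_normal(n)
    y = (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)
    return A + B * y

x = sample_g_and_h(10_000)   # parameter values are illustrative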

For simplicity we adopt constant values for g and h, although relaxation of this assumption does not preclude inference. This distributional family poses a unique challenge to Bayesian modeling as the likelihood function does not in general admit an analytic closed form, since the density may only be expressed through a highly non-linear transform of a standard normal random variable. Accordingly, very recent state-of-the-art Monte Carlo sampling strategies are required, designed specifically for this purpose. These methods, known as approximate Bayesian computation (ABC), were first introduced in the statistical genetics literature [Beaumont et al. 2002; Tavaré et al. 2003]. As noted in [Dutta et al. 2006], the g-and-h distribution has a domain given by the entire real line; this can be handled in practice by introducing a truncation threshold into the model. This paper only considers the case of deterministic, known threshold situations, though future extensions could incorporate unknown or stochastic thresholds.

In the above models we adopt Gamma priors although, as before, this choice is arbitrary and other selections of prior could be used without difficulty. For example, Gaussian priors can be used if one does not want to restrict the domain of the parameters in the model to R⁺. The choice of prior parameter values will in general be elicited from a subject matter expert through a survey or workshop. Prior elicitation in general is a challenging process. In the current setting an interactive feedback process may be appropriate, whereby subject matter experts repeatedly modify their prior parameters until they judge that a realistic posterior annual loss distribution has been produced. Other parameter extraction processes exist; these include graphical distributional shape demonstrations, and combinations of this with questions about worst case (maxima) and typical loss (quantile) values and frequencies of losses. See, for example, [Garthwaite et al. 2000; O'Hagan, 1998].

Model 1: GB2 likelihood with Gamma prior distributions.

The posterior distribution for this model is derived from (8) and the independent Gamma priors p(a | α_a, β_a), p(b | α_b, β_b), p(p | α_p, β_p) and p(q | α_q, β_q), yielding

p(a, b, p, q | y_{1:n}) = h(y_{1:n} | a, b, p, q) p(a) p(b) p(p) p(q) / ∫ h(y_{1:n} | a, b, p, q) p(a) p(b) p(p) p(q) da db dp dq
∝ [Π_{i=1}^{n} a y_i^{ap−1} / (b^{ap} B(p,q) [1 + (y_i/b)^a]^{p+q})]
× a^{α_a−1} b^{α_b−1} p^{α_p−1} q^{α_q−1} exp(−a/β_a − b/β_b − p/β_p − q/β_q) / [Γ(α_a) Γ(α_b) Γ(α_p) Γ(α_q) β_a^{α_a} β_b^{α_b} β_p^{α_p} β_q^{α_q}].   (11)

Clearly this model does not admit a posterior from which it is easy to simulate using simple methods such as inversion sampling.

Model 2: g-and-h likelihood with Gamma prior distributions.

The posterior distribution for this model, p(g, h, A, B | y_{1:n}), cannot be analytically derived. However, we assume subject matter expert elicited Gamma priors

p(a α A,β A ), p(b α B,β B ), p(g α g,β g ) and p(h α h,β h ), and for simplicity of presentation, constant g and h. 4) Algorithms for Simulation from Bayesian Posterior Models for Operational Risk The overall simulation approach to generate an empirical estimate of the annual loss distribution under LDA proceeds as follows:. Simulate N realizations,{ θ : k,( i) } i = : N, from the posterior distribution using an appropriate simulation technique, (MCMC, SA-MCMC, ABC-MCMC).. Using these samples of parameters to form an empirical estimate of posterior distribution one can then obtain Bayesian parameter estimates from the sample θ where either MAP or MMSE estimators can be empirically estimated. { : k,( i) } i = : N 3. Apply the Bayesian model selection criterion of your choice to determine the most appropriate model given the data and the elicited expert opinions. Now the application of an LDA Operational Risk approach given Bayesian parameter estimates (MAP or MMSE) proceeds as follows: Repeat j=,, M times { 4. Using the Bayesian estimated frequency parameter(s), sample a realization for the number of losses in the j th year, N(j), from the chosen frequency distribution. 5. Using the Bayesian estimated severity parameters, sample N(j) realizations for the losses in dollars for the j th year from the chosen severity distribution. 6. Calculate Y(j) using equation () to obtain the annual loss for the j th year. } For the posterior simulation in step, possible methods include Markov chain Monte Carlo (MCMC), importance sampling, annealed MCMC, sequential Monte Carlo (SMC) and approximate Bayesian computation (ABC). In this article we examine the MCMC, annealed MCMC and ABC in the context of Models and above. 4.) Simulation Technique : Markov chain Monte Carlo For most distributions of interest, it will not be feasible to draw samples via inversion or rejection sampling this includes Model. Markov chain Monte Carlo constructs an ergodic Markov chain, {,..., N }, taking values, i, in a measurable space. For Model, =(a,b,p,q). The Markov chain is constructed to have the posterior distribution p ( a, b, p, q y : n ) as its stationary distribution, thereby permitting the chain realizations to be used as samples from the posterior. These can then be used to produce MAP and MMSE estimates which have asymptotic convergence properties as the length of the

chain N. An excellent review of the properties of general state space Markov chain theory is given by [Meyn et al. 993]. Under the general framework established by Metropolis and Hastings [Metropolis et al. 953; Hastings et al. 97; Gilks et al. 996], transitions from one element in the chain to the next are determined via a transition kernel K( i-, i) which satisfies the detailed balance condition p( i- y :n ) K( i-, i ) = p( i y :n ) K( i, i- ) () where p( i- y :n ) is the desired stationary distribution. The transition kernel K( i-, i) defines the probability of proposing to go from i- to i. The transition kernel contains in its definition a proposal density q, from which the proposed next value in the chain is drawn, and an acceptance probability α, which determines whether the proposed value is accepted or if the chain remains in its present state. The acceptance probability is crucial as it ensures that the Markov chain has the required stationary distribution through () [Gilks et al. 996]. In this article, for simplicity, we adopt a Gaussian random walk as the proposal density q, with variance chosen to allow efficient movement around the posterior support. When evaluating Monte Carlo estimates for an integral, such as the posterior mean (i.e. the MMSE estimator), in order to ensure the variance of the estimate is as small as possible it is common to use only those realizations which one is fairly confident come from the Markov chain once it has reached its stationary regime. This is typically achieved by discarding a certain number of initial samples (known as burnin ) [Gilks et al. 996]. Metropolis-Hastings Algorithm for Model :. Initialise i= and = [a,b,p,q] randomly sampled from the support of the posterior.. Draw proposal θ i+ from proposal distribution q( i,.). 3. Evaluate the acceptance probability α θ i,θ i+ = min, p θ ( i+ y :n )q( θ i+,θ i ) p( θ i y :n )q( θ i,θ i+ ). 4. Sample random variate U U[,]. 5. If U α ( θ i, θ i + ) then set θ = θ Else set θ i+ = θ i. i+ i+ 6. Increment i = i+. 7. If i<n go to. Note that this sampling strategy only requires the posterior distribution to be known up to its normalization constant. This is important as solving the integral for the normalizing constant in this model is not straightforward.

4.2) Simulation Technique 2: Approximate Bayesian Computation

It is not possible to apply standard MCMC techniques to Model 2, as the likelihood function, and therefore the acceptance probability, may not be written down analytically or evaluated. A recent advancement in computational statistics, approximate Bayesian computation methods are ideally suited to situations in which the likelihood is either computationally prohibitive to evaluate or cannot be expressed analytically [Beaumont et al. 2002; Tavaré et al. 2003; Sisson et al. 2006]. Under most ABC algorithms, evaluation of the likelihood is circumvented by the generation of a simulated dataset y*_{1:n} from the likelihood conditional upon the proposed parameter vector θ'_{i+1}. Accordingly, this simulation method is appropriate for the g-and-h distribution as there is no general analytic expression available, although simulation from this density is computationally inexpensive. We present the MCMC version of ABC methods, introduced by [Tavaré et al. 2003].

Approximate Bayesian computation algorithm for Model 2:

1. Initialise i = 0 and θ_0 = [A, B, g, h] randomly sampled from the support of the posterior.
2. Draw proposal θ'_{i+1} from the proposal distribution q(θ_i, ·).
3. Generate a simulated data set y*_{1:n} from the likelihood conditional on the proposal parameters θ'_{i+1}.
4. Evaluate the acceptance probability α(θ_i, θ'_{i+1}) = min{1, [p(θ'_{i+1}) q(θ'_{i+1}, θ_i)] / [p(θ_i) q(θ_i, θ'_{i+1})]} × I[ρ(S(y*_{1:n}), S(y_{1:n})) < ε].
5. Sample a random variate U ~ U[0,1].
6. If U ≤ α(θ_i, θ'_{i+1}) then set θ_{i+1} = θ'_{i+1}; else set θ_{i+1} = θ_i.
7. Increment i = i + 1.
8. If i < N go to 2.

The difference between ABC-MCMC and standard MCMC is that the intractable likelihood is replaced with an approximation. This approximation crudely replaces the likelihood evaluation with one when the distance (under a metric ρ) between a vector of summary statistics S(·), acting on both the simulated data y*_{1:n} and the observed loss data y_{1:n}, is within an acceptable tolerance, and zero otherwise. The Markov chain created through this process has stationary distribution p(θ, y*_{1:n} | ρ(S(y*_{1:n}), S(y_{1:n})) ≤ ε), from which the desired marginal, p(θ | ρ(S(y*_{1:n}), S(y_{1:n})) ≤ ε), is easily derived. For ε = 0 this is exactly p(θ | y_{1:n}).

Various summary statistics could be used, such as the mean, variance, skewness, kurtosis and quantiles. For the illustrations in this article, we consider the sample mean, variance and skewness under Euclidean distance.
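The following sketch implements the ABC-MCMC algorithm above for Model 2, using the sample mean, variance and skewness as summary statistics under Euclidean distance. The tolerance, step size and Gamma prior hyperparameters are illustrative assumptions, and the symmetric proposal means only the prior ratio enters the acceptance step.

import numpy as np
from scipy.stats import gamma, skew

rng = np.random.default_rng(4)

def sample_g_and_h(theta, n):
    # Simulate from the g-and-h likelihood via equations (9)-(10); assumes g != 0.
    A, B, g, h = theta
    z = rng.standard_normal(n)
    return A + B * (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def summaries(x):
    return np.array([x.mean(), x.var(), skew(x)])   # S(.): mean, variance, skewness

def abc_mcmc(y_obs, n_iter=100_000, step=0.05, eps=1.0, alpha=2.0, beta=2.0):
    s_obs = summaries(y_obs)
    theta = np.ones(4)                              # arbitrary start in the support
    lp = gamma.logpdf(theta, alpha, scale=beta).sum()
    chain = np.empty((n_iter, 4))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(4)
        if np.all(prop > 0):                        # Gamma priors restrict to R+
            lp_prop = gamma.logpdf(prop, alpha, scale=beta).sum()
            y_sim = sample_g_and_h(prop, len(y_obs))            # step 3
            close = np.linalg.norm(summaries(y_sim) - s_obs) < eps  # indicator in step 4
            if close and np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        chain[i] = theta
    return chain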

One option for the tolerance level, ε, is to let it be adaptively set by the algorithm [Roberts et al. 2006]; however, for simplicity we consider the tolerance as constant.

4.3) Simulation Technique 3: Simulated Annealing

Developed in statistical physics [Kirkpatrick et al. 1983], simulated annealing is a form of probabilistic optimization which may be used to obtain MAP parameter estimates. It implements an MCMC sampler which traverses a sequence of distributions in such a way that one may quickly find the mode of the distribution with the largest mass. A typical sequence of distributions used is given by [p(a, b, p, q | y_{1:n})]^{γ(i)}, where γ(i) is the temperature of the distribution, or the annealing schedule. When γ(i) < 1 the distribution is flattened, thereby permitting efficient exploration of the distribution by the Markov chain sampler. As γ(i) > 1 increases, the posterior modes become highly peaked, thereby forcing the MCMC sampler towards the MAP estimate, which is taken as the final chain realization. The approach taken in this article will be to perform annealing with a combination of Markov chain samplers started at random locations in the support of the posterior, using a logarithmic annealing schedule.

Simulated Annealing algorithm:

1. Initialise i = 0 and θ_0 = [a, b, p, q] randomly sampled from the support of the posterior.
2. Draw proposal θ'_{i+1} from the proposal distribution q(θ_i, ·).
3. Evaluate the acceptance probability α(θ_i, θ'_{i+1}) = min{1, [p(θ'_{i+1} | y_{1:n})^{γ(i+1)} q(θ'_{i+1}, θ_i)] / [p(θ_i | y_{1:n})^{γ(i+1)} q(θ_i, θ'_{i+1})]}.
4. Sample a random variate U ~ U[0,1].
5. If U ≤ α(θ_i, θ'_{i+1}) then set θ_{i+1} = θ'_{i+1}; else set θ_{i+1} = θ_i.
6. Increment i = i + 1.
7. If i < N go to 2.
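A minimal sketch of the annealed sampler follows. It reuses a log-posterior function such as the one in the earlier Metropolis-Hastings sketch, and the precise logarithmic schedule shown is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(5)

def simulated_annealing(log_post, y, n_iter=2_000, step=0.1):
    # Targets [p(theta | y)]^{gamma(i)} with a logarithmically increasing
    # schedule (the exact schedule here is illustrative); the final chain
    # state is taken as the approximate MAP estimate.
    gammas = np.log(np.e + np.arange(n_iter))
    theta = np.ones(4)                              # start in the posterior support
    lp = log_post(theta, y)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(4)
        lp_prop = log_post(prop, y)
        if np.log(rng.uniform()) < gammas[i] * (lp_prop - lp):  # tempered ratio
            theta, lp = prop, lp_prop
    return theta

# Running several such chains from random starting points and averaging the
# returned estimates mirrors the parallel-chain approach used in Section 5.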

5) Simulation Results and Discussion

5.1) Parameter Estimation and Simulation from Model 1: GB2 severity distribution

It was demonstrated in [Bookstaber et al. 1987] that the GB2 distribution (8) encapsulates many different classes of distribution for certain limits on the parameter set. These include the lognormal (a → 0, q → ∞), log Cauchy (a → 0) and Weibull/Gamma (q → ∞) distributions, all of which are important severity distributions used in operational risk models in practice.

We now construct a synthetic dataset with which to illustrate the above methods. The severity loss data will be generated from a Lomax distribution (Pareto distribution of the 2nd kind, or Johnson Type VI distribution), corresponding to GB2 parameters [a, b, p, q] = [1, b, 1, q], which when reparameterized in terms of λ > 0, α > 0 and x > 0 has density

h(x; α, λ) = (α/λ) [1 + x/λ]^{−(α+1)}.   (13)

The heavy-tailed Lomax distribution has shape parameter α (= q) and scale parameter λ (= b). The Lomax distribution has been selected for this study for two reasons. The first is practical: in applications of operational risk modelling, it is believed that heavy tailed severity distributions should be considered in order to capture the characteristics of actual operational risk severity observations. It is standard industry practice in operational risk modeling in banking to consider such distributions, as discussed in [Dutta et al. 2006]. The second, less significant reason for considering the Lomax distribution is that it provides a practically viable distribution from which to sample the loss data set to be analysed.

We generate 5 artificial loss values in thousands of dollars. In order to generate these data we use the relation whereby the 4-parameter GB2 distribution can be obtained through the transformation of X_1 ~ Gamma(p, 1) and X_2 ~ Gamma(q, 1) via [Devroye, 1986]

Y = b (X_1/X_2)^{1/a}.   (14)

Accordingly we use a GB2 distribution with parameters [ , , , ]. Figure 1 demonstrates the shape of this density.
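The transformation in equation (14) is straightforward to implement; in the sketch below the GB2 parameter values are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(6)

def sample_gb2(n, a=1.0, b=10.0, p=1.0, q=2.0):
    # Equation (14): Y = b (X1 / X2)^(1/a) with X1 ~ Gamma(p, 1) and
    # X2 ~ Gamma(q, 1). The parameter values shown are illustrative;
    # a = p = 1 recovers the heavy-tailed Lomax case of equation (13).
    x1 = rng.gamma(p, size=n)
    x2 = rng.gamma(q, size=n)
    return b * (x1 / x2) ** (1.0 / a)

losses = sample_gb2(500)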

[Figure 1: Severity distribution (Lomax); density f(x) against loss in $.]

We further specify the parameters for the Gamma prior distributions as α_b = 5, β_b = , α_a = α_p = α_q = .5 and β_a = β_p = β_q = .

Results for Markov chain Monte Carlo:

We implement MCMC of length N = 5, with the initial , iterations discarded as burnin, and using a truncated multivariate Gaussian distribution MVN([a, b, p, q], σ²_prop I_4) I([a, b, p, q] > [0, 0, 0, 0]) as the random walk proposal density, with mean given by [a, b, p, q] and covariance matrix σ²_prop I_4, where σ²_prop = . and I_4 is the 4 × 4 identity matrix and I(·) is the indicator function, which is one when the logical condition is met and zero otherwise. Figure 2 illustrates sample paths of a, b, p and q. These paths demonstrate that the Markov chain produced through this simulation process is mixing well over the support of the posterior distribution and is exploring the regions of support for the posterior which coincide with the true parameter values, thereby demonstrating correct sampler performance.

[Figure 2: Markov chain sample paths of a, b, p and q against iteration.]

The resulting MAP estimates are shown in Figure 3 as the modes of the histogram density estimates (see results for simulated annealing). The posterior mean (MMSE) estimates and posterior standard deviations were approximated using the Markov chain realizations as a_MMSE = .449 (std. dev. .66), b_MMSE = 9.664 (std. dev. .767), p_MMSE = .979 (std. dev. .544) and q_MMSE = .8356 (std. dev. .47).

[Figure 3: Histograms of the posterior samples of a, b, p and q, i.e. p(a|y), p(b|y), p(p|y) and p(q|y).]

Results for Simulated Annealing:

We implement the simulated annealing algorithm with randomly selected starting points drawn from a Gaussian distribution N([a, b, p, q], I) I([a, b, p, q] > [0, 0, 0, 0]). These starting points are then used to produce parallel MCMC chains, each using the same proposal and prior specifications as the MCMC simulation. The annealing schedule length was N = , and the schedule γ(i) was logarithmic between log(e) and log( ). We note that simulated annealing is appropriate in this setting since the LDA approach to modeling operational risk only requires point estimates for the parameters of the severity and frequency distributions. In this case a probabilistic optimization technique has been used to obtain the point estimates. The mean MAP estimates and the standard deviations of the MAP estimates obtained from the parallel chains are a_MAP = .8 (std. dev. .98), b_MAP = .434 (std. dev. .3343), p_MAP = .367 (std. dev. .697) and q_MAP = .7 (std. dev. .69). Figure 4 illustrates the MAP estimates obtained for the (randomly initialized) parallel annealed MCMC chains. These estimates were then averaged to obtain the mean MAP and standard deviation estimates above. More consistent results between chains can be achieved through longer chains and more gradual annealing schedules.

[Figure 4: Annealed MCMC MAP estimates of a, b, p and q against index of parallel annealed MCMC chain.]

There is a trade-off between using simulated annealing and MCMC in Model 1. Annealing stochastically optimizes the posterior distribution and is typically very fast in terms of simulation time. The total simulation time to produce all the parallel chains was less than 6 minutes in non-optimized Matlab code on a P4.4GHz laptop with 48MB of RAM. However, annealing provides MAP estimates of parameters, whereas MCMC estimates the entire posterior distribution. Similarly, a tradeoff exists between the length of the annealing schedule and the number of annealing chains.

5.2) Parameter Estimation and Simulation from Model 2: g-and-h severity distribution

We now demonstrate the use of the g-and-h distribution for modeling the severity distribution in an LDA operational risk framework. Posterior simulation from Model 2 requires the use of approximate Bayesian computation methods. We generate an observed dataset of length 5 using the parameters A = , B = , g = and h = , as illustrated in Figure 5 after truncation is applied. Parameters for the Gamma priors are specified as α_A = α_B = α_h = .5, β_A = β_B = β_h = , α_g = and β_g = .

[Figure 5: Histogram of loss data (counts against loss in $ thousands) simulated from the g-and-h distribution.]

The ABC-MCMC algorithm requires specification of both the summary statistics S(·) and the tolerance level ε. Accordingly, after performing an initial analysis, we examine the effect of varying firstly the tolerance and then the summary statistics.

Initial ABC analysis:

For our initial analysis we specify the sample mean as the sole summary statistic, and a tolerance level of ε = . The Markov chain has length N = 5, , discarding the first , realizations. The Markov chain proposal distribution q was again the truncated multivariate Gaussian distribution, but with σ²_prop = . The sample paths of the parameters A, B, g and h are illustrated in Figure 6, indicating that the Markov chain is mixing well for these data.

[Figure 6: Sample paths for the g-and-h posterior parameters A, B, g and h against iteration.]

We note that the ABC-MCMC algorithm may become stuck in one part of the parameter space for an extended period of time. This occurs as the acceptance rate of any state is proportional to the posterior at that point, and so when the current state is in an area of relatively low probability it may take a while to escape from this point. [Bortot et al. 2006; Sisson et al. 2006] have examined different methods of permitting the tolerance to vary to mitigate this problem. The resulting posterior mean (MMSE) and standard deviation estimates are A_MMSE = .7489 (std. dev. .465), B_MMSE = .367 (std. dev. .456), g_MMSE = .585 (std. dev. .8) and h_MMSE = .486 (std. dev. .438). These results are acceptable given that only the sample mean was used as a summary statistic to capture all information from the likelihood. We might anticipate improved estimates if the summary statistics were augmented to include other quantities. We examine this shortly. We firstly consider the effect of varying the tolerance level.

ABC analysis of tolerance levels:

We present a summary of results for the same analysis as before, but where the tolerance level ε takes the values ., ., . and , and the Markov chain proposal distribution q was the truncated multivariate Gaussian distribution, but with σ²_prop = .5. Parameter sample paths for ε = . are illustrated in Figure 7. Instances of the chain becoming stuck are now more pronounced. The simplest methods to avoid this problem are to run a much longer chain, or to choose a larger tolerance, although see [Bortot et al. 2006; Sisson et al. 2006]. For tolerances greater than ., the sticking was sufficiently minor to justify using the Markov chain realizations as a sample from the posterior. In the following simulations the length of the Markov chain was increased to N = , , with the first 4, realizations discarded as burnin.

[Figure 7: Sample paths of the g-and-h posterior parameters A, B, g and h against iteration.]

With ε = ., the resulting MMSE and standard deviation estimates were A_MMSE = .9693 (std. dev. .7379), B_MMSE = .455 (std. dev. .447), g_MMSE = .769 (std. dev. .4545) and h_MMSE = .553 (std. dev. .366).

With ε = ., the resulting MMSE and standard deviation estimates were A_MMSE = .674 (std. dev. .437), B_MMSE = .33 (std. dev. .635), g_MMSE = .565 (std. dev. .795) and h_MMSE = .465 (std. dev. .4435).

With ε = , the resulting MMSE and standard deviation estimates were A_MMSE = .894 (std. dev. .53), B_MMSE = .658 (std. dev. .956), g_MMSE = .53 (std. dev. .7469) and h_MMSE = .4887 (std. dev. .5373).

These results demonstrate that as the tolerance decreases, the precision of the estimates improves; this is to be expected. As the tolerance decreases, fewer unlikely parameter realizations generate data which is within the tolerance level ε. Accordingly the precision of the posterior distribution increases. However, on average a proposal θ'_{i+1} will be accepted with probability Pr(ρ(S(y*_{1:n}), S(y_{1:n})) ≤ ε | θ'_{i+1}), and this will decrease (rapidly) as the tolerance level ε decreases. That is, chain efficiency is strongly linked to the tolerance: if one is increased, the other decreases, and vice versa. However, commonly an acceptable accuracy and computational intensity tradeoff can be found.

ABC analysis of summary statistics:

We now examine the effect of using different summary statistics. We consider the combinations {mean}, {mean, variance}, {mean, skewness} and {mean, variance, skewness} to approximate the information in the data through the likelihood. The Markov chain proposal distribution q was again the truncated multivariate Gaussian distribution, this time using σ²_prop = . With the tolerance fixed at ε = , our chain length is N = , , with the initial , iterations discarded. The resulting MMSE and standard deviation estimates are as follows:

{mean}: A_MMSE = .7743 (std. dev. .565), B_MMSE = .933 (std. dev. .4874), g_MMSE = .4837 (std. dev. .78) and h_MMSE = .485 (std. dev. .5354).

{mean, variance}: A_MMSE = .7369 (std. dev. .538), B_MMSE = .3678 (std. dev. .77), g_MMSE = .793 (std. dev. .5) and h_MMSE = .657 (std. dev. .387).

{mean, skew}: A_MMSE = .4955 (std. dev. .876), B_MMSE = .74 (std. dev. .47), g_MMSE = .737 (std. dev. .4674) and h_MMSE = .3753 (std. dev. .45).

{mean, variance, skew}: A_MMSE = .49 (std. dev. .5656), B_MMSE = . (std. dev. .448), g_MMSE = .7385 (std. dev. .3) and h_MMSE = .768 (std. dev. .699).

Clearly the latter case produces the most accurate and precise results (as shown by the parameter estimates and standard deviations respectively), as the largest set of summary statistics naturally encapsulates the distributional characteristics most precisely. Conversely, the most inaccurate results are obtained when the least restrictive criterion is applied. Additionally, one can relate these results back to the role each parameter plays in the shape of the g-and-h distribution. For example, parameter g relates to the skewness of the distribution and, as such, one might expect improved performance in estimates of this

parameter when skewness is incorporated as a summary statistic. The results support such a hypothesis, as the mean is more precise and the standard deviation reduced in the posterior marginal distribution for g when skewness is included. Similarly, h relates to kurtosis and accordingly realizes improved estimates upon inclusion of the variance. However, as expanding the vector of summary statistics increases the number of restrictions placed on the data, proposed parameter values become less likely to generate data sets which satisfy the selection criteria for a fixed tolerance. Accordingly the acceptance rate of the Markov chain will decrease, thereby necessitating longer runs to obtain reliable sample estimates.

6) Discussion

In this article, by introducing existing and novel simulation procedures, we have substantially extended the range of models admissible for Bayesian inference under the LDA operational risk modeling framework. We strongly advocate that Bayesian approaches to operational risk modeling should be considered as a serious alternative for practitioners in banks and financial institutions, as they provide a mathematically rigorous paradigm in which to combine observed data and expert opinion. We hope that the presented class of algorithms provides an attractive and feasible approach with which to realize these models. Future work will consider the introduction of correlation in a Bayesian setting, such as correlation between parameters for frequency and severity distributions under an LDA approach.

Acknowledgements

The first author is supported by an Australian Postgraduate Award, through the Department of Statistics at UNSW. The second author is supported by the Australian Research Council through the Discovery Project scheme (DP66497) and by the Australian Centre of Excellence for Risk Analysis.

References

1. Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Philos. Trans. R. Soc. London, 53, 370-418.
2. Bee, M. (2006). Estimating and simulating loss distributions with incomplete data. Oprisk and Compliance, 7(7), 38-41.
3. Bernardo, J. and A. Smith (1994). Bayesian Theory. Wiley Series in Probability and Statistics, Wiley.
4. Bookstaber, R. and J. McDonald (1987). A general distribution for describing security price returns. The Journal of Business, 60(3), 401-424.
5. Bortot, P., S. G. Coles and S. A. Sisson (2006). Inference for stereological extremes. J. Amer. Statist. Assoc. In press.
6. Beaumont, M. A., W. Zhang and D. J. Balding (2002). Approximate Bayesian

computation in population genetics. Genetics, 162, 2025-2035.
7. Box, G. E. P. and G. C. Tiao (1992). Bayesian Inference in Statistical Analysis. Wiley Classics Library.
8. Cruz, M. (2002). Modelling, Measuring and Hedging Operational Risk. John Wiley & Sons, Chapter 4.
9. Devroye, L. (1986). Non-Uniform Random Variate Generation. Springer-Verlag, New York.
10. Doucet, A., P. Del Moral and A. Jasra (2006). Sequential Monte Carlo samplers. J. Roy. Statist. Soc. B, 68(3), 411-436.
11. Dutta, K. and J. Perry (2006). A tale of tails: An empirical analysis of loss distribution models for estimating operational risk capital. Federal Reserve Bank of Boston, Working Paper No. 06-13.
12. Embrechts, P., H. Furrer and R. Kaufmann (2003). Quantifying regulatory capital for operational risk. Derivatives Use, Trading & Regulation, 9(3), 217-233.
13. Garthwaite, P. and A. O'Hagan (2000). Quantifying expert opinion in the UK water industry: An experimental study. The Statistician, 49(4), 455-477.
14. Gelman, A., J. B. Carlin, H. S. Stern and D. B. Rubin (1995). Bayesian Data Analysis. Chapman and Hall.
15. Gilks, W., S. Richardson and D. Spiegelhalter (1996). Markov Chain Monte Carlo in Practice. Chapman and Hall.
16. Hastings, W. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.
17. He, Y. and T. Raghunathan (2006). Tukey's gh distribution for multiple imputation. The American Statistician, 60(3).
18. Kirkpatrick, S., C. Gelatt and M. Vecchi (1983). Optimization by simulated annealing. Science, 220, 671-680.
19. Metropolis, N., A. Rosenbluth, M. Rosenbluth, A. H. Teller and E. Teller (1953). Equations of state calculations by fast computing machines. J. Chem. Phys., 21, 1087-1092.
20. Meyn, S. and R. Tweedie (1993). Markov Chains and Stochastic Stability. Springer.
21. O'Hagan, A. (2006). Uncertain Judgements: Eliciting Experts' Probabilities. Wiley, Statistics in Practice.
22. O'Hagan, A. (1998). Eliciting expert beliefs in substantial practical applications. The Statistician, 47(1), 21-35.
23. Panjer, H. (2006). Operational Risk: Modeling Analytics. Wiley.
24. Peters, G. (2005). Topics in Sequential Monte Carlo Samplers. M.Sc. Thesis, Department of Engineering, University of Cambridge.
25. Ramamurthy, S., H. Arora and A. Ghosh (2005). Operational risk and probabilistic networks: An application to corporate actions processing. Infosys White Paper.
26. Robert, C. (2004). The Bayesian Choice, 2nd Edition. Springer Texts in Statistics.
27. Roberts, G. O. and J. Rosenthal (2006). Examples of adaptive MCMC. Technical Report, Lancaster University.
28. Ruanaidh, J. and W. Fitzgerald (1996). Numerical Bayesian Methods Applied to Signal Processing. Statistics and Computing, Springer.