Backtesting or Backestimating 1?

Gerhard Stahl

Backtesting or Backestimating? [1]

I Introduction

Recently Kupiec criticised various approaches to backtesting by investigating the formal statistical properties of the related test statistics. His results culminate in establishing their low power, i.e. their high type II error rates. In nontechnical terms: we simply need more information to raise the power. To overcome this information dilemma, we propose to incorporate all available information: the experimental information given by the data, and the nonexperimental information given by a prior and a loss function; details are given below. These kinds of nonexperimental information are typical of what is called a Bayesian framework: degrees of belief about different values of a parameter are expressed by a prior distribution, whereas the consequences of our decisions are measured by a loss function. Combining all these components into one decision or test rule, we maximise the exploitation of the information. The following investigations oppose Kupiec's conclusions by showing ways out of the dilemma he constructed.

Previous research states the backtesting problem in the framework of a Bernoulli process (Kupiec). These investigations take a classical, i.e. frequentist, approach, assuming n iid observations of a Bin(1, θ)-distributed random variable X, where the true parameter θ is fixed but unknown. As usual, Bin(1, θ), θ ∈ Θ, denotes the binomial distribution and Θ the parameter space. The work of Crnkovic/Drachman generalising the Bernoulli approach is also formulated in the classical setting.

In this paper we introduce and apply Bayesian ideas and concepts to backtesting. In contrast to classical statistics, Bayesians introduce a new random quantity to the statistical battlefield, the so-called prior distribution π(θ) on Θ. As a consequence, they interpret parametric models p_θ(x) as conditional distributions p(x|θ).
Using the identity p(x, θ) = π(θ) p(x|θ) = p(x) p(θ|x), a second new element, the posterior distribution

p(θ|x) = π(θ) p(x|θ) / p(x),

is introduced, where p(x) is defined by

p(x) = ∫_Θ π(θ) p(x|θ) dθ.

The posterior p(θ|x) contains all information provided by the data, the so-called experimental information, together with all other, nonexperimental information. Sometimes a loss function

[1] I especially want to point out that the opinions in this article are purely private; none of them may be cited or interpreted as opinions of the Federal Banking Supervisory Office of Germany.

L(θ, θ̂) is additionally given, where θ̂ is an estimate of θ. In this fortunate case we may combine all elements (prior, loss and posterior) into a Bayesian decision problem.

Let us point out our motivation for the Bayesian approach. A common feature of classical procedures is that the conclusions are based solely on the information given by the data, neglecting other sources of nonexperimental information such as a loss function or a prior on Θ. These are well-known drawbacks of the classical setting, reflecting that classical inferences are, for the most part, made without regard to the use to which they are to be put. Testing concepts are usually worked out to address confirmatory questions. Attention should be drawn, however, to the fact that the estimated VaR figure is a target value, and any deviation from the true value causes a loss in one sense or another. This situation is similar to recent developments in quality control, where the findings of Taguchi led to estimation concepts being preferred over the well-known testing procedures, mirroring the view that any deviation from a target value causes some loss (testing procedures are characterised by 0-1 loss functions, which group deviations very roughly into two disjoint subsets of Θ). Overestimation will possibly lead to a suboptimal allocation of capital resources (limits, for example). Even worse is an underestimation of the true risk, which may cause more serious problems with capital charges. We stress that this asymmetry of the loss function is not reflected at all in the classical concept.

The paper is organised as follows: the first section is the introduction given above. The second summarises the results of Kupiec and Crnkovic/Drachman. The third section gives the Bayesian version of Bernoulli backtesting, viewed as a test problem.
In the fourth section we develop a general Bayesian decision framework, interpreting the backtesting problem as an estimation problem rather than a testing problem. A short summary is given at the end of the paper.

II Results of previous research

In this section we briefly review Kupiec's paper and give the main results of Crnkovic/Drachman. Kupiec criticises common backtesting procedures in a sound but theoretical way, whereas Crnkovic/Drachman introduce a new nonparametric procedure to measure the accuracy of risk measurement techniques. Their paper was worked out at J.P. Morgan, New York. For practitioners it is worth noting that the authors of RiskMetrics felt a need to improve the methods of performance measurement given in RiskMetrics.

Kupiec examines various possible backtesting methods for parametric and nonparametric VaR models, with the emphasis on parametric models. His starting point are distributions (geometric, binomial, etc.) closely related to the Bernoulli process. These distributions are members of the one-parameter exponential family. Hence general theorems of test theory (Lehmann) apply and yield a monotone likelihood ratio (LR) test statistic, implying that the test based on the LR is a uniformly most powerful test for a given sample size. We consider the following problem.

An observation x is the number of successes in n independent trials with unknown probability θ of success in each. The α-level LR test of the hypothesis θ ≤ θ0 against the alternative θ > θ0 is given by λ(x), defined as follows. If x/n ≤ θ0, then

λ(x) = 1;

if x/n > θ0, then

λ(x) = p(x; x/n) / p(x; θ0) = [(x/n)^x (1 - x/n)^(n-x)] / [θ0^x (1 - θ0)^(n-x)].

An important large-sample result yields that 2 log λ is asymptotically χ²-distributed with an appropriate number of degrees of freedom. With this in mind we get Kupiec's representation of the LR statistic in the binomial case. Besides the theoretical analysis, Kupiec studies type II error probabilities by simulating numerical examples. From exhibit 6 of his paper we extract the typical example:

  Null hypothesis    Alternative hypothesis    Type II error rate
                                               n = 255    n = 510    n = 1000
  P = 0.01           P =

Recall that the VaR (for any linear instrument) at the 99% level for a one-day holding period is defined by

VaR_F = MV * F^(-1)(0.99) * δ,

where δ denotes the sensitivity factor, F^(-1)(0.99) the 99% quantile of the distribution F, and MV the market value of the position. For distributions F with fat tails the difference between F^(-1)(0.99) and F^(-1)(0.98) becomes substantial (extreme choices give 100%). From the table above we conclude that there is a good chance that an underestimation of the true risk is not detected by the binomial test. This result carries over to other related tests and to the nonparametric case. Type I and type II error rates are inversely related, so one might hope for a rescue by accepting a higher type I error rate. Kupiec points out that even this idea does not work for α = 0.75. Kupiec's investigations are an imperative to look for new methods. Crnkovic/Drachman's paper is very interesting in this respect.
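Kupiec's LR statistic and his type II error simulations can be sketched in a few lines. The following is a minimal illustration, not a reproduction of his exhibit: it uses the two-sided variant built on the unrestricted MLE x/n, and the sample sizes and probabilities are illustrative choices.

```python
import math
import random

def loglik(x, n, p):
    """Binomial log-likelihood of x successes in n trials; 0*log(0) -> 0."""
    ll = 0.0
    if x > 0:
        ll += x * math.log(p)
    if n - x > 0:
        ll += (n - x) * math.log(1.0 - p)
    return ll

def kupiec_lr(x, n, theta0):
    """2 log lambda: twice the log-likelihood ratio of the MLE x/n against theta0."""
    return 2.0 * (loglik(x, n, x / n) - loglik(x, n, theta0))

def type2_error(n, theta0, theta_true, crit=3.841, trials=5000, seed=0):
    """Fraction of samples drawn from Bin(n, theta_true) that the asymptotic
    chi-square(1) test fails to reject at the 5% level, i.e. the simulated
    type II error rate."""
    rng = random.Random(seed)
    fails = sum(
        kupiec_lr(sum(rng.random() < theta_true for _ in range(n)), n, theta0) < crit
        for _ in range(trials)
    )
    return fails / trials

# One year of data against a doubled true exception rate.
print(type2_error(n=255, theta0=0.01, theta_true=0.02))
```

For n = 255, θ0 = 0.01 and a true exception probability of 0.02, the simulated type II error rate comes out in the region of three quarters, in line with the low power Kupiec reports.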
The starting point for their improvement of the LR test's power is the idea to test all quantiles simultaneously by means of nonparametric tests, instead of a single quantile as in the binomial case. Hence their test is a full, and in this sense superior, generalisation of the binomial case. As usual in the nonparametric framework, it is difficult to achieve conclusive results about the power or robustness properties of the test. For a given sequence of independent, but not necessarily identically distributed random variables X_i ~ F_i, the transformed sequence F_i(X_i) is iid uniform on [0, 1]; we denote these transformed variables by U_i. Hence the observed percentiles F_i(x_i) may be aggregated (here the independence argument comes into play) and then compared with the uniform distribution. The distance between the empirical distribution function of the observed percentiles,

P_n(t) := (number of observed percentiles less than or equal to t) / n,

and the distribution function of the U_i is measured by the Kuiper statistic

K(P_n) := max_{0 ≤ t ≤ 1} {P_n(t) - t} + max_{0 ≤ t ≤ 1} {t - P_n(t)},

a cousin of the Kolmogorov-Smirnov statistic (Durbin). Taking the importance of the tails of the VaR distribution into account, the authors suggest applying the weight function

w(t) = -0.5 ln(t(1 - t))

to P_n(t) and t. A few words about the independence assumption and the asymptotics: the authors prefer the BDS test for checking independence; details and merits of this test are explained in Brock et al. An open question is whether it is the asymptotics of the BDS test or the transformation w(t) that causes the need for large sample sizes (at least 500, better 1000).

The works presented in this section have one point in common: they clarify the need for a large amount of empirical information (data) to reach reliable conclusions about the accuracy of VaR estimates. The main result of these studies is that backtesting methods should be based on a historical period of about four years.

III Bayesian Backtesting

At a first stage of experience, users of VaR models will probably not be able to base backtesting on a four-year history, which may be necessary to obtain conclusive results. Therefore we propose Bayesian methods to overcome the lack of data information. The starting point are n realisations of iid random variables X_i ~ X, where X ~ Ber(θ). Their sum

X_n = Σ_{i=1}^{n} X_i

follows a binomial distribution,

p(X_n = k) = (n choose k) θ^k (1 - θ)^(n-k),

where k denotes the number of successes. In the Bayesian framework it is convenient to suppose a beta prior, π(θ) = Beta(α, β),

π(θ; α, β) = [Γ(α + β) / (Γ(α) Γ(β))] θ^(α-1) (1 - θ)^(β-1) 1_[0,1](θ),

when working with the binomial. Two advantages motivate the use of the beta family. First, it is sufficiently rich with respect to shape, allowing symmetric, skewed or even improper priors to be expressed within it. Second, the betas are conjugate to the binomials, implying that the posterior p(θ|x) is itself a beta distribution:

p(θ | X_n = k) = Beta(α + k, β + n - k).
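This conjugate update is a one-line computation. The sketch below uses an illustrative prior, Beta(99, 1) with prior mean 0.99, and illustrative data; neither is taken from the paper.

```python
# Conjugate Beta-Binomial update: a Beta(alpha, beta) prior combined with
# k successes in n trials yields the posterior Beta(alpha + k, beta + n - k).
def beta_binomial_update(alpha, beta, k, n):
    return alpha + k, beta + n - k

# Illustrative data: 251 "successes" (days without a VaR exception) out of
# n = 255 trading days; prior mean alpha / (alpha + beta) = 0.99.
a, b = beta_binomial_update(alpha=99.0, beta=1.0, k=251, n=255)
post_mean = a / (a + b)            # posterior mean of theta
post_mode = (a - 1) / (a + b - 2)  # posterior mode, valid for a, b > 1
print(a, b, round(post_mean, 4), round(post_mode, 4))  # 350.0 5.0 0.9859 0.9887
```

The posterior mean lies between the prior mean 0.99 and the sample frequency 251/255 ≈ 0.9843, with the weight of the prior determined by its strength α + β.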
Furthermore, we suppose a given loss function L(θ, θ̂) on the states of nature. To test H1: θ ≥ θ0 against H2: θ < θ0, we calculate the posterior probabilities of H1 and H2,

P(H1|x) = ∫_{θ0}^{1} p(θ|x) dθ   and   P(H2|x) = 1 - P(H1|x).

Interpreting backtesting as a test problem, completely described by the table below,

                     States of nature
  Acts               H1 is appropriate    H2 is appropriate    Expected loss
  Choose H1          0                    L12                  L12 P(H2|x)
  Choose H2          L21                  0                    L21 P(H1|x)
  Probabilities      P(H1|x)              P(H2|x)

we decide for H1 by the following rule:

  1 < L21 P(H1|x) / (L12 P(H2|x)).

Further considerations of this decision rule (see Berger) reveal parallels to the LR test introduced in the last section. In the present Bayesian case the decision rule reflects priors, losses and sample information, whereas the classical LR decision is mainly driven by the level of significance, which is difficult to relate in a concrete manner to the economic substance of the problem. In this respect we regard Bayes methods as superior to classical ones.

IV Backtesting from a Bayesian decision-theoretical viewpoint

Again, remember that the roots of VaR models are economic decision problems, far removed from isolated statistical exercises. Assuming n iid realisations of Bernoulli-distributed random variables, X ~ Ber(θ), we have to infer, or decide, about θ in order to measure the VaR model's accuracy. In the last section we solved this problem with the help of a statistical test. But is this in line with our intuition? Let us have a closer look at the loss function in the table above, choosing for example H1: θ ≥ 0.95 against H2: θ < 0.95. Judging the situations θ ∈ [0, 0.5] and θ ∈ [0.93, 0.95] by this loss function yields the same loss. Every risk manager would be confused by this answer. But what is wrong with his intuition? Taguchi taught us, in an analogous situation stemming from quality control, that moving from a testing interpretation of our statistical task to an estimating one can help. In the considered example this means that any deviation from the target value 0.95 causes some loss; as mentioned earlier, the loss function is asymmetric. Before we turn to Bayesian inference about θ, some words on how the prior may generally be determined seem worthwhile.
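The test rule above is easy to evaluate numerically. In the sketch below, P(H1|x) is approximated by Monte Carlo draws from the Beta posterior (the Beta distribution function is not in the Python standard library); the prior, the data and the loss values are illustrative assumptions, not figures from the paper.

```python
import random

def prob_h1(alpha, beta, k, n, theta0, draws=20_000, seed=0):
    """Monte Carlo estimate of P(H1|x) = P(theta >= theta0 | data) under the
    Beta(alpha + k, beta + n - k) posterior."""
    rng = random.Random(seed)
    a, b = alpha + k, beta + n - k
    return sum(rng.betavariate(a, b) >= theta0 for _ in range(draws)) / draws

def choose_h1(p_h1, loss_21, loss_12):
    """Decide for H1 iff L21 * P(H1|x) > L12 * P(H2|x)."""
    return loss_21 * p_h1 > loss_12 * (1.0 - p_h1)

# Illustrative case: 251 successes in 255 days, prior Beta(99, 1), H1: theta >= 0.99.
p1 = prob_h1(alpha=99.0, beta=1.0, k=251, n=255, theta0=0.99)
# Wrongly accepting an inaccurate model (loss L12) is penalised more heavily
# than wrongly rejecting an accurate one (loss L21).
print(round(p1, 2), choose_h1(p1, loss_21=1.0, loss_12=4.0))
```

Unlike a fixed significance level, the loss ratio L12/L21 enters the decision directly, so the rule can be tuned to the economic consequences of the two errors.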
So far our presentation has been restricted to the case of beta priors. Of course, this is not the only possibility. First, it may seem natural to restrict the prior to an appropriate interval, [0.75, 1] for example. Or the user may feel more comfortable specifying the prior by means of some quantiles of a cumulative distribution function, where only a few probabilities have to be stated. Finally, we mention the method of moments as a way to specify a prior. With these remarks in mind we turn back to our inference problem. First we calculate the posterior p(θ | X_n = k), combining the binomial model, the data and the prior. We may use the posterior to report a point estimate θ̂ of θ by applying a generalised maximum likelihood method:

θ̂ = argmax_{θ ∈ Θ} p(θ | X_n = k),

i.e. θ̂ is the mode of the posterior. Parallel to classical lines, an error estimate should be attached to θ̂. A famous candidate is the posterior variance of θ, defined by

E_{p(θ|x)} (θ - θ̂)².

It may also be very valuable to calculate a credible set C for θ from the posterior, the Bayesian analogue to classical confidence intervals, satisfying

1 - α ≤ p(C|x).

As usual, these intervals are not uniquely determined; for details see Berger. Besides these inferential approaches, we focus on a decision-theoretic setting using a loss function L. We are interested in determining a decision θ̂ minimising the posterior expected loss. To be specific, we examine only the case of an asymmetric linear loss function L, defined by

  L(θ, θ̂) = K0 (θ - θ̂)   if θ - θ̂ ≥ 0,
  L(θ, θ̂) = K1 (θ̂ - θ)   if θ - θ̂ < 0.

Under these assumptions, any K0/(K0 + K1)-fractile of the posterior p(θ | X_n = k) is a Bayes estimate of θ. A first look at this Bayesian machinery shows that the greater K0 is, the greater is the quantile. This coincides with the intuition that greater losses should increase the quantile. The Bayes estimate reflects the asymmetry between underestimation and overestimation of the accuracy of VaR models, our prior beliefs about θ (for example, we know that using normal distributions tends to underestimate risk) and the experimental information provided by the data. These and other methods of backestimation deserve further investigation.

V Summary and conclusions

Backtesting methods are important tools in applying VaR models. By means of the proposed Bayesian methods, all subjective and objective information may be incorporated to draw an adequate picture of the risk to be faced. Hence these methods have a clear-cut advantage over the established ones, especially in the early stages of using VaR models. Furthermore, we feel that Bayesian methods are close to the heuristics of nonstatisticians. It seems to us that the decision and test rules developed here are easier to understand, better linked to the economic problem and more easily communicated than the classical ones. The likelihood ratio tests are a good example.
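The K0/(K0 + K1)-fractile rule of Section IV can be illustrated concretely. The sketch below approximates the posterior fractile by sorting Monte Carlo draws from the Beta posterior; the prior, the data and the loss constants are illustrative assumptions, not figures from the paper.

```python
import random

def bayes_fractile(alpha, beta, k, n, k0, k1, draws=50_000, seed=0):
    """K0/(K0+K1)-fractile of the Beta(alpha + k, beta + n - k) posterior:
    the Bayes estimate under the asymmetric linear loss (Monte Carlo)."""
    rng = random.Random(seed)
    a, b = alpha + k, beta + n - k
    samples = sorted(rng.betavariate(a, b) for _ in range(draws))
    q = k0 / (k0 + k1)
    return samples[int(q * (draws - 1))]

# Illustrative case: 251 successes in 255 days, prior Beta(99, 1).
symmetric = bayes_fractile(99.0, 1.0, 251, 255, k0=1.0, k1=1.0)  # posterior median
skewed = bayes_fractile(99.0, 1.0, 251, 255, k0=3.0, k1=1.0)     # 0.75-fractile
# Raising K0, the cost of estimating theta below its true value, moves the
# estimate to a higher posterior quantile.
print(round(symmetric, 4), round(skewed, 4))
```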
In any case, the Bayesian methods should be used in parallel with the classical ones, and a comparison should give further insight into the conclusions drawn. Clearly, many questions remain open. For instance, it seems not to have been investigated how backtesting methods can be used to optimise model selection (GARCH, t-distributions, ...).

VI Bibliography

Basle Committee on Banking Supervision (1996), Proposal to Issue a Supplement to the Basle Capital Accord to Cover Market Risks. Basle, December.
Berger, J.O. (1985), Statistical Decision Theory and Bayesian Analysis. 2nd Ed., Springer, New York.
Crnkovic, C. and Drachman, J. (1995), A Universal Tool to Discriminate Among Risk Measurement Techniques. Preprint, to appear in RISK Magazine.
Brock, W.A., Hsieh, D.A. and LeBaron, B. (1993), Nonlinear Dynamics, Chaos and Instability: Statistical Theory and Economic Evidence. MIT Press, Cambridge, Massachusetts.

Durbin, J. (1973), Distribution Theory for Tests Based on the Sample Distribution Function. SIAM, Philadelphia.
Group of Thirty (1993), Derivatives: Practices and Principles. Washington, D.C.
Kupiec, P. (1995), Techniques for Verifying the Accuracy of Risk Measurement Models. Journal of Derivatives.
Lehmann, E.L. (1986), Testing Statistical Hypotheses. Wiley, New York.
Morgan Guaranty Trust Company (1995), RiskMetrics - Technical Document. 3rd Edition.
Zellner, A. (1990), Bayesian Inference. The New Palgrave in Statistics.

Gerhard Stahl
Ringslebenstr. 2
D Berlin


More information

A nonparametric two-sample wald test of equality of variances

A nonparametric two-sample wald test of equality of variances University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 211 A nonparametric two-sample wald test of equality of variances David

More information

STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01

STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01 STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01 Nasser Sadeghkhani a.sadeghkhani@queensu.ca There are two main schools to statistical inference: 1-frequentist

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

SPECIFICATION TESTS IN PARAMETRIC VALUE-AT-RISK MODELS

SPECIFICATION TESTS IN PARAMETRIC VALUE-AT-RISK MODELS SPECIFICATION TESTS IN PARAMETRIC VALUE-AT-RISK MODELS J. Carlos Escanciano Indiana University, Bloomington, IN, USA Jose Olmo City University, London, UK Abstract One of the implications of the creation

More information

Statistics - Lecture One. Outline. Charlotte Wickham 1. Basic ideas about estimation

Statistics - Lecture One. Outline. Charlotte Wickham  1. Basic ideas about estimation Statistics - Lecture One Charlotte Wickham wickham@stat.berkeley.edu http://www.stat.berkeley.edu/~wickham/ Outline 1. Basic ideas about estimation 2. Method of Moments 3. Maximum Likelihood 4. Confidence

More information

A Discussion of the Bayesian Approach

A Discussion of the Bayesian Approach A Discussion of the Bayesian Approach Reference: Chapter 10 of Theoretical Statistics, Cox and Hinkley, 1974 and Sujit Ghosh s lecture notes David Madigan Statistics The subject of statistics concerns

More information

Structure learning in human causal induction

Structure learning in human causal induction Structure learning in human causal induction Joshua B. Tenenbaum & Thomas L. Griffiths Department of Psychology Stanford University, Stanford, CA 94305 jbt,gruffydd @psych.stanford.edu Abstract We use

More information

Bayesian Inference for Binomial Proportion

Bayesian Inference for Binomial Proportion 8 Bayesian Inference for Binomial Proportion Frequently there is a large population where π, a proportion of the population, has some attribute. For instance, the population could be registered voters

More information

STAT 425: Introduction to Bayesian Analysis

STAT 425: Introduction to Bayesian Analysis STAT 425: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 2017 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 1) Fall 2017 1 / 10 Lecture 7: Prior Types Subjective

More information

Preface Introduction to Statistics and Data Analysis Overview: Statistical Inference, Samples, Populations, and Experimental Design The Role of

Preface Introduction to Statistics and Data Analysis Overview: Statistical Inference, Samples, Populations, and Experimental Design The Role of Preface Introduction to Statistics and Data Analysis Overview: Statistical Inference, Samples, Populations, and Experimental Design The Role of Probability Sampling Procedures Collection of Data Measures

More information

Estimation of Quantiles

Estimation of Quantiles 9 Estimation of Quantiles The notion of quantiles was introduced in Section 3.2: recall that a quantile x α for an r.v. X is a constant such that P(X x α )=1 α. (9.1) In this chapter we examine quantiles

More information

Invariant HPD credible sets and MAP estimators

Invariant HPD credible sets and MAP estimators Bayesian Analysis (007), Number 4, pp. 681 69 Invariant HPD credible sets and MAP estimators Pierre Druilhet and Jean-Michel Marin Abstract. MAP estimators and HPD credible sets are often criticized in

More information

Part 2: One-parameter models

Part 2: One-parameter models Part 2: One-parameter models 1 Bernoulli/binomial models Return to iid Y 1,...,Y n Bin(1, ). The sampling model/likelihood is p(y 1,...,y n ) = P y i (1 ) n P y i When combined with a prior p( ), Bayes

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee University of Minnesota July 20th, 2008 1 Bayesian Principles Classical statistics: model parameters are fixed and unknown. A Bayesian thinks of parameters

More information

Introduction to Bayesian Statistics

Introduction to Bayesian Statistics Bayesian Parameter Estimation Introduction to Bayesian Statistics Harvey Thornburg Center for Computer Research in Music and Acoustics (CCRMA) Department of Music, Stanford University Stanford, California

More information

Institute of Actuaries of India

Institute of Actuaries of India Institute of Actuaries of India Subject CT3 Probability & Mathematical Statistics May 2011 Examinations INDICATIVE SOLUTION Introduction The indicative solution has been written by the Examiners with the

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano Testing Statistical Hypotheses Third Edition 4y Springer Preface vii I Small-Sample Theory 1 1 The General Decision Problem 3 1.1 Statistical Inference and Statistical Decisions

More information

Parametric Models. Dr. Shuang LIANG. School of Software Engineering TongJi University Fall, 2012

Parametric Models. Dr. Shuang LIANG. School of Software Engineering TongJi University Fall, 2012 Parametric Models Dr. Shuang LIANG School of Software Engineering TongJi University Fall, 2012 Today s Topics Maximum Likelihood Estimation Bayesian Density Estimation Today s Topics Maximum Likelihood

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Hypothesis Testing Problem. TMS-062: Lecture 5 Hypotheses Testing. Alternative Hypotheses. Test Statistic

Hypothesis Testing Problem. TMS-062: Lecture 5 Hypotheses Testing. Alternative Hypotheses. Test Statistic Hypothesis Testing Problem TMS-062: Lecture 5 Hypotheses Testing Same basic situation as befe: Data: random i. i. d. sample X 1,..., X n from a population and we wish to draw inference about unknown population

More information

Week 1 Quantitative Analysis of Financial Markets Distributions A

Week 1 Quantitative Analysis of Financial Markets Distributions A Week 1 Quantitative Analysis of Financial Markets Distributions A Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 October

More information

(1) Introduction to Bayesian statistics

(1) Introduction to Bayesian statistics Spring, 2018 A motivating example Student 1 will write down a number and then flip a coin If the flip is heads, they will honestly tell student 2 if the number is even or odd If the flip is tails, they

More information

Introduction to Applied Bayesian Modeling. ICPSR Day 4

Introduction to Applied Bayesian Modeling. ICPSR Day 4 Introduction to Applied Bayesian Modeling ICPSR Day 4 Simple Priors Remember Bayes Law: Where P(A) is the prior probability of A Simple prior Recall the test for disease example where we specified the

More information

Carl N. Morris. University of Texas

Carl N. Morris. University of Texas EMPIRICAL BAYES: A FREQUENCY-BAYES COMPROMISE Carl N. Morris University of Texas Empirical Bayes research has expanded significantly since the ground-breaking paper (1956) of Herbert Robbins, and its province

More information

Introduction to Bayesian Methods

Introduction to Bayesian Methods Introduction to Bayesian Methods Jessi Cisewski Department of Statistics Yale University Sagan Summer Workshop 2016 Our goal: introduction to Bayesian methods Likelihoods Priors: conjugate priors, non-informative

More information

Bayesian Models in Machine Learning

Bayesian Models in Machine Learning Bayesian Models in Machine Learning Lukáš Burget Escuela de Ciencias Informáticas 2017 Buenos Aires, July 24-29 2017 Frequentist vs. Bayesian Frequentist point of view: Probability is the frequency of

More information

Asymptotic distribution of the sample average value-at-risk

Asymptotic distribution of the sample average value-at-risk Asymptotic distribution of the sample average value-at-risk Stoyan V. Stoyanov Svetlozar T. Rachev September 3, 7 Abstract In this paper, we prove a result for the asymptotic distribution of the sample

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

Lecture 21. Hypothesis Testing II

Lecture 21. Hypothesis Testing II Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric

More information

Institute of Actuaries of India

Institute of Actuaries of India Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics For 2018 Examinations Subject CT3 Probability and Mathematical Statistics Core Technical Syllabus 1 June 2017 Aim The

More information

Recommendations for presentation of error bars

Recommendations for presentation of error bars Draft 0.00 ATLAS Statistics Forum 15 February, 2011 Recommendations for presentation of error bars 1 Introduction This note summarizes recommendations on how to present error bars on plots. It follows

More information

Confidence Intervals for Normal Data Spring 2014

Confidence Intervals for Normal Data Spring 2014 Confidence Intervals for Normal Data 18.05 Spring 2014 Agenda Today Review of critical values and quantiles. Computing z, t, χ 2 confidence intervals for normal data. Conceptual view of confidence intervals.

More information

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire

More information

The Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition.

The Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition. Christian P. Robert The Bayesian Choice From Decision-Theoretic Foundations to Computational Implementation Second Edition With 23 Illustrations ^Springer" Contents Preface to the Second Edition Preface

More information

PMR Learning as Inference

PMR Learning as Inference Outline PMR Learning as Inference Probabilistic Modelling and Reasoning Amos Storkey Modelling 2 The Exponential Family 3 Bayesian Sets School of Informatics, University of Edinburgh Amos Storkey PMR Learning

More information

Discussion of Dempster by Shafer. Dempster-Shafer is fiducial and so are you.

Discussion of Dempster by Shafer. Dempster-Shafer is fiducial and so are you. Fourth Bayesian, Fiducial, and Frequentist Conference Department of Statistics, Harvard University, May 1, 2017 Discussion of Dempster by Shafer (Glenn Shafer at Rutgers, www.glennshafer.com) Dempster-Shafer

More information

Dover- Sherborn High School Mathematics Curriculum Probability and Statistics

Dover- Sherborn High School Mathematics Curriculum Probability and Statistics Mathematics Curriculum A. DESCRIPTION This is a full year courses designed to introduce students to the basic elements of statistics and probability. Emphasis is placed on understanding terminology and

More information

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2 Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate

More information

Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box Durham, NC 27708, USA

Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box Durham, NC 27708, USA Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box 90251 Durham, NC 27708, USA Summary: Pre-experimental Frequentist error probabilities do not summarize

More information

Part III. A Decision-Theoretic Approach and Bayesian testing

Part III. A Decision-Theoretic Approach and Bayesian testing Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to

More information

Miscellany : Long Run Behavior of Bayesian Methods; Bayesian Experimental Design (Lecture 4)

Miscellany : Long Run Behavior of Bayesian Methods; Bayesian Experimental Design (Lecture 4) Miscellany : Long Run Behavior of Bayesian Methods; Bayesian Experimental Design (Lecture 4) Tom Loredo Dept. of Astronomy, Cornell University http://www.astro.cornell.edu/staff/loredo/bayes/ Bayesian

More information

Introduction to Machine Learning. Lecture 2

Introduction to Machine Learning. Lecture 2 Introduction to Machine Learning Lecturer: Eran Halperin Lecture 2 Fall Semester Scribe: Yishay Mansour Some of the material was not presented in class (and is marked with a side line) and is given for

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee and Andrew O. Finley 2 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department

More information

AN EMPIRICAL LIKELIHOOD RATIO TEST FOR NORMALITY

AN EMPIRICAL LIKELIHOOD RATIO TEST FOR NORMALITY Econometrics Working Paper EWP0401 ISSN 1485-6441 Department of Economics AN EMPIRICAL LIKELIHOOD RATIO TEST FOR NORMALITY Lauren Bin Dong & David E. A. Giles Department of Economics, University of Victoria

More information

Warwick Business School Forecasting System. Summary. Ana Galvao, Anthony Garratt and James Mitchell November, 2014

Warwick Business School Forecasting System. Summary. Ana Galvao, Anthony Garratt and James Mitchell November, 2014 Warwick Business School Forecasting System Summary Ana Galvao, Anthony Garratt and James Mitchell November, 21 The main objective of the Warwick Business School Forecasting System is to provide competitive

More information

Class 26: review for final exam 18.05, Spring 2014

Class 26: review for final exam 18.05, Spring 2014 Probability Class 26: review for final eam 8.05, Spring 204 Counting Sets Inclusion-eclusion principle Rule of product (multiplication rule) Permutation and combinations Basics Outcome, sample space, event

More information

Evaluating Value-at-Risk models via Quantile Regression

Evaluating Value-at-Risk models via Quantile Regression Evaluating Value-at-Risk models via Quantile Regression Luiz Renato Lima (University of Tennessee, Knoxville) Wagner Gaglianone, Oliver Linton, Daniel Smith. NASM-2009 05/31/2009 Motivation Recent nancial

More information

Introduction to Bayesian Methods. Introduction to Bayesian Methods p.1/??

Introduction to Bayesian Methods. Introduction to Bayesian Methods p.1/?? to Bayesian Methods Introduction to Bayesian Methods p.1/?? We develop the Bayesian paradigm for parametric inference. To this end, suppose we conduct (or wish to design) a study, in which the parameter

More information

PARAMETER ESTIMATION: BAYESIAN APPROACH. These notes summarize the lectures on Bayesian parameter estimation.

PARAMETER ESTIMATION: BAYESIAN APPROACH. These notes summarize the lectures on Bayesian parameter estimation. PARAMETER ESTIMATION: BAYESIAN APPROACH. These notes summarize the lectures on Bayesian parameter estimation.. Beta Distribution We ll start by learning about the Beta distribution, since we end up using

More information

HANDBOOK OF APPLICABLE MATHEMATICS

HANDBOOK OF APPLICABLE MATHEMATICS HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume VI: Statistics PART A Edited by Emlyn Lloyd University of Lancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester

More information

Chapter 5. Bayesian Statistics

Chapter 5. Bayesian Statistics Chapter 5. Bayesian Statistics Principles of Bayesian Statistics Anything unknown is given a probability distribution, representing degrees of belief [subjective probability]. Degrees of belief [subjective

More information