Trading Friction Noise

Economics 883, Spring 2015 (Tauchen)

Setup

Let $Y$ be the usual continuous semi-martingale
$$Y_t = \int_0^t \sqrt{c_s}\, dW_s. \tag{1}$$
We will consider jump discontinuities later. The usual setup for modeling trading friction is that the observed $X$ is $Y$ plus noise:
$$X_{i/n} = Y_{i/n} + \chi_{i/n}, \tag{2}$$
where $\chi_{i/n}$ is a stationary mean-zero process with variance $\operatorname{Var}(\chi_{i/n}) = \sigma_\chi^2$. Then the increments in $X$ are
$$\Delta_i^n X = \Delta_i^n Y + \chi_{i/n} - \chi_{(i-1)/n}. \tag{3}$$
Note that $X$ is not a semi-martingale and its increments have an additional MA(1) piece. The noise imparts a bias on the realized variance:
$$E\Big[\sum_{i=1}^{n} (\Delta_i^n X)^2\Big] = \int_0^1 c_s\, ds + 2n\sigma_\chi^2. \tag{4}$$
The noise overwhelms the signal in the limit. The mistake is sampling too fine, i.e., taking the semi-martingale model seriously at the highest frequencies, a very bad idea in practice.

1. Coarse Sampling

The simplest, and model-free, way to handle the noise is not to sample too finely. This strategy is sometimes called coarse sampling. Let $k$ be a positive integer. Coarse sampling means using only the prices
$$X_{ik/n}, \quad i = 1, 2, \ldots, n/k. \tag{5}$$
For example, if $1/n = 1/400$ (about one minute), $\sigma_\chi = 0.020$, and $IV = 1.25$, then $E[RV] = E[\sum_{i=1}^{n}(\Delta_i^n X)^2] = IV + 2n\sigma_\chi^2$:
$$IV = 1.2500, \quad E[RV] = 1.5700, \quad (E[RV] - IV)/IV = 0.2560, \tag{6}$$
so the 1-minute RV is systematically off center by 26 percent. On the other hand, with 5-minute sampling,
$$IV = 1.2500, \quad E[RV] = 1.3140, \quad (E[RV] - IV)/IV = 0.0512, \tag{7}$$
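The numerical example above can be reproduced directly from the bias formula $E[RV] = IV + 2m\sigma_\chi^2$, with $m$ the number of sampled returns. A minimal sketch, using the parameter values from the text:

```python
# Tabulate the realized-variance bias E[RV] = IV + 2*m*sigma_chi^2 for
# k-minute coarse sampling, where m = n/k is the number of sampled returns.
# IV, sigma_chi, and n follow the example in the text.
IV = 1.25
sigma_chi = 0.020
n = 400  # about one-minute sampling

results = {}
for k in (1, 5, 10):
    m = n // k                              # number of coarse returns
    erv = IV + 2 * m * sigma_chi**2         # expected realized variance
    results[k] = (erv, (erv - IV) / IV)     # (E[RV], relative bias)
    print(f"{k:2d}-min: E[RV] = {erv:.4f}, (E[RV]-IV)/IV = {(erv - IV) / IV:.4f}")
```

This reproduces the roughly 26, 5, and 2.5 percent relative biases quoted for 1-, 5-, and 10-minute sampling.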

so RV is only off by about 5 percent, a very small amount relative to the estimation error in RV. With 10-minute sampling,
$$IV = 1.2500, \quad E[RV] = 1.2820, \quad (E[RV] - IV)/IV = 0.0256, \tag{8}$$
so RV is only off by a negligible 2.5 percent. The message is that coarse sampling nearly eliminates the noise problem without putting any further structure on $\chi_{i/n}$. Note that
$$X_{ik/n} - X_{(i-1)k/n} = \sum_{j=0}^{k-1} a_j\, \Delta_{ik-j}^n X, \qquad a_j = 1. \tag{9}$$
Thus coarse sampling is a form of pre-averaging on non-overlapping intervals.

2. R-MSE: Initial Look

In the presence of trading friction noise the realized variance, and other measures of variation, can be biased, or off center. The time-honored standard for measuring accuracy in this case is the root mean squared error:
$$\text{R-MSE} = \sqrt{E\big[(\widehat{IV} - IV)^2\big]} = \sqrt{\operatorname{Var}(\widehat{IV}) + \big(E[\widehat{IV}] - IV\big)^2}, \tag{10}$$
where it is understood throughout that moments are conditional on $IV$, i.e., $E[\,\cdot \mid IV]$ and $\operatorname{Var}(\,\cdot \mid IV)$, but the conditioning on $IV$ is suppressed to reduce notational clutter. The bias, and thus the squared bias, is usually straightforward to compute, but the variance is more involved, with more terms to track in all of the sums. A little classical time series comes in handy.

3. A Brief Reminder from Time Series

Some expressions commonly used in standard macro time series are helpful here. Let $x_t$ be a discrete-time covariance stationary process. Symbols like $x$ and $t$ used in this section have nothing to do with the high-frequency notation. Put $c_j = \operatorname{Cov}(x_t, x_{t-j})$. Here $x_t$ is scalar, but the only thing that changes in the vector case is that $c_j$ is a matrix and $c_{-j} = c_j'$; also, $x_t$ is defined for all periods, $t = \ldots, -2, -1, 0, 1, 2, \ldots$, and likewise $c_j$ is defined over $j = \ldots, -2, -1, 0, 1, 2, \ldots$. The covariance polynomial is
$$c(L) = \sum_{j=-\infty}^{\infty} c_j L^j, \tag{11}$$
$$c(L) = c_0 + \sum_{j=1}^{\infty} c_j L^j + \sum_{j=1}^{\infty} c_{-j} L^{-j}. \tag{12}$$
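The claim that coarse sampling nearly eliminates the noise bias can be checked by simulation. A hedged sketch, assuming constant spot variance $c_s = IV$ (so $Y$ is scaled Brownian motion) and i.i.d. Gaussian noise; both are simplifications chosen for illustration, not imposed by the text:

```python
import numpy as np

# Monte Carlo sketch: simulate X_{i/n} = Y_{i/n} + chi_{i/n} with constant
# spot variance c_s = IV (scaled Brownian motion) and i.i.d. N(0, sigma_chi^2)
# noise, then average realized variance over many replications at fine (1-min)
# and coarse (5-min) sampling. Parameters follow the text's example.
rng = np.random.default_rng(0)
IV, sigma_chi, n, reps = 1.25, 0.020, 400, 1000

rv_fine = rv_coarse = 0.0
for _ in range(reps):
    Y = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(IV / n), n))))
    X = Y + rng.normal(0.0, sigma_chi, n + 1)   # observed noisy log-price
    rv_fine += np.sum(np.diff(X) ** 2)          # 1-minute RV
    rv_coarse += np.sum(np.diff(X[::5]) ** 2)   # 5-minute RV (coarse sampling)

mean_fine, mean_coarse = rv_fine / reps, rv_coarse / reps
print(mean_fine, mean_coarse)   # close to 1.57 and 1.314, respectively
```

With these draws the fine-sampled RV overshoots $IV$ by roughly the predicted $2n\sigma_\chi^2 = 0.32$, while 5-minute sampling cuts the bias by a factor of five.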

In general, $c(L)$ is an infinite polynomial over positive and negative powers of the lag operator $L$. A convenient result is that if $b(L)$ is a polynomial in $L$, usually with only non-negative powers, such as $b(L) = b_0 + b_1 L + b_2 L^2 + \cdots$, and if
$$x_t = b(L)\,\epsilon_t, \tag{13}$$
where $\epsilon_t$ is white noise, then the covariance polynomial of $x_t$ is
$$c(L) = b(L)\, b(L^{-1})\, \sigma_\epsilon^2. \tag{14}$$
Thus if $b(L)$ defines a finite moving average, $b_j = 0$ for $j > K$, $K < \infty$, then $c(L)$ involves at most powers of $L$ and $L^{-1}$ up to degree $K$:
$$c(L) = \sum_{j=-K}^{K} c_j L^j. \tag{15}$$
The expression for the variance of the sum is
$$\operatorname{Var}\Big(\sum_{j=1}^{n} x_j\Big) = n c_0 + 2\sum_{j=1}^{n-1} (n-j)\, c_j. \tag{16}$$
If $c_j = 0$ for $j > K$, then
$$\operatorname{Var}\Big(\sum_{j=1}^{n} x_j\Big) = n c_0 + 2\sum_{j=1}^{K} (n-j)\, c_j. \tag{17}$$

4. Back to Trading Friction Noise

Some researchers might argue that coarse sampling throws away data and is therefore inefficient. An alternative is to use the very-high-frequency data but try to average out the noise. This alternative is called pre-averaging. The idea is to form a local average of the log-price series by way of
$$\bar X_{i/n} = \sum_j a_j\, X_{(i-j)/n}. \tag{18}$$
By linearity it is obvious that the increments in $\bar X$ are just weighted sums of the increments of $X$. So the pre-averaged returns process,
$$\Delta_i^n \bar X = \sum_j a_j\, \Delta_{i-j}^n X, \tag{19}$$
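The covariance-polynomial result and the variance-of-the-sum formula can be verified numerically for a finite moving average. A sketch with an illustrative MA(2); the coefficients, $\sigma_\epsilon^2$, and $n$ are arbitrary choices, not from the text:

```python
import numpy as np

# Sketch: for x_t = b(L) eps_t with b(L) a finite MA polynomial and white-noise
# variance sigma_eps^2, the autocovariances are the coefficients of
# c(L) = b(L) b(L^{-1}) sigma_eps^2, i.e. c_j = sigma_eps^2 * sum_i b_i b_{i+j}.
# Check Var(sum_{t=1}^n x_t) = n c_0 + 2 sum_{j=1}^K (n - j) c_j against a
# direct evaluation from the full covariance matrix.
b = np.array([1.0, 0.5, -0.3])   # illustrative MA(2) coefficients
sigma2_eps = 2.0
K = len(b) - 1
n = 10

# c_j from the polynomial product b(L) b(L^{-1}): correlate b with itself,
# keep lags 0..K (the full correlation is symmetric about lag 0)
c = sigma2_eps * np.correlate(b, b, mode="full")[K:]

var_formula = n * c[0] + 2 * sum((n - j) * c[j] for j in range(1, K + 1))

# direct check: build the n x n Toeplitz covariance matrix and sum all entries
cov = np.zeros((n, n))
for s in range(n):
    for t in range(n):
        lag = abs(s - t)
        cov[s, t] = c[lag] if lag <= K else 0.0
var_direct = cov.sum()

print(var_formula, var_direct)
```

Both routes give the same number, which is the point of formula (17): once the $c_j$ are in hand, the variance of the sum needs no covariance matrix at all.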

is essentially, but not quite, the return on the pre-averaged price.² The pre-averaged return is the cumulative geometric return on a particular simple mechanical trading strategy. If the effect of trading frictions is to impart an additive stationary error as in
$$X_{i/n} = Y_{i/n} + \chi_{i/n}, \tag{20}$$
then the increments are
$$\Delta_i^n X = \Delta_i^n Y + \chi_{i/n} - \chi_{(i-1)/n}. \tag{21}$$
The above is simple, signal plus the first difference of a stationary process, but it entails very strong assumptions about how trading frictions work out in actual markets. Consider the pre-averaged return,
$$\Delta_i^n \bar X = \sum_j a_j\, \Delta_{i-j}^n X. \tag{22}$$
In order to preserve variation we impose $\frac{1}{k}\sum_{j=1}^{k} a_j^2 = 1$. Then
$$\Delta_i^n \bar X = \sum_j a_j\, \Delta_{i-j}^n Y + \sum_j a_j \big(\chi_{(i-j)/n} - \chi_{(i-j-1)/n}\big) = \Delta_i^n \bar Y + \Delta_i^n \bar\chi. \tag{23}$$
What is a good choice of the pre-averaging weights $a_j$? At this point it becomes plug and grind. There is no harm in assuming that the $Y$ process has constant local variance, $c_s = \sigma^2$. Also, for the moment, assume the trading friction noise $\chi_{i/n}$ is white noise. The estimate of $IV = \sigma^2$ based on the overlapping pre-averaged data is
$$S = \frac{1}{k} \sum_{i=1}^{n} (\Delta_i^n \bar X)^2. \tag{24}$$
To analyze the MSE we need the mean and variance of $S$. For the mean,
$$E[S] = \frac{n}{k}\, E\big[(\Delta_i^n \bar Y)^2\big] + \frac{n}{k}\, E\Big\{\big[a(L)(1-L)\,\chi_{i/n}\big]^2\Big\}. \tag{25}$$
The first term is $\frac{1}{k}\sum_j a_j^2 \sigma^2 = \sigma^2$. To get a handle on the second term, let
$$c(L) = a(L)(1-L)(1-L^{-1})\,a(L^{-1})\,\sigma_\chi^2 = \sum_{j=-k}^{k} c_j L^j. \tag{26}$$
The second term is then $n c_0/k$, where $c_0$ is the coefficient of $L^0$ in the polynomial $c(L)$. The terms $\{c_j\}_{j=-k}^{k}$ can be determined from the $a_j$, but the algebra is tedious. Hence
$$E[S] = \sigma^2 + n c_0 / k. \tag{27}$$

² One might think of averaging the price levels and then taking logs, but we pre-average the log-price itself, which vitiates taking the "return on the pre-averaged price" interpretation literally.

The secod term is /k c 0 where c 0 is the coefficiet of L 0 i the polyomial cl. The terms {c j } k k ca be determied from the b j but the algebra is tedious. E S = σ 2 + c 0 /k 27 Keep i mid that c 0 depeds upo the b j ad k. As for variace [ Var S = Var E i k Ȳ ] 2 + Var k k b j χ i j χ i j 2 28 The ideas are simple but the algebra tedious. 5