MULTISTEP YULE-WALKER ESTIMATION OF AUTOREGRESSIVE MODELS YOU TINGYAN
MULTISTEP YULE-WALKER ESTIMATION OF AUTOREGRESSIVE MODELS

YOU TINGYAN
(B.Sc., Nanjing Normal University)

A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF SCIENCE

DEPARTMENT OF STATISTICS AND APPLIED PROBABILITY
NATIONAL UNIVERSITY OF SINGAPORE

2010
Acknowledgements

It is a pleasure to convey my gratitude to all who made this thesis possible in my humble acknowledgement. First of all, I am heartily thankful to my supervisor, Prof. Xia Yingcun, whose encouragement, supervision and support from the preliminary to the concluding level enabled me to develop an understanding of the subject. His supervision, advice and guidance from the very early stage of this research, as well as the extraordinary experiences he gave me throughout the work, were critical to the completion of this thesis. Above all, he provided me with sustained encouragement and support in various ways. His truly scientific intuition has made him a constant oasis of ideas and passion in science, which exceptionally inspired and enriched my growth as a student, a researcher and a scientist-to-be. I appreciate him more than he knows.

I also would like to record my gratitude to my classmates and seniors, Jiang Binyan, Jiang Qian, Liang Xuehua, Zhu Yongting, Yu Xiaojiang and Jiang Xiaojun, for their involvement with my research. It was very kind of them to always grant me their time, even for answering some of my naive questions about time series and estimation methods. Many thanks go in particular to Fu Jingyu, who used her precious time to read this thesis and gave critical and constructive comments on it.

Lastly, I offer my regards and blessings to the staff in the general office of the department, and to all who supported me in any respect during the completion of the project.
Contents

Acknowledgements
Summary
List of Tables
List of Figures

1 Introduction
  1.1 Introduction
  1.2 AR model and its estimation
  1.3 Organization of this Thesis

2 Literature Review
  2.1 Univariate Time Series Background
  2.2 Time Series Models
  2.3 Autoregressive (AR) Model
  2.4 AR Model Properties
    2.4.1 Stationarity
    2.4.2 ACF and PACF for AR Model
  2.5 Basic Methods for Parameter Estimation
    2.5.1 Maximum Likelihood Estimation (MLE)
    2.5.2 Least Squares Estimation Method (LS)
    2.5.3 Yule-Walker Method (YW)
    2.5.4 Burg's Estimation Method (B)
  2.6 Monte Carlo Simulation

3 Multistep Yule-Walker Estimation Method
  3.1 Multistep Yule-Walker Estimation (MYW)
  3.2 Bias of the YW Method on Finite Samples
  3.3 Theoretical Support of MYW

4 Simulation Results
  4.1 Comparison of Estimation Accuracy for the AR(2) Model
  4.2 Percentage of Outperformance of MYW
  4.3 Difference between the SSE of ACFs for the YW and MYW Methods
  4.4 The Effect of Different Forward Steps
  4.5 Estimation Accuracy for the Fractional ARIMA Model

5 Real Data Application
  5.1 Data Source
  5.2 Numerical Results

6 Conclusion and Future Research

Bibliography

Appendix
Summary

The aim of this work is to fit a (necessarily wrong) model to an observed time series by employing higher-order Yule-Walker equations in order to enhance the fitting accuracy. Several parameter estimation methods for autoregressive models are reviewed, including the Maximum Likelihood method, the Least Squares method, the Yule-Walker method and Burg's method. The estimation accuracy of the well-known Yule-Walker method and of our new multistep Yule-Walker method is compared on the basis of the autocorrelation function (ACF). The effect of the number of Yule-Walker equations used on the estimation performance is investigated. Monte Carlo analysis and real data are used to check the performance of the proposed method.

Keywords: Time series, Autoregressive Model, Least Squares method, Yule-Walker Method, ACF
List of Tables

4.1 Detailed Percentage for a Better Performance of the MYW Method
4.2 List of best m for the MYW Method
List of Figures

4.1 Percentage of outperformance of MYW out of 1000 simulation iterations for n=200, 500, 1000 and 2000
4.2 SSE of ACF for both methods and its difference with n=200
4.3 SSE of ACF for both methods and its difference with n=500
4.4 SSE of ACF for both methods and its difference with n=1000
4.5 SSE of ACF for both methods and its difference with n=2000
4.6 SSE of ACF for the MYW method with n=200, 500, 1000 and 2000
4.7 Difference of SSE of ACF with n=200, 500, 1000 and 2000 for p=2, d=
4.8 Difference of SSE of ACF for n=500 with p=
4.9 Difference of SSE of ACF for n=500 with p=
4.10 Difference of SSE of ACF for n=500 with p=
4.11 Difference of SSE of ACF for n=500 with p=
4.12 Difference of SSE of ACF for n=1000 with p=
4.13 Difference of SSE of ACF for n=1000 with p=
4.14 Difference of SSE of ACF for n=1000 with p=
4.15 Difference of SSE of ACF for n=1000 with p=
4.16 Difference of SSE of ACF for n=2000 with p=
4.17 Difference of SSE of ACF for n=2000 with p=
4.18 Difference of SSE of ACF for n=2000 with p=
4.19 Difference of SSE of ACF for n=2000 with p=
4.20 Difference between SSE of ACF for the two methods with p=
4.21 SSE of ACF for the MYW method with p=
4.22 Difference between SSE of ACF for the two methods with p=
4.23 SSE of ACF for the MYW method with p=
4.24 Difference between SSE of ACF for the two methods with p=
4.25 SSE of ACF for the MYW method with p=
4.26 Difference between SSE of ACF for the two methods with p=
4.27 SSE of ACF for the MYW method with p=
4.28 Difference between SSE of ACF for the two methods with p=
4.29 SSE of ACF for the MYW method with p=
Chapter 1

Introduction

1.1 Introduction

In recent years, great interest has been devoted to the development and application of time series methods. There are two categories of methods for time series analysis: frequency-domain methods and time-domain methods. The former includes spectral analysis and wavelet analysis; the latter includes autocorrelation and cross-correlation analysis. These methods are commonly applied to astronomical phenomena, weather patterns, financial asset prices, economic activity, etc. The time series models introduced below include simple autoregressive (AR) models, simple moving-average (MA) models, mixed autoregressive moving-average (ARMA) models, seasonal models, unit-root nonstationarity, and fractionally differenced models for long-range dependence. The most fundamental class of time series models is the autoregressive moving average (ARMA) model. Techniques to estimate the parameters of the ARMA model fall into two classes. One is to construct a likelihood function and derive the parameters by maximizing it with some iterative nonlinear optimization procedure. The other obtains the parameters in two steps: first estimate the autoregressive (AR) coefficients, then derive the spectral parameters of the moving-average (MA) part. Within the scope of our work, the focus is on methods for estimating the AR parameters. After reviewing several commonly used AR parameter estimation methods, a new multistep Yule-Walker estimation method is introduced, which increases the number of equations used in the Yule-Walker method in order to enhance the fitting accuracy. The criterion used to compare the performance of the methods is the matching of ACFs between the model-generated series and the original series, which was introduced in detail by Xia and Tong (2010).

1.2 AR model and its estimation

Various models have been developed to mimic observed time series. However, to some extent all models are wrong: no model can exactly reflect the observed series, and inaccuracy always exists in the postulated model. The best we can do is to find a model which captures the characteristics of the series to the maximum extent, and to fit this wrong model with a parameter estimation method which reduces the estimation bias effectively. Our work will focus on AR models and their
estimation methods, in order to evaluate the performance of different parameter estimation methods for fitting the AR model.

The autoregressive (AR) model, popularized by Box and Jenkins in 1970, represents a linear regression relationship of the current value of a series on one or more past values of the series. Early in the mid-seventies, autoregressive modeling was first introduced in nuclear engineering and was widely adopted in other industries soon after. Nowadays, autoregressive modeling is a popular means for identifying, monitoring, malfunction detection and diagnosis of system performance. An autoregressive model depends on a limited number of parameters, which are estimated from time series data. Many techniques exist for computing AR coefficients, among which the two main categories are Least Squares and Burg's method. A wide range of implementations of these techniques can be found in MatLab. When using the various algorithms from different sources, two points deserve attention: one is to check whether or not the mean has already been removed from the series; the other is whether the signs of the coefficients are inverted in the definitions or assumptions. Comparisons of the finite-sample accuracies of these methods have been made, and the results provide some useful insights into the behavior of these estimators. It has been shown that these estimation techniques lead to approximately the same parameter estimates for large samples. Nevertheless, either the Yule-Walker or the Least Squares method is used more frequently than the others, mostly for historical reasons.

Among all of these methods, the most common is the so-called Yule-Walker method, which applies least squares regression to the Yule-Walker system of equations. The basic steps to obtain the Yule-Walker equations are first to multiply the AR model by its lagged values with lag n = 1, 2, ..., p, and then to take expectations and normalize (Box and Jenkins, 1976). However, previous research has shown that on some occasions the Yule-Walker estimation method leads to poor parameter estimates with large bias, even for moderately sized data samples. In our study, we propose an improvement on the Yule-Walker method, which is to increase the number of equations in the Yule-Walker system, and we investigate whether this helps to enhance the parameter estimation accuracy. Monte Carlo analysis is used to generate simulation results for this new method, and real data are also used to check its performance.

1.3 Organization of this Thesis

The outline of this work is as follows. In Chapter 1, the aim and purpose of this work are presented and a general introduction to the approaches to parameter estimation for autoregressive models is given. In Chapter 2, the literature is reviewed on the definition of univariate time series, the background of the time series model classes and the properties of the autoregressive model. Emphasis is given to the methods for estimating the parameters of the AR(p) model, including the Maximum Likelihood method, Least Squares method, Yule-Walker method and Burg's method. The Monte Carlo analysis used in the numerical examples of the following chapters is also briefly described. In Chapter 3, we present our proposed modification of the Yule-Walker method. The bias of the Yule-Walker estimator in finite samples, which leads to the poor performance of the Yule-Walker method, is demonstrated, and theoretical support for the better estimation performance of the Multistep Yule-Walker method is given. Simulation results for autoregressive processes supporting the modification are illustrated in Chapter 4, while in Chapter 5 we illustrate our findings by applying the Multistep Yule-Walker modeling method to the daily exchange rate of the Japanese Yen against the US Dollar. Finally, conclusions and some remarks on further study are presented in Chapter 6.
Chapter 2

Literature Review

2.1 Univariate Time Series Background

A time series is a set of observations {x_t} recorded sequentially at specific times t, over equal time increments or in continuous time. If single observations are recorded, the series is called a univariate time series. Univariate time series can be extended to deal with vector-valued data, which means more than one observation is recorded at a time; this leads to multivariate time series models, in which vectors are used for the multivariate data. Another extension is the forcing time series, on which the observed series may not have a causal effect. The difference between a multivariate series and a forcing series is that the forcing series can be controlled by experimental design, which means it is deterministic, while the multivariate series is entirely stochastic. We will only cover univariate time series in this thesis, so hereinafter a univariate time series is simply referred to as a
time series.

Time series can be either discrete or continuous. A discrete-time series is one in which the observation times form a discrete set, for example when observations are recorded at fixed time intervals. Continuous-time series are obtained when the observations are recorded continuously over time. Time series are used in a wide range of areas: they arise when monitoring engineering processes, recording stock prices in financial markets or tracking corporate business metrics, etc. Because data points taken over time may have an internal structure, such as autocorrelation, trend or seasonal variation, time series analysis has been developed to account for these issues and to extract the information behind the series. For example, in the financial industry, time series analysis is used to observe the trends in the prices of stocks, bonds or other financial assets over time; it can also be used to compare the changes of these financial variables with other comparable variables over the same period. To be more specific, to analyze how the daily closing prices of a given stock change over a period of one year, one obtains the closing prices of the stock for each day of the year and records them in chronological order as a time series with daily intervals and a one-year span.

There are a number of approaches to modeling time series, from the simplest models to more complicated ones which take trend, seasonal and residual effects into account. Decomposition is one approach: the time series is decomposed into a trend, a seasonal and a residual component. Another approach is to analyze the series in the frequency domain, which is the common method in scientific and engineering applications. We will not cover the complicated models in this work and only outline a few of the most common approaches below.

The simplest model for a time series is one with no trend or seasonal component. The observations are simply independent and identically distributed (i.i.d.) random variables with zero mean, written X_1, X_2, .... We call the sequence of random variables {X_t} an i.i.d. time series if for any positive integer n and real numbers x_1, x_2, ..., x_n,

$$P[X_1 < x_1, \ldots, X_n < x_n] = P[X_1 < x_1] \cdots P[X_n < x_n] = F(x_1) \cdots F(x_n) \qquad (2.1)$$

where F(.) is the cumulative distribution function of the i.i.d. random variables X_1, X_2, .... In this model there is no dependence between observations. In particular, for all h ≥ 1 and all x, x_1, ..., x_n,

$$P[X_{n+h} < x \mid X_1 = x_1, \ldots, X_n = x_n] = P[X_{n+h} < x], \qquad (2.2)$$

so X_1, ..., X_n contain no useful information for forecasting the behavior of X_{n+h}. The function f that minimizes the mean squared error E[(X_{n+h} - f(X_1, ..., X_n))^2], given the values of X_1, ..., X_n, is the zero function. This property makes an i.i.d. series rather uninteresting for forecasting, but it plays a critical part as a building block for more complex time series models.

In other time series a trend is clear in the data pattern, so the zero-mean model is no longer suitable. We then have the following model:

$$X_t = m_t + Y_t \qquad (2.3)$$
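As a small illustration of the decomposition in (2.3), the sketch below generates hypothetical data (the trend is taken to be linear and all numbers are made up purely for illustration), estimates the trend by ordinary least squares and removes it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)

# X_t = m_t + Y_t : a slowly varying trend plus a zero-mean noise series
m_t = 0.05 * t                    # assumed linear trend component
Y_t = rng.normal(0.0, 1.0, n)     # zero-mean i.i.d. component
X_t = m_t + Y_t

# estimate the trend by least squares regression of X_t on time
coef = np.polyfit(t, X_t, deg=1)  # [slope, intercept] of the fitted trend
detrended = X_t - np.polyval(coef, t)

print(coef[0])                    # slope close to the assumed 0.05
print(detrended.mean())           # residual mean is (numerically) zero
```

Once the estimated trend is subtracted, the detrended series can be analyzed with the zero-mean models discussed below.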
The model separates the time series into two parts: m_t is the trend component, a function which changes slowly over time, and Y_t is a time series with zero mean.

A common assumption in many time series techniques is that the data are stationary. If a time series {X_t} has properties similar to those of its time-shifted versions, we can loosely say that the series is stationary. To be more precise about these properties, we focus on the first- and second-order moments of {X_t}. The first-order moment of {X_t} is the mean function µ_x(t) = E(X_t). Usually we assume {X_t} is a time series with E(X_t^2) < ∞. For the second-order moment, we introduce the concept of covariance. The covariance γ_i = Cov(X_t, X_{t-i}) is called the lag-i autocovariance of {X_t}. It has two important properties: (a) γ_0 = Var(X_t), and (b) γ_{-i} = γ_i. The second property holds because

$$\gamma_{-i} = Cov(X_t, X_{t-(-i)}) = Cov(X_{t+i}, X_t) = Cov(X_{t_1}, X_{t_1-i}) = \gamma_i, \quad \text{where } t_1 = t + i.$$

When the autocovariance is normalized by the variance, the autocorrelation (ACF) is obtained. For a stationary process, the mean, variance and autocorrelation structure do not change over time. So if we have a series whose above statistical properties are constant, with no periodic fluctuations such as seasonality, we can call it stationary. Stationarity, however, has more precise mathematical definitions. In Section 2.4.1, a fuller introduction to the stationarity of autoregressive processes is given for our purposes.
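The sample analogues of the autocovariance and autocorrelation are straightforward to compute. A minimal sketch (illustrative helper names; the conventional biased 1/n normalization is assumed) that also checks properties (a) and (b) above:

```python
import numpy as np

def acovf(x, lag):
    """Sample lag-i autocovariance gamma_i = Cov(X_t, X_{t-i})."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    i = abs(lag)                       # property (b): gamma_{-i} = gamma_i
    return np.dot(xc[i:], xc[:n - i]) / n

def acf(x, lag):
    """Sample autocorrelation rho_i = gamma_i / gamma_0."""
    return acovf(x, lag) / acovf(x, 0)

rng = np.random.default_rng(1)
x = rng.normal(size=1000)              # white noise: all rho_i, i > 0, near zero

print(acf(x, 0))                       # rho_0 = 1 by construction
print(acovf(x, 3) == acovf(x, -3))     # symmetry gamma_i = gamma_{-i}
```

Note that `acovf(x, 0)` is exactly the (biased) sample variance, matching property (a).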
2.2 Time Series Models

A time series model for an observed series {X_t} is a specification of the joint distributions of the sequence of random variables {X_t}. Models for time series data take many different forms and represent different stochastic processes. We have already introduced the simplest model, in which the observations are i.i.d. random variables with zero mean and no trend or seasonal component. Three broad classes of practical importance for modeling the variation of a process exist: the autoregressive (AR) models, the moving average (MA) models and the integrated (I) models.

An autoregressive (AR) model is a linear regression of the current value of the series on one or more past values of the series; a detailed description is given in the following section. A moving average (MA) model is a linear regression of the current value of the series on the random shocks of one or more past values of the series. The random shocks at each point are assumed to come from the same distribution, typically a normal distribution with zero mean and constant finite variance. In a moving average model, these random shocks are propagated to future values of the time series, which distinguishes it from the other model classes. Fitting MA estimates is more complicated than fitting AR models, because the error terms in MA models are not observable. This means that iterative nonlinear fitting procedures must be used for MA model estimation instead of linear least squares. We will not go further into
this topic in this study.

New models, such as the autoregressive moving average (ARMA) model and the autoregressive integrated moving average (ARIMA) model, are obtained by combining the fundamental classes. The ARMA model is a combination of an autoregressive (AR) model and a moving average (MA) model. The ARIMA model, introduced by Box and Jenkins (1976), predicts the mean value of a time series as a linear combination of its own past values and past errors, and requires long time series data. Box and Jenkins also introduced the concept of seasonal non-seasonal (S-NS) ARIMA models for describing a seasonal time series, and provided an iterative procedure for developing such models. Although seasonality violates the stationarity assumption, such structure can still be modeled; the autoregressive fractionally integrated moving average (ARFIMA) model has also been introduced, to incorporate long-range dependence explicitly into the time series model.

All of the above classes represent a linear relationship between the current data point and previous data points. In empirical situations involving more complicated time series, linear models are not sufficient to capture all the information. It is also an interesting topic to consider the nonlinear dependence of a series on previous data points, which can generate a chaotic time series. Models representing the change of variance over time, also called heteroskedasticity, have therefore been introduced. These models are called autoregressive conditional heteroskedasticity (ARCH) models, and this model class has a wide variety of representations, such as GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc. In the ARCH model class, changes in variability are related to recent past values of the observed series. Similarly, the GARCH model assumes that there is correlation between the variance of a time series and its own lagged values. The ARCH model class has been widely used in modeling and forecasting several kinds of time series, including inflation, stock prices, exchange rates and interest rates.
2.3 Autoregressive (AR) Model

This study focuses on one specific type of time series model: the autoregressive (AR) model. The AR(p) model was popularized by Box and Jenkins in 1970 (Box, 1994). As mentioned above, the AR(p) model is a linear regression of the current value of the series on past values of the series. The value p is called the order of the AR model, which means that the current value is represented by p past values of the series. An autoregressive process of order p is here taken to be a zero-mean stationary process. To better understand the general autoregressive model, we start from the simplest case, the AR(1) model:

$$X_t = \phi_0 + \phi_1 X_{t-1} + a_t \qquad (2.4)$$
For the AR(1) model, conditional on the past observation, we have

$$E(X_t \mid X_{t-1}) = \phi_0 + \phi_1 X_{t-1} \qquad (2.5)$$

$$Var(X_t \mid X_{t-1}) = Var(a_t) = \sigma_a^2 \qquad (2.6)$$

Under the AR(1) model, conditional on X_{t-1}, the current data point is centered around φ_0 + φ_1 X_{t-1} with standard deviation σ_a, and the earlier values X_{t-i}, i > 1, carry no additional information. In general, however, a single past data point may not be enough to determine the conditional expectation of X_t, which inspires us to take more past data points into the model to give a better description of the current data point. Thus the model is extended to the more flexible and general AR(p) model, which satisfies the following equation:

$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \cdots + \phi_p X_{t-p} + a_t \qquad (2.7)$$

where p is the order and {a_t} is assumed to be a white noise series with zero mean and constant finite variance σ_a². The representation of the AR(p) model has the same form as a linear regression model, with X_t serving as the dependent variable and the lagged values X_{t-1}, X_{t-2}, ..., X_{t-p} as the explanatory variables. Thus the autoregressive model has several properties similar to those of the simple linear regression model, although there are also some differences between the two models. In this model, the past p values X_{t-i} (i = 1, ..., p) jointly determine the conditional expectation of X_t given the past data. The coefficients φ_1, φ_2, ..., φ_p are such that all the roots of the polynomial equation

$$x^p - \phi_1 x^{p-1} - \cdots - \phi_p = 0 \qquad (2.8)$$

fall inside the unit circle; equivalently, the polynomial

$$A(z) = 1 - \phi_1 z - \cdots - \phi_p z^p \qquad (2.9)$$

has all its zeros outside the unit circle. This condition ensures the stationarity of the autoregressive process, which is the main content of the following section.
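The stationarity condition can be checked numerically from the coefficients. A small sketch (an illustrative helper, not from the thesis) that tests whether all zeros of A(z) = 1 − φ₁z − ⋯ − φₚzᵖ lie outside the unit circle:

```python
import numpy as np

def is_stationary(phi):
    """Check whether all zeros of A(z) = 1 - phi_1 z - ... - phi_p z^p
    lie outside the unit circle (the AR stationarity condition)."""
    phi = np.asarray(phi, dtype=float)
    # np.roots expects coefficients ordered from the highest power down:
    # A(z) = -phi_p z^p - ... - phi_1 z + 1
    poly = np.r_[-phi[::-1], 1.0]
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary([0.5]))        # AR(1) with |phi_1| < 1: stationary
print(is_stationary([1.2]))        # explosive AR(1): not stationary
print(is_stationary([0.5, 0.3]))   # an AR(2) satisfying the condition
```

For an AR(1) with coefficient φ₁, the single zero is 1/φ₁, so the condition reduces to the familiar |φ₁| < 1.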
2.4 AR Model Properties

2.4.1 Stationarity

The foundation of time series analysis is stationarity. We say a time series {X_t} is strictly stationary if the joint distribution of (X_{t_1}, ..., X_{t_k}) is identical to that of (X_{t_1+t}, ..., X_{t_k+t}) for all t, where k is an arbitrary positive integer and (t_1, ..., t_k) is a collection of k positive integers representing the recording times. To put it more simply, if the joint distribution of (X_{t_1+t}, ..., X_{t_k+t}) is invariant under time shifts, the time series is strictly stationary. This condition is very strong and is usually used in theoretical research; for real-world time series it is hard to verify. Thus we use another version of stationarity, called weak stationarity. As the name suggests, it is a weaker form of stationarity, which holds if both the mean of X_t and the covariance between X_t and X_{t-i} are time-invariant, where i is an arbitrary integer. That is to say, for a time series {X_t} to be weakly stationary, it should satisfy two conditions: (a) constant mean, E(X_t) = µ; and (b) Cov(X_t, X_{t-i}) = γ_i depends only on i. To illustrate weak stationarity, take a series of T observed data points {X_t | t = 1, ..., T} as an example. In the time plot of a weakly stationary series, the values fluctuate within a fixed interval with a constant variation. In practical applications, weak stationarity has wider use and enables one to make inferences concerning future observations. Implicit in the definition of weak stationarity is that the first two moments of {X_t} are finite. From the definitions, a strictly stationary time series {X_t} with finite first two moments is also weakly stationary, so strict stationarity implies weak stationarity; the converse does not hold in general. However, if the time series {X_t} is normally distributed, the two notions of stationarity are equivalent, due to the special properties of the normal distribution.

2.4.2 ACF and PACF for AR Model

Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. Autocorrelation and cross-correlation analysis, which examine serial dependence, belong to the latter class.
In linear time series analysis, correlation is of great importance for understanding the various classes of models. Special attention is paid to the correlations between the variable and its past values. This concept of correlation is generalized to autocorrelation, which is the basic tool for studying a stationary time series; in other texts it is also referred to as serial correlation. Consider a weakly stationary time series {X_t}; the linear dependence between X_t and its past values X_{t-i} is of interest. We call the correlation coefficient between X_t and X_{t-i} the lag-i autocorrelation of X_t, commonly denoted by ρ_i. Specifically, we define

$$\rho_i = \frac{Cov(X_t, X_{t-i})}{\sqrt{Var(X_t)\,Var(X_{t-i})}} = \frac{Cov(X_t, X_{t-i})}{Var(X_t)} = \frac{\gamma_i}{\gamma_0} \qquad (2.10)$$

Under the weak stationarity condition, Var(X_t) = Var(X_{t-i}) and ρ_i is a function of i only. From the definition, we have ρ_0 = 1, ρ_{-i} = ρ_i, and −1 ≤ ρ_i ≤ 1. In addition, a weakly stationary series {X_t} is not autocorrelated if and only if ρ_i = 0 for all i > 0.

Here we also introduce the partial autocorrelation function (PACF) of a stationary time series, in order to understand further properties of the series. The PACF is a function of the ACF and is a powerful tool for determining the order p of an AR model. A simple yet effective way to introduce the PACF is to consider the following AR models in consecutive orders:

$$x_t = \Phi_{0,1} + \Phi_{1,1} x_{t-1} + e_{1t}$$
$$x_t = \Phi_{0,2} + \Phi_{1,2} x_{t-1} + \Phi_{2,2} x_{t-2} + e_{2t}$$
$$x_t = \Phi_{0,3} + \Phi_{1,3} x_{t-1} + \Phi_{2,3} x_{t-2} + \Phi_{3,3} x_{t-3} + e_{3t}$$
$$x_t = \Phi_{0,4} + \Phi_{1,4} x_{t-1} + \Phi_{2,4} x_{t-2} + \Phi_{3,4} x_{t-3} + \Phi_{4,4} x_{t-4} + e_{4t}$$
$$\vdots$$

where Φ_{0,j}, Φ_{i,j} and e_{jt} are, respectively, the constant term, the coefficient of x_{t-i} and the error term of an AR(j) model. These equations all have the form of a multiple linear regression, so the PACF estimators, being coefficients in these models, can be estimated by least squares regression. Specifically, the estimate of Φ_{1,1} in the first equation is called the lag-1 sample PACF of x_t; the estimate of Φ_{2,2} in the second equation is the lag-2 sample PACF of x_t; the estimate of Φ_{3,3} in the third equation is the lag-3 sample PACF of x_t, and so on. From the definition, the lag-2 PACF Φ_{2,2} shows the added contribution of x_{t-2} to x_t over the AR(1) model x_t = Φ_0 + Φ_1 x_{t-1} + e_{1t}; the lag-3 PACF shows the added contribution of x_{t-3} to x_t over an AR(2) model, and so on. Therefore, for an AR(p) model, the lag-p sample PACF should not be zero, but the estimate of Φ_{j,j} should be close to zero for all j > p. This means that the sample PACF cuts off at lag p, and this property is often used to determine the order p of the autoregressive model. The following further properties of the sample PACF of a stationary AR(p) model can be obtained: the estimate of Φ_{p,p} converges to Φ_p as the sample size T goes to infinity, and the asymptotic variance of the estimate of Φ_{j,j} is 1/T for j > p.
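The successive-regressions definition of the sample PACF translates directly into code. A sketch (illustrative helper names) estimating Φ_{j,j} as the last coefficient of an OLS-fitted AR(j) regression, for j = 1, …, max_lag:

```python
import numpy as np

def pacf_ols(x, max_lag):
    """Lag-j sample PACF: the coefficient of x_{t-j} in an AR(j)
    regression (with constant) fitted by least squares."""
    x = np.asarray(x, dtype=float)
    out = []
    for j in range(1, max_lag + 1):
        # design matrix: constant plus lags x_{t-1}, ..., x_{t-j}
        X = np.column_stack([np.ones(len(x) - j)] +
                            [x[j - i:len(x) - i] for i in range(1, j + 1)])
        y = x[j:]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        out.append(beta[-1])          # Phi_{j,j}
    return np.array(out)

# simulate an AR(1): the sample PACF should cut off after lag 1
rng = np.random.default_rng(2)
n = 2000
a = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + a[t]

p = pacf_ols(x, 4)
print(p)   # lag 1 near 0.6; lags 2-4 near zero (order ~ 1/sqrt(T))
```

The cut-off behavior, together with the 1/T asymptotic variance, is what underlies the usual ±2/√T bands drawn on sample PACF plots.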
2.5 Basic Methods for Parameter Estimation

The AR model is widely used in science, engineering, econometrics, biometrics, geophysics, etc. When a series is to be modeled by an AR model, the appropriate order p must be determined and the parameters of the model must be estimated. A number of methods are available for estimating the parameters of this model, of which the following are perhaps the most commonly used.

2.5.1 Maximum Likelihood Estimation (MLE)

The Maximum Likelihood method is widely used for estimation, and time series analysis also adopts it to estimate the parameters of the stationary ARMA(p, q) model. To use the Maximum Likelihood method, assume that the time series {X_t} follows a Gaussian distribution. Consider the general ARMA(p, q) model

$$X_t = \phi_1 X_{t-1} + \cdots + \phi_p X_{t-p} + a_t - \theta_1 a_{t-1} - \cdots - \theta_q a_{t-q} \qquad (2.11)$$

where µ = E(X_t) and a_t ~ N(0, σ_a²). The joint probability density of a = (a_1, a_2, ..., a_n) is

$$P(a \mid \phi, \mu, \theta, \sigma_a^2) = (2\pi\sigma_a^2)^{-n/2} \exp\!\Big[-\frac{1}{2\sigma_a^2}\sum_{t=1}^{n} a_t^2\Big] \qquad (2.12)$$

Setting X_0 and a_0 as the initial values for X and a, we get the log-likelihood function

$$\ln L_*(\phi, \mu, \theta, \sigma_a^2) = -\frac{n}{2}\ln 2\pi\sigma_a^2 - \frac{S_*(\phi, \mu, \theta)}{2\sigma_a^2} \qquad (2.13)$$
where

$$S_*(\phi, \mu, \theta) = \sum_{t=1}^{n} a_t^2(\phi, \mu, \theta \mid X_0, a_0, X) \qquad (2.14)$$

Maximizing ln L_* for the given series data yields the Maximum Likelihood estimators. Since the above log-likelihood function is based on the initial conditions, the estimators of φ, µ and θ are called the conditional Maximum Likelihood estimators. The estimator of σ_a² is obtained as

$$\hat\sigma_a^2 = \frac{S_*(\hat\phi, \hat\mu, \hat\theta)}{n - (2p + q + 1)} \qquad (2.15)$$

after the estimates of φ, µ and θ are calculated.

Alternatively, exploiting the stationarity of the time series, an improvement was proposed by Box, Jenkins and Reinsel (1994), treating the unknown future values in the forward form and the unknown past values in the backward form. This improvement leads to the unconditional log-likelihood function

$$\ln L(\phi, \mu, \theta, \sigma_a^2) = -\frac{n}{2}\ln 2\pi\sigma_a^2 - \frac{S(\phi, \mu, \theta)}{2\sigma_a^2} \qquad (2.16)$$

with the unconditional sum of squares function

$$S(\phi, \mu, \theta) = \sum_{t=-\infty}^{n} [E(a_t \mid \phi, \mu, \theta)]^2 \qquad (2.17)$$

Similarly, the estimator of σ_a² is calculated as

$$\hat\sigma_a^2 = \frac{S(\hat\phi, \hat\mu, \hat\theta)}{n} \qquad (2.18)$$
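For a pure AR model with Gaussian errors (θ = 0), maximizing the conditional log-likelihood (2.13) over φ is equivalent to minimizing the conditional sum of squares S_*(φ). A sketch for an AR(1) (simulated data; a grid search is used purely for transparency, not efficiency):

```python
import numpy as np

# simulate an AR(1) with phi = 0.7 and unit innovation variance
rng = np.random.default_rng(3)
n = 1000
a = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + a[t]

def cond_sum_sq(phi, x):
    """Conditional sum of squares S_*(phi) = sum_t a_t(phi)^2 for an
    AR(1), conditioning on the first observation as the initial value."""
    resid = x[1:] - phi * x[:-1]
    return np.sum(resid ** 2)

# conditional ML under Gaussian errors = minimizing S_*(phi) over phi
grid = np.linspace(-0.99, 0.99, 397)
S = np.array([cond_sum_sq(p, x) for p in grid])
phi_hat = grid[np.argmin(S)]
sigma2_hat = S.min() / (n - 1)        # plug-in innovation-variance estimate

print(phi_hat)       # close to the true value 0.7
print(sigma2_hat)    # close to the true value 1.0
```

In practice the minimization would be done analytically (for pure AR it reduces to least squares) or with a numerical optimizer, but the grid makes the equivalence between (2.13) and S_* minimization easy to see.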
31 CHAPTER 2. LITERATURE REVIEW 20 The unconditional Maxiu Likelihood ethod is efficient in the situations for seasonal odels, or nonstationary odels or relatively short series. Both the conditional and unconditional likelihood function are approxiations. The exact closed for is very difficult to derive. Newbold (1974) illustrated an expression for the ARMA(p,q) odel. One thing to ention here is that when X 1, X 2,..., X n are independent and identically distributed (i.i.d), when n is sufficiently large, the Maxiu Likelihood estiators follow approxiately norally distributions, the variances of which are at least as sall as those of other asyptotically norally distributed estiators (Lehaann, 1983). Even if {X t } is not noral distributed, Equation 2.16 still can be used as a easure of goodness to fit the odel and the estiator calculated by axiizing Equation 2.16 is still called Maxiu Likelihood estiators. For the scope of our study, we can obtained the ML estiator for the AR process setting θ = Least Square Estiation Method (LS) Regression analysis is possibly the ost widely used statistical ethod in data analysis. Aong the various regression ethods, Least Square is well developed for the linear regression odels and been used frequently for estiation. The principal of Least Square approach is to iniize the standard su of squares of the errors ter ϵ t. AR odel is a siple linear regression odel and it utilizes
the least squares method to fit a model by minimizing the sum of squared errors. Consider the following AR(p) model:

Y(t) = \phi_1 Y(t-1) + \phi_2 Y(t-2) + \cdots + \phi_p Y(t-p) + \varepsilon_t   (2.19)

The shock $\varepsilon_t$ satisfies the following assumptions:

1. $E(\varepsilon_t) = 0$
2. $E(\varepsilon_t^2) = \sigma_\varepsilon^2$
3. $E(\varepsilon_t \varepsilon_k) = 0$ for $t \neq k$
4. $E(Y_{t-j}\, \varepsilon_t) = 0$ for $j \geq 1$

That is, $\varepsilon_t$ is a zero-mean white noise series of constant variance $\sigma_\varepsilon^2$, uncorrelated with past observations. Let $\phi$ denote the vector of unknown parameters,

\phi = [\phi_1, \ldots, \phi_p]^T   (2.20)

The AR model parameters in Equation 2.19 are estimated by minimizing the sum of squared errors $\sum_t \varepsilon_t^2$. The Least Squares estimate of $\phi$ is thus defined as

\hat\phi_{LS} = \arg\min_\phi \sum_{t=p+1}^{n} \Big[y(t) - \sum_{i=1}^{p} \phi_i\, y(t-i)\Big]^2   (2.21)

Denote

\tilde{Y}(t) = [y(t-1), \ldots, y(t-p)]^T   (2.22)

Then Equation (2.21) yields

\hat\phi_{LS} = \Big[\sum_{t=p+1}^{n} \tilde{Y}(t)\tilde{Y}(t)^T\Big]^{-1} \Big[\sum_{t=p+1}^{n} \tilde{Y}(t)\, y(t)\Big]   (2.23)
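The estimator in (2.21)–(2.23) can be sketched in a few lines of numpy; here the least-squares problem is solved through a QR factorization rather than the explicit normal equations. The function name and the simulated AR(2) series are our own illustration, not part of the thesis:

```python
import numpy as np

def fit_ar_ls(y, p):
    """Least-squares AR(p) fit: minimize sum_{t=p+1}^n (y_t - sum_i phi_i y_{t-i})^2."""
    n = len(y)
    # Design matrix: row for time t holds [y(t-1), ..., y(t-p)] as in (2.22).
    Y = np.column_stack([y[p - i: n - i] for i in range(1, p + 1)])
    b = y[p:]
    # Solve via QR: Y = QR  =>  R phi = Q^T b.
    Q, R = np.linalg.qr(Y)
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.default_rng(1)
n = 20000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

phi_ls = fit_ar_ls(y, 2)
print(phi_ls)  # close to the true coefficients [0.5, -0.3]
```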
Detailed information on the above algorithm was given by Kay and Marple. The LS method uses the normal equations to solve the linear system. There are two common methods for solving the normal equations: Cholesky factorization and QR factorization. While Cholesky factorization is faster in computation, QR factorization has better numerical properties. In the Least Squares method, we assume that the earlier observations receive the same weight as recent observations. The least-squares normal equations give the linear system $Ax = b$ as follows:

\begin{bmatrix}
\sum y_{t-1}^2 & \sum y_{t-1}y_{t-2} & \cdots & \sum y_{t-1}y_{t-p} \\
\sum y_{t-1}y_{t-2} & \sum y_{t-2}^2 & \cdots & \sum y_{t-2}y_{t-p} \\
\vdots & \vdots & & \vdots \\
\sum y_{t-1}y_{t-p} & \sum y_{t-2}y_{t-p} & \cdots & \sum y_{t-p}^2
\end{bmatrix}
\begin{bmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_p \end{bmatrix}
=
\begin{bmatrix} \sum y_t y_{t-1} \\ \sum y_t y_{t-2} \\ \vdots \\ \sum y_t y_{t-p} \end{bmatrix}

where all sums run over $t = p+1, \ldots, N$. QR factorization (Golub and Van Loan, 1996) can be used to solve this linear system. Rewriting the normal equations $A^T A x = A^T b$ using the QR factorization $A = QR$:

A^T A x = A^T b
R^T Q^T Q R x = R^T Q^T b
R^T R x = R^T Q^T b   (since $Q^T Q = I$)
R x = Q^T b   (since $R$ is nonsingular)

The solution of this system gives the model parameter estimates. In this method, we assume that the earlier observations receive the same weight
as recent observations. However, the recent observations may be more important for describing the current behavior of the process, so the discounted least squares method was proposed, in which older observations receive proportionally less weight than recent ones.

Yule-Walker Method (YW)

The Yule-Walker method, also called the autocorrelation method, is a numerically simple approach to estimating the AR parameters of an ARMA model. In this method, an autoregressive (AR) model is again fitted by minimizing the forward prediction error in a least-squares sense. The difference is that the Yule-Walker method solves the Yule-Walker equations, which are formed from sample covariances. A stationary autoregressive (AR) process $\{Y_t\}$ of order $p$ can be fully identified from its first $p+1$ autocovariances $\mathrm{cov}(Y_t, Y_{t+k})$, $k = 0, 1, \ldots, p$, through the Yule-Walker equations. The Yule-Walker equations are therefore employed to estimate the AR parameters and the disturbance variance from the first $p+1$ sample autocovariances. Rewriting Equation 2.19, we can express $Y_t$ in the form

Y_t = \sum_{j=1}^{p} \phi_j Y_{t-j} + \varepsilon_t   (2.24)

Multiplying both sides of Equation 2.24 by $Y_{t-j}$, $j = 0, 1, \ldots, p$, and taking expectations, we obtain the Yule-Walker equations

\Gamma_p \phi = \gamma_p   (2.25)
where $\Gamma_p$ is the covariance matrix $[\gamma(i-j)]_{i,j=1}^{p}$ and $\gamma_p = (\gamma(1), \ldots, \gamma(p))^T$. Replacing the covariances $\gamma(j)$ by the corresponding sample covariances $\hat\gamma(j)$, the Yule-Walker estimator of $\phi$ is given by (Young and Jakeman, 1979)

\hat\phi_{YW} =
\begin{bmatrix}
\hat\gamma(0) & \hat\gamma(1) & \cdots & \hat\gamma(p-1) \\
\hat\gamma(1) & \hat\gamma(0) & \cdots & \hat\gamma(p-2) \\
\vdots & \vdots & & \vdots \\
\hat\gamma(p-1) & \hat\gamma(p-2) & \cdots & \hat\gamma(0)
\end{bmatrix}^{-1}
\begin{bmatrix} \hat\gamma(1) \\ \hat\gamma(2) \\ \vdots \\ \hat\gamma(p) \end{bmatrix}   (2.26)

or, more compactly,

\hat\Gamma_p \hat\phi_{YW} = \hat\gamma_p   (2.27)

Here, the autocovariances can be replaced by autocorrelations (ACF): normalized by the variance, the autocovariance $\gamma_k$ becomes the autocorrelation $\rho_k$, with values in the interval $[-1, 1]$, so in these equations the terms autocovariance and autocorrelation can be used interchangeably. Various algorithms, such as the Least Squares algorithm or the Levinson-Durbin algorithm, can be used to solve the above linear Yule-Walker system. The Levinson-Durbin recursion computes the AR(p) parameters from the first $p$ autocorrelations very efficiently. The Toeplitz structure of the matrix in Equation 2.26 makes the computation convenient and makes the Yule-Walker method more computationally efficient than the Least Squares method. This computational simplicity makes Yule-Walker an attractive choice for many applications.
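A minimal sketch of the Yule-Walker estimator, solving the Toeplitz system (2.26) with the Levinson-Durbin recursion. The function names and the simulated AR(2) series are ours, not from the thesis:

```python
import numpy as np

def sample_acvf(y, max_lag):
    """Biased sample autocovariances gamma_hat(0..max_lag) (divide by N)."""
    y = y - y.mean()
    n = len(y)
    return np.array([np.dot(y[:n - k], y[k:]) / n for k in range(max_lag + 1)])

def levinson_durbin(gamma, p):
    """Solve the Toeplitz Yule-Walker system Gamma_p phi = gamma_p recursively.

    Returns the AR coefficients and the final prediction-error variance."""
    phi = np.zeros(p)
    phi_prev = np.zeros(p)
    v = gamma[0]  # order-0 prediction error variance
    for k in range(1, p + 1):
        # Reflection coefficient for order k.
        acc = gamma[k] - np.dot(phi_prev[:k - 1], gamma[k - 1:0:-1])
        refl = acc / v
        # Update the lower-order coefficients, then append the new one.
        phi[:k - 1] = phi_prev[:k - 1] - refl * phi_prev[:k - 1][::-1]
        phi[k - 1] = refl
        v *= 1.0 - refl ** 2
        phi_prev[:k] = phi[:k]
    return phi, v

rng = np.random.default_rng(2)
n = 20000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

gamma = sample_acvf(y, 2)
phi_yw, v = levinson_durbin(gamma, 2)
print(phi_yw, v)  # coefficients near [0.5, -0.3], variance near 1.0
```

Each recursion step costs O(p), so the full solve is O(p^2) instead of the O(p^3) of a generic linear solver, which is the computational advantage mentioned above.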
Burg's Estimation Method (B)

Burg's method belongs to a different class of estimation methods. It solves the lattice filter equations using the harmonic mean of the forward and backward squared prediction errors. It has been found to give very good performance with high accuracy, and it is regarded as the preferable method when the signal energy is non-uniformly distributed over a frequency range, which is often the case with audio signals. Burg's method is quite different from the Least Squares and Yule-Walker methods, which estimate the autoregressive parameters directly. Unlike the Least Squares method, which minimizes the residuals, Burg's method works with prediction errors. Unlike the Yule-Walker method, in which the estimated coefficients $\hat\phi_{p1}, \ldots, \hat\phi_{pp}$ are precisely the coefficients of the best linear predictor of $Y_{p+1}$ in terms of $Y_p, \ldots, Y_1$ under the assumption that the ACF of $Y_t$ coincides with the sample ACF at lags $1, \ldots, p$, Burg's method first estimates the reflection coefficients, defined as the last autoregressive parameter estimate at each model order $p$. The reflection coefficients provide unbiased estimates of the partial autocorrelation (PACF) coefficients. Under Burg's method, the PACF $\Phi_{11}, \Phi_{22}, \ldots, \Phi_{pp}$ is estimated by minimizing the sum of squares of the forward and backward one-step prediction errors with respect to the coefficients $\Phi_{ii}$. The Levinson-Durbin algorithm is used here as well to determine the parameter estimates: it recursively computes the successive intermediate reflection coefficients to derive the parameters of the AR model. Given an observed stationary zero-mean series
$Y(t)$, we denote by $u_i(t)$, $t = i+1, \ldots, n$, $0 \leq i < n$, the difference between $y_{n+1+i-t}$ and the best linear estimate of $y_{n+1+i-t}$ in terms of the $i$ preceding observations. Likewise, we denote by $v_i(t)$, $t = i+1, \ldots, n$, $0 \leq i < n$, the difference between $y_{n+1-t}$ and the best linear estimate of $y_{n+1-t}$ in terms of the $i$ subsequent observations. The quantities $u_i(t)$ and $v_i(t)$ are the so-called forward and backward prediction errors, and they satisfy the recursions

u_0(t) = v_0(t) = y_{n+1-t}   (2.28)
u_i(t) = u_{i-1}(t-1) - \Phi_{ii}\, v_{i-1}(t)   (2.29)
v_i(t) = v_{i-1}(t) - \Phi_{ii}\, u_{i-1}(t-1)   (2.30)

Burg's estimate $\Phi_{11}^{(B)}$ of $\Phi_{11}$ is obtained by minimizing

\delta_1^2 = \frac{1}{2(n-1)} \sum_{t=2}^{n} \left[u_1^2(t) + v_1^2(t)\right]   (2.31)

i.e. $\Phi_{11}^{(B)} = \arg\min \delta_1^2$. The values of $u_1(t)$, $v_1(t)$ and $\delta_1^2$ obtained from Equation 2.31 can then be substituted into the recursions above with $i = 2$, and Burg's estimate $\Phi_{22}^{(B)}$ of $\Phi_{22}$ is obtained. Continuing this recursion, we finally get $\Phi_{pp}^{(B)}$. For pure autoregressive models, Burg's method usually performs better, attaining a higher likelihood, than the Yule-Walker method.

2.6 Monte Carlo Simulation

Monte Carlo simulation is a method that takes sets of random numbers as input to repeatedly evaluate a deterministic model. The aim of Monte Carlo simulation
is to understand the impact of uncertainty and to develop plans to mitigate or otherwise cope with risk. The method is especially useful for uncertainty propagation problems, such as determining variation, assessing sensitivity to errors, or modeling the performance or reliability of a system without sufficient information. A simulation involving an extremely large number of model evaluations may only be feasible on supercomputers. Monte Carlo simulation is a sampling method: it randomly generates the inputs from probability distributions to simulate the process of sampling from an actual population. To use this method, we first choose a distribution for the inputs that matches the existing data or represents our current state of knowledge. There are several ways to present the data generated from the simulation, such as histograms, summary statistics, error bars, reliability predictions, tolerance zones, and confidence intervals. Monte Carlo simulation is an all-round method with a wide range of applications in various fields, and we can benefit greatly from it when analyzing the behavior of an activity, plan, or process that involves uncertainty. Whether dealing with variable market demand in economics, fluctuating costs in business, variation in a manufacturing process, or unpredictable weather data in meteorology, Monte Carlo simulation plays an important role. Though Monte Carlo simulation is powerful, its steps are quite simple. The following steps illustrate the common simulation procedure:

Step 1: Create a parametric model, $y = f(x_1, x_2, \ldots, x_q)$.
Step 2: Generate a set of random inputs, $x_1^i, x_2^i, \ldots, x_q^i$.
Step 3: Evaluate the model and store the result as $y^i$.
Step 4: Repeat steps 2 and 3 for $i = 1$ to $n$.
Step 5: Analyze the results using probability distributions, confidence intervals, etc.
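The five steps above can be sketched directly; the model $f$, the input distributions, and the sample size below are hypothetical choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: a parametric model y = f(x1, x2) -- here a simple toy function.
def f(x1, x2):
    return x1 ** 2 + x2

n = 100_000
results = np.empty(n)
for i in range(n):
    # Step 2: draw random inputs from the chosen distributions.
    x1 = rng.standard_normal()   # x1 ~ N(0, 1)
    x2 = rng.uniform(0.0, 1.0)   # x2 ~ U(0, 1)
    # Step 3: evaluate the model and store the result.
    results[i] = f(x1, x2)
# Step 4: the loop above repeats steps 2-3 for i = 1..n.

# Step 5: analyze the results (summary statistics, confidence interval).
mean = results.mean()
half_width = 1.96 * results.std(ddof=1) / np.sqrt(n)  # 95% CI half-width
print(mean, half_width)  # E[f] = E[x1^2] + E[x2] = 1 + 0.5 = 1.5
```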
Chapter 3

Multistep Yule-Walker Estimation Method

When introducing the Yule-Walker method, we noted its computational attractiveness; however, it also has a drawback. We have the unnormalized autocorrelation (also called the autocovariance)

\gamma_k = E[y(t)\, y(t-k)]   (3.1)

and the sample autocovariance

\hat\gamma_k = \frac{1}{N-k} \sum_{t=k+1}^{N} y(t)\, y(t-k)   (3.2)

In the Yule-Walker method, the AR(p) parameters depend on merely the first $p+1$ lags, $\hat\gamma_0$ to $\hat\gamma_p$. This subset of the available autocorrelation lags reflects only part of the information contained in the series, which means that the AR model
generated by the Yule-Walker method will match the autocorrelation behavior at the first $p+1$ lags well, but will represent the remaining autocorrelation lags, from $\gamma_{p+1}$ onwards, very poorly. Recognizing the poor performance of the straightforward application of the original Yule-Walker method, modifications have been proposed for better estimation performance. Several modifications build on the basic method, such as increasing the number of equations in the Yule-Walker system or increasing the order of the estimated model. The basic ideas of these modifications are very simple, but significant improvements in the quality of the estimates have been achieved. Different algorithms, and a wide range of claims about their relative performance, have been presented by a number of researchers. In our work, the focus will mainly be on clarifying and putting into proper perspective the former modification, which increases the number of Yule-Walker equations. We will call this method the Multistep Yule-Walker (MYW) method hereinafter. The following is a detailed description of this modification.

3.1 Multistep Yule-Walker Estimation (MYW)

To reflect the complete set of autocorrelations, it is better to take the autocorrelation lags beyond $p$ into account. Thus, the extended Yule-Walker system is proposed:
\begin{bmatrix}
\hat\gamma(0) & \hat\gamma(1) & \cdots & \hat\gamma(p-1) \\
\hat\gamma(1) & \hat\gamma(0) & \cdots & \hat\gamma(p-2) \\
\vdots & \vdots & & \vdots \\
\hat\gamma(p-1) & \hat\gamma(p-2) & \cdots & \hat\gamma(0) \\
\hat\gamma(p) & \hat\gamma(p-1) & \cdots & \hat\gamma(1) \\
\hat\gamma(p+1) & \hat\gamma(p) & \cdots & \hat\gamma(2) \\
\vdots & \vdots & & \vdots \\
\hat\gamma(p+m-1) & \hat\gamma(p+m-2) & \cdots & \hat\gamma(m)
\end{bmatrix}
\hat\phi_{MYW}
=
\begin{bmatrix}
\hat\gamma(1) \\ \hat\gamma(2) \\ \vdots \\ \hat\gamma(p) \\ \hat\gamma(p+1) \\ \hat\gamma(p+2) \\ \vdots \\ \hat\gamma(p+m)
\end{bmatrix}   (3.3)

or, more compactly,

\hat\Gamma\, \hat\phi_{MYW} = \hat\gamma   (3.4)

The Multistep Yule-Walker estimate $\hat\phi_{MYW}$ is obtained from the above system, which involves the higher-lag coefficients $\hat\gamma_k$, $k > p$. In this system, the number of equations exceeds the number of parameters. This overdetermined system of equations can be solved in the least-squares sense, so $\hat\phi_{MYW}$ is given by

\hat\phi_{MYW} = \arg\min_\phi \left\| \begin{bmatrix} \hat\gamma(0) & \cdots & \hat\gamma(p-1) \\ \vdots & & \vdots \\ \hat\gamma(p+m-1) & \cdots & \hat\gamma(m) \end{bmatrix} \phi - \begin{bmatrix} \hat\gamma(1) \\ \vdots \\ \hat\gamma(p+m) \end{bmatrix} \right\|_Q^2   (3.5)

where $\|x\|_Q^2 = x^T Q x$ and $Q$ is a positive definite weighting matrix, generally set to $Q = I$ for simplicity. The QR factorization procedure mentioned in
the section on Least Squares estimation can also be applied here to solve the above system.

3.2 Bias of the YW Method in Finite Samples

In some applications, such as radar, the number of observations is small. For such finite-sample cases, $\hat\phi_{YW}$ does not show a good fitting performance: the autocorrelation estimates used in the YW method carry a small triangular bias. A finite-order AR model can be written as

y_t + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} = \varepsilon_t   (3.6)

where $\varepsilon_t$ is a white noise process with zero mean and finite variance $\sigma_\varepsilon^2$. The first $p$ true parameters determine the first $p$ lags of the true normalized AR autocorrelation function, which satisfies the Yule-Walker relationship with the true parameters $\phi_i$:

\rho(q) + \phi_1 \rho(q-1) + \cdots + \phi_p \rho(q-p) = 0   (3.7)

The estimator of the normalized autocorrelation function of $N$ observations $y_n$ at lag $q$ is

\hat\rho(q) = \frac{\hat\gamma(q)}{\hat\gamma(0)} = \frac{\frac{1}{N}\sum_{t=1}^{N-q} y_t\, y_{t+q}}{\frac{1}{N}\sum_{t=1}^{N} y_t^2}   (3.8)

The expectation of the autocovariance estimator is

E[\hat\gamma(q)] = \frac{1}{N} \sum_{t=1}^{N-q} E[y_t\, y_{t+q}] = \frac{N-q}{N}\,\gamma(q) = \gamma(q)\Big\{1 - \frac{q}{N}\Big\}   (3.9)
(Broersen, 2008). From Equation 3.9 we see that $\hat\gamma(q)$, the estimator of the true autocovariance, carries a triangular bias factor $1 - q/N$. In the Yule-Walker method, we replace the normalized autocorrelation $\rho(q)$ in Equation 3.7 with its estimator from Equation 3.8 to derive the autoregressive parameter estimates $\hat\phi_i$ from the $p$ equations

\hat\rho(q) + \hat\phi_1 \hat\rho(q-1) + \cdots + \hat\phi_p \hat\rho(q-p) = 0   (3.10)

The bias in Equation 3.9 is passed down from the estimated autocorrelation function to the estimated AR model parameters, which can leave the Yule-Walker estimator substantially biased away from the true coefficients.

3.3 Theoretical Support for MYW

Suppose $y_t$ is the observed time series, a strictly stationary and strongly mixing sequence with exponentially decreasing mixing coefficients, and $x_t$ is the time series generated by the parametric model

x_t = g_\phi(x_{t-1}, \ldots, x_{t-p}) + \varepsilon_t,   (3.11)

where $\varepsilon_t$ is the innovation and the function $g_\phi(\cdot)$ is known up to the parameters $\phi$. Denote the $l$-step-ahead prediction of $y_{t+l}$ based on model 3.11 by

g_\phi^{[l]} = E(x_{t+l} \mid x_t = y_t).   (3.12)

For the AR model, which is linear, $g_\phi^{[l]}$ is simply a compound function,

g_\phi^{[l]} = g_\phi(g_\phi(\cdots g_\phi(y_t) \cdots)).   (3.13)
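For a linear AR model, the compound predictor in (3.13) is simply the one-step map applied $l$ times. A small sketch (the coefficient values and history are hypothetical, and the function name is ours):

```python
import numpy as np

phi = np.array([0.5, -0.3])  # hypothetical AR(2) coefficients

def predict_multistep(history, phi, l):
    """l-step-ahead prediction of a linear AR(p) model by iterating the
    one-step map, i.e. the compound function g(g(...g(y)...)) of (3.13)."""
    buf = list(history[-len(phi):])  # most recent p values, oldest first
    for _ in range(l):
        nxt = sum(phi[i] * buf[-1 - i] for i in range(len(phi)))
        buf.append(nxt)
    return buf[-1]

# For a linear model the iterated predictor equals the l-step conditional
# mean, which satisfies the same recursion (3.14) as the ACF.
y_hist = [0.2, 1.0]  # hypothetical values of y_{t-1}, y_t
print(predict_multistep(y_hist, phi, 3))
```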
If we use an AR(p) model to match $y_t$, the Yule-Walker equations give the recursive formula for its ACF:

\gamma(k) = \gamma(k-1)\phi_1 + \gamma(k-2)\phi_2 + \cdots + \gamma(k-p)\phi_p, \quad k = 1, 2, \ldots   (3.14)

Let $l > p$, $\phi = (\phi_1, \phi_2, \ldots, \phi_p)^T$, $\gamma_l = (\gamma(1), \gamma(2), \ldots, \gamma(l))^T$, and

\Gamma_l = \begin{bmatrix}
\gamma(0) & \gamma(1) & \cdots & \gamma(p-1) \\
\gamma(1) & \gamma(0) & \cdots & \gamma(p-2) \\
\vdots & \vdots & & \vdots \\
\gamma(l-1) & \gamma(l-2) & \cdots & \gamma(l-p)
\end{bmatrix}   (3.15)

Then the Yule-Walker equations can be written as

\Gamma_l \phi = \gamma_l   (3.16)

Since $\phi$ is selected to match the ACF of $y_t$, we can replace the ACF of $x_t$ by the ACF of $y_t$, which is denoted $\gamma(k)$ and estimated by

\hat\gamma(k) = T^{-1} \sum_{t=1}^{T-k} (y_t - \bar y)(y_{t+k} - \bar y).

Denote by $\hat\Gamma_l$ and $\hat\gamma_l$ the sample versions of $\Gamma_l$ and $\gamma_l$ respectively, and by $\Gamma$ and $\gamma$ the corresponding population quantities for $y_t$. Let $\hat\phi_{\{l\}}$ be the general form of the two estimators, with $\hat\phi_{\{l\}} = \hat\phi_{YW}$ for $l = p$ and $\hat\phi_{\{l\}} = \hat\phi_{MYW}$ for $l > p$. Denoting the minimizer by $\hat\phi_{\{l\}}$, we have

\hat\phi_{\{l\}} = (\hat\Gamma_l^T \hat\Gamma_l)^{-1} \hat\Gamma_l^T \hat\gamma_l   (3.17)

It is easy to see that $\hat\phi_{\{p\}}$ is the most efficient among all $\hat\phi_{\{l\}}$, $l = p, p+1, \ldots$, in the observation-error-free case, i.e. $\epsilon_t = 0$. Otherwise, we have the following theorem
(Xia and Tong, 2010):

Theorem 3.1. Assume that the moments $E|y_t|^{2\delta}$, $E|g_\vartheta^{[k]}(y_t, \ldots, y_{t-p})|^{2\delta}$, $E\|\partial g_\vartheta^{[k]}(y_t, \ldots, y_{t-p})/\partial\phi\|^{2\delta}$ and $E\|\partial^2 g_\vartheta^{[k]}(y_t, \ldots, y_{t-p})/\partial\phi\,\partial\phi^T\|^{2\delta}$ exist for some $\delta > 2$. Then, in distribution,

\sqrt{n}\,\{\hat\phi_{\{l\}} - \vartheta\} \to N(0, \Sigma_l)   (3.18)

where $\vartheta = (\Gamma_l^T \Gamma_l)^{-1} \Gamma_l^T \gamma_l$ and $\Sigma_l$ is a positive definite matrix. As a special case, if $y_t = x_t + \epsilon_t$ with $\mathrm{Var}(\varepsilon_t) > 0$ and $\mathrm{Var}(\epsilon_t) = \sigma_\epsilon^2 > 0$, then the above asymptotic result holds with

\vartheta = \phi + \sigma_\epsilon^2 (\Gamma_l^T \Gamma_l + \sigma_\epsilon^2 \Gamma_p + \sigma_\epsilon^4 I)^{-1} (\Gamma_p + \sigma_\epsilon^2 I)\phi.

Clearly, the bias $\sigma_\epsilon^2 (\Gamma_l^T \Gamma_l + \sigma_\epsilon^2 \Gamma_p + \sigma_\epsilon^4 I)^{-1} (\Gamma_p + \sigma_\epsilon^2 I)\phi$ in the estimator becomes smaller as $l$ grows. Denote $\gamma^k = (\gamma(k), \gamma(k+1), \ldots, \gamma(k+p-1))$; then we have

\Gamma_l^T \Gamma_l = \Gamma_p^T \Gamma_p + \sum_{k=p}^{l} \gamma^k (\gamma^k)^T

Thus, if a larger $l$ is used, or if the ACF decays very slowly, the bias of the estimator can be reduced effectively. This leads to the result that the Multistep Yule-Walker method ($l > p$) has a less significant bias than the ordinary Yule-Walker method, and estimation accuracy may increase considerably as the number of YW equations increases. The simulations in Chapter 4 give strong support to this result.
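The estimator (3.5) with $Q = I$ can be sketched as follows: build the extended $(p+m) \times p$ matrix of sample autocovariances from (3.3) and solve the overdetermined system by least squares. Setting $l = p$ recovers the ordinary Yule-Walker estimator. The function names and the simulated series are ours, not from the thesis:

```python
import numpy as np

def sample_acvf(y, max_lag):
    """Biased sample autocovariances gamma_hat(0..max_lag)."""
    y = y - y.mean()
    n = len(y)
    return np.array([np.dot(y[:n - k], y[k:]) / n for k in range(max_lag + 1)])

def fit_ar_myw(y, p, l):
    """Multistep Yule-Walker: solve the extended l x p system (3.3) with Q = I.

    l = p gives the ordinary Yule-Walker estimator; l > p is MYW."""
    g = sample_acvf(y, l)
    # Row k of Gamma_l holds [gamma(k), gamma(k-1), ..., gamma(k-p+1)],
    # k = 0..l-1, using gamma(-j) = gamma(j).
    G = np.array([[g[abs(k - j)] for j in range(p)] for k in range(l)])
    rhs = g[1:l + 1]
    # Over-determined when l > p: least-squares solution as in (3.5).
    phi, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return phi

rng = np.random.default_rng(5)
n = 50000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

phi_yw = fit_ar_myw(x, 2, 2)    # ordinary Yule-Walker (l = p)
phi_myw = fit_ar_myw(x, 2, 10)  # ten Yule-Walker equations (l > p)
print(phi_yw, phi_myw)  # both near the true coefficients [0.5, -0.3]
```

On this clean (observation-error-free) series both estimators do well; the theorem above suggests the gain from $l > p$ shows up when the series is observed with additional noise, which is the setting examined in the simulations of Chapter 4.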
Exercise: Leaky integrate and fire odel of neural spike generation This exercise investigates a siplified odel of how neurons spike in response to current inputs, one of the ost fundaental properties of
More informationA method to determine relative stroke detection efficiencies from multiplicity distributions
A ethod to deterine relative stroke detection eiciencies ro ultiplicity distributions Schulz W. and Cuins K. 2. Austrian Lightning Detection and Inoration Syste (ALDIS), Kahlenberger Str.2A, 90 Vienna,
More informationUnivariate Time Series Analysis; ARIMA Models
Econometrics 2 Fall 24 Univariate Time Series Analysis; ARIMA Models Heino Bohn Nielsen of4 Outline of the Lecture () Introduction to univariate time series analysis. (2) Stationarity. (3) Characterizing
More informationSPECTRUM sensing is a core concept of cognitive radio
World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile
More informationPh 20.3 Numerical Solution of Ordinary Differential Equations
Ph 20.3 Nuerical Solution of Ordinary Differential Equations Due: Week 5 -v20170314- This Assignent So far, your assignents have tried to failiarize you with the hardware and software in the Physics Coputing
More informationEmpirical Market Microstructure Analysis (EMMA)
Empirical Market Microstructure Analysis (EMMA) Lecture 3: Statistical Building Blocks and Econometric Basics Prof. Dr. Michael Stein michael.stein@vwl.uni-freiburg.de Albert-Ludwigs-University of Freiburg
More informationMulti-Scale/Multi-Resolution: Wavelet Transform
Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the
More informationSome Time-Series Models
Some Time-Series Models Outline 1. Stochastic processes and their properties 2. Stationary processes 3. Some properties of the autocorrelation function 4. Some useful models Purely random processes, random
More informationEE5900 Spring Lecture 4 IC interconnect modeling methods Zhuo Feng
EE59 Spring Parallel LSI AD Algoriths Lecture I interconnect odeling ethods Zhuo Feng. Z. Feng MTU EE59 So far we ve considered only tie doain analyses We ll soon see that it is soeties preferable to odel
More information3.3 Variational Characterization of Singular Values
3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and
More informationA Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair
Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving
More informationQ ESTIMATION WITHIN A FORMATION PROGRAM q_estimation
Foration Attributes Progra q_estiation Q ESTIMATION WITHIN A FOMATION POGAM q_estiation Estiating Q between stratal slices Progra q_estiation estiate seisic attenuation (1/Q) on coplex stratal slices using
More informationStatistical Logic Cell Delay Analysis Using a Current-based Model
Statistical Logic Cell Delay Analysis Using a Current-based Model Hanif Fatei Shahin Nazarian Massoud Pedra Dept. of EE-Systes, University of Southern California, Los Angeles, CA 90089 {fatei, shahin,
More informationSymbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm
Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat
More informationAn Extension to the Tactical Planning Model for a Job Shop: Continuous-Time Control
An Extension to the Tactical Planning Model for a Job Shop: Continuous-Tie Control Chee Chong. Teo, Rohit Bhatnagar, and Stephen C. Graves Singapore-MIT Alliance, Nanyang Technological Univ., and Massachusetts
More informationPredicting FTSE 100 Close Price Using Hybrid Model
SAI Intelligent Systes Conference 2015 Noveber 10-11, 2015 London, UK Predicting FTSE 100 Close Price Using Hybrid Model Bashar Al-hnaity, Departent of Electronic and Coputer Engineering, Brunel University
More informationA Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)
1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu
More informationOBJECTIVES INTRODUCTION
M7 Chapter 3 Section 1 OBJECTIVES Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance, and
More informationTopic 5a Introduction to Curve Fitting & Linear Regression
/7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline
More informationA Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay
A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer
More informationSupport recovery in compressed sensing: An estimation theoretic approach
Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de
More informationTracking using CONDENSATION: Conditional Density Propagation
Tracking using CONDENSATION: Conditional Density Propagation Goal Model-based visual tracking in dense clutter at near video frae rates M. Isard and A. Blake, CONDENSATION Conditional density propagation
More informationProf. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis
Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation
More informationThe Weierstrass Approximation Theorem
36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined
More informationANALYTICAL INVESTIGATION AND PARAMETRIC STUDY OF LATERAL IMPACT BEHAVIOR OF PRESSURIZED PIPELINES AND INFLUENCE OF INTERNAL PRESSURE
DRAFT Proceedings of the ASME 014 International Mechanical Engineering Congress & Exposition IMECE014 Noveber 14-0, 014, Montreal, Quebec, Canada IMECE014-36371 ANALYTICAL INVESTIGATION AND PARAMETRIC
More informationGeneral Properties of Radiation Detectors Supplements
Phys. 649: Nuclear Techniques Physics Departent Yarouk University Chapter 4: General Properties of Radiation Detectors Suppleents Dr. Nidal M. Ershaidat Overview Phys. 649: Nuclear Techniques Physics Departent
More informationLONG-TERM PREDICTIVE VALUE INTERVAL WITH THE FUZZY TIME SERIES
Journal of Marine Science and Technology, Vol 19, No 5, pp 509-513 (2011) 509 LONG-TERM PREDICTIVE VALUE INTERVAL WITH THE FUZZY TIME SERIES Ming-Tao Chou* Key words: fuzzy tie series, fuzzy forecasting,
More informationFE570 Financial Markets and Trading. Stevens Institute of Technology
FE570 Financial Markets and Trading Lecture 5. Linear Time Series Analysis and Its Applications (Ref. Joel Hasbrouck - Empirical Market Microstructure ) Steve Yang Stevens Institute of Technology 9/25/2012
More informationPULSE-TRAIN BASED TIME-DELAY ESTIMATION IMPROVES RESILIENCY TO NOISE
PULSE-TRAIN BASED TIME-DELAY ESTIMATION IMPROVES RESILIENCY TO NOISE 1 Nicola Neretti, 1 Nathan Intrator and 1,2 Leon N Cooper 1 Institute for Brain and Neural Systes, Brown University, Providence RI 02912.
More informationKernel Methods and Support Vector Machines
Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic
More informationSharp Time Data Tradeoffs for Linear Inverse Problems
Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used
More informationAn Approximate Model for the Theoretical Prediction of the Velocity Increase in the Intermediate Ballistics Period
An Approxiate Model for the Theoretical Prediction of the Velocity... 77 Central European Journal of Energetic Materials, 205, 2(), 77-88 ISSN 2353-843 An Approxiate Model for the Theoretical Prediction
More informationA remark on a success rate model for DPA and CPA
A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance
More informationUniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval
Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,
More informationMultiscale Entropy Analysis: A New Method to Detect Determinism in a Time. Series. A. Sarkar and P. Barat. Variable Energy Cyclotron Centre
Multiscale Entropy Analysis: A New Method to Detect Deterinis in a Tie Series A. Sarkar and P. Barat Variable Energy Cyclotron Centre /AF Bidhan Nagar, Kolkata 700064, India PACS nubers: 05.45.Tp, 89.75.-k,
More informationCombining Classifiers
Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/
More informationA Comparative Study of Parametric and Nonparametric Regressions
Iranian Econoic Review, Vol.16, No.30, Fall 011 A Coparative Study of Paraetric and Nonparaetric Regressions Shahra Fattahi Received: 010/08/8 Accepted: 011/04/4 Abstract his paper evaluates inflation
More informationCHAPTER 19: Single-Loop IMC Control
When I coplete this chapter, I want to be able to do the following. Recognize that other feedback algoriths are possible Understand the IMC structure and how it provides the essential control features
More information1 Bounding the Margin
COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost
More informationFast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials
Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter
More informationQuantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search
Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths
More information