High Breakdown Point Estimation in Regression

WDS'08 Proceedings of Contributed Papers, Part I, 94-99, 2008. ISBN 978-80-7378-065-4 MATFYZPRESS

T. Jurczyk
Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic.

Abstract. In robust regression theory, estimators which can resist contamination of nearly fifty percent of the data have been studied intensively, because they are highly important in practice. In this paper we describe three methods with a high breakdown point: the least trimmed squares (LTS), the least median of squares (LMS) and their generalization, the least weighted squares (LWS) estimator^1. Instead of showing how powerful these procedures can be, we are especially interested in potential problems and in situations where the methods can behave rather strangely. These are illustrated by various data examples. We want to show that high breakdown point regression should be performed with caution.

Introduction

During the few decades before 1984 there was a great effort to find multivariate regression estimators with a breakdown point of nearly 50% (the exact definition of the breakdown point is given below). This was due to the belief that such estimators can give a hint as to which estimates are near the true model (see Rousseeuw and Leroy [1987]). The first proposal of such a method was based on an idea by Hampel [1975] and presented by Rousseeuw [1984]. This method is called the least median of squares. In the same paper (Rousseeuw [1984]) another important estimator, the least trimmed squares, was also introduced.

Before we give the exact definitions, let us set up some notation. We consider the linear regression model

Y_i = X_i β^0 + e_i = Σ_{j=1}^p X_{ij} β_j^0 + e_i,  i = 1, 2, ..., n.

For any β ∈ R^p, r_i(β) = Y_i − X_i β denotes the i-th residual and r²_(j)(β) the j-th order statistic among the squared residuals.
Definition 1. Let n/2 ≤ h ≤ n. Then

β̂(LMS,n,h) = argmin_{β ∈ R^p} r²_(h)(β)   and   β̂(LTS,n,h) = argmin_{β ∈ R^p} Σ_{i=1}^h r²_(i)(β)

are called the least median of squares (LMS) and the least trimmed squares (LTS) estimator, respectively.

We are also going to define the estimator proposed by Víšek [2001]. This estimator is likewise based upon ordered squared residuals, but in addition they are weighted.

Definition 2. Let, for any n ∈ N, 1 = w_1 ≥ w_2 ≥ ... ≥ w_n ≥ 0 be some weights. Then

β̂(LWS,n,w) = argmin_{β ∈ R^p} Σ_{i=1}^n w_i r²_(i)(β)

is called the least weighted squares (LWS) estimator.

Notice that a single weight is not related directly to a particular observation (do not confuse this with another regression method, the weighted least squares). The estimator itself assigns the weights to the observations implicitly (as in the LTS case). From both definitions we can clearly see that LTS is a special case of LWS with w_i = I{i ≤ h} (I denotes the indicator). Ordinary least squares (OLS) is a special case of LWS as well. (If we wanted, we could define LWS even more generally, without the restriction to monotone weights; then LMS would also be a special case of LWS. But in what follows we use only the LWS of Definition 2.)

^1 The breakdown point of the LWS estimator depends on the choice of its weights.
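Under the notation above, the three criteria differ only in how the ordered squared residuals enter the objective. A minimal sketch in Python (NumPy assumed; the function names are ours, not from any package):

```python
import numpy as np

def ordered_sq_residuals(beta, X, y):
    """Squared residuals r_i(beta)^2 = (y_i - x_i beta)^2, sorted ascending."""
    r2 = (y - X @ beta) ** 2
    return np.sort(r2)

def lms_objective(beta, X, y, h):
    """LMS criterion: the h-th order statistic of the squared residuals."""
    return ordered_sq_residuals(beta, X, y)[h - 1]

def lts_objective(beta, X, y, h):
    """LTS criterion: sum of the h smallest squared residuals."""
    return ordered_sq_residuals(beta, X, y)[:h].sum()

def lws_objective(beta, X, y, w):
    """LWS criterion: weighted sum of ordered squared residuals,
    with 1 = w_1 >= w_2 >= ... >= w_n >= 0."""
    return ordered_sq_residuals(beta, X, y) @ w
```

With w_i = I{i ≤ h} the LWS criterion reduces exactly to the LTS one, which is the special-case relation noted above.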

Properties of the methods

Let us summarize some important properties of the estimators just defined. But first we need to recall some definitions.

Definition 3. An estimator T is called regression equivariant if T({(X_i, Y_i + X_i v); i = 1,...,n}) = T({(X_i, Y_i); i = 1,...,n}) + v, scale equivariant if T({(X_i, cY_i); i = 1,...,n}) = cT({(X_i, Y_i); i = 1,...,n}), and we say that T is affine equivariant if T({(X_i A, Y_i); i = 1,...,n}) = A^{−1} T({(X_i, Y_i); i = 1,...,n}), where v is any column vector, c any constant and A any nonsingular square matrix.

Definition 4. Let Z be any sample of n data points (X_1, Y_1),...,(X_n, Y_n) and let T(Z) be an estimate of β^0. Let Z_m be a sample where any m of the original data points are replaced by arbitrary values (possibly infinite). Then the breakdown point of the estimator T at Z is

ε*(T, Z) = min{ m/n : sup_{Z_m} ||T(Z_m) − T(Z)|| is infinite }.

Remark 1: Notice that this definition also allows replacement of the X_{ij}, which is in general a more serious problem than coping with outliers in the Y-values.

Remark 2: An estimator has a high breakdown point if it can resist contamination of nearly 50% of the data (i.e. ε*(T, Z) tends to 0.5 as n goes to infinity).

Unfortunately, we do not have enough space to prove all of the following LTS and LMS properties, but they can be found in Rousseeuw and Leroy [1987]. Some important properties of the LMS and LTS methods:

They are regression, scale and affine equivariant (in terms of Definition 3).

If we put h in Definition 1 equal to ⌊n/2⌋ + ⌊(p+1)/2⌋ (⌊·⌋ denotes the integer part), then LMS and LTS attain the breakdown point ε*_max = (⌊(n−p)/2⌋ + 1)/n (any regression equivariant estimator has a breakdown point less than or equal to ε*_max).

Both estimators also have the exact fit property (when h equals the value defined above).
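For concreteness, the optimal h and the corresponding maximal breakdown point can be evaluated directly from the formulas above; a small sketch (Python, helper names ours):

```python
def max_breakdown_h(n, p):
    """h = floor(n/2) + floor((p+1)/2) attains the maximal breakdown point."""
    return n // 2 + (p + 1) // 2

def max_breakdown_point(n, p):
    """eps*_max = (floor((n-p)/2) + 1) / n, an upper bound on the breakdown
    point of any regression equivariant estimator."""
    return ((n - p) // 2 + 1) / n
```

For the Engine Knock Data used below (n = 16, p = 5) this gives h = 11 and ε*_max = 6/16 = 0.375; the bound indeed tends to 0.5 as n grows.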
If strictly more than ½(n + p − 1) observations satisfy the model Y_i = X_i β^0 exactly and are in general position, then the estimates of both methods are equal to β^0 (when h equals the value defined above).

Remark 3: This last property is called the exact fit property. In words: if the majority of the data follows a linear relationship exactly and a regression method yields this equation, then the regression technique is said to possess the exact fit property.

Concerning LWS, we consider the LWS estimator satisfying the requirement that w_1,...,w_h are strictly positive and the others are 0. Then this estimator also has all the properties of LMS and LTS mentioned above. Throughout the paper this special LWS will be called the LWS with high breakdown point.

Remark 4: Computing the LMS estimate in simple regression (regression with one explanatory variable and intercept) is equivalent to finding the narrowest stripe covering h observations (see Rousseeuw and Leroy [1987]).

Negative consequences of high breakdown point

In the following part of the paper we discuss some problems which we can come across when using high breakdown point estimators. These are illustrated mainly through data examples. Before we show these examples, we should say a few words about the software used. All calculations were performed in R. The LMS method used is implemented in the library MASS (function lqs). For the LWS calculations we had to program our own LWS procedure according to the article by Víšek [2008], because the LWS was still missing in the R language^2. The program was then checked (LTS being its special case) on various data examples for which we know the exact LTS solution. The algorithm gives quite satisfactory results. The source code of the LWS procedure is available upon request at the address jurczyk@karlin.mff.cuni.cz. The LTS estimates were computed by our LWS procedure.

^2 Our LWS procedure works for nonincreasing weights and for data points in general position.
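Remark 4 suggests a simple, if slow, way to compute the LMS fit in simple regression: try candidate slopes and, for each, find the thinnest vertical stripe covering h residuals. A didactic sketch under the assumption that an optimal slope occurs among the slopes of lines through pairs of data points (Python; this is our illustration, not the lqs implementation used in the paper):

```python
import itertools
import numpy as np

def lms_simple_regression(x, y, h):
    """Brute-force LMS for y = a + b*x via the narrowest-stripe view:
    for each slope b through a pair of points, slide a window over the
    sorted residuals y - b*x to find the thinnest vertical stripe
    covering h observations; the intercept centers that stripe.
    A didactic O(n^3 log n) sketch, not a production algorithm."""
    n = len(x)
    best = (np.inf, 0.0, 0.0)  # (stripe width, intercept a, slope b)
    for i, j in itertools.combinations(range(n), 2):
        if x[i] == x[j]:
            continue
        b = (y[j] - y[i]) / (x[j] - x[i])
        z = np.sort(y - b * x)                # residuals before intercept
        widths = z[h - 1:] - z[: n - h + 1]   # width of every h-point window
        k = int(np.argmin(widths))
        if widths[k] < best[0]:
            a = (z[k] + z[k + h - 1]) / 2.0   # intercept centering the stripe
            best = (float(widths[k]), float(a), float(b))
    return best
```

On exact-fit data (h points lying on one line) the returned stripe width is zero and the fit reproduces that line, which is the behaviour the exact fit property describes.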

a) Instability of the estimators

The whole discussion about the instability of high breakdown point estimators started with the paper by Hettmansperger and Sheather [1991]. They introduced the Engine Knock Data with 16 observations and 4 explanatory variables. They inadvertently typed one wrong value (among 80 values), 15.1 (we shall call this the damaged data) instead of the original value 14.1 (correct data). After they had computed estimates for the correct and damaged data, they discovered that the estimates of the LMS estimator were considerably different. This was very surprising because of the common belief that LMS is a highly resistant method. We show the values of the estimates of all three introduced methods for both the damaged and the correct data sets in Table 1 (the exact choices of the LWS weights are in the following remark). We can see that such unstable behaviour is common to the other methods, too. Unstable here means that a small change of the data can cause a big change of the estimates.

Remark 5 (choice of LWS weights): One way to choose the LWS weights is to pick some nonincreasing function f: [0,1] → [0,1] with f(0) = 1 and then assign w_i = f((i−1)/n). LWS 1, LWS 2 and LWS 3 in our example correspond to

f_1(x) = 1 if x ≤ 0.35, 0.3 if x ∈ (0.35, 0.65), 0 if x ≥ 0.65;

f_2(x) = 1 if x ≤ 0.55, 0.1 if x ∈ (0.55, 0.65), 0 if x ≥ 0.65;

f_3(x) = (π/2 − arctan(50x − 30)) / (π/2 − arctan(−30)).

f_1 and f_2 are defined so as to give h = 11 strictly positive weights (notice that h is tied to n and p); f_3 provides a possible smooth low breakdown point alternative to the LTS method and should be important when we need all weights positive, i.e. when we do not want to discard any observation from the data.

Table 1. Engine Knock Data: n = 16, p = 5, h = ⌊16/2⌋ + ⌊6/2⌋ = 11. Estimates of the intercept and of the coefficients of X1-X4 for the correct data and for the damaged data, for each of OLS, LMS, LTS, LWS 1, LWS 2 and LWS 3 (numerical entries not reproduced here).

Like Hettmansperger and Sheather [1991], we also construct an exact fit example for simple regression (Figure 1).
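The weight construction of Remark 5 can be written down directly; a sketch in Python, where f1 and f2 follow the step definitions above and f3 uses the arctan form of Remark 5, normalized so that f3(0) = 1 (all names ours):

```python
import numpy as np

def weights_from_f(f, n):
    """Remark 5: w_i = f((i-1)/n) for a nonincreasing f: [0,1] -> [0,1], f(0)=1."""
    return np.array([f((i - 1) / n) for i in range(1, n + 1)])

def f1(x):
    # step weights: 1 up to 0.35, then 0.3, then 0 from 0.65 on
    return 1.0 if x <= 0.35 else (0.3 if x < 0.65 else 0.0)

def f2(x):
    # step weights: 1 up to 0.55, then 0.1, then 0 from 0.65 on
    return 1.0 if x <= 0.55 else (0.1 if x < 0.65 else 0.0)

def f3(x):
    # smooth, strictly positive alternative; f3(0) = 1, decreasing on [0, 1]
    return (np.pi / 2 - np.arctan(50 * x - 30)) / (np.pi / 2 - np.arctan(-30.0))
```

For n = 16 both f1 and f2 yield exactly 11 strictly positive weights, matching h = 11 for the Engine Knock Data, while f3 keeps every weight positive.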
It is clear that we can construct data where an arbitrarily small change of one datum causes a large change of the estimate (if the estimator has the exact fit property). Increasing the number of observations does not solve the problem either. The theory of robustness is based on the presumption that a small change of the underlying model causes a small change of the robust estimate. But if we think about it for a while, we realize that the situation here is different, because the small change of the data changes the whole true model. The true model in robust regression means the model which covers the majority of the data. Hence, when we look once more at the exact fit example, we realize that this unstable behaviour is not surprising at all. In this case the strange behaviour in Figure 1 is exactly what we expect from a good robust procedure. The only problem is to realize that a small change of the data can cause a large change of the underlying model.

b) Diversity of the estimates

When we use, for example, LTS and LMS on some data then, according to various experiments (e.g. Víšek [1996]), usually r²_(h) from the LMS fit is smaller than r²_(h) from the LTS fit and, conversely, Σ_{i=1}^h r²_(i) from the LTS fit is smaller than Σ_{i=1}^h r²_(i) from the LMS fit. Therefore the LTS and LMS estimates are usually somewhat different. An artificial data example (Figure 2) shows that this diversity may be pretty large: for these data, the LTS and LMS estimates are nearly orthogonal to each other. If we want, we can also construct examples where LWS and LTS give different estimates (as in Figure 4). The construction of such artificial data is possible because the investigated estimators use only about n(1 − ε*) + 1 chosen observations for estimating β^0. Each estimator may find different observations important (as in our example), and thus the estimates may also be different. Increasing the number of observations again does not help.

Figure 1. The exact fit example: a change in the position of one datum (marked as a circle) causes a large change of the estimate. This is because of the exact fit property of the estimator: the regression line will follow the majority of the data.

Figure 2. LTS estimate perpendicular to LMS estimate: n = 11, h = 6; the stripe covering the 6 observations marked by squares is narrower than the stripe covering the 6 observations marked by circles, and at the same time the sum of the squared residuals of the encircled observations is smaller than it would be for the LTS estimate using the 6 observations in squares.

c) Subsample stability

Now we are interested in how the estimate from the whole data set changes if we use only a subsample of the data for the calculation. If the estimator is stable with respect to subsamples, then this change of the estimates should not be large. It is quite natural to require such subsample stability from a good estimator. Let us explain it a bit: if some method recognizes the true model for the whole data, then it should recognize (approximately) the same model for a subsample. The requirement of subsample stability is also connected with consistency, because consistency may be viewed as stability for large sample sizes. Hereafter, we study only the difference between the estimates for the whole data and for the data without one observation. Denote by β̂(M,n−1,l,h) (resp. β̂(LWS,n−1,l,w)) the estimate by method M (resp. LWS) for the data without the l-th observation; h has the same meaning as in the definitions of the methods. Some results on the subsample stability of the LTS method are available in Víšek [2006], who derived the asymptotic representation of √n(β̂(LTS,n,h) − β̂(LTS,n−1,l,h)) and concluded that the LTS estimator behaves in some ways similarly to M-estimators with a discontinuous ψ-function, which means that it can sometimes be sensitive even to the deletion of one observation.
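The delete-one sensitivity studied here can be scripted generically; a sketch (Python) in which `estimator` stands for any fitting routine, e.g. an LTS or LWS solver; the OLS helper below is only there to make the example runnable (names ours):

```python
import numpy as np

def delete_one_sensitivity(estimator, X, y):
    """Subsample-stability diagnostic: fit on the full data and on each
    of the n delete-one subsamples; report, per deleted observation,
    the largest coefficient change against the full-data fit.
    `estimator(X, y) -> beta_hat` is a placeholder argument, not a fixed API."""
    beta_full = estimator(X, y)
    n = len(y)
    shifts = np.empty(n)
    for l in range(n):
        keep = np.arange(n) != l
        beta_l = estimator(X[keep], y[keep])
        shifts[l] = np.max(np.abs(beta_l - beta_full))
    return beta_full, shifts

def ols(X, y):
    """Ordinary least squares, used only to exercise the diagnostic."""
    return np.linalg.lstsq(X, y, rcond=None)[0]
```

For a non-robust estimator like OLS, the largest shift is produced by deleting a gross outlier; for a subsample-stable robust estimator all shifts should stay small.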
One of his proposals, which may solve this problem, is to use the LWS estimator with smoothly redescending weights. We searched for an example of such an improvement achieved by the LWS estimator, and we found one in the already mentioned Engine Knock Data. We used the same three sets of weights for LWS as in Remark 5 (for computing the LWS estimate on a subsample with n − 1 observations, the same weights w as for β̂(LWS,n,w) were used, but without the n-th coordinate; we prefer this choice because of compatibility with the results shown by Víšek [2006]). The results for the Engine Knock Data are displayed in Figure 3. Due to the limited space, only the results for the first explanatory variable are shown, but the other variables

behave in the same way. We can see (left picture in Figure 3) that for at least one subsample the estimate of β^0_2 for LMS and LTS is larger than 4, although the estimates from the full data are around 0. We conclude that in this example the LWS estimators really do behave better and are much more stable with respect to subsamples than the LTS estimator (and also than the LMS estimator). In order to understand why LWS is better here, we also constructed artificial data with 9 observations (upper part of Figure 4), where the LTS estimator is unstable while the LWS estimator is stable. It seems that we have finally found some improvement of the LMS and LTS methods through the LWS with high breakdown point. Unfortunately it is not so simple, because, on the contrary, an alternative example can also be constructed where LWS is unstable and LTS is not (bottom pictures in Figure 4). Also from the other numerical examples we tried (including simulated as well as real data sets, with different sample sizes and different numbers of explanatory variables), we cannot say that LWS is better than LTS or LMS in terms of subsample stability. There simply exist data sets of both kinds (as in the artificial data) where LWS can solve the LTS instability and vice versa.

Figure 3. Engine Knock Data subsample sensitivity: pictures for the first explanatory variable (second coordinate of β). The left picture is a boxplot of {β̂_2(M,n−1,l,h), l = 1,...,16}; each box belongs to one method M (OLS, LMS, LTS, LWS 1, LWS 2, LWS 3), and red circles are the estimates from the whole data for the corresponding methods. The second picture is a comparison of LTS and LWS 1; the 16 points are {|β̂_2(LTS,n,h) − β̂_2(LTS,n−1,l,h)| − |β̂_2(LWS1,n,w) − β̂_2(LWS1,n−1,l,w[1:(n−1)])|, l = 1,...,16} in dependence on l; positive values correspond to better behaviour of LWS 1.

Conclusion

The paper shows that some caution is inevitable.
We have already drawn some conclusions about the presented issues in the previous text. We saw that the problems discussed above, and maybe others as well (e.g. the discussion of arbitrarily low efficiency at the central model, with respect to ordinary least squares in finite samples, by Stefanski [1991]), may occur for some data and are inherent in high breakdown point methods. When one works with such a procedure, one should take this into consideration and not blindly believe in stability under all circumstances. Further, when we meet data on which different high breakdown point methods give rather different estimates, we should reconsider what the real structure of the data in question is. Although the proposed high breakdown point LWS methods share some negative consequences of other high breakdown point methods, we believe that the possibility of continuous weighting, and also of weighting with small instead of zero weights, could be beneficial.

Acknowledgments. This work was supported by grants GAČR 202/06/0408 and 201/05/H007. The author also thanks Prof. Jan Ámos Víšek for his professional help.

LTS weights: (1,1,1,1,1,0,0,0,0)  LWS weights: (1,1,1,1,0.5,0,0,0,0)

Figure 4. Subsample instability: upper data: x_1 = (1, 0.7, 0.7, 1, 0.2, 0.1, 0, 0.1, 0.2), y = (0, 0, 0, 0, 0.7, 0.2, 0, 0.4, 0.6); bottom data: x_1 = (2.1, 1.9, 0.1, 0.1, 1.9, 2.1, 0.5, 1, 0.5), y = (1.9, 2.1, 0.1, 0.1, 2.1, 1.9, 0.5, 1, 1.15). In each picture there is a black line for the whole-data estimate and 9 lines for the estimates based upon the data without one observation (LTS in the left pictures, LWS in the right pictures). The weights are given in the titles of the pictures; for the smaller (n − 1)-point data sets the first 8 coordinates of the weight vectors are used.

References

Hampel, F. R., Beyond location parameters: Robust concepts and methods, Bulletin of the International Statistical Institute, 1975.
Rousseeuw, P. J., Least median of squares regression, Journal of the American Statistical Association, 79, 1984.
Rousseeuw, P. J., and Leroy, A. M., Robust Regression and Outlier Detection, John Wiley & Sons, 1987.
Hettmansperger, T. P., and Sheather, S. J., A Cautionary Note on the Method of Least Median Squares, The American Statistician, 46, 79-83, 1991.
Stefanski, L. A., A note on high-breakdown estimators, Statistics & Probability Letters, 11, 1991.
Víšek, J. Á., On High Breakdown Point Estimation, Computational Statistics, 11, 1996.
Víšek, J. Á., Regression with high breakdown point, in: Robust 2000 (eds. Jaromír Antoch and Gejza Dohnal, published by the Union of Czech Mathematicians and Physicists), Prague: Matfyzpress, 2001.
Víšek, J. Á., The Least Trimmed Squares. Sensitivity Study, in: Proceedings of the Prague Stochastics 2006 (eds. Marie Hušková and Martin Janžura), Prague: Matfyzpress, 2006.
Víšek, J. Á., Consistency of the Instrumental Weighted Variables, to appear in Annals of the Institute of Statistical Mathematics, 2008.


Alternative Biased Estimator Based on Least. Trimmed Squares for Handling Collinear. Leverage Data Points International Journal of Contemporary Mathematical Sciences Vol. 13, 018, no. 4, 177-189 HIKARI Ltd, www.m-hikari.com https://doi.org/10.1988/ijcms.018.8616 Alternative Biased Estimator Based on Least

More information

Outliers and Robust Regression Techniques

Outliers and Robust Regression Techniques POLS/CSSS 503: Advanced Quantitative Political Methodology Outliers and Robust Regression Techniques Christopher Adolph Department of Political Science and Center for Statistics and the Social Sciences

More information

Variable Selection under Measurement Error: Comparing the Performance of Subset Selection and Shrinkage Methods

Variable Selection under Measurement Error: Comparing the Performance of Subset Selection and Shrinkage Methods Variable Selection under Measurement Error: Comparing the Performance of Subset Selection and Shrinkage Methods Ellen Sasahara Bachelor s Thesis Supervisor: Prof. Dr. Thomas Augustin Department of Statistics

More information

ROBUST - September 10-14, 2012

ROBUST - September 10-14, 2012 Charles University in Prague ROBUST - September 10-14, 2012 Linear equations We observe couples (y 1, x 1 ), (y 2, x 2 ), (y 3, x 3 ),......, where y t R, x t R d t N. We suppose that members of couples

More information

Small sample corrections for LTS and MCD

Small sample corrections for LTS and MCD Metrika (2002) 55: 111 123 > Springer-Verlag 2002 Small sample corrections for LTS and MCD G. Pison, S. Van Aelst*, and G. Willems Department of Mathematics and Computer Science, Universitaire Instelling

More information

1 Differentiable manifolds and smooth maps

1 Differentiable manifolds and smooth maps 1 Differentiable manifolds and smooth maps Last updated: April 14, 2011. 1.1 Examples and definitions Roughly, manifolds are sets where one can introduce coordinates. An n-dimensional manifold is a set

More information

Chapter 7 Summary Scatterplots, Association, and Correlation

Chapter 7 Summary Scatterplots, Association, and Correlation Chapter 7 Summary Scatterplots, Association, and Correlation What have we learned? We examine scatterplots for direction, form, strength, and unusual features. Although not every relationship is linear,

More information

Inequalities Relating Addition and Replacement Type Finite Sample Breakdown Points

Inequalities Relating Addition and Replacement Type Finite Sample Breakdown Points Inequalities Relating Addition and Replacement Type Finite Sample Breadown Points Robert Serfling Department of Mathematical Sciences University of Texas at Dallas Richardson, Texas 75083-0688, USA Email:

More information

ON THE CALCULATION OF A ROBUST S-ESTIMATOR OF A COVARIANCE MATRIX

ON THE CALCULATION OF A ROBUST S-ESTIMATOR OF A COVARIANCE MATRIX STATISTICS IN MEDICINE Statist. Med. 17, 2685 2695 (1998) ON THE CALCULATION OF A ROBUST S-ESTIMATOR OF A COVARIANCE MATRIX N. A. CAMPBELL *, H. P. LOPUHAA AND P. J. ROUSSEEUW CSIRO Mathematical and Information

More information

UNIT 3. Rational Functions Limits at Infinity (Horizontal and Slant Asymptotes) Infinite Limits (Vertical Asymptotes) Graphing Rational Functions

UNIT 3. Rational Functions Limits at Infinity (Horizontal and Slant Asymptotes) Infinite Limits (Vertical Asymptotes) Graphing Rational Functions UNIT 3 Rational Functions Limits at Infinity (Horizontal and Slant Asymptotes) Infinite Limits (Vertical Asymptotes) Graphing Rational Functions Recall From Unit Rational Functions f() is a rational function

More information

Metric spaces and metrizability

Metric spaces and metrizability 1 Motivation Metric spaces and metrizability By this point in the course, this section should not need much in the way of motivation. From the very beginning, we have talked about R n usual and how relatively

More information

AP Statistics L I N E A R R E G R E S S I O N C H A P 7

AP Statistics L I N E A R R E G R E S S I O N C H A P 7 AP Statistics 1 L I N E A R R E G R E S S I O N C H A P 7 The object [of statistics] is to discover methods of condensing information concerning large groups of allied facts into brief and compendious

More information

Chapter 7. Scatterplots, Association, and Correlation. Copyright 2010 Pearson Education, Inc.

Chapter 7. Scatterplots, Association, and Correlation. Copyright 2010 Pearson Education, Inc. Chapter 7 Scatterplots, Association, and Correlation Copyright 2010 Pearson Education, Inc. Looking at Scatterplots Scatterplots may be the most common and most effective display for data. In a scatterplot,

More information

MS&E 226: Small Data

MS&E 226: Small Data MS&E 226: Small Data Lecture 3: More on linear regression (v3) Ramesh Johari ramesh.johari@stanford.edu 1 / 59 Recap: Linear regression 2 / 59 The linear regression model Given: n outcomes Y i, i = 1,...,

More information

Robust Preprocessing of Time Series with Trends

Robust Preprocessing of Time Series with Trends Robust Preprocessing of Time Series with Trends Roland Fried Ursula Gather Department of Statistics, Universität Dortmund ffried,gatherg@statistik.uni-dortmund.de Michael Imhoff Klinikum Dortmund ggmbh

More information

, (1) e i = ˆσ 1 h ii. c 2016, Jeffrey S. Simonoff 1

, (1) e i = ˆσ 1 h ii. c 2016, Jeffrey S. Simonoff 1 Regression diagnostics As is true of all statistical methodologies, linear regression analysis can be a very effective way to model data, as along as the assumptions being made are true. For the regression

More information

Canonical lossless state-space systems: staircase forms and the Schur algorithm

Canonical lossless state-space systems: staircase forms and the Schur algorithm Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit

More information

Chapter 6. September 17, Please pick up a calculator and take out paper and something to write with. Association and Correlation.

Chapter 6. September 17, Please pick up a calculator and take out paper and something to write with. Association and Correlation. Please pick up a calculator and take out paper and something to write with. Sep 17 8:08 AM Chapter 6 Scatterplots, Association and Correlation Copyright 2015, 2010, 2007 Pearson Education, Inc. Chapter

More information

CM10196 Topic 2: Sets, Predicates, Boolean algebras

CM10196 Topic 2: Sets, Predicates, Boolean algebras CM10196 Topic 2: Sets, Predicates, oolean algebras Guy McCusker 1W2.1 Sets Most of the things mathematicians talk about are built out of sets. The idea of a set is a simple one: a set is just a collection

More information

An Intuitive Introduction to Motivic Homotopy Theory Vladimir Voevodsky

An Intuitive Introduction to Motivic Homotopy Theory Vladimir Voevodsky What follows is Vladimir Voevodsky s snapshot of his Fields Medal work on motivic homotopy, plus a little philosophy and from my point of view the main fun of doing mathematics Voevodsky (2002). Voevodsky

More information

MULTIVARIATE TECHNIQUES, ROBUSTNESS

MULTIVARIATE TECHNIQUES, ROBUSTNESS MULTIVARIATE TECHNIQUES, ROBUSTNESS Mia Hubert Associate Professor, Department of Mathematics and L-STAT Katholieke Universiteit Leuven, Belgium mia.hubert@wis.kuleuven.be Peter J. Rousseeuw 1 Senior Researcher,

More information

Introduction to Robust Statistics. Elvezio Ronchetti. Department of Econometrics University of Geneva Switzerland.

Introduction to Robust Statistics. Elvezio Ronchetti. Department of Econometrics University of Geneva Switzerland. Introduction to Robust Statistics Elvezio Ronchetti Department of Econometrics University of Geneva Switzerland Elvezio.Ronchetti@metri.unige.ch http://www.unige.ch/ses/metri/ronchetti/ 1 Outline Introduction

More information

7.0 Lesson Plan. Regression. Residuals

7.0 Lesson Plan. Regression. Residuals 7.0 Lesson Plan Regression Residuals 1 7.1 More About Regression Recall the regression assumptions: 1. Each point (X i, Y i ) in the scatterplot satisfies: Y i = ax i + b + ɛ i where the ɛ i have a normal

More information

ADVANCED CALCULUS - MTH433 LECTURE 4 - FINITE AND INFINITE SETS

ADVANCED CALCULUS - MTH433 LECTURE 4 - FINITE AND INFINITE SETS ADVANCED CALCULUS - MTH433 LECTURE 4 - FINITE AND INFINITE SETS 1. Cardinal number of a set The cardinal number (or simply cardinal) of a set is a generalization of the concept of the number of elements

More information

LEBESGUE INTEGRATION. Introduction

LEBESGUE INTEGRATION. Introduction LEBESGUE INTEGATION EYE SJAMAA Supplementary notes Math 414, Spring 25 Introduction The following heuristic argument is at the basis of the denition of the Lebesgue integral. This argument will be imprecise,

More information

Chapter 1 Review of Equations and Inequalities

Chapter 1 Review of Equations and Inequalities Chapter 1 Review of Equations and Inequalities Part I Review of Basic Equations Recall that an equation is an expression with an equal sign in the middle. Also recall that, if a question asks you to solve

More information

Computational and Statistical Learning theory

Computational and Statistical Learning theory Computational and Statistical Learning theory Problem set 2 Due: January 31st Email solutions to : karthik at ttic dot edu Notation : Input space : X Label space : Y = {±1} Sample : (x 1, y 1,..., (x n,

More information

Generating Function Notes , Fall 2005, Prof. Peter Shor

Generating Function Notes , Fall 2005, Prof. Peter Shor Counting Change Generating Function Notes 80, Fall 00, Prof Peter Shor In this lecture, I m going to talk about generating functions We ve already seen an example of generating functions Recall when we

More information

Re-weighted Robust Control Charts for Individual Observations

Re-weighted Robust Control Charts for Individual Observations Universiti Tunku Abdul Rahman, Kuala Lumpur, Malaysia 426 Re-weighted Robust Control Charts for Individual Observations Mandana Mohammadi 1, Habshah Midi 1,2 and Jayanthi Arasan 1,2 1 Laboratory of Applied

More information

Faster Kriging on Graphs

Faster Kriging on Graphs Faster Kriging on Graphs Omkar Muralidharan Abstract [Xu et al. 2009] introduce a graph prediction method that is accurate but slow. My project investigates faster methods based on theirs that are nearly

More information

Section 20: Arrow Diagrams on the Integers

Section 20: Arrow Diagrams on the Integers Section 0: Arrow Diagrams on the Integers Most of the material we have discussed so far concerns the idea and representations of functions. A function is a relationship between a set of inputs (the leave

More information

WEIGHTED LEAST SQUARES. Model Assumptions for Weighted Least Squares: Recall: We can fit least squares estimates just assuming a linear mean function.

WEIGHTED LEAST SQUARES. Model Assumptions for Weighted Least Squares: Recall: We can fit least squares estimates just assuming a linear mean function. 1 2 WEIGHTED LEAST SQUARES Recall: We can fit least squares estimates just assuming a linear mean function. Without the constant variance assumption, we can still conclude that the coefficient estimators

More information

Last Update: March 1 2, 201 0

Last Update: March 1 2, 201 0 M ath 2 0 1 E S 1 W inter 2 0 1 0 Last Update: March 1 2, 201 0 S eries S olutions of Differential Equations Disclaimer: This lecture note tries to provide an alternative approach to the material in Sections

More information

On Modifications to Linking Variance Estimators in the Fay-Herriot Model that Induce Robustness

On Modifications to Linking Variance Estimators in the Fay-Herriot Model that Induce Robustness Statistics and Applications {ISSN 2452-7395 (online)} Volume 16 No. 1, 2018 (New Series), pp 289-303 On Modifications to Linking Variance Estimators in the Fay-Herriot Model that Induce Robustness Snigdhansu

More information

19. Basis and Dimension

19. Basis and Dimension 9. Basis and Dimension In the last Section we established the notion of a linearly independent set of vectors in a vector space V and of a set of vectors that span V. We saw that any set of vectors that

More information

What is proof? Lesson 1

What is proof? Lesson 1 What is proof? Lesson The topic for this Math Explorer Club is mathematical proof. In this post we will go over what was covered in the first session. The word proof is a normal English word that you might

More information

Some problems related to Singer sets

Some problems related to Singer sets Some problems related to Singer sets Alex Chmelnitzki October 24, 2005 Preface The following article describes some of the work I did in the summer 2005 for Prof. Ben Green, Bristol University. The work

More information

MA 1125 Lecture 15 - The Standard Normal Distribution. Friday, October 6, Objectives: Introduce the standard normal distribution and table.

MA 1125 Lecture 15 - The Standard Normal Distribution. Friday, October 6, Objectives: Introduce the standard normal distribution and table. MA 1125 Lecture 15 - The Standard Normal Distribution Friday, October 6, 2017. Objectives: Introduce the standard normal distribution and table. 1. The Standard Normal Distribution We ve been looking at

More information

Machine Learning and Computational Statistics, Spring 2017 Homework 2: Lasso Regression

Machine Learning and Computational Statistics, Spring 2017 Homework 2: Lasso Regression Machine Learning and Computational Statistics, Spring 2017 Homework 2: Lasso Regression Due: Monday, February 13, 2017, at 10pm (Submit via Gradescope) Instructions: Your answers to the questions below,

More information

Test for Discontinuities in Nonparametric Regression

Test for Discontinuities in Nonparametric Regression Communications of the Korean Statistical Society Vol. 15, No. 5, 2008, pp. 709 717 Test for Discontinuities in Nonparametric Regression Dongryeon Park 1) Abstract The difference of two one-sided kernel

More information

Optimal normalization of DNA-microarray data

Optimal normalization of DNA-microarray data Optimal normalization of DNA-microarray data Daniel Faller 1, HD Dr. J. Timmer 1, Dr. H. U. Voss 1, Prof. Dr. Honerkamp 1 and Dr. U. Hobohm 2 1 Freiburg Center for Data Analysis and Modeling 1 F. Hoffman-La

More information

Median Cross-Validation

Median Cross-Validation Median Cross-Validation Chi-Wai Yu 1, and Bertrand Clarke 2 1 Department of Mathematics Hong Kong University of Science and Technology 2 Department of Medicine University of Miami IISA 2011 Outline Motivational

More information

A Re-Introduction to General Linear Models (GLM)

A Re-Introduction to General Linear Models (GLM) A Re-Introduction to General Linear Models (GLM) Today s Class: You do know the GLM Estimation (where the numbers in the output come from): From least squares to restricted maximum likelihood (REML) Reviewing

More information

On Consistency of Estimators in Simple Linear Regression 1

On Consistency of Estimators in Simple Linear Regression 1 On Consistency of Estimators in Simple Linear Regression 1 Anindya ROY and Thomas I. SEIDMAN Department of Mathematics and Statistics University of Maryland Baltimore, MD 21250 (anindya@math.umbc.edu)

More information

Circuit Theory Prof. S.C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi

Circuit Theory Prof. S.C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Circuit Theory Prof. S.C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 43 RC and RL Driving Point Synthesis People will also have to be told I will tell,

More information

1 What does the random effect η mean?

1 What does the random effect η mean? Some thoughts on Hanks et al, Environmetrics, 2015, pp. 243-254. Jim Hodges Division of Biostatistics, University of Minnesota, Minneapolis, Minnesota USA 55414 email: hodge003@umn.edu October 13, 2015

More information

Least squares: the big idea

Least squares: the big idea Notes for 2016-02-22 Least squares: the big idea Least squares problems are a special sort of minimization problem. Suppose A R m n where m > n. In general, we cannot solve the overdetermined system Ax

More information

Sequence convergence, the weak T-axioms, and first countability

Sequence convergence, the weak T-axioms, and first countability Sequence convergence, the weak T-axioms, and first countability 1 Motivation Up to now we have been mentioning the notion of sequence convergence without actually defining it. So in this section we will

More information

Discussion of Brock and Durlauf s Economic Growth and Reality by Xavier Sala-i-Martin, Columbia University and UPF March 2001

Discussion of Brock and Durlauf s Economic Growth and Reality by Xavier Sala-i-Martin, Columbia University and UPF March 2001 Discussion of Brock and Durlauf s Economic Growth and Reality by Xavier Sala-i-Martin, Columbia University and UPF March 2001 This is a nice paper that summarizes some of the recent research on Bayesian

More information

Multivariate Least Weighted Squares (MLWS)

Multivariate Least Weighted Squares (MLWS) () Stochastic Modelling in Economics and Finance 2 Supervisor : Prof. RNDr. Jan Ámos Víšek, CSc. Petr Jonáš 12 th March 2012 Contents 1 2 3 4 5 1 1 Introduction 2 3 Proof of consistency (80%) 4 Appendix

More information

Non-Spherical Errors

Non-Spherical Errors Non-Spherical Errors Krishna Pendakur February 15, 2016 1 Efficient OLS 1. Consider the model Y = Xβ + ε E [X ε = 0 K E [εε = Ω = σ 2 I N. 2. Consider the estimated OLS parameter vector ˆβ OLS = (X X)

More information

Basics: Definitions and Notation. Stationarity. A More Formal Definition

Basics: Definitions and Notation. Stationarity. A More Formal Definition Basics: Definitions and Notation A Univariate is a sequence of measurements of the same variable collected over (usually regular intervals of) time. Usual assumption in many time series techniques is that

More information

Available from Deakin Research Online:

Available from Deakin Research Online: This is the published version: Beliakov, Gleb and Yager, Ronald R. 2009, OWA operators in linear regression and detection of outliers, in AGOP 2009 : Proceedings of the Fifth International Summer School

More information

Lecture 10: Duality in Linear Programs

Lecture 10: Duality in Linear Programs 10-725/36-725: Convex Optimization Spring 2015 Lecture 10: Duality in Linear Programs Lecturer: Ryan Tibshirani Scribes: Jingkun Gao and Ying Zhang Disclaimer: These notes have not been subjected to the

More information

Minimization of Matched Formulas

Minimization of Matched Formulas WDS'11 Proceedings of Contributed Papers, Part I, 101 105, 2011. ISBN 978-80-7378-184-2 MATFYZPRESS Minimization of Matched Formulas Š. Gurský Charles University, Faculty of Mathematics and Physics, Prague,

More information