Announcements. J. Parman (UC-Davis), Analysis of Economic Data, Winter 2011. February 8, 2011.


Announcements

Solutions to Problem Set 3 are posted. Problem Set 4 is posted; it will be graded and is due a week from Friday. You already know everything you need to work on Problem Set 4. Professor Miller will be filling in for me in Thursday's lecture. I will still have my regular Thursday office hours.

Bivariate Regression Review: Hypothesis Testing

Let's review bivariate regression with an ecology example. Isle Royale has both wolves and moose; both populations are completely cut off from the mainland. Scientists study the island to see how the dynamics of the two populations work. Let's try to estimate the effect of the wolf population on the moose population.

Bivariate Regression Review: Hypothesis Testing

[Figure: diagram of the hypothesized causal relationship between the wolf population and the moose population.]

Bivariate Regression Review: Hypothesis Testing

[Figure: wolf population (left axis, number of wolves) and moose population (right axis, number of moose) on Isle Royale, 1959 to 2004.]

Bivariate Regression Review: Hypothesis Testing

Let's start by asking a very basic question: is there any statistically significant relationship between growth of the wolf population and growth of the moose population? Consider the population model:

g_m = β1 + β2 g_w + ε

Then the hypotheses we want to test are:

H0: β2 = 0
Ha: β2 ≠ 0

To Excel (wolf-moose.csv)...

Bivariate Regression Review: Hypothesis Testing

Our estimated slope coefficient was -0.19, suggesting that a 1 percentage point increase in the wolf population growth rate is associated with a 0.19 percentage point decrease in the moose population growth rate. Is this coefficient large enough to reject the null hypothesis?

t = (-0.19 - 0) / 0.12 = -1.58

Pr(|T| ≥ |t|) = TDIST(1.58, 47, 2) = 0.12

Our p-value is 0.12, so we fail to reject the null hypothesis that β2 equals 0 at a 10% (or 5% or 1%) significance level.
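The arithmetic of this test can be sketched in a few lines (a minimal sketch using the slide's numbers; in the course itself Excel's TDIST function supplies the two-tailed p-value from the t statistic and 47 degrees of freedom):

```python
# Hypothesis test for the slope in g_m = b1 + b2 * g_w + e.
# All numbers are taken from the slide's regression output.
b2 = -0.19      # estimated slope coefficient
se_b2 = 0.12    # standard error of the slope
n = 49          # observations, so degrees of freedom = n - 2 = 47

t_stat = (b2 - 0) / se_b2   # test statistic for H0: beta2 = 0
print(round(abs(t_stat), 2))   # 1.58, matching the slide
```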

Bivariate Regression Review: Hypothesis Testing

What if we think what really matters for the growth of the moose population is how many wolves are out there (not whether the number of wolves is getting bigger or smaller):

g_m = β1 + β2 n_w + ε

Now β1 tells us what the growth rate of the moose population would be with no wolves around, and β2 tells us the change in the growth rate associated with adding one more wolf to the island. Back to Excel...

Bivariate Regression Review: Hypothesis Testing

SUMMARY OUTPUT: growth of moose population as dependent variable

Regression Statistics
  Multiple R           0.26894367
  R Square             0.0723307
  Adjusted R Square    0.05259305
  Standard Error       0.21503659
  Observations         49

             Coefficients   Standard Error   t Stat     P-value
  Intercept    0.16194925     0.088565436     1.828583   0.073812
  n_wolves     0.00674033     0.003521012     1.91432    0.061679

Bivariate Regression Review: Confidence Intervals

Let's try a slightly different way of looking at the relationship between the two populations. In particular, let's switch our independent variable to something that more directly measures the effect of wolves on the moose population. The predation rate is the average percentage of the moose population killed each month by wolves. Let's get a 95% confidence interval for the slope coefficient in the following population model:

g_m = β1 + β2 predation + ε

Bivariate Regression Review: Confidence Intervals

[Figure: annual growth rate of the moose population against the monthly predation rate, with fitted line y = -10.71x + 0.155, R² = 0.414.]

Bivariate Regression Review: Confidence Intervals

We got a slope coefficient of -10.7: an increase in the predation rate of 1 percentage point is associated with a decrease of 10.7 percentage points in the annual growth rate of the moose population. The 95% confidence interval for this coefficient:

b2 ± t_{α/2, n-2} · s_{b2}
-10.7 ± t_{0.025, 18} · 3.0
-10.7 ± 2.1 · 3.0
-10.7 ± 6.3
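The confidence-interval arithmetic above can be checked directly (a minimal sketch; the critical value 2.1 approximates t_{0.025, 18} ≈ 2.101 as on the slide):

```python
# 95% confidence interval for the slope, numbers from the slide.
b2 = -10.7      # estimated slope (predation rate regression)
se_b2 = 3.0     # standard error of the slope
t_crit = 2.1    # t_{0.025, 18}, approximately 2.101

margin = t_crit * se_b2
lower, upper = b2 - margin, b2 + margin
print(round(lower, 1), round(upper, 1))   # -17.0 -4.4
```

Note that the whole interval lies below zero, which is why the slope is statistically significant here even though the point estimate alone does not tell us that.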

Bivariate Regression Review: Choosing Variables and Assessing Results

A few things to think about with our regression:

Is it better to use the growth rate of each population or the size of each population?
Could the direction of causality go the other way (or both ways)?
What else is influencing these populations?
How well are these numbers being measured?
How do we assess the magnitudes and p-values of the coefficients?
What do we expect to happen if we gather more years of data?

Bivariate Regression Review: Statistical vs. Economic Significance

Recall from last class the distinction between statistical and economic significance. Statistical significance just tells us whether we can reject the hypothesis that a coefficient is equal to zero (or whatever constant we chose). Economic significance is about whether the magnitude of the coefficient is large enough to care about. We should always consider the economic significance of the coefficient and its confidence interval (one end of the interval may lead to very different interpretations than the other).

Statistical vs. Economic Significance: An Example

The following guidelines are given for LDL cholesterol levels: less than 130 mg/dl is optimal or near optimal, 130 to 159 mg/dl is borderline high, 160 to 189 mg/dl is high, and above 190 mg/dl is very high. Suppose we run a study looking at oatmeal consumption and cholesterol levels and regress the cholesterol level on bowls of oatmeal eaten per week. How would you interpret the following three different 95% confidence intervals for β2?

(1) 0.5 ± 0.05
(2) 0.05 ± 0.01
(3) 0.05 ± 8

A Few Regression Loose Ends

There are a couple of extra regression details worth pointing out. First, the typical way regression results are displayed:

MPG = 33.08 - 3.48 x DISPLACEMENT    R² = .63
       (1.09)  (0.28)

What's in the parentheses can be standard errors, t-stats, or p-values. In tables of regression output, the first column typically lists the independent variables, and the second column gives the regression coefficient for each variable with its standard error (or t-stat or p-value) in parentheses below the coefficient.

A Few Regression Loose Ends

[Figure: Table VIII, "GDP per Capita and Institutions," from the Quarterly Journal of Economics (p. 1272), reproduced as an example of published regression output. The dependent variable is log GDP per capita (PPP) in 1995; each column uses a different measure of institutions (average protection against expropriation risk, constraint on the executive), with first-stage and second-stage regressions and standard errors in parentheses below the coefficients.]

An Extra Application of Regression Results

Two ways that regression results are often used are to predict either a conditional mean of y or an individual value of y.

The conditional mean: E(y | x = x*) = β1 + β2 x*

The best estimate of the conditional mean: ŷ = b1 + b2 x*

The standard error of ŷ as an estimate of the conditional mean:

s_e √( 1/n + (x* - x̄)² / Σ_{i=1}^{n} (x_i - x̄)² )

An Extra Application of Regression Results

The actual value of y given x*: y = β1 + β2 x* + ε

The best estimate of the individual value of y given x*: ŷ = b1 + b2 x*

The standard error of ŷ as an estimate of the individual value of y:

s_e √( 1 + 1/n + (x* - x̄)² / Σ_{i=1}^{n} (x_i - x̄)² )

Bivariate Data Transformation

A couple of problems can come up with our approach to bivariate data analysis. First, we've assumed that there is a linear relationship between y and x; often, the relationship between y and x will be nonlinear. A second problem is that our methods don't make much sense for categorical variables. We can use data transformations to deal with these problems.

An Example of Data Transformation

The early spread of a virus is often characterized by exponential growth. The number of infected people (N) will be related to time (t) by an exponential growth equation:

N_t = β1 e^(β2 t)

Our methods won't work for estimating β1 and β2. What can we do?

The Spread of H1N1 (Swine Flu)

[Figure: number of cases of H1N1 flu worldwide from April 28, 2009 to May 16, 2009 (WHO data).]

The Spread of H1N1 (Swine Flu)

[Figure: the H1N1 case counts plotted over time.]

An Example of Data Transformation

We can transform our data to get two variables that are linearly related:

N_t = β1 e^(β2 t)
ln N_t = ln(β1 e^(β2 t))
ln N_t = ln β1 + ln(e^(β2 t))
ln N_t = ln β1 + β2 t

So we can use our techniques if we regress ln N_t on t.

An Example of Data Transformation

[Figures: the H1N1 case counts plotted in levels and in logs, showing that the log transformation turns the exponential curve into an approximately linear relationship with time.]

The Log-Linear Model

This example is actually one of our most common transformations, called the log-linear model:

ln Y = β1 + β2 X + ε

We can use ordinary least squares to estimate b1 and b2:

ln ŷ_i = b1 + b2 x_i

Remember that a change in logs is roughly equal to the percentage change (as a decimal):

100 b2 = 100 Δln y / Δx = %Δy / Δx

The Linear-Log Model

Another variation using logs is the linear-log model:

Y = β1 + β2 ln X + ε

We can use ordinary least squares to estimate b1 and b2:

ŷ_i = b1 + b2 ln x_i

Interpreting b2:

(1/100) b2 = Δy / (100 Δln x) = Δy / %Δx

The Linear-Log Model

[Figure: life expectancy at birth against consumption per capita, with fitted line y = 0.001x + 62.78, R² = 0.377. Data are for the year 2000 from the World Development Indicators dataset.]

The Linear-Log Model

[Figure: life expectancy at birth against ln(consumption per capita), with fitted line y = 5.663x + 26.19, R² = 0.696. Data are for the year 2000 from the World Development Indicators dataset.]

The Log-Log Model

Our last variation using logs:

ln Y = β1 + β2 ln X + ε

We can use ordinary least squares to estimate b1 and b2:

ln ŷ_i = b1 + b2 ln x_i

Interpreting b2:

b2 = 100 Δln y / (100 Δln x) = %Δy / %Δx

The Log-Log Model

[Figure: CO2 emissions per capita against consumption per capita, with fitted line y = 0.000x + 2.257, R² = 0.281. Data are for the year 2000 from the World Development Indicators dataset.]

The Log-Log Model

[Figure: ln(CO2 emissions per capita) against ln(consumption per capita), with fitted line y = 0.918x - 6.029, R² = 0.687. Data are for the year 2000 from the World Development Indicators dataset.]

When to Use Logs

Log-linear model: useful when the underlying relationship between x and y is exponential (population growth, education and wages, etc.)

Linear-log model: useful when x is on a very different scale for different observations (when the independent variable is county population, income, etc.)

Log-log model: useful when both x and y are on very different scales for different observations, or when calculating elasticities

Another Example of Data Transformation

A general pattern of wages over the life cycle is that they rise early in your working career and then fall off at the end of your career. For this reason, economists often think that a linear model is not a good way to model wages or income as a function of age. Instead, wages (or ln(wages)) are often regressed on a polynomial of age.

Another Example of Data Transformation

[Figure: annual earnings plotted against age for ages 20 to 60.]

Another Example of Data Transformation

Regressing ln(income) on a quadratic in age:

ln ŷ_i = b1 + b2 age_i + b3 age_i²

How do we interpret the coefficients?

d ln y / d age = b2 + 2 b3 age

The effect of an additional year of age on income varies with age.
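The derivative above can be evaluated at different ages to trace out the life-cycle profile (a minimal sketch; the coefficients below are hypothetical, chosen so earnings peak around age 50):

```python
# Marginal effect of age in ln(y) = b1 + b2*age + b3*age^2.
# Coefficients are hypothetical, not estimates from real data.
b2, b3 = 0.10, -0.001

def marginal_effect(age):
    """d ln(y) / d age = b2 + 2*b3*age."""
    return b2 + 2 * b3 * age

print(round(marginal_effect(25), 3))   # 0.05: earnings still rising at 25
print(round(marginal_effect(50), 3))   # 0.0: the profile peaks around 50
```

Setting the marginal effect to zero gives the turning point at age = -b2 / (2 b3), which is 50 for these coefficients.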

Polynomial Transformations

Quadratic model: Y = β1 + β2 X + β3 X² + ε

Using a polynomial of order p: Y = β1 + β2 X + β3 X² + ... + β_{p+1} X^p + ε

These are multivariate linear models that can still be estimated with ordinary least squares. They are useful when there is a nonlinear but smooth relationship between x and y.

Interpreting the Coefficients

Let's focus on interpreting the coefficients in the quadratic case. The change in y associated with a change in x of one unit will depend on the magnitude of x. Suppose we are looking at years of education as our independent variable and log income as our dependent variable, and we estimate b2 equal to 10 and b3 equal to -.05. In this case, log income is increasing in education (b2 > 0) but at a decreasing rate (b3 < 0).

Interpreting the Coefficients

[Figures: plots of the estimated quadratic relationship between education and log income, increasing but at a decreasing rate.]

Categorical Variables

So far, our analysis has focused on numerical variables. Another case where we have to transform the data is when we have categorical variables. Suppose I have data on ice cream sales and the month of the year; my data points would look like ($1500, July). I can't just regress ice cream sales on month. What if I just convert month to a number: January equals 1, February equals 2, etc.? That doesn't work; these numbers don't have any real meaning, so a change in y resulting from a change in month number isn't meaningful.

Categorical Variables

Solution: dummy variables. Dummy variables are a way to transform categorical variables into a set of binary variables. In the ice cream example, we could define a dummy variable for summer months:

summer = 1 if month ∈ {June, July, August}
summer = 0 otherwise

Now we can regress ice cream sales on this dummy: sales = b1 + b2 summer. Notice that if it is a non-summer month, predicted sales are equal to b1, while if it is a summer month, predicted sales are equal to b1 + b2. So b2 captures the additional sales associated with summer months relative to non-summer months.

Categorical Variables

Our general model with a dummy variable: Y = β1 + β2 D + ε, where D is equal to 1 if a certain condition holds and zero otherwise. We can get estimates b1 and b2 by regressing y_i on d_i:

ŷ_i = b1 + b2 d_i

Interpreting results:

ŷ(d = 0) = b1
ŷ(d = 1) = b1 + b2
ŷ(d = 1) - ŷ(d = 0) = b2