Multivariate Transformation of Variables and Maximum Likelihood Estimation


Marquette University
Multivariate Transformation of Variables and Maximum Likelihood Estimation
Daniel B. Rowe, Ph.D., Associate Professor, Department of Mathematics, Statistics, and Computer Science
Copyright 2013 by D.B. Rowe

Outline
Multivariate Transformation of Variables
Maximum Likelihood Estimation (MLE)

Recall Univariate Change of Variable
Given a continuous RV $x$, let $y = y(x)$ be a one-to-one transformation with inverse transformation $x = x(y)$. Then, if $f_X(x)$ is the PDF of $x$, the PDF of $y$ is found as
$$f_Y(y) = f_X\big(x(y)\big)\,\lvert J(x \to y)\rvert, \qquad\text{where}\qquad J(x \to y) = \frac{dx(y)}{dy}.$$
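A small worked example (added here for concreteness; it is not on the original slide): for the linear map $y = a + bx$ with $b \neq 0$, the inverse is $x(y) = (y-a)/b$, so $J(x \to y) = 1/b$ and
$$f_Y(y) = f_X\!\left(\frac{y-a}{b}\right)\frac{1}{\lvert b\rvert}.$$
In particular, if $x \sim N(0,1)$ this gives $y \sim N(a, b^2)$, the fact used in the MLE section below.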

Recall Bivariate Change of Variable
Given two continuous random variables $(x_1, x_2)$ with joint probability distribution function $f_{X_1,X_2}(x_1, x_2)$, let
$$y_1 = y_1(x_1, x_2), \qquad y_2 = y_2(x_1, x_2)$$
be a transformation from $(x_1, x_2)$ to $(y_1, y_2)$ with inverse transformation
$$x_1 = x_1(y_1, y_2), \qquad x_2 = x_2(y_1, y_2).$$

Recall Bivariate Change of Variable
Then, the joint probability distribution function $f_{Y_1,Y_2}(y_1, y_2)$ of $(y_1, y_2)$ can be found via
$$f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}\big(x_1(y_1,y_2),\, x_2(y_1,y_2)\big)\,\lvert J(x_1,x_2 \to y_1,y_2)\rvert,$$
where
$$J(x_1,x_2 \to y_1,y_2) = \begin{vmatrix} \dfrac{dx_1(y_1,y_2)}{dy_1} & \dfrac{dx_1(y_1,y_2)}{dy_2} \\[6pt] \dfrac{dx_2(y_1,y_2)}{dy_1} & \dfrac{dx_2(y_1,y_2)}{dy_2} \end{vmatrix}.$$

Multivariate Change of Variable
Given $n$ continuous random variables $(x_1, \ldots, x_n)$ with joint probability distribution function $f_X(x_1, \ldots, x_n)$, let
$$y_1 = y_1(x_1, \ldots, x_n), \quad \ldots, \quad y_n = y_n(x_1, \ldots, x_n)$$
be an $n$-dimensional transformation from $(x_1, \ldots, x_n)$ to $(y_1, \ldots, y_n)$ with inverse transformation
$$x_1 = x_1(y_1, \ldots, y_n), \quad \ldots, \quad x_n = x_n(y_1, \ldots, y_n).$$

Multivariate Change of Variable
Then, the joint probability distribution function $f_Y(y_1, \ldots, y_n)$ of $(y_1, \ldots, y_n)$ can be found via
$$f_Y(y_1, \ldots, y_n) = f_X\big(x_1(y_1,\ldots,y_n), \ldots, x_n(y_1,\ldots,y_n)\big)\,\lvert J(x_1,\ldots,x_n \to y_1,\ldots,y_n)\rvert,$$
where $J(x_1,\ldots,x_n \to y_1,\ldots,y_n)$ is the determinant of the $n \times n$ matrix whose $(i,j)$th element is $dx_i(y_1,\ldots,y_n)/dy_j$.
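As an illustration not on the original slide, take the linear transformation $y = Ax$ for an invertible $n \times n$ matrix $A$. Then $x(y) = A^{-1}y$, the Jacobian is $J = \det(A^{-1}) = 1/\det(A)$, and
$$f_Y(y_1, \ldots, y_n) = f_X(A^{-1}y)\,\frac{1}{\lvert\det(A)\rvert},$$
which is the multivariate analogue of the $1/\lvert b\rvert$ factor in the univariate example above.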

Multivariate Change of Variable
The important moral to learn from our study of transformation of variables is: measurements have statistical variation and a statistical distribution associated with them, and every time we do something with a measurement (i.e., a math operation on it) we change its statistical properties and its distribution!

Maximum Likelihood Estimation
We have been saying that $y \sim N(\mu, \sigma^2)$, when what we actually mean is that $y = \mu + \varepsilon$. That is, $y$ has some true underlying value $\mu$, where $\varepsilon \sim N(0, \sigma^2)$, but there is additive measurement error (noise). We know that if $\varepsilon \sim N(0, \sigma^2)$, then from a linear transformation of variable we get $y \sim N(\mu, \sigma^2)$.
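Filling in the one-line calculation behind that last statement (an addition, using the univariate change-of-variable formula above): with $y = \mu + \varepsilon$ the inverse is $\varepsilon(y) = y - \mu$ and $J = d\varepsilon/dy = 1$, so
$$f_Y(y) = f_\varepsilon(y - \mu)\cdot 1 = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right),$$
which is exactly the $N(\mu, \sigma^2)$ density.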

Maximum Likelihood Estimation - Mean
If we have a random sample of size $n$ with $y_i = \mu + \varepsilon_i$, where $\varepsilon_i \sim N(0, \sigma^2)$ for $i = 1, 2, \ldots, n$, then since these are independent observations, the joint distribution is
$$f(y_1, \ldots, y_n \mid \mu, \sigma^2) = (2\pi\sigma^2)^{-n/2} \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \mu)^2\right) = L(\mu, \sigma^2).$$

Maximum Likelihood Estimation - Mean
$L(\mu, \sigma^2)$ is called the likelihood function. What we want to do is find the values of $(\mu, \sigma^2)$ that maximize $L(\mu, \sigma^2)$. The value of $\mu$ that maximizes $L(\mu, \sigma^2)$ is the value $\hat\mu$ that minimizes $\sum_i (y_i - \mu)^2$. The value of $\sigma^2$ that maximizes $L(\mu, \sigma^2)$ is $\hat\sigma^2 = \frac{1}{n}\sum_i (y_i - \hat\mu)^2$.
[Figure: minimize the distances $d_i = y_i - \hat\mu$.]

Maximum Likelihood Estimation - Mean
$L(\mu, \sigma^2)$ is called the likelihood function. What we do is differentiate $L(\mu, \sigma^2)$ with respect to $\mu$ and $\sigma^2$, set the derivatives equal to 0, and solve. That is,
$$\frac{\partial L(\mu,\sigma^2)}{\partial \mu}\bigg|_{\hat\mu,\hat\sigma^2} = 0 \quad\text{and}\quad \frac{\partial L(\mu,\sigma^2)}{\partial \sigma^2}\bigg|_{\hat\mu,\hat\sigma^2} = 0.$$
However, this is messy; we can instead maximize $LL(\mu, \sigma^2) = \ln\big(L(\mu, \sigma^2)\big)$, because $\ln(\cdot)$ is a monotonic function. We use $\log(\cdot)$ for $\ln(\cdot)$.

Maximum Likelihood Estimation - Mean
With $y_i = \mu + \varepsilon_i$ and $\varepsilon_i \sim N(0, \sigma^2)$ independent,
$$f(y_1,\ldots,y_n \mid \mu, \sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\mu)^2\right)$$
$$LL(\mu,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\mu)^2$$
$$\frac{\partial LL(\mu,\sigma^2)}{\partial \mu}\bigg|_{\hat\mu,\hat\sigma^2} = -\frac{1}{2\hat\sigma^2}\sum_{i=1}^n 2(y_i-\hat\mu)(-1) = 0$$
$$\frac{\partial LL(\mu,\sigma^2)}{\partial \sigma^2}\bigg|_{\hat\mu,\hat\sigma^2} = -\frac{n}{2\hat\sigma^2} + \frac{1}{2\hat\sigma^4}\sum_{i=1}^n (y_i-\hat\mu)^2 = 0$$

Maximum Likelihood Estimation - Mean
Solving for $\mu$ and $\sigma^2$ yields
$$\hat\mu = \frac{1}{n}\sum_{i=1}^n y_i = \bar y \qquad\text{and}\qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (y_i - \hat\mu)^2.$$
These are MLEs, most probable or modal values. Note that the denominator is $n$ and not $n-1$; $\hat\sigma^2$ is a biased estimator of $\sigma^2$, with $E(\hat\sigma^2) = \frac{n-1}{n}\sigma^2$. Since
$$\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_{(n-1)}, \quad E\!\left(\frac{n\hat\sigma^2}{\sigma^2}\right) = n-1, \quad\text{so}\quad E(\hat\sigma^2) = \frac{(n-1)}{n}\sigma^2,$$
while
$$\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{(n-1)}, \quad E\!\left(\frac{(n-1)s^2}{\sigma^2}\right) = n-1, \quad\text{so}\quad E(s^2) = \sigma^2.$$
This is why we use a denominator of $n-1$.

Maximum Likelihood Estimation - Mean
$\hat\mu \sim N(\mu, \sigma^2/n)$; here $\sigma^2/n = 4/10 = 0.4$.

n=10; mu=5; sigma=2;
y=sigma*randn(10^6,n)+mu;
ybar=mean(y,2);
figure(1)
hist(ybar,(0:.1:10)')
axis([0 10 0 70000])
mean(ybar),var(ybar)

Simulation results: the $\hat\mu = \bar y$ values have mean approximately 5.00 and sample variance approximately 0.399, matching $\mu = 5$ and $\sigma^2/n = 0.4$.
[Figure: histogram of the $\hat\mu$ values.]

Maximum Likelihood Estimation - Mean
$\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_{(n-1)}$, so $E(\hat\sigma^2) = \frac{n-1}{n}\sigma^2 = 3.6$ and $\mathrm{Var}(\hat\sigma^2) = \frac{2(n-1)}{n^2}\sigma^4 = 2.88$.

sigmahat=var(y',1)';
figure(2)
hist(sigmahat,(0:.1:20)')
axis([0 20 0 55000])
mean(sigmahat)
var(sigmahat)

Simulation results: the $\hat\sigma^2 = \frac{1}{n}\sum_i (y_i - \hat\mu)^2$ values have mean approximately 3.60 and sample variance approximately 2.88. (Toggle with the next slide; note the horizontal-axis scale.)
[Figure: histogram of the $\hat\sigma^2$ values.]

Maximum Likelihood Estimation - Mean
$\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_{(n-1)}$, with $E\big(\chi^2_{(n-1)}\big) = n-1 = 9$ and $\mathrm{Var}\big(\chi^2_{(n-1)}\big) = 2(n-1) = 18$.

chi=n*sigmahat/sigma^2;
figure(3)
hist(chi,(0:.5:50)')
axis([0 50 0 55000])
mean(chi)
var(chi)

Simulation results: the $n\hat\sigma^2/\sigma^2$ values have mean approximately 9.00 and sample variance approximately 18.00. (Toggle with the previous slide; note the horizontal-axis scale.)
[Figure: histogram of the $n\hat\sigma^2/\sigma^2$ values.]

Maximum Likelihood Estimation - Linear
This technique can be generalized to linear regression. Let $y_i = a + bx_i + \varepsilon_i$, where $\varepsilon_i \sim N(0, \sigma^2)$, $i = 1, \ldots, n$, are independent.
[Figure: data points with measurement errors $d_1, \ldots, d_5$ about the true line $y = a + bx$ and the fitted line $\hat y = \hat a + \hat b x$.]

Maximum Likelihood Estimation - Linear
[Figure (repeat of the previous slide's setup): the measurement errors $d_i$ of the data points about the true line $y = a + bx$.]

Maximum Likelihood Estimation - Linear
This technique can be generalized to linear regression. Let $y_i = a + bx_i + \varepsilon_i$, where $\varepsilon_i \sim N(0, \sigma^2)$ are independent. Then the likelihood is
$$f(y_1,\ldots,y_n \mid a, b, \sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - a - bx_i)^2\right)$$
and the log likelihood is
$$LL(a,b,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - a - bx_i)^2.$$

Maximum Likelihood Estimation - Linear
$L(a, b, \sigma^2)$ is again called the likelihood function. What we want to do is find the values of $(a, b, \sigma^2)$ that maximize $L(a, b, \sigma^2)$. The values of $(a, b)$ that maximize $L(a, b, \sigma^2)$ are the values $(\hat a, \hat b)$ that minimize $\sum_i (y_i - \hat a - \hat b x_i)^2$. The value of $\sigma^2$ that maximizes $L(a, b, \sigma^2)$ is $\hat\sigma^2 = \frac{1}{n}\sum_i (y_i - \hat a - \hat b x_i)^2$.
[Figure: minimize the $d_i = y_i - \hat a - \hat b x_i$ with respect to $a$ and $b$.]

Maximum Likelihood Estimation - Linear
Differentiate $LL(a, b, \sigma^2)$ with respect to $a$, $b$, and $\sigma^2$, then set the derivatives equal to 0:
$$LL(a,b,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - a - bx_i)^2$$
$$\frac{\partial LL(a,b,\sigma^2)}{\partial a}\bigg|_{\hat a,\hat b,\hat\sigma^2} = -\frac{1}{2\hat\sigma^2}\sum_{i=1}^n 2(y_i - \hat a - \hat b x_i)(-1) = 0$$
$$\frac{\partial LL(a,b,\sigma^2)}{\partial b}\bigg|_{\hat a,\hat b,\hat\sigma^2} = -\frac{1}{2\hat\sigma^2}\sum_{i=1}^n 2(y_i - \hat a - \hat b x_i)(-x_i) = 0$$
$$\frac{\partial LL(a,b,\sigma^2)}{\partial \sigma^2}\bigg|_{\hat a,\hat b,\hat\sigma^2} = -\frac{n}{2\hat\sigma^2} + \frac{1}{2\hat\sigma^4}\sum_{i=1}^n (y_i - \hat a - \hat b x_i)^2 = 0$$

Maximum Likelihood Estimation - Linear
Solving for the estimated parameters yields
$$\hat b = \frac{n\sum_i x_i y_i - \big(\sum_i x_i\big)\big(\sum_i y_i\big)}{n\sum_i x_i^2 - \big(\sum_i x_i\big)^2}, \qquad \hat a = \frac{\big(\sum_i y_i\big)\big(\sum_i x_i^2\big) - \big(\sum_i x_i\big)\big(\sum_i x_i y_i\big)}{n\sum_i x_i^2 - \big(\sum_i x_i\big)^2} = \bar y - \hat b\,\bar x,$$
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (y_i - \hat a - \hat b x_i)^2.$$
[Figure: data points, errors $d_1, \ldots, d_5$, and the fitted line $\hat y = \hat a + \hat b x$.]
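A minimal MATLAB sketch of these closed-form estimates (added for illustration; the data vectors x and y below are hypothetical, not from the slides):

% Closed-form MLEs for the simple linear model y = a + b*x + error
x = [1 2 3 4 5]';             % hypothetical predictor values
y = [1.1 1.9 3.2 3.8 5.1]';   % hypothetical responses
n = length(x);
bhat = (n*sum(x.*y) - sum(x)*sum(y)) / (n*sum(x.^2) - sum(x)^2);
ahat = mean(y) - bhat*mean(x);                 % ahat = ybar - bhat*xbar
sigma2hat = sum((y - ahat - bhat*x).^2) / n;   % MLE uses denominator n
[ahat, bhat, sigma2hat]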

Maximum Likelihood Estimation - Linear
The regression model $y_i = a + bx_i + \varepsilon_i$, where $\varepsilon_i \sim N(0, \sigma^2)$, $i = 1, \ldots, n$, that we presented can be equivalently written as $y = X\beta + \varepsilon$, where
$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \ \text{(measured data)}, \quad X = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix} \ \text{(design matrix)}, \quad \beta = \begin{pmatrix} a \\ b \end{pmatrix} \ \text{(regression coefficients)},$$
and $\varepsilon \sim N(0, \sigma^2 I_n)$ is the measurement error. $I_n$ is an $n$-dimensional identity matrix.

Maximum Likelihood Estimation - Linear
With $y = X\beta + \varepsilon$ and $\varepsilon \sim N(0, \sigma^2 I_n)$, the likelihood is
$$f(y_1,\ldots,y_n \mid a, b, \sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\!\left(-\frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta)\right)$$
and the log likelihood is
$$LL(a,b,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta).$$

Maximum Likelihood Estimation - Linear
$L(\beta, \sigma^2)$ is again called the likelihood function. What we want to do is find the values of $(\beta, \sigma^2)$ that maximize $L(\beta, \sigma^2)$. The value of $\beta$ that maximizes $L(\beta, \sigma^2)$ is the value $\hat\beta$ that minimizes $(y - X\beta)'(y - X\beta)$. The value of $\sigma^2$ that maximizes $L(\beta, \sigma^2)$ is $\hat\sigma^2 = \frac{1}{n}(y - X\hat\beta)'(y - X\hat\beta)$. We need to find $\hat\beta$.
[Figure: minimize $(y - X\beta)'(y - X\beta)$ with respect to $\beta$.]

Maximum Likelihood Estimation - Linear
We don't need to take the derivative of $L(\beta, \sigma^2)$ with respect to $\beta$ (although we could). We can write, with algebra (add and subtract $X\hat\beta$),
$$(y - X\beta)'(y - X\beta) = (y - X\hat\beta)'(y - X\hat\beta) + (\beta - \hat\beta)'(X'X)(\beta - \hat\beta),$$
where $\hat\beta = (X'X)^{-1}X'y$ (with $X'X$ invertible; the first term does not depend on $\beta$). It can be seen that $\hat\beta$ maximizes $LL(\beta, \sigma^2)$ because it makes $(y - X\beta)'(y - X\beta)$ smallest:
$$LL(\beta,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\Big[(y - X\hat\beta)'(y - X\hat\beta) + (\beta - \hat\beta)'(X'X)(\beta - \hat\beta)\Big].$$
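One step the slide leaves implicit (added here) is why the cross term from adding and subtracting $X\hat\beta$ vanishes: writing $y - X\beta = (y - X\hat\beta) + X(\hat\beta - \beta)$ and expanding,
$$(y - X\beta)'(y - X\beta) = (y - X\hat\beta)'(y - X\hat\beta) + (\hat\beta - \beta)'(X'X)(\hat\beta - \beta) + 2(\hat\beta - \beta)'X'(y - X\hat\beta),$$
and the last term is zero because $X'(y - X\hat\beta) = X'y - X'X(X'X)^{-1}X'y = 0$.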

Maximum Likelihood Estimation - Linear
More generally, we can have a multiple regression model $y = X\beta + \varepsilon$, where $y$ ($n \times 1$) is the measured data, $X$ ($n \times (q+1)$) is the design matrix, $\beta$ ($(q+1) \times 1$) holds the regression coefficients, and $\varepsilon \sim N(0, \sigma^2 I_n)$ is the measurement error:
$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1q} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nq} \end{pmatrix}, \quad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_q \end{pmatrix}.$$

Maximum Likelihood Estimation - Linear
The MLEs are the same:
$$\hat\beta = (X'X)^{-1}X'y \quad \big((q+1)\times 1\big).$$
In addition,
$$\hat\beta \sim N\big(\beta,\ \sigma^2 (X'X)^{-1}\big) \quad \big((q+1)\times(q+1)\ \text{covariance}\big), \qquad \hat\sigma^2 = \frac{1}{n}(y - X\hat\beta)'(y - X\hat\beta), \qquad \frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_{(n-q-1)},$$
with $\hat\beta$ and $\hat\sigma^2$ independent. From
$$(y - X\beta)'(y - X\beta) = (y - X\hat\beta)'(y - X\hat\beta) + (\beta - \hat\beta)'(X'X)(\beta - \hat\beta)$$
the $n$ degrees of freedom split as $(n-q-1) + (q+1)$. This means we should use a denominator of $n-q-1$ for an unbiased estimator of $\sigma^2$.
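A small MATLAB sketch of these formulas for a multiple regression with q = 2 (illustrative only; the data are simulated here, not from the slides):

% Multiple regression MLEs, q = 2 predictors plus an intercept
n = 50; q = 2; sigma = 2;
X = [ones(n,1), randn(n,q)];          % design matrix, n x (q+1)
beta = [1; 0.5; -0.3];                % true coefficients (assumed values)
y = X*beta + sigma*randn(n,1);
betahat = inv(X'*X)*X'*y;             % MLE of beta
resid = y - X*betahat;
sigma2mle = resid'*resid/n;           % MLE, denominator n (biased)
sigma2unb = resid'*resid/(n-q-1);     % unbiased, denominator n-q-1
[sigma2mle, sigma2unb]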

Maximum Likelihood Estimation - Linear
Let $\beta = (a, b)'$ and $X = (1_n, x)$, where $x$ is the column of design settings; then $\hat\beta \sim N\big(\beta, \sigma^2(X'X)^{-1}\big)$.

num=10^6; a=.8; b=.5; sigma=2;
x=[1,2,3,4,5]'; n=length(x);
mu=a+b*x'; X=[ones(n,1),x];
y=sigma*randn(num,n)+ones(num,1)*mu;
betahat=inv(X'*X)*X'*y';
figure(1), hist(betahat(1,:),(-10:.1:10)')
figure(2), hist(betahat(2,:),(-5:.1:5)')
betabar=mean(betahat,2);
varbetahat=var(betahat,1,2);

Simulation results: the $\hat a$ values have mean and sample variance close to $a = 0.8$ and $\mathrm{Var}(\hat a) = \sigma^2[(X'X)^{-1}]_{11} = 4.4$; the $\hat b$ values have mean and sample variance close to $b = 0.5$ and $\mathrm{Var}(\hat b) = \sigma^2[(X'X)^{-1}]_{22} = 0.4$; and $\mathrm{cov}(\hat a, \hat b) = \sigma^2[(X'X)^{-1}]_{12} = -1.2$, so $\mathrm{corr}(\hat a, \hat b) = -0.9045$.
[Figure: histograms of the $\hat a$ and $\hat b$ values.]

Maximum Likelihood Estimation - Linear
$\frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_{(n-q-1)}$, with $n-q-1 = 3$, $E\big(\chi^2_{(3)}\big) = 3$, and $\mathrm{Var}\big(\chi^2_{(3)}\big) = 6$.

resid=y-(X*betahat)';
sigmahat=var(resid',1)';
chi=n*sigmahat/sigma^2;
figure(3)
hist(chi,(0:.5:30)')
xlim([0 30])
mean(chi), var(chi)

Simulation results: the $n\hat\sigma^2/\sigma^2$ values have mean approximately 3.00 and sample variance approximately 6.00.
[Figure: histogram of the $n\hat\sigma^2/\sigma^2$ values.]

Maximum Likelihood Estimation - Exponential
This is a more general method than just for linear functions. Let $y_i = a e^{b x_i} + \varepsilon_i$, where $\varepsilon_i \sim N(0, \sigma^2)$, $i = 1, \ldots, n$, are independent.
[Figure: data points with measurement errors $d_1, \ldots, d_5$ about the fitted curve $\hat y = \hat a e^{\hat b x}$.]

Maximum Likelihood Estimation - Exponential
This is a more general method than just for linear functions. Let $y_i = a e^{b x_i} + \varepsilon_i$, where $\varepsilon_i \sim N(0, \sigma^2)$ are independent. Then the likelihood is
$$f(y_1,\ldots,y_n \mid a, b, \sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n \big(y_i - a e^{b x_i}\big)^2\right)$$
and the log likelihood is
$$LL(a,b,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n \big(y_i - a e^{b x_i}\big)^2.$$

Maximum Likelihood Estimation - Exponential
$L(a, b, \sigma^2)$ is again called the likelihood function. What we want to do is find the values of $(a, b, \sigma^2)$ that maximize $L(a, b, \sigma^2)$. The values of $(a, b)$ that maximize $L(a, b, \sigma^2)$ are the values $(\hat a, \hat b)$ that minimize $\sum_i \big(y_i - a e^{b x_i}\big)^2$. The value of $\sigma^2$ that maximizes $L(a, b, \sigma^2)$ is $\hat\sigma^2 = \frac{1}{n}\sum_i \big(y_i - \hat a e^{\hat b x_i}\big)^2$.
[Figure: minimize the $d_i = y_i - \hat a e^{\hat b x_i}$ with respect to $a$ and $b$.]

Maximum Likelihood Estimation - Exponential
Differentiate $LL(a, b, \sigma^2)$ with respect to $a$, $b$, and $\sigma^2$, then set the derivatives equal to 0:
$$LL(a,b,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n \big(y_i - a e^{b x_i}\big)^2$$
$$\frac{\partial LL(a,b,\sigma^2)}{\partial a}\bigg|_{\hat a,\hat b,\hat\sigma^2} = -\frac{1}{2\hat\sigma^2}\sum_{i=1}^n 2\big(y_i - \hat a e^{\hat b x_i}\big)\big(-e^{\hat b x_i}\big) = 0$$
$$\frac{\partial LL(a,b,\sigma^2)}{\partial b}\bigg|_{\hat a,\hat b,\hat\sigma^2} = -\frac{1}{2\hat\sigma^2}\sum_{i=1}^n 2\big(y_i - \hat a e^{\hat b x_i}\big)\big(-\hat a x_i e^{\hat b x_i}\big) = 0$$
$$\frac{\partial LL(a,b,\sigma^2)}{\partial \sigma^2}\bigg|_{\hat a,\hat b,\hat\sigma^2} = -\frac{n}{2\hat\sigma^2} + \frac{1}{2\hat\sigma^4}\sum_{i=1}^n \big(y_i - \hat a e^{\hat b x_i}\big)^2 = 0$$

Maximum Likelihood Estimation - Exponential
Solving for the estimated parameters yields
$$\hat a = \frac{\sum_i y_i e^{\hat b x_i}}{\sum_i e^{2\hat b x_i}}, \qquad \sum_i x_i y_i e^{\hat b x_i} = \hat a \sum_i x_i e^{2\hat b x_i}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n \big(y_i - \hat a e^{\hat b x_i}\big)^2.$$
There is no analytic solution; a numerical solution is needed.
[Figure: data points, errors $d_1, \ldots, d_5$, and the fitted curve $\hat y = \hat a e^{\hat b x}$.]
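A minimal MATLAB sketch of one numerical approach, a grid search over (a, b), in the spirit of the homework below (the data, grid ranges, and step sizes are hypothetical assumptions, not from the slides; a routine such as fminsearch could be used instead):

% Grid search for the MLE of (a,b) in y = a*exp(b*x) + error
x = [0.5 1 2 3 4 5]';            % hypothetical x values
y = [3.2 2.8 1.9 1.4 1.1 0.8]';  % hypothetical y values
n = length(y);
avals = 0:0.01:5;  bvals = -2:0.01:0;   % assumed search intervals and steps
best = Inf;
for a = avals
  for b = bvals
    s2 = sum((y - a*exp(b*x)).^2)/n;    % sigma^2 at this (a,b)
    if s2 < best
      best = s2; ahat = a; bhat = b;
    end
  end
end
[ahat, bhat, best]   % MLEs of a and b, and the minimized sigma^2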

Maximum Likelihood Estimation - Exponential
Since we had to numerically maximize the likelihood, we do not have nice formulas for the mean and variance of $(\hat a, \hat b, \hat\sigma^2)$. The estimates $\hat a$ and $\hat b$ are the values of $a$ and $b$ that minimize $\sum_i \big(y_i - a e^{b x_i}\big)^2$, and $\hat\sigma^2 = \frac{1}{n}\sum_i \big(y_i - \hat a e^{\hat b x_i}\big)^2$.
[Figure: minimize the $d_i = y_i - \hat a e^{\hat b x_i}$.]

Homework:
1) Prove:
a) $(n-1)s^2 = \sum_{i=1}^n (x_i - \bar x)^2 = \sum_{i=1}^n x_i(x_i - \bar x) = \sum_{i=1}^n x_i^2 - n\bar x^2$
b) $(y - X\beta)'(y - X\beta) = (y - X\hat\beta)'(y - X\hat\beta) + (\beta - \hat\beta)'(X'X)(\beta - \hat\beta)$

Homework:
2) Given observed data points (1,1), (3,2), (2,3), (4,4):
a) Plot the points.
b) Analytically fit a regression line to the points, i.e. find $\hat y = \hat a + \hat b x$ by estimating $\hat a$ and $\hat b$. Find $\hat\sigma^2$.
c) Numerically fit a regression line to the points. Set up an interval of possible $a$ and $b$ values. Select $\Delta a$ and $\Delta b$ step sizes. Compute $\sigma^2 = \frac{1}{n}\sum_i (y_i - a - b x_i)^2$ for each combination. Find the $a$ and $b$ that make $\sigma^2$ smallest. The $a$ and $b$ that minimize $\sigma^2$ are $\hat a$ and $\hat b$, and that $\sigma^2$ is $\hat\sigma^2$.
d) Plot the two lines on the same graph as the points.
e) Plot the surface of $(a, b, \sigma^2)$ values from c).
f) Comment.

Homework:
3) Given observed data points (1/2, 3.2), (1, 2.8), (2, 1.86), (3, 1.0), (4, 1.06), (5, 1.40):
a) Plot the points.
b) Numerically fit a single exponential to the points, i.e. find $\hat y = \hat a e^{\hat b x}$. Set up an interval of possible $a$ and $b$ values. Select $\Delta a$ and $\Delta b$ step sizes. Compute $\sigma^2 = \frac{1}{n}\sum_i (y_i - a e^{b x_i})^2$ for each combination. Find the $a$ and $b$ that make $\sigma^2$ smallest. The $a$ and $b$ that minimize $\sigma^2$ are $\hat a$ and $\hat b$, and that $\sigma^2$ is $\hat\sigma^2$.
c) Plot the curve $\hat y = \hat a e^{\hat b x}$ on the same graph as the points.
d) Plot the surface of $(a, b, \sigma^2)$ values from b).
e) Comment.

Homework:
4) Given the same observed data points as in 3):
a) Take the natural log of each y point, $y' = \log(y)$.
b) Plot the points ($y'$ versus the old $x$).
c) Guess where the best-fit line to the data is.
d) Analytically fit a linear regression line to the points, i.e. find $\hat y' = \hat c + \hat d x$, where $c = \log(a)$ and $d = b$.
e) Plot the curve $\hat y = e^{\hat c} e^{\hat d x}$ on the same graph as the points and the previously fitted curve from 3).
f) Compute $\hat\sigma^2$ from $y = \exp(y')$ and $\hat y = e^{\hat c} e^{\hat d x}$.
g) Comment.

Homework:
5) Let $x_1, \ldots, x_n$ be an independent sample from each of the following PDFs. In each case find the MLE $\hat\theta$ of $\theta$.
a) $f(x\mid\theta) = \dfrac{\theta^x e^{-\theta}}{x!}$, $x = 0, 1, 2, \ldots$; $0 \le \theta < \infty$
b) $f(x\mid\theta) = \theta x^{\theta-1}$, $0 \le x \le 1$; $0 < \theta < \infty$
c) $f(x\mid\theta) = \dfrac{1}{\theta} e^{-x/\theta}$, $0 \le x < \infty$; $0 < \theta < \infty$
d) $f(x\mid\theta) = \theta e^{-\theta x}$, $0 \le x < \infty$; $0 < \theta < \infty$
e) $f(x\mid\theta) = e^{-(x-\theta)}$, $\theta \le x < \infty$; $-\infty < \theta < \infty$
Take $f(x\mid\theta) = 0$ where not defined.