6 More about likelihood
- David Rogers
6.1 Invariance property of MLEs

Theorem. If $\hat\theta$ is an MLE of $\theta$ and $g$ is a function, then $g(\hat\theta)$ is an MLE of $g(\theta)$.

Proof. If $g$ is one-to-one, then $L(\theta) = L\big(g^{-1}(g(\theta))\big)$, so both sides are maximised by $\hat\theta$; hence $\hat\theta = g^{-1}(\widehat{g(\theta)})$, or $g(\hat\theta) = \widehat{g(\theta)}$. If $g$ is many-to-one, then the $\hat\theta$ which maximises $L(\theta)$ still corresponds to $g(\hat\theta)$, so $g(\hat\theta)$ still corresponds to the maximum of $L(\theta)$.

Example. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a Bernoulli distribution $B(1,\theta)$. Consider MLEs of the mean, $\theta$, and variance, $\theta(1-\theta)$. Note, by the way, that $\theta(1-\theta)$ is not a 1-1 function of $\theta$.

The log-likelihood is
$$l(\theta) = \sum x_i \log\theta + \left(n - \sum x_i\right)\log(1-\theta)$$
and
$$\frac{dl(\theta)}{d\theta} = \sum x_i/\theta - \left(n - \sum x_i\right)\big/(1-\theta),$$
so it is easily shown that the MLE of $\theta$ is $\hat\theta = \bar X$. Putting $\nu = \theta(1-\theta)$,
$$\frac{dl}{d\theta} = \frac{dl}{d\nu}\,\frac{d\nu}{d\theta},$$
so it is easily seen that, since $d\nu/d\theta$ is not, in general, equal to zero,
$$\hat\nu = \nu(\hat\theta) = \bar X\left(1 - \bar X\right).$$
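As a quick numerical check of the invariance property (a sketch only; the sample size, seed and variable names below are illustrative, not from the notes), one can maximise the Bernoulli log-likelihood over a grid, confirm the maximiser is the sample mean, and then obtain the variance MLE by plugging in:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.3, size=2000)   # Bernoulli sample, true theta = 0.3
n, s = x.size, x.sum()

def loglik(theta):
    """Bernoulli log-likelihood l(theta) = s log(theta) + (n - s) log(1 - theta)."""
    return s * np.log(theta) + (n - s) * np.log(1 - theta)

grid = np.linspace(0.001, 0.999, 9999)
theta_grid = grid[np.argmax(loglik(grid))]   # numerical maximiser of l
theta_hat = x.mean()                         # closed-form MLE
nu_hat = theta_hat * (1 - theta_hat)         # MLE of the variance, by invariance
```

The grid maximiser agrees with $\bar x$ to grid resolution, and no separate maximisation is needed for $\nu$.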
6.2 Relative likelihood

If $\sup_\theta L(\theta) < \infty$, the relative likelihood is
$$RL(\theta) = \frac{L(\theta)}{\sup_\theta L(\theta)}; \qquad 0 \le RL(\theta) \le 1.$$
Relative likelihood is invariant to known 1-1 transformations of $x$, for if $y$ is a 1-1 function of $x$,
$$f_Y(y;\theta) = f_X(x(y);\theta)\left|\frac{dx}{dy}\right|$$
and $|dx/dy|$ is independent of $\theta$, so $RL_X(\theta) = RL_Y(\theta)$.

6.3 Likelihood summaries

Realistic statistical problems often have many parameters. These cause problems because it can be hard to visualise $L(\theta)$, and it becomes necessary to use summaries.

Key idea. In large samples, log-likelihoods are often approximately quadratic near the maximum.
Example. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from an exponential distribution with parameter $\lambda$, i.e. $f_X(x) = \lambda e^{-\lambda x}$, $x \ge 0$. Then
$$l(\lambda) = n\log\lambda - \lambda\sum x_i, \qquad \frac{dl(\lambda)}{d\lambda} = n/\lambda - \sum x_i, \qquad \frac{d^2 l(\lambda)}{d\lambda^2} = -n/\lambda^2, \qquad \frac{d^3 l(\lambda)}{d\lambda^3} = 2n/\lambda^3.$$
The log-likelihood has a maximum at $\hat\lambda = n\big/\sum x_i$, so
$$RL(\lambda) = (\lambda/\hat\lambda)^n\, e^{\,n - \lambda\sum x_i} = \left[(\lambda/\hat\lambda)\, e^{\,1 - \lambda/\hat\lambda}\right]^n, \quad \lambda > 0, \qquad \to 1 \text{ as } \lambda \to \hat\lambda.$$
Now, what happens as, for $\lambda$ fixed, $n \to \infty$? Using a Taylor series,
$$\log RL(\lambda) = l(\lambda) - l(\hat\lambda) = l(\hat\lambda) + l'(\hat\lambda)(\lambda - \hat\lambda) + \tfrac12(\lambda - \hat\lambda)^2\, l''(\lambda_1) - l(\hat\lambda),$$
where $\lambda_1$ lies between $\lambda$ and $\hat\lambda$. Now $l'(\hat\lambda) = 0$ and $l''(\lambda_1) = -n/\lambda_1^2$, so
$$\log RL(\lambda) = -\frac{n(\lambda - \hat\lambda)^2}{2\lambda_1^2} \to -\infty \text{ as } n \to \infty$$
unless $\lambda = \hat\lambda$. Thus, as $n \to \infty$,
$$RL(\lambda) \to \begin{cases} 1, & \lambda = \hat\lambda, \\ 0, & \text{otherwise.} \end{cases}$$

Conclusion. Likelihood becomes more concentrated about the maximum as $n \to \infty$, and values far from the maximum become less and less plausible.
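This concentration is easy to see numerically. The sketch below (the sample sizes, seed and fixed alternative $\lambda$ are arbitrary choices, not from the notes) evaluates $RL$ for the exponential model at a value away from the truth as $n$ grows:

```python
import numpy as np

def log_rel_lik(lam, x):
    """log RL(lambda) = l(lambda) - l(lambda_hat) for an exponential sample x."""
    n, s = len(x), x.sum()
    lam_hat = n / s
    return n * np.log(lam / lam_hat) - (lam - lam_hat) * s

rng = np.random.default_rng(0)
rls = {}
for n in (10, 100, 1000):
    x = rng.exponential(1.0, size=n)          # true lambda = 1
    rls[n] = np.exp(log_rel_lik(1.5, x))      # RL at a fixed lambda != truth
```

As $n$ increases, $RL(1.5)$ collapses towards zero, just as the limit argument predicts.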
In general, we call the value $\hat\theta$ which maximises $L(\theta)$ or, equivalently, $l(\theta) = \log L(\theta)$, the maximum likelihood estimate, and
$$J(\theta) = -\frac{\partial^2 l(\theta)}{\partial\theta^2}$$
is called the observed information. Usually $J(\hat\theta) > 0$, and $J(\hat\theta)$ measures the concentration of $l(\theta)$ at $\hat\theta$. Close to $\hat\theta$, we summarise
$$l(\theta) \approx l(\hat\theta) - \tfrac12(\theta - \hat\theta)^2 J(\hat\theta).$$

6.4 Information

In a model with log-likelihood $l(\theta)$, the observed information is $J(\theta) = -\partial^2 l(\theta)/\partial\theta^2$. When observations are independent, $L(\theta)$ is a product of densities, so
$$l(\theta) = \sum_i \log f(x_i;\theta) \qquad \text{and} \qquad J(\theta) = -\sum_i \frac{\partial^2}{\partial\theta^2}\log f(x_i;\theta).$$
Since $l(\theta) \approx l(\hat\theta) - \frac12(\theta-\hat\theta)^2 J(\hat\theta)$ for $\theta$ near to $\hat\theta$, we see that large $J(\hat\theta)$ implies that $l(\theta)$ is more concentrated about $\hat\theta$. This means that the data are less ambiguous about possible values of $\theta$, i.e. we have more information about $\theta$.
6.5 Expected information

6.5.1 Univariate distributions

Before an experiment is conducted, we have no data, so we cannot evaluate $J(\theta)$. But we can find its expected value
$$I(\theta) = E\left(-\frac{\partial^2 l(\theta)}{\partial\theta^2}\right).$$
This is called the expected information or Fisher's information. If the observations are a random sample, then the whole-sample expected information is $I(\theta) = n\,i(\theta)$, where
$$i(\theta) = E\left(-\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\right),$$
the single-observation Fisher information.

Example. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a Poisson distribution with parameter $\theta$. Then
$$L(\theta) = \prod_{i=1}^n \frac{\theta^{x_i} e^{-\theta}}{x_i!},$$
giving
$$l(\theta) = \log L(\theta) = \sum x_i \log\theta - n\theta - \sum \log x_i!$$
Thus
$$J(\theta) = -\frac{\partial^2 l(\theta)}{\partial\theta^2} = \sum x_i\big/\theta^2.$$
To find $I(\theta)$, we need $E(X_i) = \theta$, and then
$$I(\theta) = \frac{1}{\theta^2}\sum E(X_i) = \frac{n}{\theta}.$$
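A small simulation (the particular $\theta$, $n$ and replication count are hypothetical) illustrates the relation between the two quantities: the observed information $J(\theta) = \sum x_i/\theta^2$ varies from sample to sample, but averages to the expected information $I(\theta) = n/\theta$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 50, 4.0

# Observed information for one Poisson sample: J(theta) = sum(x) / theta^2
def J(x):
    return x.sum() / theta**2

# Averaging J over many replicate samples approximates I(theta) = n / theta
J_bar = np.mean([J(rng.poisson(theta, n)) for _ in range(20000)])
I = n / theta
```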
6.5.2 Multivariate distributions

If $\theta$ is a $(p \times 1)$ vector of parameters, then $I(\theta)$ and $J(\theta)$ are $(p \times p)$ matrices:
$$\{J(\theta)\}_{rs} = -\frac{\partial^2 l(\theta)}{\partial\theta_r\,\partial\theta_s} \qquad \text{and} \qquad \{I(\theta)\}_{rs} = E\left(-\frac{\partial^2 l(\theta)}{\partial\theta_r\,\partial\theta_s}\right).$$
These matrices are obviously symmetric. We can also write the above as
$$J(\theta) = -\frac{\partial^2 l(\theta)}{\partial\theta\,\partial\theta^T} \qquad \text{and} \qquad I(\theta) = E\left(-\frac{\partial^2 l(\theta)}{\partial\theta\,\partial\theta^T}\right).$$

Example. $X_1, X_2, \ldots, X_n$ is a random sample from a normal distribution with parameters $\mu$ and $\sigma^2$. We have already seen that
$$L(\mu,\sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\left[-\frac{1}{2\sigma^2}\sum(x_i-\mu)^2\right],$$
so
$$l(\mu,\sigma^2) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum(x_i-\mu)^2$$
and
$$\frac{\partial l}{\partial\mu} = \frac{1}{\sigma^2}\sum(x_i-\mu), \qquad \frac{\partial l}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum(x_i-\mu)^2,$$
$$-\frac{\partial^2 l}{\partial\mu^2} = \frac{n}{\sigma^2}, \qquad -\frac{\partial^2 l}{\partial\mu\,\partial\sigma^2} = \frac{1}{\sigma^4}\sum(x_i-\mu), \qquad -\frac{\partial^2 l}{\partial(\sigma^2)^2} = -\frac{n}{2\sigma^4} + \frac{1}{\sigma^6}\sum(x_i-\mu)^2.$$
Hence
$$J(\mu,\sigma^2) = \begin{pmatrix} n/\sigma^2 & \sigma^{-4}\sum(x_i-\mu) \\ \sigma^{-4}\sum(x_i-\mu) & -n/(2\sigma^4) + \sigma^{-6}\sum(x_i-\mu)^2 \end{pmatrix}.$$
To find $I(\mu,\sigma^2)$, use $E(X_i) = \mu$ and $V(X_i) = E\left[(X_i-\mu)^2\right] = \sigma^2$, so that
$$I(\mu,\sigma^2) = E\left(J(\mu,\sigma^2)\right) = \begin{pmatrix} n/\sigma^2 & 0 \\ 0 & n/(2\sigma^4) \end{pmatrix}.$$
Example. Censored exponential data. Lifetimes of $n$ components, safety devices, etc. are observed for a time $c$, when $r$ have failed and $(n-r)$ are still OK. We have two kinds of observation:

1. exact failure times $x_i$ observed if $x_i \le c$, so that $f(x;\lambda) = \lambda e^{-\lambda x}$, $x \ge 0$;
2. $x_i$ unobserved if $x_i > c$, with $P(X > c) = e^{-\lambda c}$.

Data are therefore $x_1, \ldots, x_r, c, \ldots, c$ (with $c$ appearing $n-r$ times). The $(n-r)$ components, safety devices, etc. which have not failed are said to be censored. The likelihood is
$$L(\lambda) = \prod_{i=1}^r \lambda e^{-\lambda x_i} \prod_{i=r+1}^n e^{-\lambda c} = \lambda^r \exp\left[-\lambda\left(\sum_{i=1}^r x_i + (n-r)c\right)\right],$$
so
$$l(\lambda) = r\log\lambda - \lambda\left(\sum_{i=1}^r x_i + (n-r)c\right), \qquad l'(\lambda) = r/\lambda - \left(\sum_{i=1}^r x_i + (n-r)c\right), \qquad l''(\lambda) = -r/\lambda^2.$$
Thus $J(\lambda) = r/\lambda^2 > 0$ if $r > 0$, so we must observe at least one exact failure time. Also
$$I(\lambda) = E\left(r/\lambda^2\right) = \frac{1}{\lambda^2}\,E(\#X_i \text{ observed exactly}).$$
Now $P(X_i \text{ observed exactly}) = P(X_i \le c) = 1 - e^{-\lambda c}$, so
$$I_c(\lambda) = \frac{n\left(1 - e^{-\lambda c}\right)}{\lambda^2}.$$
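A numerical sketch of this MLE (the rate, horizon and sample size below are invented for illustration): censor simulated lifetimes at $c$, maximise $l(\lambda)$ in closed form, and evaluate the censored information.

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true, n, c = 0.5, 200, 3.0

t = rng.exponential(1 / lam_true, size=n)   # latent lifetimes
exact = t <= c                              # failures seen before the horizon c
r = int(exact.sum())

# l(lambda) = r log(lambda) - lambda * (sum of exact times + (n - r) c),
# so the MLE (valid when r > 0) is:
total_time = t[exact].sum() + (n - r) * c
lam_hat = r / total_time

# Expected information under censoring: I_c = n (1 - exp(-lambda c)) / lambda^2
I_c = n * (1 - np.exp(-lam_hat * c)) / lam_hat**2
```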
No censoring: $c \to \infty$, giving
$$I_\infty(\lambda) = \frac{n}{\lambda^2} > I_c(\lambda),$$
as one might expect. The asymptotic efficiency when there is censoring at $c$, relative to no censoring, is $I_c(\lambda)/I_\infty(\lambda) = 1 - e^{-\lambda c}$.

Example. Events in a Poisson process. Events are observed for the period $(0,T)$; $n$ events occur at times $0 < t_1 < t_2 < \cdots < t_n < T$. There are two observers, A and B. A records exact times; B uses an automatic counter and goes to the pub (i.e. B merely records how many events there are).

A knows the exact times, and times between events are independent and exponentially distributed, so
$$L_A(\lambda) = \lambda e^{-\lambda t_1}\cdot\lambda e^{-\lambda(t_2-t_1)}\cdots\lambda e^{-\lambda(t_n-t_{n-1})}\cdot e^{-\lambda(T-t_n)} = \lambda^n e^{-\lambda T}.$$
B merely observes the event $[N = n]$, where $N \sim \mathrm{Po}(\lambda T)$, so
$$L_B(\lambda) = \frac{(\lambda T)^n e^{-\lambda T}}{n!}.$$
Log-likelihoods are
$$l_A(\lambda) = n\log\lambda - \lambda T, \qquad l_B(\lambda) = n\log\lambda + n\log T - \lambda T - \log n!$$
and $J_A(\lambda) = J_B(\lambda) = n/\lambda^2$. Since $E(N) = \lambda T$, we get $I_A(\lambda) = I_B(\lambda) = T/\lambda$, and both observers get the same information. As usual, the one who went to the pub did the right thing.
6.6 Maximum likelihood estimates

The maximum likelihood estimate $\hat\theta$ of $\theta$ maximises $L(\theta)$ and often (but not always) satisfies the likelihood equation
$$\frac{\partial l}{\partial\theta}(\hat\theta) = 0, \qquad \text{with} \qquad J(\hat\theta) = -\frac{\partial^2 l}{\partial\theta^2}(\hat\theta) > 0$$
for a maximum. In the vector case, $\hat\theta$ solves simultaneously
$$\frac{\partial l}{\partial\theta_r}(\hat\theta) = 0, \quad r = 1, \ldots, p, \qquad \text{with} \qquad \det J(\hat\theta) > 0$$
(i.e. $J(\hat\theta)$ positive definite). If the likelihood equation has many solutions, we find them all and check $L(\theta)$ for each.

Usually, the equation has to be solved numerically. One way is by Newton-Raphson. Suppose we have a starting value $\theta_0$. Then
$$0 = \frac{\partial l}{\partial\theta}(\hat\theta) \approx \frac{\partial l}{\partial\theta}(\theta_0) + \frac{\partial^2 l}{\partial\theta^2}(\theta_0)\left(\hat\theta - \theta_0\right),$$
which may be re-arranged to
$$\hat\theta \approx \theta_0 + \frac{U(\theta_0)}{J(\theta_0)},$$
where
$$U(\theta) = \frac{\partial l}{\partial\theta} \quad \text{is the score function,} \qquad J(\theta) = -\frac{\partial^2 l}{\partial\theta^2} \quad \text{is the observed information.}$$
Now we iterate, using $\theta_0$ as a starting value and
$$\theta_{n+1} = \theta_n + \frac{U(\theta_n)}{J(\theta_n)}.$$

Example. Extreme value (Gumbel) distribution. This distribution is used to model such things as annual maximum temperature. Data due to Bliss on numbers of beetles killed by exposure to carbon disulphide are fitted by this model. The cdf is
$$F(x) = \exp\left(-e^{-(x-\eta)}\right), \quad x \in \mathbb{R}, \; \eta \in \mathbb{R},$$
and the density is
$$f(x) = \exp\left[-(x-\eta) - e^{-(x-\eta)}\right], \quad x \in \mathbb{R}, \; \eta \in \mathbb{R}.$$
The sample log-likelihood is
$$l(\eta) = -\sum(x_i - \eta) - \sum e^{-(x_i-\eta)},$$
so that
$$U(\eta) = n - \sum e^{-(x_i-\eta)}, \qquad J(\eta) = \sum e^{-(x_i-\eta)}.$$
Starting at $\eta_0 = \bar x$, iterate using
$$\eta_{n+1} = \eta_n + \frac{n - \sum e^{-(x_i-\eta_n)}}{\sum e^{-(x_i-\eta_n)}}.$$

6.6.1 Fisher scoring

This simply involves replacing $J(\theta)$ with $I(\theta)$.

Example. Extreme value distribution. We need
$$I(\eta) = E\left[J(\eta)\right] = E\left[\sum e^{-(X_i-\eta)}\right] = n\int_{-\infty}^{\infty} e^{-(x-\eta)}\exp\left[-(x-\eta) - e^{-(x-\eta)}\right]dx.$$
Put $u = e^{-(x-\eta)}$ and the integral becomes
$$I(\eta) = n\int_0^\infty u\,e^{-u}\,du = n,$$
so Fisher scoring gives the iteration
$$\eta_{n+1} = \eta_n + 1 - \frac{1}{n}\sum e^{-(x_i-\eta_n)}.$$
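Both iterations can be sketched in a few lines; the following assumes a simulated Gumbel sample (the true $\eta$, sample size and seed are illustrative, not Bliss's beetle data):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.gumbel(loc=2.0, scale=1.0, size=500)   # simulated sample, true eta = 2
n = x.size

def newton_step(eta):
    e = np.exp(-(x - eta))
    return eta + (n - e.sum()) / e.sum()       # eta + U(eta) / J(eta)

def fisher_step(eta):
    e = np.exp(-(x - eta))
    return eta + 1 - e.sum() / n               # eta + U(eta) / I(eta), with I = n

eta_nr = eta_fs = x.mean()                     # start both at the sample mean
for _ in range(50):
    eta_nr, eta_fs = newton_step(eta_nr), fisher_step(eta_fs)
```

Both iterations converge to the same $\hat\eta$, the root of $n = \sum e^{-(x_i-\hat\eta)}$; Fisher scoring is especially simple here because $I(\eta) = n$ does not depend on $\eta$.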
6.7 Sufficient statistics

You have already seen a likelihood which cannot be summarised by a quadratic.

Example. $f(x_i;\theta) = \theta^{-1}$, $0 < x_i < \theta$, so $L(\theta) = \theta^{-n}$, $0 < \max_i\{x_i\} < \theta$. Clearly a quadratic approximation is useless here.

Suppose there exists a statistic $s(x)$ such that $L(\theta)$ depends upon the data $x$ only through $s(x)$. Then $s(x)$ is a sufficient statistic for $\theta$, and one obviously always exists. The important question is: does $s(x)$ reduce the dimensionality of the problem?

Definition. If $S = s(X)$ is such that the conditional density $f_{X\mid S}(x \mid s;\theta)$ is independent of $\theta$, then $S$ is a sufficient statistic.
Example. Suppose $X_1, X_2 \sim B(n,\theta)$ independently, and consider
$$P(X_1 = x \mid X_1 + X_2 = r) = \frac{P(X_1 = x,\; X_1 + X_2 = r)}{P(X_1 + X_2 = r)} = \frac{P(X_1 = x,\; X_2 = r - x)}{P(X_1 + X_2 = r)}$$
$$= \frac{\binom{n}{x}\theta^x(1-\theta)^{n-x}\,\binom{n}{r-x}\theta^{r-x}(1-\theta)^{n-r+x}}{\binom{2n}{r}\theta^r(1-\theta)^{2n-r}} = \frac{\binom{n}{x}\binom{n}{r-x}}{\binom{2n}{r}}.$$
This does not contain $\theta$, so that $X_1 + X_2$ is a sufficient statistic for $\theta$.
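That the conditional distribution is free of $\theta$ is easy to verify numerically; in the sketch below the particular $n$, $r$, $x$ and $\theta$ values are arbitrary choices:

```python
from math import comb

def cond_pmf(x, r, n, theta):
    """P(X1 = x | X1 + X2 = r) for independent X1, X2 ~ B(n, theta)."""
    def joint(k):
        # P(X1 = k, X2 = r - k)
        return (comb(n, k) * theta**k * (1 - theta)**(n - k)
                * comb(n, r - k) * theta**(r - k) * (1 - theta)**(n - r + k))
    return joint(x) / sum(joint(k) for k in range(max(0, r - n), min(n, r) + 1))

# The value matches C(n,x) C(n,r-x) / C(2n,r) and is the same for every theta
vals = [cond_pmf(3, 7, 10, th) for th in (0.2, 0.5, 0.9)]
```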
Example. $X_1, X_2, \ldots, X_n \sim U(0,\theta)$, so that $L(\theta) = \theta^{-n}$, $0 < x_1, \ldots, x_n < \theta$. Suppose we find the conditional density of $X_1, X_2, \ldots, X_n$ given $X_{(n)}$. The joint density of the order statistics $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ is
$$f\left(x_{(1)}, x_{(2)}, \ldots, x_{(n)}\right) = \frac{n!}{\theta^n}, \quad 0 < x_{(1)} < \cdots < x_{(n)} < \theta,$$
and the density of $X_{(n)}$ is $n y^{n-1}/\theta^n$, so that the conditional density of $X_{(1)}, \ldots, X_{(n-1)}$ given $X_{(n)} = y$ is
$$\frac{n!/\theta^n}{n y^{n-1}/\theta^n} = \frac{(n-1)!}{y^{n-1}}, \quad 0 < x_{(1)} < \cdots < x_{(n-1)} < y.$$
Thus the density of $X_1, X_2, \ldots, X_n$ given $X_{(n)} = y$ is
$$f\left(x_1, \ldots, x_n \mid x_{(n)} = y\right) = \frac{1}{n\,y^{n-1}}, \quad 0 < x_1, \ldots, x_n < y,$$
which is free of $\theta$, so that $X_{(n)}$ is a sufficient statistic for $\theta$.

Factorization Theorem. $s(x)$ is a sufficient statistic for $\theta$ if and only if there exist functions $g$ and $h$ such that
$$f(x;\theta) = g\left(s(x);\theta\right)h(x) \quad \text{for all } x \in \mathbb{R}^n,\; \theta \in \Theta.$$

Proof for discrete random variables. (i) Let $s(x) = a$ and suppose the factorization condition to be satisfied, so that $f(x;\theta) = g(s(x);\theta)h(x)$. Then
$$P(s(X) = a) = \sum_{y \in s^{-1}(a)} p(y) = g(a;\theta)\sum_{y \in s^{-1}(a)} h(y).$$
Hence
$$P(X = x \mid s(X) = a) = \frac{h(x)}{\sum_{y \in s^{-1}(a)} h(y)},$$
and this does not depend upon $\theta$.
(ii) Let $s(X)$ be a sufficient statistic for $\theta$. Then
$$P(X = x) = P(X = x \mid s(X) = a)\,P(s(X) = a).$$
But sufficiency means that $P(X = x \mid s(X) = a)$ does not depend upon $\theta$, so writing $P(s(X) = a) = g(a;\theta)$ and $P(X = x \mid s(X) = a) = h(x)$ gives the result.

The proof in the continuous case requires measure theory and is beyond the scope of this course.

Example. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a Bernoulli distribution. Then
$$p(x;\theta) = \theta^{\sum x_i}(1-\theta)^{n-\sum x_i}.$$
Trivially this factorizes with $s(x) = \sum x_i$ and $h(x) = 1$.

Example. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a $N(\mu,\sigma^2)$ distribution, where $(\mu,\sigma^2)^T$ is a vector of unknown parameters. Then
$$f(x;\mu,\sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\left[-\frac{1}{2\sigma^2}\sum(x_i-\mu)^2\right] = (2\pi\sigma^2)^{-n/2}\exp\left[-\frac{1}{2\sigma^2}\left\{\sum(x_i-\bar x)^2 + n(\bar x - \mu)^2\right\}\right].$$
Again this factorizes, where $s(x) = \left(\bar x,\; \sum(x_i-\bar x)^2\right)^T$, a vector-valued function.
6.8 The exponential family

The density function/probability mass function has the form
$$f(x;\phi) = \exp\left[a(x)\,b(\phi) + c(\phi) + d(x)\right],$$
where $x$ may be continuous or discrete and $\phi$ lies in a suitable space (usually an open subset of the reals). For a random sample $X_1, X_2, \ldots, X_n$, we obtain
$$L(\phi) = \exp\left[b(\phi)\sum a(x_i) + n\,c(\phi) + \sum d(x_i)\right],$$
and, therefore, by the factorization theorem, $\sum a(x_i)$ is sufficient for $\phi$.

Example. Let $X \sim B(n,\theta)$. Then
$$p_X(x) = \binom{n}{x}\theta^x(1-\theta)^{n-x} = \exp\left[x\log\theta + (n-x)\log(1-\theta) + \log\binom{n}{x}\right]$$
$$= \exp\left[x\log\left(\theta/(1-\theta)\right) + n\log(1-\theta) + \log\binom{n}{x}\right].$$

Calling $Y = a(X)$, $\theta = b(\phi)$ the natural parametrisation, we can write the density function/probability mass function in the form
$$f(y;\theta) = \exp\left[y\theta - k(\theta)\right]m(y).$$
Clearly $Y$ is a sufficient statistic. Note that, in the continuous case, the moment generating function is
$$E\left(e^{tY}\right) = \int e^{ty + \theta y - k(\theta)}\,m(y)\,dy = e^{k(\theta+t) - k(\theta)}\int e^{(\theta+t)y - k(\theta+t)}\,m(y)\,dy = e^{k(\theta+t) - k(\theta)}.$$
The function $k(\theta)$ is called the cumulant generator. Let us see why.
The cumulant generating function. If $X$ is a random variable with moment generating function $M(t)$, then $K(t) = \log M(t)$ is said to be the cumulant generating function. Differentiating,
$$K'(t) = \frac{M'(t)}{M(t)}, \qquad K'(0) = \frac{M'(0)}{M(0)} = E(X),$$
$$K''(t) = \frac{M''(t)}{M(t)} - \frac{M'(t)^2}{M(t)^2}, \qquad K''(0) = \frac{M''(0)}{M(0)} - \frac{M'(0)^2}{M(0)^2} = V(X),$$
and so on. The cumulants are generated directly. For the exponential family,
$$K(t) = \log M(t) = k(\theta + t) - k(\theta),$$
so that $K'(t) = k'(\theta + t)$, $K'(0) = k'(\theta)$, and so on. The cumulants are generated by repeated differentiation of $k(\theta)$.

Example. Poisson distribution. The pmf is
$$p(x;\mu) = \frac{\mu^x e^{-\mu}}{x!} = \exp\left[x\log\mu - \mu - \log x!\right], \quad x = 0, 1, \ldots,$$
so that
$$a(x) = x, \qquad b(\mu) = \log\mu, \qquad c(\mu) = -\mu, \qquad d(x) = -\log x!$$
Under natural parametrisation,
$$y = x, \qquad \theta = \log\mu, \qquad k(\theta) = e^\theta, \qquad m(y) = \frac{1}{y!}.$$
Cumulants are given by derivatives of $k(\theta)$, all of which are $e^\theta = \mu$.
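A quick sanity check of the cumulant generator for the Poisson (the $\mu$ and sample size below are arbitrary): the first two derivatives of $k(\theta) = e^\theta$ are both $e^\theta = \mu$, and they should match the sample mean and sample variance of a large simulated sample.

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 3.0
theta = np.log(mu)                  # natural parameter theta = log(mu)
x = rng.poisson(mu, size=200_000)

k_prime = np.exp(theta)             # first cumulant (mean) = mu
k_double_prime = np.exp(theta)      # second cumulant (variance) = mu
```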
Example. Binomial distribution. The pmf is
$$p(x;p) = \binom{n}{x}p^x(1-p)^{n-x} = \exp\left[x\log\left(\frac{p}{1-p}\right) + n\log(1-p) + \log\binom{n}{x}\right], \quad x = 0, 1, \ldots, n.$$
Natural parametrisation is
$$y = x, \qquad \theta = \log\left(\frac{p}{1-p}\right), \qquad k(\theta) = n\log\left(1 + e^\theta\right).$$
For the cumulants,
$$k'(\theta) = \frac{n e^\theta}{1 + e^\theta} = np, \qquad k''(\theta) = \frac{n e^\theta}{(1 + e^\theta)^2} = np(1-p),$$
and so on.

6.9 Large sample distribution of $\hat\theta$

From the data summary point of view, the MLE $\hat\theta$ and $J(\hat\theta)$ have been thought of in terms of a particular set of data. We now wish to think of $\hat\theta$ in terms of repeated sampling (i.e. as a random variable).

Main results. In many situations, and subject to regularity conditions,
$$\hat\theta \xrightarrow{D} N\left(\theta,\, I(\theta)^{-1}\right),$$
and an approximate 95% confidence interval for $\theta$ is given by $\hat\theta \pm 1.96\,I(\hat\theta)^{-1/2}$ [or $\hat\theta \pm 1.96\,J(\hat\theta)^{-1/2}$, regarded by many as better, but not in the books]. In the multivariate case, $\hat\theta \xrightarrow{D} N\left(\theta,\, I(\theta)^{-1}\right)$.
Example. Exponential distribution. For an exponential distribution with mean $\theta$,
$$L(\theta) = \theta^{-n}e^{-\sum x_i/\theta}, \quad \theta > 0, \qquad l(\theta) = -n\log\theta - \sum x_i/\theta,$$
so that
$$U(\theta) = -\frac{n}{\theta} + \frac{\sum x_i}{\theta^2}, \qquad J(\theta) = -\frac{n}{\theta^2} + \frac{2\sum x_i}{\theta^3}.$$
Thus
$$\hat\theta = \bar x, \qquad J(\hat\theta) = \frac{n}{\bar x^2},$$
and an approximate 95% confidence interval is $\bar x \pm 1.96\,\bar x/\sqrt{n}$.

Example. Normal distribution. For a normal random sample,
$$\hat\mu = \bar x, \qquad \hat\sigma^2 = n^{-1}\sum(x_i - \bar x)^2,$$
$$J(\hat\mu,\hat\sigma^2) = \begin{pmatrix} n/\hat\sigma^2 & \hat\sigma^{-4}\sum(x_i-\hat\mu) \\ \hat\sigma^{-4}\sum(x_i-\hat\mu) & n/(2\hat\sigma^4) \end{pmatrix}, \qquad I(\hat\mu,\hat\sigma^2) = \begin{pmatrix} n/\hat\sigma^2 & 0 \\ 0 & n/(2\hat\sigma^4) \end{pmatrix}.$$
An approximate 95% confidence interval for $\mu$ is
$$\bar x \pm 1.96\,\hat\sigma/\sqrt{n},$$
and for $\sigma^2$ is
$$\hat\sigma^2 \pm 1.96\,\hat\sigma^2\sqrt{2/n}.$$
Note that the estimators $\hat\mu$ and $\hat\sigma^2$ are asymptotically uncorrelated. The exact interval for $\mu$ is
$$\bar x \pm \frac{S}{\sqrt{n}}\,t_{0.975}(n-1),$$
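The exponential interval is a one-liner to compute; the sketch below uses an invented $\theta$ and $n$:

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true, n = 10.0, 400
x = rng.exponential(theta_true, size=n)

theta_hat = x.mean()                      # MLE of the mean theta
se = theta_hat / np.sqrt(n)               # J(theta_hat)^(-1/2) = xbar / sqrt(n)
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
```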
which is not quite the same.

Proof of asymptotic normality. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a distribution with pdf $f(x;\theta)$. Then the log-likelihood, score and observed information are
$$l(\theta) = \sum\log f(x_i;\theta), \qquad U(\theta) = \sum\frac{\partial}{\partial\theta}\log f(x_i;\theta), \qquad J(\theta) = -\sum\frac{\partial^2}{\partial\theta^2}\log f(x_i;\theta).$$
Let $U_i(\theta)$ be the random variable
$$U_i(\theta) = \frac{\partial}{\partial\theta}\log f(X_i;\theta),$$
and, provided that conditions are such that integration and differentiation are interchangeable,
$$E\left[U_i(\theta)\right] = \int\left(\frac{\partial}{\partial\theta}\log f(x;\theta)\right)f(x;\theta)\,dx = \int\frac{\partial f(x;\theta)}{\partial\theta}\,dx = \frac{\partial}{\partial\theta}\int f(x;\theta)\,dx = \frac{\partial}{\partial\theta}\,1 = 0.$$
Differentiating again,
$$0 = \frac{\partial}{\partial\theta}\int\left(\frac{\partial}{\partial\theta}\log f(x;\theta)\right)f(x;\theta)\,dx = \int\frac{\partial^2}{\partial\theta^2}\log f(x;\theta)\,f(x;\theta)\,dx + \int\left(\frac{\partial}{\partial\theta}\log f(x;\theta)\right)^2 f(x;\theta)\,dx,$$
so
$$0 = -i(\theta) + E\left[U_i(\theta)^2\right],$$
and, therefore, $V\left[U_i(\theta)\right] = i(\theta)$. It follows that
$$E\left[U(\theta)\right] = 0, \qquad V\left[U(\theta)\right] = n\,i(\theta) = I(\theta),$$
and the CLT shows that
$$U(\theta) \xrightarrow{D} N\left(0, I(\theta)\right).$$
Now the MLE is a solution of $U(\hat\theta) = 0$, so that, Taylor expanding,
$$U(\theta) + U'(\theta)\left(\hat\theta - \theta\right) \approx 0$$
or
$$U(\theta) - J(\theta)\left(\hat\theta - \theta\right) \approx 0.$$
Re-arranging,
$$\sqrt{I(\theta)}\left(\hat\theta - \theta\right) \approx \frac{U(\theta)}{\sqrt{I(\theta)}}\cdot\frac{I(\theta)}{J(\theta)}.$$
From the CLT,
$$\frac{U(\theta)}{\sqrt{I(\theta)}} \xrightarrow{D} N(0,1),$$
and from the WLLN,
$$\frac{J(\theta)}{I(\theta)} \xrightarrow{P} 1.$$
Slutsky's Theorem therefore results in
$$\sqrt{I(\theta)}\left(\hat\theta - \theta\right) \xrightarrow{D} N(0,1), \qquad \text{or} \qquad \hat\theta \xrightarrow{D} N\left(\theta,\, I(\theta)^{-1}\right).$$
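The conclusion can be checked by simulation; the sketch below uses the exponential model with mean $\theta$ (all numbers are illustrative), where $\sqrt{I(\theta)}(\hat\theta - \theta) = (\sqrt{n}/\theta)(\bar x - \theta)$ should look approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 2.0, 100, 5000

# sqrt(I(theta)) = sqrt(n)/theta for the exponential-mean parametrisation
z = np.array([np.sqrt(n) / theta * (rng.exponential(theta, n).mean() - theta)
              for _ in range(reps)])
print(z.mean(), z.std())   # expect values near 0 and 1 respectively
```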
Requirements of this proof:

1. The true value of $\theta$ is interior to the parameter space.
2. Differentiation under the integral is valid, so that $E[U(\theta)] = 0$ and $V[U(\theta)] = n\,i(\theta)$. This allows a central limit theorem to apply to $U(\theta)$.
3. Taylor expansions are valid for the derivatives of the log-likelihood, so that higher order terms may be neglected.
4. A weak law of large numbers applies to $J(\theta)$.

Example: Exponential family. In the natural parametrisation, the likelihood has the form
$$L(\theta) = \left[\prod m(y_i)\right]e^{\theta\sum y_i - n k(\theta)},$$
so that
$$U(\theta) = \sum y_i - n\,k'(\theta), \qquad J(\theta) = n\,k''(\theta).$$
$\hat\theta$ solves $U(\hat\theta) = 0$, i.e. $k'(\hat\theta) = \bar y$. Expanding,
$$k'(\theta) + (\hat\theta - \theta)\,k''(\theta) \approx \bar y,$$
so that
$$\hat\theta \approx \theta + \frac{\bar y - k'(\theta)}{k''(\theta)}.$$
Since $E(\bar Y) = k'(\theta)$ and $V(\bar Y) = n^{-1}k''(\theta)$, we have
$$E(\hat\theta) \approx \theta, \qquad V(\hat\theta) \approx \frac{1}{k''(\theta)^2}\,V(\bar Y) = \frac{n^{-1}k''(\theta)}{k''(\theta)^2} = \frac{1}{n\,k''(\theta)},$$
which, of course, we could have obtained directly from $\hat\theta \xrightarrow{D} N\left(\theta, I(\theta)^{-1}\right)$.
Example: Exponential distribution. Here
$$f(x;\lambda) = \lambda e^{-\lambda x}, \quad x > 0, \; \lambda > 0,$$
so
$$\theta = -\lambda, \qquad k(\theta) = -\log\lambda = -\log(-\theta),$$
and hence
$$k'(\theta) = -\frac{1}{\theta}, \qquad k''(\theta) = \frac{1}{\theta^2}.$$
Then $k'(\hat\theta) = \bar y$ gives $-1/\hat\theta = \bar y$, and
$$I(\hat\theta) = n\,k''(\hat\theta) = n\bar y^2.$$
Thus, approximately,
$$-\frac{1}{\bar y} \pm z_\alpha\,\frac{1}{\bar y\sqrt{n}}$$
gives a confidence interval for $\theta$, and
$$\frac{1}{\bar y} \pm z_\alpha\,\frac{1}{\bar y\sqrt{n}}$$
gives a confidence interval for $\lambda$.
More information3 Basic boundary value problems for analytic function in the upper half plane
3 Basc boundary value problems for analytc functon n the upper half plane 3. Posson representaton formulas for the half plane Let f be an analytc functon of z throughout the half plane Imz > 0, contnuous
More informationConvergence of random processes
DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large
More informationEconomics 130. Lecture 4 Simple Linear Regression Continued
Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do
More informationANSWERS CHAPTER 9. TIO 9.2: If the values are the same, the difference is 0, therefore the null hypothesis cannot be rejected.
ANSWERS CHAPTER 9 THINK IT OVER thnk t over TIO 9.: χ 2 k = ( f e ) = 0 e Breakng the equaton down: the test statstc for the ch-squared dstrbuton s equal to the sum over all categores of the expected frequency
More informationx i1 =1 for all i (the constant ).
Chapter 5 The Multple Regresson Model Consder an economc model where the dependent varable s a functon of K explanatory varables. The economc model has the form: y = f ( x,x,..., ) xk Approxmate ths by
More informationLecture 20: Hypothesis testing
Lecture : Hpothess testng Much of statstcs nvolves hpothess testng compare a new nterestng hpothess, H (the Alternatve hpothess to the borng, old, well-known case, H (the Null Hpothess or, decde whether
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.
More informationHidden Markov Models & The Multivariate Gaussian (10/26/04)
CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models
More informationAppendix B. Criterion of Riemann-Stieltjes Integrability
Appendx B. Crteron of Remann-Steltes Integrablty Ths note s complementary to [R, Ch. 6] and [T, Sec. 3.5]. The man result of ths note s Theorem B.3, whch provdes the necessary and suffcent condtons for
More informationProbability Theory (revisited)
Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted
More informationSalmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2
Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to
More informationA be a probability space. A random vector
Statstcs 1: Probablty Theory II 8 1 JOINT AND MARGINAL DISTRIBUTIONS In Probablty Theory I we formulate the concept of a (real) random varable and descrbe the probablstc behavor of ths random varable by
More informationRockefeller College University at Albany
Rockefeller College Unverst at Alban PAD 705 Handout: Maxmum Lkelhood Estmaton Orgnal b Davd A. Wse John F. Kenned School of Government, Harvard Unverst Modfcatons b R. Karl Rethemeer Up to ths pont n
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationLecture 3. Ax x i a i. i i
18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest
More informationCSCE 790S Background Results
CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationTHE WEIGHTED WEAK TYPE INEQUALITY FOR THE STRONG MAXIMAL FUNCTION
THE WEIGHTED WEAK TYPE INEQUALITY FO THE STONG MAXIMAL FUNCTION THEMIS MITSIS Abstract. We prove the natural Fefferman-Sten weak type nequalty for the strong maxmal functon n the plane, under the assumpton
More informationModelli Clamfim Equazione del Calore Lezione ottobre 2014
CLAMFIM Bologna Modell 1 @ Clamfm Equazone del Calore Lezone 17 15 ottobre 2014 professor Danele Rtell danele.rtell@unbo.t 1/24? Convoluton The convoluton of two functons g(t) and f(t) s the functon (g
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationb ), which stands for uniform distribution on the interval a x< b. = 0 elsewhere
Fall Analyss of Epermental Measurements B. Esensten/rev. S. Errede Some mportant probablty dstrbutons: Unform Bnomal Posson Gaussan/ormal The Unform dstrbuton s often called U( a, b ), hch stands for unform
More informationMath 426: Probability MWF 1pm, Gasson 310 Homework 4 Selected Solutions
Exercses from Ross, 3, : Math 26: Probablty MWF pm, Gasson 30 Homework Selected Solutons 3, p. 05 Problems 76, 86 3, p. 06 Theoretcal exercses 3, 6, p. 63 Problems 5, 0, 20, p. 69 Theoretcal exercses 2,
More informationAs is less than , there is insufficient evidence to reject H 0 at the 5% level. The data may be modelled by Po(2).
Ch-squared tests 6D 1 a H 0 : The data can be modelled by a Po() dstrbuton. H 1 : The data cannot be modelled by Po() dstrbuton. The observed and expected results are shown n the table. The last two columns
More informationNotes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology
Inverse transformatons Generaton of random observatons from gven dstrbutons Assume that random numbers,,, are readly avalable, where each tself s a random varable whch s unformly dstrbuted over the range(,).
More informationMaximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models
ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Mamum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models for
More informationMath 702 Midterm Exam Solutions
Math 702 Mdterm xam Solutons The terms measurable, measure, ntegrable, and almost everywhere (a.e.) n a ucldean space always refer to Lebesgue measure m. Problem. [6 pts] In each case, prove the statement
More informationCS 798: Homework Assignment 2 (Probability)
0 Sample space Assgned: September 30, 2009 In the IEEE 802 protocol, the congeston wndow (CW) parameter s used as follows: ntally, a termnal wats for a random tme perod (called backoff) chosen n the range
More informationLecture 10: Euler s Equations for Multivariable
Lecture 0: Euler s Equatons for Multvarable Problems Let s say we re tryng to mnmze an ntegral of the form: {,,,,,, ; } J f y y y y y y d We can start by wrtng each of the y s as we dd before: y (, ) (
More informationModeling and Simulation NETW 707
Modelng and Smulaton NETW 707 Lecture 5 Tests for Random Numbers Course Instructor: Dr.-Ing. Magge Mashaly magge.ezzat@guc.edu.eg C3.220 1 Propertes of Random Numbers Random Number Generators (RNGs) must
More informationChapter 3 Describing Data Using Numerical Measures
Chapter 3 Student Lecture Notes 3-1 Chapter 3 Descrbng Data Usng Numercal Measures Fall 2006 Fundamentals of Busness Statstcs 1 Chapter Goals To establsh the usefulness of summary measures of data. The
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationQuantifying Uncertainty
Partcle Flters Quantfyng Uncertanty Sa Ravela M. I. T Last Updated: Sprng 2013 1 Quantfyng Uncertanty Partcle Flters Partcle Flters Appled to Sequental flterng problems Can also be appled to smoothng problems
More informationk t+1 + c t A t k t, t=0
Macro II (UC3M, MA/PhD Econ) Professor: Matthas Kredler Fnal Exam 6 May 208 You have 50 mnutes to complete the exam There are 80 ponts n total The exam has 4 pages If somethng n the queston s unclear,
More information