A Nonparametric Prior for Simultaneous Covariance Estimation
WEB APPENDIX FOR "A Nonparametric Prior for Simultaneous Covariance Estimation"

J. T. Gaskins and M. J. Daniels

Appendix 1: Derivation of Theoretical Properties

This appendix contains proofs for the properties presented in Section 5.

A1.1 Sparsity Grouping Prior

The proofs for properties 1.-3. can be found in the Appendix of Dunson et al. (2008).

4. P(φ_mj = φ_m'j) ≥ P(φ_mj = φ_m'j ≠ 0) + P(φ_mj = φ_m'j = 0)
= E{ Σ_h π_mjh π_m'jh δ_ξjh(R\{0}) } + E{ Σ_h π_mjh δ_ξjh({0}) Σ_i π_m'ji δ_ξji({0}) }
= E{ Σ_h π_mjh π_m'jh δ_ξjh(R\{0}) } + E{ Σ_h π_mjh π_m'jh δ_ξjh({0}) } + E{ Σ_h Σ_{i≠h} π_mjh π_m'ji δ_ξjh({0}) δ_ξji({0}) }
= E{ Σ_h π_mjh π_m'jh δ_ξjh(R) } + E{ Σ_h Σ_{i≠h} π_mjh π_m'ji δ_ξjh({0}) δ_ξji({0}) }
= E{ Σ_h π_mjh π_m'jh } + ε_q(j)² E{ Σ_h Σ_{i≠h} π_mjh π_m'ji }
≡ I + ε_q(j)² II,
where expressions I and II are calculated below. Expanding the stick-breaking weights,

I = E{ Σ_h U_mh U_m'h X_jh² ∏_{l<h} (1 − U_ml X_jl − U_m'l X_jl + U_ml U_m'l X_jl²) },

and the expectation factors term-by-term through the independence of the U_mh ~ Beta(1, α) and X_jh ~ Beta(1, β) variables, using the moments E(U_mh) = 1/(1+α), E(U_mh²) = 2/{(1+α)(2+α)}, E(X_jh) = 1/(1+β), and E(X_jh²) = 2/{(1+β)(2+β)}; the resulting geometric series over h sums to a closed-form expression in α and β. The cross terms are handled by

II = E{ Σ_h Σ_{i≠h} π_mjh π_m'ji },

which is evaluated with the same Beta moments and again reduces to a closed form in α and β.
Using I and II, we have

P(φ_mj = φ_m'j) ≥ I + ε_q(j)² II,

a strictly positive closed-form function of ε_q(j), α, and β.

5. To compute the correlation, we first obtain the expected value of the product of the distributions:

E{F_mj(A) F_mj'(A)} = E{ Σ_h π_mjh π_mj'h δ_ξjh(A) δ_ξj'h(A) } + E{ Σ_h Σ_{i≠h} π_mjh π_mj'i δ_ξjh(A) δ_ξj'i(A) }
= Ψ(A)² E{ Σ_h π_mjh π_mj'h } + Ψ(A)² E{ Σ_h Σ_{i≠h} π_mjh π_mj'i }
≡ Ψ(A)² III + Ψ(A)² IV,

where III and IV follow:

III = E{ Σ_h U_mh² X_jh X_j'h ∏_{l<h} (1 − U_ml X_jl − U_ml X_j'l + U_ml² X_jl X_j'l) },

which reduces to products of the Beta moments above and sums in closed form, and
IV = E{ Σ_h Σ_{i≠h} π_mjh π_mj'i },

computed by the same moment calculations. Thus,

E{F_mj(A) F_mj'(A)} = Ψ(A)² III + Ψ(A)² IV = Ψ(A)² = E{F_mj(A)} E{F_mj'(A)},

since III + IV = E{ Σ_h π_mjh Σ_i π_mj'i } = 1, and F_mj(A) and F_mj'(A) are uncorrelated. The proof that Cov{F_mj(A), F_m'j'(A)} = 0 proceeds similarly; see expressions V and VI from Appendix A1.2.

6. P(φ_mj = φ_mj') = P(φ_mj = φ_mj' ≠ 0) + P(φ_mj = φ_mj' = 0) = 0 + ε_q(j) ε_q(j') = ε_q(j) ε_q(j'),

because the nonzero atoms for columns j and j' are independent continuous draws.
A1.2 Lag-block Sparsity Grouping Prior

Properties 1.-4. follow as in Appendix A1.1.

5. Let q = q(j) = q(j'). Making use of the previously derived formulas III and IV,

E{F_mj(A) F_mj'(A)} = E{ Σ_h π_mjh π_mj'h δ_ξqh(A) } + E{ Σ_h Σ_{i≠h} π_mjh π_mj'i δ_ξqh(A) δ_ξqi(A) }
= Ψ(A) E{ Σ_h π_mjh π_mj'h } + Ψ(A)² E{ Σ_h Σ_{i≠h} π_mjh π_mj'i }
= Ψ(A) III + Ψ(A)² IV,

which gives the correlation stated in Property 5. If q(j) ≠ q(j'), then

E{F_mj(A) F_mj'(A)} = E{ Σ_h π_mjh π_mj'h δ_ξq(j)h(A) δ_ξq(j')h(A) } + E{ Σ_h Σ_{i≠h} π_mjh π_mj'i δ_ξq(j)h(A) δ_ξq(j')i(A) }
= Ψ(A)² III + Ψ(A)² IV = Ψ(A)²,

and the distributions are uncorrelated.

6. Let q = q(j) = q(j').

P(φ_mj = φ_mj') ≥ P(φ_mj = φ_mj' ≠ 0) + P(φ_mj = φ_mj' = 0)
= E{ Σ_h π_mjh π_mj'h δ_ξqh(R\{0}) } + E{ Σ_h π_mjh δ_ξqh({0}) Σ_i π_mj'i δ_ξqi({0}) }
= E{ Σ_h π_mjh π_mj'h δ_ξqh(R) } + ε_q² E{ Σ_h Σ_{i≠h} π_mjh π_mj'i }
= III + ε_q² IV.

If q(j) ≠ q(j'),

P(φ_mj = φ_mj') = P(φ_mj = φ_mj' ≠ 0) + P(φ_mj = φ_mj' = 0) = ε_q(j) ε_q(j').
7. Let q = q(j) = q(j'). Then,

E{F_mj(A) F_m'j'(A)} = E{ Σ_h π_mjh π_m'j'h δ_ξqh(A) } + E{ Σ_h Σ_{i≠h} π_mjh π_m'j'i δ_ξqh(A) δ_ξqi(A) }
= Ψ(A) E{ Σ_h π_mjh π_m'j'h } + Ψ(A)² E{ Σ_h Σ_{i≠h} π_mjh π_m'j'i }
≡ Ψ(A) V + Ψ(A)² VI,

where

V = E{ Σ_h U_mh U_m'h X_jh X_j'h ∏_{l<h} (1 − U_ml X_jl − U_m'l X_j'l + U_ml U_m'l X_jl X_j'l) }

and

VI = E{ Σ_h Σ_{i≠h} π_mjh π_m'j'i }.

Both expectations factor through the independence of the Beta variables, exactly as for I-IV, and reduce to closed-form functions of α and β.
Using expressions V and VI, we obtain the stated correlation in Property 7. For q(j) ≠ q(j'),

E{F_mj(A) F_m'j'(A)} = E{ Σ_h π_mjh π_m'j'h δ_ξq(j)h(A) δ_ξq(j')h(A) } + E{ Σ_h Σ_{i≠h} π_mjh π_m'j'i δ_ξq(j)h(A) δ_ξq(j')i(A) }
= Ψ(A)² V + Ψ(A)² VI = Ψ(A)²,

so the distributions are uncorrelated.

8. Let q = q(j) = q(j').

P(φ_mj = φ_m'j') ≥ P(φ_mj = φ_m'j' ≠ 0) + P(φ_mj = φ_m'j' = 0)
= E{ Σ_h π_mjh π_m'j'h δ_ξqh(R\{0}) } + E{ Σ_h π_mjh δ_ξqh({0}) Σ_i π_m'j'i δ_ξqi({0}) }
= E{ Σ_h π_mjh π_m'j'h δ_ξqh(R) } + ε_q² E{ Σ_h Σ_{i≠h} π_mjh π_m'j'i }
= V + ε_q² VI.

If q(j) ≠ q(j'),

P(φ_mj = φ_m'j') = P(φ_mj = φ_m'j' ≠ 0) + P(φ_mj = φ_m'j' = 0) = ε_q(j) ε_q(j').

A1.3 Innovation Variance Properties

Properties 1.-4. follow as in Dunson et al. (2008).

5. For a common value of α and β, the distributions of U_mh and W_mh, as well as X_jh and Z_jh, are the same. Hence, the set {τ_mjh} will be distributed the same as the set {π_mjh}, and we may continue to use the expressions I-VI to obtain expectations of the IV stick-breaking weights.
E{G_mj(A) G_mj'(A)}
= E{ Σ_h τ_mjh τ_mj'h δ_ηjh(A) δ_ηj'h(A) } + E{ Σ_h Σ_{i≠h} τ_mjh τ_mj'i δ_ηjh(A) δ_ηj'i(A) }
= E{ δ_ηjh(A) δ_ηj'h(A) } E{ Σ_h τ_mjh τ_mj'h } + E{ δ_ηjh(A) } E{ δ_ηj'h(A) } E{ Σ_h Σ_{i≠h} τ_mjh τ_mj'i }
= E{ δ_ηjh(A) δ_ηj'h(A) } III + E{ δ_ηjh(A) } E{ δ_ηj'h(A) } IV.

Writing ω_jh = log η_jh, so that δ_ηjh(A) = δ_ωjh(log A), this becomes

E{G_mj(A) G_mj'(A)} = [ Cov{ δ_ωjh(log A), δ_ωj'h(log A) } + E{ δ_ωjh(log A) } E{ δ_ωj'h(log A) } ] III + E{ δ_ωjh(log A) } E{ δ_ωj'h(log A) } IV
= [ Cov( I{ω_jh ∈ log A}, I{ω_j'h ∈ log A} ) + Φ(log A)² ] III + Φ(log A)² IV,

where Φ(log A) denotes P(ω_jh ∈ log A). Applying Var{δ_ωjh(log A)} = Φ(log A){1 − Φ(log A)} and properties 1. and 2. gives the final result.

6. The proof of property 6. follows the same as above, except one uses expressions V and VI in place of III and IV.

7. Follows from the observation that ω_jh ≠ ω_j'h almost surely, as a consequence of the multivariate normal distribution with a non-degenerate correlation.

Appendix 2: MCMC Details

As mentioned in Section 6 of the article, we introduce several latent variables to facilitate the MCMC simulation from the distributions F_mj and G_mj in equations (2) and (4), following the algorithm of
Dunson et al. (2008). We will draw the random variables R_mj and A_mj from multinomial distributions with respective probabilities {π_mjh} and {τ_mjh}. To this end, first consider the following four sets of binary dummy variables, for all m, j, h:

u_mjh ~ Bern(U_mh), m = 1,...,M, j = 1,...,J, h = 1,...,H_φ;
x_mjh ~ Bern(X_jh), m = 1,...,M, j = 1,...,J, h = 1,...,H_φ;
w_mjh ~ Bern(W_mh), m = 1,...,M, j = 1,...,p, h = 1,...,H_γ;
z_mjh ~ Bern(Z_jh), m = 1,...,M, j = 1,...,p, h = 1,...,H_γ.

Now define R_mj = min{h : u_mjh = x_mjh = 1} and A_mj = min{h : w_mjh = z_mjh = 1}. These R_mj's and A_mj's are distributed according to the appropriate multinomial distributions. We let R_mj designate which ξ_jh to choose as φ_mj, and likewise, A_mj gives the η_jh to select as γ_mj. Hence, Φ is determined by {R_mj} and {ξ_jh}, and Γ by {A_mj} and {η_jh}. Thus, after sampling the values of {R_mj}, {ξ_jh}, {A_mj}, and {η_jh}, the values of Φ and Γ are determined.

Now we calculate the conditional distributions that we will need for our Gibbs sampler for each of the grouping priors. Notationally, we denote the conditional distribution for a random variable, say C, conditional on the remaining random variables by [C | ·].

A2.1 Posterior Computations for the Sparsity/InvGamma Grouping Prior

1. Conditional for ξ_jh, for j = 1,...,J and h = 1,...,H_φ:

It is important to recall the definition of the GARP parameters. For instance, the first parameter φ_m1 is the regression coefficient for y_mi2 onto y_mi1, with innovation variance γ_m2. Likewise, φ_m2 and φ_m3 are the coefficients of y_mi1 and y_mi2 for modeling y_mi3, with variance γ_m3. For fixed j, we let x*_mi denote the component of y_mi that corresponds to the jth GARP parameter regressor, e.g. x*_mi = y_mi1 for j = 1, and x*_mi = y_mi2 for j = 3. Similarly, we let γ*_m denote the relevant innovation variance: γ*_m = γ_m2 for j = 1, and γ*_m = γ_m3 for j = 2 and 3.
Finally, we define e*_mi to be the residual for the regression equation, excluding the contribution of x*_mi. That is, for j = 1, e*_mi = y_mi2; for j = 2, e*_mi = y_mi3 − φ_m3 y_mi2; and for j = 3, e*_mi = y_mi3 − φ_m2 y_mi1. In general the *-variables are defined in the natural way for each j so that e*_mi ~ N(φ_mj x*_mi, γ*_m).

Having established the necessary notation, we see that the contribution to the distribution of the Y_mi's from φ_mj is proportional to

exp{ −(2γ*_m)^{-1} Σ_{i=1}^{n_m} (e*_mi − φ_mj x*_mi)² }.

However, we do not draw the φ_mj's but the ξ_jh. The contribution from Y about ξ_jh is

exp{ −Σ_{m: R_mj = h} (2γ*_m)^{-1} Σ_{i=1}^{n_m} (e*_mi − ξ_jh x*_mi)² }.   (6)

The summation over m such that R_mj = h means that we are only including the samples whose jth GARP parameter is drawn from cluster h. From this observation, we have that the conditional distribution of ξ_jh is

π(ξ_jh | ·) ∝ exp{ −Σ_{m: R_mj = h} (2γ*_m)^{-1} Σ_i (e*_mi − ξ_jh x*_mi)² } [ ε_q(j) δ_0(ξ_jh) + (1 − ε_q(j)) (2πσ²)^{-1/2} exp{−ξ_jh²/(2σ²)} ]
∝ ε_q(j) δ_0(ξ_jh) + (1 − ε_q(j)) (σ_h/σ) exp{ μ_h²/(2σ_h²) } N(ξ_jh; μ_h, σ_h²),   (7)

where

μ_h = σ_h² Σ_{m: R_mj = h} Σ_{i=1}^{n_m} e*_mi x*_mi / γ*_m   and   σ_h^{-2} = σ^{-2} + Σ_{m: R_mj = h} Σ_{i=1}^{n_m} x*_mi² / γ*_m.   (8)

Thus, to sample from this conditional, we set ξ_jh to zero with probability

ε_q(j) / [ ε_q(j) + (1 − ε_q(j)) (σ_h/σ) exp{ μ_h²/(2σ_h²) } ],

and draw from the specified N(μ_h, σ_h²) distribution otherwise. Note that if there are no groups with R_mj = h, then μ_h = 0 and σ_h² = σ², and so (7) simplifies to the original prior for ξ_jh given by (3).
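The two-component draw in (7)-(8) is straightforward to implement directly. The sketch below is our own illustration, not the authors' code: the function name and the flattened data layout (residuals, regressors, and variances pooled over the groups with R_mj = h) are hypothetical, and only the Python standard library is used.

```python
import math
import random

random.seed(0)

def draw_xi(e, x, gamma, eps, sigma2):
    """One draw from the spike-and-slab conditional: zero with the stated
    probability, otherwise N(mu_h, sigma_h^2).  e, x, gamma hold the pooled
    residuals, regressors, and innovation variances; eps is the prior
    zero-probability and sigma2 the slab variance."""
    prec = 1.0 / sigma2 + sum(xv * xv / g for xv, g in zip(x, gamma))
    sh2 = 1.0 / prec                                   # sigma_h^2
    mu = sh2 * sum(ev * xv / g for ev, xv, g in zip(e, x, gamma))
    # marginal-likelihood factor (sigma_h / sigma) * exp(mu^2 / (2 sigma_h^2))
    bf = math.sqrt(sh2 / sigma2) * math.exp(0.5 * mu * mu / sh2)
    p0 = eps / (eps + (1.0 - eps) * bf)                # P(xi = 0 | rest)
    if random.random() < p0:
        return 0.0
    return random.gauss(mu, math.sqrt(sh2))
```

With empty inputs, prec = 1/σ² and μ = 0, so the draw reduces to the prior ε δ_0 + (1 − ε) N(0, σ²), matching the remark above about empty clusters.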
2. Conditional for {R_mj}, {u_mjh}, and {x_mjh}:

First, we draw R_mj from the marginal over {u_mjh, x_mjh} of the conditional distribution of the three. Define γ*_m, e*_mi, x*_mi as in step 1. Then we have

P(R_mj = h | · \ {u_mjh, x_mjh}) ∝ π_mjh exp{ −(2γ*_m)^{-1} Σ_{i=1}^{n_m} (e*_mi − ξ_jh x*_mi)² }.   (9)

Hence, we draw R_mj from the multinomial distribution with probabilities from (9), normalized to sum to one. Given the value of R_mj, we can draw the set {u_mjh, x_mjh} to require that R_mj is the first occasion where both u_mjh and x_mjh are one. For h > R_mj draw u_mjh ~ Bern(U_mh) and x_mjh ~ Bern(X_jh), and when h = R_mj, u_mjh = x_mjh = 1. For h < R_mj, we jointly draw u_mjh and x_mjh in accordance with the following probabilities:

P(u_mjh = 0, x_mjh = 0) = (1 − U_mh)(1 − X_jh)/(1 − U_mh X_jh),
P(u_mjh = 1, x_mjh = 0) = U_mh (1 − X_jh)/(1 − U_mh X_jh),
P(u_mjh = 0, x_mjh = 1) = (1 − U_mh) X_jh/(1 − U_mh X_jh).

3. Conditional for U_mh and X_jh:

Given the values of the u_mjh's and the other variables, the conditional for U_mh for h < H_φ is

U_mh | · ~ Beta( 1 + Σ_{j=1}^J u_mjh, α_φ + Σ_{j=1}^J (1 − u_mjh) ).

Likewise, for h < H_φ,

X_jh | · ~ Beta( 1 + Σ_{m=1}^M x_mjh, β_φ + Σ_{m=1}^M (1 − x_mjh) ).

U_mH_φ and X_jH_φ are drawn from the distribution degenerate at 1. One should recognize that this is slightly different from the specification of Dunson et al. (2008). This is because those authors only define u_mjh and x_mjh for h ≤ R_mj, and so the above conditional has shape parameters determined by summing over j (or m) where h ≤ R_mj. We choose to include a latent variable for each combination of (m, j, h) for clarity, but one may follow Dunson et al.'s choice as well.
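The min-of-Bernoullis construction for R_mj can be checked numerically. In this sketch (our own; a single (m, j) pair with arbitrary illustrative stick-breaking values, names hypothetical), the empirical distribution of R = min{h : u_h = x_h = 1} matches the weights π_h = U_h X_h ∏_{l<h} (1 − U_l X_l):

```python
import random

random.seed(1)

H = 4
# arbitrary stick-breaking variables; the last pair is fixed at 1
# so that the weights sum to one
U = [0.6, 0.4, 0.7, 1.0]
X = [0.5, 0.8, 0.3, 1.0]

# implied multinomial weights pi_h = U_h * X_h * prod_{l<h} (1 - U_l X_l)
pi, rem = [], 1.0
for h in range(H):
    pi.append(rem * U[h] * X[h])
    rem *= 1.0 - U[h] * X[h]

def draw_R():
    # R = min{h : u_h = x_h = 1}, with u_h ~ Bern(U_h), x_h ~ Bern(X_h)
    for h in range(H):
        if random.random() < U[h] and random.random() < X[h]:
            return h
    return H - 1  # unreachable because U[H-1] = X[H-1] = 1

n = 200_000
counts = [0] * H
for _ in range(n):
    counts[draw_R()] += 1
freq = [c / n for c in counts]
```

The empirical frequencies agree with π to Monte Carlo error, confirming that the dummy-variable construction reproduces the multinomial draw of step 2.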
4. Conditional for ε_q, q = 1,...,p − 1:

By placing a Beta(α_q, β_q) prior on ε_q, the conditional for ε_q is

ε_q | · ~ Beta( α_q + Σ_{j: q(j)=q} Σ_{h=1}^{H_φ} δ_0(ξ_jh), β_q + Σ_{j: q(j)=q} Σ_{h=1}^{H_φ} {1 − δ_0(ξ_jh)} ),

where the sum over {j : q(j) = q} is simply the sum over the j corresponding to the lag-q GARPs. It is necessary to specify the values of α_q and β_q. We recommend using α_q = β_q = 1 for all q, which gives a Unif(0, 1) prior for each ε_q. Alternatively, one could choose the values of α_q and β_q to more aggressively shrink ε_q for lower lags toward zero and ε_q for higher lags toward one.

5. Conditional for η_jh, for j = 1,...,p and h = 1,...,H_γ:

Let ẽ_mi be the residual obtained from the difference of y_mij and the previous components of y_mi multiplied by the appropriate GARPs. For instance, when j = 1, ẽ_mi = y_mi1; for j = 2, ẽ_mi = y_mi2 − φ_m1 y_mi1; and so on. Note that this is a different definition of these ẽ-residuals from the e*-residuals used in the ξ_jh step. For each value of j, this yields ẽ_mi ~ N(0, γ_mj). The contribution to the likelihood from Y_mi ~ N(0, Σ(Φ_m, Γ_m)) is proportional to

∏_h [ η_jh^{−n_m/2} exp{ −(2η_jh)^{-1} Σ_i ẽ_mi² } ]^{δ(A_mj = h)}.

Hence, the conditional for each η_jh is

η_jh | · ~ InvGamma( λ_1 + (1/2) Σ_{m: A_mj = h} n_m, λ_2 + (1/2) Σ_{m: A_mj = h} Σ_{i=1}^{n_m} ẽ_mi² ).

6. Conditional for {A_mj}, {w_mjh}, and {z_mjh}:

To draw A_mj, we proceed similarly to step 2 by looking at the conditional marginally over {w_mjh, z_mjh}:

P(A_mj = h | · \ {w_mjh, z_mjh}) ∝ τ_mjh η_jh^{−n_m/2} exp{ −(2η_jh)^{-1} Σ_{i=1}^{n_m} ẽ_mi² }.   (10)
Hence, we draw A_mj from the multinomial distribution with probabilities from (10), normalized to sum to one. As before, we simulate the sets {w_mjh} and {z_mjh} conditional on A_mj being the first occasion where both w_mjh and z_mjh are one. For h > A_mj draw w_mjh ~ Bern(W_mh) and z_mjh ~ Bern(Z_jh), and when h = A_mj, w_mjh = z_mjh = 1. For h < A_mj, we jointly draw w_mjh and z_mjh in accordance with the following probabilities:

P(w_mjh = 0, z_mjh = 0) = (1 − W_mh)(1 − Z_jh)/(1 − W_mh Z_jh),
P(w_mjh = 1, z_mjh = 0) = W_mh (1 − Z_jh)/(1 − W_mh Z_jh),
P(w_mjh = 0, z_mjh = 1) = (1 − W_mh) Z_jh/(1 − W_mh Z_jh).

7. Conditional for W_mh and Z_jh:

Proceeding identically to step 3, we get the following conditionals for h < H_γ:

W_mh | · ~ Beta( 1 + Σ_j w_mjh, α_γ + Σ_j (1 − w_mjh) ),
Z_jh | · ~ Beta( 1 + Σ_m z_mjh, β_γ + Σ_m (1 − z_mjh) ),

and W_mH_γ = Z_jH_γ = 1.

We now look at some of the issues involved in dealing with the hyperparameters. In practice, it will generally be infeasible to specify values for these quantities, so we wish to choose reasonable, disperse prior distributions for them.

8. The first hyperparameter of interest is the variance σ² from the normal component of the ξ_jh's in equation (3). We choose the InvGamma(a, b) family of distributions for the prior, so that we will have conjugacy. This yields the following conditional distribution for σ²:

σ² | · ~ InvGamma( a + (1/2) Σ_{j,h} {1 − δ_0(ξ_jh)}, b + (1/2) Σ_{j,h} ξ_jh² ).

One must now specify the values of (a, b). We recommend InvGamma(0.1, 0.1), so that our prior approximates the commonly-used improper prior π(σ²) ∝ σ^{−2}.
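The conjugate inverse-gamma updates in steps 5 and 8 share the same shape/rate pattern. A minimal sketch (hypothetical helper functions of our own; an InvGamma(a, b) variate is drawn as the reciprocal of a Gamma(a, scale = 1/b) variate):

```python
import random

random.seed(2)

def draw_inv_gamma(shape, rate):
    # if G ~ Gamma(shape, scale = 1/rate), then 1/G ~ InvGamma(shape, rate)
    return 1.0 / random.gammavariate(shape, 1.0 / rate)

def draw_sigma2(xi, a, b):
    """sigma^2 | . ~ InvGamma(a + #{nonzero xi}/2, b + sum(xi^2)/2),
    pooling the GARP atoms xi_jh as in step 8; zero atoms sit in the
    point mass and do not contribute."""
    nz = [v for v in xi if v != 0.0]
    return draw_inv_gamma(a + 0.5 * len(nz), b + 0.5 * sum(v * v for v in nz))
```

The η_jh update of step 5 uses the same helper with shape λ_1 + Σ n_m/2 and rate λ_2 + Σ ẽ²/2 over the groups with A_mj = h.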
9. The α_φ and β_φ control the amount of clustering for the GARP parameters. It is not intuitively obvious where these parameters would congregate, so we require priors that will not too strongly inform the posterior. Following the example of Dunson et al. (2008), we choose a Gamma(1, 1) prior for both α_φ and β_φ. Then the conditional for α_φ is

α_φ | · ~ Gamma( M(H_φ − 1) + 1, 1 − Σ_{m=1}^M Σ_{h=1}^{H_φ−1} log(1 − U_mh) ).

Likewise,

β_φ | · ~ Gamma( J(H_φ − 1) + 1, 1 − Σ_{j=1}^J Σ_{h=1}^{H_φ−1} log(1 − X_jh) ).

Clearly, we can choose a different Gamma(a, b) prior instead of Gamma(1, 1), and we will maintain the Gamma-Gamma conjugacy.

10. The λ_1 and λ_2 parameters control the distribution of the η_jh. We place independent Gamma(1, 1) priors on each. The conditional for λ_2 is

λ_2 | · ~ Gamma( λ_1 pH_γ + 1, 1 + Σ_{j,h} η_jh^{−1} ).

The conditional for λ_1 is

π(λ_1 | ·) ∝ Γ(λ_1)^{−pH_γ} λ_2^{λ_1 pH_γ} exp{ −λ_1 (1 + Σ_{j,h} log η_jh) },

but this is not a standard distribution to use in the Gibbs sampler. So it becomes necessary to implement an alternative sampling method, and we choose to introduce a Metropolis-in-Gibbs step to approximately simulate from the conditional of λ_1. Draw the candidate value λ*_1 to replace the current value λ_1 from the N(λ_1, ζ²) distribution, and accept the move to λ*_1 with probability

min[ 1, exp( pH_γ { log Γ(λ_1) − log Γ(λ*_1) } + (λ*_1 − λ_1) { pH_γ log λ_2 − 1 − Σ_{j,h} log η_jh } ) ] I(λ*_1 > 0).
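One Metropolis-in-Gibbs update for λ_1 can be sketched as follows (our own illustration; the argument layout is hypothetical, pHg stands for pH_γ, and log-gamma is evaluated with math.lgamma):

```python
import math
import random

random.seed(3)

def mh_step_lambda1(lam1, lam2, log_etas, pHg, zeta):
    """One Metropolis-in-Gibbs update of lambda_1 under its Gamma(1, 1)
    prior, targeting pi(lambda_1) proportional to
    Gamma(lambda_1)^(-pHg) * lambda_2^(lambda_1*pHg)
    * exp(-lambda_1 * (1 + sum log eta_jh)).  zeta is the candidate sd."""
    cand = random.gauss(lam1, zeta)
    if cand <= 0.0:
        return lam1                     # the indicator I(lambda_1* > 0)
    log_ratio = (pHg * (math.lgamma(lam1) - math.lgamma(cand))
                 + (cand - lam1) * (pHg * math.log(lam2) - 1.0 - sum(log_etas)))
    if log_ratio >= 0.0 or math.log(1.0 - random.random()) < log_ratio:
        return cand
    return lam1
```

Working on the log scale avoids overflow in the gamma-function ratio; ζ is then tuned until the acceptance rate falls in the range recommended below.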
It is necessary to prespecify a candidate variance ζ² such that the acceptance rate is 20 to 40% (Gelman et al., 1996).

11. The α_γ and β_γ parameters control the amount of clustering for the innovation variance parameters. As in step 9, we put a Gamma(1, 1) prior on both, and we have the following conditionals:

α_γ | · ~ Gamma( M(H_γ − 1) + 1, 1 − Σ_{m=1}^M Σ_{h=1}^{H_γ−1} log(1 − W_mh) ),
β_γ | · ~ Gamma( p(H_γ − 1) + 1, 1 − Σ_{j=1}^p Σ_{h=1}^{H_γ−1} log(1 − Z_jh) ).

Having specified all of the necessary conditionals for the model, the MCMC algorithm is implemented by sampling the parameters from each set in order.

A2.2 Posterior Computations for the Non-sparse Grouping Prior

Most of the parameters of the non-sparse prior yield identical conditional distributions to those from the sparsity grouping prior. Hence, we only discuss those parameters with diverging distributions.

1. Because the prior distribution of the ξ_jh does not incorporate a zero point mass for the GARP parameters, the conditional will no longer be a mixture of a zero point mass and a normal. We have ξ_jh | · ~ N(μ_h, σ_h²), where the normal parameters come from equation (8).

4. There are no longer any ε's in the non-sparse prior, so this is an empty step.

8. The distribution of the variance for the GARP candidates is

σ² | · ~ InvGamma( a + JH_φ/2, b + (1/2) Σ_{j,h} ξ_jh² ),

where the prior for σ² is InvGamma(a, b).
A2.3 Posterior Computations for the Lag-block Prior

1. The conditional for ξ_qh will again be a mixture of a point mass at zero and a normal distribution. Let P_qh denote the set of (m, j) such that q(j) = q and R_mj = h, which is the set of group-GARP pairs that contribute to the estimation of ξ_qh. For each (m, j) ∈ P_qh, we let e*_mij, x*_mij, γ*_mj be the residual, GARP-regressor, and IV such that e*_mij ~ N(φ_mj x*_mij, γ*_mj), as described in step 1 for the sparsity grouping prior. Defining

μ_h = σ_h² Σ_{(m,j)∈P_qh} Σ_{i=1}^{n_m} e*_mij x*_mij / γ*_mj   and   σ_h^{-2} = σ^{-2} + Σ_{(m,j)∈P_qh} Σ_{i=1}^{n_m} x*_mij² / γ*_mj,

we have that ξ_qh is a mixture of zero and the N(μ_h, σ_h²) distribution, where we draw the point mass at 0 with probability

ε_q / [ ε_q + (1 − ε_q) (σ_h/σ) exp{ μ_h²/(2σ_h²) } ].

Note that if P_qh is empty, then the conditional is ε_q δ_0 + (1 − ε_q) N(0, σ²).

2. The lag-block conditional for R_mj marginalized over {u_mjh, x_mjh} is multinomial with probabilities proportional to

P(R_mj = h | · \ {u_mjh, x_mjh}) ∝ π_mjh exp{ −(2γ*_m)^{-1} Σ_{i=1}^{n_m} (e*_mi − ξ_q(j)h x*_mi)² }.

The conditionals for {u_mjh, x_mjh} are the same as in the sparsity grouping case.

4. With a Beta(α_q, β_q) prior on ε_q, the conditional is

ε_q | · ~ Beta( α_q + Σ_{h=1}^{H_φ} δ_0(ξ_qh), β_q + Σ_{h=1}^{H_φ} {1 − δ_0(ξ_qh)} ).

8. With the prior for σ² of InvGamma(a, b), we have the conditional distribution

σ² | · ~ InvGamma( a + (1/2) Σ_{q,h} {1 − δ_0(ξ_qh)}, b + (1/2) Σ_{q,h} ξ_qh² ).
A2.4 Posterior Computations for the Correlated-logNormal Prior

5. Instead of considering the conditional for η_jh, we choose to work in terms of ω_jh = log η_jh. For each sampling set h, we partition ω_h into (ω_A, ω_B) so that ω_A contains the collection of ω_jh such that A_mj = h for at least one m. This divides ω_h into the ω_B, which can be drawn easily through a conjugate distribution, and the ω_A, which require a more advanced sampling method.

To sample ω_B given the remaining variables, we let a denote the length of ω_A and b = p − a denote the length of ω_B. Define R_AA to be the submatrix of R(ρ) corresponding to the elements of ω_A, R_BB the submatrix corresponding to the elements of ω_B, and R_BA to contain the elements from the rows of ω_B and the columns of ω_A. Then, using standard multivariate normal results,

ω_B | ω_A, · ~ N_b( ψ 1_b + R_BA R_AA^{-1} (ω_A − ψ 1_a), Ω (R_BB − R_BA R_AA^{-1} R_AB) ).

Jointly drawing the vector ω_B leads to better mixing than drawing each component separately.

To sample ω_A, we cycle through the components ω_α of ω_A for α = 1,...,a. We recognize that the contribution to the conditional of ω_α from the prior is

exp{ −(2Ω*)^{-1} (ω_α − ψ*)² },

where

ψ* = ψ + R_{α,−α} R_{−α,−α}^{-1} (ω_{−α} − ψ 1_{p−1}),   Ω* = Ω { R_{α,α} − R_{α,−α} R_{−α,−α}^{-1} R_{−α,α} },

ω_{−α} is the ω vector after removing ω_α, R_{−α,−α} is the R(ρ) matrix formed by removing the row and column corresponding to α, and R_{α,−α} is the vector defined by taking the αth row of R(ρ) and removing the αth component.
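For intuition, in the p = 2 case the partitioned-normal draw for ω_B collapses to a scalar formula. The following sketch (hypothetical function of our own, stdlib only) implements that special case:

```python
import math
import random

random.seed(4)

def draw_omega_B(omega_A, psi, Omega, rho):
    """Conditional draw in the p = 2 case, where R(rho) = [[1, rho], [rho, 1]]
    and the partitioned-normal formula reduces to
    omega_B | omega_A ~ N(psi + rho*(omega_A - psi), Omega*(1 - rho^2))."""
    mean = psi + rho * (omega_A - psi)
    var = Omega * (1.0 - rho * rho)
    return random.gauss(mean, math.sqrt(var))
```

The general case replaces ρ by R_BA R_AA^{-1} and 1 − ρ² by the Schur complement R_BB − R_BA R_AA^{-1} R_AB, which is why drawing the whole ω_B vector jointly needs only one matrix factorization per sweep.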
We view this equivalently as η_α = exp(ω_α) ~ logNormal(ψ*, Ω*), and calculate the conditional distribution in terms of η_α. This gives

π(η_α | η_{−α}, ·) ∝ η_α^{ −(1/2) Σ_m n_m δ(A_mj = h) − 1 } exp{ −(2η_α)^{-1} Σ_m δ(A_mj = h) Σ_i ẽ_mi² − (2Ω*)^{-1} (log η_α − ψ*)² }.

Sampling from this distribution requires an approximate sampling step. We recommend slice sampling (Neal, 2003), although an alternative sampling strategy could be used.

10. With the correlated-logNormal prior, we no longer have the hyperparameters λ_1, λ_2, but we now have ψ, Ω, and ρ. Choosing Ω ~ InvGamma(a, b) and ψ | Ω ~ N(0, cΩ) as priors for the two hyperparameters yields the following conditionals:

ψ | Ω, ρ, · ~ N( Σ_h 1_p' R(ρ)^{-1} ω_h / { c^{-1} + H_γ 1_p' R(ρ)^{-1} 1_p }, Ω / { c^{-1} + H_γ 1_p' R(ρ)^{-1} 1_p } ),

Ω | ψ, ρ, · ~ InvGamma( a + (pH_γ + 1)/2, b + ψ²/(2c) + (1/2) Σ_h (ω_h − ψ 1_p)' R(ρ)^{-1} (ω_h − ψ 1_p) ).

In the simulation and data example, we use a = b = 0.1 and c = 1000. As mentioned in Section 6, it has been our experience that sampling ρ leads to instability, and we generally recommend fixing it.

A2.5 Final Comments about MCMC Computations

We finally note that one can view our grouping priors in a hierarchical fashion with multiple levels. As is often the case in hierarchical models, there may be little information about the parameters in the lowest levels. We have often found this to be the case for the grouping priors, resulting in poor mixing for some of the model parameters. While the values of the GARPs and IVs tend to mix well, as evidenced by trace and autocorrelation plots, the stick-breaking parameters α_φ, β_φ, α_γ, and β_γ do not mix as well. While the GARPs/IVs show minimal autocorrelation within ten
iterations, the stick-breaking parameters require more than fifty. As we are usually not interested in directly performing inference on (α, β), and due to the previously mentioned concerns about computational time, we recommend selecting a thinning value that accommodates good mixing of the GARPs and IVs. We also encourage the user to consider the trace plot formed by the log density of the data given the values of the mean (if non-zero) and covariance parameters. An alternative solution is to run a short initial chain and fix the values of the stick-breaking parameters at their posterior means/modes for use in the full MCMC analysis.

When using the correlated-logNormal grouping prior, we similarly observe problems with the sampling for the ω correlation ρ. In many cases, ρ will alternate between values close to 1 and −1, which does not correspond with our intuition about the IVs. Hence, we opt to treat ρ as a tuning parameter. We recommend specifying a default value such as ρ = 0.75, possibly trying a few other choices and selecting the value with the superior DIC. As shown in the depression data study (see Table 5), the three choices of ρ = 0.5, 0.75, and 0.9 lead to similar model fits as measured by the deviance. Based on our simulation studies, we believe that the correlated-logNormal prior is fairly robust to the choice of ρ.

Appendix 3: Additional Risk Simulation Details

Here we include details about some additional risk simulations beyond those discussed in Section 7 of the article.

A3.1 Risk Simulation A1

We perform another risk simulation similar to the first, with five groups and p = 4. The true covariance matrices are given by

Φ_1 = Φ_2 = (·, 0.5, ·, 0.5, 0.5, ·),   Γ_1 = Γ_2 = (3, ·, ·, ·),
Φ_3 = Φ_4 = (·, −0.5, ·, −0.5, −0.5, ·),   Γ_3 = Γ_4 = (4, 4, 4, 4),
Φ_5 = (·, −1.0, ·, −0.5, −1.0, ·),   Γ_5 = (·, ·, ·, ·)

(entries marked · are illegible in the transcription).
As in the article, we create fifty datasets and use the same sample sizes n_1 = ··· = n_4 = 30, n_5 = 15. There should be a large amount of clustering in this case, since there is a great deal of commonality among the GARPs and IVs for different samples. These covariance matrices also do not have any conditional independence relationships to exploit, since each of the GARPs is nonzero. We now specify H_φ = H_γ = 30 for the grouping priors and use the same hyperpriors as before.

Risk estimates are shown in Table 1. As in the previous risk simulation, the lag-block/correlated-logNormal prior produces the best risk, lower than the top naive prior (NB/NB) under both loss functions. For this specification of Σ, we see that the priors that do not promote zeros in the T(Φ_m) matrices (NB and non-sparse grouping) perform better than their sparsity-inducing counterparts (NB and sparsity grouping). This is not unexpected, because this choice of GARPs does not have any conditional independence relationships. The lag-block prior is again the top prior for the GARPs because it allows for sharing information across all GARP parameters of a common lag q(j), instead of only the GARPs at a common j. As before, modeling of the innovation variances improves from the naive Bayes prior to the InvGamma prior to the correlated-logNormal prior. For this particular choice of Σ, we again see that the grouping priors significantly improve the estimation of the covariance matrices, with risk improvements of up to 36% for the L_1 loss and up to 30% for the L_2 loss over the group-specific flat prior. Risk simulations with these covariance specifications and a doubled sample size for each group produced similar results: the lag-block and grouping priors continue to dominate the flat prior and the naive Bayes estimates.

A3.2 Risk Simulation A2

We explore how the estimates obtained from the proposed priors perform with an increase in the dimension of the covariance matrices and the number of groups, as in Risk Simulation 2 of the article. Here we allow for M = 8 groups and consider 6 × 6 covariance matrices, defined by the
GARP and IV parameters in Table 2. This choice for Φ incorporates commonality both within lag and across groups, as well as possessing many conditional independence relationships among the higher lag terms. We choose a sample size of thirty for the first five groups and fifteen for the final three groups, and thirty clusters for the grouping priors.

The estimated risk associated with estimating the covariance matrices under each of the two loss functions is shown in Table 3. With the increased values of p and M, all of the grouping priors beat the naive priors. The ability to borrow strength across groups improves the estimation such that even the non-sparse grouping prior, which does not allow the correct independence relationships, beats the NB prior, which correctly incorporates the potential independence. The lag-block/correlated-logNormal prior continues to beat the remainder of the grouping priors, with a risk improvement of 30% (under L_1) over the NB/NB prior and 64% (under L_1) over the group-specific flat prior. From these and other simulation studies, we believe that as the number of groups M and the dimension of the covariance matrix p increase, the grouping estimators for Σ will outperform the naive Bayes estimators, and the margin by which they do so will increase. This is particularly important since the number of possible models increases as p and M increase.

Web Appendix References

Dunson, D. B., Xue, Y., and Carin, L. (2008). The matrix stick-breaking process: Flexible Bayes meta-analysis. Journal of the American Statistical Association, 103(481):317-327.

Gelman, A., Roberts, G. O., and Gilks, W. R. (1996). Efficient Metropolis jumping rules. In Bernardo, J. M., Berger, J. O., Dawid, A. P., and Smith, A. F. M., editors, Bayesian Statistics 5: Proceedings of the Fifth Valencia International Meeting, pages 599-607.

Neal, R. M. (2003). Slice sampling. The Annals of Statistics, 31(3):705-767.
Table 1: Risk Estimates for Simulation A1 (risk values, and entries marked — or ·, are illegible in the transcription)

GARP prior            IV prior          Risk (L_1)  Risk (L_2)
Lag-block             Corr-logNormal    —           —
Lag-block             InvGamma          —           —
Non-sparse            InvGamma          —           —
Sparsity              Corr-logNormal    —           —
NB                    NB                —           —
Sparsity              InvGamma          —           —
NB                    NB                —           —
Group-specific flat                     —           —
Common-Σ flat                           —           —

Table 2: Parameter Values for Simulation A2

Φ_1 = (0.7, 0.·, 0.7, 0, 0.·, 0.7, 0, 0, 0.·, 0.7, 0, 0, 0, 0.·, 0.7)
Φ_2 = (0.7, 0.·, 0.7, 0.·, 0.·, 0.7, 0, 0.·, 0.·, 0.7, 0, 0, 0.·, 0.·, 0.7)
Φ_3 = (0.3, 0, 0.3, 0, 0, 0.3, 0, 0, 0, 0.3, 0, 0, 0, 0, 0.3)
Φ_4 = (0.3, 0, 0.3, −0.·, 0, 0.3, 0, −0.·, 0, 0.3, 0, 0, −0.·, 0, 0.3)
Φ_5 = (·, −0.5, ·, 0, −0.5, ·, 0, 0, −0.5, ·, 0, 0, 0, −0.5, ·)
Φ_6 = (·, −0.5, ·, 0.3, −0.5, ·, 0, 0.3, −0.5, ·, 0, 0, 0.3, −0.5, ·)
Φ_7 = Φ_8 = (·, −0.·, ·, −0.·, −0.·, ·, −0.·, −0.·, −0.·, ·, −0.·, −0.·, −0.·, −0.·, ·)
Γ_1 = Γ_2 = (·, ·, ·, ·, ·, ·)
Γ_3 = Γ_8 = (3.4, 3.·, ·.8, ·.5, ·.·, ·.8)
Γ_4 = (3, 3, ·, ·, ·, ·)
Γ_5 = (5, 3, 3, 4, 4, 4)
Γ_6 = (5, 5, 3, 3, ·, ·)
Γ_7 = (·, ·.8, ·.6, ·.4, ·.·, ·)

Table 3: Risk Estimates for Simulation A2

GARP prior            IV prior          Risk (L_1)  Risk (L_2)
Lag-block             Corr-logNormal    —           —
Lag-block             InvGamma          —           —
Sparsity              Corr-logNormal    —           —
Sparsity              InvGamma          —           —
Non-sparse            InvGamma          —           —
NB                    NB                —           —
NB                    NB                —           —
Group-specific flat                     —           —
Common-Σ flat                           —           —
More informationKernel Density Based Linear Regression Estimate
Kernel Density Based Linear Regression Estimate Weixin Yao and Zibiao Zao Abstract For linear regression models wit non-normally distributed errors, te least squares estimate (LSE will lose some efficiency
More informationCombining functions: algebraic methods
Combining functions: algebraic metods Functions can be added, subtracted, multiplied, divided, and raised to a power, just like numbers or algebra expressions. If f(x) = x 2 and g(x) = x + 2, clearly f(x)
More information5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems
5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we
More informationNotes on Neural Networks
Artificial neurons otes on eural etwors Paulo Eduardo Rauber 205 Consider te data set D {(x i y i ) i { n} x i R m y i R d } Te tas of supervised learning consists on finding a function f : R m R d tat
More informationPreface. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.
Preface Here are my online notes for my course tat I teac ere at Lamar University. Despite te fact tat tese are my class notes, tey sould be accessible to anyone wanting to learn or needing a refreser
More informationIntroduction to Machine Learning. Recitation 8. w 2, b 2. w 1, b 1. z 0 z 1. The function we want to minimize is the loss over all examples: f =
Introduction to Macine Learning Lecturer: Regev Scweiger Recitation 8 Fall Semester Scribe: Regev Scweiger 8.1 Backpropagation We will develop and review te backpropagation algoritm for neural networks.
More informationHOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS
HOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS Po-Ceng Cang National Standard Time & Frequency Lab., TL, Taiwan 1, Lane 551, Min-Tsu Road, Sec. 5, Yang-Mei, Taoyuan, Taiwan 36 Tel: 886 3
More informationThe derivative function
Roberto s Notes on Differential Calculus Capter : Definition of derivative Section Te derivative function Wat you need to know already: f is at a point on its grap and ow to compute it. Wat te derivative
More informationProbabilistic Graphical Models Homework 1: Due January 29, 2014 at 4 pm
Probabilistic Grapical Models 10-708 Homework 1: Due January 29, 2014 at 4 pm Directions. Tis omework assignment covers te material presented in Lectures 1-3. You must complete all four problems to obtain
More information1watt=1W=1kg m 2 /s 3
Appendix A Matematics Appendix A.1 Units To measure a pysical quantity, you need a standard. Eac pysical quantity as certain units. A unit is just a standard we use to compare, e.g. a ruler. In tis laboratory
More informationIEOR 165 Lecture 10 Distribution Estimation
IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat
More informationContinuity and Differentiability of the Trigonometric Functions
[Te basis for te following work will be te definition of te trigonometric functions as ratios of te sides of a triangle inscribed in a circle; in particular, te sine of an angle will be defined to be te
More informationHow to Find the Derivative of a Function: Calculus 1
Introduction How to Find te Derivative of a Function: Calculus 1 Calculus is not an easy matematics course Te fact tat you ave enrolled in suc a difficult subject indicates tat you are interested in te
More informationLesson 6: The Derivative
Lesson 6: Te Derivative Def. A difference quotient for a function as te form f(x + ) f(x) (x + ) x f(x + x) f(x) (x + x) x f(a + ) f(a) (a + ) a Notice tat a difference quotient always as te form of cange
More informationNumerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems
Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for
More informationSECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY
(Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative
More informationDifferentiation. Area of study Unit 2 Calculus
Differentiation 8VCE VCEco Area of stud Unit Calculus coverage In tis ca 8A 8B 8C 8D 8E 8F capter Introduction to limits Limits of discontinuous, rational and brid functions Differentiation using first
More information1. Which one of the following expressions is not equal to all the others? 1 C. 1 D. 25x. 2. Simplify this expression as much as possible.
004 Algebra Pretest answers and scoring Part A. Multiple coice questions. Directions: Circle te letter ( A, B, C, D, or E ) net to te correct answer. points eac, no partial credit. Wic one of te following
More information64 IX. The Exceptional Lie Algebras
64 IX. Te Exceptional Lie Algebras IX. Te Exceptional Lie Algebras We ave displayed te four series of classical Lie algebras and teir Dynkin diagrams. How many more simple Lie algebras are tere? Surprisingly,
More informationThe Priestley-Chao Estimator
Te Priestley-Cao Estimator In tis section we will consider te Pristley-Cao estimator of te unknown regression function. It is assumed tat we ave a sample of observations (Y i, x i ), i = 1,..., n wic are
More informationOverdispersed Variational Autoencoders
Overdispersed Variational Autoencoders Harsil Sa, David Barber and Aleksandar Botev Department of Computer Science, University College London Alan Turing Institute arsil.sa.15@ucl.ac.uk, david.barber@ucl.ac.uk,
More informationBounds on the Moments for an Ensemble of Random Decision Trees
Noname manuscript No. (will be inserted by te editor) Bounds on te Moments for an Ensemble of Random Decision Trees Amit Durandar Received: Sep. 17, 2013 / Revised: Mar. 04, 2014 / Accepted: Jun. 30, 2014
More informationCopyright c 2008 Kevin Long
Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula
More informationExercises for numerical differentiation. Øyvind Ryan
Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can
More informationPre-Calculus Review Preemptive Strike
Pre-Calculus Review Preemptive Strike Attaced are some notes and one assignment wit tree parts. Tese are due on te day tat we start te pre-calculus review. I strongly suggest reading troug te notes torougly
More informationHOMEWORK HELP 2 FOR MATH 151
HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,
More informationBasic Nonparametric Estimation Spring 2002
Basic Nonparametric Estimation Spring 2002 Te following topics are covered today: Basic Nonparametric Regression. Tere are four books tat you can find reference: Silverman986, Wand and Jones995, Hardle990,
More informationCS522 - Partial Di erential Equations
CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its
More informationLecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.
Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative
More information4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.
Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra
More informationREVIEW LAB ANSWER KEY
REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g
More informationAN IMPROVED WEIGHTED TOTAL HARMONIC DISTORTION INDEX FOR INDUCTION MOTOR DRIVES
AN IMPROVED WEIGHTED TOTA HARMONIC DISTORTION INDEX FOR INDUCTION MOTOR DRIVES Tomas A. IPO University of Wisconsin, 45 Engineering Drive, Madison WI, USA P: -(608)-6-087, Fax: -(608)-6-5559, lipo@engr.wisc.edu
More information4.2 - Richardson Extrapolation
. - Ricardson Extrapolation. Small-O Notation: Recall tat te big-o notation used to define te rate of convergence in Section.: Definition Let x n n converge to a number x. Suppose tat n n is a sequence
More informationDerivatives. By: OpenStaxCollege
By: OpenStaxCollege Te average teen in te United States opens a refrigerator door an estimated 25 times per day. Supposedly, tis average is up from 10 years ago wen te average teenager opened a refrigerator
More informationLECTURE 14 NUMERICAL INTEGRATION. Find
LECTURE 14 NUMERCAL NTEGRATON Find b a fxdx or b a vx ux fx ydy dx Often integration is required. However te form of fx may be suc tat analytical integration would be very difficult or impossible. Use
More informationThe Complexity of Computing the MCD-Estimator
Te Complexity of Computing te MCD-Estimator Torsten Bernolt Lerstul Informatik 2 Universität Dortmund, Germany torstenbernolt@uni-dortmundde Paul Fiscer IMM, Danisc Tecnical University Kongens Lyngby,
More informationIntroduction to Derivatives
Introduction to Derivatives 5-Minute Review: Instantaneous Rates and Tangent Slope Recall te analogy tat we developed earlier First we saw tat te secant slope of te line troug te two points (a, f (a))
More informationThe total error in numerical differentiation
AMS 147 Computational Metods and Applications Lecture 08 Copyrigt by Hongyun Wang, UCSC Recap: Loss of accuracy due to numerical cancellation A B 3, 3 ~10 16 In calculating te difference between A and
More informationMathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative
Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x
More informationQuantum Mechanics Chapter 1.5: An illustration using measurements of particle spin.
I Introduction. Quantum Mecanics Capter.5: An illustration using measurements of particle spin. Quantum mecanics is a teory of pysics tat as been very successful in explaining and predicting many pysical
More informationGeneric maximum nullity of a graph
Generic maximum nullity of a grap Leslie Hogben Bryan Sader Marc 5, 2008 Abstract For a grap G of order n, te maximum nullity of G is defined to be te largest possible nullity over all real symmetric n
More informationThe Verlet Algorithm for Molecular Dynamics Simulations
Cemistry 380.37 Fall 2015 Dr. Jean M. Standard November 9, 2015 Te Verlet Algoritm for Molecular Dynamics Simulations Equations of motion For a many-body system consisting of N particles, Newton's classical
More informationThese errors are made from replacing an infinite process by finite one.
Introduction :- Tis course examines problems tat can be solved by metods of approximation, tecniques we call numerical metods. We begin by considering some of te matematical and computational topics tat
More informationA Reconsideration of Matter Waves
A Reconsideration of Matter Waves by Roger Ellman Abstract Matter waves were discovered in te early 20t century from teir wavelengt, predicted by DeBroglie, Planck's constant divided by te particle's momentum,
More informationEDML: A Method for Learning Parameters in Bayesian Networks
: A Metod for Learning Parameters in Bayesian Networks Artur Coi, Kaled S. Refaat and Adnan Darwice Computer Science Department University of California, Los Angeles {aycoi, krefaat, darwice}@cs.ucla.edu
More informationSolve exponential equations in one variable using a variety of strategies. LEARN ABOUT the Math. What is the half-life of radon?
8.5 Solving Exponential Equations GOAL Solve exponential equations in one variable using a variety of strategies. LEARN ABOUT te Mat All radioactive substances decrease in mass over time. Jamie works in
More informationSolving Continuous Linear Least-Squares Problems by Iterated Projection
Solving Continuous Linear Least-Squares Problems by Iterated Projection by Ral Juengling Department o Computer Science, Portland State University PO Box 75 Portland, OR 977 USA Email: juenglin@cs.pdx.edu
More informationThese error are made from replacing an infinite process by finite one.
Introduction :- Tis course examines problems tat can be solved by metods of approximation, tecniques we call numerical metods. We begin by considering some of te matematical and computational topics tat
More information1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist
Mat 1120 Calculus Test 2. October 18, 2001 Your name Te multiple coice problems count 4 points eac. In te multiple coice section, circle te correct coice (or coices). You must sow your work on te oter
More informationVolume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households
Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of
More informationBootstrap confidence intervals in nonparametric regression without an additive model
Bootstrap confidence intervals in nonparametric regression witout an additive model Dimitris N. Politis Abstract Te problem of confidence interval construction in nonparametric regression via te bootstrap
More information1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).
. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use
More informationDerivatives of Exponentials
mat 0 more on derivatives: day 0 Derivatives of Eponentials Recall tat DEFINITION... An eponential function as te form f () =a, were te base is a real number a > 0. Te domain of an eponential function
More informationChapter 4: Numerical Methods for Common Mathematical Problems
1 Capter 4: Numerical Metods for Common Matematical Problems Interpolation Problem: Suppose we ave data defined at a discrete set of points (x i, y i ), i = 0, 1,..., N. Often it is useful to ave a smoot
More informationA h u h = f h. 4.1 The CoarseGrid SystemandtheResidual Equation
Capter Grid Transfer Remark. Contents of tis capter. Consider a grid wit grid size and te corresponding linear system of equations A u = f. Te summary given in Section 3. leads to te idea tat tere migt
More informationFinite Difference Methods Assignments
Finite Difference Metods Assignments Anders Söberg and Aay Saxena, Micael Tuné, and Maria Westermarck Revised: Jarmo Rantakokko June 6, 1999 Teknisk databeandling Assignment 1: A one-dimensional eat equation
More information1 2 x Solution. The function f x is only defined when x 0, so we will assume that x 0 for the remainder of the solution. f x. f x h f x.
Problem. Let f x x. Using te definition of te derivative prove tat f x x Solution. Te function f x is only defined wen x 0, so we will assume tat x 0 for te remainder of te solution. By te definition of
More informationINTRODUCTION AND MATHEMATICAL CONCEPTS
INTODUCTION ND MTHEMTICL CONCEPTS PEVIEW Tis capter introduces you to te basic matematical tools for doing pysics. You will study units and converting between units, te trigonometric relationsips of sine,
More information2.11 That s So Derivative
2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point
More informationChapter 5 FINITE DIFFERENCE METHOD (FDM)
MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential
More informationKernel Smoothing and Tolerance Intervals for Hierarchical Data
Clemson University TigerPrints All Dissertations Dissertations 12-2016 Kernel Smooting and Tolerance Intervals for Hierarcical Data Cristoper Wilson Clemson University, cwilso6@clemson.edu Follow tis and
More informationTHE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION MODULE 5
THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION NEW MODULAR SCHEME introduced from te examinations in 009 MODULE 5 SOLUTIONS FOR SPECIMEN PAPER B THE QUESTIONS ARE CONTAINED IN A SEPARATE FILE
More informationLecture 21. Numerical differentiation. f ( x+h) f ( x) h h
Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However
More informationSome Review Problems for First Midterm Mathematics 1300, Calculus 1
Some Review Problems for First Midterm Matematics 00, Calculus. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd,
More informationDigital Filter Structures
Digital Filter Structures Te convolution sum description of an LTI discrete-time system can, in principle, be used to implement te system For an IIR finite-dimensional system tis approac is not practical
More informationChapters 19 & 20 Heat and the First Law of Thermodynamics
Capters 19 & 20 Heat and te First Law of Termodynamics Te Zerot Law of Termodynamics Te First Law of Termodynamics Termal Processes Te Second Law of Termodynamics Heat Engines and te Carnot Cycle Refrigerators,
More informationSTA 216, GLM, Lecture 16. October 29, 2007
STA 216, GLM, Lecture 16 October 29, 2007 Efficient Posterior Computation in Factor Models Underlying Normal Models Generalized Latent Trait Models Formulation Genetic Epidemiology Illustration Structural
More informationBlock Bootstrap Prediction Intervals for Autoregression
Department of Economics Working Paper Block Bootstrap Prediction Intervals for Autoregression Jing Li Miami University 2013 Working Paper # - 2013-02 Block Bootstrap Prediction Intervals for Autoregression
More information