Controlling Bayes Directional False Discovery Rate in Random Effects Model
Sanat K. Sarkar (a), Tianhui Zhou (b)
(a) Temple University, Philadelphia, PA 19122, USA
(b) Wyeth Pharmaceuticals, Collegeville, PA 19426, USA

Abstract

Starting with a decision theoretic formulation of simultaneous testing of null hypotheses against two-sided alternatives, a procedure controlling the Bayesian directional false discovery rate (BDFDR) is developed through controlling the posterior directional false discovery rate (PDFDR). This is an alternative to Lewis and Thayer (2004) with better control of the BDFDR. Moreover, it is optimum in the sense of being the non-randomized part of the procedure maximizing the posterior expectation of the directional per-comparison power rate given the data, while controlling the PDFDR. A corresponding empirical Bayes method is proposed in the context of the one-way random effects model. A simulation study shows that the proposed Bayes and empirical Bayes methods perform much better from a Bayesian perspective than the procedures available in the literature.

1 Introduction

In simultaneous testing of null hypotheses against two-sided alternatives, it is often required to make directional decisions for those null hypotheses that are rejected. This may result in Type III errors. For instance, in testing the null hypothesis H_i : θ_i = θ_{i0} against the corresponding alternative K_i^- : θ_i < θ_{i0} or K_i^+ : θ_i > θ_{i0}, simultaneously for i = 1, ..., n, a Type III error occurs, once H_i is rejected, if θ_i < θ_{i0} (or θ_i > θ_{i0}) is the true situation but one falsely claims that K_i^+ (or K_i^-) is true. Controlling an error rate that measures directional errors is a desirable objective in such a multiple testing situation.
1 Research of the first author was supported by NSF Grants DMS and DMS. The work of the second author, who worked under the first author's supervision for her PhD, was supported by NSF Grant DMS.
MSC: Primary: 62J15, 62F03; Secondary: 62F15, 62C10, 62C12
Keywords: Multiple hypotheses testing; Directional decisions; Bayesian decision theory; False discovery rate
Different error rates, directional as well as non-directional and from frequentist as well as Bayesian points of view, can be defined in terms of the different outcomes given in the following table.

Table 1: Outcomes in testing n null hypotheses against two-sided alternative hypotheses

True Situation    accept H    accept K^-    accept K^+    Total
θ = θ_0           U           V_1           V_2           n_0
θ < θ_0           T_1         S_1           S_2           n_-
θ > θ_0           T_2         S_3           S_4           n_+
Total             A           R_1           R_2           n

The quantity S_2 + S_3 is the number of directional (Type III) errors that occurred among the total number of rejections R = R_1 + R_2, with the ratio

DFDP = (S_2 + S_3)/(R ∨ 1),   (1)

where R ∨ 1 = max(R, 1), representing the pure Directional False Discovery Proportion. It is an analog of

FDP = (V_1 + V_2)/(R ∨ 1),   (2)

the False Discovery Proportion, defined only in terms of the Type I errors. The Mixed Directional False Discovery Proportion is defined by combining these two proportions as follows:

MDFDP = FDP + DFDP.   (3)

Two types of directional error rates in the FDR framework, the Directional False Discovery Rate DFDR = E(DFDP) and the Mixed Directional False Discovery Rate MDFDR = E(MDFDP), have been defined by Benjamini et al. (1993) from a frequentist point of view, taking these expectations with respect to the data given the parameters. Benjamini and Yekutieli (2005) have shown that these error rates can be controlled by suitably
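The bookkeeping of Table 1 translates directly into code. The following sketch (our own helper functions, not part of the paper) computes the proportions (1) and (2) from vectors of true signs h_i and decisions d_i, each taking values in {-1, 0, +1}:

```python
import numpy as np

def dfdp(h, d):
    """Directional false discovery proportion (1): fraction of sign
    (Type III) errors among all rejections.  h[i] is the true sign of
    theta_i - theta_i0, d[i] the decision, both in {-1, 0, +1}."""
    h, d = np.asarray(h), np.asarray(d)
    R = np.sum(d != 0)                 # total rejections R = R1 + R2
    s23 = np.sum(h * d == -1)          # S2 + S3: claims of the wrong sign
    return s23 / max(R, 1)             # divide by R v 1 = max(R, 1)

def fdp(h, d):
    """False discovery proportion (2), in terms of Type I errors only."""
    h, d = np.asarray(h), np.asarray(d)
    R = np.sum(d != 0)
    v12 = np.sum((h == 0) & (d != 0))  # V1 + V2: true nulls rejected
    return v12 / max(R, 1)
```

The mixed proportion (3) is then simply `fdp(h, d) + dfdp(h, d)`.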
augmenting the original FDR procedure of Benjamini and Hochberg (1995), providing a proof of what was conjectured in Benjamini and Hochberg (2000), Shaffer (2002) and Williams et al. (1999). The idea of controlling directional false discoveries from a Bayesian point of view was considered by Lewis and Thayer (2004) and Shaffer (1999). Considering a one-way random effects model, which provides a Bayesian framework, Lewis and Thayer have shown that the DFDR, with its expectation taken with respect to both data and parameters, can be controlled by a multiple decision rule that minimizes a per-comparison Bayes risk defined in terms of an additive 0-1 loss function, providing theoretical support for Shaffer's (1999) simulation-based findings. It is often argued that a point null hypothesis is never true and that it is only conventionally emphasized; see, for example, Jones and Tukey (2000), Shaffer (2002), and Williams et al. (1999). Moreover, when we use a Bayesian model with a continuous prior, as in Shaffer (1999) and Lewis and Thayer (2004), the probability of a point null is zero. In such instances, the DFDR and MDFDR are the same. In this article, we take another look at the problem considered by Lewis and Thayer (2004) of developing a procedure that controls the DFDR from a Bayesian perspective. Starting with a decision theoretic formulation of the underlying multiple testing problem in a Bayesian framework and calling E(DFDP) the Bayes DFDR (BDFDR) when the expectation is taken with respect to both data and parameters, we construct an alternative procedure that controls the BDFDR. Our procedure is developed through controlling the posterior DFDR (PDFDR), which is the conditional expectation of the DFDP given the data. While this is similar to what Lewis and Thayer (2004) did, this new procedure offers a better control of the BDFDR.
Moreover, it is optimum in that it maximizes the posterior expectation E{(S_1 + S_4)/n} given the data, while controlling the PDFDR. We will refer to this conditional expectation as the Posterior Directional Per-Comparison Power Rate (PDPCPR). We organize the article as follows. In Section 2, we present a Bayesian decision theoretic formulation of the multiple hypothesis testing problem with directional decisions, and formulate the BDFDR and PDFDR. The new procedure controlling the BDFDR is then developed, and its properties discussed, in Section 3. In Section 4, we return to the example of the one-way random effects model considered by Lewis and Thayer (2004) and Shaffer (1999) and illustrate our Bayes procedure in that context, first assuming that both the within and between variances are known and then considering
the case when the between variance is unknown. With unknown between variance, we incorporate the estimate of it considered by Lewis and Thayer (2004) and Shaffer (1999) into our Bayes procedure and discuss the behavior of the resulting new empirical Bayes procedure with respect to the actual value of the between variance. In Section 5, we discuss the findings of a simulation study comparing our Bayes and empirical Bayes procedures with other procedures that control the BDFDR for the multiple testing problem with directional decisions in a one-way random effects model. We demonstrate through this study that our proposed Bayes and empirical Bayes procedures provide much better control of directional false discoveries from a Bayesian perspective than those currently available in the literature. The article concludes with some final remarks.

2 A decision theoretic formulation

In this section, we first present a general decision theoretic formulation of a multiple testing problem with directional decisions, allowing the decisions to be randomized, before we restrict ourselves in the rest of the paper to non-randomized decisions. Suppose we have a multiple testing problem involving a set of statistics X = (X_1, ..., X_n) with a probability distribution P_θ, where θ = (θ_1, ..., θ_n) ∈ Θ ⊆ R^n, which is being used to test H_i : θ_i ∈ Θ_0 against K_i^- : θ_i ∈ Θ^- or K_i^+ : θ_i ∈ Θ^+, simultaneously for i = 1, ..., n. Let d_i = 0, -1 or 1 according as H_i, K_i^- or K_i^+ is accepted. Then d = (d_1, ..., d_n) represents a decision vector, with D = {(d_1, ..., d_n) : d_i = 0, -1 or 1 for all i} being the decision space. Given X = x, we consider choosing the decision vector d according to a probability distribution over D:

δ(d | x) = ∏_{i=1}^n [{δ_i^0(x)}^{I(d_i=0)} {δ_i^-(x)}^{I(d_i=-1)} {δ_i^+(x)}^{I(d_i=1)}],   d ∈ D,   (4)

for some 0 ≤ δ_i^-(x), δ_i^+(x) ≤ 1, with δ_i^0(x) = 1 - δ_i^-(x) - δ_i^+(x), i = 1, ..., n, allowing the decisions to be made independently of each other given X. The vector δ(X) = (δ_1^-(X), δ_1^+(X),
..., δ_n^-(X), δ_n^+(X)) is referred to as a multiple decision rule or multiple testing procedure. If 0 < δ_i^-(X) < 1 or 0 < δ_i^+(X) < 1 for at least one i, then δ(X) is randomized; otherwise, it is non-randomized. The main objective in a multiple testing problem is to determine δ(X), the choice of which is typically assessed based on a risk obtained by averaging a loss L(θ, δ), incurred in selecting d, over uncertainties. In a frequentist approach, only the uncertainty in X given θ is considered, whereas in a Bayesian approach one would like to further utilize prior information on θ. Let h = (h_1, ..., h_n), with h_i = 0, -1 or 1 according as θ_i ∈ Θ_0, θ_i ∈ Θ^- or θ_i ∈ Θ^+, represent the true state of nature. Given Q(h, d), a measure of error providing an overall discrepancy between h and d, the loss function is given by

L(θ, δ(X)) = Σ_{d∈D} Q(h, d) δ(d | X).   (5)

The frequentist risk is given by

R_δ(θ) = E_{X|θ} L(θ, δ(X)),   (6)

given a prior distribution of θ, the posterior risk is

Π_δ(X) = E_{θ|X} L(θ, δ(X)),   (7)

and the Bayes risk is

r_δ = E_θ R_δ(θ) = E_X Π_δ(X).   (8)

Among the different possible choices of Q(h, d) providing different concepts of error rates in multiple testing, the one we are interested in here is the following:

DFDP = [Σ_{i=1}^n I(h_i d_i = -1)] / [Σ_{i=1}^n I(|d_i| = 1) ∨ 1],   (9)
with the corresponding loss (as defined in (5)) given by

Σ_{d∈D} {[Σ_{i=1}^n I(h_i d_i = -1)] / [Σ_{i=1}^n I(|d_i| = 1) ∨ 1]} δ(d | X)
  = Σ_{J:|J|>0} (1/|J|) [Σ_{i∈J^-} I(h_i = +1) + Σ_{i∈J^+} I(h_i = -1)] ∏_{i∈J^-} δ_i^-(X) ∏_{i∈J^+} δ_i^+(X) ∏_{i∈J^c} [1 - δ_i^-(X) - δ_i^+(X)]
  = Σ_{J:|J|>0} (1/|J|) [Σ_{i∈J^-} I(h_i = +1) + Σ_{i∈J^+} I(h_i = -1)] φ_{J^-,J^+}(X),   (10)

where J^- = {i : d_i = -1}, J^+ = {i : d_i = 1}, J = J^- ∪ J^+, |J| is the cardinality of J, the sum extends over all such pairs (J^-, J^+), and

φ_{J^-,J^+}(X) = ∏_{i∈J^-} δ_i^-(X) ∏_{i∈J^+} δ_i^+(X) ∏_{i∈J^c} [1 - δ_i^-(X) - δ_i^+(X)]   (11)

is the probability, given X, of rejecting the set of null hypotheses {H_i, i ∈ J}. Under a prior distribution of θ, the posterior DFDR (PDFDR) is given by

PDFDR = E_{θ|X}(DFDP) = Σ_{J:|J|>0} (1/|J|) [Σ_{i∈J^-} s_i^+(X) + Σ_{i∈J^+} s_i^-(X)] φ_{J^-,J^+}(X),   (12)

where s_i^-(X) = P{h_i = -1 | X} and s_i^+(X) = P{h_i = +1 | X} are the posterior probabilities of the negative and positive alternatives, respectively. The Bayes DFDR (BDFDR) is the expectation of (12) with respect to X. A non-randomized multiple testing procedure controlling the BDFDR is constructed in the next section through controlling the PDFDR under a continuous prior. Before we do that, it is important to note that, for a non-randomized rule δ, d can be replaced by δ in the above formulation. Furthermore, since the prior is continuous, the posterior probability of h_i = 0 is zero.

3 Controlling Bayes directional FDR

Let s_i(X) = min{s_i^-(X), s_i^+(X)}, i = 1, ..., n. Then, for any procedure with J^- = {i : s_i^+(X) ≤ s_i^-(X)} and J^+ = {i : s_i^-(X) ≤ s_i^+(X)}, the PDFDR in
(12) can be equivalently expressed as

PDFDR = Σ_{J:|J|>0} {(1/|J|) Σ_{i∈J} s_i(X)} φ_J(X),   (13)

with φ_J(X) ≡ φ_{J^-,J^+}(X). Let s_{1:n}(X) ≤ ... ≤ s_{n:n}(X) be the ordered values of s_i(X), i = 1, ..., n, and let H_{i:n} be the null hypothesis corresponding to s_{i:n}. Define A_j(X) as the average of the j smallest s-values, that is,

A_j(X) = (1/j) Σ_{i=1}^j s_{i:n}(X).   (14)

Our proposed new procedure controlling the BDFDR is then given in the following:

Theorem 1 Let

K(X) = max{j : A_j(X) ≤ α} if the maximum exists, and K(X) = 0 otherwise.   (15)

Given K(X) = k, reject {H_{1:n}, ..., H_{k:n}} and accept the rest. Among the rejected hypotheses, a positive sign decision is made if s_i^-(X) < s_i^+(X) and a negative sign decision is made otherwise. The BDFDR of this procedure is less than or equal to α.

Proof. The theorem follows by noting that the PDFDR of the procedure in this theorem is A_{K(X)}(X), which is less than or equal to α; the BDFDR, being the expectation of the PDFDR over X, is therefore also at most α.

It is important to note that, in the procedure of Lewis and Thayer (2004), control of the BDFDR is achieved by selecting (with probability one) J^- and J^+, given X, as follows:

J^- = {i : s_i^+(X) ≤ α},   J^+ = {i : s_i^-(X) ≤ α}.   (16)

In our procedure, these subsets are instead chosen as

J^- = {i ∈ J : s_i^+(X) ≤ s_i^-(X)},   J^+ = {i ∈ J : s_i^-(X) ≤ s_i^+(X)},

where

(1/|J|) Σ_{i∈J} min{s_i^-(X), s_i^+(X)} ≤ α.   (17)
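As a concrete illustration, the step-up rule of Theorem 1 can be sketched as follows (our own implementation, not the authors' code): order the s_i(X) = min{s_i^-(X), s_i^+(X)}, find the largest j whose running average A_j(X) is at most α, reject those hypotheses, and pick the more probable sign for each.

```python
import numpy as np

def bdfdr_procedure(s_minus, s_plus, alpha):
    """Sketch of the procedure in Theorem 1.  s_minus[i] = P(h_i = -1 | X),
    s_plus[i] = P(h_i = +1 | X).  Returns decisions in {-1, 0, +1}."""
    s_minus, s_plus = np.asarray(s_minus), np.asarray(s_plus)
    s = np.minimum(s_minus, s_plus)            # s_i(X) = min{s_i^-, s_i^+}
    order = np.argsort(s)                      # ranks of s_{1:n} <= ... <= s_{n:n}
    avg = np.cumsum(s[order]) / np.arange(1, len(s) + 1)   # A_j(X), j = 1..n
    ok = np.nonzero(avg <= alpha)[0]
    k = ok[-1] + 1 if ok.size else 0           # K(X): largest j with A_j <= alpha
    d = np.zeros(len(s), dtype=int)
    rejected = order[:k]                       # reject H_{1:n}, ..., H_{k:n}
    # positive sign decision iff s_i^-(X) < s_i^+(X), negative otherwise
    d[rejected] = np.where(s_minus[rejected] < s_plus[rejected], 1, -1)
    return d
```

For instance, with posterior probabilities strongly favoring a positive sign for the first hypothesis and a negative sign for the second, both are rejected with the corresponding signs, while an uninformative third hypothesis is accepted.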
Clearly, the set of rejected hypotheses, the union of J^- and J^+, is smaller under (16) than under (17), implying that our procedure is more powerful in the sense of allowing more rejections while keeping the BDFDR controlled at the same level. The formula for the PDFDR in (12) allows one to construct other BDFDR procedures. For instance, one might consider choosing J^- and J^+ by separately controlling each of (1/|J^-|) Σ_{i∈J^-} s_i^+(X) and (1/|J^+|) Σ_{i∈J^+} s_i^-(X). This will again be more powerful than Lewis and Thayer (2004), even though it is not going to be better than our procedure. In fact, one can see from (12) that our procedure provides, in a certain sense, an optimum choice of these subsets subject to a control of the PDFDR. More specifically, we have the following result.

Proposition 1 The procedure in Theorem 1 is the non-randomized part of the procedure that maximizes the posterior directional per-comparison power rate (PDPCPR), defined as

PDPCPR = E_{θ|X} {(S_1 + S_4)/(n ∨ 1)},   (18)

among all procedures with J^- = {i : s_i^+(X) ≤ s_i^-(X)} and J^+ = {i : s_i^-(X) ≤ s_i^+(X)}, subject to a control of the PDFDR at α.

Proof. For any procedure φ,

PDPCPR = Σ_{J:|J|>0} (1/n) [Σ_{i∈J^-} {1 - s_i^+(X)} + Σ_{i∈J^+} {1 - s_i^-(X)}] φ_{J^-,J^+}(X)
       = Σ_{J:|J|>0} {(1/n) Σ_{i∈J} [1 - s_i(X)]} φ_J(X),   (19)

which, given s_{i:n}(X), i = 1, ..., n, can be expressed as

PDPCPR = Σ_{J:|J|>0} {(1/n) Σ_{i∈J} [1 - s_{i:n}(X)]} φ_J(X).   (20)

Also, given s_{i:n}(X), i = 1, ..., n, the PDFDR is

PDFDR = Σ_{J:|J|>0} {(1/|J|) Σ_{i∈J} s_{i:n}(X)} φ_J(X).   (21)
Hence, it follows from the Neyman-Pearson lemma that the procedure

φ_J^0(X) = 1 if (1/n) Σ_{i∈J} [1 - s_{i:n}(X)] > C_α (1/|J|) Σ_{i∈J} s_{i:n}(X),
φ_J^0(X) = 0 if (1/n) Σ_{i∈J} [1 - s_{i:n}(X)] < C_α (1/|J|) Σ_{i∈J} s_{i:n}(X),   (22)

with some C_α > 0 satisfying

Σ_{J:|J|>0} {(1/|J|) Σ_{i∈J} s_{i:n}(X)} φ_J^0(X) = α,   (23)

maximizes the PDPCPR subject to a control of the PDFDR at level α, given s_{i:n}(X), i = 1, ..., n, and hence given X. Since, for every |J| > 0,

(1/n) Σ_{i∈J} [1 - s_{i:n}(X)] > C_α (1/|J|) Σ_{i∈J} s_{i:n}(X)
  ⟺ (1/|J|) Σ_{i∈J} s_{i:n}(X) < |J| / (|J| + n C_α),   (24)

and we can always find a C_α such that the φ_J^0(X) in (22) satisfies (23), the non-randomized part of this procedure is our procedure in Theorem 1.

Remark 1. Obviously, the randomized procedure in Proposition 1 has better control of the PDFDR and higher PDPCPR than its non-randomized part. Nevertheless, we propose using the slightly more conservative non-randomized procedure in order to avoid the difficulty of offering a practical justification for using a randomized test instead of its non-randomized part. A concept of power, called the average power, is defined from a frequentist point of view in terms of the proportion, (S_1 + S_4)/(n_- + n_+), of alternatives that are correctly rejected [Shaffer (1999) and Dudoit et al. (2003)]. This proportion is the same as (S_1 + S_4)/n when n_0 = 0. This is what Lewis and Thayer (2004) consider before taking its expectation with respect to both data and parameters to define what they call the directional per-comparison power rate as a measure of power from a Bayesian perspective. We will use the same measure in this paper, but refer to it as the Bayes directional per-comparison power rate (BDPCPR), to compare the power performance of different procedures controlling the BDFDR in the context of the one-way random effects model considered by Lewis and Thayer (2004).
4 One-way random effects model

Consider data from m independent studies with w_j being the number of observations from study j. The vector of sample means for these studies is X = (X_1, ..., X_m). Assume that the X_j's are independently distributed with

X_j | μ_j, σ² ~ N(μ_j, σ²/w_j),   j = 1, ..., m.   (25)

The population means μ_1, ..., μ_m are considered random and assumed to be independently and identically distributed as

μ_j | θ, τ² ~ N(θ, τ²),   j = 1, ..., m.   (26)

We consider θ and σ² to be known. Regarding τ², we will first assume that it is known and illustrate our BDFDR procedure. Then, assuming it unknown, we develop our empirical BDFDR procedure in terms of an estimate of it. By Bayes' theorem, conditionally given X = x, θ, σ², and τ², the μ_j's are independently distributed as follows:

μ_j | x_j, θ, σ², τ² ~ N(μ̂_j, v_j),   j = 1, ..., m,   (27)

where

μ̂_j = [τ² x_j + (σ²/w_j) θ] / (τ² + σ²/w_j)   and   v_j = (τ² σ²/w_j) / (τ² + σ²/w_j).

We consider the following multiple testing problem involving all pairwise comparisons among the μ_j's:

H_ij : μ_i - μ_j = 0   vs.   K_ij^- : μ_i - μ_j < 0 or K_ij^+ : μ_i - μ_j > 0,   (28)

for all 1 ≤ i < j ≤ m. The conditional marginal distributions of μ_i - μ_j given x are

μ_i - μ_j | x ~ N(μ̂_i - μ̂_j, v_i + v_j),   1 ≤ i < j ≤ m,   (29)
from which we can determine s_ij^- and s_ij^+ as follows:

s_ij^-(x) = 1 - s_ij^+(x) = 1 - Φ((μ̂_i - μ̂_j)/√(v_i + v_j)),   (30)

with Φ being the cdf of N(0, 1). Our Bayes procedure is then developed as in Theorem 1, assuming, of course, that θ, σ² and τ² are known. Note that the procedure of Lewis and Thayer (2004) rejects H_ij in favor of K_ij^+ (or K_ij^-) if

(μ̂_i - μ̂_j)/√(v_i + v_j) ≥ Z_α (or ≤ -Z_α),   (31)

that is, if s_ij^-(x) (or s_ij^+(x)) ≤ α, and accepts H_ij otherwise. With unknown τ², we follow the idea in Shaffer (1999) and Lewis and Thayer (2004) and consider estimating τ² using the estimator

τ̂² = (F - 1) σ² (m - 1) Σ_j w_j / [(Σ_j w_j)² - Σ_j w_j²],   (32)

where F = max{Σ_j w_j (x_j - x̄)² / [(m - 1) σ²], 1}, and ignore the estimation of θ and σ². The resulting procedure obtained by replacing τ² with this τ̂² is our proposed empirical Bayes procedure; of course, when τ̂² = 0, we accept all the null hypotheses. To see how this empirical Bayes procedure performs in terms of controlling the BDFDR when τ² is replaced by its estimate τ̂², we carried out a simulation study with α = .025. We noticed that the BDFDR for this procedure is a decreasing function of τ², and when τ² → 0, the BDFDR is not controlled for m between 4 and 40. The limiting behavior of the BDFDR as τ² → 0 is presented in Table 2.

Table 2: The limiting BDFDR as τ² → 0 for our empirical Bayes procedure (standard errors are .0001, based on 100,000 replications).

To achieve BDFDR ≤ α for small τ², a second critical value F* is determined through simulation, as in Table 3.
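The quantities in (27), (30) and (32) can be sketched in code as follows (our own helper functions; we take x̄ in F to be the weighted grand mean, which the text leaves implicit, so that choice is an assumption):

```python
import math
import numpy as np

def posterior_pairwise_probs(x, w, theta, sigma2, tau2):
    """Posterior means/variances (27) and the posterior probabilities
    s_ij^- = P(mu_i - mu_j < 0 | x) from (30), for all pairs i < j."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    mu_hat = (tau2 * x + (sigma2 / w) * theta) / (tau2 + sigma2 / w)
    v = (tau2 * (sigma2 / w)) / (tau2 + sigma2 / w)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # N(0,1) cdf
    m = len(x)
    return {(i, j): 1.0 - phi((mu_hat[i] - mu_hat[j]) / math.sqrt(v[i] + v[j]))
            for i in range(m) for j in range(i + 1, m)}

def tau2_hat(x, w, sigma2):
    """Empirical Bayes estimate (32) of the between-study variance."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    m = len(x)
    xbar = np.sum(w * x) / np.sum(w)   # weighted grand mean (assumed)
    F = max(np.sum(w * (x - xbar) ** 2) / ((m - 1) * sigma2), 1.0)
    return (F - 1.0) * sigma2 * (m - 1) * np.sum(w) / (np.sum(w) ** 2 - np.sum(w ** 2))
```

Note that the truncation of F at 1 in (32) makes τ̂² = 0 exactly when the between-group mean square does not exceed σ², in which case all null hypotheses are accepted.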
Table 3: Values of F* in the New EB procedure to control the BDFDR at .025.

When τ̂² is small or F ≤ F*, indicating that the μ_j's are very close to each other, we recommend, as in Lewis and Thayer (2004), that all the null hypotheses be accepted; otherwise, our new empirical Bayes procedure is conducted by replacing τ² with τ̂². Compared with the corresponding values in Lewis and Thayer (2004), the values in Table 2 are the same, while those in Table 3 are slightly smaller.

Remark 2. Before we numerically investigate, in the next section, the performance of our proposed procedures relative to others in the context of the one-way random effects model, including the frequentist procedures (assuming fixed effects) considered in Lewis and Thayer (2004), we want to make a few observations on these frequentist procedures. A frequentist procedure here is one that is meant to control the DFDR and is developed, typically, by suitably augmenting an FDR procedure for testing the null hypotheses against the two-sided alternatives; see, for example, Benjamini and Hochberg (2000), Benjamini and Yekutieli (2005), Shaffer (2002) and Williams et al. (1999). For instance, with the two-sided p-values P̃_ij = 2 min(P_ij, 1 - P_ij) corresponding to the test statistics

Z_ij = (X_i - X_j) / [σ √(1/w_i + 1/w_j)],   1 ≤ i < j ≤ m,   (33)

and the one-sided p-values P_ij = 1 - Φ(Z_ij), the following procedure is a directional version of the α-level FDR Bonferroni procedure:

Directional Bonferroni Procedure:
(i) Apply the Bonferroni procedure at level α to test the n = m(m-1)/2 null hypotheses H_ij against the corresponding two-sided alternatives.
(ii) Reject H_ij in favor of K_ij^- if P̃_ij < α/n and P_ij > 1/2.
(iii) Reject H_ij in favor of K_ij^+ if P̃_ij < α/n and P_ij < 1/2.

Assuming that n_0 = 0, the above procedure controls the DFDR at α/2.
The other frequentist DFDR procedure considered by Lewis and Thayer (2004) is the following directional version of the original BH FDR procedure [Benjamini and Yekutieli (2005)]:

Directional BH Procedure:
(i) Apply the BH procedure at level α to test the n = m(m-1)/2 null hypotheses H_ij against the corresponding two-sided alternatives. Let R be the number of null hypotheses rejected.
(ii) Reject H_ij in favor of K_ij^- if P̃_ij < Rα/n and P_ij > 1/2.
(iii) Reject H_ij in favor of K_ij^+ if P̃_ij < Rα/n and P_ij < 1/2.

It is important to note that, even though Lewis and Thayer (2004) considered it, the above directional BH procedure is not known to control the DFDR in the present context. This is because the underlying p-values are dependent, unlike in Benjamini and Yekutieli (2005), where they are assumed to be independent. Had these p-values been independent, the DFDR would have been controlled at α/2 when n_0 = 0, as in the case of the directional Bonferroni procedure. This is why Lewis and Thayer (2004) considered controlling the DFDR at level α through a 2α-level FDR procedure, and we will do so in this paper as well. Benjamini and Yekutieli (2005) have proposed an alternative approach to controlling the DFDR in the dependent case based only on the one-sided p-values. Again, there is no guarantee that this alternative procedure will work, as the required positive regression dependence condition on the one-sided p-values is not met in the present context involving pairwise differences. Thus, the directional Bonferroni procedure seems to be the only frequentist procedure that is known to control the DFDR, and hence the BDFDR (with random effects), in the present context.
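The directional BH steps above can be sketched as follows (our own implementation; the pair labels and level are illustrative, not from the paper):

```python
import math

def directional_bh(z, alpha):
    """Directional BH procedure: z maps pair labels to statistics Z_ij.
    Returns a dict of decisions in {-1, 0, +1}."""
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # N(0,1) cdf
    p_one = {k: 1.0 - phi(v) for k, v in z.items()}              # P_ij
    p_two = {k: 2.0 * min(p, 1.0 - p) for k, p in p_one.items()} # two-sided
    n = len(p_two)
    # Step (i): BH step-up at level alpha on the two-sided p-values.
    ordered = sorted(p_two.values())
    R = max([j + 1 for j, p in enumerate(ordered) if p <= (j + 1) * alpha / n],
            default=0)
    # Steps (ii)-(iii): sign decisions among the R rejected hypotheses.
    d = {}
    for k in z:
        if R > 0 and p_two[k] <= R * alpha / n:
            d[k] = 1 if p_one[k] < 0.5 else -1   # K+ if P_ij < 1/2, else K-
        else:
            d[k] = 0
    return d
```

The directional Bonferroni procedure is the special case obtained by replacing the threshold Rα/n with α/n.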
Nevertheless, we will follow Lewis and Thayer (2004) and consider both this and the directional BH procedure, along with the unadjusted frequentist procedure and the Bayes procedure of Lewis and Thayer (2004), and compare them numerically with our proposed Bayes and empirical Bayes procedures. The Empirical Bayes procedure of Lewis and Thayer (2004) is not considered, as its performance is very close to that of their Bayes procedure. Also, it should be noted that the unadjusted procedure is not going to control the DFDR. It is kept in our comparative study, as in Lewis and Thayer (2004), only to provide a complete picture of how different procedures would perform. 13
5 Numerical Comparisons

Simulation studies in the one-way random effects setup were conducted based on 25,000 replications of x and μ, with m = 2, 3, 4, 5, 10, 50, σ²/w_j = 1, and τ = .01, .50, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0. The values of both the BDFDR and the BDPCPR were computed for each of the proposed Bayes and empirical Bayes procedures (referred to as New Bayes and New EB, respectively), the unadjusted procedure (labelled Unadjusted), the directional Bonferroni procedure (labelled Bonferroni), the directional BH procedure (labelled BH), and Lewis and Thayer's Bayes procedure (labelled LT), all designed to control the BDFDR at α = .025. Figures 1 and 2 present the comparisons in terms of the BDFDR and BDPCPR, respectively, among these different procedures with m = 10. The unadjusted procedure does not control the BDFDR, as expected, although it seems to do so for large τ. All other procedures control the BDFDR, interestingly, including the BH procedure, for which, as discussed before, we have no theory supporting its control of the BDFDR. While the New Bayes procedure is quite conservative for small values of τ (less than .5), it provides much better control of the BDFDR, with a value close to 0.025, than any other procedure when τ > 1. The performance of the New EB procedure in terms of controlling the BDFDR is very similar to that of the New Bayes procedure, except for small τ, when it appears to work the best. The LT procedure is always very conservative whatever the value of τ, although it is slightly better than the BH procedure for large τ. In terms of power (BDPCPR), when τ is small, there is not much difference among the procedures. However, when τ is not small, the New Bayes and New EB procedures are both noticeably more powerful than every other procedure. Figure 3 compares the powers of the New Bayes and LT procedures for different numbers of means.
The New Bayes procedure is the same as the LT procedure when there is only one comparison (m = 2). With more than one comparison, the New Bayes procedure is uniformly more powerful than the LT procedure across different values of τ, and the power gain increases with m. Figure 4 presents a comparison of powers between the New EB and BH procedures for different numbers of means. When there is only one comparison (m = 2), the BH procedure is more powerful than the New EB procedure. With more than one comparison (m > 2), however, the New EB procedure is more powerful, and the power gain increases with m.
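For readers who want to reproduce the qualitative behavior, the following self-contained sketch (our own script, with a far smaller replication count than the paper's 25,000 and with θ, σ², τ² fixed for illustration) estimates the BDFDR of the New Bayes procedure for all pairwise comparisons:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
m, sigma2, tau2, alpha = 10, 1.0, 4.0, 0.025   # sigma^2/w_j = 1, theta = 0, tau = 2
pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # N(0,1) cdf
reps, dfdp_sum = 1000, 0.0
for _ in range(reps):
    mu = rng.normal(0.0, math.sqrt(tau2), m)    # random effects mu_j ~ N(0, tau^2)
    x = rng.normal(mu, math.sqrt(sigma2))       # sample means given mu
    mu_hat = tau2 * x / (tau2 + sigma2)         # posterior means (27), theta = 0
    v = tau2 * sigma2 / (tau2 + sigma2)         # common posterior variance
    s_minus = np.array([1.0 - phi((mu_hat[i] - mu_hat[j]) / math.sqrt(2.0 * v))
                        for i, j in pairs])     # P(mu_i - mu_j < 0 | x)
    s = np.minimum(s_minus, 1.0 - s_minus)      # s_i(X) = min{s^-, s^+}
    order = np.argsort(s)
    avg = np.cumsum(s[order]) / np.arange(1, len(s) + 1)   # A_j(X)
    ok = np.nonzero(avg <= alpha)[0]
    k = ok[-1] + 1 if ok.size else 0            # K(X) from Theorem 1
    errors = 0
    for idx in order[:k]:
        i, j = pairs[idx]
        claim = 1 if s_minus[idx] < 0.5 else -1 # more probable sign
        truth = 1 if mu[i] > mu[j] else -1
        errors += (claim != truth)
    dfdp_sum += errors / max(k, 1)              # DFDP for this replication
bdfdr_est = dfdp_sum / reps
print("estimated BDFDR:", round(bdfdr_est, 4)) # Theorem 1: at most alpha, up to MC error
```

With τ = 2 the estimate should sit near, but not above, α = .025, matching the behavior reported for the New Bayes procedure in Figure 1.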
In conclusion, the simulation results confirm that the new Bayes and empirical Bayes methods proposed in this paper are more powerful directional-error-controlling procedures for pairwise comparisons in a one-way random effects model than those available in the literature.

6 Concluding remarks

We would like to clarify here a few points about this paper, as prompted by the referees. First, we should emphasize that our proposed procedure is not claimed to be a Bayes decision procedure in the sense of minimizing a Bayes risk under a certain loss function. In fact, in the Bayesian paradigm, there is no concept of controlling a posterior expected loss, as opposed to minimizing it. The idea of controlling an error rate is basically a frequentist notion, even when the error rate is defined by averaging over both parameters and data. On the other hand, the procedure in Lewis and Thayer (2004) is a Bayes decision procedure, developed using a loss function of the form L(λ) = Σ_{i=1}^n {I(h_i d_i = -1) + λ I(h_i ≠ 0, d_i = 0)} and minimizing the per-comparison Bayes risk r = E_{X,θ} L(λ) with λ = α. Second, it is worth mentioning that it is a problem with almost all current multiple comparison procedures that, for large enough values of τ, even small differences with conventional p-values approaching 1.0 may be declared significant, because the great majority of p-values for the pairwise comparisons will be very close to zero. Our procedure, by not controlling a per-comparison error rate, does not have this problem.

7 Acknowledgements

We thank the referees for their valuable comments.

References

Benjamini, Y. and Y. Hochberg (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B 57.

Benjamini, Y. and Y. Hochberg (2000). On the adaptive control of the false
discovery rate in multiple testing with independent statistics. Journal of Educational and Behavioral Statistics 25.

Benjamini, Y., Y. Hochberg, and Y. Kling (1993). False discovery rate control in pairwise comparisons. Research Paper, Dept. of Statistics and O.R., Tel Aviv University.

Benjamini, Y. and D. Yekutieli (2005). False discovery rate-adjusted multiple confidence intervals for selected parameters. Journal of the American Statistical Association 100.

Dudoit, S., J. P. Shaffer, and J. C. Boldrick (2003). Multiple hypothesis testing in microarray experiments. Statistical Science 18 (1).

Jones, L. V. and J. W. Tukey (2000). A sensible formulation of the significance test. Psychological Methods 5.

Lewis, C. and D. T. Thayer (2004). A loss function related to the FDR for random effects multiple comparisons. Journal of Statistical Planning and Inference 125.

Shaffer, J. P. (1999). A semi-Bayesian study of Duncan's Bayesian multiple comparison procedure. Journal of Statistical Planning and Inference 82.

Shaffer, J. P. (2002). Multiplicity, directional (Type III) errors, and the null hypothesis. Psychological Methods 7.

Williams, V. S., L. V. Jones, and J. W. Tukey (1999). Controlling error in multiple comparisons, with examples from state-to-state differences in educational achievement. Journal of Educational and Behavioral Statistics 24.
Figure 1: The BDFDR for all pairwise comparisons in the one-way random effects setup with m = 10 and σ²/w_j = 1 (procedures shown: Unadjusted, New Bayes, New EB, LT, BH, Bonferroni).
Figure 2: The BDPCPR for all pairwise comparisons in the one-way random effects setup with m = 10 and σ²/w_j = 1 (procedures shown: Unadjusted, New Bayes, New EB, LT, BH, Bonferroni).
Figure 3: The difference in the BDPCPR between the New Bayes procedure and Lewis and Thayer's Bayes procedure (Power Bayes - Power LT) with σ²/w_j = 1, as a function of τ, for pairwise comparisons of m means.
Figure 4: The difference in the BDPCPR between the New Empirical Bayes procedure and Benjamini and Hochberg's procedure (Power Empirical Bayes - Power BH) with σ²/w_j = 1, as a function of τ, for pairwise comparisons of m means.
More informationFalse discovery rate and related concepts in multiple comparisons problems, with applications to microarray data
False discovery rate and related concepts in multiple comparisons problems, with applications to microarray data Ståle Nygård Trial Lecture Dec 19, 2008 1 / 35 Lecture outline Motivation for not using
More informationControl of Directional Errors in Fixed Sequence Multiple Testing
Control of Directional Errors in Fixed Sequence Multiple Testing Anjana Grandhi Department of Mathematical Sciences New Jersey Institute of Technology Newark, NJ 07102-1982 Wenge Guo Department of Mathematical
More informationAnnouncements. Proposals graded
Announcements Proposals graded Kevin Jamieson 2018 1 Hypothesis testing Machine Learning CSE546 Kevin Jamieson University of Washington October 30, 2018 2018 Kevin Jamieson 2 Anomaly detection You are
More informationThe miss rate for the analysis of gene expression data
Biostatistics (2005), 6, 1,pp. 111 117 doi: 10.1093/biostatistics/kxh021 The miss rate for the analysis of gene expression data JONATHAN TAYLOR Department of Statistics, Stanford University, Stanford,
More informationFDR and ROC: Similarities, Assumptions, and Decisions
EDITORIALS 8 FDR and ROC: Similarities, Assumptions, and Decisions. Why FDR and ROC? It is a privilege to have been asked to introduce this collection of papers appearing in Statistica Sinica. The papers
More informationControlling the False Discovery Rate: Understanding and Extending the Benjamini-Hochberg Method
Controlling the False Discovery Rate: Understanding and Extending the Benjamini-Hochberg Method Christopher R. Genovese Department of Statistics Carnegie Mellon University joint work with Larry Wasserman
More informationStep-down FDR Procedures for Large Numbers of Hypotheses
Step-down FDR Procedures for Large Numbers of Hypotheses Paul N. Somerville University of Central Florida Abstract. Somerville (2004b) developed FDR step-down procedures which were particularly appropriate
More informationDecision theory. 1 We may also consider randomized decision rules, where δ maps observed data D to a probability distribution over
Point estimation Suppose we are interested in the value of a parameter θ, for example the unknown bias of a coin. We have already seen how one may use the Bayesian method to reason about θ; namely, we
More informationLecture 2: Statistical Decision Theory (Part I)
Lecture 2: Statistical Decision Theory (Part I) Hao Helen Zhang Hao Helen Zhang Lecture 2: Statistical Decision Theory (Part I) 1 / 35 Outline of This Note Part I: Statistics Decision Theory (from Statistical
More informationStatistica Sinica Preprint No: SS R1
Statistica Sinica Preprint No: SS-2017-0072.R1 Title Control of Directional Errors in Fixed Sequence Multiple Testing Manuscript ID SS-2017-0072.R1 URL http://www.stat.sinica.edu.tw/statistica/ DOI 10.5705/ss.202017.0072
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationFalse Discovery Control in Spatial Multiple Testing
False Discovery Control in Spatial Multiple Testing WSun 1,BReich 2,TCai 3, M Guindani 4, and A. Schwartzman 2 WNAR, June, 2012 1 University of Southern California 2 North Carolina State University 3 University
More information7. Estimation and hypothesis testing. Objective. Recommended reading
7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing
More informationPolitical Science 236 Hypothesis Testing: Review and Bootstrapping
Political Science 236 Hypothesis Testing: Review and Bootstrapping Rocío Titiunik Fall 2007 1 Hypothesis Testing Definition 1.1 Hypothesis. A hypothesis is a statement about a population parameter The
More informationON TWO RESULTS IN MULTIPLE TESTING
ON TWO RESULTS IN MULTIPLE TESTING By Sanat K. Sarkar 1, Pranab K. Sen and Helmut Finner Temple University, University of North Carolina at Chapel Hill and University of Duesseldorf Two known results in
More informationA Large-Sample Approach to Controlling the False Discovery Rate
A Large-Sample Approach to Controlling the False Discovery Rate Christopher R. Genovese Department of Statistics Carnegie Mellon University Larry Wasserman Department of Statistics Carnegie Mellon University
More informationAdaptive Filtering Multiple Testing Procedures for Partial Conjunction Hypotheses
Adaptive Filtering Multiple Testing Procedures for Partial Conjunction Hypotheses arxiv:1610.03330v1 [stat.me] 11 Oct 2016 Jingshu Wang, Chiara Sabatti, Art B. Owen Department of Statistics, Stanford University
More informationA semi-bayesian study of Duncan's Bayesian multiple
A semi-bayesian study of Duncan's Bayesian multiple comparison procedure Juliet Popper Shaer, University of California, Department of Statistics, 367 Evans Hall # 3860, Berkeley, CA 94704-3860, USA February
More informationSTAT 263/363: Experimental Design Winter 2016/17. Lecture 1 January 9. Why perform Design of Experiments (DOE)? There are at least two reasons:
STAT 263/363: Experimental Design Winter 206/7 Lecture January 9 Lecturer: Minyong Lee Scribe: Zachary del Rosario. Design of Experiments Why perform Design of Experiments (DOE)? There are at least two
More informationPost-Selection Inference
Classical Inference start end start Post-Selection Inference selected end model data inference data selection model data inference Post-Selection Inference Todd Kuffner Washington University in St. Louis
More informationLet us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided
Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or
More informationDetection and Estimation Chapter 1. Hypothesis Testing
Detection and Estimation Chapter 1. Hypothesis Testing Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2015 1/20 Syllabus Homework:
More informationOn Methods Controlling the False Discovery Rate 1
Sankhyā : The Indian Journal of Statistics 2008, Volume 70-A, Part 2, pp. 135-168 c 2008, Indian Statistical Institute On Methods Controlling the False Discovery Rate 1 Sanat K. Sarkar Temple University,
More informationPermutation Test for Bayesian Variable Selection Method for Modelling Dose-Response Data Under Simple Order Restrictions
Permutation Test for Bayesian Variable Selection Method for Modelling -Response Data Under Simple Order Restrictions Martin Otava International Hexa-Symposium on Biostatistics, Bioinformatics, and Epidemiology
More informationHypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes
Neyman-Pearson paradigm. Suppose that a researcher is interested in whether the new drug works. The process of determining whether the outcome of the experiment points to yes or no is called hypothesis
More informationReview. December 4 th, Review
December 4 th, 2017 Att. Final exam: Course evaluation Friday, 12/14/2018, 10:30am 12:30pm Gore Hall 115 Overview Week 2 Week 4 Week 7 Week 10 Week 12 Chapter 6: Statistics and Sampling Distributions Chapter
More informationSpecific Differences. Lukas Meier, Seminar für Statistik
Specific Differences Lukas Meier, Seminar für Statistik Problem with Global F-test Problem: Global F-test (aka omnibus F-test) is very unspecific. Typically: Want a more precise answer (or have a more
More informationResampling-Based Control of the FDR
Resampling-Based Control of the FDR Joseph P. Romano 1 Azeem S. Shaikh 2 and Michael Wolf 3 1 Departments of Economics and Statistics Stanford University 2 Department of Economics University of Chicago
More informationhttp://www.math.uah.edu/stat/hypothesis/.xhtml 1 of 5 7/29/2009 3:14 PM Virtual Laboratories > 9. Hy pothesis Testing > 1 2 3 4 5 6 7 1. The Basic Statistical Model As usual, our starting point is a random
More informationHypothesis Testing Chap 10p460
Hypothesis Testing Chap 1p46 Elements of a statistical test p462 - Null hypothesis - Alternative hypothesis - Test Statistic - Rejection region Rejection Region p462 The rejection region (RR) specifies
More informationJournal of Statistical Software
JSS Journal of Statistical Software MMMMMM YYYY, Volume VV, Issue II. doi: 10.18637/jss.v000.i00 GroupTest: Multiple Testing Procedure for Grouped Hypotheses Zhigen Zhao Abstract In the modern Big Data
More informationSummary of Chapters 7-9
Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two
More informationON STEPWISE CONTROL OF THE GENERALIZED FAMILYWISE ERROR RATE. By Wenge Guo and M. Bhaskara Rao
ON STEPWISE CONTROL OF THE GENERALIZED FAMILYWISE ERROR RATE By Wenge Guo and M. Bhaskara Rao National Institute of Environmental Health Sciences and University of Cincinnati A classical approach for dealing
More informationDirection: This test is worth 250 points and each problem worth points. DO ANY SIX
Term Test 3 December 5, 2003 Name Math 52 Student Number Direction: This test is worth 250 points and each problem worth 4 points DO ANY SIX PROBLEMS You are required to complete this test within 50 minutes
More informationOn Procedures Controlling the FDR for Testing Hierarchically Ordered Hypotheses
On Procedures Controlling the FDR for Testing Hierarchically Ordered Hypotheses Gavin Lynch Catchpoint Systems, Inc., 228 Park Ave S 28080 New York, NY 10003, U.S.A. Wenge Guo Department of Mathematical
More informationSummary and discussion of: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing
Summary and discussion of: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing Statistics Journal Club, 36-825 Beau Dabbs and Philipp Burckhardt 9-19-2014 1 Paper
More informationTesting Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box Durham, NC 27708, USA
Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box 90251 Durham, NC 27708, USA Summary: Pre-experimental Frequentist error probabilities do not summarize
More informationAdvanced Statistical Methods: Beyond Linear Regression
Advanced Statistical Methods: Beyond Linear Regression John R. Stevens Utah State University Notes 3. Statistical Methods II Mathematics Educators Worshop 28 March 2009 1 http://www.stat.usu.edu/~jrstevens/pcmi
More informationTable of Outcomes. Table of Outcomes. Table of Outcomes. Table of Outcomes. Table of Outcomes. Table of Outcomes. T=number of type 2 errors
The Multiple Testing Problem Multiple Testing Methods for the Analysis of Microarray Data 3/9/2009 Copyright 2009 Dan Nettleton Suppose one test of interest has been conducted for each of m genes in a
More informationLecture 21. Hypothesis Testing II
Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric
More informationLarge-Scale Hypothesis Testing
Chapter 2 Large-Scale Hypothesis Testing Progress in statistics is usually at the mercy of our scientific colleagues, whose data is the nature from which we work. Agricultural experimentation in the early
More informationSTA 732: Inference. Notes 10. Parameter Estimation from a Decision Theoretic Angle. Other resources
STA 732: Inference Notes 10. Parameter Estimation from a Decision Theoretic Angle Other resources 1 Statistical rules, loss and risk We saw that a major focus of classical statistics is comparing various
More informationStat 206: Estimation and testing for a mean vector,
Stat 206: Estimation and testing for a mean vector, Part II James Johndrow 2016-12-03 Comparing components of the mean vector In the last part, we talked about testing the hypothesis H 0 : µ 1 = µ 2 where
More informationLooking at the Other Side of Bonferroni
Department of Biostatistics University of Washington 24 May 2012 Multiple Testing: Control the Type I Error Rate When analyzing genetic data, one will commonly perform over 1 million (and growing) hypothesis
More informationSome General Types of Tests
Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.
More informationSimultaneous Testing of Grouped Hypotheses: Finding Needles in Multiple Haystacks
University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 2009 Simultaneous Testing of Grouped Hypotheses: Finding Needles in Multiple Haystacks T. Tony Cai University of Pennsylvania
More informationHypothesis testing (cont d)
Hypothesis testing (cont d) Ulrich Heintz Brown University 4/12/2016 Ulrich Heintz - PHYS 1560 Lecture 11 1 Hypothesis testing Is our hypothesis about the fundamental physics correct? We will not be able
More informationBiostatistics Advanced Methods in Biostatistics IV
Biostatistics 140.754 Advanced Methods in Biostatistics IV Jeffrey Leek Assistant Professor Department of Biostatistics jleek@jhsph.edu Lecture 11 1 / 44 Tip + Paper Tip: Two today: (1) Graduate school
More informationChapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1)
Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Detection problems can usually be casted as binary or M-ary hypothesis testing problems. Applications: This chapter: Simple hypothesis
More informationAlpha-Investing. Sequential Control of Expected False Discoveries
Alpha-Investing Sequential Control of Expected False Discoveries Dean Foster Bob Stine Department of Statistics Wharton School of the University of Pennsylvania www-stat.wharton.upenn.edu/ stine Joint
More informationThe Pennsylvania State University The Graduate School Eberly College of Science GENERALIZED STEPWISE PROCEDURES FOR
The Pennsylvania State University The Graduate School Eberly College of Science GENERALIZED STEPWISE PROCEDURES FOR CONTROLLING THE FALSE DISCOVERY RATE A Dissertation in Statistics by Scott Roths c 2011
More informationSanat Sarkar Department of Statistics, Temple University Philadelphia, PA 19122, U.S.A. September 11, Abstract
Adaptive Controls of FWER and FDR Under Block Dependence arxiv:1611.03155v1 [stat.me] 10 Nov 2016 Wenge Guo Department of Mathematical Sciences New Jersey Institute of Technology Newark, NJ 07102, U.S.A.
More information(1) Introduction to Bayesian statistics
Spring, 2018 A motivating example Student 1 will write down a number and then flip a coin If the flip is heads, they will honestly tell student 2 if the number is even or odd If the flip is tails, they
More informationMathematical Statistics
Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics
More informationMultivariate statistical methods and data mining in particle physics
Multivariate statistical methods and data mining in particle physics RHUL Physics www.pp.rhul.ac.uk/~cowan Academic Training Lectures CERN 16 19 June, 2008 1 Outline Statement of the problem Some general
More informationPart III. A Decision-Theoretic Approach and Bayesian testing
Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to
More informationLecture Testing Hypotheses: The Neyman-Pearson Paradigm
Math 408 - Mathematical Statistics Lecture 29-30. Testing Hypotheses: The Neyman-Pearson Paradigm April 12-15, 2013 Konstantin Zuev (USC) Math 408, Lecture 29-30 April 12-15, 2013 1 / 12 Agenda Example:
More informationA Sequential Bayesian Approach with Applications to Circadian Rhythm Microarray Gene Expression Data
A Sequential Bayesian Approach with Applications to Circadian Rhythm Microarray Gene Expression Data Faming Liang, Chuanhai Liu, and Naisyin Wang Texas A&M University Multiple Hypothesis Testing Introduction
More informationHypothesis Testing. BS2 Statistical Inference, Lecture 11 Michaelmas Term Steffen Lauritzen, University of Oxford; November 15, 2004
Hypothesis Testing BS2 Statistical Inference, Lecture 11 Michaelmas Term 2004 Steffen Lauritzen, University of Oxford; November 15, 2004 Hypothesis testing We consider a family of densities F = {f(x; θ),
More informationParameter Estimation, Sampling Distributions & Hypothesis Testing
Parameter Estimation, Sampling Distributions & Hypothesis Testing Parameter Estimation & Hypothesis Testing In doing research, we are usually interested in some feature of a population distribution (which
More informationExceedance Control of the False Discovery Proportion Christopher Genovese 1 and Larry Wasserman 2 Carnegie Mellon University July 10, 2004
Exceedance Control of the False Discovery Proportion Christopher Genovese 1 and Larry Wasserman 2 Carnegie Mellon University July 10, 2004 Multiple testing methods to control the False Discovery Rate (FDR),
More informationReview. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with
More informationDoing Cosmology with Balls and Envelopes
Doing Cosmology with Balls and Envelopes Christopher R. Genovese Department of Statistics Carnegie Mellon University http://www.stat.cmu.edu/ ~ genovese/ Larry Wasserman Department of Statistics Carnegie
More informationSTAT 135 Lab 5 Bootstrapping and Hypothesis Testing
STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,
More informationProcedures controlling generalized false discovery rate
rocedures controlling generalized false discovery rate By SANAT K. SARKAR Department of Statistics, Temple University, hiladelphia, A 922, U.S.A. sanat@temple.edu AND WENGE GUO Department of Environmental
More informationLECTURE 5 HYPOTHESIS TESTING
October 25, 2016 LECTURE 5 HYPOTHESIS TESTING Basic concepts In this lecture we continue to discuss the normal classical linear regression defined by Assumptions A1-A5. Let θ Θ R d be a parameter of interest.
More informationMultiple Testing. Anjana Grandhi. BARDS, Merck Research Laboratories. Rahway, NJ Wenge Guo. Department of Mathematical Sciences
Control of Directional Errors in Fixed Sequence arxiv:1602.02345v2 [math.st] 18 Mar 2017 Multiple Testing Anjana Grandhi BARDS, Merck Research Laboratories Rahway, NJ 07065 Wenge Guo Department of Mathematical
More informationProbabilistic Inference for Multiple Testing
This is the title page! This is the title page! Probabilistic Inference for Multiple Testing Chuanhai Liu and Jun Xie Department of Statistics, Purdue University, West Lafayette, IN 47907. E-mail: chuanhai,
More informationNon-specific filtering and control of false positives
Non-specific filtering and control of false positives Richard Bourgon 16 June 2009 bourgon@ebi.ac.uk EBI is an outstation of the European Molecular Biology Laboratory Outline Multiple testing I: overview
More informationIMPROVING TWO RESULTS IN MULTIPLE TESTING
IMPROVING TWO RESULTS IN MULTIPLE TESTING By Sanat K. Sarkar 1, Pranab K. Sen and Helmut Finner Temple University, University of North Carolina at Chapel Hill and University of Duesseldorf October 11,
More informationBayesian Learning (II)
Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen Bayesian Learning (II) Niels Landwehr Overview Probabilities, expected values, variance Basic concepts of Bayesian learning MAP
More informationFalse discovery rate control for non-positively regression dependent test statistics
Journal of Statistical Planning and Inference ( ) www.elsevier.com/locate/jspi False discovery rate control for non-positively regression dependent test statistics Daniel Yekutieli Department of Statistics
More informationare equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are,
Page of 8 Suppleentary Materials: A ultiple testing procedure for ulti-diensional pairwise coparisons with application to gene expression studies Anjana Grandhi, Wenge Guo, Shyaal D. Peddada S Notations
More informationThe Pennsylvania State University The Graduate School A BAYESIAN APPROACH TO FALSE DISCOVERY RATE FOR LARGE SCALE SIMULTANEOUS INFERENCE
The Pennsylvania State University The Graduate School A BAYESIAN APPROACH TO FALSE DISCOVERY RATE FOR LARGE SCALE SIMULTANEOUS INFERENCE A Thesis in Statistics by Bing Han c 2007 Bing Han Submitted in
More informationLinear Combinations. Comparison of treatment means. Bruce A Craig. Department of Statistics Purdue University. STAT 514 Topic 6 1
Linear Combinations Comparison of treatment means Bruce A Craig Department of Statistics Purdue University STAT 514 Topic 6 1 Linear Combinations of Means y ij = µ + τ i + ǫ ij = µ i + ǫ ij Often study
More informationTests about a population mean
October 2 nd, 2017 Overview Week 1 Week 2 Week 4 Week 7 Week 10 Week 12 Chapter 1: Descriptive statistics Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation Chapter 8: Confidence
More informationChapter 7. Hypothesis Testing
Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function
More informationSTAT 5200 Handout #7a Contrasts & Post hoc Means Comparisons (Ch. 4-5)
STAT 5200 Handout #7a Contrasts & Post hoc Means Comparisons Ch. 4-5) Recall CRD means and effects models: Y ij = µ i + ϵ ij = µ + α i + ϵ ij i = 1,..., g ; j = 1,..., n ; ϵ ij s iid N0, σ 2 ) If we reject
More informationLECTURE 5. Introduction to Econometrics. Hypothesis testing
LECTURE 5 Introduction to Econometrics Hypothesis testing October 18, 2016 1 / 26 ON TODAY S LECTURE We are going to discuss how hypotheses about coefficients can be tested in regression models We will
More informationStatistical Inference
Statistical Inference Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Week 12. Testing and Kullback-Leibler Divergence 1. Likelihood Ratios Let 1, 2, 2,...
More informationLecture 21: October 19
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 21: October 19 21.1 Likelihood Ratio Test (LRT) To test composite versus composite hypotheses the general method is to use
More informationLec 1: An Introduction to ANOVA
Ying Li Stockholm University October 31, 2011 Three end-aisle displays Which is the best? Design of the Experiment Identify the stores of the similar size and type. The displays are randomly assigned to
More informationNotes on Decision Theory and Prediction
Notes on Decision Theory and Prediction Ronald Christensen Professor of Statistics Department of Mathematics and Statistics University of New Mexico October 7, 2014 1. Decision Theory Decision theory is
More informationMULTIPLE TESTING PROCEDURES AND SIMULTANEOUS INTERVAL ESTIMATES WITH THE INTERVAL PROPERTY
MULTIPLE TESTING PROCEDURES AND SIMULTANEOUS INTERVAL ESTIMATES WITH THE INTERVAL PROPERTY BY YINGQIU MA A dissertation submitted to the Graduate School New Brunswick Rutgers, The State University of New
More information