
Duke University
Duke Biostatistics and Bioinformatics (B&B) Working Paper Series
Year 2010, Paper 7

Randomized Phase II Clinical Trials using Fisher's Exact Test

Sin-Ho Jung (sinho.jung@duke.edu)

This working paper is hosted by The Berkeley Electronic Press (bepress) and may not be commercially reproduced without the permission of the copyright holder. Copyright © 2010 by the author.

Randomized Phase II Clinical Trials using Fisher's Exact Test

Sin-Ho Jung 1

SUMMARY

A typical phase II trial is conducted as a single-arm trial to compare the response probabilities between an experimental therapy and a historical control. Historical control data, however, often have a small sample size, are collected from a different patient population, or use a different response assessment method, so that a direct comparison between a historical control and an experimental therapy may be severely biased. Randomized phase II trials, which enter patients prospectively into both experimental and control arms, have been proposed to avoid such bias. In this paper, we propose two-stage randomized phase II trials based on Fisher's exact test. Through numerical studies, we observe that the proposed method controls the type I error accurately and maintains high power. If we can accurately specify the response probabilities of the two arms under the alternative hypothesis, we can identify good randomized phase II trial designs by adopting Simon's minimax and optimal design concepts, which were developed for single-arm phase II trials.

KEY WORDS: Minimax design, Optimal design, Sufficient statistic, Two-stage design, Unbalanced allocation

1 Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina 27710, U.S.A. (sinho.jung@duke.edu)

1 Introduction

A phase II cancer clinical trial investigates whether an experimental therapy has promising efficacy worthy of further investigation. The most popular primary outcome is the overall response to the experimental therapy, that is, whether the therapy shrinks the tumors. To speed up this process, a phase II clinical trial usually recruits a small number of patients to the experimental therapy arm only, to be compared with a historical control. Consequently, traditional single-arm phase II trials are feasible only when reliable and valid data for an existing standard therapy are available for the same patient population. Furthermore, the response assessment method used in the historical control data should be identical to the one that will be used in the new study. If no historical control data satisfying these conditions exist, or the existing data are too small to represent the whole patient population, we have to consider a randomized phase II clinical trial with a prospective control to be compared with the experimental therapy under investigation. Cannistra [1] recommends a randomized phase II trial if a single-arm design is subject to any of these and other issues.

Let $p_1$ and $p_2$ denote the response probabilities of the experimental and control arms, respectively. In a randomized phase II trial, we want to test $H_0: p_1 \le p_2$ against $H_1: p_1 > p_2$. The null distribution of the binomial test statistic depends on the common response probability $p_1 = p_2$; see Jung [2]. Consequently, if the true response probabilities are different from the specified ones, testing based on binomial distributions may not control the type I error accurately. To avoid this issue, Jung [2] proposes to control the type I error rate at $p_1 = p_2 = 1/2$, which results in strong conservativeness when the true response probability is different from 50%. Asymptotic tests avoid specification of $p_1 = p_2$ by replacing them with consistent estimators, but the sample sizes of phase II trials usually are not large enough for a good large-sample approximation.

Fisher's [3] exact test has been a popular method for comparing two binomial proportions with small sample sizes. In a randomized phase II trial setting, Fisher's exact test is based on the distribution of the number of responders in one arm conditional on the total number of responders, which is a sufficient statistic for $p_1 = p_2$ under $H_0$. Hence, the rejection value of Fisher's exact test does not require specification of the common response probabilities $p_1 = p_2$ under $H_0$.

In this paper, we propose two-stage randomized phase II trial designs based on Fisher's exact test. Using some example designs, we show that Fisher's exact test accurately controls the type I error over a wide range of true response values and is more powerful than Jung's method based on the binomial test when the true response probabilities differ from 50%. If we can project the true response probabilities accurately at the design stage, we can identify efficient designs by adopting Simon's [4] optimal and minimax design concepts, which were proposed for single-arm phase II trials. We provide tables of minimax and optimal two-stage designs under various practical design settings.

In this paper, we limit our focus to randomized phase II trials for evaluating the efficacy of an experimental therapy compared to a prospective control. Other types of randomized phase II trial designs have been proposed by many investigators, including Simon, Wittes and Ellenberg [5], Sargent and Goldberg [6], Thall, Simon and Ellenberg [7], Palmer [8], and Steinberg and Venzon [9]. Rubinstein et al. [10] discuss the strengths and weaknesses of some of these methods and propose a method for randomized phase II screening designs based on a large-sample approximation.

2 Single-Stage Design

If patient accrual is fast or response assessment takes a long time (say, longer than 6 months), we may consider using a single-stage design. Suppose that $n$ patients are randomized to each arm, and let $X$ and $Y$ denote the numbers of responders in arms 1 (experimental) and 2 (control), respectively. Let $q_k = 1 - p_k$ for arm $k (= 1, 2)$. The frequencies (and response probabilities in parentheses) can then be summarized as in Table 1.

At the design stage, $n$ is prespecified. Fisher's exact test is based on the conditional distribution of $X$ given the total number of responders $Z = X + Y$, which has probability mass function
$$f(x \mid z, \theta) = \frac{\binom{n}{x}\binom{n}{z-x}\theta^{x}}{\sum_{i=m_-}^{m_+}\binom{n}{i}\binom{n}{z-i}\theta^{i}}$$
for $m_- \le x \le m_+$, where $m_- = \max(0, z - n)$, $m_+ = \min(z, n)$, and $\theta = p_1 q_2/(p_2 q_1)$ denotes the odds ratio.
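For illustration, this conditional distribution (Fisher's noncentral hypergeometric distribution) can be evaluated directly. Below is a minimal Python sketch, assuming equal per-arm sample sizes $n$; the function name and arguments are illustrative rather than part of the author's software.

```python
from math import comb

def cond_pmf(x, z, n, theta=1.0):
    """f(x | z, theta): pmf of X given X + Y = z when X ~ Bin(n, p1) and
    Y ~ Bin(n, p2), with theta = p1*q2 / (p2*q1) the odds ratio
    (Fisher's noncentral hypergeometric distribution)."""
    m_lo, m_hi = max(0, z - n), min(z, n)
    if not m_lo <= x <= m_hi:
        return 0.0
    den = sum(comb(n, i) * comb(n, z - i) * theta ** i
              for i in range(m_lo, m_hi + 1))
    return comb(n, x) * comb(n, z - x) * theta ** x / den
```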

Suppose that we want to control the type I error rate below $\alpha^*$. Given $X + Y = z$, we reject $H_0: p_1 = p_2$ (i.e., $\theta = 1$) in favor of $H_1: p_1 > p_2$ (i.e., $\theta > 1$) if $X - Y \ge a$, where $a$ is the smallest integer satisfying
$$P(X - Y \ge a \mid z, H_0) = \sum_{x = \lceil (z+a)/2 \rceil}^{m_+} f(x \mid z, \theta = 1) \le \alpha^*.$$
Hence, the critical value $a$ depends on the total number of responders $z$. Under $H_1: \theta = \theta_1\,(>1)$, the conditional power given $X + Y = z$ is
$$1 - \beta(z) = P(X - Y \ge a \mid z, H_1) = \sum_{x = \lceil (z+a)/2 \rceil}^{m_+} f(x \mid z, \theta_1).$$
We propose to choose $n$ so that the marginal power is no smaller than a specified power level $1 - \beta^*$, i.e.,
$$E\{1 - \beta(Z)\} = \sum_{z=0}^{2n} \{1 - \beta(z)\}\, g(z) \ge 1 - \beta^*,$$
where $g(z)$ is the probability mass function of $Z = X + Y$ under $H_1: p_1 > p_2$, given by
$$g(z) = \sum_{x=m_-}^{m_+} \binom{n}{x} p_1^{x} q_1^{n-x} \binom{n}{z-x} p_2^{z-x} q_2^{n-z+x}$$
for $z = 0, 1, \ldots, 2n$. Note that the marginal type I error rate is controlled below $\alpha^*$ because the conditional type I error rate is controlled below $\alpha^*$ for every value of $z$.

Given a type I error rate and power $(\alpha^*, 1 - \beta^*)$ and a specific alternative hypothesis $H_1: (p_1, p_2)$, we find a sample size $n$ as follows (a computational sketch follows the algorithm).

Algorithm for Single-Stage Design:

1. For $n = 1, 2, \ldots$:
   (a) For $z = 0, 1, \ldots, 2n$, find the smallest $a = a(z)$ such that
   $$\alpha(z) = P(X - Y \ge a \mid z, \theta = 1) \le \alpha^*,$$
   and calculate the conditional power for the chosen $a = a(z)$,
   $$1 - \beta(z) = P(X - Y \ge a \mid z, \theta_1).$$
   (b) Calculate the marginal power $1 - \beta = E\{1 - \beta(Z)\}$.
2. Find the smallest $n$ such that $1 - \beta \ge 1 - \beta^*$.
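Below is a minimal Python sketch of this search, written as a direct, unoptimized transcription of the algorithm; it is not the author's Fortran program, and the function name, the default nominal values, and the search cap `n_max` are illustrative assumptions.

```python
from math import comb, ceil

def single_stage_design(p1, p2, alpha=0.15, power=0.80, n_max=150):
    """Smallest per-arm n whose marginal power reaches `power`, where for each
    total z the rejection value a(z) is the smallest integer keeping the
    conditional type I error P(X - Y >= a | z, theta = 1) at or below `alpha`."""
    q1, q2 = 1 - p1, 1 - p2
    theta1 = (p1 * q2) / (p2 * q1)          # odds ratio under H1
    for n in range(1, n_max + 1):
        marginal_power = 0.0
        for z in range(2 * n + 1):
            xs = list(range(max(0, z - n), min(z, n) + 1))
            w0 = [comb(n, x) * comb(n, z - x) for x in xs]       # kernel at theta = 1
            w1 = [w * theta1 ** x for w, x in zip(w0, xs)]       # kernel at theta = theta1
            f0 = [w / sum(w0) for w in w0]                       # f(x | z, 1)
            f1 = [w / sum(w1) for w in w1]                       # f(x | z, theta1)
            # rejection region X - Y >= a, i.e. x >= ceil((z + a) / 2)
            for a in range(-n, n + 2):
                x_min = ceil((z + a) / 2)
                if sum(p for x, p in zip(xs, f0) if x >= x_min) <= alpha:
                    cond_power = sum(p for x, p in zip(xs, f1) if x >= x_min)
                    break
            # marginal pmf g(z) of Z = X + Y under H1: (p1, p2)
            g = sum(comb(n, x) * p1 ** x * q1 ** (n - x) *
                    comb(n, z - x) * p2 ** (z - x) * q2 ** (n - z + x) for x in xs)
            marginal_power += cond_power * g
        if marginal_power >= power:
            return n, marginal_power
    return None

# e.g. single_stage_design(0.35, 0.15, alpha=0.15, power=0.80)
```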

Fisher's test, being based on the conditional distribution, is valid under $\theta = 1$ (i.e., it controls the type I error rate exactly), and its conditional power depends only on the odds ratio $\theta_1$ under $H_1$. However, the marginal power, and hence the sample size $n$, depends on $(p_1, p_2)$, so we need to specify $(p_1, p_2)$ at the design stage. If $(p_1, p_2)$ are mis-specified, the trial may be over- or under-powered, but the type I error in the data analysis will always be appropriately controlled.

3 Two-Stage Design

For ethical and economic reasons, clinical trials are often conducted in multiple stages. Phase II trials usually enter a small number of patients, so the number of stages is mostly two at most. We consider the most popular type of two-stage phase II trial design, with early stopping when the experimental therapy shows low efficacy. Suppose that $n_l$ ($l = 1, 2$) patients are randomized to each arm during stage $l$. Let $n_1 + n_2 = n$ denote the maximal sample size for each arm, let $X_l$ and $Y_l$ denote the numbers of responders during stage $l$ in arms 1 and 2, respectively, and let $X = X_1 + X_2$ and $Y = Y_1 + Y_2$. At the design stage, $n_1$ and $n_2$ are prespecified. Note that $X_1$ and $X_2$ are independent and that, given $X_l + Y_l = z_l$, $X_l$ has the conditional probability mass function
$$f_l(x_l \mid z_l, \theta) = \frac{\binom{n_l}{x_l}\binom{n_l}{z_l - x_l}\theta^{x_l}}{\sum_{i=m_{l-}}^{m_{l+}}\binom{n_l}{i}\binom{n_l}{z_l - i}\theta^{i}}$$
for $m_{l-} \le x_l \le m_{l+}$, where $m_{l-} = \max(0, z_l - n_l)$ and $m_{l+} = \min(z_l, n_l)$.

We consider a two-stage randomized phase II trial whose rejection values are chosen conditional on $z_1$ and $z_2$, as follows.

Stage 1: Randomize $n_1$ patients to each arm, and observe $x_1$ and $y_1$.
  a. Given $z_1 (= x_1 + y_1)$, find a stopping value $a_1 = a_1(z_1)$.
  b. If $x_1 - y_1 \ge a_1$, proceed to stage 2.
  c. Otherwise, stop the trial.

Stage 2: Randomize $n_2$ patients to each arm, and observe $x_2$ and $y_2$ ($z_2 = x_2 + y_2$).
  a. Given $(z_1, z_2)$, find a rejection value $a = a(z_1, z_2)$.
  b. Accept the experimental arm if $x - y \ge a$.

Now, the question is how to choose the rejection values $(a_1, a)$ conditional on $(z_1, z_2)$.

3.1 How to choose $a_1$ and $a$

In this section, we assume that $n_1$ and $n_2$ are given. We consider different options for choosing $a_1$ (a computational sketch of the probability-of-early-termination criteria follows below).

- We may want to stop the trial if the experimental arm is worse than the control. In this case, we choose $a_1 = 0$; this $a_1$ is constant with respect to $z_1$.
- We may choose $a_1$ so that the conditional probability of early termination given $z_1$ is no smaller than a level $\gamma_0$ (= 0.6 to 0.8) under $H_0: \theta = 1$, i.e.,
  $$\mathrm{PET}_0(z_1) = P(X_1 - Y_1 < a_1 \mid z_1, H_0) = \sum_{x_1 = m_{1-}}^{\lceil (a_1 + z_1)/2 \rceil - 1} f_1(x_1 \mid z_1, \theta = 1) \ge \gamma_0.$$
- We may choose $a_1$ so that the conditional probability of early termination given $z_1$ is no larger than a level $\gamma_1$ (= 0.02 to 0.1) under $H_1: \theta = \theta_1$, i.e.,
  $$\mathrm{PET}_1(z_1) = P(X_1 - Y_1 < a_1 \mid z_1, H_1) = \sum_{x_1 = m_{1-}}^{\lceil (a_1 + z_1)/2 \rceil - 1} f_1(x_1 \mid z_1, \theta_1) \le \gamma_1.$$

Among these options, we propose to use $a_1 = 0$. Most optimal two-stage phase II trials also stop early when the observed response probability from stage 1 is no larger than the specified response probability under $H_0$; refer to Simon [4] and Jung et al. [11] for single-arm trials and to Jung [2] for randomized trials.
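As a sketch of how the two PET criteria above can be evaluated, the following Python function computes the conditional probability of early termination for a candidate $a_1$; the function name and the example odds ratio in the usage comment are illustrative assumptions.

```python
from math import comb, ceil

def pet(z1, n1, a1, theta):
    """Conditional probability of early termination given Z1 = z1, i.e.
    P(X1 - Y1 < a1 | z1, theta); the trial continues only if x1 >= ceil((a1 + z1)/2)."""
    xs = range(max(0, z1 - n1), min(z1, n1) + 1)
    w = {x: comb(n1, x) * comb(n1, z1 - x) * theta ** x for x in xs}
    cut = ceil((a1 + z1) / 2)
    return sum(v for x, v in w.items() if x < cut) / sum(w.values())

# e.g. with n1 = 30 and z1 = 20 responders in stage 1:
#   pet(20, 30, 0, 1.0)   # PET0(z1) at a1 = 0 under H0 (theta = 1)
#   pet(20, 30, 0, 3.0)   # PET1(z1) at a1 = 0 under an assumed odds ratio of 3
```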

With $a_1$ fixed at 0, we choose the second-stage rejection value $a$ conditional on $(z_1, z_2)$. Given a type I error rate $\alpha^*$, $a$ is chosen as the smallest integer satisfying
$$\alpha(z_1, z_2) = P(X_1 - Y_1 \ge a_1, \; X - Y \ge a \mid z_1, z_2, \theta = 1) \le \alpha^*.$$
We calculate $\alpha(z_1, z_2)$ by
$$\alpha(z_1, z_2) = P\!\left(X_1 \ge \tfrac{a_1 + z_1}{2}, \; X_1 + X_2 \ge \tfrac{a + z_1 + z_2}{2} \,\Big|\, z_1, z_2, \theta = 1\right)
= \sum_{x_1=m_{1-}}^{m_{1+}} \sum_{x_2=m_{2-}}^{m_{2+}} I\!\left\{x_1 \ge \tfrac{a_1 + z_1}{2}, \; x_1 + x_2 \ge \tfrac{a + z_1 + z_2}{2}\right\} f_1(x_1 \mid z_1, 1)\, f_2(x_2 \mid z_2, 1),$$
where $I(\cdot)$ is the indicator function. Given $z_1$ and $z_2$, the conditional power under $H_1: \theta = \theta_1$ is obtained by
$$1 - \beta(z_1, z_2) = P(X_1 - Y_1 \ge a_1, \; X - Y \ge a \mid z_1, z_2, \theta_1)
= \sum_{x_1=m_{1-}}^{m_{1+}} \sum_{x_2=m_{2-}}^{m_{2+}} I\!\left\{x_1 \ge \tfrac{a_1 + z_1}{2}, \; x_1 + x_2 \ge \tfrac{a + z_1 + z_2}{2}\right\} f_1(x_1 \mid z_1, \theta_1)\, f_2(x_2 \mid z_2, \theta_1).$$
Note that, as in the single-stage case, the calculation of the conditional type I error rate $\alpha(z_1, z_2)$ and of the rejection values $(a_1, a)$ does not require specification of the common response probability $p_1 = p_2$ under $H_0$, and the conditional power $1 - \beta(z_1, z_2)$ requires specification of the odds ratio $\theta_1$ under $H_1$, but not of the response probabilities of the two arms, $p_1$ and $p_2$.

3.2 How to choose $n_1$ and $n_2$

In this section we discuss how to choose the sample sizes $n_1$ and $n_2$ at the design stage based on some criteria. Given $(\alpha^*, 1 - \beta^*)$, we propose to choose $n_1$ and $n_2$ so that the marginal power is maintained above $1 - \beta^*$ while the conditional type I error rate for any $(z_1, z_2)$ is controlled below $\alpha^*$, as described in Section 3.1. For stage $l (= 1, 2)$, the marginal distribution of $Z_l = X_l + Y_l$ has probability mass function
$$g_l(z_l) = \sum_{x_l=m_{l-}}^{m_{l+}} \binom{n_l}{x_l} p_1^{x_l} q_1^{n_l - x_l} \binom{n_l}{z_l - x_l} p_2^{z_l - x_l} q_2^{n_l - z_l + x_l}$$
for $z_l = 0, \ldots, 2n_l$. Under $H_0: p_1 = p_2 = p_0$, this is expressed as
$$g_{0l}(z_l) = p_0^{z_l} q_0^{2n_l - z_l} \sum_{x_l=m_{l-}}^{m_{l+}} \binom{n_l}{x_l} \binom{n_l}{z_l - x_l}.$$
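The following Python sketch carries out this conditional calculation: for observed $(z_1, z_2)$ it finds the smallest $a$ whose conditional type I error is at most the nominal level and returns the corresponding conditional power. The function names and the default odds ratio are illustrative assumptions.

```python
from math import comb, ceil

def cond_pmf_vec(n, z, theta):
    """f_l(x | z, theta) for x = max(0, z - n), ..., min(z, n), with n per arm."""
    xs = list(range(max(0, z - n), min(z, n) + 1))
    w = [comb(n, x) * comb(n, z - x) * theta ** x for x in xs]
    tot = sum(w)
    return xs, [v / tot for v in w]

def stage2_cutoff(n1, n2, z1, z2, alpha=0.15, a1=0, theta1=3.0):
    """Smallest a with P(X1 - Y1 >= a1, X - Y >= a | z1, z2, theta = 1) <= alpha,
    together with the conditional power of that rule under theta = theta1."""
    xs1, f1_null = cond_pmf_vec(n1, z1, 1.0)
    xs2, f2_null = cond_pmf_vec(n2, z2, 1.0)
    _, f1_alt = cond_pmf_vec(n1, z1, theta1)
    _, f2_alt = cond_pmf_vec(n2, z2, theta1)
    c1 = ceil((a1 + z1) / 2)                      # stage-1 continuation: x1 >= c1

    def joint_prob(a, f1, f2):
        c = ceil((a + z1 + z2) / 2)               # final rejection: x1 + x2 >= c
        return sum(u * v
                   for x1, u in zip(xs1, f1) if x1 >= c1
                   for x2, v in zip(xs2, f2) if x1 + x2 >= c)

    a = next(a for a in range(-(n1 + n2), n1 + n2 + 2)
             if joint_prob(a, f1_null, f2_null) <= alpha)
    return a, joint_prob(a, f1_alt, f2_alt)

# e.g. stage2_cutoff(30, 30, z1=20, z2=25, alpha=0.15, theta1=3.0)
```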

Further, $Z_1$ and $Z_2$ are independent. Hence, we choose $n_1$ and $n_2$ so that the marginal power is no smaller than a specified level $1 - \beta^*$, i.e.,
$$1 - \beta = \sum_{z_1=0}^{2n_1} \sum_{z_2=0}^{2n_2} \{1 - \beta(z_1, z_2)\}\, g_1(z_1)\, g_2(z_2) \ge 1 - \beta^*.$$
The marginal type I error rate is calculated by
$$\alpha = \sum_{z_1=0}^{2n_1} \sum_{z_2=0}^{2n_2} \alpha(z_1, z_2)\, g_{01}(z_1)\, g_{02}(z_2).$$
Since the conditional type I error rate is controlled below $\alpha^*$ for every $(z_1, z_2)$, the marginal type I error rate is no larger than $\alpha^*$.

Although we do not have to specify the response probabilities for testing, we need to do so when choosing $(n_1, n_2)$ at the design stage. If the specified response probabilities are different from the true ones, the marginal power may differ from the expected one. But in this case our testing is still valid in the sense that it always controls the (both conditional and marginal) type I error rate below the specified level.

Let
$$\mathrm{PET}_0 = E\{\mathrm{PET}_0(Z_1) \mid H_0\} = \sum_{z_1=0}^{2n_1} \mathrm{PET}_0(z_1)\, g_{01}(z_1)$$
denote the marginal probability of early termination under $H_0$. Then, among those $(n_1, n_2)$ satisfying the $(\alpha^*, 1 - \beta^*)$-condition, the minimax and optimal designs are chosen as follows.

- The minimax design chooses $(n_1, n_2)$ with the smallest maximal sample size $n (= n_1 + n_2)$.
- The optimal design chooses $(n_1, n_2)$ with the smallest marginal expected sample size $EN$ under $H_0$, where $EN = n_1 \mathrm{PET}_0 + n (1 - \mathrm{PET}_0)$.

A computational sketch of these design quantities appears below. Tables 2 to 5 report the sample sizes $(n, n_1)$ of the minimax and optimal two-stage designs for $\alpha^* = 0.15$ or 0.2, $1 - \beta^* = 0.8$ or 0.85, and various combinations of $(p_1, p_2)$ under $H_1$. For comparison, we also list the sample size $n$ of the single-stage design under each setting. Note that the maximal sample size of the minimax design is slightly smaller than or equal to the sample size of the single-stage design. If the experimental therapy is inefficacious, however, the expected sample sizes of the minimax and optimal designs are much smaller than the sample size of the single-stage design.
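A minimal Python sketch of these design quantities follows, assuming the proposed early-stopping value $a_1 = 0$; the function names are illustrative. The minimax and optimal designs are then simple minimizations of $n$ and of $EN$, respectively, over candidate $(n_1, n_2)$ pairs meeting the marginal power requirement.

```python
from math import comb, ceil

def g_null(n, z, p0):
    """g_{0l}(z): marginal pmf of Z = X + Y under H0: p1 = p2 = p0, with n per arm."""
    kernel = sum(comb(n, x) * comb(n, z - x)
                 for x in range(max(0, z - n), min(z, n) + 1))
    return p0 ** z * (1 - p0) ** (2 * n - z) * kernel

def pet0_marginal(n1, p0, a1=0):
    """PET0 = E{PET0(Z1) | H0}: marginal probability of stopping after stage 1."""
    total = 0.0
    for z1 in range(2 * n1 + 1):
        xs = range(max(0, z1 - n1), min(z1, n1) + 1)
        w = [comb(n1, x) * comb(n1, z1 - x) for x in xs]   # conditional kernel at theta = 1
        stop = sum(v for x, v in zip(xs, w) if x < ceil((a1 + z1) / 2)) / sum(w)
        total += stop * g_null(n1, z1, p0)
    return total

def expected_sample_size(n1, n2, p0, a1=0):
    """EN = n1 * PET0 + (n1 + n2) * (1 - PET0) under H0: p1 = p2 = p0."""
    pet0 = pet0_marginal(n1, p0, a1)
    return n1 * pet0 + (n1 + n2) * (1 - pet0)

# Among the (n1, n2) pairs whose marginal power reaches 1 - beta*, the minimax
# design minimizes n = n1 + n2 and the optimal design minimizes EN under H0.
```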

4 Numerical Studies

Jung [2] proposes a randomized phase II design method based on the binomial test, called the MaxTest in this paper, which controls the type I error rate at $p_1 = p_2 = 50\%$. Since the type I error rate of the two-sample binomial test is maximized at $p_1 = p_2 = 50\%$, this test will be conservative if the true response probability under $H_0$ is different from 50%. We want to compare the performance of our Fisher's test with that of the MaxTest.

Figure 1 displays the type I error rate and power over the range $0 < p_2 < 1$ for single-stage designs with $n = 60$ per arm, $\Delta = p_1 - p_2 = 0.15$ or 0.2 under $H_1$, and $\alpha = 0.1$, 0.15 or 0.2 under $H_0: p_1 = p_2$. The solid lines are for Fisher's test and the dotted lines are for the MaxTest; the lower two lines are the type I error rates and the upper two lines are the powers. As is well known, Fisher's test controls the type I error conservatively over the range of $p_2$. The conservativeness becomes slightly stronger for small $p_2$ values close to 0. The MaxTest controls the type I error accurately around $p_2 = 0.5$, but becomes more conservative for $p_2$ values far from 0.5, especially for small $p_2$ values. For $\alpha = 0.1$, Fisher's test and the MaxTest have similar power over a middle range of $p_2$ values, with the MaxTest slightly more powerful in part of that range; otherwise, Fisher's test is more powerful. The difference in power between the two methods is larger with $\Delta = 0.15$. With $\Delta = 0.2$ we observe similar trends overall, but the difference in power becomes smaller, especially when combined with a large $\alpha (= 0.2)$.

Figure 2 displays the type I error rate and power of two-stage designs with $n_1 = n_2 = 30$ per arm. We observe that, compared to the MaxTest, Fisher's test controls the type I error more accurately over most of the range of $p_2$ values. If $\alpha = 0.1$, Fisher's test is more powerful than the MaxTest over the whole range of $p_2$ values. But with a larger $\alpha$, such as 0.15 or 0.2, the MaxTest is slightly more powerful for some $p_2$ values. As in the single-stage case, the difference in power diminishes as $\Delta$ and $\alpha$ increase.
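The Fisher's-test curves in Figures 1 and 2 can be computed exactly rather than by simulation. A Python sketch for the single-stage case follows, assuming $n = 60$ per arm as in Figure 1; the function name is illustrative, and the MaxTest curves (which require Jung's binomial-test critical values) are not reproduced here.

```python
from math import comb, ceil

def marginal_reject_prob(n, p1, p2, alpha=0.1):
    """Marginal probability that the single-stage Fisher rule rejects H0 when the
    true response probabilities are (p1, p2), with n patients per arm. With
    p1 == p2 this is the marginal type I error; with p1 > p2 it is the power."""
    q1, q2 = 1 - p1, 1 - p2
    prob = 0.0
    for z in range(2 * n + 1):
        xs = list(range(max(0, z - n), min(z, n) + 1))
        w = [comb(n, x) * comb(n, z - x) for x in xs]      # conditional kernel, theta = 1
        tot = sum(w)
        # smallest rejection value a(z): reject when x >= ceil((z + a) / 2)
        for a in range(-n, n + 2):
            x_min = ceil((z + a) / 2)
            if sum(v for x, v in zip(xs, w) if x >= x_min) / tot <= alpha:
                break
        # marginal probability of landing in the rejection region at this z
        prob += sum(comb(n, x) * p1 ** x * q1 ** (n - x) *
                    comb(n, z - x) * p2 ** (z - x) * q2 ** (n - z + x)
                    for x in xs if x >= x_min)
    return prob

# Figure 1-style curves: type I error marginal_reject_prob(60, p, p) and power
# marginal_reject_prob(60, p + 0.15, p), evaluated as p varies over (0, 1).
```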

5 Discussions

We have proposed design and analysis methods for two-stage randomized phase II clinical trials based on Fisher's exact test. While the binomial test by Jung [2] requires specification of the response probability of the control arm $p_2$, or conservatively controls the type I error rate at $p_2 = 0.5$, Fisher's exact test does not require specification of $p_2$. If $p_1$ and $p_2$ under $H_1$ can be accurately specified at the design stage, we can calculate the expected sample size under $H_0$ and the sample sizes $(n_1, n_2)$ of the minimax and optimal two-stage designs. Even if $p_1$ and $p_2$ are mis-specified at the design stage, Fisher's test accurately controls the type I error rate and maintains a higher power than the binomial test, especially if $p_1$ and $p_2$ are different from 50%.

For the two-stage Fisher's exact test, the rejection value of the first stage is fixed at $a_1 = 0$, but the rejection value of the second stage, $a$, is chosen depending on the total numbers of responders through the two stages, $(z_1, z_2)$. Hence, a design based on Fisher's exact test is specified by the sample sizes $(n_1, n_2)$ only, while Jung's [2] designs based on the binomial test are specified by the sample sizes and rejection values $(n_1, n_2, a_1, a)$.

The proposed method assumes an equal allocation between the two arms. However, extension to unbalanced allocations is straightforward. Let $n_{kl}$ denote the sample size for arm $k (= 1, 2)$ at stage $l (= 1, 2)$. Then we can find a design and conduct the statistical testing using
$$f_l(x_l \mid z_l, \theta) = \frac{\binom{n_{1l}}{x_l}\binom{n_{2l}}{z_l - x_l}\theta^{x_l}}{\sum_{i=m_{l-}}^{m_{l+}}\binom{n_{1l}}{i}\binom{n_{2l}}{z_l - i}\theta^{i}}$$
for $m_{l-} \le x_l \le m_{l+}$, and
$$g_l(z_l) = \sum_{x_l=m_{l-}}^{m_{l+}} \binom{n_{1l}}{x_l} p_1^{x_l} q_1^{n_{1l} - x_l} \binom{n_{2l}}{z_l - x_l} p_2^{z_l - x_l} q_2^{n_{2l} - z_l + x_l}$$
for $z_l = 0, \ldots, n_{1l} + n_{2l}$, where $m_{l-} = \max(0, z_l - n_{2l})$ and $m_{l+} = \min(z_l, n_{1l})$.

Even though the sample sizes $(n_1, n_2)$ are determined at the design stage, the realized sample sizes when the study is completed may be slightly different from the prespecified ones. This kind of discrepancy in sample sizes causes no issue for our method, because Fisher's exact test is performed conditioning on the realized sample sizes as well as on the total number of responders. The Fortran program to find the minimax and optimal designs is available from the author.
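A minimal sketch of the unbalanced-allocation building blocks, with illustrative function names, is given below; the design search and testing then proceed exactly as in the balanced case, with these in place of $f_l$ and $g_l$.

```python
from math import comb

def cond_pmf_unbalanced(x, z, n1, n2, theta):
    """f_l(x | z, theta) when the arms have n1 and n2 patients in the stage:
    X ~ Bin(n1, p1), Y ~ Bin(n2, p2), conditioned on X + Y = z."""
    m_lo, m_hi = max(0, z - n2), min(z, n1)
    if not m_lo <= x <= m_hi:
        return 0.0
    den = sum(comb(n1, i) * comb(n2, z - i) * theta ** i
              for i in range(m_lo, m_hi + 1))
    return comb(n1, x) * comb(n2, z - x) * theta ** x / den

def g_unbalanced(z, n1, n2, p1, p2):
    """Marginal pmf of Z = X + Y with unbalanced allocation (n1, n2)."""
    q1, q2 = 1 - p1, 1 - p2
    return sum(comb(n1, x) * p1 ** x * q1 ** (n1 - x) *
               comb(n2, z - x) * p2 ** (z - x) * q2 ** (n2 - z + x)
               for x in range(max(0, z - n2), min(z, n1) + 1))
```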

REFERENCES

1. Cannistra SA. Phase II trials in Journal of Clinical Oncology. Journal of Clinical Oncology 2009; 27(19).
2. Jung SH. Randomized phase II trials with a prospective control. Statistics in Medicine 2008; 27.
3. Fisher RA. The logic of inductive inference (with discussion). Journal of the Royal Statistical Society 1935; 98.
4. Simon R. Optimal two-stage designs for phase II clinical trials. Controlled Clinical Trials 1989; 10.
5. Simon R, Wittes RE, Ellenberg SS. Randomized phase II clinical trials. Cancer Treatment Reports 1985; 69.
6. Sargent DJ, Goldberg RM. A flexible design for multiple armed screening trials. Statistics in Medicine 2001; 20.
7. Thall PF, Simon R, Ellenberg SS. A two-stage design for choosing among several experimental treatments and a control in clinical trials. Biometrics 1989; 45.
8. Palmer CR. A comparative phase II clinical trials procedure for choosing the best of three treatments. Statistics in Medicine 1991; 10.
9. Steinberg SM, Venzon DJ. Early selection in a randomized phase II clinical trial. Statistics in Medicine 2002; 21.
10. Rubinstein LV, Korn EL, Freidlin B, Hunsberger S, Ivy SP, Smith MA. Design issues of randomized phase II trials and a proposal for phase II screening trials. Journal of Clinical Oncology 2005; 23(28).
11. Jung SH, Lee TY, Kim KM, George S. Admissible two-stage designs for phase II cancer clinical trials. Statistics in Medicine 2004; 23.

Table 1. Frequencies (and response probabilities in parentheses) of a single-stage randomized phase II trial

                       Arm 1            Arm 2            Total
Response   Yes         x (p_1)          y (p_2)          z
           No          n - x (q_1)      n - y (q_2)      2n - z
Total                  n                n                2n

Table 2. Single-stage designs, and minimax and optimal two-stage Fisher designs for (α*, 1 − β*) = (0.15, 0.8) and balanced allocation (r = 1). For each (p_2, p_1, θ) combination, the table reports the single-stage design (n, α, 1 − β) and, for the minimax and optimal two-stage designs, (n, n_1), α, 1 − β, and EN.

Table 3. Single-stage designs, and minimax and optimal two-stage Fisher designs for (α*, 1 − β*) = (0.15, 0.85) and balanced allocation (r = 1). For each (p_2, p_1, θ) combination, the table reports the single-stage design (n, α, 1 − β) and, for the minimax and optimal two-stage designs, (n, n_1), α, 1 − β, and EN.

Table 4. Single-stage designs, and minimax and optimal two-stage Fisher designs for (α*, 1 − β*) = (0.2, 0.8) and balanced allocation (r = 1). For each (p_2, p_1, θ) combination, the table reports the single-stage design (n, α, 1 − β) and, for the minimax and optimal two-stage designs, (n, n_1), α, 1 − β, and EN.

Table 5. Single-stage designs, and minimax and optimal two-stage Fisher designs for (α*, 1 − β*) = (0.2, 0.85) and balanced allocation (r = 1). For each (p_2, p_1, θ) combination, the table reports the single-stage design (n, α, 1 − β) and, for the minimax and optimal two-stage designs, (n, n_1), α, 1 − β, and EN.

Figure 1: Single-stage designs with n = 60 per arm: type I error rate and power for Fisher's test (solid lines) and the MaxTest (dotted lines), plotted against p_2. Panels: Δ = 0.15 and Δ = 0.2, each with α = 0.1, 0.15, and 0.2.

Figure 2: Two-stage designs with n_1 = n_2 = 30 per arm: type I error rate and power for Fisher's test (solid lines) and the MaxTest (dotted lines), plotted against p_2. Panels: Δ = 0.15 and Δ = 0.2, each with α = 0.1, 0.15, and 0.2.


More information

Monitoring clinical trial outcomes with delayed response: incorporating pipeline data in group sequential designs. Christopher Jennison

Monitoring clinical trial outcomes with delayed response: incorporating pipeline data in group sequential designs. Christopher Jennison Monitoring clinical trial outcomes with delayed response: incorporating pipeline data in group sequential designs Christopher Jennison Department of Mathematical Sciences, University of Bath http://people.bath.ac.uk/mascj

More information

ADVANCED PROGRAMME MATHEMATICS MARKING GUIDELINES

ADVANCED PROGRAMME MATHEMATICS MARKING GUIDELINES GRADE EXAMINATION NOVEMBER ADVANCED PROGRAMME MATHEMATICS MARKING GUIDELINES Time: hours marks These marking guidelines are prepared for use by examiners and sub-examiners, all of whom are required to

More information

University of California, Berkeley

University of California, Berkeley University of California, Berkeley U.C. Berkeley Division of Biostatistics Working Paper Series Year 2009 Paper 251 Nonparametric population average models: deriving the form of approximate population

More information

Harvard University. Harvard University Biostatistics Working Paper Series. Year 2016 Paper 208

Harvard University. Harvard University Biostatistics Working Paper Series. Year 2016 Paper 208 Harvard University Harvard University Biostatistics Working Paper Series Year 2016 Paper 208 Moving beyond the conventional stratified analysis to estimate an overall treatment efficacy with the data from

More information

SAMPLE SIZE ESTIMATION FOR SURVIVAL OUTCOMES IN CLUSTER-RANDOMIZED STUDIES WITH SMALL CLUSTER SIZES BIOMETRICS (JUNE 2000)

SAMPLE SIZE ESTIMATION FOR SURVIVAL OUTCOMES IN CLUSTER-RANDOMIZED STUDIES WITH SMALL CLUSTER SIZES BIOMETRICS (JUNE 2000) SAMPLE SIZE ESTIMATION FOR SURVIVAL OUTCOMES IN CLUSTER-RANDOMIZED STUDIES WITH SMALL CLUSTER SIZES BIOMETRICS (JUNE 2000) AMITA K. MANATUNGA THE ROLLINS SCHOOL OF PUBLIC HEALTH OF EMORY UNIVERSITY SHANDE

More information

Power assessment in group sequential design with multiple biomarker subgroups for multiplicity problem

Power assessment in group sequential design with multiple biomarker subgroups for multiplicity problem Power assessment in group sequential design with multiple biomarker subgroups for multiplicity problem Lei Yang, Ph.D. Statistical Scientist, Roche (China) Holding Ltd. Aug 30 th 2018, Shanghai Jiao Tong

More information

Optimal rejection regions for testing multiple binary endpoints in small samples

Optimal rejection regions for testing multiple binary endpoints in small samples Optimal rejection regions for testing multiple binary endpoints in small samples Robin Ristl and Martin Posch Section for Medical Statistics, Center of Medical Statistics, Informatics and Intelligent Systems,

More information

Maximally selected chi-square statistics for ordinal variables

Maximally selected chi-square statistics for ordinal variables bimj header will be provided by the publisher Maximally selected chi-square statistics for ordinal variables Anne-Laure Boulesteix 1 Department of Statistics, University of Munich, Akademiestrasse 1, D-80799

More information

Session 9 Power and sample size

Session 9 Power and sample size Session 9 Power and sample size 9.1 Measure of the treatment difference 9.2 The power requirement 9.3 Application to a proportional odds analysis 9.4 Limitations and alternative approaches 9.5 Sample size

More information

Some Properties of the Randomized Play the Winner Rule

Some Properties of the Randomized Play the Winner Rule Journal of Statistical Theory and Applications Volume 11, Number 1, 2012, pp. 1-8 ISSN 1538-7887 Some Properties of the Randomized Play the Winner Rule David Tolusso 1,2, and Xikui Wang 3 1 Institute for

More information