Adaptive designs to maximize power in clinical trials with multiple treatments


Technion - Israel Institute of Technology, January 17, 2013

The problem. A, B and C are three treatments with unknown probabilities of success p_A, p_B, p_C: A is placebo, B the standard treatment and C the new treatment. An experiment with n subjects (binary response) is performed in order to test the new therapy. Target: find a sequential design that maximizes the power of the relevant tests while keeping in mind the ethical goal of assigning only a small fraction of subjects to the inferior treatment.

The problem (cont.). Let ν_A(n), ν_B(n), ν_C(n) be the fractions of subjects (allocations) assigned to treatments A, B and C. Two steps: (1) Oracle: find the (asymptotically) optimal allocation for a fixed design. (2) Since the optimal allocation depends on unknown parameters, implementation requires a sequential design.

Preliminaries. At the end of the experiment, three one-sided hypotheses may be tested:
H_0: p_A = p_B versus H_1: p_A < p_B, (1)
H_0: p_A = p_C versus H_1: p_A < p_C, (2)
H_0: p_B = p_C versus H_1: p_B < p_C. (3)
We assume that p_A < p_B < p_C; two criteria of optimality are studied:
Correct selection: maximize the power (= probability that both are rejected) of (2) and (3).
Correct order: maximize the power of (1) and (3).

Preliminaries (cont.). At stage n, the estimators ˆp_i(n), i = A, B, C, are available, with
ˆp_i(n) distributed as Binomial(n ν_i(n), p_i) / (n ν_i(n)).
For example, we reject the null in (1) if (ˆp_B(n) − ˆp_A(n)) / V̂^{1/2} ≥ C_r, where V̂ is an estimator of the variance and C_r ≥ 0 is a critical value.
The power of correct selection is
P_CS(n) := P[ {(ˆp_C(n) − ˆp_A(n)) / V̂^{1/2} ≥ C_r} ∩ {(ˆp_C(n) − ˆp_B(n)) / V̂^{1/2} ≥ C_r} ].
The power of correct order is
P_CO(n) := P[ {(ˆp_B(n) − ˆp_A(n)) / V̂^{1/2} ≥ C_r} ∩ {(ˆp_C(n) − ˆp_B(n)) / V̂^{1/2} ≥ C_r} ].
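As a concrete illustration of these definitions, the following minimal Monte Carlo sketch (Python, not from the talk) estimates P_CS(n) and P_CO(n) for a fixed allocation. The per-comparison Wald variance estimate, the variance floor and the chosen critical value are illustrative assumptions; the slides only write a generic V̂.

import numpy as np

rng = np.random.default_rng(0)

def power_mc(p, nu, n, c_r=2.0, reps=20_000):
    """Monte Carlo estimates of P_CS(n) and P_CO(n) for a fixed allocation.

    p  = (p_A, p_B, p_C) true success probabilities
    nu = (nu_A, nu_B, nu_C) fixed allocation fractions (sum to 1)
    Each pairwise comparison uses its own Wald-type variance estimate
    (an assumption; the talk writes a single generic V-hat).
    """
    # per-arm sample sizes (rounding may not sum exactly to n)
    n_arm = np.maximum(1, np.round(n * np.asarray(nu)).astype(int))
    s = rng.binomial(n_arm, p, size=(reps, 3))   # successes, reps trials at once
    phat = s / n_arm

    def wald(i, j):
        # standardized difference (p̂_j − p̂_i) / V̂^{1/2} for H0: p_i = p_j
        v = phat[:, i] * (1 - phat[:, i]) / n_arm[i] + \
            phat[:, j] * (1 - phat[:, j]) / n_arm[j]
        v = np.maximum(v, 1e-12)                 # guard against zero variance
        return (phat[:, j] - phat[:, i]) / np.sqrt(v)

    a, b, c = 0, 1, 2
    p_cs = np.mean((wald(a, c) >= c_r) & (wald(b, c) >= c_r))   # reject (2) and (3)
    p_co = np.mean((wald(a, b) >= c_r) & (wald(b, c) >= c_r))   # reject (1) and (3)
    return p_cs, p_co

# Example: p = (0.3, 0.5, 0.8), equal allocation, n = 150
print(power_mc((0.3, 0.5, 0.8), (1/3, 1/3, 1/3), n=150))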

Step 1 - optimal allocation for a fixed design. Two different notions of optimality, related to Pitman and Bahadur efficiency, can be considered. In this case, Pitman's approach does not work (details are omitted). Bahadur's approach: P_CS(n) ≈ 1 − e^{−C_1(ν) n}, P_CO(n) ≈ 1 − e^{−C_2(ν) n}. We aim at finding the allocation that maximizes C_1(ν) or C_2(ν), depending on the criterion.

Two treatments; Azriel, Mandel and Rinott (2012). We would like to find allocations that maximize P{ (ˆp_B(n) − ˆp_A(n)) / V̂^{1/2} ≥ C_r }. The standard argument is based on the following normal approximation:
P_{p_A,p_B}{ (ˆp_B(n) − ˆp_A(n)) / V̂^{1/2} ≥ C_r }
= P_{p_A,p_B}{ (ˆp_B(n) − ˆp_A(n) − (p_B − p_A)) / V^{1/2} > (C_r V̂^{1/2} − (p_B − p_A)) / V^{1/2} }
≈ 1 − Φ( (C_r V̂^{1/2} − (p_B − p_A)) / V^{1/2} )
≈ 1 − Φ( C_r − (p_B − p_A) / V^{1/2} )
= 1 − Φ( C_r − n^{1/2} (p_B − p_A) / (nV)^{1/2} ).
However, the normal approximation is not valid here because V ≈ C/n: the error of the normal approximation is of order 1/√n, while the approximated probability itself is of order e^{−n}.

Two treatments; Neyman allocation. Neyman allocation, i.e.,
ν_Neyman = √(p_A(1−p_A)) / ( √(p_A(1−p_A)) + √(p_B(1−p_B)) ),
is recommended by many authors, since it minimizes
nV = p_A(1−p_A)/ν + p_B(1−p_B)/(1−ν)
(e.g., Brittain and Schlesselman (1982); Rosenberger et al. (2001); Hu and Rosenberger (2003); Bandyopadhyay and Bhattacharya (2006); Hu et al. (2006); Hu and Rosenberger (2006); Tymofyeyev et al. (2007); Biswas et al. (2010); Zhu and Hu (2010); Chambaz and van der Laan (2010)). For example, Hu et al. (2006) state that "If one wishes to maximize the power of the usual test comparing two binomial probabilities, it is well-known that Neyman allocation, the ratio of the standard deviations, should be used".

Two treatments (cont.). Theorem. Assume that p_B > p_A and lim_{n→∞} ν_A(n) = ν, lim_{n→∞} ν_B(n) = 1 − ν for some 0 < ν < 1; then for any C_r ≥ 0,
lim_{n→∞} (1/n) log[ 1 − P{ (ˆp_B(n) − ˆp_A(n)) / V̂^{1/2} ≥ C_r } ] = g(p_A, p_B, ν),
where g(p_A, p_B, ν) := log{ (1−p_A)^ν (1−p_B)^{1−ν} + p_A^ν p_B^{1−ν} }.
Let ν* = ν*(p_A, p_B) := arg min_ν g(p_A, p_B, ν) (g(p_A, p_B, ·) is convex in ν).

Two treatments; numerical illustration. Table: The optimal allocation ν* for different parameters, compared to Neyman allocation.
p_A    p_B    ν*      ν_Neyman
0.5    0.8    0.518   0.556
0.5    0.65   0.504   0.512
0.6    0.75   0.510   0.531
0.7    0.75   0.505   0.514
0.7    0.85   0.521   0.562
0.7    0.9    0.535   0.604
0.85   0.95   0.541   0.621
0.5    0.9    0.542   0.625
Conclusion: for two treatments the optimal allocation is quite close to 0.5, and therefore an adaptive design cannot significantly improve power in this case. We shall see that for three treatments the situation is different.
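To make the optimization concrete, the sketch below (Python, illustrative only, not the authors' code) computes ν* by numerically minimizing the function g defined on the previous slide, with ν the limiting fraction assigned to A, and compares it with Neyman allocation; it should reproduce values close to those in the table above.

import numpy as np
from scipy.optimize import minimize_scalar

def g(p_a, p_b, nu):
    """Large-deviation exponent of the type II error for a fixed design
    allocating a fraction nu to treatment A and 1 - nu to B."""
    return np.log((1 - p_a)**nu * (1 - p_b)**(1 - nu)
                  + p_a**nu * p_b**(1 - nu))

def optimal_nu(p_a, p_b):
    """nu* = argmin_nu g(p_a, p_b, nu); g is convex in nu."""
    res = minimize_scalar(lambda nu: g(p_a, p_b, nu),
                          bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x

def neyman_nu(p_a, p_b):
    sa, sb = np.sqrt(p_a * (1 - p_a)), np.sqrt(p_b * (1 - p_b))
    return sa / (sa + sb)

for p_a, p_b in [(0.5, 0.8), (0.7, 0.9), (0.85, 0.95)]:
    print(p_a, p_b, round(optimal_nu(p_a, p_b), 3), round(neyman_nu(p_a, p_b), 3))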

Large deviations result - three treatments. Theorem. Assume that lim_{n→∞} ν_A(n) = ν_A, lim_{n→∞} ν_B(n) = ν_B and lim_{n→∞} ν_C(n) = ν_C for certain ν_A, ν_B, ν_C > 0; then for any C_r ≥ 0,
lim_{n→∞} (1/n) log{1 − P_CS(n)} = H_CS(ν_A, ν_B, ν_C),
lim_{n→∞} (1/n) log{1 − P_CO(n)} = H_CO(ν_A, ν_B, ν_C).

Definitions. With g(p_1, p_2, ν) := log{ (1−p_1)^ν (1−p_2)^{1−ν} + p_1^ν p_2^{1−ν} } as before,
H_CS(ν_A, ν_B, ν_C) = max{ (ν_B + ν_C) g(p_B, p_C, ν_B / (ν_B + ν_C)), (ν_A + ν_C) g(p_A, p_C, ν_A / (ν_A + ν_C)) },
H_CO(ν_A, ν_B, ν_C) = max{ (ν_B + ν_C) g(p_B, p_C, ν_B / (ν_B + ν_C)), (ν_A + ν_B) g(p_A, p_B, ν_A / (ν_A + ν_B)) }.

Properties of the optimal allocation. We now focus on P_CS(n). Consider the case C_r = 0. Then
P_CS(n) = P[ {ˆp_C(n) ≥ ˆp_B(n)} ∩ {ˆp_C(n) ≥ ˆp_A(n)} ];
1 − P_CS(n) = P[ {ˆp_B(n) > ˆp_C(n)} ∪ {ˆp_A(n) > ˆp_C(n)} ]
= P{ˆp_B(n) > ˆp_C(n)} + P{ˆp_A(n) > ˆp_C(n)} − P[ {ˆp_B(n) > ˆp_C(n)} ∩ {ˆp_A(n) > ˆp_C(n)} ].
The large deviations rate is determined by the maximum of the rates of the first two probabilities in the last expression; this explains the form of H_CS, which is built from the two-treatment result.

Properties of the optimal allocation (cont.). The optimal allocation is arg min H_CS(ν_A, ν_B, ν_C):
ν^CS := arg min_{ν_A, ν_B, ν_C} max{ (ν_B + ν_C) g(p_B, p_C, ν_B / (ν_B + ν_C)), (ν_A + ν_C) g(p_A, p_C, ν_A / (ν_A + ν_C)) }.
By a minimax argument, under the optimal allocation the two terms in the curly brackets are equal: the same effort is made to distinguish between p_A and p_C as between p_B and p_C. The latter pair is harder to distinguish than the former, and therefore ν_A^CS is smaller than ν_B^CS and ν_C^CS.

Numerical examples. Table: Comparison of optimal allocations.
p_A   p_B   p_C    ν_A^CS  ν_B^CS  ν_C^CS    ν_A^CO  ν_B^CO  ν_C^CO
0.1   0.2   0.5    0.119   0.415   0.466     0.449   0.497   0.054
0.1   0.6   0.9    0.039   0.519   0.442     0.100   0.492   0.408
0.2   0.7   0.8    0.008   0.507   0.485     0.012   0.505   0.483
0.3   0.5   0.7    0.066   0.471   0.462     0.290   0.421   0.290
0.3   0.5   0.8    0.105   0.459   0.436     0.433   0.460   0.107
0.4   0.5   0.7    0.128   0.433   0.439     0.466   0.472   0.062
0.5   0.6   0.7    0.071   0.467   0.462     0.315   0.416   0.269
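For three treatments the same idea can be coded directly. The sketch below (Python, illustrative only) minimizes H_CS over a grid on the probability simplex and returns the allocation together with the rate −H_CS; for (p_A, p_B, p_C) = (0.1, 0.6, 0.9) it should land near (0.039, 0.519, 0.442) with rate about 0.065, in line with the table, and at the minimizer the two terms inside the max are nearly equal, as the minimax property on the previous slide suggests.

import numpy as np

def g(p1, p2, nu):
    return np.log((1 - p1)**nu * (1 - p2)**(1 - nu) + p1**nu * p2**(1 - nu))

def h_cs(p_a, p_b, p_c, nu_a, nu_b, nu_c):
    t_bc = (nu_b + nu_c) * g(p_b, p_c, nu_b / (nu_b + nu_c))
    t_ac = (nu_a + nu_c) * g(p_a, p_c, nu_a / (nu_a + nu_c))
    return np.maximum(t_bc, t_ac)

def optimal_cs_allocation(p_a, p_b, p_c, step=0.001):
    """Grid search for nu^CS = argmin H_CS over the probability simplex."""
    grid = np.arange(step, 1.0, step)
    na, nb = np.meshgrid(grid, grid, indexing="ij")
    nc = 1.0 - na - nb
    h = np.where(nc > 0, h_cs(p_a, p_b, p_c, na, nb, nc), np.inf)
    i, j = np.unravel_index(np.argmin(h), h.shape)
    return na[i, j], nb[i, j], nc[i, j], -h[i, j]   # allocation and rate -H_CS

print(optimal_cs_allocation(0.1, 0.6, 0.9))   # roughly (0.039, 0.519, 0.442), rate ~ 0.065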

Step 2 - adaptive design. The optimal allocations ν^CS and ν^CO depend on p_A, p_B, p_C, which are unknown, so implementing them requires an adaptive rule. Adaptive design: x_n ∈ {A, B, C} is the treatment assigned to the n-th subject and y_n denotes its binary response. x_n is measurable with respect to F_{n−1} := σ{(x_1, y_1), (x_2, y_2), ..., (x_{n−1}, y_{n−1})}, and we assume that y_n | F_{n−1} ~ Bernoulli(p_{x_n}).

Adaptive design - RCS. Consider the function
ν^CS(p_A, p_B, p_C) := ( ν_A^CS(p_A, p_B, p_C), ν_B^CS(p_A, p_B, p_C), ν_C^CS(p_A, p_B, p_C) ).
At each stage n of the experiment, the next subject is allocated according to
x_{n+1} = A with probability ˆν_A(n+1) := ν_A^CS(ˆp_A(n), ˆp_B(n), ˆp_C(n)),
x_{n+1} = B with probability ˆν_B(n+1) := ν_B^CS(ˆp_A(n), ˆp_B(n), ˆp_C(n)),
x_{n+1} = C with probability ˆν_C(n+1) := ν_C^CS(ˆp_A(n), ˆp_B(n), ˆp_C(n)),
under some truncation. We call this design RCS - random correct selection.
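A minimal simulation of the RCS rule is sketched below (Python, self-contained, illustrative only). The slides say only "under some truncation", so the round-robin burn-in, the clipping of the ˆp_i(n) and the floor ε on the allocation probabilities are assumed choices, and ν^CS is found by a coarse grid search rather than by the authors' method.

import numpy as np

rng = np.random.default_rng(1)

def g(p1, p2, nu):
    return np.log((1 - p1)**nu * (1 - p2)**(1 - nu) + p1**nu * p2**(1 - nu))

def nu_cs(p, step=0.01):
    """Coarse grid search for nu^CS(p_A, p_B, p_C) = argmin H_CS."""
    p_a, p_b, p_c = p
    grid = np.arange(step, 1.0, step)
    na, nb = np.meshgrid(grid, grid, indexing="ij")
    nc = 1.0 - na - nb
    t_bc = (nb + nc) * g(p_b, p_c, nb / (nb + nc))
    t_ac = (na + nc) * g(p_a, p_c, na / (na + nc))
    h = np.where(nc > 0, np.maximum(t_bc, t_ac), np.inf)
    i, j = np.unravel_index(np.argmin(h), h.shape)
    return np.array([na[i, j], nb[i, j], nc[i, j]])

def rcs_trial(p_true, n, eps=0.05, burn_in=15):
    """Simulate one RCS trial and return the realized allocation fractions."""
    successes = np.zeros(3)
    counts = np.zeros(3)
    for t in range(n):
        if t < burn_in:
            arm = t % 3                              # round-robin burn-in (assumed)
        else:
            p_hat = np.clip(successes / counts, 0.01, 0.99)
            probs = np.maximum(nu_cs(p_hat), eps)    # truncate away from zero
            probs /= probs.sum()
            arm = rng.choice(3, p=probs)
        counts[arm] += 1
        successes[arm] += rng.random() < p_true[arm]
    return counts / n

print(rcs_trial((0.3, 0.5, 0.8), n=300))   # should approach nu^CS(0.3, 0.5, 0.8)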

Properties of the RCS design. The properties of the RCS design are stated in the following theorem. Theorem. The RCS design satisfies:
I. ˆp_A(n), ˆp_B(n), ˆp_C(n) are strongly consistent, asymptotically normal and asymptotically independent;
II. (ˆν_A(n), ˆν_B(n), ˆν_C(n)) → ν^CS(p_A, p_B, p_C) almost surely as n → ∞;
III. (ν_A(n), ν_B(n), ν_C(n)) → ν^CS(p_A, p_B, p_C) almost surely as n → ∞.

The large deviation rate of the RCS. Consider the following randomized design. Let π_A, π_B, π_C > 0 be such that π_A + π_B + π_C = 1; at stage n of the experiment we choose x_n = i with probability π_i, for i = A, B, C. Theorem. For this design, for any C_r ≥ 0,
lim_{n→∞} (1/n) log{1 − P_CS(n)} = max{ R(p_A, p_C, π_A, π_C), R(p_B, p_C, π_B, π_C) },
where R(p_1, p_2, π_1, π_2) := max_{K ∈ [p_1, p_2]} log{ f(K, p_1, π_1) + f(K, p_2, π_2) + 1 − (π_1 + π_2) } and f(K, p, π) := π (p/K)^K ((1−p)/(1−K))^{1−K}.

Comment. It can be shown that R(p_1, p_2, π_1, π_2) ≥ (π_1 + π_2) g(p_1, p_2, π_1 / (π_1 + π_2)). The R term is minus the rate in a randomized design and the g term is minus the rate in a fixed design; thus, the probability of an error decreases to zero faster in a fixed design than in a randomized design. Designs such as Efron's biased coin, which reduce randomness, may therefore also increase power.
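This comparison can be checked numerically. The sketch below (Python, illustrative) evaluates R by maximizing over K ∈ [p_1, p_2] and compares it with the fixed-design term (π_1 + π_2) g(p_1, p_2, π_1/(π_1 + π_2)) for example values; the randomized-design quantity should be the larger (closer to zero) of the two.

import numpy as np
from scipy.optimize import minimize_scalar

def g(p1, p2, nu):
    return np.log((1 - p1)**nu * (1 - p2)**(1 - nu) + p1**nu * p2**(1 - nu))

def f(k, p, pi):
    return pi * (p / k)**k * ((1 - p) / (1 - k))**(1 - k)

def r_rate(p1, p2, pi1, pi2):
    """R(p1, p2, pi1, pi2) = max over K in [p1, p2] of
    log{ f(K,p1,pi1) + f(K,p2,pi2) + 1 - (pi1 + pi2) }."""
    obj = lambda k: -np.log(f(k, p1, pi1) + f(k, p2, pi2) + 1 - (pi1 + pi2))
    res = minimize_scalar(obj, bounds=(p1, p2), method="bounded")
    return -res.fun

# Example: comparing two arms when a third arm receives the remaining mass.
p1, p2, pi1, pi2 = 0.6, 0.9, 0.45, 0.45
randomized = r_rate(p1, p2, pi1, pi2)
fixed = (pi1 + pi2) * g(p1, p2, pi1 / (pi1 + pi2))
print(randomized, fixed)   # expect randomized >= fixed (both negative)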

Comment (cont.). Table: Comparison of rates between the optimal fixed design (optimal) and the randomized design with optimal proportions (rand).
                        correct selection        correct order
p_A    p_B    p_C     optimal    rand         optimal    rand
0.1    0.2    0.5     0.0464     0.0419       0.0096     0.0089
0.1    0.6    0.9     0.0652     0.0325       0.0610     0.0501
0.2    0.7    0.8     0.0067     0.0046       0.0067     0.0051
0.3    0.5    0.7     0.0199     0.0178       0.0147     0.0146
0.3    0.5    0.8     0.0472     0.0407       0.0190     0.0181
0.4    0.5    0.7     0.0186     0.0179       0.0048     0.0046
0.5    0.6    0.7     0.0051     0.0050       0.0036     0.0036

The large deviation rate of the RCS (cont.). The following theorem implies that the RCS design is as good as the randomized design with the optimal proportions. Theorem. The RCS design satisfies, for any C_r ≥ 0,
lim sup_{n→∞} (1/n) log{1 − P_CS(n)} ≤ max{ R(p_A, p_C, ν_A^CS, ν_C^CS), R(p_B, p_C, ν_B^CS, ν_C^CS) }.
That is, the lim sup is at most the limit obtained under the randomized design. The reverse inequality also appears to hold, but we could not find a formal proof of this claim.

Simulations. Figure: Estimates of P_CS(n) (panel a, RCS vs BD) and P_CO(n) (panel b, RCO vs BD) for p_A = 0.3, p_B = 0.5, p_C = 0.8, based on the mean of 2000 simulations. The solid line shows the RCS or RCO design and the dotted line shows BD; the thin lines indicate a 95% CI.

Simulations (cont.). Figure: Estimates of ν_A(n) (panel a, RCS vs BD; panel b, RCO vs BD) for p_A = 0.3, p_B = 0.5, p_C = 0.8, based on the mean of 2000 simulations. The solid line shows the RCS or RCO design and the dotted line shows BD; the thin lines indicate a 95% CI.

Extensions. We considered the Wald test; the same results hold for other tests, such as those based on the log relative risk or the arcsine transformation. The generalization to more than three treatments is straightforward: for four treatments, for example, the maximum is computed over three terms rather than two. Similar results can be obtained for normal responses, as well as for other distributions possessing a moment generating function.

Normal responses. Table: Comparison of optimal allocations for normal responses.
μ_A  μ_B  μ_C   σ²_A  σ²_B  σ²_C   ν_A^CS  ν_B^CS  ν_C^CS   ν_A^CO  ν_B^CO  ν_C^CO
1    2    3     1     1     1      0.07    0.46    0.47     0.29    0.41    0.29
1    2    3     2     1     1      0.12    0.43    0.44     0.42    0.37    0.21
1    2    3     4     1     1      0.22    0.38    0.40     0.56    0.31    0.14
1    2    3     3     2     1      0.12    0.51    0.37     0.44    0.41    0.15
1    2    3     1     2     3      0.03    0.44    0.54     0.15    0.41    0.44

Implication. Strand et al. (2004) analyze two double-blind randomized trials comparing leflunomide (treatment) with placebo and with sulfasalazine or methotrexate (standard) in active rheumatoid arthritis. In both trials the patients were randomized in a 2:3:3 ratio to placebo, standard and treatment. The primary outcome measure in both trials was the ACR20 response after several months; this criterion is binary and is defined as 20% or more improvement in at least four out of six selected measures.

Implication (cont.). Three-arm clinical trials of this kind are referred to as active control equivalence trials in Liu (2000). To establish equivalence it is required that the following null hypotheses be rejected:
H_0: p_A = p_B versus H_1: p_A < p_B,
H_0: p_A = p_C versus H_1: p_A < p_C,
H_0: p_C ≤ p_B − δ versus H_1: p_B − δ < p_C,
for some small δ > 0 (e.g., δ = 0.05).

Implication (cont.). Table: Optimal allocation for active control equivalence trials.
p_A     p_B     p_C     ν*_A    ν*_B    ν*_C
0.29    0.55    0.57    0.017   0.492   0.490
0.3     0.5     0.6     0.150   0.440   0.410
0.35    0.55    0.6     0.065   0.471   0.464
0.25    0.6     0.58    0.002   0.499   0.499
Actually used:          0.25    0.375   0.375
The optimal allocation is both more powerful and more ethical than the allocation actually used.

Discussion. For two treatments, adaptive designs can lead, at best, to practically negligible improvements in power. For three treatments the situation is different: an adaptive rule can be optimal for power maximization and, in certain situations, also assigns only a small fraction of subjects to the placebo treatment. The optimal allocations are generally both more powerful and more ethical than standard allocations.

Discussion (cont.). Consider three statisticians:
1. The first knows the true parameters but can use this knowledge only for designing the experiment, not for inference.
2. The second also knows the parameters but may use only randomized designs, that is, at each stage he assigns x_n = i with probability π_i, for i = A, B, C.
3. The third does not know the parameters and uses an adaptive design.
We showed that, in terms of the large deviation rate of an error, the first statistician can outperform the others, while the second and third statisticians perform equally well.

The end.

Bibliography
Azriel D., Mandel M., Rinott Y. (2012). Optimal allocation to maximize power of two-sample tests for binary response. Biometrika, 99, 101-113.
Bandyopadhyay U., Bhattacharya R. (2006). Adaptive allocation and failure saving in randomised clinical trials. Journal of Biopharmaceutical Statistics, 16, 817-829.
Biswas A., Mandal S., Bhattacharya R. (2010). Multi-treatment optimal response-adaptive designs for phase III clinical trials. Journal of the Korean Statistical Society, in press.
Brittain E., Schlesselman J.J. (1982). Optimal allocation for the comparison of proportions. Biometrics, 38, 1003-1009.
Chambaz A., van der Laan M.J. (2010). Targeting the optimal design in randomized clinical trials with binary outcomes and no covariate. U.C. Berkeley Division of Biostatistics Working Paper Series, 258.

Bibliography (cont.)
Hu F., Rosenberger W.F., Zhang L. (2006). Asymptotically best response-adaptive randomization procedures. Journal of Statistical Planning and Inference, 136, 1911-1922.
Hu F., Rosenberger W.F. (2006). The Theory of Response-Adaptive Randomization in Clinical Trials. Wiley, New York.
Hu F., Rosenberger W.F. (2003). Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. Journal of the American Statistical Association, 98, 671-678.
Liu J.P. (2000). Therapeutic equivalence. In: Encyclopedia of Biopharmaceutical Statistics. Marcel Dekker, New York.

Bibliography (cont.)
Rosenberger W.F., Stallard N., Ivanova A., Harper C.N., Ricks M.L. (2001). Optimal adaptive designs for binary response trials. Biometrics, 57, 909-913.
Strand V., Cohen S., Crawford B., Smolen J.S., Scott D.L. (2004). Patient-reported outcomes better discriminate active treatment from placebo in randomized controlled trials in rheumatoid arthritis. Rheumatology, 43, 640-647.
Tymofyeyev Y., Rosenberger W.F., Hu F. (2007). Implementing optimal allocation in sequential binary response experiments. Journal of the American Statistical Association, 102, 224-234.
Zhu H., Hu F. (2010). Sequential monitoring of response-adaptive randomized clinical trials. The Annals of Statistics, 38, 2218-2241.