Prospective Graduate Students / Postdocs
This faculty member is currently not looking for graduate students or Postdoctoral Fellows. Please do not contact the faculty member with any such requests.
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
Pragmatic trials are randomized controlled trials (RCTs) conducted in usual health-care settings to evaluate the effectiveness of treatments. These trials often require more complex methodologies to address real-world issues and to ensure validity and efficiency. Inspired by three real-world examples, in this dissertation we developed novel design and analysis methodologies to enhance the efficiency of pragmatic trials.

First, we showed that in a stepped-wedge design with unequal cluster sizes, the post-randomization attained power may differ substantially from the pre-randomization expected power. Allocations with a large treatment-vs-time-period correlation yield lower attained power. The risk of obtaining an allocation with inadequate attained power increases with lower intra-cluster correlation coefficients (ICCs), a higher coefficient of variation (CV) of the cluster sizes, and smaller numbers of clusters. Trialists can reduce this risk by restricting the randomization algorithm to exclude allocations with low attained power. We extended the methodology to other cluster-randomized designs and to multiple types of outcomes, and implemented these methods in an R package so that trial designers can apply them to their own trials.

Second, we developed a prototype online elicitation app to assist experts in eliciting informative joint prior distributions to reduce the sample size in Bayesian clinical trials. The app implemented three different approaches, two novel and one pre-existing, to eliciting the joint prior distribution. Usability testers reported satisfaction with the user interface but suggested that additional explanation of the meaning of the elicitation parameters would be helpful.

Last, we showed that in a trial comparing three or more treatment durations with a time-to-event outcome, re-casting the primary hypotheses from a pragmatic perspective and analyzing them with appropriate time-varying Cox proportional hazards models yields results that are more interpretable and precise than those obtained from the conventional pair-wise comparison of arms. Simulation results showed that, with the same number of patients, the new approach significantly increased statistical power, typically by more than 10%. In addition, we developed a novel sample size reallocation algorithm to balance the powers of the multiple primary hypothesis tests.
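As a rough illustration of the restricted-randomization idea described in the first part of this abstract, the R sketch below screens candidate stepped-wedge allocations by their cluster-size-weighted treatment-vs-time correlation, which the abstract identifies as a driver of low attained power. This is not the dissertation's R package (the package is not named here); the design parameters, the cutoff, and the use of the correlation as a screening proxy for attained power are all illustrative assumptions.

```r
## Illustrative restricted randomization for a stepped-wedge design:
## assign clusters to crossover steps, score each candidate allocation,
## and discard allocations with a high treatment-vs-time correlation.
set.seed(2024)

n_clusters   <- 12
n_periods    <- 5                                  # periods 1..5; crossover steps at periods 2..5
cluster_size <- c(20, 35, 15, 80, 25, 40, 10, 60, 30, 45, 22, 18)  # unequal sizes (hypothetical)
step_labels  <- rep(2:n_periods, length.out = n_clusters)          # 3 clusters cross over per step

## Cluster-size-weighted correlation between the treatment indicator and
## calendar time, computed over all cluster-period cells.
trt_time_cor <- function(start_period) {
  cells <- expand.grid(cluster = seq_len(n_clusters), period = seq_len(n_periods))
  cells$trt <- as.numeric(cells$period >= start_period[cells$cluster])
  w <- cluster_size[cells$cluster]
  wcov <- function(x, y) {
    sum(w * (x - weighted.mean(x, w)) * (y - weighted.mean(y, w))) / sum(w)
  }
  wcov(cells$trt, cells$period) /
    sqrt(wcov(cells$trt, cells$trt) * wcov(cells$period, cells$period))
}

## Sample candidate allocations and keep those with the smaller correlations --
## a screening proxy; the dissertation restricts on the attained power itself.
candidates <- replicate(2000, sample(step_labels), simplify = FALSE)
cors       <- vapply(candidates, trt_time_cor, numeric(1))
eligible   <- candidates[cors <= quantile(cors, 0.5)]

chosen <- eligible[[sample(length(eligible), 1)]]
chosen   # crossover (treatment start) period for each cluster
```

In practice one would compute the attained power of each candidate allocation directly, given the assumed ICC, cluster sizes, and effect size, and restrict the randomization on that quantity, as the abstract describes.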
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
Stepped-wedge cluster randomized trials are characterized by the sequential transition of clusters from control to intervention. Most studies that have explored the statistical properties of such trials relied on asymptotic theory and/or assumed equal cluster sizes. In practice, the sample size is often limited and cluster sizes are subject to variation. The impact of unequal cluster sizes has been studied in the context of parallel-arm cluster randomized trials, but it is unclear whether those results generalize to stepped-wedge trials. Moreover, when cluster sizes are unbalanced, statistical performance varies across allocations.

We conducted simulations for continuous and binary outcomes to evaluate the performance of various analytical approaches for cross-sectional stepped-wedge trials with limited sample size and unequal cluster sizes. We explored methods commonly used for parallel-arm cluster randomized trials, including the Wald test, F-tests with degrees-of-freedom approximations (Hemming et al. and Satterthwaite), the Kenward-Roger approximation, and the bootstrap.

Type I error was generally inflated with the Wald test for the continuous outcome, while the Kenward-Roger approximation was overly conservative for both binary and continuous outcomes. Bootstrapping and the F-tests with degrees-of-freedom approximations generally helped reduce the Type I error inflation. With the Wald test, bias in the treatment effect estimate and in its standard error was minimal for continuous outcomes and moderate for binary outcomes; these biases were also somewhat correlated with the correlation between treatment and time and with the imbalance of treatment. When adjusted for differences in Type I error, the tests were similarly powerful, with minimal bias in the treatment effect estimates. We provide general recommendations for choosing an analysis approach given the parameter values of the design.
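To make the analysis choices above concrete, here is a minimal R sketch, with entirely hypothetical data and parameter values, of fitting a linear mixed model to a simulated cross-sectional stepped-wedge trial with unequal cluster sizes and applying the Satterthwaite and Kenward-Roger small-sample corrections via the lmerTest package. It is not the thesis's simulation code, and the Hemming et al. degrees-of-freedom approximation and the bootstrap are not shown.

```r
library(lmerTest)   # wraps lme4 and adds Satterthwaite / Kenward-Roger df corrections
                    # (Kenward-Roger additionally requires the pbkrtest package)

set.seed(1)
n_clusters <- 9
n_periods  <- 4
start <- rep(2:n_periods, each = 3)                   # 3 clusters cross over per step
sizes <- sample(10:60, n_clusters, replace = TRUE)    # unequal cluster sizes (hypothetical)

## Build one row per cluster-period cell, then expand to the individual level.
cells <- expand.grid(cluster = seq_len(n_clusters), period = seq_len(n_periods))
cells$trt <- as.numeric(cells$period >= start[cells$cluster])
d <- cells[rep(seq_len(nrow(cells)), sizes[cells$cluster]), ]

## Hypothetical data-generating model: treatment effect 0.3, linear time trend,
## a cluster-level random effect, and individual-level error.
d$y <- 0.3 * d$trt + 0.1 * d$period +
       rnorm(n_clusters, sd = 0.5)[d$cluster] +
       rnorm(nrow(d), sd = 1)

fit <- lmer(y ~ trt + factor(period) + (1 | cluster), data = d)

summary(fit)                          # Satterthwaite denominator df (lmerTest default)
summary(fit, ddf = "Kenward-Roger")   # Kenward-Roger small-sample correction
```

A naive Wald test would compare the estimated treatment effect to its standard error against a standard normal reference; the corrected summaries instead use approximate denominator degrees of freedom, which is consistent with the thesis's finding that such approximations (and bootstrapping) help control the Type I error inflation seen with the Wald test in small samples.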