Similar Literature: 20 similar documents found
1.
Typically, clusters and individuals in cluster randomized trials are allocated across treatment conditions in a balanced fashion. This is optimal under homogeneous costs and outcome variances. However, both the costs and the variances may be heterogeneous. An unbalanced allocation is then more efficient, but impractical because the outcome variance is unknown at the design stage of a study. A practical alternative to the balanced design could be a design that is optimal for known, possibly heterogeneous, costs and homogeneous variances. When costs and variances are both heterogeneous, however, both designs lose efficiency compared with the optimal design. Focusing on cluster randomized trials with a 2 × 2 design, we evaluate the relative efficiency of the balanced design and of the design optimal for heterogeneous costs and homogeneous variances, relative to the optimal design. We consider two heterogeneity scenarios (in one, two treatment arms have small and two have large costs or variances; in the other, one arm has small, two have intermediate, and one has large costs or variances) at each design level (cluster, individual, and both). Within these scenarios, we compute the relative efficiency of the two designs as a function of the extent of heterogeneity of the costs and variances and of the congruence (the cheapest treatment has the smallest variance) or incongruence (the cheapest treatment has the largest variance) between costs and variances. We find that the design optimal for heterogeneous costs and homogeneous variances is generally more efficient than the balanced design, and we illustrate this theory on a trial that examines methods to reduce radiological referrals from general practices. Copyright © 2016 John Wiley & Sons, Ltd.
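
A minimal sketch of this comparison, assuming the treatment-contrast variance is proportional to the sum of sigma_i^2/n_i over the four arms under a budget constraint sum(c_i * n_i) = B, so the optimal allocation is n_i proportional to sigma_i/sqrt(c_i); all costs and variances below are invented and illustrate the incongruent scenario, not the paper's actual computations:

```python
# Hypothetical illustration: relative efficiency (RE) of the balanced design
# and of the design optimal for heterogeneous costs but homogeneous variances,
# versus the fully optimal design, in a four-arm (2 x 2) trial.
import numpy as np

def contrast_variance(n, sigma2):
    return np.sum(sigma2 / n)

def allocate(B, costs, weights):
    """Spend budget B across arms with n_i proportional to `weights`."""
    return B * weights / np.sum(weights * costs)

B = 1000.0
costs  = np.array([10.0, 10.0, 40.0, 40.0])  # two cheap, two expensive arms
sigma2 = np.array([3.0, 3.0, 1.0, 1.0])      # incongruent: cheap arms most variable

designs = {
    "balanced":          np.ones(4),
    "costs considering": 1.0 / np.sqrt(costs),
    "optimal":           np.sqrt(sigma2 / costs),
}
var = {name: contrast_variance(allocate(B, costs, w), sigma2)
       for name, w in designs.items()}
for name in ("balanced", "costs considering"):
    print(f"RE of {name} design: {var['optimal'] / var[name]:.3f}")
```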

2.
At the design stage of a study, it is crucial to compute the sample size needed to estimate the treatment effect with maximum precision and power. The optimal design depends on the costs, which may be known at the design stage, and on the outcome variances, which are unknown. A balanced design, optimal for homogeneous costs and variances, is typically used. An alternative to the balanced design is a design optimal for the known and possibly heterogeneous costs and homogeneous variances, called the costs considering design. Both designs suffer from loss of efficiency compared with optimal designs for heterogeneous costs and variances. For 2 × 2 multicenter trials, we compute the relative efficiency of the balanced and the costs considering designs, relative to the optimal designs. We consider two cost and variance heterogeneity scenarios (in one scenario, two treatment conditions have small and two have large costs and variances; in the other scenario, one treatment condition has small, two have intermediate, and one has large costs and variances). Within these scenarios, we examine the relative efficiency of the balanced design and of the costs considering design as a function of the extents of heterogeneity of the costs and of the variances and of their congruence (congruent when the cheapest treatment has the smallest variance, incongruent when the cheapest treatment has the largest variance). We find that the costs considering design is generally more efficient than the balanced design, and we illustrate this theory on a 2 × 2 multicenter trial on lifestyle improvement of patients in general practices.
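
A hypothetical companion to the sketch under item 1, tracing the relative efficiency of both designs as the extent of variance heterogeneity grows, in congruent and incongruent scenarios (same invented variance model, budget normalised to 1):

```python
# rel_eff returns Var(optimal) / Var(design) for allocation weights w,
# with n_i proportional to w_i under a unit budget.
import numpy as np

def rel_eff(costs, sigma2, weights):
    def var(w):
        n = w / np.sum(w * costs)
        return np.sum(sigma2 / n)
    return var(np.sqrt(sigma2 / costs)) / var(weights)

costs = np.array([10.0, 10.0, 40.0, 40.0])
for r in (1.0, 2.0, 4.0):                 # variance ratio, largest : smallest
    cong = np.array([1.0, 1.0, r, r])     # congruent: cheap arms least variable
    incong = cong[::-1].copy()            # incongruent: cheap arms most variable
    for name, s2 in (("congruent", cong), ("incongruent", incong)):
        print(f"r={r:.0f} {name:11s} "
              f"RE(balanced)={rel_eff(costs, s2, np.ones(4)):.3f} "
              f"RE(costs considering)={rel_eff(costs, s2, 1/np.sqrt(costs)):.3f}")
```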

3.
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test of the cost-effectiveness of an intervention. Copyright © 2014 John Wiley & Sons, Ltd.
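
A minimal sketch of the maximin idea in a deliberately simplified setting, a single outcome with one unknown ICC rather than the paper's cost-effectiveness model with ICCs for costs and effects and a variance ratio; the budget, costs and ICC range are invented:

```python
# Hypothetical sketch of a maximin design: among candidate designs
# (k clusters of size m, cost k * (c + s * m) <= B), pick the one maximising
# the worst-case relative efficiency over a plausible ICC range. The
# treatment-effect variance is proportional to (1 + (m - 1) * rho) / (k * m).
import numpy as np

B, c, s = 10_000.0, 200.0, 20.0       # budget, cost per cluster, cost per person
rhos = np.linspace(0.01, 0.10, 10)    # plausible ICC range

def variance(k, m, rho):
    return (1 + (m - 1) * rho) / (k * m)

designs = [(int(B // (c + s * m)), m) for m in range(2, 101)]
designs = [(k, m) for k, m in designs if k >= 2]
best = [min(variance(k, m, rho) for k, m in designs) for rho in rhos]  # per-rho optimum

def worst_case_re(k, m):
    return min(b / variance(k, m, rho) for b, rho in zip(best, rhos))

k_star, m_star = max(designs, key=lambda d: worst_case_re(*d))
print(f"maximin design: {k_star} clusters of size {m_star}, "
      f"worst-case RE = {worst_case_re(k_star, m_star):.3f}")
```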

4.
In meta-analyses where a continuous outcome is measured with different scales or standards, the summary statistic is the mean difference standardised to a common metric with a common variance. Where the trial treatment is delivered by a person, nesting of patients within care providers leads to clustering that may interact with, or be limited to, one or more of the arms. Assuming a common standardising variance is then less tenable, and options for scaling the mean difference become numerous. Metrics suggested for cluster-randomised trials are the within, between and total variances and, for unequal variances, the control arm or pooled variances. We consider summary measures and individual-patient-data methods for meta-analysing standardised mean differences from trials with two-level nested clustering, relaxing the independence and common variance assumptions and allowing sample sizes to differ across arms. A general metric is proposed with comparable interpretation across designs. The relationship between the method of standardisation and the choice of model is explored, allowing for bias in the estimator and imprecision in the standardising metric. A meta-analysis of trials of counselling in primary care motivated this work. Assuming equal clustering effects across trials, the proposed random-effects meta-analysis model gave a pooled standardised mean difference of −0.27 (95% CI −0.45 to −0.08) using summary measures and −0.26 (95% CI −0.45 to −0.09) with the individual patient data. While treatment-related clustering has rarely been taken into account in trials, it is now recommended that it be considered in trials and meta-analyses. This paper contributes to the uptake of this guidance. Copyright © 2016 John Wiley & Sons, Ltd.
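
As a toy illustration of standardising to a common metric under clustering (my construction, not the paper's general metric): with between-cluster variance sb2 and within-cluster variance sw2, the total SD sqrt(sb2 + sw2) yields an SMD comparable between clustered and non-clustered designs:

```python
import math

def smd_total(mean_trt, mean_ctl, sb2, sw2):
    """Mean difference standardised by the total SD sqrt(sb2 + sw2)."""
    return (mean_trt - mean_ctl) / math.sqrt(sb2 + sw2)

# Invented numbers, not the counselling meta-analysis data:
print(f"{smd_total(mean_trt=14.2, mean_ctl=17.1, sb2=4.0, sw2=96.0):.2f}")  # -0.29
```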

5.
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
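
A sketch of the re-estimation step, assuming the standard design-effect formula 1 + (m − 1)ρ and a normal-approximation two-sample sample size; the planned and pilot ICC values are invented:

```python
import math
from scipy import stats

def n_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm n for a two-sample comparison (normal approximation)."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return 2 * ((za + zb) * sd / delta) ** 2

def clusters_per_arm(delta, sd, m, rho):
    deff = 1 + (m - 1) * rho                 # design effect for cluster size m
    return math.ceil(n_individual(delta, sd) * deff / m)

m = 20
print(clusters_per_arm(0.4, 1.0, m, rho=0.05))  # at the planning stage
print(clusters_per_arm(0.4, 1.0, m, rho=0.10))  # recalculated after the internal pilot
```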

6.
Nesting of patients within care providers in trials of physical and talking therapies creates an additional level within the design. The statistical implications of this are analogous to those of cluster randomised trials, except that the clustering effect may interact with treatment and can be restricted to one or more of the arms. The statistical model that is recommended at the trial level includes a random effect for the care provider but allows the provider and patient level variances to differ across arms. Evidence suggests that, while potentially important, such within-trial clustering effects have rarely been taken into account in trials and do not appear to have been considered in meta-analyses of these trials. This paper describes summary measures and individual-patient-data methods for meta-analysing absolute mean differences from randomised trials with two-level nested clustering effects, contrasting fixed and random effects meta-analysis models. It extends methods for incorporating trials with unequal variances and homogeneous clustering to allow for between-arm and between-trial heterogeneity in intra-class correlation coefficient estimates. The work is motivated by a meta-analysis of trials of counselling in primary care, where the control is no counselling and the outcome is the Beck Depression Inventory. Assuming equal counsellor intra-class correlation coefficients across trials, the recommended random-effects heteroscedastic model gave a pooled absolute mean difference of −2.53 (95% CI −5.33 to 0.27) using summary measures and −2.51 (95% CI −5.35 to 0.33) with the individual-patient-data. Pooled estimates were consistently below a minimally important clinical difference of four to five points on the Beck Depression Inventory. Copyright © 2014 John Wiley & Sons, Ltd.
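
A minimal sketch of the summary-measures route, assuming each trial's mean difference and variance have already been adjusted for clustering (e.g. the variance inflated by a design effect in the counselled arm); the pooling itself is standard DerSimonian-Laird, and the four studies below are invented:

```python
import numpy as np

def dersimonian_laird(y, v):
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wr = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(wr * y) / np.sum(wr)
    return pooled, np.sqrt(1.0 / np.sum(wr))

y = np.array([-1.8, -4.0, -2.3, -0.9])          # cluster-adjusted mean differences
v = np.array([2.1, 3.0, 1.6, 2.4])              # cluster-adjusted variances
est, se = dersimonian_laird(y, v)
print(f"pooled MD = {est:.2f} (95% CI {est - 1.96*se:.2f} to {est + 1.96*se:.2f})")
```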

7.
As the costs of medical care increase, more studies are evaluating cost in addition to effectiveness of treatments. Cost-effectiveness analyses in randomized clinical trials have typically been conducted only at the end of follow-up. However, cost-effectiveness may change over time. We therefore propose a nonparametric estimator to assess the incremental cost-effectiveness ratio over time. We also derive the asymptotic variance of our estimator and present a formulation of Fieller-based simultaneous confidence bands. Simulation studies demonstrate the performance of our point estimators, variance estimators, and confidence bands. We also illustrate our methods using data from a randomized clinical trial, the second Multicenter Automatic Defibrillator Implantation Trial. This trial studied the effects of implantable cardioverter-defibrillators on patients at high risk for cardiac arrhythmia. Results show that our estimator performs well in large samples, indicating promising future directions in the field of cost-effectiveness. Copyright © 2015 John Wiley & Sons, Ltd.
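
The Fieller construction for a ratio is standard, so a sketch of a single Fieller interval for an ICER (at one time point, not the paper's simultaneous bands over time) looks like this; all inputs are invented:

```python
# Fieller CI for the ratio dC/dE (incremental cost over incremental effect),
# given their variances and covariance. Returns None when the interval is
# unbounded (the effect is not clearly nonzero).
import math

def fieller_ci(dC, dE, var_C, var_E, cov_CE, z=1.96):
    a = dE**2 - z**2 * var_E
    b = 2 * (z**2 * cov_CE - dC * dE)
    c = dC**2 - z**2 * var_C
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None
    lo = (-b - math.sqrt(disc)) / (2 * a)
    hi = (-b + math.sqrt(disc)) / (2 * a)
    return lo, hi

print(fieller_ci(dC=12000.0, dE=0.40, var_C=4.0e6, var_E=0.01, cov_CE=50.0))
```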

8.
An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs are the case-control design for a binary response, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain underdeveloped. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association between polychlorinated biphenyl exposure and hearing loss in children born to the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.
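
A sketch of the sampling scheme itself (not the semiparametric empirical-likelihood estimator): a simple-random-sample component supplemented with subjects whose continuous outcome falls in the extreme tails, so informative subjects are over-represented at fixed cost; cohort size, sample sizes and cut points are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=5000)                      # outcomes observed on the cohort

srs = rng.choice(5000, size=300, replace=False)       # simple-random-sample part
lo, hi = np.quantile(y, [0.05, 0.95])
tails = np.flatnonzero((y < lo) | (y > hi))           # outcome-dependent part
selected = np.union1d(srs, tails)
print(f"exposure measured on {selected.size} of 5000 subjects")
```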

9.
Evaluating biomarkers in epidemiological studies can be expensive and time consuming. Many investigators use techniques such as random sampling or pooling biospecimens in order to cut costs and save time on experiments. Commonly, analyses based on pooled data are strongly restricted by distributional assumptions that are challenging to validate because of the pooled biospecimens. Random sampling provides data that can be easily analyzed. However, random sampling methods are not optimal cost-efficient designs for estimating means. We propose and examine a cost-efficient hybrid design that involves taking a sample of both pooled and unpooled data in an optimal proportion in order to efficiently estimate the unknown parameters of the biomarker distribution. In addition, we find that this design can be used to estimate and account for different types of measurement and pooling error, without the need to collect validation data or repeated measurements. We show an example where application of the hybrid design leads to minimization of a given loss function based on variances of the estimators of the unknown parameters. Monte Carlo simulation and biomarker data from a study on coronary heart disease are used to demonstrate the proposed methodology. Published in 2010 by John Wiley & Sons, Ltd.
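
A toy version of the hybrid idea under strong simplifications, no assay or pooling error and a known pool size p, so a pooled assay is distributed as the mean of p specimens; the estimator below is plain inverse-variance weighting, not the paper's optimal-proportion design:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, p = 10.0, 2.0, 5
x_ind  = rng.normal(mu, sigma, size=40)               # 40 individual assays
x_pool = rng.normal(mu, sigma / np.sqrt(p), size=40)  # 40 pools of p specimens

s2 = np.var(x_ind, ddof=1)                    # sigma^2 from the unpooled data
w_ind, w_pool = 1.0 / s2, p / s2              # inverse-variance weights
mu_hat = (w_ind * x_ind.sum() + w_pool * x_pool.sum()) / (40 * (w_ind + w_pool))
print(f"mu_hat = {mu_hat:.2f}, sigma2_hat = {s2:.2f}")
```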

10.
As medical expenses continue to rise, methods to properly analyze cost outcomes are becoming of increasing relevance when seeking to compare average costs across treatments. Inverse probability weighted regression models have been developed to address the challenge of cost censoring in order to identify intent-to-treat effects (i.e., to compare mean costs between groups on the basis of their initial treatment assignment, irrespective of any subsequent changes to their treatment status). In this paper, we describe a nested g-computation procedure that can be used to compare mean costs between two or more time-varying treatment regimes. We highlight the relative advantages and limitations of this approach when compared with existing regression-based models. We illustrate the utility of this approach as a means to inform public policy by applying it to a simulated data example motivated by costs associated with cancer treatments. Simulations confirm that inference regarding intent-to-treat effects versus the joint causal effects estimated by the nested g-formula can lead to markedly different conclusions regarding differential costs. Therefore, it is essential to prespecify the desired target of inference when choosing between these two frameworks. The nested g-formula should be considered as a useful, complementary tool to existing methods when analyzing cost outcomes.
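
A minimal sketch of the g-computation steps for two time-varying treatment decisions with one intermediate covariate, using linear models and simulated uncensored costs; the paper's nested procedure additionally handles cost censoring, and all variable names and coefficients here are invented:

```python
# G-computation for mean cost under "always treat" (a1=a2=1) vs "never treat"
# (a1=a2=0), where covariate L1 is affected by the first treatment.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
A1 = rng.integers(0, 2, n).astype(float)             # first-line treatment
L1 = 1.0 + 0.5 * A1 + rng.normal(size=n)             # covariate affected by A1
A2 = (L1 + rng.normal(size=n) > 1.5).astype(float)   # second-line depends on L1
Y  = 5 + 2 * A1 + 3 * A2 + 1.5 * L1 + rng.normal(size=n)  # total cost

# Step 1: fit models for L1 | A1 and for Y | A1, L1, A2.
bL = np.linalg.lstsq(np.column_stack([np.ones(n), A1]), L1, rcond=None)[0]
sdL = np.std(L1 - (bL[0] + bL[1] * A1))              # residual SD of the L1 model
bY = np.linalg.lstsq(np.column_stack([np.ones(n), A1, L1, A2]), Y, rcond=None)[0]

# Step 2: standardise -- simulate L1 under each regime, average predicted Y.
def regime_mean(a1, a2, draws=200_000):
    l1 = bL[0] + bL[1] * a1 + rng.normal(0.0, sdL, draws)
    return np.mean(bY[0] + bY[1] * a1 + bY[2] * l1 + bY[3] * a2)

print(f"mean cost, always treat: {regime_mean(1, 1):.2f}")   # ~12.25
print(f"mean cost, never treat:  {regime_mean(0, 0):.2f}")   # ~6.50
```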

11.

Aim

To compare the theoretical costs of best-practice weight management delivered by dietitians in a traditional, in-person setting with those of remote consultations delivered using eHealth technologies.

Methods

Using national guidelines, a framework was developed outlining dietitian‐delivered weight management for in‐person and eHealth delivery modes. This framework mapped one‐on‐one patient–dietitian consultations for an adult requiring active management (BMI ≥ 30 kg/m2) over a one‐year period using both delivery modes. Resources required for both the dietitian and patient to implement each treatment mode were identified, with costs attributed for material, fixed, travel and personnel components. The resource costs were categorised as either establishment or recurring costs associated with the treatment of one patient.

Results

Establishment costs were higher for eHealth than for in-person delivery ($1394.21 vs $90.05). Excluding establishment costs, the total (combined dietitian and patient) cost for one patient receiving best-practice weight management for 12 months was $560.59 for in-person delivery, compared to $389.78 for eHealth delivery. A higher proportion of the overall recurring delivery costs was attributed to the patient under the in-person mode than under the eHealth mode (46.4% vs 33.9%).

Conclusions

Although it is initially more expensive to establish an eHealth service mode, the overall recurring costs per patient for delivery of best-practice weight management were lower than for the in-person mode. This theoretical cost evaluation establishes preliminary evidence to support alternative obesity management service models using eHealth technologies. Further research is required to determine the feasibility, efficacy and cost-effectiveness of these models within dietetic practice.
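
One consequence of these figures (my arithmetic, not reported in the paper): the higher eHealth establishment cost is recovered once the recurring savings accumulate over roughly eight patient-years:

```python
# Break-even volume: establishment premium / recurring saving per patient-year.
n = (1394.21 - 90.05) / (560.59 - 389.78)
print(f"break-even after {n:.1f} patient-years of delivery")   # about 7.6
```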

12.
We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS SNP (single nucleotide polymorphism) serves as a surrogate covariate in inferring the association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify the efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulation. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared with marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure.
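
A sketch of the joint allocation idea only (not the EM-based semiparametric inference): phase-two sequencing slots are spread evenly over SNP-genotype by extreme-QT cells; the cohort, trait model and sequencing budget are all invented:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20_000
geno = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])  # GWAS-SNP genotype
qt = rng.normal(0.2 * geno, 1.0)                            # quantitative trait

lo, hi = np.quantile(qt, [0.1, 0.9])
cells = [np.flatnonzero((geno == g) & tail)
         for g in (0, 1, 2) for tail in (qt < lo, qt > hi)]

per_cell = 100                              # budget: about 600 regional sequences
phase2 = np.concatenate([rng.choice(idx, size=min(per_cell, idx.size), replace=False)
                         for idx in cells])
print(f"{phase2.size} subjects selected for sequencing")
```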

13.
Missing outcomes are a commonly occurring problem for cluster randomised trials, which can lead to biased and inefficient inference if ignored or handled inappropriately. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. In this study, we assessed the performance of unadjusted cluster-level analysis, baseline covariate-adjusted cluster-level analysis, random effects logistic regression and generalised estimating equations when binary outcomes are missing under a baseline covariate-dependent missingness mechanism. Missing outcomes were handled using complete records analysis and multilevel multiple imputation. We analytically show that cluster-level analyses for estimating risk ratio using complete records are valid if the true data generating model has log link and the intervention groups have the same missingness mechanism and the same covariate effect in the outcome model. We performed a simulation study considering four different scenarios, depending on whether the missingness mechanisms are the same or different between the intervention groups and whether there is an interaction between intervention group and baseline covariate in the outcome model. On the basis of the simulation study and analytical results, we give guidance on the conditions under which each approach is valid. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
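
A minimal sketch of one of the compared approaches, unadjusted cluster-level analysis with complete records, for a risk ratio: cluster-level log-risks computed from non-missing outcomes are compared with a two-sample t-test (simulated data; the missingness here, for simplicity, does not depend on a baseline covariate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
arm0 = [rng.choice([0.0, 1.0, np.nan], size=50, p=[0.55, 0.35, 0.10]) for _ in range(12)]
arm1 = [rng.choice([0.0, 1.0, np.nan], size=50, p=[0.70, 0.20, 0.10]) for _ in range(12)]

def cluster_log_risks(clusters):
    return [np.log(np.nanmean(y)) for y in clusters]  # per-cluster risk, complete records

l0, l1 = cluster_log_risks(arm0), cluster_log_risks(arm1)
rr = np.exp(np.mean(l1) - np.mean(l0))
t, pval = stats.ttest_ind(l1, l0)
print(f"risk ratio {rr:.2f}, t-test p = {pval:.3f}")
```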

14.
A comparison of 2 treatments with survival outcomes in a clinical study may require treatment randomization on clusters of multiple units with correlated responses. For example, for patients with otitis media in both ears, a specific treatment is normally given to a single patient, and hence the 2 ears constitute a cluster. Statistical procedures are available for the comparison of treatment efficacies. The conventional approach to treatment allocation is the adoption of a balanced design, in which half of the patients are assigned to each treatment arm. However, given the increasing acceptability of response-adaptive designs in recent years because of their desirable features, we have developed a response-adaptive treatment allocation scheme for survival trials with clustered data. The proposed treatment allocation scheme is superior to the balanced design in that it allows more patients to receive the better treatment. At the same time, the test power for comparing treatment efficacies using our treatment allocation scheme remains highly competitive. The advantage of the proposed randomization procedure is supported by a simulation study and the redesign of a clinical study.
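
A heavily simplified, hypothetical sketch of the response-adaptive idea for clusters of size 2 (a play-the-winner-flavoured rule on binary per-ear outcomes, standing in for the authors' survival-data scheme, which it is not):

```python
import numpy as np

rng = np.random.default_rng(9)
success = np.array([1.0, 1.0])     # pseudo-counts to avoid zero rates
trials  = np.array([2.0, 2.0])
p_true = [0.55, 0.70]              # unknown per-ear success probabilities
assigned = [0, 0]

for _ in range(200):
    rates = success / trials
    arm = int(rng.random() < rates[1] / rates.sum())   # favour the better arm
    assigned[arm] += 1
    ears = rng.random(2) < p_true[arm]                 # both ears of the patient
    success[arm] += ears.sum()
    trials[arm] += 2

print(f"patients per arm: {assigned}; observed rates: {np.round(success / trials, 2)}")
```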

15.
Objective: To provide simple guidelines for calculating efficient sample sizes in cluster randomized trials with unknown intraclass correlation (ICC) and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given budget or minimizing total cost for a given power. The problems of cluster size variation and specification of the ICC of the outcome are solved in a simple yet efficient way. Results: The optimal number of clusters goes up, and the optimal sample size per cluster goes down, as the ICC goes up or as the cluster-to-person cost ratio goes down. The available budget, desired power, and effect size affect only the number of clusters and not the sample size per cluster, which is between 7 and 70 for a wide range of cost ratios and ICCs. Power loss because of cluster size variation is compensated by sampling 10% more clusters. The optimal design for an ICC halfway along the range of realistic ICC values is a good choice for the first stage of a two-stage design. The second stage is needed only if the first stage shows the ICC to be higher than assumed. Conclusion: Efficient sample sizes for cluster randomized trials are easily computed, provided the cost per cluster and cost per person are specified.
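
A sketch of the computation the abstract describes, using the classic cost-optimal design for cluster randomised trials: with cost c per cluster, cost s per person and ICC rho, variance per unit budget is minimised at cluster size m* = sqrt((c/s)(1 − rho)/rho); the budget then fixes the number of clusters, and 10% more clusters compensate for cluster size variation. The budget and costs below are invented:

```python
import math

def optimal_design(budget, c, s, rho, inflate=1.10):
    m = math.sqrt((c / s) * (1 - rho) / rho)   # optimal cluster size
    k = budget / (c + s * m)                   # clusters affordable at that size
    return math.ceil(k * inflate), round(m)

k, m = optimal_design(budget=50_000, c=400, s=25, rho=0.03)
print(f"sample about {k} clusters of about {m} persons each")
```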

16.
Delay in the outcome variable is challenging for outcome-adaptive randomization, as it creates a lag between the number of subjects accrued and the information known at the time of the analysis. Motivated by a real-life pediatric ulcerative colitis trial, we consider a case where a short-term predictor is available for the delayed outcome. When a short-term predictor is not considered, studies have shown that the asymptotic properties of many outcome-adaptive randomization designs are little affected unless the lag is unreasonably large relative to the accrual process. These theoretical results assumed independent and identically distributed delays, however, whereas delays in the presence of a short-term predictor may only be conditionally homogeneous. We treat delayed outcomes as missing and propose mitigating the delay effect by imputing them. We apply this approach to the doubly adaptive biased coin design (DBCD) for the motivating pediatric ulcerative colitis trial. We provide theoretical results showing that if the delays, although non-homogeneous, are reasonably short relative to the accrual process, as in the i.i.d. delay case, the lag is asymptotically ignorable in the sense that a standard DBCD that utilizes only observed outcomes attains the target allocation ratios in the limit. Empirical studies, however, indicate that imputation-based DBCDs performed more reliably in finite samples, with smaller root mean square errors. The empirical studies assumed a common clinical setting where a delayed outcome is positively correlated with a short-term predictor, similarly across treatment arms. We varied the strength of the correlation and considered fast and slow accrual settings. Copyright © 2014 John Wiley & Sons, Ltd.
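
For reference, a sketch of the standard DBCD allocation function of Hu and Zhang, which the imputation-based designs modify by feeding in imputed rather than only observed outcomes; the boundary handling here is a simplification:

```python
# g(x, rho): rho is the current estimated target allocation for arm 1, x the
# proportion assigned to arm 1 so far; gamma tunes how aggressively
# imbalance is corrected.
def dbcd_g(x, rho, gamma=2.0):
    if x <= 0.0 or x >= 1.0:
        return 1.0 - round(x)              # force away from the boundary
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

print(dbcd_g(x=0.50, rho=0.60))            # arm 1 under-allocated -> about 0.77
```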

17.
Stepped-wedge cluster randomised trials (SW-CRTs) are being used with increasing frequency in health service evaluation. Conventionally, these studies are cross-sectional in design with equally spaced steps, with an equal number of clusters randomised at each step and data collected at each and every step. Here we introduce several variations on this design and consider the implications for power. One modification we consider is the incomplete cross-sectional SW-CRT, where the number of clusters varies at each step or where, at some steps (for example, implementation or transition periods), data are not collected. We show that the parallel CRT with staggered but balanced randomisation can be considered a special case of the incomplete SW-CRT, as can the parallel CRT with baseline measures. We extend these designs to allow for multiple layers of clustering, for example, wards within a hospital. Building on results for complete designs, power and detectable difference are derived using a Wald test and the variance-covariance matrix of the treatment effect under a generalised linear mixed model. These variations are illustrated by several real examples. We recommend that, whilst the impact of transition periods on power is likely to be small, where they are a feature of the design they should be incorporated. We also show examples in which the power of a SW-CRT increases as the intra-cluster correlation (ICC) increases, and demonstrate that the impact of the ICC is likely to be smaller in a SW-CRT than in a parallel CRT, especially where there are multiple levels of clustering. Finally, through this unified framework, the efficiency of the SW-CRT and the parallel CRT can be compared. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
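
A sketch of the power computation for the complete cross-sectional case under a familiar Hussey-and-Hughes-type linear mixed model (cluster random intercept, cluster-period cell means, GLS variance of the treatment effect); all design numbers are invented, and the incomplete-design variations described above would amount to dropping rows of X:

```python
# Cell means have variance tau2 + sigma2*(1-rho)/m on the diagonal and
# covariance tau2 between periods of the same cluster.
import numpy as np
from scipy import stats

I, T, m = 12, 5, 20                  # clusters, periods, subjects per cell
rho, sigma2, delta = 0.05, 1.0, 0.3  # ICC, total variance, effect size
steps = np.repeat(np.arange(1, T), I // (T - 1))   # crossover period per cluster

tau2 = sigma2 * rho
Vc = np.full((T, T), tau2) + np.eye(T) * sigma2 * (1 - rho) / m
Vinv = np.linalg.inv(Vc)

XtVX = np.zeros((T + 1, T + 1))      # columns: treatment, then T period effects
for step in steps:
    X = np.zeros((T, T + 1))
    X[:, 0] = (np.arange(T) >= step).astype(float)   # treated from crossover on
    X[np.arange(T), 1 + np.arange(T)] = 1.0          # period dummies
    XtVX += X.T @ Vinv @ X

var_delta = np.linalg.pinv(XtVX)[0, 0]
power = stats.norm.cdf(delta / np.sqrt(var_delta) - stats.norm.ppf(0.975))
print(f"power = {power:.2f}")
```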

18.
In two-armed trials with clustered observations the arms may differ in terms of (i) the intraclass correlation, (ii) the outcome variance, (iii) the average cluster size, and (iv) the number of clusters. For a linear mixed model analysis of the treatment effect, this paper examines the expected efficiency loss due to varying cluster sizes based upon the asymptotic relative efficiency of varying versus constant cluster sizes. Simple, but nearly cost-optimal, correction factors are derived for the numbers of clusters to repair this efficiency loss. In an extensive Monte Carlo simulation, the accuracy of the asymptotic relative efficiency and its Taylor approximation are examined for small sample sizes. Practical guidelines are derived to correct the numbers of clusters calculated under constant cluster sizes (within each treatment) when planning a study. Because of the variety of simulation conditions, these guidelines can be considered conservative but safe in many realistic situations. Copyright © 2016 John Wiley & Sons, Ltd.
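
A Monte Carlo sketch of the efficiency loss itself (not the paper's Taylor approximation or correction factors): the variance of a precision-weighted treatment-arm mean with cluster sizes n_i is 1/sum(w_i) with w_i = n_i/(1 + (n_i − 1)rho), compared between varying and constant sizes at the same total n; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, k, mean_n, cv = 0.05, 30, 20.0, 0.6   # ICC, clusters, mean size, size CV

def arm_variance(sizes, rho):
    w = sizes / (1 + (sizes - 1) * rho)    # cluster weights under compound symmetry
    return 1.0 / w.sum()

re = []
for _ in range(2000):
    sizes = np.maximum(2.0, rng.gamma(cv**-2, mean_n * cv**2, size=k)).round()
    sizes = sizes * (mean_n * k / sizes.sum())         # fix the total sample size
    re.append(arm_variance(np.full(k, mean_n), rho) / arm_variance(sizes, rho))

print(f"mean relative efficiency: {np.mean(re):.3f}")  # < 1: clusters must be added
```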

19.
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
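
A simplified sketch of the imputation step, a gamma GLM (log link) of observed variances on a study-level covariate, with imputations drawn from the fitted distribution; proper multiple imputation would also draw the regression coefficients, and Rubin's pooling of the completed meta-analyses is omitted. All data are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_study = np.array([30, 45, 60, 80, 120, 150, 55, 90])
s2 = np.array([4.1, 3.5, 2.9, 2.5, np.nan, 1.8, np.nan, 2.2])  # two missing

obs = ~np.isnan(s2)
X = sm.add_constant(np.log(n_study))
fit = sm.GLM(s2[obs], X[obs], family=sm.families.Gamma(sm.families.links.Log())).fit()

mu = fit.predict(X[~obs])          # fitted means for the missing variances
shape = 1.0 / fit.scale            # dispersion -> gamma shape
draws = [rng.gamma(shape, mu / shape) for _ in range(20)]   # 20 imputed sets
print(np.round(np.mean(draws, axis=0), 2))
```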

20.
In contrast to the usual ROC analysis with a contemporaneous reference standard, the time-dependent setting introduces the possibility that the reference standard refers to an event at a future time and may not be known for every patient due to censoring. The goal of this research is to determine the sample size required for a study design to address the question of the accuracy of a diagnostic test using the area under the curve in time-dependent ROC analysis. We adapt a previously published estimator of the time-dependent area under the ROC curve, which is a function of the expected conditional survival functions. This estimator accommodates censored data. The estimation of the required sample size is based on approximations of the expected conditional survival functions and their variances, derived under parametric assumptions of an exponential failure time and an exponential censoring time. We also consider different patient enrollment strategies. The proposed method can provide an adequate sample size to ensure that the test's accuracy is estimated to a prespecified precision. We present results of a simulation study to assess the accuracy of the method and its robustness to departures from the parametric assumptions. We apply the proposed method to the design of a study of positron emission tomography as a predictor of disease-free survival in women undergoing therapy for cervical cancer. Copyright © 2013 John Wiley & Sons, Ltd.
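
A simulation-based sketch of the design question (not the authors' closed-form approximation): under exponential failure and censoring times, estimate the cumulative/dynamic AUC at time t for increasing n and read off the precision; subjects whose status at t is unknown due to censoring are simply dropped here, a cruder handling than the estimator adapted in the paper:

```python
import numpy as np

rng = np.random.default_rng(8)

def auc_at_t(n, t=2.0, lam0=0.2, beta=0.8, lam_c=0.1):
    x = rng.normal(size=n)                                # diagnostic marker
    T = rng.exponential(1.0 / (lam0 * np.exp(beta * x)))  # failure times
    C = rng.exponential(1.0 / lam_c, size=n)              # censoring times
    known = (np.minimum(T, C) > t) | (T <= np.minimum(C, t))
    case, ctrl = (T <= t) & known, (T > t) & known
    if not case.any() or not ctrl.any():
        return np.nan
    return np.mean(x[case][:, None] > x[ctrl][None, :])   # P(marker_case > marker_ctrl)

for n in (100, 250, 500):
    aucs = np.array([auc_at_t(n) for _ in range(400)])
    print(f"n = {n}: AUC about {np.nanmean(aucs):.2f}, simulation SE {np.nanstd(aucs):.3f}")
```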
