Similar Documents
20 similar documents found.
1.
Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis
Asymmetry in funnel plots may indicate publication bias in meta-analysis, but the shape of the plot in the absence of bias depends on the choice of axes. We evaluated standard error, precision (inverse of standard error), variance, inverse of variance, sample size and log sample size (vertical axis) and log odds ratio, log risk ratio and risk difference (horizontal axis). Standard error is likely to be the best choice for the vertical axis: the expected shape in the absence of bias corresponds to a symmetrical funnel, straight lines to indicate 95% confidence intervals can be included, and the plot emphasises the smaller studies, which are more prone to bias. Precision or inverse of variance is useful when comparing meta-analyses of small trials with subsequent large trials. The use of sample size or log sample size is problematic because the expected shape of the plot in the absence of bias is unpredictable. We found similar evidence for asymmetry and between-trial variation in a sample of 78 published meta-analyses whether odds ratios or risk ratios were used on the horizontal axis. Different conclusions were reached for risk differences, and this was related to increased between-trial variation. We conclude that funnel plots of meta-analyses should generally use standard error as the measure of study size and ratio measures of treatment effect.
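As a quick illustration of the recommended choice of axes, the sketch below draws a funnel plot with standard error on a reversed vertical axis and adds the straight 95% pseudo-confidence limits described in the abstract. The effect sizes are made-up log odds ratios, not data from the paper; numpy and matplotlib are assumed to be available.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.6, 40)                    # hypothetical standard errors
logor = rng.normal(0.2, se)                        # hypothetical log odds ratios, true effect 0.2

theta = np.sum(logor / se**2) / np.sum(1 / se**2)  # fixed-effect summary

fig, ax = plt.subplots()
ax.scatter(logor, se, s=15)
# 95% pseudo-confidence limits: theta +/- 1.96*SE are straight lines on this scale
se_grid = np.linspace(0, se.max(), 100)
ax.plot(theta - 1.96 * se_grid, se_grid, "k--")
ax.plot(theta + 1.96 * se_grid, se_grid, "k--")
ax.axvline(theta, color="k", lw=0.8)
ax.invert_yaxis()                                  # largest studies (smallest SE) at the top
ax.set_xlabel("log odds ratio")
ax.set_ylabel("standard error")
plt.show()
```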

2.
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
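A minimal sketch of one common formulation of the unrestricted weighted least squares average: ordinary least squares of the standardized effect on precision with no intercept. The point estimate coincides with the fixed-effect weighted average, but the standard error absorbs a multiplicative dispersion term. The study estimates below are hypothetical, and statsmodels is assumed.

```python
import numpy as np
import statsmodels.api as sm

# hypothetical study estimates y_i with standard errors se_i
y  = np.array([0.30, 0.15, 0.45, 0.10, 0.25, 0.60])
se = np.array([0.10, 0.12, 0.25, 0.08, 0.15, 0.30])

# UWLS: OLS of the standardized effect on precision, with no intercept
t = y / se                  # standardized effects
x = 1 / se                  # precisions
fit = sm.OLS(t, x).fit()
print("UWLS estimate:", fit.params[0])
print("95% CI:", fit.conf_int()[0])

# the point estimate equals the fixed-effect weighted average;
# only the uncertainty differs (rescaled by the residual variance)
w = 1 / se**2
print("fixed-effect estimate:", np.sum(w * y) / np.sum(w))
```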

3.
Purpose: Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Methods: Surveillance, Epidemiology and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence, and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust the observed OR in the actual study. Results: For this particular study design, as average preclinical duration increased, the bias in the total physical activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null. Conclusions: The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies.

4.
We present a new procedure for combining P-values from a set of L hypothesis tests. Our procedure is to take the product of only those P-values less than some specified cut-off value and to evaluate the probability of such a product, or a smaller value, under the overall hypothesis that all L hypotheses are true. We give an explicit formulation for this P-value, and find by simulation that it can provide high power for detecting departures from the overall hypothesis. We extend the procedure to situations when tests are not independent. We present both real and simulated examples where the method is especially useful. These include exploratory analyses when L is large, such as genome-wide scans for marker-trait associations and meta-analytic applications that combine information from published studies, with potential for dealing with the "publication bias" phenomenon. Once the overall hypothesis is rejected, an adjustment procedure with strong family-wise error protection is available for smaller subsets of hypotheses, down to the individual tests.
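A sketch of the truncated product idea under independence. Instead of the paper's explicit formula, the null distribution of the statistic is obtained here by Monte Carlo, using the fact that under the overall null the P-values are independent uniforms. The cut-off tau and the P-values are illustrative.

```python
import numpy as np

def truncated_product(pvals, tau=0.05):
    """Product of the p-values below tau (1.0 if none fall below)."""
    below = pvals[pvals < tau]
    return below.prod() if below.size else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=100_000, seed=0):
    """Monte Carlo P(W <= w_obs) under the overall null (independent uniforms)."""
    rng = np.random.default_rng(seed)
    w_obs = truncated_product(np.asarray(pvals), tau)
    sims = rng.uniform(size=(n_sim, len(pvals)))
    # vectorized truncated products for the simulated null meta-analyses
    w_null = np.where(sims < tau, sims, 1.0).prod(axis=1)
    return (w_null <= w_obs).mean()

p = np.array([0.001, 0.04, 0.20, 0.50, 0.76, 0.03])
print("overall p-value:", tpm_pvalue(p))
```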

5.
Publication bias and related bias in meta-analysis is often examined by visually checking for asymmetry in funnel plots of treatment effect against its standard error. Formal statistical tests of funnel plot asymmetry have been proposed, but when applied to binary outcome data these can give false-positive rates that are higher than the nominal level in some situations (large treatment effects, or few events per trial, or all trials of similar sizes). We develop a modified linear regression test for funnel plot asymmetry based on the efficient score and its variance, Fisher's information. The performance of this test is compared to the other proposed tests in simulation analyses based on the characteristics of published controlled trials. When there is little or no between-trial heterogeneity, this modified test has a false-positive rate close to the nominal level while maintaining similar power to the original linear regression test ('Egger' test). When the degree of between-trial heterogeneity is large, none of the tests that have been proposed has uniformly good properties.
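Following the construction described in the abstract, the modified test can be sketched as a regression of Z/sqrt(V) on sqrt(V), where Z is the per-trial efficient score (e.g. observed minus expected events in a 2x2 table) and V is Fisher's information. The scores below are hypothetical, and statsmodels is assumed.

```python
import numpy as np
import statsmodels.api as sm

# hypothetical per-trial efficient scores Z_i and Fisher information V_i
Z = np.array([ 4.2, -1.5,  6.8,  0.9,  3.1,  2.4])
V = np.array([10.0,  6.5, 20.1,  3.2,  8.8,  5.9])

y = Z / np.sqrt(V)              # 'standardized effect' analogue
x = np.sqrt(V)                  # 'precision' analogue
fit = sm.OLS(y, sm.add_constant(x)).fit()

intercept = fit.params[0]
print(f"asymmetry intercept = {intercept:.3f}, p = {fit.pvalues[0]:.3f}")
# a non-zero intercept indicates funnel plot asymmetry (small-study effects)
```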

6.
Hong S, Wang Y. Statistics in Medicine 2007; 26(19): 3525-3534.
Randomized designs have been increasingly called for in phase II oncology clinical trials to protect against potential patient selection bias. However, formal statistical comparison is rarely conducted, despite its appeal, because of the sample size restriction. In this paper, we offer an approach to sample size reduction by extending the three-outcome design of Sargent et al. (Control Clin. Trials 2001; 22:117-125) for single-arm trials to randomized comparative trials. In addition to the usual two outcomes of hypothesis testing (rejecting the null hypothesis or rejecting the alternative hypothesis), the three-outcome comparative design allows a third outcome of rejecting neither hypothesis when the testing result falls in some 'grey area', leaving the decision to clinical judgment based on the overall evaluation of trial outcomes and other relevant factors. By allowing a reasonable region of uncertainty, the three-outcome design enables formal statistical comparison with considerably smaller sample size than the standard two-outcome comparative design. Statistical formulation of the three-outcome comparative design is discussed for both single-stage and two-stage trials. Sample sizes are tabulated for some common clinical scenarios.
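The logic of a single-stage three-outcome comparison can be sketched with a normal approximation: two critical values split the test statistic into "reject H0", "reject H1", and a grey area. The boundaries, rates, and sample size below are illustrative choices, not the paper's tabulated designs.

```python
import numpy as np
from scipy.stats import norm

def three_outcome_probs(p0, p1, n_per_arm, z_low, z_high):
    """Outcome probabilities for a two-arm comparison of proportions.

    Z = (phat1 - phat0)/SE is approximately N(mu, 1).
      Z > z_high -> reject H0 (treatment better)
      Z < z_low  -> reject H1 (treatment not better)
      otherwise  -> grey area (no formal conclusion)
    """
    se = np.sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
    mu = (p1 - p0) / se
    return {"reject_H0": 1 - norm.cdf(z_high - mu),
            "reject_H1": norm.cdf(z_low - mu),
            "grey_area": norm.cdf(z_high - mu) - norm.cdf(z_low - mu)}

# illustrative boundaries: z_low < z_high leaves a region of uncertainty
print("under H0:", three_outcome_probs(0.20, 0.20, 45, z_low=0.8, z_high=1.8))
print("under H1:", three_outcome_probs(0.20, 0.40, 45, z_low=0.8, z_high=1.8))
```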

7.
Objectives: Publication bias (PB) may seriously compromise inferences from meta-analyses. The aim of this article was to assess the potential effect of small-study effects and PB on the recently estimated relative effectiveness and ranking of pharmacological treatments for schizophrenia. Study Design and Setting: We used a recently published network of 167 trials involving 36,871 patients and comparing the effectiveness of 15 antipsychotics and placebo. We used novel visual and statistical methods to explore whether smaller trials are associated with larger treatment effects, and a selection model to explore whether the probability of trial publication is associated with the magnitude of effect. We conducted a network meta-analysis of the published evidence as our primary analysis and used a sensitivity analysis considering low, moderate, and severe selection bias (corresponding to the number of unpublished trials) with the aim of evaluating the robustness of point estimates and ranking. We explored whether placebo-controlled and head-to-head trials are associated with different levels of PB. Results: We found that small placebo-controlled trials slightly exaggerated the efficacy of antipsychotics, and PB was not unlikely in the evidence based on placebo-controlled trials; however, the ranking of antipsychotics remained robust. Conclusion: The total evidence comprises many head-to-head trials that do not appear to be prone to small-study effects or PB, and indirect evidence appears to "wash out" some of the biases in the placebo-controlled trials.

8.
Effects on overviews of early stopping rules for clinical trials
Use of inappropriate stopping rules for clinical trials results in an excess of false positive conclusions when no true survival differences exist. Overviews of such trials, however, consist mainly of trials which were not stopped early, plus a few of reduced sample size which were. Simulations confirm that the level of such an overview is minimally elevated. Additional follow-up for survival further corrects the level. In fact, for individual trials conducted according to inappropriate rules, late tests have nearly correct level. On the other hand, publication bias (differential reporting of positive results) can substantially increase the level of an overview if only published studies are included.

9.
The impact of competing risks on tests of association between disease and haplotypes has been largely ignored. We consider situations in which linkage phase is ambiguous and show that tests for disease-haplotype association can lead to rejection of the null hypothesis, even when it is true, with more than the nominal 5 per cent frequency. This problem tends to occur if a haplotype is associated with overall mortality, even if the haplotype is not associated with disease risk. A small simulation study illustrates the magnitude of the bias (high type I error rate) in the context of a cohort study in which a modest number of disease cases (about 350) occur over time. The bias remains even if the score test is based on a logistic model that includes age as a covariate. We propose new tests: for cohort studies, one based on a modification of the proportional hazards model, and for case-control studies, one based on a conditional likelihood. Both have the correct size under the null even in the presence of competing risks, and both can be used when haplotype phase is ambiguous.

10.
The potential for bias from population stratification (PS) has raised concerns about case-control studies involving admixed ethnicities. We evaluated the potential bias due to PS in relating a binary outcome with a candidate gene under simulated settings where study populations consist of multiple ethnicities. Disease risks were assigned within the range of prostate cancer rates of African Americans reported in SEER registries assuming k=2, 5, or 10 admixed ethnicities. Genotype frequencies were considered in the range of 5-95%. Under a model assuming no genotype effect on disease (odds ratio (OR)=1), the range of observed OR estimates ignoring ethnicity was 0.64-1.55 for k=2, 0.72-1.33 for k=5, and 0.81-1.22 for k=10. When genotype effect on disease was modeled to be OR=2, the ranges of observed OR estimates were 1.28-3.09, 1.43-2.65, and 1.62-2.42 for k=2, 5, and 10 ethnicities, respectively. Our results indicate that the magnitude of bias is small unless extreme differences exist in genotype frequency. Bias due to PS decreases as the number of admixed ethnicities increases. The biases are bounded by the minimum and maximum of all pairwise baseline disease odds ratios across ethnicities. Therefore, bias due to PS alone may be small when baseline risk differences are small within major categories of admixed ethnicity, such as African Americans.
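The mechanism studied here can be reproduced with a few lines of arithmetic: mix strata (ethnicities) that differ in baseline disease risk and genotype frequency, form the pooled 2x2 table, and compare the crude OR with the within-stratum OR, which is exactly 1 by construction. The risks, frequencies, and mixing proportions below are hypothetical.

```python
import numpy as np

# hypothetical strata: baseline disease risk and genotype frequency per ethnicity
risk = np.array([0.05, 0.15])     # P(disease | stratum), no genotype effect (OR = 1)
freq = np.array([0.10, 0.60])     # P(genotype | stratum)
mix  = np.array([0.50, 0.50])     # population mixing proportions

# cell probabilities of the pooled 2x2 table (genotype x disease)
p_g1_d1 = np.sum(mix * freq * risk)
p_g1_d0 = np.sum(mix * freq * (1 - risk))
p_g0_d1 = np.sum(mix * (1 - freq) * risk)
p_g0_d0 = np.sum(mix * (1 - freq) * (1 - risk))

crude_or = (p_g1_d1 * p_g0_d0) / (p_g1_d0 * p_g0_d1)
print(f"crude OR ignoring ethnicity: {crude_or:.2f}  (true within-stratum OR = 1)")
```

With these numbers the crude OR is about 1.8, which sits inside the bound the abstract states: the ratio of baseline disease odds between the two strata is about 3.4.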

11.
Perhaps the greatest threat to the validity of a meta-analysis is the possibility of publication bias, where studies that are interesting or statistically significant are more likely to be published than those with less encouraging results. In particular, there is the concern that this bias might be 'one-sided', where studies indicating that the treatment is beneficial have a greater probability of publication. The impact that this type of bias has on the estimate of treatment effect has received a great deal of attention but this also has implications for estimates of between-study variance. Using step functions to model the bias it can be demonstrated that it is impossible to make generalizations concerning how we should revise estimates of between-study variance when presented with the possibility of publication bias. To determine this, assumptions must be made concerning the form that the bias takes, which is unknown in practice.
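The point can be made concrete with a small simulation: generate heterogeneous true effects, publish significant studies always and non-significant studies only with some probability (a one-sided step function), and compare the DerSimonian-Laird between-study variance estimate with the true tau^2. All numbers are illustrative; how far, and in which direction, the estimate moves depends on the assumed selection function, which is the abstract's point.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
tau2_true, theta, n_meta = 0.04, 0.2, 2000

def dl_tau2(y, v):
    """DerSimonian-Laird estimate of between-study variance."""
    w = 1 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

est = []
for _ in range(n_meta):
    se = rng.uniform(0.1, 0.4, 30)
    y = rng.normal(theta, np.sqrt(tau2_true + se**2))
    # step-function selection: significant studies always published,
    # non-significant studies published with probability 0.3
    pub = (y / se > norm.ppf(0.975)) | (rng.uniform(size=30) < 0.3)
    if pub.sum() >= 3:
        est.append(dl_tau2(y[pub], se[pub] ** 2))

print(f"true tau^2 = {tau2_true}, mean DL estimate under selection = {np.mean(est):.4f}")
```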

12.
We used a Bayesian hierarchical selection model to study publication bias in 1106 meta-analyses from the Cochrane Database of Systematic Reviews comparing treatment with either placebo or no treatment. For meta-analyses of efficacy, we estimated the ratio of the probability of including statistically significant outcomes favoring treatment to the probability of including other outcomes. For meta-analyses of safety, we estimated the ratio of the probability of including results showing no evidence of adverse effects to the probability of including results demonstrating the presence of adverse effects. Results: In the meta-analyses of efficacy, outcomes favoring treatment had on average a 27% (95% credible interval (CI): 18% to 36%) higher probability of being included than other outcomes. In the meta-analyses of safety, results showing no evidence of adverse effects were on average 78% (95% CI: 51% to 113%) more likely to be included than results demonstrating that adverse effects existed. In general, the amount of over-representation of findings favorable to treatment was larger in meta-analyses including older studies. Conclusions: In the largest study on publication bias in meta-analyses to date, we found evidence of publication bias in Cochrane systematic reviews. In general, publication bias is smaller in meta-analyses of more recent studies, indicating their better reliability and supporting the effectiveness of the measures used to reduce publication bias in clinical trials. Our results indicate the need to apply currently underutilized meta-analysis tools that handle publication bias based on statistical significance, especially when the studies included in a meta-analysis are not recent. Copyright © 2015 John Wiley & Sons, Ltd.

13.
A comparison of methods to detect publication bias in meta-analysis
Meta-analyses are subject to bias for many reasons, including publication bias. Asymmetry in a funnel plot of study size against treatment effect is often used to identify such bias. We compare the performance of three simple methods of testing for bias: the rank correlation method; a simple linear regression of the standardized estimate of treatment effect on the precision of the estimate; and a regression of the treatment effect on sample size. The tests are applied to simulated meta-analyses in the presence and absence of publication bias. Both one-sided and two-sided censoring of studies based on statistical significance were used. The results indicate that none of the tests performs consistently well. Test performance varied with the magnitude of the true treatment effect, the distribution of study size, and whether a one- or two-tailed significance test was employed. Overall, the power of the tests was low when the number of studies per meta-analysis was close to that often observed in practice. Tests that showed the highest power also had type I error rates higher than the nominal level. Based on the empirical type I error rates, a regression of treatment effect on sample size, weighted by the inverse of the variance of the logit of the pooled proportion (using the marginal total), is the preferred method.
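Two of the simplest tests compared above can be sketched as follows: the Egger-type regression of the standardized effect on precision, and the Begg-type rank correlation between standardized deviates and variances. The data are hypothetical log odds ratios and variances; scipy and statsmodels are assumed.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import kendalltau

# hypothetical log odds ratios and their variances
y = np.array([0.55, 0.40, 0.62, 0.18, 0.85, 0.10, 0.47])
v = np.array([0.09, 0.05, 0.16, 0.02, 0.25, 0.01, 0.12])
se = np.sqrt(v)

# Egger: regress y/se on 1/se; a non-zero intercept signals asymmetry
fit = sm.OLS(y / se, sm.add_constant(1 / se)).fit()
print(f"Egger intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")

# Begg: Kendall correlation between standardized deviates and variances
w = 1 / v
ybar = np.sum(w * y) / np.sum(w)
v_star = v - 1 / np.sum(w)               # variance of y_i - ybar
t_stat = (y - ybar) / np.sqrt(v_star)
tau, p = kendalltau(t_stat, v)
print(f"Begg tau = {tau:.3f}, p = {p:.3f}")
```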

14.
In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias. Even when individual studies included in best-evidence summaries have a low risk of bias, publication bias can result in substantial overestimates of effect. Authors should suspect publication bias when available evidence comes from a number of small studies, most of which have been commercially funded. A number of approaches based on examination of the pattern of data are available to help assess publication bias. The most popular of these is the funnel plot; all, however, have substantial limitations. Publication bias is likely frequent, and caution in the face of early results, particularly with small sample size and number of events, is warranted.

15.
BACKGROUND: The case-control study is still one of the most commonly used study designs in epidemiological research. Misclassification of case-control status remains a significant issue because it biases the results of a case-control study. There are two types of misclassification, differential and nondifferential. It is commonly accepted that nondifferential misclassification biases the results of the study towards the null hypothesis. In contrast, no reports have assessed the impact and direction of differential misclassification on the odds ratio (OR) estimate. The goal of the present study is to demonstrate by statistical derivation that patterns exist in the bias induced by differential misclassification. METHODS: Based on a 2 x 2 case-control study design, we derive the odds ratio without misclassification, and those with misclassification according to whether: (1) controls are misclassified as cases by exposure status; (2) cases are misclassified as controls by exposure status; or (3) both controls and cases are misclassified by exposure status simultaneously. Furthermore, mathematical derivations are shown for each of the ratios of the two odds ratios with and without misclassification, complemented by simulation analyses. RESULTS: Simulation analyses show that many of the biased odds ratios move away from the null hypothesis, approaching zero or infinity as the proportion of misclassification among cases, controls, or both increases. These patterns are associated with the exposure status and the value of the unbiased odds ratio (<1, 1, or >1). CONCLUSIONS: Our findings suggest that, unlike nondifferential misclassification, differential misclassification of case-control status in a case-control study may not bias the exposure-outcome association toward the null hypothesis. Care is needed when interpreting the results of a case-control study in which differential misclassification bias may exist, a practical issue in epidemiological research.
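The direction of the bias is easy to reproduce from a single 2x2 table: start from counts with a known OR, move a fraction of exposed controls into the case group (one of the differential patterns described above), and recompute the OR. The counts and misclassification proportions are illustrative.

```python
# true 2x2 table: rows = exposed/unexposed, columns = cases/controls
a, b = 100, 200     # exposed cases, exposed controls
c, d = 100, 400     # unexposed cases, unexposed controls
print(f"true OR = {(a * d) / (b * c):.2f}")

# differential misclassification: a fraction m of EXPOSED controls are
# misclassified as cases (classification depends on exposure status)
for m in (0.0, 0.1, 0.2, 0.3):
    a_obs, b_obs = a + m * b, b - m * b
    or_obs = (a_obs * d) / (b_obs * c)
    print(f"m = {m:.1f}: observed OR = {or_obs:.2f}")
```

With a true OR of 2.0, the observed OR climbs to about 2.7 at m = 0.1 and keeps rising: the bias moves away from the null, matching the pattern the abstract describes.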

16.
The trim and fill method allows estimation of an adjusted meta-analysis estimate in the presence of publication bias. To date, the performance of the trim and fill method has received little assessment. In this paper, we provide a more comprehensive examination of different versions of the trim and fill method in a number of simulated meta-analysis scenarios, comparing results with those from the usual unadjusted meta-analysis models and two simple alternatives, namely use of the estimate from (i) the largest or (ii) the most precise study in the meta-analysis. Findings suggest a great deal of variability in the performance of the different approaches. When between-study heterogeneity is large, the trim and fill method can underestimate the true positive effect even when there is no publication bias. However, when publication bias is present, the trim and fill method can give estimates that are less biased than the usual meta-analysis models. Although the use of the estimate from the largest or most precise study seems a reasonable approach in the presence of publication bias, when between-study heterogeneity exists our simulations show that these estimates are quite biased. We conclude that in the presence of publication bias, use of the trim and fill method can help to reduce the bias in pooled estimates, even though the performance of this method is not ideal. However, because we do not know whether funnel plot asymmetry is truly caused by publication bias, and because there is great variability in the performance of different trim and fill estimators and models in various meta-analysis scenarios, we recommend use of the trim and fill method as a form of sensitivity analysis, as intended by the authors of the method.
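A simplified sketch of one trim and fill variant: the L0 estimator of the number of missing studies with fixed-effect pooling, assuming the missing studies lie on the left of the funnel. Real implementations handle both sides, other estimators (such as R0), and random-effects pooling; the data below are hypothetical.

```python
import numpy as np
from scipy.stats import rankdata

def fixed_effect(y, v):
    w = 1 / v
    return np.sum(w * y) / np.sum(w)

def trim_and_fill(y, v, n_iter=20):
    """Simplified L0 trim and fill; missing studies assumed on the left."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    n, k0 = len(y), 0
    for _ in range(n_iter):
        trimmed = np.argsort(y)[: n - k0]           # trim the k0 largest effects
        theta = fixed_effect(y[trimmed], v[trimmed])
        ranks = rankdata(np.abs(y - theta))         # ranks of absolute deviations
        T = ranks[y > theta].sum()                  # Wilcoxon-type statistic
        k0_new = max(0, int(round((4 * T - n * (n + 1)) / (2 * n - 1))))
        if k0_new == k0:
            break
        k0 = k0_new
    # fill: mirror the k0 most extreme effects about the trimmed centre
    idx = np.argsort(y)[n - k0:]
    y_fill = np.concatenate([y, 2 * theta - y[idx]])
    v_fill = np.concatenate([v, v[idx]])
    return theta, fixed_effect(y_fill, v_fill), k0

y = [0.10, 0.25, 0.32, 0.41, 0.55, 0.62, 0.78]      # hypothetical, asymmetric
v = [0.01, 0.03, 0.05, 0.06, 0.09, 0.12, 0.20]
theta_trim, theta_fill, k0 = trim_and_fill(y, v)
print(f"k0 = {k0}, trimmed estimate = {theta_trim:.3f}, filled estimate = {theta_fill:.3f}")
```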

17.
The use of meta-analysis to combine the results of several trials continues to increase in the medical field. The validity of a meta-analysis may be affected by various sources of bias (for example, publication bias or language bias); therefore, an analysis of bias should be an integral part of any systematic review. Statistical tests and graphical methods have been developed for this purpose. In this paper, two statistical tests for the detection of bias in meta-analysis were investigated in a simulation study. Binary outcome data, which are very common in medical applications, were considered, and relative effect measures (odds ratio, relative risk) were used for pooling. Sample sizes were generated according to findings from a survey of eight German medical journals. Simulation results indicate an inflation of type I error rates for both tests when the data are sparse, and results worsen with increasing treatment effect and increasing number of combined trials. Valid statistical tests for the detection of bias in meta-analyses with sparse data still need to be developed.

18.
The need for statistical methodologies for analysing small studies, such as pilot or so-called 'proof of concept' studies, has received little attention in the past. Recently, the Institute of Medicine (IOM) formed a committee and held a workshop to discuss methodologies for conducting clinical trials with small numbers of participants. In this paper we argue that the hypothesis of treatment effect in a small pilot study should be set up to test whether any individual subject has an effect, rather than whether the group mean or median has shifted, as is often done for large, confirmatory clinical trials. Based on this paradigm, we propose multiple test procedures as one option when individuals have enough observations, and a mixture-distribution approach when individuals have one or more observations. The latter approach may be used in either a one- or two-group setting, and is our focus in this paper. We present the likelihood ratio tests for the mixture models. Examples are given to demonstrate the methods.
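One way to read the mixture idea, sketched under strong simplifying assumptions that are ours rather than the paper's (one observation per subject, normal errors with common variance, null of no responders): fit a two-component mixture by direct maximization and calibrate the likelihood ratio statistic by parametric bootstrap, since its null distribution is non-standard.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negll_mix(params, y):
    """Negative log-likelihood of a two-component normal mixture."""
    pi, delta, sigma = params
    f = (1 - pi) * norm.pdf(y, 0, sigma) + pi * norm.pdf(y, delta, sigma)
    return -np.sum(np.log(f + 1e-300))

def lrt(y):
    sigma0 = np.sqrt(np.mean(y**2))                      # MLE under the null
    ll0 = -negll_mix([0.0, 0.0, sigma0], y)
    x0 = [0.3, max(np.max(y), 0.1), sigma0]
    fit = minimize(negll_mix, x0=x0, args=(y,), method="L-BFGS-B",
                   bounds=[(0, 1), (0, None), (1e-3, None)])
    return 2 * (-fit.fun - ll0), sigma0

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 12), rng.normal(2.5, 1, 4)])  # 4 'responders'

stat, sigma0 = lrt(y)
# parametric bootstrap of the null distribution of the LRT statistic
boot = np.array([lrt(rng.normal(0, sigma0, len(y)))[0] for _ in range(500)])
print(f"LRT = {stat:.2f}, bootstrap p = {(boot >= stat).mean():.3f}")
```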

19.
Publication and selection biases in meta-analysis are more likely to affect small studies, which also tend to be of lower methodological quality. This may lead to "small-study effects," where the smaller studies in a meta-analysis show larger treatment effects. Small-study effects may also arise because of between-trial heterogeneity. Statistical tests for small-study effects have been proposed, but their validity has been questioned. A set of typical meta-analyses containing 5, 10, 20, and 30 trials was defined based on the characteristics of 78 published meta-analyses identified in a hand search of eight journals from 1993 to 1997. Simulations were performed to assess the power of a weighted regression method and a rank correlation test in the presence of no bias, moderate bias, or severe bias. We based evidence of small-study effects on P < 0.1. The power to detect bias increased with increasing numbers of trials. The rank correlation test was less powerful than the regression method. For example, assuming a control group event rate of 20% and no treatment effect, moderate bias was detected with the regression test in 13.7%, 23.5%, 40.1%, and 51.6% of meta-analyses with 5, 10, 20, and 30 trials. The corresponding figures for the correlation test were 8.5%, 14.7%, 20.4%, and 26.0%, respectively. Severe bias was detected with the regression method in 23.5%, 56.1%, 88.3%, and 95.9% of meta-analyses with 5, 10, 20, and 30 trials, compared with 11.9%, 31.1%, 45.3%, and 65.4% with the correlation test. Similar results were obtained in simulations incorporating moderate treatment effects. However, the regression method gave false-positive rates that were too high in some situations (large treatment effects, few events per trial, or all trials of similar sizes). Using the regression method, evidence of small-study effects was present in 21 (26.9%) of the 78 published meta-analyses. Tests for small-study effects should routinely be performed in meta-analysis. Their power is, however, limited, particularly for moderate amounts of bias or for meta-analyses based on a small number of small studies. When evidence of small-study effects is found, careful consideration should be given to possible explanations in the reporting of the meta-analysis.
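The style of simulation reported above can be sketched compactly: generate meta-analyses under no true effect, induce one-sided suppression of non-significant studies, and count how often a weighted (Egger-type) regression flags asymmetry at P < 0.1. The generating model here is a simplification of the paper's, so the numbers will differ.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)

def simulate_meta(n_trials, suppress_prob):
    """Draw trials; non-significant ones survive with prob 1 - suppress_prob."""
    ys, ses = [], []
    while len(ys) < n_trials:
        se = rng.uniform(0.1, 0.5)
        y = rng.normal(0.0, se)                       # no true effect
        significant = y / se > norm.ppf(0.975)        # one-sided selection
        if significant or rng.uniform() > suppress_prob:
            ys.append(y); ses.append(se)
    return np.array(ys), np.array(ses)

def egger_p(y, se):
    fit = sm.OLS(y / se, sm.add_constant(1 / se)).fit()
    return fit.pvalues[0]                             # p-value of the intercept

for n_trials in (5, 10, 20, 30):
    hits = sum(egger_p(*simulate_meta(n_trials, suppress_prob=0.7)) < 0.1
               for _ in range(1000))
    print(f"{n_trials:2d} trials: power = {hits / 1000:.2f}")
```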

20.
Using 14 meta-analyses that included both published (n = 199) and unpublished (n = 50) randomized trials, we evaluated the utility of different analytical approaches for detecting publication bias, assessing the robustness of results to it, and minimizing it. The rank correlation and graphical tests indicated funnel plot asymmetry in 3 and 7 of the 14 meta-analyses, respectively. The file drawer number estimates obtained with the Iyengar-Greenhouse method were between 1.5 and 4.7 times smaller than Rosenthal's estimates. The median difference between the trim and fill estimates and the actual number of missing studies was 1 (range -4 to 6). Weighted estimation methods adjusted for publication bias and, on average, provided estimates of intervention effect close to the reference standard. We showed that there are differences in the conclusions one would reach clinically depending on the analytical approach used to deal with publication bias. Our results also suggest that the appropriate use of these methods improves the reliability and accuracy of meta-analysis.
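Rosenthal's "file drawer" number, one of the two estimators compared above, has a closed form: the number of null studies needed to drag the Stouffer combined Z below significance. A sketch under the usual one-tailed alpha = 0.05, with hypothetical per-study Z values:

```python
import numpy as np
from scipy.stats import norm

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: unpublished null studies needed to make the
    Stouffer combined test non-significant (one-tailed)."""
    z_sum = np.sum(z_scores)
    z_alpha = norm.ppf(1 - alpha)
    k = len(z_scores)
    return max(0.0, z_sum**2 / z_alpha**2 - k)

z = np.array([2.1, 1.4, 2.8, 0.9, 1.7])             # hypothetical study Z values
print(f"fail-safe N = {fail_safe_n(z):.1f}")        # about 24 here
```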
