Similar Literature
1.
One-sample non-parametric tests are proposed here for inference on recurrent events. The focus is on the marginal mean function of events, and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study of gene therapy for a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population.
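A minimal sketch of the observed-versus-expected idea described in this abstract, assuming a constant reference rate and a Poisson-type variance; the published tests also use weighted and robust variants, which are not reproduced here, and the function name and data below are illustrative.

```python
# Hedged sketch of a one-sample observed-vs-expected test for recurrent events.
# Assumes a constant reference rate rho0 and a Poisson-type variance; the
# paper's weighted and robust versions are not shown.
import numpy as np
from scipy.stats import norm

def observed_vs_expected_test(event_counts, follow_up, rho0):
    """Z statistic comparing total observed events with the expected number
    under a constant reference event rate rho0 (events per unit time)."""
    event_counts = np.asarray(event_counts, dtype=float)
    follow_up = np.asarray(follow_up, dtype=float)
    expected = rho0 * follow_up                       # expected events per subject
    z = (event_counts.sum() - expected.sum()) / np.sqrt(expected.sum())
    p_two_sided = 2 * norm.sf(abs(z))
    return z, p_two_sided

# Example: five children followed for different lengths of time (years)
z, p = observed_vs_expected_test([3, 0, 2, 5, 1], [1.0, 0.5, 2.0, 1.5, 1.0], rho0=1.2)
print(f"Z = {z:.2f}, p = {p:.3f}")
```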

2.
Mixed Poisson models are often used for the design of clinical trials involving recurrent events since they provide measures of treatment effect based on rate and mean functions and accommodate between-individual heterogeneity in event rates. Planning studies based on these models can be challenging when there is little information available on the population event rates, or on the extent of heterogeneity characterized by the variance of individual-specific random effects. We consider methods for adaptive two-stage clinical trial design, which enable investigators to revise sample size estimates using data collected during the first phase of the study. We describe blinded procedures in which the group membership and treatment received by each individual are not revealed at the interim analysis stage, and a 'partially blinded' procedure in which group membership is revealed but not the treatment received by the groups. An EM algorithm is proposed for the interim analyses in both cases, and its performance is investigated through simulation. The work is motivated by the design of a study involving patients with immune thrombocytopenic purpura where the aim is to reduce bleeding episodes, and an illustrative application is given using data from a cardiovascular trial. Copyright © 2009 John Wiley & Sons, Ltd.
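The interim step in this design re-estimates the event rate and the between-patient heterogeneity from blinded (pooled) counts; the paper proposes an EM algorithm for this. As a simpler stand-in, the sketch below fits a gamma-mixed Poisson (negative binomial) model to pooled counts by direct maximum likelihood; the data, names, and the MLE substitute are assumptions for illustration, not the paper's procedure.

```python
# Hedged sketch: blinded interim estimation of the pooled event rate and the
# heterogeneity variance under a gamma-mixed Poisson (negative binomial) model,
# fitted by direct maximum likelihood rather than the paper's EM algorithm.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_binom_negloglik(params, counts, exposure):
    """Negative log-likelihood of counts ~ NB with mean rate*exposure and
    gamma-frailty variance phi (so Var = mu + phi * mu**2)."""
    log_rate, log_phi = params
    mu = np.exp(log_rate) * exposure
    k = 1.0 / np.exp(log_phi)                     # NB "size" parameter
    ll = (gammaln(counts + k) - gammaln(k) - gammaln(counts + 1)
          + k * np.log(k / (k + mu)) + counts * np.log(mu / (k + mu)))
    return -ll.sum()

counts = np.array([0, 2, 1, 0, 4, 3, 0, 1, 2, 5])    # illustrative blinded pooled counts
exposure = np.full(counts.shape, 0.5)                 # years of follow-up so far
fit = minimize(neg_binom_negloglik, x0=[0.0, 0.0], args=(counts, exposure))
rate_hat, phi_hat = np.exp(fit.x)
print(f"pooled rate = {rate_hat:.2f}/yr, heterogeneity variance = {phi_hat:.2f}")
```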

3.
In many biomedical studies, it is often of interest to model event count data over the study period. For some patients, we may not be able to follow them for the entire study period owing to informative dropout. The dropout time can potentially provide valuable insight on the rate of the events. We propose a joint semiparametric model for event count data and informative dropout time that allows for correlation through a gamma frailty. We develop efficient likelihood-based estimation and inference procedures. The proposed nonparametric maximum likelihood estimators are shown to be consistent and asymptotically normal. Furthermore, the asymptotic covariances of the finite-dimensional parameter estimates attain the semiparametric efficiency bound. Extensive simulation studies demonstrate that the proposed methods perform well in practice. We illustrate the proposed methods through an application to a clinical trial for bleeding and transfusion events in myelodysplastic syndrome. Copyright © 2017 John Wiley & Sons, Ltd.

4.
Bayesian signal detection methods, including the multi-item gamma Poisson shrinker (MGPS), assume a Poisson distribution for the number of reports. However, the database of the adverse event reporting system often has a large number of zero-count cells. A zero-inflated Poisson (ZIP) distribution can be more appropriate in this situation than a Poisson distribution. Few studies have considered ZIP-based models for Bayesian signal detection. In addition, most studies on Bayesian signal detection methods include simulation studies conducted assuming a gamma distribution for the prior. Herein, we extend the MGPS method using the ZIP model and apply various prior distributions. We evaluated the extended methods through an extensive simulation using more varied settings for the model and prior than in existing studies. We varied the total number of reports, the number of true signals, the relative reporting rate, and the probability of observing a true zero. The results show that as the probability of observing a zero count increased, methods based on the ZIP model outperformed the Poisson model in most cases. We also found that using the mixture log-normal prior resulted in more conservative detection than other priors when the relative reporting rate is high. Conversely, more signals were found when using mixture truncated normal distributions. We applied the Bayesian signal detection methods to data from the Korea Adverse Event Reporting System from 2012 to 2016.
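A brief sketch of the zero-inflated Poisson building block that this abstract uses to extend Poisson-based signal detection: the ZIP likelihood mixes a point mass at zero (probability pi) with a Poisson component. The Bayesian shrinkage layer of the extended MGPS method is not shown; the data and function names are illustrative assumptions.

```python
# Hedged sketch: maximum-likelihood fit of a zero-inflated Poisson model.
# Only the ZIP likelihood is shown, not the paper's Bayesian shrinkage step.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_negloglik(params, y):
    pi = expit(params[0])                 # zero-inflation probability
    lam = np.exp(params[1])               # Poisson mean of the count component
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),
                  np.log(1 - pi) + log_pois)
    return -ll.sum()

# Illustrative report counts for one drug-event cell pattern with many zeros
y = np.array([0, 0, 0, 0, 1, 0, 3, 0, 0, 2, 0, 0])
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"P(structural zero) = {pi_hat:.2f}, Poisson mean = {lam_hat:.2f}")
```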

5.
Detecting the association between a set of variants and a phenotype of interest is the first and an important step in genetic and genomic studies. Although it has attracted a large amount of attention in the scientific community and several related statistical approaches have been proposed in the literature, powerful and robust statistical tests are still highly desired and yet to be developed in this area. In this paper, we propose a powerful and robust association test, which combines information from individual single-nucleotide polymorphisms based on sequential independent burden tests. We compare the proposed approach with some popular tests through a comprehensive simulation study and a real data application. Our results show that, in general, the new test is more powerful; the gain in detection power can be substantial in many situations compared to other methods.

6.
Over the past few years, an increasing number of studies have identified rare variants that contribute to trait heritability. Due to the extreme rarity of some individual variants, gene-based association tests have been proposed to aggregate the genetic variants within a gene, pathway, or specific genomic region, as opposed to one-at-a-time single-variant analysis. In addition, in longitudinal studies, statistical power to detect disease susceptibility rare variants can be improved by jointly testing repeatedly measured outcomes, which better describes the temporal development of the trait of interest. However, the usual sandwich/model-based inference for sequencing studies with longitudinal outcomes and rare variants can produce deflated/inflated type I error rates without further corrections. In this paper, we develop a group of tests for rare-variant association based on outcomes with repeated measures. We propose new perturbation methods such that the type I error rate of the new tests is not only robust to misspecification of within-subject correlation, but also significantly improved for variants with extreme rarity in a study with a small or moderate sample size. Through extensive simulation studies, we illustrate that substantially higher power can be achieved by utilizing longitudinal outcomes and our proposed finite-sample adjustment. We illustrate our methods using data from the Multi-Ethnic Study of Atherosclerosis, exploring the association of repeated measures of blood pressure with rare and common variants based on exome sequencing data on 6,361 individuals.

7.
Testing the equality of 2 proportions for a control group versus a treatment group is a well‐researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one‐sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well‐known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial.

8.
Mortality rates are probably the most important indicator of the performance of kidney transplant centers. Motivated by the national evaluation of mortality rates at kidney transplant centers in the USA, we seek to categorize the transplant centers based on the mortality outcome. We describe a Dirichlet process model and a Dirichlet process mixture model with a half-Cauchy prior for the estimation of the risk-adjusted effects of the transplant centers, with strategies for improving the model performance, interpretability, and classification ability. We derive statistical measures and create graphical tools to rate transplant centers and identify outlying groups of centers with exceptionally good or poor performance. The proposed method was evaluated through simulation and then applied to assess kidney transplant centers from a national organ failure registry. Copyright © 2015 John Wiley & Sons, Ltd.

9.
In a vaccine safety trial, the primary interest is to demonstrate that the vaccine is sufficiently safe, rejecting the null hypothesis that the relative risk of an adverse event attributable to the new vaccine is above a prespecified value greater than one. We evaluate the exact probability of type I error of the likelihood score test, with sample size determined by normal approximation, by enumeration of the binomial outcomes in the rejection region, and show that it exceeds the nominal level. In the case of rare adverse events, we recommend the Poisson approximation as an alternative and develop the corresponding conditional and unconditional tests. We give sample size and power calculations for these tests. We also propose optimal randomization strategies which either (i) minimize the total number of adverse cases or (ii) minimize the expected number of subjects when the vaccine is unsafe. We illustrate the proposed methods using a hypothetical vaccine safety study.
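A minimal sketch of the enumeration idea in this abstract: the exact type I error of an approximate test is obtained by summing the null probabilities of all binomial outcomes falling in the rejection region. The paper studies the likelihood score test; as a stand-in, the rejection rule below is a simple Wald test of H0: RR >= R0 on the log scale with a continuity correction, and the sample sizes and grid are illustrative assumptions.

```python
# Hedged sketch: exact type I error of an asymptotic test for H0: p1/p0 >= r0,
# computed by enumerating all binomial outcomes in the rejection region.
import numpy as np
from scipy.stats import binom, norm

def rejects(x1, n1, x0, n0, r0, alpha=0.05):
    """Reject H0: p1/p0 >= r0 (i.e. conclude the vaccine is sufficiently safe)
    using a Wald statistic on the log relative risk with 0.5 corrections."""
    p1, p0 = (x1 + 0.5) / (n1 + 1), (x0 + 0.5) / (n0 + 1)
    se = np.sqrt((1 - p1) / ((n1 + 1) * p1) + (1 - p0) / ((n0 + 1) * p0))
    z = (np.log(p1 / p0) - np.log(r0)) / se
    return z < -norm.ppf(1 - alpha)

def exact_size(n1, n0, r0, p0_grid, alpha=0.05):
    """Maximum exact rejection probability on the null boundary p1 = r0 * p0."""
    x1, x0 = np.meshgrid(np.arange(n1 + 1), np.arange(n0 + 1), indexing="ij")
    rej = rejects(x1, n1, x0, n0, r0, alpha)          # boolean grid of outcomes
    sizes = []
    for p0 in p0_grid:
        joint = np.outer(binom.pmf(np.arange(n1 + 1), n1, r0 * p0),
                         binom.pmf(np.arange(n0 + 1), n0, p0))
        sizes.append(joint[rej].sum())
    return max(sizes)

print(exact_size(n1=100, n0=100, r0=2.0, p0_grid=np.linspace(0.01, 0.4, 20)))
```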

10.
BACKGROUND: Application of case-crossover designs provides an alternative to time-series analysis for analyzing the health-related effects of air pollution. Although some case-crossover studies can control for trend and seasonality by design, to date they have been analyzed as matched case-control studies. Such analyses may exhibit bias and lower statistical efficiency than traditional time series analyzed with Poisson regression. METHODS: In this article, case-crossover studies are treated as cohort studies in which each subject is observed for a short period of time before and/or after the event, thus making it possible to analyze them with Andersen-Gill and generalized linear mixed models. We conducted a simulation study to compare the behavior of these models applied to case-crossover designs with that of time series analyzed with Poisson regression and of case-crossover designs analyzed by conditional logistic regression. To this end, we created a random variable that follows a Poisson distribution with a low (2/day) or high (22/day) mean number of events. This variable is a function of an unobserved confounding variable (which introduces trend and seasonality) and of data on small particulate matter (PM10) from Barcelona. In addition, scenarios were created to assess the effect exerted on the exposure by autocorrelation and by the magnitude of the pollutant coefficient. RESULTS: The full semisymmetric design analyzed with generalized linear mixed models yields good coverage and high statistical power for air-pollution effect magnitudes close to the real values, but shows bias for high effect magnitudes. This bias seems to be attributable to autocorrelation in the exposure variable. CONCLUSIONS: Longitudinal approaches applied to case-crossover designs may prove useful for analyzing the acute effects of environmental exposures.

11.
Meta-analysis has become a key component of well-designed genetic association studies due to the boost in statistical power achieved by combining results across multiple samples of individuals and the need to validate observed associations in independent studies. Meta-analyses of genetic association studies based on multiple SNPs and traits are subject to the same multiple testing issues as single-sample studies, but it is often difficult to adjust accurately for the multiple tests. Procedures such as Bonferroni may control the type I error rate but will generally provide an overly harsh correction if SNPs or traits are correlated. Depending on study design, availability of individual-level data, and computational requirements, permutation testing may not be feasible in a meta-analysis framework. In this article, we present methods for adjusting for multiple correlated tests under several study designs commonly employed in meta-analyses of genetic association tests. Our methods are applicable both to prospective meta-analyses, in which several samples of individuals are analyzed with the intent to combine results, and to retrospective meta-analyses, in which results from published studies are combined, including situations in which (1) individual-level data are unavailable, and (2) different sets of SNPs are genotyped in different studies due to random missingness or a two-stage design. We show through simulation that our methods accurately control the type I error rate and achieve improved power over multiple testing adjustments that do not account for correlation between SNPs or traits.
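One standard way to adjust for correlated tests, which illustrates the general problem this article addresses (though not necessarily the article's own method), is to simulate the null joint distribution of the test statistics from a multivariate normal with the between-test correlation matrix and read off an adjusted threshold from the max-|Z| (equivalently, min-p) distribution. The correlation matrix below is made up for illustration.

```python
# Hedged sketch: correlation-aware multiple-testing threshold via simulation of
# correlated null Z statistics, compared with the Bonferroni cutoff.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.8, 0.3],
                 [0.8, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])            # assumed correlation among 3 SNP tests

null_z = rng.multivariate_normal(np.zeros(3), corr, size=100_000)
max_abs_z = np.abs(null_z).max(axis=1)         # null distribution of max |Z|

alpha = 0.05
z_adj = np.quantile(max_abs_z, 1 - alpha)      # family-wise adjusted per-test cutoff
print(f"adjusted |Z| cutoff: {z_adj:.2f} "
      f"(Bonferroni would use {norm.ppf(1 - alpha / (2 * 3)):.2f})")
```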

12.
Statistical inference based on correlated count measurements is frequently performed in biomedical studies. Most existing sample size calculation methods for count outcomes are developed under the Poisson model. Deviation from the Poisson assumption (equality of mean and variance) has been widely documented in practice, which indicates an urgent need for sample size methods with more realistic assumptions to ensure valid experimental design. In this study, we investigate sample size calculation for clinical trials with correlated count measurements based on the negative binomial distribution. This approach is flexible enough to accommodate overdispersion and unequal measurement intervals, as well as arbitrary randomization ratios, missing data patterns, and correlation structures. Importantly, the derived sample size formulas have closed forms both for the comparison of slopes and for the comparison of time-averaged responses, which greatly reduces the burden of implementation in practice. We conducted extensive simulations to demonstrate that the proposed method maintains the nominal levels of power and type I error over a wide range of design configurations. We illustrate the application of this approach using a real epilepsy trial.
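A simulation-based power check for the setting this abstract addresses, comparing event rates between two arms with overdispersed (negative binomial) counts; the paper's closed-form sample-size formulas are not reproduced, and the rates, dispersion, sample size, and the Wald test on log rates are illustrative assumptions.

```python
# Hedged sketch: empirical power for a two-arm comparison of overdispersed
# count outcomes, with negative binomial counts generated as a gamma-Poisson
# mixture and a delta-method Wald test on the log rate ratio.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

def simulate_nb(n, mean, dispersion):
    """NB counts via a gamma-Poisson mixture: Var = mean + dispersion * mean**2."""
    frailty = rng.gamma(shape=1 / dispersion, scale=dispersion, size=n)
    return rng.poisson(mean * frailty)

def power(n_per_arm, rate0, rate1, dispersion, alpha=0.05, n_sim=2000):
    hits = 0
    for _ in range(n_sim):
        y0 = simulate_nb(n_per_arm, rate0, dispersion)
        y1 = simulate_nb(n_per_arm, rate1, dispersion)
        m0, m1 = y0.mean(), y1.mean()
        if m0 == 0 or m1 == 0:
            continue                               # skip degenerate samples
        # delta-method variance of the log rate ratio
        var = y0.var(ddof=1) / (n_per_arm * m0**2) + y1.var(ddof=1) / (n_per_arm * m1**2)
        z = np.log(m1 / m0) / np.sqrt(var)
        hits += abs(z) > norm.ppf(1 - alpha / 2)
    return hits / n_sim

print(power(n_per_arm=150, rate0=2.0, rate1=1.5, dispersion=0.5))
```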

13.
This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test to compare the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rates, and time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided.

14.
We elicit time and risk preferences for kidney transplantation from the entire population of patients of the largest Italian transplant centre using a discrete choice experiment (DCE). We measure patients’ willingness-to-wait (WTW) for receiving a kidney with one-year longer expected graft survival, or a low risk of complication. Using a mixed logit in WTW-space model, we find heterogeneity in patients’ preferences. Our model allows WTW to vary with patients’ age and duration of dialysis. The results suggest that WTW correlates with age and duration of dialysis, and that accounting for patients’ preferences in the design of kidney allocation protocols could increase their welfare. The implication for transplant practice is that eliciting patients’ preferences could help in the allocation of “non-ideal” kidneys.

15.
In this paper, we describe an adjusted method to facilitate non-inferiority tests in a three-arm design. While the methodology is readily available in the situation of homogeneous group variances, the adjusted method will also maintain the alpha-level in the presence of heteroscedasticity. We propose explicit criteria for an optimal allocation. Depending on the pattern of heterogeneity, remarkably unbalanced designs are power optimal. We will apply the method to a randomized clinical trial and a toxicological experiment.

16.
A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we will be able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models under different scenarios. The RCPM and the censored Poisson model are applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor.

17.
This article presents a new approach to the problem of deriving an optimal design for a randomized group sequential clinical trial based on right-censored event times. We are motivated by the fact that, if the proportional hazards assumption is not met, then a conventional design's actual power can differ substantially from its nominal value. We combine Bayesian decision theory, Bayesian model selection, and forward simulation (FS) to obtain a group sequential procedure that maintains the targeted false-positive rate and power under a wide range of true event time distributions. At each interim analysis, the method adaptively chooses the most likely model and then applies the decision bounds that are optimal under the chosen model. A simulation study comparing this design with three conventional designs shows that, over a wide range of distributions, our proposed method performs at least as well as each conventional design, and in many cases it provides a much smaller trial.

18.
19.
While data sets based on dense genome scans are becoming increasingly common, there are many theoretical questions that remain unanswered. How can a large number of markers in high linkage disequilibrium (LD) and rare disease variants be simulated efficiently? How should markers in high LD be analyzed: individually or jointly? Are there fast and simple methods to adjust for correlation of tests? What is the power penalty for conservative Bonferroni adjustments? Assuming that association scans are adequately powered, we attempt to answer these questions. Performance of single‐point and multipoint tests, and their hybrids, is investigated using two simulation designs. The first simulation design uses theoretically derived LD patterns. The second design uses LD patterns based on real data. For the theoretical simulations we used polychoric correlation as a measure of LD to facilitate simulation of markers in LD and rare disease variants. Based on the simulation results of the two studies, we conclude that statistical tests assuming only additive genotype effects (i.e. Armitage and especially multipoint T2) should be used cautiously due to their suboptimal power in certain settings. A false discovery rate (FDR)‐adjusted combination of tests for additive, dominant and recessive effects had close to optimal power. However, the common genotypic χ2 test performed adequately and could be used in lieu of the FDR combination. While some hybrid methods yield (sometimes spectacularly) higher power they are computationally intensive. We also propose an “exact” method to adjust for multiple testing, which yields nominally higher power than the Bonferroni correction. Genet. Epidemiol. 2008. © 2008 Wiley‐Liss, Inc.

20.
Statistical methods for the analysis of recurrent events are often evaluated in simulation studies. A factor rarely varied in such studies is the underlying event generation process. If the relative performance of statistical methods differs across generation processes, then studies based upon one process may mislead. This paper describes the simulation of recurrent event data using four models of the generation process: Poisson, mixed Poisson, autoregressive, and Weibull. For each model, four commonly used statistical methods for the analysis of recurrent events (Cox's proportional hazards method, the Andersen-Gill method, negative binomial regression, and the Prentice-Williams-Peterson method) were applied to 200 simulated data sets, and the mean estimates, standard errors, and confidence intervals were obtained. All methods performed well for the Poisson process. Otherwise, negative binomial regression performed well only for the mixed Poisson process, as did the Andersen-Gill method with a robust estimate of the standard error. The Prentice-Williams-Peterson method performed well only for the autoregressive and Weibull processes. Thus, the relative performance of the statistical methods depended upon the model of event generation used to simulate the data. In conclusion, it is important that simulation studies of statistical methods for recurrent events include simulated data sets based upon a range of models for event generation.
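A minimal sketch of two of the four generation processes compared in this abstract: a homogeneous Poisson process and a gamma-frailty (mixed) Poisson process for a single subject. The autoregressive and Weibull generators and the four analysis methods are not shown, and all parameter values are illustrative.

```python
# Hedged sketch: simulating recurrent event times under a homogeneous Poisson
# process and a gamma-frailty mixed Poisson process.
import numpy as np

rng = np.random.default_rng(7)

def poisson_process(rate, follow_up):
    """Event times of a homogeneous Poisson process on (0, follow_up]."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1 / rate)            # exponential gap times
        if t > follow_up:
            return np.array(times)
        times.append(t)

def mixed_poisson_process(rate, follow_up, frailty_var):
    """Each subject gets a gamma frailty (mean 1) multiplying the common rate."""
    frailty = rng.gamma(shape=1 / frailty_var, scale=frailty_var)
    return poisson_process(rate * frailty, follow_up)

# one simulated subject per process, followed for 2 time units
print("Poisson:      ", poisson_process(rate=1.5, follow_up=2.0))
print("mixed Poisson:", mixed_poisson_process(rate=1.5, follow_up=2.0, frailty_var=0.8))
```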
