1.
Studies of individuals sampled in unbalanced clusters have become common in health services and epidemiological research, but available tools for power/sample size estimation and optimal design are currently limited. This paper presents and illustrates power estimation formulas for t-test comparisons of the effect of a cluster-level exposure on continuous outcomes in unbalanced studies with unequal numbers of clusters and/or unequal numbers of subjects per cluster in each exposure arm. Iterative application of these power formulas yields the minimal sample size needed and/or the minimal detectable difference. SAS subroutines implementing these algorithms are given in the Appendices. When feasible, power is optimized by having the same number of clusters in each arm (k_A = k_B) and, irrespective of the numbers of clusters in each arm, the same total number of subjects in each arm (n_A k_A = n_B k_B). Cost-beneficial upper limits for the number of subjects per cluster may be approximately (5/ρ) − 5 or less, where ρ is the intraclass correlation. The methods presented here for simple cluster designs may be extended to some settings involving complex hierarchical weighted cluster samples.
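To make the described calculation concrete, the sketch below (Python rather than the paper's SAS subroutines; the design-effect variance form and parameter names are assumptions) computes power for a cluster-level t-test with unequal numbers of clusters and cluster sizes per arm, and applies it iteratively to find a minimal number of clusters.

```python
# A minimal sketch (not the paper's SAS subroutines) of a cluster-level t-test
# power calculation under the usual design-effect variance 1 + (n - 1) * rho.
# delta is the mean difference, sigma the total SD, rho the intraclass
# correlation, k_A/k_B the clusters per arm, n_A/n_B the subjects per cluster.
from scipy import stats

def cluster_ttest_power(delta, sigma, rho, k_A, n_A, k_B, n_B, alpha=0.05):
    var_A = sigma**2 * (1 + (n_A - 1) * rho) / (n_A * k_A)   # variance of arm-A mean
    var_B = sigma**2 * (1 + (n_B - 1) * rho) / (n_B * k_B)   # variance of arm-B mean
    df = k_A + k_B - 2                                       # cluster-level degrees of freedom
    ncp = delta / (var_A + var_B) ** 0.5                     # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)                  # two-sided critical value
    return 1 - stats.nct.cdf(t_crit, df, ncp)

# Iterative use: smallest number of clusters per arm giving 80% power
k = 2
while cluster_ttest_power(0.3, 1.0, 0.05, k, 20, k, 20) < 0.80:
    k += 1
print(k)
```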
2.
This paper is concerned with evaluating whether an interaction between two sets of risk factors for a binary trait is removable and, when it is removable, fitting a parsimonious additive model using a suitable link function to estimate the disease odds (on the natural logarithm scale). Statisticians define the term ‘interaction’ as a departure from additivity in a linear model on a specific scale on which the data are measured. Certain interactions may be eliminated via a transformation of the outcome such that the relationship between the risk factors and the outcome is additive on the transformed scale. Such interactions are known as removable interactions. We develop a novel test statistic for detecting the presence of a removable interaction in case–control studies. We consider the Guerrero and Johnson family of transformations and show that this family constitutes an appropriate link function for fitting an additive model when an interaction is removable. We use simulation studies to examine the type I error and power of the proposed test and to show that, when an interaction is removable, an additive model based on the Guerrero and Johnson link function leads to more precise estimates of the disease odds parameters and a better fit. We illustrate the proposed test and use of the transformation by using case–control data from three published studies. Finally, we indicate how one can check that, after transformation, no further interaction is significant. Copyright © 2012 John Wiley & Sons, Ltd.
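As an illustration of a removable interaction, the sketch below uses hypothetical 2x2x2 counts and takes the Guerrero–Johnson family to be the Box–Cox-type transform of the odds (an assumption about its form, not a reproduction of the paper's test statistic); it searches for the transformation parameter at which the interaction contrast of transformed cell odds vanishes.

```python
# Grid search for a lambda that removes the interaction on the transformed
# odds scale. Counts are hypothetical; gj_transform assumes the family is a
# Box-Cox-type transform of the odds.
import numpy as np

def gj_transform(odds, lam):
    return np.log(odds) if lam == 0 else (odds**lam - 1) / lam

# Hypothetical case/control counts for exposure cells (x1, x2) in {0,1}^2
cases    = {(0, 0): 40,  (1, 0): 70,  (0, 1): 65,  (1, 1): 160}
controls = {(0, 0): 160, (1, 0): 130, (0, 1): 135, (1, 1): 40}
odds = {c: cases[c] / controls[c] for c in cases}

def interaction_contrast(lam):
    g = {c: gj_transform(odds[c], lam) for c in odds}
    return g[(1, 1)] - g[(1, 0)] - g[(0, 1)] + g[(0, 0)]

grid = np.linspace(-2, 2, 401)
lam_hat = grid[np.argmin(np.abs([interaction_contrast(l) for l in grid]))]
print(f"lambda approximately removing the interaction: {lam_hat:.2f}")
```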
3.
Dynamic allocation has received considerable attention since it was first proposed in the 1970s as an alternative means of allocating treatments in clinical trials that helps secure the balance of prognostic factors across treatment groups. The purpose of this paper is to present a generalized multidimensional dynamic allocation method that simultaneously balances treatment assignments at three key levels: within the overall study, within each level of each prognostic factor, and within each stratum, that is, each combination of levels of different factors. Further, it offers capabilities for unbalanced and adaptive trial designs. The treatment-balancing performance of the proposed method is investigated through simulations that compare multidimensional dynamic allocation with traditional stratified block randomization and the Pocock–Simon method. On the basis of these results, we conclude that this generalized multidimensional dynamic allocation method is an improvement over conventional dynamic allocation methods and is flexible enough to be applied in most trial settings, including Phase I, II, and III trials. Copyright © 2012 John Wiley & Sons, Ltd.
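The sketch below is a generic illustration of this kind of three-level balancing (an assumption for illustration, not the authors' published algorithm): a new patient is assigned to the arm that minimizes a Pocock–Simon-style imbalance score summed over the overall study, each prognostic-factor level, and the patient's stratum, with a biased coin preserving randomness.

```python
# Generic three-level dynamic allocation sketch: score each candidate arm by
# the imbalance it would create overall, within each factor level, and within
# the patient's stratum, then favour the score-minimizing arm.
import random
from collections import defaultdict

ARMS = ["A", "B"]
overall = defaultdict(int)        # arm -> count
by_factor = defaultdict(int)      # (factor, level, arm) -> count
by_stratum = defaultdict(int)     # (stratum, arm) -> count

def imbalance(counts):
    # Pocock-Simon style distance: range of counts across arms
    return max(counts) - min(counts)

def assign(patient_levels, p_best=0.8):
    """patient_levels: dict factor -> level, e.g. {'sex': 'F', 'site': 3}."""
    stratum = tuple(sorted(patient_levels.items()))
    scores = {}
    for arm in ARMS:
        score = imbalance([overall[a] + (a == arm) for a in ARMS])
        for f, lev in patient_levels.items():
            score += imbalance([by_factor[(f, lev, a)] + (a == arm) for a in ARMS])
        score += imbalance([by_stratum[(stratum, a)] + (a == arm) for a in ARMS])
        scores[arm] = score
    best = min(scores, key=scores.get)
    choice = best if random.random() < p_best else random.choice([a for a in ARMS if a != best])
    overall[choice] += 1
    for f, lev in patient_levels.items():
        by_factor[(f, lev, choice)] += 1
    by_stratum[(stratum, choice)] += 1
    return choice

print(assign({"sex": "F", "site": 3}))
```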
4.
Meta-analysis is now an essential tool for genetic association studies, allowing large studies to be combined and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with varying case–control ratios. Here, we investigate the power loss of the standard meta-analysis methods for unbalanced studies and propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score statistics that accurately approximate the joint score statistics computed from combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power lost by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score statistics with corrections for population stratification can be used to construct both single-variant and gene-level association tests, providing a useful framework for well-powered, convenient, cross-study analyses.
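For context, the sketch below shows the standard fixed-effect (inverse-variance) meta-analysis whose behaviour under unbalanced designs the paper improves upon; the study-level estimates are hypothetical, and the paper's improved meta-score statistics are not reproduced.

```python
# Standard inverse-variance fixed-effect meta-analysis of per-study estimates
# (beta, se); this is the baseline approach, not the paper's improved method.
import numpy as np
from scipy import stats

def inverse_variance_meta(betas, ses):
    w = 1.0 / np.asarray(ses) ** 2
    beta = np.sum(w * betas) / np.sum(w)   # pooled log odds ratio
    se = np.sqrt(1.0 / np.sum(w))
    z = beta / se
    p = 2 * stats.norm.sf(abs(z))
    return beta, se, p

# Three hypothetical unbalanced case-control studies of one variant
print(inverse_variance_meta(betas=[0.12, 0.25, 0.08], ses=[0.06, 0.15, 0.10]))
```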
5.
The collection of repeated measurements over time on an experimental unit, to study how a characteristic changes over time, is common in biological and clinical studies. Data of this type are often referred to as growth curve data or repeated measures data. Situations arise in which one is interested in estimating the time to an event based on a characteristic that indicates progression toward that event. The assessment of the progression of labor during childbirth based on cervical dilation is one such example: increasing dilation of the cervix indicates progression toward delivery. Based on how long one has been in labor and an estimate of the time to complete dilation, one might make crucial decisions, such as whether to administer a drug or perform a C-section. Here a repeated-measures approach is developed to model the time to the event. The parameters of the model are estimated by maximum likelihood. A general model is developed for a class of data structures, and a nonlinear model is developed specifically for the labor progression data. Simulations are performed to assess the methodology, and conditions are suggested for predicting the time to an event. Copyright © 2010 John Wiley & Sons, Ltd.
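A simplified single-subject illustration of the prediction task follows (not the paper's mixed-model maximum-likelihood approach; the logistic curve shape, the fixed 10 cm ceiling, and the measurement values are assumptions): fit a nonlinear dilation curve to repeated measurements and invert it to predict when a target dilation is reached.

```python
# Single-subject sketch: fit a logistic dilation curve with the ceiling fixed
# at full dilation (10 cm) and invert it to predict the time to a target
# dilation. Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def dilation(t, rate, t_mid, d_max=10.0):
    # logistic progression of cervical dilation toward d_max
    return d_max / (1 + np.exp(-rate * (t - t_mid)))

hours = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.0])
cm    = np.array([2.0, 2.5, 3.5, 5.0, 7.0, 8.0])

(rate, t_mid), _ = curve_fit(dilation, hours, cm, p0=[0.5, 4.0])

target = 9.0                                            # dilation (cm) to predict
t_event = t_mid - np.log(10.0 / target - 1) / rate      # invert the logistic curve
print(f"predicted time to {target} cm: {t_event:.1f} hours")
```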
6.
Minimization is a dynamic randomization technique that has been widely used in clinical trials to achieve balance of prognostic factors across treatment groups, but it has most often been used in the setting of equal treatment allocation. Although unequal treatment allocation is frequently encountered in clinical trials, an appropriate minimization procedure for such trials has not been published. The purpose of this paper is to present novel strategies for applying minimization methodology to such trials. Two minimization techniques are proposed and compared by probability calculation and simulation studies. In the first method, called naïve minimization, probability assignment is based on a simple modification of the original minimization algorithm that does not account for unequal allocation ratios. In the second method, called biased-coin minimization (BCM), probability assignment is based on allocation ratios and optimized to achieve an ‘unbiased’ target allocation ratio. The performance of the two methods is investigated in various trial settings, including different numbers of treatments, prognostic factors, and sample sizes. The relative merits of different distance metrics are also explored. On the basis of the results, we conclude that BCM is the preferable randomization method for clinical trials involving unequal treatment allocation. The choice of distance metric slightly affects the performance of minimization and may be optimized according to the specific features of a trial. Copyright © 2009 John Wiley & Sons, Ltd.
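The sketch below is a generic illustration of how minimization might be carried over to an unequal (here 2:1) allocation; it is an assumption about the general idea, not the authors' BCM algorithm. Marginal counts are normalized by the target ratio before imbalance is scored, and when there is no imbalance to correct the coin falls back to the target allocation probabilities.

```python
# Generic unequal-allocation minimization sketch: counts are normalized by the
# target ratio so a 2:1 split scores as balanced; assignment uses a biased coin.
import random
from collections import defaultdict

TARGET = {"A": 2, "B": 1}               # 2:1 target allocation
counts = defaultdict(int)               # (factor, level, arm) -> count

def assign(patient_levels, p_best=0.8):
    scores = {}
    for arm in TARGET:
        s = 0.0
        for f, lev in patient_levels.items():
            norm = [(counts[(f, lev, a)] + (a == arm)) / TARGET[a] for a in TARGET]
            s += max(norm) - min(norm)
        scores[arm] = s
    arms = sorted(TARGET, key=scores.get)
    if scores[arms[0]] == scores[arms[1]]:
        # no imbalance to correct: fall back to the target allocation ratio
        probs = [TARGET[a] / sum(TARGET.values()) for a in arms]
    else:
        probs = [p_best, 1 - p_best]    # biased coin toward the score-minimizing arm
    choice = random.choices(arms, weights=probs)[0]
    for f, lev in patient_levels.items():
        counts[(f, lev, choice)] += 1
    return choice

print(assign({"sex": "M", "stage": "II"}))
```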
7.
Shortages in health workforce investment and uneven geographic distribution are the main reasons why poor populations in low-income countries cannot access health services. This paper reviews recent trends in international health workforce policy research, discusses several key issues in strengthening human resources for health, and outlines possible measures aimed at addressing health workforce challenges.
8.
Jointly modeling correlated biomarkers has been shown to improve the efficiency of parameter estimates, leading to better clinical decisions. In this paper, we apply a joint modeling approach to a unique diabetes dataset in which blood glucose (continuous) and urine glucose (ordinal) measures of disease severity are known to be correlated. The postulated joint model assumes that the outcomes arise from distributions in the exponential family and are therefore modeled as a multivariate generalized linear mixed-effects model linked through correlated and/or shared random effects. A Markov chain Monte Carlo Bayesian approach is used to approximate the posterior distribution and draw inference on the parameters. This methodology provides a flexible framework that accounts for the hierarchical structure of the highly unbalanced data as well as the association between the two outcomes. The results indicate improved efficiency of parameter estimates when blood glucose and urine glucose are modeled jointly. Moreover, simulation studies show that estimates obtained from the joint model are consistently less biased and more efficient than those from separate models.
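A small simulation sketch of the data structure this joint model targets follows (purely illustrative; parameter values are assumptions, and no MCMC fitting is shown): a continuous blood-glucose outcome and an ordinal urine-glucose outcome are generated from a shared patient-level random effect, which is what induces their correlation.

```python
# Simulate repeated continuous (blood glucose) and ordinal (urine glucose)
# outcomes that share a patient-level random intercept.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_visits = 200, 4
b = rng.normal(0.0, 1.0, n_patients)        # shared patient-level random effects

rows = []
for i in range(n_patients):
    for t in range(n_visits):
        blood = 6.0 + 0.3 * t + 1.5 * b[i] + rng.normal(0, 0.8)    # continuous outcome
        eta = -1.0 + 0.2 * t + 1.0 * b[i]                          # latent scale for ordinal outcome
        cutpoints = np.array([-0.5, 0.8, 2.0])
        urine = int(np.sum(eta + rng.logistic(0, 1) > cutpoints))  # ordinal category 0..3
        rows.append((i, t, blood, urine))

blood_vals = np.array([r[2] for r in rows])
urine_vals = np.array([r[3] for r in rows])
print(np.corrcoef(blood_vals, urine_vals)[0, 1])   # correlation induced by the shared effect
```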
9.
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster-specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward–Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward–Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
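A minimal sketch of the best-performing two-stage approach described above follows (an illustration of the general idea under crude ANOVA-type variance-component estimates, not the authors' exact estimator): cluster means are compared in a weighted analysis, each mean weighted by the inverse of its estimated theoretical variance σ_b² + σ_w²/n_i, with the between-cluster component constrained to be nonnegative.

```python
# Weighted two-stage analysis of a cluster-randomized trial: cluster means
# weighted by 1 / (sigma_b^2 + sigma_w^2 / n_i), with sigma_b^2 truncated at 0.
import numpy as np
from scipy import stats

def two_stage_weighted_test(y, cluster, arm):
    y, cluster, arm = map(np.asarray, (y, cluster, arm))
    ids = np.unique(cluster)
    means = np.array([y[cluster == c].mean() for c in ids])
    sizes = np.array([np.sum(cluster == c) for c in ids])
    arms  = np.array([arm[cluster == c][0] for c in ids])

    sigma_w2 = np.mean([y[cluster == c].var(ddof=1) for c in ids])            # within-cluster
    between = np.mean([means[arms == a].var(ddof=1) for a in np.unique(arms)])
    sigma_b2 = max(between - sigma_w2 * np.mean(1.0 / sizes), 0.0)            # constrained >= 0

    w = 1.0 / (sigma_b2 + sigma_w2 / sizes)                                   # cluster-mean weights
    est = {a: np.sum(w[arms == a] * means[arms == a]) / np.sum(w[arms == a])
           for a in np.unique(arms)}
    var = {a: 1.0 / np.sum(w[arms == a]) for a in np.unique(arms)}
    a0, a1 = np.unique(arms)
    t = (est[a1] - est[a0]) / np.sqrt(var[a0] + var[a1])
    df = len(ids) - 2                                                         # cluster-level df
    return t, 2 * stats.t.sf(abs(t), df)

# Example: two arms, 6 clusters each, unbalanced cluster sizes, true effect 0.4
rng = np.random.default_rng(0)
y, cl, ar = [], [], []
for i, n in enumerate([5, 8, 12, 20, 7, 15] * 2):
    b = rng.normal(0, 0.5)
    y += list((0.4 if i >= 6 else 0.0) + b + rng.normal(0, 1, n))
    cl += [i] * n
    ar += [int(i >= 6)] * n
print(two_stage_weighted_test(y, cl, ar))
```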