Similar Documents
20 similar documents found (search time: 78 ms)
4.
3-Carboxy-4-methyl-5-propyl-2-furanpropanoic acid (CMPF) is a known metabolite of furan fatty acids and was first described as a urofuran fatty acid, having been found in the urine of humans and other species after consumption of furan fatty acids or foods containing them. More recently, CMPF has been identified as a highly prominent metabolite following consumption of fish oil, fish oil fractions and fish-rich diets, and can be regarded as a biomarker of intake of oil-rich fish or fish oil. As furan fatty acids are known to occur at low levels in fish and fish oil, it is possible that the CMPF in plasma arises from these furan fatty acids. On structural grounds, this is a more likely explanation than CMPF being a direct metabolite of long-chain marine omega-3 fatty acids. Recent studies in high fat-fed mice given purified CMPF suggest that CMPF might contribute to the improved metabolic effects observed following consumption of long-chain marine omega-3 fatty acids, but much remains to be learned about the relationship between CMPF and health.

6.
While intent-to-treat (ITT) analysis is widely accepted for superiority trials, there remains debate about its role in non-inferiority trials. It has often been said that ITT analysis tends to be anti-conservative in demonstrating non-inferiority, suggesting that per-protocol (PP) analysis may be preferable for non-inferiority trials despite the inherent bias of such analyses. We propose using randomization-based g-estimation analyses, which preserve the integrity of randomization more effectively than the more widely used PP analyses. Simulation studies were conducted to investigate the impact of different types of treatment changes on the conservatism or anti-conservatism of ITT, PP, and g-estimation analyses for a time-to-event outcome. The ITT results were anti-conservative in all simulations; anti-conservatism increased with the percentage of treatment changes and was more pronounced for outcome-dependent treatment changes. PP analysis, in which treatment-switching cases were censored at the time of treatment change, maintained the type I error rate near the nominal level for outcome-independent treatment changes, whereas for outcome-dependent changes it was either conservative or anti-conservative depending on the mechanism and extent of treatment changes. G-estimation analysis maintained the type I error rate near the nominal level even for outcome-dependent treatment changes, although the analysis uses no information on unmeasured covariates. Thus, randomization-based g-estimation analyses should be used to supplement the more conventional ITT and PP analyses, especially for non-inferiority trials. Copyright © 2010 John Wiley & Sons, Ltd.

7.
Parent-of-origin effects have been suggested as one plausible source of the heritability left unexplained by genome-wide association studies. Here, we consider a case-control mother-child pair design for studying parent-of-origin effects of offspring genes on neonatal/early-life disorders or pregnancy-related conditions. In contrast to the standard case-control design, the case-control mother-child pair design contains valuable parental information and therefore permits powerful assessment of parent-of-origin effects. We assume that the region under study is in Hardy-Weinberg equilibrium, that inheritance is Mendelian at the diallelic locus under study, that mating in the source population is random, and that the SNP under study is not related to risk for the phenotype merely through linkage disequilibrium (LD) with other SNPs. Using a maximum likelihood method that simultaneously assesses likely parental sources and estimates the effect sizes of the two offspring genotypes, we investigate the extent to which incorporating genotype data for adjacent markers in LD with the test locus increases power for testing parent-of-origin effects. Our method does not need to assume the outcome is rare, because it exploits supplementary information on phenotype prevalence. Analysis with simulated SNP data indicates that incorporating genotype data for adjacent markers greatly helps recover the parent-of-origin information, which can substantially improve statistical power for detecting parent-of-origin effects. We demonstrate our method by examining parent-of-origin effects of the gene PPARGC1A on low birth weight using data from 636 mother-child pairs in the Jerusalem Perinatal Study.

8.
This study compares the five-level EuroQol five-dimension questionnaire (EQ-5D-5L) crosswalks and the 5L value sets for England, the Netherlands, and Spain, and explores the implications of using one or the other for the results of cost-utility analyses. Data from two randomized controlled trials in depression and diabetes were used. Utility value distributions were compared, and mean differences in utility values between the EQ-5D-5L crosswalk and the 5L value set were described by country. Quality-adjusted life years (QALYs) were calculated using the area-under-the-curve method. Incremental cost-effectiveness ratios (ICERs) were calculated, and uncertainty around the ICERs was estimated using bootstrapping and shown graphically in cost-effectiveness acceptability curves. For all countries investigated, utility value distributions differed between the EQ-5D-5L crosswalk and the 5L value set. In both case studies, mean utility values were lower for the EQ-5D-5L crosswalk than for the 5L value set in England and Spain, but higher in the Netherlands. However, these differences in utility values did not translate into relevant differences in incremental QALYs or in the interventions' probability of cost-effectiveness across utility estimation methods. Thus, our results suggest that EQ-5D-5L crosswalks and 5L value sets can be used interchangeably in patients affected by mild or moderate conditions. Further research is needed to establish whether these findings generalize to economic evaluations among severely ill patients.
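The area-under-the-curve QALY calculation mentioned in this abstract reduces to a trapezoid rule over a patient's utility trajectory, after which an ICER is incremental cost divided by incremental QALYs. A minimal sketch, using made-up utility values and costs rather than data from either trial:

```python
# Minimal sketch of area-under-the-curve QALYs and an ICER.
# All numbers below are illustrative, not trial data.
def qalys(times_years, utilities):
    """Trapezoid-rule area under the utility curve."""
    area = 0.0
    for (t0, u0), (t1, u1) in zip(zip(times_years, utilities),
                                  zip(times_years[1:], utilities[1:])):
        area += (t1 - t0) * (u0 + u1) / 2.0
    return area

# Hypothetical utilities at baseline, 6 and 12 months under two scorings
t = [0.0, 0.5, 1.0]
qaly_crosswalk = qalys(t, [0.60, 0.70, 0.75])
qaly_5l_value_set = qalys(t, [0.68, 0.76, 0.80])

# ICER vs a comparator: incremental cost / incremental QALYs
delta_cost = 1500.0
delta_qaly = qaly_5l_value_set - 0.70   # 0.70 = hypothetical comparator QALYs
icer = delta_cost / delta_qaly
print(round(qaly_crosswalk, 4), round(qaly_5l_value_set, 4), round(icer, 1))
```

The comparison the paper makes amounts to rerunning this computation with each scoring's utilities and checking whether the ICER, and the probability of cost-effectiveness, change materially.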

9.
Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
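The numerical-integration route can be sketched in a few lines: place a log-normal prior on the between-study variance, integrate the pooled effect out analytically under a flat prior, and evaluate the posterior on a grid. The study effects and prior hyperparameters below are illustrative, not the paper's Cochrane-derived predictive distributions:

```python
import math

# Hypothetical log odds ratios and within-study variances for 5 studies
y = [-0.35, -0.10, -0.45, 0.05, -0.25]
s2 = [0.04, 0.09, 0.02, 0.12, 0.06]

# Log-normal prior on the between-study variance tau^2
# (hyperparameters are made up for illustration)
mu_prior, sd_prior = -2.0, 1.5

def log_prior_tau2(t2):
    # log of the log-normal density evaluated at tau^2
    return (-math.log(t2 * sd_prior * math.sqrt(2 * math.pi))
            - (math.log(t2) - mu_prior) ** 2 / (2 * sd_prior ** 2))

def log_marginal(t2):
    # y_i ~ N(mu, s2_i + tau^2); integrate mu out analytically
    # under a flat prior (constants dropped)
    v = [s + t2 for s in s2]
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    return (-0.5 * sum(math.log(vi) for vi in v)
            - 0.5 * math.log(sw) - 0.5 * q)

# Posterior for tau^2 by numerical integration on a grid over (0, 2)
grid = [0.001 + i * 0.005 for i in range(400)]
logpost = [log_prior_tau2(t) + log_marginal(t) for t in grid]
m = max(logpost)
weights = [math.exp(lp - m) for lp in logpost]
z = sum(weights)
post_mean_tau2 = sum(wi * t for wi, t in zip(weights, grid)) / z
print(round(post_mean_tau2, 4))
```

The paper's importance-sampling variant replaces the fixed grid with draws from a proposal distribution; both avoid the machinery of full MCMC for this one-dimensional problem.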

10.
This study investigates appropriate estimation of estimator variability in causal mediation analyses that employ propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects while adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated; in step 2, the causal effects of interest are estimated using weights derived from the first step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a known solution to the 2-step estimation problem: stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix of the 2-step estimators of the indirect and direct effects, provide simulation results, and illustrate the approach with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored, and that standard error estimation via the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broader implications of this approach for causal analyses involving propensity score-based weighting.

11.
Disclosing an HIV diagnosis to his mother may be the first step in a man's successful management of his illness, but it may also lead to added stress due to stigmatization. Analyzing data provided by 166 HIV-positive men living in the southeastern United States, we found that the most powerful correlate of disclosure was exposure to HIV through homosexual contact. Additionally, those who had AIDS rather than HIV and exhibited more severe symptoms were significantly more likely to have disclosed to their mothers; older and more highly educated men were significantly less likely to have done so. We discuss the implications of our findings for maternal caregiving to adult sons in middle and later life.

12.
Despite the successful discovery of hundreds of variants for complex human traits using genome-wide association studies, the degree to which genes and environmental risk factors jointly affect disease risk is largely unknown. One obstacle is that the computational effort required for testing gene-gene and gene-environment interactions is enormous. As a result, numerous computationally efficient tests have recently been proposed. However, the validity of these methods often relies on unrealistic assumptions, such as additive main effects, main effects at only one variable, no linkage disequilibrium between the two single-nucleotide polymorphisms (SNPs) in a pair, or gene-environment independence. Here, we derive closed-form and consistent estimates of the interaction parameters and propose Wald tests for testing interactions. The Wald tests are asymptotically equivalent to the likelihood ratio tests (LRTs), which are largely considered the gold standard but are generally too computationally demanding for genome-wide interaction analysis. Simulation studies show that the proposed Wald tests perform very similarly to the LRTs but are much more computationally efficient. Applying the proposed tests to a genome-wide study of multiple sclerosis, we identify interactions within the major histocompatibility complex region. In this application, we find that (1) focusing on pairs where both SNPs are marginally significant leads to more significant interactions than focusing on pairs where at least one SNP is marginally significant; and (2) parsimonious parameterization of interaction effects might decrease, rather than increase, statistical power.
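In the simplest saturated case-control setting, the interaction parameter has a closed form: the difference of stratum-specific log odds ratios, with a Wald variance equal to the sum of reciprocal cell counts. A sketch with hypothetical 2x2x2 counts (not the multiple sclerosis data, and simpler than the paper's general estimator):

```python
import math

# Hypothetical case-control counts, stratified by an exposure E:
# within each stratum, a 2x2 table of carrier status (G) by disease status.
# Stratum E=0: cases (G=1, G=0), controls (G=1, G=0)
a0, b0, c0, d0 = 40, 160, 50, 150
# Stratum E=1:
a1, b1, c1, d1 = 90, 110, 45, 155

log_or0 = math.log((a0 * d0) / (b0 * c0))   # log OR for G, stratum E=0
log_or1 = math.log((a1 * d1) / (b1 * c1))   # log OR for G, stratum E=1
beta_int = log_or1 - log_or0                # interaction on the log-odds scale

# Wald variance in the saturated model: sum of reciprocal cell counts
var_int = sum(1.0 / n for n in (a0, b0, c0, d0, a1, b1, c1, d1))

z = beta_int / math.sqrt(var_int)
p = math.erfc(abs(z) / math.sqrt(2))        # two-sided Wald p-value
print(round(beta_int, 3), round(z, 2))
```

The computational advantage the abstract describes comes from exactly this pattern: the Wald statistic needs only the closed-form estimate and its variance, whereas an LRT requires iteratively refitting the model under the null for every SNP pair.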

14.
We consider estimation of treatment effects in two-stage adaptive multi-arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial-likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time-to-event data and compare the bias and mean squared error of all methods in an extensive simulation study and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

16.
A compartment model for cancer incidence and mortality is developed in which healthy subjects may develop cancer and subsequently die of cancer or another cause. In order to adequately represent the experience of a defined population, it is also necessary to allow for subjects who are diagnosed at death, as well as subjects who migrate and are subsequently lost to follow-up. Expressions are derived for the number of cancer deaths as a function of the number of incidence cases and vice versa, which allows for the use of mortality statistics to obtain estimates of incidence using survival information. In addition, the model can be used to obtain estimates of cancer prevalence, which is useful for health care planning. The method is illustrated using data on lung cancer among males in Connecticut. Copyright © 2015 John Wiley & Sons, Ltd.
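The core compartment structure (healthy, diagnosed with cancer, dead) can be illustrated with a toy discrete-time simulation; the transition rates below are made up for illustration and omit the death-certificate-only and loss-to-follow-up compartments the paper adds, so this is a sketch of the idea rather than the Connecticut analysis:

```python
# Toy discrete-time healthy -> cancer -> dead compartment model.
# Rates are illustrative, not the Connecticut lung-cancer estimates.
def simulate(years, incidence_rate, cancer_death_rate, other_death_rate):
    healthy, with_cancer = 100_000.0, 0.0
    incident_cases = cancer_deaths = 0.0
    for _ in range(years):
        new_cases = healthy * incidence_rate
        deaths_from_cancer = with_cancer * cancer_death_rate
        healthy += -new_cases - healthy * other_death_rate
        with_cancer += (new_cases - deaths_from_cancer
                        - with_cancer * other_death_rate)
        incident_cases += new_cases
        cancer_deaths += deaths_from_cancer
    # with_cancer at the end is the prevalence pool
    return incident_cases, cancer_deaths, with_cancer

cases, deaths, prevalent = simulate(30, 0.002, 0.15, 0.01)
print(round(cases), round(deaths), round(prevalent))
```

The paper's derivation runs this flow in reverse: given observed cancer deaths and survival after diagnosis, one can back out the incident cases that must have produced them, and the prevalence pool falls out of the same bookkeeping.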

17.
In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology generalizes readily to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis investigating the non-linear exposure-response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd.

19.
The percentile-finding experimental design known variously as 'forced-choice fixed-staircase', 'geometric up-and-down' or 'k-in-a-row' (KR) was introduced by Wetherill four decades ago. To date, KR has been by far the most widely used up-and-down (U&D) design for estimating non-median percentiles; it is implemented most commonly in sensory studies. However, its statistical properties have not been fully documented, and the existence of a unique mode in its asymptotic treatment distribution has been recently disputed. Here we revisit the KR design and its basic properties. We find that KR does generate a unique stationary mode near its target percentile, and also displays better operational characteristics than two other U&D designs that have been studied more extensively. Supporting proofs and numerical calculations are presented. A recent experimental example from anesthesiology serves to highlight some of the 'up-and-down' design family's properties and advantages. Copyright © 2009 John Wiley & Sons, Ltd.
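One common form of the KR rule steps down after any response and up only after k consecutive non-responses, so the chain concentrates near the dose whose response probability G solves (1 - G)^k = 1/2 (about 0.293 for k = 2). A simulation sketch with a hypothetical dose grid, illustrating the stationary mode the paper establishes:

```python
import random

random.seed(1)

# Hypothetical dose grid with monotone true response probabilities
probs = [0.05, 0.15, 0.30, 0.50, 0.70, 0.85]

# k-in-a-row rule, k = 2: step down after any response, step up only
# after k consecutive non-responses. Target G = 1 - 2**(-1/k) ~ 0.293,
# closest here to the dose at index 2 (p = 0.30).
k = 2
level, streak = 0, 0
visits = [0] * len(probs)
for _ in range(20_000):
    visits[level] += 1
    if random.random() < probs[level]:        # response observed
        streak = 0
        level = max(level - 1, 0)
    else:                                      # non-response
        streak += 1
        if streak == k:
            streak = 0
            level = min(level + 1, len(probs) - 1)

mode_level = visits.index(max(visits))
print(mode_level, [round(v / sum(visits), 3) for v in visits])
```

With these illustrative probabilities the visit distribution peaks at the dose nearest the ~29.3% target percentile, which is the unique stationary mode behavior the paper proves.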

20.
This is the second in a series of papers dealing with care-giving in Canada, based on data from Statistics Canada's General Social Survey (GSS Cycle 21: 2007). Building on the first paper, which reviewed the differences between short-term, long-term and end-of-life (EOL) caregivers, this paper examines the caregiver supports employed by EOL caregivers compared with non-EOL caregivers (short-term and long-term caregivers combined). The GSS includes three modules in which respondents were asked about the unpaid home care assistance they had provided in the previous 12 months to someone at EOL or with a long-term health condition or physical limitation. The objective of this paper was to investigate the link between the impact of the care-giving experience and the caregiver supports received, while also examining differences in these across EOL and non-EOL caregivers. Using factor analysis and regression modelling, we examine differences between two types of caregivers: (i) EOL and (ii) non-EOL caregivers. The study revealed that, with respect to socio-demographic characteristics, health outcomes and caregiver supports, EOL caregivers were consistently worse off. This suggests that although all non-EOL caregivers experience negative impacts from their care-giving role, comparatively greater supports are needed for EOL caregivers.

