Similar Articles
20 similar articles found (search time: 109 ms)
1.
The incubation period, the time between infection and disease onset, is important in the surveillance and control of infectious diseases but is often coarsely observed. Coarse data arise because the time of infection, the time of disease onset, or both are not known precisely. Accurate estimates of an incubation period distribution are useful in real-time outbreak investigations and in modeling public health interventions. We compare two methods of estimating such distributions. The first method represents the data as doubly interval-censored. The second introduces a data reduction technique that makes the computation more tractable. In a simulation study, the methods perform similarly when estimating the median, but the first method yields more reliable estimates of the distributional tails. We conduct a sensitivity analysis of the two methods to violations of model assumptions and apply these methods to historical incubation period data on influenza A and respiratory syncytial virus. The analysis of reduced data is less computationally intensive and performs well for estimating the median under a wide range of conditions. However, for estimation of the tails of the distribution, the doubly interval-censored analysis is the recommended procedure. Copyright © 2009 John Wiley & Sons, Ltd.
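The doubly interval-censored likelihood described above can be sketched numerically. The lognormal form and all parameter values below are illustrative assumptions, not estimates from the paper:

```python
import math

def incubation_cdf(t, median=4.3, log_sd=0.64):
    """Lognormal incubation-period CDF; parameters are illustrative
    placeholders, not values fitted in the paper."""
    if t <= 0:
        return 0.0
    z = (math.log(t) - math.log(median)) / log_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dic_likelihood(e_left, e_right, s_left, s_right, cdf=incubation_cdf, n=200):
    """Likelihood contribution of one doubly interval-censored observation:
    infection known only to lie in [e_left, e_right], symptom onset in
    [s_left, s_right].  Averages P(s_left - e < T <= s_right - e) over a
    uniform infection-time density on the exposure window (midpoint grid)."""
    total = 0.0
    for i in range(n):
        e = e_left + (i + 0.5) * (e_right - e_left) / n
        total += cdf(s_right - e) - cdf(s_left - e)
    return total / n
```

For example, `dic_likelihood(0.0, 1.0, 4.0, 5.0)` is the contribution of a subject exposed some time during day 0-1 whose symptoms began during day 4-5; summing log-contributions over subjects and maximizing over the distribution's parameters would yield the estimate.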

2.
In biomedical research and practice, continuous biomarkers are often used for diagnosis and prognosis, with a cut-point established on the measurement to aid binary classification. When survival time is examined for the purposes of disease prognostication and is found to be related to the baseline measure of a biomarker, employing a single cut-point on the biomarker may not be very informative. Using survival time-dependent sensitivity and specificity, we extend a concordance probability-based objective function to select survival time-related cut-points. To estimate the objective function with censored survival data, we adopt a non-parametric procedure for time-dependent receiver operating characteristic curves, which uses nearest neighbor estimation techniques. In a simulation study, the proposed method, when used to select a cut-point to optimally predict survival at a given time within a specified range, yields satisfactory results. We apply the procedure to estimate a survival time-dependent cut-point on the prognostic biomarker of serum bilirubin among patients with primary biliary cirrhosis. Copyright © 2014 John Wiley & Sons, Ltd.
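As a rough illustration of selecting a survival time-dependent cut-point, the sketch below maximizes Youden's index (sensitivity + specificity - 1) at a fixed time t. It simply drops subjects censored before t, whereas the paper handles censoring with a nearest-neighbor time-dependent ROC estimator; all names and data here are hypothetical:

```python
def cutpoint_at_time(biomarker, time, event, t):
    """Pick the biomarker cut-point maximizing sensitivity(t) + specificity(t) - 1.
    Cases: subjects with the event by time t; controls: subjects followed
    beyond t.  Subjects censored before t are dropped in this sketch (the
    paper instead uses a nearest-neighbor time-dependent ROC estimator)."""
    cases = [b for b, u, d in zip(biomarker, time, event) if d and u <= t]
    controls = [b for b, u, d in zip(biomarker, time, event) if u > t]
    best_c, best_j = None, -1.0
    for c in sorted(set(biomarker)):
        sens = sum(b > c for b in cases) / len(cases)       # high marker -> event
        spec = sum(b <= c for b in controls) / len(controls)
        if sens + spec - 1.0 > best_j:
            best_c, best_j = c, sens + spec - 1.0
    return best_c, best_j
```

With toy data where high biomarker values predict early events, the scan recovers the separating value.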

3.
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right-censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right-censored data because of incomplete observations of the time endpoint, as well as possibly left-truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log-logistic distribution, in the accelerated failure time location-scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location-scale submodel. We also construct graphical goodness-of-fit procedures on the basis of the Turnbull-Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two-township Study in Taiwan. Copyright © 2013 John Wiley & Sons, Ltd.
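The mixture (cure) structure at the heart of such models can be written S_pop(t) = (1 - p(x)) + p(x) S(t | susceptible). A minimal sketch, using a logistic submodel for susceptibility and a Weibull stand-in for the paper's generalized gamma / log-logistic location-scale submodels; all coefficients are illustrative:

```python
import math

def susceptibility_prob(x, beta0=-0.5, beta1=1.2):
    """Logistic model for P(susceptible | x); coefficients are illustrative."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def population_survival(t, x, scale=5.0, shape=1.5):
    """Mixture survival: non-susceptible subjects never experience the event;
    susceptible subjects follow (here) a Weibull survival curve.  The Weibull
    is a simplified stand-in for the paper's AFT location-scale submodels."""
    p = susceptibility_prob(x)
    s_susc = math.exp(-((t / scale) ** shape))
    return (1.0 - p) + p * s_susc
```

Note that as t grows, the population survival plateaus at 1 - p(x), the non-susceptible fraction, rather than dropping to zero.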

4.
Multi-state models are useful for modelling disease progression where the state space of the process is used to represent the discrete disease status of subjects. Often, the disease process is only observed at clinical visits, and the schedule of these visits can depend on the disease status of patients. In such situations, the frequency and timing of observations may depend on transition times that are themselves unobserved in an interval-censored setting. There is a potential for bias if we model a disease process with informative observation times as a non-informative observation scheme with pre-specified examination times. In this paper, we develop a joint model for the disease and observation processes to ensure valid inference because the follow-up process may itself contain information about the disease process. The transitions for each subject are modelled using a Markov process, where bivariate subject-specific random effects are used to link the disease and observation models. Inference is based on a Bayesian framework, and we apply our joint model to the analysis of a large study examining functional decline trajectories of palliative care patients. Copyright © 2015 John Wiley & Sons, Ltd.

5.
Analysing the determinants and consequences of hospital-acquired infections involves the evaluation of large cohorts. Infected patients in the cohort are often rare for specific pathogens, because most of the patients admitted to the hospital are discharged or die without such an infection. Death and discharge are competing events to acquiring an infection, because these individuals are no longer at risk of getting a hospital-acquired infection. Therefore, the data is best analysed with an extended survival model – the extended illness-death model. A common problem in cohort studies is the costly collection of covariate values. In order to provide efficient use of data from infected as well as uninfected patients, we propose a tailored case-cohort approach for the extended illness-death model. The basic idea of the case-cohort design is to only use a random sample of the full cohort, referred to as subcohort, and all cases, namely the infected patients. Thus, covariate values are only obtained for a small part of the full cohort. The method is based on existing and established methods and is used to perform regression analysis in adapted Cox proportional hazards models. We propose estimation of all cause-specific cumulative hazards and transition probabilities in an extended illness-death model based on case-cohort sampling. As an example, we apply the methodology to infection with a specific pathogen using a large cohort from Spanish hospital data. The obtained results of the case-cohort design are compared with the results in the full cohort to investigate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

6.
Many biomedical and clinical studies with time-to-event outcomes involve competing risks data. These data are frequently subject to interval censoring. This means that the failure time is not precisely observed but is only known to lie between two observation times such as clinical visits in a cohort study. Not taking into account the interval censoring may result in biased estimation of the cause-specific cumulative incidence function, an important quantity in the competing risks framework, used for evaluating interventions in populations, for studying the prognosis of various diseases, and for prediction and implementation science purposes. In this work, we consider the class of semiparametric generalized odds rate transformation models in the context of sieve maximum likelihood estimation based on B-splines. This large class of models includes both the proportional odds and the proportional subdistribution hazard models (i.e., the Fine-Gray model) as special cases. The estimator for the regression parameter is shown to be consistent, asymptotically normal and semiparametrically efficient. Simulation studies suggest that the method performs well even with small sample sizes. As an illustration, we use the proposed method to analyze data from HIV-infected individuals obtained from a large cohort study in sub-Saharan Africa. We also provide the R function ciregic that implements the proposed method and present an illustrative example. Copyright © 2017 John Wiley & Sons, Ltd.

7.
In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform current disease state and to justify the assumption of an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and with a Markov model that assumes the examination process is non-ignorable. Copyright © 2010 John Wiley & Sons, Ltd.

8.
The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus, likely resulted in slight underestimates of the incubation period.

9.
Multistate models with interval-censored data, such as the illness-death model, are still not used to any considerable extent in medical research, despite the significant literature demonstrating their advantages compared to usual survival models. Possible explanations are their limited availability in classical statistical software or, when they are available, the limitations of multivariable modelling for taking confounding into consideration. In this paper, we propose a strategy based on propensity scores that allows population causal effects to be estimated: inverse probability weighting in the illness semi-Markov model with interval-censored data. Using simulated data, we validated the performance of the proposed approach. We also illustrate the usefulness of the method with an application evaluating the relationship between the inadequate size of an aortic bioprosthesis and its degeneration and/or patient death. We have updated the R package multistate to facilitate the future use of this method.

10.
Joint latent class modeling is an appealing approach for evaluating the association between a longitudinal biomarker and clinical outcome when the study population is heterogeneous. The link between the biomarker trajectory and the risk of event is reflected by the latent classes, which accommodate the underlying population heterogeneity. The estimation of joint latent class models may be complicated by the censored data in the biomarker measurements due to detection limits. We propose a modified likelihood function under the parametric assumption of biomarker distribution and develop a Monte Carlo expectation-maximization algorithm for joint analysis of a biomarker and a binary outcome. We conduct simulation studies to demonstrate the satisfactory performance of our Monte Carlo expectation-maximization algorithm and the superiority of our method to the naive imputation method for handling censored biomarker data. In addition, we apply our method to the Genetic and Inflammatory Markers of Sepsis study to investigate the role of inflammatory biomarker profile in predicting 90-day mortality for patients hospitalized with community-acquired pneumonia.

11.
Cai Wu & Liang Li, Statistics in Medicine 2018, 37(21): 3106-3124
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider time-dependent discrimination and calibration metrics, including the receiver operating characteristic curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies.
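The weighting idea can be illustrated with a time-dependent Brier score. Here the censoring weights are passed in directly as an argument; in the paper they are the estimated conditional probability of the event of interest given the observed data:

```python
def brier_score_at_t(pred_risk, time, event_type, t, weights):
    """Time-dependent Brier score for the event of interest (type 1) under
    competing risks: weighted average squared difference between the
    predicted risk by time t and the observed event indicator.  Censoring
    is handled through the subject-level weights supplied by the caller."""
    num, den = 0.0, 0.0
    for p, u, d, w in zip(pred_risk, time, event_type, weights):
        y = 1.0 if (u <= t and d == 1) else 0.0   # event of interest by t
        num += w * (p - y) ** 2
        den += w
    return num / den
```

A perfectly calibrated and discriminating prediction attains a score of zero; an uninformative prediction of 0.5 for everyone scores 0.25.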

12.
Kuk AY & Ma S, Statistics in Medicine 2005, 24(16): 2525-2537
The incubation period of SARS is the time between infection and the onset of symptoms. Knowledge about the distribution of incubation times is crucial in determining the length of the quarantine period and is an important parameter in modelling the spread and control of SARS. As the exact time of infection is unknown for most patients, the incubation time cannot be determined directly. What is observable is the serial interval, the time from the onset of symptoms in an index case to the onset of symptoms in a subsequent case infected by the index case. By constructing a convolution likelihood based on the serial interval data, we are able to estimate the incubation distribution, which is assumed to be Weibull; justifications are given to support this choice over other distributions. The method is applied to data provided by the Ministry of Health of Singapore, and the results justify the choice of a ten-day quarantine period. The indirect estimate obtained using the method of convolution likelihood is validated by comparison with a direct estimate obtained from a subset of patients for whom the incubation time can be ascertained. Despite its name, the proposed indirect estimate is actually more precise than the direct estimate because serial interval data are recorded for almost all patients, whereas exact incubation times can be determined for only a small subset. It is possible to obtain an even more efficient estimate by using the combined data, but the improvement is not substantial.
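The convolution-likelihood idea can be sketched as follows: model the serial interval as the time from the index case's onset to transmission plus the secondary case's Weibull incubation period, so its density is a convolution. The uniform transmission-time density and all parameter values in the example are illustrative, not the authors' fitted quantities:

```python
import math

def weibull_pdf(t, shape, scale):
    if t <= 0:
        return 0.0
    z = t / scale
    return (shape / scale) * z ** (shape - 1.0) * math.exp(-(z ** shape))

def serial_interval_density(s, shape, scale, infect_pdf, upper=20.0, n=100):
    """Density of the serial interval as a convolution: time from the index
    case's onset to transmission (density infect_pdf) plus the secondary
    case's Weibull incubation period.  Midpoint-rule integration; a
    simplified sketch of the convolution-likelihood idea, not the authors'
    exact construction."""
    h = upper / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h                      # time from onset to transmission
        total += infect_pdf(u) * weibull_pdf(s - u, shape, scale) * h
    return total

def log_likelihood(serial_intervals, shape, scale, infect_pdf):
    return sum(math.log(serial_interval_density(s, shape, scale, infect_pdf))
               for s in serial_intervals)
```

Maximizing `log_likelihood` over the Weibull shape and scale (for instance by a grid search) gives the indirect estimate of the incubation distribution.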

13.
Predicting an individual's risk of experiencing a future clinical outcome is a statistical task with important consequences for both practicing clinicians and public health experts. Modern observational databases such as electronic health records provide an alternative to the longitudinal cohort studies traditionally used to construct risk models, bringing with them both opportunities and challenges. Large sample sizes and detailed covariate histories enable the use of sophisticated machine learning techniques to uncover complex associations and interactions, but observational databases are often 'messy', with high levels of missing data and incomplete patient follow-up. In this paper, we propose an adaptation of the well-known Naive Bayes machine learning approach to time-to-event outcomes subject to censoring. We compare the predictive performance of our method with the Cox proportional hazards model which is commonly used for risk prediction in healthcare populations, and illustrate its application to prediction of cardiovascular risk using an electronic health record dataset from a large Midwest integrated healthcare system. Copyright © 2015 John Wiley & Sons, Ltd.

14.
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by the EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates, while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
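In the covariate-free case, the prevalence-incidence mixture reduces to a simple cumulative risk formula: a point mass of undiagnosed prevalent disease at time zero plus a survival distribution for incident disease. A sketch with illustrative Weibull parameters (in the paper, the prevalence part is modeled on covariates via logistic regression):

```python
import math

def cumulative_risk(t, prevalence, shape=1.4, scale=7.0):
    """Prevalence-incidence mixture: a point mass of undiagnosed prevalent
    disease at time zero plus Weibull-distributed incident disease.
    Parameter values are illustrative, not fitted estimates."""
    incident_cdf = 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0
    return prevalence + (1.0 - prevalence) * incident_cdf
```

The jump at time zero is exactly what the naive Kaplan-Meier estimator misses, which is why it underestimates early risk.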

15.
Event history studies based on disease clinic data often face several complications. Specifically, patients may visit the clinic irregularly, and the intermittent observation times could depend on disease-related variables; this can cause a failure time outcome to be dependently interval-censored. We propose a weighted estimating function approach so that dependently interval-censored failure times can be analysed consistently. A so-called inverse-intensity-of-visit weight is employed to adjust for the informative inspection times. Left truncation of failure times can also be easily handled. Additionally, in observational studies, treatment assignments are typically non-randomized and may depend on disease-related variables. An inverse-probability-of-treatment weight is applied to estimating functions to further adjust for measured confounders. Simulation studies are conducted to examine the finite sample performances of the proposed estimators. Finally, the Toronto Psoriatic Arthritis Cohort Study is used for illustration. Copyright © 2017 John Wiley & Sons, Ltd.

16.
There is considerable literature on the risk of HIV infection for individuals suffering from hemophilia A in the United Kingdom (U.K.) during the period 1979-1984, when the sources of Factor VIII clotting factor were contaminated with HIV. Toward the end of this period, several investigators reported HIV prevalence among hemophiliacs, often classified by the severity of disease and, to some extent, the source of the clotting factor with which individuals were infused. In the U.K., hemophilia A patients typically received clotting factor from the local National Health Service (NHS) supplies or from commercial product usually imported from the United States. Litigation on behalf of U.K. hemophiliacs, their survivors, and estates remains unresolved; in these cases it becomes important to quantify the fraction of U.K. hemophiliac HIV infections attributable to imported blood product. For HIV-infected individuals who received Factor VIII from one source exclusively, the source of infection is clear, assuming no other risk factors. For patients who used both types of clotting factor, the source of infection is uncertain. For the U.K. as a whole, we produce quantitative estimates of the conditional probability of infection due to a specific source, based on HIV prevalence in groups exclusively exposed to a specific source and the percentage of clotting factor that was imported. With plausible estimates of these input parameters, an estimate of the conditional probability of infection due to imported product is 0.93 (95 per cent CI: 0.90-0.96). This estimate is relatively insensitive to changes in input parameters, but may vary over subgroups of hemophiliacs. Copyright © 2009 John Wiley & Sons, Ltd.
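One simple way to see the shape of such an attribution calculation: treat prevalence in the exclusively exposed groups as per-source risks and weight them by the share of clotting factor from each source. This is a hedged stand-in for the paper's conditional-probability computation, not its exact formula, and all input values below are hypothetical:

```python
def p_infection_from_import(prev_commercial, prev_nhs, frac_imported):
    """Simplified source attribution for a mixed-exposure patient: per-source
    risks proxied by prevalence in the exclusively exposed groups, weighted
    by the fraction of clotting factor from each source.  A stand-in for the
    paper's calculation, not its exact formula."""
    a = frac_imported * prev_commercial          # contribution of imported product
    b = (1.0 - frac_imported) * prev_nhs         # contribution of NHS product
    return a / (a + b)
```

When the commercial-product prevalence dominates, the attributable probability approaches one even for balanced usage, which mirrors the paper's high point estimate.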

17.
Frailty models are multiplicative hazard models for studying the association between survival time and important clinical covariates. When some values of a clinical covariate are unobserved but known to be below a threshold called the limit of detection (LOD), naive approaches that ignore this problem, such as replacing the undetected value by the LOD or half of the LOD, often produce biased parameter estimates with larger mean squared error. To address the LOD problem in a frailty model, we propose a flexible smooth nonparametric density estimator along with Simpson's numerical integration technique. This is an extension of an existing likelihood-based method for the estimation and inference of the model parameters. The proposed new method yields estimators that are asymptotically unbiased and have smaller mean squared error. Compared with the existing method, the proposed method does not require distributional assumptions for the underlying covariates. Simulation studies were conducted to evaluate the performance of the new method in realistic scenarios. We illustrate the use of the proposed method with a data set from the Genetic and Inflammatory Markers of Sepsis study in which interleukin-10 was subject to the LOD. Copyright © 2015 John Wiley & Sons, Ltd.
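Composite Simpson's rule, the numerical integration technique mentioned above, is short enough to sketch; pairing it with a smooth density estimate allows integration over the region below the LOD:

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be
    even).  Exact for polynomials up to degree three, which is why it pairs
    well with smooth density estimates."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0
```

For example, `simpson(density_estimate, 0.0, lod)` would approximate the probability mass of a covariate below the detection threshold, for some hypothetical fitted `density_estimate`.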

18.
Recently, multivariate random-effects meta-analysis models have received a great deal of attention, despite their greater complexity compared to univariate meta-analyses. One of their advantages is the ability to account for within-study and between-study correlations. However, the standard inference procedures, such as maximum likelihood or restricted maximum likelihood inference, require the within-study correlations, which are usually unavailable. In addition, the standard inference procedures suffer from the problem of a singular estimated covariance matrix. In this paper, we propose a pseudolikelihood method to overcome these problems. The pseudolikelihood method does not require within-study correlations and is not prone to the singular covariance matrix problem. In addition, it can properly estimate the covariance between pooled estimates for different outcomes, which enables valid inference on functions of pooled estimates, and can be applied to meta-analyses where some studies have outcomes missing completely at random. Simulation studies show that the pseudolikelihood method provides unbiased estimates for functions of pooled estimates, well-estimated standard errors, and confidence intervals with good coverage probability. Furthermore, the pseudolikelihood method is found to maintain high relative efficiency compared to that of the standard inferences with known within-study correlations. We illustrate the proposed method through three meta-analyses: for comparison of prostate cancer treatments, for the association between paraoxonase 1 activities and coronary heart disease, and for the association between homocysteine level and coronary heart disease. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

19.
It is often the case that interest lies in the effect of an exposure on each of several distinct event types. For example, we are motivated to investigate the impact of recent injection drug use on deaths due to each of cancer, end-stage liver disease, and overdose in the Canadian Co-infection Cohort (CCC). We develop a marginal structural model that permits estimation of cause-specific hazards in situations where more than one cause of death is of interest. Marginal structural models allow the causal effect of treatment on outcome to be estimated using inverse-probability weighting under the assumption of no unmeasured confounding; these models are particularly useful in the presence of time-varying confounding variables, which may also mediate the effect of exposures. An asymptotic variance estimator is derived, and a cumulative incidence function estimator is given. We compare the performance of the proposed marginal structural model for multiple-outcome data to that of conventional competing risks models in simulated data and demonstrate the use of the proposed approach in the CCC. Copyright © 2013 John Wiley & Sons, Ltd.
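Inverse-probability-of-treatment weighting, the estimation device behind marginal structural models, can be sketched with stabilized weights. The fitted propensities below are hypothetical placeholders for, e.g., logistic-regression fitted values:

```python
def stabilized_iptw(exposures, propensities, marginal_prob):
    """Stabilized inverse-probability-of-treatment weights: numerator
    P(A = a) from the marginal exposure prevalence, denominator the fitted
    P(A = 1 | confounders) for each subject (hypothetical fitted values).
    Weighting creates a pseudo-population in which exposure is independent
    of the measured confounders."""
    weights = []
    for a, p in zip(exposures, propensities):
        num = marginal_prob if a == 1 else 1.0 - marginal_prob
        den = p if a == 1 else 1.0 - p
        weights.append(num / den)
    return weights
```

Subjects whose exposure was unlikely given their confounders receive large weights; stabilization keeps the weights' mean near one and reduces variance relative to unstabilized 1/p weights.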

20.
Age-specific disease incidence rates are typically estimated from longitudinal data, where disease-free subjects are followed over time and incident cases are observed. However, longitudinal studies have substantial cost and time requirements, not to mention other challenges such as loss to follow up. Alternatively, cross-sectional data can be used to estimate age-specific incidence rates in a more timely and cost-effective manner. Such studies rely on self-report of onset age. Self-reported onset age is subject to measurement error and bias. In this paper, we use a Bayesian bivariate smoothing approach to estimate age-specific incidence rates from cross-sectional survey data. Rates are modeled as a smooth function of age and lag (difference between age and onset age), with larger values of lag effectively down-weighted, as they are assumed to be less reliable. We conduct an extensive simulation study to investigate the extent to which measurement error and bias in the reported onset age affect inference using the proposed methods. We use data from a national headache survey to estimate age- and gender-specific migraine incidence rates. Copyright © 2010 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号