1.
The Journal of Arthroplasty, 2021, 36(10): 3372-3377
Many outcomes in arthroplasty research are analyzed as time-to-event outcomes using survival analysis methods. When comparison groups are defined after a time-delayed exposure or intervention, a period of immortal time arises and can lead to biased results. In orthopedics research, immortal time bias often arises when a minimum amount of follow-up is required for study inclusion or when comparing outcomes in staged bilateral vs unilateral arthroplasty patients. We present an explanation of immortal time and the associated bias, describe how to account for it correctly using proper data preparation and statistical techniques, and provide an illustrative example using real-world arthroplasty data. We offer practical guidelines for identifying and properly handling immortal time to avoid bias. A video explaining the highlights of the paper in practical terms is available at https://youtu.be/58p8w5o-ci4.
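As an illustration of the corrective step (attributing pre-exposure person-time to the unexposed state), here is a minimal R sketch using the survival package; the data are simulated and the variable names are hypothetical, not the paper's own example:

```r
library(survival)

set.seed(1)
n <- 500
event_time    <- rexp(n, rate = 0.10)                                 # time to outcome
exposure_time <- ifelse(runif(n) < 0.5, rexp(n, rate = 0.20), Inf)    # Inf = never exposed
status        <- rbinom(n, 1, 0.8)                                    # 1 = event observed

# Biased analysis: the delayed exposure is treated as a baseline covariate, so exposed
# subjects are credited with the "immortal" person-time before their exposure.
naive <- coxph(Surv(event_time, status) ~ I(exposure_time < Inf))

# Corrected analysis: split follow-up at the exposure time (counting-process format) so
# pre-exposure person-time counts as unexposed.
pre  <- data.frame(id = 1:n, tstart = 0,
                   tstop  = pmin(exposure_time, event_time),
                   status = ifelse(exposure_time < event_time, 0, status),
                   exposed = 0)
keep <- exposure_time < event_time
post <- data.frame(id = (1:n)[keep], tstart = exposure_time[keep], tstop = event_time[keep],
                   status = status[keep], exposed = 1)
long <- rbind(pre, post)
adjusted <- coxph(Surv(tstart, tstop, status) ~ exposed, data = long)

exp(coef(naive)); exp(coef(adjusted))   # the naive HR is typically spuriously protective;
                                        # the corrected HR is near 1 (no true effect simulated)
```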
2.
This paper provides guidance for researchers with some mathematical background on the conduct of time-to-event analysis in observational studies based on intensity (hazard) models. Discussions of basic concepts like the time axis, event definition, and censoring are given. Hazard models are introduced, with special emphasis on the Cox proportional hazards regression model. We provide checklists that may be useful both when fitting the model and assessing its goodness of fit and when interpreting the results. Special attention is paid to how to avoid problems with immortal time bias by introducing time-dependent covariates. We discuss prediction based on hazard models and the difficulties that arise when attempting to draw proper causal conclusions from such models. Finally, we present a series of examples illustrating the methods and checklists. Computational details and implementation using the freely available R software are documented in the Supplementary Material. The paper was prepared as part of the STRATOS initiative.
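A minimal R sketch of the kind of model fitting and goodness-of-fit checking such checklists cover, using the survival package and its built-in lung dataset (not one of the paper's own examples):

```r
library(survival)

# Cox proportional hazards model for overall survival.
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)                       # hazard ratios with confidence intervals

# Proportional hazards assumption via scaled Schoenfeld residuals.
ph_check <- cox.zph(fit)
print(ph_check)
plot(ph_check)                     # a clearly non-flat trend over time suggests a violation

# Functional form of a continuous covariate via martingale residuals.
res <- residuals(fit, type = "martingale")
plot(lung$age, res)
lines(lowess(lung$age, res), col = 2)
```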
3.
In cystic fibrosis (CF), sweat chloride concentration has been proposed as an index of CFTR function for testing systemic drugs designed to activate mutant CFTR. This suggestion arises from the assumption that greater residual CFTR function should lead to a lower sweat chloride concentration, as well as protection against severe lung disease. This logic gives rise to the hypothesis that the lower the sweat chloride concentration, the less severe the lung disease. To test this hypothesis, we studied 230 patients homozygous for the ΔF508 allele, and 34 patients with at least one allele associated with pancreatic sufficiency, born since January 1, 1955, who have pulmonary function data and sweat chloride concentrations recorded in our CF center database, and no culture positive for B. cepacia. We calculated a severity index for pulmonary disease, using an approach which takes into account all available pulmonary function data as well as the patient's current age and survival status. Patients with alleles associated with pancreatic sufficiency had significantly better survival (P = 0.0083), lower sweat chloride concentration (81.4 ± 23.8 vs. 103.2 ± 14.2 mEq/l, P < 0.0001), a slower rate of decline of FEV1 % predicted (-0.75 ± 0.34 vs. -2.34 ± 0.17 % predicted per year), and a better severity index than patients homozygous for the ΔF508 allele (median 73rd percentile vs. median 55th percentile, P = 0.0004). However, the sweat chloride concentration did not correlate with the severity index, either in the population as a whole, or in the population of patients with alleles associated with pancreatic sufficiency, who are thought to have some residual CFTR function. These data suggest that, by itself, sweat chloride concentration does not necessarily predict a milder pulmonary course in patients with cystic fibrosis.
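A minimal R sketch of the two comparisons described (a log-rank test of survival by genotype group, and the correlation between sweat chloride and the severity index); the data frame and column names below are simulated placeholders, not the authors' registry data:

```r
library(survival)

set.seed(2)
cf <- data.frame(
  time     = rexp(264, 0.05),                                  # follow-up (years)
  dead     = rbinom(264, 1, 0.3),                              # vital status
  group    = factor(rep(c("dF508 homozygous", "PS allele"), c(230, 34))),
  sweat_cl = c(rnorm(230, 103, 14), rnorm(34, 81, 24)),        # sweat chloride (mEq/l)
  severity = runif(264, 0, 100)                                # pulmonary severity percentile
)

survdiff(Surv(time, dead) ~ group, data = cf)                  # survival by genotype group
cor.test(cf$sweat_cl, cf$severity, method = "spearman")        # sweat chloride vs. severity
```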
4.
This paper considers estimation of Cox proportional hazards models under informatively right-censored data using maximum penalized likelihood, where the dependence between censoring and event times is modelled by a copula function and a roughness penalty function is used to constrain the baseline hazard to be a smooth function. Since the baseline hazard is nonnegative, we propose a special algorithm in which each iteration updates the regression coefficients by a Newton step and the baseline hazard by a multiplicative iterative algorithm. Asymptotic properties are developed for both the regression coefficient and baseline hazard estimates. A simulation study investigates the performance of our method and also compares it with an existing maximum likelihood method. We apply the proposed method to a dataset of dementia patients.
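As a rough indication of the structure involved (notation mine, not necessarily the authors' exact formulation): with the dependence between event and censoring times captured by a copula K and a roughness penalty on the baseline hazard h_0, the objective is a penalized log-likelihood of the form

$$ \ell_p(\beta, h_0) = \ell(\beta, h_0; K) - \lambda \int \{h_0''(t)\}^2 \, dt, $$

maximized by alternating a Newton update for the regression coefficients \(\beta\) with a multiplicative iterative update for \(h_0\) that preserves its nonnegativity.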
5.
Failure time studies based on observational cohorts often have to deal with irregular intermittent observation of individuals, which produces interval-censored failure times. When the observation times depend on factors related to a person's failure time, the failure times may be dependently interval censored. Inverse-intensity-of-visit weighting methods have been developed for irregularly observed longitudinal or repeated measures data and recently extended to parametric failure time analysis. This article develops nonparametric estimation of failure time distributions using weighted generalized estimating equations and monotone smoothing techniques. Simulations examine the finite-sample performance of the proposed estimators. This research is motivated in part by the Toronto Psoriatic Arthritis Cohort Study, and the proposed methodology is applied to data from this study.
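A minimal R sketch of the inverse-intensity-of-visit idea (simulated data, not the Toronto cohort): model the visit process with an Andersen-Gill intensity model, then weight each observed visit by the inverse of its estimated relative intensity so that covariate-driven visit timing is corrected for:

```r
library(survival)

set.seed(3)
visits <- do.call(rbind, lapply(1:100, function(i) {
  severity <- rnorm(1)                                  # covariate driving visit frequency
  gaps  <- rexp(5, rate = 0.5 * exp(0.4 * severity))    # sicker patients are seen more often
  times <- cumsum(gaps)
  data.frame(id = i, tstart = c(0, head(times, -1)), tstop = times,
             seen = 1, severity = severity)
}))

# Andersen-Gill model for the visit intensity.
visit_model <- coxph(Surv(tstart, tstop, seen) ~ severity + cluster(id), data = visits)

# Inverse-intensity weights: visits that were unlikely given the covariates get more weight.
visits$iiv_w <- 1 / exp(predict(visit_model, type = "lp"))

# These weights would then enter the weighted generalized estimating equations for the
# failure-time distribution, as described in the paper.
head(visits)
```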
6.
In many medical studies, estimation of the association between treatment and the outcome of interest is often the primary scientific goal. Standard methods for its evaluation in survival analysis typically require the assumption of independent censoring. This assumption might be invalid in many medical studies, where the presence of dependent censoring leads to difficulties in analyzing covariate effects on disease outcomes. This data structure is called “semicompeting risks data,” for which many authors have proposed an artificial censoring technique. However, confounders with large variability may lead to excessive artificial censoring, which subsequently results in numerically unstable estimation. In this paper, we propose a strategy for weighted estimation of the associations in the accelerated failure time model. Weights are based on propensity score modeling of the treatment conditional on confounder variables. This novel application of propensity scores avoids the excess artificial censoring caused by the confounders and simplifies computation. Monte Carlo simulation studies and applications to AIDS and cancer research are used to illustrate the methodology.
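A minimal R sketch of the propensity-weighting portion only (simulated data; the paper's full estimator additionally handles the dependent, semicompeting-risks censoring, which is not reproduced here):

```r
library(survival)

set.seed(4)
n <- 1000
x1 <- rnorm(n); x2 <- rbinom(n, 1, 0.4)                           # confounders
treat  <- rbinom(n, 1, plogis(-0.5 + 0.8 * x1 + 0.6 * x2))        # treatment assignment
time   <- rweibull(n, shape = 1.2, scale = exp(2 - 0.5 * treat + 0.3 * x1))
status <- rbinom(n, 1, 0.7)                                       # 1 = event observed

# Propensity score model and inverse-probability-of-treatment weights.
ps  <- glm(treat ~ x1 + x2, family = binomial)$fitted.values
ipw <- ifelse(treat == 1, 1 / ps, 1 / (1 - ps))

# Weighted accelerated failure time (Weibull) model for the treatment effect.
fit <- survreg(Surv(time, status) ~ treat, weights = ipw, dist = "weibull")
summary(fit)
```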
7.
8.
In contrast to the usual ROC analysis with a contemporaneous reference standard, the time-dependent setting introduces the possibility that the reference standard refers to an event at a future time and may not be known for every patient due to censoring. The goal of this research is to determine the sample size required for a study design to address the accuracy of a diagnostic test using the area under the curve in time-dependent ROC analysis. We adapt a previously published estimator of the time-dependent area under the ROC curve, which is a function of the expected conditional survival functions. This estimator accommodates censored data. The estimation of the required sample size is based on approximations of the expected conditional survival functions and their variances, derived under parametric assumptions of an exponential failure time and an exponential censoring time. We also consider different patient enrollment strategies. The proposed method can provide an adequate sample size to ensure that the test's accuracy is estimated to a prespecified precision. We present results of a simulation study to assess the accuracy of the method and its robustness to departures from the parametric assumptions. We apply the proposed method to the design of a study of positron emission tomography as a predictor of disease-free survival in women undergoing therapy for cervical cancer.
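A minimal R sketch of the underlying quantity and why its precision drives the sample size calculation; this simulation ignores censoring for simplicity (the paper's estimator accommodates censored data) and shows how the Monte Carlo variability of the cumulative/dynamic AUC at a fixed time shrinks with n under an exponential failure-time assumption:

```r
# Cumulative/dynamic AUC at t0: P(marker_case > marker_control), where cases have T <= t0
# and controls have T > t0 (ties counted as 1/2).
cd_auc <- function(marker, time, t0) {
  cases <- marker[time <= t0]; controls <- marker[time > t0]
  mean(outer(cases, controls, ">") + 0.5 * outer(cases, controls, "=="))
}

sim_auc <- function(n, t0 = 2, reps = 500) {
  replicate(reps, {
    marker <- rnorm(n)
    time   <- rexp(n, rate = 0.3 * exp(0.8 * marker))   # higher marker, earlier failure
    cd_auc(marker, time, t0)
  })
}

set.seed(5)
sapply(c(50, 200, 800), function(n) sd(sim_auc(n)))     # AUC(t0) standard error shrinks with n
```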
9.
Varying-coefficient models have claimed an increasing portion of statistical research and are now applied to censored data analysis in medical studies. We incorporate such flexible semiparametric regression tools into the analysis of interval-censored data with a cured proportion. We adopt a two-part model to describe the overall survival experience for such complicated data. To fit the unknown functional components in the model, we take a local polynomial approach with the bandwidth chosen by cross-validation. We establish consistency and the asymptotic distribution of the estimators and propose to use the bootstrap for inference. We construct a BIC-type model selection method to recommend an appropriate specification of the parametric and nonparametric components in the model. We conduct extensive simulations to assess the performance of our methods. An application to decompression sickness data illustrates our methods.
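As a rough sketch of the two-part structure (notation mine, not necessarily the authors' formulation): with covariates x and an effect modifier w, the population survival function can be written as

$$ S_{\text{pop}}(t \mid x, w) = \pi(x, w) + \{1 - \pi(x, w)\}\, S_u(t \mid x, w), $$

where \(\pi(x, w) = \text{logit}^{-1}\{x^\top \alpha(w)\}\) is the cured proportion, \(S_u\) is the survival function of the uncured, and coefficient functions such as \(\alpha(\cdot)\) are estimated by local polynomial smoothing over w with a cross-validated bandwidth.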
10.
Missing (censored) death times for lung candidates in urgent need of transplant are a signpost of success for allocation policy makers. However, statisticians analyzing these data must properly account for dependent censoring as the sickest patients are removed from the waitlist. Multiple imputation allows the creation of complete data sets that can be used for a variety of standard analyses in this setting. We propose an approach to multiply impute lung candidate outcomes that incorporates (i) time-varying factors predicting removal from the waitlist and (ii) estimates of transplant urgency via restricted mean models. The measures of transplant urgency and benefit for individual patient profiles are discussed in the context of lung allocation score modeling in the USA. Marginal survival estimates in the event that a transplant does not occur are also provided. Simulations suggest that the proposed imputation method gives attractive results when compared with existing methods.
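A minimal R sketch of the core imputation step only (simulated data; the paper's full procedure uses time-varying waitlist factors and restricted mean models): draw a death time for each censored candidate from an estimated conditional survival distribution beyond the censoring time:

```r
library(survival)

set.seed(6)
n <- 400
sick   <- rnorm(n)                                   # baseline severity (hypothetical)
dtime  <- rexp(n, rate = 0.1 * exp(0.7 * sick))      # true death time
ctime  <- rexp(n, rate = 0.1)                        # censoring (e.g., removal or transplant)
time   <- pmin(dtime, ctime); status <- as.numeric(dtime <= ctime)
fit    <- coxph(Surv(time, status) ~ sick)

impute_death <- function(i) {
  if (status[i] == 1) return(time[i])                        # observed deaths are kept
  sf  <- survfit(fit, newdata = data.frame(sick = sick[i]))  # estimated S(t | covariates)
  s_c <- summary(sf, times = time[i])$surv                   # survival at the censoring time
  u   <- runif(1) * s_c                                      # sample from S(t)/S(c) beyond c
  cand <- sf$time[sf$surv <= u]
  if (length(cand) == 0) max(sf$time) else min(cand)         # crude handling of the tail
}

imputed <- sapply(1:n, impute_death)                         # one completed data set
```

Repeating the draw several times and combining the resulting analyses with standard multiple-imputation rules yields the complete data sets described above.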