2.
The “winner's curse” is a subtle and difficult problem in interpretation of genetic association, in which association estimates from large‐scale gene detection studies are larger in magnitude than those from subsequent replication studies. This is practically important because use of a biased estimate from the original study will yield an underestimate of sample size requirements for replication, leaving the investigators with an underpowered study. Motivated by investigation of the genetics of type 1 diabetes complications in a longitudinal cohort of participants in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Genetics Study, we apply a bootstrap resampling method in analysis of time to nephropathy under a Cox proportional hazards model, examining 1,213 single‐nucleotide polymorphisms (SNPs) in 201 candidate genes custom genotyped in 1,361 white probands. Among 15 top‐ranked SNPs, bias reduction in log hazard ratio estimates ranges from 43.1% to 80.5%. In simulation studies based on the observed DCCT/EDIC genotype data, genome‐wide bootstrap estimates for false‐positive SNPs and for true‐positive SNPs with low‐to‐moderate power are closer to the true values than uncorrected naïve estimates, but tend to overcorrect SNPs with high power. This bias‐reduction technique is generally applicable for complex trait studies including quantitative, binary, and time‐to‐event traits.
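The bootstrap bias-reduction idea can be sketched on simulated data. The DCCT/EDIC data are not public, so everything below is invented for illustration: a simple per-SNP OLS screen stands in for the Cox model, and all SNPs are null so that the "winner" is inflated purely by selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 300 subjects, 50 null SNPs (genotypes 0/1/2), a pure-noise trait.
n, p, B = 300, 50, 200
geno = rng.integers(0, 3, size=(n, p)).astype(float)
trait = rng.normal(size=n)

def per_snp_slopes(g, y):
    """Per-SNP OLS slopes of y on each centered genotype column."""
    gc = g - g.mean(axis=0)
    return gc.T @ (y - y.mean()) / (gc ** 2).sum(axis=0)

beta_full = per_snp_slopes(geno, trait)
top = np.argmax(np.abs(beta_full))       # the "winner" SNP
naive = np.abs(beta_full[top])           # inflated by selection

# Bootstrap: in each replicate, repeat the detection step and compare the
# replicate's winner with the full-sample estimate of that same SNP; the
# average gap estimates the selection bias.
bias = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    beta_b = per_snp_slopes(geno[idx], trait[idx])
    j = np.argmax(np.abs(beta_b))
    bias[b] = np.abs(beta_b[j]) - np.abs(beta_full[j])

corrected = naive - bias.mean()          # bias-reduced estimate
```

Because every SNP is null here, the corrected magnitude shrinks toward zero, mirroring the bias reduction the abstract reports for top-ranked SNPs.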
3.
It has long been recognized that the diffusion tensor model is inappropriate to characterize complex fiber architecture, causing tensor‐derived measures such as the primary eigenvector and fractional anisotropy to be unreliable or misleading in these regions. There is however still debate about the impact of this problem in practice. A recent study using a Bayesian automatic relevance detection (ARD) multicompartment model suggested that a third of white matter (WM) voxels contain crossing fibers, a value that, whilst already significant, is likely to be an underestimate. The aim of this study is to provide more robust estimates of the proportion of affected voxels, the number of fiber orientations within each WM voxel, and the impact on tensor‐derived analyses, using large, high‐quality diffusion‐weighted data sets, with reconstruction parameters optimized specifically for this task. Two reconstruction algorithms were used: constrained spherical deconvolution (CSD), and the ARD method used in the previous study. We estimate the proportion of WM voxels containing crossing fibers to be ~90% (using CSD) and 63% (using ARD). Both these values are much higher than previously reported, strongly suggesting that the diffusion tensor model is inadequate in the vast majority of WM regions. This has serious implications for downstream processing applications that depend on this model, particularly tractography, and the interpretation of anisotropy and radial/axial diffusivity measures. Hum Brain Mapp 34:2747–2766, 2013. © 2012 Wiley Periodicals, Inc.
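Why crossing fibers make fractional anisotropy (FA) misleading can be shown with a toy calculation (eigenvalues in invented but plausible units; this is not the CSD or ARD analysis itself, just the standard FA formula):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()
    return np.sqrt(1.5 * ((l - md) ** 2).sum() / (l ** 2).sum())

# Single coherent fiber population: a prolate tensor (units ~1e-3 mm^2/s).
single = np.diag([1.7, 0.2, 0.2])

# Two populations crossing at 90 degrees: the single fitted tensor is
# roughly the average of the two prolate tensors, i.e. oblate.
crossing = 0.5 * (np.diag([1.7, 0.2, 0.2]) + np.diag([0.2, 1.7, 0.2]))

fa_single = fractional_anisotropy(np.linalg.eigvalsh(single))
fa_crossing = fractional_anisotropy(np.linalg.eigvalsh(crossing))
# FA drops sharply in the crossing voxel even though each underlying
# fiber population is unchanged - the drop reflects model failure,
# not tissue change.
```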
4.
In network meta‐analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between‐studies heterogeneities. Although their standard inference methods depend on large sample approximations (eg, restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (likewise, the type I error probabilities of the corresponding tests cannot be maintained). The invalidity issue may seriously influence the overall conclusions of network meta‐analyses. In this article, we develop several improved inference methods for network meta‐analyses to resolve these problems. We first introduce 2 efficient likelihood‐based inference methods, the likelihood ratio test–based and efficient score test–based methods, in a general framework of network meta‐analysis. Then, to improve the small‐sample inferences, we develop improved higher‐order asymptotic methods using Bartlett‐type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case‐by‐case analyses and to permit flexible application to the various statistical models of network meta‐analyses. These methods can also be straightforwardly applied to multivariate meta‐regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood–based inference method. Applications to 2 network meta‐analysis datasets are provided.
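A minimal sketch of a parametric-bootstrap confidence interval for a pooled random-effects mean, reduced from a network to a single pairwise contrast with invented data. The DerSimonian-Laird estimator stands in for the REML fit, and no Bartlett-type correction is attempted; the point is only the resampling scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Six small trials on one contrast: log-odds-ratio estimates y with
# within-study variances v (all numbers invented).
y = np.array([0.31, -0.12, 0.45, 0.22, 0.60, -0.05])
v = np.array([0.04, 0.09, 0.05, 0.12, 0.07, 0.10])

def dl_estimate(y, v):
    """DerSimonian-Laird heterogeneity variance and pooled mean."""
    w = 1.0 / v
    mu_fe = (w * y).sum() / w.sum()
    q = (w * (y - mu_fe) ** 2).sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    wr = 1.0 / (v + tau2)
    return (wr * y).sum() / wr.sum(), tau2

mu_hat, tau2_hat = dl_estimate(y, v)

# Parametric bootstrap: simulate studies from the fitted model, re-estimate
# the pooled mean each time, and take percentile limits.
B = 2000
boot = np.empty(B)
for b in range(B):
    yb = rng.normal(mu_hat, np.sqrt(v + tau2_hat))
    boot[b], _ = dl_estimate(yb, v)
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The bootstrap interval reflects the extra uncertainty from estimating tau2 with only six trials, which the naive Wald interval ignores.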
5.
When summarizing the benchmarks for nursing quality indicators with confidence intervals around the means, bounds too high or too low are sometimes found due to small sample size or violation of the normality assumption. Transforming the data or truncating the confidence intervals at realistic values can solve the problem of out of range values. However, truncation does not improve upon the non-normality of the data, and transformations are not always successful in normalizing the data. The percentile bootstrap has the advantage of providing realistic bounds while not relying upon the assumption of normality and may provide a convenient way of obtaining appropriate confidence intervals around the mean for nursing quality indicators.
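The percentile bootstrap is a few lines of code; a sketch with an invented, skewed indicator (falls per 1000 patient-days on 12 hypothetical units):

```python
import numpy as np

rng = np.random.default_rng(2)

# Non-negative, right-skewed toy indicator values.
falls = np.array([0.0, 0.4, 0.7, 1.1, 1.3, 1.8, 2.0, 2.4, 3.1, 3.3, 4.9, 8.2])

B = 5000
means = np.array([rng.choice(falls, size=falls.size).mean() for _ in range(B)])
lo, hi = np.percentile(means, [2.5, 97.5])
# Unlike mean +/- t*SE, the percentile limits can never leave the range of
# the observed data, so an unrealistic bound such as a negative fall rate
# is impossible.
```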
7.
The Gompertz demographic model describes rates of aging and age-independent mortality with the parameters α and A, respectively. Estimates of these parameters have traditionally been based on the assumption that mortality rates are constant over short to moderate time periods. This assumption is questionable even for very large samples assayed over short time intervals. In this article, we compare several methods for estimating the Gompertz parameters, including some that do not assume constant mortality rates. A maximum likelihood method that does not assume constant mortality rates is shown to be best, based on the bias and variance of the Gompertz parameter estimates. Moreover, we show how the Gompertz equation can then be used to predict mean longevity and the time of the nth percentile of mortality. Methods are also developed that assign confidence intervals to such estimates. In some cases, these statistics may be estimated accurately from only the early deaths of a large cohort, thus providing an opportunity to estimate the longevity of long-lived organisms quickly.
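A sketch of Gompertz maximum likelihood from exact death times, with no constant-mortality assumption. The hazard is μ(t) = A·exp(αt), so S(t) = exp(−(A/α)(e^{αt} − 1)); for fixed α the MLE of A is closed form, Â(α) = nα / Σ(e^{αt_i} − 1), which allows a simple profile-likelihood search. The profile-grid approach is a convenience here, not necessarily the authors' algorithm, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate exact death times by inverting S(t) = exp(-(A/alpha)(e^{alpha t}-1)).
alpha_true, a_true, n = 0.10, 0.002, 5000
u = rng.uniform(size=n)
t = np.log(1.0 - (alpha_true / a_true) * np.log(u)) / alpha_true

def profile_loglik(alpha, t):
    """Log-likelihood with A profiled out: A(alpha) = n*alpha / sum(e^{at}-1)."""
    s = (np.exp(alpha * t) - 1.0).sum()
    a = len(t) * alpha / s
    return len(t) * np.log(a) + alpha * t.sum() - len(t)

grid = np.linspace(0.01, 0.30, 600)
alpha_hat = grid[np.argmax([profile_loglik(a, t) for a in grid])]
a_hat = n * alpha_hat / (np.exp(alpha_hat * t) - 1.0).sum()

# Predicted median longevity (the 50th percentile of mortality) from the fit.
t50 = np.log(1.0 - (alpha_hat / a_hat) * np.log(0.5)) / alpha_hat
```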
8.
Auld MC. Health Economics 2002;11(6):471-483
Using a unique longitudinal dataset tracking the experiences of patients diagnosed with HIV+ disease, this paper develops and estimates a model capable of recovering the effect of revisions in life expectancy on labor market outcomes. The data allow us to estimate the effect of changes in health status (as objectively measured by CD4 counts) and the impact of learning that one is HIV+, which we interpret as a negative shock to life expectancy. Both parametric and distribution-free models robustly indicate that decreases in health have little effect on labor demand but decrease the probability of employment. We conclude that, in this sample, the negative association between income and health is attributable mostly to the effect of altered incentives induced by changes in life expectancy.
9.
The paper considers the statistical problem of estimating the origin of DNA replication within the human ribosomal DNA (rDNA) locus and the issue of assessing the standard error of the estimate. Based on mapping the cumulative replication index (CRI), two different modelling schemes are suggested and investigated. The statistical problem of constructing a confidence interval for the origin of DNA replication is related to Fieller's problem of obtaining a confidence interval for the ratio of two normal means. Standard normal theory, the delta and bootstrap methods are used to estimate the standard error of the estimate of the origin of DNA replication, as well as the variation of the replication rate.
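The contrast between a Fieller interval and a delta-method interval for a ratio of two normal means can be sketched with invented summary statistics (these numbers have nothing to do with the rDNA data; independence of the two means is assumed for simplicity):

```python
import numpy as np

# Hypothetical estimates of the two normal means and their standard errors.
m1, se1 = 4.0, 0.5
m2, se2 = 2.0, 0.2
z = 1.96
r = m1 / m2

# Delta-method standard error of the ratio, symmetric interval.
se_delta = abs(r) * np.sqrt(se1**2 / m1**2 + se2**2 / m2**2)
d_lo, d_hi = r - z * se_delta, r + z * se_delta

# Fieller: solve (m1 - rho*m2)^2 = z^2 (se1^2 + rho^2 se2^2) for rho.
a = m2**2 - z**2 * se2**2
b = -2.0 * m1 * m2
c = m1**2 - z**2 * se1**2
disc = np.sqrt(b**2 - 4 * a * c)   # real, and the interval is bounded, since a > 0
f_lo, f_hi = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
# The Fieller interval is asymmetric about r (longer upper arm here),
# whereas the delta interval is forced to be symmetric.
```

When the denominator mean is poorly determined (a ≤ 0), the Fieller set becomes unbounded while the delta interval stays deceptively finite, which is exactly why Fieller's construction matters for this problem.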
10.
Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before.
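The bootstrap stability check can be sketched on synthetic data: refit a stepwise selection on many bootstrap resamples and tabulate how often each variable is chosen. The data below are invented, with a redundant "global" score built from the other predictors to mimic the global QL subscale, and a bare-bones greedy forward selection stands in for the trial's stepwise procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# x0 is a real prognostic factor; x1..x3 are noise; x4 is a redundant
# "global" score constructed from the others (induced multicollinearity).
n = 200
X = rng.normal(size=(n, 4))
X = np.column_stack([X, X.mean(axis=1) + 0.1 * rng.normal(size=n)])
y = 1.0 * X[:, 0] + rng.normal(size=n)

def forward_select(X, y, k=2):
    """Greedy forward selection of k predictors by residual correlation."""
    chosen, resid = [], y - y.mean()
    for _ in range(k):
        cors = np.abs([np.corrcoef(X[:, j], resid)[0, 1]
                       for j in range(X.shape[1])])
        cors[chosen] = -1.0                      # exclude already-chosen
        chosen.append(int(np.argmax(cors)))
        Z = np.column_stack([np.ones(len(y)), X[:, chosen]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
    return chosen

# Selection frequency across bootstrap resamples measures model stability:
# variables picked only sporadically flag an unstable model.
B = 100
freq = np.zeros(X.shape[1])
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    for j in forward_select(X[idx], y[idx]):
        freq[j] += 1
freq /= B
```

A genuinely prognostic variable is selected in nearly every resample, while collinear and noise variables split the remaining selections between them, which is the instability signature the abstract describes.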