991.
Modeling and validating Bayesian accrual models on clinical data and simulations using adaptive priors
Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The proposed priors adaptively combine the researcher's previous experience with the current accrual data to produce an estimate of the trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulations. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or only slightly off, producing smaller mean squared error, a high coverage percentage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when accrual is off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior behaves much like the weak priors when accrual is extremely off target but closer to the strong priors when accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research. Copyright © 2014 John Wiley & Sons, Ltd.
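The constant accrual model that these priors extend can be read, in its simplest form, as a homogeneous Poisson accrual process with a conjugate Gamma prior on the accrual rate. The sketch below forecasts trial completion time from the posterior predictive distribution under that simplified model only; all planning numbers and prior weights are hypothetical, and the accelerated and hedging priors themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical planning values (not taken from the paper): 300 subjects in 36 months.
n_target, t_planned = 300, 36.0

# Gamma(a0, b0) prior on the accrual rate (subjects per month); a0/b0 encodes the
# planned rate, and the 0.5 weight is an arbitrary choice of prior strength.
a0, b0 = 0.5 * n_target, 0.5 * t_planned

# Hypothetical interim data: 60 subjects accrued in the first 12 months.
n_obs, t_obs = 60, 12.0

# Conjugate Gamma-Poisson update of the accrual rate.
a_post, b_post = a0 + n_obs, b0 + t_obs

# Posterior-predictive distribution of the total trial duration.
rates = rng.gamma(a_post, 1.0 / b_post, size=10_000)
remaining = rng.gamma(n_target - n_obs, 1.0 / rates)  # time to accrue the rest
total_time = t_obs + remaining
print(np.percentile(total_time, [2.5, 50, 97.5]))
```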
992.
In biomedical research and practice, continuous biomarkers are often used for diagnosis and prognosis, with a cut-point established on the measurement to aid binary classification. When survival time is examined for disease prognostication and is found to be related to the baseline measure of a biomarker, employing a single cut-point on the biomarker may not be very informative. Using survival time-dependent sensitivity and specificity, we extend a concordance probability-based objective function to select survival time-related cut-points. To estimate the objective function with censored survival data, we adopt a non-parametric procedure for time-dependent receiver operating characteristic curves that uses nearest-neighbor estimation techniques. In a simulation study, the proposed method, when used to select a cut-point that optimally predicts survival at a given time within a specified range, yields satisfactory results. We apply the procedure to estimate a survival time-dependent cut-point for the prognostic biomarker serum bilirubin among patients with primary biliary cirrhosis. Copyright © 2014 John Wiley & Sons, Ltd.
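The survival time-dependent sensitivity and specificity invoked here are usually the cumulative/dynamic definitions used for time-dependent ROC curves; as a sketch, with X the baseline biomarker value, c a candidate cut-point, and T the survival time,

\[ \mathrm{Se}(c,t) = P(X > c \mid T \le t), \qquad \mathrm{Sp}(c,t) = P(X \le c \mid T > t), \]

and the nearest-neighbor procedure supplies censoring-consistent estimates of these conditional probabilities. The paper's concordance-based objective combines them over the time range of interest and is not reproduced here.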
993.
Kaifeng Lu, Statistics in Medicine, 2015, 34(5): 782-795
Pattern-mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data in longitudinal studies. The delta-adjusted pattern-mixture models handle missing data in a clinically interpretable manner and have been used as sensitivity analyses addressing the effectiveness hypothesis, while a likelihood-based approach that assumes data are missing at random is often used as the primary analysis addressing the efficacy hypothesis. We describe a method for power calculations for delta-adjusted pattern-mixture model sensitivity analyses in confirmatory clinical trials. To apply the method, we only need to specify the pattern probabilities at postbaseline time points, the expected treatment differences at postbaseline time points, the conditional covariance matrix of postbaseline measurements given the baseline measurement, and the delta-adjustment method for the pattern-mixture model. We use an example to illustrate and compare various delta-adjusted pattern-mixture models and use simulations to confirm the analytic results. Copyright © 2014 John Wiley & Sons, Ltd.
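As a sketch of the delta adjustment discussed above (the paper's own notation and adjustment method may differ): for a subject who discontinues before visit t, a common adjustment imputes the post-dropout outcome from the missing-at-random model and then shifts it by a fixed amount,

\[ E[Y_{it} \mid \text{dropout}, \text{history}] = E_{\mathrm{MAR}}[Y_{it} \mid \text{history}] + \delta_t, \]

where \delta_t = 0 recovers the MAR primary analysis and larger \delta_t encodes increasingly pessimistic assumptions about the unobserved outcomes, typically applied to the experimental arm only.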
994.
Count data often arise in biomedical studies, and the observed counts frequently contain an excess of zeros. The zero-inflated Poisson model provides a natural way to account for the excess zero counts. Within a semiparametric framework, we propose a generalized partially linear single-index model for the mean of the Poisson component, the probability of zero, or both. We develop the estimation and inference procedure via a profile maximum likelihood method. Under some mild conditions, we establish the asymptotic properties of the profile likelihood estimators. The finite-sample performance of the proposed method is demonstrated by simulation studies, and the new model is illustrated with a medical care dataset. Copyright © 2014 John Wiley & Sons, Ltd.
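For reference, the zero-inflated Poisson distribution underlying this model mixes a point mass at zero (probability p) with a Poisson(\lambda) component:

\[ P(Y = 0) = p + (1 - p)e^{-\lambda}, \qquad P(Y = k) = (1 - p)\frac{e^{-\lambda}\lambda^{k}}{k!}, \quad k = 1, 2, \ldots \]

In the partially linear single-index version one may take, for example, \log\lambda = z^\top\beta + g(x^\top\alpha) with g an unknown smooth function; the exact specification (for \lambda, p, or both) follows the paper and is only sketched here.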
995.
Sensitivity of regression calibration to non-perfect validation data with application to the Norwegian Women and Cancer Study
John P. Buonaccorsi, Ingvild Dalen, Petter Laake, Anette Hjartåker, Dagrun Engeset, Magne Thoresen, Statistics in Medicine, 2015, 34(8): 1389-1403
Measurement error occurs when we observe error-prone surrogates rather than true values. It is common in observational studies, especially in epidemiology and in nutritional epidemiology in particular. Correcting for measurement error has become common, and regression calibration is the most popular way to account for measurement error in continuous covariates. We consider its use in the context where there are validation data, which are used to calibrate the true values given the observed covariates. We allow for the case that the true value itself may not be observed in the validation data, but instead a so-called reference measure is observed. The regression calibration method relies on certain assumptions. This paper examines possible biases in regression calibration estimators when some of these assumptions are violated. More specifically, we allow for the fact that (i) the reference measure may not necessarily be an 'alloyed gold standard' (i.e., unbiased) for the true value; (ii) there may be correlated random subject effects contributing to the surrogate and reference measures in the validation data; and (iii) the calibration model itself may not be the same in the validation study as in the main study, that is, it is not transportable. We expand on previous work to provide a general result that characterizes the potential bias in regression calibration estimators under any combination of the aforementioned violations. We then illustrate some of the general results with data from the Norwegian Women and Cancer Study. Copyright © 2015 John Wiley & Sons, Ltd.
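In its simplest linear form (a sketch; the calibration model in the paper may be richer), regression calibration fits

\[ \hat{E}[X \mid W, Z] = \hat{\gamma}_0 + \hat{\gamma}_1 W + \hat{\gamma}_2^\top Z \]

in the validation data, with the reference measure standing in for the true X, and then substitutes this fitted value for X in the main-study outcome model. The biases studied above arise when the reference measure is itself biased for X, when the surrogate and reference measures share subject-level error, or when the fitted calibration model does not transport to the main study.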
996.
Common clinical studies assess the quality of prognostic factors, such as gene expression signatures, clinical variables or environmental factors, and cluster patients into various risk groups. Typical examples include cancer clinical trials where patients are clustered into high- or low-risk groups. When applied to survival data analysis, such groups are intended to represent patients with similar survival odds and to guide the selection of the most appropriate therapy. The relevance of such risk groups, and of the related prognostic factors, is typically assessed through the computation of a hazard ratio. We first stress three limitations of assessing risk groups through the hazard ratio: (1) it may promote the definition of arbitrarily unbalanced risk groups; (2) an apparently optimal group hazard ratio can be largely inconsistent with the p-value commonly associated with it; and (3) some marginal changes between risk group proportions may lead to highly different hazard ratio values. These issues could lead to inappropriate comparisons between various prognostic factors. Next, we propose the balanced hazard ratio to solve these issues. This new performance metric retains an intuitive interpretation and is just as simple to compute. We also show how the balanced hazard ratio leads to a natural cut-off choice for defining risk groups from continuous risk scores. The proposed methodology is validated through controlled experiments for which a prescribed cut-off value is defined by design. Further results are reported on several cancer prognosis studies, and the proposed methodology could be applied more generally to assess the quality of any prognostic marker. Copyright © 2015 John Wiley & Sons, Ltd.
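For reference, the conventional group hazard ratio whose limitations are listed above is the one obtained from a two-group Cox model (the balanced hazard ratio itself is defined in the paper and is not reproduced here):

\[ \lambda(t \mid g) = \lambda_0(t)\, e^{\beta g}, \qquad g \in \{0, 1\}, \qquad \mathrm{HR} = e^{\beta}. \]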
997.
Penalization, bias reduction, and default priors in logistic and related categorical and survival regressions
Penalization is a very general method of stabilizing or regularizing estimates, with both frequentist and Bayesian rationales. We consider some questions that arise when weighing alternative penalties for logistic regression and related models. The most widely programmed penalty appears to be the Firth small-sample bias-reduction method (albeit with small differences among implementations and the results they provide), which corresponds to using the log density of the Jeffreys invariant prior distribution as a penalty function. The latter representation raises some serious contextual objections to the Firth reduction, which also apply to alternative penalties based on t-distributions (including Cauchy priors). Taking simplicity of implementation and interpretation as our chief criteria, we propose that the log-F(1,1) prior provides a better default penalty than other proposals. Penalization based on more general log-F priors is trivial to implement and facilitates mean-squared-error reduction and sensitivity analyses of penalty strength by varying the number of prior degrees of freedom. We caution, however, against penalization of intercepts, which are unduly sensitive to covariate coding and design idiosyncrasies. Copyright © 2015 John Wiley & Sons, Ltd.
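The log-F(m,m) prior mentioned above has log density (m/2)\beta - m\log(1+e^{\beta}) for a coefficient \beta, up to an additive constant, so a penalized fit can be sketched as direct optimization of the penalized log-likelihood. The toy data, the m = 1 default, and the choice to penalize only the slopes below are illustrative assumptions, not a transcription of the paper's recommended implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_penalized_loglik(beta, X, y, m=1.0):
    """Negative log-likelihood of a logistic model plus a log-F(m, m) penalty
    on each slope; the intercept (beta[0]) is deliberately left unpenalized."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # y*eta - log(1 + e^eta)
    slopes = beta[1:]
    penalty = np.sum(0.5 * m * slopes - m * np.logaddexp(0.0, slopes))
    return -(loglik + penalty)

# Hypothetical toy data (not from the paper).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = rng.binomial(1, expit(X @ np.array([-0.5, 1.0, 0.0])))

fit = minimize(neg_penalized_loglik, x0=np.zeros(3), args=(X, y))
print(fit.x)  # penalized estimates; slopes are shrunk toward 0
```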
998.
Analysis of transtheoretical model of health behavioral changes in a nutrition intervention study—a continuous time Markov chain model with Bayesian approach
Junsheng Ma, Wenyaw Chan, Chu-Lin Tsai, Momiao Xiong, Barbara C. Tilley, Statistics in Medicine, 2015, 34(27): 3577-3589
Continuous-time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but are rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model, which often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention has been given to the relationships between a CTMC model and associated covariates within the framework of the transtheoretical model. We developed a Bayesian approach to evaluate covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and the temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantively less time in the precontemplation stage and were more likely to move from an unhealthy to a healthy state and less likely to move in the opposite direction. Copyright © 2015 John Wiley & Sons, Ltd.
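A standard way to write the covariate link described above (a sketch; the paper's exact parametrization may differ) is to make each off-diagonal transition intensity of the CTMC log-linear in the covariates:

\[ q_{rs}(z_i) = q_{rs}^{(0)} \exp\!\left(z_i^\top \beta_{rs}\right), \quad r \ne s, \qquad P(t) = \exp(tQ), \]

where Q collects the intensities q_{rs} and the matrix exponential P(t) gives the transition probabilities over an interval of length t.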
999.
A joint model for interval-censored functional decline trajectories under informative observation
Mary Louise Lesperance, Veronica Sabelnykova, Farouk Salim Nathoo, Francis Lau, Michael G. Downing, Statistics in Medicine, 2015, 34(29): 3929-3948
Multi-state models are useful for modelling disease progression, with the state space of the process representing the discrete disease status of subjects. Often, the disease process is only observed at clinical visits, and the schedule of these visits can depend on the disease status of patients. In such situations, the frequency and timing of observations may depend on transition times that are themselves unobserved in an interval-censored setting. There is a potential for bias if we treat a disease process with informative observation times as if it followed a non-informative observation scheme with pre-specified examination times. In this paper, we develop a joint model for the disease and observation processes to ensure valid inference, because the follow-up process may itself contain information about the disease process. The transitions for each subject are modelled using a Markov process, and bivariate subject-specific random effects are used to link the disease and observation models. Inference is based on a Bayesian framework, and we apply our joint model to the analysis of a large study examining functional decline trajectories of palliative care patients. Copyright © 2015 John Wiley & Sons, Ltd.
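One generic way to write such a shared-random-effects joint model (a sketch under assumed notation, not necessarily the paper's exact specification) pairs the disease transition intensities with a visit intensity:

\[ q_{rs}(t \mid u_{1i}) = q_{rs}^{(0)} \exp\!\left(z_i^\top \beta_{rs} + u_{1i}\right), \qquad \lambda_i(t \mid u_{2i}) = \lambda_0 \exp\!\left(w_i^\top \gamma + u_{2i}\right), \qquad (u_{1i}, u_{2i})^\top \sim N_2(0, \Sigma), \]

so that correlation between u_{1i} and u_{2i} captures the dependence of the visit schedule on the interval-censored disease process.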
1000.
Joseph G. Ibrahim, Ming-Hui Chen, Yeongjin Gwon, Fang Chen, Statistics in Medicine, 2015, 34(28): 3724-3749
The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied to a wide variety of models and settings, in both the experimental design and analysis contexts. In this review article, we give an A-to-Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of posterior estimates with power priors. Real data analyses are given, illustrating the power prior as well as its use in the Bayesian design of clinical trials. Copyright © 2015 John Wiley & Sons, Ltd.
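For reference, the basic power prior constructed from historical data D_0 is

\[ \pi(\theta \mid D_0, a_0) \propto L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), \qquad 0 \le a_0 \le 1, \]

where L(\theta \mid D_0) is the historical-data likelihood, \pi_0(\theta) is an initial prior, and the discounting parameter a_0 controls how much weight the historical data receive (a_0 = 0 discards them, a_0 = 1 pools them fully); the variations reviewed in the article, such as treating a_0 as random, build on this form.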