169 results found (search time: 46 ms).
1.
The weighted rank pairwise correlation (WRPC) statistic has been proposed as a robust test of genetic linkage, particularly adapted to the analysis of large and complex pedigrees and to age-dependent and heterogeneous diseases. In this paper a simulation study is presented. Validity and power of the WRPC test are studied and compared with the Haseman-Elston sib-pair method for various types of problems. The power of the WRPC test is slightly lower than that of the Haseman-Elston method for analyzing a large number of small, randomly chosen pedigrees. It is higher, however, in the presence of genetic heterogeneity or when analyzing large individual pedigrees. Recently, evidence of linkage of Alzheimer's disease with a locus on chromosome 14, D14S43, has been obtained by the Lod-score method. We reanalyze these data using the WRPC test, essentially confirming the results of the Lod-score method. The WRPC test statistic is higher than the equivalent Lod-score statistic for the two pedigrees that show strong evidence of linkage with both methods. The global WRPC test statistic is slightly lower than the Lod-score test statistic. The WRPC test, however, makes no hypothesis about a specific genetic transmission model and can be computed very quickly; in addition, an exact P-value can be computed by simulation for individual pedigrees. © 1994 Wiley-Liss, Inc.
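For context on the comparator method: the classical Haseman-Elston sib-pair test regresses the squared trait difference of each sib pair on the estimated proportion of alleles shared identical by descent (IBD), and a significantly negative slope indicates linkage. Below is a minimal sketch of that regression on simulated sib-pair data; it is not the WRPC statistic itself, and the simulated effect sizes are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated sib-pair data: pi_hat = estimated proportion of alleles shared
# identical by descent at the marker; under linkage the squared trait
# difference shrinks as IBD sharing increases (effect size is made up).
n_pairs = 500
pi_hat = rng.choice([0.0, 0.5, 1.0], size=n_pairs, p=[0.25, 0.5, 0.25])
trait_diff = rng.normal(scale=np.sqrt(2.0 - pi_hat))
y = trait_diff ** 2

# Haseman-Elston: regress squared differences on IBD sharing; a negative
# slope is evidence of linkage (one-sided test).
slope, intercept, r, p_two_sided, se = stats.linregress(pi_hat, y)
p_one_sided = p_two_sided / 2 if slope < 0 else 1 - p_two_sided / 2
print(f"slope = {slope:.3f}, one-sided P = {p_one_sided:.4f}")
```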
2.
Objective: To describe nursing home residents' (NHRs) functional trajectories and mortality after a transfer to the emergency department (ED). Design: Case-control observational multicenter study. Setting and Participants: In total, 1037 NHRs presenting to 17 EDs in France over 4 nonconsecutive weeks in 2016. Methods: Finite mixture models were fitted to longitudinal data on activities of daily living (ADL) scores before transfer (time 1), during hospitalization (time 2), and within 1 week after discharge (time 3) to identify groups of NHRs following similar functional evolution. Factors associated with mortality were investigated by Cox regressions. Results: Trajectory modeling identified 4 distinct trajectories of ADL. The first showed high and stable (across times 1, 2, and 3) functional capacity of around 5.2/6 ADL points, with breathlessness as the main condition leading to transfer. The second displayed an initial 37.8% decrease in baseline ADL performance (between time 1 and time 2), followed by a 12.5% recovery of baseline ADL performance (time 2 to time 3), with fractures as the main condition. The third displayed a similar initial decrease, followed by a 6.7% recovery. The fourth displayed an initial 70.1% decrease, followed by an 8.5% recovery, in more complex geriatric polypathology situations. Functional decline was more likely after transfer for a cerebrovascular condition or a fracture, after discharge from the ED to a surgery department, and with a heavier burden of distressing symptoms during transfer. Mortality after ED transfer was more likely in older NHRs, those in a more severe condition, those hospitalized more frequently in the past month, and those transferred for cerebrovascular conditions or breathlessness. Conclusions and Implications: The identified trajectories and the factors associated with functional decline and mortality should help clinicians decide whether to transfer NHRs to the ED. NHRs with high functional ability seem to benefit from ED transfers, whereas on-site alternatives should be sought for those with poor functional ability.
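The mortality analysis above relies on Cox proportional-hazards regression. A minimal sketch of such a model using the lifelines package is shown below; the data frame and column names are hypothetical placeholders, not the study's variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per nursing home resident transferred to the ED.
# Column names and values are illustrative only.
df = pd.DataFrame({
    "followup_days":   [30, 12, 30, 7, 30, 21, 30, 3],
    "died":            [0, 1, 0, 1, 0, 1, 0, 1],
    "age":             [84, 91, 79, 88, 82, 95, 86, 90],
    "cerebrovascular": [0, 1, 1, 0, 0, 1, 0, 1],
    "breathlessness":  [1, 0, 0, 1, 0, 1, 1, 0],
})

# Small ridge penalty stabilizes the fit on a tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="followup_days", event_col="died")
cph.print_summary()  # hazard ratios with confidence intervals
```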
3.
This paper proposes the development of a drug product Manufacturing Classification System (MCS) based on processing route. It summarizes conclusions from a dedicated APS conference and subsequent discussions within APS focus groups and the MCS working party. The MCS is intended as a tool for pharmaceutical scientists to rank the feasibility of different processing routes for the manufacture of oral solid dosage forms, based on selected properties of the API and the needs of the formulation. It has many applications in pharmaceutical development; in particular, it will provide a common understanding of risk by defining what the "right particles" are, enable selection of the best process, and aid subsequent transfer to manufacturing. The ultimate aim is prediction of product developability and processability based upon previous experience.

This paper is intended to stimulate contributions from a broad range of stakeholders to develop the MCS concept further and apply it in practice. In particular, opinions are sought on which API properties are important when selecting or modifying materials to enable an efficient and robust pharmaceutical manufacturing process. Feedback can be given by replying to our dedicated e-mail address (mcs@apsgb.org), completing the survey on our LinkedIn site, or attending one of our planned conference roundtable sessions.
4.
Mathematical models of natural systems are abstractions of much more complicated processes. Developing informative and realistic models of such systems typically involves suitable statistical inference methods, domain expertise, and a modicum of luck. Except for cases where physical principles provide sufficient guidance, it will also be generally possible to come up with a large number of potential models that are compatible with a given natural system and any finite amount of data generated from experiments on that system. Here we develop a computational framework to systematically evaluate potentially vast sets of candidate differential equation models in light of experimental and prior knowledge about biological systems. This topological sensitivity analysis enables us to evaluate quantitatively the dependence of model inferences and predictions on the assumed model structures. Failure to consider the impact of structural uncertainty introduces biases into the analysis and potentially gives rise to misleading conclusions.

Using simple models to study complex systems has become standard practice in different fields, including systems biology, ecology, and economics. Although we know and accept that such models do not fully capture the complexity of the underlying systems, they can nevertheless provide meaningful predictions and insights (1). A successful model is one that captures the key features of the system while omitting extraneous details that hinder interpretation and understanding. Constructing such a model is usually a nontrivial task involving stages of refinement and improvement.

When dealing with models that are (necessarily and by design) gross oversimplifications of the reality they represent, it is important that we are aware of their limitations and do not seek to overinterpret them. This is particularly true when modeling complex systems for which there are only limited or incomplete observations. In such cases, we expect there to be numerous models that would be supported by the observed data, many (perhaps most) of which we may not yet have identified. The literature is awash with papers in which a single model is proposed and fitted to a dataset, and conclusions drawn without any consideration of (i) possible alternative models that might describe the observed behavior and known facts equally well (or even better); or (ii) whether the conclusions drawn from different models (still consistent with current observations) would agree with one another.

We propose an approach to assess the impact of uncertainty in model structure on our conclusions. Our approach is distinct from, and complementary to, existing methods designed to address structural uncertainty, including model selection, model averaging, and ensemble modeling (2–9). Analogous to parametric sensitivity analysis (PSA), which assesses the sensitivity of a model's behavior to changes in parameter values, we consider the sensitivity of a model's output to changes in its inherent structural assumptions. PSA techniques can usually be classified as (i) local analyses, in which we identify a single "optimal" vector of parameter values, and then quantify the degree to which small perturbations to these values change our conclusions or predictions; or (ii) global analyses, where we consider an ensemble of parameter vectors (e.g., samples from the posterior distribution in the Bayesian formalism) and quantify the corresponding variability in the model's output.

Although several approaches fall within these categories (10–12), all implicitly condition on a particular model architecture. Here we present a method for performing sensitivity analyses for ordinary differential equation (ODE) models where the architecture of these models is not perfectly known, which is likely to be the case for all realistic complex systems. We do this by considering network representations of our models, and assessing the sensitivity of our inferences to the network topology. We refer to our approach as topological sensitivity analysis (TSA).

Here we illustrate TSA in the context of parameter inference, but we could also apply our method to study other conclusions drawn from ODE models (e.g., model forecasts or steady-state analyses). When we use experimental data to infer parameters associated with a specific model it is critical to assess the uncertainty associated with our parameter estimates (13), particularly if we wish to relate model parameters to physical (e.g., reaction rate) constants in the real world. Too often this uncertainty is estimated only by considering the variation in a parameter estimate conditional on a particular model, while ignoring the component of uncertainty that stems from potential model misspecification. The latter can, in principle, be considered within model selection or averaging frameworks, where several distinct models are proposed and weighted according to their ability to fit the observed data (2–5). However, the models tend to be limited to a small, often diverse, group that act as exemplars for each competing hypothesis but ignore similar model structures that could represent the same hypotheses. Moreover, we know that model selection results can be sensitive to the particular experiments performed (14).

We assume that an initial model, together with parameters or plausible parameter ranges, has been proposed to describe the dynamics of a given system. This model may have been constructed based on expert knowledge of the system, selected from previous studies, or (particularly in the case of large systems) proposed automatically using network inference algorithms (15–19), for example. Using TSA, we aim to identify how reliant any conclusions and inferences are on the particular set of structural assumptions made in this initial candidate model. We do this by identifying alterations to model topology that maintain consistency with the observed dynamics and test how these changes impact the conclusions we draw (Fig. 1). Analogous to PSA we may perform local or global analyses—by testing a small set of "close" models with minor structural changes, or performing large-scale searches of diverse model topologies, respectively. To do this we require efficient techniques for exploring the space of network topologies and, for each topology, inferring the parameters of the corresponding ODE models.

Fig. 1. Overview of TSA applied to parameter inference. (A) Model space includes our initial candidate model and a series of altered topologies that are consistent with our chosen rules (e.g., all two-edge, three-node networks, where nodes indicate species and directed edges show interactions). One topology may correspond to one or several ODE models depending on the parametric forms we choose to represent interactions. (B) We test each ODE model to see whether it can generate dynamics consistent with our candidate model and the available experimental data. For TSA, we select a group of these compatible models and compare the conclusions we would draw using each of them. (C) Associated with each model m is a parameter space Θ_m (gray); using Bayesian methods we can infer the joint posterior parameter distribution (dashed contours) for a particular model and dataset. (D) In some cases, equivalent parameters will be present in several selected models (e.g., θ_1, which is associated with the same interaction in models a–c). We can compare the marginal posterior distribution inferred using each model for a common parameter to test whether our inferences are robust to topological changes, or rely on one specific set of model assumptions (i.e., sensitive). Different models may result in marginal distributions that differ in position and/or shape for equivalent parameters, but we cannot tell from this alone which model better represents reality—this requires model selection approaches (2–4).

Even for networks with relatively few nodes (corresponding to ODE models involving few interacting entities), the number of possible topologies can be enormous. Searching this "model space" presents formidable computational challenges. We use here a gradient-matching parameter inference approach that exploits the fact that the nth node, x_n, in our network representation is conditionally independent of all other nodes given its regulating parents, Pa(x_n) (20–26). The exploration of network topologies is then reduced to the much simpler problem of considering, independently for each n, the possible parent sets of x_n in an approach that is straightforwardly parallelized.

We use biological examples to illustrate local and global searches of model spaces to identify alternative model structures that are consistent with available data. In some cases we find that even minor structural uncertainty in model topology can render our conclusions—here parameter inferences—unreliable and make PSA results positively misleading. However, other inferences are robust across diverse compatible model structures, allowing us to be more confident in assigning scientific meaning to the inferred parameter values.
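As a deliberately simplified sketch of the gradient-matching idea described above (not the authors' implementation, which uses more sophisticated Bayesian machinery): derivatives are estimated once from the observed trajectories, and each candidate parent set of a node is then scored by how well a regression on the parents reproduces that node's derivative, so topologies can be ranked node by node without ever solving the ODEs. Data, dynamics, and the linear scoring rule below are all illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observations: columns = species, rows = time points.
t = np.linspace(0.0, 10.0, 50)
x = np.column_stack([np.exp(-0.3 * t),
                     1.0 - np.exp(-0.3 * t),
                     0.5 * np.sin(t) + 1.0]) + rng.normal(0, 0.01, (50, 3))

# Gradient matching: estimate derivatives from the data once, then score each
# candidate parent set of node n by how well a linear combination of the
# parents reproduces dx_n/dt -- no ODE solving is needed.
dxdt = np.gradient(x, t, axis=0)

def score_parent_set(n, parents):
    """Residual sum of squares of regressing dx_n/dt on the candidate parents."""
    design = np.column_stack([x[:, list(parents)], np.ones(len(t))])
    coeffs = np.linalg.lstsq(design, dxdt[:, n], rcond=None)[0]
    resid = dxdt[:, n] - design @ coeffs
    return float(resid @ resid)

n_nodes = x.shape[1]
for n in range(n_nodes):
    candidates = []
    for k in range(1, n_nodes + 1):
        for parents in itertools.combinations(range(n_nodes), k):
            candidates.append((score_parent_set(n, parents), parents))
    best = sorted(candidates)[:3]
    print(f"node {n}: best-scoring parent sets {best}")
```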
5.
Objective: To simulate systematic errors by shifting the isocenter and to investigate how sensitive the dose distribution of postoperative intensity-modulated radiotherapy (IMRT) for cervical cancer is to systematic setup errors. Methods: Postoperative IMRT plans were created for 30 cervical cancer patients. Assuming the systematic error acted in the same direction at every fraction, the isocenter of each plan was shifted by ±3.0, ±5.0, and ±7.0 mm along the original x, y, and z axes to simulate systematic setup errors in the left-right, anterior-posterior, and superior-inferior directions. Dose distributions were recalculated without changing the optimization conditions, yielding DVH parameters for 30 reference plans and 540 re-plans. Differences between directions were compared with paired t-tests. Results: For errors of 3, 5, and 7 mm, the mean decreases in CTV D98 and PTV V95 were 0.16% and 0.55%, 0.44% and 1.72%, and 0.89% and 3.41%, respectively; the frequencies with which the V50 of the small bowel, rectum, bladder, left femoral head, and right femoral head exceeded tolerance were 2.22%, 0.00%, 0.00%, 0.00%, and 0.00%; 11.11%, 2.22%, 0.00%, 4.44%, and 4.44%; and 15.56%, 6.67%, 2.78%, 13.33%, and 14.44%, respectively. Paired t-tests comparing errors in different directions showed that: (1) CTV D98 and PTV V95 were more sensitive to setup errors along the y axis than along the x and z axes (both P<0.05); (2) the V50 of the small bowel and bladder was more sensitive to posterior-direction errors than to errors in other directions (both P<0.05); (3) the rectal V50 was more sensitive to anterior-direction errors (P<0.05); (4) the left femoral head V50 was more sensitive to rightward errors (P<0.05); and (5) the right femoral head V50 was more sensitive to leftward errors (P<0.05). Conclusion: When setup errors are small (<5 mm), target doses and the V50 of the small bowel, bladder, rectum, and femoral heads are relatively insensitive to setup errors, and postoperative cervical cancer IMRT plans remain robust. As setup errors increase, the plans are no longer robust; the cause must be sought before treatment and, if necessary, the immobilization device must be remade.
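The directional comparisons above rest on paired t-tests of per-plan DVH deviations. A minimal sketch of such a comparison is given below; the numbers are made up purely to illustrate the pairing, not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient decreases in PTV V95 (percentage points) when the
# isocenter is shifted 5 mm along y versus 5 mm along x (illustrative values).
drop_y = np.array([2.1, 1.8, 2.5, 1.6, 2.9, 2.2, 1.9, 2.4])
drop_x = np.array([1.2, 1.0, 1.7, 0.9, 1.5, 1.3, 1.1, 1.6])

# Paired t-test: each patient contributes one plan recalculated for both shift
# directions, so the two samples are paired rather than independent.
t_stat, p_value = stats.ttest_rel(drop_y, drop_x)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```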
6.
Results are presented from a study of linear quadratic optimal model-following control applied to a Bell 205 helicopter in hover. The primary objective of good in-flight stability robustness and performance was achieved through singular value analysis using perturbed systems. Nominal aircraft models were compared with experimental data, and the discrepancies were quantified in a robustness criterion. Current military handling-quality requirements were specified as the target model to be followed. Linear quadratic optimal control with command feedforward was found suitable for these requirements. Design analyses allowed the tuning process to be examined; variations in selected tuning parameters demonstrated the sensitivity of the design to these choices.
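The core computation in a linear quadratic optimal design of this kind is the state-feedback gain obtained from the algebraic Riccati equation; model following then adds a feedforward path from the target handling-qualities model. Below is a minimal regulator-gain sketch for an illustrative two-state system; the matrices are placeholders, not the Bell 205 model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder hover-like linear model xdot = A x + B u (NOT the Bell 205 data).
A = np.array([[0.0, 1.0],
              [-0.5, -0.8]])
B = np.array([[0.0],
              [1.0]])

# State and control weights chosen by the designer during tuning.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the optimal
# state-feedback gain for u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)

# Closed-loop eigenvalues confirm the regulator is stabilizing.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```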
7.
The purpose of this study is to enable high-spatial-resolution, voxel-wise quantitative analysis of myocardial perfusion in dynamic contrast-enhanced cardiovascular MR, in particular by finding the most favorable quantification algorithm in this context. Four deconvolution algorithms—Fermi function modeling, deconvolution using a B-spline basis, deconvolution using an exponential basis, and autoregressive moving average (ARMA) modeling—were tested to calculate voxel-wise perfusion estimates. The algorithms were developed on synthetic data and validated against a true gold standard using a hardware perfusion phantom. The accuracy of each method was assessed for different levels of spatial averaging and perfusion rate. Finally, voxel-wise analysis was used to generate high-resolution perfusion maps from real data acquired in five patients with suspected coronary artery disease and two healthy volunteers. On both the synthetic and the perfusion phantom data, the B-spline method had the highest error in the estimation of myocardial blood flow. The ARMA and exponential methods gave accurate estimates of myocardial blood flow. The Fermi model was the most robust method to noise. Both the simulations and the maps in the patients and hardware phantom showed that voxel-wise quantification of myocardial perfusion is feasible and can be used to detect abnormal regions. Magn Reson Med, 2012. © 2012 Wiley Periodicals, Inc.
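To make the Fermi-constrained deconvolution concrete, here is a schematic sketch under assumed forms and parameter names (not the authors' code): the tissue enhancement curve is modeled as the discrete convolution of the arterial input function with a Fermi-shaped impulse response, whose amplitude is taken as the flow-related estimate; the fit would be repeated per voxel to build a map.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0                                    # sampling interval (s), assumed
t = np.arange(0, 60, dt)

def fermi_response(t, amp, k, tau0):
    """Fermi-shaped impulse response; amp approximates the flow estimate at t=0."""
    return amp / (1.0 + np.exp(k * (t - tau0)))

def tissue_model(t, amp, k, tau0, aif):
    """Tissue curve = AIF convolved with the Fermi impulse response."""
    h = fermi_response(t, amp, k, tau0)
    return np.convolve(aif, h)[: len(t)] * dt

# Hypothetical arterial input function and noisy tissue curve (illustrative).
rng = np.random.default_rng(2)
aif = 5.0 * (t / 6.0) * np.exp(-t / 6.0)
true_curve = tissue_model(t, 0.02, 0.4, 8.0, aif)
tissue = true_curve + rng.normal(0, 0.002, t.size)

# Fit the Fermi parameters for one voxel; repeat per voxel for a perfusion map.
popt, _ = curve_fit(lambda tt, a, k, d: tissue_model(tt, a, k, d, aif),
                    t, tissue, p0=[0.01, 0.5, 5.0], maxfev=5000)
print(f"estimated flow-related amplitude: {popt[0]:.4f}")
```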
8.
Chu HM, Ette EI. The AAPS Journal. 2005;7(1):E249-E258.
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach, our proposed approach). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set: one to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with measures of uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
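As a schematic sketch of the general resampling idea (a plain subject-level bootstrap of naive-averaged AUCs with made-up numbers; it is not the authors' two-phase random sampling algorithm), resampling paired observations with replacement attaches an uncertainty interval to the tissue-to-plasma AUC ratio that the naive averaging approach alone cannot provide.

```python
import numpy as np
from scipy.integrate import trapezoid

rng = np.random.default_rng(3)

# Hypothetical destructive-sampling design: each subject contributes one paired
# plasma/tissue sample at a single time point (all values are illustrative).
times  = np.array([0.5, 0.5, 1, 1, 2, 2, 4, 4, 8, 8])   # h, two subjects per time
plasma = np.array([4.1, 3.8, 3.2, 3.5, 2.1, 2.4, 1.1, 0.9, 0.3, 0.4])
tissue = np.array([1.9, 2.2, 2.6, 2.4, 2.0, 1.8, 1.2, 1.0, 0.5, 0.4])

def auc_ratio(idx):
    """Naive-average concentrations per time point, then form trapezoidal AUCs."""
    t_unique = np.unique(times[idx])
    p_mean = np.array([plasma[idx][times[idx] == u].mean() for u in t_unique])
    t_mean = np.array([tissue[idx][times[idx] == u].mean() for u in t_unique])
    return trapezoid(t_mean, t_unique) / trapezoid(p_mean, t_unique)

# Subject-level bootstrap: resample paired observations with replacement.
n = len(times)
boot = np.array([auc_ratio(rng.integers(0, n, n)) for _ in range(2000)])
print(f"ratio = {auc_ratio(np.arange(n)):.2f}, "
      f"95% CI = ({np.percentile(boot, 2.5):.2f}, {np.percentile(boot, 97.5):.2f})")
```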
9.
BACKGROUND AND OBJECTIVES: Since July 1, 1999, four laboratories in the Netherlands have routinely screened plasma minipools, using hepatitis C virus nucleic acid amplification technology (HCV NAT), for the release of labile blood components. This report describes the performance evaluation of the HCV NAT method and the quality control results obtained during 6 months of routine screening. MATERIALS AND METHODS: Plasma minipools of 48 donations were prepared on a Tecan Genesis robot. HCV RNA was isolated from 2 ml of plasma using the NucliSens Extractor and amplified and detected with the Cobas HCV Amplicor 2.0 test system. For validation of the test system the laboratories used viral quality control (VQC) reagents from CLB. RESULTS: Initial robustness experiments demonstrated consistent detection of PeliSpy HCV RNA samples of 140 genome equivalents/ml (geq/ml) in each station of the installed NucliSens Extractors. In further 'stress' tests, a highly viraemic sample of approximately 5 × 10^6 geq/ml did not contaminate negative samples processed on any Extractor station in subsequent runs. In the validation period prior to July 1999, 1021 pools were tested with the following performance characteristics: 0.1% initially false reactive; 0.89% failure of internal control detection; 0.97% no eluate generated by the Extractor; 100% reactivity of the PeliSpy 140 geq/ml control in 176 Extractor runs; and a 98% reactivity rate of the PeliSpy 38 geq/ml control in 102 test runs. By testing the PeliCheck HCV RNA genotype 1 dilution panels 49 times, the four laboratories found an overall 95% detection limit of 30 geq/ml (approximately 8 IU/ml) and a 50% detection limit of 5 geq/ml. In the first 6 months of routine screening, the acceptance limit for invalid results (2%) was exceeded with some batches of silica and NucliSens Extractor cartridges. From November 1999 to February 2000, the manufacturer (Organon Teknika) improved the silica-absorption protocol of the NucliSens Extractor, the cartridge design, and the Extractor software. During the next 6 months of observation in 2000, the percentages of initially false reactive and invalid results were 0.05% and 1.4%, respectively, in 8962 pools tested. Of these invalid results, 0.74% and 0.66% were caused by Extractor failure and negative internal control signals, respectively. The PeliSpy HCV RNA 'stop or go' run control of 140 geq/ml was 100% reactive, but invalid in 16/1375 (1.2%) of cases. The PeliSpy run control of 38 geq/ml, used to monitor the sensitivity of reagent batches, was reactive in 95% of 123 samples tested. CONCLUSIONS: Each of the four HCV NAT laboratories in the Netherlands has achieved similar detection limits that are well below the sensitivity requirements of the regulatory bodies. After improvement of the NucliSens Extractor procedure, the robustness of the test system has proved acceptable for routine screening and timely release of all labile blood components.
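Detection limits of this kind are commonly estimated by fitting a probit (or logit) dose-response curve to the hit rate observed across a dilution panel and reading off the concentration with a predicted 95% detection probability. A hedged sketch with made-up panel counts (not the published data) follows.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical dilution-panel results: concentration (geq/ml), replicates
# tested, and reactive results. All counts are illustrative only.
conc     = np.array([100.0, 50.0, 25.0, 12.0, 6.0, 3.0])
tested   = np.array([49, 49, 49, 49, 49, 49])
reactive = np.array([49, 48, 45, 35, 24, 12])

def neg_log_lik(params):
    """Binomial likelihood with a probit dose-response in log10 concentration."""
    a, b = params
    p = norm.cdf(a + b * np.log10(conc))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(reactive * np.log(p) + (tested - reactive) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = fit.x

# Concentration at which the fitted curve predicts 95% detection.
lod95 = 10 ** ((norm.ppf(0.95) - a) / b)
print(f"estimated 95% detection limit: {lod95:.1f} geq/ml")
```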
10.
Gene-environment (G-E) interactions have been shown to have important implications for the pathogenesis of complex diseases. G-E interaction analysis can be challenging because of the need to jointly analyze a large number of main effects and interactions and to respect the "main effects, interactions" hierarchical constraint. Extensive methodological developments on G-E interaction analysis have appeared in the recent literature. Despite considerable successes, most of the existing studies are still limited in that they cannot accommodate long-tailed distributions or data contamination, make the restrictive assumption of linear effects, and cannot effectively accommodate missingness in E variables. To tackle these problems directly, a semiparametric model is assumed to accommodate nonlinear effects, and the Huber loss function and the Qn estimator are adopted to accommodate long-tailed distributions and data contamination. A regression-based multiple imputation approach is developed to accommodate missingness in E variables. For model estimation and the selection of relevant variables, we adopt an effective sparse boosting approach. The proposed approach is practically well motivated, has intuitive formulations, and can be effectively realized. In extensive simulations, it significantly outperforms multiple direct competitors. The analysis of The Cancer Genome Atlas data on stomach adenocarcinoma and cutaneous melanoma shows that the proposed approach makes sensible discoveries with satisfactory prediction and stability.
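To illustrate two of the ingredients named above, here is a minimal sketch of componentwise (sparse) boosting under the Huber loss for a plain linear main-effects model on simulated contaminated data. This is a simplification under assumed settings: the actual approach is semiparametric and additionally handles interactions, the hierarchical constraint, and missing E variables.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data with heavy-tailed contamination in the response.
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.8]
y = X @ beta_true + rng.standard_t(df=2, size=n)        # long-tailed noise

def huber_grad(r, delta=1.345):
    """Negative gradient of the Huber loss with respect to the current fit."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

# Componentwise (sparse) boosting: at each step, update only the single
# covariate that best fits the current pseudo-residuals.
beta = np.zeros(p)
nu = 0.1                                                 # learning rate
for _ in range(300):
    resid = y - X @ beta
    u = huber_grad(resid)                                # pseudo-residuals
    scores = X.T @ u
    j = int(np.argmax(np.abs(scores)))                   # best single covariate
    step = scores[j] / (X[:, j] @ X[:, j])               # least-squares fit to u
    beta[j] += nu * step

selected = np.nonzero(np.abs(beta) > 1e-3)[0]
print("selected covariates:", selected[:10])
print("coefficients:", np.round(beta[selected][:10], 2))
```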