2,271 query results found (search time: 15 ms)
1.
《Cancer radiothérapie》2022,26(8):1008-1015
Purpose: Deep learning (DL) techniques are widely used in medical imaging, in particular for segmentation: manual segmentation of organs at risk (OARs) is time-consuming and suffers from inter- and intra-observer variability, whereas image segmentation using DL has given very promising results. In this work, we present and compare the segmentation of OARs and a clinical target volume (CTV) in thoracic CT images using three DL models.
Materials and methods: We used CT images of 52 patients with breast cancer from a public dataset. Automatic segmentation of the lungs, the heart, and a CTV was performed using three models based on the U-Net architecture. Three metrics were used to quantify and compare the segmentation results obtained with these models: the Dice similarity coefficient (DSC), the Jaccard coefficient (J), and the Hausdorff distance (HD).
Results: The values of DSC, J, and HD are reported for each segmented organ and for the three models. Examples of automatic segmentation are presented and compared to the corresponding ground-truth delineations, and our values are also compared to recent results obtained by other authors.
Conclusion: The performance of three DL models was evaluated for the delineation of the lungs, the heart, and a CTV. This study showed clearly that these 2D models based on the U-Net architecture can delineate organs in CT images with good performance compared to other models, and that the three models perform similarly overall. With a dataset containing more CT images, all three models should give better results.
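The three metrics named above have simple definitions on binary masks. A minimal sketch (not the authors' evaluation code; the Hausdorff distance here is computed by brute force and is only practical for small masks):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard (intersection-over-union) coefficient."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground voxel
    sets of two binary masks, in voxel units (brute force)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    # pairwise Euclidean distances between all foreground voxels
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy 2D "ground truth" vs "prediction", shifted by one row
gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
pr = np.zeros((8, 8), bool); pr[3:7, 2:6] = True
print(dice(gt, pr), jaccard(gt, pr), hausdorff(gt, pr))
```

On the toy masks the one-row shift yields DSC 0.75, J 0.6, and HD 1.0 voxel.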
2.
Objective: To explore the feasibility of building a whole-body organ dose database platform for radiotherapy. Methods: Whole-body organs were delineated on the whole-body CT of one esophageal cancer patient using the deep-learning-based auto-segmentation software DeepViewer; the corresponding organ dose distributions were then computed with the GPU-accelerated Monte Carlo software ARCHER; finally, the Lyman-Kutcher-Burman (LKB) model was used to estimate the normal tissue complication probability (NTCP) for the patient. Results: For this case, a whole-body organ dose database based on DeepViewer, ARCHER, and the LKB model was successfully established. Organs closer to the target received higher doses: the heart, closest to the target, received 14.11 Gy, yet because of its particular model parameters its LKB-computed NTCP was 0.00%; the left and right lungs received 3.19 and 1.16 Gy, but their NTCP values were comparatively large, at 2.13% and 1.60%, respectively. Head-and-neck organs far from the target (optic chiasm, optic nerves, and eyes) and abdominal organs (rectum, bladder, and femoral heads) received about 9 and 2 mGy, respectively, with NTCP approximately 0.00% in all cases. Conclusion: The results demonstrate the feasibility of building a whole-body organ dose database using the auto-segmentation software DeepViewer, the Monte Carlo software ARCHER, and the LKB model.
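The LKB model referred to above maps a dose-volume histogram to an NTCP through the generalized equivalent uniform dose (gEUD) and a probit function. A minimal sketch; the two-bin DVH and the parameter values (TD50, m, n) below are illustrative placeholders, not the parameters used in the study:

```python
import math

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose from a DVH:
    doses in Gy, volumes as fractions summing to 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def ntcp_lkb(doses, volumes, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP: cumulative normal (probit) of the
    normalized difference between gEUD and TD50."""
    t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# hypothetical lung-like parameters and a toy two-bin DVH;
# these numbers are placeholders for illustration only
print(ntcp_lkb([3.2, 1.2], [0.5, 0.5], TD50=30.8, m=0.37, n=0.99))
```

By construction, a uniform dose equal to TD50 gives an NTCP of exactly 50%.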
3.
4.
[Objective] To survey the current state and development trends of research on tongue image segmentation, as a reference for the objectification of tongue diagnosis. [Methods] Publications on tongue image segmentation from database inception to 2018 were retrieved from the CNKI, VIP, Wanfang, and PubMed databases, and bibliometric and keyword co-occurrence network analyses were performed with NoteExpress 3.20, Excel 2016, SATI 3.2, and UCINET 6.0. [Results] Research output on tongue image segmentation shows an overall upward trend, with Beijing University of Technology ranked first and the National Natural Science Foundation of China as the main funding source. By Price's law, a "core author group" has essentially formed. Tongue image segmentation relies mainly on the Snake model and threshold-based techniques, and keywords cluster around "tongue diagnosis," "image segmentation," and "objectification of tongue diagnosis." [Conclusion] Tongue image segmentation research still deserves attention: government funding should be increased, collaboration between research institutions strengthened, and the remaining problems with segmentation evaluation standards addressed.
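For reference, the threshold-based segmentation the review identifies as dominant can be as simple as Otsu's method, which picks the gray level that best separates foreground from background. A minimal self-contained sketch on a toy image (not code from any of the surveyed papers):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# toy bimodal image: dark background, bright "tongue" region
img = np.full((32, 32), 40, np.uint8)
img[8:24, 8:24] = 200
mask = img >= otsu_threshold(img)
print(mask.sum())  # foreground pixel count
```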
5.
Glioma is a common primary brain tumor in clinical practice, characterized by a high recurrence rate, high mortality, and a low cure rate. Routine clinical diagnosis relies mainly on computed tomography (CT) and magnetic resonance imaging (MRI). With continuing advances in imaging technology and machine learning methods, intelligent multimodal image analysis has gradually become a research hotspot, with important application prospects in lesion segmentation and measurement, tumor grading, survival prediction, and genotype identification for glioma. This article reviews progress in the application of machine learning and multimodal imaging to computer-aided diagnosis and prognostic assessment of glioma.
6.
Abstract

Purpose: In the context of assistive technology, mobility means “moving safely, gracefully, and comfortably”. The aim of this article is to provide a system that offers visually impaired people a convenient means of navigation in the public transport system.

Method: A blind commuter who regularly travels by public transport has difficulty identifying the vehicle that is nearing the stop. Hence, a real-time system that dynamically identifies the approaching vehicle and informs commuters is necessary. This paper proposes such a system, the “Vehicle Board Recognition System” (VBRS). Computer vision techniques such as segmentation, object recognition, text detection, and optical character recognition are used to build the system, which detects, analyzes, derives, and communicates the information to passengers.

Results: Thanks to rapid technological development, several navigation systems, both handheld and wearable, are available to help visually impaired (VI) people move comfortably both indoors and outdoors. However, many blind people are not comfortable using these devices or cannot afford them. The proposed system thus gives them the comfort of navigation.

Conclusion: This system can be installed at the bus stop to assist the visually impaired externally, rather than through their handheld or wearable assistive devices.
Implications for rehabilitation
  • This proposed system will help the visually impaired to:
  • ensure secure navigation;
  • be independent of others;
  • develop self-confidence;
  • overcome the training and affordability barriers of wearable/handheld devices.
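The matching step that follows OCR in a system like VBRS can be sketched with simple fuzzy string matching, which absorbs the character confusions (O/0, I/1) typical of OCR output. The route table and announcement strings below are hypothetical, and the paper's actual pipeline is not reproduced here:

```python
import difflib

# hypothetical route database: board text -> spoken announcement
ROUTES = {
    "23C AIRPORT": "Route 23C to Airport is arriving.",
    "47 CENTRAL STATION": "Route 47 to Central Station is arriving.",
    "12A UNIVERSITY": "Route 12A to University is arriving.",
}

def announce(ocr_text):
    """Match noisy OCR output from a vehicle board against the
    known routes and return the announcement to be spoken."""
    match = difflib.get_close_matches(ocr_text.upper(), ROUTES,
                                      n=1, cutoff=0.6)
    return ROUTES[match[0]] if match else "Unrecognized vehicle."

# OCR misread "I" as "1"; fuzzy matching still finds the route
print(announce("23C A1RPORT"))
```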
7.
Investigative studies of white matter (WM) brain structures using diffusion MRI (dMRI) tractography frequently require manual WM bundle segmentation, often called “virtual dissection.” Human errors and personal decisions make these manual segmentations hard to reproduce, a variability that has not yet been quantified by the dMRI community. It is our opinion that if the field of dMRI tractography wants to be taken seriously as a widespread clinical tool, it is imperative to harmonize WM bundle segmentations and to develop protocols intended for clinical settings. The EADC-ADNI Harmonized Hippocampal Protocol achieved such standardization through a series of steps that must be reproduced for every WM bundle. This article is an observation of the problem: a specific bundle segmentation protocol was used to provide a real-life example, but the contribution of this article is to discuss the need for reproducibility and standardized protocols, as for any measurement tool. This study required the participation of 11 experts and 13 nonexperts in neuroanatomy and “virtual dissection” across various laboratories and hospitals. Intra-rater agreement (Dice score) was approximately 0.77, while inter-rater agreement was approximately 0.65. The protocol provided to participants was not necessarily optimal, but its design mimics, in essence, what future protocols will require. Reporting tractometry results such as average fractional anisotropy, volume, or streamline count of a particular bundle without a sufficient reproducibility score could make analysis and interpretation more difficult. Coordinated efforts by the diffusion MRI tractography community are needed to quantify and account for the reproducibility of WM bundle extraction protocols in this era of open and collaborative science.
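The inter-rater agreement reported above is a mean pairwise Dice score across the raters' masks. A minimal sketch on toy segmentations (not the study's data or protocol):

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def inter_rater_agreement(masks):
    """Mean pairwise Dice across one mask per rater."""
    pairs = list(combinations(masks, 2))
    return sum(dice(a, b) for a, b in pairs) / len(pairs)

# three raters' toy segmentations of the same bundle,
# each shifted slightly from the others
m1 = np.zeros((10, 10), bool); m1[2:7, 2:7] = True
m2 = np.zeros((10, 10), bool); m2[3:8, 2:7] = True
m3 = np.zeros((10, 10), bool); m3[2:7, 3:8] = True
print(inter_rater_agreement([m1, m2, m3]))
```

Intra-rater agreement is computed the same way, over repeated segmentations by a single rater.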
8.
Objective: To compare the lumen parameters measured by the location-adaptive threshold method (LATM), in which both the inter- and intra-scan attenuation variability of coronary computed tomographic angiography (CCTA) is corrected, and by the scan-adaptive threshold method (SATM), in which only the inter-scan variability is corrected, against the reference standard measurement by intravascular ultrasonography (IVUS).
Materials and Methods: The Hounsfield unit (HU) values of all voxels and of the centerline in each cross-section of the 22 target coronary artery segments were obtained from 15 patients between March 2009 and June 2010, along with the corresponding voxel size. Lumen volume was calculated as the voxel volume multiplied by the number of voxels with HU within a given range, defined as the lumen for each method, and compared with the IVUS-derived reference standard. Subgroup analysis of the lumen area was performed to investigate the effect of lumen size on the studied methods. Bland-Altman plots were used to evaluate the agreement between measurements.
Results: Lumen volumes measured by SATM were significantly smaller than those measured by IVUS (mean difference, 14.6 mm3; 95% confidence interval [CI], 4.9–24.3 mm3); lumen volumes measured by LATM and IVUS did not differ significantly (mean difference, −0.7 mm3; 95% CI, −9.1–7.7 mm3). The lumen area measured by SATM was significantly smaller than that measured by LATM in the smaller-lumen-area group (mean difference, 1.07 mm2; 95% CI, 0.89–1.25 mm2) but not in the larger-lumen-area group (mean difference, −0.07 mm2; 95% CI, −0.22–0.08 mm2). In the smaller-lumen group, the mean difference was lower in the Bland-Altman plot of IVUS and LATM (0.46 mm2; 95% CI, 0.27–0.65 mm2) than in that of IVUS and SATM (1.53 mm2; 95% CI, 1.27–1.79 mm2).
Conclusion: SATM underestimated the lumen parameters in computed lumen segmentation on CCTA, and this may be overcome by using LATM.
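The lumen-volume computation described in the Methods (voxel volume multiplied by the number of voxels whose HU falls inside the range defined as lumen) can be sketched directly. The HU range and voxel size below are illustrative, not the study's values:

```python
import numpy as np

def lumen_volume(hu, voxel_volume, lo, hi):
    """Lumen volume = voxel volume x number of voxels whose HU
    value lies within the range [lo, hi] defined as lumen."""
    n = np.count_nonzero((hu >= lo) & (hu <= hi))
    return n * voxel_volume

# toy cross-section stack: contrast-filled lumen ~350 HU, wall ~60 HU
hu = np.full((4, 10, 10), 60.0)
hu[:, 3:7, 3:7] = 350.0
# hypothetical 0.4 x 0.4 x 0.5 mm voxels -> 0.08 mm^3 each
print(lumen_volume(hu, 0.4 * 0.4 * 0.5, 200.0, 600.0))  # in mm^3
```

The two methods compared in the study differ only in how the HU range is chosen per scan or per location, not in this counting step.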
9.
The hippocampus encodes distinct contexts with unique patterns of activity. Representational shifts with changes in context, referred to as remapping, have been extensively studied. However, less is known about transitions between representations. In this study, we leverage a large dataset of neuronal recordings taken while rats performed an olfactory memory task with a predictable temporal structure involving trials and intertrial intervals (ITIs), separated by salient boundaries at the trial start and trial end. We found that trial epochs were associated with stable hippocampal representations despite moment‐to‐moment variability in stimuli and behavior. Representations of trial and ITI epochs were far more distinct than spatial factors would predict and the transitions between the two were abrupt. The boundary was associated with a large spike in multiunit activity, with many individual cells specifically active at the start or end of each trial. Both epochs and boundaries were encoded by hippocampal populations, and these representations carried information on orthogonal axes readily identified using principal component analysis. We suggest that the hippocampus orthogonalizes representations of the trial and ITI epochs and the activity spike at trial boundaries might serve to drive hippocampal activity from one stable state to the other.
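Identifying population-level axes of the kind described above is commonly done with principal component analysis. A minimal sketch on synthetic "trial vs. ITI" activity (not the study's data or analysis code):

```python
import numpy as np

# toy population activity: 40 time bins x 6 "neurons",
# alternating trial / ITI epochs with different mean firing rates
rng = np.random.default_rng(0)
epoch = np.tile(np.repeat([1.0, 0.0], 5), 4)   # 1 = trial, 0 = ITI
rates = (np.outer(epoch, [5, 4, 3, 0, 0, 0])
         + np.outer(1 - epoch, [0, 0, 0, 5, 4, 3]))
X = rates + 0.1 * rng.standard_normal(rates.shape)

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]   # projection onto the first principal axis

# the leading component separates trial bins from ITI bins
print(np.sign(pc1[epoch == 1].mean()) != np.sign(pc1[epoch == 0].mean()))
```

In the study's setting, separate components would carry epoch identity and boundary-locked activity on orthogonal axes.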
10.
A continuous stream of syllables is segmented into discrete constituents based on the transitional probabilities (TPs) between adjacent syllables by means of statistical learning. However, we still do not know whether people attend to high TPs between frequently co‐occurring syllables and cluster them together as parts of the discrete constituents or attend to low TPs aligned with the edges between the constituents and extract them as whole units. Earlier studies on TP‐based segmentation also have not distinguished between the segmentation process (how people segment continuous speech) and the learning product (what is learnt by means of statistical learning mechanisms). In the current study, we explored the learning outcome separately from the learning process, focusing on three possible learning products: holistic constituents that are retrieved from memory during the recognition test, clusters of frequently co‐occurring syllables, or a set of statistical regularities which can be used to reconstruct legitimate candidates for discrete constituents during the recognition test. Our data suggest that people employ boundary‐finding mechanisms during online segmentation by attending to low inter‐syllabic TPs during familiarization and also identify potential candidates for discrete constituents based on their statistical congruency with rules extracted during the learning process. Memory representations of recurrent constituents embedded in the continuous speech stream during familiarization facilitate subsequent recognition of these discrete constituents.
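The transitional probabilities (TPs) discussed above are conditional bigram frequencies over the syllable stream: TP(y|x) = count(x followed by y) / count(x). A minimal sketch with two hypothetical trisyllabic "words" (not the study's stimuli):

```python
from collections import Counter

def transitional_probs(syllables):
    """Forward TP(y|x) = count(x, y) / count(x) over a syllable stream."""
    pair = Counter(zip(syllables, syllables[1:]))
    first = Counter(syllables[:-1])
    return {(x, y): c / first[x] for (x, y), c in pair.items()}

# familiarization stream built from two trisyllabic "words"
# (pa-bi-ku and ti-bu-do) in varied order: within-word TPs are
# high, TPs across word boundaries are lower
stream = "pa bi ku ti bu do ti bu do pa bi ku pa bi ku ti bu do".split()
tps = transitional_probs(stream)
print(tps[("pa", "bi")])   # within-word: high
print(tps[("ku", "ti")])   # spans a word boundary: lower
```

A boundary-finding learner, as suggested by the study, would posit word edges at the dips in these TPs.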
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号