10,000 query results found (search time: 31 ms)
71.
BACKGROUND AND PURPOSE: Accurate and reliable detection of white matter hyperintensities and their volume quantification can provide valuable clinical information to assess neurologic disease progression. In this work, a stacked generalization ensemble of orthogonal 3D convolutional neural networks, StackGen-Net, is explored for improving automated detection of white matter hyperintensities in 3D T2-FLAIR images.

MATERIALS AND METHODS: Individual convolutional neural networks in StackGen-Net were trained on 2.5D patches from orthogonal reformatting of 3D-FLAIR (n = 21) to yield white matter hyperintensity posteriors. A meta convolutional neural network was trained to learn the functional mapping from orthogonal white matter hyperintensity posteriors to the final white matter hyperintensity prediction. The impact of training data and architecture choices on white matter hyperintensity segmentation performance was systematically evaluated on a test cohort (n = 9). The segmentation performance of StackGen-Net was compared with state-of-the-art convolutional neural network techniques on an independent test cohort from the Alzheimer’s Disease Neuroimaging Initiative-3 (n = 20).

RESULTS: StackGen-Net outperformed the individual convolutional neural networks in the ensemble and their combination by averaging or majority voting. In a comparison with state-of-the-art white matter hyperintensity segmentation techniques, StackGen-Net achieved a significantly higher Dice score (0.76 [SD, 0.08]), F1-lesion score (0.74 [SD, 0.13]), and area under the precision-recall curve (0.84 [SD, 0.09]), and the lowest absolute volume difference (13.3% [SD, 9.1%]). StackGen-Net performance in Dice scores (median = 0.74) did not differ significantly (P = .22) from interobserver variability (median = 0.73) between 2 experienced neuroradiologists. We found no significant difference (P = .15) between white matter hyperintensity lesion volumes from StackGen-Net predictions and ground truth annotations.

CONCLUSIONS: A stacked generalization of convolutional neural networks, exploiting multiplanar lesion information with 2.5D spatial context, greatly improved the segmentation performance of StackGen-Net compared with traditional ensemble techniques and some state-of-the-art deep learning models for 3D-FLAIR.

White matter hyperintensities (WMHs) correspond to pathologic features of axonal degeneration, demyelination, and gliosis observed within cerebral white matter.1 Clinically, the extent of WMHs in the brain has been associated with cognitive impairment, Alzheimer’s disease and vascular dementia, and increased risk of stroke.2,3 The detection and quantification of WMH volumes to monitor lesion burden evolution and its correlation with clinical outcomes have been of interest in clinical research.4,5 Although the extent of WMHs can be visually scored,6 the categoric nature of such scoring systems makes quantitative evaluation of disease progression difficult. Manually segmenting WMHs is tedious, prone to inter- and intraobserver variability, and, in most cases, impractical. Thus, there is increased interest in developing fast, accurate, and reliable computer-aided automated techniques for WMH segmentation.

Convolutional neural network (CNN)-based approaches have been successful in several semantic segmentation tasks in medical imaging.7 Recent works have proposed deep learning–based methods for segmenting WMHs in 2D-FLAIR images.8-11 More recently, a WMH segmentation challenge12 was organized (http://wmh.isi.uu.nl/) to facilitate comparison of automated segmentation of WMHs of presumed vascular origin in 2D multislice T2-FLAIR images. Architectures that used an ensemble of separately trained CNNs showed promising results in this challenge, with 3 of the top 5 winners using ensemble-based techniques.12

Conventional 2D-FLAIR images are typically acquired with thick slices (3–4 mm) and possible slice gaps. Partial volume effects from a thick slice are likely to affect the detection of smaller lesions, both in-plane and out-of-plane. 3D-FLAIR images, with isotropic resolution, have been shown to achieve higher resolution and contrast-to-noise ratio13 and have shown promising results in MS lesion detection using 3D CNNs.14 Additionally, isotropic resolution enables viewing and evaluating the images in multiple planes; such multiplanar reformatting of 3D-FLAIR without interpolating kernels is possible only because the acquisition is isotropic. Network architectures that use information from the 3 orthogonal views have been explored in recent work for CNN-based segmentation of 3D MR imaging data.15 Using data from multiple planes provides more spatial context during training without the computational burden of full 3D training.16 Using the 3 orthogonal views simultaneously also mirrors how humans approach this segmentation task.

Ensembles of CNNs have been shown to average away the variance in the solutions and the model- and configuration-specific behaviors of individual CNNs.17 Traditionally, the solutions from separately trained CNNs are combined by averaging or by majority consensus. In this work, we propose a stacked generalization framework (StackGen-Net) for combining multiplanar lesion information from an ensemble of 3D CNNs to improve the detection of WMH lesions in 3D-FLAIR. A stacked generalization18 framework learns how to combine the solutions from the individual CNNs in the ensemble. We systematically evaluated the performance of this framework and compared it with traditional ensemble techniques, such as averaging or majority voting, and with state-of-the-art deep learning techniques.
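The combination strategies discussed above can be contrasted in a minimal numpy sketch. The three posterior maps below are random stand-ins for the per-plane CNN outputs, and the meta-model is reduced to a single logistic unit with made-up fixed weights; in the actual StackGen-Net the combiner is a trained meta-CNN, so everything here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the per-plane CNN outputs: three WMH posterior probability
# maps (axial, sagittal, coronal views) for the same 3D volume.
shape = (8, 8, 8)
posteriors = np.stack([rng.random(shape) for _ in range(3)])  # (3, D, H, W)

# Traditional ensembling: average the posteriors, or majority-vote the
# thresholded per-plane predictions.
avg_pred = posteriors.mean(axis=0) > 0.5
vote_pred = (posteriors > 0.5).sum(axis=0) >= 2

# Stacked generalization: a meta-model learns a mapping from the vector of
# orthogonal posteriors at each voxel to the final prediction, instead of
# using a fixed rule. Weights and bias here are illustrative, not learned.
w = np.array([0.5, 1.5, 1.0])   # hypothetical per-plane weights
b = -1.4                        # hypothetical bias
logits = np.tensordot(w, posteriors, axes=1) + b
stack_pred = 1.0 / (1.0 + np.exp(-logits)) > 0.5

print(avg_pred.shape, vote_pred.shape, stack_pred.shape)
```

The key design difference is that averaging and voting weight every view identically, whereas a stacked combiner can learn, from held-out data, that some views are more reliable for certain lesion patterns.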
74.
Three-dimensional (3D) printing, virtual reality, and augmented reality have been used to help surgeons complete complex total hip arthroplasty, but their respective shortcomings limit their further application. With the development of the technology, mixed reality (MR) has been applied to improve the success rate of complicated hip arthroplasty because of its unique advantages. We present the case of a 59-year-old man with an intertrochanteric fracture of the left femur who had previously undergone left hip fusion. After admission to our hospital, left total hip arthroplasty was performed using a combination of MR and 3D printing. Before surgery, a 3D reconstruction of a bony landmark exposed in the surgical field was performed. A veneer part was then designed to fit the bony landmark and connected, through a connecting rod, to a reference registration landmark outside the body. This assembly was fabricated as a single reference registration instrument by 3D printing, and the patient's bone and surrounding soft-tissue data, along with the digital 3D model of the instrument, were imported into a head-mounted display (HMD). During the operation, the sterilized reference registration instrument was mounted on the selected bony landmark, and automatic real-time registration was achieved by the HMD recognizing the registration landmark on the instrument, so that the patient's virtual bone and other anatomic structures were quickly and accurately superimposed on the patient's real anatomy. To the best of our knowledge, this is the first report of MR combined with 3D printing in total hip arthroplasty.
76.
Key terms:
Finite element analysis of the China-Japan Friendship Hospital (CJFH) classification of osteonecrosis of the femoral head: Based on the CJFH classification proposed by Li Zirong et al., three-dimensional models of osteonecrosis of the femoral head are built and divided into type M (medial), type C (central), and type L (lateral), with type L subdivided into L1 (sublateral), L2 (extremely lateral), and L3 (whole-head). Finite element analysis of these models provides a biomechanical basis for hip-preserving treatment under this classification, shows that preservation of the lateral pillar is a key factor in accurately preventing collapse, and lays a mechanical foundation for individualized treatment.
Fibular-strut hip-preserving surgery for the necrotic femoral head: a procedure for patients with early- to mid-stage osteonecrosis in whom the femoral head is to be preserved. Core decompression of the femoral head is performed first, necrotic bone is debrided, the cavity is packed with cancellous bone (mainly iliac), and, after impaction, a fibular strut (allograft or autograft) is implanted to provide mechanical support and biological repair to the necrotic region and to prevent further necrosis and collapse of the femoral head.

BACKGROUND: Studies report that the outcome of hip-preserving treatment of osteonecrosis of the femoral head is closely related to preservation of the lateral pillar. The CJFH classification is based on the three-pillar structure and predicts femoral head collapse with high accuracy.
OBJECTIVE: To establish simulated three-dimensional finite element models of each CJFH type of osteonecrosis of the femoral head, analyze the mechanical changes after fibular implantation for each type, explore the significance of lateral pillar preservation for hip-preserving outcomes, and lay a foundation for accurate collapse prediction under this classification.
METHODS: Three groups comprising 11 three-dimensional finite element models were established: the normal femoral head, the CJFH types (M, C, L1, L2, L3) of osteonecrosis, and their fibular-implanted counterparts. Finite element analysis was performed in ANSYS to observe the maximum stress, maximum displacement, and internal load-transfer pattern of each model.
RESULTS AND CONCLUSIONS: (1) The necrosis group showed the largest displacement and the largest strain, with displacement differing by necrosis type; the displacement changes were as follows: type M
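The quantities the study extracts from ANSYS (maximum displacement and maximum stress under load) can be illustrated with a toy one-dimensional finite element model of an axially loaded bar fixed at one end. The material constants and load below are assumptions chosen only to be of the right order of magnitude for cortical bone; this is a didactic sketch of the FEM workflow, not the study's 3D model.

```python
import numpy as np

# Toy 1D finite element model: a bar fixed at node 0, loaded axially at the
# free end, discretized into linear two-node elements.
E = 17e9      # Young's modulus, Pa (assumed, order of cortical bone)
A = 3e-4      # cross-sectional area, m^2 (assumed)
L = 0.1       # bar length, m
n = 10        # number of elements
F = 2000.0    # axial tip load, N (assumed)

le = L / n
k = E * A / le                       # element stiffness E*A/le
K = np.zeros((n + 1, n + 1))
for e in range(n):                   # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n + 1)
f[-1] = F                            # point load at the free end

# Apply the fixed boundary condition at node 0 and solve K u = f.
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

strain = np.diff(u) / le             # element strains from nodal displacements
stress = E * strain                  # element stresses via Hooke's law
print(f"max displacement: {u.max():.3e} m, max stress: {stress.max():.3e} Pa")
```

For this uniform bar the FEM solution reproduces the closed-form answers, maximum displacement F·L/(E·A) at the tip and uniform stress F/A, which is a quick sanity check before trusting the same pipeline on complex geometry.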
78.
Hailey–Hailey disease (HHD), also known as benign familial pemphigus, is an autosomal dominant skin condition that affects the adhesion of epidermal keratinocytes. Although the initial manifestation of flaccid vesicles on erythematous or normal skin at flexural sites frequently goes unnoticed, large, macerated, exudative plaques of superficial erosions with crusting are observed at the time of diagnosis. There is no specific treatment for HHD, and most cases are managed symptomatically; however, infrared laser ablation has been somewhat helpful. We present a case successfully treated with a fractional CO2 laser, with a long-term favourable outcome and no adverse effects. This modality could therefore be an alternative to full ablation for this condition.

Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. (京ICP备09084417号)