Article Search
Subscription full text: 129 articles
Free: 15 articles
By subject (number of articles):
Pediatrics: 3
Obstetrics and Gynecology: 9
Basic Medicine: 23
Clinical Medicine: 18
Internal Medicine: 26
Dermatology: 3
Neurology: 9
Special Medicine: 2
Surgery: 12
General: 7
Preventive Medicine: 5
Ophthalmology: 3
Pharmacy: 6
Oncology: 18
By publication year (number of articles):
2023: 1
2022: 2
2021: 10
2020: 8
2019: 7
2018: 7
2017: 5
2016: 2
2015: 5
2014: 4
2013: 5
2012: 20
2011: 14
2010: 9
2009: 4
2008: 6
2007: 5
2006: 6
2005: 6
2004: 7
2003: 3
2002: 2
2001: 5
1997: 1
144 search results found (search time: 15 ms)
91.
In Gal repressosome assembly, a DNA loop is formed by the interaction of two GalR proteins bound to two distal operators, together with the binding of the histone-like protein HU to an architecturally critical position on the DNA that facilitates the GalR-GalR interaction. We show that GalR piggybacks HU to this critical position on the DNA through a specific GalR-HU interaction. This is the first example of HU making a specific contact with another protein. The GalR-HU contact that results in cooperative binding of the two proteins to DNA may be transient and absent from the final repressosome structure. Recruitment of a sequence-independent DNA-binding protein to an architectural site on DNA through a specific association with a regulatory protein may be a common mode of assembly for complex nucleoprotein structures.
92.
93.
Every year, millions of brain MRI scans are acquired in hospitals, far more than are contained in any research dataset. Therefore, the ability to analyze such scans could transform neuroimaging research. Yet, their potential remains untapped since no automated algorithm is robust enough to cope with the high variability in clinical acquisitions (MR contrasts, resolutions, orientations, artifacts, and subject populations). Here, we present SynthSeg+, an AI segmentation suite that enables robust analysis of heterogeneous clinical datasets. In addition to whole-brain segmentation, SynthSeg+ also performs cortical parcellation, intracranial volume estimation, and automated detection of faulty segmentations (mainly caused by scans of very low quality). We demonstrate SynthSeg+ in seven experiments, including an aging study on 14,000 scans, where it accurately replicates atrophy patterns observed on data of much higher quality. SynthSeg+ is publicly released as a ready-to-use tool to unlock the potential of quantitative morphometry.

Neuroimaging plays a prominent role in our attempt to understand the human brain, as it enables an array of analyses such as volumetry, morphology, connectivity, physiology, and molecular studies. A prerequisite for almost all these analyses is the contouring of brain structures, a task known as image segmentation. In this context, MRI is the imaging technique of choice, since it enables the acquisition of noninvasive scans in vivo with excellent soft-tissue contrast.

The vast majority of neuroimaging studies rely on prospective datasets of high-quality MRI scans, especially 1 mm T1-weighted acquisitions. Indeed, these scans present remarkable white–gray matter contrast and can be easily analyzed with widespread neuroimaging packages, such as SPM (1), FSL (2), or FreeSurfer (3), to derive quantitative morphometric measurements. Meanwhile, brain MRI scans acquired in the clinic (e.g., for diagnostic purposes) present much higher variability in acquisition protocols, and thus cannot be analyzed with conventional neuroimaging software. This variability is threefold. First, clinical scans use a wide range of MR sequences and contrasts, chosen depending on the tissue properties to be highlighted. Second, they often present real-life artifacts that are uncommon in research datasets, such as very low signal-to-noise ratio or incomplete field of view. Finally, instead of using 3D scans at high resolution as in research, physicians usually prefer to acquire a sparse set of 2D images in parallel planes, which are faster to inspect but introduce considerable variability in slice spacing, thickness, and orientation.

The ability to analyze clinical datasets is highly desirable since they represent the overwhelming majority of brain MRI scans. For example, 10 million clinical brain scans were acquired in the US in 2019 alone (4). This figure is orders of magnitude larger than the size of the biggest research studies, such as ENIGMA (5) or UK BioBank (6), which comprise tens of thousands of subjects. Hence, analyzing such clinical data would considerably increase the sample size and statistical power of current neuroimaging studies. Furthermore, it would also enable the analysis of populations that are currently underrepresented in research studies (e.g., UK BioBank and ADNI (7), with 95% white subjects (8, 9)) but are more easily found in clinical datasets. Therefore, there is a clear need for an automated segmentation tool that is robust to MR contrast, resolution, clinical artifacts, and subject populations.

Related Works.

The gold standard in brain MRI segmentation is manual delineation. However, this tedious procedure requires costly expertise and is untenable for large-scale clinical applications. Alternatively, one could consider only high-quality scans (i.e., 1 mm T1-weighted scans) that can be analyzed with neuroimaging software, but this would drastically decrease effective sample sizes, because such scans are expensive and seldom acquired in the clinic.

Several methods have been proposed for segmentation of MRI scans of variable contrast or resolution. First, contrast-adaptiveness has classically been addressed with Bayesian strategies using unsupervised likelihood models (10). Nevertheless, the accuracy of these methods progressively deteriorates at decreasing resolutions due to partial volume effects, where voxel intensities become less representative of the underlying tissues (11).
While such effects can theoretically be modeled within the Bayesian framework (12), the resulting algorithm quickly becomes intractable at decreasing resolutions, thus precluding analysis of clinical scans with large slice thickness.

The modern segmentation literature mostly relies on supervised convolutional neural networks (CNNs) (13, 14), which obtain fast and accurate results on their training domain (i.e., scans with similar contrast and resolution). However, CNNs suffer from the “domain-gap” problem (15), where networks do not generalize well to data with different resolution (16) or MR contrast (17), even within the same modality (e.g., T1-weighted scans acquired with different parameters or hardware) (18). Data augmentation techniques have addressed this problem in intramodality scenarios by applying spatial and intensity transforms to the training data (19). However, the resulting CNNs still need to be retrained for each new MR contrast or resolution, which requires costly labeled images. Another approach to bridging the domain gap is domain adaptation, where CNNs are explicitly trained to generalize from a “source” domain with labeled data to a specific “target” domain where no labeled examples are available (18, 20). Although these methods alleviate the need for supervision in the target domain, they still need to be retrained for each new domain, which makes them impractical to apply at scale to highly heterogeneous clinical data.

Very recently, we proposed SynthSeg (21), a method that can segment brain scans of any contrast and resolution without retraining. This was achieved by adopting a domain randomization approach (22), where a 3D CNN is trained on synthetic scans of fully randomized contrast and resolution. Consequently, SynthSeg learns domain-agnostic representations, which give it outstanding generalization ability compared with previous methods (21). However, SynthSeg frequently falters when applied to clinical scans with low signal-to-noise ratio, poor tissue contrast, or very low resolution, an issue that we address in the present article.

Several strategies have been introduced to improve the robustness of CNNs, most notably hierarchical models. These models divide the final task into easier operations, such as progressively refining segmentations at increasing resolutions (23) or segmenting the same image with increasingly finer labels (24). Although hierarchical models can help improve performance, they may still struggle to produce topologically plausible segmentations in difficult cases, which is a well-known problem for CNNs (25). Recent approaches have sought to solve this problem by modeling high-order topological relations, either by aligning predictions and ground truths in latent space during training (26), by correcting predictions with a registered atlas (27), or with denoising networks (28).

While the aforementioned methods substantially improve robustness, they do not guarantee accurate results in every case. Hence, the ability to identify erroneous predictions is crucial, especially when analyzing clinical data of varying quality. Traditionally, this has been achieved with visual quality control (QC), but several automated strategies have now been proposed to replace this tedious procedure. A first class of methods seeks to register predictions to a pool of reference segmentations in order to compute similarity scores (29), but the required registrations remain time-consuming.
Therefore, recent techniques employ fast CNNs to model QC as a regression task, where faulty segmentations are rejected by applying a thresholding criterion to the regressed scores (30–32).

Contributions.

In this article, we present SynthSeg+, a clinical brain MRI segmentation suite that is robust to MR contrast, resolution, clinical artifacts, and a wide range of subject populations. Specifically, the proposed method leverages a deep learning architecture composed of hierarchical networks and denoisers. This architecture is trained on synthetic data with the domain randomization approach introduced by SynthSeg, and is shown to considerably increase the robustness of the original method to clinical artifacts. Furthermore, SynthSeg+ includes new modules for cortex parcellation, automated failure detection, and estimation of intracranial volume (ICV, a crucial covariate in volumetry). All aspects of our method are thoroughly evaluated on more than 15,000 highly heterogeneous clinical scans, where SynthSeg+ is shown to enable automated segmentation and volumetry of large, uncurated clinical datasets. A first version of this work was presented at MICCAI 2022, for whole-brain segmentation only (33). Here, we considerably extend our previous conference paper by adding cortical parcellation, automated QC, and ICV estimation, as well as by evaluating our approach in five new experiments. The proposed method can be run with FreeSurfer using the following simple command:

mri_synthseg --i [input] --o [output] --robust
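To make the domain randomization strategy described above more concrete, the sketch below shows in plain NumPy/SciPy how a synthetic training scan of fully random contrast and resolution could be generated from a label map. This is a minimal illustration, not the authors' released generative model: the parameter ranges, the single-axis downsampling used to mimic thick-slice 2D acquisitions, and the function name synthesize_from_labels are assumptions made for the example.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def synthesize_from_labels(label_map, n_labels, rng=None):
    """Generate one synthetic scan of random contrast and resolution from a 3D label map."""
    rng = np.random.default_rng(rng)

    # 1) Random contrast: draw a Gaussian intensity distribution per label, so each
    #    synthetic scan mimics a different (possibly unrealistic) MR contrast.
    means = rng.uniform(0.0, 1.0, size=n_labels)
    stds = rng.uniform(0.01, 0.10, size=n_labels)
    image = rng.normal(means[label_map], stds[label_map])

    # 2) Random smooth bias field (multiplicative intensity inhomogeneity).
    bias = gaussian_filter(rng.normal(size=label_map.shape), sigma=20)
    bias = 0.3 * bias / (np.abs(bias).max() + 1e-8)
    image = image * np.exp(bias)

    # 3) Random resolution: blur along one axis and subsample it to mimic a
    #    thick-slice 2D acquisition, then resample back to the original grid,
    #    which reintroduces partial volume effects.
    spacing = rng.uniform(1.0, 9.0)                  # simulated slice spacing in mm (assumed range)
    step = max(1, int(round(spacing)))
    image = gaussian_filter(image, sigma=(0.42 * spacing, 0.0, 0.0))
    low_res = image[::step, :, :]
    image = zoom(low_res, (label_map.shape[0] / low_res.shape[0], 1.0, 1.0), order=1)

    # 4) Min-max normalization, as commonly done before feeding a CNN.
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return image.astype(np.float32)

# Toy example: a 64^3 label map with background plus three nested "structures".
labels = np.zeros((64, 64, 64), dtype=np.int32)
labels[16:48, 16:48, 16:48] = 1
labels[24:40, 24:40, 24:40] = 2
labels[30:34, 30:34, 30:34] = 3
synthetic = synthesize_from_labels(labels, n_labels=4, rng=0)
print(synthetic.shape, synthetic.min(), synthetic.max())

In the actual pipeline, images produced this way are paired with the label maps they were generated from and used to train the 3D segmentation networks, so that no real labeled scans of any particular contrast or resolution are required.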
94.
95.
96.
Here we report a unique case of tuberculoid leprosy and cytomegalovirus retinitis in a 27-year-old female patient with AIDS, suggestive of highly active antiretroviral therapy (HAART)-induced immune restoration disease. After initiation of HAART, the patient presented with decreased visual acuity, a hypoesthetic patch with local nerve thickening, and an increase in her CD4+ T cell count. On further investigation, cytomegalovirus retinitis and tuberculoid leprosy were confirmed. To our knowledge, no case with such a coexistence has previously been reported.
97.
Benign breast disease (BBD) is a very common condition, diagnosed in approximately half of all American women over their life course. White women with BBD are known to be at substantially increased risk of subsequent breast cancer; however, nothing is known about the characteristics of breast cancers that develop after a BBD diagnosis in African-American women. Here, we compared 109 breast cancers that developed in a population of African-American women with a history of BBD to 10,601 breast cancers that developed in a general population of African-American women whose cancers were recorded by the Metropolitan Detroit Cancer Surveillance System (MDCSS population). Demographic and clinical characteristics of the BBD population were compared to the MDCSS population using chi-squared tests, Fisher's exact tests, t-tests, and Wilcoxon tests where appropriate. Kaplan-Meier curves and Cox regression models were used to examine survival. Women in the BBD population were diagnosed with lower-grade (p = 0.02), earlier-stage cancers (p = 0.003) that were more likely to be hormone receptor-positive (p = 0.03) compared to the general metropolitan Detroit African-American population. In situ cancers were more common among women in the BBD cohort (36.7%) than in the MDCSS population (22.1%, p < 0.001). Overall, women in the BBD population were less likely to die from breast cancer after 10 years of follow-up (p = 0.05), but this association was not seen when analyses were limited to invasive breast cancers. These results suggest that breast cancers occurring after a BBD diagnosis may have more favorable clinical parameters, but the majority of cancers are still invasive, with survival rates similar to the general African-American population.
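The survival comparison described in this abstract (Kaplan-Meier curves and Cox regression) is a standard workflow; a minimal sketch using the Python lifelines package is given below. The data frame, file name, and column names are hypothetical placeholders, not the study's data.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical cohort table: follow-up time in years, breast cancer death
# indicator, prior-BBD indicator, and numerically encoded covariates.
df = pd.read_csv("breast_cancer_cohort.csv")

# Kaplan-Meier curves, stratified by history of benign breast disease.
kmf = KaplanMeierFitter()
for group, subset in df.groupby("bbd_history"):
    kmf.fit(subset["time_years"], event_observed=subset["bc_death"],
            label=f"BBD history = {group}")
    kmf.plot_survival_function()

# Cox proportional hazards model adjusting for age and stage
# (categorical covariates such as stage must be encoded numerically first).
cph = CoxPHFitter()
cph.fit(df[["time_years", "bc_death", "bbd_history", "age", "stage"]],
        duration_col="time_years", event_col="bc_death")
cph.print_summary()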
98.
99.

Purpose  

Effective postoperative pain management is important for older surgical patients because pain affects perioperative outcomes. A prospective cohort study was conducted to describe the direct and indirect effects of patient risk factors and pain treatment on levels of postoperative pain in older surgical patients.
100.
Clinical significance of high-risk human papillomavirus DNA testing in women with ASC-H   Total citations: 1; self-citations: 1; citations by others: 0
Objective: To evaluate the role of adjunctive high-risk human papillomavirus DNA (hrHPV DNA) testing in risk assessment for women with atypical squamous cells, cannot exclude high-grade squamous intraepithelial lesion (ASC-H), on cervical cytology. Methods: Epidemiological and histopathological data on hrHPV infection were collected and analyzed for 1,187 women with ASC-H. Results: Of 277,400 cervical cytology examinations, 1,187 were interpreted as ASC-H, and 589 of these (49.6%) tested positive for hrHPV DNA. Among the 505 women with histopathological follow-up, 32.7% of those positive for hrHPV DNA had high-grade cervical intraepithelial neoplasia (CIN 2/3), compared with only 1.2% of hrHPV DNA-negative women. The positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of hrHPV DNA testing for detecting CIN 2/3 were 32.7%, 98.8%, 96.6%, and 58.6%, respectively. This PPV is roughly twice that of an ASC-H cytology result alone for predicting CIN. The presence or absence of an endocervical/transformation zone (EC/TZS) sample did not affect the detection rate of hrHPV DNA or CIN 2/3. Conclusions: hrHPV DNA testing has high clinical value for women with ASC-H cytology. hrHPV DNA-negative patients can be followed with routine cytology and hrHPV DNA testing alone, without colposcopy, particularly women over 40 years of age.
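For reference, the predictive values and accuracy measures quoted above follow from a 2x2 cross-tabulation of the hrHPV DNA result against the histologic CIN 2/3 outcome. The sketch below shows the standard formulas in Python; the cell counts are hypothetical round numbers chosen only to illustrate the calculation, since the abstract does not report the raw table.

def screening_metrics(tp, fp, fn, tn):
    """PPV, NPV, sensitivity, and specificity from a 2x2 table of test result vs. disease."""
    ppv = tp / (tp + fp)          # P(CIN 2/3 | hrHPV DNA positive)
    npv = tn / (tn + fn)          # P(no CIN 2/3 | hrHPV DNA negative)
    sensitivity = tp / (tp + fn)  # P(hrHPV DNA positive | CIN 2/3)
    specificity = tn / (tn + fp)  # P(hrHPV DNA negative | no CIN 2/3)
    return ppv, npv, sensitivity, specificity

# Hypothetical counts (not the study's data): 100 true positives, 200 false
# positives, 4 false negatives, 300 true negatives.
ppv, npv, sens, spec = screening_metrics(tp=100, fp=200, fn=4, tn=300)
print(f"PPV={ppv:.1%}  NPV={npv:.1%}  sensitivity={sens:.1%}  specificity={spec:.1%}")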