Similar Literature
 20 similar records found (search time: 31 ms)
1.

Purpose

   Abnormalities of aortic surface and aortic diameter can be related to cardiovascular disease and aortic aneurysm. Computer-based aortic segmentation and measurement may aid physicians in related disease diagnosis. This paper presents a fully automated algorithm for aorta segmentation in low-dose non-contrast CT images.

Methods

   The original non-contrast CT scan images as well as their pre-computed anatomy label maps are used to locate the aorta and identify its surface. First, a seed point is located inside the aortic lumen. Then, a cylindrical model is progressively fitted in the 3D image space to track the aorta centerline. Finally, the aortic surface is located based on image intensity information. This algorithm has been trained and tested on 359 low-dose non-contrast CT images from the VIA-ELCAP and LIDC public image databases. Twenty images were used for training to obtain the optimal set of parameters, while the remaining images were used for testing. The segmentation result has been evaluated both qualitatively and quantitatively. Sixty representative testing images were used to establish a partial ground truth by manual marking on several axial image slices.

Results

   Compared to ground truth marking, the segmentation result had a mean Dice Similarity Coefficient of 0.933 (maximum 0.963 and minimum 0.907). The average boundary distance between manual segmentation and automatic segmentation was 1.39 mm with a maximum of 1.79 mm and a minimum of 0.83 mm.
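For reference, the Dice Similarity Coefficient reported above can be computed from a pair of binary masks as follows (a generic NumPy sketch, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice Similarity Coefficient between two binary masks."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Two toy 4x4 masks overlapping on two pixels: Dice = 2*2/(3+3).
a = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
```

The same function applies unchanged to 3D masks, since the reductions are over all voxels.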

Conclusion

   Both qualitative and quantitative evaluations have shown that the presented algorithm is able to accurately segment the aorta in low-dose non-contrast CT images.

2.
In this study, we performed dual-modality optical coherence tomography (OCT) characterization (volumetric OCT imaging and quantitative optical coherence elastography) on human breast tissue specimens. We trained and validated a U-Net for automatic image segmentation. Our results demonstrate that U-Net segmentation can assist clinical diagnosis of breast cancer and is a powerful enabling tool for advancing our understanding of the characteristics of breast tissue. Based on the results of U-Net segmentation of 3D OCT images, we demonstrated significant morphological heterogeneity in small breast specimens acquired through diagnostic biopsy. We also found that breast specimens affected by different pathologies had different structural characteristics. By correlating U-Net analysis of structural OCT images with mechanical measurements provided by quantitative optical coherence elastography, we showed that the change in mechanical properties of breast tissue is not directly due to the change in the amount of dense or porous tissue.

3.
Incorporating human domain knowledge for breast tumor diagnosis is challenging because shape, boundary, curvature, intensity or other common medical priors vary significantly across patients and cannot be employed. This work proposes a new approach to integrating visual saliency into a deep learning model for breast tumor segmentation in ultrasound images. Visual saliency refers to image maps containing regions that are more likely to attract radiologists’ visual attention. The proposed approach introduces attention blocks into a U-Net architecture and learns feature representations that prioritize spatial regions with high saliency levels. The validation results indicate increased accuracy for tumor segmentation relative to models without salient attention layers. The approach achieved a Dice similarity coefficient (DSC) of 90.5% on a data set of 510 images. The salient attention model has the potential to enhance accuracy and robustness in processing medical images of other organs, by providing a means to incorporate task-specific knowledge into deep learning architectures.
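A saliency-driven attention block of this general kind can be sketched in NumPy as a simple additive gate; the weight shapes and the way the saliency map enters the gate are illustrative assumptions for the sketch, not the paper's exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def salient_attention(features, saliency, w_f, w_s, w_out):
    """Compute per-pixel attention coefficients from a feature map and a
    saliency map, then rescale the features.
    Shapes: features (H, W, C), saliency (H, W) in [0, 1],
            w_f (C, C), w_s (C,), w_out (C, 1)."""
    gate = np.tanh(features @ w_f + saliency[..., None] * w_s)  # (H, W, C)
    alpha = sigmoid(gate @ w_out)   # (H, W, 1) attention coefficients in (0, 1)
    return features * alpha, alpha
```

In a trained network the weights would be learned end-to-end; here they only demonstrate how high-saliency pixels can up-weight the corresponding feature responses.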

4.

Purpose

Automated segmentation is required for radiotherapy treatment planning, and multi-atlas methods are frequently used for this purpose. The combination of multiple intermediate results from multi-atlas segmentation into a single segmentation map can be achieved by label fusion. A method that includes expert knowledge in the label fusion phase of multi-atlas-based segmentation was developed. The method was tested by application to prostate segmentation, and the accuracy was compared to standard techniques.

Methods

The selective and iterative method for performance level estimation (SIMPLE) algorithm for label fusion was modified with a weight map, given by an expert, that indicates the importance of each region in the evaluation of segmentation results. Voxel-based weights specified by an expert when performing the label fusion step in atlas-based segmentation were introduced into the modified SIMPLE algorithm. These weights incorporate expert knowledge on accuracy requirements in different regions of a segmentation. Using this knowledge, segmentation accuracy in regions known to be important can be improved by sacrificing segmentation accuracy in less important regions. Contextual information, such as the presence of vulnerable tissue, is then used in the segmentation process. This method of using weight maps to fine-tune the result of multi-atlas-based segmentation was tested using a set of 146 atlas images, each consisting of an MR image of the lower abdomen and a prostate segmentation. Each image served as a target in a set of leave-one-out experiments. These experiments were repeated for a weight map derived from the clinical practice in our hospital.
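The weighted label fusion idea can be illustrated with a minimal SIMPLE-style loop; the drop fraction, iteration count, and majority-vote consensus below are simplifying assumptions for the sketch, not the authors' exact algorithm:

```python
import numpy as np

def weighted_dice(seg, ref, weights):
    """Dice overlap in which each voxel contributes according to an
    expert-supplied weight map (higher weight = clinically more important)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = (weights * (seg & ref)).sum()
    total = (weights * seg).sum() + (weights * ref).sum()
    return 2.0 * inter / total if total > 0 else 1.0

def simple_fusion(atlas_segs, weights, n_iter=5, drop_frac=0.25):
    """Iteratively fuse atlas segmentations: score each atlas against the
    current consensus with the weighted overlap, discard the worst
    performers, and re-estimate the consensus."""
    segs = [s.astype(bool) for s in atlas_segs]
    for _ in range(n_iter):
        consensus = np.mean(segs, axis=0) >= 0.5           # majority vote
        scores = [weighted_dice(s, consensus, weights) for s in segs]
        cutoff = np.quantile(scores, drop_frac)
        kept = [s for s, sc in zip(segs, scores) if sc >= cutoff]
        if not kept or len(kept) == len(segs):
            break
        segs = kept
    return np.mean(segs, axis=0) >= 0.5
```

With a uniform weight map this reduces to ordinary performance-based fusion; a non-uniform map shifts the scoring (and hence the atlas selection) toward the regions the expert marked as critical.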

Results

The segmentation accuracy increased by 6 % in regions bordering vulnerable tissue when using expert-specified voxel-based weight maps. This was achieved at the cost of a 4 % decrease in accuracy in less clinically relevant regions.

Conclusion

The inclusion of expert knowledge in a multi-atlas-based segmentation procedure was shown to be feasible for prostate segmentation. This method allows an expert to ensure that automatic segmentation is most accurate in critical regions. This improved local accuracy can increase the practical value of automatic segmentation.

5.
Objectives

The application of deep learning to medical image segmentation has received considerable attention. Nevertheless, when segmenting thyroid ultrasound images, it is difficult to achieve good segmentation results with deep learning methods because of the large number of non-thyroid regions and insufficient training data.

Methods

In this study, a Super-pixel U-Net, designed by adding a supplementary path to U-Net, was devised to boost thyroid segmentation results. The supplementary path introduces more information into the network, improving the auxiliary segmentation. The method uses a multi-stage pipeline comprising boundary segmentation, boundary repair, and auxiliary segmentation. To reduce the negative effect of non-thyroid regions, a U-Net is first used to obtain rough boundary outputs. A second U-Net is then trained to repair and improve the coverage of these boundary outputs. Finally, the Super-pixel U-Net is applied in the third stage to segment the thyroid more precisely. Multidimensional indicators were used to compare the segmentation results of the proposed method with those of comparison experiments.

Discussion

The proposed method achieved an F1 score of 0.9161 and an IoU of 0.9279. It also exhibits better performance in terms of shape similarity, with an average convexity of 0.9395, an average ratio of 0.9109, an average compactness of 0.8976, an average eccentricity of 0.9448, and an average rectangularity of 0.9289. The average area estimation indicator was 0.8857.

Conclusion

The proposed method exhibited superior performance, confirming the benefit of the multi-stage design and the Super-pixel U-Net.
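Shape-similarity descriptors such as rectangularity are straightforward to compute from a binary mask; the sketch below uses one common definition (object area over axis-aligned bounding-box area), which is not necessarily the exact formula used in the study:

```python
import numpy as np

def rectangularity(mask):
    """Object area divided by the area of its axis-aligned bounding box."""
    ys, xs = np.nonzero(mask)
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / bbox

# A filled 3 x 4 rectangle fills its bounding box exactly.
m = np.zeros((6, 6), dtype=bool)
m[1:4, 1:5] = True
print(rectangularity(m))  # 1.0
```

Values near 1 indicate a segmented region that is close to rectangular; irregular or concave thyroid contours score lower.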

6.

Purpose

Breast cancer is the most common form of cancer among women worldwide. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast. Recently, computer-aided diagnosis (CAD) systems using ultrasound images have been developed to help radiologists increase diagnostic accuracy. However, accurate ultrasound image segmentation remains a challenging problem due to various ultrasound artifacts. In this paper, we investigate approaches developed for breast ultrasound (BUS) image segmentation.

Methods

In this paper, we reviewed the literature on the segmentation of BUS images according to the techniques adopted, especially over the past 10 years. We divide the methods into seven classes (thresholding-based, clustering-based, watershed-based, graph-based, active contour model, Markov random field and neural network) and introduce the corresponding techniques and representative papers for each.
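As a minimal example of the first class, thresholding-based segmentation, Otsu's method picks the threshold that maximizes the between-class variance of the intensity histogram (a generic sketch, not taken from any of the reviewed papers):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # class-0 probability
    w1 = 1.0 - w0                            # class-1 probability
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)   # class-0 mean intensity
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[np.argmax(between)]
```

As the review notes, plain thresholding is fragile on BUS images because speckle noise and intensity inhomogeneity violate its implicit bimodal-histogram assumption, which is why the later classes of methods exist.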

Results

We have summarized and compared many techniques for BUS image segmentation and found that all these techniques have their own pros and cons. However, BUS image segmentation remains an open and challenging problem due to various ultrasound artifacts introduced in the process of imaging, including high speckle noise, low contrast, blurry boundaries, low signal-to-noise ratio and intensity inhomogeneity.

Conclusions

To the best of our knowledge, this is the first comprehensive review of the approaches developed for segmentation of BUS images. With most techniques involved, this paper will be useful and helpful for researchers working on segmentation of ultrasound images, and for BUS CAD system developers.

7.
Segmentation in image processing finds immense application in various areas, and image processing techniques can be used in medical applications for various diagnoses. In this article, we apply segmentation techniques to brain images. Segmentation of brain magnetic resonance images (MRI) can be used to identify various neural disorders: abnormal tissues can be segmented from the MRI, which can be used for early detection of brain tumors. When applied to MRI, segmentation helps in extracting the different brain tissues such as white matter, gray matter and cerebrospinal fluid. Segmenting these tissues helps in determining their volumes in the three-dimensional brain MRI, and the study of volume changes helps in analyzing many neural disorders such as epilepsy and Alzheimer disease. We propose a hybrid method combining the classical Fuzzy C-Means algorithm with a neural network for segmentation.
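A minimal Fuzzy C-Means on 1-D intensities might look like the following; the quantile-based initialization is a simplifying assumption of this sketch, and real brain MRI segmentation would operate on 3D volumes, typically with additional features:

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iter=100):
    """Plain fuzzy c-means on a 1-D array of intensities.
    Returns cluster centers and the (n, c) fuzzy membership matrix."""
    x = x.reshape(-1, 1).astype(float)
    # Initialize centers at data quantiles (simple deterministic heuristic).
    centers = np.quantile(x, np.linspace(0.0, 1.0, c))[:, None]
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-9            # (n, c) distances
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)    # fuzzy memberships
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
    return centers.ravel(), u
```

For tissue classification with c = 3, the three centers roughly track cerebrospinal fluid, gray matter and white matter intensities; the hybrid method described above would then refine these fuzzy memberships with a neural network.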

8.

Purpose

Automatic segmentation of the retinal vasculature is a first step in computer-assisted diagnosis and treatment planning. The extraction of retinal vessels in pediatric retinal images is challenging because of comparatively wide arterioles with a light streak running longitudinally along the vessel’s center, the central vessel reflex. A new method for automatic segmentation was developed and tested.

Method

   A supervised method for retinal vessel segmentation in images of multi-ethnic school children was developed based on an ensemble classifier of bootstrapped decision trees. A collection of dual Gaussian, second derivative of Gaussian and Gabor filters, along with the generalized multiscale line strength measure and morphological transformation, is used to generate the feature vector. The feature vector encodes information to handle normal vessels as well as vessels with the central reflex. The methodology is evaluated on CHASE_DB1, a relatively new public retinal image database of multi-ethnic school children, which is a subset of retinal images from the Child Heart and Health Study in England (CHASE) dataset.

Results

   The segmented retinal images from the CHASE_DB1 database produced best case accuracy, sensitivity and specificity of 0.96, 0.74 and 0.98, respectively, and worst case measures of 0.94, 0.67 and 0.98, respectively.

Conclusion

   A new retinal blood vessel segmentation algorithm was developed and tested with a shared database. The observed accuracy, speed, robustness and simplicity suggest that the algorithm may be a suitable tool for automated retinal image analysis in large population-based studies.

9.

Purpose

An automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually exhibits large variations in appearance and low soft-tissue contrast in CT images. In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random field recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach.

Methods

The presented approach works as follows: first, we apply our proposed preprocessing method to the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produces the final fine-localized segmentation result.
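The dual-channel idea can be sketched as stacking the normalized CT image with a windowed density map; the Hounsfield-unit window used here is a hypothetical choice for illustration, not the authors' published parameters:

```python
import numpy as np

def dual_channel(ct, hu_low=-10.0, hu_high=30.0):
    """Stack the min-max normalized CT image with an 'enhanced bladder
    density map': a soft window around a hypothetical urine HU range,
    clipped and rescaled to [0, 1]. Returns an (H, W, 2) array."""
    density = np.clip((ct - hu_low) / (hu_high - hu_low), 0.0, 1.0)
    norm = (ct - ct.min()) / (np.ptp(ct) + 1e-12)
    return np.stack([norm, density], axis=-1)
```

The second channel suppresses tissue far outside the windowed density range, giving the CNN an explicit hint about where bladder-like intensities occur.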

Results

We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps produced by our approach present sharper boundaries and more accurate localizations than those of the V-net.

Conclusion

Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel preprocessing and the 3D fully connected CRF-RNN contribute to this improvement. The unified deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.

10.
Image assessment of the arterial system plays an important role in the diagnosis of cardiovascular diseases. The segmentation of the lumen and media-adventitia in intravascular ultrasound (IVUS) images of the coronary artery is the first step towards the evaluation of the morphology of the vessel under analysis and the identification of possible atherosclerotic lesions. In this study, a fully automatic method for the segmentation of the lumen in IVUS images of the coronary artery is presented. The proposed method relies on the K-means algorithm and the mean roundness to identify the region corresponding to the potential lumen. An approach to identify and eliminate side branches at bifurcations is also proposed to delimit the area containing the potential lumen regions. Additionally, an active contour model is applied to refine the contour of the lumen region. In order to evaluate the segmentation accuracy, the results of the proposed method were compared against manual delineations made by two experts in 326 IVUS images of the coronary artery. The average values of the Jaccard measure, Hausdorff distance, percentage of area difference and Dice coefficient were 0.88 ± 0.06, 0.29 ± 0.17 mm, 0.09 ± 0.07 and 0.94 ± 0.04, respectively, in 324 IVUS images successfully segmented. Additionally, a comparison with studies found in the literature showed that the proposed method is slightly better than the majority of related methods. Hence, the new automatic segmentation method is shown to be effective in detecting the lumen in IVUS images without using complex solutions and user interaction.
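The roundness criterion used to pick the lumen candidate is based on the isoperimetric ratio, which equals 1 for a perfect circle; a sketch (one standard definition, not necessarily the paper's exact one):

```python
import numpy as np

def roundness(area, perimeter):
    """Isoperimetric roundness: 1.0 for a perfect circle, < 1 otherwise."""
    return 4.0 * np.pi * area / perimeter ** 2

def most_round(regions):
    """Index of the candidate region, given as (area, perimeter) pairs,
    whose shape is closest to a circle."""
    return max(range(len(regions)), key=lambda i: roundness(*regions[i]))

r = 5.0
print(roundness(np.pi * r ** 2, 2 * np.pi * r))  # 1.0 for a circle
```

In a K-means-based pipeline, each clustered candidate region would supply its own area and perimeter, and the most circular one is retained as the potential lumen.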

11.

Objective

We propose a hybrid interactive approach for the segmentation of anatomic structures in medical images with higher accuracy at lower user interaction cost.

Materials and methods

Eighteen brain MR scans from the Internet Brain Segmentation Repository are used for brain structure segmentation. An MR scan and a CT scan of an elderly female subject are used for orbital structure segmentation. The proposed approach combines shape-based interpolation, radial basis function (RBF)-based warping and model-based segmentation. With this approach, to segment a structure in a 3D image, we first delineate the structure in several slices using interactive methods, and then use shape-based interpolation to automatically generate an initial 3D model of the structure from the segmented slices. To refine the initial model, we specify a set of additional points on the structure boundary in the image, and use an RBF to warp the model so that it passes through the specified points. Finally, we adopt a point-anchored active surface approach to further deform the model to better fit its corresponding structure in the image.
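The RBF warping step can be illustrated in 1-D: RBF interpolation of displacements moves each control point exactly onto its specified target and falls off smoothly elsewhere. The Gaussian kernel and bandwidth below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def rbf_displacement(points, targets, query, sigma=10.0):
    """Gaussian RBF interpolation of 1-D displacements: solve for weights
    so the warp is exact at the control points, then evaluate at query."""
    d = targets - points                                   # desired shifts
    K = np.exp(-((points[:, None] - points[None, :]) ** 2) / (2 * sigma ** 2))
    w = np.linalg.solve(K, d)                              # RBF weights
    Kq = np.exp(-((query[:, None] - points[None, :]) ** 2) / (2 * sigma ** 2))
    return Kq @ w
```

In the 3D case the same system is solved per coordinate, with the user-specified boundary points acting as the warp's control points.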

Results

Two brain structures and 15 orbital structures are segmented. For each structure, only three to five 2D slices need to be semi-automatically segmented and two to nine additional points specified on the structure boundary. The time cost for each structure is about 1–3 min. The overlap ratio between the segmentation results and the ground truth is higher than 96%.

Conclusion

The proposed method for the segmentation of anatomic structures achieves higher accuracy at lower user-interaction cost, and is therefore promising in many applications such as surgery planning and simulation, atlas construction, and morphometric analysis of anatomic structures.

12.
Venous ultrasound images are noisy and respond poorly to threshold-based segmentation. To address this, we propose a ResNet34-UNet segmentation network built on a ResNet34 backbone. The residual-learning structure of ResNet34 allows the network to extract sufficient image features while effectively avoiding vanishing gradients and network degradation, and its 34-layer depth keeps the network compact. The long skip connections characteristic of the U-Net architecture fuse deep and shallow features of the venous ultrasound images, substantially improving vein recognition accuracy and yielding smooth segmentation of vein edges. Using 300 venous ultrasound images for training and 200 for testing, with the dataset augmented by random rotation, flipping and projection, the model reached an accuracy (ACC) of 96.3% after ten training epochs: 5.9% higher than a fully convolutional network (FCN) and 5.2% higher than DeepLab v3+. The results show that the ResNet34-UNet approach segments venous ultrasound images accurately, providing a technical reference for subsequent automatic vein recognition and tracking in ultrasound imaging.

13.
Geographic atrophy (GA) is a condition associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images, in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes of eight patients with GA and 56 SD-OCT scans from 56 eyes of 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, between our method and outlines drawn in fundus auto-fluorescence (FAF) images, and between commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.

OCIS codes: (100.0100) Image processing, (110.4500) Optical coherence tomography, (170.4470) Ophthalmology

14.

Purpose

Existing computer-aided detection schemes for lung nodule detection require a large number of calculations and tens of minutes per case; there is a large gap between image acquisition time and nodule detection time. In this study, we propose a fast detection scheme for lung nodules in chest CT images using a cylindrical nodule-enhancement filter, with the aim of improving the workflow for diagnosis in CT examinations.

Methods

The proposed detection scheme involves segmentation of the lung region, preprocessing, nodule enhancement, further segmentation, and false-positive (FP) reduction. For nodule enhancement, our method employs a cylindrical shape filter to reduce the number of calculations. FPs among the nodule candidates are reduced using a support vector machine and seven types of characteristic parameters.

Results

The detection performance and speed were evaluated experimentally using the publicly available Lung Image Database Consortium (LIDC) image database. A 5-fold cross-validation result demonstrates that our method correctly detects 80 % of nodules with 4.2 FPs per case, and that the proposed method is 4–36 times faster than existing methods.

Conclusion

Detection performance and speed indicate that our method may be useful for fast detection of lung nodules in CT images.

15.
We study the use of raw ultrasound waveforms, often referred to as the “Radio Frequency” (RF) data, for the semantic segmentation of ultrasound scans to carry out dense and diagnostic labeling. We present W-Net, a novel Convolutional Neural Network (CNN) framework that employs the raw ultrasound waveforms in addition to the grey ultrasound image to semantically segment and label tissues for anatomical, pathological, or other diagnostic purposes. To the best of our knowledge, this is also the first deep-learning or CNN approach for segmentation that analyzes ultrasound raw RF data along with the grey image. We chose subcutaneous tissue (SubQ) segmentation as our initial clinical goal for dense segmentation since it has diverse intermixed tissues, is challenging to segment, and is an underrepresented research area. Potential SubQ applications include plastic surgery, adipose stem-cell harvesting, lymphatic monitoring, and possibly detection/treatment of certain types of tumors. Unlike prior work, we seek to label every pixel in the image, without the use of a background class. A custom dataset of images hand-labeled by an expert clinician and trainees is used for the experimentation, currently labeled into the following categories: skin, fat, fat fascia/stroma, muscle, and muscle fascia. We compared W-Net and an attention variant of W-Net (AW-Net) with U-Net and Attention U-Net (AU-Net). Our novel W-Net’s RF-waveform encoding architecture outperformed the regular U-Net and AU-Net, achieving the best mIoU accuracy (averaged across all tissue classes).
We study the impact of RF data on dense labeling of the SubQ region, followed by analyses of the networks’ generalization across patients and of the SubQ tissue classes, determining that fascia tissues, muscle fascia in particular, are the most difficult anatomic class to recognize for both humans and AI algorithms. We also present diagnostic semantic segmentation, i.e., semantic segmentation carried out for the purpose of direct diagnostic pixel labeling, and apply it to a breast tumor detection task on a publicly available dataset, segmenting pixels into malignant tumor, benign tumor, and background tissue classes. Using the segmented image, we diagnose the patient by classifying the breast lesion as either benign or malignant. We demonstrate the diagnostic capability of RF data with W-Net, which achieves the best segmentation scores across all classes.

16.

Purpose

   Image noise in computed tomography (CT) images may have significant local variation due to tissue properties, dose, and location of the X-ray source. We developed and tested an automated tissue-based estimator (TBE) method for estimating local noise in CT images.

Method

   An automated TBE method for estimating the local noise in a CT image in three steps was developed: (1) partition the image into homogeneous and transition regions; (2) for each pixel in the homogeneous regions, compute the standard deviation in a 15 × 15 × 1 voxel local region using only pixels from the same homogeneous region; and (3) interpolate the noise estimate from the homogeneous regions into the transition regions. Noise-aware fat segmentation was implemented. Experiments were conducted on an anthropomorphic phantom and in vivo low-dose chest CT scans to validate the TBE, characterize the magnitude of local noise variation, and determine the sensitivity of noise estimates to the size of the region in which noise is computed. The TBE was tested on all scans from the Early Lung Cancer Action Program public database. The TBE was evaluated quantitatively on the phantom data and qualitatively on the in vivo data.
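Step 2 of the TBE can be sketched directly: for each pixel of a homogeneous region, compute the standard deviation over a local window restricted to pixels of the same region. This is a naive per-pixel loop for clarity, on a 2D slice and with a configurable window size rather than the paper's exact 15 × 15 × 1 neighborhood:

```python
import numpy as np

def local_noise(img, labels, region_id, win=15):
    """Local noise map for one homogeneous region: windowed standard
    deviation using only same-region pixels; NaN elsewhere."""
    h, w = img.shape
    out = np.full((h, w), np.nan)
    half = win // 2
    for i in range(h):
        for j in range(w):
            if labels[i, j] != region_id:
                continue
            sl = (slice(max(i - half, 0), i + half + 1),
                  slice(max(j - half, 0), j + half + 1))
            same = labels[sl] == region_id       # restrict to the region
            out[i, j] = img[sl][same].std()
    return out
```

Step 3 would then interpolate these per-region estimates across the transition regions to obtain a dense noise map.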

Results

   The results show that noise can vary locally by over 200 Hounsfield units on low-dose in vivo chest CT scans and that the TBE can characterize these noise variations within 5 %. The new fat segmentation algorithm successfully improved segmentation on all 50 scans tested.

Conclusion

   The TBE provides a means to estimate noise for image quality monitoring, optimization of denoising algorithms, and improvement of segmentation algorithms. The TBE was shown to accurately characterize the large local noise variations that occur due to changes in material, dose, and X-ray source location.

17.

Purpose

Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms.

Methods

We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. The random walks segmentation is performed iteratively, and the prior mask is re-aligned at every iteration.

Results

We tested the proposed approach on ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236–253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215–226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and the shape prior information weighted by a confidence map.

Conclusion

The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.

18.
Objective: To evaluate the feasibility of automatic segmentation of pelvic soft-tissue structures on pelvic T2WI using a 3D U-Net deep learning (DL) model. Methods: A total of 147 patients with prostate cancer or benign prostatic hyperplasia, confirmed by pathology or by follow-up pelvic MRI, were retrospectively analyzed; 28 patients underwent two pelvic MR scans and 121 underwent one, for a total of 175 T2WI series. Soft-tissue structures shown on T2WI, including the prostate, bladder, rectum and bilateral seminal vesicles..., were manually annotated.

19.
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and the target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, so the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
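For contrast, the per-voxel majority-vote baseline that such generative fusion models improve upon can be written in a few lines (a generic sketch assuming the atlas labels have already been propagated to the target space):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas label maps by per-voxel majority vote.
    Ties are broken in favor of the smallest label value."""
    stack = np.stack(label_maps)                         # (n_atlases, ...)
    labels = np.unique(stack)
    votes = np.stack([(stack == l).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]
```

Unlike the generative model described above, majority voting ignores both image intensities and registration uncertainty, which is precisely why it degrades when the atlas-to-target registrations are imperfect.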

20.
Low-field (<1T) magnetic resonance imaging (MRI) scanners remain in widespread use in low- and middle-income countries (LMICs) and are still used for some applications in higher-income countries, e.g. for small children and for patients with obesity, claustrophobia, implants, or tattoos. However, low-field MR images commonly have lower resolution and poorer contrast than images from high-field scanners (1.5T, 3T, and above). Here, we present Image Quality Transfer (IQT) to enhance low-field structural MRI by estimating, from a low-field image, the image we would have obtained from the same subject at high field. Our approach uses (i) a stochastic low-field image simulator as the forward model to capture uncertainty and variation in the contrast of low-field images corresponding to a particular high-field image, and (ii) an anisotropic U-Net variant specifically designed for the IQT inverse problem. We evaluate the proposed algorithm both in simulation and using multi-contrast (T1-weighted, T2-weighted, and fluid attenuated inversion recovery (FLAIR)) clinical low-field MRI data from an LMIC hospital. We show the efficacy of IQT in improving the contrast and resolution of low-field MR images, and demonstrate that IQT-enhanced images have potential for improving the visualisation of anatomical structures and pathological lesions of clinical relevance from the perspective of radiologists. IQT thus shows promise for boosting the diagnostic value of low-field MRI, especially in low-resource settings.
