Found 20 similar documents (search time: 0 ms)
1.
Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end-user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future Prospects and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.
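The Active Learning strategy in point (1) — picking the most informative unlabeled cases for annotation — can be illustrated with entropy-based uncertainty sampling. This is a minimal sketch, not code from the review; all names are illustrative.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy of per-sample class probabilities, shape (N, C)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_annotation(probs, budget):
    """Return indices of the `budget` most uncertain unlabeled samples."""
    scores = entropy(probs)
    return np.argsort(scores)[::-1][:budget]

# Example: three unlabeled samples; the second is the least confident.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.80, 0.20]])
picked = select_for_annotation(probs, budget=1)
```

The selected samples would then be sent to a human annotator, closing the loop the review describes.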
3.
Over the last decade, convolutional neural networks have emerged and advanced the state-of-the-art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, with models trained on databases of millions of natural images. In the field of medical image analysis, by contrast, progress, though also remarkable, has been slowed mainly by the relative lack of annotated data and by the inherent constraints of the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the performance of a 2D classification network trained on natural images to 2D, 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation and surpassed the state-of-the-art. On 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhancing tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
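The Dice score reported for these benchmarks measures overlap between predicted and reference masks. A minimal sketch of its computation (illustrative; not the challenges' official evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
score = dice_score(pred, target)  # 2*2 / (3+3) = 0.666...
```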
4.
Deep learning has received extensive research interest for developing new medical image processing algorithms, and deep learning based models have been remarkably successful in a variety of medical imaging tasks supporting disease detection and diagnosis. Despite this success, further improvement of deep learning models in medical image analysis remains largely bottlenecked by the lack of large, well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we review and summarize these recent studies to provide a comprehensive overview of applying deep learning methods to various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, summarized by application scenario: classification, segmentation, detection, and image registration. We also discuss major technical challenges and suggest possible solutions for future research efforts.
5.
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
6.
The medical imaging literature has witnessed remarkable progress in high-performing segmentation models based on convolutional neural networks. Despite the new performance highs, the recent advanced segmentation models still require large, representative, and high quality annotated datasets. However, rarely do we have a perfect training dataset, particularly in the field of medical imaging, where data and annotations are both expensive to acquire. Recently, a large body of research has studied the problem of medical image segmentation with imperfect datasets, tackling two major dataset limitations: scarce annotations where only limited annotated data is available for training, and weak annotations where the training data has only sparse annotations, noisy annotations, or image-level annotations. In this article, we provide a detailed review of the solutions above, summarizing both the technical novelties and empirical results. We further compare the benefits and requirements of the surveyed methodologies and provide our recommended solutions. We hope this survey article increases the community awareness of the techniques that are available to handle imperfect medical image segmentation datasets.
8.
A survey of medical image registration (cited 6 times: 0 self-citations, 6 by others)
The purpose of this paper is to present a survey of recent (published in 1993 or later) publications concerning medical image registration techniques. These publications are classified according to a model based on nine salient criteria, the main dichotomy of which is extrinsic versus intrinsic methods. The statistics of the classification show definite trends in the evolving registration techniques, which are discussed. At present, the bulk of interesting intrinsic methods is based either on segmented points or surfaces, or on techniques endeavouring to use the full information content of the images involved.
9.
Aniwat Juhong Bo Li Cheng-You Yao Chia-Wei Yang Dalen W. Agnew Yu Leo Lei Xuefei Huang Wibool Piyawattanametha Zhen Qiu 《Biomedical optics express》2023,14(1):18
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically enormous, so they cannot conveniently be managed and transferred across a computer network or stored on limited computer storage. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for super-resolution enhancement of low-resolution images and for characterization of both cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images. We use a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show strong enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network's results exceed 30 dB and 0.93, respectively, outperforming both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-Net models, which use the weights from the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising. We anticipate these custom CNNs can help overcome the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes in remote, resource-constrained settings.
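The peak signal-to-noise ratio used above to quantify enhancement quality can be computed as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic check: mild Gaussian noise (std 5 on a 0-255 scale)
# should land comfortably above the paper's 30 dB mark.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
value = psnr(ref, noisy)
```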
10.
Objective: To identify patients with subthreshold depression (StD) from medical imaging data using a deep learning (DL) convolutional neural network (CNN) algorithm. Methods: MRI and fMRI data were acquired from 56 StD patients and 70 healthy controls and fed into separately constructed CNNs; a network-fusion technique was used to jointly analyze the two modalities and produce the classification result. The network structure and model parameters were then tuned to optimize classification performance. Results: The MRI-only model achieved a classification accuracy of 73.02%, and the fMRI-only model 65.08%; combining the two modalities raised the final accuracy to 78.57%. Conclusion: DL can distinguish StD patients from healthy controls, and multimodal input improves classification accuracy.
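The abstract does not detail its network-fusion technique; one simple possibility, decision-level (late) fusion, averages the class probabilities produced by the two modality-specific CNNs. A minimal sketch under that assumption:

```python
import numpy as np

def fuse_predictions(p_mri, p_fmri, w=0.5):
    """Late fusion: weighted average of per-modality class probabilities,
    then argmax over classes. `w` weights the MRI branch."""
    fused = w * p_mri + (1.0 - w) * p_fmri
    return fused.argmax(axis=1)

# Two subjects, two classes (0 = control, 1 = StD); toy probabilities.
p_mri = np.array([[0.60, 0.40], [0.30, 0.70]])
p_fmri = np.array([[0.80, 0.20], [0.45, 0.55]])
pred = fuse_predictions(p_mri, p_fmri)
```

The weight `w` could be tuned on validation data, mirroring the parameter tuning the abstract mentions.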
11.
Recently, a series of deep learning methods based on convolutional neural networks (CNNs) have been introduced for the classification of hyperspectral images (HSIs). However, to obtain optimal parameters, a large number of training samples are required by CNNs to avoid overfitting. In this paper, a novel method is proposed to extend the training set for deep learning based hyperspectral image classification. First, given a small-sample-size training set, principal component analysis based edge-preserving features (PCA-EPFs) and extended morphological attribute profiles (EMAPs) are used for HSI classification to generate classification probability maps. Second, a large number of pseudo training samples are obtained by the designed decision function, which depends on the classification probabilities. Finally, a deep feature fusion network (DFFN) is applied to classify the HSI with a training set consisting of the original small-sample-size training set and the added pseudo training samples. Experiments performed on several hyperspectral data sets demonstrate the state-of-the-art performance of the proposed method in terms of classification accuracy.
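The pseudo training-sample selection step — a decision function over classification probabilities — can be sketched with a simple confidence threshold (an illustrative stand-in for the paper's actual decision function):

```python
import numpy as np

def add_pseudo_samples(probs, threshold=0.95):
    """Select unlabeled samples whose top-class probability exceeds
    `threshold`; return their indices and predicted (pseudo) labels."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), labels[keep]

# Three unlabeled pixels: only the confident ones become pseudo samples.
probs = np.array([[0.99, 0.01],
                  [0.60, 0.40],
                  [0.02, 0.98]])
idx, pseudo = add_pseudo_samples(probs, threshold=0.95)
```

The retained pairs would then be appended to the small labeled set before training the fusion network.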
13.
To register two rigid medical images that differ by a translation and a rotation, a registration algorithm combining the frequency and spatial domains is proposed, exploiting the translation and rotation properties of the Fourier transform of the image signal. The algorithm handles rotation and translation separately: the rotation parameter is first determined by cross-correlating the magnitude spectra of the images, and the translation parameter is then determined by voxel-wise cross-correlation in the spatial domain. Experiments on nuclear-medicine image registration and on multimodal medical image registration show that the algorithm is a fast, accurate, and robust medical image registration method.
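The translation step rests on the Fourier shift theorem: a spatial shift appears as a linear phase ramp in the spectrum, so the peak of the normalized cross-power spectrum recovers the displacement. A minimal phase-correlation sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the integer translation that maps `moving` onto `fixed`,
    via the peak of the normalized cross-power spectrum."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
est = phase_correlation_shift(shifted, img)   # recovers (5, -3)
```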
14.
Deformable models in medical image analysis: a survey (cited 2 times: 0 self-citations, 2 by others)
This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.
15.
Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration has been supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby increase the convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework, ConvNets are trained for image registration by exploiting image similarity, analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking several of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show for registration of cardiac cine MRI and registration of chest CT that the performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.
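The image-similarity signal that drives unsupervised training in frameworks like DLIR can be illustrated with normalized cross-correlation, one common intensity-based similarity measure (a sketch of the idea; DLIR's exact loss may differ):

```python
import numpy as np

def ncc(fixed, warped, eps=1e-8):
    """Normalized cross-correlation between two images;
    1.0 means a perfect match up to brightness/contrast changes."""
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    return float((f * w).sum() / (np.linalg.norm(f) * np.linalg.norm(w) + eps))

def similarity_loss(fixed, warped):
    """Loss minimized during unsupervised training: high similarity -> low loss."""
    return 1.0 - ncc(fixed, warped)

rng = np.random.default_rng(2)
img = rng.random((32, 32))
bright = 0.5 * img + 0.2            # same structure, different intensity scale
noise = rng.random((32, 32))        # unrelated image
```

In training, `warped` would be the moving image resampled through the network's predicted transformation, so gradients of this loss steer the ConvNet toward better alignments.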
16.
Supervised deep learning needs a large amount of labeled data to achieve high performance. However, in medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective. Federated learning (FL) can learn a shared model from decentralized data. But traditional FL requires fully-labeled data for training, which is very expensive to obtain. Self-supervised contrastive learning (CL) can learn from unlabeled data for pre-training, followed by fine-tuning with limited annotations. However, when adopting CL in FL, the limited data diversity on each site makes federated contrastive learning (FCL) ineffective. In this work, we propose two federated self-supervised learning frameworks for volumetric medical image segmentation with limited annotations. The first one features high accuracy and fits high-performance servers with high-speed connections. The second one features lower communication costs, suitable for mobile devices. In the first framework, features are exchanged during FCL to provide diverse contrastive data to each site for effective local CL while keeping raw data private. Global structural matching aligns local and remote features for a unified feature space among different sites. In the second framework, to reduce the communication cost for feature exchanging, we propose an optimized method FCLOpt that does not rely on negative samples. To reduce the communications of model download, we propose the predictive target network update (PTNU) that predicts the parameters of the target network. Based on PTNU, we propose the distance prediction (DP) to remove most of the uploads of the target network. Experiments on a cardiac MRI dataset show the proposed two frameworks substantially improve the segmentation and generalization performance compared with state-of-the-art techniques.
17.
梅寒婷 《影像研究与医学应用》2020,(3):1-2
With the steady development of medical imaging technology over recent decades, techniques and equipment such as DR, CT, MRI, and molecular imaging have advanced considerably. Combined with PACS, digital medical imaging equipment has entered a digital, intelligent era in clinical use. Fast, clear image acquisition helps physicians better determine the type of disease and accurately locate its cause, enabling precise treatment. This improves diagnostic performance, drives medical progress, and contributes greatly to human health.
18.
A deep learning framework for hyperspectral image classification using spatial pyramid pooling (cited 1 time: 0 self-citations, 1 by others)
In this letter, a new deep learning framework for spectral–spatial classification of hyperspectral images is presented. The proposed framework serves as an engine for merging spatial and spectral features via a suitable deep learning architecture: stacked autoencoders (SAEs) and deep convolutional neural networks (DCNNs) followed by a logistic regression (LR) classifier. In this framework, SAEs extract useful high-level representations from one-dimensional spectral features, making them well suited to dimensionality reduction of the spectral domain, while DCNNs automatically learn rich features from the training data and have achieved state-of-the-art performance on many image classification databases. Although DCNNs are robust to distortion, they only extract features at a single scale and hence cannot tolerate large-scale variation of objects. As a result, spatial pyramid pooling (SPP) is introduced into hyperspectral image classification for the first time, pooling the spatial feature maps of the top convolutional layers into a fixed-length feature. Experimental results on widely used hyperspectral data indicate that classifiers built in this deep learning-based framework provide competitive performance.
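The core of SPP is pooling an arbitrarily sized feature map over a fixed pyramid of grids, which yields a fixed-length vector regardless of spatial size. A minimal, framework-agnostic sketch:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over an n x n grid for each pyramid
    level and concatenate, giving a fixed-length vector for any H, W."""
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        # Bin edges that cover the map even when H, W are not divisible by n.
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Different spatial sizes, identical output length: C * (1 + 4 + 16) = 168.
a = spatial_pyramid_pool(np.random.rand(8, 13, 17))
b = spatial_pyramid_pool(np.random.rand(8, 32, 32))
```

This fixed-length output is what lets the LR classifier sit on top of convolutional features of varying spatial extent.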
19.
Bofan Song Sumsum Sunny Shaobai Li Keerthi Gurushanth Pramila Mendonca Nirza Mukhia Sanjana Patrick Shubha Gurudath Subhashini Raghavan Imchen Tsusennaro Shirley T. Leivon Trupti Kolur Vivek Shetty Vidya R. Bushan Rohan Ramesh Tyler Peterson Vijay Pillai Petra Wilder-Smith Alben Sigamani Amritha Suresh moni Abraham Kuriakose Praveen Birur Rongguang Liang 《Biomedical optics express》2021,12(10):6422
In medical imaging, deep learning-based solutions have achieved state-of-the-art performance. However, reliability concerns restrict the integration of deep learning into practical medical workflows, since conventional deep learning frameworks cannot quantitatively assess model uncertainty. In this work, we address this shortcoming with a Bayesian deep network capable of estimating uncertainty, used to assess the reliability of oral cancer image classification. We evaluate the model on a large intraoral cheek mucosa image dataset captured with our customized device from a high-risk population, and show that meaningful uncertainty information can be produced. In addition, our experiments show improved accuracy through uncertainty-informed referral: accuracy on the retained data reaches roughly 90% when referring either 10% of all cases or all cases whose uncertainty value exceeds 0.3, and performance can be further improved by referring more patients. The experiments show the model is capable of identifying difficult cases that need further inspection.
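Uncertainty-informed referral — refer high-uncertainty cases to an expert and report accuracy only on what remains — can be sketched as follows (toy data; the 0.3 threshold mirrors the abstract, everything else is illustrative):

```python
import numpy as np

def referral_accuracy(probs, labels, uncertainty, threshold):
    """Accuracy on the cases retained after referring every sample whose
    uncertainty exceeds `threshold` for expert review.
    Returns (accuracy_on_retained, fraction_retained)."""
    keep = uncertainty <= threshold
    if keep.sum() == 0:
        return float("nan"), 0.0
    preds = probs.argmax(axis=1)
    acc = float((preds[keep] == labels[keep]).mean())
    return acc, float(keep.mean())

# Toy cohort: two confident correct cases, two uncertain error-prone ones.
probs = np.array([[0.97, 0.03], [0.95, 0.05], [0.52, 0.48], [0.45, 0.55]])
labels = np.array([0, 0, 1, 0])
uncertainty = np.array([0.05, 0.08, 0.45, 0.40])
acc, retained = referral_accuracy(probs, labels, uncertainty, threshold=0.3)
```

Here referral removes exactly the cases the model would have gotten wrong, which is the effect the paper reports at scale.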
20.
In this paper, we propose a novel mutual consistency network (MC-Net+) to effectively exploit unlabeled data for semi-supervised medical image segmentation. The MC-Net+ model is motivated by the observation that deep models trained with limited annotations are prone to output highly uncertain and easily misclassified predictions in the ambiguous regions (e.g., adhesive edges or thin branches) of medical images. Leveraging these challenging samples can make semi-supervised segmentation training more effective. Therefore, our proposed MC-Net+ model consists of two new designs. First, the model contains one shared encoder and multiple slightly different decoders (i.e., using different up-sampling strategies). The statistical discrepancy of the decoders' outputs is computed to denote the model's uncertainty, which indicates the unlabeled hard regions. Second, we apply a novel mutual consistency constraint between one decoder's probability output and the other decoders' soft pseudo labels. In this way, we minimize the discrepancy of multiple outputs (i.e., the model uncertainty) during training and force the model to generate invariant results in such challenging regions, aiming to regularize the model training. We compared the segmentation results of our MC-Net+ model with five state-of-the-art semi-supervised approaches on three public medical datasets. Extensive experiments with two standard semi-supervised settings demonstrate the superior performance of our model over other methods, setting a new state of the art for semi-supervised medical image segmentation. Our code is released publicly at https://github.com/ycwu1997/MC-Net.
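The mutual consistency idea — penalizing the discrepancy between one decoder's probabilities and another decoder's sharpened pseudo labels — can be sketched as follows (an illustrative simplification of MC-Net+; the temperature sharpening and MSE loss form are assumptions, not taken from the released code):

```python
import numpy as np

def sharpen(probs, T=0.1):
    """Turn soft probabilities into near-one-hot pseudo labels
    via temperature sharpening."""
    p = probs ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mutual_consistency_loss(p1, p2):
    """Symmetric MSE between each decoder's probabilities and the other
    decoder's sharpened pseudo labels; large when the decoders disagree."""
    l12 = np.mean((p1 - sharpen(p2)) ** 2)
    l21 = np.mean((p2 - sharpen(p1)) ** 2)
    return 0.5 * (l12 + l21)

# Two pixels, two classes: agreeing decoders incur a small loss,
# disagreeing decoders (an "ambiguous region") a much larger one.
agree = np.array([[0.9, 0.1], [0.8, 0.2]])
disagree = np.array([[0.3, 0.7], [0.4, 0.6]])
low = mutual_consistency_loss(agree, agree.copy())
high = mutual_consistency_loss(agree, disagree)
```

Minimizing this term pushes the decoders toward invariant outputs precisely in the uncertain regions the abstract highlights.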