Similar Literature
20 similar documents were retrieved.
1.
Left atrial (LA) and atrial scar segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is an important task in clinical practice. Automatic segmentation, however, remains challenging due to poor image quality, the variety of LA shapes, the thin atrial wall, and the surrounding enhanced regions. Previous methods normally solved the two tasks independently, ignoring the intrinsic spatial relationship between the LA and scars. In this work, we develop a new framework, AtrialJSQnet, in which LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end style. We propose a shape attention (SA) mechanism based on an implicit surface projection to exploit the inherent correlation between the LA cavity and scars. Specifically, the SA scheme is embedded into a multi-task architecture to perform joint LA segmentation and scar quantification. In addition, a spatial encoding (SE) loss is introduced to incorporate continuous spatial information about the target and thereby reduce noisy patches in the predicted segmentation. We evaluated the proposed framework on 60 post-ablation LGE MRIs from the MICCAI 2018 Atrial Segmentation Challenge. Moreover, we explored the domain generalization ability of AtrialJSQnet on 40 pre-ablation LGE MRIs from the same challenge and 30 post-ablation multi-center LGE MRIs from another challenge (the ISBI 2012 Left Atrium Fibrosis and Scar Segmentation Challenge). Extensive experiments on public datasets demonstrated the effectiveness of AtrialJSQnet, which achieved performance competitive with the state of the art. The relatedness between LA segmentation and scar quantification was explicitly explored and shown to yield significant performance improvements for both tasks. The code has been released via https://zmiclab.github.io/projects.html.
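For intuition, here is a minimal sketch of how a distance-based spatial encoding penalty can be built from signed distance maps of the ground-truth mask; the function names and the exact weighting are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    # Positive outside the target, negative inside, ~zero at the boundary.
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return outside - inside

def spatial_encoding_loss(prob, gt_mask):
    # Foreground probability far outside the target raises the loss,
    # discouraging isolated noisy patches; probability inside lowers it.
    sdm = signed_distance_map(gt_mask.astype(np.uint8))
    return float(np.mean(prob * sdm))
```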

2.
Radiotherapy is a treatment in which radiation is used to eliminate cancer cells. The delineation of organs-at-risk (OARs) is a vital step in radiotherapy treatment planning to avoid damage to healthy organs. For nasopharyngeal cancer, more than 20 OARs need to be precisely segmented in advance. The challenges of this task lie in the complex anatomical structure, low-contrast organ contours, and the extreme size imbalance between large and small organs. Common segmentation methods that treat all organs equally generally lead to inaccurate labelling of small organs. We propose a novel two-stage deep neural network, FocusNetv2, to solve this challenging problem by automatically locating, ROI-pooling, and segmenting small organs with specifically designed small-organ localization and segmentation sub-networks, while maintaining the accuracy of large-organ segmentation. In addition to our original FocusNet, we employ a novel adversarial shape constraint on small organs to ensure consistency between estimated small-organ shapes and organ shape priors. Our proposed framework is extensively tested on both a self-collected dataset of 1,164 CT scans and the MICCAI Head and Neck Auto Segmentation Challenge 2015 dataset, showing superior performance compared with state-of-the-art head and neck OAR segmentation methods.

3.
White matter hyperintensities (WMHs) have been associated with various cerebrovascular and neurodegenerative diseases. Reliable quantification of WMHs is essential for understanding their clinical impact in normal and pathological populations. Automated segmentation of WMHs is highly challenging due to heterogeneity in WMH characteristics between deep and periventricular white matter, the presence of artefacts, and differences in the pathology and demographics of populations. In this work, we propose an ensemble triplanar network that combines the predictions from three different planes of brain MR images to provide an accurate WMH segmentation. The network uses anatomical information regarding the spatial distribution of WMHs in its loss functions to improve segmentation efficiency and to overcome the contrast variations between deep and periventricular WMHs. We evaluated our method on 5 datasets, of which 3 are part of a publicly available dataset (the training data for the MICCAI WMH Segmentation Challenge 2017, MWSC 2017) consisting of subjects from three different cohorts; we also submitted our method to MWSC 2017 for evaluation on the unseen test datasets. Evaluating our method separately in deep and periventricular regions, we observed robust and comparable performance in both. Our method performed better than most existing methods, including FSL BIANCA, and on par with the top-ranking deep learning methods of MWSC 2017.
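A simple consensus over three orthogonal planes can be sketched as below; `predict_axial`, `predict_sagittal`, and `predict_coronal` are hypothetical per-slice 2D models, and plain averaging is an assumption rather than the paper's exact combination rule.

```python
import numpy as np

def triplanar_ensemble(vol, predict_axial, predict_sagittal, predict_coronal):
    """Average the probability maps of 2D models run along each axis of a
    (Z, Y, X) volume; each predictor maps a 2D slice to a 2D probability map."""
    ax = np.stack([predict_axial(vol[z]) for z in range(vol.shape[0])], axis=0)
    cor = np.stack([predict_coronal(vol[:, y, :]) for y in range(vol.shape[1])], axis=1)
    sag = np.stack([predict_sagittal(vol[:, :, x]) for x in range(vol.shape[2])], axis=2)
    return (ax + cor + sag) / 3.0
```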

4.
Detection of cells and particles in microscopy images is a common and challenging task. In recent years, detection approaches in computer vision have achieved remarkable improvements by leveraging deep learning. Microscopy images pose challenges such as small and clustered objects, a low signal-to-noise ratio, and complex shape and appearance, with which current approaches still struggle. We introduce the Deep Consensus Network, a new deep neural network for object detection in microscopy images based on object centroids. Our network is trainable end-to-end and comprises a Feature Pyramid Network-based feature extractor, a Centroid Proposal Network, and a layer for ensembling detection hypotheses over all image scales and anchors. We suggest an anchor regularization scheme that favours prior anchors over regressed locations. We also propose a novel loss function based on Normalized Mutual Information to cope with strong class imbalance, which we derive within a Bayesian framework. In addition, we introduce an improved algorithm for Non-Maximum Suppression which significantly reduces the algorithmic complexity. Experiments on synthetic data provide insights into the properties of the proposed loss function and its robustness. We also applied our method to challenging data from the TUPAC16 mitosis detection challenge and the Particle Tracking Challenge, achieving results competitive with or better than the state of the art.
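Since the detections are centroids rather than boxes, non-maximum suppression reduces to greedy distance-based suppression. The sketch below shows the standard greedy scheme for reference; the paper's improved, lower-complexity variant is not reproduced here.

```python
import numpy as np

def centroid_nms(points, scores, radius):
    """Greedy NMS for centroid detections: repeatedly keep the highest-
    scoring point and discard all remaining points within `radius` of it."""
    order = np.argsort(scores)[::-1]
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        dist = np.linalg.norm(points[order[1:]] - points[i], axis=1)
        order = order[1:][dist > radius]
    return np.asarray(kept, dtype=int)
```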

5.
Simultaneous, automatic segmentation of the blood pool and myocardium is an important precondition for early diagnosis and pre-operative planning in patients with complex congenital heart disease. However, due to the high diversity of cardiovascular structures and the changes in mechanical properties caused by cardiac defects, the segmentation task still faces great challenges. To overcome them, in this study we propose an integrated multi-task deep learning framework based on a dilated residual and hybrid pyramid pooling network (DRHPPN) for joint segmentation of the blood pool and myocardium. The framework consists of three closely connected, progressive sub-networks. An inception module realizes the initial multi-level feature representation of cardiovascular images. A dilated residual network (DRN), the main body for feature extraction and pixel classification, produces a preliminary prediction of the segmentation regions. A hybrid pyramid pooling network (HPPN) is designed to facilitate the aggregation of local information into global context, complementing the DRN. Extensive experiments on three-dimensional cardiovascular magnetic resonance (CMR) images (the available dataset of the MICCAI 2016 HVSMR challenge) demonstrate that our approach can accurately segment the blood pool and myocardium and achieves competitive performance compared with state-of-the-art segmentation methods.
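As a rough illustration of the DRN building block, a generic dilated residual block in PyTorch might look as follows; the channel counts, dilation rates, and normalization choices are assumptions rather than the authors' exact configuration.

```python
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Two dilated 3x3 convolutions with an identity shortcut: the dilation
    enlarges the receptive field without downsampling the feature map."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut
```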

6.
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at the voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate performance variation at the vertebra level, the scan level, and across fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other. The principal takeaway from VerSe is that the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.

7.
8.
Object segmentation and structure localization are important steps in automated image-analysis pipelines for microscopy images. We present a convolutional neural network (CNN)-based deep learning architecture for segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei, and glands in fluorescence microscopy and histology images after slight tuning of input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. Extra convolutional layers that bypass the max-pooling operation allow the network to train for variable input intensities and object sizes, making it robust to noisy data. We compare our results on publicly available datasets and show that the proposed network outperforms recent deep learning algorithms.

9.
Manual segmentation of stacks of 2D biomedical images (e.g., histology) is a time-consuming task which can be sped up with semi-automated techniques. In this article, we present a suggestive deep active learning framework that seeks to minimise the annotation effort required to achieve a certain level of accuracy when labelling such a stack. The framework suggests, at every iteration, a specific region of interest (ROI) in one of the images for manual delineation. Using a deep segmentation neural network and a mixed cross-entropy loss function, we propose a principled strategy to estimate class probabilities for the whole stack, conditioned on heterogeneous partial segmentations of the 2D images, as well as on weak supervision in the form of image indices that bound each ROI. Using the estimated probabilities, we propose a novel active learning criterion based on predictions of the estimated segmentation performance and delineation effort, measured by average Dice score and total delineated boundary length, respectively, rather than common surrogates such as entropy. The query strategy suggests the ROI that is expected to maximise the ratio between performance and effort, while considering the adjacency of structures that may already have been labelled, which decreases the length of boundary left to trace. We provide quantitative results on synthetically deformed MRI scans and real histological data, showing that our framework can reduce labelling effort by up to 60–70% without compromising accuracy.
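The ratio-based query criterion can be written down compactly. In this hypothetical sketch, each candidate ROI carries an estimated Dice gain and an estimated boundary length to trace; how those two quantities are predicted is the substance of the paper and is not reproduced here.

```python
def select_roi(candidates):
    """Pick the candidate ROI with the best expected performance-per-effort
    ratio. `candidates` is a list of dicts with (hypothetical) keys
    'expected_dice_gain' and 'boundary_length'."""
    return max(
        candidates,
        key=lambda c: c['expected_dice_gain'] / max(c['boundary_length'], 1e-6),
    )
```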

10.
The detection of nuclei and cells in histology images is of great value in both clinical practice and pathological studies. However, factors such as the morphological variation of nuclei and cells make this a challenging task on which conventional object detection methods often fail to achieve satisfactory performance. A detection task consists of two sub-tasks, classification and localization, and under dense object detection, classification is key to boosting detection performance. With this in mind, we propose similarity-based region proposal networks (SRPN) for nuclei and cell detection in histology images. In particular, a customised convolution layer, termed the embedding layer, is designed for network building. The embedding layer is added to the region proposal networks, enabling the networks to learn discriminative features through similarity learning. Features obtained by similarity learning can significantly boost classification performance compared with conventional methods. SRPN can be easily integrated into standard convolutional neural network architectures such as Faster R-CNN and RetinaNet. We test the proposed approach on multi-organ nuclei detection and signet ring cell detection in histological images. Experimental results show that networks applying similarity learning achieved superior performance on both tasks compared with their counterparts. In particular, the proposed SRPN achieves state-of-the-art performance on the MoNuSeg benchmark for nuclei segmentation and detection compared with previous methods, and on the signet ring cell detection benchmark compared with baselines. The source code is publicly available at: https://github.com/sigma10010/nuclei_cells_det.
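One generic way to realize a similarity-learning classification head, in the spirit of the embedding layer described above, is cosine similarity against learned class prototypes; this is a hedged sketch, not the SRPN implementation.

```python
import torch.nn.functional as F

def similarity_logits(embeddings, prototypes, scale=10.0):
    """Score each proposal embedding against per-class prototype vectors by
    cosine similarity; the scaled similarities act as classification logits."""
    e = F.normalize(embeddings, dim=1)   # (N, D) proposal embeddings
    p = F.normalize(prototypes, dim=1)   # (C, D) class prototypes
    return scale * e @ p.t()             # (N, C) logits
```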

11.
Left ventricular (LV) segmentation is essential for the early diagnosis of cardiovascular diseases, which have been reported as the leading cause of death worldwide. However, automated LV segmentation from cardiac magnetic resonance images (CMRI) using traditional convolutional neural networks (CNNs) remains challenging due to limited labeled CMRI data and low tolerance to the irregular scales, shapes, and deformations of the LV. In this paper, we propose an automated LV segmentation method based on adversarial learning that integrates a multi-stage pose estimation network (MSPN) and a co-discrimination network. Unlike existing CNNs, we use an MSPN with multi-scale dilated convolution (MDC) modules to enlarge the receptive fields for deep feature extraction. To fully utilize both labeled and unlabeled CMRI data, we propose a novel generative adversarial network (GAN) framework for LV segmentation by combining the MSPN with co-discrimination networks. Specifically, labeled CMRI data are first used to initialize the segmentation network (MSPN) and the co-discrimination network. Our GAN training then alternates between two kinds of epochs, fed with labeled and unlabeled CMRI data respectively, unlike traditional CNNs, which rely only on limited labeled samples to train the segmentation network. As both ground truth and unlabeled samples are involved in guiding training, our method not only converges faster but also achieves better LV segmentation performance. Our method is evaluated on the MICCAI 2009 and 2017 challenge databases. Experimental results show that it achieves promising performance in LV segmentation and outperforms state-of-the-art methods in segmentation accuracy.

12.
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible by the introduction of locally-constrained routing and transformation-matrix sharing, which reduce the parameter/memory burden and allow for the segmentation of objects at large resolutions. To compensate for the loss of global information caused by constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate the method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our lung segmentation experiments represent the largest-scale study of pathological lung segmentation in the literature, conducted across five extremely challenging datasets containing both clinical and pre-clinical subjects and nearly 2000 CT scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters of the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to generalize to unseen rotations/reflections on natural images.

13.
In medical image segmentation, supervised machine learning models trained on one image modality (e.g., computed tomography (CT)) are often prone to failure when applied to another modality (e.g., magnetic resonance imaging (MRI)), even for the same organ, owing to the significant intensity variations between modalities. In this paper, we propose a novel end-to-end deep neural network for multi-modality image segmentation in which image labels are available only for one modality (the source domain) and not for the other (the target domain). In our method, a multi-resolution locally normalized gradient magnitude approach is first applied to images of both domains to minimize the intensity discrepancy. Subsequently, a dual-task encoder-decoder network performing both image segmentation and reconstruction is used to adapt the segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed by leveraging adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent, shape-aware latent feature representation from both domains. We implement both 2D and 3D versions of our method and evaluate CT and MRI images for kidney and cardiac tissue segmentation. For the kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were used; the cardiac dataset was from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our method achieves significantly higher performance with much lower model complexity than other state-of-the-art methods. More importantly, it also produces superior segmentation results for images of an unseen target domain without model retraining. The code is available on GitHub (https://github.com/MinaJf/LMISA) to encourage method comparison and further research.
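The modality-reducing preprocessing can be sketched at a single scale as below; applying it over several Gaussian scales would give the multi-resolution variant. The smoothing scale and the exact normalization form are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_normalized_gradient(img, sigma=8.0, eps=1e-6):
    """Gradient magnitude divided by its local (Gaussian-smoothed) average,
    suppressing modality-specific intensity scales while keeping edges."""
    gx = sobel(img, axis=0)
    gy = sobel(img, axis=1)
    mag = np.hypot(gx, gy)
    return mag / (gaussian_filter(mag, sigma) + eps)
```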

14.
We present a novel deep multi-task learning method for medical image segmentation. Existing multi-task methods demand ground-truth annotations for both the primary and auxiliary tasks. In contrast, we propose to generate the pseudo-labels of an auxiliary task in an unsupervised manner, leveraging Histograms of Oriented Gradients (HOG), one of the most widely used and powerful hand-crafted features for detection. Together with the ground-truth semantic segmentation masks for the primary task and the pseudo-labels for the auxiliary task, we learn the parameters of the deep network to jointly minimize the losses of both tasks. We apply our method to two widely used semantic segmentation networks, UNet and U2Net, trained in a multi-task setup. To validate our hypothesis, we performed experiments on two different medical image segmentation datasets. Extensive quantitative and qualitative results show that our method consistently improves performance over the counterpart method. Moreover, our method won the FetReg EndoVis sub-challenge on semantic segmentation organised in conjunction with MICCAI 2021. Code and implementation details are available at: https://github.com/thetna/medical_image_segmentation.
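Generating a HOG-based auxiliary target is straightforward with off-the-shelf tooling; the sketch below uses scikit-image's HOG visualization as a dense per-pixel map that an auxiliary decoder head could regress. The cell/block sizes are assumptions, and the paper's exact pseudo-label construction may differ.

```python
import numpy as np
from skimage.feature import hog

def hog_pseudo_label(image):
    """Return a dense HOG rendering of a 2D grayscale image, usable as an
    unsupervised regression target for an auxiliary task."""
    _, hog_map = hog(image, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), visualize=True)
    return hog_map.astype(np.float32)
```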

15.
Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery. So far, state-of-the-art approaches rely completely on the availability of a ground-truth supervision signal, obtained via manual annotation and thus expensive to collect at large scale. In this paper, we present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation. FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos, relying solely on implicit motion information and instrument shape-priors. We define shape-priors as realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos. The shape-priors can be collected in various convenient ways, such as recycling existing annotations from other datasets. We leverage them as part of a novel generative-adversarial approach, allowing unsupervised instrument segmentation of optical-flow images during training. We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this end, we develop a learning-from-noisy-labels architecture designed to extract a clean supervision signal from these pseudo-labels by leveraging their peculiar noise properties. We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset. The fully-unsupervised results for surgical instrument segmentation are almost on par with those of fully-supervised state-of-the-art approaches, suggesting the tremendous potential of the proposed method to leverage the vast amount of unlabelled data produced in minimally invasive surgery.

16.
To fully define the target objects of interest in clinical diagnosis, many deep convolutional neural networks (CNNs) use multimodal paired, registered images as inputs for segmentation tasks. However, such paired images are difficult to obtain in some cases. Furthermore, CNNs trained on one specific modality may fail on others when images are acquired with different imaging protocols and scanners. Therefore, developing a unified model that can segment the target objects from unpaired multiple modalities is significant for many clinical applications. In this work, we propose a 3D unified generative adversarial network which unifies any-to-any modality translation and multimodal segmentation in a single network. Since the anatomical structure is preserved during modality translation, the auxiliary translation task is used to extract modality-invariant features and implicitly generate additional training data. To fully utilize the segmentation-related features, we add a cross-task skip connection with feature recalibration from the translation decoder to the segmentation decoder. Experiments on abdominal organ segmentation and brain tumor segmentation indicate that our method outperforms existing unified methods.

17.
There is a large body of literature linking anatomic and geometric characteristics of kidney tumors to perioperative and oncologic outcomes. Semantic segmentation of these tumors and their host kidneys is a promising tool for quantitatively characterizing these lesions, but its adoption is limited by the manual effort required to produce high-quality 3D segmentations of these structures. Recently, methods based on deep learning have shown excellent results in automatic 3D segmentation, but they require large training datasets, and there remains little consensus on which methods perform best. The 2019 Kidney and Kidney Tumor Segmentation challenge (KiTS19) was a competition held in conjunction with the 2019 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) which sought to address these issues and stimulate progress on this automatic segmentation problem. A training set of 210 cross-sectional CT images with kidney tumors was publicly released with corresponding semantic segmentation masks. 106 teams from five continents used this data to develop automated systems to predict the true segmentation masks on a test set of 90 CT images, for which the corresponding ground-truth segmentations were kept private. These predictions were scored and ranked according to their average Sørensen-Dice coefficient for the kidney and tumor across all 90 cases. The winning team achieved a Dice of 0.974 for kidney and 0.851 for tumor, approaching the inter-annotator performance on kidney (0.983) but falling short on tumor (0.923). The challenge has now entered an "open leaderboard" phase, where it serves as a challenging benchmark in 3D semantic segmentation.
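For reference, the Sørensen-Dice coefficient used to score submissions is computed per case on binary masks; a minimal implementation (the challenge's averaging across cases follows from this):

```python
import numpy as np

def dice(pred, gt):
    """Sørensen-Dice coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), defined as 1.0 when both masks are empty."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```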

18.
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes, appearances, and lesion-to-background contrast (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver-tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas for tumor segmentation the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis of liver tumor detection and found that not all top-performing segmentation algorithms worked well for tumor detection: the best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both the data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
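Lesion-wise recall treats each connected component of the ground truth as one lesion and asks whether the prediction hits it. A minimal sketch follows; the 50% overlap threshold is an assumption, as the benchmark's exact matching rule is not reproduced here.

```python
import numpy as np
from scipy.ndimage import label

def lesionwise_recall(pred, gt, min_overlap=0.5):
    """Fraction of ground-truth lesions (connected components) whose voxels
    are covered by the prediction above `min_overlap`."""
    gt_cc, n_lesions = label(gt)
    if n_lesions == 0:
        return 1.0
    hits = 0
    for i in range(1, n_lesions + 1):
        lesion = gt_cc == i
        if np.logical_and(lesion, pred).sum() / lesion.sum() >= min_overlap:
            hits += 1
    return hits / n_lesions
```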

19.
Skin lesion segmentation from dermoscopy images is a fundamental yet challenging task in computer-aided skin diagnosis systems due to large variations in the views and scales of lesion areas. We propose a novel and effective generative adversarial network (GAN) to meet these challenges. Specifically, the architecture integrates two modules: a skip-connection and dense-convolution U-Net (UNet-SCDC) segmentation module and a dual discrimination (DD) module. While the UNet-SCDC module uses dense dilated convolution blocks to generate a deep representation that preserves fine-grained information, the DD module uses two discriminators to jointly decide whether its input is real or fake. One discriminator, with a traditional adversarial loss, focuses on differences at the boundaries between the generated segmentation masks and the ground truths, while the other examines the contextual environment of the target object in the original image using a conditional discriminative loss. We integrate these two modules and train the proposed GAN end-to-end. The proposed GAN is evaluated on the public International Skin Imaging Collaboration (ISIC) Skin Lesion Challenge datasets of 2017 and 2018. Extensive experimental results demonstrate that the proposed network achieves segmentation performance superior to state-of-the-art methods.

20.
The automatic segmentation of lumbar anatomy is a fundamental problem for the diagnosis and treatment of lumbar disease. Recent developments in deep learning have led to remarkable progress on this task, including the possibility of segmenting nerve roots, intervertebral discs, and the dural sac in a single step. Despite these advances, lumbar anatomy segmentation remains challenging due to the weak contrast and noise of input images, as well as the variability in intensity and size of lumbar structures across subjects. To overcome these challenges, we propose a coarse-to-fine deep neural network framework for lumbar anatomy segmentation, which obtains more accurate segmentations using two strategies. First, a progressive refinement process is employed to correct low-confidence regions by enhancing the feature representation in those regions. Second, a grayscale self-adjusting network (GSA-Net) is proposed to dynamically optimize the distribution of intensities. Experiments on datasets comprising 3D computed tomography (CT) and magnetic resonance (MR) images show the advantage of our method over current segmentation approaches and its potential for the diagnosis and treatment of lumbar disease.
