Similar Articles
20 similar articles found (search time: 31 ms)
1.
Whole-abdomen organ segmentation is important for diagnosing abdominal lesions, radiotherapy, and follow-up. However, having oncologists delineate all abdominal organs in 3D volumes is time-consuming and very expensive. Deep learning-based medical image segmentation has shown the potential to reduce manual delineation effort, but it still requires a large, finely annotated dataset for training, and no large-scale dataset exists that covers the whole abdominal region with accurate and detailed annotations for whole-abdomen organ segmentation. In this work, we establish a new large-scale Whole abdominal ORgan Dataset (WORD) for algorithm research and clinical application development. The dataset contains 150 abdominal CT volumes (30,495 slices), each with fine pixel-level annotations and scribble-based sparse annotations for 16 organs; to our knowledge, it is the largest dataset with whole-abdomen organ annotation. Several state-of-the-art segmentation methods are evaluated on this dataset, and we also invited three experienced oncologists to revise the model predictions to measure the gap between deep learning methods and oncologists. We then investigate inference-efficient learning on WORD, since high-resolution images require large GPU memory and long inference times at test time, and evaluate scribble-based annotation-efficient learning on this dataset, since pixel-wise manual annotation is time-consuming and expensive. This work provides a new benchmark for the abdominal multi-organ segmentation task, and these experiments can serve as baselines for future research and clinical application development.

2.
One major factor that prevents the accurate delineation of human organs has been the presence of severe pathology, particularly pathology affecting organ borders. Overcoming these limitations is the focus of this study. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework and leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle, in a unified framework, the non-Gaussian gross errors caused by the weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to finer local details. For shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and the low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 liver segmentation competition database and ranks 3rd among all published state-of-the-art automatic methods. It is also evaluated on pathological organs (pathological liver and right lung) from 95 clinical CT scans, and its results are compared with three closely related methods. The applicability of the proposed method to segmenting various pathological organs (including some highly severe cases) is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate organ boundaries with an accuracy comparable to that of human raters.

3.
In post-operative radiotherapy for prostate cancer, precisely contouring the clinical target volume (CTV) to be irradiated is challenging: because the cancerous prostate gland has been surgically removed, the CTV encompasses the microscopic spread of tumor cells, which cannot be visualized in clinical images such as computed tomography or magnetic resonance imaging. In current clinical practice, physicians segment CTVs manually based on their relationship with nearby organs and other clinical information, which allows large inter-physician variability. Automating post-operative prostate CTV segmentation with traditional image segmentation methods has yielded suboptimal results. We propose using deep learning to accurately segment post-operative prostate CTVs. The proposed model is trained using labels that were clinically approved and used for patient treatment. To segment the CTV, we first segment nearby organs, then use their relationship with the CTV to assist CTV segmentation. To ease the encoding of distance-based features, which are important for learning both the CTV contours' overlap with the surrounding organs at risk (OARs) and the distance from their borders, we add distance prediction as an auxiliary task to the CTV network. To make the model practical for clinical use, we apply Monte Carlo dropout (MCDO) to estimate model uncertainty. Using MCDO, we estimate and visualize the 95% upper and lower confidence bounds for each prediction, which informs physicians of areas that might require correction. The proposed model achieves an average Dice similarity coefficient (DSC) of 0.87 on a holdout test dataset, much better than established methods such as atlas-based methods (DSC < 0.7). The predicted contours agree with physician contours better than medical resident contours do. A reader study showed that the clinical acceptability of the automatically segmented CTV contours equals that of approved clinical contours manually drawn by physicians.
Our deep learning model can accurately segment CTVs with the help of surrounding organ masks. Because the framework can outperform residents, it can be implemented in a clinical workflow to generate initial CTV contours, or to guide residents in generating contours for physicians to review and revise. Providing physicians with the 95% confidence bounds could streamline the review process, enabling them to concentrate their inspection and editing effort on the most uncertain areas.
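The Monte Carlo dropout procedure this abstract describes can be sketched as below. This is a minimal illustration, not the authors' implementation: `predict_fn` stands in for any dropout-enabled network kept in stochastic mode at inference, and the toy noisy model is purely hypothetical.

```python
import numpy as np

def mc_dropout_bounds(predict_fn, image, n_samples=20, alpha=0.05):
    """Run a dropout-enabled model n_samples times on the same image and
    return the mean probability map with per-voxel confidence bounds."""
    samples = np.stack([predict_fn(image) for _ in range(n_samples)])
    mean = samples.mean(axis=0)
    lower = np.percentile(samples, 100 * (alpha / 2), axis=0)
    upper = np.percentile(samples, 100 * (1 - alpha / 2), axis=0)
    return mean, lower, upper

# Toy stand-in for a stochastic network: a fixed map plus dropout-like noise.
rng = np.random.default_rng(0)
base = np.array([[0.1, 0.9], [0.8, 0.2]])
noisy_model = lambda img: np.clip(base + rng.normal(0.0, 0.05, base.shape), 0.0, 1.0)

mean, lower, upper = mc_dropout_bounds(noisy_model, None, n_samples=50)
band = upper - lower  # a wide band flags regions worth physician review
```

In practice the same idea is realized by leaving the dropout layers active at test time and repeating the forward pass.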

4.
Automatic segmentation of organs at risk is crucial to aid diagnosis and remains a challenging task in the medical image analysis domain. We use multi-task learning (MTL) to accurately determine the contours of organs at risk in CT images, training an encoder-decoder network for two tasks in parallel. The main task is organ segmentation, a pixel-level classification of the CT images; the auxiliary task is multi-label organ classification, an image-level multi-label classification of the same images. To boost the performance of the multi-label classification, we propose a weighted mean cross-entropy loss function for network training, where the weights are the global conditional probabilities between pairs of organs. Building on MTL, we optimize a false positive filtering (FPF) algorithm to decrease the number of falsely segmented organ pixels in the CT images, and propose a dynamic threshold selection (DTS) strategy to prevent true positive rates from decreasing when the FPF algorithm is applied. We validate these methods on the public ISBI 2019 Segmentation of Thoracic Organs at Risk (SegTHOR) challenge dataset and a private medical organ dataset. The experimental results show that networks using our proposed methods outperform basic encoder-decoder networks without increasing training time complexity.
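A minimal sketch of a weighted mean cross-entropy of this kind is shown below. The exact weighting scheme in the paper may differ; `conditional_prob_matrix` is an assumed helper that derives weights from organ co-occurrence in the training set.

```python
import numpy as np

def conditional_prob_matrix(presence):
    """presence: (n_images, n_organs) binary matrix of organ presence.
    Entry [i, j] estimates P(organ j present | organ i present)."""
    counts = presence.T @ presence             # co-occurrence counts
    diag = np.maximum(np.diagonal(counts), 1)  # avoid division by zero
    return counts / diag[:, None]

def weighted_mean_bce(pred, target, weights, eps=1e-7):
    """Per-organ binary cross-entropy averaged with per-organ weights."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float(np.sum(weights * bce) / np.sum(weights))

# Two organs over three training images: organ 0 always present,
# organ 1 present in two of three images.
presence = np.array([[1, 1], [1, 0], [1, 1]])
W = conditional_prob_matrix(presence)
```

The conditional-probability matrix (or quantities derived from it) can then supply the `weights` argument per organ.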

5.
Background

Fully automatic medical image segmentation has been a long-standing pursuit in radiotherapy (RT). Recent developments involving deep learning show promising results, yielding consistent and time-efficient contours. To train and validate these systems, geometric metrics such as the Dice Similarity Coefficient (DSC), the Hausdorff distance, and related measures are currently the standard in automated medical image segmentation challenges. However, the relevance of these metrics in RT is questionable. The quality of automated segmentation results needs to reflect clinically relevant treatment outcomes, such as dosimetry and the related tumor control and toxicity. In this study, we investigate the correlation between popular geometric segmentation metrics and dose parameters for organs at risk (OARs) in brain tumor patients, and investigate properties that might be predictive of dose changes in brain radiotherapy.

Methods

A retrospective database of glioblastoma multiforme patients was stratified for planning difficulty, from which 12 cases were selected, and reference sets of OARs and radiation targets were defined. To assess the relation between segmentation quality (as measured by standard segmentation assessment metrics) and the quality of RT plans, clinically realistic yet alternative contours for each OAR of the selected cases were obtained in three ways: (i) manual contours by two additional human raters; (ii) realistic manual manipulations of the reference contours; (iii) deep learning-based segmentation results. A reference plan was generated on the reference structure set and re-optimized for each corresponding alternative contour set. The correlation between segmentation metrics and dosimetric changes was obtained and analyzed for each OAR by means of the mean dose and the maximum dose to 1% of the volume (Dmax 1%). Furthermore, we conducted specific experiments to investigate the dosimetric effect of alternative OAR contours with respect to proximity to the target, size, particular shape, and relative location to the target.

Results

We found a low correlation between the DSC of the alternative OAR contours and dosimetric changes. The Pearson correlation coefficient between the mean OAR dose effect and the Dice was -0.11; for Dmax 1%, it was -0.13. Similarly low correlations were found for 22 other segmentation metrics. The organ-based analysis showed a better correlation for the larger OARs (brainstem and eyes) than for the smaller ones (optic nerves and chiasm). Furthermore, we found that proximity to the target does not make contour variations more susceptible to the dose effect. However, the direction of the contour variation with respect to the relative location of the target appears strongly correlated with the dose effect.

Conclusions

This study shows a low correlation between segmentation metrics and dosimetric changes for OARs in brain tumor patients. The results suggest that the current metrics for image segmentation in RT, as well as deep learning systems employing such metrics, need to be revisited in favor of clinically oriented metrics that better reflect how segmentation quality affects dose distribution and the related tumor control and toxicity.
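The two quantities this study correlates, a per-case Dice Similarity Coefficient and the Pearson correlation between segmentation quality and dose change, can be computed as in the following sketch. The mask and dose arrays are illustrative placeholders, not the study's data.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Illustrative pairing of per-case DSC values with mean-dose changes (Gy).
dsc_values = [0.95, 0.90, 0.85, 0.80]
dose_change = [0.2, 1.5, 0.4, 1.1]
r = pearson(dsc_values, dose_change)  # a small |r| mirrors the paper's finding
```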

6.
Automated segmentation of pancreatic cancer is vital for clinical diagnosis and treatment. However, the small size and inconspicuous boundaries of the tumor limit segmentation performance, a problem further exacerbated for deep learning techniques by the scarcity of training samples that results from the high cost of image acquisition and annotation. To alleviate the limitations of a small-scale dataset, we collect idle multi-parametric MRIs of pancreatic cancer from different studies to construct a relatively large dataset for enhancing CT pancreatic cancer segmentation, and propose a deep learning segmentation model with a dual meta-learning framework. The model integrates the common knowledge of tumors obtained from the idle MRIs with the salient knowledge from CT images, making high-level features more discriminative. Specifically, random intermediate modalities between MRI and CT are first generated to smoothly bridge the gap in visual appearance and to provide rich intermediate representations for the ensuing meta-learning scheme. We then employ intermediate-modality-based model-agnostic meta-learning to capture and transfer commonalities. Finally, a meta-optimizer adaptively learns the salient features within the CT data, alleviating interference due to internal differences. Comprehensive experiments demonstrate that our method achieves promising segmentation performance, with a maximum Dice score of 64.94% on our private dataset, and outperforms state-of-the-art methods on a public pancreatic cancer CT dataset. The proposed framework can be easily integrated into other segmentation networks and thus promises to be a paradigm for alleviating data scarcity using idle data.

7.
Accurate delineation of multiple organs is a critical step in various medical procedures and can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, may not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs of various shapes and sizes simultaneously. In this work, we exploit a characteristic of multi-organ segmentation: the global count, position, and scale of organs are generally predictable, while their local shape and appearance are volatile. We therefore supplement the region segmentation backbone with a contour localization task to increase certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to handle class variability with class-wise convolutions that highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate numbers of patients and organs, we constructed a multi-center dataset containing 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on this dataset validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, with an average 95% Hausdorff Distance of 3.63 mm and an average Dice Similarity Coefficient of 83.32%.

8.

Aim

External beam radiation therapy attempts to deliver a high dose of ionizing radiation to destroy cancerous tissue while sparing healthy tissues and organs at risk (OAR). Recent advances in intensity-modulated radiotherapy call for a greater understanding of the uncertainties in the treatment process and more rigorous protocols leading to greater precision in treatment delivery. The degree to which this can be achieved depends largely on the cancer site. Treatment of organs comprising soft tissue (e.g. in the abdomen) and of organs subject to rhythmic movement (e.g. the lungs) is particularly problematic, as both cause inter- and intra-fraction motion artifacts. Various methods have been developed to tackle the problems caused by organ motion during radiotherapy, e.g. Real-time Position Management respiratory gating (Varian) and synchronized moving aperture radiation therapy, developed by researchers at Harvard Medical School.

Objective

Most existing work focuses on tracking the position of the pathologic region, while the intra-fraction shape variation of the region is largely ignored.

Materials and Methods

This paper proposes a novel method that addresses both the position and shape variation caused by the intra-fraction movement.

Conclusion

We believe this approach can reduce the clinical target volume margin, sparing more of the surrounding healthy tissue from radiation exposure and limiting the irradiation of OAR.

9.
Post-prostatectomy radiotherapy requires accurate annotation of the prostate bed (PB), i.e., the residual tissue after operative removal of the prostate gland, to minimize side effects on surrounding organs-at-risk (OARs). However, PB segmentation in computed tomography (CT) images is a challenging task, even for experienced physicians, because the PB is almost a "virtual" target with non-contrast boundaries and highly variable shapes that depend on the neighboring OARs. In this work, we propose an asymmetric multi-task attention network (AMTA-Net) for the concurrent segmentation of the PB and surrounding OARs. AMTA-Net mimics experts in delineating the non-contrast PB by explicitly leveraging its critical dependency on the neighboring OARs (the bladder and rectum), which are relatively easy to distinguish in CT images. Specifically, we first adopt a U-Net as the backbone network for the low-level (prerequisite) task of OAR segmentation. We then build an attention sub-network upon the backbone with a series of cascaded attention modules, which hierarchically transfer the OAR features and adaptively learn discriminative representations for the high-level (primary) task of PB segmentation. We comprehensively evaluate AMTA-Net on a clinical dataset of 186 CT images. The experimental results show that AMTA-Net significantly outperforms the current clinical state of the art (atlas-based segmentation methods), indicating its value in reducing time and labor in the clinical workflow. It also outperforms the technical state of the art (deep learning-based segmentation methods), especially for the most indistinguishable and clinically critical parts of the PB boundary. Source code is released at https://github.com/superxuang/amta-net.

10.
We propose a method for fast, accurate, and robust localization of several organs in medical images, generalizing the global-to-local cascade of regression random forests to multiple organs. A first regressor encodes the global relationships between organs, learning all organ parameters simultaneously. Subsequent regressors then refine the localization of each organ locally and independently for improved accuracy. By combining the regression vote distribution with an organ shape prior (through a probabilistic atlas representation), we compute confidence maps, i.e., organ-dedicated probability maps. These are used within the cascade itself, to better select the test voxels for the second set of regressors, and, thanks to the shape prior, they provide richer information than the classical bounding-box result. We present an extensive study of the different learning and testing parameters, showing both their robustness to reasonable perturbations and their influence on the final accuracy. Finally, we demonstrate the robustness and accuracy of our approach by evaluating the localization of six abdominal organs (liver, two kidneys, spleen, gallbladder, and stomach) on a large and diverse database of 130 CT volumes. A comparison of our results with two existing methods shows significant improvements brought by our approach and by our careful optimization of the parameters.

11.
《Medical image analysis》2014,18(7):1233-1246
Osteoarthritis (OA) is the most common form of joint disease and is often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise for reliable, high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. Particular challenges are the thinness of cartilage, its relatively small volume in comparison to surrounding tissue, and the difficulty of locating cartilage interfaces, for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion that can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study, including comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare with other cartilage segmentation approaches, we also validate on the 50 images of the SKI10 dataset.

12.
Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and the latent label of the input image patch is then predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between patch appearance in the image domain and patch structure in the label domain, representation coefficients estimated in the image domain may not be optimal for the final label fusion, reducing labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance steering the transition of representation coefficients from the image domain to the label domain. Our multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our progressive label fusion method achieves more accurate hippocampal segmentation on the ADNI dataset than counterpart methods using only a single-layer static dictionary.
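The basic single-layer patch-based label fusion step that this work extends can be sketched as follows. Gaussian similarity weights are one common choice of representation; the paper's progressive multi-layer dynamic dictionary is not reproduced here, and all names and values are illustrative.

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Patch-based fusion for one voxel: weight each atlas patch by its
    Gaussian similarity to the target patch, then average the labels."""
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)  # patch distances
    w = np.exp(-d2 / (2.0 * h ** 2))                        # similarity weights
    w = w / w.sum()                                         # normalize to sum 1
    return float(w @ atlas_labels)  # soft label in [0, 1] for the center voxel

# Two flattened atlas patches: one identical to the target (label 1),
# one very different (label 0); fusion should favor the first.
atlas_patches = np.array([[0.0, 0.0], [5.0, 5.0]])
atlas_labels = np.array([1.0, 0.0])
soft = fuse_labels(np.array([0.0, 0.0]), atlas_patches, atlas_labels)
```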

13.
In medical image segmentation, supervised machine learning models trained on one image modality (e.g. computed tomography (CT)) are often prone to failure when applied to another modality (e.g. magnetic resonance imaging (MRI)), even for the same organ, due to the significant intensity variations between modalities. In this paper, we propose a novel end-to-end deep neural network for multi-modality image segmentation, where image labels are available only for one modality (the source domain) and not for the other (the target domain). In our method, a multi-resolution locally normalized gradient magnitude approach is first applied to images of both domains to minimize the intensity discrepancy. A dual-task encoder-decoder network covering image segmentation and reconstruction is then used to effectively adapt the segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed by leveraging adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent, shape-aware latent feature representation from both domains. We implement both 2D and 3D versions of our method and evaluate CT and MRI images for kidney and cardiac tissue segmentation. For the kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were used; the cardiac dataset is from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our method achieves significantly higher performance with much lower model complexity than other state-of-the-art methods. More importantly, it also produces superior segmentation results for images of an unseen target domain without model retraining.
The code is available at GitHub (https://github.com/MinaJf/LMISA) to encourage method comparison and further research.

14.
Multi-sequence cardiac magnetic resonance (CMR) provides essential pathology information (scar and edema) for diagnosing myocardial infarction. However, automatic pathology segmentation is challenging because it is difficult to effectively exploit the underlying information in multi-sequence CMR data. This paper tackles scar and edema segmentation from multi-sequence CMR with a novel auto-weighted supervision framework, in which the interactions among different supervised layers are explored under a task-specific objective using reinforcement learning. Furthermore, we design a coarse-to-fine framework to boost the segmentation of small myocardial pathology regions with shape prior knowledge: the coarse segmentation model identifies the left-ventricle myocardial structure as a shape prior, while the fine segmentation model integrates a pixel-wise attention strategy with the auto-weighted supervision model to learn and extract salient pathological structures from the multi-sequence CMR data. Extensive experiments on the publicly available dataset of the Myocardial Pathology Segmentation combining multi-sequence CMR challenge (MyoPS 2020) demonstrate that our method achieves promising performance compared with other state-of-the-art methods, and is promising for advancing myocardial pathology assessment on multi-sequence CMR data. To motivate the community, we have made our code publicly available at https://github.com/soleilssss/AWSnet/tree/master.

15.
Knee cartilage and bone segmentation is critical for physicians to analyze and diagnose articular damage and knee osteoarthritis (OA). Deep learning (DL) methods for medical image segmentation have largely outperformed traditional methods, but they often need large amounts of annotated data for model training, which is very costly and time-consuming for medical experts to produce, especially for 3D images. In this paper, we report a new knee cartilage and bone segmentation framework, KCB-Net, for 3D MR images based on sparse annotation. KCB-Net selects a small subset of slices from the 3D images for annotation and seeks to bridge the performance gap between sparse and full annotation. Specifically, it first identifies a subset of the most effective and representative slices with an unsupervised scheme; it then trains an ensemble model using the annotated slices; next, it self-trains the model using 3D images with pseudo-labels generated by the ensemble and improved by a bi-directional hierarchical earth mover's distance (bi-HEMD) algorithm; finally, it fine-tunes the segmentation results using the primal-dual interior point method (IPM). Experiments on four 3D MR knee joint datasets (the SKI10, OAI ZIB, Iowa, and iMorphics datasets) show that our framework outperforms state-of-the-art methods trained with full annotation, and yields high-quality results even for annotation ratios as small as 10%.

16.
3D ultrasound measurement of large organ volume
Freehand 3D ultrasound is particularly appropriate for the measurement of organ volumes. For small organs, which can be fully examined with a single sweep of the ultrasound probe, the results are known to be much more accurate than those using conventional 2D ultrasound. However, large or complex shaped organs are difficult to quantify in this manner because multiple sweeps are required to cover the entire organ. Typically, there are significant registration errors between the various sweeps, which generate artifacts in an interpolated voxel array, making segmentation of the organ very difficult. This paper describes how sequential freehand 3D ultrasound, which does not employ an interpolated voxel array, can be used to measure the volume of large organs. Partial organ cross-sections can be segmented in the original B-scans, and then combined, without the need for image-based registration, to give the organ volume. The inherent accuracy (not including position sensor and segmentation errors) is demonstrated in simulation to be within +/- 2%. The in vivo precision of the complete system is demonstrated (by repeated observations of a human liver) to be +/- 5%.
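The underlying volume estimate from segmented cross-sections can be illustrated with a Cavalieri-style sum, assuming roughly parallel, evenly spaced B-scans. This is only a sketch of the principle: the paper's sequential method additionally handles multiple unregistered sweeps, which this simplification does not capture.

```python
def organ_volume(slice_areas_cm2, spacing_cm):
    """Cavalieri estimate: total volume is the sum of segmented
    cross-section areas multiplied by the inter-slice spacing."""
    return sum(slice_areas_cm2) * spacing_cm

# Three parallel B-scan cross-sections of 2 cm^2 each, 0.5 cm apart.
volume = organ_volume([2.0, 2.0, 2.0], 0.5)
```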

17.
Instrument segmentation plays a vital role in 3D ultrasound (US)-guided cardiac intervention. Efficient and accurate segmentation during the operation is highly desirable, since it can facilitate the operation, reduce operational complexity, and thereby improve outcomes. Nevertheless, current image-based instrument segmentation methods are neither efficient nor accurate enough for clinical use. Lately, fully convolutional networks (FCNs), both 2D and 3D, have been applied to various volumetric segmentation tasks. However, a 2D FCN cannot exploit the 3D contextual information in volumetric data, while a 3D FCN requires high computation cost and a large amount of training data. Moreover, with limited computation resources, 3D FCNs are commonly applied with a patch-based strategy, which is therefore inefficient for clinical applications. To address this, we propose POI-FuseNet, which consists of a patch-of-interest (POI) selector and a FuseNet. The POI selector efficiently selects the regions of interest containing the instrument, while FuseNet uses 2D and 3D FCN features to hierarchically exploit contextual information. Furthermore, we propose a hybrid loss function, consisting of a contextual loss and a class-balanced focal loss, to improve the segmentation performance of the network. On a challenging ex-vivo dataset of RF-ablation catheters, our method achieved a Dice score of 70.5%, superior to state-of-the-art methods. In addition, starting from the model pre-trained on the ex-vivo dataset, our method can be adapted to an in-vivo guidewire dataset from a different cardiac operation, achieving a Dice score of 66.5%. More crucially, with the POI-based strategy, segmentation time is reduced to around 1.3 seconds per volume, showing that the proposed method is promising for clinical use.
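A standard binary focal loss of the kind combined into the hybrid loss can be sketched as below; the class-balancing term here uses the common alpha-weighting, which may differ from the paper's exact formulation.

```python
import numpy as np

def focal_loss(pred, target, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights well-classified pixels so that the
    rare instrument class dominates the gradient."""
    pred = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, pred, 1 - pred)   # probability of the true class
    a = np.where(target == 1, alpha, 1 - alpha)  # class-balancing term
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

easy = focal_loss(np.array([0.9]), np.array([1]))  # confident, correct pixel
hard = focal_loss(np.array([0.1]), np.array([1]))  # confident, wrong pixel
```

The `(1 - pt) ** gamma` factor is what shrinks the contribution of easy pixels relative to plain cross-entropy.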

18.
Automatic sigmoid colon segmentation in CT for radiotherapy treatment planning is challenging due to the organ's complex shape, its close proximity to other organs, and large variations in size, shape, and filling status. The patient's bowel is often not evacuated and CT contrast enhancement is not used, which further increases the difficulty. Deep learning (DL) has demonstrated its power in many segmentation problems, but standard 2D approaches cannot handle the sigmoid segmentation problem due to incomplete geometric information, while 3D approaches often face the challenge of a limited training data size. Motivated by the way humans segment the sigmoid slice by slice while considering the connectivity between adjacent slices, we propose an iterative 2.5-D DL approach. We constructed a network that takes as input an axial CT slice, the sigmoid mask in that slice, and an adjacent CT slice to be segmented, and outputs the predicted mask on the adjacent slice. We also considered other organ masks as prior information. We trained the iterative network on 50 patient cases using five-fold cross-validation, and repeatedly applied the trained network to generate masks slice by slice. The method achieved average Dice similarity coefficients of 0.82 ± 0.06 and 0.88 ± 0.02 in 10 test cases without and with prior information, respectively.
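The iterative slice-by-slice propagation can be sketched as below, with `predict_next` standing in for the trained 2.5-D network that maps (previous slice, previous mask, current slice) to the current mask; the toy predictor and slice values are purely illustrative.

```python
def propagate_masks(volume, start, start_mask, predict_next):
    """Iterative 2.5-D propagation: begin at one annotated axial slice and
    sweep outward, predicting each mask from the adjacent slice's result."""
    masks = {start: start_mask}
    for i in range(start + 1, len(volume)):  # sweep toward the last slice
        masks[i] = predict_next(volume[i - 1], masks[i - 1], volume[i])
    for i in range(start - 1, -1, -1):       # sweep toward the first slice
        masks[i] = predict_next(volume[i + 1], masks[i + 1], volume[i])
    return [masks[i] for i in range(len(volume))]

# Toy predictor that simply carries the previous mask forward.
copy_prev = lambda prev_img, prev_mask, cur_img: prev_mask
result = propagate_masks([10, 11, 12, 13], 2, "M", copy_prev)
```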

19.
The morphological evaluation of tumor-infiltrating lymphocytes (TILs) in hematoxylin and eosin (H&E)-stained histopathological images is key to breast cancer (BCa) diagnosis, prognosis, and therapeutic response prediction. At present, the qualitative assessment of TILs is carried out by pathologists, and computer-aided automatic lymphocyte measurement remains a great challenge because of the small size and complex distribution of lymphocytes. In this paper, we propose a novel dense dual-task network (DDTNet) to simultaneously perform automatic TIL detection and segmentation in histopathological images. DDTNet consists of a backbone network (a feature pyramid network) for extracting multi-scale morphological characteristics of TILs, a detection module for localizing TIL centers, and a segmentation module for delineating TIL boundaries, where a boundary-aware branch further provides a shape prior for segmentation. An effective feature fusion strategy introduces multi-scale features carrying lymphocyte location information from highly correlated branches for precise segmentation. Experiments on three independent BCa lymphocyte datasets demonstrate that DDTNet outperforms other advanced methods on detection and segmentation metrics. As part of this work, we also propose a semi-automatic method (TILAnno) to generate high-quality boundary annotations for TILs in H&E-stained histopathological images. TILAnno was used to produce a new lymphocyte dataset containing 5029 annotated lymphocyte boundaries, which has been released to facilitate future work in computational histopathology.

20.
Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery. So far, state-of-the-art approaches rely completely on the availability of a ground-truth supervision signal obtained via manual annotation, which is expensive to collect at large scale. In this paper, we present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation. FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos by relying solely on implicit motion information and instrument shape priors. We define shape priors as realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos; they can be collected in various convenient ways, such as recycling existing annotations from other datasets. We leverage them as part of a novel generative-adversarial approach that performs unsupervised instrument segmentation of optical-flow images during training. We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this end, we develop a learning-from-noisy-labels architecture designed to extract a clean supervision signal from these pseudo-labels by exploiting their peculiar noise properties. We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset. The fully unsupervised results for surgical instrument segmentation are almost on par with those of fully supervised state-of-the-art approaches, suggesting the great potential of the method to leverage the vast amount of unlabelled data produced in minimally invasive surgery.

