Similar Documents
20 similar documents found.
1.
Paediatric echocardiography is a standard method for screening congenital heart disease (CHD). The segmentation of paediatric echocardiography is essential for subsequent extraction of clinical parameters and interventional planning. However, it remains a challenging task due to (1) the considerable variation of key anatomic structures, (2) the poor lateral resolution affecting accurate boundary definition, and (3) the existence of speckle noise and artefacts in echocardiographic images. In this paper, we propose a novel deep network to address these challenges comprehensively. We first present a dual-path feature extraction module (DP-FEM) to extract rich features via a channel attention mechanism. A high- and low-level feature fusion module (HL-FFM) is devised based on spatial attention, which selectively fuses rich semantic information from high-level features with spatial cues from low-level features. In addition, a hybrid loss is designed to deal with pixel-level misalignment and boundary ambiguities. Based on the segmentation results, we derive key clinical parameters for diagnosis and treatment planning. We extensively evaluate the proposed method on 4,485 two-dimensional (2D) paediatric echocardiograms from 127 echocardiographic videos. The proposed method consistently achieves better segmentation performance than other state-of-the-art methods, which demonstrates its feasibility for automatic segmentation and quantitative analysis of paediatric echocardiography. Our code is publicly available at https://github.com/end-of-the-century/Cardiac.
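The channel attention idea behind the DP-FEM can be illustrated with a minimal squeeze-and-excitation style sketch in plain Python. The function name, the two-layer gating network, and the weights are illustrative assumptions, not the paper's actual architecture:

```python
import math

def channel_attention(feature_maps, w1, w2):
    """Generic squeeze-and-excitation style channel attention (sketch).

    feature_maps: list of 2D maps (one per channel), each a list of rows.
    w1, w2: weight matrices of the two fully connected gating layers.
    Returns the input maps rescaled by learned per-channel weights.
    """
    # Squeeze: global average pooling per channel.
    squeezed = []
    for fmap in feature_maps:
        total = sum(sum(row) for row in fmap)
        count = sum(len(row) for row in fmap)
        squeezed.append(total / count)

    # Excitation: FC -> ReLU -> FC -> sigmoid.
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w2]
    gates = [1.0 / (1.0 + math.exp(-s)) for s in scores]

    # Rescale each channel by its gate.
    return [[[v * g for v in row] for row in fmap]
            for fmap, g in zip(feature_maps, gates)]
```

In a real network the two weight matrices are learned end to end; here they are passed in explicitly to keep the sketch self-contained.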

2.
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data-hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our developed Cerberus model on a huge amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands and 2.1 million lumina. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.

3.

Semantic segmentation models based on deep learning have shown remarkable performance in road extraction from high-resolution aerial images. However, it is still a difficult task to segment multiscale roads with high completeness and accuracy from complex backgrounds. To deal with this problem, this letter proposes an end-to-end network named the multiple-features-integrated convolutional long short-term memory unit network (MFI-CLSTMN). First, in MFI-CLSTMN, the ConvLSTM unit is designed to explore and integrate the sequential correlations among features, which can alleviate the feature loss caused by the max-pooling operation. Second, the structure of dense concatenation and multiscale up-sampling combines detailed features with semantic information to preserve road details. Finally, at the optimization stage, a self-adaptive composite loss function is added to handle class imbalance, such that MFI-CLSTMN can effectively train on hard examples and avoid local optima. Experiments demonstrate that MFI-CLSTMN has higher segmentation accuracy and lower computational complexity than four comparative state-of-the-art models in a consistent environment. Moreover, MFI-CLSTMN is particularly effective at preventing breaks and fragmentation in segmented road networks, which other models struggle to achieve.

4.
Optical coherence tomography angiography (OCTA) is an advanced noninvasive vascular imaging technique that has important implications in many vision-related diseases. The automatic segmentation of retinal vessels in OCTA is understudied, and the existing segmentation methods require large-scale pixel-level annotated images. However, manually annotating labels is time-consuming and labor-intensive. Therefore, we propose a dual-consistency semi-supervised segmentation network incorporating multi-scale self-supervised puzzle subtasks (DCSS-Net) to tackle the challenge of limited annotations. First, we adopt a novel self-supervised task to assist the semi-supervised network during training and learn better feature representations. Second, we propose a dual-consistency regularization strategy that imposes data-based and feature-based perturbations to effectively utilize large amounts of unlabeled data, alleviate model overfitting, and generate more accurate segmentation predictions. Experimental results on two OCTA retina datasets validate the effectiveness of our DCSS-Net. With very little labeled data, the performance of our method is comparable with fully supervised methods trained on the entire labeled dataset.
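The data-perturbation half of a dual-consistency strategy can be sketched in a few lines of plain Python: predictions on an input and on a randomly perturbed copy are penalised for disagreeing, which needs no labels. The function names and the Gaussian-noise perturbation are illustrative assumptions, not the authors' exact design:

```python
import random

def consistency_loss(pred_fn, image, noise_std=0.1, seed=0):
    """Data-perturbation consistency: predictions on an input and on a
    noise-perturbed copy should agree (mean squared difference).

    pred_fn: model forward pass, mapping a flat pixel list to predictions.
    image: flat list of pixel values (unlabeled sample).
    """
    rng = random.Random(seed)
    perturbed = [p + rng.gauss(0.0, noise_std) for p in image]
    a, b = pred_fn(image), pred_fn(perturbed)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

A perturbation-invariant model drives this term to zero, so it can be added to the supervised loss to exploit unlabeled images.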

5.
Road segmentation from high-resolution visible remote sensing images provides an effective way to form road networks automatically. Recently, deep learning methods based on convolutional neural networks (CNNs) have been widely applied in road segmentation. However, it is a challenge for most CNN-based methods to achieve high segmentation accuracy when processing high-resolution visible remote sensing images with rich details. To handle this problem, we propose a road segmentation method based on a Y-shaped convolutional network (denoted Y-Net). Y-Net contains a two-arm feature extraction module and a fusion module. The feature extraction module includes a deep downsampling-to-upsampling sub-network for semantic features and a convolutional sub-network without downsampling for detail features. The fusion module combines all features for road segmentation. Benefiting from this scheme, Y-Net can segment multi-scale roads (both wide and narrow roads) from high-resolution images well. Testing and comparative experiments on a public dataset and a private dataset show that Y-Net has higher segmentation accuracy than four other state-of-the-art methods: FCN (Fully Convolutional Network), U-Net, SegNet, and FC-DenseNet (Fully Convolutional DenseNet). In particular, Y-Net accurately segments the contours of narrow roads, which are missed by the comparative methods.

6.
OBJECTIVE: The purpose of this study was to extract quantitative features for characterization of cervical lymph nodes on sonographic images and analyze the effect of a semiautomated segmentation method on the feature extraction. METHODS: Contours of 186 cervical lymph nodes on sonographic images were separately delineated by 2 radiologists (R1 and R2) and a semiautomated segmentation method. For each node, 10 kinds of sonographic features (including 3 parameters of size; 12 parameters of margin; 4 parameters of nodal border; 10 parameters of echogeneity; and 1 parameter each of shape, echogenicity, medulla ratio, medulla distribution, vascular density, and vascular pattern) were quantified by a computerized scheme based on the segmented contour. Correlations between the quantitative parameters and the radiologists' consensus grading were computed to assess the effectiveness of these parameters. For the 14 best-correlated parameters, the effect of the segmentation stage on the feature extraction was estimated by comparing the parameter values calculated under different segmentations in terms of relative ultimate measurement accuracy. RESULTS: Good correlations between the computerized scheme and radiologists were seen in features of size, nodal border, shape, echogenicity, medulla ratio, medulla distribution, vascular density, and vascular pattern, whereas 10 of 12 parameters of margin features and 8 of 10 parameters of echogeneity features showed poor correlations. Paired t tests comparing the relative ultimate measurement accuracy computed using the R1-R2 and the R1-computer pairings showed no significant difference for 11 of the 14 parameters analyzed. CONCLUSIONS: The computerized feature parameters may be used as assistive indices for evaluating cervical lymphadenopathies from sonographic images. The semiautomated segmentation method satisfied the accuracy requirement of the feature extraction.
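The correlation analysis above can be sketched with a standard Pearson correlation between a computed feature and the consensus grades, in plain Python. The abstract does not specify the correlation measure used, so Pearson is an illustrative assumption:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a computed feature (xs) and
    consensus grades (ys); both are equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Features whose |r| against the grading is highest would then be kept as the "best-correlated" parameters.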

7.
Automatic medical image segmentation plays a crucial role in many medical image analysis applications, such as disease diagnosis and prognosis. Despite the extensive progress of existing deep learning based models for medical image segmentation, they focus on extracting accurate features by designing novel network structures and solely utilize a fully connected (FC) layer for pixel-level classification. Considering the insufficient capability of the FC layer to encode the extracted diverse feature representations, we propose a Hierarchical Segmentation (HieraSeg) Network for medical image segmentation and devise a Hierarchical Fully Connected (HFC) layer. Specifically, it consists of three classifiers and decouples each category into several subcategories by introducing multiple weight vectors to denote the diverse characteristics in each category. Subcategory-level and category-level learning schemes are then designed to explicitly enforce the discrepant subcategories and automatically capture the most representative characteristics. Hence, the HFC layer can fit the variant characteristics so as to derive an accurate decision boundary. To enhance the robustness of the HieraSeg Network to the variability of lesions, we further propose a Dynamic-Weighting HieraSeg (DW-HieraSeg) Network, which introduces an Image-level Weight Net (IWN) and a Pixel-level Weight Net (PWN) to learn a data-driven curriculum. Through progressively incorporating informative images and pixels in an easy-to-hard manner, the DW-HieraSeg Network is able to escape local optima and accelerate the training process. Additionally, a class-balanced loss is proposed to constrain the PWN and prevent overfitting in minority categories.
Comprehensive experiments on three benchmark datasets, EndoScene, ISIC and Decathlon, show that our newly proposed HieraSeg and DW-HieraSeg Networks achieve state-of-the-art performance, which clearly demonstrates the effectiveness of the proposed approaches for medical image segmentation.
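The abstract does not give the form of its class-balanced loss; one common formulation weights each class by the inverse "effective number" of its samples, so minority categories contribute more. The sketch below uses that formulation purely as an assumption for illustration, not as the authors' actual loss:

```python
def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights from sample counts (illustrative sketch).

    Each class is weighted inversely to its effective number of samples,
    (1 - beta**n) / (1 - beta), so rarer classes get larger weights.
    Weights are normalised to sum to the number of classes.
    """
    eff = [(1.0 - beta ** n) / (1.0 - beta) for n in counts]
    raw = [1.0 / e for e in eff]
    scale = len(counts) / sum(raw)
    return [r * scale for r in raw]
```

The resulting weights would multiply the per-class loss terms, so a pixel of a rare lesion class counts more than a background pixel.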

8.
Medical Image Analysis 2015, 20(1):220-249
Contributions: We propose a novel framework for joint 3-D vessel segmentation and centerline extraction. The approach is based on multivariate Hough voting and oblique random forests (RFs) that we learn from noisy annotations. It relies on steerable filters for the efficient computation of local image features at different scales and orientations. Experiments: We validate both the segmentation performance and the centerline accuracy of our approach on synthetic vascular data and on four 3-D imaging datasets of the rat visual cortex at 700 nm resolution. First, we evaluate the most important structural components of our approach: (1) orthogonal subspace filtering in comparison to steerable filters, which show, qualitatively, similarities to the eigenspace filters learned from local image patches; (2) standard RFs against oblique RFs. Second, we compare the overall approach to different state-of-the-art methods for (1) vessel segmentation based on optimally oriented flux (OOF) and the eigenstructure of the Hessian, and (2) centerline extraction based on homotopic skeletonization and geodesic path tracing. Results: Our experiments reveal the benefit of steerable over eigenspace filters as well as the advantage of oblique split directions over univariate orthogonal splits. We further show that the learning-based approach outperforms different state-of-the-art methods and proves highly accurate and robust with regard to both vessel segmentation and centerline extraction, in spite of the high level of label noise in the training data.

9.
Graph refinement, or the task of obtaining subgraphs of interest from over-complete graphs, has many varied applications. In this work, we extract trees or collections of sub-trees from image data by first deriving a graph-based representation of the volumetric data and then posing the tree extraction as a graph refinement task. We present two methods to perform graph refinement. First, we use mean-field approximation (MFA) to approximate the posterior density over the subgraphs, from which the optimal subgraph of interest can be estimated. Mean field networks (MFNs) are used for inference, based on the interpretation that iterations of MFA can be seen as feed-forward operations in a neural network. This allows us to learn the model parameters using gradient descent. Second, we present a supervised learning approach using graph neural networks (GNNs), which can be seen as generalisations of MFNs. Subgraphs are obtained by training a GNN-based graph refinement model to directly predict edge probabilities. We discuss connections between the two classes of methods and compare them for the task of extracting airways from 3D, low-dose, chest CT data. We show that both the MFN and GNN models detect more branches with fewer false positives when compared to a baseline method, similar to a top-performing method in the EXACT'09 Challenge, and to a 3D U-Net based airway segmentation model.
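The mean-field iteration at the heart of the MFA/MFN approach can be sketched for binary edge variables in plain Python: each edge's marginal probability is repeatedly updated from its data evidence and the current beliefs of neighbouring edges. The unary/pairwise parameterisation below is a simplified illustrative assumption, not the paper's exact model:

```python
import math

def mean_field_edges(unary, pairwise, neighbours, iters=20):
    """Mean-field approximation over binary edge variables (sketch).

    unary[e]: data evidence that edge e belongs to the subgraph.
    pairwise: coupling strength rewarding agreement with neighbouring edges.
    neighbours[e]: indices of edges sharing a node with edge e.
    Returns approximate marginal probabilities q[e].
    """
    q = [0.5] * len(unary)  # uninformative initialisation
    for _ in range(iters):
        # Parallel update: each q[e] is a sigmoid of its local field.
        q = [1.0 / (1.0 + math.exp(-(unary[e]
                                     + pairwise * sum(q[n] for n in neighbours[e]))))
             for e in range(len(unary))]
    return q
```

Unrolling a fixed number of these updates and learning `unary`/`pairwise` by gradient descent is exactly the MFN interpretation the abstract describes.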

10.
With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned, represented, and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images. Additionally, there has been substantial recent interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification.
We also briefly review some of the state of the art in fusion of radiology and pathology images, as well as combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology.

11.
Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis. With the development of convolutional neural networks (CNNs), medical image segmentation performance has advanced significantly. However, most existing CNN-based methods often produce unsatisfactory segmentation masks without accurate object boundaries. This problem is caused by the limited context information and inadequate discriminative feature maps after consecutive pooling and convolution operations. Additionally, medical images are characterized by high intra-class variation, inter-class indistinction and noise, so extracting powerful context and aggregating discriminative features for fine-grained segmentation remain challenging. In this study, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation to capture richer context and preserve fine spatial information, which adopts an encoder-decoder architecture. In each stage of the encoder sub-network, a proposed pyramid edge extraction module first obtains multi-granularity edge information. Then a newly designed mini multi-task learning module jointly learns to segment object masks and detect lesion boundaries, in which a new interactive attention layer is introduced to bridge the two tasks. In this way, information complementarity between different tasks is achieved, which effectively leverages the boundary information to offer strong cues for better segmentation prediction. Finally, a cross feature fusion module acts to selectively aggregate multi-level features from the entire encoder sub-network. By cascading these three modules, richer context and fine-grained features of each stage are encoded and then delivered to the decoder. The results of extensive experiments on five datasets show that the proposed BA-Net outperforms state-of-the-art techniques.
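The lesion boundaries that such a boundary-detection task learns against can be derived from a ground-truth mask: a foreground pixel is a boundary pixel if it touches the background. A minimal plain-Python sketch with 4-connectivity follows; it is illustrative only, and far simpler than the paper's pyramid edge extraction module:

```python
def boundary_map(mask):
    """Mark foreground pixels that touch the background (4-connectivity).

    mask: 2D list of 0/1 values. Pixels outside the image count as background,
    so foreground pixels on the image border are boundary pixels too.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < 0 or ni >= h or nj < 0 or nj >= w or not mask[ni][nj]:
                    out[i][j] = 1
                    break
    return out
```

Supervising an auxiliary head with such a map is one simple way to inject boundary cues into a segmentation network.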

12.
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works and from a systematic viewpoint, examining how those choices have influenced current trends and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.

13.
Automatic and accurate segmentation of dental models is a fundamental task in computer-aided dentistry. Previous methods can achieve satisfactory segmentation results on normal dental models; however, they fail to robustly handle challenging clinical cases, such as dental models with missing, crowded, or misaligned teeth before orthodontic treatment. In this paper, we propose a novel end-to-end learning-based method, called TSegNet, for robust and efficient tooth segmentation on 3D scanned point cloud data of dental models. Our algorithm detects all the teeth using a distance-aware tooth centroid voting scheme in the first stage, which ensures the accurate localization of tooth objects even with irregular positions on abnormal dental models. Then, a confidence-aware cascade segmentation module in the second stage is designed to segment each individual tooth and resolve ambiguities caused by the aforementioned challenging cases. We evaluated our method on a large-scale real-world dataset consisting of dental models scanned before or after orthodontic treatment. Extensive evaluations, ablation studies and comparisons demonstrate that our method can generate accurate tooth labels robustly in various challenging cases and significantly outperforms state-of-the-art approaches by 6.5% in Dice coefficient and 3.0% in F1 score in terms of accuracy, while achieving a 20-times speedup in computational time.

14.
Segmentation of vascular structures is a difficult and challenging task. In this article, we present an algorithm devised for the segmentation of such structures. Our technique consists of a geometric deformable model with an associated energy functional that incorporates high-order multiscale features in a non-parametric statistical framework. Although the proposed segmentation method is generic, it has been applied to the segmentation of cerebral aneurysms in 3DRA and CTA. An evaluation study over 10 clinical datasets indicates that the segmentations obtained by our method present a high overlap index with respect to the ground truth (91.13% and 73.31%, respectively) and that the mean error distance from the surface to the ground truth is close to the in-plane resolution (0.40 and 0.38 mm, respectively). Besides, our technique compares favorably to other alternative techniques based on deformable models, namely parametric geodesic active regions and active contours without edges.

15.
High performance of deep learning models on medical image segmentation relies greatly on large amounts of pixel-wise annotated data, yet annotations are costly to collect. How to obtain high-accuracy segmentation labels of medical images at limited cost (e.g. time) becomes an urgent problem. Active learning can reduce the annotation cost of image segmentation, but it faces three challenges: the cold-start problem, the need for an effective sample selection strategy for the segmentation task, and the burden of manual annotation. In this work, we propose a Hybrid Active Learning framework using Interactive Annotation (HAL-IA) for medical image segmentation, which reduces annotation cost both by decreasing the number of annotated images and by simplifying the annotation process. Specifically, we propose a novel hybrid sample selection strategy to select the most valuable samples for segmentation model performance improvement. This strategy combines pixel entropy, regional consistency and image diversity to ensure that the selected samples have high uncertainty and diversity. In addition, we propose a warm-start initialization strategy to build the initial annotated dataset and avoid the cold-start problem. To simplify the manual annotation process, we propose an interactive annotation module with suggested superpixels to obtain pixel-wise labels with several clicks. We validate our proposed framework with extensive segmentation experiments on four medical image datasets. Experimental results show that the proposed framework achieves high-accuracy pixel-wise annotations and models with less labeled data and fewer interactions, outperforming other state-of-the-art methods. Our method can help physicians efficiently obtain accurate medical image segmentation results for clinical analysis and diagnosis.
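The pixel-entropy component of such a hybrid selection strategy can be sketched in plain Python: images whose predicted probabilities sit closest to 0.5 carry the most uncertainty and are selected for annotation first. This sketch omits the regional-consistency and diversity terms and is an illustrative assumption, not the paper's full strategy:

```python
import math

def pixel_entropy(p):
    """Binary prediction entropy in bits; maximal at p = 0.5 (most uncertain)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def select_most_uncertain(prob_maps, k):
    """Rank unlabeled images by mean pixel entropy; return the top-k indices.

    prob_maps: one flat list of foreground probabilities per unlabeled image.
    """
    scores = [sum(pixel_entropy(p) for p in m) / len(m) for m in prob_maps]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

In a full active-learning loop, the selected images would be annotated (here, interactively), added to the training set, and the model retrained before the next selection round.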

16.
The current practice in assessing sonographic findings of chronically inflamed thyroid tissue is mainly qualitative, based on a physician's experience alone. This study shows that inflamed and healthy tissues can be differentiated by automatic texture analysis of B-mode sonographic images. Feature selection is the most important part of this procedure. We employed two selection schemes for finding recognition-optimal features: one based on compactness and separability and the other based on classification error. The full feature set included Muzzolini's spatial features and Haralick's co-occurrence features. These features were selected on a set of 2430 sonograms of 81 subjects, and the classifier performance was evaluated on a test set of 540 sonograms of 18 independent subjects. A classification success rate of 100% was achieved with as few as one optimal feature among the 129 texture characteristics tested. Both selection schemes agreed on the best features. The results were confirmed on the independent test set. The stability of the results with respect to sonograph settings, thyroid gland segmentation and scanning direction was also tested.

17.
Accurate vertebral body (VB) detection and segmentation are critical for spine disease identification and diagnosis. Existing automatic VB detection and segmentation methods may produce false-positive results in background tissue or inaccurate results for the target VB, because they usually cannot take both the global spine pattern and the local VB appearance into consideration concurrently. In this paper, we propose a Sequential Conditional Reinforcement Learning network (SCRL) to tackle the simultaneous detection and segmentation of VBs from MR spine images. The SCRL, for the first time, applies deep reinforcement learning to VB detection and segmentation. It innovatively models the spatial correlation between VBs from top to bottom as sequential dynamic-interaction processes, thereby globally focusing detection and segmentation on each VB. Simultaneously, SCRL also perceives the local appearance features of each target VB comprehensively, thereby achieving accurate detection and segmentation results. In particular, SCRL seamlessly combines three parts: 1) an Anatomy-Modeling Reinforcement Learning Network that dynamically interacts with the image and focuses an attention region on the VB; 2) a Fully-Connected Residual Neural Network that learns rich global context information of the VB, including both detailed low-level features and abstracted high-level features, to detect an accurate bounding box of the VB based on the attention region; 3) a Y-shaped Network that learns comprehensive detailed texture information of the VB, including multi-scale, coarse-to-fine features, to segment the boundary of the VB from the attention region. On 240 subjects, SCRL achieves accurate detection and segmentation results, where on average the detection IoU is 92.3%, segmentation Dice is 92.6%, and classification mean accuracy is 96.4%. These excellent results demonstrate that SCRL can be an efficient diagnostic aid to assist clinicians when diagnosing spinal diseases.

18.
To fully define the target objects of interest in clinical diagnosis, many deep convolutional neural networks (CNNs) use multimodal paired registered images as inputs for segmentation tasks. However, these paired images are difficult to obtain in some cases. Furthermore, CNNs trained on one specific modality may fail on others for images acquired with different imaging protocols and scanners. Therefore, developing a unified model that can segment the target objects from unpaired multiple modalities is significant for many clinical applications. In this work, we propose a 3D unified generative adversarial network, which unifies any-to-any modality translation and multimodal segmentation in a single network. Since the anatomical structure is preserved during modality translation, the auxiliary translation task is used to extract modality-invariant features and implicitly generate additional training data. To fully utilize the segmentation-related features, we add a cross-task skip connection with feature recalibration from the translation decoder to the segmentation decoder. Experiments on abdominal organ segmentation and brain tumor segmentation indicate that our method outperforms existing unified methods.

19.
The automated processing of retinal images is a widely researched area in medical image analysis. Screening systems based on the automated and accurate recognition of retinopathies enable the earlier diagnosis of diseases like diabetic retinopathy, hypertension and their complications. The segmentation of the vascular system is a crucial task in the field: on the one hand, the accurate extraction of the vessel pixels aids the detection of other anatomical parts (like the optic disc; Hoover and Goldbaum, 2003) and lesions (like microaneurysms; Sopharak et al., 2013); on the other hand, the geometrical features of the vascular system and their temporal changes are shown to be related to diseases, like vessel tortuosity to Fabry disease (Sodi et al., 2013) and the arteriolar-to-venular (A/V) ratio to hypertension (Pakter et al., 2005). In this study, a novel technique based on template matching and contour reconstruction is proposed for the segmentation of the vasculature. In the template matching step, generalized Gabor function based templates are used to extract the center lines of vessels. Then, the intensity characteristics of vessel contours measured in training databases are reconstructed. The method was trained and tested on two publicly available databases, DRIVE and STARE, and reached an average accuracy of 0.9494 and 0.9610, respectively. We have also carried out cross-database tests and found that the accuracy scores are higher than those of any previous technique trained and tested on the same databases.
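The template-matching step can be illustrated with a 1D toy: a Gaussian-windowed cosine (a simple even-symmetric Gabor-like profile) is slid over an intensity profile, and the best-correlating offset is taken as the vessel centre. This plain-Python sketch is an illustration only, not the paper's generalized Gabor templates:

```python
import math

def gabor_template(length, sigma, freq):
    """Even-symmetric 1D Gabor-like profile (Gaussian-windowed cosine)."""
    half = length // 2
    return [math.exp(-(x ** 2) / (2 * sigma ** 2)) * math.cos(2 * math.pi * freq * x)
            for x in range(-half, half + 1)]

def best_match(signal, template):
    """Offset at which the sliding correlation with the template peaks."""
    n, m = len(signal), len(template)
    best_i, best_score = 0, float("-inf")
    for i in range(n - m + 1):
        score = sum(s * t for s, t in zip(signal[i:i + m], template))
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

In 2D, the same idea is applied with oriented templates across the image, and the peak responses trace out the vessel centre lines.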

20.
Trophectoderm (TE) is one of the main components of a day-5 human embryo (blastocyst) that correlates with the embryo's quality. Precise segmentation of TE is an important step toward achieving automatic human embryo quality assessment based on morphological image features. Automatic segmentation of TE, however, is a challenging task, and previous work on it is quite limited. In this paper, four fully convolutional deep models are proposed for accurate segmentation of trophectoderm in microscopic images of the human blastocyst. In addition, a multi-scale ensembling method is proposed that aggregates five models trained at various scales, offering trade-offs between the quantity and quality of the spatial information. Furthermore, synthetic embryo images are generated for the first time to address the lack of data for training deep learning models. These synthetically generated images prove effective in filling the generalization gap in deep learning when limited data are available for training. Experimental results confirm that the proposed models are capable of segmenting TE regions with an average Precision, Recall, Accuracy, Dice Coefficient and Jaccard Index of 83.8%, 90.1%, 96.9%, 86.61% and 76.71%, respectively. In particular, the proposed Inceptioned U-Net model outperforms the state of the art by 10.3% in Accuracy, 9.3% in Dice Coefficient and 13.7% in Jaccard Index. Further experiments are conducted to highlight the effectiveness of the proposed models compared to some recent deep learning based segmentation methods.
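The five metrics reported above all derive from the pixel-level confusion matrix. A minimal plain-Python sketch for flat binary masks (illustrative; no zero-division guards):

```python
def segmentation_metrics(pred, target):
    """Pixel-level metrics for flat binary (0/1) masks of equal length."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / len(pred),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),  # per-image: dice / (2 - dice)
    }
```

Note that Dice and Jaccard are monotonically related per image (J = D / (2 - D)); the averaged values reported in papers need not satisfy this identity exactly.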
