Found 20 similar documents (search time: 15 ms)
1.
In this paper, we present a deep convolutional neural network (CNN) for fully automatic brain tumor segmentation of both high- and low-grade gliomas in MRI images. Unlike normal tissues or organs, which usually have a fixed location or shape, brain tumors of different grades show great variation in location, size, structure, and morphological appearance. Moreover, severe data imbalance exists not only between brain tumor and non-tumor tissues, but also among the different sub-regions inside the brain tumor (e.g., enhancing tumor, necrotic, edema, and non-enhancing tumor). Therefore, we introduce a hybrid model to address the challenges of the multi-modality, multi-class brain tumor segmentation task. First, we propose a dynamic focal Dice loss function that focuses more on the smaller tumor sub-regions with more complex structures during training; the learning capacity of the model is dynamically distributed to each class independently based on its training performance in different training stages. In addition, to better recognize the overall structure of the brain tumor and the morphological relationships among different tumor sub-regions, we relax the boundary constraints for the inner tumor regions in a coarse-to-fine fashion. Additionally, a symmetric attention branch is proposed to highlight the possible location of the brain tumor from the asymmetric features caused by the growth and expansion of abnormal tissues in the brain. Overall, to balance the model's learning capacity between spatial details and high-level morphological features, the proposed model relaxes the constraints on inner boundaries and complex details and enforces more attention on tumor shape, location, and the harder classes of tumor sub-regions. The proposed model is validated on BRATS 2019, a publicly available brain tumor dataset from real patients.
The experimental results reveal that our model improves overall segmentation performance in comparison with state-of-the-art methods, with major progress in recognizing tumor shape and the structural relationships of tumor sub-regions, and in segmenting the more challenging tumor sub-regions, e.g., the tumor core and enhancing tumor.
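The abstract describes the dynamic focal Dice loss only in words. Below is a minimal NumPy sketch of one plausible focal-Dice formulation; the focusing exponent `gamma`, the `(1 - Dice)^(1/gamma)` form, and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Soft Dice score for one class; pred and target are flat arrays in [0, 1]."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_dice_loss(pred, target, gamma=2.0, eps=1e-6):
    """Hypothetical focal-style Dice loss: each class's Dice loss is raised
    to a focusing power so that poorly segmented (harder) classes contribute
    relatively more to the total loss.
    pred, target: arrays of shape (num_classes, num_voxels)."""
    losses = []
    for c in range(pred.shape[0]):
        d = dice_score(pred[c], target[c], eps)
        losses.append((1.0 - d) ** (1.0 / gamma))  # assumed focal form
    return float(np.mean(losses))
```

A perfect prediction yields a loss near zero, while a class that is badly missed keeps its per-class term close to one, pulling the mean up.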
2.
Vrooman HA, Cocosco CA, van der Lijn F, Stokking R, Ikram MA, Vernooij MW, Breteler MM, Niessen WJ 《NeuroImage》2007,37(1):71-81
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
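The core idea above (automatically select kNN training samples where the registered tissue-probability atlas is confident, then classify the remaining voxels) can be sketched as follows. The probability `threshold`, feature layout, and helper names are illustrative, not taken from the paper:

```python
import numpy as np

def select_training_samples(intensities, atlas_probs, threshold=0.9):
    """Pick voxels where a registered tissue-probability atlas is very
    confident (prob > threshold) and use them as automatically labeled
    kNN training samples. intensities: (num_voxels, num_channels);
    atlas_probs: one probability map (num_voxels,) per tissue class."""
    X, y = [], []
    for label, probs in enumerate(atlas_probs):
        idx = np.where(probs > threshold)[0]
        X.append(intensities[idx])
        y.append(np.full(idx.shape, label))
    return np.concatenate(X), np.concatenate(y)

def knn_classify(X_train, y_train, x, k=3):
    """Classify one voxel's intensity vector by majority vote among its
    k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())
```

In practice a KD-tree-backed kNN implementation would replace the brute-force distance computation, but the sample-selection step is the part this paper automates.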
3.
This paper describes a framework for automatic brain tumor segmentation from MR images. The detection of edema is done simultaneously with tumor segmentation, as the knowledge of the extent of edema is important for diagnosis, planning, and treatment. Whereas many other tumor segmentation methods rely on the intensity enhancement produced by the gadolinium contrast agent in the T1-weighted image, the method proposed here does not require contrast enhanced image channels. The only required input for the segmentation procedure is the T2 MR image channel, but it can make use of any additional non-enhanced image channels for improved tissue segmentation. The segmentation framework is composed of three stages. First, we detect abnormal regions using a registered brain atlas as a model for healthy brains. We then make use of the robust estimates of the location and dispersion of the normal brain tissue intensity clusters to determine the intensity properties of the different tissue types. In the second stage, we determine from the T2 image intensities whether edema appears together with tumor in the abnormal regions. Finally, we apply geometric and spatial constraints to the detected tumor and edema regions. The segmentation procedure has been applied to three real datasets, representing different tumor shapes, locations, sizes, image intensities, and enhancement.
4.
Doolittle ND 《Seminars in Oncology Nursing》2004,20(4):224-230
OBJECTIVES: To review the incidence of metastatic and primary brain tumors and the most widely used brain tumor classification systems, and to discuss discoveries advancing the understanding, classification, and grading of selected brain tumor histologies. DATA SOURCES: Journal articles, textbooks, epidemiologic and statistical reports. CONCLUSION: Recent advances in understanding the molecular biology of brain tumors have shown that molecular and genetic signatures may predict brain tumor behavior and may soon guide tumor classification, diagnosis, and tumor-specific treatment strategies. IMPLICATIONS FOR NURSING PRACTICE: Understanding recent advances in the molecular biology of brain tumors is important because these advances may soon guide treatment decisions. New tumor-specific therapeutic opportunities may improve outcomes as well as the care of persons with brain tumors.
5.
We describe an automated 3-D segmentation system for in vivo brain magnetic resonance images (MRI). Our segmentation method combines a variety of filtering, segmentation, and registration techniques and makes maximum use of the available a priori biomedical expertise, in both implicit and explicit form. We approach the issue of boundary finding as a process of fitting a group of deformable templates (simplex mesh surfaces) to the contours of the target structures. These templates evolve in parallel, supervised by a series of rules derived from analyzing the templates' dynamics and from medical experience. The templates are also constrained by knowledge of the expected textural and shape properties of the target structures. We apply our system to segment four brain structures (corpus callosum, ventricles, hippocampus, and caudate nuclei) and discuss its robustness to imaging characteristics and acquisition noise.
6.
Cristina Suárez-Mejías, José A. Pérez-Carrasco, Carmen Serrano, José L. López-Guerra, Tomás Gómez-Cía, Carlos L. Parra-Calderón, Begoña Acha 《International journal of computer assisted radiology and surgery》2017,12(12):2055-2067
Purpose
In 2005, an application for surgical planning called AYRA® was designed and validated by different surgeons and engineers at the Virgen del Rocío University Hospital, Seville (Spain). However, the segmentation methods included in AYRA and in other surgical planning applications are not able to accurately segment tumors that appear in soft tissue. The aims of this paper are to offer an exhaustive validation of an accurate semiautomatic segmentation tool to delimit retroperitoneal tumors in CT images and to aid physicians in planning both radiotherapy doses and surgery.
Methods
A panel of 6 experts manually segmented 11 cases of tumors, and the segmentation results were compared exhaustively with: the results provided by a surgical planning tool (AYRA), the segmentations obtained using a radiotherapy treatment planning system (Pinnacle®), the segmentation results obtained by a group of experts in the delimitation of retroperitoneal tumors, and the segmentation results using the algorithm under validation.
Results
11 cases of retroperitoneal tumors were tested. The proposed algorithm provided accurate segmentations of the tumor. Moreover, the algorithm requires minimal computational time: on average, 90.5% less than that required to manually contour the same tumor.
Conclusion
A method for the semiautomatic segmentation of retroperitoneal tumors has been validated in depth. AYRA, as well as other surgical and radiotherapy planning tools, could be greatly improved by including this algorithm.
7.
We describe a knowledge-based approach to cortical surface segmentation that uses learned knowledge of the overall shape and range of variation of the cortex (excluding the detailed gyri and sulci) to guide the search for the grey-CSF boundary in a structural MRI image volume. The shape knowledge is represented by a radial surface model, which is a type of geometric constraint network (GCN) that we hypothesize can represent shape by networks of locally interacting constraints. The shape model is used in a protocol for visualization-based mapping of cortical stimulation mapping (CSM) sites onto the brain surface, prior to integration with other mapping modalities or as input to existing surface analysis and reconfiguration programs. Example results are presented for CSM data related to language organization in the cortex, but the methods should be applicable to other situations where a realistic visualization of the brain surface, as seen at neurosurgery, is desired.
8.
Popuri K, Cobzas D, Murtha A, Jägersand M 《International journal of computer assisted radiology and surgery》2012,7(4):493-506
Purpose
Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human error. Therefore, there have been significant efforts to automate the process. However, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance, with intensities overlapping those of normal brain tissues. In addition, an expanding tumor can deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems.
Methods
We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages clusters from the normal brain region from appearing in the tumor region. This leads to a better disambiguation of the tumor from brain tissue.
Results
We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size, and in proximity to major structures in the brain. Validation against the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm).
Conclusions
Using priors on the brain/tumor appearance, our proposed automatic 3D variational segmentation method was able to better disambiguate the tumor from the surrounding tissue.
9.
《Remote Sensing Letters》2013,4(1):73-82
Image segmentation is a basic and important procedure in object-based classification of remote-sensing data. This study presents an approach to multi-scale optimal segmentation (OS), given that single-scale segmentation may not be the most suitable approach to map a variety of land-cover types characterized by various spatial structures; it objectively measures the appropriate segmentation scale for each object at various scales and projects them onto a single layer. A 1.8 m spatial resolution WorldView-2 image was used to perform successive multi-scale segmentations. The pixel standard deviation of an object was used to identify the optimal scale, taken as the scale that persists over the longest range of the multi-scale segmentation in which the feature remains unchanged. Results indicate that classification based on multi-scale OS can improve the overall accuracy by five percentage points compared to traditional single-scale segmentation.
10.
Watershed segmentation for breast tumor in 2-D sonography (cited 4 times: 0 self-citations, 4 by others)
Automatic contouring of breast tumors in medical ultrasound (US) imaging may assist physicians without relevant experience in making correct diagnoses. This study integrates the advantages of neural network (NN) classification and morphological watershed segmentation to extract precise contours of breast tumors from US images. Textural analysis is employed to yield inputs to the NN for classifying ultrasonic images. Autocovariance coefficients specify the texture features used to classify breast US images with a self-organizing map (SOM). After the texture features in the sonography have been classified, an adaptive preprocessing procedure is selected based on the SOM output. Finally, watershed transformation automatically determines the contours of the tumor. In this study, the proposed method was trained and tested using images from 60 patients. The results of computer simulations reveal that the proposed method always identified contours and regions-of-interest (ROIs) similar to those obtained by an experienced physician's manual contouring of the breast tumor in ultrasonic images. As US imaging becomes more widespread, a functional automatic contouring method is essential, and its clinical application is becoming urgent; such a method provides robust and fast automatic contouring of US images. This study does not claim that the automatic contouring technique is superior to manual contouring; after all, neither automatic nor manual contours necessarily coincide with the true pathologic border. In computer-aided diagnosis (CAD) applications, however, automatic segmentation can save much of the time required to sketch a precise contour, with very high stability.
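The watershed step in item 10 can be illustrated with a minimal marker-based flooding on a gradient image. This is a generic textbook watershed in pure Python/NumPy, not the paper's pipeline (which adds SOM-driven preprocessing before the transform); libraries such as scikit-image provide production implementations:

```python
import heapq
import numpy as np

def watershed(gradient, markers):
    """Minimal marker-based watershed on a 2-D array: flood from labeled
    marker pixels in order of increasing gradient magnitude; each
    unlabeled pixel joins the region that reaches it first. High-gradient
    ridges are flooded last, so they end up as the region boundaries."""
    labels = markers.copy()
    h, w = gradient.shape
    heap = []
    for i in range(h):
        for j in range(w):
            if markers[i, j] != 0:
                heapq.heappush(heap, (gradient[i, j], i, j))
    while heap:
        _, i, j = heapq.heappop(heap)
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0:
                labels[ni, nj] = labels[i, j]       # claim the neighbor
                heapq.heappush(heap, (gradient[ni, nj], ni, nj))
    return labels
```

With one marker inside the tumor and one in the background, the flood assigns every pixel to one of the two basins, and the tumor contour falls along the high-gradient ridge between them.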
11.
12.
Tissue-level semantic segmentation is a vital step in computational pathology. Fully supervised models have already achieved outstanding performance with dense pixel-level annotations. However, drawing such labels on giga-pixel whole-slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, thereby reducing annotation effort. We propose a two-step model comprising a classification phase and a segmentation phase. In the classification phase, we propose a CAM-based model to generate pseudo masks from patch-level labels. In the segmentation phase, we achieve tissue semantic segmentation with our proposed multi-layer pseudo-supervision. Several technical novelties are introduced to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we introduce a new weakly-supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conduct several experiments to evaluate our proposed model on two datasets. Our proposed model outperforms five state-of-the-art WSSS approaches, and achieves quantitative and qualitative results comparable to the fully supervised model, with only around a 2% gap in MIoU and FwIoU. Compared with manual labeling on a randomly sampled dataset of 100 patches, patch-level labeling greatly reduces the annotation time from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
13.
Breast cancer is one of the most common causes of death among women worldwide. Early signs of breast cancer can be an abnormality depicted on breast images (e.g., mammography or breast ultrasonography). However, reliable interpretation of breast images requires intensive labor and physicians with extensive experience. Deep learning is transforming breast imaging diagnosis by providing a second opinion to physicians. However, most deep learning-based breast cancer analysis algorithms lack interpretability because of their black-box nature, which means that domain experts cannot understand why the algorithms predict a label. In addition, most deep learning algorithms are formulated as single-task models that ignore correlations between different tasks (e.g., tumor classification and segmentation). In this paper, we propose an interpretable multitask information bottleneck network (MIB-Net) to accomplish simultaneous breast tumor classification and segmentation. MIB-Net maximizes the mutual information between the latent representations and class labels while minimizing information shared by the latent representations and inputs. In contrast to existing models, our MIB-Net generates a contribution score map that offers an interpretable aid for physicians to understand the model's decision-making process. In addition, MIB-Net implements multitask learning and further proposes a dual prior knowledge guidance strategy to enhance deep task correlation. Our evaluations are carried out on three breast image datasets in different modalities. Our results show that the proposed framework is not only able to help physicians better understand the model's decisions but also improves breast tumor classification and segmentation accuracy over representative state-of-the-art models. Our code is available at https://github.com/jxw0810/MIB-Net.
14.
15.
Spatial normalization and segmentation of infant brain MRI data based on adult or pediatric reference data may not be appropriate due to the developmental differences between the infant input data and the reference data. In this study we have constructed infant templates and a priori brain tissue probability maps based on the MR brain image data from 76 infants ranging in age from 9 to 15 months. We employed two processing strategies to construct the infant template and a priori data: one processed with and one without using a priori data in the segmentation step. Using the templates we constructed, comparisons between the adult templates and the new infant templates are presented. Tissue distribution differences are apparent between the infant and adult templates, particularly in the gray matter (GM) maps. The infant a priori information classifies brain tissue as GM with higher probability than adult data, at the cost of white matter (WM), which presents with lower probability when compared to adult data. The differences are more pronounced in the frontal regions and in the cingulate gyrus. Similar differences are also observed when the infant data is compared to a pediatric (age 5 to 18) template. The two-pass segmentation approach taken here for infant T1W brain images has provided high-quality tissue probability maps for GM, WM, and CSF in infant brain images. These templates may be used as prior probability distributions for segmentation and normalization, which is key to improving the accuracy of these procedures in special populations.
16.
《Medical image analysis》2014,18(3):591-604
Labeling a histopathology image as having cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancer tissues and cluster them into various classes. Existing supervised approaches for image classification and segmentation require detailed manual annotations for the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL) (along the line of weakly supervised learning) for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to performing the above three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the great advantage of MCIL over the competing methods.
17.
The neocortical surface has a rich and complex structure comprised of folds (gyri) and fissures (sulci). Sulci are important macroscopic landmarks for orientation on the cortex. A precise segmentation and labeling of sulci is helpful in human brain mapping studies relating brain anatomy and function. Due to their structural complexity and inter-subject variability, this is considered as a non-trivial task. An automatic algorithm is proposed to accurately segment neocortical sulci: vertices of a white/gray matter interface mesh are classified under a Bayesian framework as belonging to gyral and sulcal compartments using information about their geodesic depth and local curvature. Then, vertices are collected into sulcal regions by a watershed-like growing method. Experimental results demonstrate that the method is accurate and robust.
18.
Accurate liver tumor segmentation without contrast agents (i.e., from non-enhanced images) avoids the time consumption and high risk associated with contrast agents, offering radiologists quick and safe assistance in diagnosing and treating liver tumors. However, without contrast enhancement, the tumor in liver images presents low contrast and may even be invisible to the naked eye, so liver tumor segmentation from non-enhanced images is quite challenging. We propose a Weakly-Supervised Teacher-Student network (WSTS) to address liver tumor segmentation in non-enhanced images by leveraging additional box-level-labeled data (labeled with a tumor bounding box). WSTS deploys a weakly-supervised teacher-student framework (TCH-ST): a Teacher Module learns to detect and segment the tumor in enhanced images during training, which enables a Student Module to detect and segment the tumor in non-enhanced images independently during testing. To detect the tumor accurately, WSTS proposes a Dual-strategy DRL (DDRL), which develops two tumor detection strategies by introducing a relative-entropy bias into the DRL. To accurately predict a tumor mask for each box-level-labeled enhanced image, and thus improve tumor segmentation in non-enhanced images, WSTS proposes an Uncertainty-Sifting Self-Ensembling (USSE), which exploits the weakly labeled data with self-ensembling and evaluates prediction reliability with a newly designed multi-scale uncertainty estimation. WSTS is validated on a 2D MRI dataset, achieving 83.11% Dice and 85.12% Recall on 50 patients' test data after training on data from 200 patients (half of which is box-level-labeled). These results demonstrate the ability of WSTS to segment liver tumors from non-enhanced images; WSTS thus has excellent potential to assist radiologists with liver tumor segmentation without contrast agents.
19.
Juan Eugenio Iglesias, Mert Rory Sabuncu, Koen Van Leemput 《Medical image analysis》2013,17(8):1181-1191
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
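For context on item 19: the standard baseline it improves upon, in which each atlas is registered independently and the propagated labels are fused by per-voxel majority voting, can be sketched as follows (the data layout is an illustrative assumption; the paper's own method replaces this with a generative probabilistic model):

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Baseline multi-atlas label fusion: each atlas has already been
    registered to the target and its labels propagated; the fused label of
    a voxel is the most frequent label across atlases.
    propagated_labels: int array of shape (num_atlases, num_voxels)."""
    num_voxels = propagated_labels.shape[1]
    fused = np.empty(num_voxels, dtype=int)
    for v in range(num_voxels):
        fused[v] = np.bincount(propagated_labels[:, v]).argmax()
    return fused
```

The weakness this paper targets is upstream of the vote: the pairwise registrations themselves are computed independently and depend on intensity similarity, which is why the proposed model couples registration and fusion instead.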
20.
In recent years, deep learning technology has shown superior performance in different fields of medical image analysis. Several deep learning architectures have been proposed and used for computational pathology classification, segmentation, and detection tasks. Due to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that can capture feature dependencies in medical images along two independent dimensions: channel and space. By stacking these group attention blocks in ResNet style, we obtain a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51–3.47 times fewer parameters than the original ResNet and can be directly used for downstream medical image segmentation tasks. Extensive experiments show that the proposed ResGANet is superior to state-of-the-art backbone models in medical image classification tasks. Applying it to different segmentation networks can improve the baseline model in medical image segmentation tasks without changing the network architecture. We hope that this work provides a promising method for enhancing the feature representation of convolutional neural networks (CNNs) in the future.
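The exact layout of the group attention block in item 20 is not given in this abstract. As an illustrative stand-in for its channel dimension, here is a squeeze-and-excitation-style channel attention in NumPy; the weight shapes, reduction ratio, and function names are assumptions, not ResGANet's actual design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention: global-average-pool
    each channel, pass through a two-layer bottleneck, and rescale the
    channels by the resulting gates.
    features: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    c = features.shape[0]
    squeezed = features.reshape(c, -1).mean(axis=1)   # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)           # bottleneck + ReLU
    gates = sigmoid(w2 @ hidden)                      # per-channel gates in (0, 1)
    return features * gates[:, None, None]
```

A spatial-attention branch (pooling across channels instead of across pixels) would cover the block's second dimension; the paper combines both in a grouped, ResNet-style stack.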