Similar Literature
20 similar articles found.
1.
Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage. Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation. However, effective training of a deep learning model to perform this task requires a large number of training images to represent the rapid development of the transient fetal brain structures. On the other hand, manual multi-label segmentation of a large number of 3D images is prohibitive. To address this challenge, we segmented 272 training images, covering 19–39 gestational weeks, using an automatic multi-atlas segmentation strategy based on deformable registration and probabilistic atlas fusion, and manually corrected large errors in those segmentations. Since this process generated a large training dataset with noisy segmentations, we developed a novel label smoothing procedure and a loss function to train a deep learning model with smoothed noisy segmentations. Our proposed methods properly account for the uncertainty in tissue boundaries. We evaluated our method on 23 manually-segmented test images of a separate set of fetuses. Results show that our method achieves average Dice similarity coefficients of 0.893 and 0.916 for the transient structures of younger and older fetuses, respectively. Our method generated results that were significantly more accurate than several state-of-the-art methods, including nnU-Net, which achieved the closest results to ours. Our trained model can serve as a valuable tool to enhance the accuracy and reproducibility of fetal brain analysis in MRI.
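A minimal sketch of the general idea behind training on spatially smoothed soft labels, assuming a Gaussian blur of one-hot label maps and a soft Dice score; the smoothing width, class count, and toy data are illustrative and do not reproduce the authors' exact smoothing procedure or loss function.

```python
# Sketch: convert noisy hard labels into spatially smoothed soft labels,
# then score a prediction with a soft Dice. Illustrative only; not the
# paper's exact smoothing procedure or loss.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_labels(hard_labels, num_classes, sigma=1.0):
    """One-hot encode a hard label volume and blur each class channel."""
    onehot = np.stack([(hard_labels == c).astype(np.float32)
                       for c in range(num_classes)], axis=0)
    blurred = np.stack([gaussian_filter(ch, sigma=sigma) for ch in onehot], axis=0)
    # Renormalize so the class probabilities sum to 1 at every voxel.
    return blurred / np.clip(blurred.sum(axis=0, keepdims=True), 1e-8, None)

def soft_dice(pred_probs, soft_target, eps=1e-8):
    """Soft Dice averaged over classes; both inputs are (C, X, Y, Z) probabilities."""
    axes = tuple(range(1, pred_probs.ndim))
    inter = (pred_probs * soft_target).sum(axis=axes)
    denom = pred_probs.sum(axis=axes) + soft_target.sum(axis=axes)
    return np.mean((2 * inter + eps) / (denom + eps))

labels = np.random.randint(0, 4, size=(32, 32, 32))   # toy noisy segmentation
soft = smooth_labels(labels, num_classes=4, sigma=1.0)
print(soft.shape, soft_dice(soft, soft))              # Dice of a map with itself is ~1
```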

2.
The recent development of motion-robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Volumetric and morphometric analysis of the developing fetal brain must include segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development and warrants further investigation. Using these techniques, we introduce novel volumetric and morphometric biomarkers of VM, comparing these values to those generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestational age (GA) range of 19 to 39 weeks (mean=28.26, stdev=6.56). This heterogeneous dataset was used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement.

3.
Objective: To use automatic brain segmentation to analyze differences in brain structure volumes in a normal middle-aged and elderly Chinese population. Methods: MRI was acquired from 96 normal middle-aged and elderly volunteers, and the volumes of brain structures were obtained with an automatic brain segmentation technique. Differences in structure volumes between sexes, across ages, and between the left and right sides were examined, and the effect of age on structure volumes was analyzed. Results: Between sexes, the absolute total intracranial volume and the standardized volumes of the right putamen, left and right caudate nuclei, right globus pallidus, right frontal gray matter, and left parietal gray matter differed significantly (all P<0.05). Between the left and right sides, the standardized volumes of the thalamus, putamen, caudate nucleus, frontal lobe, parietal lobe, occipital lobe, temporal lobe, cingulate gray matter, insula, and lateral ventricle differed significantly (all P<0.05). The volumes of cerebrospinal fluid, the right lateral ventricle, and the third ventricle were positively correlated with age (r=0.60, 0.51, 0.57; all P<0.05), while the volumes of the bilateral frontal lobes, left insula, cortical gray matter, parietal gray matter, temporal gray matter, and left occipital gray matter were negatively correlated with age (all P<0.05). Conclusion: Automatic brain segmentation can rapidly and intuitively display brain structure volumes in normal middle-aged and elderly subjects. Brain structure volumes differ between healthy middle-aged and elderly men and women; with increasing age, the volumes of cerebrospinal fluid, the lateral ventricles, and the third ventricle increase, while the volumes of cortical gray matter, frontal gray and white matter, parietal, occipital, and temporal gray matter, and the insula decrease.
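As a concrete illustration of the age-volume correlation analysis summarized above, the short Python sketch below computes a Pearson correlation between age and one structure's volume; the ages, volumes, and variable names are simulated placeholders, not the study's data.

```python
# Illustrative only: correlate a structure's volume with subject age,
# as in the age-association analysis summarized above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
age = rng.uniform(45, 80, size=96)                      # hypothetical ages (years)
csf_volume = 150 + 2.5 * age + rng.normal(0, 20, 96)    # hypothetical CSF volumes (mL)

r, p = pearsonr(age, csf_volume)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")              # expect a positive correlation
```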

4.
Most image segmentation algorithms are trained on binary masks formulated as a classification task per pixel. However, in applications such as medical imaging, this “black-and-white” approach is too constraining because the contrast between two tissues is often ill-defined, i.e., the voxels located on objects’ edges contain a mixture of tissues (a partial volume effect). Consequently, assigning a single “hard” label can result in a detrimental approximation. Instead, a soft prediction containing non-binary values would overcome that limitation. In this study, we introduce SoftSeg, a deep learning training approach that takes advantage of soft ground truth labels and is not bound to binary predictions. SoftSeg aims at solving a regression instead of a classification problem. This is achieved by using (i) no binarization after preprocessing and data augmentation, (ii) a normalized ReLU final activation layer (instead of sigmoid), and (iii) a regression loss function (instead of the traditional Dice loss). We assess the impact of these three features on three open-source MRI segmentation datasets from the spinal cord gray matter, the multiple sclerosis brain lesion, and the multimodal brain tumor segmentation challenges. Across multiple random dataset splits, SoftSeg outperformed the conventional approach, leading to an increase in Dice score of 2.0% on the gray matter dataset (p=0.001), 3.3% for the brain lesions, and 6.5% for the brain tumors. SoftSeg produces consistent soft predictions at tissues’ interfaces and shows an increased sensitivity for small objects (e.g., multiple sclerosis lesions). The richness of soft labels could represent the inter-expert variability and the partial volume effect, and complement the model uncertainty estimation, which is typically unclear with binary predictions. The developed training pipeline can easily be incorporated into most existing deep learning architectures. SoftSeg is implemented in the freely available deep learning toolbox ivadomed (https://ivadomed.org).
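A minimal PyTorch sketch of two of the ingredients named above, a normalized ReLU final activation and a regression loss on soft labels; this is not the ivadomed implementation, and plain MSE stands in here for the toolbox's regression loss.

```python
# Sketch of a soft-segmentation head: normalized ReLU activation plus a
# regression loss on soft ground-truth labels. Not the ivadomed code itself;
# plain MSE stands in for the regression loss used by the method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedReLU(nn.Module):
    """ReLU followed by division by the per-sample maximum, mapping outputs to [0, 1]."""
    def forward(self, x):
        x = F.relu(x)
        max_per_sample = x.amax(dim=(1, 2, 3), keepdim=True).clamp_min(1e-8)
        return x / max_per_sample

head = nn.Sequential(nn.Conv2d(16, 1, kernel_size=1), NormalizedReLU())

features = torch.randn(2, 16, 64, 64)          # toy feature maps
soft_target = torch.rand(2, 1, 64, 64)         # soft (non-binary) ground truth
prediction = head(features)
loss = F.mse_loss(prediction, soft_target)     # regression loss instead of Dice
loss.backward()
print(float(loss))
```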

5.
A fully automated brain tissue segmentation method is optimized and extended with white matter lesion segmentation. Cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) are segmented by an atlas-based k-nearest neighbor classifier on multi-modal magnetic resonance imaging data. This classifier is trained by registering brain atlases to the subject. The resulting GM segmentation is used to automatically find a white matter lesion (WML) threshold in a fluid-attenuated inversion recovery scan. False positive lesions are removed by ensuring that the lesions are within the white matter. The method was visually validated on a set of 209 subjects. No segmentation errors were found in 98% of the brain tissue segmentations and 97% of the WML segmentations. A quantitative evaluation using manual segmentations was performed on a subset of 6 subjects for CSF, GM and WM segmentation and an additional 14 for the WML segmentations. The results indicated that the automatic segmentation accuracy is close to the interobserver variability of manual segmentations.
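A hedged sketch of the atlas-trained k-nearest-neighbor idea: each voxel is described by multi-modal intensities plus spatial coordinates, and labeled training samples are assumed to come from atlases registered to the subject. The data are synthetic and the feature set is an assumption, not the published pipeline.

```python
# Sketch of an atlas-trained k-NN tissue classifier: voxels are described by
# multi-modal intensities plus spatial coordinates, with labeled training
# samples assumed to come from atlases registered to the subject.
# Illustrative data; not the published pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n_train = 5000
# Hypothetical training features: [T1 intensity, FLAIR intensity, x, y, z]
X_train = rng.normal(size=(n_train, 5))
y_train = rng.integers(0, 3, size=n_train)        # 0=CSF, 1=GM, 2=WM (from atlases)

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(X_train, y_train)

X_subject = rng.normal(size=(1000, 5))            # features of the new subject's voxels
tissue_labels = knn.predict(X_subject)
print(np.bincount(tissue_labels))                 # voxel counts per tissue class
```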

6.
Magnetic resonance imaging (MRI)-guided partial volume effect correction (PVC) in brain positron emission tomography (PET) is now a well-established approach to compensate for the large bias in the estimate of regional radioactivity concentration, especially for small structures. The accuracy of the algorithms developed so far is, however, largely dependent on the performance of segmentation methods partitioning MRI brain data into its main classes, namely gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). A comparative evaluation of three brain MRI segmentation algorithms using simulated and clinical brain MR data was performed, and their impact on PVC in 18F-FDG and 18F-DOPA brain PET imaging was subsequently assessed. Two of the algorithms, one bundled in the Statistical Parametric Mapping (SPM2) package and the other the Expectation Maximization Segmentation (EMS) algorithm, incorporate a priori probability images derived from MR images of a large number of subjects. The third, here referred to as the HBSA algorithm, is a histogram-based segmentation algorithm incorporating an Expectation Maximization approach to model a four-Gaussian mixture for both global and local histograms. MR brain phantoms with known true volumes for the different brain classes were simulated under different combinations of noise and intensity non-uniformity. The algorithms' performance was checked by calculating the kappa index assessing similarity with the "ground truth" as well as multiclass type I and type II errors, including misclassification rates. The impact of the image segmentation algorithms on PVC was then quantified using clinical data. The segmented tissues of patients' brain MRI were given as input to the region of interest (RoI)-based geometric transfer matrix (GTM) PVC algorithm, and quantitative comparisons were made. The results of the digital MRI phantom studies suggest that HBSA produces the best performance for WM classification, whereas EMS is preferable for GM classification. Segmentations performed on clinical MRI data show quite substantial differences, especially when lesions are present. For the particular case of PVC, the SPM2 and EMS algorithms show very similar results and may be used interchangeably. The use of HBSA is not recommended for PVC. The partial volume corrected activities in some regions of the brain show quite large relative differences in paired comparisons between algorithms, implying that the segmentation algorithm for GTM-based PVC must be chosen carefully.
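To make the GTM step concrete, the following sketch implements a basic region-based geometric transfer matrix correction under common simplifying assumptions (a Gaussian PSF of known width and non-overlapping ROI masks); the regions, activities, and PSF width are toy values, not the clinical setup evaluated above.

```python
# Sketch of region-based geometric transfer matrix (GTM) partial-volume
# correction: GTM[i, j] is the mean, over ROI i, of ROI j's mask blurred by the
# scanner PSF; true regional activities are recovered by solving a linear system.
import numpy as np
from scipy.ndimage import gaussian_filter

def gtm_correct(pet, roi_masks, psf_sigma):
    smoothed = [gaussian_filter(m.astype(float), psf_sigma) for m in roi_masks]
    n = len(roi_masks)
    gtm = np.empty((n, n))
    observed = np.empty(n)
    for i, mask in enumerate(roi_masks):
        observed[i] = pet[mask].mean()                 # measured mean in ROI i
        for j in range(n):
            gtm[i, j] = smoothed[j][mask].mean()       # spill of ROI j into ROI i
    return np.linalg.solve(gtm, observed)              # PV-corrected regional values

# Toy 3D example: two regions with true activities 4.0 and 1.0.
shape = (40, 40, 40)
roi_a = np.zeros(shape, bool); roi_a[10:20, 10:20, 10:20] = True
roi_b = np.zeros(shape, bool); roi_b[22:32, 22:32, 22:32] = True
truth = 4.0 * roi_a + 1.0 * roi_b
pet = gaussian_filter(truth, 2.0)                      # simulate PSF blurring
print(gtm_correct(pet, [roi_a, roi_b], psf_sigma=2.0)) # approximately [4.0, 1.0]
```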

7.
Background: Because brain MR images have low contrast and brain tissues have complex shapes, choosing a segmentation method is difficult, and a single algorithm rarely yields satisfactory results. Objective: To develop and customize an effective segmentation algorithm for brain MRI by making combined use of existing algorithms. Methods: Considering the respective strengths and weaknesses of the neighborhood-connected region-growing and Canny level-set segmentation algorithms, and taking image features into account, the segmentation result of the neighborhood-connected method was used as a prior segmentation model for the Canny level-set algorithm to determine the lower threshold of the Canny algorithm, thereby completing a hybrid segmentation combining the two algorithms. Results and Conclusion: The white matter and gray matter segmentations obtained with the hybrid method were compared with expert manual segmentations, showing that the method achieves good segmentation results. This demonstrates that combining existing algorithms not only avoids duplicated effort but also allows more effective segmentation algorithms to be developed and customized, indicating good application potential.

8.
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively.
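A conceptual sketch of the wrapper idea, under the assumption that a generic classifier can learn the mapping from (image feature, location, host label) to the manual label; a random forest and synthetic voxel data stand in for the published method's learner and feature set.

```python
# Conceptual sketch of a segmentation-correction wrapper: learn systematic
# host-method errors from (image feature, location, host label) -> manual label,
# then correct the host output on new data. Synthetic data; a random forest
# stands in for whatever learner the published method uses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

def make_voxels(n):
    intensity = rng.normal(size=n)
    coord = rng.uniform(-1, 1, size=(n, 3))
    manual = (intensity > 0).astype(int)                 # "true" label
    host = manual.copy()
    host[coord[:, 0] > 0.5] = 0                          # systematic host error in one region
    features = np.column_stack([intensity, coord, host])
    return features, manual, host

X_train, y_train, _ = make_voxels(20000)
corrector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

X_test, y_test, host_test = make_voxels(5000)
corrected = corrector.predict(X_test)
print("host errors:     ", int((host_test != y_test).sum()))
print("corrected errors:", int((corrected != y_test).sum()))
```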

9.
Central nervous system abnormalities in fetuses are fairly common, occurring in 0.1% to 0.2% of live births and in 3% to 6% of stillbirths, so early detection and categorization of fetal brain abnormalities are critical. Manual detection and segmentation of fetal brain magnetic resonance imaging (MRI) can be time-consuming and susceptible to interpreter experience. Artificial intelligence (AI) algorithms and machine learning approaches have a high potential for assisting in the early detection of these problems, improving the diagnosis process and follow-up procedures. The use of AI and machine learning techniques in fetal brain MRI was the subject of this narrative review. Using AI, anatomic fetal brain MRI processing has been investigated with models that automatically predict specific landmarks and segmentations. All gestational ages (17-38 weeks) and different AI models (mainly convolutional neural networks and U-Net) have been used, and some models achieved accuracies of 95% or more. AI can help preprocess, post-process, and reconstruct fetal images. It can also be used for gestational age prediction (with one-week accuracy), fetal brain extraction, fetal brain segmentation, and placenta detection. Some fetal brain linear measurements, such as cerebral and bone biparietal diameters, have been suggested. Classification of brain pathology was studied using diagonal quadratic discriminant analysis, k-nearest neighbor, random forest, naive Bayes, and radial basis function neural network classifiers. Deep learning methods will become more powerful as more large-scale, labeled datasets become available. Sharing fetal brain MRI datasets is crucial because few fetal brain images are available. Physicians, particularly neuroradiologists, general radiologists, and perinatologists, should also be aware of AI's role in fetal brain MRI.

10.
The Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set that will allow greater accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to apply PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. In the challenge, participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms on two different tasks: Task 1, liver cancer segmentation, and Task 2, viable tumor burden estimation. Performance on the two tasks was strongly correlated: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms. We then discussed the pathological implications of the images that were easily predicted for cancer segmentation and of those that were challenging for viable tumor burden estimation. Of the 231 participants in the PAIP challenge, a total of 64 submissions were received from 28 teams. The submitted algorithms performed automatic segmentation of liver cancer in WSIs with a score estimated at 0.78. The PAIP challenge was created in an effort to combat the lack of research addressing liver cancer using digital pathology. It remains unclear how the AI algorithms created during the challenge will affect clinical diagnoses. However, the dataset and evaluation metrics provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation.

11.
Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool for short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. The validation metrics (percentage of good contours, Dice metric, average perpendicular distance, and conformity) were computed as 96.69%, 0.94, 1.81 mm, and 0.86, versus 79.2–95.62%, 0.87–0.90, 1.76–2.97 mm, and 0.67–0.78 obtained by other methods, respectively.

12.
13.
Objective: Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. Materials and methods: To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification was developed. The denoising phase employs a 3D extension of the Bayes–Shrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points. The N3 method has also been evaluated. Subsequently, the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. Results: The sequential approach, with judicious algorithm selection in each stage, is not only advantageous in speed but can also attain segmentations at least as accurate as those of iterative methods under a variety of noise or inhomogeneity levels. Conclusion: A sequential approach to tissue segmentation, which consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment brain MR images into CSF, GM and WM. This approach is advantageous in several common applications compared with other pipeline methods.
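The sketch below illustrates only the final intensity-based classification stage, using a multi-class Otsu threshold as a stand-in for the generalized Otsu technique; the intensities are synthetic, and the earlier denoising, skull-stripping, and bias-correction stages are not reproduced.

```python
# Sketch of the final intensity-classification step only: a multi-class Otsu
# threshold splits (already denoised, skull-stripped, bias-corrected) brain
# intensities into CSF, GM and WM. Synthetic intensities; earlier pipeline
# stages are not reproduced here.
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(1)
# Hypothetical brain voxel intensities drawn from three tissue classes.
intensities = np.concatenate([
    rng.normal(40, 8, 20000),    # CSF-like
    rng.normal(100, 10, 40000),  # GM-like
    rng.normal(150, 10, 40000),  # WM-like
])

thresholds = threshold_multiotsu(intensities, classes=3)
labels = np.digitize(intensities, bins=thresholds)   # 0=CSF, 1=GM, 2=WM
print("thresholds:", thresholds, "class counts:", np.bincount(labels))
```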

14.
Segmentation in image processing finds immense application in various areas. Image processing techniques can be used in medical applications for various diagnoses. In this article, we attempt to apply segmentation techniques to brain images. Segmentation of brain magnetic resonance images (MRI) can be used to identify various neural disorders. We can segment abnormal tissues from the MRI, which can be used for early detection of brain tumors. Segmentation, when applied to MRI, helps in extracting the different brain tissues such as white matter, gray matter and cerebrospinal fluid. Segmentation of these tissues helps in determining their volumes in the three-dimensional brain MRI. The study of volume changes helps in analyzing many neural disorders such as epilepsy and Alzheimer disease. We have proposed a hybrid method combining the classical fuzzy c-means algorithm with a neural network for segmentation.

15.
MRI findings of mitochondrial encephalomyopathy in children
Objective: To retrospectively study the MRI findings of 20 children with mitochondrial encephalomyopathy. Methods: All 20 children with confirmed mitochondrial encephalomyopathy had positive intracranial MRI findings, and the types of MRI findings were studied. Results: In all 20 children, the intracranial lesions showed low signal on T1-weighted and high signal on T2-weighted images; 8 had varying degrees of brain atrophy; 18 showed predominantly gray matter involvement, of whom 4 had involvement of both gray and white matter; and 2 showed predominantly white matter involvement. Conclusion: The MRI findings of mitochondrial encephalomyopathy in children are varied. When MRI shows abnormal gray matter signal, brain atrophy, atypical infarction, or white matter lesions accompanied by clinically unexplained multi-system symptoms, this disease should be considered.

16.
Johansen-Berg H. NeuroImage 2012, 62(2):1293-1298
The brain is continually changing its function and structure in response to changing environmental demands. Magnetic resonance imaging (MRI) methods can be used to repeatedly scan the same individuals over time and in this way have provided powerful tools for assessing such brain change. Functional MRI has provided important insights into changes that occur with learning or recovery, but this review will focus on the complementary information that can be provided by structural MRI methods. Structural methods have been powerful in indicating when and where changes occur in both gray and white matter with learning and recovery. However, the measures that we derive from structural MRI are typically ambiguous in biological terms. An important future challenge is to develop methods that will allow us to determine precisely what has changed.

17.
76-space analysis of grey matter diffusivity: methods and applications
Liu T, Young G, Huang L, Chen NK, Wong ST. NeuroImage 2006, 31(1):51-65
Diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI) allow in vivo investigation of the molecular motion of tissue water at a microscopic level in cerebral gray matter (GM) and white matter (WM). The DWI/DTI measure of water diffusion has proven invaluable for the study of many neurodegenerative diseases (e.g., Alzheimer's disease and Creutzfeldt-Jakob disease) that predominantly involve GM. Thus, quantitative analysis of GM diffusivity is of scientific interest and promises to have a clinical impact on the investigation of normal brain aging and neuropathology. In this paper, we propose an automated framework for analysis of GM diffusivity in 76 standard anatomic subdivisions of gray matter to facilitate studies of neurodegenerative and other gray matter neurological diseases. The computational framework includes three enabling technologies: (1) automatic parcellation of structural MRI GM into 76 precisely defined neuroanatomic subregions ("76-space"), (2) automated segmentation of GM, WM and CSF based on DTI data, and (3) automatic measurement of the average apparent diffusion coefficient (ADC) in each segmented GM subregion. We evaluate and validate this computational framework for 76-space GM diffusivity analysis using data from normal volunteers and from patients with Creutzfeldt-Jakob disease.
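A small sketch of the last enabling technology, measuring the average ADC per parcellated subregion, assuming a co-registered label volume; the ADC map and the toy four-region parcellation below are synthetic stand-ins for the 76-space atlas.

```python
# Sketch of the per-region measurement step: average ADC within each
# parcellated GM subregion, given a co-registered label volume. Synthetic
# ADC map and a toy 4-region parcellation stand in for the 76-space atlas.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
adc = rng.normal(0.8e-3, 0.1e-3, size=(32, 32, 32))     # toy ADC map (mm^2/s)
parcellation = rng.integers(0, 5, size=(32, 32, 32))    # 0 = background, 1..4 = regions

region_ids = np.arange(1, 5)
mean_adc = ndimage.mean(adc, labels=parcellation, index=region_ids)
for rid, val in zip(region_ids, mean_adc):
    print(f"region {rid}: mean ADC = {val:.2e} mm^2/s")
```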

18.
Objective: To introduce a dynamic fuzzy clustering algorithm and apply it to the segmentation of magnetic resonance images. Methods: Brain MR images were first preprocessed to remove non-brain tissue such as skull and muscle, keeping only the brain tissue; a fuzzy K-means clustering algorithm was then used to compute the fuzzy membership functions of white matter, gray matter, and cerebrospinal fluid. Results: The fuzzy K-means clustering algorithm segmented gray matter, white matter, and cerebrospinal fluid well in brain MR images. Conclusion: Segmenting brain MR images with the fuzzy K-means clustering algorithm achieves good segmentation results.
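A minimal fuzzy c-means (fuzzy K-means) implementation on voxel intensities, sketching how fuzzy membership functions for CSF, GM, and WM can be computed; the synthetic intensities, fuzzifier value, and iteration count are illustrative assumptions, not the paper's settings.

```python
# Minimal fuzzy c-means on voxel intensities, sketching the fuzzy membership
# computation described above (three clusters for CSF, GM, WM). Illustrative
# only; synthetic intensities and a fixed number of iterations.
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, eps=1e-9):
    rng = np.random.default_rng(0)
    centers = rng.choice(x, size=n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        dist = np.abs(x[:, None] - centers[None, :]) + eps          # (N, C)
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)                    # fuzzy memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)    # update centers
    return centers, u

rng = np.random.default_rng(2)
intensities = np.concatenate([rng.normal(40, 8, 3000),    # CSF-like
                              rng.normal(100, 10, 6000),  # GM-like
                              rng.normal(150, 10, 6000)]) # WM-like
centers, memberships = fuzzy_c_means(intensities)
print("cluster centers:", np.sort(centers))
hard_labels = memberships.argmax(axis=1)                  # hard labels if needed
```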

19.
Intensity normalization is an important pre-processing step in the study and analysis of magnetic resonance images (MRI) of human brains. As most parametric supervised automatic image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization takes on a very significant role. One of the fast and accurate approaches proposed for intensity normalization is that of Nyul and colleagues. In this work, we present, for the first time, an extensive validation of this approach in a real clinical domain where, even after intensity inhomogeneity correction that accounts for scanner-specific artifacts, the MRI volumes can be affected by variations such as data heterogeneity resulting from multi-site, multi-scanner acquisitions, the presence of multiple sclerosis (MS) lesions and the stage of disease progression in the brain. Using distributional divergence criteria, we evaluate the effectiveness of the normalization in rendering, under the distributional assumptions of segmentation approaches, intensities that are more homogeneous for the same tissue type while simultaneously resulting in better tissue type separation. We also demonstrate the advantage of the decile-based piece-wise linear approach on the task of MS lesion segmentation against a linear normalization approach over three image segmentation algorithms: a standard Bayesian classifier, an outlier detection based approach and a Bayesian classifier with Markov Random Field (MRF) based post-processing. Finally, to demonstrate that the effectiveness of normalization is independent of the complexity of the segmentation algorithm, we evaluate the Nyul method against linear normalization on Bayesian algorithms of increasing complexity, including a standard Bayesian classifier with Maximum Likelihood parameter estimation and a Bayesian classifier with integrated data priors, in addition to the above Bayesian classifier with MRF based post-processing to smooth the posteriors. In all relevant cases, the observed results are verified for statistical significance using significance tests.
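A simplified sketch of decile-based piecewise-linear intensity normalization in the spirit of the Nyul approach: each image's percentile landmarks are mapped onto a common standard scale with linear interpolation. The fixed standard scale and toy images below are assumptions; the published method's landmark-learning stage is not reproduced.

```python
# Simplified sketch of decile-based piecewise-linear intensity normalization:
# map each image's intensity deciles onto a common standard scale with
# np.interp. The landmark-learning stage of the published method is reduced
# here to a fixed standard scale for illustration.
import numpy as np

def normalize_deciles(image, standard_scale, pcts=np.arange(0, 101, 10)):
    """Piecewise-linearly map the image's percentile landmarks to standard_scale."""
    landmarks = np.percentile(image, pcts)
    return np.interp(image, landmarks, standard_scale)

rng = np.random.default_rng(5)
standard_scale = np.linspace(0, 100, 11)              # assumed standard landmark values

img_a = rng.gamma(2.0, 20.0, size=(64, 64))           # two toy images with
img_b = rng.gamma(2.0, 35.0, size=(64, 64))           # different intensity ranges

norm_a = normalize_deciles(img_a, standard_scale)
norm_b = normalize_deciles(img_b, standard_scale)
print(np.percentile(norm_a, 50), np.percentile(norm_b, 50))   # medians now comparable
```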

20.
This paper describes methods for white matter segmentation in brain images and the generation of cortical surfaces from the segmentations. We have developed a system that allows a user to start with a brain volume, obtained by modalities such as MRI or cryosection, and constructs a complete digital representation of the cortical surface. The methodology consists of three basic components: local parametric modeling and Bayesian segmentation; surface generation and local quadratic coordinate fitting; and surface editing. Segmentations are computed by parametrically fitting known density functions to the histogram of the image using the expectation maximization algorithm [DLR77]. The parametric fits are obtained locally rather than globally over the whole volume to overcome local variations in gray levels. To represent the boundary of the gray and white matter, we use triangulated meshes generated with isosurface generation algorithms [GH95]. A complete system of local parametric quadratic charts [JWM+95] is superimposed on the triangulated graph to facilitate smoothing and geodesic curve tracking. Algorithms for surface editing include extraction of the largest closed surface. Results for several macaque brains are presented, comparing automated and manual surface generation.
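As an illustration of the EM-based histogram modeling step, the sketch below fits a two-component Gaussian mixture to synthetic voxel intensities with scikit-learn and assigns classes from the posteriors; it is a simplified stand-in for the local parametric fitting described above.

```python
# Sketch of the EM-based histogram modeling step: fit a Gaussian mixture to
# voxel intensities and assign classes from the posteriors, as a simplified
# stand-in for the local parametric fitting described above.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
intensities = np.concatenate([rng.normal(90, 12, 30000),      # GM-like
                              rng.normal(150, 10, 30000)])    # WM-like

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities.reshape(-1, 1))
labels = gmm.predict(intensities.reshape(-1, 1))
means = gmm.means_.ravel()
print("fitted class means:", np.sort(means))                  # approximately [90, 150]
print("voxels per class:", np.bincount(labels))
```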
