Similar Articles
1.
Developing predictive intelligence in neuroscience for learning how to generate multimodal medical data from a single modality can improve neurological disorder diagnosis with minimal data acquisition resources. Existing deep learning frameworks are mainly tailored for images, which might fail in handling geometric data (e.g., brain graphs). Specifically, predicting a target brain graph from a single source brain graph remains largely unexplored. Solving such a problem is generally challenged by domain fracture, caused by the difference in distribution between the source and target domains. Besides, solving the prediction and domain fracture problems independently might not be optimal for either task. To address these challenges, we propose a Learning-guided Graph Dual Adversarial Domain Alignment (LG-DADA) framework for predicting a target brain graph from a source brain graph. The proposed LG-DADA is grounded in three fundamental contributions: (1) a source data pre-clustering step using manifold learning, which handles source data heterogeneity and circumvents mode collapse in generative adversarial learning, (2) a domain alignment of the source domain to the target domain by adversarially learning their latent representations, and (3) a dual adversarial regularization that jointly learns a source embedding of training and testing brain graphs using two discriminators and predicts the training target graphs. Results on morphological brain graph synthesis showed that our method produces better prediction accuracy and visual quality compared with other graph synthesis methods.
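To make the dual adversarial regularization concrete, here is a minimal PyTorch sketch of the three loss terms, not the authors' code: the module names, layer sizes, the 595-edge vectorization, and the 0.1 loss weights are all illustrative assumptions.

```python
# Minimal sketch of dual adversarial domain alignment (not the authors' code).
# Module/variable names and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Embedder(nn.Module):          # maps a vectorized brain graph to a latent code
    def __init__(self, n_feats, n_latent=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_feats, 128), nn.ReLU(),
                                 nn.Linear(128, n_latent))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):     # critic on latent codes (source/target or train/test)
    def __init__(self, n_latent=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

embed, predict = Embedder(595), nn.Linear(64, 595)   # 595 edges = 35x34/2, an assumption
d_domain, d_embed = Discriminator(), Discriminator() # the two discriminators
bce, mse = nn.BCELoss(), nn.MSELoss()

src, tgt = torch.randn(8, 595), torch.randn(8, 595)  # toy source/target graph batches
z = embed(src)
# (1) domain alignment: fool d_domain into seeing source codes as target-like
loss_align = bce(d_domain(z), torch.ones(8, 1))
# (2) embedding regularization: d_embed matches train/test embedding distributions
loss_embed = bce(d_embed(z), torch.ones(8, 1))
# (3) prediction of the target graph from the aligned code
loss_pred = mse(predict(z), tgt)
loss = loss_pred + 0.1 * (loss_align + loss_embed)   # 0.1 weighting is an assumption
loss.backward()  # in practice the discriminators are trained in alternation
```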

2.
3.
Brain connectivity networks, derived from magnetic resonance imaging (MRI), non-invasively quantify the relationship in function, structure, and morphology between two brain regions of interest (ROIs) and give insights into gender-related connectional differences. However, to the best of our knowledge, studies on gender differences in brain connectivity have been limited to investigating pairwise (i.e., low-order) relationships across ROIs, overlooking the complex high-order interconnectedness of the brain as a network. A few recent works on neurological disorders addressed this limitation by introducing the brain multiplex, which is composed of a source network intra-layer, a target intra-layer, and a convolutional inter-layer capturing the high-level relationship between both intra-layers. However, brain multiplexes are built from at least two different brain networks, hindering their application to connectomic datasets with single brain networks (e.g., functional networks). To fill this gap, we propose the Adversarial Brain Multiplex Translator (ABMT), the first work for predicting brain multiplexes from a source network using geometric adversarial learning to investigate gender differences in the human brain. Our framework comprises: (i) a geometric source-to-target network translator mimicking a U-Net architecture with skip connections, (ii) a conditional discriminator which distinguishes between predicted and ground-truth target intra-layers, and (iii) a multi-layer perceptron (MLP) classifier which supervises the prediction of the target multiplex using the subject class label (e.g., gender). Our experiments on a large dataset demonstrated that predicted multiplexes significantly boost gender classification accuracy compared with source networks and identify, for the first time, both low- and high-order gender-specific brain multiplex connections. Our ABMT source code is available on GitHub at https://github.com/basiralab/ABMT.
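A hedged sketch of how ABMT's three components could combine into one objective; the released code is at the GitHub link above, while the simple MLP stand-ins, the flattened 35-ROI graphs, and the equal loss weights below are our assumptions.

```python
# Illustrative sketch of ABMT's three-part objective (our naming, not the released code).
import torch
import torch.nn as nn

n_roi = 35                                  # assumed number of ROIs
translator = nn.Sequential(nn.Linear(n_roi * n_roi, 256), nn.ReLU(),
                           nn.Linear(256, n_roi * n_roi))     # stand-in for the graph U-Net
disc = nn.Sequential(nn.Linear(n_roi * n_roi, 64), nn.ReLU(),
                     nn.Linear(64, 1), nn.Sigmoid())          # discriminator (condition omitted)
clf = nn.Sequential(nn.Linear(2 * n_roi * n_roi, 64), nn.ReLU(),
                    nn.Linear(64, 1), nn.Sigmoid())           # MLP classifier on the multiplex

src = torch.randn(4, n_roi * n_roi)         # toy source intra-layer (flattened)
tgt = torch.randn(4, n_roi * n_roi)         # ground-truth target intra-layer
y = torch.randint(0, 2, (4, 1)).float()     # subject labels (e.g., gender)

pred = translator(src)
bce = nn.BCELoss()
loss_adv = bce(disc(pred), torch.ones(4, 1))              # fool the discriminator
loss_rec = nn.functional.l1_loss(pred, tgt)               # match the true target layer
loss_cls = bce(clf(torch.cat([src, pred], dim=1)), y)     # label-supervised multiplex
(loss_rec + loss_adv + loss_cls).backward()               # equal weights are an assumption
```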

4.
Schizophrenia (SZ) is a chronic psychiatric disorder, often accompanied by impairments in perception, thinking, emotion, and behavior. MRI can be used to observe structural and functional brain abnormalities in SZ patients, providing important support for identifying biomarkers of psychiatric disorders. Multiple studies have constructed structural and functional brain networks from multimodal MRI and, using connectomic analysis methods, found abnormalities in the complex brain networks of SZ patients, such as increased shortest path length, decreased clustering coefficient and network efficiency, and damage to hub nodes, lending further support to the disconnection hypothesis of SZ. This article reviews recent research on structural, functional, and multimodal brain networks in SZ, examines the specific topological structure and properties of SZ's complex brain networks, and discusses the limitations of existing methods and directions for future work.

5.
Uncovering the non-trivial brain structure–function relationship is fundamentally important for revealing organizational principles of the human brain. However, it is challenging to infer a reliable relationship between individual brain structure and function, e.g., the relation between individual brain structural connectivity (SC) and functional connectivity (FC). Brain structure–function displays a distributed and heterogeneous pattern: many functional relationships arise from non-overlapping sets of anatomical connections. This complex relation can be interwoven with widespread individual structural and functional variations. Motivated by advances in generative adversarial networks (GANs) and graph convolutional networks (GCNs) in deep learning, we propose a multi-GCN based GAN (MGCN-GAN) to infer individual SC from the corresponding FC by automatically learning the complex associations between individual brain structural and functional networks. The generator of MGCN-GAN is composed of multiple multi-layer GCNs designed to model complex indirect connections in the brain network. The discriminator of MGCN-GAN is a single multi-layer GCN which aims to distinguish predicted SC from real SC. To overcome the inherently unstable behavior of GANs, we designed a new structure-preserving (SP) loss function to guide the generator to learn the intrinsic SC patterns more effectively. Using the Human Connectome Project (HCP) and Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets as test beds, our MGCN-GAN model can generate reliable individual SC from FC. This result implies that there may exist a common regulation between specific brain structural and functional architectures across different individuals.
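The abstract names a structure-preserving (SP) loss without giving its form here; one plausible rendering, assumed rather than taken from the paper, is a reconstruction term plus a Pearson-correlation term so the predicted SC keeps the real SC's connectivity pattern.

```python
# A plausible structure-preserving (SP) loss: reconstruction error plus a
# correlation term. The exact SP formulation is the paper's; this combination
# and the alpha weighting are our assumptions.
import torch

def sp_loss(pred_sc, real_sc, alpha=0.5):
    mse = torch.mean((pred_sc - real_sc) ** 2)
    p, r = pred_sc.flatten(), real_sc.flatten()
    p, r = p - p.mean(), r - r.mean()
    pcc = (p * r).sum() / (p.norm() * r.norm() + 1e-8)   # Pearson correlation
    return mse + alpha * (1.0 - pcc)

pred = torch.rand(68, 68, requires_grad=True)            # toy 68-ROI matrices
real = torch.rand(68, 68)
sp_loss(pred, real).backward()
```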

6.
We develop a deep learning framework, the spatio-temporal directed acyclic graph with attention mechanisms (ST-DAG-Att), to predict cognition and disease using functional magnetic resonance imaging (fMRI). The ST-DAG-Att framework comprises two neural networks: (1) a spatio-temporal graph convolutional network (ST-graph-conv) to learn the spatial and temporal information of functional time series at multiple temporal and spatial graph scales, where the graph is represented by the brain functional network, the spatial convolution is over the space of this graph, and the temporal convolution is over the time dimension; and (2) a functional connectivity convolutional network (FC-conv) to learn functional connectivity features, where the functional connectivity is derived from embedded multi-scale fMRI time series and the convolutional operation is applied along both the edge and node dimensions of the brain functional network. The framework also includes an attention component, functional connectivity-based spatial attention (FC-SAtt), which generates a spatial attention map by learning the local dependency among high-level functional connectivity features and emphasizing meaningful brain regions. Moreover, both the ST-graph-conv and FC-conv networks are designed as feed-forward models structured as directed acyclic graphs (DAGs). Our experiments employ two large-scale datasets, Adolescent Brain Cognitive Development (ABCD, n=7693) and the Open Access Series of Imaging Studies-3 (OASIS-3, n=1786). Our results show that the ST-DAG-Att model generalizes from cognition prediction to age prediction. It is robust to independent samples obtained from different sites of the ABCD study, and it outperforms existing machine learning techniques, including support vector regression (SVR), a mixture of elastic net and random forest, spatio-temporal graph convolution, and BrainNetCNN.
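As an illustration of the ST-graph-conv idea, a minimal PyTorch layer that applies a spatial graph convolution per frame followed by a temporal 1D convolution; the layer sizes, the toy 90-ROI adjacency, and the einsum formulation are assumptions, not the paper's architecture.

```python
# Minimal spatio-temporal graph convolution in the spirit of ST-graph-conv:
# one spatial graph convolution (A_hat X W) per frame, then a temporal 1D
# convolution across frames. All sizes are illustrative.
import torch
import torch.nn as nn

class STGraphConv(nn.Module):
    def __init__(self, a_hat, c_in, c_out, k_t=3):
        super().__init__()
        self.register_buffer("a_hat", a_hat)          # normalized adjacency (n_roi x n_roi)
        self.spatial = nn.Linear(c_in, c_out)         # feature transform W
        self.temporal = nn.Conv1d(c_out, c_out, k_t, padding=k_t // 2)

    def forward(self, x):                             # x: (batch, time, n_roi, c_in)
        x = self.spatial(torch.einsum("ij,btjc->btic", self.a_hat, x))
        b, t, n, c = x.shape                          # temporal conv over the time axis
        x = self.temporal(x.permute(0, 2, 3, 1).reshape(b * n, c, t))
        return x.reshape(b, n, c, t).permute(0, 3, 1, 2)

a = torch.softmax(torch.rand(90, 90), dim=1)          # toy functional network, 90 ROIs
out = STGraphConv(a, c_in=1, c_out=16)(torch.randn(2, 100, 90, 1))
print(out.shape)                                      # torch.Size([2, 100, 90, 16])
```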

7.
Most image segmentation algorithms are trained on binary masks formulated as a per-pixel classification task. However, in applications such as medical imaging, this “black-and-white” approach is too constraining because the contrast between two tissues is often ill-defined: voxels located on objects' edges contain a mixture of tissues (a partial volume effect). Consequently, assigning a single “hard” label can result in a detrimental approximation. Instead, a soft prediction containing non-binary values would overcome that limitation. In this study, we introduce SoftSeg, a deep learning training approach that takes advantage of soft ground-truth labels and is not bound to binary predictions. SoftSeg solves a regression instead of a classification problem. This is achieved by using (i) no binarization after preprocessing and data augmentation, (ii) a normalized ReLU final activation layer (instead of sigmoid), and (iii) a regression loss function (instead of the traditional Dice loss). We assess the impact of these three features on three open-source MRI segmentation datasets from the spinal cord gray matter, multiple sclerosis brain lesion, and multimodal brain tumor segmentation challenges. Across multiple random dataset splits, SoftSeg outperformed the conventional approach, increasing the Dice score by 2.0% on the gray matter dataset (p=0.001), 3.3% for the brain lesions, and 6.5% for the brain tumors. SoftSeg produces consistent soft predictions at tissue interfaces and shows increased sensitivity for small objects (e.g., multiple sclerosis lesions). The richness of soft labels could represent inter-expert variability and the partial volume effect, and complement model uncertainty estimation, which is typically unclear with binary predictions. The developed training pipeline can easily be incorporated into most existing deep learning architectures. SoftSeg is implemented in the freely available deep learning toolbox ivadomed (https://ivadomed.org).
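The two concrete ingredients, a normalized ReLU head and a regression loss, are easy to sketch; the snippet below is a simplified rendering (the reference implementation lives in ivadomed), with MSE standing in for whichever regression loss is chosen.

```python
# Sketch of SoftSeg's two key changes to a standard segmentation head:
# a normalized ReLU final activation and a regression loss on soft labels.
import torch

def normalized_relu(logits, eps=1e-8):
    act = torch.relu(logits)
    # rescale each sample into [0, 1] instead of squashing with a sigmoid
    return act / (act.amax(dim=(1, 2, 3), keepdim=True) + eps)

def regression_loss(pred, soft_target):    # MSE as a stand-in regression loss
    return torch.mean((pred - soft_target) ** 2)

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
soft_gt = torch.rand(2, 1, 64, 64)         # non-binarized ground truth in [0, 1]
regression_loss(normalized_relu(logits), soft_gt).backward()
```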

8.
The magnetic resonance imaging (MRI) study of normal brain development currently conducted by the Brain Development Cooperative Group represents the most extensive MRI study of brain and behavioral development from birth through young adulthood ever conducted. This multi-center project, sponsored by four Institutes of the National Institutes of Health, uses a combined longitudinal and cross-sectional design to characterize normal, healthy brain and behavioral development. Children, from newborn through 18-plus years of age, receive comprehensive behavioral, neurological, and multimodal MRI evaluations via Objective-2 (birth through 4 years 5 months of age) and Objective-1 (4 years 6 months through 18 years of age and older). This report presents methods (e.g., neurobehavioral assessment, brain scanning) and representative preliminary results (e.g., growth, behavior, brain development) for children from newborn through 4 years 5 months of age. To date, 75 participants from birth through 4 years 5 months have been successfully brain scanned during natural sleep (i.e., without sedation), most with multiple longitudinal scans (45 children completing at least three scans, 22 completing four or more). Results from this younger age range will increase our knowledge and understanding of healthy brain and neurobehavioral development throughout an important, dynamic, and rapid growth period within the human life span; determine developmental associations among measures of brain, other physical characteristics, and behavior; and facilitate the development of automated, quantitative MR image analyses for neonates, infants, and young children. The correlated brain MRI and neurobehavioral database will be released for use by the research and clinical communities at a future date.

9.
Recent studies have shown that multimodal neuroimaging data provide complementary information about the brain, and latent-space-based methods have achieved promising results in fusing multimodal data for Alzheimer's disease (AD) diagnosis. However, most existing methods treat all features equally and adopt non-orthogonal projections to learn the latent space, which cannot retain enough discriminative information. In addition, they usually preserve the relationships among subjects in the latent space based on a similarity graph constructed on the original features, which noise and redundant features significantly corrupt. To address these limitations, we propose an Orthogonal Latent space learning with Feature weighting and Graph learning (OLFG) model for multimodal AD diagnosis. Specifically, we map multiple modalities into a common latent space by orthogonally constrained projection to capture discriminative information for AD diagnosis. A feature weighting matrix then adaptively ranks the importance of features for AD diagnosis. Moreover, we devise a regularization term with a learned graph to preserve the local structure of the data in the latent space, and we integrate graph construction into the learning process to accurately encode the relationships among samples. Instead of constructing a similarity graph for each modality, we learn a joint graph for multiple modalities to capture the correlations among them. Finally, the representations in the latent space are projected into the target space to perform AD diagnosis. An alternating optimization algorithm with proven convergence is developed to solve the optimization objective. Extensive experimental results show the effectiveness of the proposed method.
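An orthogonality constraint of this kind can be maintained during alternating optimization by retracting the projection matrix onto the Stiefel manifold after each update; the SVD-based retraction below is a generic sketch under that assumption, not OLFG's actual solver.

```python
# Keeping W^T W = I during alternating optimization: after each gradient step,
# retract W to the nearest matrix with orthonormal columns via SVD.
# A generic numerical sketch, not the OLFG algorithm itself.
import numpy as np

def retract_orthogonal(w):
    u, _, vt = np.linalg.svd(w, full_matrices=False)
    return u @ vt                        # nearest matrix with orthonormal columns

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 50))           # one modality: 100 subjects x 50 features
w = retract_orthogonal(rng.normal(size=(50, 10)))   # projection to a 10-d latent space
z = x @ w                                # latent representations
w = retract_orthogonal(w - 0.01 * rng.normal(size=w.shape))  # step + retraction
print(np.allclose(w.T @ w, np.eye(10), atol=1e-8))  # True: orthogonality preserved
```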

10.
Neuropsychiatric disorders (NDs) have been shown to be associated with both the structure and the function of brain regions, so structural and functional data can usefully be combined in a comprehensive analysis. While brain structural MRI (sMRI) images contain anatomic and morphological information about NDs, functional MRI (fMRI) images carry complementary information. However, efficient extraction and fusion of sMRI and fMRI data remain challenging. In this study, we develop an enhanced multi-modal graph convolutional network (MME-GCN) for binary classification between patients with NDs and healthy controls, based on the fusion of structural and functional graphs of brain regions. First, based on the same brain atlas, we construct structural and functional graphs from sMRI and fMRI data, respectively. Second, we use machine learning to extract important features from the structural graph network. Third, we use these extracted features to adjust the corresponding edge weights in the functional graph network. Finally, we train a multi-layer GCN and use it for the binary classification task. MME-GCN achieved 93.71% classification accuracy on the open dataset provided by the Consortium for Neuropsychiatric Phenomics. In addition, we analyzed the important features selected from the structural graph and verified them in the functional graph. Using MME-GCN, we found several specific brain connections important to NDs.
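A sketch of the fusion step described above: importances learned on the structural graph rescale the functional edge weights before GCN training. Using a random forest as the "machine learning" step, and all toy sizes, are our assumptions.

```python
# Sketch of structure-guided edge reweighting (our reading of the fusion step).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_sub, n_edge = 120, 300                         # toy: 120 subjects, 300 candidate edges
struct_edges = rng.normal(size=(n_sub, n_edge))  # flattened structural connectivity
labels = rng.integers(0, 2, size=n_sub)          # patients vs. healthy controls

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(struct_edges, labels)
importance = rf.feature_importances_             # one weight per edge
importance = importance / importance.max()       # normalize to [0, 1]

func_edges = rng.normal(size=(n_sub, n_edge))    # flattened functional connectivity
func_adjusted = func_edges * importance          # reweighted edges enter the GCN
```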

11.
Detection and pathogenic-factor analysis of Parkinson's disease (PD) have practical significance for its diagnosis and treatment. However, traditional research paradigms are commonly based on a single type of neural imaging data, which easily overlooks the complementarity between multimodal imaging genetics data. Existing research also pays little attention to a comprehensive framework covering both patient detection and pathogenic-factor analysis for PD. Based on functional magnetic resonance imaging (fMRI) data and single nucleotide polymorphism (SNP) data, a novel brain disease multimodal data analysis model is proposed in this paper. First, exploiting the complementarity between the two types of data, canonical correlation analysis is used to construct fused features for each subject. Second, based on artificial neural networks, a fusion feature analysis tool named clustering evolutionary random neural network ensemble (CERNNE) is designed. This method integrates multiple randomly constructed neural networks and uses a clustering evolution strategy to optimize the ensemble learner through adaptive selective integration, selecting discriminative features for PD analysis and ensuring the generalization performance of the ensemble model. Combined with the data fusion scheme, CERNNE forms a multi-task analysis framework that recognizes PD patients and predicts PD-associated brain regions and genes. In multimodal data experiments, the proposed framework shows better classification performance and a stronger ability to predict pathogenic factors, providing a new perspective for the diagnosis of PD.
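One way the clustering-based selective ensemble could look in code, as a rough sketch only: randomly configured MLPs are clustered by their validation predictions and one representative per cluster is kept. The scikit-learn components and all sizes are stand-ins for the paper's design.

```python
# Rough sketch of clustering-based selective ensembling (not CERNNE itself).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
x, y = rng.normal(size=(200, 40)), rng.integers(0, 2, 200)   # toy fused features
x_val, y_val = x[150:], y[150:]

members = [MLPClassifier(hidden_layer_sizes=(int(rng.integers(16, 64)),),
                         max_iter=500, random_state=i).fit(x[:150], y[:150])
           for i in range(10)]                   # randomly configured base networks
preds = np.array([m.predict_proba(x_val)[:, 1] for m in members])

k = 3                                            # number of behavior clusters
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(preds)
selected = [members[np.flatnonzero(km.labels_ == c)[0]] for c in range(k)]
ensemble_prob = np.mean([m.predict_proba(x_val)[:, 1] for m in selected], axis=0)
print("ensemble acc:", np.mean((ensemble_prob > 0.5) == y_val))
```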

12.
Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have been used to estimate continuous clinical variables from brain images. However, most existing regression methods estimate multiple clinical variables separately and thus cannot exploit the useful intrinsic correlations among different clinical variables. Moreover, those regression methods often use only a single data modality (usually structural MRI), without considering the complementary information that different modalities can provide. In this paper, we propose a general methodology, multi-modal multi-task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to the prediction of different variables. Specifically, our method contains two key components: (1) a multi-task feature selection which selects the common subset of relevant features for multiple variables from each modality, and (2) a multi-modal support vector machine which fuses the selected features from all modalities to predict multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). In the first set of experiments, we estimate two clinical variables, the Mini-Mental State Examination (MMSE) and the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), as well as one categorical variable (with values 'AD', 'MCI', or 'HC'), from the baseline MRI, FDG-PET, and CSF data. In the second set, we predict the 2-year changes of MMSE and ADAS-Cog scores and the conversion of MCI to AD from the same baseline data. The results of both sets of experiments demonstrate that the proposed M3T learning scheme achieves better performance on both regression and classification tasks than conventional learning methods.
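A compact sketch of the two M3T components using scikit-learn; MultiTaskLasso stands in for the paper's multi-task feature selector and plain concatenation plus SVR stands in for its multi-modal SVM fusion, so treat this as an analogy rather than a reproduction.

```python
# Sketch of M3T's two stages: shared multi-task feature selection per modality,
# then an SVM on the fused selected features. Stand-in components throughout.
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 186                                            # 45 AD + 91 MCI + 50 HC
modalities = {"MRI": rng.normal(size=(n, 93)),     # toy feature blocks
              "PET": rng.normal(size=(n, 93)),
              "CSF": rng.normal(size=(n, 3))}
targets = rng.normal(size=(n, 2))                  # toy MMSE and ADAS-Cog scores

selected = []
for name, feats in modalities.items():
    sel = MultiTaskLasso(alpha=0.1).fit(feats, targets)   # shared sparsity across tasks
    keep = np.any(sel.coef_ != 0, axis=0)                 # features used by any task
    selected.append(feats[:, keep] if keep.any() else feats[:, :1])

fused = np.hstack(selected)                        # simple concatenation stands in for
mmse_model = SVR().fit(fused, targets[:, 0])       # the paper's multi-modal SVM fusion
print(mmse_model.predict(fused[:2]))
```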

13.
With recent technological advances, biological datasets, often represented by networks (i.e., graphs) of interacting entities, proliferate with unprecedented complexity and heterogeneity. Although modern network science opens new frontiers for analyzing connectivity patterns in such datasets, we still lack data-driven methods for extracting an integral connectional fingerprint of a multi-view graph population, let alone disentangling the typical from the atypical variations across population samples. We present the multi-view graph normalizer network (MGN-Net), a graph neural network based method to normalize and integrate a set of multi-view biological networks into a single connectional template that is centered, representative, and topologically sound. We demonstrate the use of MGN-Net by discovering the connectional fingerprints of healthy and neurologically disordered brain network populations, including Alzheimer's disease and autism spectrum disorder patients. Additionally, by comparing the learned templates of healthy and disordered populations, we show that MGN-Net significantly outperforms conventional network integration methods across extensive experiments in terms of producing the most centered templates, recapitulating unique traits of populations, and preserving the complex topology of biological networks. Our evaluations show that MGN-Net is powerfully generic and easily adaptable to different graph-based problems such as the identification of relevant connections, normalization, and integration.

14.
Histotripsy has previously been applied to target various cranial locations in vitro through an excised human skull. Recently, a transcranial magnetic resonance (MR)-guided histotripsy (tcMRgHt) system was developed, enabling pre-clinical investigation of tcMRgHt for brain surgery. To determine the feasibility of in vivo transcranial histotripsy, tcMRgHt treatment was delivered to eight pigs using a 700-kHz, 128-element, MR-compatible phased-array transducer inside a 3-T magnetic resonance imaging (MRI) scanner. After craniotomy to open an acoustic window to the brain, histotripsy was applied through an excised human calvarium to target the inside of the pig brain based on pre-treatment MRI and fiducial markers. MR images were acquired pre-treatment, immediately post-treatment, and 2–4 h post-treatment to evaluate the acute treatment outcome. Successful histotripsy ablation was observed in all pigs. The MR-evident lesions were well confined within the targeted volume, without evidence of excessive brain edema or hemorrhage outside the target zone. Histology revealed tissue homogenization in the ablation zones with a sharp demarcation between destroyed and unaffected tissue, which correlated well with the radiographic treatment zones on MRI. These results are the first to support the in vivo feasibility of tcMRgHt in the pig brain, enabling further investigation of the use of tcMRgHt for brain surgery.

15.
Fusing multi-modality data is crucial for accurate identification of brain disorders, as different modalities provide complementary perspectives on complex neurodegenerative disease. However, at least four common issues are associated with existing fusion methods. First, many methods simply concatenate features from each modality without considering the correlations among modalities. Second, most methods make predictions with a single classifier, which might not be able to address the heterogeneity of Alzheimer's disease (AD) progression. Third, many methods perform feature selection (or reduction) and classifier training in two independent steps, ignoring the fact that the two pipelined steps are highly related. Fourth, neuroimaging data are missing for some participants (e.g., missing PET data) due to no-shows or dropout. In this paper, to address these issues, we propose an early AD diagnosis framework via a novel multi-modality latent space inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, then map the learned latent representations into the label space to learn multiple diversified classifiers, and finally obtain more reliable classification results with an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our proposed models outperform other state-of-the-art methods.
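For the complete-data case, the pipeline can be sketched as per-modality projection into a latent space followed by an ensemble of diversified SVMs; PCA and bootstrap resampling below are stand-ins for the learned latent space and classifier diversification in the paper.

```python
# Sketch of the complete-data pipeline with stand-in components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
mri, pet = rng.normal(size=(150, 90)), rng.normal(size=(150, 90))  # toy modalities
y = rng.integers(0, 2, 150)

# per-modality projection into a shared latent dimension (PCA as a stand-in)
latent = np.hstack([PCA(n_components=10).fit_transform(m) for m in (mri, pet)])

svms = []
for i in range(5):                                   # diversified members via bootstrap
    idx = rng.choice(150, size=150, replace=True)
    svms.append(SVC(probability=True, random_state=i).fit(latent[idx], y[idx]))
prob = np.mean([s.predict_proba(latent)[:, 1] for s in svms], axis=0)
print("ensemble acc:", np.mean((prob > 0.5) == y))
```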

16.

Purpose

We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols.

Methods

GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, to build and execute an array of image analysis routines, and to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer.

Results

GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast.

Conclusions

GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

17.
Multimodal fusion of different types of neuroimaging data provides an irreplaceable opportunity to take advantage of complementary cross-modal information that may only partially be contained in a single modality. To jointly analyze multimodal data, deep neural networks can be especially useful because many studies have suggested that deep learning is efficient at revealing complex and non-linear relations buried in data. However, most deep models, e.g., convolutional neural networks and their numerous extensions, can only operate on regular Euclidean data such as voxels in 3D MRI. Interrelated and hidden structures that extend beyond grid neighbors, such as brain connectivity, may be overlooked. Moreover, how to effectively incorporate neuroscience knowledge into multimodal data fusion within a single deep framework is understudied. In this work, we developed a graph-based deep neural network to simultaneously model brain structure and function in mild cognitive impairment (MCI): the topology of the graph is initialized using the structural network (from diffusion MRI) and iteratively updated by incorporating functional information (from functional MRI) to maximize the capability of differentiating MCI patients from elderly normal controls. This results in a new connectome obtained by exploring "deep relations" between brain structure and function in MCI patients, which we name the Deep Brain Connectome. Although the deep brain connectome is learned individually, it shows consistent patterns of alteration relative to the structural network at the group level. With the deep brain connectome, our deep model achieves 92.7% classification accuracy on the ADNI dataset.
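The core mechanism, a structurally initialized graph whose topology is updated by classification gradients on functional features, fits in a few lines of PyTorch; the toy GCN below is our illustration, not the authors' model.

```python
# Sketch: adjacency initialized from the structural network and made learnable,
# then updated by gradients from an MCI-vs-control classification loss.
import torch
import torch.nn as nn

n_roi = 90
a_struct = torch.rand(n_roi, n_roi)                 # from diffusion MRI (toy)
a = nn.Parameter(a_struct.clone())                  # learnable "deep connectome"
w1, w2 = nn.Linear(16, 32), nn.Linear(32 * n_roi, 2)

x = torch.randn(4, n_roi, 16)                       # fMRI-derived node features (toy)
y = torch.randint(0, 2, (4,))                       # MCI vs. control labels

opt = torch.optim.Adam([a, *w1.parameters(), *w2.parameters()], lr=1e-3)
h = torch.relu(w1(torch.einsum("ij,bjc->bic", torch.softmax(a, dim=1), x)))
loss = nn.functional.cross_entropy(w2(h.flatten(1)), y)
loss.backward(); opt.step()                         # a now deviates from a_struct
```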

18.
Group-based brain connectivity networks have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Accurately constructing these networks presents a daunting challenge given the difficulties associated with accounting for inter-subject topological variability. Viable approaches to this task must engender networks that capture the constitutive topological properties of the group of subjects' networks that they aim to represent. The conventional approach has been to use a mean or median correlation network (Achard et al., 2006; Song et al., 2009; Zuo et al., 2011) to embody a group of networks, but the degree to which the topological properties of such networks conform to those of the groups they purport to represent has yet to be explored. Here we investigate the performance of these mean and median correlation networks. We also propose an alternative approach based on an exponential random graph modeling framework and compare its performance to that of the conventional approach. Simpson et al. (2011) illustrated the utility of exponential random graph models (ERGMs) for creating brain networks that capture the topological characteristics of a single subject's brain network; however, their advantageousness in producing a brain network that "represents" a group of brain networks had yet to be examined. Here we show that our proposed ERGM approach outperforms the conventional mean and median correlation based approaches and provides an accurate and flexible method for constructing group-based representative brain networks.
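The conventional baselines discussed above are simple to state in code: an element-wise mean or median across subjects' correlation matrices, followed by thresholding (the ERGM alternative needs a dedicated package such as statnet in R and is omitted here). The 0.2 threshold below is arbitrary.

```python
# The mean/median correlation group networks in a few lines.
import numpy as np

rng = np.random.default_rng(0)
ts = rng.normal(size=(30, 200, 90))                     # 30 subjects, 200 TRs, 90 ROIs
corrs = np.array([np.corrcoef(s.T) for s in ts])        # one 90x90 matrix per subject

group_mean = corrs.mean(axis=0)                         # element-wise mean network
group_median = np.median(corrs, axis=0)                 # element-wise median network
adjacency = (group_mean > 0.2).astype(int)              # arbitrary 0.2 threshold
np.fill_diagonal(adjacency, 0)
print(adjacency.sum() // 2, "edges in the mean-correlation group network")
```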

19.
To fully define the target objects of interest in clinical diagnosis, many deep convolutional neural networks (CNNs) use paired, registered multimodal images as inputs for segmentation tasks. However, such paired images are difficult to obtain in some cases. Furthermore, CNNs trained on one specific modality may fail on others when images are acquired with different imaging protocols and scanners. Therefore, developing a unified model that can segment the target objects from unpaired multiple modalities is valuable for many clinical applications. In this work, we propose a 3D unified generative adversarial network which unifies any-to-any modality translation and multimodal segmentation in a single network. Since the anatomical structure is preserved during modality translation, the auxiliary translation task is used to extract modality-invariant features and implicitly generate additional training data. To fully utilize the segmentation-related features, we add a cross-task skip connection with feature recalibration from the translation decoder to the segmentation decoder. Experiments on abdominal organ segmentation and brain tumor segmentation indicate that our method outperforms existing unified methods.
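One plausible reading of the cross-task skip connection with feature recalibration is squeeze-and-excitation-style gating of segmentation features by translation features; the sketch below encodes that assumption and is not taken from the paper.

```python
# Sketch: translation-decoder features gate (recalibrate) segmentation-decoder
# features at the same scale. The SE-style form is our assumption.
import torch
import torch.nn as nn

class CrossTaskSkip(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, seg_feat, trans_feat):
        w = self.gate(trans_feat).view(-1, trans_feat.shape[1], 1, 1, 1)
        return seg_feat * w + seg_feat      # recalibrated + residual

seg = torch.randn(1, 32, 8, 16, 16)         # segmentation-decoder features (toy)
trans = torch.randn(1, 32, 8, 16, 16)       # translation-decoder features (toy)
print(CrossTaskSkip(32)(seg, trans).shape)  # torch.Size([1, 32, 8, 16, 16])
```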

20.
Multi-modal structural magnetic resonance imaging (MRI) provides complementary information and is widely used for diagnosis and treatment planning of gliomas. While machine learning is widely adopted to process and analyze MRI images, most existing tools rely on complete sets of multi-modality images that are costly and sometimes impossible to acquire in real clinical scenarios. In this work, we address the challenge of multi-modality glioma MRI synthesis when MRI modalities are incomplete. We propose a 3D Common-feature learning-based Context-aware Generative Adversarial Network (CoCa-GAN) for this purpose. In particular, CoCa-GAN adopts an encoder-decoder architecture: the encoder maps the input modalities into a common feature space, from which (1) the decoder synthesizes the missing target modality(-ies) and (2) jointly conducted segmentation of the gliomas helps the synthesis task better focus on the tumor regions. The synthesis and segmentation tasks share the same common feature space, and multi-task learning boosts both of their performances. For the encoder that derives the common feature space, we propose and validate two different models: (1) early-fusion CoCa-GAN (eCoCa-GAN) and (2) intermediate-fusion CoCa-GAN (iCoCa-GAN). The experimental results demonstrate that iCoCa-GAN outperforms other state-of-the-art methods in synthesizing missing image modalities. Moreover, our method is flexible in handling arbitrary combinations of input/output image modalities, which makes it feasible to process brain tumor MRI data in real clinical circumstances.
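The two encoder variants can be contrasted in miniature: early fusion stacks modalities at the input of one encoder, while intermediate fusion encodes each modality separately and fuses the feature maps. 2D convolutions and single layers below stand in for the paper's 3D encoder stacks.

```python
# Early vs. intermediate fusion in miniature (illustrative stand-ins only).
import torch
import torch.nn as nn

x_t1, x_t2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)  # toy modalities

# early fusion (eCoCa-GAN style): one encoder over channel-stacked inputs
early_enc = nn.Conv2d(2, 16, 3, padding=1)
z_early = early_enc(torch.cat([x_t1, x_t2], dim=1))

# intermediate fusion (iCoCa-GAN style): per-modality encoders, fused in feature space
enc_t1, enc_t2 = nn.Conv2d(1, 16, 3, padding=1), nn.Conv2d(1, 16, 3, padding=1)
fuse = nn.Conv2d(32, 16, 1)
z_inter = fuse(torch.cat([enc_t1(x_t1), enc_t2(x_t2)], dim=1))
print(z_early.shape, z_inter.shape)    # both: torch.Size([1, 16, 64, 64])
```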
