Similar Articles
12 similar articles found (search time: 0 ms)
1.
Magnetoencephalography (MEG) was used to study the neural mechanisms underlying face and gaze processing in ten normally developing boys aged between 8 and 11 years and 12 adult males. The participants performed two tasks in which they had to decide whether images presented sequentially in pairs depicted the same person or the same motorbike. In the first task, the participants saw pictures of faces in which the eyes were either open or shut, and pictures of motorbikes. In the second task, participants saw pairs of faces with gaze averted to the left or right. In children there was no evidence of the face-sensitive, low-amplitude, short-latency (30-60 ms) activity seen previously in adults. A strong, midline posterior response at approximately 100 ms was observed in children, which was earlier and somewhat stronger to faces than to motorbikes; in adults the signal at this latency was weak. A clear face-sensitive response was seen in adults at 135 ms, predominantly over the right inferior occipito-temporal regions. Although activity was observed in the children at the same latency, it was less prominent, not lateralized, and was evoked similarly by faces and motorbikes. Averted-gaze conditions evoked strong right-lateralized activity at approximately 245 ms in children only. These findings indicate that even in middle childhood the neural mechanisms underlying face processing are less specialized than in adults, with greater early activation of posterior occipital cortices and less specific activation of ventral occipito-temporal cortex.

2.
Face and gaze processing were studied using magnetoencephalography in 10 children with autism and 10 normally developing children, aged between 7 and 12 years. The children performed two tasks in which they had to discriminate whether images of faces presented sequentially in pairs were identical. The images showed four different categories of gaze: direct gaze, eyes averted (left or right), and closed eyes; there was no instruction to focus on the direction of gaze. Images of motorbikes were used as control stimuli. Faces evoked strong activity over posterior brain regions at about 100 ms in both groups of children. A response at 140 ms to faces, observed over extrastriate cortices and thought to be homologous to the N170 in adults, was weak and bilateral in both groups and somewhat weaker (approaching significance) in the children with autism than in the control children. The response to motorbikes differed between the groups at 100 and 140 ms. Averted eyes evoked a strong right-lateralized component at 240 ms in the normally developing children that was weak in the clinical group. By contrast, direct gaze evoked a left-lateralized component at 240 ms only in children with autism. The findings suggest that face and gaze processing in children with autism follows a trajectory somewhat similar to that seen in normal development but with subtle differences. There is also a possibility that other categories of object may be processed in an unusual way. The inter-relationships between these findings remain to be elucidated.

3.
According to a non-hierarchical view of human cortical face processing, selective responses to faces may emerge in a higher-order area of the hierarchy, the lateral part of the middle fusiform gyrus (fusiform face area [FFA]), independently of face-selective responses in the lateral inferior occipital gyrus (occipital face area [OFA]), a lower-order area. Here we provide a stringent test of this hypothesis by gradually revealing segmented face stimuli through strict linear descrambling of phase information [Ales et al., 2012]. Using a short sampling rate (500 ms) of fMRI acquisition and single-subject statistical analysis, we show face-selective responses emerging earlier, that is, at a lower level of structural (i.e., phase) information, in the FFA than in the OFA. In both regions, a face detection response emerged at a lower level of structural information for upright than for inverted faces, in line with behavioral responses and with previous findings of delayed responses to inverted faces in direct recordings of neural activity. Overall, these results support the non-hierarchical view of human cortical face processing and open new perspectives for time-resolved analysis, at the single-subject level, of fMRI data obtained during continuously evolving visual stimulation. Hum Brain Mapp 38:120-139, 2017. © 2016 Wiley Periodicals, Inc.

4.
The early dissociation in cortical responses to faces and objects was explored with magnetoencephalographic (MEG) recordings and source localization. To control for differences in the low-level stimulus features, which are known to modulate early brain responses, we created a novel set of stimuli so that their combinations did not have any differences in the visual-field location, spatial frequency, or luminance contrast. Differing responses to face and object (flower) stimuli were found at about 100 ms after stimulus onset in the occipital cortex. Our data also confirm that the brain response to a complex visual stimulus is not merely a sum of the responses to its constituent parts; the nonlinearity in the response was largest for meaningful stimuli.

5.
Faces are multi-dimensional stimuli bearing important social signals, such as gaze direction and emotional expression. To test whether perception of these two facial attributes recruits distinct cortical areas within the right hemisphere, we used single-pulse transcranial magnetic stimulation (TMS) in healthy volunteers while they performed two different tasks on the same face stimuli. In each task, two successive faces were presented with varying eye-gaze directions and emotional expressions, separated by a short interval of random duration. TMS was applied over either the right somatosensory cortex or the right superior lateral temporal cortex, 100 or 200 ms after presentation of the second face stimulus. Participants performed a speeded matching task on the second face in one of two possible conditions, requiring judgements about either gaze direction or emotional expression (same/different as the first face). Our results reveal a significant task by stimulation-site interaction, indicating selective TMS-related interference following stimulation of somatosensory cortex during the emotional-expression task. Conversely, TMS of the superior lateral temporal cortex selectively interfered with the gaze-direction task. We also found that the interference effect was specific to the stimulus content in each condition, affecting judgements of gaze shifts (not static eye positions) with TMS over the right superior temporal cortex, and judgements of fearful expressions (not happy expressions) with TMS over the right somatosensory cortex. These results provide the first double dissociation in normal subjects during social face recognition, produced by transient disruption of non-overlapping brain regions. The present study supports a critical role of the somatosensory and superior lateral temporal regions in the perception of fear expressions and gaze shifts in seen faces, respectively.

6.
Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in the anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.

7.
The human superior temporal sulcus (STS) has been suggested to be involved in gaze processing, but temporal data regarding this issue are lacking. We investigated this topic by combining fMRI and MEG in four normal subjects. Photographs of faces with either averted or straight eye gaze were presented, and subjects passively viewed the stimuli. First, we analyzed the brain areas involved using fMRI. A group analysis revealed activation of the STS for averted compared to straight gazes, which was confirmed in all subjects. We then measured brain activity using MEG and conducted a 3D spatial filter analysis. The STS showed higher activity in response to averted versus straight gazes during the 150-200 ms period after stimulus onset, peaking at around 170 ms. In contrast, the fusiform gyrus, which was detected by the main effect of stimulus presentation in the fMRI analysis, exhibited comparable activity across straight and averted gazes at about 170 ms. These results indicate involvement of the human STS in rapid processing of the eye gaze of another individual.

8.
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can first be identified with arbitrary precision and its spike trains subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space, where a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus.
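The basic time-encoding step underlying this line of work can be illustrated with a minimal single-channel sketch: an ideal integrate-and-fire TEM that maps an analog signal to a sequence of spike times. This is not the paper's Color Video TEM (which mixes multiple color channels across a neural population); the bias, integration constant, and threshold values below are hypothetical.

```python
import numpy as np

def iaf_tem_encode(x, dt, b=2.0, kappa=1.0, delta=0.05):
    # Ideal integrate-and-fire time encoding machine (TEM):
    # integrate (x(t) + b) / kappa; whenever the integral reaches the
    # threshold delta, record a spike time and subtract delta.
    v = 0.0
    spikes = []
    for i, xi in enumerate(x):
        v += (xi + b) * dt / kappa
        if v >= delta:
            spikes.append((i + 1) * dt)
            v -= delta
    return np.array(spikes)

# Encode a bounded test signal; the bias b = 2 keeps x(t) + b positive,
# so spike times alone determine the signal's running integral.
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
x = 0.5 * np.sin(2 * np.pi * 5 * t)
spike_times = iaf_tem_encode(x, dt)
```

The key property (the "t-transform") is that the integral of x(t) + b between consecutive spikes equals kappa * delta; a time decoding machine inverts exactly these measurements to reconstruct the stimulus.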

9.
It has been proposed that recent cultural inventions such as symbolic arithmetic recycle evolutionarily older neural mechanisms. A central assumption of this hypothesis is that the degree to which a preexisting mechanism is recycled depends on the degree of similarity between its initial function and the novel task. To test this assumption, we investigated whether the brain region involved in magnitude comparison in the intraparietal sulcus (IPS), localized by a numerosity comparison task, is recruited to a greater degree by arithmetic problems that involve number comparison (single-digit subtractions) than by problems that involve retrieving number facts from memory (single-digit multiplications). Our results confirmed that subtractions are associated with greater activity in the IPS than multiplications, whereas multiplications elicit greater activity than subtractions in regions involved in verbal processing, including the middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), that were localized by a phonological processing task. Pattern analyses further indicated that the neural mechanisms more active for subtraction than multiplication in the IPS overlap with those involved in numerosity comparison, and that the strength of this overlap predicts interindividual performance in the subtraction task. These findings provide novel evidence that elementary arithmetic relies on the co-option of evolutionarily older neural circuits.

10.
The hippocampal system contains neural populations that encode an animal's position and velocity as it navigates through space. Here, we show that such populations can embed two codes within their spike trains: a firing rate code (R), conveyed by within-cell spike intervals, and a co-firing rate code, conveyed by between-cell spike intervals. These two codes behave as conjugates of one another, obeying an analog of the uncertainty principle from physics: information conveyed in the firing rate code comes at the expense of information in the co-firing rate code, and vice versa. An exception to this trade-off occurs when spike trains encode a pair of conjugate variables, such as position and velocity, which do not compete for capacity across the two codes. To illustrate this, we describe two biologically inspired methods for decoding the two codes, referred to as sigma and sigma-chi decoding, respectively. Simulations of head direction and grid cells show that if firing rates are tuned for position (but not velocity), then position is recovered by sigma decoding, whereas velocity is recovered by sigma-chi decoding. Conversely, simulations of oscillatory interference among theta-modulated "speed cells" show that if co-firing rates are tuned for position (but not velocity), then position is recovered by sigma-chi decoding, whereas velocity is recovered by sigma decoding. Between these two extremes, information about both variables can be distributed across both channels and partially recovered by both decoders. These results suggest that populations with different spatial and temporal tuning properties, such as speed versus grid cells, might not encode different information, but rather distribute similar information about position and velocity in different ways across the two codes. Such conjugate coding of position and velocity may influence how hippocampal populations are interconnected to form functional circuits, and how biological neurons integrate their inputs to decode information from firing rates and spike correlations.
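The sigma-chi machinery is beyond a short sketch, but the firing-rate side of the story can be illustrated with a classic population-vector decode of simulated head-direction cells, which is the flavor of computation a rate decoder performs over within-cell firing rates. The cell count, cosine tuning curves, and target direction here are hypothetical, not the paper's simulation setup.

```python
import numpy as np

def population_vector_decode(rates, preferred_dirs):
    # Sum each cell's preferred-direction unit vector weighted by its
    # firing rate; the angle of the resultant is the decoded direction.
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return np.arctan2(y, x)

# 36 head-direction cells with cosine tuning around evenly spaced
# preferred directions, plus a baseline so rates stay non-negative.
n_cells = 36
prefs = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)
true_dir = 1.2  # radians, arbitrary test value
rates = 1.0 + np.cos(prefs - true_dir)

decoded = population_vector_decode(rates, prefs)
```

With evenly spaced preferred directions and cosine tuning, the baseline terms cancel and the resultant vector points exactly at the encoded direction; a co-firing-rate decoder would instead operate on between-cell spike correlations.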

11.
The brain functions as a spatio-temporal information processing machine. Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has already been collected, including brain structural and functional data under different conditions and molecular and genetic data, in an attempt to make progress in medicine, health, cognitive science, engineering, education, neuro-economics, brain-computer interfaces (BCI), and games. Yet there is no unifying computational framework for dealing with all these types of data in order to better understand them and the processes that generated them. Standard machine learning techniques have succeeded only partially, and they were not designed in the first instance to deal with such complex data; a new paradigm for STBD is therefore needed. This paper reviews some methods of spiking neural networks (SNN) and argues that SNN are suitable for the creation of a unifying computational framework for learning and understanding various STBD, such as EEG, fMRI, genetic, DTI, MEG, and NIRS data, in their integration and interaction. One reason is that SNN use the same computational principle that generates STBD, namely spiking information processing. The paper introduces a new SNN architecture, called NeuCube, for the creation of concrete models to map, learn, and understand STBD. A NeuCube model is based on a 3D evolving SNN that is an approximate map of the structural and functional areas of the brain relevant to the modeled STBD. Gene information is optionally included in the form of gene regulatory networks (GRN) if this is relevant to the problem and the data. A NeuCube model learns from STBD and creates connections between clusters of neurons that manifest chains (trajectories) of neuronal activity. Once trained, a NeuCube model can reproduce these trajectories even if only part of the input STBD or the stimuli data is presented, thus acting as an associative memory. The NeuCube framework can be used not only to discover functional pathways from data, but also as a predictive system of brain activities, to predict and possibly prevent certain events. Analysis of the internal structure of a model after training can reveal important spatio-temporal relationships 'hidden' in the data. NeuCube allows the integration in one model of various brain data, information, and knowledge related to a single subject (personalized modeling) or to a population of subjects. The use of NeuCube for classification of STBD is illustrated in a case study on EEG data, where NeuCube models achieve better classification accuracy than standard machine learning techniques. They are robust to noise (typical in brain data) and facilitate better interpretation of the results and understanding of the STBD and the brain conditions under which the data were collected. Future directions for the use of SNN for STBD are discussed.
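NeuCube itself is a full framework (3D neuron placement mapped to brain atlases, evolving connectivity, optional GRNs), but its computational substrate, a recurrent reservoir of spiking neurons driven by input spike trains, can be sketched in a few lines. The sizes, time constants, and random sparse weights below are hypothetical and untrained; this is an illustration of spiking reservoir dynamics, not the NeuCube implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_reservoir(spike_input, n_neurons=50, tau=10.0, thresh=0.5, dt=1.0):
    # Toy leaky integrate-and-fire reservoir: excitatory input weights,
    # sparse random recurrent weights, exponential membrane decay,
    # threshold-and-reset spiking. Returns the binary spike raster.
    n_steps, n_in = spike_input.shape
    w_in = np.abs(rng.normal(0.0, 0.3, (n_neurons, n_in)))
    sparse_mask = rng.random((n_neurons, n_neurons)) < 0.1
    w_rec = rng.normal(0.0, 0.05, (n_neurons, n_neurons)) * sparse_mask
    v = np.zeros(n_neurons)
    out = np.zeros((n_steps, n_neurons))
    prev_spikes = np.zeros(n_neurons)
    for t in range(n_steps):
        v = v * np.exp(-dt / tau) + w_in @ spike_input[t] + w_rec @ prev_spikes
        fired = v >= thresh
        out[t] = fired.astype(float)
        v[fired] = 0.0  # reset membrane of neurons that spiked
        prev_spikes = out[t]
    return out

# Random input spike trains: 200 time steps over 10 input channels,
# standing in for spike-encoded STBD (e.g., thresholded EEG changes).
inp = (rng.random((200, 10)) < 0.2).astype(float)
states = lif_reservoir(inp)
```

In a NeuCube-style pipeline, the raster of reservoir activity (rather than the raw input) would then be fed to an output classifier, and the learned recurrent connectivity inspected for spatio-temporal pathways.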

12.
In this article, the authors aim to maximally utilize multimodal neuroimaging and genetic data for identifying Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), from normal aging subjects. Multimodal neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNP) provide information about a patient's AD risk factors. When these data are used together, the accuracy of AD diagnosis may be improved. However, these data are heterogeneous (e.g., with different data distributions) and have different numbers of samples (e.g., far fewer PET samples than MRI or SNP samples), so learning an effective model from them is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework in which a deep neural network is trained stage-wise. Each stage of the network learns feature representations for different combinations of modalities, trained using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity among modalities can be partially addressed and high-level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pair of modalities by using the high-level features learned in the first stage. In the third stage, we learn the diagnostic labels by fusing the joint latent features learned in the second stage. To further increase the number of training samples, we also use data from multiple scanning time points for each training subject. We evaluate the proposed framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for AD diagnosis, and the experimental results show that it outperforms other state-of-the-art methods.
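The three-stage layout described above can be sketched as a forward pass through stand-in encoders. The random untrained tanh layers, feature dimensions, chosen modality pairs, and three-way label space are all hypothetical placeholders for exposition, not the authors' trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_encoder(dim_in, dim_out):
    # A single random tanh layer standing in for a trained encoder.
    w = rng.normal(0.0, 0.1, (dim_in, dim_out))
    return lambda x: np.tanh(x @ w)

# Stage 1: one encoder per modality, trainable independently on
# whatever samples exist for that modality.
enc_mri = make_encoder(100, 32)   # MRI features  -> 32-d latent
enc_pet = make_encoder(80, 32)    # PET features  -> 32-d latent
enc_snp = make_encoder(500, 32)   # SNP features  -> 32-d latent

# Stage 2: joint encoders over concatenated stage-1 latents, one per
# modality pair (two pairs shown here for brevity).
enc_mri_pet = make_encoder(64, 16)
enc_mri_snp = make_encoder(64, 16)

# Stage 3: classifier fusing the joint latents (AD / MCI / normal).
w_clf = rng.normal(0.0, 0.1, (32, 3))

# A hypothetical mini-batch of 4 subjects with all three modalities.
x_mri = rng.normal(size=(4, 100))
x_pet = rng.normal(size=(4, 80))
x_snp = rng.normal(size=(4, 500))

h_mri, h_pet, h_snp = enc_mri(x_mri), enc_pet(x_pet), enc_snp(x_snp)
j_mri_pet = enc_mri_pet(np.concatenate([h_mri, h_pet], axis=1))
j_mri_snp = enc_mri_snp(np.concatenate([h_mri, h_snp], axis=1))
logits = np.concatenate([j_mri_pet, j_mri_snp], axis=1) @ w_clf
```

The point of the staging is visible in the shapes: each stage consumes only the latents below it, so each can be trained with the maximum number of samples available for its modality combination before the next stage is stacked on top.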

