Similar Documents (20 results)
1.
Humans can identify individual faces under different viewpoints, even after a single encounter. We determined brain regions responsible for processing face identity across view changes after variable delays with several intervening stimuli, using event-related functional magnetic resonance imaging during a long-term repetition priming paradigm. Unfamiliar faces were presented sequentially either in a frontal or three-quarter view. Each face identity was repeated once after an unpredictable lag, with either the same or another viewpoint. Behavioral data showed significant priming in response time, irrespective of view changes. Brain imaging results revealed a reduced response in the lateral occipital and fusiform cortex with face repetition. Bilateral face-selective fusiform areas showed view-sensitive repetition effects, generalizing only from three-quarter to front-views. More medial regions in the left (but not in the right) fusiform showed repetition effects across all types of viewpoint changes. These results reveal that distinct regions within the fusiform cortex hold view-sensitive or view-invariant traces of novel faces, and that face identity is represented in a view-sensitive manner in the functionally defined face-selective areas of both hemispheres. In addition, our finding of a better generalization after exposure to a 3/4-view than to a front-view demonstrates for the first time a neural substrate in the fusiform cortex for the common recognition advantage of three-quarter faces. This pattern provides new insights into the nature of face representation in the human visual system.
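A minimal sketch of how the two effects described in this abstract (response-time priming and repetition suppression) could be quantified; this is not the authors' pipeline, and the table layout, column names and ROI betas are hypothetical.

```python
# Illustrative sketch, assuming a hypothetical trials table with columns:
# subject, rt, presentation ("first"/"repeated"), view_change ("same"/"different"),
# and a separate table of per-condition ROI betas for a fusiform region.
import pandas as pd

def priming_effects(trials: pd.DataFrame) -> pd.DataFrame:
    """RT priming = first-presentation RT minus repeated-presentation RT,
    computed per subject and per view-change condition."""
    rt = (trials.groupby(["subject", "view_change", "presentation"])["rt"]
                .mean()
                .unstack("presentation"))
    rt["priming_ms"] = rt["first"] - rt["repeated"]
    return rt.reset_index()

def repetition_suppression(betas: pd.DataFrame) -> pd.Series:
    """Repetition-suppression index per subject: ROI beta for first minus
    repeated presentations (positive = reduced response with repetition)."""
    b = betas.groupby(["subject", "presentation"])["roi_beta"].mean().unstack()
    return b["first"] - b["repeated"]

# Example usage with hypothetical data frames `trials_df` and `betas_df`:
# print(priming_effects(trials_df))
# print(repetition_suppression(betas_df).describe())
```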

2.
Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in the anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.
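A sketch of the general cross-viewpoint decoding logic this abstract describes: train a classifier on identity labels from all but one viewpoint and test on the held-out viewpoint. It is a stand-in, not the authors' analysis; the arrays and their shapes are hypothetical.

```python
# Illustrative sketch. X: (n_trials, n_voxels) ROI patterns, y_identity: identity
# labels, viewpoints: viewpoint label per trial (all hypothetical).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def decode_identity_across_viewpoint(X, y_identity, viewpoints):
    """Leave-one-viewpoint-out decoding accuracy for identity."""
    accuracies = []
    for held_out in np.unique(viewpoints):
        train = viewpoints != held_out
        test = viewpoints == held_out
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X[train], y_identity[train])
        accuracies.append(clf.score(X[test], y_identity[test]))
    return float(np.mean(accuracies))

# Example with random (chance-level) data:
rng = np.random.default_rng(0)
X = rng.standard_normal((180, 500))          # 180 trials x 500 voxels
y = np.tile(np.repeat(np.arange(3), 20), 3)  # 3 identities
vp = np.repeat(np.arange(3), 60)             # 3 viewpoints
print(decode_identity_across_viewpoint(X, y, vp))  # ~0.33 expected by chance
```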

3.
Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal–ventral interactions (assessed via functional connectivity) might also exist for visual-speech processing. We then examined whether altered dorsal–ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT–right OFA, left TVSA–left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
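A minimal sketch of condition-specific functional connectivity between two ROIs, which is the kind of measure this abstract reports (dorsal-movement to ventral-form coupling during visual speech versus identity recognition). The ROI names, block structure and data are hypothetical, not taken from the study.

```python
# Illustrative sketch, assuming two ROI time series and a boolean mask marking
# the volumes that belong to one condition (e.g., visual-speech blocks).
import numpy as np

def roi_connectivity(ts_a, ts_b, condition_mask):
    """Pearson correlation between two ROI time series, restricted to one condition."""
    return float(np.corrcoef(ts_a[condition_mask], ts_b[condition_mask])[0, 1])

# Hypothetical example: 300 volumes, speech and identity blocks alternating.
rng = np.random.default_rng(1)
v5_ts, ofa_ts = rng.standard_normal((2, 300))
speech_blocks = np.tile(np.r_[np.ones(15, bool), np.zeros(15, bool)], 10)

r_speech = roi_connectivity(v5_ts, ofa_ts, speech_blocks)
r_identity = roi_connectivity(v5_ts, ofa_ts, ~speech_blocks)
# A group comparison would then test the Fisher z-transformed values between
# ASD and control participants, e.g. with an independent-samples t-test.
print(np.arctanh(r_speech), np.arctanh(r_identity))
```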

4.
Abnormal activation of the social brain during face perception in autism
ASD involves a fundamental impairment in processing social-communicative information from faces. Several recent studies have challenged earlier findings that individuals with autism spectrum disorder (ASD) have no activation of the fusiform gyrus (fusiform face area, FFA) when viewing faces. In this study, we examined activation to faces in the broader network of face-processing modules that comprise what is known as the social brain. Using 3T functional magnetic resonance imaging, we measured BOLD signal changes in 10 ASD subjects and 7 healthy controls passively viewing nonemotional faces. We replicated our original findings of significant activation of face identity-processing areas (FFA and inferior occipital gyrus, IOG) in ASD. However, in addition, we identified hypoactivation in a more widely distributed network of brain areas involved in face processing [including the right amygdala, inferior frontal cortex (IFC), superior temporal sulcus (STS), and face-related somatosensory and premotor cortex]. In ASD, we found functional correlations between a subgroup of areas in the social brain that belong to the mirror neuron system (IFC, STS) and other face-processing areas. The severity of the social symptoms measured by the Autism Diagnostic Observation Schedule was correlated with the right IFC cortical thickness and with functional activation in that area. When viewing faces, adults with ASD show atypical patterns of activation in regions forming the broader face-processing network and social brain, outside the core FFA and IOG regions. These patterns suggest that areas belonging to the mirror neuron system are involved in the face-processing disturbances in ASD.

5.
Brain imaging studies in humans have shown that face processing in several areas is modulated by the affective significance of faces, particularly with fearful expressions, but also with other social signals such as gaze direction. Here we review haemodynamic and electrical neuroimaging results indicating that activity in the face-selective fusiform cortex may be enhanced by emotional (fearful) expressions, without explicit voluntary control, and presumably through direct feedback connections from the amygdala. fMRI studies show that these increased responses in fusiform cortex to fearful faces are abolished by amygdala damage in the ipsilateral hemisphere, despite preserved effects of voluntary attention on the fusiform cortex; whereas emotional increases can still arise despite deficits in attention or awareness following parietal damage, and appear relatively unaffected by pharmacological increases in cholinergic stimulation. Fear-related modulations of face processing driven by amygdala signals may implicate not only fusiform cortex, but also earlier visual areas in occipital cortex (e.g., V1) and other distant regions involved in social, cognitive, or somatic responses (e.g., superior temporal sulcus, cingulate, or parietal areas). In the temporal domain, evoked potentials show a widespread time-course of emotional face perception, with some increases in the amplitude of responses recorded over both occipital and frontal regions for fearful relative to neutral faces (as well as in the amygdala and orbitofrontal cortex, when using intracranial recordings), but with different latencies post-stimulus onset. Early emotional responses may arise around 120 ms, prior to a full visual categorization stage indexed by the face-selective N170 component, possibly reflecting rapid emotion processing based on crude visual cues in faces. Other electrical components arise at later latencies and involve more sustained activities, probably generated in associative or supramodal brain areas, and resulting in part from the modulatory signals received from the amygdala. Altogether, these fMRI and ERP results demonstrate that emotional face perception is a complex process that cannot be related to a single neural event taking place in a single brain region, but rather implicates an interactive network with distributed activity in time and space. Moreover, although traditional models in cognitive neuropsychology have often considered that facial expression and facial identity are processed along two separate pathways, evidence from fMRI and ERPs suggests instead that emotional processing can strongly affect brain systems responsible for face recognition and memory. The functional implications of these interactions remain to be fully explored, but might play an important role in the normal development of face processing skills and in some neuropsychiatric disorders.

6.
Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear if V1 is modulated by top-down influences during face discrimination, and if this is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements – the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from ‘eye’ and ‘mouth’ regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local ‘diagnostic’ and widespread ‘non-diagnostic’ cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala).
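A minimal sketch of the ROI-based linear pattern classification described in this abstract (classifying face category from V1 subregion patterns), comparing a 'diagnostic' subROI against the remaining V1 voxels. The variable names and data are hypothetical, not the authors' code.

```python
# Illustrative sketch with hypothetical single-trial patterns (trials x voxels).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def roi_decoding_accuracy(patterns, labels, n_folds=5):
    """Cross-validated accuracy of a linear classifier on ROI patterns
    (e.g., classifying fearful vs happy, or male vs female faces)."""
    return float(np.mean(cross_val_score(LinearSVC(), patterns, labels, cv=n_folds)))

rng = np.random.default_rng(2)
eye_roi = rng.standard_normal((80, 200))    # V1 'eye-region' subROI
rest_v1 = rng.standard_normal((80, 2000))   # remaining 'non-diagnostic' V1 voxels
expression = np.repeat([0, 1], 40)          # fearful vs happy labels

print("eye ROI:", roi_decoding_accuracy(eye_roi, expression))
print("non-diagnostic V1:", roi_decoding_accuracy(rest_v1, expression))
```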

7.
The ability to recognize objects across different viewpoints (view invariance) is a remarkable property of the primate visual system. According to a prominent theory, view information is represented by view-selective mechanisms at early stages of visual processing and gradually becomes view invariant in high-level visual areas. Single-cell recording studies have also reported an intermediate step of partial view invariance for mirror-symmetric face views. Nevertheless, similar evidence for this type of hierarchical processing for face view has not been reported yet in the human visual cortex. The present functional magnetic resonance imaging study used state-of-the-art multivariate pattern analysis to explore face-view tuning in the human visual cortex. Our results revealed that, consistent with a view-selective representation, face view can be successfully decoded in face- and object-selective regions as well as in early visual cortex. Critically, similar neural representations for mirror-symmetric views were found in high-level but not in low-level visual areas. Our results support the notion of gradual emergence of view-invariant representation with invariance for mirror-symmetric images as an intermediate step and propose putative neural correlates of mirror-image confusion in the human brain.
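A sketch of one simple way to test for mirror-symmetric view tuning with pattern similarity, in the spirit of the analysis this abstract describes: compare the similarity of mean ROI patterns for mirror-symmetric view pairs against other view pairs. The views, pairings and arrays are hypothetical.

```python
# Illustrative sketch with hypothetical mean patterns for five face views.
import numpy as np

def view_similarity_matrix(mean_patterns):
    """Correlation matrix between mean ROI patterns for each face view."""
    return np.corrcoef(mean_patterns)

rng = np.random.default_rng(3)
patterns = rng.standard_normal((5, 400))   # views at -90, -45, 0, +45, +90 deg
sim = view_similarity_matrix(patterns)

# Mirror-symmetric pairs: (-90, +90) and (-45, +45). A mirror-tuned region
# should show higher similarity for these pairs than for the remaining pairs.
mirror = np.mean([sim[0, 4], sim[1, 3]])
off_diag = sim[np.triu_indices(5, k=1)]
control = (off_diag.sum() - sim[0, 4] - sim[1, 3]) / (off_diag.size - 2)
print(mirror, control)
```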

8.
Functional Magnetic Resonance Imaging (fMRI) was used to identify a small area in the human posterior fusiform gyrus that responds selectively to faces (PF). In the same subjects, phase-encoded rotating and expanding checkerboards were used with fMRI to identify the retinotopic visual areas V1, V2, V3, V3A, VP and V4v. PF was found to lie anterior to area V4v, with a small gap present between them. Further recordings in some of the same subjects used moving low-contrast rings to identify the visual motion area MT. PF was found to lie ventral to MT. In addition, preliminary evidence was found using fMRI for a small area that responded to inanimate objects but not to faces in the collateral sulcus medial to PF. The retinotopic visual areas and MT responded equally to faces, control randomized stimuli, and objects. Weakly face-selective responses were also found in ventrolateral occipitotemporal cortex anterior to V4v, as well as in the middle temporal gyrus anterior to MT. We conclude that the fusiform face area in humans lies in non-retinotopic visual association cortex of the ventral form-processing stream, in an area that may be roughly homologous in location to area TF or CITv in monkeys. Hum. Brain Mapping 7:29–37, 1999. © 1999 Wiley-Liss, Inc.
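A minimal sketch of the generic phase-encoded retinotopy analysis referenced in this abstract (rotating/expanding checkerboards): each voxel's time series is Fourier-analysed and the phase at the stimulus frequency indexes its preferred visual-field position. Stimulus parameters and data here are hypothetical, not those of the study.

```python
# Illustrative sketch, assuming voxel time series of shape (n_voxels, n_volumes)
# and a stimulus that completes n_cycles per run.
import numpy as np

def phase_map(voxel_ts, n_cycles):
    """Return response amplitude and phase at the stimulus frequency; the phase
    codes preferred polar angle (rotating wedge) or eccentricity (expanding ring)."""
    spectrum = np.fft.rfft(voxel_ts, axis=-1)
    component = spectrum[..., n_cycles]   # frequency bin of the stimulus cycle
    return np.abs(component), np.angle(component)

# Hypothetical run: 240 volumes, 8 stimulus cycles, 1000 voxels.
rng = np.random.default_rng(4)
t = np.arange(240)
true_phase = rng.uniform(0, 2 * np.pi, 1000)
ts = np.cos(2 * np.pi * 8 * t / 240 - true_phase[:, None]) + rng.standard_normal((1000, 240))
amp, ph = phase_map(ts, n_cycles=8)
```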

9.
The extent to which the brain regions associated with face processing are selective for that specific function remains controversial. In addition, little is known regarding the extent to which face-responsive brain regions are selective for human faces. To study regional selectivity of face processing, we used functional magnetic resonance imaging to examine whole brain activation in response to human faces, dog faces, and houses. Fourteen healthy right-handed volunteers participated in a passive viewing, blocked experiment. Results indicate that the lateral fusiform gyrus (Brodmann's area 37) responds maximally to both dog and human faces when compared with other sites, followed by the middle/inferior occipital gyrus (BA 18/19). Sites that were activated by houses versus dog and human faces included the medial fusiform gyrus (BA 19/37), the posterior cingulate (BA 30), and the superior occipital gyrus (BA 19). The only site that displayed significant differences in activation between dog and human faces was the lingual/medial fusiform gyrus. In this site, houses elicited the strongest activation, followed by dog faces, while the response to human faces was negligible and did not differ from fixation. The parahippocampal gyrus/amygdala was the sole site that displayed significant activation to human faces, but not to dog faces or houses.

10.
Spatial attention has been argued to be adaptive by enhancing the processing of visual stimuli within the ‘spotlight of attention’. We previously reported that crude threat cues (backward masked fearful faces) facilitate spatial attention through a network of brain regions consisting of the amygdala, anterior cingulate and contralateral visual cortex. However, results from previous functional magnetic resonance imaging (fMRI) dot-probe studies have been inconclusive regarding a fearful face-elicited contralateral modulation of visual targets. Here, we tested the hypothesis that the capture of spatial attention by crude threat cues would facilitate processing of subsequently presented visual stimuli within the masked fearful face-elicited ‘spotlight of attention’ in the contralateral visual cortex. Participants performed a backward masked fearful face dot-probe task while brain activity was measured with fMRI. Masked fearful face left visual field trials enhanced activity for spatially congruent targets in the right superior occipital gyrus, fusiform gyrus and lateral occipital complex, while masked fearful face right visual field trials enhanced activity in the left middle occipital gyrus. These data indicate that crude threat elicited spatial attention enhances the processing of subsequent visual stimuli in contralateral occipital cortex, which may occur by lowering neural activation thresholds in this retinotopic location.

11.
Face recognition is more strongly impaired by stimulus inversion than nonface object recognition. This phenomenon, known as the face inversion effect (FIE), suggests that the visual system contains specialized processing mechanisms that are more engaged by upright faces than by inverted faces or nonface objects. Neuroimaging and neuropsychological studies indicate that environmental scenes may also recruit specialized processing machinery, but a comparable inversion effect for scenes has not been established. Here we demonstrate that both face and scene inversion lead to behavioral penalties during performance of a continuous visual matching task; however, the scene inversion effect was less robust and declined in magnitude over the course of the experiment. Scene inversion led to greater neural response in the functionally defined lateral occipital (LO) object area for inverted versus upright scenes and reduced response in the parahippocampal place area (PPA), while face inversion led to greater response in LO and the right middle fusiform (MF) object area for inverted versus upright faces but no change in the fusiform face area (FFA). A whole-brain analysis revealed several regions that responded more strongly to either upright versus inverted faces or upright versus inverted scenes, some of which may be involved in post-recognition processing. These results demonstrate that both face and scene inversion cause a shift from specialized processing streams towards generic object-processing mechanisms; however, this shift only leads to a reliable behavioral penalty in the case of face inversion.

12.
The occipito-temporal cortex is strongly implicated in carrying out the high-level computations associated with vision. In human neuroimaging studies, focal regions are consistently found within this broad region that respond strongly and selectively to faces, bodies, or objects. A notable feature of these selective regions is that they are found in pairs. In the posterior-lateral occipito-temporal cortex, focal selectivity is found for faces (occipital face area), bodies (extrastriate body area), and objects (lateral occipital). These three areas are found bilaterally and at close quarters to each other. Likewise, in the ventro-medial occipito-temporal cortex, three similar category-selective regions are found, also in proximity to each other: for faces (fusiform face area), bodies (fusiform body area), and objects (posterior fusiform). Here we review some of the extensive evidence on the functional properties of these areas with two aims. First, we seek to identify principles that distinguish the posterior-lateral and ventro-medial clusters of selective regions but that apply generally within each cluster across the three stimulus kinds. Our review identifies and elaborates several principles by which these relationships hold. In brief, the posterior-lateral representations are more primitive, local, and stimulus-driven relative to the ventro-medial representations, which in contrast are more invariant to visual features, global, and linked to the subjective percept. Second, because the evidence base of studies that compare both posterior-lateral and ventro-medial representations of faces, bodies, and objects is still relatively small, we seek to provoke more cross-talk among the research strands that are traditionally separate. We identify several promising approaches for such future work.

13.
According to a non-hierarchical view of human cortical face processing, selective responses to faces may emerge in a higher-order area of the hierarchy, in the lateral part of the middle fusiform gyrus (fusiform face area [FFA]), independently of face-selective responses in the lateral inferior occipital gyrus (occipital face area [OFA]), a lower-order area. Here we provide a stringent test of this hypothesis by gradually revealing segmented face stimuli through strict linear descrambling of phase information [Ales et al., 2012]. Using a short fMRI sampling interval (500 ms) and single-subject statistical analysis, we show a face-selective response emerging earlier, that is, at a lower level of structural (i.e., phase) information, in the FFA compared with the OFA. In both regions, a face detection response also emerged at a lower level of structural information for upright than for inverted faces, in line with behavioral responses and with previous findings of delayed responses to inverted faces obtained with direct recordings of neural activity. Overall, these results support the non-hierarchical view of human cortical face processing and open new perspectives for time-resolved analysis at the single-subject level of fMRI data obtained during continuously evolving visual stimulation. Hum Brain Mapp 38:120–139, 2017. © 2016 Wiley Periodicals, Inc.
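A sketch of the stimulus manipulation this abstract relies on, phase descrambling of an image: the Fourier phase is mixed with random phase while the amplitude spectrum is preserved, and the mix is stepped linearly toward the intact image. This is a simplified illustration under stated assumptions (a straight linear mix of phase values), not the authors' stimulus code; the published method interpolates phase angles along the shortest circular path.

```python
# Illustrative sketch with a placeholder array standing in for a face image.
import numpy as np

def phase_descramble(image, coherence, rng=None):
    """Return the image with its phase spectrum mixed with random phase.
    coherence = 0 -> fully scrambled; coherence = 1 -> original image.
    Note: simple linear mixing, an approximation of circular phase interpolation."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, image.shape)
    mixed_phase = coherence * phase + (1 - coherence) * random_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

# A stimulus sequence would step coherence linearly, e.g. 0.0, 0.1, ..., 1.0:
face = np.random.default_rng(5).random((256, 256))  # placeholder for a face image
frames = [phase_descramble(face, c) for c in np.linspace(0, 1, 11)]
```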

14.
We report a functional magnetic resonance imaging (fMRI) adaptation study of two well-described patients, DF and PS, who present face identity recognition impairments (prosopagnosia) following brain damage. Comparing faces to non-face objects elicited activation in all visual areas of the cortical face processing network that were spared subsequent to brain damage. The common brain lesion in the two patients was in the right inferior occipital cortex, in the territory of the right “occipital face area” (‘OFA’), which strengthens the critical role of this region in processing faces. Despite the lesion to the right ‘OFA’, there was a normal range of sensitivity to faces in the right “fusiform face area” (‘FFA’) in both patients, supporting a non-hierarchical model of face processing at the cortical level. At the same time, however, sensitivity to individual face representations, as indicated by release from adaptation to identity, was abnormal in the right ‘FFA’ of both patients. This suggests that the right ‘OFA’ is necessary to individualize faces, perhaps through reentrant interactions with other cortical face-sensitive areas. The lateral occipital area (LO) is damaged bilaterally in patient DF, who also shows visual object agnosia. However, in patient PS, in whom LO was spared, sensitivity to individual representations of non-face objects was still found in this region, as in the normal brain, consistent with her preserved object recognition abilities. Taken together, these observations, which fruitfully combine functional imaging and neuropsychology, place strong constraints on the possible functional organization of the cortical areas mediating face processing in the human brain.
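A minimal sketch of the release-from-adaptation measure this abstract uses to index identity sensitivity: compare an ROI's response to blocks of different identities against blocks repeating the same identity. The betas and ROI label are hypothetical, not the patients' data.

```python
# Illustrative sketch with hypothetical per-run ROI betas.
import numpy as np

def adaptation_index(beta_different, beta_same):
    """Positive values indicate release from adaptation, i.e. sensitivity to
    individual face identity in the ROI."""
    return np.asarray(beta_different) - np.asarray(beta_same)

different = [1.4, 1.2, 1.5, 1.3]   # blocks of different identities (right 'FFA', hypothetical)
same = [1.1, 0.9, 1.2, 1.0]        # blocks repeating one identity
idx = adaptation_index(different, same)
print(idx.mean())                  # ~0.3: release from adaptation (identity sensitivity)
```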

15.
We have been studying the underlying mechanisms of face perception in humans using magneto- (MEG) and electro-encephalography (EEG) including (1) perception by viewing the static face, (2) differences in perception by viewing the eyes and whole face, (3) the face inversion effect, (4) the effect of gaze direction, (5) perception of eye motion, (6) perception of mouth motion, and (7) the interaction between auditory and visual stimuli related to the vowel sounds. In this review article, we mainly summarize our results obtained on 3, 5, and 6 above. With the presentation of both upright and inverted unfamiliar faces, the inferior temporal cortex (IT) centered on the fusiform gyrus, and the lateral temporal cortex (LT) near the superior temporal sulcus were activated simultaneously, but independently, between 140 and 200 ms post-stimulus. The right hemisphere IT and LT were both active in all subjects, and those in the left hemisphere in half of the subjects. Latencies with inverted faces relative to those with upright faces were longer in the right hemisphere, and shorter in the left hemisphere. Since the activated regions under upright and those under inverted face stimuli did not show a significant difference, we consider that differences in processing upright versus inverted faces are attributable to temporal processing differences rather than to processing of information by different brain regions. When viewing the motion of the mouth and eyes, a large clear MEG component, 1M (mean peak latency of approximately 160 ms), was elicited to both mouth and eye movement, and was generated mainly in the occipito-temporal border, at human MT/V5. The 1M to mouth movement and the 1M to eye movement showed no significant difference in amplitude or generator location. Therefore, our results indicate that human MT/V5 is active in the perception of both mouth and eye motion, and that the perception of movement of facial parts is probably processed similarly.

16.
Goto Y, Kinoe H, Nakashima T, Tobimatsu S. Neuroreport 2005;16(12):1329–1334.
The visual evoked potentials elicited by mosaic pictures were used to elucidate the initial step of face perception. Three different mosaic levels (subthreshold, threshold, suprathreshold) for familiar and unfamiliar faces and objects were randomly presented for 250 ms. The latencies of occipital N1 and posterior-temporal N2 were shortened by decreasing the mosaic levels for faces but not for objects. The N2 amplitude significantly increased at threshold and suprathreshold levels for familiar and unfamiliar faces. The latency difference between N1 and N2 at the threshold level for a familiar face was significantly shortened compared with that for an unfamiliar face. Our findings suggest that the initial step of face perception is already set in the primary visual cortex, and that familiarity can facilitate the corticocortical processing of face information.

17.
Extensive studies have demonstrated that face processing ability develops gradually during development until adolescence. However, the underlying mechanism is unclear. One hypothesis is that children and adults represent faces in qualitatively different fashions with different group templates. An alternative hypothesis emphasizes the development as a quantitative change with a decrease of variation in representations. To test these hypotheses, we used between-participant correlation to measure activation pattern similarity both within and between late-childhood children and adults. We found that activation patterns for faces in the fusiform face area and occipital face area were less similar within the children group than within the adults group, indicating children had a greater variation in representing faces. Interestingly, the activation pattern similarity of children to their own group template was not significantly larger than that to the adults’ template, suggesting children and adults shared a template in representing faces. Further, the decrease in representation variance was likely a general principle in the ventral visual cortex, as a similar result was observed in a scene-selective region when perceiving scenes. Taken together, our study provides evidence that development of object representation may result from a homogenization process that shifts from greater variance in late childhood to homogeneity in adults.
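A sketch of the between-participant pattern-similarity analysis this abstract describes: correlate each participant's ROI activation pattern with every other participant's, within and between groups. Group sizes and arrays below are hypothetical.

```python
# Illustrative sketch with hypothetical (participants x voxels) activation patterns.
import numpy as np

def pairwise_similarity(patterns_a, patterns_b=None):
    """Mean pairwise Pearson correlation. With one argument: within-group
    similarity (excluding self-correlations); with two: between-group similarity."""
    if patterns_b is None:
        r = np.corrcoef(patterns_a)
        return float(r[np.triu_indices_from(r, k=1)].mean())
    n_a = patterns_a.shape[0]
    r = np.corrcoef(np.vstack([patterns_a, patterns_b]))
    return float(r[:n_a, n_a:].mean())

rng = np.random.default_rng(6)
children = rng.standard_normal((20, 300))   # 20 children x 300 FFA voxels
adults = rng.standard_normal((25, 300))

print("within children:", pairwise_similarity(children))
print("within adults:", pairwise_similarity(adults))
print("children-to-adults:", pairwise_similarity(children, adults))
```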

18.
Women typically remember more female than male faces, whereas men do not show a reliable own-gender bias. However, little is known about the neural correlates of this own-gender bias in face recognition memory. Using functional magnetic resonance imaging (fMRI), we investigated whether face gender modulated brain activity in fusiform and inferior occipital gyri during incidental encoding of faces. Fifteen women and 14 men underwent fMRI while passively viewing female and male faces, followed by a surprise face recognition task. Women recognized more female than male faces and showed higher activity to female than male faces in individually defined regions of fusiform and inferior occipital gyri. In contrast, men’s recognition memory and blood-oxygen-level-dependent response were not modulated by face gender. Importantly, higher activity in the left fusiform gyrus (FFG) to one gender was related to better memory performance for that gender. These findings suggest that the FFG is involved in the gender bias in memory for faces, which may be linked to differential experience with female and male faces.
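A minimal sketch of the brain-behaviour relation reported here: per participant, correlate the fusiform response difference (female minus male faces) with the corresponding difference in recognition memory. The data and variable names are hypothetical, not the study's measures.

```python
# Illustrative sketch with simulated participant-level difference scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 29
ffg_female_minus_male = rng.standard_normal(n)                       # ROI activation difference
memory_female_minus_male = 0.5 * ffg_female_minus_male + rng.standard_normal(n)

r, p = stats.pearsonr(ffg_female_minus_male, memory_female_minus_male)
print(f"r = {r:.2f}, p = {p:.3f}")   # positive r: higher activity to one gender,
                                     # better memory for faces of that gender
```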

19.
Goyal MS, Hansen PJ, Blakemore CB. Neuroreport 2006;17(13):1381–1384.
When blind people touch Braille characters, blood flow increases in visual areas, leading to speculation that visual circuitry assists tactile discrimination in the blind. We tested this hypothesis in a functional magnetic resonance imaging study designed to reveal activation appropriate to the nature of tactile stimulation. In late-blind individuals, hMT/V5 and fusiform face area activated during visual imagery of moving patterns or faces. When they touched a doll's face, right fusiform face area was again activated. Equally, hMT/V5 was activated when objects moved over the skin. We saw no difference in hMT/V5 or fusiform face area activity during motion or face perception in the congenitally blind. We conclude that specialized visual areas, once established through visual experience, assist equivalent tactile identification tasks years after the onset of blindness.

20.
Face recognition is a primary social skill which depends on a distributed neural network. A pronounced face recognition deficit in the absence of any lesion is seen in congenital prosopagnosia. This study, investigating 24 congenital prosopagnosic subjects and 25 control subjects, aims at elucidating its neural basis with fMRI and voxel-based morphometry. We found a comprehensive behavioral pattern, an impairment in visual recognition for faces and buildings that spared long-term memory for faces with negative valence. Anatomical analysis revealed diminished gray matter density in the bilateral lingual gyrus, the right middle temporal gyrus, and the dorsolateral prefrontal cortex. In most of these areas, gray matter density correlated with memory success. Decreased functional activation was found in the left fusiform gyrus, a crucial area for face processing, and in the dorsolateral prefrontal cortex, whereas activation of the medial prefrontal cortex was enhanced. Hence, our data lend strength to the hypothesis that congenital prosopagnosia is explained by network dysfunction and suggest that anatomic curtailing of visual processing in the lingual gyrus plays a substantial role. The dysfunctional circuitry further encompasses the fusiform gyrus and the dorsolateral prefrontal cortex, which may contribute to their difficulties in long-term memory for complex visual information. Despite their deficits in face identity recognition, processing of emotion-related information is preserved and possibly mediated by the medial prefrontal cortex. Congenital prosopagnosia may, therefore, be a blueprint of differential curtailing in networks of visual cognition.
