Similar Literature
20 similar documents found (search time: 31 ms)
1.
We review recent research on the neural mechanisms of facial recognition from three perspectives: facial discrimination and identification, recognition of facial expressions, and face perception itself. First, the fusiform gyrus has been shown to play a central role in facial discrimination and identification. Whether the FFA (fusiform face area) is truly specialized for facial processing remains controversial, however; some researchers argue that the FFA instead reflects perceptual expertise with particular classes of visual objects, faces included. The neural mechanisms of prosopagnosia bear directly on this issue. Second, the amygdala appears to be closely involved in the recognition of facial expressions, especially fear. Connected with the superior temporal sulcus and the orbitofrontal cortex, the amygdala appears to modulate these cortical functions. The amygdala and the superior temporal sulcus are also implicated in gaze recognition, which explains why a patient with bilateral amygdala damage selectively failed to recognize fearful expressions: information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, consistent with the innateness hypothesis of facial recognition. Some researchers propose that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may also underlie the covert recognition shown by prosopagnosic patients.

2.
The neural processes underlying naturalistic drawing can be divided into object recognition and analysis, attentional processes guiding eye-hand interaction, encoding of visual features in an allocentric reference frame, transfer into a motor command, and precise motor guidance with tight sensorimotor feedback. Cerebral representations during naturalistic drawing in a real-life paradigm have rarely been investigated. Using a functional magnetic resonance imaging (fMRI) paradigm, we measured 20 naive subjects while they drew a portrait from a frontal face presented as a photograph. Participants were asked to draw the portrait in either a naturalistic or a sketch-like manner. Tracing the contours of the face with a pencil and passive viewing of the face served as control conditions. Compared to passive viewing, naturalistic and sketchy drawing recruited predominantly the dorsal visual pathway, somatosensory and motor areas, and bilateral BA 44. The right occipital lobe, the middle temporal area (MT), and the fusiform face area were also more active during drawing than during passive viewing. Compared to tracing with a pencil, both drawing tasks additionally engaged the bilateral precuneus together with the cuneus and the right inferior temporal lobe. Overall, our study identified cerebral areas characteristic of previously proposed aspects of drawing: face perception and analysis (fusiform gyrus and higher visual areas), encoding and retrieval of locations in an allocentric reference frame (precuneus), and continuous feedback processes during motor output (parietal sulcus, cerebellar hemisphere).
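Editor's note: contrasts of the kind reported above (drawing versus passive viewing, drawing versus tracing) are typically computed with a voxel-wise general linear model. The following is a minimal sketch of that logic in NumPy; the four-condition design, the simple gamma HRF, and all data and variable names are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal voxel-wise GLM contrast sketch (illustrative, not the authors' pipeline).
import numpy as np

def gamma_hrf(tr, duration=30.0):
    """Very simple canonical-like HRF (single gamma), sampled at the TR."""
    t = np.arange(0, duration, tr)
    hrf = (t ** 8.6) * np.exp(-t / 0.547)
    return hrf / hrf.sum()

def make_design(onsets_per_condition, n_scans, tr):
    """One boxcar regressor per condition, convolved with the HRF, plus an intercept."""
    hrf = gamma_hrf(tr)
    X = []
    for onsets in onsets_per_condition:
        boxcar = np.zeros(n_scans)
        for onset, dur in onsets:                      # (onset, duration) in scans
            boxcar[onset:onset + dur] = 1.0
        X.append(np.convolve(boxcar, hrf)[:n_scans])
    X.append(np.ones(n_scans))                         # intercept
    return np.column_stack(X)

def contrast_t(X, Y, c):
    """Ordinary least squares per voxel; t-statistic for contrast vector c."""
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)  # (n_regressors, n_voxels)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof
    var_c = c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(sigma2 * var_c)

# Hypothetical data: 200 scans, TR = 2 s, 5000 voxels; block order is illustrative.
rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 5000))
onsets = [[(10, 10), (90, 10)],    # naturalistic drawing
          [(30, 10), (110, 10)],   # sketchy drawing
          [(50, 10), (130, 10)],   # tracing
          [(70, 10), (150, 10)]]   # passive viewing
X = make_design(onsets, n_scans=200, tr=2.0)
c = np.array([0.5, 0.5, 0.0, -1.0, 0.0])   # (drawing) > passive viewing
t_map = contrast_t(X, Y, c)
print(t_map.shape)
```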

3.
Whether a single perceptual process or separate and possibly independent processes support facial identity and expression recognition is unclear. We used a morphed-face discrimination test to examine sensitivity to facial expression and identity information in patients with occipital or temporal lobe damage, and structural and functional MRI to correlate behavioral deficits with damage to the core regions of the face-processing network. We found selective impairments of identity perception in two patients with right inferotemporal lesions and two prosopagnosic patients with damage limited to the anterior temporal lobes. Of these four patients, one exhibited damage to the right fusiform and occipital face areas, while the remaining three showed sparing of these regions. Thus, impaired identity perception can occur with damage not only to the fusiform and occipital face areas, but also to other medial occipitotemporal structures that likely form part of a face recognition network. Impaired expression perception was seen in the fifth patient, with damage affecting the face-related portion of the posterior superior temporal sulcus. This subject also had difficulty in discriminating identity when irrelevant variations in expression needed to be discounted. These neuropsychological and neuroimaging data provide evidence to complement models that address the separation of expression and identity perception within the face-processing network.

4.
Brain imaging studies in humans have shown that face processing in several areas is modulated by the affective significance of faces, particularly fearful expressions, but also by other social signals such as gaze direction. Here we review haemodynamic and electrical neuroimaging results indicating that activity in the face-selective fusiform cortex may be enhanced by emotional (fearful) expressions, without explicit voluntary control, and presumably through direct feedback connections from the amygdala. fMRI studies show that these increased responses in fusiform cortex to fearful faces are abolished by amygdala damage in the ipsilateral hemisphere, despite preserved effects of voluntary attention on the fusiform cortex; whereas emotional increases can still arise despite deficits in attention or awareness following parietal damage, and appear relatively unaffected by pharmacological increases in cholinergic stimulation. Fear-related modulations of face processing driven by amygdala signals may implicate not only fusiform cortex, but also earlier visual areas in occipital cortex (e.g., V1) and other distant regions involved in social, cognitive, or somatic responses (e.g., superior temporal sulcus, cingulate, or parietal areas). In the temporal domain, evoked potentials show a widespread time-course of emotional face perception, with increases in the amplitude of responses recorded over both occipital and frontal regions for fearful relative to neutral faces (as well as in the amygdala and orbitofrontal cortex when using intracranial recordings), but with different latencies post-stimulus onset. Early emotional responses may arise around 120 ms, prior to the full visual categorization stage indexed by the face-selective N170 component, possibly reflecting rapid emotion processing based on crude visual cues in faces. Other electrical components arise at later latencies and involve more sustained activity, probably generated in associative or supramodal brain areas and resulting in part from modulatory signals received from the amygdala. Altogether, these fMRI and ERP results demonstrate that emotional face perception is a complex process that cannot be reduced to a single neural event taking place in a single brain region, but rather implicates an interactive network with activity distributed in time and space. Moreover, although traditional models in cognitive neuropsychology have often held that facial expression and facial identity are processed along two separate pathways, evidence from fMRI and ERPs suggests instead that emotional processing can strongly affect the brain systems responsible for face recognition and memory. The functional implications of these interactions remain to be fully explored, but they might play an important role in the normal development of face-processing skills and in some neuropsychiatric disorders.

5.
fMRI studies have revealed three scene-selective regions in human visual cortex [the parahippocampal place area (PPA), transverse occipital sulcus (TOS), and retrosplenial cortex (RSC)], which have been linked to higher-order functions such as navigation, scene perception/recognition, and contextual association. Here, we document corresponding (presumptively homologous) scene-selective regions in the awake macaque monkey, based on direct comparison to human maps, using identical stimuli and largely overlapping fMRI procedures. In humans, our results showed that the three scene-selective regions are centered near, but distinct from, the gyri/sulci for which they were originally named. In addition, all these regions are located within or adjacent to known retinotopic areas. Human RSC and PPA are located adjacent to the peripheral representation of primary and secondary visual cortex, respectively. Human TOS is located immediately anterior/ventral to retinotopic area V3A, within retinotopic regions LO-1, V3B, and/or V7. Mirroring the arrangement of human regions fusiform face area (FFA) and PPA (which are adjacent to each other in cortex), the presumptive monkey homolog of human PPA is located adjacent to the monkey homolog of human FFA, near the posterior superior temporal sulcus. Monkey TOS includes the region predicted from the human maps (macaque V4d), extending into retinotopically defined V3A. A possible monkey homolog of human RSC lies in the medial bank, near peripheral V1. Overall, our findings suggest a homologous neural architecture for scene-selective regions in visual cortex of humans and nonhuman primates, analogous to the face-selective regions demonstrated earlier in these two species.

6.
The N170 waveform is larger over posterior temporal cortex when healthy subjects view faces than when they view other objects. Source analyses have produced mixed results regarding whether this effect originates in the fusiform face area (FFA), lateral occipital cortex, or superior temporal sulcus (STS), components of the core face network. In a complementary approach, we assessed the face-selectivity of the right N170 in five patients with acquired prosopagnosia, who also underwent structural and functional magnetic resonance imaging. We used a non-parametric bootstrap procedure to perform single-subject analyses, which reliably confirmed N170 face-selectivity in each of 10 control subjects. Anterior temporal lesions that spared the core face network did not affect the face-selectivity of the N170. A face-selective N170 was also present in another subject who had lost only the right FFA. However, face-selectivity was absent in two patients with lesions that eliminated the occipital face area (OFA) and FFA, sparing only the STS. Thus while the right FFA is not necessary for the face-selectivity of the N170, neither is the STS sufficient. We conclude that the face-selective N170 in prosopagnosia requires residual function of at least two components of the core face-processing network.
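Editor's note: the single-subject test of N170 face-selectivity described above relied on a non-parametric bootstrap over trials. The sketch below illustrates that general idea under assumed data shapes (single-trial N170 amplitudes from a right posterior-temporal electrode); the authors' exact procedure may differ.

```python
# Sketch of a non-parametric bootstrap test for single-subject N170 face-selectivity.
# Assumes single-trial N170 amplitudes (microvolts) have already been extracted from
# a right posterior-temporal electrode; values and shapes are illustrative.
import numpy as np

def bootstrap_face_selectivity(face_amps, object_amps, n_boot=10000, seed=0):
    """Resample trials with replacement and return the bootstrap distribution
    of the face-minus-object difference in mean N170 amplitude."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        f = rng.choice(face_amps, size=face_amps.size, replace=True)
        o = rng.choice(object_amps, size=object_amps.size, replace=True)
        diffs[i] = f.mean() - o.mean()
    return diffs

# Hypothetical single-trial amplitudes (the N170 is a negativity, so faces < objects).
rng = np.random.default_rng(1)
face_amps = rng.normal(-6.0, 3.0, size=80)
object_amps = rng.normal(-3.5, 3.0, size=80)

diffs = bootstrap_face_selectivity(face_amps, object_amps)
ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
# Face-selectivity is declared if the 95% CI of the difference excludes zero.
print(f"difference CI: [{ci_low:.2f}, {ci_high:.2f}] microvolts")
```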

7.
Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding of how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel‐based global brain connectivity method based on resting‐state fMRI to characterize the within‐network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the “Reading the Mind in the Eyes” Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting‐state functional connectivity (FC) between the rpSTS and the right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS was positively correlated with the ability of facial expression recognition, and the FCs of EVC‐pSTS and OFA‐pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub‐like role of the rpSTS in facial expression recognition. Hum Brain Mapp 37:1930–1940, 2016. © 2016 Wiley Periodicals, Inc.
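Editor's note: the voxel-based within-network connectivity (WNC) measure described above amounts to averaging each face-network voxel's correlation with every other voxel in the network. A minimal sketch of that computation is shown below; the array shapes are assumptions and no preprocessing or nuisance regression is shown.

```python
# Sketch of voxel-wise within-network connectivity (WNC) for a face network (FN).
# Assumes FN voxel time series have already been extracted and denoised;
# shapes are illustrative.
import numpy as np

def within_network_connectivity(ts):
    """ts: array of shape (n_timepoints, n_voxels) of FN voxel time series.
    Returns one WNC value per voxel: the mean Fisher-z correlation of that
    voxel with all other FN voxels."""
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    r = (z.T @ z) / ts.shape[0]              # voxel-by-voxel correlation matrix
    np.fill_diagonal(r, np.nan)              # exclude self-correlation
    r = np.clip(r, -0.999999, 0.999999)
    fz = np.arctanh(r)                       # Fisher z-transform
    return np.nanmean(fz, axis=1)

# Hypothetical resting-state data: 240 volumes, 1500 FN voxels.
rng = np.random.default_rng(2)
ts = rng.standard_normal((240, 1500))
wnc = within_network_connectivity(ts)
print(wnc.shape)   # one WNC value per FN voxel, to be related to behavior
```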

8.
Visual agnosia has been well studied anatomically, neuropsychologically, and with neuroimaging. However, the associated functional changes in the brain have rarely been assessed with electrophysiological methods. We carried out electrophysiological examinations on a 23-year-old man with associative visual agnosia, prosopagnosia, and cerebral achromatopsia to evaluate his higher-order dysfunctions of visual recognition. The electrophysiological methods consisted of achromatic, chromatic, and category-specific visual evoked potentials (CS-VEPs), and event-related potentials (ERPs) with color and motion discrimination tasks. Brain magnetic resonance imaging revealed large white-matter lesions, due to multiple sclerosis, in the bilateral temporo-occipital lobes involving the lingual and fusiform gyri (V4) and the inferior longitudinal fasciculi. Examinations including CS-VEPs demonstrated dysfunction of face and object perception, with sparing of semantic word perception, at stages beyond the primary visual cortex (V1) in the ventral pathway. ERPs showed abnormal color perception in the ventral pathway with normal motion perception in the dorsal pathway. These electrophysiological findings were consistent with the ventral-pathway lesions identified by clinical and neuroimaging findings. CS-VEPs and ERPs with color and motion discrimination tasks are therefore useful methods for assessing functional changes in visual recognition such as visual agnosia.

9.
Although the ability to recognize faces and objects from a variety of viewpoints is crucial to our everyday behavior, the underlying cortical mechanisms are not well understood. Recently, neurons in a face-selective region of the monkey temporal cortex were reported to be selective for mirror-symmetric viewing angles of faces as they were rotated in depth (Freiwald and Tsao, 2010). This property has been suggested to constitute a key computational step in achieving full view-invariance. Here, we measured functional magnetic resonance imaging activity in nine observers as they viewed upright or inverted faces presented at five different angles (-60, -30, 0, 30, and 60°). Using multivariate pattern analysis, we show that sensitivity to viewpoint mirror symmetry is widespread in the human visual system. The effect was observed in a large band of higher order visual areas, including the occipital face area, fusiform face area, lateral occipital cortex, mid fusiform, parahippocampal place area, and extending superiorly to encompass dorsal regions V3A/B and the posterior intraparietal sulcus. In contrast, early retinotopic regions V1-hV4 failed to exhibit sensitivity to viewpoint symmetry, as their responses could be largely explained by a computational model of low-level visual similarity. Our findings suggest that selectivity for mirror-symmetric viewing angles may constitute an intermediate-level processing step shared across multiple higher order areas of the ventral and dorsal streams, setting the stage for complete viewpoint-invariant representations at subsequent levels of visual processing.
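Editor's note: one simple way to probe sensitivity to viewpoint mirror symmetry in multivoxel patterns is to compare pattern correlations between mirror-symmetric viewpoint pairs (-60/+60, -30/+30) with correlations between non-symmetric pairs. The sketch below shows such a correlation-based variant under assumed data shapes; the original study's MVPA and its control for angular distance may have differed.

```python
# Sketch of a correlation-based test for viewpoint mirror symmetry in an ROI.
# Assumes one multivoxel response pattern per viewpoint (e.g., run-averaged betas);
# shapes and the pairing scheme are illustrative assumptions.
import numpy as np

viewpoints = [-60, -30, 0, 30, 60]

def pattern_corr(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

def mirror_symmetry_index(patterns):
    """patterns: dict mapping viewpoint (deg) -> 1D voxel pattern.
    Compares mean correlation across mirror-symmetric pairs with mean
    correlation across non-mirror pairs (angular distance only partially
    matched in this simplified version)."""
    mirror_pairs = [(-60, 60), (-30, 30)]
    control_pairs = [(-60, 0), (0, 60), (-30, 0), (0, 30)]
    mirror_r = np.mean([pattern_corr(patterns[a], patterns[b]) for a, b in mirror_pairs])
    control_r = np.mean([pattern_corr(patterns[a], patterns[b]) for a, b in control_pairs])
    return mirror_r - control_r   # > 0 suggests sensitivity to mirror symmetry

# Hypothetical patterns for a 300-voxel ROI.
rng = np.random.default_rng(3)
patterns = {v: rng.standard_normal(300) for v in viewpoints}
print(mirror_symmetry_index(patterns))
```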

10.
Social Phobia (SP) is a marked and persistent fear of social or performance situations in which the person is exposed to unfamiliar people or to possible scrutiny by others. Faces of others are perceived as threatening by social phobic patients (SPP). To investigate how face processing is altered in the distributed neural system for face perception in Social Phobia, we designed an event-related fMRI study in which Healthy Controls (HC) and SPP were presented with angry, fearful, disgusted, happy and neutral faces and scrambled pictures (visual baseline). As compared to HC, SPP showed increased neural activity not only in regions involved in emotional processing, including the left amygdala and insula, as expected from previous reports, but also in the bilateral superior temporal sulcus (STS), a part of the core system for face perception that is involved in the evaluation of expression and personal traits. In addition, SPP showed significantly weaker activation in the left fusiform gyrus, left dorsolateral prefrontal cortex, and bilateral intraparietal sulcus as compared to HC. These effects were found not only in response to emotional faces but also to neutral faces as compared to scrambled pictures. Thus, SPP showed enhanced activity in brain areas related to processing of information about emotional expression and personality traits. In contrast, brain activity was decreased in areas for attention and for processing other information from the face, perhaps as a result of a feeling of wariness. These results indicate a differential modulation of neural activity throughout the different parts of the distributed neural system for face perception in SPP as compared to HC.

11.
Face perception is mediated by a distributed cortical network
The neural system associated with face perception in the human brain was investigated using functional magnetic resonance imaging (fMRI). In contrast to many studies that focused on discrete face-responsive regions, the objective of the current study was to demonstrate that regardless of stimulus format, emotional valence, or task demands, face perception evokes activation in a distributed cortical network. Subjects viewed various stimuli (line drawings of unfamiliar faces and photographs of unfamiliar, famous, and emotional faces) and their phase-scrambled versions. A network of face-responsive regions was identified that included the inferior occipital gyrus, fusiform gyrus, superior temporal sulcus, hippocampus, amygdala, inferior frontal gyrus, and orbitofrontal cortex. Although bilateral activation was found in all regions, the response in the right hemisphere was stronger. This hemispheric asymmetry was manifested by larger and more significant clusters of activation and a larger number of subjects who showed the effect. A region of interest analysis revealed that while all face stimuli evoked activation within all regions, viewing famous and emotional faces resulted in larger spatial extents of activation and higher amplitudes of the fMRI signal. These results indicate that a mere percept of a face is sufficient to localize activation within the distributed cortical network that mediates the visual analysis of facial identity and expression.

12.
Orienting attention involuntarily to the location of a sudden sound improves perception of subsequent visual stimuli that appear nearby. The neural substrates of this cross-modal attention effect were investigated by recording event-related potentials to the visual stimuli using a dense electrode array and localizing their brain sources through inverse dipole modeling. A spatially nonpredictive auditory precue modulated visual-evoked neural activity first in the superior temporal cortex at 120-140 msec and then in the ventral occipital cortex of the fusiform gyrus 15-25 msec later. This spatio-temporal sequence of brain activity suggests that enhanced visual perception produced by the cross-modal orienting of spatial attention results from neural feedback from the multimodal superior temporal cortex to the visual cortex of the ventral processing stream.

13.
The ventral stream of the human extrastriate visual cortex shows considerable functional heterogeneity from early visual processing (posterior) to higher, domain‐specific processing (anterior). The fusiform gyrus hosts several of these “high‐level” functional areas. We recently found a subdivision of the posterior fusiform gyrus at the microstructural level, that is, two distinct cytoarchitectonic areas, FG1 and FG2 (Caspers et al., Brain Structure & Function, 2013). To gain a first insight into the function of these two areas, we here studied their behavioral involvement and coactivation patterns by means of meta‐analytic connectivity modeling based on the BrainMap database (www.brainmap.org), using probabilistic maps of these areas as seed regions. The coactivation patterns of the two areas support the concept of a common involvement in a core network subserving different cognitive tasks, that is, object recognition, visual language perception, or visual attention. In addition, the analysis supports the previous cytoarchitectonic parcellation, indicating that FG1 appears to be a transitional area between early and higher visual cortex and FG2 a higher‐order one. The latter area is furthermore lateralized: it shows strong relations to the visual language processing system in the left hemisphere, while its right-hemisphere counterpart is more strongly associated with face-selective regions. These findings indicate that the functional lateralization of area FG2 relies on a different pattern of connectivity rather than on side-specific cytoarchitectonic features. Hum Brain Mapp 35:2754–2767, 2014. © 2013 Wiley Periodicals, Inc.

14.
Positron emission tomography (PET) was used to identify the neural systems involved in discriminating the shape, color, and speed of a visual stimulus under conditions of selective and divided attention. Psychophysical evidence indicated that the sensitivity for discriminating subtle stimulus changes in a same-different matching task was higher when subjects selectively attended to one attribute than when they divided attention among the attributes. PET measurements of brain activity indicated that modulations of extrastriate visual activity were primarily produced by task conditions of selective attention. Attention to speed activated a region in the left inferior parietal lobule. Attention to color activated a region in the collateral sulcus and dorsolateral occipital cortex, while attention to shape activated the collateral sulcus (as for color), fusiform and parahippocampal gyri, and temporal cortex along the superior temporal sulcus. Outside the visual system, selective and divided attention activated nonoverlapping sets of brain regions. Selective conditions activated the globus pallidus, caudate nucleus, lateral orbitofrontal cortex, posterior thalamus/colliculus, and insular-premotor regions, while the divided condition activated the anterior cingulate and dorsolateral prefrontal cortex. The results in the visual system demonstrate that selective attention to different features modulates activity in distinct regions of extrastriate cortex that appear to be specialized for processing the selected feature. The disjoint pattern of activations in extravisual brain regions during selective- and divided-attention conditions also suggests that perceptual judgements involve different neural systems, depending on attentional strategies.

15.
The sequence of neural activation during a visual search task was investigated using magnetoencephalography, and the source locations of the activations were analyzed using a single-dipole algorithm. Five components (M1-5) were detected at mean latencies of 110, 146, 196, 250 and 333 ms in each of two different stimulus conditions: a target popped out in one condition (pop-out) but not in the other (non-pop-out). Statistical analysis showed that the M3 amplitude was larger and the M5 latency was shorter in the pop-out condition than in the non-pop-out condition, while there was no difference in the other components between the conditions. Neural sources were localized in the calcarine sulcus (M1) and the posterior fusiform gyrus (M2) of the hemisphere contralateral to the stimuli, the intraparietal sulcus and the posterior superior temporal sulcus (M3) in either hemisphere, and the calcarine sulcus (M4) of the same hemisphere in which the early processing (M1 and M2) occurred. The criteria for source localization were not satisfied for M5. The results suggest that the processing of pop-out and non-pop-out stimuli shares a common mechanism; after early feature processing in the occipital cortex (M1 and M2), visual information is processed in the parietal and temporal regions (M3) and then some of this information is fed back to the occipital cortex (M4).
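Editor's note: single-dipole localization of the kind used for the M1-M4 components can be sketched as a scan over candidate source locations, choosing the dipole that best explains the measured field at the component's peak latency in a least-squares sense. The sketch below assumes a precomputed lead-field matrix from a forward model; it is a simplified illustration, not the algorithm actually used in the study.

```python
# Sketch of single equivalent-current-dipole scanning for one MEG component.
# Assumes a precomputed lead field L of shape (n_sensors, n_locations, 3),
# i.e., the forward field of unit dipoles along x/y/z at each candidate location.
# All data here are synthetic placeholders.
import numpy as np

def fit_single_dipole(field, leadfield):
    """field: measured sensor vector at the component's peak latency (n_sensors,).
    Returns (best_location_index, dipole_moment, goodness_of_fit)."""
    n_sensors, n_locs, _ = leadfield.shape
    best = (None, None, -np.inf)
    total_power = field @ field
    for loc in range(n_locs):
        G = leadfield[:, loc, :]                        # (n_sensors, 3)
        moment, _, _, _ = np.linalg.lstsq(G, field, rcond=None)
        resid = field - G @ moment
        gof = 1.0 - (resid @ resid) / total_power       # goodness of fit
        if gof > best[2]:
            best = (loc, moment, gof)
    return best

# Synthetic example: 204 sensors, 500 candidate locations on a source grid.
rng = np.random.default_rng(4)
L = rng.standard_normal((204, 500, 3))
true_moment = np.array([10.0, -5.0, 2.0])
field = L[:, 123, :] @ true_moment + 0.1 * rng.standard_normal(204)
loc, moment, gof = fit_single_dipole(field, L)
print(loc, gof)       # should recover location 123 with high goodness of fit
```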

16.
The map of category-selectivity in human ventral temporal cortex (VTC) provides organizational constraints on models of object recognition. One important principle is lateral-medial response biases to stimuli that are typically viewed in the center or periphery of the visual field. However, little is known about the relative temporal dynamics and location of regions that respond preferentially to stimulus classes that are centrally viewed, such as the face- and word-processing networks. Here, word- and face-selective regions within VTC were mapped using intracranial recordings from 36 patients. Partially overlapping but anatomically dissociable patches of face- and word-selectivity were found in VTC. In addition to canonical word-selective regions along the left posterior occipitotemporal sulcus, selectivity was also located medial and anterior to face-selective regions on the fusiform gyrus, both at the group level and within individual male and female subjects. These regions were replicated using 7 Tesla fMRI in healthy subjects. Left hemisphere word-selective regions preceded right hemisphere responses by 125 ms, potentially reflecting the left hemisphere bias for language, with no hemispheric difference in face-selective response latency. Word-selective regions along the posterior fusiform responded first, then spread medially and laterally, then anteriorly. Face-selective responses were first seen in posterior fusiform regions bilaterally, then proceeded anteriorly from there. For both words and faces, the relative delay between regions was longer than would be predicted by purely feedforward models of visual processing. The distinct time courses of responses across these regions, and between hemispheres, suggest that a complex and dynamic functional circuit supports face and word perception. SIGNIFICANCE STATEMENT: Representations of visual objects in the human brain have been shown to be organized by several principles, including whether those objects tend to be viewed centrally or peripherally in the visual field. However, it remains unclear how regions that process objects that are viewed centrally, such as words and faces, are organized relative to one another. Here, invasive and noninvasive neuroimaging suggests that there is a mosaic of regions in ventral temporal cortex that respond selectively to either words or faces. These regions display differences in the strength and timing of their responses, both within and between brain hemispheres, suggesting that they play different roles in perception. These results illuminate extended, bilateral, and dynamic brain pathways that support face perception and reading.
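Editor's note: latency comparisons like the 125 ms left-before-right difference reported above require a per-region estimate of response onset or peak time. A common, simple choice is the first post-stimulus time at which the trial-averaged selective response exceeds a criterion such as half of its peak. The sketch below illustrates that idea on assumed, synthetic trial-averaged time courses; the criterion and data are illustrative, not taken from the study.

```python
# Sketch of estimating response latency from trial-averaged selective responses
# (e.g., face- or word-selective broadband power per region). Data are synthetic.
import numpy as np

def half_max_latency(response, times, baseline_end=0.0):
    """Return the first post-stimulus time at which the baseline-corrected
    response exceeds half of its post-stimulus peak."""
    post = times > baseline_end
    resp = response - response[~post].mean()           # baseline correction
    peak = resp[post].max()
    above = post & (resp >= 0.5 * peak)
    return times[above][0] if above.any() else np.nan

# Hypothetical averaged responses for left and right word-selective regions,
# sampled at 1000 Hz from -200 ms to 600 ms.
times = np.arange(-0.2, 0.6, 0.001)

def toy_response(onset):       # smooth ramp beginning around `onset` seconds
    return 1.0 / (1.0 + np.exp(-(times - onset) / 0.01))

left = toy_response(0.15)
right = toy_response(0.275)

lat_left = half_max_latency(left, times)
lat_right = half_max_latency(right, times)
print(f"left {lat_left*1000:.0f} ms, right {lat_right*1000:.0f} ms, "
      f"difference {(lat_right - lat_left)*1000:.0f} ms")
```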

17.
Face perception is highly lateralized to the right hemisphere (RH) in humans, as supported originally by observations of face recognition impairment (prosopagnosia) following brain damage. Divided visual field presentations, neuroimaging and event-related potential studies have supported this view. While the latter studies are typically performed in right-handers, the few reported cases of prosopagnosia with unilateral left damage were left-handers, suggesting that handedness may shift or qualify the lateralization of face perception. We tested this hypothesis by recording the whole set of face-sensitive areas in 11 left-handers, using a face-localizer paradigm in functional magnetic resonance imaging (fMRI) (faces, cars, and their phase-scrambled versions). All face-sensitive areas identified (superior temporal sulcus, inferior occipital cortex, anterior infero-temporal cortex, amygdala) were strongly right-lateralized in left-handers, this right lateralization bias being as large as in a population of right-handers (40) tested with the same paradigm (Rossion et al., 2012). The notable exception was the so-called ‘Fusiform face area’ (FFA), an area that was slightly left lateralized in the population of left-handers. Since the left FFA is localized closely to an area sensitive to word form in the human brain (‘Visual Word Form Area’ – VWFA), the enhanced left lateralization of the FFA in left-handers may be due to a decreased competition with the representation of words. The implications for the neural basis of face perception, aetiology of brain lateralization in general, and prosopagnosia are also discussed.
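Editor's note: the lateralization described above is commonly quantified with a lateralization index computed from the face-localizer contrast, for example over suprathreshold voxel counts in homologous left and right ROIs. The sketch below is a minimal version of that computation; the threshold, the sign convention (positive = right-lateralized), and the data are illustrative assumptions.

```python
# Sketch of a lateralization index (LI) from a face-localizer contrast
# (e.g., faces > cars), computed over homologous left/right ROIs.
# Threshold, sign convention, and data are illustrative assumptions.
import numpy as np

def lateralization_index(t_left, t_right, threshold=3.0):
    """LI in [-1, 1]: positive = right-lateralized, negative = left-lateralized.
    Uses the number of suprathreshold voxels in each hemisphere's ROI."""
    n_left = np.sum(t_left > threshold)
    n_right = np.sum(t_right > threshold)
    if n_left + n_right == 0:
        return np.nan
    return (n_right - n_left) / (n_right + n_left)

# Hypothetical t-values for left and right FFA ROIs in one subject.
rng = np.random.default_rng(5)
t_left = rng.normal(2.0, 1.5, size=250)
t_right = rng.normal(3.5, 1.5, size=250)
print(f"LI = {lateralization_index(t_left, t_right):+.2f}")
```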

18.
We investigated the degree to which activation in regions of the brain known to participate in social perception is influenced by the presence or absence of the face and other body parts. Subjects continuously viewed a static image of a lecture hall in which actors appeared briefly in various poses. There were three conditions: Body-Face, in which the actor appeared with limbs, torso, and face clearly visible; Body-Only, in which the actor appeared with his or her face occluded by a book; and Face-Only, in which the actor appeared behind a podium with only face and shoulders visible. Using event-related functional MRI, we obtained strong activation in those regions previously identified as important for face and body perception. These included portions of the fusiform (FFG) and lingual gyri within ventral occipitotemporal cortex (VOTC), and portions of the middle occipital gyrus (corresponding to the previously defined extrastriate body area, or EBA) and posterior superior temporal sulcus (pSTS) within lateral occipitotemporal cortex (LOTC). Activation of the EBA was strongest for the Body-Only condition; indeed, exposing the face decreased EBA activation evoked by the body. In marked contrast, activation in the pSTS was largest when the face was visible, regardless of whether the body was also visible. Activity within the lateral lingual gyrus and adjacent medial FFG was strongest for the Body-Only condition, while activation in the lateral FFG was greatest when both the face and body were visible. These results provide new information regarding the importance of a visible face in both the relative activation and deactivation of brain structures engaged in social perception.

19.
Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal‐movement and ventral‐form visual cortex regions. Here, we explored, for the first time, whether similar dorsal–ventral interactions (assessed via functional connectivity) might also exist for visual‐speech processing. We then examined whether altered dorsal–ventral connectivity is observed in adults with high‐functioning autism spectrum disorder (ASD), a disorder associated with impaired visual‐speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal‐movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral‐form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT–right OFA, left TVSA–left FFA). The results confirmed our hypothesis that functional connectivity between dorsal‐movement and ventral‐form regions exists during visual‐speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face‐to‐face communication.
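Editor's note: the dorsal-ventral connectivity effects summarized above rest on ROI-to-ROI functional connectivity, i.e., the Fisher z-transformed Pearson correlation between two regions' time series, compared between groups. The sketch below illustrates that computation for one pair (e.g., right V5/MT and right OFA) on synthetic data; it is not the authors' analysis code, and group sizes and coupling values are placeholders.

```python
# Sketch of ROI-to-ROI functional connectivity and a group comparison
# (e.g., right V5/MT vs. right OFA, ASD versus control). Data are synthetic.
import numpy as np
from scipy import stats

def roi_connectivity(ts_a, ts_b):
    """Fisher z-transformed Pearson correlation between two ROI time series."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return np.arctanh(r)

def simulate_subject(coupling, n_tp=300, seed=None):
    """Two ROI time series sharing a common signal scaled by `coupling`."""
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(n_tp)
    a = coupling * shared + rng.standard_normal(n_tp)
    b = coupling * shared + rng.standard_normal(n_tp)
    return a, b

# Hypothetical groups: controls with stronger V5/MT-OFA coupling than ASD.
controls = [roi_connectivity(*simulate_subject(0.8, seed=i)) for i in range(17)]
asd = [roi_connectivity(*simulate_subject(0.4, seed=100 + i)) for i in range(17)]

t, p = stats.ttest_ind(controls, asd)
print(f"t = {t:.2f}, p = {p:.4f}")
```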

20.
The temporal and spatial processing of face perception in normal subjects was traced by magnetoencephalography (MEG) and electroencephalography (EEG). We used 5 different visual stimuli, presented in random order: (1) face with opened eyes, (2) face with closed eyes, (3) eyes, (4) scrambled face, and (5) hand. Subjects were asked to count the number of hand stimuli. To analyze the complex brain responses to visual stimuli, we used brain electric source analysis (BESA) as a spatio-temporal multiple source model. In the MEG recording, the 1M and 2M components were identified in all subjects. The 1M component was recorded in response to all stimulus types. The 2M component was clearly identified in response to face stimulation in all subjects, but in response to eyes stimulation in only 3 subjects and with a small amplitude. The 2M component was not identified in response to scrambled-face or hand stimulation. The 2M component was recorded from the right hemisphere in all subjects, but from the left hemisphere in only 5 of 10 subjects. The mean peak latencies of the 1M and 2M components were approximately 132 and 179 ms, respectively. The interpeak latency between 1M and 2M was approximately 47 ms on average, but the interindividual difference was large. There was no significant difference in 2M latency between faces with opened eyes and faces with closed eyes. The 1M component was generated in the primary visual cortex in the bilateral hemispheres, and the 2M component was generated in the inferior temporal cortex, around the fusiform gyrus. In the EEG recording, face-specific components were clearly recorded: a positive component at the vertex, P200 (Cz), and negative components at the temporal areas, N190 (T5') and N190 (T6'). The EEG results were fundamentally compatible with the MEG results. The amplitude of the component recorded from the right hemisphere was significantly larger than that from the left hemisphere. These findings suggest that the fusiform gyrus plays an important role in face perception in humans and that the right hemisphere is dominant. Face perception takes place approximately 47 ms after the primary response to visual stimulation in the primary visual cortex, but the period of information transfer to the fusiform gyrus is variable among subjects. Detailed temporal and spatial analyses of the processing of face perception can be achieved with MEG.
