Similar Documents
20 similar documents retrieved.
1.
We investigated the spatio-temporal dynamics of attentional bias towards fearful faces. Twelve participants performed a covert spatial orienting task while visual event-related brain potentials (VEPs) were recorded. Each trial consisted of a pair of faces (one emotional and one neutral) briefly presented in the upper visual field, followed by a unilateral bar presented at the location of one of the faces. Participants had to judge the orientation of the bar. Comparing VEPs to bars shown at the location of an emotional (valid) versus neutral (invalid) face revealed an early effect of spatial validity: the lateral occipital P1 component (approximately 130 ms post-stimulus) was selectively increased when a bar replaced a fearful face compared to when the same bar replaced a neutral face. This effect was not found with upright happy faces or inverted fearful faces. A similar amplification of P1 has previously been observed in electrophysiological studies of spatial attention using non-emotional cues. In a behavioural control experiment, participants were also better at discriminating the orientation of the bar when it replaced a fearful rather than a neutral face. In addition, VEPs time-locked to the face-pair onset revealed a C1 component (approximately 90 ms) that was greater for fearful than happy faces. Source localization (LORETA) confirmed an extrastriate origin of the P1 response showing a spatial validity effect, and a striate origin of the C1 response showing an emotional valence effect. These data suggest that activity in primary visual cortex might be enhanced by fear cues as early as 90 ms post-stimulus, and that such effects might result in a subsequent facilitation of sensory processing for a stimulus appearing at the same location. These results provide evidence for neural mechanisms allowing rapid, exogenous spatial orienting of attention towards fear stimuli.

2.
Rapid face-selective adaptation of an early extrastriate component in MEG
Adaptation paradigms are becoming increasingly popular for characterizing visual areas in neuroimaging, but the relation of these results to perception is unclear. Neurophysiological studies have generally reported effects of stimulus repetition starting at 250-300 ms after stimulus onset, well beyond the latencies of components associated with perception (100-200 ms). Here we demonstrate adaptation for earlier evoked components when 2 stimuli (S1 and S2) are presented in close succession. Using magnetoencephalography, we examined the M170, a "face-selective" response at 170 ms after stimulus onset that shows a larger response to faces than to other stimuli. Adaptation of the M170 occurred only when stimuli were presented with relatively short stimulus onset asynchronies (< 800 ms) and was larger for faces preceded by faces than by houses. This face-selective adaptation is not merely low-level habituation to physical stimulus attributes, as photographic, line-drawing, and 2-tone face images produced similar levels of adaptation. Nor does it depend on the amplitude of the S1 response: adaptation remained greater for faces than houses even when the amplitude of the S1 face response was reduced by visual noise. These results indicate that rapid adaptation of early, short-latency responses not only exists but also can be category selective.
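The category-selective adaptation described above is typically quantified as the fractional attenuation of the S2 response relative to the unadapted S1 response, computed separately for each adaptor category. A minimal sketch of such an index, using entirely hypothetical M170 amplitudes (the values below are simulated for illustration, not data from the study):

```python
import numpy as np

def adaptation_index(s1_amps, s2_amps):
    """Fractional attenuation of the S2 response relative to S1: 1 - mean(S2)/mean(S1)."""
    return 1.0 - np.mean(s2_amps) / np.mean(s1_amps)

# Hypothetical single-trial M170 amplitudes (arbitrary units), by S1 category.
rng = np.random.default_rng(0)
s1 = rng.normal(100.0, 10.0, 50)             # unadapted response to the S1 face
s2_after_face = rng.normal(60.0, 10.0, 50)   # S2 face preceded by a face (strong adaptation)
s2_after_house = rng.normal(85.0, 10.0, 50)  # S2 face preceded by a house (weak adaptation)

ai_face = adaptation_index(s1, s2_after_face)
ai_house = adaptation_index(s1, s2_after_house)
```

Category selectivity then shows up as `ai_face > ai_house`: a face adaptor attenuates the subsequent face response more than a house adaptor does.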

3.
Rapid adaptation of the M170 response: importance of face parts
Face perception is often characterized as depending on configural, rather than part-based, processing. Here we examined the relative contributions of configuration and parts to early "face-selective" processing at the M170, a magnetoencephalographic response approximately 170 ms after stimulus onset, using adaptation. Previously (Harris and Nakayama 2007), we showed that rapid successive presentation of 2 stimuli (stimulus-onset asynchrony < 800 ms) attenuates the M170 response. Such adaptation is face-selective, with greater attenuation when faces are preceded by other faces than by houses. This technique therefore provides an independent method to assess the nature of this early neurophysiological marker. In these experiments, we measured the adapting power of face configurations versus parts using upright and inverted faces (Experiment 1), face-like configurations of black ovals versus scrambled nonface configurations of face parts (Experiment 2), and isolated face parts (Experiment 3). Although face configurations alone do not produce face-selective adaptation, scrambled and even isolated face parts adapt the M170 response to a similar extent as full faces. Thus, at least for the relatively early face-selective M170 response, face parts produce face-selective adaptation but face configurations do not. These results suggest that face parts are important at relatively early stages of face perception.

4.
To find cortical correlates of face recognition, we manipulated the recognizability of face images in a parametric manner by masking them with narrow-band spatial noise. Face recognition performance was best at the lowest and highest noise spatial frequencies (NSFs, 2 and 45 c/image, respectively), and degraded gradually towards central NSFs (11–16 c/image). The strength of the 130–180 ms neuromagnetic response (M170) in the temporo-occipital cortex paralleled the recognition performance, whereas the mid-occipital response at 70–120 ms acted in the opposite manner, being strongest for the central NSFs. To noise stimuli without faces, M170 was small and rather insensitive to NSF, whereas the mid-occipital responses closely resembled the responses to the combined face and noise stimuli. These results suggest that the 100 ms mid-occipital response is sensitive to the central spatial frequencies that are critical for face recognition, whereas the M170 response is sensitive to the visibility of a face and closely related to face recognition.

5.
Activation in or near the fusiform gyrus was estimated to faces and control stimuli. Activation peaked at 165 ms and was strongest to digitized photographs of human faces, regardless of whether they were presented in color or grayscale, suggesting that face- and color-specific areas are functionally separate. Schematic sketches evoked approximately 30% less activation than did face photographs. Scrambling the locations of facial features reduced the response by approximately 25% in either hemisphere, suggesting that configurational versus analytic processing is not lateralized at this latency. Animal faces evoked approximately 50% less activity, and common objects, animal bodies or sensory controls evoked approximately 80% less activity than human faces. The (small) responses evoked by meaningless control images were stronger when they included surfaces and shading, suggesting that the fusiform gyrus may use these features in constructing its face-specific response. Putative fusiform activation was not significantly related to stimulus repetition, gender or emotional expression. A midline occipital source significantly distinguished between faces and control images as early as 110 ms, but was more sensitive to sensory qualities. This source significantly distinguished happy and sad faces from those with neutral expressions. We conclude that the fusiform gyrus may selectively encode faces at 165 ms, transforming sensory input for further processing.

6.
We used whole-head magnetoencephalography measurements to investigate the spatiotemporal pattern of neural activity related to language production. Eight participants overtly responded by repeating aloud or vocalizing an internally generated verb to auditorily or visually presented nouns. Activity peaked within primary sensory (auditory or visual) cortices between 75 and 130 ms after stimulus onset, association cortices (inferior and superior temporal gyri) between 130 and 170 ms, and inferior frontal and premotor areas between 150 and 240 ms. Common to auditory and visual modalities, peak activity at about 220 ms was significantly larger in bilateral inferior frontal and left precentral regions when participants generated a verb than when they repeated a noun. These early differences in frontal regions may reflect the allocation of resources to the processing of low-level perceptions that are projected to the premotor areas early in the preparation of language production.

7.
The ERP component N170 is face-sensitive, yet its specificity for faces is controversial. We recorded ERPs while subjects viewed upright and inverted faces and seven object categories. Peak, topography and segmentation analyses were performed. N170 was earlier and larger to faces than to all objects. The classic increase in amplitude and latency was found for inverted faces on N170 but also on P1. Segmentation analyses revealed an extra map found only for faces, reflecting an extra cluster of activity compared to objects. While the N1 for objects seems to reflect the return to baseline from the P1, the N170 for faces reflects supplementary activity. The electrophysiological 'specificity' of faces could lie in the involvement of extra generators for face processing compared to objects, and the N170 for faces seems qualitatively different from the N1 for objects. Object and face processing also differed as early as 120 ms.

8.
The aim of this study was to determine the extent to which the neural representation of faces in visual cortex is viewpoint dependent or viewpoint invariant. Magnetoencephalography was used to measure evoked responses to faces during an adaptation paradigm. Using familiar and unfamiliar faces, we compared the amplitude of the M170 response to repeated images of the same face with images of different faces. We found a reduction in the M170 amplitude to repeated presentations of the same face image compared with images of different faces when shown from the same viewpoint. To establish if this adaptation to the identity of a face was invariant to changes in viewpoint, we varied the viewing angle of the face within a block. We found that the reduction in response was no longer evident when images of the same face were shown from different viewpoints. This viewpoint-dependent pattern of results was the same for both familiar and unfamiliar faces. These results imply that either the face-selective M170 response reflects an early stage of face processing or that the computations underlying face recognition depend on a viewpoint-dependent neuronal representation.

9.
Cortical potentials were recorded from implanted electrodes during a difficult working memory task requiring rapid storage, modification and retrieval of multiple memoranda. Synchronous event-related potentials were generated in distributed occipital, parietal, Rolandic and prefrontal sites beginning approximately 130 ms after stimulus onset and continuing for >500 ms. Coherent phase-locked, event-related oscillations supported interaction between these dorsal stream structures throughout the task period. The Rolandic structures generated early as well as sustained potentials to sensory stimuli in the absence of movement. Activation peaks and phase lags between synaptic populations suggested that perceptual processing occurred exclusively in the visual association cortex from approximately 90 to 130 ms, with its results projected to fronto-parietal areas for interpretation from approximately 130 to 280 ms. The direction of interaction then appeared to reverse from approximately 300 to 400 ms, consistent with mental arithmetic being performed by fronto-parietal areas operating upon a visual scratch pad in the dorsolateral occipital cortex. A second reversal, from approximately 420 to 600 ms, may have represented an updating of memoranda stored in fronto-parietal sites. Lateralized perisylvian oscillations suggested an articulatory loop. Anterior cingulate activity was evoked by feedback signals indicating errors. These results indicate how a fronto-centro-parietal 'central executive' might interact with an occipital visual scratch pad, perisylvian articulatory loop and limbic monitor to implement the sequential stages of a complex mental operation.

10.
Electrophysiological and hemodynamic correlates of processing isolated faces have been investigated extensively over the last decade. A question not addressed thus far is whether the visual scene, which normally surrounds a face or a facial expression, has an influence on how the face is processed. Here we investigated this issue by presenting faces in natural contexts and measuring whether the emotional content of the scene influences processing of a facial expression. Event-related potentials were recorded to faces (fearful/neutral) embedded in scene contexts (fearful/neutral) while participants performed an orientation-decision task (face upright or inverted). Two additional experiments were run, one to examine the effects of context that occur without a face and the other to evaluate the effects of faces isolated from contexts. Faces without any context showed the largest N170 amplitudes. The presence of a face in a fearful context enhances the N170 amplitude over a face in neutral contexts, an effect that is strongest for fearful faces on left occipito-temporal sites. This N170 effect, and the corresponding topographic distribution, was not found for contexts-only, indicating that the increased N170 amplitude results from the combination of face and fearful context. These findings suggest that the context in which a face appears may influence how it is encoded.

11.
High-arousing emotional stimuli facilitate processing in early visual cortex, thereby acting as strong competitors for processing resources there. The present study used an electrophysiological approach for continuously measuring the time course of competition for processing resources in the visual pathway arising from emotionally salient but task-irrelevant input while performing a foreground target detection task. Steady-state visual evoked potentials (SSVEPs) were recorded to rapidly flickering squares superimposed upon neutral and emotionally high-arousing pictures, and variations in SSVEP amplitude over time were calculated. As reflected in SSVEP amplitude and target detection rates, arousing emotional background pictures withdrew processing resources from the detection task compared with neutral ones for several hundred milliseconds after stimulus onset. SSVEP amplitude was found to bear a close temporal relationship with accurate target detection as a function of time after stimulus onset.

12.
Single and multi-unit recordings in primates have identified spatially localized neuronal activity correlating with an animal's behavioral performance. Due to the invasive nature of these experiments, it has been difficult to identify such correlates in humans. We report the first non-invasive neural measurements of perceptual decision making, via single-trial EEG analysis, that lead to neurometric functions predictive of psychophysical performance for a face versus car categorization task. We identified two major discriminating components. The earliest, correlating with psychophysical performance, was consistent with the well-known face-selective N170. The second component, which was a better match to the psychometric function, did not occur until at least 130 ms later. As evidence for faces versus cars decreased, onset of the later, but not the earlier, component systematically shifted forward in time. In addition, a choice probability analysis indicated strong correlation between the neural responses of the later component and our subjects' behavioral judgments. These findings demonstrate a temporal evolution of component activity indicative of an evidence accumulation process which begins after early visual perception and has a processing time that depends on the strength of the evidence.
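The choice probability analysis mentioned above is conventionally the area under the ROC curve comparing single-trial neural responses grouped by the subject's choice, equivalent to a normalized Mann-Whitney U statistic. A minimal sketch on simulated component amplitudes (the distributions below are illustrative, not the study's data):

```python
import numpy as np

def choice_probability(resp_choice_a, resp_choice_b):
    """ROC area for single-trial responses grouped by behavioral choice.

    Counts, over all trial pairs, how often a choice-A response exceeds a
    choice-B response (ties count 0.5); 0.5 means no choice-related information.
    """
    a = np.asarray(resp_choice_a, dtype=float)
    b = np.asarray(resp_choice_b, dtype=float)
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

# Hypothetical late-component amplitudes, separated by the subject's report.
rng = np.random.default_rng(0)
face_trials = rng.normal(1.0, 1.0, 200)  # trials reported as "face"
car_trials = rng.normal(0.0, 1.0, 200)   # trials reported as "car"
cp = choice_probability(face_trials, car_trials)
```

With a separation of one standard deviation between the two choice distributions, `cp` lands near 0.76; a value reliably above 0.5 is what "strong correlation between neural responses and behavioral judgments" refers to.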

13.
Affectively arousing visual stimuli have been suggested to automatically attract attentional resources in order to optimize sensory processing. The present study crosses the factors of spatial selective attention and affective content, and examines the relationship between instructed (spatial) and automatic attention to affective stimuli. In addition to response times and error rate, electroencephalographic data from 129 electrodes were recorded during a covert spatial attention task. This task required silent counting of random-dot targets embedded in a 10 Hz flicker of colored pictures presented to both hemifields. Steady-state visual evoked potentials (ssVEPs) were obtained to determine amplitude and phase of electrocortical responses to pictures. An increase of ssVEP amplitude was observed as an additive function of spatial attention and emotional content. Statistical parametric mapping of this effect indicated occipito-temporal and parietal cortex activation contralateral to the attended visual hemifield in ssVEP amplitude modulation. This difference was most pronounced during selection of the left visual hemifield, at right temporal electrodes. In line with this finding, phase information revealed accelerated processing of aversive arousing, compared to affectively neutral pictures. The data suggest that affective stimulus properties modulate the spatiotemporal process along the ventral stream, encompassing amplitude amplification and timing changes of posterior and temporal cortex.
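Because the driving flicker has a known frequency (10 Hz here), ssVEP amplitude and phase can be extracted by projecting the EEG onto a complex exponential at that frequency in a sliding window. A minimal sketch of this demodulation on a synthetic signal (sampling rate, amplitudes, and noise level are illustrative assumptions, not parameters from the study):

```python
import numpy as np

def ssvep_amplitude(signal, fs, freq, win_sec=0.5):
    """Time-resolved amplitude at `freq` via a sliding-window DFT.

    Each window is correlated with exp(-2*pi*i*freq*t); the scaled magnitude
    recovers the amplitude of a sinusoid at the flicker frequency.
    """
    n = int(win_sec * fs)                       # samples per window (5 cycles of 10 Hz here)
    t = np.arange(n) / fs
    kernel = np.exp(-2j * np.pi * freq * t)
    amps = np.empty(len(signal) - n + 1)
    for start in range(amps.size):
        seg = signal[start:start + n]
        amps[start] = 2.0 * abs(np.dot(seg, kernel)) / n
    return amps

# Synthetic "EEG": a 10 Hz flicker response of amplitude 1.5 in background noise.
rng = np.random.default_rng(1)
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = 1.5 * np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.standard_normal(t.size)
amp = ssvep_amplitude(x, fs, 10.0)
```

Comparing such amplitude time courses between emotional and neutral background conditions is, in essence, how the competition for resources is tracked over time; the phase of the same complex projection gives the timing information the study reports.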

14.
The primate posterior parietal cortex (PPC) plays an important role in representing and recalling spatial relationships and in the ability to orient visual attention. This is evidenced by the parietal activation observed in brain imaging experiments performed during visuospatial tasks, and by the contralateral neglect syndrome that often accompanies parietal lesions. Individual neurons in monkey parietal cortex respond vigorously to the appearance of single, behaviorally relevant stimuli, but little is known about how they respond to more complex visual displays. The current experiments addressed this issue by recording activity from single neurons in area 7a of the PPC in monkeys performing a spatial version of a match-to-sample task. The task required them to locate salient stimuli in multiple-stimulus displays and release a lever after a subsequent stimulus appeared at the same location. Neurons responded preferentially to the appearance of salient stimuli inside their receptive fields. The presence of multiple stimuli did not affect appreciably the spatial tuning of responses in the majority of neurons or the population code for the location of the salient stimulus. Responses to salient stimuli could be distinguished from background stimuli approximately 100 ms after the onset of the cue. These results suggest that area 7a neurons represent the location of the stimulus attracting the animal's attention and can provide the spatial information required for directing attention to a salient stimulus in a complex scene.

15.
This and the following two papers describe event-related potentials (ERPs) evoked by visual stimuli in 98 patients in whom electrodes were placed directly upon the cortical surface to monitor medically intractable seizures. Patients viewed pictures of faces, scrambled faces, letter-strings, number-strings, and animate and inanimate objects. This paper describes ERPs generated in striate and peristriate cortex, evoked by faces, and evoked by sinusoidal gratings, objects and letter-strings. Short-latency ERPs generated in striate and peristriate cortex were sensitive to elementary stimulus features such as luminance. Three types of face-specific ERPs were found: (i) a surface-negative potential with a peak latency of approximately 200 ms (N200) recorded from ventral occipitotemporal cortex, (ii) a lateral surface N200 recorded primarily from the middle temporal gyrus, and (iii) a late positive potential (P350) recorded from posterior ventral occipitotemporal, posterior lateral temporal and anterior ventral temporal cortex. Face-specific N200s were preceded by P150 and followed by P290 and N700 ERPs. N200 reflects initial face-specific processing, while P290, N700 and P350 reflect later face processing at or near N200 sites and in anterior ventral temporal cortex. Face-specific N200 amplitude was not significantly different in males and females, in the normal and abnormal hemisphere, or in the right and left hemisphere. However, cortical patches generating ventral face-specific N200s were larger in the right hemisphere. Other cortical patches in the same region of extrastriate cortex generated grating-sensitive N180s and object-specific or letter-string-specific N200s, suggesting that the human ventral object recognition system is segregated into functionally discrete regions.

16.
Learning perceptual skills is characterized by rapid improvements in performance within the first hour of training (fast perceptual learning) followed by more gradual improvements that take place over several daily practice sessions (slow perceptual learning). Although it is widely accepted that slow perceptual learning is accompanied by enhanced stimulus representation in sensory cortices, there is considerable controversy about the neural substrates underlying early and rapid improvements in learning perceptual skills. Here we measured event-related brain potentials while listeners were presented with 2 phonetically different vowels. Listeners' ability to identify both vowels improved gradually during the first hour of testing and was paralleled by enhancements in an early evoked response (approximately 130 ms) localized in the right auditory cortex and a late evoked response (approximately 340 ms) localized in the right anterior superior temporal gyrus and/or inferior prefrontal cortex. These neuroplastic changes depended on listeners' attention and were preserved only if practice was continued; familiarity with the task structure (procedural learning) was not sufficient. We propose that the early increases in cortical responsiveness reflect goal-directed changes in the tuning properties of auditory neurons involved in parsing concurrent speech signals. Importantly, the neuroplastic changes occurred rapidly, demonstrating the flexibility of human speech segregation mechanisms.

17.
When observers must discriminate a weak sensory signal in noise, early sensory areas seem to reflect the instantaneous strength of the sensory signal. In contrast, high-level parietal and prefrontal areas appear to integrate these signals over time with activity peaking at the time of the observer's decision. Here, we used functional magnetic resonance imaging to investigate how the brain forms perceptual decisions about complex visual forms in a challenging task, requiring the discrimination of ambiguous 2-tone Mooney faces and visually similar nonface images. Face-selective areas in the ventral visual cortex showed greater activity when subjects reported perceiving a face as compared with a nonface, even on error trials. More important, activity was closely related to the time of the subject's decision for face judgments, even on individual trials, and resembled the time course of activity in motor cortex corresponding to the subject's behavioral report. These results indicate that perceptual decisions about ambiguous face-like stimuli are reflected early in the sensorimotor pathway, in face-selective regions of the ventral visual cortex. Activity in these areas may represent a potential rate-limiting step in the pathway from sensation to action when subjects must reach a decision about ambiguous face-like stimuli.

18.
Visual prostheses can elicit phosphenes by stimulating the retina, optic nerve, or visual cortex along the visual pathway. Psychophysical studies have demonstrated that visual function can be partly recovered with phosphene-based prosthetic vision. This study investigated the cognitive process of prosthetic vision through a face recognition task. Both behavioral responses and the face-specific N170 component of the event-related potential were analyzed in the presence of face and non-face stimuli under natural and simulated prosthetic vision. Our results showed that: (i) the accuracy of phosphene face recognition was comparable with that under natural vision when the phosphene grid increased to 25 × 21 or more; (ii) shorter response times were needed for phosphene face recognition; and (iii) the N170 component was delayed and enhanced under phosphene stimuli. It is suggested that recognition of phosphene patterns employs a configuration-based holistic processing mechanism with a distinct substage unspecific to faces.

19.
It is well established that spatially directed attention enhances visual perceptual processing. However, the earliest level at which processing can be affected remains unknown. To date, there has been no report of modulation of the earliest visual event-related potential component "C1" in humans, which indexes initial afference in primary visual cortex (V1). Thus it has been suggested that initial V1 activity is impenetrable, and that the earliest modulations occur in extrastriate cortex. However, the C1 is highly variable across individuals, to the extent that uniform measurement across a group may poorly reflect the dynamics of V1 activity. In the present study we employed an individualized mapping procedure to control for such variability. Parameters for optimal C1 measurement were determined in an independent, preliminary "probe" session and later applied in a follow-up session involving a spatial cueing task. In the spatial task, subjects were cued on each trial to direct attention toward 1 of 2 locations in anticipation of an imperative Gabor stimulus and were required to detect a region of lower luminance appearing within the Gabor pattern 30% of the time at the cued location only. Our data show robust spatial attentional enhancement of the C1, beginning as early as its point of onset (57 ms). Source analysis of the attentional modulations points to generation in striate cortex. This finding demonstrates that at the very moment that visual information first arrives in cortex, it is already being shaped by the brain's attentional biases.

20.
We aimed at testing the cortical representation of complex natural sounds within auditory cortex by conducting 2 human magnetoencephalography experiments. To this end, we employed an adaptation paradigm and presented subjects with pairs of complex stimuli, namely, animal vocalizations and spectrally matched noise. In Experiment 1, we presented stimulus pairs of same or different animal vocalizations and same or different noise. Our results suggest a 2-step process of adaptation effects: first, we observed a general item-unspecific reduction of the N1m peak amplitude at 100 ms, followed by an item-specific amplitude reduction of the P2m component at 200 ms after stimulus onset for both animal vocalizations and noise. Multiple dipole source modeling revealed the right lateral Heschl's gyrus and the bilateral superior temporal gyrus as sites of adaptation. In Experiment 2, we tested for cross-adaptation between animal vocalizations and spectrally matched noise sounds, by presenting pairs of an animal vocalization and its corresponding or a different noise sound. We observed cross-adaptation effects for the P2m component within bilateral superior temporal gyrus. Thus, our results suggest selectivity of the evoked magnetic field at 200 ms after stimulus onset in nonprimary auditory cortex for the spectral fine structure of complex sounds rather than their temporal dynamics.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号