Similar Articles (20 results)
1.
Although the ability to recognize faces and objects from a variety of viewpoints is crucial to our everyday behavior, the underlying cortical mechanisms are not well understood. Recently, neurons in a face-selective region of the monkey temporal cortex were reported to be selective for mirror-symmetric viewing angles of faces as they were rotated in depth (Freiwald and Tsao, 2010). This property has been suggested to constitute a key computational step in achieving full view-invariance. Here, we measured functional magnetic resonance imaging activity in nine observers as they viewed upright or inverted faces presented at five different angles (-60, -30, 0, 30, and 60°). Using multivariate pattern analysis, we show that sensitivity to viewpoint mirror symmetry is widespread in the human visual system. The effect was observed in a large band of higher order visual areas, including the occipital face area, fusiform face area, lateral occipital cortex, mid fusiform, parahippocampal place area, and extending superiorly to encompass dorsal regions V3A/B and the posterior intraparietal sulcus. In contrast, early retinotopic regions V1-hV4 failed to exhibit sensitivity to viewpoint symmetry, as their responses could be largely explained by a computational model of low-level visual similarity. Our findings suggest that selectivity for mirror-symmetric viewing angles may constitute an intermediate-level processing step shared across multiple higher order areas of the ventral and dorsal streams, setting the stage for complete viewpoint-invariant representations at subsequent levels of visual processing.
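The mirror-symmetry analysis described above can be illustrated with a toy pattern-similarity test. The sketch below is not the authors' pipeline — it uses synthetic "voxel" patterns in which mirror views share a common component — but it shows the logic: patterns evoked by mirror-symmetric viewing angles (e.g., -30° and +30°) should correlate more strongly across independent data splits than patterns for unrelated view pairs.

```python
import numpy as np

def mirror_symmetry_index(split_a, split_b, angles):
    """Mean cross-split pattern correlation for mirror-symmetric view
    pairs (a == -b) minus the mean for all other different-view pairs."""
    mirror, other = [], []
    for a in angles:
        for b in angles:
            if a == b:
                continue                      # ignore same-view pairs
            r = np.corrcoef(split_a[a], split_b[b])[0, 1]
            (mirror if a == -b else other).append(r)
    return np.mean(mirror) - np.mean(other)

# Synthetic demo: patterns for mirror views share a common component.
rng = np.random.default_rng(0)
angles = [-60, -30, 0, 30, 60]
shared = {abs(a): rng.normal(size=200) for a in angles}   # 200 "voxels"
split_a, split_b = [{a: shared[abs(a)] + 0.5 * rng.normal(size=200)
                     for a in angles} for _ in range(2)]
idx = mirror_symmetry_index(split_a, split_b, angles)
print(round(idx, 2))   # positive: mirror pairs are more similar
```

A positive index recovers the built-in symmetry; applied to real data, a region like V1 with purely image-driven responses would show an index near zero.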

2.
The present study examined the coding of spatial position in object selective cortex. Using functional magnetic resonance imaging (fMRI) and pattern classification analysis, we find that three areas in object selective cortex, the lateral occipital area (LO), the fusiform face area (FFA), and the parahippocampal place area (PPA), robustly code the spatial position of objects. The analysis further revealed several anisotropies (e.g., horizontal/vertical asymmetry) in the representation of visual space in these areas. Finally, we show that the representation of information in these areas permits object category information to be extracted across varying locations in the visual field, a finding that suggests a potential neural solution to accomplishing translation invariance.

3.
Attentional control of the processing of neutral and emotional stimuli
A typical scene contains many different objects that compete for neural representation due to the limited processing capacity of the visual system. At the neural level, competition among multiple stimuli is evidenced by the mutual suppression of their visually evoked responses and occurs most strongly at the level of the receptive field. The competition among multiple objects can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that biasing signals due to selective attention can modulate neural activity in visual cortex not only in the presence but also in the absence of visual stimulation. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals likely derives from a distributed network of areas in frontal and parietal cortex. Competition suggests that once attentional resources are depleted, no further processing is possible. Yet, existing data suggest that emotional stimuli activate brain regions "automatically," largely immune from attentional control. We tested the alternative possibility, namely, that the neural processing of stimuli with emotional content is not automatic and instead requires some degree of attention. Our results revealed that, contrary to the prevailing view, all brain regions responding differentially to emotional faces, including the amygdala, did so only when sufficient attentional resources were available to process the faces. Thus, similar to the processing of other stimulus categories, the processing of facial expression is under top-down control.

4.
The brain network for the recognition of biological motion includes visual areas and structures of the mirror-neuron system. The latter respond during action execution as well as during action recognition. As motor and somatosensory areas predominantly represent the contralateral side of the body and visual areas predominantly process stimuli from the contralateral hemifield, we were interested in interactions between visual hemifield and action recognition. In the present study, human participants detected the facing direction of profile views of biological motion stimuli presented in the visual periphery. They recognized a right-facing body view of human motion better in the right visual hemifield than in the left; and a left-facing body view better in the left visual hemifield than in the right. In a subsequent fMRI experiment, performed with a similar task, two cortical areas in the left and right hemispheres were significantly correlated with the behavioural facing effect: primary somatosensory cortex (BA 2) and inferior frontal gyrus (BA 44). These areas were activated specifically when point-light stimuli presented in the contralateral visual hemifield displayed the side view of their contralateral body side. Our results indicate that the hemispheric specialization of one's own body map extends to the visual representation of the bodies of others.

5.
Our visual system can extract summary statistics from large collections of similar objects without forming detailed representations of the individual objects in the ensemble. Such object ensemble representation is adaptive and allows us to overcome the capacity limitation associated with representing specific objects. Surprisingly, little is known about the neural mechanisms supporting such object ensemble representation. Here we showed human observers identical photographs of the same object ensemble, different photographs depicting the same ensemble, or different photographs depicting different ensembles. We observed fMRI adaptation in anterior-medial ventral visual cortex whenever object ensemble statistics repeated, even when local image features differed across photographs. Interestingly, such object ensemble processing is closely related to texture and scene processing in the brain. In contrast, the lateral occipital area, a region involved in object-shape processing, showed adaptation only when identical photographs were repeated. These results provide the first step toward understanding the neural underpinnings of real-world object ensemble representation.

6.
Rolls ET. Neuropsychologia 2007;45(1):124-143
Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximising the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalisation and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses, generalise to other views of the same face. A theory is described of how such invariant representations may be produced in a hierarchically organised set of visual cortical areas with convergent connectivity. The theory proposes that neurons in these visual areas use a modified Hebb synaptic modification rule with a short-term memory trace to capture whatever can be captured at each stage that is invariant about objects as the objects change in retinal view, position, size and rotation. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye gaze, face view and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. 
Outputs of these systems reach the amygdala, in which face-selective neurons are found, and also the orbitofrontal cortex, in which some neurons are tuned to face identity and others to face expression. In humans, activation of the orbitofrontal cortex is found when a change of face expression acts as a social signal that behaviour should change; and damage to the orbitofrontal cortex can impair face and voice expression identification, and also the reversal of emotional behaviour that normally occurs when reinforcers are reversed.

7.
Humans can identify individual faces under different viewpoints, even after a single encounter. We determined brain regions responsible for processing face identity across view changes after variable delays with several intervening stimuli, using event-related functional magnetic resonance imaging during a long-term repetition priming paradigm. Unfamiliar faces were presented sequentially either in a frontal or three-quarter view. Each face identity was repeated once after an unpredictable lag, with either the same or another viewpoint. Behavioral data showed significant priming in response time, irrespective of view changes. Brain imaging results revealed a reduced response in the lateral occipital and fusiform cortex with face repetition. Bilateral face-selective fusiform areas showed view-sensitive repetition effects, generalizing only from three-quarter to front-views. More medial regions in the left (but not in the right) fusiform showed repetition effects across all types of viewpoint changes. These results reveal that distinct regions within the fusiform cortex hold view-sensitive or view-invariant traces of novel faces, and that face identity is represented in a view-sensitive manner in the functionally defined face-selective areas of both hemispheres. In addition, our finding of a better generalization after exposure to a 3/4-view than to a front-view demonstrates for the first time a neural substrate in the fusiform cortex for the common recognition advantage of three-quarter faces. This pattern provides new insights into the nature of face representation in the human visual system.

8.
People are extremely proficient at recognizing faces that are familiar to them, but are much worse at matching unfamiliar faces. We used fMR-adaptation to ask whether this difference in recognition might be reflected by an image-invariant representation for familiar faces in face-selective regions of the human ventral visual processing stream. Consistent with models of face processing, we found adaptation to repeated images of the same face image in the fusiform face area (FFA), but not in the superior-temporal face region (STS). To establish if the neural representation in the FFA was invariant to changes in view, we presented different images of the same face. Contrary to our hypothesis, we found that the response in the FFA to different images of the same person was the same as the response to images of different people. A group analysis showed a distributed pattern of adaptation to the same image of a face, which extended beyond the face-selective areas, including other regions of the ventral visual stream. However, this analysis failed to reveal any regions showing significant image-invariant adaptation. These results suggest that information about faces is represented in a distributed network using an image-dependent neural code.
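The fMR-adaptation logic above reduces to comparing responses for repeated versus novel stimuli. A minimal sketch of such an adaptation index follows; the response values are made up for illustration and stand in for condition-mean BOLD estimates (e.g., % signal change per run or subject), not real FFA data.

```python
import numpy as np

def adaptation_index(novel, repeated):
    """Proportional fMRI response reduction for repeated versus novel
    stimuli; positive values indicate repetition suppression."""
    novel = np.asarray(novel, float)
    repeated = np.asarray(repeated, float)
    return float(np.mean((novel - repeated) / (novel + repeated)))

# Illustrative (made-up) FFA responses in % signal change: repeats of
# the identical image vs. images of different people.
same_image = [0.42, 0.39, 0.45]
different_person = [0.61, 0.58, 0.63]
print(round(adaptation_index(different_person, same_image), 3))  # → 0.182
```

Under the study's finding, the index computed for "different images of the same person" versus "different people" would be near zero, signalling an image-dependent rather than identity-invariant code.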

9.
The representation of visual objects in primate brain is distributed and multiple neurons are involved in encoding each object. One way to understand the neural basis of object representation is to estimate the number of neural dimensions that are needed for veridical representation of object categories. In this study, the characteristics of the match between physical-shape and neural representational spaces in monkey inferior temporal (IT) cortex were evaluated. Specifically, we examined how the number of neural dimensions, stimulus behavioral saliency and stimulus category selectivity of neurons affected the correlation between shape and neural representational spaces in IT cortex. Single-unit recordings from monkey IT cortex revealed that there was a significant match between face space and its neural representation at lower neural dimensions, whereas the optimal match for the non-face objects was observed at higher neural dimensions. There was a statistically significant match between the face and neural spaces only in the face-selective neurons, whereas a significant match was observed for non-face objects in all neurons regardless of their category selectivity. Interestingly, the face neurons showed a higher match for the non-face objects than for the faces at higher neural dimensions. The optimal representation of face space in the responses of the face neurons was a low dimensional map that emerged early (~150 ms post-stimulus onset) and was followed by a high dimensional and relatively late (~300 ms) map for the non-face stimuli. These results support a multiplexing function for the face neurons in the representation of very similar shape spaces, but with different dimensionality and timing scales.
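The "match between shape and neural spaces as a function of neural dimensionality" can be sketched with a simple representational-similarity analysis: project the neural responses onto their first k principal components, then correlate the resulting dissimilarity matrix with the shape-space one. The code below is a generic illustration on synthetic data (20 stimuli, 5 shape features, 100 model neurons), not the authors' specific estimator.

```python
import numpy as np

def rdm(X):
    """Representational dissimilarity matrix: pairwise Euclidean
    distances between the rows (stimuli) of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def shape_neural_match(shape_feats, neural, dims):
    """Correlate the upper triangle of the shape RDM with the neural
    RDM after projecting the neural responses onto their first k
    principal components, for each k in dims."""
    iu = np.triu_indices(shape_feats.shape[0], k=1)
    target = rdm(shape_feats)[iu]
    centered = neural - neural.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return {k: np.corrcoef(rdm(centered @ vt[:k].T)[iu], target)[0, 1]
            for k in dims}

# Synthetic demo: neural responses are a noisy linear mixture of five
# underlying shape features, so few neural dimensions should suffice.
rng = np.random.default_rng(1)
shape = rng.normal(size=(20, 5))            # 20 stimuli x 5 shape features
neural = shape @ rng.normal(size=(5, 100)) + 0.5 * rng.normal(size=(20, 100))
match = shape_neural_match(shape, neural, dims=[1, 5, 20])
print({k: round(v, 2) for k, v in match.items()})
```

Scanning k in this way yields the kind of dimensionality profile the study reports: a space like face space that is well captured at low k saturates early, while one requiring higher dimensionality peaks only at larger k.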

10.
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Hum Brain Mapp 34:3101-3115, 2013.
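Noise-based image classification has a compact core: average the noise fields that evoked strong responses and subtract the average of those that evoked weak responses, so whatever image structure drives the response emerges from the noise. The sketch below demonstrates this on synthetic data with a hidden rectangular template standing in for a diagnostic face feature; it is an illustration of the general method, not the paper's fMRI pipeline.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Mean of the noise fields that evoked above-median responses minus
    the mean of those that evoked below-median responses."""
    responses = np.asarray(responses, float)
    hi = responses > np.median(responses)
    return noise_fields[hi].mean(axis=0) - noise_fields[~hi].mean(axis=0)

# Synthetic demo: the "response" tracks each noise field's match to a
# hidden rectangular template, which the analysis should recover.
rng = np.random.default_rng(2)
template = np.zeros((16, 16))
template[4:12, 6:10] = 1.0
noise = rng.normal(size=(500, 16, 16))
resp = (noise * template).sum(axis=(1, 2)) + rng.normal(size=500)
ci = classification_image(noise, resp)
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
print(round(r, 2))   # the recovered image correlates with the template
```

Substituting trial-wise BOLD amplitudes from a face-selective ROI for `resp` gives the neurally-derived classification images the abstract describes.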

11.
People perceive and evaluate others on the basis of social categories, such as race, gender and age. Initial processing of targets in terms of visually salient social categories is often characterized as inevitable. In the current study, we investigated the influence of processing goals on the representation of race in the visual processing stream. Participants were assigned to one of two mixed-race teams and categorized faces according to their group membership or skin color. To assess neural representations of race, we employed multivariate pattern analysis to examine neural activity related to the presentation of Black and White faces. As predicted, patterns of neural activity within the early visual cortex and fusiform gyri (FG) could decode the race of face stimuli above chance and were moderated by processing goals. Race decoding in early visual cortex was above chance in both categorization tasks and below chance in a prefrontal control region. More importantly, race decoding was greater in the FG during the group membership vs skin color categorization task. The results suggest that, ironically, explicit racial categorization can diminish the representation of race in the FG. These findings suggest that representations of race are dynamic, reflecting current processing goals.

12.
Detecting a change in a visual stimulus is particularly difficult when it is accompanied by a visual disruption such as a saccade or flicker. In order to say whether a stimulus has changed across such a disruption, some neural trace must persist. Here we investigated whether two different regions of the human extrastriate visual cortex contain neuronal populations encoding such a trace. Participants viewed a stimulus that included various objects; a short blank period (flicker) made it difficult to distinguish whether an object in the stimulus had changed or not. By applying transcranial magnetic stimulation (TMS) during the visual disruption we show that the lateral occipital (LO) cortex, but not the occipital face area, contains a sustained representation of a visual stimulus. TMS over LO improved the sensitivity and response bias for detecting changes by selectively reducing false alarms. We suggest that TMS enhanced the initial object representation and thus boosted neural events associated with object repetition. Our findings show that neuronal signals in the human LO cortex carry a sustained neural trace that is necessary for detecting the repetition of a stimulus.
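The "sensitivity and response bias" measures in this change-detection design are standard signal-detection quantities. The sketch below computes d′ and criterion c from raw counts; the trial counts are invented to mimic the reported pattern (fewer false alarms at equal hits under TMS), not the study's data.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Sensitivity (d') and response bias (criterion c) from change-
    detection counts, with a log-linear correction so that hit or
    false-alarm rates of 0 or 1 do not yield infinite z-scores."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (fas + 0.5) / (fas + crs + 1)
    return z(hr) - z(far), -0.5 * (z(hr) + z(far))

# Illustrative counts only: equal hits, fewer false alarms under TMS.
d_sham, c_sham = dprime_criterion(hits=40, misses=10, fas=20, crs=30)
d_tms, c_tms = dprime_criterion(hits=40, misses=10, fas=10, crs=40)
print(d_tms > d_sham, c_tms > c_sham)   # → True True
```

Selectively reducing false alarms raises d′ and shifts the criterion toward a more conservative value, which is the signature the abstract attributes to TMS over LO.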

13.
Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists whose robustness is controlled by high-level, contextual factors.

14.
Polymicrogyrias (PMG) are cortical malformations resulting from developmental abnormalities. In animal models PMG has been associated with abnormal anatomy, function, and organization. The purpose of this study was to describe the function and organization of human polymicrogyric cortex using functional magnetic resonance imaging. Three patients with epilepsy and bilateral parasagittal occipital polymicrogyri were studied. They all had normal vision as tested by Humphrey visual field perimetry. The functional organization of the visual cortex was reconstructed using phase-encoded retinotopic mapping analysis. This method sequentially stimulates each point in the visual field along the axes of a polar-coordinate system, thereby reconstructing the representation of the visual field on the cortex. We found normal cortical responses and organization of early visual areas (V1, V2, and V3/VP). The locations of these visual areas overlapped substantially with the PMG. In five out of six hemispheres the reconstructed primary visual cortex completely fell within polymicrogyric areas. Our results suggest that human polymicrogyric cortex is not only organized in a normal fashion, but is also actively involved in processing of visual information and contributes to normal visual perception.
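Phase-encoded retinotopic mapping, as used here, extracts for each voxel the Fourier component of its time series at the stimulus cycling frequency: the phase gives the preferred visual-field position and the amplitude the response strength. A minimal synthetic sketch of that computation (one ideal noise-free voxel, generic parameters, not the patients' data):

```python
import numpy as np

def phase_map(ts, n_cycles):
    """Fourier component of a voxel time series at the stimulus
    frequency: the phase encodes the preferred visual-field position,
    the (scaled) amplitude the response strength."""
    ts = np.asarray(ts, float)
    n = ts.shape[-1]
    carrier = np.exp(2j * np.pi * n_cycles * np.arange(n) / n)
    comp = (ts * carrier).sum(axis=-1)
    return np.angle(comp), 2 * np.abs(comp) / n

# Synthetic voxel driven by a wedge rotating 8 times per 240-volume run.
n_vols, cycles, true_phase = 240, 8, 1.2
t = np.arange(n_vols)
bold = np.cos(2 * np.pi * cycles * t / n_vols - true_phase)
phase, amp = phase_map(bold, cycles)
print(round(phase, 2), round(amp, 2))   # → 1.2 1.0
```

Mapping each voxel's recovered phase back onto the cortical surface produces the polar-angle and eccentricity maps from which V1/V2/V3 borders are drawn — the maps that, in these patients, fell inside polymicrogyric cortex yet looked normal.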

15.
Humans can attend to different objects independent of their spatial locations. While selecting an object has been shown to modulate object processing in high-level visual areas in occipitotemporal cortex, where/how behavioral importance (i.e., priority) for objects is represented is unknown. Here we examined the patterns of distributed neural activity during an object-based selection task. We measured brain activity with functional magnetic resonance imaging (fMRI), while participants viewed two superimposed, dynamic objects (left- and right-pointing triangles) and were cued to attend to one of the triangle objects. Enhanced fMRI response was observed for the attention conditions compared to a neutral condition, but no significant difference was found in overall response amplitude between two attention conditions. By using multi-voxel pattern classification (MVPC), however, we were able to distinguish the neural patterns associated with attention to different objects in early visual cortex (V1 to hMT+) and lateral occipital complex (LOC). Furthermore, distinct multi-voxel patterns were also observed in frontal and parietal areas. Our results demonstrate that object-based attention has a wide-spread modulation effect along the visual hierarchy and suggest that object-specific priority information is represented by patterned neural activity in the dorsal frontoparietal network.

16.
Artificial percepts (phosphenes) can be induced by applying transcranial magnetic stimulation (TMS) over human visual cortex. Although phosphenes have been used to study visual awareness, the neural mechanisms generating them have not yet been delineated. We directly tested the two leading hypotheses of how phosphenes arise. These hypotheses correspond to the two competing views of the neural genesis of awareness: the early, feedforward view and the late, recurrent feedback model. We combined online TMS and EEG recordings to investigate whether the electrophysiological correlates of conscious phosphene perception are detectable early after TMS onset as an immediate local effect of TMS, or only at longer latencies, after interactions of TMS-induced activity with other visual areas. Stimulation was applied at the intensity threshold at which participants saw a phosphene on half of the trials, and brain activity was recorded simultaneously with electroencephalography. Phosphene perception was associated with a differential pattern of TMS-evoked brain potentials that started 160-200 ms after stimulation and encompassed a wide array of posterior areas. This pattern was differentiated from the TMS-evoked potential after stimulation of a control site. These findings suggest that conscious phosphene perception is not a local phenomenon, but arises only after extensive recurrent processing. Hum Brain Mapp, 2010.

17.
The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse-cheese) or not (e.g., crown-mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.

18.
Since the discovery of "face cells" in the early 1980s, single-cell recording experiments in non-human primates have made significant contributions toward the elucidation of neural mechanisms underlying face perception and recognition. In this paper, we review the recent progress in face cell studies, including the recent remarkable findings of the face patches that are scattered around the anterior temporal cortical areas of monkeys. In particular, we focus on the neural representations of facial identity within these areas. The identification of faces requires both discrimination of facial identities and generalization across facial views. It has been indicated by some laboratories that the population of face cells found in the anterior ventral inferior temporal cortex of monkeys represent facial identity in a manner which is facial view-invariant. These findings suggest a relatively distributed representation that operates for facial identification. It has also been shown that certain individual neurons in the medial temporal lobe of humans represent view-invariant facial identity. This finding suggests a relatively sparse representation that may be employed for memory formation. Finally, we summarize our recent study, showing that the population of face cells in the anterior ventral inferior temporal cortex of monkeys that represent view-invariant facial identity, can also represent learned paired associations between an abstract picture and a particular facial identity, extending our understanding of the function of the anterior ventral inferior temporal cortex in the recognition of associative meanings of faces.

19.
The processing of auditory spatial information in cortical areas of the human brain outside of the primary auditory cortex remains poorly understood. Here we investigated the role of the superior temporal gyrus (STG) and the occipital cortex (OC) in spatial hearing using repetitive transcranial magnetic stimulation (rTMS). The right STG is known to be of crucial importance for visual spatial awareness, and has been suggested to be involved in auditory spatial perception. We found that rTMS of the right STG induced a systematic error in the perception of interaural time differences (a primary cue for sound localization in the azimuthal plane). This is in accordance with the recent view, based on both neurophysiological data obtained in monkeys and human neuroimaging studies, that information on sound location is processed within a dorsolateral "where" stream including the caudal STG. A similar, but opposite, auditory shift was obtained after rTMS of secondary visual areas of the right OC. Processing of auditory information in the OC has previously been shown to exist only in blind persons. Thus, the latter finding provides the first evidence of an involvement of the visual cortex in spatial hearing in sighted human subjects, and suggests a close interconnection of the neural representation of auditory and visual space. Because rTMS induced systematic shifts in auditory lateralization, but not a general deterioration, we propose that rTMS of STG or OC specifically affected neuronal circuits transforming auditory spatial coordinates in order to maintain alignment with vision.
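The interaural time difference (ITD) the participants judged is the arrival-time lag between the two ears, conventionally estimated as the peak of the cross-correlation between the ear signals. The sketch below illustrates that computation on synthetic binaural noise; the sampling rate and 22-sample lag are arbitrary choices for the demo, not stimulus parameters from the study.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference by cross-correlating the
    two ear signals; positive values mean the left signal lags."""
    xcorr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(right) - 1)
    return lag / fs

# Synthetic binaural noise: the left ear receives the signal 22 samples
# (~0.5 ms at 44.1 kHz) later -- a large azimuthal cue.
fs = 44100
rng = np.random.default_rng(3)
sig = rng.normal(size=int(0.05 * fs))
left = np.roll(sig, 22)
itd = estimate_itd(left, sig, fs)
print(round(itd * 1e6))   # → 499 microseconds
```

A systematic shift of the kind rTMS induced would correspond to subjects' perceptual midline moving to a nonzero ITD, while the cue itself remains computable as above.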

20.
The neural basis of biased competition in human visual cortex
A typical scene contains many different objects that compete for neural representation due to the limited processing capacity of the visual system. At the neural level, competition among multiple stimuli is evidenced by the mutual suppression of their visually evoked responses and occurs most strongly at the level of the receptive field. The competition among multiple objects can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that biasing signals due to selective attention can modulate neural activity in visual cortex not only in the presence, but also in the absence of visual stimulation. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals likely derives from a distributed network of areas in frontal and parietal cortex. Attention-related activity in frontal and parietal areas does not reflect attentional modulation of visually evoked responses, but rather the attentional operations themselves.
