Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
Complex motor sequencing and sensory integration are two key items in scales assessing neurological soft signs. However, the underlying neural mechanisms and heritability of these two functions are not known. Using a healthy twin design, we adopted two functional brain imaging tasks focusing on the fist-edge-palm (FEP) complex motor sequence and audiovisual integration (AVI). Fifty-six monozygotic twins and 56 dizygotic twins were recruited in this study. The pre- and postcentral, temporal and parietal gyri, the supplementary motor area, and the cerebellum were activated during the FEP motor sequence, whereas the precentral, temporal, and fusiform gyri, the thalamus, and the caudate were activated during AVI. Activation in the supplementary motor area during the FEP motor sequence and activation in the precentral gyrus and the thalamic nuclei during AVI exhibited significant heritability estimates, ranging from 0.50 to 0.62. These results suggest that activation in cortical motor areas, the thalamus, and the cerebellum associated with complex motor sequencing and audiovisual integration may be heritable.
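
The abstract does not say how the heritability estimates were obtained; twin studies of this kind typically fit an ACE model, for which Falconer's formula gives a quick approximation. Below is a minimal, illustrative Python sketch on simulated activation values; the data and the use of Falconer's formula (rather than the authors' actual estimator) are assumptions.

```python
# Illustrative only: Falconer's approximation to heritability from twin
# correlations. The study itself likely used ACE structural-equation
# modelling; the data below are simulated, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def twin_correlation(pairs):
    """Pearson correlation between twin 1 and twin 2 activation values."""
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

# Simulated SMA activation (beta values) for 56 MZ and 56 DZ twin pairs.
mz = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=56)
dz = rng.multivariate_normal([0, 0], [[1, .3], [.3, 1]], size=56)

r_mz, r_dz = twin_correlation(mz), twin_correlation(dz)
h2 = 2 * (r_mz - r_dz)            # Falconer's formula: h2 = 2(rMZ - rDZ)
c2 = 2 * r_dz - r_mz              # shared-environment component
e2 = 1 - r_mz                     # unique environment + measurement error
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
```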

2.
The pulvinar nuclei appear to function as the subcortical visual pathway that bypasses the striate cortex, rapidly processing coarse facial information. We investigated responses from monkey pulvinar neurons during a delayed non-matching-to-sample task, in which monkeys were required to discriminate five categories of visual stimuli [photos of faces with different gaze directions, line drawings of faces, face-like patterns (three dark blobs on a bright oval), eye-like patterns and simple geometric patterns]. Of 401 neurons recorded, 165 neurons responded differentially to the visual stimuli. These visual responses were suppressed by scrambling the images. Although these neurons exhibited a broad response latency distribution, face-like patterns elicited responses with the shortest latencies (approximately 50 ms). Multidimensional scaling analysis indicated that the pulvinar neurons could specifically encode face-like patterns during the first 50-ms period after stimulus onset and classify the stimuli into one of the five different categories during the next 50-ms period. The amount of stimulus information conveyed by the pulvinar neurons and the number of stimulus-differentiating neurons were consistently higher during the second 50-ms period than during the first 50-ms period. These results suggest that responsiveness to face-like patterns during the first 50-ms period might be attributed to ascending inputs from the superior colliculus or the retina, while responsiveness to the five different stimulus categories during the second 50-ms period might be mediated by descending inputs from cortical regions. These findings provide neurophysiological evidence for pulvinar involvement in social cognition and, specifically, rapid coarse facial information processing.
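
As a rough illustration of the multidimensional scaling step, the sketch below embeds a simulated neuron-by-stimulus response matrix into two dimensions, so that stimulus categories evoking similar population responses land close together. The response matrix is fabricated; only the general analysis logic follows the abstract.

```python
# Sketch of MDS over population responses: rows = stimulus categories,
# columns = neurons; distances between row vectors approximate how
# separably the pulvinar population encodes each category. Data simulated.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
categories = ["face_photo", "line_drawing", "face_like", "eye_like", "geometric"]

# 5 stimulus categories x 165 responsive neurons (firing-rate vectors).
responses = rng.normal(size=(5, 165)) + np.arange(5)[:, None] * 0.3

embedding = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
coords = embedding.fit_transform(responses)
for name, (x, y) in zip(categories, coords):
    print(f"{name:>12}: ({x:+.2f}, {y:+.2f})")
```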

3.
The visual attentional blink can be substantially reduced by delivering a task-irrelevant sound synchronously with the second visual target (T2), and this effect is further modulated by the semantic congruency between the sound and T2. However, whether the cross-modal benefit originates from audiovisual interactions or sound-induced alertness remains controversial, and whether the semantic congruency effect is contingent on audiovisual temporal synchrony needs further investigation. The current study investigated these questions by recording event-related potentials (ERPs) in a visual attentional blink task wherein a sound could either synchronize with T2, precede T2 by 200 ms, be delayed by 100 ms, or be absent, and could be either semantically congruent or incongruent with T2 when delivered. The behavioral data showed that both the cross-modal boost of T2 discrimination and the further semantic modulation were largest when the sound synchronized with T2. In parallel, the ERP data revealed that both the early occipital cross-modal P195 component (192–228 ms after T2 onset) and the late parietal cross-modal N440 component (424–448 ms) were prominent only when the sound synchronized with T2, with the former elicited only when the sound was also semantically congruent and the latter occurring only when it was incongruent. These findings demonstrate not only that the cross-modal boost of T2 discrimination during the attentional blink stems from early audiovisual interactions and that the semantic congruency effect depends on audiovisual temporal synchrony, but also that the semantic modulation can unfold at an early stage of visual discrimination processing.
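
The component amplitudes reported here are presumably mean voltages within the stated post-T2 time windows. A generic sketch of that computation on a hypothetical trials x channels x samples array follows; the sampling rate, epoch layout, and data are assumptions, not details from the paper.

```python
# Generic ERP window measurement: average voltage in the P195 (192-228 ms)
# and N440 (424-448 ms) windows relative to T2 onset. Epochs are simulated;
# in practice they would come from an EEG pipeline such as MNE-Python.
import numpy as np

sfreq = 500.0                                     # Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)           # seconds relative to T2
rng = np.random.default_rng(2)
epochs = rng.normal(size=(120, 64, times.size))   # trials x channels x samples

def mean_window_amplitude(epochs, times, tmin, tmax):
    """Mean amplitude (over trials and samples) per channel in a window."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, :, mask].mean(axis=(0, 2))

p195 = mean_window_amplitude(epochs, times, 0.192, 0.228)
n440 = mean_window_amplitude(epochs, times, 0.424, 0.448)
print(p195.shape, n440.shape)                     # one value per channel
```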

4.
What happens in our brains when we see a face? The neural mechanisms of face processing – namely, the face-selective regions – have been extensively explored. Research has traditionally focused on visual cortex face-regions; more recently, the role of face-regions outside the visual cortex (i.e., non-visual-cortex face-regions) has been acknowledged as well. The major quest today is to reveal the functional role of each of these regions in face processing. To make progress in this direction, it is essential to understand the extent to which the face-regions, and particularly the non-visual-cortex face-regions, process only faces (i.e., face-specific, domain-specific processing) or rather are involved in more domain-general cognitive processing. In the current functional MRI study, we systematically examined the activity of the whole face-network during a face-unrelated reading task (i.e., reading meaningful written sentences with content unrelated to faces/people, and non-words). We found that the non-visual-cortex face-regions (i.e., right lateral prefrontal cortex and posterior superior temporal sulcus), but not the visual cortex face-regions, responded significantly more strongly to sentences than to non-words. In general, some degree of sentence selectivity was found in all non-visual-cortex face-regions. The present results highlight the possibility that processing in the non-visual-cortex face-selective regions might not be exclusively face-specific, but rather partly or even fully domain-general. In this paper, we illustrate how knowledge about domain-general processing in face-regions can help advance our general understanding of face processing mechanisms. Our results therefore suggest that the problem of face processing should be approached in the broader scope of cognition in general.

5.
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition has remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Because multiple stimulus types were used, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information play important roles by carrying considerable expression information that can facilitate facial expression recognition. Hum Brain Mapp 38:3113–3125, 2017. © 2017 Wiley Periodicals, Inc.
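
A minimal sketch of the decoding logic, cross-validated classification of expression labels from ROI voxel patterns, is given below using scikit-learn in place of whatever toolbox the authors used; the patterns are simulated and all sizes are illustrative.

```python
# Minimal MVPA sketch: cross-validated linear SVM decoding six expression
# labels from ROI voxel patterns. Patterns are simulated; the paper's
# actual pipeline (ROI definition, beta estimation) is not reproduced.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials, n_voxels = 180, 250                 # 30 trials per emotion (assumed)
X = rng.normal(size=(n_trials, n_voxels))
y = np.repeat(np.arange(6), n_trials // 6)    # six basic emotions
X += y[:, None] * 0.05                        # inject weak label information

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)     # stratified 5-fold CV
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/6:.2f})")
```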

6.
A face-selective neural signal is reliably found in humans with functional MRI and event-related potential (ERP) measures, which provide complementary information about the spatial and temporal properties of the neural response. However, because most neuroimaging studies so far have studied ERP and fMRI face-selective markers separately, the relationship between them is still unknown. Here we simultaneously recorded fMRI and ERP responses to faces and chairs to examine the correlations across subjects between the magnitudes of fMRI and ERP face-selectivity measures. Findings show that the face-selective responses in the temporal lobe (i.e., the fusiform face area, FFA) and superior temporal sulcus (fSTS), but not the face-selective response in the occipital cortex (occipital face area, OFA), were highly correlated with the face-selective N170 component. In contrast, the OFA was correlated with earlier ERPs at about 110 ms after stimulus onset. Importantly, these correlations reveal a temporal dissociation between the face-selective area in the occipital lobe and face-selective areas in the temporal lobe. Despite the very different time-scales of the fMRI and EEG signals, our data show that a correlation analysis across subjects may be informative with respect to the latency at which different brain regions process information. Hum Brain Mapp, 2010. © 2010 Wiley-Liss, Inc.

7.
The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining “conflicting” versus “validating” AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals. Hum Brain Mapp 35:5587–5605, 2014. © 2014 Wiley Periodicals, Inc.
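
For readers unfamiliar with activation likelihood estimation, the core computation is a voxelwise union of Gaussian-smoothed focus maps. The toy one-dimensional sketch below shows only that union step; real ALE implementations (e.g., GingerALE or NiMARE) add sample-size-dependent kernels and permutation-based thresholding, none of which is shown.

```python
# Toy 1-D activation likelihood estimation: each experiment contributes a
# "modelled activation" (MA) map (Gaussian around its reported foci), and
# the ALE value is the voxelwise probability that at least one experiment
# truly activates there: ALE = 1 - prod(1 - MA_i). Foci are invented.
import numpy as np

grid = np.arange(0, 100.0)                    # a 1-D "brain" of 100 voxels

def modelled_activation(foci, sigma=3.0):
    ma = np.zeros_like(grid)
    for f in foci:                            # max across kernels, so foci
        ma = np.maximum(ma, np.exp(-((grid - f) ** 2) / (2 * sigma**2)))
    return ma                                 # within one experiment don't
                                              # double-count
experiments = [[20.0, 55.0], [22.0], [54.0, 80.0]]   # foci per experiment
ma_maps = np.array([modelled_activation(f) for f in experiments])
ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)
print("peak ALE voxel:", int(grid[np.argmax(ale)]), "value:", round(ale.max(), 3))
```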

8.
Background: Semantic dementia (SD) has been recognized as a representative dementia with presenile onset; however, recent epidemiological studies have shown that SD also occurs in the elderly. There have been few studies on the differences in clinical profiles between early-onset SD (EO-SD) and late-onset SD (LO-SD). Age-associated changes in the brain might cause additional cognitive and behavioural features in LO-SD relative to typical EO-SD cases. The aim of the present study was to clarify the neuropsychological profiles and the behavioural and psychological symptoms of dementia (BPSD) of LO-SD patients observed in screening tests, in comparison with EO-SD patients and late-onset Alzheimer's disease (LO-AD) patients as controls. Methods: Study participants were LO-SD (n = 10), EO-SD (n = 15) and LO-AD (n = 47) patients. We administered the Mini-Mental State Examination (MMSE), the Raven's Coloured Progressive Matrices (RCPM), the Short-Memory Questionnaire (SMQ), the Neuropsychiatric Inventory (NPI) and the Stereotypy Rating Inventory (SRI). Results: Both SD groups scored significantly lower than the LO-AD patients on the 'naming' item of the MMSE. On the 'construction' item of the MMSE and the RCPM, however, the LO-SD patients as well as the LO-AD patients scored significantly lower than the EO-SD patients. On the SMQ score, the 'euphoria' and 'disinhibition' scores of the NPI, and the SRI total and subscale scores, both SD groups were significantly higher, whereas on the 'delusion' score of the NPI both SD groups were significantly lower than the LO-AD patients. Conclusions: Visuospatial and constructive skills of LO-SD patients might be mildly deteriorated compared with EO-SD patients, whereas other cognitive and behavioural profiles of LO-SD are similar to EO-SD. Age-associated changes in the brain should be considered when diagnosing SD in elderly patients.

9.
Social status is a salient cue that shapes our perceptions of other people and ultimately guides our social interactions. Despite the pervasive influence of status on social behavior, how information about the status of others is represented in the brain remains unclear. Here, we tested the hypothesis that social status information is embedded in our neural representations of other individuals. Participants learned to associate faces with names, job titles that varied in associated status, and explicit markers of reputational status (star ratings). Trained stimuli were presented in a functional magnetic resonance imaging experiment where participants performed a target detection task orthogonal to the variable of interest. A network of face-selective brain regions extending from the occipital lobe to the orbitofrontal cortex was localized and served as regions of interest. Using multivoxel pattern analysis, we found that face-selective voxels in the lateral orbitofrontal cortex, a region involved in social and nonsocial valuation, could decode faces based on their status. Similar effects were observed with two different status manipulations, one based on stored semantic knowledge (e.g., different careers) and one based on learned reputation (e.g., star ranking). These data suggest that a face-selective region of the lateral orbitofrontal cortex may contribute to the perception of social status, potentially underlying the preferential attention and favorable biases humans display toward high-status individuals.

10.
Previous neuroimaging studies have shown that the patterns of brain activity during the processing of personally relevant names (e.g., own name, friend's name, partner's name, etc.) and the names of famous people (e.g., celebrities) are different. However, it is not known how the activity in this network is influenced by the modality of the presented stimuli. In this fMRI study, we investigated the pattern of brain activations during the recognition of aurally and visually presented full names of the subject, a significant other, a famous person and unknown individuals. In both modalities, we found that the processing of the self-name and the significant other's name was associated with increased activation in the medial prefrontal cortex (MPFC). Acoustic presentations of these names also activated the bilateral inferior frontal gyri (IFG). This pattern of results supports the role of the MPFC in the processing of personally relevant information, irrespective of modality. Hum Brain Mapp 34:2069–2077, 2013. © 2011 Wiley Periodicals, Inc.

11.
12.
Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that underlie the phenomenon, the neural basis of visually induced effects on auditory perception remains largely unknown. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus, while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli, which scaled with subject-specific enhancement in perceptual sensitivity. Concordantly, a modulation of event-related potentials could already be observed over frontal electrodes at an early latency (30–80 ms), which again scaled with subject-specific behavioural benefits. Later modulations starting around 280 ms, that is, in the time range of the P3, did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points at an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect), suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.

13.
The goal of the study was to determine whether the semantic variant of primary progressive aphasia (svPPA) affects the intrinsic connectivity network anchored to the left and right anterior hippocampus, but spares the posterior hippocampus. A resting-state functional connectivity MRI (rs-fcMRI) study was conducted in a group of patients with svPPA and in controls, using a seed-to-voxel approach. Compared with controls, svPPA patients showed massively reduced connectivity of the anterior hippocampus, mainly the left one, but not of the left or right posterior hippocampus. In svPPA, the anterior hippocampus showed reduced functional connectivity with regions implicated in the semantic memory network. The strength of functional connectivity between the left anterior hippocampus and the ventromedial cortex also correlated significantly with performance in semantic tasks. These findings indicate that the functional disconnection of the anterior hippocampus may be a promising in vivo biomarker of svPPA and illustrate the role of this hippocampal subregion in the semantic memory system.
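
A seed-to-voxel analysis of this kind can be sketched with nilearn roughly as below: extract the mean time series from a sphere around an anterior-hippocampus seed, then correlate it with every brain voxel. The seed coordinate, file name, and filtering parameters are placeholders, not values from the study.

```python
# Sketch of a seed-to-voxel rs-fcMRI analysis with nilearn. The left
# anterior hippocampus seed coordinate and the file names are
# placeholders; the study's preprocessing and confound set are not shown.
import numpy as np
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker

func_img = "subject01_rest_preprocessed.nii.gz"   # hypothetical file
seed = [(-24, -12, -20)]                          # assumed MNI coordinate

seed_masker = NiftiSpheresMasker(seed, radius=6, detrend=True,
                                 standardize=True, low_pass=0.1, t_r=2.0)
brain_masker = NiftiMasker(detrend=True, standardize=True,
                           low_pass=0.1, t_r=2.0)

seed_ts = seed_masker.fit_transform(func_img)     # (n_scans, 1)
brain_ts = brain_masker.fit_transform(func_img)   # (n_scans, n_voxels)

# Voxelwise Pearson correlation with the seed (series are z-scored).
fc = np.dot(brain_ts.T, seed_ts) / seed_ts.shape[0]
fc_img = brain_masker.inverse_transform(fc.T)     # back to brain space
fc_img.to_filename("seed_fc_left_ant_hippocampus.nii.gz")
```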

14.
Object recognition benefits maximally from multimodal sensory input when stimulus presentation is noisy or degraded. Whether this advantage can be attributed specifically to the extent of overlap in object-related information, or rather to object-unspecific enhancement due to the mere presence of additional sensory stimulation, remains unclear. Further, the cortical processing differences driving increased multisensory integration (MSI) for degraded compared with clear information remain poorly understood. Here, two consecutive studies first compared behavioral benefits of audio-visual overlap of object-related information, relative to conditions where one channel carried information and the other carried noise. A hierarchical drift diffusion model indicated performance enhancement when auditory and visual object-related information was simultaneously present for degraded stimuli. A subsequent fMRI study revealed visual dominance on a behavioral and neural level for clear stimuli, while degraded stimulus processing was mainly characterized by activation of a frontoparietal multisensory network, including the intraparietal sulcus (IPS). Connectivity analyses indicated that integration of degraded object-related information relied on IPS input, whereas clear stimuli were integrated through direct information exchange between visual and auditory sensory cortices. These results indicate that the inverse effectiveness observed for identification of degraded relative to clear objects in behavior and brain activation might be facilitated by selective recruitment of an executive cortical network which uses the IPS as a relay mediating crossmodal sensory information exchange.
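
The hierarchical drift diffusion modelling can be sketched with the Python hddm package; the data file, column names, and the choice to let drift rate vary by stimulus clarity and modality are assumptions about the analysis, not specifics reported in the abstract.

```python
# Sketch of a hierarchical drift-diffusion model fit with the `hddm`
# package. The file and the columns ('rt', 'response', 'subj_idx',
# 'clarity', 'modality') are hypothetical; the paper's exact model
# specification is not given in the abstract.
import hddm

data = hddm.load_csv("object_recognition_rts.csv")   # hypothetical file

# Let drift rate v depend on whether the stimulus was clear or degraded
# and on which modalities carried object information.
model = hddm.HDDM(data, depends_on={"v": ["clarity", "modality"]})
model.find_starting_values()                         # MAP initialization
model.sample(2000, burn=500)                         # MCMC sampling
model.print_stats()                                  # posterior summaries
```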

15.
To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured event-related potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or in non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact: accented focused words were processed more deeply than words in conditions where focus and accentuation mismatched, or where the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented, while reduced N400 effects were found for the other dialogues. In contrast, females produced similar N400 effects in all conditions. These results suggest that, regardless of external cues, females tend to engage in more elaborate semantic processing than males.

16.
The self is a multifaceted phenomenon that integrates information and experience across multiple time scales. How temporal integration on the psychological level of the self is related to temporal integration on the neuronal level remains unclear. To investigate temporal integration on the psychological level, we modified a well-established self-matching paradigm by inserting temporal delays. On the neuronal level, we indexed temporal integration in resting-state EEG by two related measures of scale-free dynamics, the power law exponent and autocorrelation window. We hypothesized that the previously established self-prioritization effect, measured as decreased response times or increased accuracy for self-related stimuli, would change with the insertion of different temporal delays between the paired stimuli, and that these changes would be related to temporal integration on the neuronal level. We found a significant self-prioritization effect on accuracy in all conditions with delays, indicating stronger temporal integration of self-related stimuli. Further, we observed a relationship between temporal integration on psychological and neuronal levels: higher degrees of neuronal integration, that is, higher power-law exponent and longer autocorrelation window, during resting-state EEG were related to a stronger increase in the self-prioritization effect across longer temporal delays. We conclude that temporal integration on the neuronal level serves as a template for temporal integration of the self on the psychological level. Temporal integration can thus be conceived as the “common currency” of neuronal and psychological levels of self.
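
The two neuronal indices can be approximated as follows: the power-law exponent as the negative slope of the power spectrum in log-log coordinates, and the autocorrelation window as the first lag at which the autocorrelation function drops below 0.5. The sketch below uses a simulated signal, and the fitting band and the ACW-50 criterion are assumptions.

```python
# Sketch of the two scale-free indices from one resting-state EEG channel:
# power-law exponent (PLE) from a log-log fit to the Welch PSD, and
# autocorrelation window (ACW-50) as the first lag where the autocorrelation
# falls below 0.5. Signal, band limits, and criterion are assumed.
import numpy as np
from scipy.signal import correlate, welch

sfreq = 250.0
rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=int(5 * 60 * sfreq)))  # crude 1/f^2-like signal

# Power-law exponent: PSD ~ 1/f^beta, so log(PSD) = -beta * log(f) + c.
freqs, psd = welch(x, fs=sfreq, nperseg=int(4 * sfreq))
band = (freqs >= 0.5) & (freqs <= 40)                 # assumed fit range
beta = -np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]

# Autocorrelation window (ACW-50) in seconds.
x0 = (x - x.mean()) / x.std()
acf = correlate(x0, x0, mode="full", method="fft")[x0.size - 1:] / x0.size
acw50 = np.argmax(acf < 0.5) / sfreq

print(f"PLE = {beta:.2f}, ACW-50 = {acw50:.3f} s")
```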

17.
Spaced presentations of to-be-learned items during encoding lead to superior long-term retention over massed presentations. Despite over a century of research, the psychological and neural basis of this spacing effect is still under investigation. To test the hypotheses that the spacing effect results either from reduction in encoding-related verbal maintenance rehearsal in massed relative to spaced presentations (deficient processing hypothesis) or from greater encoding-related elaborative rehearsal of relational information in spaced relative to massed presentations (encoding variability hypothesis), we designed a vocabulary learning experiment in which subjects encoded paired-associates, each composed of a known word paired with a novel word, in both spaced and massed conditions during functional magnetic resonance imaging. As expected, recall performance in delayed cued-recall tests was significantly better for spaced over massed conditions. Analysis of brain activity during encoding revealed that the left frontal operculum, known to be involved in encoding via verbal maintenance rehearsal, was associated with greater performance-related increased activity in the spaced relative to massed condition. Consistent with the deficient processing hypothesis, a significant decrease in activity with subsequent episodes of presentation was found in the frontal operculum for the massed but not the spaced condition. Our results suggest that the spacing effect is mediated by activity in the frontal operculum, presumably by encoding-related increased verbal maintenance rehearsal, which facilitates binding of phonological and word-level verbal information for transfer into long-term memory. Hum Brain Mapp, 2010. © 2009 Wiley-Liss, Inc.

18.
Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
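
One common way to quantify the audiovisual benefit at each SNR normalizes the improvement by the headroom above auditory-alone performance, gain = (AV - A) / (1 - A). The toy computation below uses invented accuracy values, since the abstract does not state the paper's exact metric.

```python
# Toy computation of audiovisual enhancement across SNRs. The gain
# normalizes the AV improvement by the room left above auditory-alone
# performance: gain = (AV - A) / (1 - A). All values are invented.
import numpy as np

snrs = np.array([-12, -9, -6, -3, 0])               # dB (assumed levels)
acc_a = np.array([0.15, 0.30, 0.55, 0.75, 0.90])    # auditory-alone accuracy
acc_av = np.array([0.35, 0.55, 0.80, 0.90, 0.95])   # audiovisual accuracy

gain = (acc_av - acc_a) / (1.0 - acc_a)
for snr, g in zip(snrs, gain):
    print(f"SNR {snr:+3d} dB: AV gain = {g:.2f}")
# Enhancement typically peaks at intermediate SNRs, the pattern the
# study compares between children and adults.
```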

19.
There are two main behavioral expressions of multisensory integration (MSI) in speech: the perceptual enhancement produced by the sight of the congruent lip movements of the speaker, and the illusory sound perceived when a speech syllable is dubbed with incongruent lip movements, as in the McGurk effect. These two paradigms have been used very often to study MSI. Here, we contend that, unlike congruent audiovisual (AV) speech, the McGurk effect involves brain areas related to conflict detection and resolution. To test this hypothesis, we used fMRI to measure blood oxygen level dependent responses to AV speech syllables. We analyzed brain activity as a function of the nature of the stimuli—McGurk or non-McGurk—and the perceptual outcome regarding MSI—integrated or not integrated response—in a 2 × 2 factorial design. The results showed that, regardless of perceptual outcome, AV mismatch activated general-purpose conflict areas (e.g., anterior cingulate cortex) as well as specific AV speech conflict areas (e.g., inferior frontal gyrus), compared with AV matching stimuli. Moreover, these conflict areas showed stronger activation on trials where the McGurk illusion was perceived compared with non-illusory trials, even though the stimuli were physically identical. We conclude that the AV incongruence in McGurk stimuli triggers the activation of conflict processing areas and that the process of resolving the cross-modal conflict is critical for the McGurk illusion to arise. Hum Brain Mapp 38:5691–5705, 2017. © 2017 Wiley Periodicals, Inc.

20.
18F-FPEB is a promising PET tracer for studying metabotropic glutamate subtype 5 receptor (mGluR5) expression in neuropsychiatric disorders. To assess the potential of 18F-FPEB for longitudinal mGluR5 evaluation in patient studies, we examined its long-term test-retest reproducibility in the human brain using various kinetic models. Nine healthy volunteers underwent consecutive scans separated by a 6-month period. Dynamic PET was combined with arterial sampling and radiometabolite analysis. Total distribution volume (VT) and nondisplaceable binding potential (BPND) were derived from a two-tissue compartment model without constraints (2TCM) and with the K1/k2 ratio constrained to the value of either the cerebellum (2TCM-CBL) or the pons (2TCM-PONS). The effects of fitting different functions to the tracer parent fractions and of reducing scan duration were assessed. Regional absolute test-retest variability (aTRV), coefficient of repeatability (CR) and intraclass correlation coefficient (ICC) were computed. The 2TCM-CBL showed the best fits. The mean 6-month aTRV of VT ranged from 8 to 13% (CR < 25%) with ICC > 0.6 for all kinetic models. BPND from 2TCM-CBL with a sigmoid fit for the parent fractions showed the best reproducibility, with aTRV ≤ 7% (CR < 16%) and ICC > 0.9 in most regions. Reducing the scan duration from 90 to 60 min did not affect reproducibility. These results demonstrate for the first time that 18F-FPEB brain PET has good long-term reproducibility, validating its use to monitor mGluR5 expression in longitudinal clinical studies. We suggest that a 2TCM-CBL with a sigmoid function fitted to the parent fractions is optimal for this tracer. Synapse 70:153–162, 2016. © 2016 Wiley Periodicals, Inc.
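
The test-retest metrics reduce to simple formulas; the sketch below computes aTRV and a one-way random-effects ICC(1,1) on invented regional VT values. Whether the paper used exactly this ICC variant is not stated in the abstract.

```python
# Sketch of the test-retest metrics with invented regional VT values.
# aTRV = 100 * |test - retest| / mean(test, retest); the ICC below is the
# one-way random-effects ICC(1,1); the paper's exact variant is not
# specified in the abstract.
import numpy as np

rng = np.random.default_rng(5)
test = rng.normal(20, 3, size=9)              # VT, scan 1 (9 subjects)
retest = test + rng.normal(0, 1.5, size=9)    # VT, scan 2 (6 months later)

atrv = 100 * np.abs(test - retest) / ((test + retest) / 2)

# One-way random-effects ICC(1,1) from between/within-subject mean squares
# (k = 2 sessions): ICC = (MSB - MSW) / (MSB + (k - 1) * MSW).
pairs = np.stack([test, retest], axis=1)
ms_between = 2 * pairs.mean(axis=1).var(ddof=1)
ms_within = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum() / pairs.shape[0]
icc = (ms_between - ms_within) / (ms_between + ms_within)

print(f"mean aTRV = {atrv.mean():.1f}%, ICC(1,1) = {icc:.2f}")
```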
