Similar Literature
20 similar records retrieved.
1.
While the neural correlates of unconscious perception and subliminal priming have been largely studied for visual stimuli, little is known about their counterparts in the auditory modality. Here we used a subliminal speech priming method in combination with fMRI to investigate which regions of the cerebral network for language can respond in the absence of awareness. Participants performed a lexical decision task on target items preceded by subliminal primes, which were either phonetically identical to or different from the target. Moreover, the prime and target could be spoken by the same speaker or by two different speakers. Word repetition reduced the activity in the insula and in the left superior temporal gyrus. Although the priming effect on reaction times was independent of the voice manipulation, neural repetition suppression was modulated by speaker change in the superior temporal gyrus, while the insula showed voice-independent priming. These results provide neuroimaging evidence of subliminal priming for spoken words and inform us about the first, unconscious stages of speech perception.

2.
We examined the influence of the stimulus duration of foreign consonant-vowel stimuli on the MMNm (the magnetic counterpart of mismatch negativity). In Experiment 1, /ra/ and /la/ stimuli were synthesized, and the subjects were native Japanese speakers, who are known to have difficulty discriminating these stimuli. "Short" duration stimuli were terminated in the middle of the consonant-to-vowel transition (110 ms); they were nevertheless clearly identifiable by English speakers. A clear MMNm was observed only for the short-duration stimuli, not for the untruncated long-duration (150-ms) stimuli. We suggest that the diminished MMNm for longer-duration stimuli results from more effective masking by the longer vowel part. In Experiment 2 we examined this hypothesis by presenting only the third-formant (F3) component of the original stimuli, since the acoustic difference between /la/ and /ra/ is most evident in the third formant, whereas F1 and F2 play a major role in vowel perception. If the MMNm effect depends on the acoustic properties of F3, a stimulus duration effect comparable to that found with the original /la/ and /ra/ stimuli might be expected. However, if the effect is attributable to masking by the vowel, no influence of stimulus duration would be expected, since neither stimulus contains F1 and F2 components. In fact, the results showed that the "F3 only" stimuli did not show a duration effect; the MMNm was elicited irrespective of stimulus duration. The MMNm stimulus duration effect is thus suggested to arise from the backward masking of foreign consonants by subsequent vowels.

3.
Children with speech difficulty of no known etiology are a heterogeneous group. While speech errors are often attributed to auditory processing or oro-motor skill, an alternative proposal is a cognitive-linguistic processing difficulty. Research studies most often focus on only one of these aspects of the speech processing chain. This study investigated abilities in all three domains in children with speech difficulty (n = 78) and matched controls (n = 87). It was hypothesized that children with speech difficulty would perform less well than controls on all tasks, but that the proportion of children with speech difficulty performing within the normal range would differ across tasks. The input processing task required children to perceive the audiovisual illusion in speech perception (the McGurk effect), in which listeners report a fused percept when an auditory syllable is presented in synchrony with the lip movements for a different syllable. Diadochokinetic, isolated, and sequenced movement tasks assessed oro-motor skills. Two non-verbal tasks evaluated rule derivation. The results indicated that rule derivation best discriminated the typically developing and speech difficulty groups. Few children were identified as having an input or output difficulty, whereas difficulties with rule derivation were common. The data support the notion that speech difficulty is, most often, associated with a central processing difficulty.

4.
Normal listeners are often surprisingly poor at processing pitch changes. The neural bases of this difficulty were explored using magnetoencephalography (MEG) by comparing participants who obtained poor thresholds on a pitch-direction task with those who obtained good thresholds. Source-space projected data revealed that during an active listening task, the poor threshold group displayed greater activity in the left auditory cortical region when determining the direction of small pitch glides, whereas there was no difference in the good threshold group. In a passive listening task, a mismatch response (MMNm) was identified for pitch-glide direction deviants, with a tendency to be smaller in the poor listeners. The results imply that the difficulties in pitch processing are already apparent during automatic sound processing, and furthermore suggest that left hemisphere auditory regions are used by these listeners to consciously determine the direction of a pitch change. This is in line with evidence that the left hemisphere has a poor frequency resolution, and implies that normal listeners may use the sub-optimal hemisphere to process pitch changes.

5.
Specht K, Reul J. NeuroImage 2003, 20(4): 1944-1954
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between the stimulus types were interpreted as an automatic effect not driven by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activations of the primary and secondary auditory cortex, whose strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus, we were able to distinguish between different subsystems, with activation extending from posterior to anterior for speech and speech-like information. Whereas the posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization of verbal stimuli to the left and of sounds to the right was already detectable when short stimuli were used.

6.
Seeing John Malkovich: the neural substrates of person categorization
Neuroimaging data have implicated regions of the ventral temporal cortex (e.g., fusiform gyrus) as functionally important in face recognition. Recent evidence, however, suggests that these regions are not face-specific, but rather reflect subordinate-level categorical processing underpinned by perceptual expertise. Moreover, when people possess expertise for a particular class of stimuli (e.g., faces), subordinate-level identification is thought to be an automatic process. To investigate the neural substrates of person construal, we used functional magnetic resonance imaging (fMRI) to contrast brain activity while participants judged faces at different levels of semantic specificity (i.e., identity vs. occupation). The results revealed that participants were quicker to access identity than occupational knowledge. In addition, greater activity was observed in bilateral regions of the fusiform gyrus on identity than occupation trials. Taken together, these findings support the viewpoint that person construal is characterized by the ability to access subordinate-level semantic information about people, a capacity that is underpinned by neural activity in discrete regions of the ventral temporal cortex.

7.
Objective. Develop and test methods for representing and classifying breath sounds in an intensive care setting. Methods. Breath sounds were recorded over the bronchial regions of the chest. The breath sounds were represented by their averaged power spectral density, summed into feature vectors across the frequency spectrum from 0 to 800 Hz. The sounds were segmented by individual breath, and each breath was divided into inspiratory and expiratory segments. Sounds were classified as normal or abnormal. Different back-propagation neural network configurations were evaluated; the numbers of input features, hidden units, and hidden layers were varied. Results. 2127 individual breath sounds from the ICU patients and 321 breaths from training tapes were obtained. The best overall classification rate for the ICU breath sounds was 73%, with 62% sensitivity and 85% specificity. The best overall classification rate for the training tapes was 91%, with 87% sensitivity and 95% specificity. Conclusions. Long-term monitoring of lung sounds is not feasible unless several barriers can be overcome. Several choices in signal representation and neural network design greatly improved the classification rates of breath sounds. The analysis of sounds transmitted from the trachea to the lung is suggested as an area for future study.
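Purely as an illustrative sketch of the kind of pipeline described above — the sampling rate, band count, network size, and helper names here are assumptions, not the configuration reported in the study — the following Python code builds summed power-spectral-density feature vectors per breath (0-800 Hz), trains a small back-propagation network, and computes sensitivity and specificity on a held-out set:

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def psd_features(breath, fs, f_max=800.0, n_bands=16):
    """Averaged power spectral density of one breath segment, summed into
    n_bands equal-width bands between 0 Hz and f_max (one feature vector)."""
    freqs, psd = welch(breath, fs=fs, nperseg=1024)
    keep = freqs <= f_max
    bands = np.array_split(psd[keep], n_bands)
    return np.array([b.sum() for b in bands])

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = abnormal breaths correctly flagged; specificity =
    normal breaths correctly passed (abnormal coded as 1, normal as 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative pipeline on synthetic noise standing in for segmented breaths.
fs = 4000
rng = np.random.default_rng(0)
X = np.array([psd_features(rng.standard_normal(fs), fs) for _ in range(200)])
y = rng.integers(0, 2, size=200)                  # 1 = abnormal, 0 = normal

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:150], y[:150])                         # train / hold-out split
sens, spec = sensitivity_specificity(y[150:], clf.predict(X[150:]))
```

Because the inputs are random noise, the resulting numbers are meaningless; the point is only the shape of the feature extraction and evaluation steps.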

8.
Scanning silence: mental imagery of complex sounds
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and the planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale); no significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates in the secondary, but not the primary, auditory cortex.

9.
Nath AR, Beauchamp MS. NeuroImage 2012, 59(1): 781-787
The McGurk effect is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable. However, some normal individuals do not experience the illusion, reporting that the stimulus sounds the same with or without visual input. Converging evidence suggests that the left superior temporal sulcus (STS) is critical for audiovisual integration during speech perception. We used blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity as McGurk perceivers and non-perceivers were presented with congruent audiovisual syllables, McGurk audiovisual syllables, and non-McGurk incongruent syllables. The inferior frontal gyrus showed an effect of stimulus condition (greater responses for incongruent stimuli) but not susceptibility group, while the left auditory cortex showed an effect of susceptibility group (greater response in susceptible individuals) but not stimulus condition. Only one brain region, the left STS, showed a significant effect of both susceptibility and stimulus condition. The amplitude of the response in the left STS was significantly correlated with the likelihood of perceiving the McGurk effect: a weak STS response meant that a subject was less likely to perceive the McGurk effect, while a strong response meant that a subject was more likely to perceive it. These results suggest that the left STS is a key locus for interindividual differences in speech perception.

10.
The aim of the present study was to find a functional MRI correlate in human auditory cortex of the psychoacoustical effect of release from masking, using amplitude-modulated noise stimuli. A sinusoidal target signal was embedded in a bandlimited white noise, which was either unmodulated or (co)modulated. Psychoacoustical thresholds were measured for the target signals in both types of masking noise, using an adaptive procedure. The mean threshold difference between the unmodulated and the comodulated condition, i.e., the release from masking, was 15 dB. The same listeners then participated in an fMRI experiment, recording activation of auditory cortex in response to tones in the presence of modulated and unmodulated noise maskers at five different signal-to-noise ratios. In general, a spatial dissociation of changes of overall level and signal-to-noise ratio in auditory cortex was found, replicating a previous fMRI study on pure-tone masking. The comparison of the fMRI activation maps for a signal presented in modulated and in unmodulated noise reveals that those regions in the antero-lateral part of Heschl's gyrus previously shown to represent the audibility of a tonal target (rather than overall level) exhibit a stronger activation for the modulated than for the unmodulated conditions. This result is interpreted as a physiological correlate of the psychoacoustical effect of comodulation masking release at the level of the auditory cortex.

11.
The neural correlates of clearly perceived visual stimuli have been reported previously in contrast to unperceived stimuli, but it is uncertain whether intermediate or graded perceptual experiences correlate with different patterns of neural activity. In this study, the subjective appearance of briefly presented visual stimuli was rated individually by subjects with respect to perceptual clarity: clear, vague or no experience of a stimulus. Reports of clear experiences correlated with activation in a widespread network of brain areas, including parietal cortex, prefrontal cortex, premotor cortex, supplementary motor areas, insula and thalamus. The reports of graded perceptual clarity were reflected in graded neural activity in a network comprising the precentral gyrus, intraparietal sulcus, basal ganglia and the insula. In addition, the reports of vague experiences demonstrated unique patterns of activation. Different degrees of perceptual clarity were thus reflected both in the degree to which activation was found within parts of the network serving a clear conscious percept and in activation patterns unique to each degree of clarity. Our findings support theories proposing the involvement of a widespread network of brain areas during conscious perception.

12.
Purpose: Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Methods: Fifty stroke patients had baseline audiological assessments and AP tests, and completed the (modified) Amsterdam Inventory for Auditory Disability and the Hearing Handicap Inventory for the Elderly questionnaires. Nine of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise despite normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. Results: The signal-to-noise ratio (SNR) for 50% correct speech recognition was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between the SNR measured with co-located speech and babble and the SNR measured with spatially separated speech and babble (a small worked example follows this item). When babble was spatially separated from the target speech, SRM was significantly larger with the FM systems in the patients' ears than without them. Conclusions: Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FM systems are feasible in stroke patients and show promise for addressing impaired AP after stroke.
Implications for Rehabilitation:
• This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP.
• All cases showed significantly improved speech perception in noise with the FM systems when the noise was spatially separated from the speech signal by 90°, compared with unaided listening.
• Personal FM systems are feasible in stroke patients and may benefit just under 20% of this population, who are not eligible for conventional hearing aids.
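A minimal numerical illustration of the SRM definition above, using hypothetical thresholds rather than the study's data:

```python
# Hypothetical 50%-correct speech-reception thresholds, in dB SNR
# (lower = better; these numbers are invented for illustration only).
snr_colocated = -2.0    # speech and babble both presented from 0 deg azimuth
snr_separated = -8.5    # babble moved to +/-90 deg azimuth, speech at 0 deg

# Spatial release from masking: the benefit gained from spatial separation.
srm = snr_colocated - snr_separated
print(f"SRM = {srm:.1f} dB")   # -> SRM = 6.5 dB
```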

13.
Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating the focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and the posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex, which we attribute to enhanced alertness and sustained attention during unpredictability.

14.
Analysis of the spectral envelope of sounds by the human brain
Spectral envelope is the shape of the power spectrum of sound. It is an important cue for the identification of sound sources such as voices or instruments, and particular classes of sounds such as vowels. In everyday life, sounds with similar spectral envelopes are perceived as similar: we recognize a voice or a vowel regardless of pitch and intensity variations, and we recognize the same vowel regardless of whether it is voiced (a spectral envelope applied to a harmonic series) or whispered (a spectral envelope applied to noise). In this functional magnetic resonance imaging (fMRI) experiment, we investigated the basis for analysis of spectral envelope by the human brain. Changing either the pitch or the spectral envelope of harmonic sounds produced similar activation within a bilateral network including Heschl's gyrus and adjacent cortical areas in the superior temporal lobe. Changing the spectral envelope of continuously alternating noise and harmonic sounds produced additional right-lateralized activation in the superior temporal sulcus (STS). Our findings show that spectral shape is abstracted in the superior temporal sulcus, suggesting that this region may have a generic role in the spectral analysis of sounds. These distinct levels of spectral analysis may represent early computational stages in a putative anteriorly directed stream for the categorization of sound.
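As a side illustration of the "spectral envelope = shape of the power spectrum" idea — not the stimulus-generation or analysis procedure used in the study; the function, parameter values, and toy signal below are assumptions — one standard way to estimate a spectral envelope is cepstral smoothing of the log-magnitude spectrum:

```python
import numpy as np

def spectral_envelope(x, fs, n_cep=30, n_fft=2048):
    """Estimate a sound's spectral envelope by cepstral smoothing:
    log-magnitude spectrum -> cepstrum -> keep only the first n_cep
    (slowly varying) coefficients -> back to a smoothed log spectrum."""
    mag = np.abs(np.fft.rfft(x, n_fft)) + 1e-12
    cep = np.fft.irfft(np.log(mag))
    cep[n_cep:-n_cep] = 0.0              # discard fine harmonic structure
    env = np.exp(np.fft.rfft(cep).real)
    return np.fft.rfftfreq(n_fft, 1.0 / fs), env

# Toy usage: a harmonic complex (f0 = 120 Hz) standing in for a voiced vowel.
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
voiced = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 40))
freqs, env = spectral_envelope(voiced, fs)   # env follows the overall spectral
                                             # tilt, not individual harmonics
```

Applying the same smoothing to a noise carrier shaped by the same resonances would yield a similar envelope, which is the sense in which a voiced and a whispered vowel share a spectral envelope.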

15.
Autism Spectrum Disorders (ASD) are neurodevelopmental disorders characterised by impaired social interaction and communication, restricted interests and repetitive behaviours. The severity of these characteristics is posited to lie on a continuum extending into the typical population, and typical adults' performance on behavioural tasks that are impaired in ASD is correlated with the extent to which they display autistic traits (as measured by the Autism Spectrum Quotient, AQ). Individuals with ASD also show structural and functional differences in brain regions involved in social perception. Here we show that variation in AQ in typically developing individuals is associated with altered brain activity in the neural circuit for social attention perception while viewing others' eye gaze. In an fMRI experiment, participants viewed faces looking at variable or constant directions. In control conditions, only the eye region was presented or the heads were shown with eyes closed but oriented at variable or constant directions. The response to faces with variable vs. constant eye gaze direction was associated with AQ scores in a number of regions (posterior superior temporal sulcus, intraparietal sulcus, temporoparietal junction, amygdala, and MT/V5) of the brain network for social attention perception. No such effect was observed for heads with eyes closed or when only the eyes were presented. The results demonstrate a relationship between neurophysiology and autism spectrum traits in the typical (non-ASD) population and suggest that changes in the functioning of the neural circuit for social attention perception are associated with an extended autism spectrum in the typical population.

16.
Ye Z, Kutas M, St George M, Sereno MI, Ling F, Münte TF. NeuroImage 2012, 59(4): 3662-3667
Temporal connectives (before/after) give us the freedom to describe a sequence of events in different orders. Studies have suggested that ‘before-initiating’ sentences, in which events are expressed in an order inconsistent with their actual order of occurrence, might need additional computation(s) during comprehension. The results of independent component analysis suggest that these computations are supported by a neural network connecting the bilateral caudate nucleus with the right middle frontal gyrus, left precentral gyrus, bilateral parietal lobule and inferior temporal gyrus. Among those regions, the caudate nucleus and the left middle frontal gyrus showed greater activations for ‘before’ than ‘after’ sentences. The functional network observed in this study may support sequence learning and processing in a general sense.

17.
The cerebellum is thought to be engaged not only in motor control, but also in the neural network dedicated to visual processing of body motion. However, the pattern of connectivity within this network, in particular between the cortical circuitry for the observation of others' actions and the cerebellum, remains largely unknown. By combining functional magnetic resonance imaging (fMRI) with functional connectivity analysis and dynamic causal modelling (DCM), we assessed cerebro-cerebellar connectivity during a visual perceptual task with point-light displays depicting human locomotion. In the left lateral cerebellum, regions in the lobules Crus I and VIIB exhibited an increased fMRI response to biological motion. The outcome of the connectivity analyses delivered the first evidence for reciprocal communication between the left lateral cerebellum and the right posterior superior temporal sulcus (STS). Through communication with the right posterior STS, which is a key node not only for biological motion perception but also for social interaction and visual theory-of-mind tasks, the left cerebellum might be involved in a wide range of social cognitive functions.

18.
Cognitive processing, as measured by event-related potentials (ERP), was investigated in patients suffering from the explosive subtype of headache associated with sexual activity (HSA type 2). Visual ERP were measured in 24 patients with HSA type 2 outside the headache period. Differences between the first and the second part of the measurement were evaluated to determine the amount of cognitive habituation. Twenty-four sex- and age-matched healthy subjects and 24 patients with migraine without aura served as controls. An absent increase of P3 latency during the second part of the measurement was found in 79% of patients with HSA type 2 and in 75% of those with migraine, but only in 17% of the healthy controls (P < 0.001). The P3 amplitude was increased during the second part in 71% of patients with HSA type 2 and in 79% of those with migraine, but only in 33% of the healthy controls (P = 0.02). Mean P3 latency was decreased and mean P3 amplitude was increased during the second part of the measurement in HSA type 2 and in migraine, but not in the healthy control group. Patients with HSA type 2 thus show a loss of cognitive habituation as measured by ERP. This pattern of information processing is very similar to that observed in migraine in previous studies.

19.
Twenty healthy young adults underwent functional magnetic resonance imaging (fMRI) of the brain while performing a visual inspection time task. Inspection time is a forced-choice, two-alternative visual backward-masking task in which the subject is briefly shown two parallel vertical lines of markedly different lengths and must decide which is longer. As stimulus duration decreases, performance declines to chance levels. Individual differences in inspection time correlate with higher cognitive functions. An event-related design was used. The hemodynamic (blood oxygenation level-dependent; BOLD) response was computed both as a function of the eight levels of stimulus duration, from 6 ms (where performance is almost at chance) to 150 ms (where performance is nearly perfect), and as a function of the behavioral responses. Random effects analysis showed that the difficulty of the visual discrimination was related to bilateral activation in the inferior fronto-opercular cortex, superior/medial frontal gyrus, and anterior cingulate gyrus, and to bilateral deactivation in the posterior cingulate gyrus and precuneus. Examination of the time courses of the BOLD responses showed that activation was related specifically to the more difficult, briefer stimuli and that deactivation was found across most stimulus levels. Functional connectivity suggested the existence of two networks. One comprised the fronto-opercular area, intrasylvian area, medial frontal gyrus, and the anterior cingulate cortex (ACC), possibly associated with the processing of visually degraded percepts. A posterior network of sensory-related and associative regions might subserve processing of a visual discrimination task that has high processing demands and combines several fundamental cognitive domains. fMRI can thus reveal information about the neural correlates of mental events that occur over very short durations.

20.
Dose-effect relationship between the "click" sound produced by thoracic palm-pressing manipulation and the maximal pressing force
Objective: To study the dose-effect relationship between the "click" sound and the maximal pressing force during thoracic palm-pressing manipulation, in order to provide quantitative evidence for standardizing tuina manipulation. Methods: A pressure-measurement system was used to detect and record the maximal force applied by the practitioner's palm to the patient's thoracic spinous processes when a "click" sound occurred during the manipulation. Results: Taking the "click" sound as the marker of a successful thoracic palm-pressing manipulation, the maximal pressing force when a "click" occurred (247.21±8.02 mmHg) did not differ significantly from the maximal pressing force when no "click" occurred (251.15±2.87 mmHg) (P > 0.05). Conclusion: The magnitude of the pressing force during thoracic palm pressing is not directly related to the occurrence of the "click" sound.

