Similar Articles
20 similar articles found
1.
Latency of electric (e.g., P1 and N1) and magnetic (e.g., M100) auditory evoked components depends on age in typically developing children, with longer latencies for younger (4-6 years) and shorter, adult-like latencies for older (14-16 years) children. Age-related changes in evoked components provide indirect measures of auditory system maturation and reflect changes that occur during development. We use magnetoencephalography (MEG) to investigate maturational changes in cortical auditory systems in left (LH) and right (RH) hemispheres in children with autism disorder (AD) and Controls. We recorded auditory evoked responses over left and right temporal lobes in 17 Control and 15 AD children in the age range 8-16 years and measured M100 latency as a function of age, subject group and hemisphere. Linear regression analyses of age and M100 latency provided an estimate of the rate of latency change (ms/year) by hemisphere and subject group. Controls: M100 latency for the group ranged from 100.8 to 166.1 ms and varied linearly in both hemispheres, decreasing at a rate of -4 ms/year (LH) and -4.5 ms/year (RH). AD: M100 latency ranged from 116.2 to 186.2 ms. Slopes of regression lines did not differ from zero in either LH or RH. M100 latency showed a tendency to vary with age in LH, decreasing at a rate of -4.6 ms/year. M100 latency in RH increased slightly (at a rate of 0.8 ms/year) with age. Results provide evidence for differential auditory system development in AD children, which may reflect abnormalities in cortical maturational processes in AD.

2.
Reading difficulties are associated with problems in processing and manipulating speech sounds. Dyslexic individuals seem to have, for instance, difficulties in perceiving the length and identity of consonants. Using magnetoencephalography (MEG), we characterized the spatio-temporal pattern of auditory cortical activation in dyslexia evoked by three types of natural bisyllabic pseudowords (/ata/, /atta/, and /a a/), complex nonspeech sound pairs (corresponding to /atta/ and /a a/) and simple 1-kHz tones. The most robust difference between dyslexic and non-reading-impaired adults was seen in the left supratemporal auditory cortex 100 msec after the onset of the vowel /a/. This N100m response was abnormally strong in dyslexic individuals. For the complex nonspeech sounds and tones, the N100m response amplitudes were similar in dyslexic and nonimpaired individuals. The responses evoked by the syllable /ta/ of the pseudoword /atta/ also showed modest latency differences between the two subject groups. The responses evoked by the corresponding nonspeech sounds did not differ between the two subject groups. Further, when the initial formant transition, that is, the consonant, was removed from the syllable /ta/, the N100m latency was normal in dyslexic individuals. Thus, it appears that dyslexia is reflected in abnormal activation of the auditory cortex as early as 100 msec after speech onset, manifested as abnormal response strengths for natural speech and as delays for speech sounds containing rapid frequency transitions. These differences between the dyslexic and nonimpaired individuals also imply that the N100m response codes stimulus-specific features likely to be critical for speech perception. Which features of speech (or nonspeech stimuli) are critical in eliciting the abnormally strong N100m response in dyslexic individuals should be resolved in future studies.

3.
Cross-modal fusion phenomena suggest specific interactions of auditory and visual sensory information both within the speech and nonspeech domains. Using whole-head magnetoencephalography, this study recorded M50 and M100 fields evoked by ambiguous acoustic stimuli that were visually disambiguated to perceived /ta/ or /pa/ syllables. As in natural speech, visual motion onset preceded the acoustic signal by 150 msec. Control conditions included visual and acoustic nonspeech signals as well as visual-only and acoustic-only stimuli. (a) Both speech and nonspeech motion yielded a consistent attenuation of the auditory M50 field, suggesting a visually induced "preparatory baseline shift" at the level of the auditory cortex. (b) Within the temporal domain of the auditory M100 field, visual speech and nonspeech motion gave rise to different response patterns (nonspeech: M100 attenuation; visual /pa/: left-hemisphere M100 enhancement; /ta/: no effect). (c) These interactions could be further decomposed using a six-dipole model. One of these three pairs of dipoles (V270) was fitted to motion-induced activity at a latency of 270 msec after motion onset, that is, the time domain of the auditory M100 field, and could be attributed to the posterior insula. This dipole source responded to nonspeech motion and visual /pa/, but was found suppressed in the case of visual /ta/. Such a nonlinear interaction might reflect the operation of a binary distinction between the marked phonological feature "labial" versus its underspecified competitor "coronal." Thus, visual processing seems to be shaped by linguistic data structures even prior to its fusion with the auditory information channel.

4.
Linear coding of voice onset time
Voice onset time (VOT) provides an important auditory cue for recognizing spoken consonant-vowel syllables. Although changes in the neuromagnetic response to consonant-vowel syllables with different VOT have been examined, such experiments have only manipulated VOT with respect to voicing. We utilized the characteristics of a previously developed asymmetric VOT continuum [Liederman, J., Frye, R. E., McGraw Fisher, J., Greenwood, K., & Alexander, R. A temporally dynamic contextual effect that disrupts voice onset time discrimination of rapidly successive stimuli. Psychonomic Bulletin and Review, 12, 380-386, 2005] to determine if changes in the prominent M100 neuromagnetic response were linearly modulated by VOT. Eight right-handed, English-speaking, normally developing participants performed a VOT discrimination task during a whole-head neuromagnetic recording. The M100 was identified in the gradiometers overlying the right and left temporal cortices and single dipoles were fit to each M100 waveform. A repeated measures analysis of variance with post hoc contrast test for linear trend was used to determine whether characteristics of the M100 were linearly modulated by VOT. The morphology of the M100 gradiometer waveform and the peak latency of the dipole waveform were linearly modulated by VOT. This modulation was much greater in the left, as compared to the right, hemisphere. The M100 dipole moved in a linear fashion as VOT increased in both hemispheres, but along different axes in each hemisphere. This study suggests that VOT may linearly modulate characteristics of the M100, predominantly in the left hemisphere, and suggests that the VOT of consonant-vowel syllables, instead of, or in addition to, voicing, should be examined in future experiments.

5.
Collins M, Frew A. Laterality, 2001, 6(2): 111-132.
A priming experiment, with normal university students as subjects, was used to investigate whether the right cerebral hemisphere contributes to the comprehension of low-imagery words. Each hemisphere's access to semantic representations of low-imagery words was gauged by comparing responses to low-imagery targets preceded by associated low-imagery primes (e.g., BELIEF-IDEAL) with responses to the same targets when they were preceded by unrelated primes (e.g., FATE-IDEAL). All primes and targets were independently projected to the left or right visual fields (LVF or RVF), and temporally separated by a stimulus onset asynchrony of 250 ms. There was a clear RVF advantage in response speed and accuracy measures, confirming the left hemisphere's advantage in processing low-imagery words. Nonetheless, the priming effects provided evidence that the right hemisphere contributes to the comprehension of low-imagery words, as primes projected to the RVF equally facilitated responses to associated targets subsequently appearing in either visual field. In contrast, primes directed to the LVF did not facilitate responses to associated targets projected to the LVF or RVF. The results suggest that low-imagery words projected to the left hemisphere activated low-imagery associates in both hemispheres to an equivalent degree, whereas low-imagery primes directed to the right hemisphere failed to activate low-imagery associates in either hemisphere. Like Kounios and Holcomb's (1994) study of event-related response potentials evoked by abstract and concrete words, the findings indicate that while the left hemisphere is the primary processor of low-imagery/abstract words, the right hemisphere plays a subsidiary role in the comprehension of these words.

6.
Lexical morphology involves two types of suffixes: inflectional suffixes, which have a grammatical function, and derivational suffixes, which have a word formation function. In this study, functional magnetic resonance imaging (fMRI) was used during processing of Italian derived and inflected words. In the derivational task, subjects were asked to produce nouns derived from verbs and from adjectives (e.g., to observe – observation; kind – kindness). After the presentation of the derived noun, they had to generate the corresponding verb (e.g., failure – to fail: generation task). In the inflectional task, subjects had to produce the past participle of the verb or the plural form of the adjective or the derived noun. Behavioural data were collected in separate sessions in two different conditions. In the first experiment, as in the fMRI study, vocal reaction times (RTs) were measured from the offset of the auditory stimulus to the onset of the participant's response. In the second experiment, run with a different group of participants, RTs were recorded from the onset of the auditory stimulus to the onset of the response. The fMRI results showed that, relative to the inflectional task and to a repetition task, the derivational task, but not the verb generation task, brought about an activation of left fronto-parietal regions, documenting a specific involvement of these areas in the processing of derived words. Although less extended, similar activation was found for verb inflection but was absent for noun and adjective plural forms. Analysis of behavioural data indicated that task difficulty was unlikely to account for the imaging results.

7.
Auditory deviance detection has been associated with a human auditory-evoked potential (AEP), the mismatch negativity, generated in the auditory cortex 100-200 ms from sound change onset. Yet, single-unit recordings in animals suggest much earlier (~20-40 ms), and anatomically lower (i.e., thalamus and midbrain) deviance detection. In humans, recordings of the scalp middle-latency AEPs have confirmed early (~30-40 ms) deviance detection. However, involvement of the human auditory brainstem in deviance detection has not yet been demonstrated. Here we recorded the auditory brainstem frequency-following response (FFR) to consonant-vowel stimuli (/ba/, /wa/) in young adults, with stimuli arranged in oddball and reversed oddball blocks (deviant probability, p=0.2), allowing for the comparison of FFRs to the same physical stimuli presented in different contextual roles. Whereas no effect was observed for the /wa/ syllable, we found for the /ba/ syllable a reduction in the brainstem FFR to deviant stimuli compared with both standard stimuli and similar stimuli arranged in a control block of five equiprobable, rarely occurring sounds. These findings demonstrate that the human auditory brainstem is able to encode regularities in the recent auditory past to detect novel events, and confirm the multiple anatomical and temporal scales of human deviance detection.

8.
Although raising the sides of the tongue to form a seal with the palate and upper teeth – lateral bracing – plays a key role in controlling airflow direction, providing overall tongue stability and building up oral pressure during alveolar consonant production, details of this articulatory gesture remain poorly understood. This study examined the dynamics of lateral bracing during the onset of alveolar stops /t/, /d/, /n/ produced by 15 typical English-speaking adults using electropalatography. Percent tongue palate contact in the lateral regions over a 150-ms period from the preceding schwa to stop closure was measured. Rapid rising of the sides of the tongue from the back towards the front during the 50-ms period before closure was observed, with oral stops showing significantly more contact than nasal stops. This feature corresponds to well-documented formant transitions detectable from acoustic analysis. Possible explanations for increased contact for oral stops and clinical implications are discussed.

9.
We tested the hypothesis that division of inputs between the hemispheres could prevent interword letter migrations in the form of illusory conjunctions. The task was to decide whether a centrally-presented consonant-vowel-consonant (CVC) target word matched one of four CVC words presented to a single hemisphere or divided between the hemispheres in a subsequent test display. During half of the target-absent trials, known as conjunction trials, letters from two separate words (e.g., "tag" and "cop") in the test display could be mistaken for a target word (e.g., "top"). For the other half of the target-absent trials, the test display did not match any target consonants (Experiment 1, N = 16) or it matched one target consonant (Experiment 2, N = 29), the latter constituting true "feature" trials. Bi- as compared to unihemispheric presentation significantly reduced the number of conjunction, but not feature, errors. Illusory conjunctions did not occur when the words were presented to separate hemispheres.

10.
Rojas DC, Teale P, Sheeder J, Reite M. Neuroreport, 1999, 10(16): 3321-3325.
The 100 ms latency auditory evoked magnetic response (M100) has been implicated in the earliest stage of acoustic memory encoding in the brain. Sex differences in this response have been found in its location within the brain and its functional properties. We recorded the M100 in 25 adults in response to changes in interstimulus interval of an auditory stimulus. Response amplitudes of the M100 were used to compute a measure of the M100 refractory period, which has been proposed to index the decay time constant of echoic memory. This time constant was significantly longer in both hemispheres of the female participants when compared to the male participants. Possible implications of this for behavioral sex differences in human memory performance are discussed.

11.
Increased tongue–palate contact for perceptually acceptable alveolar stops has been observed in children with speech sound disorders (SSD). This retrospective study further investigated this issue by using quantitative measures to compare the target alveolar stops /t/, /d/ and /n/ produced in words by nine children with SSD (20 tokens of /t/, 13 /d/ and 11 /n/) to those produced by eight typical children (32 /t/, 24 /d/ and 16 /n/). The results showed that children with SSD had significantly higher percent contact than the typical children for target /t/; the difference for /d/ and /n/ was not significant. Children with SSD generally showed more contact in the posterior central area of the palate than the typical children. The results suggested that broader tongue–palate contact is a general articulatory feature for children with SSD and that its differential effect on error perception might be related to the different articulatory requirements.

12.
Recently, a growing number of studies have been published involving phonetic and acoustic analyses of the rare motor-speech disorder known as Foreign Accent Syndrome (FAS). These studies have relied on pre- and post-trauma speech samples to investigate the acoustic and phonetic properties of individual cases of FAS speech. This study presents detailed acoustic analyses of the speech characteristics of two new cases of FAS using identical pre- and post-recovery speech samples, thus affording a new level of control in the study of Foreign Accent Syndrome. Participants included a 48-year-old female who began speaking with an “Eastern European” accent following a traumatic brain injury, and a 45-year-old male who presented with a “British” accent following a subcortical cerebral vascular accident (CVA). The acoustic analysis was based on 18 real words composed of the stop consonants /p/, /t/, /k/; /b/, /d/, /g/ combined with the peripheral vowels /i/, /a/ and /u/ and ending in a voiceless stop. Computer-based acoustic measures included: (1) voice onset time (VOT), (2) vowel durations, (3) whole word durations, (4) first, second and third formant frequencies, and (5) fundamental frequency. Formant frequencies were measured at three points in the vowel duration: (a) 20%, (b) 50%, and (c) 80% to assess differences in vowel ‘onglides’ and ‘offglides’. The acoustic analysis allowed precise quantification of the major phonetic features associated with the foreign quality of participants' FAS speech. Results indicated post-recovery changes in both duration and frequency measures, including a tendency toward more normal VOT production of voiced stops, changes in average vowel durations, as well as evidence from formant frequency values of vowel backing for both participants. The implications of this study for future research and clinical applications are also considered.

13.
Lin YY, Chen WT, Liao KK, Yeh TC, Wu ZA, Ho LT. Neuroreport, 2005, 16(5): 469-473.
To study the role of the auditory ~100-ms neuromagnetic response (N100m) in phonetic processing, we recorded N100m in 24 right-handed Chinese participants using a whole-head neuromagnetometer. The stimuli included the vowel /a/ and the consonant-vowels /ba/ and /da/, spoken by one Chinese speaker, and a 1-kHz tone. N100m to tones was larger in the right hemisphere, whereas that to speech sounds was bilaterally similar. The amplitude ratio of speech to non-speech N100m was larger in the left hemisphere. N100m dipoles in the left hemisphere were approximately 2 mm more anterior for speech than for tone stimuli. The results suggest that N100m reflects both acoustic and phonetic processing. Moreover, the ratio of speech to non-speech activation in individual hemispheres may be useful for language lateralization.

14.
The vocalization-related cortical fields (VRCF) following vowel vocalization were studied by magnetoencephalography (MEG) in eight normal subjects. A multiple-source model, BESA (Brain Electric Source Analysis), was applied to elucidate the generating mechanism of VRCF in the period from 150 ms before to 150 ms after the onset of vocalization. Six sources provided satisfactory solutions for VRCF activities during that period. Sources 1 and 2, which were activated from approximately 150 ms before the vocalization onset, were located in laryngeal motor areas of the left and right hemispheres, respectively. Sources 5 and 6 were located in the truncal motor area in each hemisphere, and they were very similar to sources 1 and 2 in terms of temporal change of activities. Sources 3 and 4 were located in the auditory cortices of the left and right hemispheres, respectively, and they appeared to be activated just after the vocalization onset. However, all six sources were temporally overlapped in the period approximately 0-100 ms after the vocalization onset. The present results suggested that the bilateral motor cortices, probably laryngeal and truncal areas, were activated just before the vocalization. We considered that the activities of the bilateral auditory areas after the vocalization were the response of the subject's central auditory system to his/her own voice. The motor and auditory activities were temporally overlapped, and BESA was very useful to separate the activities of each source.

15.
We recorded magnetoencephalographic auditory evoked fields from the left and right hemispheres of six normal adult female subjects, in response to unattended tone pips. Magnetic field data were used to estimate location, orientation, depth, and strength of the 100-ms latency evoked field component (M100). M100 source locations did not significantly differ in left and right hemispheres. Previous studies in six normal males demonstrated M100 sources to be significantly further anterior in the right hemisphere. Compared to the previously reported male subjects, M100 sources in the right hemisphere of these females were significantly further posterior, by a mean value of 2.1 cm. These findings, while preliminary, support sex-related functional and/or structural differences in the superior temporal lobe, and would be compatible with the female right temporal planum extending further posteriorly. They suggest MEG recordings may be a useful addition to studies of sex-related differences in brain function and/or structure.

16.
We recorded magnetoencephalographic (MEG) auditory evoked fields (EF) from the L and R hemispheres of 12 pairs of twins, 6 monozygotic (MZ) and 6 dizygotic (DZ), and localized the source of the 100 msec latency EF component termed the M100. M100 sources exhibited greater similarity in location in MZ twin pairs, especially in the L hemisphere. These findings support the hypothesis that the functional location of processing of nonmeaningful unattended auditory stimuli may depend more heavily on left hemisphere structures. Furthermore, genetic effects are evident in these left hemisphere structures and their activity, as is a substantial amount of environmental variance.

17.
A dichotic monitoring task involving detection of target words (e.g., BLACK) in either ear was used. Word pairs that could fuse to produce target word perception (e.g., BACK+LACK=BLACK) were included. Two patients with partially sectioned corpus callosa showed left ear extinction for targets, but their fusion rates were normal. A congenitally acallosal patient had unusually high response rates to targets in either ear, as well as high fusion rates. The results indicate that intact commissural connections between the auditory cortices are not required for phonological fusion to occur. A hierarchical model of contralateral suppression of ipsilateral auditory input is proposed.

18.
This study was designed to localize the neuroanatomic generator of the 100 ms latency magnetic auditory evoked field (EF) component (M100) activated by an unattended tone pip. Magnetic EFs in response to 25 ms duration, 90 dB, 1 kHz tone pips were recorded from both hemispheres of nine normal adults, five males and four females, using a seven-channel second-order gradiometer. The source of the M100 was estimated, with confidence intervals, by a least-squares-based inverse solution algorithm. Magnetic resonance (MR) images of the brain were acquired with a 1.5 T system using a standard head coil. The superior temporal gyri (STG) were manually segmented from 1.7 mm thick coronal images, and the superior surfaces were then rendered from the 3-D volume data. Translation and rotation matrices were identified to locate the magnetoencephalography (MEG) determined sources within the reconstructed STGs. This population of 18 STGs in 9 individuals demonstrated two transverse gyri in 4 of 9 left hemispheres, and 5 of 9 right hemispheres. All 9 left hemisphere M100 sources were in or included Heschl's gyrus(i) in the confidence intervals. Seven of the 9 included Heschl's gyrus(i) on the right; the remaining two, both males, had sources slightly anterior to Heschl's gyrus(i). We conclude that all M100 source location estimates were compatible with an auditory koniocortex source in or adjacent to Heschl's gyri.

19.
We describe, for the first time, the use of high-resolution event-related brain potentials (hrERP) to identify the spatio-temporal characteristics of neural systems involved in phonological analysis. Subjects studied a visual word/non-word that was followed by the brief presentation of a prime letter (e.g. House, M) with the instruction to anticipate the word/non-word formed by replacing the word's first letter with the prime letter. After the prime letter, an auditory target word/non-word was presented that either matched/mismatched expectations (e.g., Mouse/Barn). ERPs were recorded to the onset of the auditory targets and scalp topographical maps were derived for the phonological mismatch negativity (PMN). The PMN reflected phonological analysis and examination of the peak topography revealed that the response was characterized by a prominent frontal, right-asymmetrical distribution. Spatial de-blurring (using current source density maps) indicated that the PMN scalp topography resulted primarily from an active left anterior source. The current results provide the initial evidence for the localization of the intra-cranial generator(s) involved in phonological analysis.

20.
The spatio-temporal dynamics of cortical activation underlying auditory word recognition, particularly its phonological stage, were studied with whole-head magnetoencephalography (MEG). Subjects performed a visuo-auditory priming task known to evoke the phonological mismatch negativity (PMN) response that is elicited by violations of phonological expectancies. Words and non-words were presented in separate conditions. In each of the 318 trials, the subjects first saw a word/non-word (e.g., 'cat') that was soon followed by a prime letter (e.g., 'h'). Their task was to replace mentally the sound of the first letter of the word/non-word with the prime letter, thus resulting in a new word/non-word (e.g., 'hat'). Finally, an auditory word/non-word either matching or mismatching with the anticipated item was presented. In most subjects, a PMNm followed by a later, N400m-like negativity was obtained in the left hemisphere to the mismatching auditory stimuli. A similar response pattern was obtained in the right hemisphere in only a few subjects. Source localization of the N1m, an index of acoustic analysis, and of the PMNm and N400m-like responses was performed using L1 minimum-norm estimation. In the left hemisphere, the PMNm source for the words was significantly more anterior than the source of the N400m-like response; for the non-words, the PMNm source was significantly more anterior than the sources of the N1m and the N400m-like response. These results suggest that the left-hemisphere neuronal networks involved in sub-lexical phonological analysis are at least partly different from those responsible for the earlier (acoustic) and later (whole item) processing of speech input.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号