Similar Literature
20 similar results found (search time: 31 ms)
1.
The suppression of the auditory N1 event‐related potential (ERP) to self‐initiated sounds has become a popular tool to tap into sensory‐specific forward modeling. It is assumed that processing in the auditory cortex is attenuated due to a match between sensory stimulation and a specific sensory prediction afforded by a forward model of the motor command. The present study shows that N1 suppression was dramatically increased with long (~3 s) stimulus onset asynchronies (SOA), whereas P2 suppression was equal in all SOA conditions (0.8, 1.6, 3.2 s). Thus, the P2 was found to be more sensitive to self‐initiation effects than the N1 with short SOAs. Moreover, only the unspecific but not the sensory‐specific N1 components were suppressed for self‐initiated sounds, suggesting that N1‐suppression effects mainly reflect an attenuated orienting response. We argue that the N1‐suppression effect is a rather indirect measure of sensory‐specific forward models.
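The suppression effect discussed above is typically quantified as the difference between the self-initiated and externally initiated N1 amplitudes. A minimal sketch in Python (the amplitudes are hypothetical illustration values, not the study's data):

```python
def suppression_index(self_amp, external_amp):
    """N1 suppression as self-initiated minus externally initiated amplitude.
    The N1 is a negative-going wave, so a positive index means the
    self-initiated response was less negative, i.e., attenuated."""
    return self_amp - external_amp

# Hypothetical mean N1 amplitudes in microvolts (illustration only)
n1_self = -3.1       # tones triggered by the participant's own button press
n1_external = -5.2   # externally triggered tones
n1_suppression = suppression_index(n1_self, n1_external)
```

A positive index indicates attenuation of the self-initiated response; the sign convention (self minus external) is one common choice, not necessarily the one used in the study.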

2.
The gap‐startle paradigm has been used as a behavioral method for tinnitus screening in animal studies. This study aimed to investigate gap prepulse inhibition (GPI) of the auditory late response (ALR) as the objective response of the gap‐intense sound paradigm in humans. ALRs were recorded in response to gap‐intense and no‐gap‐intense sound stimuli in 27 healthy subjects. The amplitudes of the baseline‐to‐peak (N1, P2, and N2) and the peak‐to‐peak (N1P2 and P2N2) were compared between the two averaged ALRs. The variations in the inhibition ratios of N1P2 and P2N2 during the experiment were analyzed with increasing stimulus repetitions. The effect of stimulus parameter adjustments on GPI ratios was evaluated. No‐gap‐intense sound stimuli elicited greater peak amplitudes than gap‐intense sound stimuli, and significant differences were found across all peaks. The overall mean inhibition ratios were significantly lower than 1.0, where the value 1.0 indicates that there were no differences between gap‐intense and no‐gap‐intense sound responses. The initial decline in GPI ratios was shown in N1P2 and P2N2 complexes, and this reduction was nearly complete after 100 stimulus repetitions. Significant effects of gap length and interstimulus interval on GPI ratios were observed. We found significant inhibition of ALR peak amplitudes in performing the gap‐intense sound paradigm in healthy subjects. The N1P2 complex represented GPI well in terms of suppression degree and test‐retest reliability. Our findings offer practical information for the comparative study of healthy subjects and tinnitus patients using the gap‐intense sound paradigm with the ALR.
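The inhibition ratio described above can be read as the gap-condition amplitude divided by the no-gap-condition amplitude, with 1.0 meaning no inhibition. A minimal illustration (made-up amplitudes, not the study's data):

```python
def gpi_ratio(gap_amp, no_gap_amp):
    """Gap prepulse inhibition ratio for one peak (or peak-to-peak) amplitude.
    1.0 means no difference between conditions; values below 1.0 indicate
    inhibition of the response by the preceding gap."""
    return gap_amp / no_gap_amp

# Hypothetical N1P2 peak-to-peak amplitudes in microvolts (illustration only)
n1p2_gap, n1p2_no_gap = 6.3, 9.0
ratio = gpi_ratio(n1p2_gap, n1p2_no_gap)
```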

3.
Auditory object perception requires binding of elementary features of complex stimuli. Synchronization of high‐frequency oscillation in neural networks has been proposed as an effective alternative to binding via hard‐wired connections because binding in an oscillatory network can be dynamically adjusted to the ever‐changing sensory environment. Previously, we demonstrated in young adults that gamma oscillations are critical for sensory integration and found that they were affected by concurrent noise. Here, we aimed to support the hypothesis that stimulus evoked auditory 40‐Hz responses are a component of thalamocortical gamma oscillations and examined whether this oscillatory system may become less effective in aging. In young and older adults, we recorded neuromagnetic 40‐Hz oscillations, elicited by monaural amplitude‐modulated sound. Comparing responses in quiet and under contralateral masking with multitalker babble noise revealed two functionally distinct components of auditory 40‐Hz responses. The first component followed changes in the auditory input with high fidelity and was of similar amplitude in young and older adults. The second, significantly smaller in older adults, showed a 200‐ms interval of amplitude and phase rebound and was strongly attenuated by contralateral noise. The amplitude of the second component was correlated with behavioral speech‐in‐noise performance. Concurrent noise also reduced the P2 wave of auditory evoked responses at 200‐ms latency, but not the earlier N1 wave. P2 modulation was reduced in older adults. The results support the model of sensory binding through thalamocortical gamma oscillations. Limitation of neural resources for this process in older adults may contribute to their speech‐in‐noise understanding deficits.

4.
Recent studies reveal that multisensory convergence can occur in early sensory cortical areas. However, the behavioral importance of multisensory integration in such early cortical areas is unknown. Here, we used c-Fos immunohistochemistry to explore neuronal populations specifically activated during the facilitation of reaction time induced by temporally congruent audiovisual stimuli in rats. Our newly developed analytical method for c-Fos mapping revealed a pronounced up-regulation of c-Fos expression, particularly in layer 4 of the lateral secondary visual area (V2L). A local injection of a GABA-A receptor agonist, muscimol, into V2L completely suppressed the audiovisual facilitation of reaction time without affecting responses to unimodal stimuli. No such selective suppression was found following the injection of muscimol into the primary auditory and visual areas. To test whether the facilitated responses simply reflected the greater stimulus intensity of the combined bimodal stimuli, we injected muscimol into V2L and measured the behavioral facilitation induced by high-intensity unimodal stimuli; the injection did not affect this facilitation. These results suggest that V2L, an early visual area, is critically involved in the multisensory facilitation of reaction time induced by the combination of auditory and visual stimuli.

5.
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition‐suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound‐flash incongruence reduced accuracy in a same‐different location discrimination task (i.e., the ventriloquism effect) and reduced the location‐specific repetition‐suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information.

6.
During social interaction speech is perceived simultaneously by audition and vision. We studied interactions in the processing of auditory (A) and visual (V) speech signals in the human brain by comparing neuromagnetic responses to phonetically congruent audiovisual (AV) syllables with the arithmetic sum of responses to A and V syllables. Differences between AV and A+V responses were found bilaterally in the auditory cortices 150-200 ms and in the right superior temporal sulcus (STS) 250-600 ms after stimulus onset, showing that both sensory-specific and multisensory regions of the human temporal cortices are involved in AV speech processing. Importantly, our results suggest that AV interaction in the auditory cortex precedes that in the multisensory STS region.

7.
We examined maturation of speech-sound-related indices of auditory event-related brain potentials (ERPs). ERPs were elicited by syllables and nonphonetic correlates in children and adults. Compared with syllables, nonphonetic stimuli elicited larger N1 and P2 in adults and P1 in children. Because the nonphonetics were more perceptually salient, this N1 effect was consistent with known N1 sensitivity to sound onset features. Based on stimulus dependence and independent component structure, children's P1 appeared to contain overlapping P2-like activity. In both subject groups, syllables elicited larger N2/N4 peaks. This might reflect sound content feature processing, more extensive for speech than nonspeech sounds. Therefore, sound detection mechanisms (N1, P2) still develop whereas sound content processing (N2, N4) is largely mature during mid-childhood; in children and adults, speech sounds are processed more extensively than nonspeech sounds 200-400 ms poststimulus.

8.
We studied attention effects on the integration of written and spoken syllables in fluent adult readers by using event‐related brain potentials. Auditory consonant‐vowel syllables, including consonant and frequency changes, were presented in synchrony with written syllables or their scrambled images. Participants responded to longer‐duration auditory targets (auditory attention), longer‐duration visual targets (visual attention), longer‐duration auditory and visual targets (audiovisual attention), or counted backwards mentally. We found larger negative responses for spoken consonant changes when they were accompanied by written syllables than when they were accompanied by scrambled text. This effect occurred at an early latency (~140 ms) during audiovisual attention and later (~200 ms) during visual attention. Thus, audiovisual attention boosts the integration of speech sounds and letters.

9.
The correction of ballistocardiogram artifacts in simultaneous EEG‐fMRI often yields unsatisfactory results. To improve the signal‐to‐noise ratio (SNR) of results, we inferred EEG signal uncertainty from postcorrection artifact residuals and computed the uncertainty‐weighted mean of ERPs. Using an uncertainty‐weighted mean significantly and consistently reduced both inter‐ and intrasubject SEM in the analysis of auditory evoked responses (AER, indicated by the N1‐P2 complex) and in the effects of an auditory oddball paradigm (N1‐P3 complex, standard‐deviant difference). SNR increased by 3% on average for the AER amplitude (intrasubject) and 17% on average for the auditory oddball ERP (intersubject). This demonstrates that weighting by uncertainty complements existing artifact correction algorithms to increase SNR in ERPs. More specifically, it is an efficient method to utilize seemingly corrupt (difficult‐to‐correct) EEG data that might otherwise be discarded.
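The uncertainty-weighted mean described above corresponds to inverse-variance weighting of single trials, so heavily contaminated trials are downweighted rather than discarded. A minimal sketch, assuming each trial carries a scalar uncertainty estimate derived from its artifact residual (function name and data are illustrative, not from the study):

```python
import numpy as np

def uncertainty_weighted_mean(trials, sigma):
    """Average single-trial ERPs with inverse-variance weights, so trials
    with large artifact residuals (high uncertainty) contribute less."""
    trials = np.asarray(trials, dtype=float)       # shape: (n_trials, n_samples)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2  # weight = 1 / sigma^2
    w /= w.sum()                                   # normalize weights to sum to 1
    return (w[:, None] * trials).sum(axis=0)

# Three hypothetical single-trial ERPs; the last is corrupt (high uncertainty)
trials = [[1.0, 2.0], [1.2, 2.2], [10.0, 10.0]]
erp = uncertainty_weighted_mean(trials, sigma=[1.0, 1.0, 10.0])
```

With these weights the corrupt third trial barely moves the average, whereas an ordinary mean would be dominated by it.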

10.
In an everyday social interaction we automatically integrate another’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input—a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.

11.
Self‐suppression refers to the phenomenon that sensations initiated by our own movements are typically less salient, and elicit an attenuated neural response, compared to sensations resulting from changes in the external world. Evidence for self‐suppression is provided by previous ERP studies in the auditory modality, which have found that healthy participants typically exhibit a reduced auditory N1 component when auditory stimuli are self‐initiated as opposed to externally initiated. However, the literature investigating self‐suppression in the visual modality is sparse, with mixed findings and experimental protocols. An EEG study was conducted to expand our understanding of self‐suppression across different sensory modalities. Healthy participants experienced either an auditory (tone) or visual (pattern‐reversal) stimulus following a willed button press (self‐initiated), a random interval (externally initiated, unpredictable onset), or a visual countdown (externally initiated, predictable onset—to match the intrinsic predictability of self‐initiated stimuli), while EEG was continuously recorded. Reduced N1 amplitudes for self‐ versus externally initiated tones indicated that self‐suppression occurred in the auditory domain. In contrast, the visual N145 component was amplified for self‐ versus externally initiated pattern reversals. Externally initiated conditions did not differ as a function of their predictability. These findings highlight a difference in sensory processing of self‐initiated stimuli across modalities, and may have implications for clinical disorders that are ostensibly associated with abnormal self‐suppression.

12.
Low band‐gap conjugated polymers based on naphthalene bisimide (NBI) and 3,4‐ethylenedioxythiophene (EDOT) were synthesized by Stille cross‐coupling reaction. The alternating conjugated poly(EDOT‐NBI) ( P1 ) and random poly(EDOT‐NBI) ( P2 ) are both solution‐processable due to the existence of bulky 2,6‐diisopropylphenyl substituent. Their optical and electrochemical properties were characterized. P1 and P2 films show optical band gaps of 1.75 and 1.38 eV estimated from UV‐Vis absorption spectra. Cyclic voltammograms of both polymers display reversible reduction peaks with onset reduction potentials at −0.55 V for P1 and −0.61 V for P2 , which correspond to the electron affinity (EA) values (LUMO energy level) of 3.85 and 3.79 eV, respectively. The ionization potential (IP, HOMO level) values of 5.60 eV for P1 and 5.17 eV for P2 were also calculated by combining solid‐state optical and electrochemical data. A double heterojunction device was fabricated. It exhibits an open circuit voltage of 0.30 V and average power conversion efficiency of 0.15%.
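The electron-affinity values quoted above are consistent with the common empirical conversion EA = e(E_red,onset + 4.4 V); the 4.4 eV offset referencing the electrode scale to vacuum is an inference from the reported numbers, not stated in the abstract. A quick check:

```python
def electron_affinity(onset_red_v, offset_ev=4.4):
    """EA (eV) estimated from the onset reduction potential (V) using the
    empirical relation EA = e * (E_onset + offset); the 4.4 eV offset is
    assumed from the values reported in the abstract."""
    return onset_red_v + offset_ev

ea_p1 = electron_affinity(-0.55)  # matches the 3.85 eV reported for P1
ea_p2 = electron_affinity(-0.61)  # matches the 3.79 eV reported for P2
```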



13.
Mild cognitive impairment (MCI) is considered an intermediate transitional stage for the development of dementia, especially Alzheimer's disease. The identification of neurophysiological biomarkers for MCI will allow improvement in detecting and tracking the progression of cognitive impairment. The primary objective of this study was to compare cortical auditory evoked potentials between older adults with and without probable MCI to identify potential neurophysiological indicators of cognitive impairment. We applied a temporal‐spatial principal component analysis to the evoked potentials obtained during the processing of pure tones and speech sounds, to facilitate the separation of the components of the P1‐N1‐P2 complex. The probable MCI group showed a significant amplitude increase in a factor modeling N1b for speech sounds (Cohen's d = .84) and a decrease in a factor around the P2 time interval, especially for pure tones (Cohen's d = 1.17). Moreover, both factors showed a fair discrimination value between groups (area under the curve [AUC] = .698 for N1b in the speech condition; AUC = .746 for P2 in the tone condition), with high sensitivity to detect MCI cases (86% and 91%, respectively). The results for N1b suggest that MCI participants may suffer from a deficit in inhibiting irrelevant speech information, and the decrease of P2 amplitude could be a signal of cholinergic hypoactivation. Therefore, both components could be proposed as early biomarkers of cognitive impairment.
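Discrimination values like the AUCs above can be computed from component scores without any curve fitting, via the Mann-Whitney formulation of the ROC area. A self-contained sketch with made-up scores (not the study's data):

```python
def auc_from_scores(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical N1b factor scores (probable-MCI group vs. controls)
auc = auc_from_scores([1.9, 2.3, 2.8, 1.4], [1.0, 1.5, 2.0, 0.8])
```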

14.
The N1 and P2 event‐related potentials (ERPs) are attenuated when the eliciting sounds coincide with our own actions. Although this ERP attenuation could be caused by central processes, it may also reflect a peripheral mechanism: the coactivation of the stapedius muscle with the task‐relevant effector, which reduces signal transmission efficiency in the middle ear, reducing the effective intensity of concurrently presented tones, which, in turn, elicit lower amplitude auditory ERPs. Because stapedius muscle contraction attenuates frequencies below 2 kHz, no attenuation should occur at frequencies above 2 kHz. A self‐induced tone paradigm was administered with 0.5, 2.0, and 8.0 kHz pure tones. Self‐induced tones elicited attenuated N1 and P2 ERPs, but the magnitude of attenuation was not affected by tone frequency. This result does not support the hypothesis that ERP attenuation to self‐induced tones is caused by stapedius muscle contractions.

15.
Inflammatory processes induced by IL‐1β are critical for host defence responses, but are also implicated in disease. Zinc deficiency is a common consequence of, or contributor to, human inflammatory disease. However, the molecular mechanisms through which zinc contributes to inflammatory disease remain largely unknown. We report here that zinc metabolism regulates caspase‐1 activation and IL‐1β secretion. One of the endogenous mediators of IL‐1β secretion is adenosine triphosphate, acting via the P2X7 receptor and caspase‐1 activation in cells primed with an inflammatory stimulus such as LPS. We show that this process is selectively abolished by a brief pre‐treatment with the zinc chelator N,N,N′,N′‐tetrakis‐(2‐pyridylmethyl)ethylenediamine (TPEN). These effects on IL‐1β secretion were independent of rapid changes in free zinc within the cell, were not a direct effect on caspase‐1 activity, and acted upstream of caspase‐1 activation. TPEN did however inhibit the activity of pannexin‐1, a hemi‐channel critical for adenosine triphosphate and nigericin‐induced IL‐1β release. These data provide new insights into the mechanisms of caspase‐1 activation and how zinc metabolism contributes to inflammatory mechanisms.

16.
Experimental paradigms investigating the processing of self‐induced stimuli are often based on the implicit assumption that motor processes are invariable regardless of their consequences: It is presumed that actions with different sets of predictable sensory consequences do not differ in their physical characteristics or in their brain signal reflections. The present experiment explored this assumption in the context of action‐related auditory attenuation by comparing actions (pinches) with and without auditory consequences. The results show that motor processes are not invariable: Pinches eliciting a tone were softer than pinches without auditory effects. This indicates that self‐induced auditory stimuli are not perceived as irrelevant side effects: The tones are used as feedback to optimize the tone‐eliciting actions. The comparison of ERPs related to actions with different physical parameters (strong and soft pinches) revealed a significant ERP difference in the time range of the action‐related N1 attenuation (strong pinches resulted in more negative amplitudes), suggesting that a motor correction bias may contribute to this auditory ERP attenuation effect, which is usually attributed to action‐related predictive processes.

17.
The photorefractivity of a non‐conjugated main‐chain polymeric composite, composed of electron‐rich N,N′‐di‐tolyl‐N,N′‐diphenylbiphenyldiamine doped with 2‐{3‐[(E)‐2‐(piperidino)‐1‐ethenyl]‐5,5‐dimethyl‐2‐cyclohexenylidene}‐malononitrile (P‐IP‐DC) and [6,6]‐phenyl‐C61‐butyric acid methyl ester (PCBM), is studied using two‐beam coupling and degenerate four‐wave mixing at 633 nm. The N,N′‐di‐tolyl‐N,N′‐diphenylbiphenyldiamine‐based composite shows a gain coefficient of 215 cm⁻¹ at 80 V μm⁻¹ with s‐polarized beams and a diffraction efficiency of 67% at 30 V μm⁻¹, with a response time of the diffraction efficiency of 344 ms at Tg + 4.6 °C. Comparing the photorefractive grating response time of the 3,3′‐dicarbazole‐containing polymer composite with that of the N,N′‐di‐tolyl‐N,N′‐diphenylbiphenyldiamine‐based composite shows that the former is more than three times faster.

18.
The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82–87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing.

19.
π‐Conjugated polymers consisting of 9,10‐disubstituted 9,10‐dihydrophenanthrene units, with substituents such as octyl, 2‐ethylhexyl, and ‐OSiBu3, are prepared by organometallic polycondensation. Homopolymers ( PH2Ph(9,10‐R) ) have a π‐conjugation system similar to that of polymers of 9,9‐dialkylfluorene and show UV‐Vis peaks at ≈380 nm. In addition to the peak at ≈380 nm, some homopolymers give rise to a peak at a longer wavelength, suggesting molecular assembly of the polymers. X‐ray diffraction data support the molecular assembly. The homopolymers show photoluminescence (PL) with PL peaks at ≈430 nm, and the PL spectrum of the polymer film is essentially unchanged after heating the polymer film at 150 °C in air. The homopolymers undergo electrochemical p‐doping at about 1.5 V versus Ag+/Ag.

20.
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the “cocktail‐party” problem. Twenty‐eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior‐contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail‐party conditions.
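The N2ac computation described above, a difference waveform for left-target minus right-target trials, can be sketched as follows (electrode choice and waveform values are hypothetical, for illustration only):

```python
import numpy as np

def n2ac(left_target_erp, right_target_erp):
    """Difference waveform: averaged ERP for targets on the left minus
    targets on the right at an anterior electrode; a lateralized
    (contralateral) negativity appears as a deflection in this difference."""
    return np.asarray(left_target_erp, dtype=float) - np.asarray(right_target_erp, dtype=float)

# Hypothetical averaged waveforms in microvolts at one anterior electrode
left = [0.5, -1.2, -2.0]
right = [0.4, -0.2, -0.5]
diff = n2ac(left, right)  # negative samples mark the lateralized component
```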


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号